A claim from an OpenAI employee has stirred discussion about the current state of artificial intelligence. Vahid Kazemi, a technical staff member at the company, recently posted on social media platform X (formerly Twitter) that OpenAI has already achieved Artificial General Intelligence (AGI) with the release of its new O1 model. The declaration has drawn attention in the AI community, as it suggests a major leap forward for the company's research.
Kazemi explained his view by saying, "In my opinion, we have already achieved AGI, and it's even more clear with O1." He added a key caveat, however: OpenAI's AI is "better than most humans at most tasks," but not necessarily "better than any human at any task." The distinction matters, because Kazemi isn't claiming that the O1 model outperforms humans in every area, only that it can handle a wide variety of tasks better than the average human.
Critics have pointed out that Kazemi's definition of AGI is unconventional. Traditionally, AGI refers to an AI that can perform at or above human level across a broad range of tasks, not just a few specialized functions. Kazemi's argument rests instead on the breadth of tasks the O1 model can handle, even if the outcomes are not always perfect. In his telling, the model's strength is its versatility, which is why it stands out as "better than most humans" in its ability to take on different challenges.
Kazemi also took the opportunity to discuss large language models (LLMs) and their learning processes. He challenged the idea that LLMs merely "follow a recipe," arguing that human scientists likewise rely on trial and error when forming hypotheses. As Kazemi put it, "Nothing can't be learned with examples," a view that echoes how such models are trained: on vast numbers of examples, patterns, and data.
The statement comes shortly after OpenAI removed the term AGI from its agreement with Microsoft, a move that fueled speculation about the business and research implications of the decision. While Kazemi's opinion may be a personal one, it highlights the nuanced and evolving understanding of AGI within the company.
Despite Kazemi's optimistic view, no existing AI system can yet match human intelligence in a comprehensive and adaptive way across complex, general tasks. Still, the conversation about AGI continues to evolve, and this latest claim by an OpenAI staff member adds fuel to the ongoing debate over how close we truly are to achieving it.
Kazemi's comments reflect the differing perspectives within the AI industry. While some believe that the rapid development of machine learning models like O1 is pushing us closer to AGI, others remain cautious, stressing that significant work still lies ahead. The definition of AGI itself is a subject of considerable debate, with no clear consensus on what it will look like when fully realized.