Anthropic CEO Dario Amodei says he believes that today's AI models hallucinate, that is, fabricate information and present it as fact, at a lower rate than humans do.

He made this statement during a press briefing at Anthropic’s inaugural developer event, Code with Claude, in San Francisco on Thursday.

Amodei made the remark as part of a broader point that hallucinations are not a limitation on Anthropic's path to AGI, meaning AI systems with human-level intelligence or better.

According to TechCrunch's report, Amodei said, "It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways."

The CEO of Anthropic is one of the most optimistic leaders in the industry regarding the possibility of AI models achieving AGI.

In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as early as 2026.

During Thursday's press briefing, the Anthropic CEO said he was seeing steady progress toward that goal, noting that "the water is rising everywhere."

"Everyone is always looking for these hard blocks on what AI can do," Amodei said. "They are nowhere to be seen. There is no such thing."

Other AI leaders, however, believe hallucination is a significant obstacle to achieving AGI.

Demis Hassabis, CEO of Google DeepMind, said earlier this week that today's AI models have too many "holes" and get too many straightforward questions wrong.

For instance, earlier this month, a lawyer representing Anthropic was compelled to apologize in court after the company's AI chatbot hallucinated and got names and titles wrong in a court filing.

Amodei's claim is difficult to verify, largely because most hallucination benchmarks compare AI models against one another rather than against humans.

Certain techniques, such as giving AI models access to web search, do appear to reduce hallucination rates.
