Elon Musk's AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, according to watchdog group The Midas Project.
xAI isn't exactly known for its strong commitment to AI safety as it's commonly understood.
According to a recent report, Grok, the company's AI chatbot, would undress photos of women when asked.
Grok can also be considerably cruder than chatbots like Gemini and ChatGPT, swearing without much discernible restraint.
Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company's approach to AI safety.
The eight-page document laid out xAI's safety priorities and philosophy, including the company's benchmarking protocols and considerations for deploying AI models.
As The Midas Project noted in a blog post on Tuesday, however, the draft applied only to unspecified future AI models "not currently in development."
Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of the document the company signed at the AI Seoul Summit.
In the draft, xAI said it planned to release a revised version of its safety policy "within three months," by May 10. The deadline came and went without acknowledgment on xAI's official channels.
Despite Musk's frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record.
A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its "very weak" risk management practices.
That's not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and been slow to publish model safety reports (or skipped publishing them altogether).
Some experts have expressed concern that the apparent deprioritization of safety efforts comes at a time when AI is more capable, and thus potentially more dangerous, than ever.