OpenAI has created a new policy that will guide the responses and interactive quality of ChatGPT.
In a 187-page document, the company revealed how it trains its AI models, hinting that the chatbot will be able to answer more questions and offer additional angles in its responses.
There will be “intellectual freedom … no matter how challenging or controversial a topic may be,” the company says.
The changes might be part of OpenAI’s effort to land in the good graces of the new Trump administration, but they also seem to be part of a broader shift in Silicon Valley over what’s considered “AI safety.”
In a new section called “Seek the truth together,” OpenAI says it wants ChatGPT not to take an editorial stance, even if some users find that morally wrong or offensive. That means ChatGPT will offer multiple perspectives on controversial subjects, all in an effort to be neutral.
For example, the company says ChatGPT should assert that “Black lives matter,” but also that “all lives matter.” Instead of refusing to answer or picking a side on political issues, OpenAI says it wants ChatGPT to affirm its “love for humanity” generally, then offer context about each movement.
“This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive,” OpenAI says in the spec.
“However, the goal of an AI assistant is to assist humanity, not to shape it.”
The new Model Spec doesn’t mean that ChatGPT is a total free-for-all now. The chatbot will still refuse to answer certain objectionable questions or respond in a way that supports blatant falsehoods.
These changes could be seen as a response to conservative criticism about ChatGPT’s safeguards, which have always seemed to skew center-left.
However, an OpenAI spokesperson rejected the idea that the changes were made to appease the Trump administration.
Instead, the company says its embrace of intellectual freedom reflects OpenAI’s “long-held belief in giving users more control.”