OpenAI details how it’s striving to improve the accuracy and safety of its AI systems

Just hours after U.S. President Joe Biden called on artificial intelligence software developers to take responsibility for the safety of their products, ChatGPT creator OpenAI LP has gone public on the measures it takes to minimize the dangers its systems might pose.

In a blog post today, OpenAI said it recognized the potential risks associated with AI and maintained that it’s committed to building safety into its models at multiple levels.

OpenAI’s safety measures include conducting rigorous testing of any new system prior to its release, engaging with experts for feedback, and tinkering with the model to improve its behavior, using techniques such as reinforcement learning with human feedback. It noted that it spent more than six months testing and refining its latest large language model, GPT-4, before releasing it publicly last month.
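For illustration, here’s a minimal sketch of the reward-modeling step at the heart of reinforcement learning with human feedback, written in PyTorch. The model, embeddings and training step are toy stand-ins, not OpenAI’s actual pipeline: the idea is to train a reward model so that responses human labelers preferred score higher than those they rejected.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a response embedding to a scalar score."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for embeddings of two responses to the same prompt,
# where a human labeler preferred the first over the second.
chosen = torch.randn(8, 768)    # batch of preferred responses
rejected = torch.randn(8, 768)  # batch of rejected responses

# Pairwise preference loss (Bradley-Terry style): push the preferred
# response's score above the rejected one's.
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

In full RLHF, the trained reward model is then used as the objective for fine-tuning the language model itself with a reinforcement learning algorithm.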

Even so, OpenAI recognizes that testing in a lab setting can only go so far, as it’s impossible to predict all of the ways in which people might decide to use – and also abuse – its systems. Because of this, it said all new AI systems are released cautiously and gradually to a steadily broadening group of users, with continuous improvements and refinements implemented based on feedback from real-world users.
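A gradual rollout of this kind is commonly implemented by deterministically bucketing users and widening the eligible share over time. The sketch below is a generic illustration of that pattern, not a description of OpenAI’s infrastructure:

```python
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically place a user into one of 100 buckets and
    admit them if their bucket falls within the first `percent`."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Widen access over time by raising the percentage, e.g. 1% -> 10% -> 100%.
print(in_rollout("user-42", 10))
```

Because the bucketing is a pure function of the user ID, each user’s access decision stays stable as the percentage grows.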

“We make our most capable models available through our own services and through an API so developers can build this technology directly into their apps,” the company explained. “This allows us to monitor for and take action on misuse, and continually build mitigations that respond to the real ways people misuse our systems—not just theories about what misuse might look like.”
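For a sense of what API-mediated monitoring can look like in practice, here’s a minimal sketch using the OpenAI Python SDK as it existed at the time of writing; the audit-logging step and function names are illustrative assumptions, not OpenAI’s internal tooling:

```python
import logging
import openai

openai.api_key = "sk-..."  # placeholder API key
logging.basicConfig(filename="usage_audit.log", level=logging.INFO)

def serve(prompt: str) -> str:
    """Answer a user prompt and retain the exchange for misuse review."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    reply = response["choices"][0]["message"]["content"]
    # Keeping prompt/response pairs lets operators spot real abuse
    # patterns rather than relying only on theories about misuse.
    logging.info("prompt=%r reply=%r", prompt, reply)
    return reply
```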

OpenAI also implements strict safeguards to protect children. Users must be aged 18 or over, or at least 13 with parental approval, to use its AI systems, and blocks have been implemented to prevent those systems from generating hateful, harassing, violent or adult content. These blocks are being improved continuously, and GPT-4 is said to be 82% less likely to respond to requests for disallowed content than its predecessor, GPT-3.5. If someone tries to upload child sexual abuse material to one of its image tools, that content is immediately blocked and reported to the National Center for Missing and Exploited Children.
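Developers building on the API can apply similar blocks themselves using OpenAI’s moderation endpoint, which classifies text against categories such as hate, violence and sexual content. A minimal sketch using the Python SDK of the time (the escalation comment describes policy, not anything this snippet performs):

```python
import openai

def is_allowed(text: str) -> bool:
    """Return False for text the moderation endpoint flags as disallowed."""
    result = openai.Moderation.create(input=text)["results"][0]
    if result["categories"]["sexual/minors"]:
        # Per OpenAI's stated policy, content in this category is also
        # escalated for reporting rather than merely refused.
        return False
    return not result["flagged"]
```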

Factual correctness is another area OpenAI is striving to improve, because one of the major problems with AI is hallucination, where a system confidently fabricates information in its responses. AI hallucination can cause serious problems, with one recent example being the law professor who was falsely accused by ChatGPT of sexually harassing one of his students. ChatGPT cited a 2018 article in The Washington Post as the source of this claim – but no such article existed, and the professor in question had never faced such accusations.

Obviously, there’s an urgent need to prevent these kinds of mistakes, and OpenAI said it’s working to do so in ChatGPT by leveraging user feedback on responses that have been flagged as incorrect. As a result, GPT-4 is 40% more likely to generate factual answers than GPT-3.5, it said.
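A feedback loop of this kind starts with simply capturing flagged responses as labeled examples for later training. The sketch below is hypothetical; the file format and field names are assumptions, not OpenAI’s internal schema:

```python
import json

def record_feedback(prompt: str, answer: str, is_factual: bool,
                    path: str = "feedback.jsonl") -> None:
    """Append one user-labeled example to a JSONL file for later fine-tuning."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "prompt": prompt,
            "answer": answer,
            "label": "factual" if is_factual else "incorrect",
        }) + "\n")

record_feedback("Who wrote Hamlet?", "William Shakespeare", True)
```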

OpenAI admits that some of the training data used by its systems contains personal information that’s publicly available on the web. However, it stressed that its goal is for its systems to learn about the world rather than about private individuals. To that end, its team attempts to remove personal information from training datasets wherever feasible. It has also fine-tuned its models to reject requests for the personal information of private individuals, and the company responds to requests from individuals to have their personal information deleted from its systems.
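Scrubbing obvious identifiers from training text is often done with pattern matching as a first pass. The sketch below is a simplistic illustration; real pipelines use far more sophisticated detection than two regular expressions:

```python
import re

# Crude patterns for two common kinds of personal information.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact_pii("Call Jane at +1 (555) 010-1234 or jane@example.com"))
```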

“These steps minimize the possibility that our models might generate responses that include the personal information of private individuals,” the company explained.

AI safety remains an ongoing concern, and OpenAI said it will become increasingly cautious as it builds and deploys more capable models in the future. The good news is it believes those more advanced and sophisticated models will be even safer than its existing systems, as they will be better at following users’ instructions and easier to control.

Finally, OpenAI called on policymakers and AI providers to ensure that the development and deployment of AI systems is governed effectively at a global scale. More dialogue will be required to achieve this, and OpenAI said it’s keen to participate.

“Addressing safety issues also requires extensive debate, experimentation, and engagement, including on the bounds of AI system behavior,” OpenAI said. “We have and will continue to foster collaboration and open dialogue among stakeholders to create a safe AI ecosystem.”

Image: OpenAI




