Got It AI launches truth checking tool to fix ChatGPT's inaccuracies

Autonomous conversational artificial intelligence startup Got It AI Inc. said today it has developed a new “truth checker” tool for the popular chatbot ChatGPT that can identify when it is “hallucinating” — generating fabricated answers to questions.

The startup said its tool makes it possible for enterprises to deploy ChatGPT-like experiences without the risk of it providing incorrect responses to employees or users. As a result, companies can use the tool to deploy conversational AI that leverages extensive knowledge bases, such as those used for external customer support or internal user queries.

ChatGPT is an advanced chatbot created by OpenAI LLC that has taken the internet by storm since its launch last month. The AI system can generate detailed, humanlike natural language answers to almost any kind of question. Its responses are so impressive that some believe it could even become a viable alternative to Google Search, and indeed Microsoft Corp., one of the main financial backers of OpenAI, is already said to be looking to incorporate the AI into its own Bing search engine technology.

Although the vast majority of users have been impressed with ChatGPT’s capabilities, Got It AI said in a blog post that ChatGPT and other conversational AI systems aren’t always entirely accurate, and that makes them risky to deploy in situations such as customer support scenarios.

To fix this, Got It AI has created an autonomous truth-checking AI model: an advanced large language model-based system that trains itself autonomously, with no human intervention, for a single task — checking that a conversational AI’s statements are true. The company explains that when its model is deployed alongside ChatGPT or similar systems, they can be used to answer questions in a contextual, multi-turn chat dialog, with each response evaluated for its truthfulness.

If an inaccurate response is detected, it will not be presented to the human party. Instead, they’ll receive a reference to relevant articles where they can find the answers they’re looking for.
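The flow described above — generate a candidate answer, score its truthfulness against the knowledge base, and fall back to article references when the check fails — can be sketched roughly as follows. All function names and data shapes here are hypothetical illustrations; Got It AI has not published its API:

```python
# Hypothetical sketch of the response-gating flow described in the article.
# None of these names correspond to a published Got It AI or OpenAI API.

def answer_with_truth_check(question, chat_history, knowledge_base,
                            generate, truth_score, threshold=0.5):
    """Generate a candidate answer, then gate it on a truthfulness score.

    generate:    callable producing a draft answer from the dialog context
    truth_score: callable returning an estimated probability that the draft
                 is supported by the knowledge base (the "truth checker")
    """
    draft = generate(question, chat_history, knowledge_base)
    score = truth_score(draft, question, knowledge_base)
    if score >= threshold:
        return {"type": "answer", "text": draft}
    # Withhold the suspect answer; point the user at source articles instead.
    refs = [a["title"] for a in knowledge_base
            if question.lower() in a["text"].lower()]
    return {"type": "references", "articles": refs}
```

The key design point is that the checker is a separate model gating the generator’s output, rather than a prompt tweak to the generator itself.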

Got It AI co-founder Chandra Khatri, who formerly worked on Amazon’s Alexa, said his team has tested its truth-checking AI on a dataset of more than 1,000 articles across multiple knowledge bases using multi-turn conversations, incorporating complex linguistic structures such as co-reference, context and topic switches.

“ChatGPT produced incorrect responses for about 20% of the queries, given the relevant content for the query through prompts,” Khatri said. “The autonomous truth checking AI was able to detect 90% of inaccurate responses. We will also provide customers with a simple user interface to the truth checking AI to further optimize it to identify remaining inaccuracies and eliminate virtually all inaccurate responses.”

Got It AI’s truth-checking AI is available in private beta as part of its Autonomous Articlebot service, which leverages the same OpenAI generative LLMs used by ChatGPT. Companies can point Articlebot at an internal knowledge base or set of articles with no additional training required. In this way, they can effectively deploy ChatGPT as an enterprise-grade conversational AI chatbot for customer support, help desk and agent assist applications.

Got It AI’s other co-founder Amol Kellar explained that the truth checking AI is aimed at correcting ChatGPT with regard to “known” domain conversations that draw on enterprises’ knowledge bases, rather than for “open” domain conversations about random topics. In this respect, he said, the company’s AI is a “major breakthrough.”

“This goes beyond prompt engineering, fine tuning or just a UI layer,” Kellar said. “It is a separate, LLM-based AI that enables us to deliver scalable, accurate and fluid conversational AI for customers planning to leverage ChatGPT’s LLM. Truth checking the generated responses cost-effectively is a key capability that closes the gap between an R&D system and an enterprise-ready system.”

Image: Got It AI
