Aporia adds protection against AI hallucinations

Machine learning observability startup Aporia Technologies Ltd. is broadening its line of tools for ensuring responsible artificial intelligence use with a new product that it says enhances the performance of generative AI products and safeguards against hallucinations or misuse.

AI Guardrails can be integrated into any generative AI product and sits between the large language model and the end user. Aporia said it ensures fair and responsible usage by filtering out discriminatory or inappropriate LLM and chatbot responses according to an organization's standards. It's also intended to reduce the risk of data leakage and protect against the inadvertent disclosure of sensitive information such as credit card or medical data, a feature the company said safeguards users and helps optimize performance.
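The article does not describe Aporia's implementation, but the general pattern of a guardrail layer between an LLM and the user can be sketched as a post-processing filter. The function name, regex rule and blocked-term policy below are illustrative assumptions, not Aporia's actual API:

```python
import re

# Redact credit-card-like digit sequences (13-16 digits, optionally
# separated by spaces or hyphens) to reduce data-leakage risk.
CREDIT_CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

# Example policy terms an organization might configure; purely illustrative.
BLOCKED_TERMS = {"ssn", "social security number"}

def apply_guardrails(llm_response: str) -> str:
    """Sanitize an LLM response before it reaches the end user."""
    # Step 1: redact sensitive numeric patterns.
    sanitized = CREDIT_CARD_RE.sub("[REDACTED]", llm_response)
    # Step 2: withhold responses that mention configured sensitive terms.
    if any(term in sanitized.lower() for term in BLOCKED_TERMS):
        return "This response was withheld by policy."
    return sanitized

print(apply_guardrails("Your card number is 4111 1111 1111 1111."))
```

In a production system this layer would typically also log each intervention so that the observability tooling described below can surface it.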

Hallucinations occur when a generative AI model produces content such as text, images or audio that is plausible-sounding but factually incorrect or nonsensical. They are a byproduct of the ability of generative AI models, in particular, to produce creative and unexpected outputs. While that capability often yields interesting and imaginative content, it can also generate misleading information presented as fact.

An online survey by Tidio LLC found that 86% of the nearly 1,000 people who responded said they had personally experienced AI hallucinations, and 46% said they encounter them frequently. For example, asking ChatGPT about the record for crossing the English Channel on foot generates a response even though such an achievement is impossible.

Aporia said its technology encompasses observability, visibility, detection and control to promote the responsible and secure integration of AI technology into various scenarios. Real-time alerts notify organizations about potential issues related to AI performance and unified visibility provides a transparent and consolidated view of all LLM operations. This allows organizations to proactively analyze model behavior to stay ahead of hallucinations.

The platform, which is in limited testing, is intended to be used with the company's growing collection of tools for centralized model management, AI anomaly detection, proactive control, dashboards, root cause analysis and explainable AI. In July it released a root cause analysis tool for large language, natural language processing and computer vision models that provides real-time analysis of AI models.

Aporia has raised $30 million in funding and claims to already have a number of enterprise customers.

