Hallucinations remain one of the most critical challenges with AI. Grounding responses in databases and a few other methods can mitigate the problem, but they are not the only solution. Some AI frameworks, such as CTGT, aim to nearly eradicate hallucinations, though their effectiveness has yet to be felt universally. Notably, one emerging approach involves using another form of AI.
At MLDS 2025, India’s largest GenAI summit for developers organised by AIM, Ratnesh Singh Parihar, principal architect at Talentica Software, said his team uses AI to fix AI hallucinations.
Parihar discussed the difficulty of handling over 10 million stock-keeping units (SKUs) in AI-powered e-commerce search, and how his team uses AI to fix the problems AI itself creates.
AI Fixing AI Hallucinations
Ritesh Agarwal, solution architect at Talentica Software, explained that when users search for items, such as ‘pink t-shirt for toddlers’, traditional AI methods convert queries into embeddings and use cosine similarity to find relevant products.
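A minimal sketch of that retrieval pattern is below. The `embed` helper, catalogue schema and field names are illustrative assumptions, since the talk does not specify the embedding model or data layout:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query: str, catalogue: list[dict], embed, top_k: int = 10) -> list[dict]:
    """Rank products by cosine similarity between the query embedding and
    each product's precomputed embedding.

    `embed` is a hypothetical callable wrapping whichever embedding model
    the search stack uses; catalogue items are assumed to carry an
    'embedding' vector alongside their product fields.
    """
    q_vec = embed(query)
    scored = [
        (cosine_similarity(q_vec, np.asarray(item["embedding"])), item)
        for item in catalogue
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:top_k]]

# A query like "pink t-shirt for toddlers" can still surface jeans or
# wristbands near the top -- the hallucination problem the validation
# step described below is meant to catch.
```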
However, hallucinations occur when irrelevant items such as jeans or wristbands appear in the search results. To combat this, his team integrated AI-powered validation checks using OpenAI to flag inaccurate results stemming from hallucinations in the AI-generated output.
According to Agarwal, Talentica Software ran test queries and retrieved results from the models being compared, whether based on semantic search or cosine similarity. The validation system returned a simple true or false flag, which the team stored in their database.
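A rough sketch of how such a true/false validation pass could look, assuming the OpenAI chat completions API with a placeholder model name, an illustrative prompt, and a simple SQLite table for the flags; none of these specifics are confirmed by the talk:

```python
import sqlite3
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
db = sqlite3.connect("validation.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS relevance_flags "
    "(query TEXT, product_id TEXT, relevant INTEGER)"
)

def validate_result(query: str, product: dict) -> bool:
    """Ask the model whether a retrieved product actually matches the query,
    then persist the true/false flag. Model name and prompt are illustrative,
    not the team's actual configuration."""
    prompt = (
        f"Search query: {query}\n"
        f"Product title: {product['title']}\n"
        f"Product description: {product['description']}\n"
        "Is this product relevant to the query? Answer only 'true' or 'false'."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    flag = response.choices[0].message.content.strip().lower().startswith("true")
    db.execute(
        "INSERT INTO relevance_flags VALUES (?, ?, ?)",
        (query, product["id"], int(flag)),
    )
    db.commit()
    return flag
```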
Parihar further expanded on this idea, “Let’s say you want to generate a research paper using ChatGPT. It will generate good content, but the conferences will reject it because they can figure out what you say is machine-generated. No? But, some other AI tools can take that content and humanise it. So, AI has generated one problem, but you can use another AI tool to solve it. That’s how you can go about it.”
Minimising Humans in Building AI
While he mentioned that it is better to build things manually unless AI is absolutely necessary, Parihar also stressed the need to minimise human involvement when building AI. “You cannot say…’I will create XYZ’, and then some people will come and verify the XYZ. You need to build the bots.”
“You might need a person who knows a lot of AI tools. So they can use the tools. And, most importantly, you require someone who can convert those tools into bots, human-like,” he added.
Combining AI Tools Not Just to Fix Problems but to Save Costs
The team at Talentica Software specifically mentioned that they combined Llama 3 and OpenAI models to get better results, maintain efficiency and reduce the error rate.
In particular, Llama 3 was used for large-scale product categorisation and tagging, significantly reducing costs compared to OpenAI, according to Parihar.
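As a rough illustration of bulk tagging with a self-hosted Llama 3, the sketch below assumes an Ollama server on its default local endpoint and an illustrative prompt; the team's actual deployment and prompts are not described in the talk:

```python
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local Llama 3 host

def tag_product(title: str, description: str) -> list[str]:
    """Generate category tags for one SKU with a locally hosted Llama 3.
    Endpoint, model name and prompt are illustrative assumptions."""
    prompt = (
        "Return a JSON array of category tags for this product.\n"
        f"Title: {title}\nDescription: {description}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    text = resp.json()["response"]
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back to a single raw tag if the model does not return valid JSON.
        return [text.strip()]

# Running this over millions of SKUs on local hardware is what avoids
# per-token API charges for the bulk tagging step.
```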
OpenAI, in turn, was used as a validation mechanism to identify errors and hallucinations in the e-commerce system's search results. It compared search results against image descriptions and product details, flagging inaccuracies through a true/false validation check.
Parihar also explained why the team switched to Llama 3 for tag and category generation, and how doing so cut the cost of processing SKUs by 97%. With OpenAI, it cost them around $500 per 1 million SKUs; the Llama 3 model cost just $15 for the same workload.
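A quick back-of-the-envelope check on the quoted figures:

```python
openai_cost_per_million = 500   # USD per 1 million SKUs, figure quoted for OpenAI
llama3_cost_per_million = 15    # USD per 1 million SKUs, figure quoted for Llama 3

savings = 1 - llama3_cost_per_million / openai_cost_per_million
print(f"Savings: {savings:.0%}")  # -> Savings: 97%
```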
It is intriguing to see AI being used to fix errors generated by its own kind while also cutting costs. We have come a long way in a short span of time, from relying on manual human checks to curb hallucinations to trusting another AI to do the job. Fortunately, that means fewer extra human efforts are needed to solve these problems, ultimately making good use of AI.