A better approach for detecting overconfident large language models

by Tarun Khanna
March 19, 2026
in Artificial Intelligence, Machine Learning
Reading Time: 4 mins read

Photo Credit: https://news.mit.edu/


This new metric for measuring uncertainty could flag hallucinations and help users recognize whether to trust an AI model.

Large language models (LLMs) can produce plausible but incorrect responses, so researchers have developed uncertainty quantification techniques to test how reliable a model's predictions are. One popular approach involves submitting the same prompt multiple times to see whether the model generates the same answer each time.

But this technique measures only self-confidence, and even the most confident LLM can be plainly wrong. Overconfidence can mislead users about the correctness of a prediction, which could have devastating consequences in high-stakes settings like healthcare or finance.
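The repeated-prompting check described above can be sketched as a toy computation: sample the same prompt several times and measure the entropy of the answer distribution. This is a hypothetical illustration, not the paper's implementation; naive string normalization stands in for the semantic clustering a real system would use.

```python
from collections import Counter
import math

def self_consistency_uncertainty(responses):
    """Cluster a model's repeated answers to one prompt (here by
    lowercased exact match) and return the entropy of the answer
    distribution. Low entropy means high self-confidence, which,
    as the article notes, does not guarantee correctness."""
    counts = Counter(r.strip().lower() for r in responses)
    n = len(responses)
    probs = [c / n for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)

# A model that always answers the same way looks maximally confident...
assert self_consistency_uncertainty(["Paris", "paris", "Paris"]) == 0.0
# ...while a model that wavers between two answers shows full uncertainty.
assert self_consistency_uncertainty(["Paris", "Lyon"]) == 1.0
```

The zero-entropy case is exactly the failure mode the article describes: a confidently repeated answer can still be wrong.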

To address this issue, MIT researchers introduced a new approach for measuring a different type of uncertainty, one that more reliably identifies confident but inaccurate LLM responses.

Their technique involves comparing a target model's response to responses from a set of similar LLMs. They determined that measuring cross-model disagreement captures this type of uncertainty more accurately than traditional strategies.

They combined their method with a measure of LLM self-consistency to produce a total uncertainty metric, and evaluated it on 10 practical tasks, including question-answering and mathematical reasoning. This total uncertainty metric consistently outperformed other measures and was better at detecting unreliable predictions.

“Self-consistency is being used in lots of different methods for uncertainty quantification, but if your estimate of uncertainty depends on a single model's output, it is not necessarily trustworthy. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary technique that can empirically improve the outcome,” said Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.

She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems.

Understanding overconfidence

Many popular approaches to uncertainty quantification involve asking a model for a confidence score or checking the consistency of its responses to the same prompt. These strategies estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.

However, LLMs may be confident even when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess actual uncertainty when a model is overconfident.

The MIT researchers estimate epistemic uncertainty by measuring disagreement across a group of similar LLMs.

“If I ask ChatGPT the same question a number of times and it gives me the same answer every time, that doesn't mean the answer is necessarily accurate. But if I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.

Epistemic uncertainty tries to capture how far a target model diverges from the ideal model for a task. But since it is impossible to construct an ideal model, researchers use surrogates or approximations that often rely on incorrect assumptions.
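The cross-model idea can be sketched as a toy proxy: ask a reference ensemble of other models the same question and count how often their answers disagree with the target model's. This is an assumption-laden stand-in; the actual method compares semantic similarity of responses, whereas the default `similar` function here is just a normalized exact match.

```python
def epistemic_disagreement(target_answer, ensemble_answers, similar=None):
    """Toy proxy for epistemic uncertainty: the fraction of a reference
    ensemble (e.g. similarly sized models from other providers) whose
    answers disagree with the target model's answer. A lowercased exact
    match stands in for the semantic-similarity comparison used in
    practice; pass a custom `similar(a, b)` predicate to replace it."""
    if similar is None:
        similar = lambda a, b: a.strip().lower() == b.strip().lower()
    disagreements = sum(
        1 for ans in ensemble_answers if not similar(target_answer, ans)
    )
    return disagreements / len(ensemble_answers)

# Target agrees with 2 of 3 other models -> modest epistemic uncertainty.
u = epistemic_disagreement("Paris", ["Paris", "paris", "Lyon"])
assert abs(u - 1 / 3) < 1e-9
```

Even if the target model were perfectly self-consistent, a high disagreement score here would flag the confidently-wrong case that self-consistency alone misses.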

To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.

An ensemble approach

The approach they developed involves measuring the divergence between the target model and a small ensemble of models of similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, gave a better estimate of epistemic uncertainty.

To obtain the most accurate estimate, the researchers needed a set of LLMs that produced varied responses, weren't too similar to the target model, and were weighted based on reliability.

“We found that the best way to satisfy these properties is to take models that are trained by different companies. We tried many techniques that were more complicated, but this very simple approach ended up working best,” Hamidieh says.

Once they had developed this approach for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) provided the most accurate reflection of whether a model's confidence level is justified.

“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing those two uncertainty metrics gives us the best estimate,” Hamidieh says.
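The summation Hamidieh describes can be put together from the two toy signals: entropy of the target model's repeated answers (aleatoric proxy) plus cross-model disagreement (epistemic proxy). Everything below is a hypothetical sketch, with lowercased exact matching again standing in for semantic comparison.

```python
from collections import Counter
import math

def entropy(answers):
    """Aleatoric proxy: entropy of the target model's repeated answers
    to one prompt, clustered by lowercased exact match."""
    counts = Counter(a.strip().lower() for a in answers)
    probs = [c / len(answers) for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)

def disagreement(target_samples, ensemble_answers):
    """Epistemic proxy: fraction of ensemble answers that differ from
    the target model's majority answer."""
    majority = Counter(a.strip().lower() for a in target_samples).most_common(1)[0][0]
    differing = sum(a.strip().lower() != majority for a in ensemble_answers)
    return differing / len(ensemble_answers)

def total_uncertainty(target_samples, ensemble_answers):
    # Sum the two signals, as in the quote: prompt-level uncertainty
    # plus distance from the (approximated) optimal model.
    return entropy(target_samples) + disagreement(target_samples, ensemble_answers)

# The confidently-contested case: zero self-entropy, yet two of three
# other models disagree, so total uncertainty is still elevated.
tu = total_uncertainty(["42", "42", "42"], ["42", "41", "43"])
assert entropy(["42", "42", "42"]) == 0.0
assert abs(tu - 2 / 3) < 1e-9
```

A plain unweighted sum is used here for illustration; the weighting of ensemble members mentioned earlier in the article is omitted.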

TU could more effectively detect situations in which an LLM is hallucinating, since epistemic uncertainty can flag confidently incorrect outputs that aleatoric uncertainty might miss. It could also allow researchers to reinforce an LLM's most accurate responses during training, which may further improve performance.

They tested TU using several LLMs on 10 common tasks, including question-answering, summarization, translation, and mathematical reasoning. Their approach detected unreliable predictions more effectively than either measure on its own.

Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty alone, which could reduce computational costs and save energy.

Their experiments also showed that epistemic uncertainty is most effective on tasks with a single correct answer, like factual question-answering, but may underperform on more open-ended tasks.

In the future, the researchers may adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other types of aleatoric uncertainty.

Tarun Khanna

Founder, DeepTech Bytes - Data Scientist | Author | IT Consultant
Tarun Khanna is a versatile and accomplished data scientist, with expertise in IT consultancy and a specialization in software development and digital marketing solutions.
