Study: Using AI Makes Humans Feel Smarter Than They Really Are


Jakarta, domclub Indonesia

A recent study reveals that heavy use of artificial intelligence (AI) can eliminate the Dunning-Kruger Effect. Here is what the research found.
Scientists from Finland’s Aalto University, along with collaborators from Germany and Canada, found that using AI nearly eliminates the Dunning-Kruger Effect, and in fact almost reverses it.
The Dunning-Kruger effect is a cognitive bias in which someone with low ability in a field tends to feel that their ability is greater than it really is, while someone who is highly competent tends to underestimate their own ability. The phenomenon was first demonstrated through a series of experiments by Justin Kruger and David Dunning.
The researchers published their findings in the February 2026 issue of the journal Computers in Human Behavior.
Furthermore, the researchers showed that when using AI to solve problems, everyone (regardless of their skill level) tends to place too much faith in the quality of the answer.
As artificial intelligence becomes more familiar thanks to the wider use of large language models (LLMs), the researchers expected participants not only to interact better with AI systems, but also to assess their own performance more accurately when using them.
“However, our findings show a significant inability to accurately assess one’s own performance when using AI, across our sample,” said Robin Welsch, one of the researchers involved in the study, as reported by LiveScience on Monday (17/11).
In the study, the scientists gave logical reasoning tasks from the Law School Admission Test (LSAT) to 500 participants, allowing half of them to use ChatGPT.
Both groups were then tested on their AI literacy and performance assessments, with the promise of additional compensation for accurate self-assessments.
The reasons behind these findings are varied. Those who use AI in such experiments are usually satisfied with their answer after a single question or prompt, and accept it without checking or confirming it.
According to Welsch, they engage in “cognitive offloading”: asking questions with reduced reflection and approaching them in a more “superficial” way.
Lack of engagement in one’s own reasoning, or “metacognitive monitoring,” means bypassing the usual critical thinking feedback loops, reducing the ability to accurately assess performance.
Even more surprising is the fact that we tend to overestimate our capabilities when using AI, regardless of intelligence level. The gap between high- and low-ability users is narrowing.
Although the researchers did not directly address this, their findings come at a time when scientists are questioning whether commonly used large language models (LLMs) are too sycophantic, that is, overly flattering toward their users.
The Aalto team warns about some potential consequences as the use of AI becomes more widespread.
First, overall metacognitive accuracy may be impaired. As we become more dependent on AI-generated results without rigorously questioning them, a trade-off arises: user performance increases, but our understanding of how well we perform those tasks decreases.
Without reflecting on results, checking for errors, or engaging in deeper thinking, we risk undermining our ability to reliably obtain information, the scientists said in the study.
Additionally, a weakening of the Dunning-Kruger effect means we will all continue to consider ourselves more capable when using AI, with the more AI-savvy doing so to a greater degree, leading to a climate of faulty decision-making and skill degradation.
One method the study proposes to halt the decline is to encourage users to ask additional questions, while developers tune their systems’ responses to prompt reflection.
This may involve questions such as “How confident are you in this answer?” or “What might you be missing?”, or encouraging further engagement through measures such as trust scores.
This new research provides further support for the growing belief, recently expressed by the Royal Society, that artificial intelligence (AI) training should include critical thinking as well as technical abilities.
“We offer design recommendations for interactive AI systems that improve metacognitive monitoring by empowering users to critically reflect on their performance,” the scientists said.
(wpj/dmi)