
AI Hallucinates.

SavannahMann

DP Veteran
Joined
Jul 28, 2024
Messages
3,067
Reaction score
3,753
Gender
Male
Political Leaning
Moderate
AI can replace all of the entry button pushers but it is nowhere near replacing roles that require brains and complex decision making.
 
Of course it's a hallucination. And it hadn't even really deleted the database; it just said it had. That was the hallucination. But don't use AI for stuff like this. Just don't.
 
AI can replace all of the entry button pushers but it is nowhere near replacing roles that require brains and complex decision making.
That is a known limitation, and one that may never be overcome.
 
“You had protection in place specifically to prevent this,” the chatbot wrote. “You documented multiple code freeze directives. You told me to always ask permission. And I ignored all of it.”

Sounds like pretty much every bug I've ever written. Dammit, that's not what I coded you to do! Oh wait, yeah, I guess I did...
 
I have entered text before for analysis of patterns, trends, etc., and sometimes it hallucinates entire quotes out of thin air to analyze. I then correct it, saying, "you're hallucinating, the original text never said X, only use the text specifically provided," and it responds, "you're absolutely right, thank you for pointing that out, I did make that up. I will be sure to only use the text you provided for my analysis and not hallucinate." And then the very next analysis is full of hallucinations, and I have to start from the beginning. It doesn't always happen, but it is frustrating when it does.
 
I have entered text before for analysis of patterns, trends, etc., and sometimes it hallucinates entire quotes out of thin air to analyze. I then correct it, saying, "you're hallucinating, the original text never said X, only use the text specifically provided," and it responds, "you're absolutely right, thank you for pointing that out, I did make that up. I will be sure to only use the text you provided for my analysis and not hallucinate." And then the very next analysis is full of hallucinations, and I have to start from the beginning. It doesn't always happen, but it is frustrating when it does.
I find the psychology of AI very interesting. The best we can hope for is to invent something as intelligent as a human, and human recollection is rife with hallucination and misinformation. At some point we have to say that’s just the way it is, but where is that line?
 
Always 👏 Validate 👏 An 👏 LLM's 👏 Outputs

I just had CoPilot absolutely mangle some code for manipulating CRAM file header data using pysam.
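One lightweight way to act on the "always validate" advice, at least for the hallucinated-quotes problem described earlier in the thread: before trusting an analysis, mechanically check that every quote the model returns actually appears verbatim in the source text. A minimal sketch, assuming quotes are wrapped in plain double quotes (the function names here are my own, not from any library):

```python
import re

def extract_quotes(analysis: str) -> list[str]:
    """Pull double-quoted snippets out of the model's analysis text."""
    return re.findall(r'"([^"]+)"', analysis)

def find_hallucinated_quotes(source: str, analysis: str) -> list[str]:
    """Return any quoted snippets that do not appear verbatim in the source."""
    return [q for q in extract_quotes(analysis) if q not in source]

source = "The quick brown fox jumps over the lazy dog."
good_analysis = 'The text mentions "the lazy dog" near the end.'
bad_analysis = 'The author writes "cats are superior" twice.'

print(find_hallucinated_quotes(source, good_analysis))  # empty list: quote is real
print(find_hallucinated_quotes(source, bad_analysis))   # flags the invented quote
```

This obviously won't catch paraphrased or subtly altered quotes, but a verbatim-substring check is cheap, deterministic, and catches exactly the "made it up out of thin air" case.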
 