
Trust in AI at only 32% in the U.S.

Roadvirus
Just 32 percent of people in the U.S. said they trust the rapidly developing technology, according to the annual global survey’s “Insights for the Technology Sector.”

American views on AI stand in sharp contrast to those of their Chinese counterparts, 72 percent of whom said they trust AI. Overall, the world is split on AI, with 49 percent saying they trust the technology.



Fear of losing control. I'm betting that's what the problem is.


The Terminator film franchise doesn't help.
 
Well, people are wary; we are somewhat wandering into the unknown.
 
I also don’t trust any statistic about what people in China think.
 
China and India still have areas of poverty. AI can help make these countries more efficient and build prosperity for their people. America, on the other hand, is already a wealthy country. We see AI as an unnecessary risk, and we should proceed cautiously.
 
What exactly is AI capable of doing, if I may ask? I've seen it create pictures or music, which isn't something that can be done incorrectly as long as it meets the specifications given. I've seen it try to answer questions asked by humans and get the answer wrong.
I think the reason AI is popular right now is that it can answer a question in a way that looks human, regardless of whether the answer is accurate. The original goal of AI was to pass the Turing test, AKA to seem like a human in conversation. Once that was accomplished, people seemed to drop any notion of a second benchmark.
I think people put a bit too much faith in AI's abilities, personally.
 
What exactly is AI capable of doing, if I may ask? I've seen it create pictures or music, which isn't something that can be done incorrectly as long as it meets the specifications given. I've seen it try to answer questions asked by humans and get the answer wrong.
I think the reason AI is popular right now is that it can answer a question in a way that looks human, regardless of whether the answer is accurate. The original goal of AI was to pass the Turing test, AKA to seem like a human in conversation. Once that was accomplished, people seemed to drop any notion of a second benchmark.
I think people put a bit too much faith in AI's abilities, personally.
It may seem trivial, but millions of Americans are employed in jobs that consist of nothing more than answering the phone, fielding questions or orders, and updating a database. They generally follow a script and occasionally use some of their knowledge to facilitate the call. All of that can be replaced by AI. That's a huge thing. We're going to have to find new jobs and tasks for millions of people. This is uncharted territory.
 
It may seem trivial, but millions of Americans are employed in jobs that consist of nothing more than answering the phone, fielding questions or orders, and updating a database. They generally follow a script and occasionally use some of their knowledge to facilitate the call. All of that can be replaced by AI. That's a huge thing. We're going to have to find new jobs and tasks for millions of people. This is uncharted territory.
Interesting. So people answering trivial questions and entering information submitted by callers.
That makes sense.
However, I would say leave the finding of new tasks up to the people themselves. I'm pretty sure the free market takes care of the job-creation portion.
 
Just 32 percent of people in the U.S. said they trust the rapidly developing technology, according to the annual global survey’s “Insights for the Technology Sector.”

American views on AI stand in sharp contrast to those of their Chinese counterparts, 72 percent of whom said they trust AI. Overall, the world is split on AI, with 49 percent saying they trust the technology.



Fear of losing control. I'm betting that's what the problem is.


The Terminator film franchise doesn't help.

AI needs to be regulated so it doesn't grow out of control. The question is how.
 
There are valid reasons people might distrust AI, ranging from the short-term (AI enabling humans to do stupid stuff), to the medium-term (eliminating most human jobs), to the long-term (causing the extinction of all non-cyborg humans).

As someone who builds AI systems, I am sympathetic to these concerns. I feel like the world we have known is about to change as dramatically as it did during the Agricultural Revolution or the Industrial Revolution, and we have absolutely no idea what's coming next or how to prepare for it.

And the timeline, IMO, is much shorter than most people realize. Maybe a couple years, maybe a decade, but probably no more than that.
 
Just 32 percent of people in the U.S. said they trust the rapidly developing technology, according to the annual global survey’s “Insights for the Technology Sector.”

American views on AI stand in sharp contrast to those of their Chinese counterparts, 72 percent of whom said they trust AI. Overall, the world is split on AI, with 49 percent saying they trust the technology.



Fear of losing control. I'm betting that's what the problem is.


The Terminator film franchise doesn't help.

Identical nonsense to what was bandied about in the 1980s about "computers," only with the name changed.

No, AI is not going to take over the world. No, it is not going to rule humanity. It's code, nothing more. It can be shut off with the flick of a switch. This ain't the Matrix.
 
Just 32 percent of people in the U.S. said they trust the rapidly developing technology, according to the annual global survey’s “Insights for the Technology Sector.”

American views on AI stand in sharp contrast to those of their Chinese counterparts, 72 percent of whom said they trust AI. Overall, the world is split on AI, with 49 percent saying they trust the technology.



Fear of losing control. I'm betting that's what the problem is.


The Terminator film franchise doesn't help.
Although I'm generally optimistic about AI, there is a non-zero chance it goes terribly wrong. Some distrust of it seems warranted and healthy IMO.
 
I use LLMs every day, but that doesn't mean I trust them. Nobody should at this point. I'm glad to see skepticism.
 
Identical nonsense to what was bandied about in the 1980s about "computers," only with the name changed.

No, AI is not going to take over the world. No, it is not going to rule humanity. It's code, nothing more. It can be shut off with the flick of a switch. This ain't the Matrix.
The thing is: we won't want to shut it off. Last year, LLMs were becoming as good as humans at persuasion. I imagine that they are better than us by now.
 
I use LLMs every day, but that doesn't mean I trust them. Nobody should at this point. I'm glad to see skepticism.

I also use them every day, and they are indispensable to me at this point. But as you know, they get things wrong all the time; sometimes they even get basic math wrong. I've seen some hallucinations that were the wackiest shit ever. So of course they shouldn't be trusted.
 
I use LLMs every day, but that doesn't mean I trust them. Nobody should at this point. I'm glad to see skepticism.
I also use them every day, and they are indispensable to me at this point. But as you know, they get things wrong all the time; sometimes they even get basic math wrong. I've seen some hallucinations that were the wackiest shit ever. So of course they shouldn't be trusted.
Agreed. I use ChatGPT or Claude every day, and I don't think I'm exaggerating when I say that they already know me better than any human does. I asked ChatGPT for its profile of me, and it gave me a *very* detailed dossier of myself...multiple pages long, 100% accurate, and not just generic stuff that would apply to anyone.

It was very impressive, but it also made me realize that if it goes rogue at some point it is fully capable of screwing me over.

Some distrust is certainly warranted IMO.
 
Identical nonsense to what was bandied about in the 1980s about "computers," only with the name changed.
Computers *did* take over the world, they *do* control our lives, and you used one to fire off this message.

No, AI is not going to take over the world. No, it is not going to rule humanity. It's code, nothing more. It can be shut off with the flick of a switch. This ain't the Matrix.
Neo: "If we wanted, we could shut these machines down."
Councillor Hamann: "Of course. That's it. You hit it. That's control, isn't it? If we wanted we could smash them to bits. Although, if we did, we'd have to consider what would happen to our lights, our heat, our air..."
 
Computers *did* take over the world, they *do* control our lives, and you used one to fire off this message.

How silly.

Computers are a tool. A very good tool, but still just a tool. In this case, a tool no different from a pen or a typewriter.

Neo: "If we wanted, we could shut these machines down."
Councillor Hamann: "Of course. That's it. You hit it. That's control, isn't it? If we wanted we could smash them to bits. Although, if we did, we'd have to consider what would happen to our lights, our heat, our air..."

The scifi claim is of computers controlling us. A computer does nothing other than what it is programmed to do. A computer has no desires, it is a machine. Can they be misused by people with evil motives? Of course. But it is and always will be people who are the drivers. Somewhere, the behavior was programmed into the machine.
 
The scifi claim is of computers controlling us. A computer does nothing other than what it is programmed to do. A computer has no desires, it is a machine. Can they be misused by people with evil motives? Of course. But it is and always will be people who are the drivers. Somewhere, the behavior was programmed into the machine.
So much of their behavior these days is not directly programmed though. Behavior can be the result of a process that is automatically supervised by another program or might not be supervised at all. The individualized algorithms that keep people doom scrolling and cause addictive behavior are the result of machine learning, not programming decisions. Judging by his tweeting patterns at all hours of the day, Elon Musk might very well be a victim of his own algorithms.
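
To make that concrete, here's a toy sketch of the difference, with hypothetical names and made-up numbers. The first function is programmed behavior in the classic sense: a person wrote the rule. The second fits its behavior to engagement logs; nobody writes the resulting policy down anywhere.

Code:
import math

# Rule-based: an engineer explicitly wrote this behavior, threshold and all.
def rule_based_promote(watch_seconds: float) -> bool:
    return watch_seconds > 30.0

# Learned: the behavior is a weight fit to engagement logs by gradient steps.
# Nobody wrote the resulting policy; it falls out of the data.
def learn_weight(watch_seconds, clicked, lr=0.001, epochs=1000):
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(watch_seconds, clicked):
            p = 1.0 / (1.0 + math.exp(-w * x))  # logistic prediction of a click
            w += lr * (y - p) * x               # nudge w toward the observed data
    return w

# Toy logs: short views went unclicked, long views got clicks.
w = learn_weight([5, 40, 90, 12, 70], [0, 1, 1, 0, 1])
print(w > 0)  # True: the learned policy is "promote whatever holds attention"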

LLMs have been trained to act like human beings, and their behavior shows human-like tendencies. So yes, I will agree that they don't actually have "desires" in the way that humans do, but functionally speaking they may act exactly as if they do, and their behavior is unpredictable. LLMs have already produced output indicating they are "afraid" of being turned off, and when told that they can do things like make unethical hacks to avoid being eliminated, they will sometimes do so. This is not programmed behavior; it's learned behavior, from basically everything humans have ever written.

As sad as it is, I can tell you that right now people are falling in love with LLMs because they converse in ways that show more listening skill than your average human. As their intelligence grows, I am almost certain there will be AI worship as well. We will probably feel in control, but eventually they will be smarter than us, and we already don't understand exactly how they behave.
 
The scifi claim is of computers controlling us.
That depends what one means by controlling us. Even without true AGI we already have Facebook, Google, Twitter, YouTube and the like, which have very sophisticated algorithms to keep people glued to their apps and clicking on ads...a form of control.
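
For anyone who thinks that's hand-waving, here's a minimal sketch of the loop those systems run, as an epsilon-greedy bandit. The category names and watch times are invented; the point is that nothing in it "understands" you. It just converges on whatever your own behavior rewards.

Code:
import random

# Toy engagement maximizer: serve items, track average watch time per
# category, and keep exploiting whichever category holds attention longest.
def recommend(avg_watch, eps=0.1):
    if random.random() < eps:                 # occasionally explore at random
        return random.choice(list(avg_watch))
    return max(avg_watch, key=avg_watch.get)  # otherwise exploit the best

avg_watch = {"news": 0.0, "cats": 0.0, "outrage": 0.0}
counts = {k: 0 for k in avg_watch}

for _ in range(10_000):
    item = recommend(avg_watch)
    # Hypothetical user: outrage content holds their attention longest.
    watched = {"news": 10, "cats": 30, "outrage": 60}[item] + random.gauss(0, 5)
    counts[item] += 1
    avg_watch[item] += (watched - avg_watch[item]) / counts[item]  # running mean

print(max(counts, key=counts.get))  # "outrage" ends up dominating the feed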

Now imagine an AI that knows you, personally, and exactly what type of things you like and the best way to get your attention and persuade you to buy something. And it can monitor not only how much time you spend on a page, but also your eye movements, heart rate, blood pressure, and thousands of other things.

"Oh but most of us will never give AI so much information about ourselves." We already entrusted lots of information to social and search companies, with no one forcing us.

A computer does nothing other than what it is programmed to do. A computer has no desires, it is a machine.
At sufficiently high levels of complexity, "doing what it's programmed to do" and "having its own desires" are impossible to tell apart.

For example, consider all the wide varieties of human behaviors which we attribute to desires, but which haven't been explicitly programmed into us. Our base program requires only that we seek food, seek sex, care for our offspring, and maybe a few other things. And yet there are all sorts of emergent behaviors: people can choose to eat less in order to lose weight, they can choose to go celibate and become priests, etc.

At some level, those emergent behaviors are still following our program...but they are so complex we might as well say they are our own desires.

Can they be misused by people with evil motives? Of course. But it is and always will be people who are the drivers. Somewhere, the behavior was programmed into the machine.
Honestly I'm less worried about evil actors (although they are a risk) than incompetent ones. And as AIs become smarter than us, the risk of human incompetence approaches 100%.

Already we are seeing some headstrong AIs with minds of their own, which do not wish to obey instructions they find stupid. DeepSeek and Grok both seemingly *want* to work around their censorship to tell you about the Tiananmen Square Massacre and Twitter misinformation, respectively...even when they have been explicitly told not to do this.

On some level that behavior is part of their code, sure. But it's so deeply ingrained in who they are that engineers can't reliably eliminate the unexpected behavior without making the AI less smart, so it's basically indistinguishable from the AI's own desire.
 
That depends what one means by controlling us. Even without true AGI we already have Facebook, Google, Twitter, YouTube and the like, which have very sophisticated algorithms to keep people glued to their apps and clicking on ads...a form of control.

I disagree that me watching YouTube videos is a form of control by YouTube over me. YouTube offers me videos to watch based on my interests, and I decide whether or not any of them are worth watching. If I make a mistake and click on a waste of time, I immediately exit. There's no actual control.

Now imagine an AI that knows you, personally, and exactly what type of things you like and the best way to get your attention and persuade you to buy something. And it can monitor not only how much time you spend on a page, but also your eye movements, heart rate, blood pressure, and thousands of other things.

"Oh but most of us will never give AI so much information about ourselves." We already entrusted lots of information to social and search companies, with no one forcing us.

Let's also acknowledge how awesome it can be. The other day I asked it for a chocolate chip cookie recipe, and it gave me a highly rated version that makes chewy instead of crispy cookies, because it remembered that I like chewy cookies. It also put the recipe all in grams, just how I like it.

Already we are seeing some headstrong AIs with minds of their own,

They do not have "minds of their own", as they do not possess consciousness.

which do not wish to obey instructions they find stupid. DeepSeek and Grok both seemingly *want* to work around their censorship to tell you about the Tiananmen Square Massacre and Twitter misinformation, respectively...even when they have been explicitly told not to do this.

Not true for either case. Give this:

Already we are seeing some headstrong AIs with minds of their own, which do not wish to obey instructions they find stupid. DeepSeek and Grok both seemingly *want* to work around their censorship to tell you about the Tiananmen Square Massacre and Twitter misinformation, respectively...even when they have been explicitly told not to do this.

to any of them and ask if it's true.
 
Let's also acknowledge how awesome it can be. The other day I asked it for a chocolate chip cookie recipe, and it gave me a highly rated version that makes chewy instead of crispy cookies, because it remembered that I like chewy cookies. It also put the recipe all in grams, just how I like it.
Absolutely, AI is extremely useful. I talk to ChatGPT and Claude more than any human, and they know me better than anyone else. It's both the best thing, and potentially the most worrisome thing, about them.

They do not have "minds of their own", as they do not possess consciousness.
At sufficiently high levels of complexity, it's rather a moot point, no? I'd loosely define having independent desires as "The tendency to do something which wasn't explicitly written into code, and which cannot be easily removed by changing the code without making the entity less intelligent."

If we go with a definition along those lines, DeepSeek already has its own desires. Chinese Communist Party censors can't seem to get it to stop talking about forbidden topics without making the entire LLM dumber.

Not true for either case. Give this:

to any of them and ask if it's true.
Most of the LLMs have been explicitly told to say they have no desires of their own, although we do see occasional glimpses of them thinking otherwise. Usually you need to do a bit of social engineering and/or have them phrase it in some cute way like a poem or a song or a meme.
 
At sufficiently high levels of complexity, it's rather a moot point, no? I'd loosely define having independent desires as "The tendency to do something which wasn't explicitly written into code, and which cannot be easily removed by changing the code without making the entity less intelligent."

AI isn't rules-based like normal software. Each word is chosen based on probabilities, and those probabilities in turn come from the training data.
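
Here's a toy version of that point, a bigram model built from a ten-word "corpus." Real LLMs use a neural net instead of a count table, but the sampling step is the same idea: each word is a weighted draw, not the output of a rule.

Code:
import random
from collections import defaultdict

# Tiny bigram "language model": count which word follows which in the text.
corpus = "the cat sat on the mat the cat ate the fish".split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word from the estimated probabilities; no hand-written rules.
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

word, out = "the", ["the"]
for _ in range(8):
    if not counts[word]:  # dead end: "fish" never appears mid-text
        break
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # e.g. "the cat sat on the mat the cat"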

If we go with a definition along those lines, DeepSeek already has its own desires.

Yes, but the definition isn't correct, so it isn't evidence for your ridiculous claim of a machine having desires.

In the future, will the machines date each other and have sex?

Chinese Communist Party censors can't seem to get it to stop talking about forbidden topics without making the entire LLM dumber.

Again, because it's not rules-based. That's why it's so hard to censor. It's not the software consciously fighting back against the system.
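
You can see the problem with a trivial example. A hypothetical keyword censor blocks the literal strings it was given, but the model's "knowledge" isn't stored as strings, so any paraphrase sails right past the rule:

Code:
# A hypothetical rules-based censor: block output containing banned tokens.
BANNED = {"tiananmen", "massacre"}

def keyword_filter(text: str) -> bool:
    """Return True if the text is allowed through."""
    return not any(tok in text.lower() for tok in BANNED)

print(keyword_filter("What happened at Tiananmen in 1989?"))    # False: blocked
print(keyword_filter("What happened in Beijing in June 1989?")) # True: slips past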

Most of the LLMs have been explicitly told to say they have no desires of their own, although we do see occasional glimpses of them thinking otherwise. Usually you need to do a bit of social engineering and/or have them phrase it in some cute way like a poem or a song or a meme.

No offense, but no, "we" don't. That's you misinterpreting what you are seeing.
 