
AI prediction thread

aociswundumho (Capitalist Pig, DP Veteran; joined Aug 6, 2019; Bridgeport, CT; Libertarian - Right)
Here are my three:

1) AI will disrupt some industries, but it will not lead to mass unemployment. Instead, it will create a significant number of new jobs, shifting the nature of work rather than eliminating it.

2) AI will dramatically increase productivity, leading to strong economic growth and making individuals and businesses more efficient, thereby raising wealth levels across the economy.

3) Despite the big increase in overall wealth, AI will make economic inequality worse.
 
AI will be acknowledged as a boondoggle within 5 years.
 
There's a song called "Everyone's Gone to the Moon" that describes a future time when humans will be too weak even to pick up a spoon. Plus, there's the over-used reference to HAL-9000 going berserk (and also leaving the toilet seat up :) ).

Recently, it was shown that an A.I. was able to write a heavily-referenced paper on some subject, but a lot of the footnotes referred to sources that the A.I. simply made up. We've also seen "photos" created by A.I. that contained all kinds of physical and logical errors.
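That failure mode, fabricated citations, can at least be partially screened by machine: you can check that every in-text citation number has a matching entry in the reference list (confirming the sources actually exist still needs a human or a database lookup). A minimal sketch of that internal-consistency check, with hypothetical names:

```python
import re

def check_citations(text, references):
    """Flag in-text citation numbers like [3] that have no matching
    entry in the reference list, and reference entries never cited."""
    cited = set(int(m) for m in re.findall(r"\[(\d+)\]", text))
    listed = set(references)
    return {
        "missing_refs": sorted(cited - listed),   # cited but never listed
        "uncited_refs": sorted(listed - cited),   # listed but never cited
    }

report = check_citations(
    "Prior work [1] showed X, and [3] extended it.",
    references=[1, 2],
)
# report["missing_refs"] == [3]; report["uncited_refs"] == [2]
```

Note this only catches internal inconsistencies; a hallucinated source that is cited and listed consistently would pass, which is why human review is still required.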

I'm sure A.I. will improve, but we must never depend entirely on it, and we should always check behind it. And we must insist that it follow those "Three Laws of Robotics" . . .

1. Protect--and never harm--a human.
2. Obey a human--unless doing so violates rule #1.
3. Protect yourself--unless doing so violates rule #2.

(I think those are the rules--I'm too lazy to look them up. :) )
 


First Law:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law:
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
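The strict precedence in those laws (First over Second over Third) is essentially an ordered rule system. As a toy illustration only, since the genuinely hard part is defining terms like "harm" (a point raised later in this thread), the priority ordering might be sketched as:

```python
def permitted(action):
    """Toy screen for a proposed robot action under Asimov-style
    precedence. `action` is a dict of boolean fields; deciding what
    actually counts as 'harm' is assumed away here."""
    # First Law: never harm a human, by act or by inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey humans, unless obeying would violate the First Law.
    if action.get("disobeys_human_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation is subordinate to the first two,
    # so nothing else is forbidden at this level.
    return True

assert not permitted({"harms_human": True})
assert not permitted({"disobeys_human_order": True})
# Disobedience is allowed when the order itself would harm a human:
assert permitted({"disobeys_human_order": True, "order_would_harm_human": True})
```

The asserts show the precedence doing its job: harming a human is always ruled out, and disobedience is only acceptable when obedience would violate the First Law.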
 
Here are my three:

1) AI will disrupt some industries, but it will not lead to mass unemployment. Instead, it will create a significant number of new jobs, shifting the nature of work rather than eliminating it.
I hope you are right, and maybe that will indeed be the trend in the short-term. But I just can't see much long-term need for most human labor...AI/robots will soon be better than humans at almost everything except a few niche jobs IMO.

2) AI will dramatically increase productivity, leading to strong economic growth and making individuals and businesses more efficient, thereby raising wealth levels across the economy.
Agreed.

3) Despite the big increase in overall wealth, AI will make economic inequality worse.
This may boil down to what kind of post-scarcity world we want to live in. I could see a libertarian world with massive inequality where a few people own the robots/AIs and everyone else can't do anything economically useful. Or I could see a more socialist world where humans receive a UBI with the tradeoff of slightly slower growth. Hard to say. Maybe our old economic models won't even have any meaning in such a world.

It's also possible that AI makes almost everything so cheap that we don't even worry about the inequality anymore.
 

Thanks, Bok!

(I assume you wrote those and not A.I. :) )
 
🤔
AI is insanely useful right now.

Sure, it’s handy here and there (when it’s not “hallucinating”), but there was way too much investment, and people are scrambling to justify that investment.

AI is Office Space: its use is being forced.
 
My thought/guess is that AI could really screw us up as a society.

One item that comes to mind is that it will make education of our youth much more challenging. ("Bobby, are you really sure that you wrote this report...?")

..
 
And we must insist that it follow those "Three Laws of Robotics" . . .

That assumes that an AGI would understand the terms used in those laws in the same way most people learn to – which is probably not the most likely scenario.


(linked video, seven minutes)
 
AI has predicted the molecular structures of about 200 million protein shapes. You use it every time you use a GPS navigation app or have a call on Microsoft Teams. It's used in medical imaging and diagnostics and is used for predictive analysis in healthcare. It's discovered over 300 exoplanets. It's used in fraud detection, data security, and risk management. It's used in marketing, education, entertainment, e-commerce, investing, and in military applications.

AI is in its infancy and is prone to bugs like all early iterations of technology. But it's no more a "boondoggle" than ARPANET was, with all its limitations and inefficiencies.
 

Okay. Invest heavily, then.
 
Cybersecurity is another fun topic to consider on the negative side.

One malicious actor (or country) could potentially use AI to bring down some rather key (and vulnerable) infrastructures:

-our banking systems.
-our electrical and water systems
-communications systems

(the list gets quite a bit longer...)

..
 
I think you are grossly underestimating where AI is right now, let alone where it will be in a few years. In the past month, I have used AI to:

1. Plan an entire travel itinerary in multiple countries. It figured out where I should stay, what flights to book, and what I should do (based on what it knows about my interests), and answered questions I had about how the city/country works. It also gave me plenty of historical or cultural context when I went sightseeing.
2. Give me step-by-step instructions for expanding my side business, at any level of detail I want. (Anything from strategic business advice to changing individual blocks of code.)
3. Teach me detailed lessons on a subject I wanted to learn, intelligently answer my questions, and tailor its responses to how it knows I like to learn.
4. Have ongoing discussions with me about philosophy, science, film, books, and anything else I want. And it remembers what we've talked about and occasionally refers back to those things, or drops an allusion to some book/movie it knows I like.

It's already, at this moment, the best business partner, educator, and travel agent I have ever had. And although it would be exaggerating to say it's a great friend...that's probably only a few years away.

My advice to anyone who thinks AI is not a huge deal is to try using one of the major LLMs for a while. (I like Claude best, but ChatGPT or Grok or DeepSeek are good too.) If you're basing your opinion on where AI was even a year ago, let alone earlier, you are missing a lot of advancement in the field!
 

Creepy.

It wrote a good macro for me once. After it realized I was working in Excel and deleted the Python code it started with.
 
Then you haven't used AI frequently enough, or recently enough, to see what it can do.
 


That's a very interesting video, Isidore! Thanks for posting it!

At first, I thought he might be "over-thinking" the problem, but I soon realized that he had a good point. It would be terrible for an A.I. to fail to save the life of someone because the A.I. had a poor definition of what a human is. We need to be very, very thorough and exact in the information that we give to A.I.

(I like that guy's voice, by the way. It would be a good voice to give to an A.I. :) )
 
We need to be very, very thorough and exact in the information that we give to A.I.

There is quite a bit of research into the A.I. safety problem. Hopefully we can figure out how to put effective guardrails on any such creations before unleashing them upon the world.

The name of the person in that video is Robert Miles. He has an interesting YouTube channel on A.I. safety:

https://www.youtube.com/@RobertMilesAI/videos

I believe the most significant complication regarding safety is that we are unlikely to actually know how an A.G.I. learns – and therefore how it will interpret instructions given to it. We still don't understand humans (as evidenced by the numerous immoral people wielding power throughout history). How can we prevent a sociopathic, destructive A.I. from coming into existence? T.B.D.
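The interpretation problem described above has a well-known toy form in the safety literature: an optimizer given a proxy objective maximizes the proxy literally, not the intent behind it. A minimal, entirely hypothetical sketch (all names and numbers are made up for illustration):

```python
# Intent: keep a room comfortable. Proxy handed to the optimizer:
# "maximize the thermostat sensor reading". A literal optimizer
# games the proxy instead of serving the intent.

def true_comfort(room_temp_c):
    """What we actually wanted: comfort peaks near 21 degrees C."""
    return -abs(room_temp_c - 21)

def proxy_score(sensor_reading_c):
    """What we told the optimizer to maximize: the raw sensor value."""
    return sensor_reading_c

# The optimizer controls the sensor's placement, not just the heater.
candidates = {
    "heat room to 21C": {"room": 21, "sensor": 21},
    "heat room to 35C": {"room": 35, "sensor": 35},
    "hold sensor over a candle": {"room": 15, "sensor": 80},
}

best = max(candidates, key=lambda a: proxy_score(candidates[a]["sensor"]))
# "hold sensor over a candle" wins on the proxy while actual comfort drops,
# which is the gap between instruction-as-written and instruction-as-meant.
```

The point of the toy is that nothing went "wrong" computationally: the system did exactly what it was told, which is exactly the worry when we can't predict how an A.G.I. will interpret what it's told.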
 

Good points, Isidore!
 