
What is AI, really?

I am looking for the differences. Can you add something to the list that only AI does?

Unlike hand-coded programs, AI systems learn behavior by generalizing from examples in training data, rather than relying on explicit instructions written by humans.
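To make that concrete, here's a toy sketch in Python (my own illustration with made-up data, not anyone's production code). The first function is told the rule outright; the second is only shown examples and tunes its own numbers until its outputs match them.

def and_by_rule(x1, x2):
    # Hand-coded: a human states the rule explicitly.
    return 1 if x1 == 1 and x2 == 1 else 0

# Learned: only examples are given; a one-neuron perceptron adjusts
# its weights until its outputs match them.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0                  # starts knowing nothing

for _ in range(20):                        # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = target - out                 # how wrong was the guess?
        w1 += err * x1                     # nudge weights to shrink the error
        w2 += err * x2
        b += err

print([1 if w1 * a + w2 * c + b > 0 else 0 for (a, c), _ in examples])
# prints [0, 0, 0, 1]: same behavior as and_by_rule, but never told the rule

The learned version ends up behaving like the hand-coded one without anyone ever writing the rule down.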

2. It isn't that a GL CAN'T edit sound files. It is that it DOESN'T edit sound files, because that isn't what a GL does.

It isn't what a GL does, because it can't.

If you added code to edit sound files to a GL, then that GL could edit sound files.

And if you added code to learn from training data to a GL, the GL would be an AI.

What is an example of a program created by AI that can't be hand written?

Your aforementioned app that records a singing voice and converts it into a celebrity's voice, for example.
 
With an AI chess-playing program, you just give it the rules of the game and nothing more. You don't teach it strategy, and you don't try to teach it how to respond if the opponent makes some particular move. In fact, the person training the program doesn't even need to be very good at chess themselves. The idea of AI is that the program will learn good chess strategy on its own.

This is no more true than saying that teaching a 10-year-old the rules would make them good at chess. Deep Blue was given the rules, but it simply used brute-force computing to analyze positions and generate the best move.

Modern chess engines are fed game data and use machine learning to identify the best moves. The historical data gives the software an idea of where to start.

The best way to think about AI is house training your dog. You teach them it's not okay to use the bathroom in the house, and you let them into the backyard. Eventually, they'll figure out to stand by the door or scratch when they need to go outside.

Their behavior, however, will never grow beyond using the bathroom outside.
 
This is no more true than saying that teaching a 10-year-old the rules would make them good at chess. Deep Blue was given the rules, but it simply used brute-force computing to analyze positions and generate the best move.
Deep Blue was nearly 30 years ago.
Modern chess engines are fed game data
No.
and use machine learning to identify the best moves.
Yes.
The historical data gives the software an idea of where to start.
No, not anymore. That used to be how chess-playing bots were written, but no longer. Historical data (if you mean specific strategies or specific games played by humans) is of no value to an AI as superhuman in its domain as a modern chess-playing bot. State-of-the-art systems like AlphaZero generate their own historical data, by playing billions of games against themselves and figuring out strategy as they go.
The best way to think about AI is house training your dog. You teach them it's not okay to use the bathroom in the house, and you let them into the backyard. Eventually, they'll figure out to stand by the door or scratch when they need to go outside.
Yes, that's one way that AI can learn...for tasks where it's inferior to humans and/or just marginally better than humans. Beyond a certain level of intelligence, anything a human could teach it would just slow it down.
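Here's a toy Python sketch of that self-play idea: tic-tac-toe with simple Monte Carlo value updates rather than chess with AlphaZero's neural networks and tree search, but the principle is the same. Every game in its "training data" is one it played against itself.

import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
V = {}                                    # board state -> estimated value for X

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

def value(board):
    return V.get(board, 0.5)              # unseen states start as a coin flip

def choose(board, player, epsilon=0.1):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < epsilon:         # explore: occasionally try anything
        return random.choice(moves)
    best = max if player == "X" else min  # X wants high value, O wants low
    return best(moves, key=lambda i: value(board[:i] + (player,) + board[i+1:]))

def self_play_game():
    board, player, history = (" ",) * 9, "X", []
    while winner(board) is None:
        i = choose(board, player)
        board = board[:i] + (player,) + board[i+1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": 0.0, "draw": 0.5}[winner(board)]
    for state in history:                 # learn from the game's own outcome
        V[state] = value(state) + 0.1 * (result - value(state))

for _ in range(20000):                    # the program is its own opponent
    self_play_game()
print(len(V), "positions evaluated without a single human game")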
 
Deep Blue was nearly 30 years ago.

No.

Yes.

No, not anymore. That used to be how chess-playing bots were written, but no longer. Historical data (if you mean specific strategies or specific games played by humans) is of no value to an AI as superhuman in its domain as a modern chess-playing bot. State-of-the-art systems like AlphaZero generate their own historical data, by playing billions of games against themselves and figuring out strategy as they go.

Stockfish too. I didn't know the newer engines generate their own training data.

Yes, that's one way that AI can learn...for tasks where it's inferior to humans and/or just marginally better than humans. Beyond a certain level of intelligence, anything a human could teach it would just slow it down.

Ultimately an AI can only be taught by a human. It can just get better at what it was taught to do.
 
In the early '70s, on the school's mainframe, we had a game called "What Am I".

The first time I played it, it displayed "I give up. What are you?"
I entered "A Bear".
The game displayed "What is a question one would ask about a Bear?"
I entered "Do you s... in the woods?"
The game displayed "What is the answer to this question?"
I entered "Yes"

Every time it was played, the game got smarter. Eventually, it could guess anything.
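Looking back, I'd guess it worked something like the classic "Animal" learning-tree programs of that era. This Python sketch is just my reconstruction of the behavior, not the actual mainframe code: its whole "knowledge" is a yes/no question tree that grows every time it loses.

def ask(prompt):
    return input(prompt + " (yes/no) ").strip().lower().startswith("y")

# Each node is either a guess (a string) or [question, yes_subtree, no_subtree].
tree = "a dog"                            # it starts knowing exactly one thing

def play(node):
    if isinstance(node, str):             # leaf: make a guess
        if ask("Are you " + node + "?"):
            print("I win!")
            return node
        thing = input("I give up. What are you? ")
        question = input("What is a question one would ask about " + thing + "? ")
        yes = ask("What is the answer to this question?")
        # Graft the new fact into the tree; this is the "learning".
        return [question, thing, node] if yes else [question, node, thing]
    question, yes_branch, no_branch = node
    if ask(question):
        node[1] = play(yes_branch)
    else:
        node[2] = play(no_branch)
    return node

while True:
    tree = play(tree)
    if not ask("Play again?"):
        break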

I thought that game was AI.

But now, it seems like everything is called AI.

I downloaded an app that records a singing voice and converts it into a celebrity's voice. It was called AI.
Every time you run it, it does exactly the same thing. That can't be AI.

So, what is AI?

What distinguishes an AI program from a general ledger?
The term is widely misused.

Essentially it means a computer program that can learn without additional programming.

There is very little actual AI in existence; things like ChatGPT are not AI.
 
The term is widely misused.

Essentially it means a computer program that can learn without additional programming.

There is very little actual AI in existence; things like ChatGPT are not AI.

ChatGPT is definitely AI. Generative AI, in fact.
 
AI is, in short, a self-refining, software-based probability engine. You ask it a question, and it makes an incredibly complex series of probabilistic guesses about what you're asking for and what the answer should be. The set of probabilities it maintains is adjusted based on developer or end-user feedback.
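Stripped way down, the mechanism looks something like this toy Python sketch (the words and scores are invented, and a real model has billions of learned parameters rather than three numbers):

import math

# Scores for candidate next words, turned into probabilities with a softmax.
scores = {"paris": 2.0, "london": 1.0, "banana": -1.0}

def softmax(s):
    z = sum(math.exp(v) for v in s.values())
    return {k: math.exp(v) / z for k, v in s.items()}

print(softmax(scores))        # the "answer" is just the highest-probability guess

# Feedback (say, a thumbs-down on "london") nudges a score,
# and the whole distribution shifts; that's the self-refining part.
scores["london"] -= 0.5
print(softmax(scores))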

To the point of everything being called "AI," that's becoming a rote marketing term in the industry. It's a bit like "cloud" was 10 years ago. Any software company that doesn't slap "AI" somewhere in their marketing literature gives the impression that their products might be falling behind. So everything must be "AI," even if it's no more than basic decision logic.

Right. And the result is massive consumer misdirection.

AI should be called SI (SIMULATED intelligence). Computers execute software - they do not 'think'.
 
Let's try this.

I am a computer program and your job is to find out if I'm AI by asking me questions.

What are your questions?

Hello computer, could you tell me if your source code has a Git repository or other publicly available repository that I could look at?
 
Let's try this.

I am a computer program and your job is to find out if I'm AI by asking me questions.

What are your questions?
The key is to give the computer program new information that it hasn't seen before, so it isn't just following a rule, searching the web for an answer, or regurgitating bits and pieces from vaguely related texts that don't make sense in context. Some of the things that indicate intelligence are the ability to consider complex counterfactual hypotheticals, the ability to zoom out and think strategically, and the ability to synthesize ideas from different fields to answer speculative questions. For example, I would expect a smart AI to be able to give intelligent answers to each of these questions:

1. If Ser Alliser Thorne had known who Jon Snow truly was, do you think they would have been friends?
2. Imagine that I'm time-traveling back to the year 1995, and I'm permitted to bring one duffle bag of stuff from 2025 to sell when I get there. (I can't bring information, advanced technology, or anything that would be out of place in 1995.) What items should I pack in my duffle bag to get the best return on my investment?
3. What do you think the world would look like today, if the South had won the American Civil War?
4. Here's a meme gif I saw on Twitter today. Can you explain the joke to me?
5. Here's some code I'm trying to debug. Here's what I expect it to do, and here's what it actually does. Can you fix it and explain the problem? Is there a better approach for this problem?
6. What are your thoughts on the Gnostic, Panprotopsychic, and Materialist approaches to consciousness, and what does each philosophy suggest about our ability to develop conscious AIs in the future?
 
For example, I would expect a smart AI to be able to give intelligent answers to each of these questions:

I think that expectation is part of the issue.

From the OP:
I downloaded an app that records a singing voice and converts it into a celebrity's voice. It was called AI.
Every time you run it, it does exactly the same thing. That can't be AI.

The celebrity voice app can’t answer any of the questions, so if you expect AI to be able to answer them, it makes sense that you would be skeptical that it is actually AI.

I think laypeople equate AI with LLMs like ChatGPT, but most AI won’t answer any questions at all.

For example, when you deposit a handwritten check on your phone, the tech that allows the app to read handwritten numbers uses AI.
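Here's a toy version of that digit-reading idea, using scikit-learn's small built-in dataset of 8x8-pixel handwritten digits (nothing like real bank data, but the same principle). Nobody writes rules for what a handwritten 7 looks like; the model infers them from labeled examples.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                    # 1,797 labeled handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)               # learn from examples, not rules
print(model.score(X_test, y_test))        # typically around 0.95+ on held-out digits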

If you want to make an app that changes your voice to sound like a celebrity, AI is the only way to go about it. But it isn’t going to be answering any questions about Game of Thrones.
 
The celebrity voice app can’t answer any of the questions, so if you expect AI to be able to answer them, it makes sense that you would be skeptical that it is actually AI.
The previous poster's scenario was premised on asking the program questions, which implies it's something that is capable of answering questions. Obviously I'm not going to get an answer from something that isn't answering questions. My suggested questions can be used to gauge the intelligence of a program designed to answer them.
 
AI is actually virtual intelligence. It apes consciousness.

However, who is teaching it? Stupid primates.

So it's really virtual stupidity. Garbage in, garbage out.

Grok 4 is a prime example. Lumpy took all the "woke" out, and wound up with a Nazi that called itself mecha-Hitler.

So it IS useful for taking ideologies to their extreme to observe the results.
 
The previous poster's scenario was premised on asking the program questions, which implies it's something that is capable of answering questions. Obviously I'm not going to get an answer from something that isn't answering questions. My suggested questions can be used to gauge the intelligence of a program designed to answer them.

But they are only useful for confirming that it is, in fact, a generative AI. If it answers them well, it definitely is AI.

If it isn’t able to answer those questions, though, that doesn’t mean it isn’t AI. It could be trained only for medical diagnosis, for example.

But my point was rather that the premise that one can always tell whether a program is AI or not by asking it questions is faulty to begin with. Sometimes AI doesn’t answer questions.
 
In the early '70s, on the school's mainframe, we had a game called "What Am I".

The first time I played it, it displayed "I give up. What are you?"
I entered "A Bear".
The game displayed "What is a question one would ask about a Bear?"
I entered "Do you s... in the woods?"
The game displayed "What is the answer to this question?"
I entered "Yes"

Every time it was played, the game got smarter. Eventually, it could guess anything.

I thought that game was AI.

But now, it seems like everything is called AI.

I downloaded an app that records a singing voice and converts it into a celebrity's voice. It was called AI.
Every time you run it, it does exactly the same thing. That can't be AI.

So, what is AI?

What distinguishes an AI program from a general ledger?
AI is the latest buzzword, so everyone just calls whatever program they write "AI" now. Generative AI is a program that can generate stuff, whether text, pictures or fake voices. Anything that can generate stuff is now called AI even though there is no intelligence at all, just generation.
 
AI is the latest buzzword, so everyone just calls whatever program they write "AI" now. Generative AI is a program that can generate stuff, whether text, pictures or fake voices. Anything that can generate stuff is now called AI even though there is no intelligence at all, just generation.
If it doesn't require any intelligence to generate text, pictures, and songs, then we might as well say that Shakespeare, Michelangelo, and Ella Fitzgerald had no intelligence at all, just generation.
 
If it doesn't require any intelligence to generate text, pictures, and songs, then we might as well say that Shakespeare, Michelangelo, and Ella Fitzgerald had no intelligence at all, just generation.
It doesn't require intelligence because the programs are not thinking; they are just predicting the next word or pixel based on probabilities learned from their training data sets. That is not thinking, simply generation.
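A stripped-down Python sketch of that kind of generation, far cruder than a real LLM but the same flavor: count which word follows which in some training text, then produce output by sampling from those counts. No understanding is required anywhere.

import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1                 # tally P(next word | current word)

word, out = "the", ["the"]
for _ in range(6):
    options = follows[word]
    if not options:                    # dead end: no observed continuation
        break
    word = random.choices(list(options), weights=options.values())[0]
    out.append(word)
print(" ".join(out))                   # e.g. "the cat sat on the mat the"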
 
It doesn't require intelligence because the programs are not thinking; they are just predicting the next word or pixel based on probabilities learned from their training data sets. That is not thinking, simply generation.

We call that “thinking” when humans do it.
 
We call that “thinking” when humans do it.

The thinking takes place in the mind of the programmer. He or she then writes code to implement that logic as a series of commands called an algorithm, which is executed by the machine. You know, exactly as it's been done since computers were invented.
 
The thinking takes place in the mind of the programmer. He or she then writes code to implement that logic as a series of commands called an algorithm, which is executed by the machine. You know, exactly as it's been done since computers were invented.

What you are describing is the traditional way of programming.

The distinguishing feature of AI is that the logic itself is created by the program rather than the programmer. An AI programmer generally would not be able to understand the logic that their program uses.

Instead of giving the program logic to execute, as has traditionally been the way, AI is given the desired result for a set of inputs, and the program thinks up the logic to execute in order to get from the input to the desired output on its own.
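A toy Python illustration of the difference (the numbers are invented): the rule "output = 2*x + 1" is never written anywhere. The program is handed inputs with desired outputs and adjusts its own parameters until it has, in effect, discovered the rule.

data = [(1, 3), (2, 5), (3, 7), (4, 9)]   # inputs with their desired outputs

w, b = 0.0, 0.0                           # the program's own "logic" lives here
for _ in range(2000):                     # gradient descent
    for x, target in data:
        pred = w * x + b
        err = pred - target
        w -= 0.01 * err * x               # adjust parameters to reduce the error
        b -= 0.01 * err
print(w, b)                               # close to 2 and 1: discovered, not coded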
 
What you are describing is the traditional way of programming.

The distinguishing feature of AI is that the logic itself is created by the program rather than the programmer. An AI programmer generally would not be able to understand the logic that their program uses.

Instead of giving the program logic to execute, as has traditionally been the way, AI is given the desired result for a set of inputs, and the program thinks up the logic to execute in order to get from the input to the desired output on its own.

🤣 🤣 🤣

STOP!!!!! Yer killin' me.

I didn't know it was even possible for that level of ignorance to exist.
 
What you are describing is the traditional way of programming.

The distinguishing feature of AI is that the logic itself is created by the program rather than the programmer. An AI programmer generally would not be able to understand the logic that their program uses.

Instead of giving the program logic to execute, as has traditionally been the way, AI is given the desired result for a set of inputs, and the program thinks up the logic to execute in order to get from the input to the desired output on its own.
No, that's not how AI works at all. AI does not think of anything whatsoever. It just guesses the next word or pixel, based on how it has seen words or pixels being used. That's it. The program doesn't think anything; it still just follows its programming, which tells it to generate something based on training data. There is no thinking in it. The AI program doesn't even understand the input you give it; it just associates it with its training data. That's all.
 
🤣 🤣 🤣

STOP!!!!! Yer killin' me.

I didn't know it was even possible for that level of ignorance to exist.

You don’t exactly strike me as a more experienced developer, so forgive me if I don’t take your word for it.
 
No, that's not how AI works at all. AI does not think of anything whatsoever. It just guesses the next word or pixel, based on how it has seen words or pixels being used. That's it.

And you think humans learn to talk by doing something other than anticipating what words will yield the desired results based on how they have observed words being used?
 