
AI Doesn't Exist

It's not mimicking; it's predicting what is most likely to follow every word and character it writes, based on its training, and using various logic engines to make sure what it's saying makes sense.
I think they did that, though, by teaching it variations of what "usually" comes next. To be actually thinking, it also needs to be thinking, "I can make it sound better than this."

It writes very standard prose with a lot of clichés and stock phrases. Not bad for standard form letters or even advertising material, but if you ask it for a story and dialogue, it's really repetitive and quite amateurish.
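As a rough illustration of the "predicting what is most likely to follow" point above, here is a minimal sketch of the selection step, using a made-up probability table (a real model computes these probabilities from billions of learned parameters; everything here is hypothetical):

```python
# Toy illustration of next-token prediction (probabilities are invented).
# A real LLM scores every token in its vocabulary given the context; here we
# hard-code a tiny table just to show the selection step.
next_token_probs = {
    "the cat sat on the": {"mat": 0.62, "floor": 0.21, "dog": 0.04, "moon": 0.01},
}

def predict_next(context: str) -> str:
    """Return the most likely continuation for a known context."""
    probs = next_token_probs[context]
    return max(probs, key=probs.get)

print(predict_next("the cat sat on the"))  # -> "mat"
```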
 
It's more complicated than that. There are billions of parameters in these models. The quality of its writing is related to its context window size. From the sound of it, you were likely using the free OpenAI GPT, which is GPT-3.5. The paid models, specifically GPT-4o, GPT-4 Plus, Claude, and Gemini Advanced, are all orders of magnitude more capable.
 
That's correct, the premium apps are certainly better than the one I've used. But surely that just means they're better at mimicking, not actually innovating. It just means they can do more of it.

When a writer chooses a word or phrase, they do it because they feel it's the right one to convey their message, not just another option from the repertoire.
 
People are just a set of internally competing physical and emotional drives with a layer of consciousness on top. We will never be more than that.

We don't know how to program consciousness yet, or have it program itself either.

But really only just that.
 
Alan Turing proved it is mathematically possible to write a program to perform any given task, but it is not possible to create an absolutely general-purpose program that can perform any possible task. In that respect, no deterministic AI system can ever become more than its programming.

It's not as if we can prove our own intelligence isn't deterministic, since we can't reproduce it yet, so it's hard to make this distinction.

So, conversely, if we start making AI that more closely resembles whatever it is our prefrontal cortex does, there's nothing saying that originally deterministic systems can't "break free," depending on how they work.

I find the logic involved here to be quite messy from scientists/philosophers given that all our descriptions are of systems we don't fundamentally understand yet.
 
It's more complicated than that. There are billions of parameters in these models. The quality of its writing is related to its context window size. From the sound of it, you were likely using the free OpenAI GPT, which is GPT-3.5. The paid models, specifically GPT-4o, GPT-4 Plus, Claude, and Gemini Advanced, are all orders of magnitude more capable.

The capability is getting mighty close to passing the Turing test (if it hasn't already), which is where an outside observer can't tell the difference between an AI and the real thing.

We might be making some P zombies (philosophical zombies) though that sound and act like minds but aren't a conscious mind. It might even eventually be difficult to tell the difference between a conscious AI and a very elaborate computation.

Human minds, though, are layered one on top of another, where the automatic functions are handled by the "unconscious" (programmed) portions, and it's hard to rule out the idea that consciousness might arise out of this without intention.
 
Why would you dismiss his opinion without substance?


Big Tech's 'fake-it-till-you-make-it' hype is now infecting AI

ChatGPT and its peers do not understand what words mean

The fundamental problem with ChatGPT and other large language models (LLMs) is that they do not understand what words mean. They are very much like a young savant who can recite every word in all six volumes of the History of the Decline and Fall of the Roman Empire without comprehending any of the content. Without such comprehension, LLMs are not going to morph into artificial general intelligence (AGI) — the ability to perform any intellectual task that human beings can do.
ChatGPT was also not the first. Developed at MIT by Joseph Weizenbaum between 1964 and 1967, ELIZA did precisely the same thing as ChatGPT. A LISP version of ELIZA was also developed during the 1970s. Programmers have been developing LLMs for more than 50 years. While the current versions are more sophisticated due to improved technology, they really are not any different from what they were originally.
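For reference, ELIZA's mechanism was keyword matching against hand-written templates, roughly like the sketch below (the rules here are hypothetical; Weizenbaum's original DOCTOR script was larger but worked on the same principle):

```python
import re

# A minimal ELIZA-style responder: keyword patterns mapped to canned templates,
# with the matched fragment substituted back into the reply.
rules = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*)\bmother\b(.*)", "Tell me more about your family."),
]

def reply(text: str) -> str:
    for pattern, template in rules:
        m = re.match(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(reply("I am worried about AI"))  # -> "How long have you been worried about AI?"
```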

The Limits of Computation : Joseph Weizenbaum and the ELIZA Chatbot - Weizenbaum Journal of the Digital Society, Volume 3, Issue 3, 2023 (open access)

It is not really the computer making decisions, but rather the developer who tells the computer what decisions to make, when to make those decisions, and how they should be made. There is no indication of intelligence on the computer's part. All the intelligence for the program originates with the developer(s).
 
Interesting consideration...all three of those examples were posthumously diagnosed with ADHD - they all had neurodivergent brains. Can't prove it entirely, due to the posthumously part, but kind of interesting to throw that variable into the mix. If this is true, it means that rather than being "programmed like everyone else", they simply mastered the art of *appearing* to be programmed like everyone else (neurodivergent masking), which their neurodivergency allowed them to do while simultaneously pushing the boundaries of humanity's understanding of the universe.

Kinda speaks to what you were saying, that AI as you have defined it would require a move to something more complex than binary code... soon we won't be creating computers, we'll be creating brains. That's both exciting and scary, depending on what the last headline you read was... hehe
Everyone is diagnosed with ADHD. ADHD is a scam that the medical profession manufactured as an excuse for why teachers are not able to teach: blame the student, not the teacher, for their incompetence. Their scam ADHD diagnoses also allowed doctors and pharmaceutical companies to profit by diagnosing everyone with ADHD.

Only a dishonest doctor or an employee of a pharmaceutical company could push the ridiculous bullshit scam that Copernicus, Newton, and Einstein had this imaginary ADHD. :rolleyes:

Software developers are no closer to mimicking a human brain than they were 50 years ago. How can you mimic something when you don't have the vaguest clue how it actually works?
 
I've been in the field for 25+ years now. What we have currently are AI systems, not expert systems. Claude, ChatGPT (and others) model algorithmic neuronal propagation, and their connection weights are self-trained using a massive set of training data. These AI models can be self-correcting, and in no way has their output been pre-programmed.

The behaviour is an aggregate of countless neuronal relationships, and is emergent.
I've been in the field since the 1970s, with a BSCS from the University of Minnesota, and we are no closer to developing AI than we were 50 years ago. ChatGPT is merely a more sophisticated version of the 1967 ELIZA program. Calling an algorithm a "neuronal relationship" is merely a word salad that says absolutely nothing and does not change the fact that it is still nothing more than a human-developed algorithm.

As long as the computer follows its programming, exactly as written by the developer(s), then it cannot be construed as AI. Just an Expert System.
 
What the uninformed - such as the media - call Artificial Intelligence is not actually Artificial Intelligence. They just don't know any better.

Artificial Intelligence (AI) is the ability of a computer to go beyond its original programming and solve problems that it was never specifically programmed to accomplish. A computer that follows its programming exactly as written cannot be construed as Artificially Intelligent, no matter how clever its programming may be. Those systems are called "Expert Systems" and they are not AI.

We have Expert Systems capable of making accurate medical diagnoses. We have Expert Systems capable of flying aircraft and spacecraft. We have Expert Systems doing a wide variety of complex tasks, and none of them are AI. Expert Systems are where the programmer(s) write their expertise into a form that the computer can calculate. It is therefore not the expertise of the computer, but the expertise of the programmer(s) involved.
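To make the distinction concrete, here is a toy "Expert System" in the sense described above: the developer's own expertise written out as explicit rules. The rules themselves are invented purely for illustration.

```python
# Toy rule-based "expert system": every rule is the developer's expertise,
# hand-written in advance. The program contributes nothing the developer
# did not explicitly put there. (Rules are made up for illustration only.)
def diagnose(symptoms: set) -> str:
    if {"fever", "cough", "loss of smell"} <= symptoms:
        return "possible viral infection; recommend testing"
    if {"chest pain", "shortness of breath"} <= symptoms:
        return "urgent: refer to emergency care"
    return "no rule matched; refer to a physician"

print(diagnose({"fever", "cough", "loss of smell"}))
```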

A computer can only be considered artificially intelligent when it can exceed its own programming by performing tasks it was never specifically programmed to accomplish. If a program that was designed and developed to diagnose medical issues also provided a solution for cleaning up an oil spill, for example, then that would be an example of true AI.

Unfortunately, what we have are a bunch of ignorant fools in the media calling everything AI, when none of it is. Everything the media calls AI is actually an Expert System and not AI at all.

As someone who has worked in the field since the 1970s, AI is quite impossible to achieve with binary computers. It may be possible with quantum computers, but that is at least another 50 to 100 years into the future before that will become a reality.

Everyone in the tech industry is calling it AI as well. Go ahead and tell the experts they're wrong.

By your logic, humans aren't intelligent either.
 
I think the purpose of this thread was to just complain about the media. For example, your very first sentence:


And with this latest, somewhat summarizing, post you jump right back to attacking the media over this.
Yes, in large part that is what this thread is about. Complaining about how the term "AI" is being misused by those who don't have the vaguest clue what it actually means.

The media didn't come up with "AI." Outlets merely report on something, as described by those who deal with it. And software engineers and the like, right or wrong, have been referring to computer programming as AI for decades. In fact, "artificial intelligence" laboratories were set up at a number of British and U.S. Universities as early as the 1950s.
The media did not come up with the term "AI," but they certainly are incorrectly using the term at every opportunity. That, in large part, is why the masses are so misinformed about what AI truly means.

Here is a contrary example: while most of the world's media outlets referred to it as Covid, FOX News and other right-wing sources referred to it as the "Wuhan Virus" or the "Chinese Virus." Now, that was pure maliciousness, not an "error" of terminology. But did those media outlets come up with that terminology? No. They may have promoted that political game, but these outlets simply used what was provided to them.
They were not being malicious at all. That is how we referred to viruses in the past, by their place of origin. For example, the Hong Kong Flu of 1968, or the Spanish Flu of 1918. Calling it the Wuhan Virus makes logical sense. However, calling it the "Chinese Virus" is too vague and does not narrow down the location of the virus.

So, back to AI, should the media refer to something differently from what the experts call it? Here is a complementary example: the media stopped referring to a certain period of history as the "Dark Ages," because historians, the experts, agreed that the terminology was wrong and misleading. It is the "Middle Ages." Until scientists and software engineers agree to call something in their field differently, the media will continue to refer to things as they do.
The media should at least learn the meaning of the terms they use. Particularly if they are going to use those terms to educate an ignorant public. I don't think that is an unreasonable request.
 
I think you're missing critical information, because this is EXACTLY reversed in AI. It is their ability to recognize patterns independent of expert guidance that makes them different. It is in fact the AI's intelligence recognizing the patterns.

What you say was true 10+ years ago, but no longer. You should study how modern AIs are built and how they work, and revisit your opinion.
How do you think a program can recognize a pattern?

Do you really believe that the computer spontaneously created a means where it can devise a pattern for itself without any input from human developers? How did this miracle come about?

Or is it more likely that a human specifically instructed the computer what a "pattern" actually means, thus giving the computer the ability to identify the "pattern" the human specifically told the computer to watch out for whenever it occurs? Who is actually demonstrating the intelligence in this case, the human developer or the computer program?
 
It's semantics. According to the Turing Test, most current LLMs would qualify as AI. Also, what you call "Expert Systems" is not a description of Large Language Models at all. In fact, you aren't describing any sort of machine learning, which is what we commonly associate with "AI". LLMs, and most current AI research use GPUs, not traditional CPUs.


LLMs are so complex that the researchers building the next generations of them don't really know their full capabilities.
LLMs have been around since the 1960s. They haven't changed in more than 50 years.

Apparently you are unaware that GPU means "Graphics Processing Unit." Why would anyone be using their video card instead of their actual CPU?
 
AI is absolutely capable of providing its own expertise. Look at AlphaGo Zero:

Its programmers didn't need to know anything about Go besides the rules. No knowledge of strategy. No expertise. AlphaGo didn't rely on brute forcing X number of moves forward like Deep Blue either. It learned and taught itself actual strategies. In fact, the best human players are now learning those strategies from AlphaGo. So it has come to actual new insights on its own.
Wikipedia is not a credible source. However, I was able to locate this paper on the subject:

Mastering the game of Go without human knowledge - Nature, Volume 550, Pages 354-359, 2017 (free preprint)

Guess who wrote the program so that it could comprehend the rules of Go? Guess who developed the pattern-recognition algorithms by which the program is able to recognize its opponent's moves?

So far I am seeing a great deal of human intelligence, and nothing to indicate artificial intelligence. The program is functioning precisely as it was intended and developed to function by one or more humans. Crediting the computer for the intelligence of its human developers is demeaning and ignorantly misplaced.
 
Everyone in the tech industry is calling it AI as well. Go ahead and tell the experts they're wrong.

By your logic, humans aren't intelligent either.
Not everyone. Just the ignorant.

Humans are intelligent. Unfortunately, many of them are also uneducated.
 
Guess who wrote the program so that it could comprehend the rules of Go? Guess who developed the pattern-recognition algorithms by which the program is able to recognize its opponent's moves?

So far I am seeing a great deal of human intelligence, and nothing to indicate artificial intelligence. The program is functioning precisely as it was intended and developed to function by one or more humans. Crediting the computer for the intelligence of its human developers is demeaning and ignorantly misplaced.
Who is the expert at Go though? It isn't the programmers. It's AlphaGo. This is not an expert system.

Certainly creating AlphaGo required quite a bit of expertise and intelligence, but it wasn't expertise at Go. It was expertise in designing systems that learn and teach themselves. Saying it "functions precisely as it was intended" manufactures precision out of thin air and ignores actual intent and intelligence of the programmers.
 
The rules of the game had to be programmed into the computer. It did not spontaneously know how to play the game. It had to be instructed on how to play the game, including what patterns to look for. That just means the computer is mimicking the programmer's expertise, and there is no intelligence involved in mimicry.

If you want to test the intelligence of a computer using Go, then you cannot tell it the rules, but you can allow it to observe the game being played. If the program can learn the rules of the game, and develop the ability to identify its own patterns - without being told what a pattern is - then you have true artificial intelligence. It doesn't even have to win or be very good at the game. It is true AI because it was able to perform a task which it was never specifically coded to do.

However, if you program in the rules of the game and tell the computer what patterns to look for and why, then that is the developer's intelligence and not artificial intelligence. I also would expect the computer to win almost every time since it can check every possibility much faster than humanly possible.
 
Everyone is diagnosed with ADHD. ADHD is a scam that the medical profession manufactured as an excuse for why teachers are not able to teach: blame the student, not the teacher, for their incompetence. Their scam ADHD diagnoses also allowed doctors and pharmaceutical companies to profit by diagnosing everyone with ADHD.

Only a dishonest doctor or an employee of a pharmaceutical company could push the ridiculous bullshit scam that Copernicus, Newton, and Einstein had this imaginary ADHD. :rolleyes:

Software developers are no closer to mimicking a human brain than they were 50 years ago. How can you mimic something when you don't have the vaguest clue how it actually works?

🤣🤣... wtf??

Yeah, nevermind, carry on... lol
 
How do you think a program can recognize a pattern?

Do you really believe that the computer spontaneously created a means where it can devise a pattern for itself without any input from human developers? How did this miracle come about?

Or is it more likely that a human specifically instructed the computer what a "pattern" actually means, thus giving the computer the ability to identify the "pattern" the human specifically told the computer to watch out for whenever it occurs? Who is actually demonstrating the intelligence in this case, the human developer or the computer program?
I've written neural nets (simple and exceedingly tiny compared to today's models). In fact, this is exactly how it works: they identify patterns with a reward function. Positive training efforts are rewarded, strengthening the neural pathways that led to the output. The behaviour of the network after training is complete is not pre-programmed. Again, you need to read up on how these modern AIs are created. The behaviours are emergent and the patterns are self-recognized. Things have changed since the '70s.
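For what it's worth, here is the kind of tiny, self-contained example being described: a single artificial neuron that learns the logical OR function purely from feedback on its outputs. Nothing about OR is written into the code, only the update rule (this is a generic textbook perceptron sketch, not any particular poster's code):

```python
import random

# One artificial neuron learning OR from examples. The error signal nudges
# the connection weights; the final behaviour is learned, not pre-programmed.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias, lr = 0.0, 0.1

for _ in range(100):                          # training passes over the data
    for (x1, x2), target in data:
        out = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
        error = target - out                  # feedback ("reward"/penalty) signal
        w[0] += lr * error * x1               # strengthen or weaken each weight
        w[1] += lr * error * x2
        bias += lr * error

for (x1, x2), _ in data:
    print((x1, x2), 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0)
```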
 
The rules of the game had to be programmed into the computer. It did not spontaneously know how to play the game. It had to be instructed on how to play the game, including what patterns to look for. That just means the computer is mimicking the programmer's expertise, and there is no intelligence involved in mimicry.

If you want to test the intelligence of a computer using Go, then you cannot tell it the rules, but you can allow it to observe the game being played. If the program can learn the rules of the game, and develop the ability to identify its own patterns - without being told what a pattern is - then you have true artificial intelligence. It doesn't even have to win or be very good at the game. It is true AI because it was able to perform a task which it was never specifically coded to do.

However, if you program in the rules of the game and tell the computer what patterns to look for and why, then that is the developer's intelligence and not artificial intelligence. I also would expect the computer to win almost every time since it can check every possibility much faster than humanly possible.

They supplied the constraints of the game, yes. They did not supply the patterns to look for. This is where you keep tripping over old conventional knowledge. That's not how modern AIs are built anymore. The AI recognizes the patterns on its own.

Additionally, modern AIs *can* perform tasks they were never specifically coded to do. This is why modern AIs are such a big deal. These LLMs are doing all kinds of things now that they were not specifically designed to do. You have to read up on this stuff, and lots. You are *way* behind.
 
What the uninformed - such as the media - call Artificial Intelligence is not actually Artificial Intelligence. They just don't know any better.
It's Marketing Departments. Any computer that can adapt is suddenly marketed as AI. But that doesn't meet my layman's definition of AI: a computer that has sentient characteristics at its core.

As someone who has worked in the field since the 1970s, AI is quite impossible to achieve with binary computers. It may be possible with quantum computers, but that is at least another 50 to 100 years into the future before that will become a reality.
If quantum computers can ever achieve real AI capabilities I think you are grossly overestimating the time-frame.
 
LLMs have been around since the 1960s. They haven't changed in more than 50 years.

Apparently you are unaware that GPU means "Graphics Processing Unit." Why would anyone be using their video card instead of their actual CPU?
Okay, I am sorry, but you don't know what you are talking about. There have been chatbots since the 1960s. However, the first true LLMs were introduced in 2017.

As to GPUs: they are used for LLMs due to their parallel processing capabilities, and they are also much more power efficient than traditional CPUs for this kind of workload. https://www.marktechpost.com/2024/03/23/cpu-vs-gpu-for-running-llms-locally/
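A small PyTorch-style sketch of what that means in practice (assuming PyTorch is installed; the matrix sizes are arbitrary): the same operation is dispatched to a GPU when one is present, and large matrix multiplications like this dominate LLM inference.

```python
import torch  # assumes PyTorch is available

# Run the same large matrix multiply on a GPU if present, otherwise the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # the kind of operation GPUs parallelize across thousands of cores
print(f"Ran a 4096x4096 matmul on: {device}")
```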
 
The rules of the game had to be programmed into the computer. It did not spontaneously know how to play the game.
Humans don't know how to spontaneously play games either. I'm not sure why you think that should apply.
It had to be instructed on how to play the game, including what patterns to look for. That just makes the computer mimicking the programmer's expertise, there is no intelligence involved in mimicry.
It needed reward functions that were consistent with the rules. I would compare it to needing vision to know when we are about to step off a cliff. An environment was created where it could teach itself through self-play. The programmers did not impart any expertise in the game of Go. Yes, it does require skill and knowledge to create the right environment, but once again, it is the software that learns.
If you want to test the intelligence of a computer using Go, then you cannot tell it the rules, but you can allow it to observe the game being played. If the program can learn the rules of the game, and develop the ability to identify its own patterns - without being told what a pattern is - then you have true artificial intelligence. It doesn't even have to win or be very good at the game. It is true AI because it was able to perform a task which it was never specifically coded to do.
Watching humans play actually seems like something that shows far less intelligence than extrapolating from a simple rule set. Wouldn't it be learning by imitation then? The fact that it was able to discover strategies that humans had never thought of without ever learning from watching humans play, is far more impressive to me.

Once again, the specific task that the AI was coded to do was to learn on its own. You can't claim that it is doing what it was programmed to do and claim that it isn't showing intelligence at the same time.
However, if you program in the rules of the game and tell the computer what patterns to look for and why, then that is the developer's intelligence and not artificial intelligence. I also would expect the computer to win almost every time since it can check every possibility much faster than humanly possible.
As I mentioned before, it doesn't work by checking every possibility. This isn't a brute force strategy. That's why Go was selected as a challenge to begin with, because unlike chess, brute force simply doesn't work.
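As a toy version of the "rules plus self-play plus reward" idea (not AlphaGo itself, and the game is deliberately trivial): the agent below is given only the rules of 5-stone Nim (take 1 or 2 stones; whoever takes the last stone wins) and learns which moves win purely by playing against itself and rewarding the winner's moves.

```python
import random

# Self-play sketch: only the RULES are supplied; the strategy is learned.
values = {}  # (stones_remaining, move) -> estimated value of that move

def choose(stones, explore=0.1):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: values.get((stones, m), 0.0))

for _ in range(20000):                      # games of self-play
    stones, history, player = 5, [], 0
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player                     # the player who took the last stone
    for who, s, m in history:               # reward the winner's moves, punish the loser's
        reward = 1.0 if who == winner else -1.0
        old = values.get((s, m), 0.0)
        values[(s, m)] = old + 0.01 * (reward - old)

# With 5 stones left, the learned policy should take 2 (leaving a losing 3).
print(choose(5, explore=0.0))
```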
 
It's Marketing Departments. Any computer that can adapt is suddenly marketed as AI. But that doesn't meet my layman's definition of AI: a computer that has sentient characteristics at its core.
I agree. Using the term "AI" is more marketable to the uneducated than calling it an "Expert System."

If quantum computers can ever achieve real AI capabilities I think you are grossly overestimating the time-frame.
You might be right. The first quantum computer was built in the 1990s, and only in the last 30 years have they become marketable. IBM's quantum computer is currently being marketed as a 400-qubit machine, which is a significant improvement over the 4- and 8-qubit quantum computers of the 1990s.

I estimate that quantum computers today are about where Sperry-Univac was during the 1950s. I could be off in my estimate, but I'm thinking it might be sometime around 2075 to 2125 before quantum computers reach the same level of availability as we have with binary computers today.
 
Humans don't know how to spontaneously play games either. I'm not sure why you think that should apply.
Because humans have the ability to learn; computers don't. As I pointed out above, a human can have absolutely no understanding of the game and, after careful observation, learn all of the rules without having to be instructed. This denotes intelligence, something computers obviously do not have.

It needed reward functions that were consistent with the rules. I would compare it to needing vision to know when we are about to step off a cliff. An environment was created where it could teach itself through self-play. The programmers did not impart any expertise in the game of Go. Yes, it does require skill and knowledge to create the right environment, but once again, it is the software that learns.

Watching humans play actually seems like something that shows far less intelligence than extrapolating from a simple rule set. Wouldn't it be learning by imitation then? The fact that it was able to discover strategies that humans had never thought of without ever learning from watching humans play, is far more impressive to me.

Once again, the specific task that the AI was coded to do was to learn on its own. You can't claim that it is doing what it was programmed to do and claim that it isn't showing intelligence at the same time.

As I mentioned before, it doesn't work by checking every possibility. This isn't a brute force strategy. That's why Go was selected as a challenge to begin with, because unlike chess, brute force simply doesn't work.
The skill and knowledge you are referring to is not that from the computer, but the developer. The computer has neither skill nor knowledge. You continually demean developers by giving inanimate objects all the credit. It is the developers who are intelligent here, not the computers. The computer is doing nothing more than what it was specifically instructed to do, and that does not indicate intelligence.
 