
What is AI, really?

Everything you do can be traced back to your mom bumping uglies with your dad, so nothing you do is initiated "on your own" either in that sense.

You can't possibly think that's a legitimate analogy, so I'll assume you're taking the Mickey.
 

Why wouldn't I think that?

Your argument appears to be that a machine can't do anything on its own because it didn't create itself. But humans didn't create themselves either, so how precisely do you imagine the analogy isn't legitimate?
 

Nope. I'm saying it's not sentient. It has nothing with which to think, whereas I have at least 2 brain cells.
 

Do you know what those brain cells are called? They are called neurons. You know what else has neurons? Neural networks.

Your neurons, or "brain cells" as you call them, send a signal to other neurons in the neural network that makes up your brain. And that is how ANNs work as well.

Everything you do can be traced back to your own neural network being formed in your mommy's womb, just as everything an AI does can be traced back to its neural network being initially coded by a developer.

But once its neural network has been constructed, the AI can learn how to write a limerick on its own, rather than having the process for writing a limerick coded by the developer. The developer doesn't even have to know what a limerick is.
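
To make that concrete, here is a minimal sketch in Python of a single artificial neuron and one learning step. All of the numbers and names are invented for illustration; this is a toy, not a model of a brain or of ChatGPT:

    import math

    def neuron(inputs, weights, bias):
        # Weighted sum of incoming "signals", like synaptic strengths.
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Sigmoid activation squashes the sum into a firing strength in (0, 1).
        return 1.0 / (1.0 + math.exp(-total))

    def learn_step(inputs, weights, bias, target, lr=0.1):
        # "Learning" means nudging the weights to reduce the error on one
        # example; the behavior is never hand-coded, it emerges from many
        # such adjustments.
        out = neuron(inputs, weights, bias)
        grad = (out - target) * out * (1.0 - out)  # gradient of squared error
        new_weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
        return new_weights, bias - lr * grad

The developer writes the update rule, not the limerick: whatever the network ends up doing lives in the weights that training produces.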
 

My question to ChatGPT: Can it be argued that a computer has a neural network analogous to that of a person?

ChatGPT said:

It can be argued, but the analogy has limits depending on how strictly you define “neural network” and “analogous.” Here’s how the argument can be made—and where it breaks down:

Ways the analogy holds

  1. Structural resemblance (in AI systems):
    • Artificial neural networks (ANNs) are explicitly inspired by biological neural networks.
    • Both consist of nodes (neurons/artificial neurons) connected via pathways (synapses/weights).
    • Information flows through these nodes, with weighted influences and activation thresholds, somewhat like signals across synapses.
  2. Learning and adaptation:
    • ANNs adjust their “weights” based on experience (training), just as synaptic strengths change in human learning (neuroplasticity).
    • Both systems generalize patterns from data, enabling recognition, prediction, and decision-making.
  3. Parallel processing:
    • Biological brains and many modern neural networks process multiple signals at once in a distributed manner, unlike classic von Neumann architectures.


Where the analogy breaks down

  1. Complexity and scale:
    • Human brains have ~86 billion neurons and trillions of synapses; even the largest ANNs are orders of magnitude simpler.
    • Biological neurons have rich internal dynamics, while artificial neurons are usually simple mathematical functions.
  2. Architecture:
    • A computer’s hardware isn’t inherently neural. Most ANNs still run on traditional CPU/GPU architectures that are sequential at a low level.
    • The brain integrates sensing, processing, and motor output in a deeply intertwined, embodied way; most computers are modular and lack that integration.
  3. Functionality and consciousness:
    • Human neural networks are tied to emotion, embodiment, and subjective experience.
    • ANNs, even advanced ones, lack self-awareness and grounding in a lived body or environment.
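
One way to see the "structural resemblance" and "parallel processing" points above at the same time: a layer of artificial neurons is just a grid of connection weights, and pushing signals through it is a batch of independent multiply-and-add operations. A minimal sketch with invented numbers:

    # One layer of an ANN. Each output neuron is independent of the others,
    # so in principle all of them can be computed simultaneously -- which is
    # why GPUs, which multiply many numbers at once, suit this workload.
    weights = [
        [0.2, -0.5, 0.1],   # connection strengths into output neuron 1
        [0.7,  0.3, -0.8],  # connection strengths into output neuron 2
    ]
    signals = [1.0, 0.5, -1.0]  # activations arriving from the previous layer

    sums = [sum(w * s for w, s in zip(row, signals)) for row in weights]
    activations = [max(0.0, s) for s in sums]  # ReLU: a crude firing threshold

Biological neurons do something far richer than one multiply-add per connection, which is the "breaks down" half of the comparison.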
 

Cool. I'm glad that you learned something. Congratulations.

A good follow up question would be whether ChatGPT composed that response on its own based on things it had learned from its training data, or whether that response was actually composed by a developer who then coded ChatGPT to return that response when asked that question.
 

Yes, I learned that "ANNs, even advanced ones, lack self-awareness". Oh wait, that's what I've been saying all along.


I asked. It said:

I composed the response on my own when you asked the question. It wasn’t written ahead of time by a developer or coded as a fixed answer for that specific question.

Here’s how it worked:

  • Your question triggered my model to generate a reply in real time.
  • The reply was based on patterns in my training data (books, articles, and examples about neural networks and analogies between computers and brains).
  • There isn’t a “canned” response for that question in my system. Developers don’t manually write answers to specific queries like that.
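
The distinction ChatGPT is drawing there fits in a dozen lines of Python. A "canned" system maps known questions to stored answers; a generative system stores no answers at all, only statistics learned from text, and assembles a reply on the fly. A toy sketch (the training corpus here is deliberately silly, and everything is drastically simplified):

    import random

    # Canned: a developer wrote every possible answer in advance.
    CANNED = {"what is a limerick": "A five-line poem with an AABBA rhyme scheme."}
    def canned_reply(question):
        return CANNED.get(question.strip("?").lower(), "I don't know.")

    # Generative (toy): learn which word follows which from "training data",
    # then generate by repeatedly picking a plausible next word.
    corpus = ("there once was a man from peru "
              "who dreamed he was eating his shoe").split()
    follows = {}
    for a, b in zip(corpus, corpus[1:]):
        follows.setdefault(a, []).append(b)

    def generative_reply(seed="there", max_words=12):
        words = [seed]
        while len(words) < max_words and words[-1] in follows:
            words.append(random.choice(follows[words[-1]]))
        return " ".join(words)

No sentence produced by generative_reply is stored anywhere in the program; that is the sense in which output is composed rather than retrieved, even though this mechanism is obviously nothing like a trillion-parameter model.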
 
@AConcernedCitizen - more from ChatGPT:

Here’s a breakdown of how I generated that answer step by step:

1. When you asked, “Can it be argued that a computer has a neural network analogous to that of a person?”, I first parsed:
  • Topic: neural networks (both artificial and biological).
  • Action: evaluating whether an analogy can be made.
  • Tone: exploratory, not asking for a yes/no fact but for reasoning.

2. My training contains a wide range of text about:
  • How biological neural networks (brains) work.
  • How artificial neural networks in computers work.
  • Philosophical and technical discussions on whether they are analogous or fundamentally different.
When I saw the phrase “analogous to that of a person,” I weighted sources that discuss similarities and differences.

3. I don’t have a pre-written answer for that question. Instead, my neural network combined patterns learned during training:
  • Structure: start by acknowledging the argument could be made → outline the similarities (nodes/neurons, connections, learning) → contrast with differences (biology vs. silicon, emergent consciousness).
  • Tone: balanced and explanatory, because the question is speculative.

4. My model predicts text one token at a time, guided by probabilities based on the context. It doesn’t “look up” a paragraph; it assembles one dynamically, much like a person formulating an answer from memory and reasoning.

5. OpenAI developers didn’t write or hard-code the reply. They built the architecture and trained it, but the specific answer is unique to our conversation. If you asked the same question twice, you’d likely get a similar type of answer, but the exact wording might vary because it’s generated on the spot.
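
Step 4 is the technically interesting one, and it can be shown in miniature: the model assigns a score to each candidate next token, the scores are converted to probabilities (a softmax), and one token is sampled. Because the pick is probabilistic, two runs of the same question can word their answers differently, which is step 5's point. The candidate words and scores below are invented:

    import math, random

    def softmax(scores):
        # Turn raw scores into probabilities that sum to 1.
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    candidates = ["neural", "digital", "quantum"]  # hypothetical next tokens
    scores = [2.1, 0.3, -1.0]                      # model's raw preferences

    probs = softmax(scores)
    next_token = random.choices(candidates, weights=probs, k=1)[0]

Repeat that loop a few hundred times, feeding each chosen token back in as context, and you have the skeleton of autoregressive text generation.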
 
I composed the response on my own when you asked the question.

And there you have it, straight from the LLM’s mouth. Language-oriented AI composes responses on its own, as opposed to non-AI software, which merely returns responses it was given and coded to return.
 

Not the gotcha you think it is.

We all know that it's programmed to create content. No matter how hard you try to give machines a life of their own, it all comes back to them doing what they're programmed to do. Again - SIMULATED intelligence.
 

It comes back to what they learned to do, just as everything a human does comes back to what they learned to do.

Learning from experience implies some kind of intelligence, artificial or otherwise.
 

My toaster oven assures me you're wrong. I've been reading your responses to it; I told it to flash its red light if it agrees with me, and it did.
 

I doubt your toaster oven is able to learn things. AI programs, on the other hand, do learn things.
 
They're way better at making shit up.

And the problem is that you have to know enough to check their work, and actually put in the effort to do it, or else what you're really doing is rolling dice to see whether it hallucinated the answer.
 

Where the analogy breaks down

  1. Complexity and scale:
    • Human brains have ~86 billion neurons and trillions of synapses; even the largest ANNs are orders of magnitude simpler.
    • Biological neurons have rich internal dynamics, while artificial neurons are usually simple mathematical functions.
I would just like to say that this is somewhat out of date now, as we are fast approaching an era of trillion-parameter open-weight models. (In fact, we're already there.) There is still more complexity in the human brain regarding synaptic dynamics, and I don't think any AI model has a context window (the input field, in lay terms) that matches the bandwidth of the human perception system (visual, auditory, smell, touch...), but it's getting closer each day.
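
For what the back-of-envelope arithmetic is worth (every figure here is a rough, commonly quoted estimate, and a parameter is at best loosely analogous to a synapse):

    human_synapses = 1e14      # often quoted in the 100-1,000 trillion range
    model_parameters = 1e12    # a trillion-parameter model

    ratio = human_synapses / model_parameters
    print(f"roughly {ratio:.0f}x more synapses than parameters")  # ~100x

On that loose mapping the gap is now about two orders of magnitude, not the many orders it was a few years ago.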
 

Yes, our ability to simulate intelligence will only grow. Machines will never be sentient in the way we are, but it won't take long for us to not be able to tell the difference.
 
They're way better at making shit up.

And the problem is that you have to know enough to check their work, and actually put in the effort to do it, or else what you're really doing is rolling dice to see whether it hallucinated the answer.
The counterpoint, of course, is that human beings are in fact way better at making shit up, hallucinating answers and then failing to cross-check their work, than AI models are.

Don't take my word for it, just ask any of the tens of millions of Americans who believe that the Haitians of Springfield, Ohio are eating the cats, simply because some loser told them that it's happening.
 
AI doesn't make those people better, it makes them worse. They'll ask the AI to prove that Haitians are eating the cats, and it will hand them slop that reinforces their belief. And unscrupulous people will use it to intentionally generate and spread more of that.
 
Humans don't need AI to prove that Haitians are eating cats. They are able to hallucinate that belief all by themselves--and more frequently than an AI hallucinates, to boot! At least an AI model can be fed inputs to cross-check its prior outputs. Go ahead, just try walking up to your local MAGA and asking them to cross-check their beliefs. See what happens.
 
Your interpretation of my post is stupid and I don't know what else I can say to you. At no point did I suggest AI was the only way people will get fed bullshit.
 
Your interpretation of my post is stupid and I don't know what else I can say to you. At no point did I suggest that you were suggesting that AI was the only way people will get fed bullshit.
 
Then you used the word "need" wrong.
 