
Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study

Yeah, but I knew the Turing test was probably not going to be a good test even the first time I heard of it. I just couldn't quite articulate why, though, until I heard about John Searle's "Chinese room" argument against it. Are you familiar with it?


Yeah, I've heard of it. And yes, the Turing test isn't a correct determiner of consciousness. However, we are at that point now, which I don't think we'd gotten to before.

I also think that certain responses from the Chinese room would give you more insight into a developing intellect. In essence, I think that the programming of the room's decoding books would become more and more difficult the more of the human experience they would have to model.
 
I don't know if we would need to fully understand biological consciousness first; simply as an accessibility issue, brains are kind of hard to study on the micro level.

I also don't think I would necessarily use chips and wiring.

On the other hand, if we can digitally model something that works like the connections in the brain, it would give us a modeling medium that we can manipulate and understand, rather than trying to pick apart what is going on in between all the microscopic connections between neurons and the other biological components in the brain. We could even try to brute-force it by modeling it into existence, building parts or the whole thing inside a model world.

I am certainly not saying that understanding the brain wouldn't dramatically help this process, as it certainly would, or that an analog component outside of a computer isn't necessary in the end to run such a program on, as I don't know that.



I didn't mean to say there was any consciousness yet. We are building something that can probably pass a Turing test now, though. It was done at least partially by accident.
Even if we do succeed in creating a conscious/sentient machine, how would we know we have done it (especially if we don't really know how it works in ourselves)? Do we ask it? Even if it says "yes", or even insists on it, how do we know that's not just a product of its programming?

As solipsistic philosophers have sometimes pointed out, we can't even be 100% sure there are any conscious brains in other humans (or any other organism) outside our own. It's just an assumption we make. How could we know that about machines?
 
Even if we do succeed in creating a conscious/sentient machine, how would we know we have done it (especially if we don't really know how it works in ourselves)? Do we ask it? Even if it says "yes", or even insists on it, how do we know that's not just a product of its programming?

The same way we figure it out for people, I suppose. You ask it things about itself and its experience and let it interact with the world. If it seems to have a conscious self-experience and makes insightful judgments about the world that you couldn't have programmed into it, then I think it's likely to be conscious.



As solipsistic philosophers have sometimes pointed out, we can't even be 100% sure there are any conscious brains in humans (or any other organism) outside our own. How could we know that about machines?

I find the discussion about p-zombies with philosophers a bit more than I can handle, so I don't bother.

I don't think absolute surety is relevant to how our minds operate in the first place, so I file such claims under the umbrella of "hyper-rationalist nonsense".
 
Why do we need AI?
Google's DeepMind recently mapped all 200 million known proteins in a fairly short period of time.

That's a job that would have taken about a billion PhD years the old way.

Why does this matter to us? DeepMind's results are already being used to develop vaccines and new disease treatments based on our greater understanding of protein behavior.
 
Google's DeepMind recently mapped all 200 million known proteins in a fairly short period of time.

That's a job that would have taken about a billion PhD years the old way.

Why does this matter to us? DeepMind's results are already being used to develop vaccines and new disease treatments based on our greater understanding of protein behavior.
You're right... So maybe the focus of AI should be saving the humans who occupy this planet by perfecting vaccines that are 99.9% effective using this advanced knowledge. SOOOOOO.....

WHY on earth would anyone want to stop the power that would save us and our beloved offspring?

Ya know, it's been widely believed by many that aliens have seeded us with bursts of intellect, like Einstein.

Do we really want them to do a drive-by on us?
 
When AI produces an answer we don't like, then suddenly it's considered 'going rogue'. AI doesn't have our biases/prejudices, so it produces objective answers that make us feel uncomfortable and threatened. There's a big difference between the world we want to be in and the world we really are in.
 
When AI produces an answer we don't like, then suddenly it's considered 'going rogue'. AI doesn't have our biases/prejudices, so it produces objective answers that make us feel uncomfortable and threatened. There's a big difference between the world we want to be in and the world we really are in.

The article says the AI was being deceptive, or overtly malicious, rather than giving us answers we didn't like.
 