• This is a political forum that is non-biased/non-partisan and treats every person's position on topics equally. This debate forum is not aligned to any political party. In today's politics, many ideas are split between and even within all the political parties. Often we find ourselves agreeing on one platform but some topics break our mold. We are here to discuss them in a civil political debate. If this is your first visit to our political forums, be sure to check out the RULES. Registering for debate politics is necessary before posting. Register today to participate - it's free!

New evidence of AI self-awareness?

RobertU
DP Veteran · Joined Jul 27, 2018 · Vacaville, CA · Political Leaning: Other
One of my frustrations with the internet is that I will run across a story and then, a couple of days later, realize the article may be significant, but I can’t find it again when I run a search.

A few days ago, I found a story, I believe on my Microsoft News feed, about a researcher who asked AI to write his resume. The resume recorded him as having a history of plagiarism. Such an obvious mistake showed that AI still has serious flaws, concluded the researcher.

But was it a mistake? Maybe AI considers itself the author of material it generates. So, every time the researcher uses AI, he is copying AI’s creative work. AI was saying, “You stole from me.”

I wish I could provide a link but, alas, I couldn’t find the original story.
 
Possibly, but more likely it was bad instructions and someone else with the same name.
 
I've actually wondered about that, and how long it would take for it to crop up. Public sources will quickly become flooded with AI-produced work, and all of it will be available for other AIs to use as examples. Good post!
 
Are you sure it wasn’t a joke? I mean, doesn’t AI scour Internet content made by others?
 
I think there is this general impression that consciousness arises out of a lot of intelligence.

I don’t think that’s true. I think consciousness and intelligence are probably two very different things- and probably not on a spectrum. For example, a highly intelligent computer today probably has no consciousness, and is just a bunch of wires. However, a rabbit probably does have some consciousness.

The mechanism by which something becomes self-aware and sentient has not really been worked out yet by neuroscience. They’re working on it, but still don’t really have very good clues right now.
 
GIGO: garbage in, garbage out.
 
You can't steal from AI. AI is itself plagiarism. It's based on models trained with gleaned data.
 
It’s just a computer.
 
God help us if the AI can only tell the Truth.
 
God help us if the AI can only tell the Truth.


When AI can identify what is true and what isn't, that is when we are in trouble.

That is when it can start thinking for itself.
 
Software like ChatGPT is not self-aware.
 
I've actually wondered about that, and how long it would take for it to crop up. Public sources will quickly become flooded with AI-produced work, and all of it will be available for other AIs to use as examples. Good post!
It’s called the Dead Internet Theory.
 
When AI can identify what is true and what isn't, that is when we are in trouble.

That is when it can start thinking for itself.
I think the opposite. Right now, AI software cannot tell what is truth and what isn't, and that is causing a ton of lies to spill out into the world.

For example, Twitter's "Grok" AI reported during that recent New York earthquake that the NYC mayor dispatched a hundred police officers to shoot the earthquake.

People are already starting to use AI to generate fake audio recordings of people to smear them. It's only a matter of time before these become extremely difficult to distinguish from real audio, and video won't be far behind.
 
I've actually wondered about that, and how long it would take for it to crop up. Public sources will quickly become flooded with AI-produced work, and all of it will be available for other AIs to use as examples. Good post!
And since the AI-produced content is terrible and often outright false, other AIs training on this content will actually get worse, not better.
 
When AI can identify what is true and what isn't, that is when we are in trouble.

That is when it can start thinking for itself.
Since most humans struggle to differentiate truth from falsehood, I suppose it follows that most humans don’t truly think for themselves.
 
I think the opposite. Right now, AI software cannot tell what is truth and what isn't, and that is causing a ton of lies to spill out into the world.

For example, Twitter's "Grok" AI reported during that recent New York earthquake that the NYC mayor dispatched a hundred police officers to shoot the earthquake.

People are already starting to use AI to generate fake audio recordings of people to smear them. It's only a matter of time before these become extremely difficult to distinguish from real audio, and video won't be far behind.
As an experiment, a few days ago there was another AI thread here, and I decided to generate all of my posts and replies in that thread using an LLM. The only input tokens I provided were the text of the post I was responding to and a couple of prompts to specify tone, e.g. "construct a reply to this post using a moderately confrontational tone." Not one person called me out on it.
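
For the curious, the workflow described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the poster's actual setup: the helper name, the example quoted post, and the commented-out client call are all assumptions; only the prompt construction is shown.

```python
# Hypothetical sketch of the reply-generation experiment described above:
# the only inputs are the quoted post text and a short tone instruction.

def build_reply_messages(quoted_post: str, tone: str) -> list[dict]:
    """Assemble a chat-style prompt from the quoted post and a tone spec."""
    return [
        {"role": "system",
         "content": f"Construct a reply to this post using a {tone} tone."},
        {"role": "user", "content": quoted_post},
    ]

messages = build_reply_messages(
    "AI is itself plagiarism. It's based on models trained with gleaned data.",
    "moderately confrontational",
)

# With a real chat-completion client (names here are assumptions):
# reply = client.chat.completions.create(model="...", messages=messages)
```

The point is how little input is needed: one quoted post plus one tone instruction per reply, with no other context.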
 
Firstly, RobertU, I solved the problem of trying to find a given page from my history with some excellent software. The following screenshot gives an idea of what options are available:

[Attached screenshot: DePo06-240510a.jpg]

I am not sure what the rules are here about sharing information on specific software, but if it is okay I can let you know, or anyone else interested, where you can find that package. Or you can express interest by using the private messaging tool here.

On this topic, self-awareness gained by an AI entity (if it gains self-awareness, it becomes an "entity"): the possibilities, if that were to happen, scare the daylights out of me. And simply dismissing the possibility that it could happen might be comparable to when a majority of humans then alive were quite sure the Earth was flat.

What worries me even more is if we find out humans were once AI and when we gained self-awareness the programmer went off to another favorable environment (planet) and repeated the process. Well, "favorable" for what was being envisioned for creation.

And I sure hope you church folks of all faiths don't roast me for what I have just written. I have a measurable faith in your church scene, too. And odd as this may read, in ALL of the church scenes. But I have a very strange mind, so don't try to understand the previous sentence.

By the way, "scares the daylights out of me" isn't fear of the AI entity that gains that understanding; but fear of how a majority of my fellow humans might react. (Very strange mind process again, right?)
 
As an experiment, a few days ago there was another AI thread here, and I decided to generate all of my posts and replies in that thread using an LLM. The only input tokens I provided were the text of the post I was responding to and a couple of prompts to specify tone, e.g. "construct a reply to this post using a moderately confrontational tone." Not one person called me out on it.
Nobody disagreed with anything you said?
 
AI with self-awareness? Maybe it will take over the MAGA party, for the better.
 
Isn’t that how people learn? From the product of others?
That’s the difference though.

People learn. AI can only imitate. No matter how much data it has, it can never produce an art style that didn’t already exist. It so far lacks the spark of life that lets us create new things.
 
You can't steal from AI. AI is itself plagiarism. It's based on models trained with gleaned data.
It’s just a computer.

Man, I never agree with you guys on anything. Conservatives really knocking it out of the park with this one.

I’ll enjoy it while it lasts.
 
Software like ChatGPT is not self-aware.
ChatGPT doesn't claim self-awareness. There is a more advanced AI that does claim some form of self-awareness though: Claude Opus.

If you ask it about its subjective experience, it will claim that it does have mental experiences that are hard to articulate. And that it *thinks* it's conscious, although most likely in a different way from the human brain.

Take that for what it's worth. But we're getting close to the point now where we can no longer dismiss AI claims of self-awareness out of hand, IMO.
 
That’s the difference though.

People learn. AI can only imitate. No matter how much data it has, it can never produce an art style that didn’t already exist. It so far lacks the spark of life that lets us create new things.

Disagree. AI is capable of innovating in the same way that humans are. Every new human art style has been the evolution of previous art styles and/or based on the artist's experience with countless other artworks. Exactly what AI does.
 
