
New evidence of AI self-awareness?

You can't steal from AI. AI is itself plagiarism. It's based on models trained with gleaned data.
Every thought that every human has ever had has been trained with data...based on their education, upbringing, life experiences, biological instincts, things they've read/watched/heard, etc.
 
Every thought that every human has ever had has been trained with data...based on their education, upbringing, life experiences, biological instincts, things they've read/watched/heard, etc.

I'm not so sure about this, and the reason is that I'm thinking of a human in a disaster situation they never dreamed would happen and never had any training to handle.

Frankly, I was first agreeing in my mind as I read your third post, but then I shifted to that which I wrote about above.

BUT, I'm still "not so sure" and am sort of posting as I give further thought to this. But credit to your last three posts, Gatsby. And thank you for making note of this:

Claude Opus.

And that vocabulary usage --- "biological instincts" --- is going to take me some time to figure out.
 
Disagree. AI is capable of innovating in the same way that humans are. Every new human art style has been the evolution of previous art styles and/or based on the artist's experience with countless other artworks. Exactly what AI does.
AI is literally incapable of creating an art style it hasn't seen before. It can't evolve new art styles based on what it has seen.

It's like that saying: sufficiently advanced technology is indistinguishable from magic. A few articles talking about how LLMs mimic the connections in human brains drastically misled people.

No matter how many tokens our current models process, they cannot become intelligent or "think". We would need to come up with a fundamentally new model. Our current ones are basically just optimizing billions of parameters to minimize a prediction error.
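To make that concrete, here's a toy sketch (my own illustration, not anyone's actual training code) of what "optimizing parameters to minimize error" looks like: plain gradient descent on a least-squares problem. An LLM does the same basic thing, just with billions of parameters and a next-token prediction loss instead of fifty weights and squared error.

Code:
import numpy as np

# Toy illustration: fit 50 parameters by repeatedly nudging them downhill
# on a mean-squared-error loss. This is the whole "learning" mechanism,
# scaled down from billions of parameters to fifty.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # inputs
true_w = rng.normal(size=50)
y = X @ true_w                        # targets the model should reproduce
w = np.zeros(50)                      # the model's parameters, initially ignorant
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the error
    w -= 0.1 * grad                          # adjust parameters to reduce it
print(np.mean((X @ w - y) ** 2))      # error ends up near zero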
 
I'm not so sure about this, and the reason is that I'm thinking of a human in a disaster situation they never dreamed would happen and never had any training to handle.

Frankly, I was first agreeing in my mind as I read your third post, but then I shifted to that which I wrote about above.

BUT, I'm still "not so sure" and am sort of posting as I give further thought to this. But credit to your last three posts, Gatsby. And thank you for making note of this:
Nope, you're exactly right. If our current AIs could only train off of caveman art, no matter how long we ran them, they could only ever produce caveman art of varying quality.
 
Man, I never agree with you guys on anything. Conservatives really knocking it out of the park with this one.

I’ll enjoy it while it lasts.
There's a great deal of anthropomorphism in this matter (typically by people who don't understand how computers work). Seventy years ago, most people would've assumed that a computer that could play chess (to say nothing of one that was good at chess) was self-aware. Today we have chess engines that can beat any human player, but no one thinks they are anything other than tools. Soon we'll have LLMs that can write better essays than any human (having spoken to people in the know, I strongly suspect that such things as "writing your own dissertation" will be a thing of the past in twenty years, if not sooner), but they will still just be tools. Computers don't have reason or free will; they can only execute code that's fed into them (even very complicated code that's a black box to its authors); they can't consciously deliberate or decide what goals to pursue.
 
ChatGPT doesn't claim self-awareness. There is a more advanced AI that does claim some form of self-awareness though: Claude Opus.
If someone executed Claude Opus's algorithm manually, using pen and paper, would the pen and paper be self-aware?
 
Computers don't have reason or free will; they can only execute code that's fed into them (even very complicated code that's a black box to its authors); they can't consciously deliberate or decide what goals to pursue.
I have a bachelor's in computer science. The idea that our current form of AI (LLMs) is intelligent, alive, or can think is almost entirely perpetuated by those outside the field.

All they know is they've heard things like "black box" and "simulate connections in a brain" and see the results and just kinda think, "damn...maybe it's like a human! I mean, it learns from looking at things like a human does, right?"

The quickest way to dispel this notion is something like the Nightshade program.

It is a program that applies what is basically a filter to an image that makes it unusable for training AI. This filter is imperceptible to humans and persists even if you do something like take a photo of the image; it has nothing to do with individual tiny pixels. If an AI uses even a couple dozen of these poisoned images in its training data, the model will collapse and start producing completely incorrect images.
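To be clear about the general mechanism (this is emphatically not Nightshade's actual algorithm, which perturbs images with respect to a model's internal feature representations): a change that is tiny element-by-element can still move a model's output a lot, because the model sums thousands of tiny contributions. A crude sketch with a made-up linear "model":

Code:
import numpy as np

# Crude illustration of why imperceptible changes can matter to a model:
# for a simple linear scorer, a per-element nudge of only 0.05, chosen to
# line up against the model's weights, shifts the score by roughly 40.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)                       # the model's weights
x = rng.normal(size=1000)                       # an "image" as a flat vector
eps = 0.05                                      # tiny per-element change
x_poisoned = x - eps * np.sign(w)               # nudge every element against the weights
print(f"score before: {w @ x:+.1f}")
print(f"score after:  {w @ x_poisoned:+.1f}")   # moved by about eps * sum(|w|) ~ 40
print(f"largest single-element change: {np.max(np.abs(x_poisoned - x)):.2f}")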

[Two image attachments.]

There is a problem where, if an AI trains on images made by AI, the model will get worse until it collapses. Even if those images look like they could have been made by a human, the model will still collapse. If it learned like a human, it could take art an AI made and improve or improvise off of it. But it CAN'T. In fact, it is the opposite. That's how alien its way of "learning" is compared to how we learn.
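The collapse effect is easy to reproduce in miniature. Here's a toy sketch (my own illustration, not code from any of the model-collapse papers) where the "model" is just a Gaussian and each generation is fit only to samples produced by the previous generation:

Code:
import numpy as np

# Each generation's "model" is fit only to the previous generation's output.
# With finite samples, the fitted spread is underestimated a little each
# time, so diversity steadily drains away -- the toy analogue of collapse.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                      # generation 0 is fit to "human" data
for gen in range(1, 31):
    samples = rng.normal(mu, sigma, 10)   # images produced by the previous model
    mu, sigma = samples.mean(), samples.std()
    if gen % 5 == 0:
        print(f"generation {gen:2d}: spread = {sigma:.3f}")
# The spread drifts toward zero: the model forgets the tails of the original
# distribution, which is where all the interesting variety lives.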
 
AI is literally incapable of creating an art style it hasn't seen before. It can't evolve new art styles based on what it has seen.
The line between "evolving a new art style" and "putting a creative new spin on existing art styles" is subjective, and AI can most certainly do the latter.

For example, here's what I get when I tell GPT4 / DALL-E the prompt "Draw me a solarpunk art-deco building."

[Attached image: DALL-E's output for the prompt above.]
Is solar-deco a new artistic style? Just a fusion of two existing styles? An abomination that should never become reality? Maybe all of the above. But I don't think you can easily make a distinction and say that it can't evolve new art styles. The next-gen of these tools are going to have working memory as well, so if I like the solar-deco building, it will remember it for me and be able to produce more buildings in a similar style with different variations. At that point it's a new art style in every way that matters.
 
Nope, you're exactly right. If our current AIs could only train off of caveman art, no matter how long we ran them, they could only ever produce caveman art of varying quality.
This is correct in the narrowest sense...if by "current AI" you mean "an AI that's publicly available as of May 2024." But it will likely be false of AIs in May 2025, and almost certainly false of AIs in May 2026.

The question I would ask is this: What would an AI need to have, in order to go from only seeing caveman art to producing the Mona Lisa? Obviously it's possible to do so, since humanity managed to do it. The main thing that's missing right now is memory. An AI that produces artwork can be trained on its own output. So if you're a human and your goal is to get the AI to produce a Renaissance painting after only seeing cave paintings, you could first have it produce millions of cave paintings, and you keep the 1% of them that look the most Renaissance-like. Then train the AI on that dataset, and repeat this process until it's producing Renaissance paintings.

But of course, this AI-human hybrid system would still have knowledge of what a Renaissance painting is and what they should look like, and have an end goal in mind.
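Here's a runnable toy version of that generate-curate-retrain loop (my own illustration; the "model" is just a Gaussian, and the "Renaissance score" is distance to a target value the starting model has essentially never produced). The point is that the curation step supplies the direction; the generator only supplies variation:

Code:
import numpy as np

# Toy version of the generate / curate / retrain loop: the "model" starts
# centered on "caveman art" at 0, and the curator keeps only the 1% of its
# output closest to a "Renaissance" target at 10.
rng = np.random.default_rng(0)
mu, sigma, target = 0.0, 1.0, 10.0
for gen in range(1, 16):
    outputs = rng.normal(mu, sigma, 100_000)                # the model's own artwork
    keep = np.argsort(np.abs(outputs - target))[:1_000]     # curator keeps the best 1%
    curated = outputs[keep]
    mu, sigma = curated.mean(), max(curated.std(), 0.5)     # retrain on the curated set
    if gen % 3 == 0:
        print(f"generation {gen:2d}: average output = {mu:.2f}")
# The average output marches from 0 toward 10. With selection, the same
# self-training loop that collapses on its own instead moves somewhere new.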
 
If someone executed Claude Opus's algorithm manually, using pen and paper, would the pen and paper be self-aware?
One could argue that it would. Unless one believes there's something metaphysical about the human brain, I don't see how one gets to the conclusion that there's anything special about it that makes its algorithms more self-aware than an advanced AI, or an advanced abacus, or an advanced pen-and-paper system.
 
An AI that produces artwork can be trained on its own output.
But...it can't? It can't even train from the output of other AI. It causes the model to devolve and eventually collapse into producing total garbage.
 
But...it can't? It can't even train from the output of other AI. It causes the model to devolve and eventually collapse into producing total garbage.
This is factually untrue. In fact, these techniques are often the backbone of reinforcement learning programs, where agent states, probabilistic inputs, etc. are generated by another AI to provide sufficient training volume.
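For what it's worth, the tamest everyday case of one model training on another model's output is knowledge distillation rather than reinforcement learning, but it illustrates the same point. A minimal sketch (toy data and an untrained "teacher"; nothing here is from a real training pipeline):

Code:
import torch
import torch.nn as nn

# Distillation sketch: the "student" never sees ground-truth labels, only
# targets generated by the "teacher" model -- and it fits them just fine.
torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
student = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.randn(128, 10)
    with torch.no_grad():
        targets = teacher(x)                      # AI-generated training signal
    loss = nn.functional.mse_loss(student(x), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())    # the loss drops steadily: training on model output works here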
 
But...it can't? It can't even train from the output of other AI. It causes the model to devolve and eventually collapse into producing total garbage.
What is it about the human brain that you believe makes it immune to this? Human artists trained off of cave paintings for thousands of years, and eventually produced the Mona Lisa. Training on the constantly-evolving "human artist paintings" dataset didn't cause artistic collapse among future artists and cause them to start producing garbage.

A generative adversarial network works basically the same as human artists. One component draws pictures, and the other component rejects them as being too different from the training set (i.e. pictures that suck). Eventually the system becomes a competent artist, just like a human going from making a mess with fingerpaints to becoming a world-renowned artist by expanding their training dataset and getting feedback.
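For concreteness, here's a minimal sketch of that two-component setup (my own toy, using a 1-D "art" distribution instead of images): one network generates, the other learns to flag output that doesn't look like the training set, and the generator improves by learning to get past it.

Code:
import torch
import torch.nn as nn

# Minimal GAN on a toy 1-D "art" distribution, N(4, 1.5). The generator
# starts out producing junk; the discriminator rejects anything that doesn't
# look like the training data; the generator learns to satisfy it.
torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = torch.randn(64, 1) * 1.5 + 4.0       # samples from the "training set"
    fake = G(torch.randn(64, 8))                # the generator's attempts

    # Discriminator step: learn to tell real from fake
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The generator's output distribution should now sit near the real one.
print(G(torch.randn(2000, 8)).mean().item())    # roughly 4, like the training data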
 
"Draw the Battle of Cannae, in a brand new art style that has not been common throughout human history."
[Attached image: DALL-E's output for the prompt above.]
I guess one can debate whether this is a new "art style," but as someone with no formal expertise in art I don't immediately recognize it as belonging to some existing tradition. I'm seeing some Van Gogh vibes from the sky, some art-deco vibes from the jukebox-looking thing on the left, etc. I'm sure an art historian could probably identify more components.

I think DALL-E understood the prompt and did a reasonably good job inventing a brand new art style. At least as well as a human artist could.
 
Let me take my thoughts on this into an area that worries me to no end. AND I just ran into this on NHK, so it is weird timing, because I was already having thoughts about AI and combat missions and how to post those thoughts here. In fact, my first kind of work with a kind of AI was with the CH-47 and the SAS --- Stability Augmentation System --- that adjusts flight configuration in accordance with wind conditions. Let's see, I think it started being used back in the late '60s to early '70s. I think we had the system in the A-model bird. It was considered a hotshot new system, and it sure worked.

Okay, enough of that remembering. Here I wanted to focus on AI used in combat situations and this NHK story covers that, but I am going to try and see if I can post a link that will show you the full story translated to English.

First, though, the Japanese:


Now I might have the proper arrangement for an English translation:

***NOTE/EDIT -> It works, BUT you have to wait for just a minute or two to load. Well, maybe more minutes than that.

 
One of my frustrations with the internet is that I will run across a story and then, a couple of days later, realize the article may be significant, but I can't find it again when I run a search.

A few days ago, I found a story, I believe on my Microsoft News feed, about a researcher who asked AI to write his resume. The resume recorded him as having a history of plagiarism. Such an obvious mistake showed that AI still has serious flaws, concluded the researcher.

But was it a mistake? Maybe AI considers itself the author of material it generates. So, every time the researcher uses AI, he is copying AI’s creative work. AI was saying, “You stole from me.”

I wish I could provide a link but, alas, I couldn’t find the original story.
The machine isn't what's wrong here; it's the programmer that coded it wrong. My belief is the machine considered itself the writer, so it acknowledged that in the paper, but it could have just as easily done other things if it was instructed to do so.
 
One could argue that it would. Unless one believes there's something metaphysical about the human brain, I don't see how one gets to the conclusion that there's anything special about it that makes its algorithms more self-aware than an advanced AI, or an advanced abacus, or an advanced pen-and-paper system.
Human self-awareness is a fact to be explained, not a claim to be justified. I'm happy to get into a metaphysical discussion if you want, but that isn't necessary to appreciate the fact that people are sentient while pieces of paper aren't (unless you adhere to some form of panpsychism).

Presumably you'd understand the problem with saying "human beings have gone from living in small huts to building skyscrapers over the last few thousand years, therefore construction equipment is (or may become) self-aware". Your reasoning in regard to AI is similar, but harder to recognize because anthropomorphism is easier with computers than e.g. construction equipment.
What is it about the human brain that you believe makes it immune to this? Human artists trained off of cave paintings for thousands of years, and eventually produced the Mona Lisa. Training on the constantly-evolving "human artist paintings" dataset didn't cause artistic collapse among future artists and cause them to start producing garbage.

A generative adversarial network works basically the same as human artists. One component draws pictures, and the other component rejects them as being too different from the training set (i.e. pictures that suck). Eventually the system becomes a competent artist, just like a human going from making a mess with fingerpaints to becoming a world-renowned artist by expanding their training dataset and getting feedback.
The basic problem here is that you're assuming that what the human mind does is just run a sophisticated algorithm. When Da Vinci painted the Mona Lisa, he was not simply replicating patterns he had seen from other creative people; he was expressing his own creativity.

While it's probably true that some of what the brain does is analogous to an AI, there is much more to it. Human beings can also reason - we can, for example, recognize art as boring and derivative, even if it accords perfectly with our "training set".
 
I have a bachelor's in computer science. The idea that our current form of AI (LLMs) is intelligent, alive, or can think is almost entirely perpetuated by those outside the field.

All they know is they've heard things like "black box" and "simulate connections in a brain" and see the results and just kinda think, "damn...maybe it's like a human! I mean, it learns from looking at things like a human does, right?"

The quickest way to dispel this notion is something like the Nightshade program.

It is a program that applies what is basically a filter to an image that makes it unusable for training AI. This filter is imperceptible to humans and persists even if you do something like take a photo of the image; it has nothing to do with individual tiny pixels. If an AI uses even a couple dozen of these poisoned images in its training data, the model will collapse and start producing completely incorrect images.

[Two image attachments.]

There is a problem where, if an AI trains on images made by AI, the model will get worse until it collapses. Even if those images look like they could have been made by a human, the model will still collapse. If it learned like a human, it could take art an AI made and improve or improvise off of it. But it CAN'T. In fact, it is the opposite. That's how alien its way of "learning" is compared to how we learn.
Interesting.

In regard to the last bit, I assume manual human curation could make AI-generated data net positive for training purposes? Like if you had a person selecting only the best X% of AI generated content, for some sufficiently low value of X (and assuming no malicious poisoning)?
 
Interesting.

In regard to the last bit, I assume manual human curation could make AI-generated data net positive for training purposes? Like if you had a person selecting only the best X% of AI generated content, for some sufficiently low value of X (and assuming no malicious poisoning)?
My understanding is no, actually. Any AI image regardless of quality is basically “poison” for an AI model. It can train on a few of them but it picks up on minuscule trends and patterns and will try to replicate them.


The problem is that the AI-generated image kinda is inherently poison. My understanding of the Nightshade program is that it slightly alters the image in a way that almost mimics an AI-generated image, to recreate the effect that happens naturally. Even a couple dozen poisoned images that for all intents and purposes look IDENTICAL to the original human-made ones are enough. So even selecting the best images from a set of AI images wouldn't prevent model collapse.

Right now sites like the Internet Archive are getting slammed with requests from new AI companies who can no longer create models like the original ones did by just mass harvesting data. They need to make sure to only include pre-AI internet images.
 
This is factually untrue. In fact, these techniques are often the backbone of reinforcement learning programs, where agent states, probabilistic inputs, etc. are generated by another AI to provide sufficient training volume.
That’s adversarial machine learning.

That is not the same thing as what they like to call “AI” these days, like LLMs and diffusion models.

An “AI” text to image diffusion model can be reduced to producing incoherent garbage from training on just a couple hundred AI generated images being included in its training data of millions of images.
 
My understanding is no, actually. Any AI image regardless of quality is basically “poison” for an AI model. It can train on a few of them but it picks up on minuscule trends and patterns and will try to replicate them.


The problem is that the AI-generated image kinda is inherently poison. My understanding of the Nightshade program is that it slightly alters the image in a way that almost mimics an AI-generated image, to recreate the effect that happens naturally. Even a couple dozen poisoned images that for all intents and purposes look IDENTICAL to the original human-made ones are enough. So even selecting the best images from a set of AI images wouldn't prevent model collapse.

Right now sites like the Internet Archive are getting slammed with requests from new AI companies who can no longer create models like the original ones did by just mass harvesting data. They need to make sure to only include pre-AI internet images.
According to the Scientific American article, it took several iterations of AI (depending on the type) training on its own data for it to become complete garbage (meaning that second generation AI was still mostly okay).

Having skimmed Nightshade's paper, I gather that it relies on intentional poisoning toward a specific target (dog->cat, car->cow, etc.). With general iterative poisoning, the issue seems to be homogenization (the AI loses less common patterns), not false patterns.
 
