I have a bachelor's in computer science. The idea that our current form of AI (LLMs) is intelligent, alive, or can think is almost entirely perpetuated by people outside the field.
All they know is that they've heard phrases like "black box" and "simulates connections in a brain," seen the results, and concluded, "damn... maybe it's like a human! I mean, it learns by looking at things like a human does, right?"
The quickest way to dispel this notion is something like the Nightshade program.
It is a program that applies what is essentially a filter to an image that makes it unusable for training AI. The perturbation is imperceptible to humans and persists even through transformations like taking a photo of the image; it has nothing to do with individual tiny pixels. If a model trains on even a couple dozen of these poisoned images, it can collapse and start producing completely incorrect images.
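To make the idea concrete, here's a toy sketch in Python. This is NOT Nightshade's actual method (that's far more sophisticated); it's the textbook FGSM-style trick on a made-up linear classifier, just to show how a change too small to see can flip what a model "sees" — the model reads geometry in pixel space, not content.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                              # pretend each entry is one pixel
w = rng.normal(size=d)                  # hypothetical classifier: label = sign(x @ w)

x = rng.normal(size=d)                  # a clean "image"
x += (10.0 - x @ w) / (w @ w) * w       # shift it so the clean score is exactly +10

eps = 0.01                              # per-pixel change, ~1% of a typical pixel value
x_adv = x - eps * np.sign(w)            # step against the gradient (which here is w)

print(np.sign(x @ w))                   # clean image: class +1
print(np.sign(x_adv @ w))               # perturbed image: class flips
print(np.abs(x_adv - x).max())          # no single pixel moved more than 0.01
```

The tiny per-pixel changes all push in the same direction the classifier cares about, so they add up to a huge change in the score while staying invisible to a human. That mismatch between what we perceive and what the model computes is exactly what poisoning tools exploit.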
There is also a known problem where, if an AI trains on images made by AI, the model gets worse until it collapses. Even if those images look like they could have been made by a human, the model still degrades. If it learned like a human, it could take art an AI made and improve or improvise on it. But it CAN'T. In fact, the opposite happens. That's how alien its "learning" is compared to how we learn.
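Here's a toy simulation of that collapse (my own illustration, not code from any paper). Each "generation" of the model is trained only on the previous generation's output. The sketch exaggerates one real ingredient: generative models over-sample their most typical outputs, so the tails of the distribution get clipped a little every generation, and the learned distribution shrivels.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                        # generation 0: the real "human-made" data

variances = []
for gen in range(10):
    data = rng.normal(mu, sigma, size=2000)  # sample from the current model
    data = data[np.abs(data - mu) < sigma]   # the model favors its most typical outputs
    mu, sigma = data.mean(), data.std()      # refit the "model" (a plain Gaussian here)
    variances.append(sigma)

print(variances[0], variances[-1])           # spread shrinks generation after generation
```

Ten generations in, almost all of the original diversity is gone: the "model" can only produce near-identical outputs. A human artist studying other art does the opposite — they add variety. A statistical model feeding on itself just converges on its own averages.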