
A speculation about AI images

Craig234

I see a lot of AI images that are remarkable, yet have an obvious flaw, like six fingers or toes. They seem to give people a sense of relief that they can still 'spot' AI, 'it's still flawed'.

I find it unlikely that AI that can generate such images would make such basic errors, and suspect they are intentional. So, what's the motive? Could it be that the industry wants to give people a false sense of security, so they are less worried about AI and less inclined to want it regulated?
 
I see a lot of AI images that are remarkable, yet have an obvious flaw, like six fingers or toes. They seem to give people a sense of relief that they can still 'spot' AI, 'it's still flawed'.

I find it unlikely that AI that can generate such images would make such basic errors, and suspect they are intentional. So, what's the motive?

AI doesn't understand anatomy, it just matches patterns based on training data. There's no conspiracy, it's just growing pains.
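To put a bit of flesh on that, here's a toy sketch (my own stand-in PyTorch-style code, not anything any actual image product runs): a diffusion-style model is trained only to predict noise against a purely numeric loss. Nothing in that objective says "a hand has five fingers"; anatomy only shows up to the extent it's statistically common in the training images.

Code:
import torch
import torch.nn as nn

# Stand-in "image model" - a real system would use a large U-Net or transformer.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images):
    """One simplified step: add noise, ask the model to predict it, minimise plain MSE."""
    noise = torch.randn_like(images)
    noisy = images + noise  # real noise schedulers scale image and noise; omitted here
    loss = nn.functional.mse_loss(model(noisy), noise)  # purely numeric objective, no anatomy rules
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Fake batch of 8 RGB 64x64 "images" just to show the step runs.
print(training_step(torch.randn(8, 3, 64, 64)))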

Could it be that the industry wants to give people a false sense of security, so they are less worried about AI and less inclined to want it regulated?

There is already a massive increase in AI lobbying, and you can bet the ranch none of it will benefit you. Who do you think is going to write these extremely technical regulations, Bernie Sanders? They will be written by lobbyists, to benefit giant corporations.
 
Yeah, 'tis AI being unworldly. See also hands holding an umbrella, and music AI trying to produce an atonal piece.
 
AI will eventually “learn” to do fingers correctly.
 
I see a lot of AI images that are remarkable, yet have an obvious flaw, like six fingers or toes.
Not exactly. AI image generation typically makes loads of different kinds of error, it's just that you only notice the obvious ones (by definition).

Could it be that the industry wants to give people a false sense of security, so they are less worried about AI and less inclined to want it regulated?
No. There's no co-ordination across the industry, and making all AI image generation consistently produce the same kind of obvious errors would be even more difficult than just making it work as intended, especially without any of the many people who would need to be involved leaking that manipulation.
 
Not exactly. AI image generation typically makes loads of different kinds of error, it's just that you only notice the obvious ones (by definition).

That's not correct. The images I'm referring to are mostly high quality - not saying 'perfect' - with one glaring error. They don't also have a nose on their chin, three ears, two vertical eyes, and so on. The point is, it's one defect that isn't very plausible.

No. There's no co-ordination across the industry, and making all AI image generation consistently produce the same kind of obvious errors would be even more difficult than just making it work as intended, especially without any of the many people who would need to be involved leaking that manipulation.

It wouldn't take any co-ordination across the industry. One or a few actors could generate them. It wouldn't be an issue of it being difficult to remove the flaw, it'd be an issue of the flaw being intentional. Not many people are needed, as I said.
 
That's not correct. The images I'm referring to are mostly high quality - not saying 'perfect' - with one glaring error. They don't also have a nose on their chin, three ears, two vertical eyes, and so on. The point is, it's one defect that isn't very plausible.
Again though, just because those are the only errors you notice or can see doesn't mean they're the only errors that exist. AI image generation can and does generate a whole swathe of different visual errors, from the obvious (like malformed bodies or clashing objects) to the subtle (like flawed perspective or mismatched lighting). Our brains have evolved to be very good at spotting specific things, especially variations in other humans, so those are the ones we immediately notice. There can also be a whole range of flaws or errors that aren't visually obvious, such as not correctly following the prompt or producing output that is physically valid but logically flawed (like a garage door smaller than the car inside).

It wouldn't take any co-ordination across the industry. One or a few actors could generate them. It wouldn't be an issue of it being difficult to remove the flaw, it'd be an issue of the flaw being intentional. Not many people are needed, as I said.
What you're describing is true of all AI image generation though, and there are dozens of different tools developed by multiple organisations, including ones that individuals can access and adjust themselves. Achieving what you propose simply wouldn't be feasible, practically or technically.
 
I see a lot of AI images that are remarkable, yet have an obvious flaw, like six fingers or toes. They seem to give people a sense of relief that they can still 'spot' AI, 'it's still flawed'.

I find it unlikely that AI that can generate such images would make such basic errors, and suspect they are intentional. So, what's the motive? Could it be that the industry wants to give people a false sense of security, so they are less worried about AI and less inclined to want it regulated?
There's an ad I see every once in a while; I'll try to pay attention to it next time I see it. The man in it looks very realistic and I don't see any of those flaws with it, but it's AI and I can tell. I can't tell what it is that makes me able to tell; I'm thinking it's just too-perfect skin.

But something that people should keep in mind: if you're one of these people who thinks we should do stuff to save the environment, stop using AI. The energy it consumes is probably greater than that of anything else you do.
 
Again though, just because those are the only errors you notice or can see doesn't mean they're the only errors that exist.

You're just badly missing the point, and repeating the point over and over doesn't seem promising. It's not proven, but it is very suspicious.

Achieving what you propose simply wouldn't be feasible, practically or technically.

It would be trivial.
 
You're just badly missing the point, and repeating the point over and over doesn't seem promising. It's not proven, but it is very suspicious.
I'm not missing the point. You don't find it believable that AI image generation could naturally make these kinds of "obvious" errors, so you're proposing the possibility that those specific errors are somehow artificially inserted into all sorts of different AI images. I am telling you that it is perfectly believable for AI image generation to naturally make those kinds of errors, just as it is for it to make all sorts of other errors. To the AI, it's all the same thing anyway, just a whole load of numbers.
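For what it's worth, here's a trivial illustration of that "just numbers" point (assuming Pillow and NumPy are installed; "photo.png" is a hypothetical file name): a hand in an image is just a grid of pixel values, no different to the model from anything else in the frame.

Code:
import numpy as np
from PIL import Image

# Any image, hands included, is just an array of numbers to the software.
img = np.asarray(Image.open("photo.png").convert("RGB"))
print(img.shape)  # e.g. (1024, 1024, 3): height x width x colour channels
print(img[0, 0])  # a single pixel: three integers in the range 0-255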

It would be trivial.
What evidence or experience are you basing that assertion on exactly? There are all sorts of competition, disagreement and personality conflicts at the top of the nascent AI industry. There doesn't seem to be much they're able to agree on, let alone initiating and keeping secret the kind of grand conspiracy you're speculating about. And that's without even considering how it would be done reliably and consistently at a technical level, especially without people much more deeply involved in its use noticing.
 
AI will eventually “learn” to do fingers correctly.
A lot already have. As I proved in another thread, AI can now do indistinguishable video/voice. Yes, some people might catch a few mistakes. But mostly it's good enough.

[Embedded video, 1:30]


You're telling me 6 fingers is going to tell you that none of that is real?

PS the joke at the end is funny and topical.
 
Meghan Markle is taking heat for this publicity photo for some new product line.


[Attached image: 1000041764.webp]

Look at her foot and her right hand.
 
The quality really isn't even worth talking about, because the rate of improvement is so crazy. Look at the video in post 13. This level of realism would’ve been impossible just a few years ago. Imagine what AI videos will look like five years from now.
 
The quality really isn't even worth talking about, because the rate of improvement is so crazy. Look at the video in post 13. This level of realism would’ve been impossible just a few years ago. Imagine what AI videos will look like five years from now.
Yeah, it's sometimes hard to tell, but if I could pick anything to criticize about that video's AI, it's that it looks too perfect. Skin tones, reflections, stuff like that; that's the only thing that gives it away for me, but I would have a hard time telling whether that was edited video or AI if I didn't know going into it.
 
The quality really isn't even worth talking about, because the rate of improvement is so crazy. Look at the video in post 13. This level of realism would’ve been impossible just a few years ago. Imagine what AI videos will look like five years from now.
I think the image in post number 14, minus a few errors, looks somewhat more realistic. The weird bamboo cane to hide the feet and the weird blade on her hand notwithstanding, the subject matter makes the picture look more realistic.
 
Does anyone figure that AI will some day develop situational awareness?

I was thinking the other day that the story about an apple and that Newton fella was a classic case of situational awareness, because he was asking himself why that apple fell straight down and not off to the side, and such. Many great thoughts that advanced human civilization have come from situational-awareness type discoveries, that "Why did that happen?" question, and then I got to thinking that this is what AI lacks. BUT, could it develop to a point where one day a unit with AI installed suddenly has situational awareness?

If that could be even remotely possible, we could be in for bad trouble.

Is anyone absolutely sure the answer would be 'no'?

Frankly, I guess it is because I am an old sod and such that I have an uneasiness about this AI stuff.

For example, those taxi vehicles that run on AI programming; do they have really good built-in safety devices the passenger can easily, and quickly, get to, if that machine has gone into some weird and bad mode of operation? AND don't they need a certain level of situational awareness programmed into them?

Sorry this post is easing away from image generation, but - - - well, the main theme is AI, yes?

In fact, I recently was taking an interest in some YouTube video about some aircraft accident, and it took me at least 60 to 90 seconds to realize the scene I was watching was AI generated, which seems like a long time. It really was that good. I could probably find that one in my browser history folder.

One more thing, do we have an AI comedy thread around here? There must be some good AI jokes out-and-about. For example, what if I use a deep southern drawl to give an AI driven taxi instructions? You reckon it would understand me? And if it didn't would it have already turned on the meter, but be asking for clarification? That's not really comedy, is it? But might be able to twist it into a routine, if you are good at that comedy stuff.

I wonder if an AI taxi knows when to tell me to shut up?
 