I see a lot of AI images that are remarkable yet have an obvious flaw, like six fingers or toes. These flaws seem to give people a sense of relief that they can still 'spot' AI, that 'it's still flawed'.
I find it unlikely that AI capable of generating such good images would make such basic errors, and I suspect the errors are intentional. So, what's the motive? Could it be that the industry wants to give people a false sense of security, so they worry less about AI and are less inclined to regulate it?