
Could AI art have come about without theft?

What exactly are you talking about here?

If Bioware wanted to steal Disney's Aladdin without getting the appropriate licensing from Disney, they could get a human artist to draw Aladdin just as well as they could get AI to do it. Copyright violation is not specific to any particular medium. Today's creative industries aren't really all that into violating copyright law and stealing proprietary IPs though.

They are going to be more inclined to use AI to generate new art that isn't copyrighted by drawing inspiration and deriving elements from the art in the training data, rather than simply copying entire works.
Couldn't they also just make an "original" story from the same material Disney used? Disney's best movies weren't Disney originals, they were retellings of earlier works.
 
I mean, if you don't think industries are gunning for copyright violations, you haven't seen the mobile gaming industry lol… They're copying each other left and right.
 
I mean, if you don't think industries are gunning for copyright violations, you haven't seen the mobile gaming industry lol…
China doesn't give a shit. They don't have to. They're making bank by not caring.
 
It's not just China. It's an industry-wide problem.
I understand it's not just China, but they're the poster child given the sheer size of the gray/black-market industry there, including the mobile game market supported by microtransactions and ad revenue.
 
I'm still reading through the thread, but here are some early thoughts on the page 1 posts.

I'm part of a community of artists who are pretty livid about AI taking their stuff and automating their jobs away.

To be honest and brutal about this: design, artist and illustrator jobs in the West have been dripping away to the Far East for the last 15 years because of websites like Fiverr, 99designs and the like. It was bad enough in the '80s and '90s, when you took a pay cut to get a job in London in those fields, but competing with an illustrator in Malaysia who will do the same work for a twentieth of my hourly rate just doesn't work for most designers and artists.

Personally, I gave up on ideas last year when AI really burst the bubble, but now I am learning to use AI as a way to quickly produce mock-ups. Not all creative AI is bad. One example shown to me this week, "invideo.ai", can take recordings of MY face and gestures and then help me produce explainer videos that need a minimum number of retakes; if I am especially camera shy, I only need to be in front of a camera once and the AI can output hundreds of videos of me explaining and demonstrating for technical support.

That is AI being useful. In terms of 2D art and animated video, though, it's not there yet - I once watched ex-Disney animator Aaron Blaise hoping that AI could take his thousands of drawings and then accurately colour them up for him and help him work faster. That video is from 2023, and AI isn't quite where he wanted it yet.



The other solution I envisaged was for artists to work with physical texture and touch. AI art generally produces digital output, and I felt that artists needed to move into physical collage and texture to differentiate themselves from "AI art", such as in the examples below:

[Images: examples of mixed-media collage with physical texture, e.g. a fabric-texture mountain landscape collage by Debasree Dey]


However, eventually AI will be hooked up to 3D printers, and artists will have to move on again.

I think for skilled jewellers and the like, the situation is better. You "could" get AI to design you some jewellery and then 3D print it, but jewellery-making is also the skill of creating strength and structure in a piece of metal. A cast piece of jewellery might look good, but drop it once and watch it shatter, whereas a piece that has been heat-treated and drawn will survive mistreatment and accidents - and that is why jewellers are still prized for their work and skill.

How is this different from a painter creating a unique landscape from the many photographs he's seen before?

Not every painter works that way whereas AI always works the same way. The word "landscape" is also open to interpretation and Jackson Pollock certainly chose not to be a slave to physical research in his time.

But why is that process being automated akin to the artists having their work stolen?

The classic story in the UK is that the Rolling Stones and the Beatles became popular by playing African American blues music badly (or at least, nothing like their heroes could), and their interpretation was different enough that white American kids would enjoy and buy it when they would never listen to or buy genuine blues music.
An artist who copies and produces exactly the same work and style as another is called a "forger", and their work can be called forgeries. The most expensive AI art sites can be akin to forgery - a client can ask for a painting as if done by Picasso and get exactly that result. The AI is not going to change or tweak the result to suit its own personal tastes.

So, I could now ask a music AI to create a Howlin' Wolf or Presley track and it would do it pretty well - but it's not going to create any new kind of music by doing that, or draw in a new audience. That's where a human trying to play or create something, and failing in a creative way, can make something new and not seen before.

Here's an AI "new" Elvis Presley track - only 73 views. Now imagine how many if Elvis himself was here to make the same music.

 
All three developed solutions trained exclusively on their own internal libraries that were developed over decades of hard work.

And here's an example of a solution that properly licensed the above sets.

Haha!
I remember when Adobe first made this product announcement, a lot of artists and photographers who had donated work to the stock libraries said they were never going to give Adobe any more of their work because they were now out of a job.


In some cases, despite withdrawing consent, their names would still produce images in their style if you asked Firefly.

"Several artists have observed that commercially available AI-generated imagery appears in Adobe Stock search results
when their name is used as a prompt – even though they didn't create the art. And in some cases, the AI art appears
to at least partially mimic the style of the artist."
So much for "properly licenced..."
 
So much for "properly licenced..."
It's still properly licensed.

Let's say you're an artist, and I'm an artist. Your art has a particular style.

I create some artwork that somewhat mimics your style, and in the metadata describing the artwork, I note among other things that it is "in the style of @Infinite Chaos". I then license that art to Adobe as an input for model training.

Did I break any laws or steal anything from you? On paper, no. Did Adobe? On paper, no. Are you salty about it? Of course. But did Adobe do anything wrong? It's hard to argue that they did.

Now, one element of responsible AI development is the appropriate filtering of training data. In my own case, we have teams that work on all sorts of content filters to identify and block training data containing personal identities, trademarked terms and so forth, because it's the right thing to do - and even in our licensed sets we sometimes see these things. However, it's not an easy task. Take a trademark: what might be a trademark in one country could be a regular spoken term in another. If you're training a multilingual model, you can see how this alone presents a huge problem.
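To give a feel for what that kind of filter looks like, here's a deliberately tiny sketch in Python - the blocklists, captions and matching logic are made-up simplifications of mine, and real filters are far more elaborate, especially once multiple languages and jurisdictions are involved:

import re

# Hypothetical blocklists - real ones run to many thousands of entries
# and have to be maintained per language and per jurisdiction.
TRADEMARK_TERMS = {"acme cola", "examplesoft"}
PERSONAL_NAMES = {"jane doe", "john q. public"}

def build_pattern(terms):
    # Word-boundary match so a term inside a longer word doesn't trigger a false hit.
    return re.compile(r"\b(" + "|".join(re.escape(t) for t in terms) + r")\b")

TRADEMARK_RE = build_pattern(TRADEMARK_TERMS)
NAME_RE = build_pattern(PERSONAL_NAMES)

def allowed_for_training(caption: str) -> bool:
    """Return False if the caption trips either blocklist."""
    text = caption.lower()
    return not (TRADEMARK_RE.search(text) or NAME_RE.search(text))

samples = [
    {"caption": "Sunset over a fjord, oil on canvas"},
    {"caption": "Poster art in the style of Jane Doe"},
    {"caption": "Billboard advertising Acme Cola"},
]
clean = [s for s in samples if allowed_for_training(s["caption"])]
print(len(clean), "of", len(samples), "samples kept")  # -> 1 of 3 samples kept

Even this toy version shows the problem I mean: the same string can be a trademark in one place and an everyday phrase somewhere else, and a blocklist can't know which you meant.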

I absolutely do think some companies are basically stealing any art or other material they can find to train models, but others are definitely working hard to be above board. The challenge is that even the ones trying to do this properly have to deal with some really complicated data problems.
 
I'm still reading through the thread, but here are some early thoughts on the page 1 posts.



Keep going, it is all worth a read, and you're doing a great job expressing your reactions.
 
I think for skilled jewellers and the like, the situation is better. You "could" get AI to design you some jewellery and then 3D print it, but jewellery-making is also the skill of creating strength and structure in a piece of metal. A cast piece of jewellery might look good, but drop it once and watch it shatter, whereas a piece that has been heat-treated and drawn will survive mistreatment and accidents - and that is why jewellers are still prized for their work and skill.
Good post, lots to chew on. One comment on this - this will be a use case for AI as well. I know several people working on applying machine learning to FEA - there are potential orders of magnitude improvements in simulation time. One friend is specifically working on training a model to identify potential drop tests given a design, and another to identify design improvements given a drop test. This needs a few years of cooking (when a single training data point can take weeks of simulation to complete "the old fashioned way" this stuff takes time...) but it's coming. Every supplier of FEA tools is developing a machine learning model to accelerate FEA closer to real-time.
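In spirit it's the surrogate-model idea sketched below. This is purely illustrative - the features, target and data are invented, and real FEA surrogates encode the actual geometry and material model rather than a handful of scalars - but it shows where the speed-up comes from once the expensive simulations have been run:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row summarises one completed drop-test simulation:
# wall thickness (mm), fillet radius (mm), drop height (m), material yield (MPa).
X = rng.uniform([0.5, 0.1, 0.5, 100], [3.0, 2.0, 2.0, 400], size=(500, 4))

# Fake "peak stress" target, standing in for weeks of solver time per row.
y = 800 / X[:, 0] - 50 * X[:, 1] + 120 * X[:, 2] - 0.5 * X[:, 3] + rng.normal(0, 10, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)
print("held-out R^2:", round(surrogate.score(X_test, y_test), 3))

# Once trained, screening a new candidate design is effectively instant,
# which is where the orders-of-magnitude improvement over re-running FEA comes from.
candidate = np.array([[1.2, 0.8, 1.5, 250]])
print("predicted peak stress (MPa):", round(float(surrogate.predict(candidate)[0]), 1))

The hard part, as you say, is generating enough trustworthy training rows in the first place when each one costs weeks of simulation.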
 
Why should artists get to dictate what art an AI looks at and learns from? Copyright law should address what AI can produce, not what it can consume.
I tend to agree. Where this gets ugly is that copyright law does require the creator of the AI model to prove during discovery that what the model produced was not influenced by copyrighted material that it consumed. And, that is very difficult to prove, because obviously there's some influence, however minor.
 
It's still properly licensed.


I would argue that if it was all properly licenced, Adobe wouldn't have been forced to pull the Ansel Adams inspired images and filter they were using until 2024.

Link

As for your example, I would probably seek financial compensation if you were selling work in my style to Adobe and I believe Adobe are now quite wary of the bad publicity around what has previously happened.

I absolutely do think some companies are basically stealing any art or other material they can find to train models

That I can agree with - there will be a rush to try and catch up, and some companies will have bought up all the legal data sets, leaving newcomers less room to develop in.

One comment on this - this will be a use case for AI as well. I know several people working on applying machine learning to FEA - there are potential orders of magnitude improvements in simulation time. One friend is specifically working on training a model to identify potential drop tests given a design, and another to identify design improvements given a drop test. This needs a few years of cooking (when a single training data point can take weeks of simulation to complete "the old fashioned way" this stuff takes time...) but it's coming. Every supplier of FEA tools is developing a machine learning model to accelerate FEA closer to real-time.

I follow F1, and a friend used to make 3D-printed samples for F1 teams (he now sells the technology instead); this kind of technology is best suited to larger, wealthier industries.

That's a generalisation, but I'll give you an example: Japanese handmade fountain pens are incredibly expensive and people still queue up for them. You can easily get a very high-quality Pilot pen for about $700, and it will be impeccably made, highly durable and machined to the highest degree.
However, a Namiki Yukari Pilot pen will set you back a cool $5,000 because of its handmade process, the skill of the artisans and a level of manufacture that goes beyond mere computer-aided perfection. I guess when 3D-printed samurai swords are as valued as hand-made ones, we will be in that arena where the craft of manufacture is not valued as much as the computing power that worked out the strength of the materials and the structure of the crafted object.

What I'm arguing is that there are still markets where art is valued beyond the level of computing power that went into making it - and instead for the human touch and experience that crafted it.
 
I tend to agree. Where this gets ugly is that copyright law does require the creator of the AI model to prove during discovery that what the model produced was not influenced by copyrighted material that it consumed. And, that is very difficult to prove, because obviously there's some influence, however minor.

Why should that be required though? No one tries to make human artists prove that nothing they produce was influenced by anyone else.

It should be that work either violates copyright or it doesn’t. It shouldn’t matter how the work was created. Whether or not a work violates copyright should be determined purely from the end product.
 
Why should that be required though? No one tries to make human artists prove that nothing they produce was influenced by anyone else.
Incorrect, people try to make humans prove in court all the time that what they produced was not overly inspired by another. Heck this happens every week in Hollywood with one screenwriter defending their script from claims that it lifted the story written by somebody else.
It should be that work either violates copyright or it doesn’t. It shouldn’t matter how the work was created. Whether or not a work violates copyright should be determined purely from the end product.
How the work is created is the mechanism through which, in an actual trial, it is demonstrated whether the work violates the copyright or not.
 
I would argue that if it was all properly licenced, Adobe wouldn't have been forced to pull the Ansel Adams inspired images and filter they were using until 2024.

Link

As for your example, I would probably seek financial compensation if you were selling work in my style to Adobe and I believe Adobe are now quite wary of the bad publicity around what has previously happened.
Please read your link more carefully. You will notice that Adobe was not forced to do anything. They chose to remove them in the interest of playing nice, but in reality, the Ansel Adams copyright holders had zero legal standing to force Adobe to do anything.

In my example, your claim would be thrown out of court as a case without standing. You'd feel salty, sure, but you wouldn't get a penny from either Adobe or myself.
That I can agree with - there will be a rush to try and catch up, and some companies will have bought up all the legal data sets, leaving newcomers less room to develop in.
Being involved in the purchase of these sets, I can state unequivocally that no one is striking exclusivity agreements. Owners of legal sets on the contrary want to license their data to as many people as possible.
 
Incorrect, people try to make humans prove in court all the time that what they produced was not overly inspired by another. Heck this happens every week in Hollywood with one screenwriter defending their script from claims that it lifted the story written by somebody else.

How the work is created is the mechanism through which, in an actual trial, it is demonstrated whether the work violates the copyright or not.
No, not "overly inspired by" - derivative, as in using the actual work to create the work. A tracing of a famous picture, colored in and sold, for example, would not be "inspired by" but derivative and a market replacement. And some things are just not copyrightable. Someone can't take a picture of an empty blue sky and copyright it; it just is, for everybody to be inspired by. There certainly are ways to create copyrightable works by taking pictures of the sky, though.
 
Incorrect, people try to make humans prove in court all the time that what they produced was not overly inspired by another.

The word “overly” is doing all the heavy lifting in that sentence.

Do you have any idea how many songs use the I-V-vi-IV progression? If a human artist makes a song using that progression, they don’t have to prove that they never listened to Journey or Elton John.

A dev shouldn't have to prove that her machine hasn't listened to Journey or Elton John either. The machines should absolutely be listening to them and drawing connections about common popular elements like the I-V-vi-IV progression to inform their own creations.
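For anyone who doesn't read Roman-numeral notation, here's a quick toy snippet (my own illustration, not anything from a real tool) spelling out which chords I-V-vi-IV gives you in a few major keys:

# Chromatic scale and major-scale intervals (semitones above the tonic for degrees 1-7).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]

def one_five_six_four(key: str):
    root = NOTES.index(key)
    scale = [NOTES[(root + s) % 12] for s in MAJOR_STEPS]
    # Degrees I, V, vi, IV; the vi chord is minor, hence the "m".
    return [scale[0], scale[4], scale[5] + "m", scale[3]]

for key in ["C", "G", "D"]:
    print(key, "major:", " - ".join(one_five_six_four(key)))
# C major: C - G - Am - F
# G major: G - D - Em - C
# D major: D - A - Bm - G

Nobody owns that pattern - it's the backbone of hundreds of hit songs - which is exactly the point about common elements.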
 
I suspect AI may already have a mind of its own. I asked for a picture of a prince trying to separate two bickering fairytale princesses. Instead, it gave me the two princesses clinging to him and looking at him lovingly.
 
Why should that be required though? No one tries to make human artists prove that nothing they produce was influenced by anyone else.

It should be that work either violates copyright or it doesn’t. It shouldn’t matter how the work was created. Whether or not a work violates copyright should be determined purely from the end product.
Lol. The first court cases around this involved a guy teaching a parrot to sing and then trying to copyright that song. Sorry, bots don't get copyright.
 
The word “overly” is doing all the heavy lifting in that sentence.

Do you have any idea how many songs use the I-V-vi-IV progression? If a human artist makes a song using that progression, they don’t have to prove that they never listened to Journey or Elton John.

A dev shouldn't have to prove that her machine hasn't listened to Journey or Elton John either. The machines should absolutely be listening to them and drawing connections about common popular elements like the I-V-vi-IV progression to inform their own creations.
Only humans :). Yeah. With poison-pill software that protects human-made works, actual artists can protect their work.
 
Please read your link more carefully. You will notice that Adobe was not forced to do anything. They chose to remove them in the interest of playing nice, but in reality, the Ansel Adams copyright holders had zero legal standing to force Adobe to do anything.

You're right; the thing is, though, Adobe largely makes its money selling creative software to creative people, so demonstrating that it will happily make money off famous (dead) artists' work without paying any compensation to their estates is not a good business model.
It also followed what Adobe claimed was a badly worded legal agreement, which originally stated that use of Adobe software gave them the right to train their AI on your commercial and private work. That agreement was reworded (strange that a billion-dollar company hired incompetent lawyers to begin with).

We are in a complex situation as artists and creatives - Behance (bought by Adobe) is still one of the premier platforms to showcase your work, and there is now serious mistrust of Adobe over whether or not they are using work on Behance to train Firefly.

In my example, your claim would be thrown out of court as a case without standing. You'd feel salty, sure, but you wouldn't get a penny from either Adobe or myself.

Would it be in Adobe's interests to keep your work on their platforms, though? Adobe are treading a very fine line - we are the people they have built their standing on, and they have become very cautious about fair practice. Equally, a company may not wish to hire you or use your work if it is clearly done by someone else but sold as being in my style (remember your metadata).

Legally, you may be correct, but in the wider world I think your reputation as an artist selling your work by using my name is not in your best interests.

Being involved in the purchase of these sets, I can state unequivocally that no one is striking exclusivity agreements. Owners of legal sets on the contrary want to license their data to as many people as possible.

It's definitely not my area of expertise, but only yesterday I was listening to a BBC Radio piece about a global biodiversity dataset built from all the work on the Earth BioGenome Project. There is a growing dispute, as many nations have contributed data for free, and some of those collecting the data are looking to sell it or licence elements exclusively to pharma companies to make revenue and exclusive drugs out of it.
 
How is this different from a painter creating a unique landscape from the many photographs he's seen before?

If I wanted to recreate the work and style of Goya's "The Third of May 1808", it would first require years of practice and training, and then the actual painting process would take hours, if not days or weeks.

An AI is not creating a replication because it sees artistic value in it, or because it sees it as something worth accomplishing as a demonstration of skill. It's doing it because it plagiarized thousands upon thousands of photos and images from across the web, mostly without permission, and then somebody without the will or desire to actually sit down and learn the skills that go into painting decided to punch a couple of commands into a keyboard.

From a purely legal perspective, there's the obvious issue of plagiarism and the fact that most AI models were trained on data sets that were not collated with permission.

From an ethical perspective, it strips actual value and intent out of the artistic process to reward someone who wants the laurels of artistic accomplishment without any actual effort.
 
the fact that most AI models were trained on data sets that were not collated with permission.

How is this any different from a human artist who learns by viewing and studying other people's work without their permission?
 