Will AI Destroy Us?

Isn’t part of the problem that pirating of copyrighted work already happens in order to train these AI engines?

Watermarks included.

So in a sense you couldn’t copyright already-copyrighted work. It’s already been stolen.

This is not the same as a human referencing other works of art, imitating a style or deriving new work.

AI is literally copy-pasting and blending pixels between other artists in a very opaque fashion, such that no credit can be attributed to the original artist.

Is that not important?

If an artist, on their own, trained an AI on their own work, and used that AI to generate more art based on what they’ve already produced, essentially they’ve externalised a part of their process so that they can make their decisions on other aspects (a calculator, but for art). Okay, cool.

If someone else does it without consent, it’s theft.

Unlike maths, art is not universal and free.

Free, not in the capitalist sense, but in the sense that art requires a human to begin with. You cannot freely access the thoughts, the feelings, the context in which a piece of art was made, without simply reducing it to the fact that you can reduce anything down to binary data.

And if you don’t value the root of the art, what is the value of the art (its non-capitalistic value)?

Who is using these AI engines, and for what? I have yet to see a legitimate application.

Machine learning to improve the accuracy of spotting cancer cells, yes. Machine learning to find patterns in enormous data sets, yes.

Machine learning to copy all of the art ever made, from everywhere, without thinking about the possible damage/harm? Go nuts…


There are tons of legitimate applications all over. They just don’t show up in the news as much because they aren’t the headline grabbing stuff that will make people angry.

Here’s one recent one. I’m really interested in it, but it’s too expensive.


The point I was trying to make was that there’s no good use for AI in art.

There are many occasions where AI is perfect for eliminating algorithmic and repetitive tasks, the kind that are both high in volume and complex enough to warrant a machine doing them.

Where I’ve seen AI work as a good art tool is object selection. For example, when making edits in Photoshop, using it to separate the subject from the background is a life saver. This is a very limited and specific use case, and that’s fine, because the process itself is repetitive and algorithmic. It lends itself to automation.

But to make a complete composition sans human… There’s probably an exploitative reason for using this tech in that manner. Usually to eliminate the need for the artist to begin with.

i.e. the most valuable/expensive part of the process.

The danger has been here for decades, and the less transparency we have around AI, the worse things can get.

That’s nonsense.

I mean, just look at some stuff we’ve seen already. The Seinfeld Twitch channel? Yeah, it fucked up royally with a transphobic joke that got it rightfully banned. It has since been unbanned, and also supposedly fixed.

Separate from the question of whether it’s good, it’s absolutely art. Despite making heavy use of AI for 100% of the actual dialogue, there was significant human input in the conception and creation of the system itself. I would not bat an eye if I saw it set up as an installation in the MoMA next week.


My TotD from January.

The AI deepfakes that turn the presidents into gamer bros are a good example of regular people using AI just to make funny creations. I consider that a form of AI art.

Or like, if that’s not a great example:


Adobe incorporating some image-generating machine learning into their products is no surprise. What’s actually somewhat cool is that they claim to have trained their model only on art they have the rights to, not just a scrape of the Internet. That includes images in Adobe Stock, the public domain, etc.

I’m going to guess it also includes a lot of art and photos that artists who use Adobe products have uploaded to services like Behance. I haven’t read the EULA for any of these things, because who has, but I’m going to guess that there are Adobe services where the EULA requires the user to grant Adobe a license to use their works for things like this.

People will probably, and perhaps justifiably, be mad about that if it’s the case. But at least in terms of the US legal system, in my non-lawyer opinion, that’s rock solid. We could see a world where other systems face lawsuits and get taken out, but Adobe doesn’t.

Also, from an Adobe customer perspective, it’s going to be super nice to have this built into the product. I can imagine a world where a user can give Photoshop plain instructions in their own written language and avoid having to find tutorials and convoluted sequences of steps for each and every thing they want to do.

From the FAQ:

The current Firefly generative AI model is trained on a dataset of Adobe Stock, along with openly licensed work and public domain content where copyright has expired. […]
We do not train on any Creative Cloud subscribers’ personal content. For Adobe Stock contributors, the content is part of the Firefly training dataset, in accordance with Stock Contributor license agreements. The first model did not train on Behance.

Sounds… pretty legit?

Above board, assuming they’re not lying. I’ll assume they aren’t, since it would mean legal trouble if they were. Also, they wouldn’t be so foolish as to make their customers revolt.