Will AI Destroy Us?

Isn’t the problem that, in order to train these AI engines, piracy of copyrighted work already happens?

Watermarks included.

So in a sense you couldn’t copyright already copyrighted work. It’s already been stolen.

This is not the same as a human referencing other works of art, imitating a style or deriving new work.

AI is literally copy-pasting and blending pixels between other artists in a very opaque fashion, such that no credit can be attributed to the original artist.

Is that not important?

If an artist, on their own, trained an AI on their own work, and used that AI to generate more art based on what they’ve already produced, essentially they’ve externalised a part of their process so that they can make their decisions on other aspects (a calculator, but for art). Okay, cool.

If someone else does it without consent, it’s theft.

Unlike maths, art is not universal and free.

Free, not in the capitalist sense, but in the sense that art requires a human to begin with. You cannot freely access the thoughts, feelings, and context in which a piece of art was made without simply reducing it to binary data, on the grounds that you can reduce anything down to binary data.

And if you don’t value the root of the art, what is the value of the art (its non-capitalistic value)?

Who is using these AI engines, and for what? I have yet to see a legitimate application.

Machine learning to improve the accuracy of spotting cancer cells, yes. Machine learning to find patterns in enormous data sets, yes.

Machine learning to copy all of the art ever made, from everywhere, without thinking about the possible damage or harm? Go nuts…


There are tons of legitimate applications all over. They just don’t show up in the news as much because they aren’t the headline grabbing stuff that will make people angry.

Here’s one recent one. I’m really interested in it, but it’s too expensive.


The point I was trying to make was that there’s no good use for AI in art.

There are many occasions where AI is perfect for eliminating algorithmic and repetitive tasks, ones that are both high in volume and complex enough to warrant a machine doing them.

Where I’ve seen AI as a good art tool is object selection. For example, when making edits in Photoshop, making selections to separate the subject from the background is a life saver. This is a very limited and specific use case, and that’s fine, because the process itself is repetitive and algorithmic. It lends itself to automation.

But to make a complete composition sans human… There’s probably an exploitative reason for using this tech in that manner. Usually to eliminate the need for the artist to begin with.

ie. the most valuable/ expensive part of the process.

The danger has been here for decades, and the less transparency we have around AI, the worse things can get.

That’s nonsense.

I mean, just look at some stuff we’ve seen already. The Seinfeld Twitch channel? Yeah, it fucked up royally with its transphobic joke, which got it rightfully banned. It has since been unbanned, and also supposedly fixed.

Separate from the question of whether it’s good, it’s absolutely art. Despite making heavy use of AI for 100% of the actual dialogue, there was significant human input in the conception and creation of the system itself. I would not bat an eye if I saw it set up as an installation in the MoMA next week.


My TotD from January.

The AI deepfakes to make the presidents into gamer bros is a good example of regular people using AI to just make funny creations. I consider that a form of AI art.

Or like, if that’s not a great example:


Adobe incorporating some image generating machine learning into their products is no surprise. What’s actually somewhat cool is that they are claiming that they have only trained their model using art which they have the rights to, not just a scrape of the Internet. That includes images in Adobe Stock, public domain, etc.

I’m going to guess it also includes a lot of art and photos that artists who use Adobe products have uploaded to services like Behance. I haven’t read the EULA for any of these things, because who has, but I’m going to guess that there are Adobe services where the EULA requires the user to grant Adobe a license to use their works for things like this.

People will probably, and perhaps justifiably, be mad about that if it’s the case. But at least in terms of the US legal system, in my non-lawyer opinion, that’s rock solid. We could see a world where other systems face lawsuits and get taken out, but Adobe doesn’t.

Also, from an Adobe customer perspective, it’s going to be super nice to have this built into the product. I can imagine a world where a user can give Photoshop plain instructions in their written language and avoid having to find tutorials and convoluted sequences of steps for each and every thing they want to do.

From the FAQ:

The current Firefly generative AI model is trained on a dataset of Adobe Stock, along with openly licensed work and public domain content where copyright has expired. […]
We do not train on any Creative Cloud subscribers’ personal content. For Adobe Stock contributors, the content is part of the Firefly training dataset, in accordance with Stock Contributor license agreements. The first model did not train on Behance.

Sounds… pretty legit?

Above board, assuming they’re not lying. I’ll assume they aren’t, since it would mean legal trouble if they did. Also, they wouldn’t be so foolish as to make their customers revolt.

There was already an issue where consent was enabled by default.

You have to go into your privacy settings to remove consent from your images used for content analysis.

I imagine everyone who has auto updates would have, at least for a period, their images used without their explicit consent.


Sounds like a case where the EULA clauses allowing tracking/use of data that might be relevant had been around a while, long before the AI thing got really hot in the last few months. That opens up a question that is still TBD: is feeding data to an AI some fundamentally new thing that requires explicit permission, or an implicit type of activity a corp might do with data it has obtained?

If in either case the purpose is to improve product performance, it probably shouldn’t matter whether the data was fed to some human programmers to use in trying to implement some feature, or whether it was all fed into a big AI training pool that is going to be used to implement some new feature.

People who license images to stock image products (especially Adobe, which is kind of the bottom of the barrel on stock images) have always given up most of their rights. Stock images have been worth almost nothing for at least a couple decades now.

I considered uploading my entire photo library (sans face and recognizable human shots) to Adobe Stock years ago, but based on even simple research I would have made dozens of cents for all that effort.

Even if training data is minimal and tightly controlled, with full disclosure and consent for all creators of the training data, the concept of making money selling or licensing “stock images” is dead. It’s a perfect use case for even fully ethical generative AI.

Show me where Adobe is training this model on images that weren’t uploaded into Adobe Stock to be sold and licensed.

As for process modeling, raw artistic process has never been copyrightable. Short of a limited-duration patent on specific processes, the how of making art is not protected and never has been: only the art itself is. Making tools better by learning how people use them generically is a noble pursuit.


Will AI destroy us?

Your ChatGPT conversations could fall in the wrong hands.

This is an example of where relying on AI chatbots for factual information is potentially sketchy. It just confidently asserts information without providing a reference to consult. It is one thing to use it to brainstorm ideas or be entertained, but it seems like a big step backward to rely on it for objective factual information compared to a search engine pointing you towards a quality source. It would seem like it should be possible to at least provide a citation link in a chatbot’s answer.

Will AI destroy us?

Twitter thread:

TW: Suicide


Farewell, private thoughts.

“The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.”


ChatGPT is banned in Italy.