Will AI Destroy Us?

AI will now write your YouTube comments for you.


So, Meta created a “queer, Black AI” called Liv.
Oh no…


But wait, there’s more!


I think mass surveillance is about to get a lot easier and even more massive. But maybe the trains will run on time? Oh wait, we don’t do trains here.

It is possible that this is just a necessary upgrade to the nation’s tech infrastructure, and it is fear mongering to claim otherwise. However, the gaggle of billionaire ghouls on deck to lead the initiative and the fascistic tendencies of the government facilitating it do not put me at ease.

Imagining a future where this commentary will get me arrested but the network of AI super computers that flagged this as dangerous hate speech has zero lag time.

The slop really is getting out of hand. I've noticed recently that my YouTube feed is being infiltrated by 100% AI-generated content. This shit has generally been easy to spot, but it does seem to be getting more and more sophisticated. Frustrating that sometimes they get me with a topic I'm genuinely interested in; then 30 seconds into the video I realize it's all AI, and even though everything sounds reasonable and likely correct, I know I can't trust anything being presented as fact. And things only promise to get worse.
https://www.fastcompany.com/91293162/ai-slop-is-suffocating-the-web

The article:

AI slop is suffocating the web, says a new study

New research highlights the scale of the internet’s AI-generated content problem.
The generative AI revolution shows no sign of slowing as OpenAI recently rolled out its GPT-4.5 model to paying ChatGPT users, while competitors have announced plans to introduce their own latest models—including Anthropic, which unveiled Claude 3.7 Sonnet, its latest language model, late last month. But the ease of use of these AI models is having a material impact on the information we encounter daily, according to a new study published in Cornell University’s preprint server arXiv.

An analysis of more than 300 million documents, including consumer complaints, corporate press releases, job postings, and messages for the media published by the United Nations suggests that the web is being swamped with AI-generated slop.

The study tracks the purported involvement of generative AI tools to create content across those key sectors, above, between January 2022 and September 2024. “We wanted to quantify how many people are using these tools,” says Yaohui Zhang, one of the study’s coauthors, and a researcher at Stanford University.

The answer was, a lot. Following the November 30, 2022, release of ChatGPT, the estimated proportion of content in each domain that saw suggestions of AI generation or involvement skyrocketed. From a baseline of around 1.5% in the 11 months prior to the release of ChatGPT, the proportion of customer complaints that exhibited some sort of AI help increased tenfold. Similarly, the share of press releases that had hints of AI involvement rapidly increased in the months after ChatGPT became widely available.

Identifying which areas of the United States were more likely to adopt AI to help write complaints was made possible by the metadata accompanying the text of each complaint made to the Consumer Financial Protection Bureau (CFPB), the government agency that Donald Trump has now dissolved. In the 2024 data analyzed by the academics, complainants in Arkansas, Missouri, and North Dakota were the most likely to use AI, with signs of its involvement in around one in four complaints, while West Virginia, Idaho, and Vermont residents were least likely, with between one in 20 and one in 40 complaints showing evidence of AI.

Rather than relying on off-the-shelf AI detection tools, Zhang and his colleagues developed their own statistical framework to determine whether something was likely AI-generated. It compares linguistic patterns (including word frequency distributions) in texts written before the release of ChatGPT against those known to have been generated or modified by large language models. The outputs were then tested against texts known to be human- or AI-written, with prediction errors below 3.3%, suggesting the framework can accurately discern one from the other. Like many, the team behind the work is worried about the impact of AI-generated content flooding the web, particularly across so many areas, from consumer complaints to corporate and nongovernmental organization press releases. “I think [generative AI] is somehow constraining the creativity of humans,” says Zhang.
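For a rough feel of how word-frequency-based detection can work, here's a toy sketch. To be clear, this is not the study's actual framework: the mini-corpora, the add-one smoothing, and the log-likelihood scoring are all made up for illustration, and a real system would train on millions of documents.

```python
from collections import Counter
import math

def word_probs(corpus, vocab):
    """Smoothed word probabilities for a corpus (list of documents)."""
    counts = Counter(w for doc in corpus for w in doc.lower().split())
    total = sum(counts.values()) + len(vocab)  # add-one smoothing
    return {w: (counts[w] + 1) / total for w in vocab}

def classify(text, human_corpus, ai_corpus):
    """Label text by whichever reference corpus its word choices fit better."""
    words = text.lower().split()
    vocab = set(words)
    for doc in human_corpus + ai_corpus:
        vocab.update(doc.lower().split())
    p_human = word_probs(human_corpus, vocab)
    p_ai = word_probs(ai_corpus, vocab)
    # Positive log-likelihood-ratio score -> looks more like the AI corpus.
    score = sum(math.log(p_ai[w] / p_human[w]) for w in words)
    return "ai" if score > 0 else "human"

# Hypothetical mini-corpora; LLM output famously over-uses words like "delve".
human_corpus = ["we looked into the issue and fixed it",
                "honestly the thing just broke again"]
ai_corpus = ["let us delve into this multifaceted landscape",
             "we shall delve deeper into the intricate tapestry"]

print(classify("delve into the intricate landscape", human_corpus, ai_corpus))
# prints "ai"
```

The same scoring flips the other way for plainer phrasing, e.g. `classify("the thing just broke", human_corpus, ai_corpus)` comes back `"human"` here, which is the basic intuition behind comparing pre-ChatGPT writing against known LLM output.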


I’ll admit, that’s one that frustrates me of late: even if it’s not generated by AI completely, important parts of it are AI-generated, and this isn’t disclosed anywhere. Greg from HowToDrink is pretty bad for this. For a recent example, he used ChatGPT for modern historical research in a language he doesn’t speak, got it completely wrong, and then, when pressed, insisted it must be correct… because he had multiple people ask ChatGPT, and it gave the same wrong answer every time.

Or a smaller 3D printing YouTuber (who I’ll not name, due to the size of the channel), who went over the affiliate contracts of various manufacturers for problematic statements and language… and managed to get every single part wrong, because he used Gemini to skim the documents for problematic language. Then, after admitting he knew nothing about the law, he just confidently said, “Well, you don’t need to be a lawyer to see some of these clauses are bad.”

(He then got very pissy when a bunch of the comments, most of which he later deleted, came in basically going “Nah dude, this is all fairly standard contractual language; the way you’re trying to interpret it is literally legally unenforceable and impossible, and the arguments you say they could make are nonsense.”)

These people are setting out to make informative content on these topics, and in many cases they’re smart and well-educated people, yet they’re falling over at the first hurdle through sheer laziness and foolishness. Worse, it taints not just those obvious mistakes but all of the information presented: if they can’t be bothered to do basic fact checks, if they’re so deeply and easily fooled by statistical prediction tricks with language that they trust whatever garbage an LLM serves up to them, how can we trust that they did any fact checking or research on the rest of it?


It certainly doesn’t help that all the tools in the video production and publishing pipeline are including these features, throwing up pop-ups encouraging users to make use of them, and removing all the friction from doing so.


It’s getting really sad that we’re living in a world where https://herecomestheairplane.co/ is not an obvious parody.

Update: NaNoWriMo is dead.

(Despite the timing of the article, this is not an April Fools joke.)
