Great, now everyone can get slightly wrong, but seemingly correct, answers to random questions from a new “ChatGPT-enabled Clippy!”
I keep seeing news stories about this “marines vs AI” challenge.
The framing is always: “the marines tricked the AI!!! LOL!!”
But what I see is: “the marines are providing training data for future versions of this AI”
The sudden rush of big companies launching or announcing autoregressive language model offerings on the heels of the GPT-3 buzz looks like a desperate pile-on into a growing bubble.
I have a more dire reading of this specific situation.
Every big tech company is laying people off and the near-term economic prospects across the board are looking bleak. They’re all out of ways to make money and have no real new ideas.
They’re all rushing this out because they are desperate for a new hype bubble. Retreating to safe revenue is impossible for most large tech companies under capitalism.
I like Tom Scott and all, but clickbait title and thumbnail, and 16 minutes talking to the camera! Is there a TL;DW?
He used an AI to generate some code to do a thing, and had an existential crisis about it because something in the process reminded him of how he, as a human being, has written code in the past.
It took me a few days to get round to watching it too because yeah, a 15 minute video with a clickbait title didn’t appeal to me.
But I saw it recommended in a few more places and by a variety of people, so watched it myself.
Don’t watch it if it doesn’t appeal to you. I’m just another person recommending others watch it, so if you see enough other recommendations it might finally tip the scales.
TLDR:
Tom’s a good storyteller. Someone else TLDR-ing the contents of the video isn’t going to take you on the same journey as watching the video.
Thankfully, what I see happening is these models being used for too much, too quickly, leading to more than one catastrophic failure, which may poison the well for a while.
But I also think two areas are about to rapidly change.
- Low-tier, low-signal content (e.g., clickbait articles, most “tech review” sites, celebrity gossip, etc…) will be entirely automated. This is already happening. The model writes dozens of plausible articles and a single human’s job is to clean them up and publish them. The content is so low value and so generic that there’s no reason to ever have a human craft it.
- Low-signal social media communities are fucked. These things are going to invade every online community where the signal in the median content is low. As in, people chit chat and hang out but don’t have super deep personal conversations regularly. Forums, twitter hashtags, discords: they’re going to be completely flooded with fake people inserting native ads into bland “content.”
Just like with fast food, we’ll have “fast content.” A sea of chum.
Next up: AI-powered AI-blockers.