Long before the ability to automatically generate garbage, we had humans doing that work. Shovelware video games, extremely low-budget television, listicles, and dime novels all predate machine learning by decades, in some cases by a century or more. If production of that stuff becomes automated, it really doesn’t change much; the people who were previously paid low wages to produce it by hand will just have to find new work.
Consumers of media and information need not change their habits. None of that automatic garbage can slip past the filters we already use to screen out the hand-crafted garbage that preceded it.
If media generated by machine learning becomes so good that it can get past those filters, well, that does spell trouble for creative laborers. Bigwigs are not going to pay tens of millions of dollars to make a Hollywood movie the old way when a computer can make one for much less.
That said, if someone loads every Pulitzer Prize-winning novel into a machine learning platform and it spits out a new novel capable of also winning the Pulitzer Prize, what does it matter where it came from? For the reader of that book, it’s still incredible and worth reading. If you believe in the death of the author, then it doesn’t matter if the author was transistors.
It’s also arguable that the author isn’t transistors. The authorship is more correctly attributed in mostly equal parts to the authors of all the books that were loaded into the platform as training data.
I’m of the mind that technology should solve the problems of the world.
Art is not a problem to solve, as long as we’re talking about creative work.
Art should forever be a manifestation of the human experience. If we could cure all diseases, eliminate poverty, reverse climate change, extend life, end capitalism, etc., what is left for humanity to do but explore and create?
Until transistors can experience their own existential dread, that kind of art should have no future.
The core problem to solve is, as always, capitalism. Without the need to make money, there would be no reason to create meaningless art by any means. People only produce it because they can get money for doing so. If there were no need or greed for money, the only reason to produce art would be for the sake of it.
That said, if capitalism is not defeated, automatic generation of garbage art does solve a problem: it saves actual humans from the drudgery of producing it. As long as capitalism is a thing, that art is going to get made either way. If we can save humans the effort of producing it, that’s a positive in my eyes.
Unlike blockchain, which has few legitimate uses, machine learning has many. All it does is recognize patterns: it can look at a thing and identify whether that thing follows a pattern, or it can produce new things that follow known patterns. It’s basically a super advanced mimicry engine.
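To make that concrete, here’s a deliberately tiny sketch of the “mimicry engine” idea in Python: a character-level Markov model that can both score how well a thing follows a learned pattern and generate new things that follow it. Everything here (the MimicryEngine class, its methods, the training sentence) is made up for illustration, and real ML systems are vastly more sophisticated than this:

```python
import random
from collections import defaultdict, Counter

class MimicryEngine:
    """Toy character-level Markov model: learns which character tends to
    follow which, then scores or generates text against that pattern."""

    def __init__(self):
        # Maps each character to counts of the characters that follow it.
        self.transitions = defaultdict(Counter)

    def train(self, text):
        # Record how often each character follows each other character.
        for current, following in zip(text, text[1:]):
            self.transitions[current][following] += 1

    def score(self, text):
        # "Does this thing follow the pattern?" -- the fraction of adjacent
        # character pairs that ever appeared in the training data.
        pairs = list(zip(text, text[1:]))
        if not pairs:
            return 0.0
        seen = sum(1 for a, b in pairs if self.transitions[a][b] > 0)
        return seen / len(pairs)

    def generate(self, start, length=40):
        # "Produce a new thing that follows the pattern" -- walk the learned
        # transitions, picking each next character by observed frequency.
        out = [start]
        for _ in range(length):
            counts = self.transitions[out[-1]]
            if not counts:
                break
            chars, weights = zip(*counts.items())
            out.append(random.choices(chars, weights=weights)[0])
        return "".join(out)

engine = MimicryEngine()
engine.train("the cat sat on the mat and the cat ate the rat")
print(engine.score("the cat on the mat"))  # high: follows the pattern
print(engine.score("zqxv jkwp"))           # low: doesn't
print(engine.generate("t"))                # new text mimicking the input
```

Same engine, two directions: recognize or mimic. Scale that up by many orders of magnitude and you get the systems people are arguing about.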
The uses of this that are getting people riled up are the ones where we ask the machine learning platform to follow patterns we already know. Asking it to tell us something humans already know. Asking it to produce something humans can already produce. You show it 10 Van Goghs and it paints an 11th.
The real power is to ask it something we don’t already know. Ask it to do something we can’t already do. Get it to find patterns humans have not yet found.
We already saw this when we asked it to play Go, exploring a pattern space so large that humans haven’t yet fully explored it. We could feed it a lost language and ask it to speak. We could have it look at code and find bugs without having to write so many tests.
The problem then becomes a question of: is it right? The training data being imperfect, the results are also going to be imperfect. If we ask it something we already know, we can either be amazed at how correct it is or laugh at how wrong it is. If we ask it something we don’t know, then to what extent can the results be trusted?
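One hedged sketch of how you could put a number on that trust, under the standard assumption that performance on things we do know predicts performance on things we don’t: hold back a slice of the known answers, never let the model train on them, and measure how often it gets them right. The `model` object with `train` and `predict` methods here is hypothetical, just to show the shape of the idea:

```python
import random

def estimate_trust(model, known_examples, holdout_fraction=0.2, seed=0):
    """Train on part of what we already know and test on the rest;
    accuracy on the held-out part is a proxy for trust on unknowns."""
    rng = random.Random(seed)
    examples = list(known_examples)  # (question, answer) pairs we know
    rng.shuffle(examples)

    split = int(len(examples) * (1 - holdout_fraction))
    train_set, held_out = examples[:split], examples[split:]

    # The model never sees the held-out answers during training.
    model.train(train_set)

    correct = sum(1 for question, answer in held_out
                  if model.predict(question) == answer)
    return correct / len(held_out) if held_out else 0.0
```

It’s an imperfect proxy, of course: if the known examples share a systematic flaw, the held-out slice shares it too, which is exactly the problem with imperfect training data.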
Yes, it’s all pattern recognition and carrying it forward. Of course, a big problem is that the patterns we provide as a society are racist, ableist, favor the wealthy and powerful, etc. So AI/ML can enable all the inequalities of our society at a faster pace. And along the lines of your point about capitalism being the real problem, the benefits will likely accrue to, and the drawbacks will be avoided by, the rich and powerful, further widening inequality.
I would say yes, to a degree, at least under current models, because more than one site that has trialed AI content creation has found that it has a bit of a plagiarism problem; in one case, something like 40% of the content it produced was plagiarized. Generally, plagiarism does make one ineligible for a Pulitzer.
I did say “if”. I understand that the current platforms are not capable of doing this today. I’m saying that in the future, this could hypothetically happen. And if it does, it’s kind of great?
Imagine a movie streaming service. You don’t even choose a movie, you just put some basic parameters in, so it knows what kind of movie you are in the mood for. Then it generates a brand new movie that fits those parameters, and is legitimately S-tier. You can watch as many as you want. You’ll never run out. You’ll never sit there wasting time trying to decide what to watch. No worries about the movie sucking. It’s always 5/5 stars.
Sounds like something from a Twilight Zone episode, but maybe within the lifetime of people who are alive today.
I don’t know about you lot, but I’m finding the recent research and exploration of this AI stuff fascinating.
The speed at which Bing AI is throwing up crazy stuff is super exciting. I totally understand why, if Google had access to this level of AI a year ago, they didn’t release it. The character of “Sydney” that comes up during long conversations is pure sci-fi material, and we’re getting to see it develop in real time.
If the most you care about is “well, these companies will lose money” or “an increase in spam is bad,” then I just don’t understand.
There are lots of problems in the creative process that keep so many people from being able to realize their creative vision. Even if robots tended to my every other need, I’d never have the ability to make whatever my mind can imagine, just due to the inherent limitations of being human.
I want every kid in high school with a cool idea for a movie or a video game or an animated take on their favorite fairy tale to have the same creative firepower at their fingertips as Miyazaki could dream of leveraging throughout his career. Then, if all those kids can agree that the work they used AI to make insults their humanity, I’ll consider taking his words as something besides a privileged person’s ego lashing out against the changing tide.
If we’re all just talking about AI generating media completely without outside input, and that being used to replace all creative input from humans, that’s further off, I’m sure, but I’m mostly fine with it. For a while it’ll clearly not be all that good, so people largely won’t watch too much of it and will still gravitate towards more traditional media. But eventually it’ll get so good that no one can really say it isn’t worth watching, and then the floodgates open to stories being something we experience for ourselves. I think that’ll be good, because very few studios will spend money and time making the kind of art I want to see.
But in the meantime so much creation is unlocked by having AI to play off of and interact with. Even a mildly creative mind to guide it should be enough to get very compelling results.
My prediction is that, in due course, the idea of a major studio like Disney hiring huge teams to produce TV shows or movies or what-not as finished content will change almost completely to them just generating assets that enable user-generated content with in-house AI tools.
The recent blast of fairly ‘open’ AI generating whatever in a free-for-all, using data skimmed from the internet in raw form, was the proof-of-concept wild west era. It’s been Napster. MS and Google making their more tailored AI tools for search is fun, but it’ll grow from there. It’s like the Zune era.
Instead of subscribing to Disney+ and watching the latest official big release of Andor it’ll soon be “there are 100,000+ legends in the Star Wars Holocron app. Watch some episodes from creators you like today, or help work with our droids to tell your own legendary tales!”
And the whole while, the studios, instead of having to make art that fans hate and critics pan, can focus on just making the tools and assets. The fans will make exactly what they think they want.
Ol’ Miyazaki can stay mad. I’m looking forward to making episode 66 of my “Hornblower as a rogue Imperial Star Destroyer captain” animated TV show on Disney+.
It has nothing to do with eliminating bad work; I’m sure it will vastly increase the amount of bad work that gets made. It has nothing to do with ‘perfection’ or paving over mistakes as a goal. Whether it can make perfect renders or deliberately stylized, wonky, inconsistent shit to maintain an aesthetic is beside the point. It’s all possible; that’s in the weeds.
It has everything, in my argument, to do with force multiplication: a massive jump in automation, vastly increasing the power of the individual with a singular vision to accomplish greater feats within their own means than previously possible. It’s about allowing more people to be in the creative free space where all methods of execution are on the table, money and time are no object, and skill gap is no deterrent.
There’s extreme cost in human labor. How long it takes to do something has intrinsic expense.
If I’m a high school kid, I probably can’t leverage the capital to hire Studio Trigger to work with me on my version of a Cyberpunk anime, or a studio like Bungie to help realize my sci-fi shooter game idea.
Put another way, no one by themselves can replace an entire studio without significant automation tools. I see AI as just such a significant automation tool.
Agreed here, except that as humans learn from getting shit product out, I think they’ll refine their input quickly. That is learning from mistakes, and it’ll be supercharged. As the rate of failures increases and the fidelity of the output increases, the learning can be much faster and much more targeted.
If I make a bad movie at home is it bad because I don’t have a good idea yet, or is it bad because I don’t have a film studio and good camera gear and good actors and years of experience behind the lens? If AI can more-or-less provide that latter side of the craft to anyone with the ability to put 2 concepts together, the humans can quickly improve on the first part of the deal.
Well, as before, it’s about fail fast, fail often, and all that. But more broadly, it’s about not dedicating years of life to one idea or one product needlessly. (Not to mention that just the ability to devote that time and have the tools/resources is already a massive privilege many cannot obtain.)
It’s impressive when someone dedicates years of time and puts out some mega project mostly by themselves, for sure. I do love those projects, in part due to the sheer audacity of “ONE person made this!?” But I always feel sorry for them as much as I feel awe and pride for their accomplishment. Will they get recognition equal to all that sacrifice? Was it worth the years slaving away to make it? Probably not (unless they made something with a lot of broad market appeal), so hopefully it was at least rewarding on its own merits to have accomplished it.
One can only do so much in a day, and new ideas come faster than you can ever realize them. Working hard on one thing fucking sucks when you know something or someone else can do it faster, better, and without you needing to be there. So am I doing it because I’m the only one on the planet who can? Or because I’m the only one who believes in it enough to waste time on it? Or because I can’t afford to hire the people or the tools to do it for me, but I totally would? Or just because I like doing it?
In the ideal scenario, I only want to do what only I can do. As soon as I can offload tasks in a way that will meet my standard of output, I’m shoveling all of it off of me that I can. It only means more time and ability for me to execute the thousands of other tasks in the queue. And of course if there’s something I enjoy doing then it doesn’t matter what I do on my off time, nor does it matter if someone else can do it better/faster/etc. But having to work on stuff I don’t enjoy because I enjoy the result isn’t the same thing.
AI comes from the same process that all art and all technology arise from: the emergent complexity of macro-level structures arising from natural selection. AI is just as much a product of humans existing as art is.
I remain confident that humans will never leave our solar system. Machines built by us will replace us and bring our meta-structures (things like ideas, emotions, art, etc.) with them into the universe.
We’re (generally) in a many-hundred-year-long near-vertical line on the “increasing complexity” chart. This whole millennium is moving so fast that it’ll barely matter compared to what the world looks like when that line starts to plateau again.
1990 was a radically different world and state of being (for the humans who have access to it) from 2010, and 2010 is already radically and terrifyingly different from 2023.
The copyright office says that a work must have significant human authorship to be copyrightable. It’s possible that, for a work with both significant human authorship and computer authorship, some parts may be copyrightable and some not.
If AI-created works are not copyrightable, that means they are all in the public domain. If anyone makes or shares AI-generated content, you can pirate that shit all day long.