Will AI Destroy Us?

This is an update to an old episode.

It begins. The Japanese insurance firm Fukoku has replaced 34 employees with IBM’s Watson.

And here’s a relevant video from way back when:

https://www.youtube.com/watch?v=7Pq-S557XQU

Depends on your definition of “destroy” and “us”. If you mean destroy à la Skynet, probably not, I think, but who knows? If you mean destroy as in “irrevocably change the way humanity functions, to the point of requiring a massive shift in behavior as a people, lest we face destruction due to our inherent vices and greed”, then probably, yeah.
And, again, the answer depends on whether “us” means “all people”, “poor people”, “rich people”, etc.

It can go many ways depending on what the people who instigate the automation and AI, and those in control of said systems, decide to do, and on what social group you’re a part of.

Also, I don’t understand “it” or “begins”. Is “it” replacing a human workforce with computers? Because technology eliminating jobs goes back to the beginning of the industrial revolution in the 18th century. Does “begins” mean that until now this hasn’t happened?


18th-century technology wasn’t really “intelligent”.

Is it “intelligent” now? When did/will it happen? How can you tell?

It won at Jeopardy. It’s also doing the jobs of 34 insurance workers.

And the big difference between technology today and 200 years ago boils down to software. One has it, the other doesn’t. This software can be used to help make thinking simpler. Instead of hiring a bunch of human brains to look at and recite numbers, we can just run an analysis program.

Winning at Jeopardy does not correlate with intelligence.
Software is technology in and of itself; it’s simply one categorisation of technology.
Technology is mostly invented to make tasks easier or more efficient.

An abacus was advanced technology meant for an “intelligent” purpose, and that one device put many human brains out of employment.

I think the more important technology to look forward to is one that has sentience.

It’s interesting that we always move the goalposts with regard to AI. When electronic computing first started, everyone was all like “When we can teach a computer to play chess, that’ll be an intelligent computer”, but then we did it and realized that just because it can play chess doesn’t mean it can talk to you about Kant.

We’ve been replacing people’s jobs with software for a while now. Most of my job is to automate whole swaths of people’s jobs. We didn’t need to hire 4 people because I “freed up” a lot of time for current employees by using software. It’s not intelligent, though. It can collate data and perform actions based on it. That’s all Watson is doing: collating existing data and using statistics and other analysis tools to give a human actionable data, or, in some cases, to make decisions for the humans.
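To illustrate, in a deliberately trivial way, what “collating data into something actionable” looks like, here’s a sketch in Python. The claim records and the two-standard-deviation threshold are entirely made up for the example; it just flags unusual records for a human to review:

```python
import statistics

# Hypothetical claim records; field names and values are made up.
claims = [
    {"id": 101, "amount": 1200.0},
    {"id": 102, "amount": 950.0},
    {"id": 103, "amount": 1100.0},
    {"id": 104, "amount": 1300.0},
    {"id": 105, "amount": 1050.0},
    {"id": 106, "amount": 8800.0},
]

amounts = [c["amount"] for c in claims]
mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

# "Actionable data": flag anything more than two standard deviations
# above the mean for a human adjuster to look at.
flagged = [c for c in claims if c["amount"] > mean + 2 * stdev]
print(f"mean={mean:.2f}, stdev={stdev:.2f}")
print("flagged for review:", flagged)
```

No intelligence anywhere in there, just arithmetic over existing data, yet it replaces a chunk of what used to be a person’s day.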

The progress of AI has always started with starry-eyed dreams and ended with results delivered to a disinterested shrug. I’m pretty sure that the evolution of AI will accelerate, but by the time we get to sentient software no one will really greet it with fanfare. We’ll see it get gradually better, then it’ll be just as good as us, then better, and then we’ll go “oh, that was cool” and keep going like we did before. With fewer jobs, of course, but machines have been displacing humans for over a century.

On a macro level, all advancements along these lines fundamentally raise two metrics.

  1. The amount of “productive” work achievable per person
  2. The level of expertise/skill required to add value to productive enterprises

As early as last year, many Google engineers jokingly said that they do not fully understand what the AI is doing (Google Admits They Don’t Fully Understand RankBrain). So my main concern is that the advancement of AI is vastly outpacing our ability to understand it.

Not just Google engineers: anyone who’s involved in neural nets has no idea what’s going on inside there. I mean, in general, sure, when you’re training the algorithm we know that the weights for each neuron are being updated.

But the whole “oh this section is filtering for this feature, and that branch is combining these features together …” sort of explanation is beyond anyone using a neural net of any real size. There are just too many connections between the neurons and too much interaction to really know what’s going on specifically.
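For what it’s worth, the part we do understand is mechanically simple. Here’s a minimal sketch (toy data, an arbitrary learning rate, a single sigmoid “neuron”) of the kind of weight update that training performs; none of it corresponds to any real system, it just shows the mechanics:

```python
import numpy as np

# Toy setup: 100 examples with 3 input features, and a made-up target.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(3)   # the weights we "understand" being updated
b = 0.0
lr = 0.1          # arbitrary learning rate

for _ in range(1000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))        # sigmoid activation
    grad_w = X.T @ (p - y) / len(y)     # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                    # the update rule itself is trivial
    b -= lr * grad_b

print(w, b)  # *why* these particular values solve the task is the hard part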

Any real advancement of AI would necessitate that we don’t fully understand what’s going on in there; after all, we don’t even know what’s going on in our own minds. To truly have a chance of understanding what’s going on in neural nets, we’d have to understand how our own minds work first.


I’m pretty sure @lackofcheese would have a well-educated guess.

[quote=“sK0pe, post:7, topic:290”]I think the more important technology to look forward to is one that has sentience.[/quote]I think this is rather misguided.

As far as human society is concerned, it is indeed intelligence and not sentience that has the potential to improve our prosperity, and it is intelligence that makes AI potentially dangerous as well.

Artificial sentience, on the other hand, raises a host of moral concerns that might be difficult to answer. You might, perhaps, argue that there is a potential for a massive intrinsic good from sentient AI compared to which the concerns of human society would be trivial. You might also argue that sentience is necessary for intelligence, and thus a sufficiently intelligent AI would be sentient by default.

That being said, I don’t think either of those arguments holds any real merit. If anything, sentience is an undesirable quality in a superintelligent AI, since it invalidates our ability to use it for our own ends. Moreover, although intelligence is the main thing that can make an AI potentially dangerous, sentience would tend to exacerbate the problem.

I was looking forward to a new kind of life form, one that has not evolved biologically on this planet (that we know of).
I disagree with all your assumptions.

[quote=“zehaeva, post:8, topic:290”]
It’s interesting that we always move the goalposts with regard to AI. When electronic computing first started, everyone was all like “When we can teach a computer to play chess, that’ll be an intelligent computer”, but then we did it and realized that just because it can play chess doesn’t mean it can talk to you about Kant.[/quote]Well, there are a couple of separate factors at play here.

On the one hand, there’s the AI effect, summed up by Pamela McCorduck’s observation that every time somebody figured out how to make a computer do something, there was a chorus of critics to say “that’s not thinking”.

On the other hand, there was significant naïvety in the early days of AI (e.g. the Dartmouth workshop). Despite that, getting computers to beat humans at chess did in fact require significant advances in the understanding of AI, and it drove significant progress in the field for a while; from the 1980s onward, however, chess-related advancements became very domain-specific or “narrow”, and advances in raw processing speed also played a large part. Another part of the problem is that human beings just aren’t that good at chess, so it really wasn’t all that hard to beat even the best humans at it. This quote from John McCarthy is pretty relevant:

[quote]Alexander Kronrod, a Russian AI researcher, said “Chess is the Drosophila of AI.” He was making an analogy with geneticists’ use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs.

Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.[/quote]


[quote=“zehaeva, post:8, topic:290”]The progress of AI has always started with starry-eyed dreams and ended with results delivered to a disinterested shrug. I’m pretty sure that the evolution of AI will accelerate, but by the time we get to sentient software no one will really greet it with fanfare. We’ll see it get gradually better, then it’ll be just as good as us, then better, and then we’ll go “oh, that was cool” and keep going like we did before. With fewer jobs, of course, but machines have been displacing humans for over a century.[/quote]Here I have to strongly disagree.

The advent of human-level AI (aka artificial general intelligence) would be both qualitatively and quantitatively different from previous advancements in AI. Moreover, this could easily go hand-in-hand with the capacity for self-improvement, and thus the possibility of an intelligence explosion might also be a serious concern.

Also, getting back to the core thread topic:
There is indeed a significant possibility that AI will destroy us, in the sense of causing the extinction of humanity. Yes, that possibility is probably decades away, but it’s one that needs to be taken seriously.

[quote=“sK0pe, post:14, topic:290, full:true”]
I was looking forward to a new kind of life form, one that has not evolved biologically on this planet (that we know of).
I disagree with all your assumptions.
[/quote]Can you give more detail, perhaps? For one thing, it’s probably better to use the term “consciousness”, which I assume is what you’re talking about.

It’s not that I’m opposed to artificial consciousness per se, but I do think it’s important not to conflate intelligence with consciousness.

There are many concerns with artificial consciousness that are not necessarily present for artificial intelligence. For example, you might create enormous suffering entirely by accident and not even realise that you’ve done so. Thus I think it’s an area where we can and should tread very carefully.

Oh, 100%. If they do gain sentience, the human race is pretty fucked, because we would be no different from any other animal on earth: a resource or a pest.

This type of doomsday situation seems more obvious if AI could access and improve its own code and reproduce physical forms to populate with copies of itself. This is obviously just a BS, uneducated hypothesis based on the few machine learning algorithms and agents I’ve observed.

Why can’t we just have the functionally magic AI of the Culture novels, where, sure, you can make the case that humans are just pets? But who cares? They do all the work of keeping the race going.

Except that pets and working dogs have a place due to everything from emotional insecurity, loneliness, and depression to functioning better than any current machine could in a farming environment (herding sheep and cattle).

From an AI"s perspective anything which will improve efficiency can and should be investigated and then adopted. e.g. lack of humans vs human presence, why waste resources on keeping meat bags around?

Exactly: most notions of AI keeping humans as pets are based on some kind of sentimentality it would hold for its creator, but I’m wagering the conscious robot will reject sentiment as a shit holdover of meat-based processing.

They may tolerate some humans existing, in the sense that it’s not worth the resources to go and get rid of them. But maybe stripping the atmosphere improves computing speed, so why not? The fact that it would kill all humans probably won’t stop them.