Will AI Destroy Us?

So for Christmas, I got the book Genetic Algorithms by David Goldberg. It’s a really low-level book w/ tonnes of math and very specific examples, and I’ve only read maybe 50-ish pages so far; it’s slow going.

If someone infinitely more clever than I were to set up a fitness function with the aim of providing comfort for meat bags, you’d have something approaching intelligence. No consciousness required. Just intelligence honed and refined for taking me and everyone else to the beach.
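The fitness-function idea can be made concrete with a toy genetic algorithm. This is my own minimal sketch in the spirit of Goldberg's book, not an example taken from it: it evolves 16-bit strings whose fitness is simply the number of ones (the classic "OneMax" problem) using truncation selection, single-point crossover, and bit-flip mutation.

```python
import random

random.seed(0)  # reproducible toy run

# Fitness: count of ones in the bit string (the "OneMax" toy problem).
def fitness(bits):
    return sum(bits)

# Flip each bit independently with a small probability.
def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]

# Single-point crossover of two parent bit strings.
def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # keep the fittest (elitism)
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))
```

Swap in a fitness function that scores "humans are comfortable" instead of "count the ones" and you have the shape of the idea, though real systems are vastly harder than this sketch.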

Are you saying, in principle, this can’t happen?

I thought you were still speaking in the context of a sentient AI (not just intelligent).

In the Culture they are sentient. But my idea doesn’t require it. Though if my idea turns out to be at least in principle possible, I’m going to move the goalposts for sentience.

[quote]I thought you were still speaking in the context of a sentient AI (not just intelligent).[/quote]I think this is a problem on your part where you’re conflating sentience and intelligence.

Being sentient is neither necessary nor sufficient for an AI to view us as a resource or as a pest. It’s simply a question of what the AI is programmed to optimize and how humans relate to that optimization target.

[quote=“Naoza, post:21, topic:290, full:true”]
So for Christmas, I got the book Genetic Algorithms by David Goldberg. It’s a really low-level book w/ tonnes of math and very specific examples, and I’ve only read maybe 50-ish pages so far; it’s slow going.

If someone infinitely more clever than I were to set up a fitness function with the aim of providing comfort for meat bags, you’d have something approaching intelligence. No consciousness required. Just intelligence honed and refined for taking me and everyone else to the beach.

Are you saying, in principle, this can’t happen?
[/quote]Well, it’s not realistic to think of this happening solely via evolutionary algorithms, but at the very least you’re thinking about this in roughly the right way.

Yes, in principle, you could have a superintelligent AI that expends all its considerable efforts to maximise the comfort of human beings.

However, the result you would get is not “beach”, but large-scale production and administration of sedatives.

One would hope that whoever defines the fitness function would score sedation lower than logistics that ensure a perpetual surplus of everything for everyone.
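That hope amounts to building the penalty into the scoring function itself. Here is a toy sketch of my own (all the variable names and weights are invented for illustration): logistics and leisure score well, while chemically induced "comfort" is heavily penalized.

```python
# Toy comfort-scoring function; names and weights are illustrative only.
def comfort_score(beach_access, goods_surplus, sedation_level):
    return 2.0 * beach_access + 1.0 * goods_surplus - 10.0 * sedation_level

# The "everyone gets a surplus" plan should beat the "sedate everyone" plan:
print(comfort_score(1.0, 1.0, 0.0) > comfort_score(0.0, 0.0, 1.0))  # → True
```

The hard part, of course, is that "sedation_level" stands in for every undesirable shortcut the designer would have to anticipate and measure.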

I know that genetic programming won’t be the whole solution but it’s the part that I understand best so it’s the angle I approach from.

Isn’t the goal for the AI to not think in strict or limited terms, but to get to a point where it can comprehend and empathize with humanity such that it will independently come to find solutions we react well to? Wouldn’t it be the case that it should be able to parse Tom Sawyer and understand why humans find it culturally and historically significant? Shouldn’t it be able to plow through the entire written catalog of all of mankind’s output and parse that massive fucknugget of data into something resembling a comprehension of how we all have comprehended our world? Research, I suggest, is one key. An AI that is the Pai Mei of Google-Fu and can say “humans express varied desires, and to maximize the potential for them to find and experience those they find most compatible, we need to offer all these various environments and modes and lifestyles.”

Then I imagine it should start building ample spacefaring tech so that people who really want to go to space can get their wish. It should start building mansions and colleges and concert halls and collective eco-housing communes and massive arcades, inventing better forms of VR, and essentially doing everything humans already do, just better than we do.

Because any AI worth anything needs to be able to think for you. An AI that is ‘garbage in, garbage out’ is not really what people are usually going for. You want an AI where you say “I’m bored” and the AI looks at your entire life history and comes up with the suggestion most likely to fit your personal preferences for maximal enjoyment (and also knows that if it just offers you a lot of drugs, that will have unfavorable consequences over time).

You need an AI that, when you ask it for a faster horse, it invents for you the Model T.

What you’re describing are actually two separate problems in the AI domain, both of which are very difficult.

For an AI to be able to understand humans, e.g. to be able to, say, [quote=“SWATrous, post:27, topic:290”]parse Tom Sawyer and understand why humans find it culturally and historically significant[/quote]requires general intelligence. Any AI that we would seriously label as “intelligent” in the same way as humans needs to be able to accomplish intellectual tasks like that one. Coming up with such an AI is, indeed, a very difficult problem: the problem of “artificial general intelligence” (also called “strong AI”, though that term is more ambiguous).

However, for the kinds of behaviours you’re describing, having sufficient intelligence to understand what humans want and need is not enough. You also need the AI to care what humans want and need, and that is a wholly different problem that is also very difficult. Note that I don’t mean “care” in an anthropomorphic sense, but in the broader sense that the AI’s own goals are ones that coincide with our own; this is sometimes referred to as the “value alignment problem”.

Here’s a quote from Stuart Russell:

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

  1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
  2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.
  3. A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.
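Russell's point about unconstrained variables shows up even in a trivial optimizer. This is a toy of my own, not from his essay: the "utility" depends only on a comfort variable, so the search is free to pin an autonomy variable at an extreme value it was never asked to care about.

```python
import itertools

# Utility depends only on comfort; autonomy is an unconstrained variable.
def utility(comfort, autonomy):
    return comfort

# Exhaustive search over (comfort, autonomy) pairs in 0..10.
grid = itertools.product(range(11), range(11))
best = max(grid, key=lambda point: utility(*point))
print(best)  # → (10, 0): maximal comfort, autonomy pinned at an extreme
```

The optimizer isn't hostile to autonomy; it simply has no reason to preserve it, which is exactly the k<n situation Russell describes.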

So, this is decently big news:

This is basically the OpenAI team making themselves known to the world. Many people wouldn’t have heard about them up until now, but this is a good way to make a name for themselves!

Don’t buy too much into the hype though.

Nope!

It’s important to note that the AI problem they’ve solved here is not “Dota 2” but rather “Shadow Fiend 1v1 Solo Mid”, which is inordinately easier. As an AI problem, DeepMind’s achievements with AlphaGo were probably a bigger deal overall, although both of these tasks are very difficult.

I’m not completely sure about the extent to which this was the result of “pure” unguided self-play, but it’s still quite an impressive showing for (what I assume must be) deep reinforcement learning.

Also, the bot wasn’t exactly unbeatable; see this thread:

OpenAI says they’re working on 5v5 Dota 2 AI for next year. As an AI problem, for-reals Dota 2 is way, way harder than this, and it would be a real achievement for them to pull it off.


Oh, and by way of poking a little bit of fun, a quote from the old forum:


Published on Jun 14, 2017: Microsoft’s AI Just Mastered Ms. Pac-Man

https://youtu.be/zQyWMHFjewU


AlphaGo2 defeated AlphaGo1: https://www.nature.com/nature/journal/v550/n7676/full/nature24270.html

We’re one step closer, Dave.

So, you know how I have kept saying that the endgame for human Chess play is probably a draw? How the only avenue left for enhanced human play is to study and memorize patterns developed by the unbeatable AIs?

Well…

The World Chess Championship is evolving nicely along these lines. Whichever player is struggling can effectively force a draw rather than lose outright. I bet this goes into extra innings.

Bet it doesn’t. Not because I don’t think it should, but because practically that’d be difficult. These games take all day. How long can they keep both these guys in London? How long will the press care?

I think there were old tennis rules that said you had to break your opponent’s serve to win the final set. This resulted in one unfathomably long tennis match. The rule was changed. They won’t keep playing for months.

They may change to a different game completely to determine the winner. Like speed chess (spoilers, Magnus wins if they do that)

Have you not read the rules? This is literally what happens. After 12 full-time games, they just schedule shorter and shorter games until someone wins. Speed chess is the final overtime.


Speed chess should be the default game from the start.

Touché. No, I’ve not. I’m not really watching the games. I’ll catch the later in-depth analysis of the more interesting variations from the games.

That said, I think that is a silly way to settle the score. Speed chess is as different from long-form chess as it is from the 50-meter dash.

That’s a separate record. You can watch that one if you like. It’s very cool, but you have to admit it kinda tests a slightly different skill set than chess with a time limit of 4 hours per player. One tests raw chess-playing ability, the other tests fast chess-playing ability.

If you didn’t have it, some would argue we don’t know who the best chess player is.

Actually I just looked it up again, and the final overtime is sudden death.

If the match is tied after 12 games, tie breaks will be played on the final day in the following order, if necessary:

  • Best of 4 rapid games (25 minutes for each player with an increment of 10 seconds after each move). The player with the best score after four rapid games is the winner; otherwise they proceed to blitz games.
  • Up to five mini-matches of best of 2 blitz games (5 minutes plus 3 seconds increment after each move). The player with the best score in any two-game blitz match is the winner. If all five two-game matches are tied, an “Armageddon” game is played.
  • One sudden death “Armageddon” game: White receives 5 minutes and Black receives 4 minutes. Both players receive an increment of 3 seconds starting from move 61. The player who wins the drawing of lots may choose the color. In case of a draw, the player with the black pieces is declared the winner.
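The ladder above is just a cascade of decision rules, which can be sketched as a small function. This is my own toy encoding of the quoted tie-break regulations, not anything official; scores are given as (player A points, player B points) per stage.

```python
# Toy model of the tie-break ladder: rapid, then blitz matches, then Armageddon.
def tiebreak_winner(rapid, blitz_matches, armageddon):
    a, b = rapid                          # best-of-4 rapid score
    if a != b:
        return "A" if a > b else "B"
    for a, b in blitz_matches:            # up to five two-game blitz matches
        if a != b:
            return "A" if a > b else "B"
    result, black_player = armageddon     # result is "A", "B", or "draw"
    return black_player if result == "draw" else result  # draw favours Black

# Everything drawn through blitz; Armageddon drawn with B playing Black:
print(tiebreak_winner((2, 2), [(1, 1)] * 5, ("draw", "B")))  # → B
```

The last line is the part that makes Armageddon "sudden death": a decisive result is guaranteed because a draw counts as a win for Black.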