5 Comments
Gesild:

I've gone through a lot of the literature on the dangers of ASI and I'm just not convinced that an ASI would view us as a threat or any kind of barrier to its goals (amassing resources, proliferating, etc.). Compared to an AI we are very fragile and live for a very short time; what threat do we pose?

Shaked Koplewitz:

Two issues:

- We use a lot of valuable resources that any sort of agent might want. We don't hate orangutans, but we still destroy their habitat.

- Even without any sort of AI with agency, a human with access to one might use it in various destructive ways, and there's no shortage of crazy humans out there.

Gesild:

The orangutan analogy seems flawed. We destroy their habitat, but we don't go out of our way to kill all of them.

Misuse by other humans seems much more likely than an AI acting on its own. Misuse by other humans is possible and dangerous, but is it really an existential risk? I've referred in the past to incidents of AI going haywire and hurting or killing people as a "Chernobyl-type event" or "AI Chernobyl". It's inevitable that there will be tragic incidents involving any widely used technology, but I think we disagree on how much damage will be done and when.

Shaked Koplewitz:

That's exactly the point though. We don't go out of our way to kill them, but the existence of an agent with higher intelligence and a competing use for the resources they live off is enough to mostly destroy them.

I agree here that misuse by humans is more likely though (it's simpler and would probably happen first anyway).

Gesild:

Part of my disagreement is that I don't believe humans really compete with orangutans, and I don't think AI will compete with humans either. I expect an ASI would just roll right over us and take what it needs, so there will definitely be collateral damage; I just don't see it as an existential risk. I imagine it does its own thing and we do our own thing, much like we otherwise leave orangutans alone (besides occasionally destroying their habitats, hunting them for sport, capturing them for amusement, etc.).

Within how much time would you predict a major incident? And what form do you think it might take? (Rogue drones targeting civilians? A self-driving bus crash? A cyber-attack by an individual or a nation?)
