AI X-Risk

From Open Source Ecology
=Risk Spectrum=
= AI Risk Spectrum (Representative Thinkers) =


{| class="wikitable"
! Position
! General View
! Representative Thinkers
|-
| Strong extinction risk (“p(doom) high”)
| Superintelligent AI is likely to cause human extinction or irreversible loss of control unless development is halted or radically redesigned.
| Eliezer Yudkowsky, Connor Leahy
|-
| Serious risk but potentially solvable
| Advanced AI presents major existential risks, but safety research, governance, and technical controls may mitigate them.
| Geoffrey Hinton, Max Tegmark
|-
| Moderate concern
| AI poses real risks that require oversight and safety research, but extinction scenarios are uncertain.
| Yoshua Bengio
|-
| Skeptical of extinction framing
| AI risks are manageable engineering challenges; concerns about superintelligence are overstated.
| Yann LeCun, Andrew Ng
|}


=Key presentations:=



Latest revision as of 04:50, 9 March 2026


*Stuart Russell - balanced discussion - https://www.youtube.com/watch?app=desktop&v=P7Y-fynYsgE
*American Compass - Nate Soares and Eliezer Yudkowsky - https://www.youtube.com/watch?v=O4XXkO3uo8c
*Robinson's Podcast with Eliezer Yudkowsky - https://www.youtube.com/watch?v=0QmDcQIvSDc&t=3s
*Center for Humane Technology - Tristan Harris and Aza Raskin - https://www.youtube.com/watch?v=xoVJKj8lcNQ
*Technical discussion - Lex Fridman with Nathan Lambert and Sebastian Raschka - https://www.youtube.com/watch?v=EV7WhVT270Q&t=5s
*ASI with Nate Soares - https://www.youtube.com/watch?v=0tjOzQne1LY
*Connor Leahy - https://www.youtube.com/watch?v=7Y_1_RmCJmA
*Max Tegmark - https://www.youtube.com/watch?v=OkG5S1NwwVM&t
*Joscha Bach - how intelligence actually works - https://www.youtube.com/watch?v=3MkJEGE9GRY
*OpenClaw - viral agent - https://www.youtube.com/watch?v=YFjfBk8HI5o
*Max Tegmark - Life 3.0 - https://www.youtube.com/watch?v=Gi8LUnhP5yU&t=13s
*Yoshua Bengio - https://www.youtube.com/watch?v=azOmzumh0vQ
*Yann LeCun - 'doomers think people are fundamentally bad' - https://www.youtube.com/watch?v=5t1vTLU7s40