Existential risk from artificial general intelligence
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.