AI control problem
In artificial intelligence (AI) and philosophy, the AI control problem is the hypothetical puzzle of how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. Its study is motivated by the claim that the human race will have to get the control problem right "the first time", as a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it after launch. In addition, some scholars argue that solutions to the control problem, alongside other advances in "AI safety engineering", might also find applications in existing non-superintelligent AI. Potential strategies include "capability control" (preventing an AI from being able to pursue harmful plans) and "motivational control" (building an AI that wants to be helpful).