Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe.

The existential risk ("x-risk") school argues as follows: the human species currently dominates other species because the human brain has distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent," then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

The probability of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science. Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers (with a 17% response rate) found that more than half of respondents believed there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe.