Superintelligence is already close: a former OpenAI employee spoke about the dangers of artificial intelligence

In the coming years, humanity may lose control of AI.

Leopold Aschenbrenner, a former safety researcher at OpenAI, has published a paper describing the future of artificial intelligence and the risks associated with its development. In his view, so-called artificial general intelligence (AGI) could be created by 2027. It would be capable of solving a wide range of problems, learning on its own, and would be in no way inferior to the human mind.

In his work, Aschenbrenner stresses that AI development is accelerating, and humanity is failing to grasp just how fast it is moving. According to him, by 2026 the AGI prototypes now in development will be smarter than college graduates, and by the end of the decade they will surpass the intelligence of any human being. This will lead to the emergence of artificial superintelligence.

According to Aschenbrenner, once AI reaches the level of automated research, AGI development will accelerate almost beyond control, creating serious threats to all of humanity. The author stresses that a confrontation between the United States and China could follow.

He also notes that in this new reality, the arms race between nations will play out precisely in the field of artificial intelligence, which will come to control many areas of life, including defense. The main threat is that humanity will reach this point unprepared for what it will face.

Aschenbrenner laments that despite enormous financial investment, the companies developing AI pay little attention to safety. With his work, he ultimately hopes to draw regulators' attention to the problem.