Ilya Sutskever, chief scientist of OpenAI and one of the company’s co-founders, predicts that AI with intelligence exceeding that of humans could arrive within the decade.
This AI, assuming it does indeed arrive, will not necessarily be benevolent. Consequently, researchers must find ways to control and restrict it, Sutskever says.
Currently, we do not have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue.
Our goal is to build a ‘human-level automated alignment researcher’, using vast amounts of compute capacity, to bring superintelligence under control.