OpenAI Creating Team To Rein In ‘Superintelligent’ AI
OpenAI co-founder Ilya Sutskever warns that superintelligence must be controlled to head off risks as severe as human extinction. The company proposes new governance institutions to manage those risks and new techniques to ensure AI systems follow human intent.
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs,” the blog post read.
OpenAI is dedicating 20 per cent of its computing power to a new team focused on the core machine-learning challenge of aligning superintelligent AI systems with human intent, alongside improving and mitigating risks in its existing models such as ChatGPT. The team's goal is to build a roughly human-level automated alignment researcher, then use vast compute to scale its efforts and iteratively align superintelligence.
(With inputs from Shikha Singh)