OpenAI Calls For Regulating AI
The leaders of OpenAI, the company behind ChatGPT, have called for the regulation of “superintelligent” AIs, arguing that a body similar to the International Atomic Energy Agency is needed to protect humanity from the risk of inadvertently creating a weapon.
In a brief note posted to the company’s website, co-founders Greg Brockman, Ilya Sutskever, and Sam Altman call for an international regulator to begin working out how to “inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security” in order to reduce the “existential risk” such systems might pose.
In the medium term, the trio urges “some degree of coordination” among companies working on cutting-edge AI research, to ensure that the integration of ever-more powerful models into society is managed while safety is maintained. That coordination could take the form of, for example, a government-led project or a collective agreement to limit the growth of AI capabilities.
For decades, scientists have warned about the possible dangers of superintelligence, but as AI development has accelerated, those dangers have become more tangible.
(With inputs from Shikha Singh)