OpenAI Outlines AI Safety Plan
OpenAI has announced a framework to address safety in its most advanced models, including a provision that allows the board to reverse safety decisions.
The company will deploy its latest technology only if it is deemed safe in specific areas such as cybersecurity and nuclear threats.
An advisory group will review safety reports and send them to executives and the board.
AI safety has been a prominent concern among researchers and the public since ChatGPT’s launch, given the technology’s potential to spread disinformation and manipulate humans.
(With inputs from Shikha Singh)