In a plan published on its website on Dec. 18, OpenAI laid out a framework for addressing safety in its most advanced models, including giving its board the power to reverse safety decisions.
The latest technology from Microsoft-backed OpenAI will be deployed only if it is deemed safe in specific areas such as cybersecurity and nuclear threats. The company is also forming an advisory group to review safety reports and forward them to its executives and board. While executives will make the decisions, the board has the authority to overturn them.
Since ChatGPT's launch a year ago, both AI researchers and the general public have been concerned about the potential dangers of AI. Generative AI's ability to write poetry and essays has wowed users, but it has also raised safety concerns over its potential to spread misinformation and manipulate humans.
In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause on developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. According to a May Reuters/Ipsos poll, more than two-thirds of Americans are concerned about the potential negative effects of AI, and 61% believe it could threaten civilization.