(CTN News) – OpenAI, an artificial intelligence company, has outlined a comprehensive framework to address safety concerns in its advanced models. The plan, published on OpenAI's website on Monday, includes measures such as granting the company's board the authority to reverse safety decisions.
OpenAI, which is supported by Microsoft, has made it clear that its latest technology will only be deployed if it is deemed safe in specific domains like cybersecurity and nuclear threats.
Additionally, the company is establishing an advisory group that will review safety reports and share them with executives and the board.
While executives will hold decision-making authority, the board retains the power to overturn their decisions.
Ever since the launch of ChatGPT a year ago, concerns regarding the potential risks associated with artificial intelligence have been at the forefront of discussions among AI researchers and the general public.
While generative AI technology has impressed users with its ability to write poetry and essays, it has also raised safety concerns because of its potential to spread disinformation and manipulate individuals.
In April, a group of AI industry leaders and experts signed an open letter urging a six-month pause in the development of systems more powerful than OpenAI's GPT-4.
Their concerns were rooted in the potential risks these advanced systems could pose to society.
A Reuters/Ipsos poll conducted in May found that more than two-thirds of Americans worry about the potential negative effects of AI, with 61% believing it could pose a threat to civilization.