(CTN News) – OpenAI, the artificial intelligence startup, announced on Tuesday that it has formed a Safety and Security Committee to oversee the company’s safety and security practices.
Board members, including CEO Sam Altman, will supervise the committee. The announcement comes at a pivotal moment, as the company has begun training its next artificial intelligence model.
According to a blog post published by OpenAI, directors Bret Taylor, Adam D’Angelo, and Nicole Seligman will lead the committee.
The growing power of artificial intelligence models has raised safety concerns about chatbots with generative AI capabilities, which can hold human-like conversations and create images from text prompts. OpenAI, which is backed by Microsoft, develops such chatbots.
Earlier this month, Ilya Sutskever, OpenAI’s former Chief Scientist, and Jan Leike, who led the Superalignment team, both resigned from the company. Their team was responsible for ensuring that artificial intelligence systems remained aligned with their intended objectives.
Days after those high-profile departures, CNBC reported that OpenAI had disbanded the Superalignment team earlier in May, less than a year after it was created. A number of the team’s members were moved to other departments within the company.
The newly formed committee will be responsible for making recommendations to the board on safety and security decisions across OpenAI’s projects and operations.
Its first task is to evaluate and further develop OpenAI’s existing safety practices over the next 90 days, after which it will present its recommendations to the board.
Once the board’s review is complete, OpenAI will publicly share an update on the recommendations it has adopted.
The committee also includes Jakub Pachocki, the newly appointed Chief Scientist, and Matt Knight, the company’s head of security.
The company will also consult outside experts, including Rob Joyce, former head of cybersecurity at the United States National Security Agency, and John Carlin, a former Department of Justice official, both of whom have extensive experience handling cybersecurity issues.
While OpenAI did not share further details about the new “frontier” model it is training, the company said the system would bring it to the “next level of capabilities on our path to artificial general intelligence.”