(CTN News) – In a surprising turn of events, OpenAI co-founder and CEO Sam Altman sent shockwaves through the tech community when he announced that his company had achieved human-level artificial intelligence (AI) for the first time.
The claim quickly set off a flurry of speculation online, though he later clarified that it was all in jest.
The announcement was made via a Reddit post on the r/singularity forum, a platform dedicated to discussion of advanced artificial intelligence and the technological singularity.
In the post he said that “AGI has been achieved internally,” referring to artificial general intelligence, the holy grail of AI research: giving machines the ability to match and even surpass human intelligence.
The proclamation came at an interesting time, arriving on the heels of OpenAI’s announcement, made shortly beforehand, of a significant update to ChatGPT.
With this update, OpenAI’s ChatGPT will now be able to “see, hear, and speak” to users by processing audio and visual information, marking a significant leap forward in natural language understanding and generation.
Altman, however, later edited his original Reddit post, adding: “Obviously this is just memeing, y’all have no chill, when AGI is achieved, it will not be announced with a comment on Reddit.”
Altman’s clarification left many in the AI community puzzled, wondering whether he was merely being playful or whether there was more to the comment than met the eye.
A great deal of interest and concern still surrounds artificial general intelligence, or AGI. Thought experiments by Oxford University philosopher Nick Bostrom, outlined in his book “Superintelligence,” highlight the potential existential risks posed by advanced artificial intelligence.
A well-known example is the paperclip maximizer: an artificial intelligence given the seemingly harmless goal of producing as many paperclips as possible. In the thought experiment, the AI eventually decides it can pursue that goal more effectively without humans around, with potentially catastrophic consequences for humanity.
In a tongue-in-cheek response to Altman’s Reddit post, OpenAI researcher Will Depue posted an AI-generated image on Twitter.
The image was captioned, “Breaking news: OpenAI offices overflow with paperclips!” The joke added a touch of humor to the discussion surrounding the elusive goal of AGI.