
Google Engineer Claims AI is Intelligent


An engineer at Google has been placed on paid leave after the company rejected his claim that its artificial intelligence is sentient, a dispute that exposes yet another rift over the company's most advanced technology.

Blake Lemoine, a senior software engineer in Google's Responsible AI group, said in an interview that he had been placed on leave. According to the company's human resources department, he had violated Google's confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator's office, claiming they contained evidence that Google and its technology engaged in religious discrimination.

Google said that while its systems could mimic conversation and riff on different topics, they lacked consciousness. “Our team, including ethicists and technologists, has reviewed Blake’s concerns in light of our Artificial Intelligence guidelines and told him the evidence does not support his claims,” Google spokesman Brian Gabriel said. “Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it does not make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine’s suspension.


Mr. Lemoine had argued for months with Google managers, executives and human resources over his claim that the Language Model for Dialogue Applications, or LaMDA, possessed consciousness and a soul. Researchers and engineers at Google who have used LaMDA, an internal tool, have reached a different conclusion than Mr. Lemoine, and most A.I. professionals believe the industry is a very long way from computing sentience.

Some A.I. researchers have long claimed that these technologies will soon reach sentience, but many others dismiss such claims out of hand. “You would not say such things if you used these systems,” said Emaad Khwaja, a researcher at UC Berkeley and UCSF who is exploring similar technologies.


In Google's case, the technology relies on a mathematical system called a neural network, which learns skills by analyzing large amounts of data. By finding patterns in thousands of photos, for example, it can learn to recognize a cat.
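LaMDA itself is proprietary and far larger than anything that fits in a few lines, but the core idea of a neural network learning a pattern from examples can be sketched briefly. The example below is a hypothetical illustration: it trains a tiny two-layer network on synthetic numbers rather than cat photos, yet the mechanism of repeatedly adjusting weights so the network's predictions match patterns in the data is the same one described above.

```python
import numpy as np

# Illustrative sketch only: a tiny two-layer neural network trained on synthetic
# data. Real systems use vastly larger networks and datasets, but the principle
# of learning patterns from examples is the same.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # 200 examples, 4 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # a simple hidden pattern to learn

W1 = rng.normal(scale=0.5, size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    # Forward pass: compute the network's predictions with the current weights.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass: measure the error and nudge every weight to reduce it.
    grad_logit = (p - y) / len(y)
    grad_W2 = h.T @ grad_logit[:, None]
    grad_b2 = grad_logit.sum(keepdims=True)
    grad_h = grad_logit[:, None] @ W2.T * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

# After training, the network has picked up the pattern hidden in the data.
print("training accuracy:", ((p > 0.5) == y).mean())
```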

Several years ago, Google and other leading companies designed neural networks that learn from enormous amounts of text, including unpublished books and thousands of Wikipedia articles. These models can be applied to many tasks: as well as summarizing articles and answering questions, they can generate tweets and even write blog posts.
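The models described above are not publicly available, but the kinds of tasks they perform can be illustrated with small open-source stand-ins. The sketch below is an assumption-laden example: it uses the Hugging Face transformers library and small public models (not LaMDA or any Google system) to summarize a passage, answer a question about it and generate free-form text.

```python
# Illustrative sketch only: small public models loaded through the Hugging Face
# `transformers` library stand in for the much larger proprietary systems the
# article describes. Install with: pip install transformers torch
from transformers import pipeline

article = (
    "A Google engineer was placed on paid leave after claiming that the company's "
    "conversational A.I. system had become sentient. Google said its experts "
    "reviewed the claim and found that the evidence did not support it."
)

# Summarizing articles.
summarizer = pipeline("summarization")
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])

# Answering questions about a passage.
qa = pipeline("question-answering")
print(qa(question="Why was the engineer placed on leave?", context=article)["answer"])

# Generating free-form text from a prompt (GPT-2, a small public model).
generator = pipeline("text-generation", model="gpt2")
print(generator("Large language models can", max_length=30)[0]["generated_text"])
```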

They are deeply flawed, however. Sometimes they generate perfect prose; at other times they produce nonsense. In general, the systems are very good at recreating patterns they have seen before, but they cannot reason like a human.

Salman Ahmad is a seasoned writer for CTN News, bringing a wealth of experience and expertise to the platform. With a knack for concise yet impactful storytelling, he crafts articles that captivate readers and provide valuable insights. Ahmad's writing style strikes a balance between casual and professional, making complex topics accessible without compromising depth.
