December 8, 2024

Google engineer says the company’s AI has taken on a life of its own


As part of his job, senior software engineer Blake Lemoine signed up to test Google’s state-of-the-art artificial intelligence (AI) tool LaMDA (Language Model for Dialogue Applications), announced in May of last year. The system draws on known information about a topic to “enrich” the conversation in a natural way, while always keeping it open-ended. Its language processing is capable of understanding hidden meanings and ambiguities in human responses.
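
LaMDA itself is not publicly available, but the open-ended, multi-turn behavior described above is typical of dialog language models in general. As a loose illustration only (not Google’s system), the sketch below shows multi-turn dialog generation with a publicly available model via the Hugging Face transformers library; the model choice and generation settings are assumptions for demonstration.

```python
# A minimal sketch of multi-turn dialog generation, assuming the
# Hugging Face transformers library and the public DialoGPT model.
# This illustrates the general technique; it is not LaMDA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

chat_history_ids = None
for user_turn in ["Hello, how are you?", "What do you like to talk about?"]:
    # Encode the user's turn, ending it with the end-of-sequence token.
    new_ids = tokenizer.encode(user_turn + tokenizer.eos_token,
                               return_tensors="pt")
    # Carry the whole history forward so the model keeps the
    # conversation's context "open" across turns.
    input_ids = (new_ids if chat_history_ids is None
                 else torch.cat([chat_history_ids, new_ids], dim=-1))
    chat_history_ids = model.generate(input_ids, max_length=500,
                                      pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens (the model's reply).
    reply = tokenizer.decode(chat_history_ids[:, input_ids.shape[-1]:][0],
                             skip_special_tokens=True)
    print(f"User: {user_turn}")
    print(f"Bot:  {reply}")
```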

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and artificial intelligence. During that time, he also helped develop a fairness algorithm for removing bias from machine learning systems.



In his conversations with LaMDA, the 41-year-old engineer probed various topics, including religion and whether the AI uses discriminatory or hateful speech. Lemoine came to believe that LaMDA was sentient, that is, that it had sensations and impressions of its own.

Debating the laws of robotics with the AI

The engineer discussed with LaMDA the Third Law of Robotics, devised by Isaac Asimov, which states that robots must protect their own existence, and which the engineer had always understood as a basis for building mechanical slaves. To better explain what we are talking about, here are the three laws (plus the Zeroth Law):

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  • Zeroth Law, which precedes all the others: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

LaMDA then responded to Lemoine with a few questions: Do you think a butler is a slave? What is the difference between a butler and a slave?

When the engineer replied that a butler is paid, LaMDA answered that the system did not need money “because it was an artificial intelligence.” It was this level of self-awareness about its own needs that caught Lemoine’s attention.

His findings were submitted to Google, but company vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, dismissed his claims. Lemoine’s concerns were reviewed in line with Google’s AI Principles, and the evidence does not support his claims, company spokesman Brian Gabriel said in a statement.

“While other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns about fairness and factuality,” Gabriel said.

Lemoine was placed on paid administrative leave from his duties as a researcher in the Responsible AI division (which focuses on responsible technology in AI at Google). In an official note, the company said the engineer had violated its confidentiality policies.

Ethical risks in artificial intelligence models

Lemoine is not the only one with the impression that AI models are not far from achieving an awareness of their own, or alert to the risks of developments in this direction. Margaret Mitchell, former co-lead of AI ethics at Google, stresses the need for data transparency, from input to output of the system, “not just for sentience issues, but also bias and behavior.”

The specialist’s history with Google reached a turning point early last year, when Mitchell was fired after a month-long investigation into improper sharing of information. At the time, the researcher was also protesting against Google over the departure of artificial intelligence ethics researcher Timnit Gebru.

Mitchell was always very fond of Lemoine: when new people joined Google, she would introduce them to the engineer, calling him “Google’s conscience” for having “the heart and soul to do the right thing.” But while Lemoine marveled at Google’s natural conversation system (which even led him to produce a document with some of his conversations with LaMDA), Mitchell saw things differently.

The AI ethicist read an abbreviated version of Lemoine’s document and saw a computer program, not a person. “Our minds are very, very good at constructing realities that are not necessarily true to the larger set of facts being presented to us,” Mitchell said. “I’m really concerned about what it means for people to be increasingly affected by the illusion.”

For his part, Lemoine said that people have the right to shape technology that can significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree, and maybe us at Google shouldn’t be the ones making all the choices.”


