
Google declined to comment on Satrajit Chatterjee’s departure, but it defended the study he criticized and its decision not to publish his rebuttal.

Less than two years after terminating two researchers who criticized the biases built into its artificial intelligence systems, Google has fired a researcher who questioned a study on the capabilities of a particular kind of AI used to design computer chips.

The scientist, Satrajit Chatterjee, led a team that challenged a celebrated study published last year in the scientific journal Nature, which claimed that computers could design some computer chip components faster and more efficiently than humans could.

Chatterjee, 43, was fired in March after Google told his team it would not publish a paper rebutting some of the claims made in the Nature study, according to four people with knowledge of the case who were not authorized to speak about it publicly. Google said in a statement that Chatterjee had been terminated with cause.

The company offered a full-throated defense of the study Chatterjee attacked and of its decision not to publish his rebuttal.

“We extensively examined the original Nature publication and stood by the peer-reviewed results,” Google Research vice president Zoubin Ghahramani said. “The following contribution was likewise thoroughly scrutinized, but it did not match our requirements for publishing.”

The firing of Chatterjee was the latest sign of discord in and around Google Brain, an AI research division seen as critical to the company’s future. After investing billions of dollars in developing the technology, Google has faced a wide range of criticism over how it uses and portrays it.

The tension among Google’s AI researchers mirrors much larger struggles across the tech sector, where worries about new AI technologies are bound up with complex societal issues that entangle these systems and the people who create them.

Among Google’s AI researchers, a pattern of dismissals and dueling claims of misconduct has caused increasing alarm at a corporation that has staked its future on incorporating artificial intelligence into everything it does. Sundar Pichai, chief executive of Google’s parent company, Alphabet, has compared artificial intelligence to the emergence of electricity or fire.

Google Brain began a decade ago, when a group of researchers built a system that learned to recognize cats in YouTube videos. As it became clear that machines could learn new abilities, Google’s top executives rushed to expand the lab, laying the groundwork for a company-wide transformation driven by this new form of artificial intelligence. The research group came to symbolize the company’s loftiest aspirations.

Although Google has proclaimed the technology’s promise, it has faced pushback from employees over how it is applied. In 2018, Google employees protested a contract with the Department of Defense, concerned that the company’s AI could ultimately be used to kill people. Google eventually withdrew from the project.

In December 2020, Google fired Timnit Gebru, a co-leader of its Ethical AI team, over her efforts to publish a research paper pointing out the shortcomings of a new kind of AI system that learns from language.

Gebru had sought permission to publish a research paper on how AI-based language systems, including Google’s own, can end up using the prejudiced and hateful language they pick up from text in books and on websites. Gebru said she had grown increasingly exasperated by Google’s response to her objections, notably its refusal to publish the paper.

Google later dismissed Margaret Mitchell after she publicly criticized the company’s handling of the Gebru episode. The company said Mitchell had violated its code of conduct.

The Nature paper, published in June, promoted a technique called reinforcement learning, which the study said could improve the design of computer chips. Many saw the technology as a breakthrough for chip design, both for artificial intelligence hardware and more broadly. Google says it has used the method to develop its own AI processors.
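As a rough illustration of the general idea, and not a description of Google’s actual system, the sketch below boils reinforcement-learning-style chip placement down to a reward-driven search: hypothetical circuit blocks (“macros”) are dropped onto a small grid, each candidate layout is scored by its estimated wirelength, and the best-scoring layout is kept. The grid size, macro names, net list, and the random-search stand-in for a learned policy are all assumptions made for illustration.

```python
# Toy illustration only -- not the method from the Nature paper.
# It mimics the reward structure of reinforcement-learning chip placement:
# layouts are proposed and rewarded for shorter estimated wirelength.
import random

GRID = 8                                          # hypothetical placement canvas (GRID x GRID cells)
MACROS = ["cpu", "cache", "io", "dsp"]            # hypothetical blocks to place
NETS = [("cpu", "cache"), ("cpu", "io"), ("cache", "dsp")]  # toy connectivity between blocks

def wirelength(placement):
    """Sum of Manhattan distances between connected macros (lower is better)."""
    return sum(
        abs(placement[a][0] - placement[b][0]) + abs(placement[a][1] - placement[b][1])
        for a, b in NETS
    )

def random_placement():
    """Place each macro on a distinct grid cell chosen at random."""
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], len(MACROS))
    return dict(zip(MACROS, cells))

# Crude stand-in for policy learning: sample many layouts ("episodes")
# and keep the one with the highest reward (negative wirelength).
best = random_placement()
best_reward = -wirelength(best)
for episode in range(1000):
    candidate = random_placement()
    reward = -wirelength(candidate)
    if reward > best_reward:
        best, best_reward = candidate, reward

print("best layout:", best)
print("estimated wirelength:", -best_reward)
```

A real system along these lines would train a neural policy that places blocks one at a time and balances wirelength against congestion and density; the random search above is only meant to convey the reward-driven framing.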

About a year earlier, Google had released a similar paper on machine learning in chip design. Around that time, according to people familiar with the situation, Google asked Chatterjee, a computer scientist with a doctorate from the University of California, Berkeley, who had previously worked for Intel, whether the method could be sold or licensed to a chip design company.

Three people close to Chatterjee said that, in an internal email, he expressed doubts about the paper’s claims and questioned whether the technology had been rigorously tested.

The argument over that earlier work was still simmering when Google submitted the new paper to Nature. For the submission, Google made some changes to the earlier paper and removed the names of two authors, who had worked closely with Chatterjee and had also expressed concerns about the paper’s central claims, according to people familiar with the situation.

Some Google researchers were taken aback when the newer paper was published. They believed it had not gone through the publication approval process that Jeff Dean, the company’s senior vice president in charge of most of its AI activities, had said was necessary in the aftermath of Gebru’s dismissal.

Google and one of the paper’s two lead authors, the computer scientist Azalia Mirhoseini, said the revisions from the original paper did not require the full approval process. Google did authorize a team of internal and external researchers to work on a study that disputed several of the paper’s claims.

The team submitted the rebuttal paper to a resolution committee for publication approval. Months after the submission, the paper was deemed insufficient and turned down.

The researchers who worked on the rebuttal escalated the dispute to Pichai and Alphabet’s board of directors, arguing that Google’s refusal to publish the rebuttal violated one of its own AI principles. Shortly afterward, Chatterjee was informed that he was no longer an employee, the sources said.

Anna Goldie, the paper’s other lead author, said that Chatterjee had dismissed the idea from the start. When he later criticized it, she added, he could not back up his claims and rejected the evidence they provided in response.

Goldie said in a written statement that “Sat Chatterjee has undertaken a campaign of lies against Azalia and me for over two years now.”

She said the research had been vetted by Nature, one of the world’s most esteemed scientific journals, and that Google had used the technology to develop new chips that are already running in its data centers.

Chatterjee’s lawyer, Laurie M. Burgess, said it was sad that “some authors of the Nature publication are trying to shut down scientific dialogue by defaming and abusing Dr. Chatterjee for just pursuing scientific transparency.” Burgess also questioned the leadership of Dean, who was one of 20 co-authors of the Nature paper.

Burgess added, “Jeff Dean’s activities to restrict the availability of all relevant experimentation results, not just those that fit his favorite hypothesis, should be highly alarming to both the scientific community and those who use Google products.”

Dean did not respond to requests for comment by phone or email. When the rebuttal paper was shared with academics and other specialists outside Google, a global debate over chip design erupted.

Some experts are unsure what Google’s research means for the broader tech industry, though the chipmaker Nvidia says it has used comparable approaches in designing its own chips.

The kind of AI technology outlined in Google’s study has the potential to be “very excellent,” said Jens Lienig, a professor at the Dresden University of Technology, but he added that it is still unclear whether it is working.