Ethical question takes center stage at Silicon Valley summit on AI

Execs from Google, Microsoft, and other top tech firms are grilled about putting ‘ethics before business interests’ during Silicon Valley summit on artificial intelligence

  • The discussions at EmTech Digital were run by the MIT Technology Review
  • It underscored how companies are making a bigger show of their moral compass
  • Activists questioned if big firms could deliver on promises to address concerns

Technology executives were put on the spot at an artificial intelligence summit this week, each faced with a simple question growing out of increased public scrutiny of Silicon Valley: ‘When have you put ethics before your business interests?’

A Microsoft Corp executive pointed to how the company considered whether it ought to sell nascent facial recognition technology to certain customers, while a Google executive spoke about the company’s decision not to market a face ID service at all.

The big news at the summit, in San Francisco, came from Google, which announced it was launching a council of public policy and other external experts to make recommendations on AI ethics to the company.

The discussions at EmTech Digital, run by the MIT Technology Review, underscored how companies are making a bigger show of their moral compass.


WHO IS ON GOOGLE’S AI ETHICS BOARD? 

  • Alessandro Acquisti – Professor of Information Technology and Public Policy at Heinz College, Carnegie Mellon University
  • Bubacarr Bah – Assistant Professor in the Department of Mathematical Sciences at Stellenbosch University
  • De Kai – Professor of Computer Science and Engineering at the Hong Kong University of Science and Technology
  • Dyan Gibbens – CEO of Trumbull, a Forbes Top 25 veteran-founded startup 
  • Joanna Bryson – Associate Professor in the Department of Computer Science at the University of Bath
  • Kay Coles James – President of The Heritage Foundation
  • Luciano Floridi – Professor of Philosophy and Ethics of Information at the University of Oxford
  • William Joseph Burns – Former US deputy secretary of state 

Whether the companies’ efforts have real teeth may sharply affect how governments regulate the firms in the future.

‘It is really good to see the community holding companies accountable,’ David Budden, research engineering team lead at Alphabet Inc’s DeepMind, said of the debates at the conference. 

‘Companies are thinking of the ethical and moral implications of their work.’

Kent Walker, Google’s senior vice president for global affairs, said the internet giant debated whether to publish research on automated lip-reading. 

While beneficial to people with disabilities, it risked helping authoritarian governments surveil people, he said.

Ultimately, the company found the research was ‘more suited for person to person lip-reading than surveillance so on that basis decided to publish’ the research, Walker said. The study was published last July.

Kebotix, a Cambridge, Massachusetts startup seeking to use AI to speed up the development of new chemicals, used part of its time on stage to discuss ethics. Chief Executive Jill Becker said the company reviews its clients and partners to guard against misuse of its technology.

Still, Rashida Richardson, director of policy research for the AI Now Institute, said little has changed on the ethics front since Amazon.com Inc, Facebook Inc, Microsoft and others launched the nonprofit Partnership on AI to engage the public on AI issues.

‘There is a real imbalance in priorities’ for tech companies, Richardson said. 

Considering ‘the amount of resources and the level of acceleration that’s going into commercial products, I don’t think the same level of investment is going into making sure their products are also safe and not discriminatory.’

Google’s Walker said the company has some 300 people working to address issues such as racial bias in algorithms, but acknowledged it still has a long way to go.

‘Baby steps is probably a fair characterization,’ he said. 
