US colleges grapple with teaching AI

PITTSBURGH • About 18 months ago, Dr Shawn Blanton, a professor of electrical and computer engineering at Carnegie Mellon University, met with some of his graduate students to redesign his course on artificial intelligence (AI).

“We need to transform this course to make it more relevant outside these walls,” he said.

It had only been three years since Dr Blanton started the class, but as AI moves from the stuff of dystopian fantasies – robots run amok – to the reality of everyday use, universities around the United States are grappling with the best ways to teach it.

This year, Carnegie Mellon said it became the first university in the US to offer a separate undergraduate AI degree, through its School of Computer Science.

The Massachusetts Institute of Technology last month announced plans to establish a college for AI, backed by US$1 billion (S$1.4 billion) in investments.

And the expansion is not just happening in the country’s top science and technology schools. The University of Rhode Island this autumn opened an AI lab operated by its college library.

But this growth also means new challenges, such as figuring out how to teach the subject in ways understandable to those who are not computer science majors and addressing ethical dilemmas raised by the technology, such as privacy and job displacement.

“We have to start teaching those who will be practitioners and users in the broad discipline of AI, not just computer scientists,” said Dr Emily Fox, an associate professor of computer science, engineering and statistics at the University of Washington.

Dr Fox developed an AI course for non-majors, which was first offered last spring. To qualify, students needed only to have completed courses in basic probability and basic programming, far fewer prerequisites than are typically required of students taking AI classes.

Interest was so high that she had to cap enrolment at 110 students.

The ethical issues raised by AI – among them privacy, security and job displacement – and how to teach them are also something educators across the US are wrestling with.

And many professors and students say more needs to be done in AI classes – not just in separate ethics courses – to ensure students become workers who are thoughtful about the role of AI.

“For instance, we think of self-driving cars as 20 years down the road, but the way things are progressing, it will be a lot sooner,” said Mr Dillon Pulliam, who is studying for a master’s degree in electrical and computer engineering at Carnegie Mellon. “We need policies – if the car hits a pedestrian, who is responsible?”

At the University of Washington, a new class called Intelligent Machinery, Identity And Ethics is being taught this autumn by a team leader at Google and the co-director of the university’s Computational Neuroscience programme.

Dr Daniel Grossman, a professor and deputy director of undergraduate studies at the university’s Paul G. Allen School of Computer Science and Engineering, explained the purpose this way: The course “aims to get at the big ethical questions we’ll be facing, not just in the next year or two but in the next decade or two”.

Dr David Danks, a professor of philosophy and psychology at Carnegie Mellon, has just started teaching a class called AI, Society And Humanity.

The class is an outgrowth of faculty coming together over the past three years to create shared research projects, he said, because students need to learn from both those who are trained in the technology and those who are trained in asking ethical questions.

NYTIMES