AI Safety, Ethics, and Policy

COMS W3995 - Spring 2018

  • Location: 467 EXT Schermerhorn
  • Time: Mon Wed 10:10am-11:25am
  • Piazza
  • Instructor: Chad DeChant
  • Office Hours: Monday 11:30-12:00, Wednesday 3:30-4:30 in Mudd 535 (& by appointment)
  • TA: Boyuan Chen
  • Office Hours: Friday, 3-5pm, Mudd 535 (& by appointment)
  • email: bc2699@columbia.edu

Course Description

This course will examine some of the issues raised by the increasing use of artificial intelligence (AI). We will consider a range of possible problems arising from AI and how researchers and policy makers might address them. We will look at how AI might be engineered to operate within safety, ethical, and legal limits. We will also study the economic and political effects that AI could have on society.

We will begin by reading Nick Bostrom's Superintelligence on the possibility and risks of advanced AI.

The course will then consider some legal and policy issues related to the use of AI systems, including fairness, privacy, and liability. We will examine proposed regulations, e.g. by the European Parliament, that provide individuals with a right to explanations when decisions made by an AI agent affect them. We will also ask what ethical considerations should guide computer scientists and others who create artificially intelligent agents.

Economic issues around AI will come next, including the threat of mass unemployment caused by the replacement of workers by AI systems and the consequent effects on economic inequality. Implications for domestic and international politics will be considered.

The course will then move on to consider autonomous weapons systems and efforts to ban or regulate such systems.

We will then look at ways researchers are currently working to ensure the safe operation of AI, as well as speculations about what directions research in the area might take. We will study the challenges of ensuring that artificially intelligent agents behave ethically, or at least legally and safely.

Finally, we will end with a discussion of legal and philosophical work already being done that raises the possibility that AI agents could one day deserve rights or moral consideration.

For details, please go to the Course Syllabus page.

Required Texts

Nick Bostrom, Superintelligence

Virginia Eubanks, Automating Inequality

Wendell Wallach and Colin Allen, Moral Machines

Suggested texts

Ryan Calo, Robot Law

Patrick Lin, Keith Abney, and George Bekey, Robot Ethics

Patrick Lin, Ryan Jenkins, and Keith Abney, Robot Ethics 2.0

Course Requirements

The course will be different from most classes in the Computer Science Department in that it will involve a great deal of reading, writing, and discussion.

Everyone will be required to write a 5-6 page paper, due just before spring break.

Students will then have a choice of how to fulfill the final project requirement. Projects may consist of either a research paper (~10-12 pages) on a topic related to the course or a project involving coding. There will be a straightforward final exam on the last day of class, and students will present their final projects in short five-minute presentations during the scheduled final exam time.

Grading will be based on:

*A short description of your plan for this part of the requirements will be due shortly after spring break. This plan is only preliminary and can be changed. An updated, final plan and a bibliography or list of related research will be due April 2nd, and a rough draft (or very detailed outline) will be due April 18th. Students are invited and encouraged to come to the instructor's office hours early and often to discuss their papers and other aspects of the class.

Prerequisites

Students must have taken at least one 4000-level Intelligent Systems track course (e.g. Artificial Intelligence, Machine Learning, Natural Language Processing, Computational Genomics, Computational Aspects of Robotics).