This course will examine some of the issues raised by the increasing use of artificial intelligence (AI). We will consider a range of possible problems arising from AI and how researchers and policy makers might address them. We will look at how AI might be engineered to operate within safety, ethical, and legal limits. We will also study the economic and political effects that AI could have on society.
We will begin by reading Nick Bostrom's Superintelligence on the possibility and risks of advanced AI.
The course will then consider some legal and policy issues related to the use of AI systems, including fairness, privacy, and liability. We will examine proposed regulations, such as those advanced by the European Parliament, that provide individuals with a right to an explanation when decisions made by an AI agent affect them. We will also ask what ethical considerations should guide computer scientists and others who create artificially intelligent agents.
Economic issues around AI will come next, including the threat of mass unemployment caused by the replacement of workers by AI systems and the consequent effects on economic inequality. We will also consider the implications for domestic and international politics.
The course will then move on to consider autonomous weapons systems and efforts to ban or regulate such systems.
We will then look at how researchers are currently working to ensure the safe operation of AI, as well as speculation about what directions research in the area might take. We will study the challenges of ensuring that artificially intelligent agents behave ethically, or at least legally and safely.
Finally, we will discuss legal and philosophical work already being done that raises the possibility that AI agents could one day deserve rights or moral consideration.
For details, please go to the Course Syllabus page.
Nick Bostrom, Superintelligence
Virginia Eubanks, Automating Inequality
Wendell Wallach and Colin Allen, Moral Machines
Ryan Calo, Robot Law
Patrick Lin, Keith Abney, and George Bekey, Robot Ethics
Patrick Lin, Ryan Jenkins, and Keith Abney, Robot Ethics 2.0
The course will differ from most classes in the Computer Science Department in that it will involve a substantial amount of reading, writing, and discussion.
Everyone will be required to write a 5-6 page paper, due just before spring break.
Students will then choose how to fulfill the final project requirement: either a research paper (~10-12 pages) on a topic related to the course or a project involving coding. There will be a straightforward final exam on the last day of class, and students will present their final projects in short five-minute presentations during the scheduled final exam time.
Grading will be based on:
- 20% Five- to six-page paper
- 20% In-class final exam
- 40% Final project (choose one):
  - Option one: programming project
  - Option two: 10-12 page research paper
- 20% Class participation
Students must have taken at least one 4000-level Intelligent Systems track course (e.g., Artificial Intelligence, Machine Learning, Natural Language Processing, Computational Genomics, Computational Aspects of Robotics).