Moral Philosophy in the Age of Artificial Intelligence
This module is concerned with a variety of moral questions raised by the prospect of an artificial superintelligence, that is, ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’.
In this module, you will explore and assess:
- The moral status of an artificial superintelligence: Could it have the moral rights of a human person? Would it be wrong to ‘unplug’ such an entity – would that be tantamount to murder? Should it be free to ‘live its life’, within the familiar constraint that it doesn’t harm others?
- Machines making moral decisions: If an artificial superintelligence outperforms us humans in all domains, should we then let it make moral decisions on our behalf? We already encounter smaller-scale versions of this question, involving the role of AI in courtrooms and driverless cars.
- The risks to the survival of humanity: An existential risk is ‘one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development’. If an artificial superintelligence with its own aims is hostile or indifferent to humans, it could destroy us. What should we do in the face of this risk?
Although it is not strictly required reading, Nick Bostrom’s Superintelligence typically forms part of the core reading for the module.
Beyond the subject matter itself, students will develop their academic writing: shaping arguments, cultivating an academic style, and building their subject-specific vocabulary.
The course combines lectures, seminars, group work and independent study. Lecture content is taught by staff of the University of St Andrews.
Students will receive an official certificate from the University of St Andrews to confirm their successful completion of the course.