Teaching Future Engineers to Question AI
In the Ethical and Responsible Artificial Intelligence classroom, the discussion about artificial intelligence (AI) drifted quickly from lecture slides to existential speculation. Several students raised their hands at once, steering the conversation toward a familiar set of questions: “Will AI become sentient? Could it eventually harm humanity? Are we creating something we won’t be able to control?” The energy in the room shifted as the conversation edged toward science-fiction territory, reflecting broader public anxieties about runaway technology.

“We don’t need to worry about some future superintelligence to see AI causing harm,” said Ansaf Salleb-Aouissi to her summer class of 32 students. “The harm is happening right now.”
The Senior Lecturer went on to describe biased hiring algorithms, discriminatory lending systems, and facial recognition tools that fail for people with darker skin, and how these present-day problems are affecting real people. As artificial intelligence rapidly moves from research labs into everyday decision-making systems, the “question is no longer just what AI can do, but how it should be built and deployed responsibly.”
The development of a dedicated course on AI ethics grew out of both global dialogue and hands-on research experience. After delivering a keynote on AI and inclusion at an international symposium, Salleb-Aouissi realized that ethical considerations couldn’t be treated as side discussions; they needed to be embedded directly into how AI is taught and practiced.
That urgency was reinforced by a research project on predicting adverse pregnancy outcomes such as premature birth and preeclampsia. The work highlighted a critical reality: highly accurate models can still fail ethically. In this case, the study population was mostly White patients, with African American patients making up about 25 percent. The model performed poorly on that underrepresented group, so the team applied post-processing techniques to mitigate bias and improve the model’s performance. These experiences shaped the course’s core philosophy: AI systems must be not only accurate but also fair, interpretable, and trustworthy, and students need practical training to achieve that.
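The interview doesn’t specify which post-processing technique the team used; one common family of methods picks a separate decision threshold per group so that true positive rates roughly match (an equal-opportunity-style adjustment). The sketch below illustrates that idea on synthetic data standing in for the real cohort.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a clinical risk model's outputs (not the study's
# real data): a risk score, a true outcome, and a group label per patient.
n = 1000
group = rng.choice(["A", "B"], size=n, p=[0.75, 0.25])  # imbalanced groups
y_true = rng.binomial(1, 0.2, size=n)
# Simulate a model whose scores are noisier for the smaller group B.
noise = np.where(group == "B", 0.25, 0.10)
score = np.clip(0.6 * y_true + rng.normal(0.2, noise), 0.0, 1.0)

def tpr(y, s, threshold):
    """True positive rate of the decision rule `s >= threshold`."""
    positives = y == 1
    return (s[positives] >= threshold).mean()

# Post-processing step: instead of one global cutoff, choose a per-group
# threshold whose true positive rate is closest to a shared target.
target = 0.80
candidates = np.linspace(0.0, 1.0, 101)
for g in ("A", "B"):
    mask = group == g
    best = min(candidates,
               key=lambda t: abs(tpr(y_true[mask], score[mask], t) - target))
    print(f"group {g}: threshold={best:.2f}, "
          f"TPR={tpr(y_true[mask], score[mask], best):.2f}")
```

Notably, using group-specific thresholds is itself ethically contested, which previews the fairness tensions discussed in the interview below.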
We sat down with Salleb-Aouissi to learn more about the course and why it is essential to think about ethics in AI.
Q: What classroom topic tends to spark the most debate?
One topic that consistently generates strong debate is the tension between group fairness and individual fairness.
Here’s the question I pose: Should we ensure fairness at the group level, for example, by checking that women and men get hired at equal rates? Or should we ensure that similar individuals get treated similarly, regardless of which group they belong to?
Students initially think these should be the same thing, but they quickly discover they can conflict. You might achieve group parity while treating similar individuals very differently. Or you might treat similar individuals the same way but end up with significant disparities between groups.
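A toy construction, not from the course materials, makes the conflict concrete: a hiring rule that selects the top half of each group guarantees equal hire rates, yet can reject a higher-scoring candidate from one group while accepting a lower-scoring candidate from the other.

```python
# Toy example: hire the top-scoring half of each group.
# Each candidate is (name, qualification score, group).
candidates = [
    ("Ana",  75, "women"), ("Bea", 70, "women"),
    ("Carl", 90, "men"),   ("Dan", 85, "men"),
]

def hire_top_half_per_group(people):
    """Equal hire rates by construction: half of each group is hired."""
    hired = set()
    for grp in ("women", "men"):
        members = sorted((p for p in people if p[2] == grp),
                         key=lambda p: -p[1])
        hired.update(name for name, _, _ in members[: len(members) // 2])
    return hired

print(hire_top_half_per_group(candidates))  # {'Ana', 'Carl'}
# Group fairness holds: 1 of 2 women and 1 of 2 men are hired.
# Individual fairness fails: Dan (85) is rejected while the lower-scoring
# Ana (75) is hired, so comparable individuals receive different treatment.
```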
Students have genuinely different intuitions about which approach is more fair. Some argue that group fairness addresses systemic discrimination and historical inequities. Others argue that individual fairness is what fairness truly means: treating people based on their individual characteristics, rather than their group membership.
There’s no silver bullet. Both approaches have merit, and the choice often depends on the specific context and the desired outcome. The realization that fairness itself is contested and context-dependent is one of the most important lessons students take from the course.
Q: How do you integrate technical learning with ethical reasoning?
I don’t have a philosophical background, but I connect every ethical dimension back to its intellectual foundations and societal implications.
The key is showing students that technical choices embody ethical commitments. When we discuss fairness metrics, we don’t just compute them; we ask what conception of justice each one assumes. I use case studies where technical decisions had real ethical consequences: bias in criminal risk assessment, privacy violations in contact tracing, interpretability failures in medical diagnosis.
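A small sketch, not the course’s own code, shows what that can look like in practice: computing two standard metrics on the same hypothetical predictions, with comments noting the conception of justice each one assumes.

```python
import numpy as np

# Hypothetical predictions from some classifier, with a binary group label.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Demographic parity compares P(pred = 1 | group) across groups.
# Implicit view of justice: positive outcomes should be distributed
# equally, regardless of differences in the underlying labels.
dem_parity = {g: y_pred[group == g].mean() for g in ("a", "b")}

# Equal opportunity compares P(pred = 1 | y = 1, group) across groups.
# Implicit view of justice: equally qualified people deserve equal
# chances; differences in base rates are taken as given, not corrected.
equal_opp = {g: y_pred[(group == g) & (y_true == 1)].mean()
             for g in ("a", "b")}

print("demographic parity:", dem_parity)  # {'a': 0.5, 'b': 0.5}
print("equal opportunity :", equal_opp)   # roughly {'a': 0.67, 'b': 1.0}
```

On these numbers the same predictions satisfy demographic parity exactly while violating equal opportunity, so the verdict on “fair” depends on which metric, and which underlying theory of justice, you choose.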
The goal is for students to see that ethical AI requires both technical expertise and ethical reasoning; the two are inseparable.
Q: What do you hope students take away from the course?
I want students first to be aware of the ethical dimensions of every AI system they encounter or build: recognizing fairness questions, privacy implications, and interpretability needs as fundamental considerations, not afterthoughts.
Second, I want them to have practical tools. Ethical AI requires concrete skills: techniques for bias detection and mitigation, methods for building interpretable models, and frameworks for privacy-preserving systems.
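As one hedged illustration of such a tool (the course’s actual exercises aren’t described here), the Laplace mechanism from differential privacy answers a counting query while bounding what the result reveals about any single person:

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(data, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy for this query.
    """
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 29, 61, 45, 52, 38, 70, 41]  # toy data; true over-50 count is 3
for eps in (0.1, 1.0, 10.0):  # smaller epsilon: stronger privacy, more noise
    print(f"epsilon={eps}: over-50 count ~ "
          f"{dp_count(ages, lambda a: a > 50, eps):.1f}")
```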
Finally, I hope they will feel responsibility and become advocates for ethical AI in their organizations by asking hard questions, challenging problematic practices, and championing ethical principles even when it’s difficult. As AI practitioners, they have real power to shape how these systems affect people’s lives.
Q: How are AI ethics challenges likely to evolve?
We’ll see progress in areas like privacy and fairness as techniques mature and transition from research to production. But other areas remain challenging. Interpretability is tough, especially with large language models. Robustness and safety concerns are growing as systems become more autonomous.
New challenges are also emerging: AI-generated content, consent issues with training data, and accountability when systems interact unpredictably.
The course stays current by continuously incorporating recent research, industry case studies, and regulatory changes.
Q: For someone curious about AI ethics but not from a technical background, what’s one idea or question from the course that might resonate with them?
They need to question AI: to be aware of its progress, and to understand how it affects them now and in the future, as well as its broader impact on society.
This matters to everyone, not just technical people, because AI increasingly shapes decisions that affect all our lives, from the content we see online to whether we get a loan, a job, or access to healthcare. Understanding and questioning AI is essential for participating in conversations about how these systems should work and whom they should serve.