The Ancient Question in a Digital Age
The lecture hall falls silent as Professor Armando Solar-Lezama, a luminary in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), poses a deceptively simple question:
“How do we ensure machines do exactly what we intend—and nothing more?”
In 2024, as generative AI reshapes industries, this query feels urgent. Yet Solar-Lezama reminds his students that humanity has wrestled with this dilemma since the first tools were forged.
To illustrate, he recounts the myth of King Midas, the ruler whose wished-for golden touch turned even his daughter into a golden statue.
“Technological power always carries unintended consequences,” Solar-Lezama warns. “Today’s AI systems are no different.”
The class—future engineers, programmers, and philosophers—exchanges glances. Among them sits Titus Roesler, an electrical engineering senior drafting a thesis on autonomous vehicles. His work grapples with a chilling scenario:
“If a self-driving car kills a pedestrian, who bears moral responsibility?”
From Pygmalion to ChatGPT—A Revolution in Control
Early programmers painstakingly coded every instruction—modern AI learns on its own (Credit: Unsplash)
Solar-Lezama projects grainy slides from MIT’s archives:
- 1970s: The Pygmalion system demanded line-by-line commands for basic tasks.
- 1990s: Software teams spent years drafting 800-page manuals.
- 2020s: AI systems like ChatGPT are never explicitly programmed; they learn from data, generating novel and sometimes alarming outputs.
“We’ve traded control for creativity,” Solar-Lezama observes. “But at what cost?”
The Double-Edged Sword of Modern AI
✅ Breakthroughs: Medical diagnostics, climate modeling, artistic innovation.
⚠️ Risks: Bias amplification, misinformation, existential threats.
A Bold Experiment—Where Code Meets Conscience
This isn’t a typical computer science class. Ethics of Computing (6.C40/24.C40), launched in Fall 2024, is MIT’s first course to fully merge engineering and philosophy.
Developed through the Common Ground for Computing Education, the class alternates between:
- Solar-Lezama’s technical deep dives (How do neural networks make decisions? A minimal sketch follows this list.)
- Philosopher Brad Skow’s ethical frameworks (What should machines decide?)
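To ground the first of those questions, here is a minimal sketch of how a trained network turns an input into a decision. It is illustrative only: the weights are random stand-ins for values a real network would learn, and nothing in it comes from the course itself.

```python
import numpy as np

# A decision in a small feed-forward network: each layer is a weighted sum
# followed by a nonlinearity, and the final scores become class probabilities.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random stand-in weights: 4 input features -> 8 hidden units -> 2 classes.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

x = np.array([0.2, -1.0, 0.5, 0.0])   # one input example
hidden = relu(W1 @ x + b1)            # learned intermediate representation
probs = softmax(W2 @ hidden + b2)     # the "decision", as probabilities

print(probs, "-> predicted class:", probs.argmax())
```

The sketch also sets up the second question: nothing in those weighted sums says what the network *should* decide.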
“We’re not teaching students what’s ‘right,’” Skow clarifies. “We’re teaching them how to think about right and wrong in uncharted territory.”
Student Spotlight: The Technologists Turned Ethicists
- Alek Westover (Math/CS): “If AI can do any human job, should it earn a salary?”
- Caitlin Ogoe (Computation & Cognition): A self-described “tech skeptic,” Ogoe critiques AI’s impact on her hearing-impaired mother.
- Titus Roesler: His utilitarian analysis of autonomous vehicles divides the class.
The Internet on Trial—Is Technology Eroding Society?
“Is the internet destroying the world?”—a question debated in MIT’s Ethics of Computing course (Credit: Unsplash)
One session begins provocatively:
“So, is the internet destroying the world?”
Students dissect:
- Social media’s mental health toll
- Algorithmic bias (e.g., COMPAS, which ProPublica found falsely flagged Black defendants as high-risk nearly twice as often as white defendants; see the sketch after this list)
- Surveillance capitalism
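The COMPAS finding, in miniature: ProPublica compared error rates across groups, asking how often people who did not reoffend were nonetheless labeled high-risk. The sketch below reproduces that check on invented data (eight fictional defendants, not the real COMPAS records).

```python
import numpy as np

# Invented data: a risk label and an outcome for eight fictional defendants.
group      = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
high_risk  = np.array([1,   1,   0,   1,   0,   1,   0,   0])  # model's label
reoffended = np.array([0,   0,   0,   1,   0,   0,   0,   1])  # what happened

for g in ["A", "B"]:
    # among people in group g who did NOT reoffend...
    mask = (group == g) & (reoffended == 0)
    # ...what fraction were wrongly labeled high-risk?
    fpr = high_risk[mask].mean()
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

On this toy data, group A’s false positive rate is twice group B’s (0.67 vs 0.33): the same kind of gap ProPublica reported, and the gap students are asked to judge.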
Skow introduces two competing fairness theories:
- Substantive Fairness: “Was the outcome just?”
- Procedural Fairness: “Was the process fair?”
The verdict? Even well-designed AI can reinforce systemic inequities.
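Those two theories can be given rough computational proxies, with the caveat that this flattens the philosophy considerably: treat procedural fairness as "one group-blind rule for everyone" and substantive fairness as "similar outcomes across groups." The scores and threshold below are invented for illustration.

```python
import numpy as np

# Invented scores for six applicants in two groups.
scores = np.array([0.9, 0.6, 0.4, 0.8, 0.3, 0.2])
group  = np.array(["A", "A", "A", "B", "B", "B"])

# Procedural proxy: the same group-blind threshold applied to everyone.
approved = scores >= 0.5

# Substantive proxy: did the groups end up with similar approval rates?
for g in ["A", "B"]:
    rate = approved[group == g].mean()
    print(f"group {g}: approval rate = {rate:.2f}")
```

The same procedure yields approval rates of 0.67 and 0.33, which is exactly the tension behind the class verdict: a fair process does not guarantee a just outcome.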
The Road Ahead—Can We Avoid a Modern Midas Curse?
As class ends, Solar-Lezama and Skow debrief over coffee.
“Five years from now, we might laugh at today’s AI panic,” Solar-Lezama muses. “Or we’ll wish we’d taken these questions more seriously.”
Their students—future AI architects and policymakers—will shape that answer.
Key Takeaways for the AI Era
🔹 Perfect control is an illusion: systems that learn will sometimes act in ways their designers never specified.
🔹 Ethics isn’t optional: Bias and accountability must be designed into systems.
🔹 The humanities matter: Philosophy isn’t a footnote—it’s the blueprint.
Join the Conversation
Where should humanity draw the line with AI? Share your thoughts below.