Training for an AI-proof career

Elijah Rivera, Assistant Professor of Computer Science

Photo Credit: Gaelen Morse

By David Levin
February 10, 2026

When Brandeis’ Computer Science department asked new faculty member Elijah Rivera to teach a course on AI and programming, he turned the idea on its head. Instead of teaching students to use the latest AI tools, he's giving them a philosophical framework for coding that will outlast any single technology. We sat down with Rivera to learn how this approach embodies Brandeis's distinctive vision of computer science education.

Your course is called "Automation in Software Development" rather than something like "AI and Programming." What's behind that choice?

I'm an AI skeptic — not because these tools aren't powerful, but because I think we need to approach them with our heads on straight. The term "AI" has become so broad it's almost meaningless, like using the word "vehicle" to describe both bicycles and Boeing 747s. What I can teach meaningfully is automation, which has been the central project of computer science since its inception. By understanding automation as a continuum — from water wheels to factory assembly lines to modern LLMs — students develop frameworks for thinking about any tool that might emerge, rather than just training on today's flashy technology.

How does studying historical automation help students navigate current AI tools?

History keeps repeating itself in technology. We start the course reading Ursula Franklin's work on holistic versus prescriptive technologies — the difference between one person controlling an entire process and one person delegating tasks to many others, which is effectively a form of automation. The latter is exactly how systems like LLMs are designed to be used, with larger models delegating subtasks to smaller specialized ones. When students understand these enduring design principles, they're positioned for whatever direction the field takes next, because new technologies always emerge from these same foundational ideas.

What makes this approach particularly suited to Brandeis students?

This really aligns with President Levine’s vision of having one foot in industry and one foot in the library. Students need technical expertise to survive professionally, but they also need to reason at a philosophical level to keep their humanity intact and make intentional engineering decisions. We're not just teaching students to use today's tools — we're teaching them to think critically about automation at every scale, from individual functions to entire systems. That kind of analytical framework doesn't become obsolete when the next platform emerges.

How does thinking about programming at different scales change how students approach their work?

One fundamental concept in computer science is recursion — breaking problems into smaller versions of themselves. We apply the same design principles whether we're writing a single function or architecting an entire system. A function is a chunk of code that takes an input, transforms it, and spits out an output. A system component, like a computer’s operating system, does the same thing at a larger scale: we give it an input, and it produces a response. Understanding this continuity helps students see that good software design isn't about any particular language or tool — it's about applying consistent principles of modularity and abstraction. Those skills transfer across whatever technologies they encounter in their careers.
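The recursive idea Rivera describes — solving a problem by breaking it into smaller versions of itself — can be sketched in a few lines of Python. This is a generic textbook illustration, not material from his course:

```python
# Recursion: solve a problem by splitting it into smaller
# instances of the same problem, then combining the results.
def total(values):
    """Sum a list by splitting it in half and summing each half."""
    if len(values) == 0:       # base case: nothing left to add
        return 0
    if len(values) == 1:       # base case: a single value
        return values[0]
    mid = len(values) // 2
    # Each half is a smaller version of the original problem,
    # handled by the very same function.
    return total(values[:mid]) + total(values[mid:])

print(total([3, 1, 4, 1, 5]))  # prints 14
```

The same input-transform-output shape holds at every scale: the function as a whole takes a list and returns a number, and each recursive call does exactly the same thing on a smaller piece — the continuity between small and large components that the course emphasizes.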

How does this approach help students stay relevant in a field where AI tools are constantly evolving?

The skills that make you valuable are the ones AI can't replicate — the ability to reason about problems, understand underlying principles, and make intentional design decisions. If you only learn to use specific tools or platforms, you're vulnerable to becoming obsolete every time something new emerges. But if you understand why we design systems the way we do, and how to evaluate whether an automation fits your needs, you become the person deciding how and when to deploy these tools rather than just following instructions. What students learn in this class — critical thinking about automation, and design principles that transcend any single technology — those are exactly the capabilities that remain human. It's not about competing with AI; it's about developing the judgment and philosophical grounding that lets you work effectively with whatever tools exist, while maintaining your agency as an engineer.