What Aristotle Would Say About Algorithmic Virtue
What would Aristotle actually say about algorithmic virtue?
Aristotle would say you are asking the wrong question. The question is not “what should the algorithm do?” The question is “what kind of person are you becoming by building it?”
I watched an AI team debate for 45 minutes whether their recommendation algorithm should optimize for engagement or well-being. Both sides presented data. Both sides cited ethical frameworks. Neither side asked the question Aristotle would have asked first: “What kind of engineers are we becoming by having this debate in this way?” The team treated the algorithm as the moral agent. Aristotle would have treated the engineers as the moral agents. The algorithm has no character. The people building it do.
This is not a subtle distinction. It changes everything about how you approach AI ethics. Rule-based ethics asks: “Does this algorithm comply with our guidelines?” Virtue ethics asks: “Are we the kind of people who would notice when our guidelines are insufficient?”
How does virtue ethics differ from rule-based AI governance?
Rules tell you what to do in situations you have anticipated. Virtue prepares you for situations you have not anticipated. In a field where the consequences of technology are inherently unpredictable, character matters more than compliance.
I have reviewed 8 corporate AI ethics policies. They share a common structure: a list of principles (fairness, transparency, accountability) followed by a list of prohibited practices. This is deontological ethics applied to technology: a set of rules. The problem is that the most consequential ethical failures in AI are not violations of known rules. They are failures of perception, failures to notice that something is wrong before anyone writes a rule about it.
Aristotle’s concept of phronesis (practical wisdom) addresses exactly this gap. Phronesis is the virtue of perceiving what a situation requires and responding appropriately, even when no rule covers the case. An engineer with phronesis does not need a policy document to tell them that a facial recognition system trained on a dataset that is 90% light-skinned faces will underperform on darker-skinned faces. They perceive the problem because they have cultivated the habit of asking “who is not represented in this dataset?”
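The habit of asking “who is not represented?” can itself be made routine. A minimal sketch, assuming hypothetical records with a `skin_tone` label (the attribute name and the 20% threshold are illustrative, not from any particular policy):

```python
from collections import Counter

def representation_report(records, attribute):
    """Share of each value of a demographic attribute in a dataset."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training records mirroring the 90/10 split above.
data = [{"skin_tone": "light"}] * 90 + [{"skin_tone": "dark"}] * 10

shares = representation_report(data, "skin_tone")
underrepresented = [g for g, s in shares.items() if s < 0.2]
print(shares)            # {'light': 0.9, 'dark': 0.1}
print(underrepresented)  # ['dark']
```

The point is not the arithmetic, which is trivial, but that running a report like this in every data review turns a question of character into a repeated practice.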
This connects to a broader pattern I have observed in epistemic humility as engineering competency. The most dangerous engineers are not the ones who break rules. They are the ones who follow rules without questioning whether the rules are sufficient.
How do you cultivate virtue in an engineering organization?
Virtue is cultivated through practice, not training. You become courageous by practicing courage. You become fair by practicing fairness. Organizational virtue requires organizational habits, not organizational policies.
- Habituate ethical perception: In every design review, dedicate 10 minutes to asking “who could this harm, and how would we know?” Not as compliance. As practice. Aristotle argued that virtue becomes second nature through repetition.
- Reward moral courage: Aristotle listed courage as the first virtue because without it, no other virtue can be practiced. An organization that punishes engineers for raising ethical concerns has no ethical culture, regardless of its policies.
- Practice the mean: Aristotle’s doctrine of the mean holds that virtue lies between excess and deficiency. The virtuous approach to AI transparency is neither full opacity nor radical openness, but the appropriate level of explanation for the context. This requires judgment, not rules.
What does institutional character formation look like in practice?
Institutions, like individuals, develop character through repeated action. An organization’s actual values are not what it writes in its ethics policy. They are what it does when compliance is expensive.
I worked with a team that discovered a bias in their lending model 3 weeks before a major product launch. The bias affected 12% of applicants from minority zip codes. Fixing it would delay the launch by at least 6 weeks and cost an estimated $340,000 in engineering time. The team’s ethics policy clearly required them to fix it. That was not the interesting part. The interesting part was that the team had already identified the bias themselves, before any external audit, because they had cultivated the habit of testing for it.
Aristotle would have said this team possessed arete (excellence of character). Not because they followed the rule, but because their habitual practices made the rule unnecessary. They had become the kind of people who notice bias because they had practiced noticing it hundreds of times. According to the Nicomachean Ethics, this is exactly how virtue works: it begins as conscious effort and becomes, through repetition, a stable disposition of character.
The guardian agent pattern embeds this principle in system design. But no architectural pattern can replace the character of the people who build and maintain the system. Architecture is necessary. Virtue is prior.
“We are what we repeatedly do. Excellence, then, is not an act but a habit.” — Aristotle (as paraphrased by Will Durant)
The AI ethics industry has spent a decade writing rules for algorithms. Aristotle would have spent that decade forming the character of the people who build them. Rules are necessary. But rules without virtue produce compliance without conscience. And compliance without conscience is how 89% of AI practitioners can receive ethics training while 93% remain unprepared for the ethical situations no training anticipated. The algorithm does not need virtue. The engineer does.