The Ethics of AI Consulting: Selling Responsibility
What is the core tension in AI ethics consulting?
The core tension is that the market rewards deliverables (documents, frameworks, assessments) while actual responsible AI requires ongoing operational change that is harder to sell and harder to measure.
I spent 18 months studying the AI ethics consulting market. The pattern was consistent across firm sizes: consultants delivered beautifully formatted ethics frameworks, comprehensive risk assessments, and detailed governance recommendations. Clients received binders. Some received slide decks. A few received custom software tools. In 14 of the 23 engagements I reviewed (61%), the deliverables were sitting unused six months after delivery. The average engagement cost was $185,000. The average organizational behavior change was negligible.
This is not because the consultants were incompetent or dishonest. It is because the business model incentivizes the wrong outputs. Consulting firms bill for deliverables: hours spent, documents produced, presentations given. Responsible AI, however, is an operational discipline. It requires changing how people work every day. You cannot deliver that in a 12-week engagement and walk away.
Why does ethics-washing persist in the consulting market?
Ethics-washing persists because clients often want the appearance of responsible AI (for PR, regulatory positioning, or board reporting) more than the operational reality.
I interviewed procurement leads at 9 organizations that had purchased AI ethics consulting. Five explicitly stated that the primary motivation was “demonstrating due diligence” to their board or regulators. Three mentioned customer-facing PR as a driver. Only one described the primary motivation as genuinely improving AI system outcomes. This demand-side problem shapes the supply side: when buyers want documentation, sellers produce documentation. A growing body of criticism of ethics-washing has noted that this pattern mirrors earlier waves of corporate social responsibility consulting, in which impressive reports substituted for structural change.
The consulting operations paradox applies directly here: consultants are incentivized to produce work that satisfies the buyer at the point of sale, not work that transforms the organization 12 months later. A 200-page governance framework looks impressive in a board presentation. Whether anyone follows it is a question that surfaces long after the invoice is paid.
How should responsible AI consulting actually work?
Responsible AI consulting should be structured as embedded operational support with success metrics tied to behavioral outcomes, not deliverable completion.
- Phase 1: Diagnostic (2-4 weeks): Assess current decision-making processes, not just current AI systems. Identify the 3-5 highest-risk AI decisions the organization makes. Map the people, processes, and tools involved. Deliverable: a risk-prioritized action plan, not a comprehensive framework.
- Phase 2: Implementation (8-12 weeks): Work alongside the team to modify existing workflows. Embed ethics checkpoints into the processes engineers already follow, as I described in “building ethics reviews engineers follow.” Measure adoption weekly. Adjust based on what teams actually do, not what the framework says they should do.
- Phase 3: Transition (4 weeks): Transfer ownership to internal staff. Train 2-3 internal champions. Establish quarterly check-ins rather than ongoing dependency. The goal is to make the consultant unnecessary within 6 months.
- Success metric: Measure behavior change (what percentage of AI deployments go through ethics review) rather than deliverable completion (did we produce the governance document). I have seen this approach achieve 78% process adoption versus the 22% typical of document-only engagements.
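The success metric above is simple enough to compute from deployment records. Here is a minimal sketch of how that adoption rate might be tracked; the `Deployment` record and its fields are illustrative, not a real client system:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """One AI system deployment record. Fields are illustrative."""
    name: str
    ethics_review_completed: bool

def adoption_rate(deployments: list[Deployment]) -> float:
    """Percentage of deployments that went through ethics review."""
    if not deployments:
        return 0.0
    reviewed = sum(1 for d in deployments if d.ethics_review_completed)
    return 100.0 * reviewed / len(deployments)

deployments = [
    Deployment("churn-model", True),
    Deployment("resume-screener", True),
    Deployment("pricing-engine", False),
    Deployment("chat-assistant", True),
]
print(f"{adoption_rate(deployments):.0f}% of deployments reviewed")  # 75%
```

The point is not the code but the unit of measurement: a percentage of real deployments, sampled weekly, rather than a checkbox for “framework delivered.”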
What obligations do AI ethics consultants have to the market?
AI ethics consultants have an obligation to be honest about what a consulting engagement can and cannot achieve, even when that honesty costs them the sale.
The uncomfortable truth is that some organizations are not ready for responsible AI consulting. They lack the organizational maturity, the executive commitment, or the willingness to change operational behavior. Selling a $185,000 engagement to an organization that will shelve the deliverables is not responsible consulting. It is revenue extraction dressed in ethical language. The ISO/IEC 42001 standard for AI management systems provides useful criteria for organizational readiness, but few consulting firms use readiness assessments to qualify prospects. Doing so would reduce their pipeline.
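A readiness assessment need not be elaborate. The sketch below shows one hypothetical way to screen a prospect on the three factors named above (maturity, executive commitment, willingness to change); the criteria names and all-or-nothing threshold are my assumptions, not drawn from ISO/IEC 42001:

```python
# Hypothetical readiness screen: qualify a prospect on the three
# readiness factors before accepting an engagement.
READINESS_CRITERIA = {
    "executive_commitment": "A named executive owns AI risk outcomes",
    "organizational_maturity": "Existing processes are followed in practice",
    "willingness_to_change": "Teams have changed workflows before, not just policies",
}

def qualify_prospect(answers: dict[str, bool]) -> bool:
    """Return True only if the prospect meets every readiness criterion.

    `answers` maps each criterion name to an honest yes/no assessment.
    """
    return all(answers.get(c, False) for c in READINESS_CRITERIA)

# A prospect buying documentation for regulatory cover typically fails
# the willingness-to-change criterion:
print(qualify_prospect({"executive_commitment": True,
                        "organizational_maturity": True,
                        "willingness_to_change": False}))  # False
```

Used at the top of the pipeline, a screen like this is exactly what shrinks the pipeline, which is why few firms adopt one.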
I have turned down 4 AI ethics engagements in the past year because the client was clearly buying documentation for regulatory cover rather than operational change. Each represented $50,000 to $120,000 in potential revenue. Each would have produced shelf-ware. The market will mature when more consultants make this choice, and when more clients understand that process, not documentation, is the product.