Data Mesh Is an Org Design Problem in a Tech Costume
What is data mesh, and what does it actually require?
Data mesh is an organizational design pattern that distributes data ownership to domain teams, requiring each domain to treat its data as a product, but most implementations focus on the technical infrastructure while ignoring the organizational restructuring that makes the pattern viable.
Dehghani’s original formulation was clear: data mesh is about organizational topology, not tooling. The four principles describe how teams should be structured (domain ownership), what their obligations are (data as a product), what platform capabilities they need (self-serve infrastructure), and how cross-cutting concerns are managed (federated governance). Three of these four principles are organizational. Only one (self-serve infrastructure) is primarily technical.
Yet when I surveyed 14 data mesh initiatives between 2020 and 2025, 12 began with technology selection. The first workstreams were “build a self-serve data platform” and “implement a data catalog.” The organizational questions (who owns what data, how quality standards are enforced across autonomous domains, what happens when a domain team lacks data engineering skills) were deferred. In 9 of those 12, they were never addressed at all.
Why do organizations treat data mesh as a technology problem?
Organizations default to technology because technology procurement is within the data team’s authority, while organizational restructuring requires executive sponsorship, cross-functional negotiation, and changes to incentive structures that data teams cannot unilaterally implement.
This is the core diagnosis. A data engineering team can evaluate tools, deploy platforms, and build APIs without anyone’s permission. Redefining team boundaries, changing performance review criteria, and reassigning budget for data engineering headcount from a central team to domain teams requires organizational authority that data engineers do not possess.
I watched this play out at a financial services company in 2023. The data team, enthusiastic about data mesh, spent 6 months building a self-serve data platform with domain-specific data product templates, automated quality checks, and a federated metadata catalog. The platform was technically excellent. When they presented it to domain teams with the message “now you own your data products,” the domain teams responded with reasonable objections: “We don’t have data engineers on our team,” “Our performance reviews don’t include data quality metrics,” “Our sprint planning doesn’t account for data product maintenance,” and “Who pays for the compute costs?”
The platform sat underutilized for 8 months before the initiative was quietly renamed and absorbed back into centralized data operations. The technology worked. The organization didn’t change.
What organizational changes does data mesh actually require?
Data mesh requires 5 organizational changes that most companies are unwilling to make: embedded data engineers in domain teams, domain-level data quality accountability, federated governance with enforcement authority, data product management roles, and adjusted budget allocation.
- Embedded data engineers: Each domain team needs at least one person with data engineering skills. In the 14 implementations I studied, only 3 organizations moved data engineers into domain teams. The rest expected domain developers (with no data engineering experience) to build and maintain data products alongside their existing responsibilities. This produced data products with 4x the defect rate of centrally-built pipelines
- Domain-level accountability: Domain teams must be measured on data quality, not just their primary product metrics. Of 14 organizations, only 2 added data quality metrics to domain team performance reviews. Without accountability, domain teams rationally deprioritized data product maintenance in favor of their primary deliverables
- Governance with teeth: Federated governance requires a body with the authority to enforce standards across autonomous domains. In 11 of 14 implementations, the governance function was advisory only. Advisory governance in a mesh architecture produces exactly the outcome you would expect: each domain develops its own standards, and the “mesh” becomes 8 incompatible data silos with a catalog on top
- Data product managers: Someone must own each data product’s roadmap, interface, SLAs, and consumer communication. This role existed in only 1 of 14 implementations. Without it, data products were built and abandoned
- Budget reallocation: Data engineering costs must shift from a central budget to domain budgets. This is the change most organizations refuse. It makes data costs visible to domain leaders, which is the point (it creates incentive alignment) but also the objection (nobody wants a new cost center)
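The ownership and accountability changes above can be made concrete: each data product needs an explicit, machine-checkable contract naming the owning domain, the product manager, and the quality thresholds the domain is measured against. Here is a minimal sketch in Python; the contract shape, field names, and `DataProductContract` class are illustrative assumptions, not part of any specific data mesh tooling.

```python
from dataclasses import dataclass


@dataclass
class DataProductContract:
    """Illustrative contract for one domain-owned data product.

    It encodes the organizational obligations as data: a named owning
    domain (accountability), a named product manager (the role missing
    in 13 of 14 implementations), and quality thresholds with teeth.
    """
    name: str
    owning_domain: str        # domain team accountable for quality
    product_manager: str      # owns roadmap, interface, consumer comms
    freshness_sla_hours: int  # maximum staleness consumers can expect
    min_completeness: float   # minimum fraction of required fields populated

    def check(self, hours_since_update: float, completeness: float) -> list:
        """Return a list of SLA/quality violations; empty means compliant."""
        violations = []
        if hours_since_update > self.freshness_sla_hours:
            overdue = hours_since_update - self.freshness_sla_hours
            violations.append(f"{self.name}: stale by {overdue:.1f}h")
        if completeness < self.min_completeness:
            violations.append(
                f"{self.name}: completeness {completeness:.0%} "
                f"below {self.min_completeness:.0%}"
            )
        return violations


# Hypothetical data product owned by a checkout domain team.
contract = DataProductContract(
    name="orders.daily_summary",
    owning_domain="checkout",
    product_manager="checkout-data-pm",
    freshness_sla_hours=24,
    min_completeness=0.98,
)

print(contract.check(hours_since_update=30.0, completeness=0.95))
```

The point of a contract like this is not the code; it is that the fields force the organizational questions (who owns it, who is accountable, what “good” means) to be answered before the product ships.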
What does 5 years of industry evidence tell us?
Five years of evidence shows that data mesh succeeds only in organizations with strong engineering culture, existing DevOps maturity, and executive willingness to restructure teams, which describes fewer than 20% of enterprises attempting the pattern.
The 3 successful implementations in my sample shared common characteristics. All 3 were technology companies with existing platform engineering practices. All 3 had executive sponsors who understood the organizational implications and committed to multi-year restructuring timelines. All 3 started with 2 to 3 pilot domains rather than organization-wide rollouts. And all 3 took 18 to 24 months to show measurable results.
The 11 failures shared different characteristics. 8 of 11 were non-technology companies (financial services, healthcare, manufacturing) where domain teams had limited technical depth. 9 of 11 attempted organization-wide rollout within 6 months. 10 of 11 lacked executive sponsorship beyond the data team. All 11 measured success primarily through technology metrics (platform adoption, catalog entries) rather than organizational metrics (domain data quality scores, consumer satisfaction).
The uncomfortable conclusion is that data mesh may be the right pattern for a narrow subset of organizations, and the wrong pattern for most. This is not a critique of Dehghani’s ideas, which are sound. It is a critique of how the industry adopted them: selectively, superficially, and with an overwhelming technology bias.
What should organizations do instead?
Organizations should start with the organizational questions (who should own what data, and whether they have the capability and incentives to do so) before selecting any technology, and should be honest about whether they are willing to make the organizational changes data mesh requires.
I now advise a 3-question diagnostic before any data mesh initiative:
First: “Are domain teams willing and able to accept data ownership?” If the answer is no (because they lack skills, capacity, or incentive), data mesh is premature. Invest in a centralized data team that delivers high-quality data products while gradually building domain data literacy.
Second: “Will executives restructure teams and budgets to support domain ownership?” If the answer is no, data mesh will produce a technology investment with no organizational substrate to sustain it. Save the platform budget.
Third: “Is the organization’s primary problem data quality/ownership (which mesh addresses) or data capability/skills (which mesh assumes already exist)?” Most organizations I work with have a capability problem, not an ownership problem. Training 50 people to do data engineering does not happen by redefining org charts.
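The diagnostic above reads naturally as a short decision procedure. The sketch below encodes it in Python; the question order and the recommendations come from the text, but the function name, arguments, and return strings are illustrative.

```python
def data_mesh_diagnostic(domains_can_own_data: bool,
                         executives_will_restructure: bool,
                         problem_is_ownership: bool) -> str:
    """Hypothetical encoding of the 3-question pre-mesh diagnostic.

    Each argument is the yes/no answer to one question; any "no"
    points away from starting a data mesh initiative.
    """
    # Q1: Are domain teams willing and able to accept data ownership?
    if not domains_can_own_data:
        return ("Premature: keep a centralized data team and "
                "build domain data literacy first.")
    # Q2: Will executives restructure teams and budgets?
    if not executives_will_restructure:
        return ("No organizational substrate: "
                "save the platform budget.")
    # Q3: Is the primary problem ownership (mesh addresses) or
    # capability (mesh assumes already exists)?
    if not problem_is_ownership:
        return ("Capability problem, not ownership problem: "
                "train people before redrawing org charts.")
    return "Proceed: pilot with 2-3 domains on an 18-24 month timeline."


print(data_mesh_diagnostic(True, True, False))
```

Note that the questions are ordered from cheapest to answer to most expensive to act on, so a single early “no” avoids the platform spend entirely.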
Data mesh is an organizational pattern wearing a technology costume, and the costume is what gets purchased. The pattern itself is sound: distributed ownership, product thinking, federated governance. But these are organizational capabilities that require years to build. Deploying a self-serve platform without building those capabilities is like buying a Formula 1 car without learning to drive. The technology is not the constraint. The organization is. Until data leaders are honest about this, data mesh implementations will continue to produce expensive platforms that nobody uses, which is exactly what 5 years of evidence shows.