Trapped On Opposite Sides: The Canyon Between Data And Product Teams
Imagine this scene: a video call full of product, marketing, and analytics folks. They all look at a slide full of data from last week. Think of a table with 50 different data points. “Metric A is not correct; we had problems with the pipeline, and we are in touch with Alex to figure out a fix”, says Jordan, the marketing manager. “Same with metric B”, Avery, the product manager, replies, “we had tracking issues last week”. By the end of the weekly meeting, the group has spent most of its time discussing the issues instead of what the data says and how to respond.
Just two hours later, the weekly experiment review takes place. Once again, Jamie, one of the growth product managers, announces that they must delay the experiment. Last week, the user bucketing failed, and they assigned too many users to the control group. Yesterday, when the team finally released the fix, they discovered that a colleague started an experiment on the same page using 100% of the traffic for six weeks without previous alignment.
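Bucketing failures like the one above often trace back to ad-hoc assignment logic. As a minimal sketch (all names are hypothetical, not taken from any specific experimentation platform), deterministic hash-based bucketing keeps a user's variant stable across sessions and independent between experiments:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, control_share: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing user_id together with the experiment name keeps the split
    stable across sessions and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "control" if bucket < control_share else "treatment"
```

Salting the hash with the experiment name matters: without it, the same set of users would always land in control across every experiment, biasing results in exactly the way the scene above describes.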
These scenarios aren't isolated incidents—they're symptoms of a systemic problem plaguing organisations that have invested heavily in data infrastructure but struggle with actual execution. According to research from Wavestone and NewVantage Partners [1], nearly 80% of data leaders cite culture, people, and process, not technology, as the top barrier to becoming data-driven. Despite massive investments in data infrastructure, only 20.6% of organisations say they've established a data culture, the same report states.
The misalignment between product vision and data reality costs companies millions in wasted investment and missed opportunities. This disconnect isn't just inefficient—it harms business growth and innovation.
The Four Horsemen of Data-Product Dysfunction
THE TRUST GAP
At the heart of data-product misalignment lies a fundamental issue of trust. According to research by Precisely and Drexel LeBow [2], 67% of business leaders report that they don't fully trust the data their organisation is using. Similarly, a BARC study on AI Observability [3] found that only 59% of respondents trust their model inputs and outputs.
These trust deficits are unsurprising, and they share several root causes.
Inconsistent implementation: When tracking is implemented differently across platforms or products, conflicting numbers emerge. Often, this comes from organisational design. If a company has a mobile app team and a web team working on the same feature on different platforms but not cross-functionally and without proper tracking definitions, you’ll likely end up with two different names for the same user action.
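One way to prevent the two-names-for-one-action problem is to validate events against a shared tracking plan before they ship. A minimal sketch in Python, where the event names and required properties are purely illustrative:

```python
# A shared tracking plan: one canonical name and its required properties
# per user action, agreed across the web and mobile app teams.
TRACKING_PLAN = {
    "checkout_started": {"platform", "cart_value"},
    "checkout_completed": {"platform", "cart_value", "payment_method"},
}

def validate_event(name: str, properties: dict) -> list[str]:
    """Return a list of problems; an empty list means the event conforms."""
    errors = []
    if name not in TRACKING_PLAN:
        errors.append(f"unknown event name: {name!r}")
        return errors
    missing = TRACKING_PLAN[name] - properties.keys()
    if missing:
        errors.append(f"{name}: missing properties {sorted(missing)}")
    return errors
```

Run as a CI check or pre-merge hook, a validation like this turns naming drift from a quarterly dashboard dispute into an immediate build failure.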
Poor data governance: Data integrity suffers without clear ownership and quality standards. Finding the accountable person can be difficult, especially with high-level metrics like users and sessions, which result from many teams’ efforts.
Siloed collection systems: Disconnected data sources create multiple versions of the "truth." App installs are a great example. While product teams rely on their analytics system or the raw data of the app stores, marketing often depends on their mobile measurement partner tool (MMP), creating two entirely different views on the same KPI.
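Even while sources remain siloed, teams can at least surface the divergence automatically instead of discovering it mid-meeting. A simple reconciliation check might look like the following sketch, where the tolerance threshold and metric names are assumptions:

```python
def reconcile(metric: str, source_a: float, source_b: float, tolerance: float = 0.05) -> str:
    """Compare the same KPI from two systems and flag divergence above a tolerance."""
    if max(source_a, source_b) == 0:
        return f"{metric}: both sources report zero"
    # Relative difference, normalised by the larger value.
    diff = abs(source_a - source_b) / max(source_a, source_b)
    if diff > tolerance:
        return f"{metric}: sources diverge by {diff:.1%} - investigate before reporting"
    return f"{metric}: sources agree within {tolerance:.0%}"
```

Feeding such a check with, say, app-store installs on one side and MMP-reported installs on the other makes the gap between the two "truths" a monitored quantity rather than a surprise.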
The psychological impact is profound: when teams stop believing in data, they stop using it. When two dashboards show conflicting numbers for the same metric, both become suspect. Organisations develop competing data narratives, each department wielding its preferred numbers in territorial battles over resources and priorities.
Without a single source of truth, the very foundation of data-driven decision making crumbles.
THE TRANSLATION BARRIER
A particularly stubborn problem is the divide between technical and product teams. According to research by Qlik and Accenture [4], only 27% of organisations said their data and analytics projects produce actionable insights. This isn't surprising when data teams focus on what can be measured while product teams need what should be measured.
I have come across data science teams that build interesting models yet are entirely disconnected from the needs of the product teams. It's like building features without doing user research first: a time investment with no prior sense of its potential return.
McKinsey's Analytics Translator Report [5] highlights this issue: "Translators help ensure that deep insights... translate into impact at scale." Without these translators, the expertise divide between data analysts lacking product context and product managers lacking data literacy becomes unbridgeable.
This results in:
Technical terminology that obscures rather than clarifies business objectives
Metrics that track activity but fail to connect to meaningful business impact
Data collection that misses crucial context needed for product decisions
When a product manager asks, "Are users enjoying the new feature?" and receives an answer about "session duration variance by segment," the translation has failed. The data might be accurate, but it doesn't answer the question in a way that drives action.
We know similar gaps between product and engineering or design, which led to the creation of the product trio to overcome barriers and improve mutual knowledge and understanding. As I laid out in a previous article, it is about time to add the analyst and form a quartet. When analysts and data scientists are part of a team with a specific mission and target, they learn the business context and the user problems, and start to come up with valuable insights and solutions.
THE IMPLEMENTATION CANYON
The journey from data strategy to execution often resembles crossing a treacherous canyon, with many falling into the abyss before reaching the other side.
According to Precisely & Drexel LeBow research [6], only 46% of organisations report having a comprehensive data strategy to begin with. But even among those with strategies, many fail to implement them effectively. This reveals a systemic disconnect between strategic planning and actual execution.
This canyon forms through several common patterns:
Strategic frameworks created without awareness of operational realities
Technical implementations that prioritise architectural elegance over user needs
Handoffs between strategy and implementation teams with inadequate knowledge transfer
A particularly illuminating example of this canyon occurred in an organisation where the data team created a strategy internally but without sufficiently involving the operators from marketing and product. This led to a new custom-built data lake made by the data engineering team with an architecture that only some parts of the company used. Meanwhile, analysts from product and marketing, as well as product managers, continued using a competing infrastructure.
When the organisation finally created a new unified data model to consolidate efforts, everyone was forced to move to the custom-made data lake, only to discover extremely slow query performance and the need to rewrite queries entirely because the new system couldn't handle existing workflows. The solution had been built without all end-user needs in mind.
This scenario perfectly exemplifies the Implementation Canyon: technically sound strategies fail to deliver value because they're disconnected from the practical realities of those who need to use them. The result is the "shelf-ware" phenomenon: expensive but poorly implemented analytics frameworks that fail to serve their purpose while teams struggle with the real-world complexities of execution.
Without an intentional bridge across this implementation canyon, organisations waste resources building systems that fail to serve their intended purpose, and the gap between strategic intent and operational reality continues to widen.
THE CAPABILITY VACUUM
The fourth dysfunction extends beyond dependency on external consultants to encompass a more fundamental challenge: the failure to build cross-functional capabilities within the organisation. According to Qlik and Accenture's research [7] on data literacy, only 21% of the global workforce is fully confident in their data literacy skills.
This capability vacuum manifests in various ways:
Knowledge silos that prevent cross-functional understanding
Centralised "service provider" models that fail to build distributed expertise
Documentation deficits that make knowledge transfer nearly impossible
Role boundaries that limit skill development beyond immediate responsibilities
I've seen this dynamic play out in many companies where tracking is defined by analysts, often from a central team, who don't fully know what the product team is trying to learn. Instead of training product managers to build efficient tracking while maintaining consistency, organisations fall back on providing analytics as a service. This approach perpetuates a siloed understanding and limits the development of product managers' mindset and capabilities.
The consequences are significant: when product requirements change, teams lack the knowledge to adapt their tracking appropriately. When new questions arise, they must wait for specialised resources rather than exploring data independently. And when analysts move on, they take critical context with them that isn't documented or transferred.
As the Qlik and Accenture report [8] notes: "Organizations must provide employees with the tools, processes and methodologies that enable them to use data." This isn't just about technical training; it's about breaking down the artificial boundaries between those who "do data" and those who "use data."
Without intentional capability building across functions, companies remain trapped in inefficient service models where data expertise is concentrated in a few individuals or teams rather than embedded throughout the organisation as a shared capability. This creates bottlenecks and dependencies and ultimately limits the organisation’s ability to be truly data-driven.
The True Business Impact
The consequences of these dysfunctions go far beyond frustration and inefficiency; they directly impact business performance.
DIRECT FINANCIAL COSTS
The financial burden of data-product misalignment is substantial:
Wasted infrastructure investment: Companies invest heavily in data systems that remain underutilised or untrusted.
Lost productivity: According to Qlik and Accenture [9], "Companies lose an average of more than five working days (43 hours) per employee each year due to data-related stress and procrastination."
Quality issues: Wavestone/NewVantage Partners [10] notes that only 37% of organisations say they have been able to improve data quality, meaning most companies continue to pay for suboptimal data.
Maintenance costs: Organisations spend significant resources maintaining systems that don't deliver actionable insights.
Beyond these direct costs, teams waste countless hours reconciling conflicting data sources, creating parallel reporting systems, and engaging in "data detective work" to verify basic information.
DECISION QUALITY COSTS
Perhaps most concerning is the impact on decision quality:
Reverting to gut feeling: Qlik and Accenture [11] found that only 37% of employees trust their decisions more when based on data, while 48% defer to gut feeling.
HiPPO-driven decisions: As IDC and Heap Analytics note [12], "Decisions are often driven by the HIPPO (highest paid person's opinion) without regard for data."
False negatives: Potentially successful initiatives are killed due to measurement errors or misinterpretation.
False positives: Failed strategies continue to receive investment because data doesn't effectively capture their shortcomings.
When data systems fail to provide clear, trusted insights, organisations fall back on intuition, often disguised with post-hoc data justifications. The compounding effect of consecutive misguided decisions can derail product strategy and market positioning.
GROWTH AND INNOVATION COSTS
The impact on growth and innovation is equally significant:
Velocity gaps: IDC and Heap Analytics [13] found that "Leaders can answer product performance questions in minutes or hours; lagging orgs take days or longer."
Experimentation failures: Without reliable data, experimentation programs cannot confidently validate or invalidate hypotheses.
Innovation barriers: Precisely and Drexel LeBow [14] note that "Only 12% say their organisation's data is of sufficient quality and accessibility for AI," hampering innovation initiatives.
Competitive disadvantage: Organisations with mature data practices can identify growth opportunities and optimise experiences faster than competitors struggling with data basics.
In rapidly evolving markets, these velocity and capability gaps can mean the difference between market leadership and obsolescence.
ORGANISATIONAL AND CULTURAL COSTS
Perhaps most insidious are the long-term cultural costs:
Cross-functional friction: Tension grows between technical and product teams when collaboration repeatedly fails.
Talent retention challenges: High-performing, data-literate professionals leave organisations where their work doesn't translate to impact.
Cynicism cycle: Failed data initiatives breed cynicism about future initiatives, creating a downward spiral of decreasing engagement.
Cultural damage compounds over time, making each subsequent attempt at building data capabilities more difficult.
Recognising Misalignment Signals
Before organisations can address these challenges, they must recognise the warning signs.
Data used primarily to justify decisions rather than to inform them
Product roadmaps disconnected from analytics insights
Multiple, conflicting sources of "truth" for key metrics
Regular debates about basic definitions and measurement approaches
Data teams focused on collection rather than activation
Extensive time spent on manual reporting rather than insight generation
Looking Forward: Principles for Better Alignment
While the challenges are substantial, they are not insurmountable. Organisations must recognise, as IDC and Heap Analytics [15] note, "the need for both data access and data confidence to empower decision making." For that, we need to start building bridges between technical experts and operational leaders.
The path forward requires:
Shared objectives between data and product teams
Communication frameworks that bridge technical and business understanding
Governance models that maintain data quality while serving business needs
Knowledge transfer that builds internal capabilities systematically
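Governance that maintains data quality while serving business needs often starts with defining each metric exactly once. A minimal, entirely hypothetical sketch of a shared metric registry (the metric, owner, and query are illustrative):

```python
# A shared metric registry: each KPI is defined once, with a named owner,
# and every dashboard or report references this single definition.
METRICS = {
    "weekly_active_users": {
        "owner": "product-analytics",
        "sql": "SELECT COUNT(DISTINCT user_id) FROM events WHERE ts >= :week_start",
        "description": "Distinct users with at least one event in the week.",
    },
}

def metric_definition(name: str) -> dict:
    """Look up the canonical definition; fail loudly on ungoverned KPI names."""
    if name not in METRICS:
        raise KeyError(f"{name!r} is not a governed metric - add it to the registry first")
    return METRICS[name]
```

The point is less the implementation than the contract: a metric that is not in the registry cannot appear on a dashboard, which removes the "competing data narratives" described earlier.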
The Paradox of Modern Organisations
We face a striking paradox: organisations have more data than ever before, yet many have less clarity on what actions to take. The companies that will thrive in the coming years are those that recognise these challenges as strategic rather than merely technical. They will invest not just in data infrastructure, but also in the human systems and organisational bridges that translate data into action.
For leaders navigating these challenges, the first step is honest assessment. Where do your organisation's data and product teams most disconnect? Which of the four dysfunctions causes the most significant impact in your context? And most importantly, what bridges can you begin building to close these gaps?
---
Jakob Gehring is a product growth leader with 20+ years of experience in Europe's largest marketplaces. He specialises in bridging the gap between data analytics and product management through practical, implementation-focused approaches.
[1] Davenport, T., Bean, R., & Wavestone. (2023). Delivering Business Value from Data and Analytics Investments. In Data and Analytics Leadership Annual Executive Survey 2023.
[2] Precisely, Inc. (2024, November 12). 2025 Outlook: Data Integrity Trends and Insights. Precisely. Retrieved April 20, 2025, from https://www.precisely.com/resource-center/analystreports/lebow-report-2024
[3] Precisely, Inc. (2025, April 14). BARC Research Study: Observability for AI Innovation. Precisely. Retrieved April 20, 2025, from https://www.precisely.com/resource-center/analystreports/barc-research-study-observability-for-ai-innovation
[4] Laurianne. (2023, March 20). The human impact of data literacy. The Data Literacy Project. Retrieved April 20, 2025, from https://thedataliteracyproject.org/the-human-impact-of-data-literacy
[5] Henke, N., Levine, J., & McInerney, P. (2018, February 1). Analytics translator: The new must-have role. McKinsey & Company. Retrieved April 20, 2025, from https://www.mckinsey.com/capabilities/quantumblack/our-insights/analytics-translator#/
[6] Precisely, Inc. (2024, November 12). 2025 Outlook: Data Integrity Trends and Insights. Precisely. Retrieved April 20, 2025, from https://www.precisely.com/resource-center/analystreports/lebow-report-2024
[7] Laurianne. (2023, March 20). The human impact of data literacy. The Data Literacy Project. Retrieved April 20, 2025, from https://thedataliteracyproject.org/the-human-impact-of-data-literacy/
[8] Laurianne. (2023, March 20). The human impact of data literacy. The Data Literacy Project. Retrieved April 20, 2025, from https://thedataliteracyproject.org/the-human-impact-of-data-literacy/
[9] Laurianne. (2023, March 20). The human impact of data literacy. The Data Literacy Project. Retrieved April 20, 2025, from https://thedataliteracyproject.org/the-human-impact-of-data-literacy/
[10] Davenport, T., Bean, R., & Wavestone. (2023). Delivering Business Value from Data and Analytics Investments. In Data and Analytics Leadership Annual Executive Survey 2023.
[11] Laurianne. (2023, March 20). The human impact of data literacy. The Data Literacy Project. Retrieved April 20, 2025, from https://thedataliteracyproject.org/the-human-impact-of-data-literacy/
[12] Wallace, D., & IDC. (2022). How data maturity and product analytics improve digital experiences and business outcomes.
[13] Wallace, D., & IDC. (2022). How data maturity and product analytics improve digital experiences and business outcomes.
[14] Precisely, Inc. (2024, November 12). 2025 Outlook: Data Integrity Trends and Insights. Precisely. Retrieved April 20, 2025, from https://www.precisely.com/resource-center/analystreports/lebow-report-2024
[15] Wallace, D., & IDC. (2022). How data maturity and product analytics improve digital experiences and business outcomes.