AI for Pharma: Why the key to success is failure

AI can unlock a ton of value for healthcare — but only if it solves real user problems
In recent months, I’ve had a number of conversations with colleagues and clients who work in life sciences about a July 2025 MIT report finding that 95% of GenAI pilots fail. On the surface, that does seem startling. But here are the reasons I am not surprised:
- The core technology is new: While it’s moving at breakneck speed with super frothy investment, we are still in the first inning of enterprise adoption of this tech.
- Data isn’t ready: Information is often siloed within incompatible legacy systems, or datasets aren’t robust enough to train effective AI models.
- Change is hard: Pharma’s full of smart people working in systems that weren’t designed to be agile. AI requires new ways of working and a new culture. That’s a heavy lift.
- Regulatory uncertainty: Regulators are still figuring out how to govern AI in healthcare, which makes it hard to go all in or take the risks required for rapid adoption.
- There’s a skills gap: Many in the pharma sector lack a deep understanding of what AI can and can’t do, creating a knowledge gap between data scientists and domain experts. There are also gaps in human-centered design skills and the subtle UX chops needed to make these systems sing.
- Same old mistakes: Having worked in enterprise technology and healthcare for over 20 years, I am seeing the same patterns get in the way of success here: solution-led thinking, failure to center the humans, lack of clear vision and OKRs, innovation theater, unwillingness to be bold, and a lack of cross-functional collaboration.
I’m also not sure that a 5% “success” rate is such a “failure.” And the fact that pharma corporate culture tends to be so allergic to the notion of “failure” always strikes me as odd. On average, for a single commercial drug to get to market, companies evaluate 5,000–10,000 molecules in discovery, 250 compounds enter preclinical testing, 5 enter human trials, and only one is eventually approved. If we equate preclinical testing with a pilot, that is a 0.4% success rate, or a 99.6% failure rate.
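To make the comparison explicit, here is the back-of-the-envelope math, treating each preclinical compound as the analog of a pilot:

\[
\text{success rate} \approx \frac{1\ \text{approved drug}}{250\ \text{preclinical compounds}} = 0.4\%,
\qquad
\text{failure rate} \approx 100\% - 0.4\% = 99.6\%
\]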
But let’s spend a minute on how we can make GenAI pilots and investments at life sciences companies more successful, or at least deliver value more quickly. Harvard Business Review just published a direct response to the MIT study: an ethnographic look at what is working inside Fortune 50 companies. A lot of what they tease out resonates with my experience. In addition to leadership that is collaborative and relational, and that values collective genius as much as or more than individual rock stars, they put forward the SHAPE framework as a model for successful leadership in getting further, faster with GenAI:
- Strategic agility
- Human centricity
- Applied curiosity
- Performance drive
- Ethical stewardship
I very much agree with this framework, and the first two elements are where we do a lot of work. These were also the attributes that study participants said were most important, so I want to dive a little deeper into those.
Strategic agility
This is more or less what we, at Modus, call “strategy in motion” or emergent strategy. I began working with this paradigm about 5 years ago while collaborating with WL Gore’s digital innovation team. I was challenged to develop a new resilient model for their group based on complex systems, and I was simultaneously reading Adrienne Maree Brown’s brilliant book, Emergent Strategy. She writes:
“Rather than laying out big strategic plans for work, the invitation of emergent strategy is to come together in community, build authentic relationships, and see what emerges from the conversations, connections, visions, and needs.”
In today’s world of rapid change, volatility, and complexity, traditional linear and hierarchical strategic frameworks are too rigid and too fragile, especially when working with the speed demon of change that is GenAI.
So what does strategic agility mean for leaders in a practical sense? Here are a few moves that have proved successful:
- Sprint to go fast. Adopt a sprint mentality and ways of working using agile principles and rituals (backlog, sprint planning, retro, story points, etc.) across all functions, from UX to marketing to strategy and operations. Maybe it’s a 6-week sprint as opposed to the 2-week dev cycle, and maybe there is a preparatory Sprint Zero, but this proven framework provides rigor across all teams.
- Invest in culture. As the saying attributed to Peter Drucker goes, “culture eats strategy for breakfast.” But you need to proactively and collaboratively define your culture for your team and invest in ways to celebrate and inculcate it. And model it yourself.
- Prioritize relationships. Create moments, gatherings, programs, etc., that facilitate relationship-building inside and outside of your team.
- Define the vision. Worry less about the path. Let your teams and the process figure that out, but get clear on the “shining city on the hill” that you want your team to work towards. What does success look like? Be bold and be clear. Define thoughtful OKRs that point toward that ideal.
Human centricity
Human-centered design (HCD) and design thinking are proven frameworks that lead to more successful adoption, keep teams focused on solving actual problems, and help avoid solutioning with the latest bright, shiny object.
When we engage with pharma clients to create decision-support systems and dashboarding, patient support experiences, or any digital project, really, we bring in a multidisciplinary team — research, strategy, information architecture, engineering, and UX design. Our process involves empathy-based research, collaborative development, and iterative testing and prototyping.
Here’s what that looks like:
1. Build trust
AI solutions are more likely to be adopted if users trust them. HCD directly addresses this by:
- Including users early and often. By involving clinicians, researchers, and patients throughout the design process, HCD ensures that the technology addresses real-world needs and priorities.
- Prioritizing transparency. If an AI system recommends something, it should explain why in plain language. For example, a clinical decision support tool designed with HCD would explain its recommendations rather than simply providing an answer, allowing healthcare providers to maintain control and validate the insights (see the sketch after this list).
- Mitigating bias. By involving diverse perspectives in the design and testing phases, HCD helps identify and correct for biases in training data, leading to fairer and more equitable outcomes.
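To make the transparency point concrete, here is a minimal sketch of what “explain the why” can look like at the data level. The names (`Recommendation`, `present`) and fields are purely illustrative, not tied to any particular product or library; the point is simply that a suggestion without a plain-language rationale and verifiable sources never reaches the user.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A decision-support suggestion that carries its own explanation."""
    suggestion: str                 # e.g., "Flag patient for pharmacist follow-up"
    rationale: str                  # plain-language "why", written for the clinician
    evidence: list[str] = field(default_factory=list)  # sources the user can verify
    confidence: float = 0.0         # model confidence, surfaced rather than hidden

def present(rec: Recommendation) -> str:
    """Render a recommendation for the UI; refuse to show an unexplained answer."""
    if not rec.rationale or not rec.evidence:
        raise ValueError("No rationale or evidence: do not surface this recommendation.")
    sources = "; ".join(rec.evidence)
    return (f"{rec.suggestion}\n"
            f"Why: {rec.rationale}\n"
            f"Based on: {sources} (confidence: {rec.confidence:.0%})")

# Example usage with made-up values
rec = Recommendation(
    suggestion="Consider dose review before next refill",
    rationale="Reported creatinine trend suggests reduced renal clearance.",
    evidence=["Lab results 2024-11-02", "Renal dosing guideline v3"],
    confidence=0.82,
)
print(present(rec))
```

The design choice worth noticing is that the explanation is part of the data contract, not a UI afterthought: the system cannot emit an answer it cannot justify.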
2. Design for how people actually work
Many failed AI projects were technically sound but poorly integrated into existing workflows. HCD ensures solutions fit seamlessly into the daily work of pharmaceutical professionals by:
- Mapping user journeys. Designers create detailed visualizations of a user's experience to identify pain points and build AI tools that alleviate burdens rather than creating new ones. For example, AI-powered ambient scribes can handle documentation, allowing doctors to be more present during patient interactions.
- Reframing the problem. Instead of building a tool and finding a use for it, HCD helps identify the root problems and reframes them around the user. For example, a pharmacy's problem might not be adopting virtual care but rather empowering patients to feel confident using the technology.
- Balancing automation and control. A human-led, AI-powered approach ensures that AI augments human capabilities rather than replacing them. This gives users control over critical decisions while automating routine tasks (a simple version of this pattern is sketched below).
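Here is a minimal sketch of that balance, assuming some upstream step has already classified each task as routine or critical. The `Task` and `handle` names are hypothetical, not drawn from any specific system; the pattern is simply “automate the routine, escalate the critical to a person.”

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    risk: str        # "routine" or "critical", as classified upstream
    payload: dict

def handle(task: Task,
           automate: Callable[[Task], str],
           ask_human: Callable[[Task], str]) -> str:
    """Human-led, AI-powered routing: automate the routine, escalate everything else."""
    if task.risk == "routine":
        return automate(task)      # e.g., drafting documentation, pre-filling forms
    return ask_human(task)         # e.g., dosing changes, eligibility decisions

# Example usage with stand-in behaviors
result = handle(
    Task(name="visit-note-draft", risk="routine", payload={"transcript": "..."}),
    automate=lambda t: f"AI drafted: {t.name}",
    ask_human=lambda t: f"Queued for clinician review: {t.name}",
)
print(result)
```

Note that escalation is the default path: anything not explicitly marked routine goes to a human, which keeps people in charge of the decisions that matter.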
3. Tackle data challenges as part of the process
Instead of treating data readiness as a separate, pre-AI project, HCD incorporates it into the user-centered process.
- Identifying critical data needs. Through user interviews and shadowing, we dig into what actually matters — what data is useful, what’s missing, what’s noise. Designers can pinpoint exactly what data is needed to solve a specific problem, allowing for a more targeted and efficient data integration strategy.
- Breaking down silos. HCD's collaborative, team-based approach, which involves diverse stakeholders, naturally helps to break down departmental data silos and improve information flow.
It might be okay that 95% of your GenAI projects fail at this particular moment in time — especially if that 5% is truly transformative and provides the high ROI this tech promises. And with agile, human-centered approaches, we can do better.
Let’s collectively avoid the movie in which we find ourselves working for the robots and instead harness GenAI to unlock more human potential. By doing so, we can positively impact both business outcomes and the wider world.