Partnership Models

From AI tools to AI ecosystems: why lab sciences companies are rethinking partnership models

Andrew Wyatt at Sapio Sciences

How are deliberately structured artificial intelligence ecosystems integrating the tools scientists already rely on, enabling innovation while preserving provenance, governance and confidence across the drug discovery and development life cycle?

In 2026, the conversation around artificial intelligence (AI) in pharmaceutical R&D has shifted decisively. The focus is no longer on what AI might make possible, but on where returns are actually being realised.

At an industry level, the momentum is clear. Large biopharma organisations are investing heavily in AI-enabled discovery and development, often through partnerships with specialist technology providers. Examples range from structure prediction and generative chemistry, to automation-driven experimentation and predictive analytics embedded into R&D operations. AI has moved well beyond pilots and is increasingly treated as scientific infrastructure, rather than a speculative innovation layer.

For scientists and lab leaders, however, investment and momentum have not always translated into everyday impact.

AI gaps and gains

AI is widely available, and in some areas, it has delivered genuine progress. Structure prediction models have accelerated target assessment, generative chemistry tools have expanded design space and predictive analytics have improved prioritisation decisions. What remains harder to demonstrate is consistent improvement across the full workflow: how experiments are planned, interpreted, reused and built upon.

In many labs, AI still sits alongside the workflow, rather than connecting the steps end to end. Data is exported from instruments or notebooks, processed elsewhere and analysed in specialist tools or public AI services, then summarised back into the experimental record after decisions have already been made. Fast answers are possible in the moment, but context fragments along the way. Assumptions, parameters, model versions, intermediate reasoning and uncertainty often sit outside the system of record.
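To make the gap concrete, the context described above can be pictured as a simple structured record. This is a minimal, hypothetical sketch (the field names and values are illustrative, not any vendor's schema) of the assumptions, parameters, model versions and uncertainty that often never make it into the system of record:

```python
from dataclasses import dataclass, field, asdict
from typing import Optional
import json

# Hypothetical record of the context that commonly fragments when
# analysis happens outside the experimental record: which model was
# used, with which parameters, under which assumptions, and how
# uncertain the result is.
@dataclass
class AnalysisRecord:
    experiment_id: str
    result: float
    model_version: str                      # model that produced the interpretation
    parameters: dict = field(default_factory=dict)
    assumptions: list = field(default_factory=list)
    uncertainty: Optional[float] = None

    def to_json(self) -> str:
        # Serialise the full record so context travels with the result
        return json.dumps(asdict(self), sort_keys=True)

record = AnalysisRecord(
    experiment_id="EXP-0042",
    result=0.87,
    model_version="binding-affinity-v2.1",
    parameters={"temperature_c": 25, "ph": 7.4},
    assumptions=["single binding site", "equilibrium reached"],
    uncertainty=0.05,
)
print(record.to_json())
```

When this kind of record is absent, a later reader sees only the 0.87 and has to reconstruct everything else.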

Tool switching also has a governance side. When sanctioned tools are hard to use, or data becomes too dispersed, teams revert to public AI models. Shadow AI in the lab is widespread; scientists admit to using AI with logins they created themselves. Whatever the motivation, the practical outcome is consistent: reasoning and intermediate steps move outside the record, which makes results harder to defend and reuse.

Separate from the security risk, another consequence of tool switching is what scientists have described as a ‘rework tax’ – insights exist, but comparing or trusting them often requires re-derivation. Experiments get repeated, not because the science demands it, but because scientists cannot find or reuse previous results. The cost accumulates quietly in lost time, duplicated effort and bottlenecks that arise from interpretation rather than execution.

The AI return on investment question has changed as a result. Leaders are not only asking whether AI models perform well in isolation; they want durable value that comes from reducing rework and shortening the interpretation cycle.

Why AI partnerships look different in biopharma R&D

Biopharma R&D and biotech research workflows impose constraints that do not apply in most enterprise analytics environments. Interpretation, provenance, permissions and documentation are inseparable from the work itself. Decisions need to be explainable, reproducible and defensible long after the experiment has finished. AI tools that sit outside the primary experimental environment may accelerate individual tasks, but tool sprawl weakens traceability and makes data reuse harder over time.

As experimental complexity increases, it’s unrealistic for any one vendor to natively deliver every specialist capability that scientists rely on across chemistry, biology, automation and emerging multimodal analysis. But loosely connected tools leave too much context behind.


That is why a different model has started to take shape. Rather than marketplaces of disconnected applications or single stacks that attempt to do everything, ecosystems are forming as extensible workflow environments. Best-in-class tools, AI and non-AI, can be integrated through a shared data model, with experimental context preserved end to end.

The limits of closed platforms and shallow integrations

Most organisations have tried to address AI tool sprawl using one of two models. Each approach has its strengths, but both run into structural limits in modern drug development:

The vertically integrated model

In a closed platform model, a vendor or internal team builds a tightly coupled environment spanning data capture, analysis and, increasingly, AI. Control is centralised, and validation paths are easier to define. The challenge is scope. Domains such as cheminformatics, bioinformatics, structure-based design and image analysis are not static categories. Each encompasses multiple analysis classes supported by methods that evolve rapidly. As multimodal approaches combine sequence data, structural models, imaging and phenotypic readouts, no single platform can maintain best-in-class coverage without forcing trade-offs that push scientists back out to specialist tools.

The shallow integration model

The second approach retains a core informatics environment and connects specialist tools through relatively shallow integrations. Data is moved into a central place – sometimes manually, sometimes via automation – but the integration often stops at the data layer. Results arrive, but context does not. Experimental intent, parameters, model assumptions and analytics remain scattered across different tools.

The electronic lab notebook becomes the place where experiments are written up, but not where analysis and interpretation actually happen. Scientists are still forced to move data through spreadsheets, scripts and point tools to make progress. The organisation loses a coherent chain of context, leaving data of lower quality and harder to search over time.

Neither model is inherently wrong, but both are incomplete. Closed platforms struggle to keep pace with scientific breadth, while shallow integrations struggle to preserve meaning and continuity. Ecosystem approaches aim to address both issues by making specialist capabilities available inside the workflow layer, rather than outside it.

Three design principles for research ecosystems

A real ecosystem is not defined by the number of partners, nor by open APIs alone. It takes the form of a curated set of best-in-class tools and information sources that can be invoked in context, with a consistent way to capture what was done and why.

Specialisation and clear roles

The industry is still early in building effective ecosystems, but one principle is already clear. Participants need to stay focused on what they do best. Specialist tool vendors concentrate on depth in areas such as molecular modelling, retrosynthesis, property prediction, image analysis or scientific knowledge extraction. Workflow platforms play a different role, orchestrating those capabilities around the scientist rather than attempting to replicate them natively. The goal of partnerships is not to replace specialist tools, but to make them usable together without forcing scientists into disconnected environments.

Workflow integration around scientific need

From a scientist’s perspective, integration should follow how work gets done, using the tools teams actually rely on. Instead of AI arriving as a bolt-on across separate applications, capabilities should be embedded where experiments are designed, executed and interpreted. That means integrating into the tools at the centre of workflows, including lab notebooks, laboratory information management systems and informatics platforms.

One practical example is a tighter Design-Make-Test-Analyse cycle. A model proposes candidates, automation executes assays and results flow back into the workflow environment linked to conditions and provenance, allowing the next decision to be made with the full experimental record in view.
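The loop described above can be sketched in a few lines. Everything here is illustrative, assuming no particular vendor API: the function names, the model and protocol version strings, and the scoring are all stand-ins. The point is that every result flows back tagged with its conditions and provenance, so the next decision is made with the full record in view:

```python
import random

def propose_candidates(round_no: int, n: int = 3):
    # Stand-in for a generative model proposing candidate identifiers
    return [f"CMPD-{round_no}-{i}" for i in range(n)]

def run_assay(candidate: str, conditions: dict) -> dict:
    # Stand-in for automated assay execution; the result carries the
    # candidate, the conditions it was run under, and provenance for
    # the design model and assay protocol (illustrative versions)
    rng = random.Random(candidate)  # deterministic per candidate for the sketch
    return {
        "candidate": candidate,
        "score": round(rng.uniform(0.0, 1.0), 3),
        "conditions": conditions,
        "provenance": {"design_model": "gen-chem-v1", "assay_protocol": "AP-07"},
    }

def dmta_round(round_no: int, conditions: dict):
    # Design -> Make/Test -> Analyse: results return with full context,
    # so prioritisation uses the whole record, not a bare number
    results = [run_assay(c, conditions) for c in propose_candidates(round_no)]
    best = max(results, key=lambda r: r["score"])
    return results, best

results, best = dmta_round(1, {"temperature_c": 37, "ph": 7.0})
for r in results:
    print(r["candidate"], r["score"], r["provenance"]["assay_protocol"])
```

In a loosely coupled setup, only the score column typically survives the round trip; here the conditions and provenance travel with it.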

Preserving experimental context

For any ecosystem to function, experimental context has to survive across tools. Downstream analysis needs to reflect how data was generated, under what conditions and with which assumptions.

When that context is lost, results become difficult to compare, challenge or reuse. When it is preserved, prior work becomes a genuine asset, rather than something that has to be reconstructed from memory or rerun from scratch.

Trust, reuse and what governance actually supports

Scientific research depends on trust in evidence, and AI-assisted insights are no different. If a result cannot be interrogated, challenged or placed in context, it will not be reused, regardless of how sophisticated the model behind it may be. Trust, transparency and governance matter for adoption.

Ecosystem approaches reduce that friction by making analysis part of the experimental process, rather than a clean-up step. They also increase reuse by keeping results coherent enough to be searched, compared and trusted in context.

For platform and tool providers, the strategic question is equally direct: decide what to build and what to integrate. Trying to do everything risks lagging behind specialists, while integrating everything shallowly recreates fragmentation. Value comes from enabling best-in-class tools to work together in ways that reflect how scientists actually operate.

Ecosystems as an operating reality

Ecosystem models are taking shape in lab science as a consequence of research complexity. Closed platforms will continue to play a role, and specialist tools will continue to emerge as science advances. What is changing is where those capabilities sit in relation to the workflow.

When tools operate outside the environment where experiments are designed and interpreted, context is easily lost and reuse becomes harder. Bringing them into the workflow layer, with shared context and traceability, makes the economics of AI more credible.

For many organisations, progress will not be defined by access to better algorithms, but by how coherently scientific work is supported over time. The labs that move forward will be those that reduce fragmentation, preserve experimental meaning and allow scientists to work with the tools they trust without reconstructing the record after the fact. That is a practical response to how research actually gets done.

Fragmented workflows erode trust over time. When analysis lives in emails, scripts, spreadsheets and external tools, the provenance of results becomes thin. Reuse becomes risky, and reviews slow down because reconstructing what happened takes effort.

Keeping reasoning close to the experimental record addresses this directly. Governance in this sense is not bureaucracy; it is the practical foundation that allows scientists to decide what to trust and what to build on.

Implications for R&D leaders

Biopharma R&D leaders must shift from acquiring tools to enabling workflows. The most useful question is not which model performs best, but where work slows down and why. Repeated exports, queues for analysis and rerun experiments are signals that interpretation is happening outside the workflow.



As chief growth officer for Strategic Partnerships, Andrew Wyatt is responsible for growing Sapio Sciences’ international operations. He has over 30 years of expertise in commercially scaling global software companies, having worked in a wide range of organisations from NASDAQ-listed companies to privately held businesses, from the communications sector to the life sciences industry. Most recently Andrew was the COO of Lumeon, where he successfully grew the business and entered the US market. With his deep understanding of the life sciences industry and proven ability to navigate the complexities of scaling operations, Andrew has consistently built organisations that delight customers while maximising shareholder value.