Last month, UKRI announced a major programme for AI development and adoption (UKRI’s AI Strategy announcement, a £1.6Bn funding package for the next four years). I wanted to highlight a few things as they relate to healthcare and clinical research, and also call out a couple of points for small and medium-sized companies working in this space.
The six focus areas outlined (Advancing AI development, Transforming research, Building AI skills, Accelerating innovation and economic benefits, Championing responsible, trustworthy AI, World-class AI data and infrastructure) are all sensible, and the reality is that the Government (or anyone else for that matter) can’t go much further than broad-brush themes for AI over the next four years. For all the advances we see week to week and month to month, particularly over the last 12 months, we’re still in the foothills of how all this actually plays out in the wider economy and society, and of the impact (good and bad) it will have on day-to-day life.
In healthcare and clinical research, AI tools and resources are already proving useful – literature synthesis, anomaly detection in imaging, pulling structure out of messy clinical notes, cutting genomic analysis from weeks to days. These aren’t trivial wins, but most of them work on data that’s already accessible and available. The harder problem is when the data that actually matters sits across dozens of institutions, each with its own governance framework and an entirely reasonable reluctance to just hand it over. This applies in equal measure to routine care and clinical research.
The main issue in moving to scale and widespread adoption isn’t the functionality of the AI service but a combination of messier, more ingrained points of friction –
All of this (and more) adds up to a realisation that it’s not just the availability of AI (as a tool or model) that’s needed to see benefit. It’s the aggregate of many other aspects that live within a system that is regulated; that requires access to data that is privileged and sensitive; whose users are healthcare professionals first and foremost; and that has an extremely long tail of use cases, a consequence of clinical specialities and sub-specialties and of patients who are all very different.
One of the most discussed topics is trust. People trust people when they build relationships; organisations trust other organisations but rely on contracts and legal enforceability. So what’s the answer to the question, ‘do you trust the model?’ Reproducibility, openness and transparency help a lot; regulatory approval will help a lot too.
Interestingly, the FDA authorised 221 AI/ML medical devices in 2023 alone, against just 33 in the entire period from 1995 to 2015, and 76% of all approved devices have been in radiology. AI advances fastest where the data infrastructure already exists, as it does in radiology, allowing regulators to work backwards through the provenance of how a model arrived at its inference. That should be an additional signal: if you want things to scale, putting the data infrastructure in place for both training and routine use is the number one priority.
Still on trust, UKRI has called out the use of Trusted Research Environments, or TREs, as a vehicle for AI development and experimentation. A TRE enables a level of security and governance that satisfies healthcare organisations, regulators and patients that the primary and secondary use of data, for both clinical and research purposes, matches governance and consent requirements. It also gives clinicians and researchers the tooling and services they need to do their work, and health IT the deployment framework to enable scale. We anticipated this a while back and started development of AIRA, a trusted framework for AI in the Aridhia DRE, back in 2024. AIRA is running through its final stage of preview release and will be available for general release in April of this year. It won’t be perfect, as we’re still (along with everyone else) discovering nth-degree details that stay hidden until you stumble across them.
OpenAI, Google, Meta, Anthropic, xAI, Microsoft: the real clinical value from AI isn’t going to come from generic models trained on everything. Real clinical value will come from smaller, targeted models – models built on data that cannot be shared outside a trusted boundary. Value will come from a cardiologist or clinical informaticist who understands the data, the workflow and their patient population being able to build and fine-tune their own “smaller” models on their own data (within appropriate governance, of course). More often than not the data that matters sits across dozens of institutions, each with its own governance responsibilities. Establishing federated discovery and access to those data creates networks of trust with the scale of data required for a particular task, allowing AI models and tools to be run across distributed data without centralising or transferring it. Ironically, this is the opposite of the approach taken to build the large, generic LLMs we’re all familiar with using day in and day out.
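To make the federated idea concrete, here’s a minimal sketch of federated averaging – each site fits a model on its own private data and only the fitted weights (never patient records) cross the trust boundary, weighted by site size when aggregated. The sites, data and toy linear model here are entirely synthetic and illustrative, not a description of how AIRA or any particular TRE works.

```python
# Federated averaging sketch (illustrative; sites and data are synthetic).
# Each "site" fits a local model on its own data; only model weights leave
# the site, and no raw records are ever centralised or transferred.
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X, y):
    """Ordinary least squares on one site's private data (runs inside the site)."""
    w, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)
    return w  # [intercept, slope] -- the only artefact shared outward

# Three hypothetical sites, each holding its own (never-shared) data drawn
# from the same underlying relationship y = 1 + 2x plus noise.
sites = []
for _ in range(3):
    n = int(rng.integers(50, 200))
    X = rng.normal(size=(n, 1))
    y = 1.0 + 2.0 * X[:, 0] + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

# Aggregation step: average the local weights, weighted by sample count.
local_weights = [local_fit(X, y) for X, y in sites]
sizes = np.array([len(X) for X, _ in sites], dtype=float)
global_w = np.average(local_weights, axis=0, weights=sizes)

print(global_w)  # close to [1.0, 2.0] -- intercept and slope recovered
```

In a real deployment the aggregation step would run inside governed infrastructure and each site would enforce its own access controls; the point of the sketch is simply that the analysis travels to the data, and only derived parameters travel back.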
More broadly, UKRI are highlighting innovation and economic benefit as a key theme; I hope the call-out on engagement with industry is not a euphemism for Big Tech, who have all the presence, time, money and lobbying at their disposal while SMEs struggle to be heard. Aridhia started developing our trusted research platform (Aridhia DRE) in 2011, through a collaborative grant from the Technology Strategy Board (the precursor to UKRI) and in collaboration with the experimental cancer centres in Edinburgh and Dundee. That grant was foundational for us, and we’ve continued to pour as much R&D as we can into this domain. We now deliver trusted research services (soon to include AI!) across all regions of the world, with data contributions from 89 countries supporting 10,000 clinical, scientific and data science users, all backed by ISO27001, ISO27701, HITRUST and HIPAA certifications for security and governance.
Top of our agenda this year is how we bring the benefits of AI through AIRA and the DRE to our community of customers and partners while mitigating the risks, with deployment scale as the end objective. Getting from a promising proof of concept to something running reliably in a live clinical or research environment is a non-trivial step that the sector is still largely figuring out. Fixing that needs trusted infrastructure and federated approaches that take the analysis to the data rather than the reverse. A better model doesn’t solve any of this; better infrastructure and governance might.
For more information on how we work with AI, head over to our AI Sandbox page.
March 9, 2026
David is the CEO of Aridhia.