Enabling the Ecosystem for Neurology Research – Biomarker Data Collaboration

In July, the Alzheimer’s Disease Data Initiative (ADDI) hosted the NeuroToolKit (NTK) hackathon: a 17-day virtual challenge investigating the potential clinical utility of different biomarkers in Alzheimer’s disease. The hackathon was conducted by ADDI in partnership with Roche Diagnostics on the AD Workbench, which uses the Aridhia DRE as its foundational platform infrastructure.

The challenge made use of the DRE’s capability to host externally developed containerised Apps. Containerised Apps were launched on the DRE earlier this year and allow users to run their own Apps within the security and governance perimeter of their private workspace, so Apps written in multiple languages can be ‘dropped in’ and function as part of a broader analysis workflow. Over the years, the research community has developed a long tail of Apps, algorithms and other tools that are crucial to its work. These tools are often highly specific to a clinical or research domain and were built using the languages and frameworks that best fit that domain. Containerising them and adding them to the platform makes them accessible and usable to a wider community of researchers, extending their impact and ultimately benefiting research and patients alike.
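As an illustration only, packaging an App as a container typically means writing a short Dockerfile that bundles the code and its dependencies into a single portable image. The base image, file paths, packages and port below are assumptions for the sketch, not the DRE’s actual container specification:

```dockerfile
# Hypothetical sketch: packaging an R Shiny analysis App as a container image.
# Base image, paths, packages and port are illustrative assumptions only.
FROM rocker/shiny:latest

# Copy the App's source code into the image
COPY app/ /srv/shiny-server/my-analysis-app/

# Install the additional R packages the App depends on
RUN R -e "install.packages(c('ggplot2', 'dplyr'), repos = 'https://cloud.r-project.org')"

# Port the Shiny server listens on
EXPOSE 3838

# Start the Shiny server when the container runs
CMD ["/usr/bin/shiny-server"]
```

Because the image carries everything the App needs, it can run anywhere a container runtime is available, which is what makes this kind of App portable enough to be ‘dropped in’ to a platform like the DRE.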

The NeuroToolKit

Through close collaborative working, three NTK Apps developed by Roche and their partners were integrated into the DRE: the NTK Curate App, which enables data to be curated before being analysed; the NTK Analysis App, which facilitates statistical analysis and provides visualisation options; and the NTK Meta Analysis App, which allows for data comparisons. It was vital that workspace users could interact with these Apps seamlessly, with on-demand access to the appropriate compute resources to run their analyses.

These Apps, or any other third-party Apps brought to the platform, can be used together with the Aridhia-provided tools on the DRE. They are all connected to the same secure, audited environment, where all files and data are safely stored and only invited members have access to them. Since the tools run on the same infrastructure and are connected to the same filesystem and database, it is easy to move from one tool to another, using the strengths of each to streamline the analysis. Researchers in the same workspace can use Jupyter Notebook to work in Python and RStudio to work in R, depending on their preferred language. Users can also add their own Apps written in R Shiny to their workspaces. Aridhia provides a selection of pre-written Shiny Apps that users can upload to their workspaces without having to write the code themselves; they can be found here, along with instructions on how to add them to a workspace.

With excellent collaboration between our development teams, we provided the capability for Roche and their partners to use the DRE to deliver their custom Apps on the AD Workbench to over 200 users from around 45 organisations in 18 countries. All the planning and collaboration between the teams involved paid off: the hackathon ran smoothly, with no technical issues reported by users. In fact, the Apps worked so well that ADDI now provides access to the three NTK Apps to all their users across the AD Workbench, not just the workspaces related to the NTK hackathon, so that everyone can benefit from them.

Powered by the DRE

The Aridhia DRE is the ideal platform to host such a hackathon. Participants benefited from a secure, audited environment, access to the NTK suite of Apps, and the Aridhia-provided Apps (the RStudio and Jupyter Notebook services). Each member of a workspace can have access to cloud compute on demand, without the need for expensive dedicated virtual machines. Each App a user runs is launched in its own pod with its own dedicated compute, so there is no competition for the resources needed to run their analysis. Because pods run on nodes shared across all workspaces in the same cloud region, cost is kept to a minimum and the compute infrastructure scales only when required, removing the need for each user to have an individual virtual machine. You can read more about how the shared compute setup works here.

Pulling this off was a great collaborative effort across many teams from different companies, helped by excellent communication and a drive towards a common goal. We look forward to supporting more hackathons and challenges like this in the future, and will keep working to provide all our customers with the tools they need to make them a success.