If you are looking for a Data Engineering role where you can apply and further develop your skills by designing, creating, and maintaining data processing pipelines, then this could be the role for you!
As a Data Engineer at Aridhia you will support customers in using automation to make their data more findable, accessible, interoperable, and reusable (FAIR data).
This role requires technical and analytical skills to enable hospitals, pharmaceutical companies, and universities to provide biomedical researchers with access to the best data for their projects, while respecting patient privacy, consent, and confidentiality. You will contribute to teams that include members from multiple international customer and partner organisations.
Working with Aridhia is about more than just a job; it is a chance to make a real difference to the world. Our customers are conducting important research into diseases including Alzheimer's disease, cancer, and COVID-19, and you will be supporting them.
What you’ll be doing
- Contribute to client projects to build project-specific solutions.
- Manage and maintain the curation process pipelines for clients to integrate their data into the Aridhia DRE platform.
- Facilitate the data transfer process through integration with various internal and external API endpoints.
- Assist in the deployment of customer tools and applications to the Aridhia DRE platform.
- Provide feedback on the user experience of the Aridhia DRE platform from a Data Engineer’s perspective to drive continual product improvement.
- Working hours: Monday to Friday; start time between 08:00 and 10:00, finish time between 16:00 and 18:30.
What you’ll bring
- Versatility and experience as a problem-solving data engineer, with expertise in writing advanced SQL (we use PostgreSQL).
- A track record of delivering data solutions to customers.
- Expertise in building and maintaining reliable data processing pipelines.
- Experience in at least one other programming language for data engineering and data science (Python or R).
- Experience with basic Linux command line tools and Shell scripts.
- Demonstrable ability to work with a variety of data infrastructure, including transactional and analytical databases and data APIs.
- Experience with version control software (Git).
- Experience adapting and designing data models/schemas.
- Evidence of testing and quality analysis of data integration solutions.
- Experience in Azure Cloud Services (or other similar cloud platforms).
- Experience with healthcare data schemas such as FHIR and OMOP.
- Experience with Camunda.
- Experience with GraphQL.
- Experience applying machine learning for data engineering tasks.
- Experience with a variety of structured data formats (CSV, JSON, XML, images).