You will have opportunities to work on solutions that accelerate learning, capability and value in the work you do.
Your working life with Layer 9 is based on three areas: our customers, your knowledge, and our services and solutions.
Layer 9 has a real passion for turning data into a valuable resource, and you will work alongside like-minded people with a diverse set of experiences and capabilities.
Once you have submitted your application, we will contact you to request a CV if you haven't already sent us one.
We follow this interview process:
A five-minute introduction call

We have run several internship programmes since 2018, ranging from four weeks to four months in length, depending on the subject and roles we are working with at the time.
We have worked with universities, supporting graduates in outreach to businesses, developing projects, and supporting final dissertations (undergraduate and postgraduate).
Check back here for future programmes.
We often work with experienced associates where a project or customer would benefit from additional expertise.
Please contact us if you would like to work with us, or if you have a project or customer with which you would like Layer 9 to collaborate.
We are currently recruiting across a number of roles to support the development of our Microsoft capability.
Candidates we would like to speak to will ideally have a particular interest in:
We are looking for a Spark developer who knows how to fully exploit the potential of a Spark cluster.
You will clean, transform, and analyse vast amounts of raw data from various systems using Spark to provide ready-to-use data to feature developers and business analysts.
This involves both ad-hoc requests and data pipelines that are embedded in a production environment.
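To give a flavour of this kind of work, here is a minimal sketch of the sort of Spark job you might write: it cleans raw event data and produces a daily aggregate for downstream use. The paths, schema and column names are hypothetical and purely illustrative.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventAggregation {
  /** Cleans raw events and produces daily counts per event type.
    * All paths and column names here are hypothetical.
    */
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-event-aggregation")
      .getOrCreate()

    // Read raw, semi-structured input (hypothetical location).
    val raw = spark.read.json("/data/raw/events/")

    // Clean: drop malformed rows and derive a date column.
    val cleaned = raw
      .filter(col("event_id").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))

    // Aggregate: daily counts per event type, ready for analysts.
    val daily = cleaned
      .groupBy(col("event_date"), col("event_type"))
      .agg(count("*").as("events"))

    // Write a partitioned, ready-to-use dataset.
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("/data/curated/daily_events/")

    spark.stop()
  }
}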
Responsibilities
Create Scala/Spark jobs for data transformation and aggregation
Produce unit tests for Spark transformations and helper methods (see the sketch after this list)
Write Scaladoc-style documentation for all code
Design data processing pipelines
Develop complex data aggregations using SQL and SPARQL
Work with structured, semi-structured and unstructured data
Work with cross-functional teams (Data Science/Analytics) across multiple databases
Engage in client-facing work with key stakeholders at a range of organisations
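As an illustration of the unit-testing responsibility above, here is a minimal sketch assuming ScalaTest and a local SparkSession; the transformation under test and all names are hypothetical.

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col
import org.scalatest.funsuite.AnyFunSuite

/** Hypothetical transformation under test: drop rows with a null event_id. */
object Transformations {
  def dropMalformed(df: DataFrame): DataFrame =
    df.filter(col("event_id").isNotNull)
}

class TransformationsSuite extends AnyFunSuite {
  // A local[*] session is enough for fast, cluster-free unit tests.
  private lazy val spark = SparkSession.builder()
    .master("local[*]")
    .appName("transformations-suite")
    .getOrCreate()

  test("dropMalformed removes rows with a null event_id") {
    import spark.implicits._
    val input = Seq("a", null, "b").toDF("event_id")
    assert(Transformations.dropMalformed(input).count() == 2L)
  }
}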
Skills
Experience working with Cloud technologies (Azure, AWS)
Scala/Spark – Spark query tuning and performance optimisation (see the sketch below)
SQL database integration (Microsoft SQL Server, Oracle, Postgres, and/or MySQL)
Experience working with Cosmos DB, DynamoDB, or Neo4j
Experience of distributed systems
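To illustrate the query tuning mentioned above, a minimal sketch that hints a broadcast join for a small lookup table and inspects the resulting plan; the table paths and join key are hypothetical.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object JoinTuning {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("join-tuning")
      .getOrCreate()

    val facts = spark.read.parquet("/data/facts") // large fact table (hypothetical)
    val dims  = spark.read.parquet("/data/dims")  // small lookup table (hypothetical)

    // Broadcasting the small side avoids shuffling the large table.
    val joined = facts.join(broadcast(dims), Seq("key"))

    // explain(true) prints the parsed, analysed, optimised and physical plans,
    // so you can confirm a broadcast hash join was chosen.
    joined.explain(true)

    spark.stop()
  }
}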
If you believe you bring passion and expertise but a suitable vacancy is not advertised, we are happy to have a discussion about what is possible.