Job type: Contract, full-time

Location: London

£500 per day + expenses

Sector: Technology

Job description

Create and maintain optimal data pipeline architecture.

Assemble large, complex data sets that meet functional / non-functional business requirements.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.

Build analytics tools that utilise the data pipeline to provide actionable insights into customer operational efficiency and other key business performance metrics.

Work with stakeholders including the Project, Product, Data and Design teams to assist with data-related technical issues and support their data needs.

Work with data and analytics experts to strive for greater functionality in our data systems.

Create data tools for analytics and data operational team members that assist them in building and optimizing our tools and products.

Requirements

Strong programming skills in Python.

Experience with big data tools: Hadoop, Spark, Hive, Kafka, etc.

Experience with relational SQL databases, particularly Oracle.

Experience with data pipeline and workflow management tools

Experience with Linux OS and Shell scripting

Experience with stream-processing systems: Spark Streaming, etc.

Experience with object-oriented and functional scripting languages



Applications for this position have closed