Lead Data Engineer (12-month contract)

Momentum Group Limited


Date: 2 weeks ago
City: Centurion, Gauteng
Contract type: Contractor
Momentum Investments is a leading financial services provider dedicated to helping people achieve their financial goals and secure their futures. With a focus on innovation, integrity, and customer-centric solutions, we strive to empower individuals and businesses to navigate the complexities of the financial landscape with confidence.

Role Purpose

As the Lead Data Engineer at Momentum Investments, you will play a pivotal role in driving our data engineering initiatives forward, leading the design, development, and optimization of our data warehouse solutions and serving as the technical lead of the SQL data engineering team. Leading a team of skilled SQL developers, you will oversee the implementation of robust data models, ETL processes, data infrastructure, pipelines, and reporting solutions that support our organization's data-driven decision-making. Leveraging your expertise in data engineering best practices and data warehousing principles, you will collaborate closely with cross-functional teams to deliver high-quality data solutions that meet business requirements and drive strategic objectives.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Minimum of 8 years of experience in data engineering/SQL development roles.
  • Minimum of 3 years in a leadership role, including managing the performance, technical development, and delivery of other developers, with a proven track record of leading successful data engineering/SQL warehousing projects.
  • Minimum of 8 years of hands-on SQL development experience (experience in warehouse, data lake, or data vault development using SSIS, Python, and Transact-SQL will be considered here, not just fluency in SQL).
  • Minimum of 6 years of experience using Python for data processing, automation, and ETL/ELT pipeline development.
  • Expertise in SQL and Python development, query optimization, and performance tuning.
  • Proficiency in database technologies such as SQL Server, Oracle, and PostgreSQL.
  • Hands-on experience with ETL tools and processes such as SSIS is required.
  • Strong understanding of distributed computing principles and cloud platforms (e.g., AWS, Azure, GCP).
  • Strong understanding of data warehousing concepts, methodologies, and best practices.
  • Hands-on experience with data modeling and ETL/ELT processes.
  • Hands-on experience with AWS-native data tools (Glue, Parquet, etc.) would be advantageous.
  • Hands-on experience with ETL tools and processes such as Informatica or Talend would be advantageous.
  • Experience with big data technologies such as Hadoop, Spark, and Kafka would be advantageous.
  • Proficiency in languages such as C#, Java, or Scala would be advantageous
  • Certifications in cloud platforms or data engineering (e.g., AWS Certified Data Analytics) would be an advantage.
  • Excellent problem-solving skills and attention to detail, with a strong focus on delivering high-quality solutions.
  • Effective communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders.
  • Proven leadership abilities, with a track record of effectively leading and mentoring teams to achieve shared goals.
  • Proven ability to collaborate effectively with cross-functional teams and drive projects to successful completion.

Duties & Responsibilities

  • Lead and mentor a team of SQL warehouse developers in the design, development, and maintenance of data solutions, including scalable data pipelines and data infrastructure.
  • Manage the data development team's KPIs, technical performance, capacity allocation, and delivery cadence and timelines.
  • Manage the technical development of team members by planning training interventions to upskill them, facilitating on-the-job training, and monitoring their adherence to personal development plans.
  • Collaborate with the data architect, analysts, and other stakeholders to understand business requirements and translate them into technical solutions.
  • Perform expert-level SQL back-end development (not application development).
  • Implement data architecture designs, data models, schemas, and storage solutions that optimize performance, scalability, and reliability.
  • Deliver innovative solutions aligned with business needs and strategies.
  • Design and implement scalable and efficient ETL processes to extract, transform, and load data from various sources into the data lake (a brief illustrative sketch follows this list).
  • Optimize SQL queries, stored procedures, and database schemas for performance, reliability, and scalability.
  • Monitor, troubleshoot, and tune data pipelines and infrastructure to ensure optimal performance and reliability.
  • Automate deployment pipelines using GitLab CI/CD to streamline development and operational workflows.
  • Implement and enforce coding standards, best practices, and quality assurance processes to ensure the delivery of high-quality solutions.
  • Research and evaluate emerging technologies and tools, recommending them to the architect to continuously improve our data engineering capabilities and drive innovation.
  • Stay abreast of industry trends, best practices, and advancements in data engineering, contributing insights and recommendations to inform strategic decisions.
  • Provide technical guidance and mentorship to team members, fostering a culture of learning and professional development.
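
To give candidates a feel for the kind of ETL work described above, here is a minimal, illustrative Python sketch of an incremental extract-transform-load step from a SQL Server source into a Parquet-based data lake. Every name in it (the connection string, dbo.Trades, column names, and the lake path) is a hypothetical placeholder rather than Momentum's actual environment, and it assumes pandas, SQLAlchemy, pyodbc, and a Parquet engine such as pyarrow are available.

    import pandas as pd
    import sqlalchemy as sa

    # Hypothetical SQL Server connection; real credentials would come from a secret store.
    engine = sa.create_engine(
        "mssql+pyodbc://user:password@warehouse-host/InvestmentsDB"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

    def run_incremental_load(watermark: str) -> None:
        # Extract: pull only rows modified since the last successful run.
        trades = pd.read_sql(
            sa.text("SELECT * FROM dbo.Trades WHERE ModifiedAt > :wm"),
            engine,
            params={"wm": watermark},
        )
        # Transform: light cleanup and a partition column before landing the data.
        trades = trades.drop_duplicates(subset=["TradeId"])
        trades["TradeYear"] = pd.to_datetime(trades["TradeDate"]).dt.year
        # Load: write partitioned Parquet files into the lake directory.
        trades.to_parquet("lake/trades/", partition_cols=["TradeYear"])

    if __name__ == "__main__":
        run_incremental_load("2024-01-01T00:00:00")

In practice, the watermark would be persisted between runs and the job wrapped in monitoring and alerting, which is exactly the pipeline-reliability work this role owns.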

Competencies

  • Working with People
  • Presenting and Communicating Information
  • Applying Expertise and Technology
  • Analysing
  • Planning and Organising
  • Delivering Results and Meeting Customer Expectations
  • Following Instructions and Procedures
  • Coping with Pressures and Setbacks
  • Can work in a team
  • Able to multi-task
  • Out-of-the-box thinking in testing
  • Skilled in end-to-end testing
  • Sharing knowledge