Senior Data Engineer (Kafka) - Datafin IT Recruitment
Cape Town, Western Cape
Date Created: 1 day ago
Job Type: Permanent
Salary: Market Related
Remote Job
- Design, implement, and maintain robust data pipelines, ensuring the efficient and reliable flow of data across systems.
- Develop and maintain Elasticsearch clusters, fine-tuning them for high performance and scalability.
- Collaborate with cross-functional teams to extract, transform, and load (ETL) data into Elasticsearch for advanced analytics and search capabilities.
- Troubleshoot data pipeline and Elasticsearch issues, ensuring the integrity and availability of data for analytics and reporting.
- Participate in the design and development of data models and schemas to support business requirements.
- Continuously monitor and optimise data pipeline and Elasticsearch performance to meet growing data demands.
- Collaborate with Data Scientists and Analysts to enable efficient data access and query performance.
- Contribute to the evaluation and implementation of new technologies and tools that enhance Data Engineering capabilities.
- Demonstrate strong analytical, problem-solving, and troubleshooting skills to address data-related challenges.
- Collaborate effectively with team members and stakeholders to ensure data infrastructure aligns with business needs.
- Embody the company values of playing to win, putting people over everything, driving results, pursuing knowledge, and working together.
- Implement standards, conventions, and best practices.
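The responsibilities above revolve around moving data from Kafka into Elasticsearch via ETL. As a rough illustration only (the index and field names below are hypothetical, not taken from the posting), the transform step of such a pipeline might shape a raw Kafka message into an Elasticsearch bulk-index action like this:

```python
import json

def to_bulk_action(message_value: bytes, index: str = "events") -> dict:
    """Parse a raw Kafka message value (JSON bytes) and shape it into an
    Elasticsearch bulk-index action. Field names here are hypothetical
    examples, not a prescribed schema."""
    record = json.loads(message_value)
    return {
        "_index": index,
        "_id": record["event_id"],  # assumes the producer supplies a stable ID
        "_source": {
            "user": record.get("user"),
            "event_type": record.get("type"),
            "timestamp": record.get("ts"),
        },
    }

# In a real pipeline, actions like this would be consumed from a Kafka
# topic and fed to elasticsearch.helpers.bulk() in batches.
action = to_bulk_action(
    b'{"event_id": "42", "user": "amy", "type": "login", "ts": "2024-01-01T00:00:00Z"}'
)
```

Keeping the transform a pure function like this makes the end-to-end testing mentioned in the requirements straightforward: it can be unit-tested without a live broker or cluster.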
- Proven experience in designing and implementing data pipelines.
- Experience with End-to-End Testing of analytics pipelines.
- Expertise in managing and optimising Elasticsearch clusters, including performance tuning and scalability.
- Strong proficiency with data extraction, transformation, and loading (ETL) processes.
- Familiarity with data modeling and schema design for efficient data storage and retrieval.
- Good programming and scripting skills using languages like Python, Scala, or Java.
- Knowledge of DevOps and automation practices related to Data Engineering.
- Kafka / ksqlDB
- Python
- Redis
- Elasticsearch, cluster management and optimisation
- AWS S3
- PostgreSQL
- AWS
- Experience with Data Engineering in an Agile / Scrum environment.
- Familiarity with ksqlDB / Kafka or other stream processing frameworks.
- Familiarity with Data Lakes and how to query them.
- Experience with integrating Machine Learning models into data pipelines.
- Familiarity with other data-related technologies and tools.
- Strong analytical and problem-solving abilities, with a keen attention to detail.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- A commitment to staying up to date with the latest developments in Data Engineering and technology.
- Alignment with company values and a dedication to driving positive change through data.
While we would really like to respond to every application, should you not be contacted for this position within 10 working days, please consider your application unsuccessful.
By applying to a job using RecruitmentPartner, you are agreeing to comply with and be subject to RecruitmentPartner Terms for use of our website.