Ref. no.: LP/SDE/ZD/10
Data Engineer Responsibilities:
- Create and maintain data pipelines that automate data ingestion from multiple data sources.
- Build the data platform capabilities required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement solutions that automate manual processes and optimize data delivery.
- Work with stakeholders, including the Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Requirements:
- Experience with data pipeline, ETL, and workflow management tools such as Apache Airflow, AWS Data Pipeline, AWS Glue, Talend, Azkaban, and Luigi.
- Experience with data warehouse solutions such as Redshift and Snowflake.
- Experience with AWS cloud services: S3, Athena, RDS, EC2, EMR, and Lambda.
- Experience with object-oriented and functional scripting languages: Python, Java, Scala, etc.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Advanced working SQL knowledge and experience with relational and NoSQL databases, including Postgres and DynamoDB.
- Experience with Kubernetes and EKS.
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data ingestion issues.
- Strong analytic skills related to working with structured, semi-structured, and unstructured datasets.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable big data stores.
- Experience supporting and working with cross-functional teams in a dynamic environment.