Job Description
We are seeking a Data Engineer to help drive our data-powered analytical revolution.
In this role, you will be responsible for designing and developing highly efficient data
pipelines that seamlessly extract, transform, and load data from a diverse array of
sources. You will work closely with our Business Intelligence and Data Science teams
to deliver high-quality, actionable insights to support our TA initiatives.
Responsibilities:
Design and develop highly efficient data pipelines that seamlessly extract,
transform, and load data from a diverse array of sources using SQL, Python, and
AWS big data technologies.
Oversee and continuously enhance production operations, optimizing data
delivery, redesigning infrastructure for greater scalability, managing code
deployments, addressing bugs, and coordinating overall release management.
Establish and uphold best practices for the design, development, and support of
data integration solutions, including comprehensive documentation.
Partner with stakeholders across the business to deliver high-impact insights.
Demonstrate proficiency in reading, writing, and debugging data processing and
orchestration code in Python/Scala, adhering to the highest coding standards
(e.g., version control, code review, etc.).
Basic Qualifications:
Bachelor's degree in Computer Science, Data Science, Engineering, or a related
technical field
3-5 years of experience in data engineering or a similar role
Hands-on experience designing, developing, and maintaining data pipelines and
data integration solutions
Proficient in SQL, Python, and AWS big data technologies (e.g., EMR, Glue,
Athena, Redshift)
Strong understanding of data architecture, data modeling, and ETL/ELT
processes
Experience building and operating highly available, distributed systems for
extracting, ingesting, and processing large data sets
Experience with version control systems (e.g., Git) and code review best practices
Exposure to building scalable and fault-tolerant data processing systems
Preferred Qualifications:
Advanced experience with cloud-native data engineering tools and platforms
(e.g., Databricks, Snowflake, Kafka, Kinesis)
Proficiency in writing high-performance, scalable, and maintainable code (e.g.,
using design patterns, unit testing, refactoring)
Familiarity with data streaming and real-time data processing frameworks
Exposure to machine learning and artificial intelligence techniques for data-
driven insights
Proven track record of leading data engineering projects from inception to
delivery
Experience in mentoring and training junior data engineers
Demonstrated ability to identify and implement process improvements, optimize
data pipelines, and enhance overall data infrastructure
An interest in new technologies and a willingness to explore new tools and
approaches