You will be part of a group of talented engineers and scientists who build data products and services that turn financial data into insights using advanced analytics and machine learning. You will be responsible for architecting high-performance, scalable, and cost-efficient big data processing and storage solutions, automating infrastructure with CI/CD principles, and optimizing data models to drive efficient ETL processes. You will interface with several key services and APIs, writing code and building scalable applications that extract, process, and ingest unstructured data in streaming, batch, and asynchronous fashion.

Your role will also focus on data governance: ensuring compliance with policies and implementing access controls, encryption, retention, and audit mechanisms. Additionally, you will drive continuous improvement by automating processes and adopting the latest data engineering technologies. Collaboration with Business Intelligence Engineers (BIEs), Data Scientists, Software Development Engineers (SDEs), Product Managers (PMs), and Finance Managers will be key to delivering tailored data solutions.

In this position, your attention to detail and dedication to high-quality, well-documented data products will empower stakeholders to make better, data-driven decisions.
Key job responsibilities
- Collaborate cross-functionally with BIEs, Data Scientists, PMs, and Finance Managers to understand data requirements and deliver customized data solutions.
- Automate infrastructure deployment with CI/CD pipelines, ensuring streamlined processes for release and maintenance.
- Ensure data quality through robust validation, cleansing, and deduplication techniques.
- Implement data governance standards, including access control, encryption, data retention, deletion policies, and audit mechanisms to ensure compliance and security.
- Continuously improve and optimize data pipelines and infrastructure, staying up to date with emerging technologies and implementing automation and monitoring tools.
- Build a scalable and reliable data platform that supports analytics and financial planning through intuitive, self-service data products.
- Write high-quality code and build scalable applications that interface with critical services and APIs to extract and process unstructured data.
- Work with a range of data technologies, including Python, EMR, Spark, Airflow, and AWS data services such as Glue, Athena, and Redshift, to create end-to-end pipelines that consolidate data from disparate systems.
Basic qualifications
- Bachelor's degree
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Knowledge of professional software engineering and best practices for the full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployment, testing, and operational excellence
- Knowledge of distributed systems as they pertain to data storage and computing
- Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS