Lead and mentor a team of data engineers in developing and deploying scalable, robust data solutions. Ensure the delivery of high-quality data pipelines and systems to meet business requirements and enable growth. Focus areas include advanced data batch processing, real-time pipelines, data governance, and architectural innovation.
Responsibilities:
- Lead the design, development, and deployment of distributed data pipelines to support high data rates.
- Create and review HLDs based on PRDs composed by Product Owners.
- Create and review SDDs based on HLDs composed by Architects or other Senior Developers.
- Ensure accurate, production-ready delivery of data solutions.
- Take full responsibility for the QA of developed features, writing and running comprehensive tests.
- Lead and actively participate in all Scrum ceremonies, providing precise estimates and progress updates:
- Own and estimate story points and time.
- Ensure the team delivers sprint goals.
- Drive backlog refinement and story creation.
- Provide detailed status and progress throughout the sprint.
- Collaborate with other teams and stakeholders to ensure timely, quality delivery of commitments.
- Serve as the primary gatekeeper for services, leading efforts to debug and troubleshoot development and production issues.
- Work closely with team members and stakeholders to understand business requirements and provide guidance for developed products and technologies.
- Proactively own the entire data pipeline for specific domains, ensuring its effectiveness and efficiency.
- Lead the group effort in building innovative architecture.
- Stay updated with the latest technologies in data engineering and suggest innovative approaches to meet business needs.
- Mentor and support team members, fostering a culture of continuous improvement and professional growth.
Requirements:
- Passionate about data engineering, quality, automation, and efficiency, with a self-starter attitude and strong problem-solving skills.
- B.Sc. in Computer Science, Statistics, Mathematics, or a related field, with strong analytical skills.
- 7+ years of experience as a Data Engineer or Java/Scala developer.
- 5+ years of experience writing Spark-based batch or streaming data applications (Spark Streaming, Structured Streaming, Spark SQL).
- Deep understanding of modern software development practices (Scrum, unit testing, source control, CI/CD).
- Extensive experience with SQL, SQL-like, and NoSQL databases.
- Proficient in the Hadoop ecosystem (HDFS, ZooKeeper, YARN, ORC, Parquet, Hive) and related technologies.
- 5+ years of experience with Kafka.
- Strong Linux knowledge with extensive bash scripting experience.
- Advanced English level (C1).
Advantages:
- Advanced expertise in parallel processing algorithms and techniques.
- Deep knowledge of Big Data topics and distributed computing.
- Proficiency in Kubernetes (K8s), Docker, Delta Lake, Data Mesh, Aerospike, Airflow, Redis, Vertica.
- Extensive experience with data warehousing concepts and systems.
- Proficiency in Python.
What does it mean to work at Playtika?
You’ll join a team of leaders in the field and enjoy amazing benefits, some of which are listed below:
- A competitive salary and performance-based bonuses;
- Hybrid working mode: two days per week from our office in the heart of Warsaw (Browary Warszawskie), and three days from anywhere;
- All you can eat! Breakfast, lunches, desserts, snacks, and much more in our Playtika-only cafeteria;
- Access to PlaytiCafe where all of your coffee (and other refreshments) dreams come true;
- Six “Power Up” long weekends for all, plus an additional day off in your birthday month;
- Private medical healthcare and three additional sick leave days;
- A wellness program in the office: yoga classes, massage chairs, and a Zerobody room;
- Gaming room with a variety of activities;
- Flexible working hours and monthly happy hours;
- Work permit assistance for employees;
- Corporate celebrations, team buildings, and fun activities.