We are seeking an experienced Senior Data Engineer with a strong background in designing and building scalable data architectures. You will play a key role in creating and optimising our data pipelines, improving data flow across our organisation, and working closely with cross-functional teams to ensure data accessibility and quality. This role requires deep knowledge of the Big Data ecosystem and data lake concepts, as well as hands-on expertise in modern big data technologies such as advanced SQL, Spark, Flink, Trino, Iceberg, and Snowflake.

Data Pipeline Development:
* Design, build, and maintain scalable ELT processes using Spark, Flink, Snowflake, and other big data frameworks (a minimal pipeline sketch appears at the end of this posting).
* Implement robust, high-performance data pipelines in cloud environments.
* Deep, hands-on knowledge of at least one programming language such as Python, Java, or Scala.
* Advanced SQL skills and familiarity with BI/analytics platforms (see the window-function sketch below).

Data Lake & Data Warehouse Architecture:
* Develop and maintain efficient data lake solutions.
* Ensure data lake reliability, consistency, and cost-effectiveness.
* Develop data models and schemas optimised for performance and scalability.
* Experience with modern data warehouse and lakehouse technologies such as Iceberg and Snowflake.

Orchestration & CI/CD:
* Comfortable with basic DevOps principles and tools for CI/CD (Jenkins, GitLab CI, or GitHub Actions).
* Familiar with containerisation and orchestration tools (Docker, Kubernetes).
* Familiarity with Infrastructure as Code (Terraform, CloudFormation) is a plus.

Performance Tuning & Optimisation:
* Identify bottlenecks, optimise processes, and improve overall system performance (one common technique is sketched below).
* Monitor job performance, troubleshoot issues, and refine long-term solutions for system efficiency.

Collaboration & Mentorship:
* Work closely with data scientists, analysts, and stakeholders to understand data needs and deliver solutions.
* Mentor and guide junior data engineers on best practices and cutting-edge technologies.
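To give candidates a concrete sense of the day-to-day work, here is a minimal sketch of a Spark ELT job of the kind described under Data Pipeline Development. It is illustrative only: the storage paths, table, and column names are hypothetical, and it assumes a working PySpark installation with cloud storage credentials already configured.

```python
# Minimal Spark ELT sketch. All paths and column names are hypothetical
# examples, not details taken from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_elt_sketch").getOrCreate()

# Load: read raw JSON events as-is from cloud storage.
raw = spark.read.json("s3a://raw-bucket/orders/")

# Transform: deduplicate, drop bad records, derive a date partition column.
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
)

# Write partitioned output for downstream consumers (Trino, BI tools, etc.).
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3a://curated-bucket/orders/"))

spark.stop()
```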
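The "advanced SQL" expectation typically means fluency with constructs such as window functions. The sketch below ranks each customer's orders by recency using Spark SQL; the table and its rows are invented purely for illustration.

```python
# Window-function example: keep each customer's most recent order.
# The "orders" view and its rows are hypothetical sample data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_sketch").getOrCreate()

spark.createDataFrame(
    [(1, "a", "2024-01-01", 10.0),
     (2, "a", "2024-02-01", 20.0),
     (3, "b", "2024-01-15", 5.0)],
    ["order_id", "customer_id", "created_at", "amount"],
).createOrReplaceTempView("orders")

latest = spark.sql("""
    SELECT order_id, customer_id, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY created_at DESC) AS rn
        FROM orders
    )
    WHERE rn = 1
""")
latest.show()
```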
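As one example of the performance tuning mentioned above, the sketch below shows a common Spark optimisation: broadcasting a small dimension table so a join avoids shuffling the large side. The DataFrames here are synthetic stand-ins.

```python
# Broadcast-join sketch: replace a shuffle-heavy sort-merge join with a
# broadcast hash join when one side is small. Data here is synthetic.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

facts = spark.range(1_000_000).withColumn("country_id", F.col("id") % 3)
dims = spark.createDataFrame(
    [(0, "US"), (1, "IN"), (2, "DE")], ["country_id", "name"]
)

# Hint Spark to broadcast the small side; the physical plan should then
# show BroadcastHashJoin instead of SortMergeJoin.
joined = facts.join(F.broadcast(dims), "country_id")
joined.explain()
```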