Your key responsibilities
- Design, deploy, and manage cloud-native solutions using AWS services such as Glue, Lambda, S3, DynamoDB, Lake Formation, Athena, Redshift, and RDS
- Build scalable data lakes using S3 and Iceberg, ensuring secure and governed access to data for analytics and reporting
- Implement event-driven workflows using Lambda, SNS, and EventBridge for automation and real-time processing
- Develop APIs and integrate services to support backend systems and client applications
- Monitor, troubleshoot, and optimize infrastructure using CloudWatch and Grafana dashboards
- Collaborate with cross-functional teams to define best practices, ensure security compliance, and align with business objectives
- Automate deployments using CI/CD pipelines and Infrastructure as Code (IaC) frameworks
- Mentor junior engineers and support knowledge-sharing across teams
Skills and attributes for success
- Hands-on experience designing and implementing AWS cloud infrastructure at scale
- Expertise in building, governing, and analyzing data lakes using AWS Lake Formation, S3, Athena, and Iceberg
- Strong understanding of serverless architectures and event-driven workflows using Lambda, SNS, and EventBridge
- Strong knowledge of data warehousing using Amazon Redshift and relational database design using Amazon RDS
- Proficiency in scripting languages like Python or Bash to automate cloud operations and workflows
- Experience developing APIs and integrating distributed systems in cloud environments
- Strong knowledge of DynamoDB, data modeling, and query optimization for NoSQL data stores
- Experience in configuring and monitoring cloud infrastructure using CloudWatch and Grafana
- Working knowledge of CI/CD pipelines, DevOps practices, and tools like Terraform, CloudFormation, or AWS CDK
- Understanding of storage formats such as Parquet, ORC, and Avro for efficient querying and compression
- Familiarity with security frameworks, IAM policies, encryption, and data protection best practices
- Analytical mindset with strong troubleshooting and performance tuning capabilities
- Ability to work with structured and unstructured data in batch and streaming pipelines
- Demonstrated problem-solving skills, adaptability, and eagerness to learn new technologies
To qualify for the role, you must have
- A degree in computer science or an equivalent qualification, with 5-10 years of industry experience
- Working experience in an Agile-based delivery methodology (preferred)
- A flexible, proactive, and self-motivated working style, with strong personal ownership of problem resolution
- Excellent communication skills, written and verbal, both formal and informal
- Willingness to participate in all aspects of the data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
Ideally, you’ll also have
- Experience with deployment processes and containerization
What we look for
- People with technical experience and enthusiasm to learn new things in this fast-moving environment
You will work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to steer your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.