Job responsibilities
- Executes creative software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Develops secure, high-quality production code, and reviews and debugs code written by others
- Identifies opportunities to eliminate recurring issues or automate their remediation to improve the overall operational stability of software applications and systems
- Leads evaluation sessions with external vendors, startups, and internal teams to drive outcome-oriented assessment of architectural designs, technical credentials, and applicability within existing systems and information architecture
- Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies
- Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
- 5+ years of experience working in a big data environment with AWS, Java, Python, Spark, and Scala
- Hands-on experience delivering system design, application development, testing, and operational stability
- Proficient in one or more object-oriented programming languages, with expertise in languages such as Scala, Java, and Python
- Experience with Apache Spark for large-scale data processing
- Proficient in application, data, and infrastructure architecture disciplines
- Proficient in cloud-native architecture, design, and implementation across all systems
- Proficient in event-driven architecture
- Proficient in application containerization
- Proficient in building applications on public cloud platforms (AWS, GCP, Azure), with AWS experience strongly preferred
- Proficient in building applications for real-time streaming using technologies such as Apache Spark Streaming, Apache Kafka, and Amazon Kinesis
- Proficient in building on emerging serverless managed cloud services to minimize or eliminate the physical and virtual server footprint
Preferred qualifications, capabilities, and skills
- Proficient in designing and developing data pipelines using Databricks Lakehouse to ingest, enrich, and validate data from multiple sources
- Proficient in re-engineering on-premises data solutions and migrating them to the public cloud
- Proficient in implementing security solutions for data storage and processing in the public cloud
- Strong understanding of traditional big data systems such as Hadoop, Impala, Sqoop, Oozie, Cassandra, Hive, and HBase