Staff Software Engineer
- Designs and develops batch data pipelines using big data technologies.
- Designs and develops real-time data pipelines using Kafka and Spark Streaming applications.
- Builds real-time applications using RESTful APIs, design patterns, and microservice architecture.
- Works on GCP and Azure cloud technologies for data storage, deployment, and application cluster management, ensuring scalability and availability.
- Builds projects using CI/CD tools and performs unit testing of applications.
- Builds ad hoc tools using scripting languages as required for the project.
- Implements new architectural patterns and performs design and code reviews of changes.
- Monitors production systems, ensures minimal downtime, and works with the DevOps team to resolve issues.
- Troubleshoots production issues by reviewing and analyzing information such as business impact, root cause, and criticality.
- Enhances designs to prevent recurrence of defects.
- Creates and maintains documentation for code, design, architecture, and processes.
- Builds processes and creates tools to ensure regression and stress testing are performed before any production rollout.
- Works on proofs of concept and implements prototypes to validate ideas.
- Identifies performance standards, measures progress, and adjusts performance accordingly.
Master's degree or equivalent in computer science, computer engineering, computer information systems, software engineering, or a related area and 2 years of experience in software engineering or a related area; or Bachelor's degree or equivalent in computer science, computer engineering, computer information systems, software engineering, or a related area and 4 years of experience in software engineering or a related area.
- Experience designing and developing data pipelines using big data technologies including Hadoop, Hive, Spark, Kafka, ORC, and Parquet.
- Experience coding in Java and shell scripting.
- Experience coding in Scala and Python.
- Experience working with Apache Airflow, creating DAGs in Python to orchestrate data pipeline jobs.
- Experience designing and writing SQL queries and creating views, indexes, and stored procedures in an RDBMS.
- Experience working with CI/CD and log aggregation tools including Git, Jenkins, and Splunk.
- Experience designing and developing REST APIs and web pages using Spring, Hibernate, and Java technology.
- Experience working with GCP cloud technology to onboard and execute data pipeline jobs.
- Experience writing unit tests using JUnit, and automating regression and performance testing using shell scripts and Python.
- Experience designing and storing data in NoSQL databases including Cassandra.
- Experience using tools including JIRA and Confluence for sprint planning and documentation.
Employer will accept any amount of experience with the required skills.
Eligibility requirements apply to some benefits and may depend on your job classification and length of employment. Benefits are subject to change and may be subject to specific plan or program terms. For information about benefits and eligibility, see One.Walmart.com.