In this role, you’ll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role and Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.
Your primary responsibilities include:
- Strategic Data Model Design and ETL Optimization: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Robust Data Infrastructure Management: Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Seamless Data Accessibility and Security Coordination: Coordinate data access and security so that data scientists and analysts can easily access data whenever they need to.
Required Technical and Professional Expertise
- Extensive experience in Python. Top priority: BigQuery and BigQuery transformations (stored procedures, BigQuery tuning, and query optimization techniques).
- Other Key Skills: Cloud Storage, Pub/Sub, Eventarc, Firestore, DataProc, Workflows, Dataflow, Cloud Run.
- Additional Tools: Cloud Workstations, Cloud Functions, Cloud Build, Cloud Composer. Version Control & CI/CD: Expertise in Git and an understanding of CI/CD workflows, Docker, and Terraform on GCP. APIs: Understanding of how APIs operate and the logic behind them, preferably using Python.
Preferred Technical and Professional Expertise
- Technical Development experience
- Demonstrated client interaction and excellent communication skills, both written and verbal
- Willing to work on a client-dictated schedule (day, mid, and night shifts) and location