Ab Initio Data Engineer
Technical Stack:
• Ab Initio 4.0.x software suite – Co>Op, GDE, EME, BRE, Conduct>It, Express>It, Metadata>Hub, Query>It, Control>Center, Easy>Graph
• Big Data – Cloudera Hadoop, Hive, YARN
• Databases – Oracle 11g/12c, Teradata, MongoDB, Snowflake
• Others – JIRA, ServiceNow, Linux, SQL Developer, AutoSys, and Microsoft Office
Responsibilities:
• Design and build Ab Initio graphs (both continuous and batch) and Conduct>It plans, and integrate them with the broader portfolio of Ab Initio software.
• Build web-service and RESTful graphs and create RAML or Swagger documentation.
• Thorough understanding of the Metadata>Hub metamodel and the ability to analyze it.
• Strong hands-on skills in multifile-system-level programming, debugging, and optimization.
• Hands-on experience developing complex ETL applications.
• Good knowledge of RDBMS – Oracle, with the ability to write the complex SQL needed to investigate and analyze data issues.
• Strong UNIX shell and Perl scripting skills.
• Build graphs interfacing with heterogeneous data sources – Oracle, Snowflake, Hadoop, Hive, AWS S3.
• Build application configurations for Express>It frameworks – Acquire>It, Spec-To-Graph, Data Quality Assessment.
• Build automation pipelines for Continuous Integration & Delivery (CI/CD), leveraging the Testing Framework & JUnit modules and integrating with Jenkins, JIRA, and/or ServiceNow.
• Build Query>It data sources for cataloguing data from different sources.
• Parse XML, JSON & YAML documents including hierarchical models.
• Build and implement data acquisition and transformation/curation requirements in a data lake or warehouse environment, and demonstrate experience in leveraging various Ab Initio components.
• Build AutoSys or Control>Center jobs and schedules for process orchestration.
• Build BRE rulesets for reformat, rollup, and validation use cases.
• Build SQL scripts on the database, perform performance tuning and relational-model analysis, and carry out data migrations.
• Ability to identify performance bottlenecks in graphs, and optimize them.
• Ensure the Ab Initio code base is appropriately engineered to maintain current functionality, and that development adheres to performance-optimization and interoperability standards and requirements and complies with client IT governance policies.
• Build regression and functional test cases, and write user manuals for various projects.
• Conduct bug fixing, code reviews, and unit, functional, and integration testing.
• Participate in the agile development process, and document and communicate issues and bugs relative to data standards.
• Pair up with other data engineers to develop analytic applications leveraging Big Data technologies: Hadoop, NoSQL, and in-memory data grids.
• Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment.
• Perform other duties and/or special projects as assigned.
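The parsing responsibility above (XML, JSON, and YAML documents, including hierarchical models) can be illustrated with a short sketch. This is a generic Python example of flattening a hierarchical record before loading it into a warehouse or multifile system, not part of the Ab Initio stack itself; the record structure and field names are hypothetical.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical hierarchical JSON record, similar to documents an ETL
# graph might acquire before flattening for downstream processing.
raw = '{"customer": {"id": 42, "accounts": [{"type": "savings", "balance": 100.0}]}}'
doc = json.loads(raw)

def flatten(node, prefix=""):
    """Walk nested dicts/lists, yielding (dotted-path, value) pairs."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from flatten(value, f"{prefix}{key}.")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            yield from flatten(value, f"{prefix}{i}.")
    else:
        yield prefix.rstrip("."), node

pairs = dict(flatten(doc))
# pairs == {"customer.id": 42, "customer.accounts.0.type": "savings",
#           "customer.accounts.0.balance": 100.0}

# The same idea applies to XML: navigate element paths to extract fields.
xml_doc = ET.fromstring("<customer><id>42</id></customer>")
customer_id = xml_doc.findtext("id")  # "42"
```

The flattened (path, value) pairs map naturally onto the columnar records a curation graph would write to a data lake or warehouse table.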
Qualifications:
• Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) and a minimum of 5 years of experience
• Minimum 5 years of extensive experience in design, build and deployment of Ab Initio-based applications
• Expertise in handling complex large-scale Data Lake and Warehouse environments
• Hands-on experience writing complex SQL queries, exporting and importing large amounts of data using utilities
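The SQL qualification above can be illustrated with a minimal, self-contained sketch. It uses Python's built-in SQLite in place of Oracle or Teradata purely so the example runs standalone; the table, columns, and threshold are hypothetical, but the shape (aggregate, filter on the aggregate, order the result) is typical of the data-investigation queries described.

```python
import sqlite3

# In-memory SQLite stands in for Oracle/Teradata in this sketch;
# table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10, 50.0), (2, 10, 75.0), (3, 11, 20.0);
""")

# A typical investigation query: per-customer order counts and totals,
# keeping only customers whose aggregate spend exceeds a threshold.
rows = conn.execute("""
    SELECT customer_id, COUNT(*) AS n_orders, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
    HAVING SUM(amount) > 60
    ORDER BY total DESC
""").fetchall()
# rows == [(10, 2, 125.0)]
```

Exporting such a result to a flat file (e.g. via `csv.writer`) mirrors the bulk export/import work the qualification mentions.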
Education:
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Time Type:
Full time