
IBM Senior AI Back End Developer
India, Karnataka, Bengaluru 
731213800

04.09.2024

Your Role and Responsibilities

Key Responsibilities:

  • Solution Design: Lead the design of end-to-end AI solutions that encompass data acquisition, preprocessing, model selection, training, deployment, and maintenance.
  • Algorithm Development: Design, implement, and optimize machine learning algorithms and models that solve specific business problems, such as natural language processing, computer vision, recommendation systems, predictive analytics, etc.
  • Data Preprocessing: Clean, preprocess, and curate large datasets to ensure high-quality data for training and validation of AI models.
  • Model Training and Evaluation: Train and fine-tune AI models using appropriate techniques and frameworks. Evaluate model performance using relevant metrics and iterate on models to improve their accuracy and efficiency (a minimal sketch follows this list).
  • Architecture Design: Create high-level architecture designs for AI systems that consider scalability, performance, security, and integration with existing systems.
  • Technical Leadership: Provide guidance and mentorship to AI developers, data scientists, and engineers, ensuring adherence to best practices and architectural guidelines.
  • Technology Evaluation: Stay updated with emerging AI technologies, tools, and frameworks, and evaluate their suitability for solving specific business challenges.
  • Collaboration: Work closely with cross-functional teams, including data engineers, software developers, product managers, and domain experts, to align AI solutions with overall product development.
  • Proof of Concept: Develop prototypes and proof-of-concepts to demonstrate the feasibility of proposed AI solutions and gain buy-in from stakeholders.
  • Innovation: Explore innovative AI applications and propose ideas that can lead to new products, features, or enhancements.
  • Code Development: Write clean, efficient, and well-documented code using programming languages such as Python, and utilize libraries and frameworks like TensorFlow, PyTorch, scikit-learn, etc.
  • Ethical and Regulatory Compliance: Ensure that AI solutions meet ethical standards and regulatory requirements, particularly in areas such as data privacy, bias mitigation, and transparency.
  • Performance Optimization: Optimize AI models and system performance, including latency, throughput, and resource utilization, to meet business needs.
  • Documentation: Maintain clear documentation of the AI models, algorithms, development processes, and deployment procedures.
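
As a concrete illustration of the training-and-evaluation cycle described above, the sketch below builds a small scikit-learn pipeline on synthetic data. It is a minimal sketch only: the dataset, model choice, and metrics are illustrative assumptions, not part of this role's actual stack.

    # Minimal sketch of a preprocess -> train -> evaluate loop.
    # The synthetic dataset, model, and metrics are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for a cleaned, curated dataset.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Bundle preprocessing and the model so training is reproducible end to end.
    model = Pipeline([
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)

    # Evaluate with relevant metrics, then iterate on the model from here.
    preds = model.predict(X_test)
    print(f"accuracy={accuracy_score(y_test, preds):.3f} f1={f1_score(y_test, preds):.3f}")
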
To be successful, you will need:
  • Passion for tackling technical challenges, with a goal- and results-oriented mindset
  • Excellent communication skills and the ability to work with multiple teams
  • Proven listening, detail-oriented thinking, and creative problem-solving skills
  • Ability to work in a highly collaborative, global organization
  • Openness to a flexible schedule in a development and support environment
  • Agile development experience
What we look for:
  • Hands-on experience in AI with Python, Java, and Scala, and with libraries and frameworks like TensorFlow and PyTorch, strongly preferred
  • BE/B.Tech in Computer Science or a relevant field, and a 17 to 20+ year track record of architecture and development in a customer-facing role working with enterprise software


Required Technical and Professional Expertise

  • 7-8 years of experience developing enterprise applications using Java, Python, Scala, Spark, and related technologies, with 2+ years focused on Data Engineering, DataOps, and MLOps
  • Knowledge of data best practices and ML/Dev operations in SaaS and hybrid environments
  • Software development strategies for low-latency, high-throughput software
  • Hands-on experience with common distributed processing tools and languages such as Python, Spark, Hive, and Presto
  • Deep understanding of data pipelines, data modelling strategies, and schema management
  • Experience with specialized data architectures like data lakes and data mesh, and with optimizing data layouts for efficient processing.
  • Hands-on experience with streaming platforms and frameworks like Kafka and Spark Streaming (see the sketch after this list)
  • Strong understanding of advanced algorithms used in the design and development of enterprise-grade software
  • Strong understanding of data governance, data security, and data privacy best practices.
  • Strong expertise in working with distributed big data technologies and frameworks like Spark, Flink or Kafka.
  • Ability to manage and communicate data pipeline plans to internal clients
  • Familiarity with pipeline orchestrator tools like Argo, Kubeflow, Airflow, or other open-source alternatives
  • Familiarity with platforms like Kubernetes and experience building on top of their native capabilities
  • Excellent communication skills with the ability to effectively collaborate with technical and non-technical stakeholders
  • Experience with cloud-based data platforms and services (e.g., IBM, AWS, Azure, Google Cloud).
  • Ability to provide guidance to less experienced team members.
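
To make the streaming requirement above more tangible, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and emits windowed counts. The broker address, topic name, and window/watermark sizes are placeholder assumptions, not specifics of this role.

    # Illustrative Spark Structured Streaming job: Kafka in, windowed counts out.
    # Requires the spark-sql-kafka connector package on the Spark classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    # Read raw events from a hypothetical Kafka topic.
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
        .option("subscribe", "events")                         # placeholder topic
        .load()
    )

    # Count events per key over 5-minute windows; the watermark bounds late data.
    counts = (
        events.select(F.col("key").cast("string").alias("key"), F.col("timestamp"))
        .withWatermark("timestamp", "10 minutes")
        .groupBy(F.window("timestamp", "5 minutes"), "key")
        .count()
    )

    # Write to the console for the sketch; a real job would target a durable sink.
    query = counts.writeStream.outputMode("append").format("console").start()
    query.awaitTermination()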


Preferred Technical and Professional Expertise

  • Experience designing, building, and maintaining data processing systems working in containerized environments (Docker, OpenShift, k8s)
  • Experience working with both batch and streaming data processing pipelines using workflow engines (Argo, Tekton, etc.); a minimal DAG sketch follows this list
  • Experience developing or leveraging automated platforms for data observability, data quality, and drift, as well as systems to automatically identify and correct data issues.
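
As a sketch of the workflow-engine experience mentioned above, the following minimal Airflow DAG (TaskFlow API, Airflow 2.x assumed) chains extract, validate, and train steps. The DAG id, schedule, and task bodies are illustrative assumptions only.

    # Minimal Airflow DAG sketch: extract -> validate -> train, run daily.
    # DAG id, schedule, and task bodies are illustrative assumptions.
    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def ml_pipeline_sketch():
        @task
        def extract() -> str:
            # Pull a batch of raw data; returns a hypothetical dataset path.
            return "/tmp/raw.parquet"

        @task
        def validate(path: str) -> str:
            # Run data-quality checks (schema, nulls, drift) before training.
            return path

        @task
        def train(path: str) -> None:
            # Fit and register a model from the validated dataset.
            print(f"training on {path}")

        train(validate(extract()))

    ml_pipeline_sketch()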