

You will play a critical role in driving innovation and advancing the state of the art in evaluating and training AI models. You will work closely with cross-functional teams, including product managers, engineers, and data scientists, to ensure that our AI systems are best in class.
Key job responsibilities
- Design complex data collections with human participants in response to science needs: author instructions, define and implement quality targets and mechanisms (see the sketch after this list), provide day-to-day coordination of data collection efforts (including planning, scheduling, and reporting), and be responsible for the final deliverables
- Design and conduct complex data creation tasks using synthetic and model-based data generation methods, following state-of-the-art approaches
- Analyze and extract insights from large amounts of data
- Build tools or tool prototypes for data analysis or data creation, using Python or another scripting language
- Use modeling tools to bootstrap or test new AI functionalities
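As one concrete illustration of the quality targets and mechanisms mentioned above, the sketch below computes Cohen's kappa between two annotators labeling the same items. This is a generic example, not a description of any specific internal tooling; the intent labels and the 0.7 review threshold are hypothetical, and a real collection would use whatever agreement measure fits the task.

```python
# Sketch of a simple annotation quality mechanism: Cohen's kappa between two
# annotators over the same items. Labels and threshold are hypothetical.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators on the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Agreement expected by chance, from each annotator's label frequencies.
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

annotator_1 = ["intent_play", "intent_stop", "intent_play", "intent_play"]
annotator_2 = ["intent_play", "intent_stop", "intent_stop", "intent_play"]

kappa = cohens_kappa(annotator_1, annotator_2)
print(f"kappa = {kappa:.2f}")
if kappa < 0.7:  # hypothetical quality target
    print("agreement below target -- revise instructions or adjudicate")
```

A per-batch agreement score like this turns a quality target into something reportable: batches that fall below the threshold can be re-annotated or trigger a revision of the instructions.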
- Master's or higher degree in a relevant field (Computational Linguistics or an equivalent field with a computational emphasis)
- 2+ years of experience in computational linguistics, language data processing, or AI data creation
- Experience with language data annotation systems and other forms of data markup
- Proficient with scripting languages, such as Python
- Experience working with speech, text, and multimodal data in multiple languages
- Excellent communication and strong organizational skills; highly detail-oriented
- Comfortable working in a fast-paced, highly collaborative, dynamic work environment
- PhD in Computational Linguistics (or an equivalent field with a computational emphasis)
- Expertise in bootstrapping AI data collections for quickly evolving requirements
- Extensive experience working with speech, text, and multimodal data in multiple languages
- Experience in data creation for complex agentic workflows
- Practical experience with Machine Learning
- Familiarity with technical concepts such as APIs
- Practical knowledge of version control and agile development
- Familiarity with database queries and data analysis processes (SQL, R, Matlab, etc.)
- Willingness to support several projects at one time, and to accept reprioritization as necessary
- Able to think creatively, with strong analytical and problem-solving skills

As a Delivery Consultant with a deep understanding of AWS products and services, you will architect complex, scalable, and secure solutions tailored to the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As a trusted advisor to our customers, you will provide guidance on industry trends, emerging technologies, and innovative solutions, and you will lead the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.
Key job responsibilities
As an experienced technology professional, you will be responsible for:
1. Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
2. Providing technical guidance and troubleshooting support throughout project delivery
3. Collaborating with stakeholders to gather requirements and propose effective migration strategies
4. Acting as a trusted advisor to customers on industry trends and emerging technologies
5. Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts
About the team
About AWS:
Diverse Experiences - AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let that stop you from applying.
Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.
- Bachelor’s degree required
- 5-8 years of experience as a contact center technology architect, enterprise IT architect, or senior contact center developer working with leading contact center technology platforms and applications, such as Avaya, Cisco, Genesys, Verint, NICE, Salesforce, etc.
- Hands-on technical practitioner and individual contributor
- Hands-on experience working on the design, development and deployment of contact center solutions at scale
- 5-8 years of experience building call center/collaboration/telephony platforms in a cloud or on-premises environment, particularly building application integration capabilities for CRM/WFM platforms
- Familiarity with Amazon Connect capabilities, benefits, and required deployment skills
- Responsibility for designing, implementing, and operating contact centers or telecommunications infrastructure within an enterprise environment
- Visible IT industry thought leadership on relevant topics related to enterprise call centers and infrastructure
- Experience implementing and optimizing AI-powered customer service solutions
- Experience with AI/ML technologies in contact center applications, including Natural Language Understanding (NLU), Natural Language Processing (NLP), prompt engineering, large language model implementation, chatbot development and optimization, and AI/ML model training and fine-tuning
- Serverless development experience, including complex integrations with Amazon Lex, Lambda, Kinesis, DynamoDB, Bedrock, and third-party AI services (see the sketch after this list)
- Software Development / DevOps experience with integrating contact center platforms, CRMs, and WFMs
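To make the Lex/Lambda integration bullet above concrete, here is a minimal sketch of an Amazon Lex V2 fulfillment Lambda that reads one slot and answers from DynamoDB. The table name, intent, and slot are hypothetical, and error handling and session attributes are omitted; treat it as a shape of the integration, not a production handler.

```python
# Minimal sketch of an Amazon Lex V2 fulfillment Lambda backed by DynamoDB.
# Table name, intent name, slot name, and item attributes are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("ContactCenterOrders")  # hypothetical table

def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    slots = intent.get("slots") or {}
    # Lex V2 delivers resolved slot values under value.interpretedValue.
    order_id = slots["OrderId"]["value"]["interpretedValue"]  # hypothetical slot

    item = orders.get_item(Key={"order_id": order_id}).get("Item")
    message = (
        f"Order {order_id} is {item['status']}."  # hypothetical attribute
        if item else f"I couldn't find order {order_id}."
    )
    # Close the intent as fulfilled and return a plain-text reply to the bot.
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent["name"], "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": message}],
    }
```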

Overview of the role:
The ideal candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and comfortable dealing with ambiguity and rapidly changing priorities. You will have expertise in process optimization and systems thinking, and you will engage directly with multiple internal teams to drive business projects and automation for the RBS team. Candidates must be successful both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible, detail-oriented, and comfortable working in a deadline-driven environment.
Responsibilities include working across teams and the Ops organization at the country, regional, and/or cross-regional level to drive improvements and implement solutions for customers, including cost savings in process workflows, systems configuration, and performance metrics.
Basic Qualifications
• Bachelor's degree in Computer Science, Information Technology, or a related field
• Proficiency in automation using Python
• Excellent oral and written communication skills
• Experience with SQL, ETL processes, or data transformation
Preferred Qualifications
• Experience with scripting and automation tools
• Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK
• Knowledge of AWS services such as SQS, SNS, CloudWatch and DynamoDB
• Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions
• Understanding of cloud services, serverless architecture, and systems integration
Key job responsibilities
• Design, development, and ongoing operation of scalable, performant data warehouse (Redshift) tables, data pipelines, reports, and dashboards
• Development of moderately to highly complex data processing jobs using appropriate technologies (e.g. SQL, Python, Spark, AWS Lambda, etc.)
• Development of dashboards and reports.
• Collaborating with stakeholders to understand business domains, requirements, and expectations. Additionally, working with owners of data source systems to understand capabilities and limitations.
• Deliver minimally to moderately complex data analysis; collaborating as needed with Data Science as complexity increases.
• Actively manage the timeline and deliverables of projects, anticipate risks and resolve issues.
• Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
Basic qualifications:
• 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science, or a related field.
• Experience with Data modeling, SQL, ETL, Data Warehousing and Data Lakes.
• Strong experience with engineering and operations best practices (version control, data quality/testing, monitoring, etc.)
• Expert-level SQL.
• Proficiency with one or more general purpose programming languages (e.g. Python, Java, Scala, etc.)
• Knowledge of AWS products such as Redshift, Quicksight, and Lambda.
• Excellent verbal/written communication and data presentation skills, including the ability to succinctly summarize key findings and effectively communicate with both business and technical teams.
Preferred qualifications:
• Experience with data-specific programming languages/packages such as R or Python pandas.
• Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR.
• Knowledge of machine learning techniques and concepts.
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, Quicksight, or similar tools
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and MATLAB
- Experience using SQL to pull data from a database or data warehouse, plus scripting experience (Python) to process data for modeling (see the sketch below)
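A minimal sketch of that SQL-then-Python pattern follows. An in-memory SQLite database stands in for Redshift so the example is self-contained and runnable anywhere; against Redshift the same query would simply go over a Redshift connection, and the returns table and its columns are hypothetical.

```python
# Sketch of the SQL-pull-then-Python-process pattern. SQLite stands in for
# Redshift to keep the example self-contained; the table is hypothetical.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE returns (order_id TEXT, category TEXT, refund_amount REAL);
    INSERT INTO returns VALUES
        ('o1', 'electronics', 42.50),
        ('o2', 'apparel', 15.00),
        ('o3', 'electronics', 99.99);
""")

# Pull with SQL, then aggregate with pandas for downstream modeling.
df = pd.read_sql("SELECT category, refund_amount FROM returns", conn)
summary = df.groupby("category")["refund_amount"].agg(["count", "sum", "mean"])
print(summary)
```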

As a Business Intelligence Engineer, you'll be at the intersection of data and business strategy, translating complex requirements into actionable analytics solutions. You'll partner with stakeholders to unlock insights that elevate our global work authorization experiences and drive program scalability.
Key job responsibilities
A successful candidate will demonstrate:
- Advanced SQL skills for writing complex queries and stored procedures to extract, transform, and analyze large datasets
- Proficiency in Python, particularly with libraries like pandas and PySpark, for data manipulation and ETL processes (see the sketch after this list)
- Strong analytical and problem-solving capabilities, with the ability to translate business requirements into efficient data solutions
- Experience in designing and implementing scalable ETL pipelines that can handle large volumes of data
- Expertise in data modeling and database optimization techniques to improve query performance
- Ability to work with various data sources and formats, integrating them into cohesive data structures
- Skill in developing and maintaining data warehouses and data lakes
- Proficiency in using BI tools to create insightful visualizations and dashboards
- Excellence in communicating technical concepts and data insights to both technical and non-technical audiences
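As a minimal sketch of the kind of PySpark ETL step described above: read raw events, derive a daily metric, and write a partitioned output. The S3 paths and column names are hypothetical, and a real pipeline would add schema enforcement and data-quality checks.

```python
# Sketch of a small PySpark ETL job: raw events in, daily metric out.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("work-auth-etl").getOrCreate()

# Read raw events, derive a date column, and count distinct cases per day.
events = spark.read.parquet("s3://example-bucket/raw/work_auth_events/")
daily = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "country_code")
    .agg(F.countDistinct("case_id").alias("cases"))
)

# Write a partitioned table for downstream dashboards and reports.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_case_counts/"
)
```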
A day in the life
You'll work closely with Product Managers, Software Developers, and business stakeholders to:
- Perform deep-dive analyses to uncover actionable insights
- Develop and automate data processes to improve efficiency
- Present findings and recommendations to leadership
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, Quicksight, or similar tools
- Experience with scripting language (e.g., Python, Java, or R)
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
- Experience applying basic statistical methods (e.g. regression) to difficult business problems
- Experience gathering business requirements, using industry standard business intelligence tool(s) to extract data, formulate metrics and build reports
- Bachelor's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis
- Experience in designing and implementing custom reporting systems using automation tools


Key job responsibilities
• Manage a Business Intelligence team with limited guidance. You hire the right job families (e.g., business intelligence engineers, business analysts, data engineers) to accomplish team goals, assess and manage performance, take effective action to address employee concerns, and hire, develop, and promote team members.
• Set team priorities, partnering effectively with customers and stakeholders. You establish a roadmap and successfully deliver BI solutions that meet the business needs.
• Influence both business and technical teams that overlap in organizational business/technology areas.
• Proactively identify risks or cross-organizational dependencies and bring them to the attention of your manager, customers, or stakeholders with plans for mitigation before they become roadblocks. You know when to escalate.
• Build mechanisms and standards for ensuring engineering excellence (e.g., correctness and efficiency) and operational excellence (e.g., quality, consistency, and reliability).
• Define metrics to measure your team's progress, data/solution quality, and engineering/operational excellence.
• Deliver BI solutions for complex problems, often at the architectural level and often in conjunction with other engineering teams.
• Think strategically and make trade-offs. Your decisions impact the organization's BI infrastructure, including resources and cost.
• Communicate ideas effectively, both verbally and in writing, to all types of audiences. You are able to contribute narratives to strategic documents.
- 7+ years of business intelligence and analytics experience
- 5+ years of delivering results managing a business intelligence or analytics team, including employee development and performance management experience
- Experience with SQL
- Experience with ETL
- Experience with data visualization using Tableau, Quicksight, or similar tools
- Experience with R, Python, Weka, SAS, Matlab or other statistical/machine learning software
- 4+ years of experience working with very large data warehousing environments
- 10+ years of experience with data warehouse technical architectures, data modeling, infrastructure components, ETL/ELT, reporting/analytics tools and environments, and data structures, including hands-on SQL coding

Overview of the role
The Business Research Analyst will be responsible for the data and machine learning components of continuous improvement projects across the Selling Partner Return Reduction space. This will require collaboration with Product, Science, and Engineering teams. The Research Analyst should be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. The Research Analyst will perform big data analysis to identify patterns and train models that generate product-to-product relationships and product-to-brand/model relationships. The Research Analyst is also expected to continuously improve the ML/LLM solutions in terms of precision and recall, efficiency, and scalability (a small evaluation sketch follows the responsibilities list below). The Research Analyst should be able to write clear and detailed functional specifications based on business requirements.
Key job responsibilities
• Collaborate and propose best in class ML/LLM solutions for business requirements
• Dive deep to drive product pilots, demonstrate innovation and customer obsession to steer the product roadmap
• Develop scalable solutions by writing high-quality code, building ML/LLM models using current research breakthroughs and implementing performance optimization techniques
• Coordinate design efforts between Sciences and Software teams to deliver optimized solutions
• Communicate technical concepts to stakeholders at all levels
• Ability to thrive in ambiguous, uncertain, and fast-moving ML/LLM use-case development
• Familiarity with ML/LLM models and the ability to work independently
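As a minimal sketch of the precision/recall tracking mentioned in the overview, the snippet below scores hypothetical product-to-product match predictions against audited gold labels using scikit-learn; the labels and predictions are illustrative, not real data.

```python
# Sketch of a precision/recall check for a product-matching model
# (1 = pair refers to the same product). All values are illustrative.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # gold labels from audited pairs
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
print(f"precision={precision:.2f} recall={recall:.2f}")
# Tightening the match threshold typically trades recall for precision;
# tracking both per release shows whether a change is a net improvement.
```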
- Bachelor's degree in Quantitative or STEM disciplines (Science, Technology, Engineering, Mathematics)
- 3+ years of relevant work experience in solving real world business problems using machine learning, deep learning, data mining and statistical algorithms
- Strong hands-on programming skills in Python and SQL; additional knowledge of Spark, Scala, R, or Java desired but not mandatory
- Strong analytical thinking
- Ability to creatively solve business problems, innovating new approaches where required and articulating ideas to a wide range of audiences using strong data, written and verbal communication skills
- Ability to collaborate effectively across multiple teams and stakeholders, including development teams, product management and operations.
- Master's degree with specialization in ML, NLP or Computer Vision preferred
- 3+ years relevant work experience in a related field/s (project management, customer advocate, product owner, engineering, business analysis)
- Diverse experience will be favored, e.g., a mix of experience across different roles
- In-depth understanding of machine learning concepts, including developing models and tuning hyper-parameters, as well as deploying models and building ML services
- Technical expertise and experience in data science, ML, and statistics
