

What you'll be doing:
Develop new Deep Learning models for automatic speech recognition, speech synthesis, neural machine translation, and natural language processing
Design new large-scale training algorithms
Open-source models using the NeMo conversational AI framework (a minimal usage sketch follows this list)
Mentor interns
Publish research papers at top speech and NLP conferences
Collaborate with universities and research teams.
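For context only (not part of the posting): the NeMo framework referenced above ships pretrained conversational AI models that can be loaded and run in a few lines. The sketch below is a hedged illustration; the checkpoint name, audio file, and the exact return type of transcribe() are assumptions that vary by NeMo release.

    # Minimal NeMo ASR inference sketch (assumes nemo_toolkit[asr] is installed).
    import nemo.collections.asr as nemo_asr

    # Load a published pretrained checkpoint; the name here is an example, not a requirement.
    asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="stt_en_conformer_ctc_small")

    # Transcribe a (hypothetical) 16 kHz mono WAV file; returns one hypothesis per input file.
    transcripts = asr_model.transcribe(["audio.wav"])
    print(transcripts[0])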
What we need to see:
PhD in Computer Science or Electrical Engineering (or equivalent experience)
Proven understanding of Deep Learning for Natural Language Processing or Speech Recognition
At least 5 years of research experience in speech recognition or NLP
Excellent Python programming skills
Experience with PyTorch
Strong publication record
Ways to stand out from the crowd:
Contributions to open-source projects
Serving as a reviewer for one of the top speech conferences
You will also be eligible for equity and benefits.
More jobs that may interest you

What you'll be doing:
Lead, build, and drive the architecture and engineering alignment for key automotive customer projects through all phases, from bring-up to production and post-production, using the DRIVE platform.
Architect and build a seamless integration environment to amplify the scalability of our software solutions for our partners.
Collaborate with senior leaders across the company to evolve product initiatives, roadmaps, and processes. Drive innovation by bringing in new technologies.
Lead bring-up activities and provide deep technical guidance and strategies to resolve functional and system performance issues, working with internal and external partner teams.
Collaborate with our global engineering teams in our US, APAC, and Europe locations to deploy the solution to our customers.
What we need to see:
BE/BS or MS in computer science, robotics, computer engineering, or equivalent experience.
Understanding of the technological evolution in the self-driving industry.
15+ years of deep hands-on technical experience.
Extensive technical leadership experience across large-scale organizations.
Established proficiency in application development and scalability for autonomous machines, and familiarity with robotics or automotive related middleware frameworks.
Broad and deep technical knowledge across software and hardware.
Proven ability to lead teams across multiple hardware, software and business groups through design and implementation.
Excellent communication and interpersonal skills and the ability to influence large organizations in meaningful ways.
Ways to stand out from the crowd:
Familiarity with automotive design processes and norms (e.g. ISO 26262, ASPICE), including in-vehicle testing, simulation, and metrics development for autonomous driving systems.
Software development experience on QNX or equivalent RTOS.
Applied knowledge of resolving complex, interrelated issues spanning vehicle sensors, other embedded controllers, and interactions between applications.
Knowledge of GPU programming such as OpenCL or CUDA and understanding of the NVIDIA DRIVE platform.
Contributions to or ownership of open-source projects, and mentorship experience.
You will also be eligible for equity and benefits.

As a Research Scientist specializing in Generative AI for Physical AI, you'll be at the forefront of developing next-generation algorithms that bridge the gap between virtual and physical realms. You'll work with state-of-the-art technology and have access to massive computational resources to bring your ideas to life.
What you'll be doing:
Pioneer revolutionary generative AI algorithms for physical AI applications, with a focus on advanced video generative models and video-language models
Architect and implement sophisticated data processing pipelines that produce premium-quality training data for Generative AI and Physical AI systems
Design and develop cutting-edge physics simulation algorithms that enhance Physical AI training
Scale and optimize large-scale training systems to efficiently harness the power of 20,000+ GPUs for training foundation models (a minimal data-parallel skeleton follows this list)
Author influential research papers to share your groundbreaking discoveries with the global AI community
Drive innovation through close collaboration with research teams, diverse internal product groups, and external researchers
Build lasting impact by facilitating technology transfer and contributing to open-source initiatives
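For context only: the scale-out training work described above typically starts from PyTorch's distributed data-parallel machinery. The sketch below is a minimal, hedged skeleton launched with torchrun; the model, data, and hyperparameters are placeholders, and production foundation-model training adds sharding (e.g. FSDP), mixed precision, and checkpointing.

    # Minimal data-parallel training skeleton (launch: torchrun --nproc_per_node=8 train.py).
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")          # torchrun sets RANK/WORLD_SIZE/LOCAL_RANK
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for _ in range(10):                               # placeholder training loop
            x = torch.randn(32, 1024, device=local_rank)
            loss = model(x).pow(2).mean()
            opt.zero_grad()
            loss.backward()                               # gradients are all-reduced across ranks
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()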
What we need to see:
PhD in Computer Science, Computer Engineering, Electrical Engineering, or related field (or equivalent experience).
Deep expertise in PyTorch and related libraries for Generative AI and Physical AI development
Strong foundation in diffusion, vision-language, and reasoning models and their applications
Proven experience with reinforcement learning algorithms and implementations
Robust knowledge of physics simulation and its integration with AI systems
Demonstrated proficiency in 3D generative models and their applications
Ways to stand out from the crowd:
Publications or contributions to major AI conferences (ICLR, NeurIPS, ICML, CVPR, ECCV, SIGGRAPH, ICCV, etc.)
Experience with large-scale distributed training systems
Background in robotics or physical systems
Open-source contributions to prominent AI projects
History of successful research-to-product transitions
You will also be eligible for equity and benefits.

What you'll be doing:
Design and implement triggering systems and deploy containerized orchestration pipelines for distributed map creation, maintenance, and evaluation from crowdsourced vehicle data (a minimal orchestration sketch follows this list)
Develop map quality detection systems including automated hotspot detection algorithms and human-in-the-loop review workflows for large scale map validation
Develop Python, C++, and JavaScript tools for map management, data validation, on-vehicle testing, and web-based geospatial visualizations
Build C++ modules for on-vehicle map integration with perception, localization, and other consumers to ensure end-to-end validation of the maps' impact on driving performance
Work with embedded systems and real-time constraints to optimize map consumption by autonomous driving software
Collaborate with perception, planning, and operations teams to improve map quality and real-time driving performance
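For context only: pipelines like the ones described above are commonly expressed as DAGs in an orchestrator such as Airflow. The sketch below is a hedged illustration; the DAG id, schedule, task names, and callables are hypothetical placeholders, and Airflow 2.4+ is assumed for the schedule argument.

    # Minimal Airflow DAG sketch for a trigger -> rebuild -> validate map pipeline.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def detect_new_drives():
        print("placeholder: scan ingest storage for newly uploaded crowdsourced drive logs")

    def rebuild_map_tiles():
        print("placeholder: rebuild map tiles for regions touched by the new drives")

    def evaluate_map_quality():
        print("placeholder: run hotspot detection and queue low-confidence tiles for review")

    with DAG(dag_id="map_update_pipeline", start_date=datetime(2025, 1, 1),
             schedule="@hourly", catchup=False) as dag:
        ingest = PythonOperator(task_id="detect_new_drives", python_callable=detect_new_drives)
        rebuild = PythonOperator(task_id="rebuild_map_tiles", python_callable=rebuild_map_tiles)
        validate = PythonOperator(task_id="evaluate_map_quality", python_callable=evaluate_map_quality)
        ingest >> rebuild >> validate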
What we need to see:
BS or MS degree in Computer Science, Software Engineering, or related field (or equivalent experience)
5+ years of proven experience building production data pipelines, distributed systems, mapping infrastructure, or map-to-vehicle integration
Strong C++ programming skills for performance-critical algorithms, data processing tools, and on-vehicle software
Strong Python programming skills for automation, workflow orchestration, and API development
Proficiency with JavaScript for building web-based tools, visualizations, and internal dashboards
Hands-on experience with Airflow or similar workflow orchestration frameworks
Experience with Docker, Kubernetes, and cloud platforms
Experience with Protocol Buffers, gRPC, and REST API design
Excellent problem-solving skills and ability to debug sophisticated distributed and embedded systems
Ways to stand out from the crowd:
Extensive experience with SD & HD mapping and autonomous vehicle software architectures
Deep understanding of how maps are consumed by localization, perception, and planning systems in autonomous vehicles
Deep knowledge of road topology generation, analysis, and graph partitioning algorithms
Background with computer vision pipelines (3D geometry, point clouds, structure-from-motion (SfM), COLMAP, visual odometry)
Experience debugging and profiling performance on embedded platforms (NVIDIA Orin, Xavier, etc.)
You will also be eligible for equity and benefits.

What you will be doing:
Develop, evaluate, and build architectures for Level 2 to Level 4 driver support and self-driving vehicle technologies
Characterize autonomous vehicle interactions relative to vehicle requirements to drive data collection for system training and verification
Drive autonomous vehicle system verification activities, including architecture and design verification, test strategy development working with the System Integration & Verification Test Leads, and reviewing and prioritizing test results
Lead and communicate data analysis, working with product and engineering teams to drive design decisions relative to requirements and system behavioral needs
Develop methods to analyze and compare the impact of ODDs (operational design domains) on relevant performance metrics, translating data and analysis into impactful recommendations (a minimal analysis sketch follows below)
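For context only: ODD comparisons of the kind described above often reduce to grouping per-segment metrics by ODD label. The sketch below is a hedged illustration in pandas; the input file and column names are hypothetical.

    # Minimal ODD metric comparison sketch (hypothetical columns: odd, miles, disengagements).
    import pandas as pd

    df = pd.read_csv("drive_segments.csv")                # hypothetical per-segment export

    summary = df.groupby("odd").agg(miles=("miles", "sum"),
                                    disengagements=("disengagements", "sum"))
    summary["disengagements_per_1k_miles"] = 1000 * summary["disengagements"] / summary["miles"]
    print(summary.sort_values("disengagements_per_1k_miles", ascending=False))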
What we need to see:
BS, MS, or PhD in Mechanical Engineering, Electrical Engineering, Aerospace Engineering, Physics, Computer Science, or another related field, or equivalent experience.
5+ years of proven industry experience
Strong analytics and architectural background in a development and production setting
Experience in systems engineering, robotics development, or systems integration and test
Strong leadership and interpersonal skills, with the ability to drive alignment between product development, engineering, and test teams.
Ways to stand out from the crowd:
Experience with complex hardware and software systems
Experience with software and/or tool development focused on modelling and analysis, using utility languages such as Python
Success applying the systems V-Model to sophisticated engineering projects, ideally advanced driver assistance or autonomous vehicle systems and their associated standards (e.g. ISO 26262)
Deep technical background in at least one focus area of robotic systems development (e.g. sensing, perception, motion control)
Experience with Model-Based Systems Engineering methods and tools, especially with Magic Cyber Systems Engineer / Cameo
You will also be eligible for equity and benefits.

We, the
What you’ll be doing:
Decompose research questions into smaller, more manageable parts and tackle them in iterative steps.
Think critically to identify unseen gaps, and creatively bridge them with non-traditional, high-impact solutions.
Connect with researchers and product engineers to ground research findings in real-world problems.
Lead knowledge dissemination efforts, with options for conference, journal, and in-house publication.
What we need to see:
Currently pursuing a Ph.D. in the field of Computer Science/Engineering, Electrical Engineering, or related fields.
Experience in gaming and/or human behavior in related application domains, demonstrated by one or more lead-author publications.
Examples of public portfolios (e.g. repositories, OSS contributions, notebooks, packages, or technical blog posts with code).
Proficiency with Python, Rust, and/or C++.
Experience with AI model training and evaluation frameworks, like PyTorch (a minimal train/evaluate sketch follows this list).
Comfortable with modern software development and version-control systems (e.g., GitHub/GitLab).
Familiarity with and interest in modern deep learning models, including working with large-scale, multi-modal foundation models (such as recent LLMs, VLMs).
Strong understanding of deep learning fundamentals and recent trends.
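For context only: the PyTorch experience asked for above boils down to the standard train/evaluate pattern. The sketch below is a hedged illustration with random placeholder data standing in for a real behavioral-modeling task.

    # Minimal PyTorch train/evaluate sketch with placeholder data.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
    data = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
    loader = DataLoader(data, batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(3):                                     # training
        model.train()
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    model.eval()                                           # evaluation on the same toy data
    with torch.no_grad():
        x, y = data.tensors
        print("accuracy:", (model(x).argmax(dim=1) == y).float().mean().item())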
Ways to stand out from the crowd:
Experience with multi-node, multi-GPU training and inference workflows.
Proficiency in crafting, implementing, and conducting behavioral experiments involving human subjects and interactive visual stimuli.
Experience with quantitatively modeling human perception, movement, and decision-making.
A track record of applying human behavior models to machine learning applications.
You will also be eligible for Intern benefits.
Applications for this job will be accepted at least until November 3, 2025.
What you'll be doing:
Leading the end-to-end product lifecycle, from new features to supporting new AI platforms, delivering multiple releases per year.
Collaborating with cross-functional teams, including engineering, marketing, and sales, to successfully implement product strategies and roadmaps.
Writing clear requirements, user stories, and compelling user experiences to ensure a quality product.
Managing release schedules and coordinating with development teams to ensure timely delivery of enterprise software products.
Applying sophisticated product management software to monitor progress, track metrics, and report on the success of product initiatives.
What we need to see:
Bachelor's degree in Computer Science, Engineering, or equivalent experience.
Minimum of 8 years of experience in software product management.
Extensive hands-on experience with compute, network, and storage technologies.
Proven proficiency in release management strategies and adept utilization of product management software tools.
Proven written and verbal communication skills. Ability to effectively connect with technical and non-technical stakeholders.
Leadership skills! Remove obstacles. Resolve ambiguity. Comfortable presenting and defending your fact-based opinion or recommendation.
Ways to stand out from the crowd:
Hands-on experience with NVIDIA Base Command Manager or Bright Cluster Manager
Experience as an SRE, datacenter operator, or infrastructure manager
Experience with high-performance computing
Background with Software Development Life Cycle
You will also be eligible for equity and benefits.
