- perception-based techniques for acquiring multimodal signals,
- planning and decision-making systems for sense-making and control, and
- building of interactive prototypes using state-of-the-art AI technology and insights acquired from experimental data
M.S. or Ph.D. in Computer Science, Human-Robot Interaction (HRI), Human-Computer Interaction, Cognitive Science, or equivalent experience.
10+ years of experience developing autonomous systems, control, and perception models
Experience with hardware/software rapid prototyping for HRI applications
Experience using state-of-the-art AI technologies
Experience in programming autonomous camera systems
Hands-on experience working with simulators and real-world robots
A passion for developing technologies for real-world, large-scale impact
Strong communication skills
Experience in applying machine learning to robotics, including areas such as reinforcement, imitation, and transfer learning
Experience in integrating multimodal sensing into planning and control
An understanding of decision-making modeling frameworks, such as Bayesian Belief Networks, state machines, gating networks, intelligence models, and human-AI interaction
Proficiency with ML modeling frameworks (PyTorch, TensorFlow, etc.) and ROS
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.