Must be currently enrolled in a PhD program in audio signal processing, acoustics, or a related machine learning field, including but not limited to speech enhancement, speaker recognition, self-supervised learning, and/or semi-supervised learning.
Other Requirements:
Deep experience in at least one of the following areas:
- Classical signal processing theory (adaptive filters, detection and estimation theory)
- ML and AI algorithms and approaches to audio signal processing
- Microphone signal processing
- Multi-modal (e.g., audio-visual) signal processing
- Numerical linear algebra
- Demonstrable experience developing, characterizing, and implementing signal processing algorithms.
Preferred/Additional Qualifications:
- Ability to clearly communicate what work you have done, why it was important, and how it was different from existing projects.
- Ability to work in ambiguous, uncharted areas, with the experience, creativity, and technical depth to identify technical gaps, acquire missing information, align requirements, and pick the right direction.
- Ability to create, train, and optimize neural network architectures for audio (or audio-visual) applications such as speech enhancement, speaker recognition, echo cancellation, source localization, audio-visual speaker diarization, and active speaker detection.
- Proven understanding of deep learning tools such as PyTorch.
- Fluency in Python and/or Matlab.