

What You'll be Doing:
Build system hardware products around GPU and Tegra SoCs.
Collaborate with cross-functional teams to balance product cost, performance, and schedule under the guidance of system architects and product architects.
Drive initial test and bring-up, and lead debug efforts.
Create schematics, and supervise PCB layout and system validation.
Handle the documentation required to release the product to manufacturers and partners.
Optimize and invent circuits and functions for better performance and lower cost.
Improve the design flow together with the infrastructure team.
What We Need to See:
Recent graduate with a B.S. or M.S. in Electrical Engineering, or equivalent experience.
Strong analytical skills, including past experience in PCB design and review.
Experience using lab tools such as oscilloscopes, multimeters, and logic analyzers.
Solid knowledge of Linux, with comfort working in various Linux environments as well as with Windows OSs.
Strong verbal and written communication skills.
You will also be eligible for equity and benefits.

What you'll be doing:
As a senior member of our team, you will work on pre-silicon and post-silicon data analytics: visualization, insights, and modeling.
Design and maintain robust data pipelines and ETL processes for ingesting and processing DFX engineering data from various sources (a minimal sketch follows this list).
Lead engineering efforts by collaborating with cross-functional teams (execution, analytics, data science, product) to define data requirements and ensure data quality and consistency.
You will work on hard-to-solve problems in the Design-for-Test space, applying algorithm design, using statistical tools to analyze and interpret complex datasets, and exploring applied AI methods.
In addition, you will help develop and deploy DFT methodologies for our next-generation products using generative AI solutions.
You will also help mentor junior engineers on test designs and trade-offs, including cost and quality.
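Day to day, the pipeline bullet above tends to mean small, testable ingestion steps. Below is a minimal sketch of one such step, assuming a CSV export of per-unit test results; sqlite3 is purely a stand-in for whatever warehouse is actually in use, and the table and column names are hypothetical.

```python
import csv
import sqlite3

def load_scan_results(csv_path: str, db_path: str) -> int:
    """Ingest a hypothetical per-unit DFT results CSV, applying a
    basic completeness gate before loading. Returns the skip count."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS scan_results (
               unit_id TEXT NOT NULL,
               test_name TEXT NOT NULL,
               value REAL,
               passed INTEGER NOT NULL
           )"""
    )
    rows, skipped = [], 0
    with open(csv_path, newline="") as f:
        for rec in csv.DictReader(f):
            # Data-quality rule: required keys must be present.
            if not rec.get("unit_id") or not rec.get("test_name"):
                skipped += 1
                continue
            rows.append((
                rec["unit_id"],
                rec["test_name"],
                float(rec["value"]) if rec.get("value") else None,
                1 if rec.get("passed") == "1" else 0,
            ))
    conn.executemany("INSERT INTO scan_results VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    conn.close()
    return skipped  # surfaced so completeness trends can be tracked
```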
What we need to see:
BSEE (or equivalent experience) with 5+ years, MSEE with 3+ years, or PhD with 1+ years of experience in low-power DFT, data visualization, applied machine learning, or database management.
Experience with SQL, ETL, and data modeling is crucial.
Hands-on experience with cloud platforms (AWS, Azure, GCP).
Experience designing and implementing highly scalable, fault-tolerant distributed database solutions.
Experience leading data modeling, performance tuning, and capacity planning for large-scale, mission-critical storage workloads.
Excellent knowledge of statistical tools for data analysis and insights.
Strong programming and scripting skills in Perl, Python, C++, or Tcl.
Outstanding written and oral communication skills, with the curiosity to take on unusual challenges.
Ways to stand out from the crowd:
Experience in data pipeline and database architecture for real-world systems
Experience in application of AI for EDA-related problem-solving
Good understanding of technology and passion for what you do
Strong collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment
You will also be eligible for equity and benefits.

What you'll be doing:
Develop and implement the business logic in the new End-to-End Data systems for our Planning, Logistics, Services, and Sourcing initiatives.
Lead discussions with Operations stakeholders and IT to identify and implement the right data strategy given data sources, data locations, and use cases.
Analyze and organize raw operational data, including structured and unstructured data. Implement data validation checks to track and improve data completeness and integrity (see the sketch after this list).
Build data systems and data pipelines to transport data from a data source to the data lake ensuring that data sources, ingestion components, transformation functions, and destination are well understood for implementation.
Prepare data for AI/ML/LLM models by making sure that the data is complete, has been cleansed, and has the necessary rules in place.
Build/develop algorithms, prototypes, and analytical tools that enable the Ops teams to make critical business decisions.
Build data and analytics solutions for key initiatives to set up manufacturing plants in the US.
Support key strategic initiatives like building scalable, cross-functional data lake solutions.
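To make the validation bullet above concrete, here is a minimal sketch of a completeness report over a hypothetical operations feed; the record shape, field names, and sample values are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ShipmentRecord:
    # Hypothetical record shape; a real feed would be far wider
    # and would live in the data lake, not in memory.
    order_id: str
    sku: Optional[str]
    qty: Optional[int]

def completeness_report(records: list) -> dict:
    """Fraction of records with each required field populated."""
    total = len(records) or 1
    return {
        "order_id": sum(1 for r in records if r.order_id) / total,
        "sku": sum(1 for r in records if r.sku) / total,
        "qty": sum(1 for r in records if r.qty is not None) / total,
    }

if __name__ == "__main__":
    sample = [
        ShipmentRecord("A-100", "GPU-01", 4),
        ShipmentRecord("A-101", None, 2),        # incomplete: missing sku
        ShipmentRecord("A-102", "GPU-02", None), # incomplete: missing qty
    ]
    print(completeness_report(sample))
```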
What we need to see:
Master’s or Bachelor’s degree in Computer Science or Information Systems, or equivalent experience
8+ years of relevant experience, including programming knowledge (e.g., SQL, Python, Java)
Highly independent, able to lead key technical decisions, influence project roadmap and work effectively with team members
Experience architecting, designing, developing, and maintaining data warehouses/data lakes for complex data ecosystems
Expertise in data and database management, including data pipeline responsibilities across replication and mass ingestion, streaming, API, application, and data integration
Experience in developing required infrastructure for optimal extraction, transformation, and loading of data from various sources using Databricks, AWS, Azure, SQL or other technologies
Strong analytical skills with the ability to collect, organize, and disseminate significant amounts of information with attention to detail and accuracy
Knowledge of supply chain business processes for planning, procurement, shipping, and returns of chips, boards, systems, and networking products.
Ways to stand out from the crowd:
Self-starter, collaborative, positive mindset, committed to growth with integrity and accountability, highly motivated, driven, and high-reaching
Solid ability to drive continuous improvement of systems and processes
A consistent record of working in a fast-paced environment where good interpersonal skills are crucial
You will also be eligible for equity and benefits.

What you’ll be doing:
Contribute features to vLLM that empower the newest models with the latest NVIDIA GPU hardware features; profile and optimize the inference framework (vLLM) with methods like speculative decoding, data/tensor/expert/pipeline parallelism, and prefill-decode disaggregation.
Develop, optimize, and benchmark GPU kernels (hand-tuned and compiler-generated) using techniques such as fusion, autotuning, and memory/layout optimization; build and extend high-level DSLs and compiler infrastructure to boost kernel developer productivity while approaching peak hardware utilization.
Define and build inference benchmarking methodologies and tools; contribute both new benchmarks and NVIDIA’s submissions to the industry-leading MLPerf Inference benchmark suite (a minimal sketch of the core metrics follows this list).
Architect the scheduling and orchestration of containerized large-scale inference deployments on GPU clusters across clouds.
Conduct and publish original research that pushes the Pareto frontier of the ML Systems field; survey recent publications and find ways to integrate research ideas and prototypes into NVIDIA’s software products.
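As a deliberately simplified illustration of the benchmarking bullet above, the sketch below measures time-to-first-token (TTFT) and decode throughput for any token-streaming callable; fake_model is a stand-in generator, not a real engine.

```python
import time
from typing import Callable, Iterator

def benchmark_stream(generate: Callable[[str], Iterator[str]],
                     prompt: str) -> dict:
    """Time a streaming generation call and report TTFT and tokens/s."""
    start = time.perf_counter()
    first_token_at = None
    n_tokens = 0
    for _ in generate(prompt):
        if first_token_at is None:
            first_token_at = time.perf_counter()
        n_tokens += 1
    end = time.perf_counter()
    decode_time = end - (first_token_at or start)
    return {
        "ttft_s": (first_token_at or end) - start,
        # Throughput over the decode phase (tokens after the first).
        "tokens_per_s": (n_tokens - 1) / decode_time if n_tokens > 1 else 0.0,
    }

def fake_model(prompt: str) -> Iterator[str]:
    # Stand-in for a real streaming engine (e.g., a vLLM endpoint).
    for tok in prompt.split():
        time.sleep(0.01)  # simulated per-token decode latency
        yield tok

if __name__ == "__main__":
    print(benchmark_stream(fake_model, "the quick brown fox jumps over"))
```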
What we need to see:
Bachelor’s degree (or equivalent experience) in Computer Science (CS), Computer Engineering (CE), or Software Engineering (SE) with 7+ years of experience; alternatively, a Master’s degree in CS/CE/SE with 5+ years of experience; or a PhD with a thesis and top-tier publications in ML Systems, GPU architecture, or high-performance computing.
Strong programming skills in Python and C/C++; experience with Go or Rust is a plus; solid CS fundamentals: algorithms & data structures, operating systems, computer architecture, parallel programming, distributed systems, deep learning theories.
Knowledgeable and passionate about performance engineering in ML frameworks (e.g., PyTorch) and inference engines (e.g., vLLM and SGLang).
Familiarity with GPU programming and performance: CUDA, memory hierarchy, streams, NCCL; proficiency with profiling/debug tools (e.g., Nsight Systems/Compute).
Experience with containers and orchestration (Docker, Kubernetes, Slurm); familiarity with Linux namespaces and cgroups.
Excellent debugging, problem-solving, and communication skills; ability to excel in a fast-paced, multi-functional setting.
Ways to stand out from the crowd:
Experience building and optimizing LLM inference engines (e.g., vLLM, SGLang).
Hands-on work with ML compilers and DSLs (e.g., Triton, TorchDynamo/Inductor, MLIR/LLVM, XLA), GPU libraries (e.g., CUTLASS), and features (e.g., CUDA Graphs, Tensor Cores).
Experience contributing to containerization/virtualization technologies such as containerd, CRI-O, or CRIU.
Experience with cloud platforms (AWS/GCP/Azure), infrastructure as code, CI/CD, and production observability.
Contributions to open-source projects and/or publications; please include links to GitHub pull requests, published papers and artifacts.
You will also be eligible for equity and benefits.

The ATE/SLT hardware team provides the interface hardware for IC package testing at final test and system-level test. The hardware includes highly custom high-speed sockets, active thermal plungers, and load boards. You will take an active role in hardware work, from design for product bring-up and HVM, through design and manufacturing improvements and verification/debug, to production support.
What you’ll be doing:
Review and approve the design of test sockets, thermal plungers, and other accessories related to ATE/SLT IC testing for product bring-up and production.
Provide ATE and SLT test fixture/hardware solutions, from design and manufacturing orders through schedule monitoring, verification, and improvement.
Drive ATE and SLT socket/thermal technology, solutions, and qualifications.
Drive DOEs with a sense of ownership: collect and analyze engineering data, then make decisions and recommendations for improvement (a minimal sketch of this kind of analysis follows this list).
Provide cross-functional support.
Drive projects and host meetings with internal and external stakeholders.
Apply strong hardware troubleshooting and root-cause analysis, and provide preventive actions.
Debug ATE/SLT hardware setups such as sockets, thermal plungers, PCBs, chillers, and handlers.
Provide on-duty lab support; weekend support may be necessary.
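For a flavor of the DOE analysis the bullet above implies, here is a minimal sketch that summarizes and outlier-screens contact-resistance readings; the vendors, values, and 3-sigma rule are invented purely for illustration.

```python
import statistics

# Hypothetical contact-resistance readings (milliohms) from two
# socket pin vendors in a small DOE; values are illustrative only.
doe_data = {
    "vendor_a": [42.1, 43.0, 41.8, 44.2, 42.5, 43.1],
    "vendor_b": [45.9, 47.2, 46.5, 48.1, 46.0, 47.4],
}

for vendor, readings in doe_data.items():
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    # Flag any reading more than 3 sigma from the vendor mean.
    outliers = [r for r in readings if abs(r - mean) > 3 * stdev]
    print(f"{vendor}: mean={mean:.1f} mOhm, stdev={stdev:.2f}, "
          f"outliers={outliers}")
```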
What we need to see:
Bachelor’s degree or equivalent experience is required; EE- and ME-related degrees are preferred.
5+ years of IC test engineering and ATE/SLT hardware engineering experience.
Test socket knowledge is a strong plus: familiarity with test socket mechanics, footprints, and contact pins.
Fully capable of reading mechanical drawings, with both mechanical and electrical circuit knowledge.
IC testing knowledge of ATE/SLT interface hardware, maintenance, troubleshooting, and repairs (sockets, load boards, thermal plungers).
Having ATE tester knowledge is a plus.
Proven troubleshooting skills and the ability to provide solutions and prevent recurrence.
Willing to do hands-on work such as repairing socket pins, electrical wiring, and tiny capacitors/resistors.
Able to lift a 30-pound load board during unboxing, boxing, and short transport.
Knowledge of electrostatic discharge (ESD) prevention and control.
You will also be eligible for equity and benefits.

What you’ll be doing:
Build and integrate tools to configure, simulate, and test robots
Maintain and optimize the existing simulation stack for scalable robot and sensor simulation
Integrate APIs to support large scale simulator deployments on distributed systems
Develop microservices using ZMQ, DDS, RPC, RESTful, and other network-level communication APIs (a minimal sketch follows this list)
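As a minimal sketch of the microservice style named in the last bullet, the snippet below uses pyzmq's REQ/REP pattern; the endpoint, command names, and message schema are hypothetical.

```python
import zmq  # pyzmq

def serve(endpoint: str = "tcp://*:5555") -> None:
    """Hypothetical simulation-control service: answers JSON requests
    such as {"cmd": "step", "dt": 0.01} over a REP socket."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind(endpoint)
    sim_time = 0.0
    while True:
        req = sock.recv_json()
        if req.get("cmd") == "step":
            sim_time += float(req.get("dt", 0.01))
            sock.send_json({"ok": True, "sim_time": sim_time})
        else:
            sock.send_json({"ok": False, "error": "unknown cmd"})

if __name__ == "__main__":
    serve()
```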
What we need to see:
Pursuing or recently completed BS, MS, PhD (or equivalent experience) in Computer Science, Simulation, or related field
Experience in systems software engineering
Excellent C, C++, and Python programming skills
Flexibility to adapt quickly to varying roles & responsibilities
Experience with physics simulation, robotics or motion planning & controls
Excellent interpersonal skills and ability to work effectively with multi-functional teams, principals, and architects across organizational boundaries and geographies
Ways to stand out from the crowd:
Experience with Isaac Sim, Omniverse, USD, MJCF, URDF, CAD formats
Background with physical robots, reinforcement learning, synthetic data generation
Experience with UI/UX for user and developer facing tools
Background with shipping and supporting software products
Experience with system level optimization using multi-threading, asynchronous programming, concurrency and parallelism
You will also be eligible for equity and benefits.

What you'll be doing:
Develop and implement the new End-to-End Data systems for our Planning, Logistics and Services, and Sourcing initiatives
Lead discussions with stakeholders and IT to identify and implement the right data strategy given data sources, data locations, and use cases
Build data pipelines to transport data from a data source to the data lake
Analyze and organize raw operational data including structured and unstructured data
Build data systems and pipelines ensuring that data sources, ingestion components, transformation functions, and destination are well understood for implementation
Interpret trends and patterns by performing complex data analysis
Prepare data for prescriptive and predictive modeling by making sure that the data is complete, has been cleansed, and has the necessary rules in place (see the sketch after this list)
Build/develop algorithms, prototypes, and analytical tools that enable the Ops teams to make critical business decisions.
Build data and analytics solutions for key initiatives to set up manufacturing plants in the US.
Support key strategic initiatives like building scalable, cross-functional data lake solutions.
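To illustrate the modeling-prep bullet above, here is a minimal pandas sketch of rule-based cleansing; the column names, rules, and sample data are invented for illustration.

```python
import pandas as pd

def cleanse_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Apply hypothetical completeness and integrity rules to an
    orders feed before it reaches a predictive model."""
    df = df.drop_duplicates(subset=["order_id"])      # rule: unique keys
    df = df.dropna(subset=["order_id", "ship_date"])  # rule: required fields
    df["qty"] = df["qty"].fillna(0).astype(int)       # rule: default quantity
    return df[df["qty"] >= 0]                         # rule: no negatives

if __name__ == "__main__":
    raw = pd.DataFrame({
        "order_id": ["A1", "A1", "A2", None],
        "ship_date": ["2024-01-02", "2024-01-02", None, "2024-01-05"],
        "qty": [3, 3, 5, -1],
    })
    print(cleanse_orders(raw))  # only the clean A1 row survives
```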
What we need to see:
Master’s or Bachelor’s degree in Computer Science or Information Systems, or equivalent experience
8+ years of relevant experience, including programming knowledge (e.g., SQL, Python, Java)
Highly independent, able to lead key technical decisions, influence project roadmap and work effectively with team members
Experience architecting, designing, developing, and maintaining data warehouses/data lakes for complex data ecosystems
Expertise in data and database management, including data pipeline responsibilities across replication and mass ingestion, streaming, API, application, and data integration
Experience in developing required infrastructure for optimal extraction, transformation, and loading of data from various sources using Databricks, AWS, Azure, SQL or other technologies
Strong analytical skills with the ability to collect, organize, and disseminate significant amounts of information with attention to detail and accuracy
Knowledge of operational processes for chips, boards, systems, and servers, with a view of the data landscape
Knowledge of supply chain business processes for planning, procurement, shipping, and returns
Ways to stand out from the crowd:
Self-starter, positive mindset with integrity and accountability, highly motivated, driven, high-reaching, and attracted to a meaningful opportunity.
Solid ability to drive continuous improvement of systems and processes.
A consistent record of working in a fast-paced environment where good interpersonal skills are essential
You will also be eligible for equity and benefits.
