

What you'll be doing:
As a senior member in our team, you will work with pre-silicon and post-silicon data analytics - visualization, insights and modeling.
Design and maintain robust data pipelines and ETL processes for the ingestion and processing of DFX Engineering data from various sources
Lead engineering efforts by collaborating with cross-functional teams (execution, analytics, data science, product) to define data requirements and ensure data quality and consistency
You will work on hard-to-solve problems in the Design For Test space involving algorithm design, statistical analysis and interpretation of complex datasets, and exploration of Applied AI methods.
In addition, you will help develop and deploy DFT methodologies for our next generation products using Gen AI solutions.
You will also help mentor junior engineers on test designs and trade-offs including cost and quality.
What we need to see:
BSEE with 5+ years, MSEE with 3+ years, or PhD with 1+ years of experience (or equivalent experience) in low-power DFT, Data Visualization, Applied Machine Learning, or Database Management.
Experience with SQL, ETL, and data modeling is crucial
Hands-on experience with cloud platforms (AWS, Azure, GCP)
Design and implement highly scalable, fault tolerant distributed database solutions
Lead data modeling, performance tuning, and capacity planning for large-scale, mission-critical storage workloads
Excellent knowledge of statistical tools for data analysis and insights.
Strong programming and scripting skills in Perl, Python, C++, or Tcl are expected
Outstanding written and oral communication skills, with the curiosity to work on unique challenges.
Ways to stand out from the crowd:
Experience in data pipeline and database architecture for real-world systems
Experience in application of AI for EDA-related problem-solving
A good understanding of technology and a passion for what you do
Strong collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment
You will also be eligible for equity.

What you'll be doing:
Develop and implement the business logic in the new End-to-End Data systems for our Planning, Logistics, Services, and Sourcing initiatives.
Lead discussions with Operations stakeholders and IT to identify and implement the right data strategy given data sources, data locations, and use cases.
Analyze and organize raw operational data including structured and unstructured data. Implement data validation checks to track and improve data completeness and data integrity.
Build data systems and data pipelines to transport data from a data source to the data lake ensuring that data sources, ingestion components, transformation functions, and destination are well understood for implementation.
Prepare data for AI/ML/LLM models by making sure that the data is complete, has been cleansed, and has the necessary rules in place.
Build/develop algorithms, prototypes, and analytical tools that enable the Ops teams to make critical business decisions.
Build data and analytic solutions for key initiatives to set up manufacturing plants in the US.
Support key strategic initiatives like building scalable cross-functional datalake solutions.
What we need to see:
Master’s or Bachelor’s degree in Computer Science or Information Systems, or equivalent experience
8+ years of relevant experience, including programming knowledge (e.g., SQL, Python, Java)
Highly independent, able to lead key technical decisions, influence project roadmap and work effectively with team members
Experience architecting, designing, developing, and maintaining data warehouses/data lakes for complex data ecosystems
Expert in data and database management including data pipeline responsibilities in replication and mass ingestion, streaming, API and application and data integration
Experience in developing required infrastructure for optimal extraction, transformation, and loading of data from various sources using Databricks, AWS, Azure, SQL or other technologies
Strong analytical skills with the ability to collect, organize, and disseminate significant amounts of information with attention to detail and accuracy
Knowledge of supply chain business processes for planning, procurement, shipping, and returns of chips, boards, systems, and networking.
Ways to stand out from the crowd:
Self-starter, collaborative, positive mindset, committed to growth with integrity and accountability; highly motivated, driven, and ambitious
Solid ability to drive continuous improvement of systems and processes
A consistent record of working in a fast-paced environment where good interpersonal skills are crucial
You will also be eligible for equity.

What you'll be doing:
Lead complex programs focused on improving the quality and efficiency of data center infrastructure, hardware, and software domains, with multi-year strategic roadmaps
Drive technical execution from requirements gathering through production launch, including writing technical specifications, coordinating release schedules, and ensuring operational readiness across multiple team dependencies
Own server hardware development, testing, and integration efforts for computing products, working closely with original design manufacturers and contract manufacturers on new product introductions at global manufacturing scale
Partner with software development teams to build automation programs for large-scale infrastructure testing and develop solutions that enhance operational performance across highly concurrent, high-throughput distributed systems
Guide enterprise network infrastructure and data center operations initiatives covering servers, storage, networking, power, and cooling systems while serving as domain leader for manufacturing test infrastructure
Lead continuous improvement initiatives for engineering processes, quality management, and operational excellence while leading risk mitigation strategies and critical path oversight
Build trusted partnerships across hardware teams, security professionals, supply chain, operations, and product management to drive technical decisions and resolve complex cross-functional dependencies
What we need to see:
Bachelor's degree in Engineering, Computer Science, Electrical Engineering, Mechanical Engineering, or related technical field, or equivalent experience
12+ years working directly with engineering teams with demonstrated technical program management experience
7+ years of hands-on program or project management experience leading complex technology projects with cross-functional teams
5+ years of software development experience with proficiency in one or more programming languages.
5+ years leading hardware product development and new product introduction on a global manufacturing scale
Deep technical expertise in server, network, or storage product architecture and manufacturing test development
Strong understanding of large-scale distributed systems, data center infrastructure, and enterprise network architecture
Experience with Linux/Unix or Windows system administration, database management, and infrastructure automation
Demonstrated ability to lead programs across multiple teams, handle project scope, schedule, budget, and quality, and maintain executive-level relationships
Ways to stand out from the crowd:
8+ years directly leading complex technology projects, with experience designing and architecting highly reliable, scalable systems
Track record launching AI or ML server products with new technology enablement such as Liquid Cooling
Experience leading manufacturing test engineering teams within the server, network, or storage sector with expertise in Design for Excellence methodologies
Knowledge of security engineering, cryptography, quality management systems, and supply chain operations
Demonstrated single-threaded ownership of strategic programs, with the ability to deliver groundbreaking systems independently in fast-paced, ambiguous environments
You will also be eligible for equity.

This position requires the incumbent to have a sufficient knowledge of English to have professional verbal and written exchanges in this language since the performance of the duties related to this position requires frequent and regular communication with colleagues and partners located worldwide and whose common language is English.

As part of the NVIDIA Solutions Architecture team, you will navigate uncharted waters and gray space to drive successful market adoption by balancing strategic alignment, data-driven analysis, and tactical execution across engineering, product, and sales teams. You will serve as a critical liaison between product strategy and large-scale customer deployment.
What you’ll be doing:
Lead the end-to-end execution for key Hyperscaler customers to go to market rapidly and at scale with NVIDIA data center products (e.g., GB200).
Partner with the Hyperscaler Product Customer Lead to understand strategy, define metrics, and ensure alignment.
Data-Driven Execution: Collect, maintain, and analyze complex data trends to assess the product's market health, identify themes, challenges, and opportunities, and guide the customer to resolution of technical roadblocks.
Problem Solving & Navigation: Navigate complex issues effectively, embodying a productive leader who balances short-term unblocks with long-term process and product improvements.
Executive Communication: Deliver concise, direct executive-level updates and regular status communications to multi-functional leadership on priorities, progress, and vital actions.
Process Improvement: Integrate insights from deployment challenges and customer feedback into future developments for processes and products through close partnership with Product and Engineering teams.
What we need to see:
BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Physics, or other Engineering fields or equivalent experience.
8+ years of combined experience in Solutions Architecture, Technical Program Management, Product Management, System Reliability Engineering, or other complex multi-functional roles.
Proven track record of leading and influencing without direct authority across technical and business functions.
Proven analytical skills, with experience in establishing benchmarks, collecting and analyzing intricate data, and distilling data into strategic themes, action items, and executive summaries.
Skilled in reviewing logs and deployment data, and aiding customers in resolving technical concerns (e.g., identifying performance issues associated with AI/ML and system architecture).
Ways to stand out from the crowd:
Lead multi-functional teams and influence stakeholders to address challenges in customer datacenter deployments, ensuring cluster health and performance at scale.
Established track record of driving a product from the pilot phase to at-scale deployment in a data center environment.
Hands-on experience with NVIDIA hardware (e.g., H100, GB200) and software libraries, with an understanding of performance tuning and error diagnostics.
Knowledge of DevOps/MLOps technologies such as Docker/containers and Kubernetes, and their relationship to data center deployments.
Proven ability to align on, adopt, and disseminate insights among various internal teams (e.g., collaborating with other program leads).
You will also be eligible for equity.

What you'll be doing:
Develop and implement the new End-to-End Data systems for our Planning, Logistics and Services, and Sourcing initiatives
Lead discussions with stakeholders and IT to identify and implement the right data strategy given data sources, data locations, and use cases
Build data pipelines to transport data from a data source to the data lake
Analyze and organize raw operational data including structured and unstructured data
Build data systems and pipelines ensuring that data sources, ingestion components, transformation functions, and destination are well understood for implementation
Interpret trends and patterns by performing complex data analysis
Prepare data for prescriptive and predictive modeling by making sure that the data is complete, has been cleansed, and has the necessary rules in place
Build/develop algorithms, prototypes, and analytical tools that enable the Ops teams to make critical business decisions.
Build data and analytic solutions for key initiatives to set up manufacturing plants in the US.
Support key strategic initiatives like building scalable cross-functional data lake solutions.
What we need to see:
Master’s or Bachelor’s degree in Computer Science or Information Systems, or equivalent experience
8+ years of relevant experience, including programming knowledge (e.g., SQL, Python, Java)
Highly independent, able to lead key technical decisions, influence project roadmap and work effectively with team members
Experience architecting, designing, developing, and maintaining data warehouses/data lakes for complex data ecosystems
Expert in data and database management including data pipeline responsibilities in replication and mass ingestion, streaming, API and application and data integration
Experience in developing required infrastructure for optimal extraction, transformation, and loading of data from various sources using Databricks, AWS, Azure, SQL or other technologies
Strong analytical skills with the ability to collect, organize, and disseminate significant amounts of information with attention to detail and accuracy
Knowledge in operational processes in chips, boards, systems, and servers with a view of data landscape
Knowledge of supply chain business processes for planning, procurement, shipping, and returns
Ways to stand out from the crowd:
Self-starter with a positive mindset, integrity, and accountability; highly motivated, driven, ambitious, and attracted to a meaningful opportunity.
Solid ability to drive continuous improvement of systems and processes.
A consistent record of working in a fast-paced environment where good interpersonal skills are essential
You will also be eligible for equity.

What you'll be doing:
Act as the subject matter expert (SME) for material management processes supporting data center infrastructure hardware across its full lifecycle.
Be responsible for the planning and execution of operational hardware sparing strategies to ensure availability and minimal downtime.
Own the end-of-life (EOL) management process for infrastructure hardware, including decommission planning and material disposition.
Ensure inventory accuracy through ongoing audits, reconciliation processes, and alignment with data center operational needs.
Apply ABC inventory classification methodology to prioritize and optimize stock levels based on usage, cost, and criticality.
Maintain and improve material planning models to support forecasting and capacity planning initiatives.
Analyze data trends to drive continuous improvements in inventory optimization, cost control, and operational efficiency.
What we need to see:
12+ years of experience in material management, inventory operations, or hardware lifecycle support within data center infrastructure, manufacturing, or supply chain environment.
Solid grasp of data center hardware components (servers, networking, storage, etc.) and their lifecycle (deployment, sparing, EOL).
Demonstrable experience with inventory control practices, including ABC classification, stock audits, and accuracy initiatives.
Excellent organizational and documentation skills; attention to detail is a must.
Bachelor’s degree in Supply Chain Management, Operations, Logistics, Information Technology, or related field; or equivalent experience.
You will also be eligible for equity.
