What you’ll be doing:
Design and implement network architecture, storage solutions, virtualization, and services specific to EDA workflows.
Work closely with EDA teams to understand their requirements and translate them into infrastructure solutions.
Collaborate with system administrators, database administrators, and security experts to ensure seamless integration.
Use broad IT infrastructure skills to implement innovations that accelerate chip development.
Develop automation to scale infrastructure easily.
Work in a diverse team performing fast-paced investigations to empower our engineers to develop at the speed of light.
Collaborate with EDA teams to continuously improve how our chip development process uses our infrastructure environment.
Architect security mechanisms to protect intellectual property.
Directly contribute to the overall quality of, and improved time to market for, our next-generation chips.
What we need to see:
Strong experience investigating and debugging complex, multi-discipline problems in a UNIX environment
Hands-on experience making architectural decisions in the technologies (storage, networking, compute) our chip engineers depend on
Experience with automation tools such as Ansible and Jenkins
Understanding of distributed UNIX system concepts such as NFS, autofs, DNS, LDAP and/or NIS
UNIX systems programming and automation using industry-standard languages, including familiarity with API calls; Python experience preferred
Authoritative command of UNIX and UNIX CLI utilities such as sed, awk, and grep
Excellent planning and communication skills and a passion for improving the productivity and efficiency of other specialists
History of using data analysis principles and influencing data-driven decisions
4+ years of experience in a large, distributed UNIX environment
MS (preferred) or BS in Computer Science, a similar degree, or equivalent experience
Ways to stand out from the crowd:
Experience with chip design workflows, such as front-end verification, back-end, or mixed-signal flows
Extensive knowledge of job schedulers (in particular IBM Spectrum LSF and/or SLURM)
Hands-on experience running workloads in a batch computing environment
Deep understanding of distributed system principles
Experience in crafting solutions that balance security and productivity for the end user
You will also be eligible for equity and benefits.