Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.

We are looking for an experienced engineer to drive AI workload productization and benchmarking for Large Language Models (LLMs). This role focuses on making models customer-ready, developing benchmarking infrastructure, and ensuring our AI models deliver industry-leading efficiency and scalability.

This role is hybrid, based out of Warsaw or Gdansk, Poland.

We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.

Responsibilities:

  • Design and execute comprehensive model testing protocols to ensure robustness and scalability of AI models.
  • Develop and execute performance and accuracy benchmarking tests for AI workloads across various computational environments.
  • Analyze and optimize system performance using advanced profiling and tuning techniques.
  • Conduct competitive analysis and positioning to inform strategic decision-making and product development.
  • Collaborate with cross-functional teams to integrate best practices and innovations in AI performance optimization.
  • Integrate LLMs with popular inference server platforms (e.g., vLLM), perform testing and benchmarking using these platforms, and stay up to date with the latest inference server trends to influence strategic decision-making.
  • Track AI model accuracy and performance in a CI/CD environment. Identify and triage regressions, and implement or drive fixes with other teams to maintain the accuracy and performance of the models.

Experience & Qualifications:

  • Bachelor's, Master's, or PhD in Computer Science, Electrical Engineering, Machine Learning, or a related field.
  • Strong background in AI model benchmarking and profiling.
  • Experience with scalable AI infrastructure, including distributed computing environments.
  • Proficiency in Python for AI workload optimization.
  • Familiarity with LLM frameworks, AI accelerators, and performance tuning methodologies.
  • Familiarity with GitHub CI/CD environments is a requirement.
  • Familiarity with LLM inference servers (e.g., vLLM) is a bonus.
  • Ability to interpret and analyze hardware/software interactions to maximize AI model efficiency.