Who we are

Helsing is a defence AI company. Our mission is to protect our democracies. We aim to achieve technological leadership, so that open societies can continue to make sovereign decisions and control their ethical standards.

As democracies, we believe we have a special responsibility to be thoughtful about the development and deployment of powerful technologies like AI. We take this responsibility seriously.

We are an ambitious and committed team of engineers, AI specialists and customer-facing programme managers. We are looking for mission-driven people to join our European teams – and apply their skills to solve the most complex and impactful problems. We embrace an open and transparent culture that welcomes healthy debates on the use of technology in defence, its benefits, and its ethical implications.

The role

At Helsing, we are redefining perception by building foundational intelligence for the physical world. You will research, design, and train large-scale Foundational Models that transform complex multimodal sensor data into cutting-edge autonomous capabilities.

We are seeking an individual at the intersection of AI research and machine learning engineering, with a proven track record working on LLM, VLM, or other multimodal backbones. Your primary focus will be training and fine-tuning Vision-Language Models on our custom datasets to power our products. You'll be responsible for the entire model lifecycle, from data curation and training to evaluation.

What we offer

  • Competitive salary and stock options (ESOP)
  • Relocation support: up to €2,500 and 4 weeks of temporary accommodation
  • Learning: €500/£450 yearly allowance
  • Health & wellness: gym membership and mental health support (Nilo.health)
  • Social: regular company events and monthly social allowances
  • Enhanced parental leave: 22 weeks fully paid for primary caregivers and 6 weeks for secondary caregivers
  • Family support: 5 days of paid family emergency leave, 100% remote work option during pregnancy and phased return to work

You should apply if you

  • Hold an MSc or PhD in Machine Learning, Computer Science, or Mathematics with a focus on Deep Learning.
  • Possess theoretical understanding and practical experience in training Foundational Models (LLMs, VLMs, or other large-scale multimodal models) from scratch or through fine-tuning.
  • Have strong software engineering skills in Python and fluency with modern DL frameworks (PyTorch/JAX). You don’t just import libraries; you are comfortable writing custom layers, loss functions, and distributed training loops.
  • Are a clear communicator who can explain complex theoretical concepts and contribute to the company's internal research culture.
  • Have a "first-principles" mindset: you enjoy reading the latest ArXiv papers and implementing them into the codebase rapidly.

Nice to have

  • Top-tier research track record: you have authored publications at leading conferences (NeurIPS, ICML, ICLR, CVPR, ACL) on attention mechanisms, efficient transformers, or multimodal learning.
  • Experience training models on large-scale GPU clusters.
  • Proven experience in data curation, data cleaning, data pruning, and building robust data pipelines for large-scale datasets.