Are you looking for a hybrid or remote work opportunity? Are you interested in a workplace that allows for flexibility in your day? Are you ready for a workplace that provides benefits that suit your needs?

Join our team as we innovate the future of data platform architecture, enabling massive scaling and data processing for ML and Gen AI projects. You'll be at the forefront of processing vast unstructured data, building high-throughput APIs, and supporting distributed compute frameworks for seamless model deployment. Ready to dive into the heart of cutting-edge tech?

Your role in action

  • Build our next-generation data platform tooling and services to support the ingestion and processing of billions of documents at scale. 
  • Improve and extend our Spark-based distributed data processing pipeline (a simplified sketch follows this list). 
  • Improve and extend our Rust-based distributed query engine used to request large amounts of document data. 
  • Create tools to automate and optimize processes across disciplines. 
  • Actively participate in the on-call schedule to investigate and fix production issues related to our data processing pipeline or query engine. 
  • Participate in code reviews for projects written by your team. 
  • Focus on quality through comprehensive unit and integration testing. 
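
To give a concrete, if simplified, flavor of that pipeline work, here is a minimal PySpark sketch of a document ingestion step. The bucket paths, schema, and column names are illustrative assumptions, not a description of our actual pipeline.

```python
# Minimal PySpark sketch of a document ingestion step.
# Paths, schema, and column names below are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("document-ingestion-sketch")
    .getOrCreate()
)

# Read newline-delimited JSON document records from a hypothetical landing zone.
raw_docs = spark.read.json("s3://example-bucket/landing/documents/")

# Light normalization: trim text, derive a length column, and drop empty records.
cleaned = (
    raw_docs
    .withColumn("text", F.trim(F.col("text")))
    .withColumn("text_length", F.length(F.col("text")))
    .filter(F.col("text_length") > 0)
    .withColumn("ingest_date", F.current_date())
)

# Write Parquet partitioned by ingestion date so downstream queries can prune files.
(
    cleaned.write
    .mode("append")
    .partitionBy("ingest_date")
    .parquet("s3://example-bucket/curated/documents/")
)

spark.stop()
```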

What we offer

  • Comprehensive health plan
  • Flexible work arrangements
  • Two week-long company breaks per year
  • Unlimited time off
  • Long-term incentive program
  • Training investment program

What you’ll bring

  • 4+ years of software development experience writing performant, commercial-grade systems and applications  
  • Experience with monitoring and troubleshooting production environments 
  • Proficiency in programming languages used in high-volume data processing and applications, such as Java, Scala, or Python 
  • Experience building data pipelines with distributed compute frameworks like Hadoop, Spark, or Dask (a short sketch follows this list) 
  • Knowledge of Linux/Unix systems, Docker/Kubernetes, and CI/CD, including scripting in Python or other scripting languages to automate build and deployment processes 
  • Knowledge of professional software engineering practices & software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations 
  • Ability to leverage best practices and past experience to mentor and improve the productivity of the team 
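
As one small, hedged illustration of the distributed pipeline experience listed above, here is a Dask sketch; the file layout and column names (doc_id, custodian) are hypothetical.

```python
# Minimal Dask sketch: a distributed aggregation over many Parquet files.
# The file layout and column names are hypothetical.
import dask.dataframe as dd

# Lazily read a directory of Parquet files as one logical dataframe.
docs = dd.read_parquet("data/documents/*.parquet")

# Example transformation: per-custodian document counts, computed in parallel
# across partitions.
counts = docs.groupby("custodian")["doc_id"].count()

# Trigger execution; on a cluster this would run through a distributed scheduler.
print(counts.compute().head())
```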

We’d particularly love it if you have

  • Deep experience building and debugging distributed data pipelines 
  • Experience with columnar databases and storage formats like Delta Lake and Parquet (a short sketch follows this list) 
  • Experience deploying and managing services on Kubernetes 
  • Experience building with Rust 
If you don’t meet 100% of the above qualifications, you should still seriously consider applying.
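
On the columnar storage point above, here is a minimal sketch that writes and selectively reads a Parquet file with pyarrow; the table contents and file name are made up for illustration.

```python
# Minimal sketch of columnar storage with Parquet via pyarrow.
# Table contents and file name are illustrative only.
import pyarrow as pa
import pyarrow.parquet as pq

# Build a small in-memory table with a few typed columns.
table = pa.table({
    "doc_id": [1, 2, 3],
    "custodian": ["alice", "bob", "alice"],
    "text_length": [1200, 87, 5400],
})

# Parquet stores data column by column, so a reader can fetch only the
# columns a query needs.
pq.write_table(table, "documents.parquet")

# Read back just two columns to illustrate column pruning.
subset = pq.read_table("documents.parquet", columns=["doc_id", "text_length"])
print(subset.to_pandas())
```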

Relativity

We are Relativity, a market-leading global tech company that equips organizations with a powerful platform to organize data, discover the truth, and act on it. Over 180,000 users in 40+ countries rely on our platform to manage large volumes of unstructured data and quickly identify key themes during litigation, internal investigations, and compliance projects. As we grow, we continue to seek individuals who will bring their whole selves to our team atmosphere. Join us in the transformation of the legal industry and play a pivotal role in shaping the future of the practice of law and beyond.