Machine Learning & Data Systems Engineer

  • Paris
  • Full-Time
  • Start Date: 05 January 2026

About

At Inarix, we are transforming agriculture through AI.

Our vision platform helps farmers, grain buyers, and processors identify the best of each grain quickly, accurately, and at scale.

By replacing expensive, hardware-based analyzers with a flexible and accessible digital solution, we empower the entire agri-food chain to unlock more value from every harvest.

From field to silo, our services bring data-driven insights that improve quality, reduce waste, and support better decision-making, for a more efficient and sustainable agriculture worldwide.

We already work with major industry players across Europe, and we’re just getting started!

With a remote-first culture, whether you work from Paris or anywhere in France, we combine flexibility with human connection through regular team gatherings at our Paris office (14th arrondissement).

Joining Inarix means working at the crossroads of AI, agriculture, and impact, alongside a curious and ambitious team driven by purpose and real-world outcomes.

Job Description

About the position

As a Data & ML Systems Engineer, you will be in charge of the heart of Inarix: making our models train and run in a heartbeat and keeping our data flowing wherever it is needed.

You'll quickly become a key contributor to our ML and data platforms, which are used both by our customers and by our research and development team. This means constantly improving the current platform as well as expanding it with the latest technologies, by benchmarking, prototyping, and shipping new building blocks to production.

The ML platform is based on a mixture of Kubeflow (for training orchestration, collaborative notebooks, etc.) and a custom inference engine built around NVIDIA Triton.
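
To give a flavour of what serving through Triton looks like from the client side, here is a minimal, purely illustrative sketch in Python; the server URL, model name, and tensor names are placeholders, not our actual inference engine:

```python
# Minimal sketch of a Triton inference request (illustrative only:
# "grain_classifier", "images", "probabilities" and the URL are placeholders,
# not Inarix's actual models or endpoints).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# A batch of 4 RGB crops, 224x224, as float32 (assumed preprocessing).
batch = np.random.rand(4, 3, 224, 224).astype(np.float32)

infer_input = httpclient.InferInput("images", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

result = client.infer(
    model_name="grain_classifier",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("probabilities")],
)
print(result.as_numpy("probabilities").shape)
```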

The Data platform is currently based on a combination of Python, PostgreSQL, dbt, Dagster, and cloud services (AWS & GCP). You'll have the opportunity to expand and transform these services to support our ambitious growth plan.
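
To give a concrete, purely illustrative flavour of that stack, a Dagster asset pulling from PostgreSQL might look like the sketch below; the table, connection string, and asset names are made up, not our actual data model:

```python
# Illustrative Dagster assets (a sketch, not Inarix's actual pipeline:
# table names, connection string, and asset names are made up).
import pandas as pd
from dagster import asset
from sqlalchemy import create_engine

PG_URL = "postgresql://user:password@localhost:5432/inarix"  # placeholder

@asset
def raw_sample_events() -> pd.DataFrame:
    """Extract the last day of sample events from the operational PostgreSQL database."""
    engine = create_engine(PG_URL)
    query = "SELECT * FROM sample_events WHERE created_at >= now() - interval '1 day'"
    return pd.read_sql(query, engine)

@asset
def daily_sample_counts(raw_sample_events: pd.DataFrame) -> pd.DataFrame:
    """Aggregate events per day; downstream dbt models or dashboards could build on this."""
    return (
        raw_sample_events
        .assign(day=lambda df: pd.to_datetime(df["created_at"]).dt.date)
        .groupby("day", as_index=False)
        .size()
    )
```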

You'll play a crucial role in the tools and processes that allow us to:

  • Collect large datasets continuously from various sources, filter, sort, process, store, and redirect data into our training pipelines, R&D experiments, and analytics solutions. Importantly, we expect to leverage AI-agent pipelines to ingest messy data locked in documents and images.

  • Support data access for our R&D team by contributing to our ETL processes (APIs, dbt, PostgreSQL) and to our core data-access library in Python: pnx.

  • Expand our data monitoring and data-quality controls using pipelines, models, dashboards, alerts, tracing products, etc. (a minimal sketch of such a check follows this list).

  • Efficiently train, evaluate, and release new models.

  • Serve frequently updated models with reliability and efficiency for all our customers and for internal needs.
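
As mentioned above, here is a minimal, purely illustrative sketch of one such data-quality check; the table, columns, thresholds, and Slack webhook are placeholders, not our actual schema or alerting setup:

```python
# Illustrative data-quality check with an alert (a sketch only: the table,
# columns, and Slack webhook are placeholders, not Inarix's schema).
import os
import psycopg2
import requests

SLACK_WEBHOOK_URL = os.environ.get("SLACK_WEBHOOK_URL", "")  # placeholder

CHECK_SQL = """
SELECT
    count(*) FILTER (WHERE protein_pct IS NULL)                  AS missing_protein,
    count(*) FILTER (WHERE protein_pct < 0 OR protein_pct > 100) AS out_of_range
FROM grain_measurements
WHERE measured_at >= now() - interval '1 day';
"""

def run_check() -> None:
    # Run the check against the operational database.
    with psycopg2.connect(os.environ["DATABASE_URL"]) as conn, conn.cursor() as cur:
        cur.execute(CHECK_SQL)
        missing, out_of_range = cur.fetchone()

    if missing or out_of_range:
        message = (
            f"Data-quality alert: {missing} rows missing protein_pct, "
            f"{out_of_range} rows out of range in the last 24h."
        )
        if SLACK_WEBHOOK_URL:
            requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)
        else:
            print(message)

if __name__ == "__main__":
    run_check()
```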

You'll report to our Head of Software and will work in close collaboration with our Head of Data Science.

As you will have understood by now, this job is as challenging as it is rewarding. We don't expect you to know everything already, and as Inarix evolves, the position will evolve too. You'll have the opportunity to learn a lot and to teach us a lot too.

More importantly, we expect you to rapidly take ownership of large portions of these crucial systems, and therefore to be responsible for central parts of our production stack.


About the stack

We don't really need sentences here, do we?

Python, PostgreSQL, SQL, dbt, Dagster, count.co, Docker, Kubeflow, PyTorch, Triton, NumPy, Google Pub/Sub, Redis, Google Filestore, Google BigQuery, AWS S3, Google Cloud Storage (GCS), Kubernetes, Argo CD, New Relic, Azure DevOps, Slack, pytest


Missions

  • Own our data ingestion, transformation, and access-layer systems

  • Own critical parts of our ML production systems (inference engine, training pipelines, on-edge model inference, etc.)

  • Own our data platform capabilities (data warehousing, lineage, monitoring), working with our R&D team to continuously improve them

  • Contribute to data modeling for new projects & products

  • Contribute to our in-house core Python libraries (pnx, science, loki); a small testing sketch follows this list
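
To illustrate the kind of tested, production-grade library code we have in mind, here is a small pytest sketch; the helper under test is a made-up example, not an actual function from pnx, science, or loki:

```python
# Illustrative pytest for a small data-transform helper (the helper itself is
# a made-up example, not an actual function from pnx, science, or loki).
import numpy as np
import pytest

def normalise_protein(values: np.ndarray) -> np.ndarray:
    """Scale raw protein readings into the [0, 1] range, rejecting bad input."""
    if np.any((values < 0) | (values > 100)):
        raise ValueError("protein percentage must be within [0, 100]")
    return values / 100.0

def test_normalise_protein_scales_values():
    result = normalise_protein(np.array([0.0, 12.5, 100.0]))
    assert np.allclose(result, [0.0, 0.125, 1.0])

def test_normalise_protein_rejects_out_of_range():
    with pytest.raises(ValueError):
        normalise_protein(np.array([101.0]))
```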


About Inarix

Inarix offers AI services for agricultural environments. Our cereal qualification tools provide powerful digital alternatives to complex hardware solutions. Inarix successfully launched its first product in 2020, generating strong revenues from top-tier clients in France.


We already work with major industry players across the world (we operate in 15+ countries on three continents and analyse millions of tons of production), and we’re just getting started.



About Inarix product offer

Our mobile app PocketLab allows customers to take pictures of cereals and immediately access useful information about their quality. Our digital solution offers radically different value: it can both replace existing solutions (hardware and/or expert analysis in laboratories) and extend what can be measured, providing new tools to help the agricultural sector face challenges such as fast quality assessment, supply-chain optimisation, and traceability.

Under the hood, we run state-of-the-art Deep Learning algorithms to estimate various criteria from images (variety, protein level, percentage of broken grains, etc.). We believe we have the world’s largest database of grain images, serving multiple purposes such as exploration, monitoring, labelling, and model training. This is made possible by state-of-the-art in-house data and ML platforms providing services both for our customer-facing products and for our R&D team.

Our data systems ingest tens of thousands of data points and images every day, transform them, enrich them, and make them available to all stakeholders. Our ML platform handles all MLOps steps: automatic and manual model training runs on GPU clusters, model validation, deployment, inference, and monitoring. Together they form the backbone of our battle-tested platform.
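
As a purely illustrative sketch of what one ingestion entry point could look like with Google Pub/Sub, here is a minimal consumer; the project, subscription, and message schema are placeholders, not our actual topology:

```python
# Illustrative Pub/Sub consumer for incoming sample metadata (a sketch only:
# project, subscription, and message schema are placeholders).
import json
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

PROJECT_ID = "my-gcp-project"          # placeholder
SUBSCRIPTION_ID = "sample-events-sub"  # placeholder

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    event = json.loads(message.data.decode("utf-8"))
    # A real pipeline would validate, enrich, and persist the event here
    # (e.g. write metadata to PostgreSQL and the image to object storage).
    print(f"received sample {event.get('sample_id')} from {event.get('device_id')}")
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
print(f"Listening on {subscription_path}...")

with subscriber:
    try:
        streaming_pull.result(timeout=60)  # run for one minute in this sketch
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()
```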


Working at Inarix

Inarix is a remote-first company: we work mostly in a distributed fashion. We provide the equipment and means for you to work efficiently from home, a co-working space or an internet-connected tree-house. We value this flexibility and the diversity it fosters.

Although we’re primarily digitally connected, we also believe in the importance of real human interaction. We organize company-wide residential seminars every quarter, and for this position you should expect approximately one in-person meeting per month. Most meetings are held in Paris, so your location will affect your travel needs. Travel and accommodation costs to attend our meetings from most cities in Metropolitan France will be fully covered.

If you live outside of France, we will require you to work within 3 hours of the French time zone; we also require you to have French fiscal residency or to work through a third-party company; a specific travel package will be negotiated along with your salary. We work in English and therefore strongly encourage non-French speakers to apply.

Preferred Experience

What we are looking for

  • 5+ years of experience as a Data Engineer / ML Engineer / System Engineer or similar position

  • Proven expertise in Python for production-grade systems

  • Working experience with SQL

  • Working experience with any cloud platform (AWS, GCP, or Azure)

  • Good written and spoken English

  • Autonomous, proactive, a team player with good communication skills


Nice to Have

  • Experience with inference systems (Triton, edge runtimes like TF Lite, TensorRT, ONNX…)

  • Experience with distributed computing / HPC

  • Experience with low-level or GPU programming (CPython, CUDA, C++, or Rust)

  • Experience with PostgreSQL, BigQuery

  • Experience with ETL pipelines (dbt, Dagster, Argo Workflows, Airflow, or any other)

  • Experience with NoSQL databases (Elasticsearch, MongoDB, or any other)

  • Experience with ML-oriented platforms (Kubeflow, MLFlow, Vertex AI, SageMaker etc.)

Recruitment Process

  • 1h with Hiring Manager

  • 30 min with HR

  • 30 min with Head of Data Science

  • 30 min with CTO

  • 2h Technical interview

Additional Information

  • Contract Type: Full-Time
  • Start Date: 05 January 2026
  • Location: Paris
  • Education Level: Master's Degree
  • Experience: > 5 years
  • Full remote possible