Machine Learning Engineer at Seldon
🇬🇧 United Kingdom › London (Posted Jan 8 2022)
Please mention that you found the job at Jobhunt.ai
London OR Cambridge, UK (hybrid)
Seldon is looking for talented software engineers with machine learning expertise to join our growing engineering team. The role spans several positions within the team, including backend product, open-source MLOps and client-facing machine learning engineering, and is open to applicants across a range of seniority levels.
We are focused on making it easy to deploy and manage machine learning models at scale in production. Our Cloud Native products run on top of Kubernetes and follow an open-core model, with several successful open-source projects including Seldon Core, Alibi:Explain and Alibi:Detect. We also contribute to open-source projects under the Kubeflow umbrella, including KFServing.
We have created a culture that we're proud of, driven by our passionate, talented team and our open, collaborative ethos. We operate on the cutting edge of technology, in an agile environment that is evolving as we scale, offering unique opportunities to grow your career as part of the team and help shape the future of MLOps. We have adopted hybrid working going forward.
About the role
Help realise the product vision: Production-ready machine learning models within moments, not months. Our products make enterprise-grade MLOps easy.
Help design, build and extend Seldon's core product range of MLOps (Machine learning operations) tools and products.
Help enterprises deploy their machine learning models at scale across a wide range of use-cases and sectors.
Extend the state of the art in the developing area of MLOps including:
Managing the production lifecycle of ML models from initial deployment, to testing and updating of the next iteration.
Monitoring ML models in production.
Explaining and ensuring correct governance of ML models in production.
About you
A degree or higher in a scientific or engineering subject, or equivalent relevant experience.
Familiarity with Linux-based development.
At least 2 years of experience in industry or academia, with completed projects to show for it.
Interest in MLOps.
Core skills (existing experience or a demonstrable desire to learn)
Experience with Go and/or Python.
Experience with Kubernetes and the ecosystem of Cloud Native tools.
Experience using machine learning tools in production.
Contributions to open source projects.
A broad understanding of data science and machine learning.
Understanding of explainable AI or machine learning monitoring in production.
Familiarity with Kubeflow, MLflow or SageMaker.
Familiarity with Python tools for data science.
Some of the technologies we use in our day-to-day:
Go is our primary language for all things backend infrastructure, including our Kubernetes Operator and our new Go microservice orchestrator
Python is our primary language for machine learning; it powers our most popular Seldon Core microservices wrapper, as well as our explainability toolbox, Alibi
We leverage the Elastic Stack to provide full data provenance on inputs and outputs for thousands of models in production clusters
Metrics from our models are collected using Prometheus, with custom Grafana integrations for visualisation and monitoring
Our primary service mesh backend leverages the Envoy Proxy, fully integrated with Istio, but also with an option for Ambassador
We leverage gRPC and Protocol Buffers to standardise our schemas and achieve high processing speeds through complex inference graphs
We use React.js for all our enterprise user products and interfaces
Kubernetes and Docker to schedule and run all of our core cloud native technology stack
Some of our high profile technical projects
We are core authors and maintainers of Seldon Core, the most popular open-source model-serving solution in the Cloud Native (Kubernetes) ecosystem
We built and maintain the black box model explainability tool Alibi
We are co-founders of the KFServing project, and collaborate with Microsoft, Google, IBM and others on extending it
We are core contributors to the Kubeflow project and meet weekly on several workstreams with Google, Microsoft, Red Hat and others
We are part of the SIG-MLOps Kubernetes open source working group, where we contribute through examples and prototypes around ML serving
Benefits
A supportive and collaborative team environment
A commitment to learning and career development, with a £1,000 per-year L&D budget
Flexible approach to hybrid working (2/3)
Share options to align you with the long-term success of the company
28 days annual leave (plus flexible bank holidays on top)
Perkbox - perks, medical and wellbeing benefits
Healthcare cash plan and Employee Assistance Programme
Cycle to work scheme
London or Cambridge UK offices with a flexible approach to hybrid working
We can provide Visa sponsorship.
Our interview process normally consists of 4 filtered stages:
A 30-minute phone interview.
A coding task.
2-3 hours of post-task interviews.
Our recruitment process has an average length of 3 weeks.
As part of the process we will identify which part of the tech team fits your skills and interests most closely: product, delivery or MLOps. However, as we are a small team, all our employees are highly cross-functional, and roles develop based on skills, interests and ongoing projects.