Software Engineer, Machine Learning Infrastructure at Planet
San Francisco, California, United States
🇺🇸 (Posted Sep 26, 2018)
About the company
Founded in 2010 by a team of ex-NASA scientists, Planet is driven by a mission to image the entire Earth every day, and make Earth's changes visible, accessible and actionable.
Planet started as a small team of physicists, aerospace and mechanical engineers in a garage, using the CubeSat form factor to inform the first designs of the Dove satellite. Just three years after our first satellite entered space, we now operate the largest constellation of Earth-imaging satellites...ever.
Our satellites are collecting a radical new data set with endless, real-world applications. Whether you’re measuring agricultural yields, monitoring natural resources, or aiding first responders after natural disasters, our data is here to lend businesses and humanitarian organizations a helping hand. Planet believes timely, global imagery will empower informed, deliberate and meaningful stewardship of our planet.
Planet designs, builds, and operates the largest constellation of imaging satellites in history. This constellation delivers an unprecedented dataset of empirical information via a revolutionary cloud-based platform to decision-makers in commercial, environmental, and humanitarian sectors. We are both a space company and data company all rolled into one.
Customers and users across the globe use Planet's data and machine learning-powered analytics to develop new technologies, drive revenue, power research, and solve our world’s toughest challenges.
As we control every component of hardware design, manufacturing, data processing, and software engineering, our office is a truly inspiring mix of experts from a variety of domains.
We have a people-centric approach toward culture and community and we are iterating in a way that puts our team members first and prepares our company for growth.
Join Planet and be a part of our mission to change the way people see the world.
The Role
Define and build infrastructure for training TensorFlow deep learning models on our unique global imagery
Deploy TensorFlow models to our asynchronous image processing compute cluster
Implement efficient solutions for processing raster and vector geographic data
Skills & requirements
The Must Haves
BS or MS in Computer Science or related fields
3+ years of production software development experience
Expertise in both a dynamic language (Python, Ruby) and one or more static, compiled languages (Go, Java, C/C++)
Experience with SQL databases (Postgres or MySQL) and NoSQL databases (e.g. Bigtable, Redis, HBase), and an understanding of when to use each
Fluency with the basics of machine learning workflows and techniques (e.g. best practices around training data management, an understanding of the basics of numerical optimization)
Experience with the Python scientific computing ecosystem (pandas, NumPy, scikit-learn, scikit-image, etc.)
Experience working in a large, shared codebase with Continuous Integration and Deployment workflows and tooling
Know your way around a Linux environment
Excellent communication and relationship skills; a strong team player
The Nice to Haves
5+ years of software engineering experience and 2+ years of experience with geospatial data and/or machine learning
Experience with open-source GIS applications and packages such as QGIS, GDAL/OGR, or PostGIS
Experience with remote sensing data from satellite constellations such as Landsat or Sentinel
Experience training and/or deploying customer-facing machine learning and deep learning models
Experience with at least one deep learning framework (TensorFlow, PyTorch, Caffe, Theano, Keras)
How to apply
See the website.
Let them know you found the job via https://Jobhunt.ai
(Companies love to know which recruiting strategies work)