Big Data Engineer at Logic20/20
🇺🇸 United States › Washington › Seattle (Posted Jun 10 2022)
Please mention that you found the job at Jobhunt.ai
Apply now!
Do they allow remote work?
Remote work is possible, see the description below for more information.
Job description
We’re a six-time “Best Company to Work For,” where intelligent, talented people collaborate on high-value consulting solutions. Because we’re a full-service consulting firm with a diverse client base, you can count on a steady stream of opportunities to work with complex data sets on solutions that make a real difference for businesses, customers, and the environment.
Data science is revolutionizing the utilities sector. Using a variety of technologies, including LiDAR and computer vision, we’re reducing the risk of wildfires and fundamentally changing the approach to risk management. It’s a big win for the environment and for communities at risk of fires.
Operating out of a Data Science Center of Excellence, you’ll work on a small “product” team to design and develop scalable data processing infrastructure. Applying an Agile approach to data science, you’ll work closely with our team of analysts, technical product owners, and data scientists to drive real business value.
About you
You’re the perfect person for the job if you have…
A nose for uncovering business needs and pain points in partnership with executive management
A talent for communicating engineering concepts to non-technical business stakeholders
A passion for building large-scale machine learning pipelines
A knack for developing and iterating solutions at record speed
What you’ll be doing
Joining forces with internal and external teams to understand the client’s business needs
Designing and developing a scalable data processing infrastructure
Helping the client better understand their core needs, with a keen awareness of technical limitations
Qualifications
3+ years of PySpark experience
5+ years of data engineering experience
Data engineering implementation experience with some of the following technologies:
Python
Spark and PySpark
SparkSQL
Strong understanding of high-performance ETL development with Python
Demonstrated ability to identify business and technical impacts of user requirements and incorporate them into the project schedule
An undergraduate degree in technology or business is required
Desired Skills
Experience building data and computational systems that support machine learning
Knowledge of AWS services
Experience with modern software delivery practices, including source control, testing, and continuous delivery
Experience delivering products with Agile methodologies
Experience with streaming data in Spark