Machine Learning Engineer
Grand Rounds
San Francisco, CA
About us:
Grand Rounds is a new kind of healthcare company. Founded in 2011, the company is on a mission to raise the standard of healthcare for everyone, everywhere. The Grand Rounds team goes above and beyond to connect and guide people to the highest quality healthcare available for themselves and their loved ones. Grand Rounds creates products and services that give people the best possible healthcare experience. Named a 2019 Best Place to Work by Glassdoor and Rock Health’s 2018 Fastest Growing Company, Grand Rounds works with inspiring employers and doctors to empower them to be the change agents we need to make our shared vision a reality.

The Role:

Data Scientists at Grand Rounds work on problems that are core to the company’s mission. Major challenges include developing systems and models to identify the highest-quality doctors in the country, as well as methodologies to uncover the subtle differences in each physician’s clinical expertise. Additionally, patient-level modeling allows us to understand the specific healthcare needs of every person. With a high-fidelity understanding of both patients and physicians, we are able to match patients to appropriate, high-quality care and to understand the health of our patient populations.

Our growing group of machine learning engineers sits on the Data Science team and works closely with Data Engineering, Platform, and Analytics to build out search and analytics platforms powered extensively by machine learning technologies. This role involves managing the full platform and lifecycle for production-grade online machine learning, developing batch and stream processing pipelines in Spark, and ensuring deep integration with the next-generation data platform being developed by our data engineers.

Example projects include:

Building modules for our “Match Engine” ecosystem: This collection of services powers our provider-matching backend, providing its distributed runtime and deployment services. As data scientists across our team are constantly developing new models and features that will help patients immediately, we aim to publish these to our own integrated platform. You will help solidify, grow, and lead the evolution of the end-to-end systems architecture. Because this is a user-facing, real-time prediction environment, we take seriously instrumentation, the optimization of both models and orchestration code, and the construction and integration of online and offline experimentation frameworks.

Laying the groundwork for our next-generation analytics platform: This multi-purpose, Spark-based ecosystem will enrich our view of providers and patients through machine-learning-driven inference and population health statistics. By deeply understanding the modeling and inference patterns long used across our team, you will help simplify and improve our prototype-to-production processes at scale. You will work at the intersection of our Data Engineering and Data Science teams to ensure robust modeling and data delivery systems.

Qualifications:

  • Excellent verbal communication skills, including the ability to clearly and concisely articulate complex concepts to both technical and non-technical collaborators
  • BS with 8+ years, MS with 6+ years, or PhD with 3+ years of experience. Degree(s) should be in a technical discipline such as Computer Science, Engineering, Statistics, Physics, Math, or a quantitative social science
  • Previous experience with machine learning and statistics fundamentals
  • Production engineering experience is highly desired, including previous experience developing and maintaining high-availability search and/or machine learning services
  • Experience with distributed systems components, including SQL and NoSQL databases, caching layers (Redis, Memcached), distributed compute (Spark, Hadoop), queueing systems (Kafka, Kinesis, SNS), and cloud-based data warehouses (BigQuery, Athena, Redshift)
  • Experience with workflow management solutions such as Airflow, Azkaban, or Luigi, and with scalable ETL for batch and stream processing workloads in Spark or Hadoop
  • Required: SQL, Python, Linux shell scripting
  • Desired: Scala, Java, or Ruby
  • Experience with production-ready machine learning packages such as scikit-learn, TensorFlow, PyTorch, or SparkML
  • Frequent user of cloud computing platforms such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform
  • Double Bonus Points: previous work on medical applications and/or with claims data
This is a full-time position located in San Francisco, CA.

-----
Grand Rounds is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Grand Rounds considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.