ML Engineering
Hop develops algorithms at scale – reliably, reproducibly and responsibly

Here at Hop, we build production-scale systems to deploy machine learning – whether that's on premises, in the cloud, or on device.

Sometimes, this also requires us to construct novel compute substrates to explore more interesting research questions. Our engineers focus on questions of scale, latency, concurrency and resilience. Though we have preferences (spoiler alert: PyTorch/Python/AWS), we're generally language and platform agnostic, and have worked deeply with AWS, GCP, Azure and Heroku, as well as various on-premises installations.


Featured Case Study

Toyota Research Institute's Human Interactive Driving research team faced challenges as their experiments grew in complexity and scale. The researchers' engineering needs outpaced their existing capabilities, and their experimental datasets were approaching hyperscale. They needed advanced engineering and operations support.

Accelerating Research in Autonomous Driving

Working closely with Toyota Research Institute’s Human Interactive Driving division, we’ve provided advanced engineering and operations support to scale and accelerate their ML research efforts.

Contact us to learn how Hop can help with your ML engineering needs.