DockerCon 2018
151972 - Accelerating Development Velocity of Production ML Systems with Docker

Session Speakers
Session Description

The rise of microservices has allowed ML systems to grow in complexity, but it has also introduced new challenges when things inevitably go wrong. This talk dives into why and how Pinterest Dockerized the array of microservices that produces the Pinterest Home Feed to accelerate development and decrease operational complexity, and it outlines benefits we gained from this change that may be applicable to other microservice-based ML systems.

Most companies provide isolated development environments for engineers to work within. While a necessity once a team reaches even a small size, this same organizational choice introduces potentially frustrating dependencies when those individual environments inevitably drift. This project was initially motivated by the difficulty of testing individual changes in a reproducible way: without standardized environments, pre-deployment testing often yielded non-representative results, causing downtime and confusion for those responsible for keeping the service up.

The Docker solution that was eventually deployed pre-packages all dependencies found in each microservice, allowing developers to quickly set up large portions of the Home Feed stack and always test against the current team-wide configs. This architecture enabled the team to debug latency issues, expand our testing suite to include connecting to simulated databases, and develop our Thrift APIs more quickly. This talk will feature tips and tricks for Dockerizing a large-scale legacy production service and discuss how an architectural change like this can change how an ML team works.
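As a rough illustration of the pattern the abstract describes (a Compose file is one common way to pre-package per-service dependencies and share team-wide configs), the sketch below uses hypothetical service names and images, not Pinterest's actual stack:

```yaml
# docker-compose.yml — hypothetical sketch of a multi-service dev stack.
# Each service's image pins its own dependencies, so every engineer
# tests against identical, version-controlled, team-wide configuration.
version: "3.8"
services:
  feed-ranker:
    build: ./feed-ranker          # Dockerfile pre-packages all deps
    environment:
      - CONFIG_PATH=/configs/team-wide.yaml
    volumes:
      - ./configs:/configs:ro     # shared configs, read-only
    depends_on:
      - sim-db
  sim-db:
    image: postgres:13            # simulated database for the test suite
    environment:
      - POSTGRES_PASSWORD=dev-only
```

With a file like this, `docker-compose up` brings up the ranking service against a simulated database, so pre-deployment tests run in the same environment for every engineer.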


Additional Information
Innovation
Breakout
40 minutes
Session Schedule