Summary
The experimentation phase of building a machine learning model requires a lot of trial and error. One of the factors that limits how many experiments you can try is the length of time required to train the model, which can be on the order of days or weeks. To reduce the time required to test different iterations, Rolando Garcia Sanchez created FLOR, a library that automatically checkpoints training epochs and instruments your code so that you can bypass early training cycles when you want to explore a different path in your algorithm. In this episode he explains how the tool works to speed up your experimentation phase and how to get started with it.
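FLOR automates this checkpoint-and-resume pattern. As a rough illustration of the underlying idea only (this is not FLOR's API), the plain-PyTorch sketch below saves a checkpoint after every epoch and can later resume from a chosen epoch instead of retraining from scratch; the stand-in model, checkpoint directory, and `resume_epoch` value are hypothetical placeholders.

```python
# Minimal sketch of per-epoch checkpointing and resume, NOT FLOR's actual API.
import os
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                      # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
ckpt_dir = "checkpoints"                      # hypothetical checkpoint directory
os.makedirs(ckpt_dir, exist_ok=True)

resume_epoch = 0                              # set to N to skip epochs 0..N-1
if resume_epoch > 0:
    # Prime the model and optimizer with the state saved after `resume_epoch`
    # epochs, so a new experiment can branch from there instead of starting over.
    state = torch.load(os.path.join(ckpt_dir, f"epoch_{resume_epoch - 1}.pt"))
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])

for epoch in range(resume_epoch, 5):
    for _ in range(100):                      # stand-in training steps
        optimizer.zero_grad()
        loss = model(torch.randn(32, 10)).pow(2).mean()
        loss.backward()
        optimizer.step()
    # Checkpoint the epoch so a later run can fast-forward past it.
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict()},
        os.path.join(ckpt_dir, f"epoch_{epoch}.pt"),
    )
```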
Announcements
- Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
- When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- Your host as usual is Tobias Macey, and today I’m interviewing Rolando Garcia about FLOR, a suite of machine learning tools for hindsight logging that lets you speed up model experimentation by checkpointing your training epochs.
Interview
- Introductions
- How did you get introduced to Python?
- Can you describe what FLOR is and the story behind it?
- What is the core problem that you are trying to solve with FLOR?
- What are the fundamental challenges in model training and experimentation that make it necessary?
- How do machine learning researchers and engineers address this problem in the absence of something like FLOR?
- Can you describe how FLOR is implemented?
- What were the core engineering problems that you had to solve while building it?
- What is the workflow for integrating FLOR into your model development process?
- What information are you capturing in the log structures and epoch checkpoints?
- How does FLOR use that data to prime the model training to a given state when backtracking and trying a different approach?
- How does the presence of FLOR change the costs of ML experimentation and what is the long-range impact of that shift?
- Once a model has been trained and optimized, what is the long-term utility of FLOR?
- What are the opportunities for supporting e.g. Horovod for distributed training of large models or with large datasets?
- What does the maintenance process for research-oriented OSS projects look like?
- What are the most interesting, innovative, or unexpected ways that you have seen FLOR used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on FLOR?
- When is FLOR the wrong choice?
- What do you have planned for the future of FLOR?
Keep In Touch
- rlnsanz on GitHub
- @rogarcia_sanz on Twitter
Picks
- Tobias
- Rolando
Closing Announcements
- Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links
- FLOR
- UC Berkeley
- Joe Hellerstein
- MLOps
- RISE Lab
- AMP Lab
- Clipper Model Serving
- Ground Data Context Service
- Context: The Missing Piece Of The Machine Learning Lifecycle
- Airflow
- Copy on write
- ASTor
- Green Tree Snakes: Python AST Documentation
- MLFlow
- Amazon Sagemaker
- Cloudpickle
- Horovod
- Ray Anyscale
- PyTorch
- Tensorflow
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA