Notebooks and MLOps. Choose one.
In the previous issue, I wrote about what MLOps suffers from. Thinking about it some more, I realized there is one more thing standing in our way towards MLOps, and you know it very well: Jupyter notebooks.
In fairness to Jupyter notebooks, they have become the industry-standard way of prototyping ML models. Because notebooks are interactive and support visual output, there is no better way to explore data and share results. Integration with a wide range of data science libraries has made Jupyter the heart of the ecosystem. Notebooks are aesthetically pleasing and simple enough for anyone to use. All of this makes them look like the perfect tool.
When I saw notebooks for the first time, I fell in love. When I switched my career from software development to data science, I was lucky enough to help the PyCharm team integrate Jupyter into the IDE. Today, you can work with Jupyter notebooks right within PyCharm or DataSpell and enjoy the interactivity of notebooks combined with the intelligence of an IDE.
Like anything close to perfect, Jupyter notebooks come at a price. Watching teams struggle to use notebooks in production, I derived a rule: for any ML model, the time spent in a Jupyter notebook is inversely proportional to its reproducibility. The reasons behind this rule are the poor modularity and reusability of notebook code and poor integration with Git. Worst of all, the habit of working in notebooks incentivizes practices that go against reproducibility. It is a vicious circle: we use notebooks because they are a great way to prototype models and explore data, but the more we use them, the more problems we face at the deployment stage.
Imagine that you’d like to stay fit. People often think extra calories can be compensated for with more work at the gym. Without fixing their calorie intake, they go to the gym and work out until exhaustion. This won’t work; at least, it won’t make you fit. The same is true of Jupyter notebooks and MLOps. If you think MLOps tools, such as pipeline orchestration frameworks, will improve the reproducibility of your models while you keep building them in Jupyter notebooks, good luck with that.
Is there a way out of this vicious circle? Yes: a change of habit. You’ll be surprised how much more reproducible and maintainable your models become in just a few iterations if you simply spend more time outside of Jupyter notebooks. People often ask how to adopt MLOps practices. Try training your models using Python scripts, Git, and CI/CD. Simply by avoiding Jupyter notebooks, you’ll find yourself spending more time making your code reusable and tested.
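What does this look like in practice? Here is a minimal sketch of a notebook-free training script. Everything here, the file name, the toy dataset, and the closed-form one-feature model, is my illustrative assumption, not a prescribed layout; the point is the shape: a fixed random seed, plain importable functions, and a model artifact written to a file that Git and CI can track.

```python
# train.py -- a minimal sketch of notebook-free training.
# All names and the toy model are illustrative, not a prescribed layout.
import json
import random
from pathlib import Path

def make_data(n=100, seed=42):
    """Generate a toy dataset y = 2x + 1 + noise, deterministically."""
    rng = random.Random(seed)  # fixed seed => reproducible runs
    xs = [rng.uniform(0, 10) for _ in range(n)]
    ys = [2.0 * x + 1.0 + rng.gauss(0, 0.1) for x in xs]
    return xs, ys

def fit(xs, ys):
    """Ordinary least squares for a single feature, closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return {"slope": slope, "intercept": intercept}

def main():
    xs, ys = make_data()
    model = fit(xs, ys)
    # The artifact is a plain file: easy to version, diff, and test in CI.
    Path("model.json").write_text(json.dumps(model))
    return model

if __name__ == "__main__":
    print(main())
```

Because `make_data` and `fit` are ordinary functions, a CI job can import them and assert on the fitted coefficients, which is exactly the kind of test that is awkward to run against a notebook.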
You may ask: what about the interactivity of Jupyter notebooks? How can that be replaced? In fact, there is a better alternative here too. With ML application frameworks such as Gradio and Streamlit, you can achieve a lot more without compromising on reproducibility. These frameworks let you decouple the logic of your ML application from the model and the data, and building an ML application is about as easy as writing a notebook. This is why ML applications are gradually taking over from notebooks.
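To make the decoupling concrete, here is a small sketch (the toy model and all function names are assumptions of mine): the model loading and prediction logic are plain, framework-free functions that can be imported and unit-tested, while only one thin function touches the UI framework, Gradio in this case.

```python
# Core logic: plain functions with no UI dependency, so they can be
# imported and unit-tested. The toy "model" here is an illustrative assumption.

def load_model():
    """Stand-in for loading a trained artifact (from disk, a registry, etc.)."""
    return {"slope": 2.0, "intercept": 1.0}

def predict(model, x: float) -> float:
    """Pure prediction logic: no UI code in sight."""
    return model["slope"] * x + model["intercept"]

def build_app(model):
    """The only place that knows about the UI framework.

    Requires `pip install gradio`; imported lazily so the core logic
    stays dependency-free.
    """
    import gradio as gr
    return gr.Interface(
        fn=lambda x: predict(model, x),
        inputs=gr.Number(label="x"),
        outputs=gr.Number(label="prediction"),
    )

# To serve the app locally: build_app(load_model()).launch()
```

Swapping Gradio for Streamlit only changes `build_app`; the model and prediction code stay untouched, which is the reproducibility win.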
MLOps seems to be gaining traction these days. So how do you prepare for it? Go to a meetup or join a course? I have a better idea for you: simply try to train and deploy a model without using Jupyter notebooks, e.g. with plain Python scripts. From that exercise, you’ll learn more about MLOps than from a dozen meetups or courses.
Did you like the article? Subscribe to MLOps Fluff, and I promise to post more about developer tools and AI, and in particular how to apply AI to MLOps.