Streamlit: Deploying ML Models the easy way
In the previous post I described the process and steps required to deploy a Machine Learning model such that it can be used in production and actually generate value. The app was built using the web framework Flask and the project was deployed using Heroku.
As a reminder: to create the app we had to know some HTML and web development basics - things most Data Scientists neither have nor are really interested in. The app also lacked any kind of styling, and hence looked … well … it got the job done.
In this post, I’ll introduce an alternative to this approach, for which no web development knowledge is needed at all. The package I’m going to use is called Streamlit and is especially useful for programmers who quickly want to get their models into production or simply to showcase their work. It comes with pretty and consistent styling and takes care of all the design and front end for you.
1. Building the App
We start off by giving our app a title using the st.title() function. Streamlit titles and texts can also handle emojis in Markdown form, so we make use of that as well.
st.title("Classify your (German) lyrics! 🎉🎤🎶")
Streamlit comes with a variety of widgets that can be used for user input, for example text inputs, sliders, or predefined selections. These widgets can furthermore be organized in different sections: on the main page, in a sidebar, or in containers, which can hold multiple elements.
The only input we need for the genre classification are the song’s lyrics. A text area is included on the page in which the lyrics can be pasted. Since lyrics are usually rather long with many line breaks, we organize the text area in a container that can be expanded and collapsed. The container will be collapsed by default when initializing the page. We further add a clickable button reading “Classify Lyrics” which will start the classification.
expander = st.beta_expander("Lyrics")
with expander:
    lyrics = st.text_area("Paste your lyrics here:", height=100)
clicked = st.button("Classify lyrics!")
The next step is to provide the app's functionality. As in the custom-made Flask app, the genre prediction should be displayed together with a genre-related image. To do this, we organize the image and the prediction together in a container. When the “Classify Lyrics” button is clicked, the lyrics are passed to the model as input, the prediction is made, and the corresponding image is displayed with the prediction underneath it. We use the same images as before and store them in a folder called “static”. The code doing this looks like the following:
if clicked:
    with st.beta_container():
        # make prediction and edit output format
        with st.spinner("Predicting the song's genre..."):
            pred = learn_inf.predict(lyrics)[0]
        img = images + pred + ".jpg"
        # edit string output
        if pred == "hiphop":
            pred = "Hip-Hop"
        if pred in ["pop", "schlager"]:
            pred = pred.capitalize()
        st.image(img, use_column_width=True)
        st.info("The song's genre is " + pred)
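The label clean-up can also be factored into a small standalone helper, which keeps the app code tidy and is easy to unit-test. This is just a sketch; `format_genre` is a hypothetical name, not part of the app above:

```python
def format_genre(pred: str) -> str:
    """Map a raw model label to a display-friendly genre name."""
    # genres whose display name is not simply the capitalized label
    special = {"hiphop": "Hip-Hop"}
    return special.get(pred, pred.capitalize())

print(format_genre("hiphop"))    # Hip-Hop
print(format_genre("schlager"))  # Schlager
```

With such a helper, adding a new genre later only means extending the `special` mapping instead of adding another `if` branch.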
Lastly, since we are going to use Streamlit Sharing to deploy the app, and since it does not yet support Git LFS files, we have to download the model file from Dropbox ourselves. To avoid repeating the download every time the app reruns, we use Streamlit’s built-in caching. Loading the model into the app then looks as follows:
@st.cache(allow_output_mutation=True)
def load_model(url):
    modelLink = url
    model = requests.get(modelLink).content
    return model

modelFile = load_model("https://bit.ly/2MiwpnP")
model = BytesIO(modelFile)
learn_inf = load_learner(model)
2. Putting it all together
Now simply copy and paste all of the above code snippets into a single app.py file:
import streamlit as st
import pickle
from fastai.text.all import *
import sentencepiece
import requests
from io import BytesIO

st.title("Classify your (German) lyrics! 🎉🎤🎶")

# download model from Dropbox, cache it and load the model into the app
@st.cache(allow_output_mutation=True)
def load_model(url):
    modelLink = url
    model = requests.get(modelLink).content
    return model

modelFile = load_model("https://bit.ly/2MiwpnP")
model = BytesIO(modelFile)
learn_inf = load_learner(model)

# point to the images
images = "./static/"

expander = st.beta_expander("Lyrics")
with expander:
    lyrics = st.text_area("Paste your lyrics here:", height=100)
clicked = st.button("Classify lyrics!")

if clicked:
    with st.beta_container():
        # make prediction and edit output format
        with st.spinner("Predicting the song's genre..."):
            pred = learn_inf.predict(lyrics)[0]
        img = images + pred + ".jpg"
        # edit string output
        if pred == "hiphop":
            pred = "Hip-Hop"
        if pred in ["pop", "schlager"]:
            pred = pred.capitalize()
        st.image(img, use_column_width=True)
        st.info("The song's genre is " + pred)
And that’s it. The app can be run locally by typing streamlit run app.py into the terminal from the app’s directory (or within the virtual environment).
3. Deploying to Streamlit Sharing
To deploy the app, sign up for a free Streamlit Sharing account; you can then deploy the app from GitHub with only a few clicks. Make sure to include a requirements.txt file in the repository like here and simply follow the instructions on the Streamlit Sharing page. The final app can be accessed here or by clicking on the “Open in Streamlit” badge below.
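Based on the imports used in app.py, a minimal requirements.txt could look like the following. This is a sketch: in practice you should pin the exact versions installed in your local environment (e.g. with pip freeze) so the deployed app matches what you tested:

```text
streamlit
fastai
sentencepiece
requests
```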