In this homework, we'll deploy the ride duration model in batch mode. As in homeworks 1 and 3, we'll use the FHV data.
You'll find the starter code in the homework directory.
We'll start with the same notebook we ended up with in homework 1.
We cleaned it a little bit and kept only the scoring part. Now it's in ./starter.ipynb.
Run this notebook for the February 2021 FHV data.
What's the mean predicted duration for this dataset?
- 11.19
- 16.19
- 21.19
- 26.19
You can see the result by running ./starter.ipynb.
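For reference, here's a minimal sketch of how the scoring works (the feature columns PUlocationID and DOlocationID and the model.bin path are assumptions based on the starter code):

```python
import pickle

# the starter model file bundles a DictVectorizer and the fitted model
with open('model.bin', 'rb') as f_in:
    dv, model = pickle.load(f_in)

# df is the prepared February 2021 FHV dataframe from the notebook
dicts = df[['PUlocationID', 'DOlocationID']].to_dict(orient='records')
X_val = dv.transform(dicts)
y_pred = model.predict(X_val)

print('mean predicted duration:', y_pred.mean())
```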
Like in the course videos, we want to prepare the dataframe with the output.
First, let's create an artificial ride_id column:
df['ride_id'] = f'{year:04d}/{month:02d}_' + df.index.astype('str')
Next, write the ride id and the predictions to a results dataframe.
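A minimal sketch of building it, assuming y_pred holds the predictions computed earlier:

```python
import pandas as pd

# the results dataframe should contain only these two columns
df_result = pd.DataFrame()
df_result['ride_id'] = df['ride_id']
df_result['predicted_duration'] = y_pred
```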
Save it as parquet:
df_result.to_parquet(
    output_file,
    engine='pyarrow',
    compression=None,
    index=False
)
What's the size of the output file?
- 9M
- 19M
- 29M
- 39M
Make sure you use the snippet above for saving the file: it should contain only these two columns. For this question, don't change the dtypes of the columns, and use pyarrow, not fastparquet.
19M
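A quick way to check the size from Python (a sketch; output_file is the path used above):

```python
import os

# size in MiB, roughly what `ls -lh` reports
size_mb = os.path.getsize(output_file) / (1024 * 1024)
print(f'{size_mb:.0f}M')
```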
Now let's turn the notebook into a script.
Which command do you need to execute for that?
jupyter nbconvert --to script starter.ipynb
Now let's put everything into a virtual environment. We'll use pipenv for that.
Install all the required libraries. Pay attention to the Scikit-Learn version (scikit-learn==1.0.2): check the starter notebook for details.
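Something like this should do it (the extra packages are an assumption; install whatever your script imports):

```bash
pipenv install scikit-learn==1.0.2 pandas pyarrow --python=3.9
```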
After installing the libraries, pipenv creates two files: Pipfile and Pipfile.lock. The Pipfile.lock file keeps the hashes of the dependencies we use for the virtual env.
What's the first hash for the Scikit-Learn dependency?
"sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"
Let's now make the script configurable via CLI. We'll create two parameters: year and month.
Run the script for March 2021.
What's the mean predicted duration?
- 11.29
- 16.29
- 21.29
- 26.29
Hint: just add a print statement to your script.
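A sketch of the CLI wiring with sys.argv (read_data and apply_model are hypothetical names for the notebook's existing loading and scoring logic):

```python
import sys

def run():
    # usage: python starter.py <year> <month>, e.g. `python starter.py 2021 3`
    year = int(sys.argv[1])
    month = int(sys.argv[2])

    df = read_data(year, month)   # hypothetical: load the FHV data for that month
    y_pred = apply_model(df)      # hypothetical: vectorize and predict as in the notebook
    print('mean predicted duration:', y_pred.mean())

if __name__ == '__main__':
    run()
```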
Finally, we'll package the script in a Docker container. For that, you'll need to use a base image that we prepared.
This is what it looks like:
FROM python:3.9.7-slim
WORKDIR /app
COPY [ "model2.bin", "model.bin" ]
(see homework/Dockerfile)
We pushed it to agrigorev/zoomcamp-model:mlops-3.9.7-slim, which you should use as your base image.
That is, this is how your Dockerfile should start:
FROM agrigorev/zoomcamp-model:mlops-3.9.7-slim
# do stuff here
This image already has a pickle file with a dictionary vectorizer and a model. You will need to use them.
Important: don't copy the model to the docker image. You will need to use the pickle file already in the image.
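Here's a sketch of how the rest of the Dockerfile could look (assuming the script is called starter.py and the dependencies come from your Pipfile):

```dockerfile
FROM agrigorev/zoomcamp-model:mlops-3.9.7-slim

RUN pip install -U pip pipenv

# install the locked dependencies into the system environment
COPY [ "Pipfile", "Pipfile.lock", "./" ]
RUN pipenv install --system --deploy

# note: we don't copy the model, it's already baked into the base image
COPY [ "starter.py", "./" ]

ENTRYPOINT [ "python", "starter.py" ]
```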
Now run the script with docker. What's the mean predicted duration for April 2021?
- 9.96
- 16.55
- 25.96
- 36.55
To build and run the container (winpty is only needed in Git Bash on Windows; omit it on Linux/macOS):

winpty docker build -t q6:v1 .
winpty docker run -it --rm q6:v1 2021 04