Machine Learning Meets Fashion (AI Adventures)


YUFENG: On this episode
of AI Adventures, we will attempt to go through
an entire machine learning workflow in one video,
pulling best practices from our previous episodes. It’s a bit of material,
but I think we can do it. Training a model with
the MNIST data set is often considered the Hello
World of machine learning. But that’s been done
many times over. And unfortunately, just because
a model does well on MNIST, it’s not necessarily
predictive of high performance with other data sets,
especially since most image data we have today are more complex
than handwritten digits. Zalando decided it was time
to make MNIST fashionable again and recently released a
data set called Fashion-MNIST. It’s the exact same format as the regular MNIST, except the data is in the form of pictures of various articles of clothing, shoes, and bags. It’s still spread across 10 categories, though, and the images are
still 28-by-28 pixels. So let’s train a model to
detect which type of clothing is being shown. We’ll start by building a linear classifier, as usual, and see how we do. We’ll use TensorFlow’s Estimators framework to make our code easy to write and easy to maintain. As a reminder, we’ll first load in the data, create our classifier, and then run the training and evaluation. We’ll also make some predictions directly from our local model. Let’s start by
creating our model. We’ll flatten the data set from 28-by-28 images down to 1-by-784 pixel vectors and make a feature column called “pixels” to hold them. This is analogous to our flower features from Episode 3, Plain and Simple Estimators.
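That flattening step is a one-line reshape. Here’s a minimal NumPy sketch, with an array of zeros standing in for the Fashion-MNIST training images:

```python
import numpy as np

# Stand-in for the Fashion-MNIST training images: 60,000 images, 28x28 pixels each.
images = np.zeros((60000, 28, 28), dtype=np.float32)

# Flatten each 28x28 image into a single 784-pixel vector.
pixels = images.reshape(-1, 28 * 28)

print(pixels.shape)  # (60000, 784)
```

On the Estimator side, those flattened values are then described by a single numeric feature column, created with tf.feature_column.numeric_column("pixels", shape=784).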
Next, we’ll create our linear classifier. This time we have 10 possible classes to label, instead of the three we used previously with the iris flowers.
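In code, that setup might look like the following; a sketch assuming the TensorFlow 1.x Estimator API used throughout this series:

```python
import tensorflow as tf

# A single numeric feature column covering all 784 flattened pixel values.
feature_columns = [tf.feature_column.numeric_column("pixels", shape=784)]

# Linear classifier over 10 clothing categories (vs. the 3 iris species before).
classifier = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    n_classes=10)
```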
To run our training, we’ll need to set up our data set and input function. TensorFlow has a built-in utility to accept a NumPy array and generate an input function right from that, so let’s take advantage of it. We’ll load in our data set using the input_data module; I’ve already downloaded the data set to a folder, so we’ll point to that here. Now we can call classifier.train to bring together our classifier, the input function, and the data set. Finally, we run an evaluation step to see how our model did.
When we use the classic MNIST data set, this linear model typically gets about 91% accuracy. However, Fashion-MNIST is a considerably more complex data set, and we can only really achieve an accuracy in the low 80s, sometimes even lower than that. So how can we do better? As we learned in Episode 6, let’s go deep. Swapping in the DNN classifier is a one-line change, and we can now rerun our training and evaluation to see if our deep neural network can perform any better than the linear one.
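The one-line swap might look like this, again assuming the TensorFlow 1.x Estimator API (the hidden-layer sizes are example values, not the ones used in the video):

```python
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("pixels", shape=784)]

# Same setup as the LinearClassifier, with one extra argument: hidden layer sizes.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[256, 64],  # example sizes
    n_classes=10)
```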
As we discussed in Episode 5, we should bring up TensorBoard to take a look at these two models’ performance side by side. It looks like the deep network could definitely use some more time to train, though. The Estimators framework makes this easy: all we need to do is rerun the calls to the train and evaluate functions. Looking at TensorBoard, it seems like my deep model is performing no better than my linear one did. This is perhaps an opportunity, however, to tune some of my hyperparameters, like we talked about in Episode 2. Maybe my model needs to be larger to accommodate the complexity of this data set, or perhaps my learning rate needs to be lowered.
Experimenting with these parameters a bit, we can finally break through and achieve a higher overall accuracy than our linear model can obtain. It takes quite a bit more training, but ultimately it’s worth it to achieve those higher accuracy numbers. Notice also that the linear model plateaus earlier than the deep network; because deep networks are often more complex than linear ones, they can take longer to train. And at this stage, say we’re happy with our model: we’d be able to export it and produce a scalable Fashion-MNIST classifier API. You can see Episode 4 for more
details on how to do that. Let’s also take a quick peek at how you can make predictions using Estimators. In large part, it looks just like how we called train and evaluate; that’s one of the great things about Estimators, the consistent interface. Notice that this time we’ve specified a batch size of 1, num_epochs of 1, and shuffle as False. This is because we want the predictions to go one by one through all the data, preserving that order.
I’ve extracted five images from the middle of the evaluation data set for us to try some predictions on. And I picked these five not just because they were in the middle, but because my model managed to get two of them wrong. Both were supposed to be shirts, but the model incorrectly thought the third example was a bag and the fifth was a coat. And you can see, looking at these images, how these examples are more challenging than handwritten digits, if for no other reason than just the graininess of the images. So how did your model perform? And what parameters
did you end up using to achieve that accuracy? Let me know below
in the comments. You can find the code that
I used to train this model and generate these images,
also in the links below, along with more links
to the other resources we talked about in this episode. Our next set of
videos will be focused on some of the tools of the
machine learning ecosystem, to help you build out your
workflow and tool chain, as well as showcase even
more architectures that you can employ to solve your
machine learning problems. I look forward to
seeing you there. And until then, keep
on machine learning. Thanks for watching this
episode of Cloud AI Adventures. Be sure to subscribe
to the channel to catch future episodes
right when they come out.
