Fast.ai practical deep learning course – review of week 1

Deep learning is cool, hyped, and very popular – THE thing to learn right now. I am obsessed with it, too.

Earlier this year I completed Udacity’s Deep Learning Nanodegree – unfortunately, I rushed through it, and although I finished the assignments and the final homework, the time I invested was not enough to be able to build models on my own. The material and the quality of the instructors are top notch, but I jumped in too fast and didn’t get everything I could out of it. I know for sure I will be reviewing the materials again some time soon.


In the meantime, I discovered another open (and free) course on deep learning by fast.ai – Practical Deep Learning For Coders. After reading through the feedback and the curriculum, the decision was pretty straightforward – this is a great hands-on MOOC. But before I jumped into it, I listed several goals for learning deep learning (no pun intended):

  1. Get my hands dirty with neural networks. Since I’m not an engineer per se – I manage and build data science and analytics teams – I don’t have much time to keep up with the newest advances in machine learning, so I must push myself to the level where I can build my own (simple) models, intimately understand the underlying concepts, and be able to read, interpret, and comment on the deep learning code that the engineers on my team write.
  2. Brush up on Python. I have been using R for a long time (6+ years as of today) and only started learning and using Python a few years ago. Since most of the material and code for deep learning is in Python, this is a no-brainer. On top of that, I started my computer science master’s degree at Georgia Tech this year (1 course finished, 9 to go), where Python is going to be the main language, so it doesn’t hurt to get better at it.
  3. Solidify ML and math/statistics skills. I took advanced math and algebra courses during my bachelor’s studies – that knowledge is long gone. I learned ML and statistical analysis at work, picking up more with every project. First I used SPSS, then I found out about R, which changed my life completely. Building model after model in marketing, CRM, financial modelling, and other areas improved my knowledge gradually, but I never had a solid foundation in the mathematical underpinnings of the optimization algorithms.
  4. Have fun and stay cool. I don’t know about the second part (staying cool), but having fun is very important to me – I want to build cool applications and models that beat what was state of the art five years ago by a sizable margin.

Review of week 1 – getting thirsty, not much coding

I started the fast.ai course around a week ago and recently finished the week 1 materials and assignments. During the first week I learned how to load a pre-trained state-of-the-art convolutional neural network (VGG16) and use it to build a dogs-vs-cats classifier that lands in the top 50% of the public leaderboard of this Kaggle competition.
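
For anyone curious what that looks like in code, here is a minimal sketch of the fine-tuning idea using the Keras VGG16 application. This is not the course’s vgg16.py helper (which wraps something similar) – the paths, batch size, and epoch count are my own illustrative assumptions:

```python
# Minimal fine-tuning sketch, NOT the course's vgg16.py helper;
# paths and hyperparameters below are illustrative assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Load VGG16 with ImageNet weights, dropping its 1000-way classifier head
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # keep the pre-trained convolutional filters frozen

# Attach a fresh head for the 2-class dogs-vs-cats problem
x = Flatten()(base.output)
out = Dense(2, activation="softmax")(x)
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Keras infers the two labels from the folder names (train/dogs, train/cats)
batches = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=64,
    class_mode="categorical")
model.fit(batches, epochs=1)
```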

As a second exercise, I retrained the same architecture on another Kaggle competition – detecting and classifying distracted drivers from pictures of the driver (10 classes available) – and got into the top 50% of that public leaderboard as well.
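
Continuing the sketch above, the only real change for the new dataset is the size of the freshly attached head – with 10 classes, the output layer gets 10 units (again an illustrative sketch, reusing the frozen `base` model from before):

```python
# Continuing the sketch above: swap the 2-unit head for a 10-unit one,
# reusing the frozen VGG16 `base` defined earlier. Still illustrative.
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

out = Dense(10, activation="softmax")(Flatten()(base.output))
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```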

On top of that, the course pushed me to set up my laptop to run Ubuntu (after 7+ rounds of formatting the disk and reinstalling, I made it), spin up an EC2 machine and configure it for deep learning, and install and set up CUDA, TensorFlow, PyTorch, MXNet, and multiple other deep learning packages – turning my laptop into a mini deep learning machine and getting comfortable with bash along the way.

There was not a lot of coding this week – mostly it was about loading two helper files, vgg16.py and utils.py, and running a few lines of code to build the model, without diving deep into its architecture. The most interesting part was reading through the vgg16.py file to figure out how the classes and functions are set up, and completing parts of the provided notebook. The last assignment – adjusting the code and running it against a different dataset – was even more interesting. The provided helper files required the data to be organized in a specific way: every picture had to sit in a folder named after its class, i.e. dog pictures in a “dogs” folder and cat pictures in a “cats” folder. I also discovered a great random sampling command in bash in this stackoverflow thread.
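
Since the thread covers a bash one-liner, here is the same idea sketched in Python instead – laying files out one folder per class and moving a random sample into a validation set. The folder names and the sample size of 1,000 are my own assumptions, not course code:

```python
# Illustrative sketch of the one-folder-per-class layout plus a random
# validation split (the bash one-liner handles the sampling part).
# Paths and sample size are assumptions, not the course's code.
import random
import shutil
from pathlib import Path

train_dir = Path("data/train")   # expects data/train/dogs, data/train/cats
valid_dir = Path("data/valid")

for cls in ("dogs", "cats"):
    (valid_dir / cls).mkdir(parents=True, exist_ok=True)
    images = list((train_dir / cls).glob("*.jpg"))
    for img in random.sample(images, 1000):  # move a random 1,000 per class
        shutil.move(str(img), str(valid_dir / cls / img.name))
```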

Looking into week 2

Now I am looking forward to week 2, where I’m going to walk through the model and rebuild the architecture using Keras. On top of that, the reading assignments for week 2 include Stanford’s CS231n Convolutional Neural Networks for Visual Recognition and chapters 1, 2, and 3 of the Neural Networks and Deep Learning book. I will not make the same mistake I made with Udacity’s course and rush through these reading assignments – I will take as long as it needs, even if week 2 turns into a whole month.

Overall, the beginning is great, and I am grateful to Jeremy Howard and Rachel Thomas for this brilliant course.
