In my previous post about learning machine learning and deep learning, I gave a brief history of my attempts in this journey. While going through the fast.ai book and lessons, I started to document my progress as blog posts. This is the first post in my deep learning series. I am writing it while in the middle of lesson 4 (from the course videos), having finished chapter 4 of the book.
Primary course material
It would be helpful to clarify how I am following the course materials. The course primarily provides a book (Deep Learning for Coders with fastai & PyTorch) and lessons in video format. The book is available for free as interactive Jupyter Notebooks, and you can buy it on Amazon if you prefer a paper copy. The videos are available on the fast.ai website.
My learning experiences
I try to learn chapter by chapter: watching the video for one chapter and then reading that chapter from the book. While watching a lesson or reading the book, I keep the course Notebooks open and ready to experiment with the code. This has worked great for me. Below are some highlights and a recap of the first and second chapters of the book.
Chapter 1 (lesson 1 and a bit of lesson 2) makes the necessary introductions: the course, the instructors, the course materials, the history, and some of the jargon of machine learning and deep learning. You will also learn how to set up the Jupyter Notebooks environment and run some deep learning code.
I already published another post about how I set up my Jupyter Notebooks environment on Microsoft Azure.
It is important to know what to expect from this course. The course follows a top-down learning approach. In this approach, learners are first exposed to the big picture of what they are going to learn and get a sense of the end-to-end outcome. For instance, you will already run a state-of-the-art deep learning algorithm in the first lesson and learn how it works at a high level. Then you will gradually learn the theory and fundamentals of how the algorithms work under the hood.
I like this approach because I usually tend to waste time on theory without practicing what I have learned. However, as Jeremy mentions, not everyone is comfortable learning in a top-down way.
Personally, I struggled to understand some of the concepts and intuitions without knowing the underlying fundamentals. Therefore, I spent some time learning those fundamentals on my own using other sources, even though I know these topics will most probably be covered in future lessons.
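One of the fundamentals I looked into early was gradient descent, which the book itself covers properly in chapter 4. A minimal plain-Python sketch of the idea, fitting a line to a few points (this is my own toy illustration, not code from the course):

```python
# Toy gradient descent: fit y = w*x + b to a handful of points by
# repeatedly nudging w and b against the gradient of the mean squared error.

def mse(points, w, b):
    """Mean squared error of the line y = w*x + b over (x, y) points."""
    return sum((w * x + b - y) ** 2 for x, y in points) / len(points)

def step(points, w, b, lr):
    """One gradient descent step on w and b."""
    n = len(points)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
    return w - lr * grad_w, b - lr * grad_b

points = [(1, 3), (2, 5), (3, 7)]  # these lie exactly on y = 2x + 1
w, b = 0.0, 0.0
for _ in range(5000):
    w, b = step(points, w, b, lr=0.05)

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Fastai and PyTorch compute the gradients automatically and run this loop over mini-batches on a GPU, but the core idea is the same loop of "measure the loss, follow the gradient downhill".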
My suggestion to new learners is to manage their expectations according to this learning approach and see whether it works for them or whether they need to adjust it to their own preferences. In the 21st century, we are moving towards personalized learning, and there is nothing wrong with personalizing our learning approach and experience.
In chapter 2 (taught in lessons 2 and 3), you will learn about the state of deep learning and how it is used in different domains. The instructors explain the importance of working on projects and doing hands-on practice. Finally, you see how to build an end-to-end image classifier and deploy it as a web application. As a bonus, they also talk about starting your own blog along this journey.
I did some research on how to start a personal project. If you want to begin from scratch, meaning defining a problem, gathering data, cleaning the data, training a decent model, and deploying the model (as a web/mobile application, API, etc.), I suggest having a look at a section in this blog post by Julien Despois.
So far, I have created a tree classification model that classifies tree images as coniferous or deciduous. I will dedicate another blog post to that project and what I learned from it.
Another way to do hands-on projects is to use Kaggle, a community for data science and machine learning where you can run your code in their notebook environment, with free GPU compute for training. Kaggle is also famous for its competitions, which are well-defined projects. If I am not mistaken, every competition provides a labeled dataset for training and an unlabeled one for testing, which makes them a useful source of projects. You can start with one of the long-running beginner competitions like Titanic just to learn how Kaggle works, and then go on to find an interesting deep learning competition.
I used Kaggle to review my Python skills and participated in the Titanic competition to see how the complete submission process works on Kaggle.
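If you want to try the same thing, a Titanic submission is just a CSV file with a PassengerId column and a Survived prediction (0 or 1) for each test passenger. A minimal sketch using only the standard library (the passenger ids and the placeholder predictions below are my own made-up example, not Kaggle's actual test ids):

```python
import csv

# Write a Kaggle-style submission file for the Titanic competition.
# The competition expects a CSV with PassengerId and Survived columns;
# the ids and "predictions" here are placeholders for illustration.

predictions = {901: 0, 902: 1, 903: 0}  # passenger id -> predicted survival

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["PassengerId", "Survived"])
    for pid in sorted(predictions):
        writer.writerow([pid, predictions[pid]])
```

In a real entry you would fill `predictions` from your trained model's output on the test set and upload `submission.csv` through the competition page.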
As a final word, I think it is crucial to have tenacity in the first half of this course. I need to take my time going over the materials to make sure I have learned them deeply enough before continuing to the next chapter. Stay tuned for the next post in the deep learning series.