Deep Learning From Scratch

Abstract: There are many good tutorials on neural networks. Some dive deep into the code and show how to implement things; others explain what is going on via diagrams or math; but very few bring together all the concepts needed to understand neural networks, showing diagrams, code, and math side by side. In this tutorial, I'll present a clear, step-by-step explanation of neural networks, implementing them from scratch in NumPy, with diagrams that show how they work and math that explains why they work. We'll cover standard feedforward neural networks and convolutional neural networks (also from scratch), as well as recurrent neural networks (time permitting). Finally, we'll leave time to translate what we learn into performant, flexible PyTorch code so you can apply what you've learned to real-world problems.
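To give a flavor of the "from scratch in NumPy" approach, here is a minimal sketch (not taken from the tutorial itself; all names and shapes are illustrative assumptions) of a one-hidden-layer forward pass:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, b1, W2, b2):
    """One hidden layer: linear -> sigmoid -> linear."""
    hidden = sigmoid(X @ W1 + b1)  # (batch, hidden_units)
    return hidden @ W2 + b2        # (batch, 1) predictions

# Illustrative shapes: 4 examples, 3 features, 5 hidden units
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
W1 = rng.normal(size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(size=(5, 1)); b2 = np.zeros(1)
preds = forward(X, W1, b1, W2, b2)
print(preds.shape)  # (4, 1)
```

The tutorial builds up networks like this step by step, pairing each line of code with the corresponding diagram and equation.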

No background in neural networks is required, but a familiarity with the terminology of supervised learning (e.g. training set vs. testing set, features vs. target) will be helpful.

Bio: Seth Weidman is a data scientist at Facebook, working on machine learning problems related to its data center operations. Prior to this role, Seth was a Senior Data Scientist at Metis, where he first taught two data science bootcamps in Chicago and then taught for one year as part of Metis' Corporate Training business. Before that, Seth was the first data scientist at Trunk Club in Chicago, where he built the company's first lead-scoring model from scratch and worked on its recommendation systems.

In addition to solving real-world ML problems, he loves demystifying concepts at the cutting edge of machine learning, from neural networks to GANs. He is the author of a forthcoming O’Reilly book on neural networks and has spoken on these topics at multiple conferences and Meetups all over the country.