
Neural Networks For Your Dog - 2.4 Pocket Algorithm

In this lecture, we'll discuss and code up a Perceptron with the pocket algorithm, allowing the Perceptron to learn and fit data that isn't linearly separable. (See the code on GitHub.)

Course Curriculum:
- Introduction: 1.1 Introduction
- Perceptron: 2.1 MNIST Dataset, 2.2 Perceptron Model, 2.3 Perceptron Learning Algorithm, 2.4 Pocket Algorithm, 2.5 Multiclass Support, 2.6 Perceptron To Neural Network
- Neural Network: 3.1 Simple Images, 3.2 Random Weights, 3.3 Gradient Descent
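The idea behind the pocket algorithm: run the usual perceptron updates, but keep a separate copy of the best weights seen so far (the "pocket") and return those instead of whatever weights the last update happened to produce. A minimal sketch, assuming labels in {-1, +1} and a prepended bias column (not the course's exact code):

```python
import numpy as np

def pocket_perceptron(X, y, epochs=100, seed=0):
    """Perceptron training that 'pockets' the best weights seen so far."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([np.ones((len(X), 1)), X])    # prepend a bias column
    w = np.zeros(Xb.shape[1])
    pocket_w, pocket_errors = w.copy(), np.inf
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if y[i] * (Xb[i] @ w) <= 0:          # misclassified point
                w = w + y[i] * Xb[i]             # standard perceptron update
                errors = np.sum(np.sign(Xb @ w) != y)
                if errors < pocket_errors:       # new best? put it in the pocket
                    pocket_w, pocket_errors = w.copy(), errors
    return pocket_w
```

On linearly separable data this behaves like the plain perceptron; on noisy data, the pocket weights stop the classifier from being ruined by the last few updates.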

Neural Networks For Your Dog - 2.5 Multiclass Support

In this lecture, we'll discuss the one-vs-one and one-vs-rest techniques for extending our Perceptron, a binary classifier, into a multiclass classifier. (See the code on GitHub.)
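One-vs-one trains a binary classifier for every pair of classes and lets them vote; one-vs-rest trains one classifier per class (class k vs. everything else) and predicts the class whose classifier is most confident. A sketch of the one-vs-rest variant, assuming a plain perceptron as the underlying binary learner (not necessarily the course's implementation):

```python
import numpy as np

def train_binary(X, y, epochs=100):
    """Plain perceptron for labels in {-1, +1}; returns weights incl. bias."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:
                w = w + yi * xi
    return w

def ovr_fit(X, y):
    """One-vs-rest: train one binary classifier per class (class k vs. the rest)."""
    classes = np.unique(y)
    W = np.array([train_binary(X, np.where(y == k, 1, -1)) for k in classes])
    return classes, W

def ovr_predict(X, classes, W):
    """Predict the class whose classifier produces the largest raw score."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return classes[np.argmax(Xb @ W.T, axis=1)]
```

For K classes, one-vs-rest needs K classifiers while one-vs-one needs K(K-1)/2, which is one reason one-vs-rest is the more common default.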

Neural Networks For Your Dog - 2.6 Perceptron To Neural Network

In this lecture, we'll discuss an intuitive way to interpret why perceptrons work and how they can be used to mimic simple logical programs. We'll also observe that perceptrons fail to solve the XOR problem, and see how that deficiency (plus some knowledge about the human brain) leads to a natural derivation of multilayer perceptrons, AKA neural networks. (See the code on GitHub.)
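The logic-gate view can be made concrete: a single perceptron with hand-picked weights computes AND or OR, but no single perceptron computes XOR, because no line separates XOR's positive and negative points. Stacking perceptrons fixes this, since XOR(x) = OR(x) AND NOT AND(x). A sketch with fixed (not learned) weights:

```python
import numpy as np

def step(z):
    """Threshold activation: 1 if the weighted sum is positive, else 0."""
    return int(z > 0)

def perceptron(x, w, b):
    return step(np.dot(w, x) + b)

# AND and OR are linearly separable, so a single perceptron handles each.
def AND(x): return perceptron(x, np.array([1, 1]), -1.5)
def OR(x):  return perceptron(x, np.array([1, 1]), -0.5)

def XOR(x):
    """XOR is not linearly separable, but a two-layer network computes it
    as OR(x) AND NOT AND(x)."""
    hidden = np.array([OR(x), AND(x)])         # hidden layer of two perceptrons
    return perceptron(hidden, np.array([1, -1]), -0.5)
```

The hidden layer re-represents the input so that the output perceptron's job becomes linearly separable, which is exactly the intuition carried forward into neural networks.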

Neural Networks For Your Dog - 3.1 Simple Images

In this lecture, we'll check out the Simple Images dataset: 2x2 grayscale images of "checkerboards", "bright lines", and "triangles". (See the code on GitHub.)
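To make the three classes concrete, here is one plausible way such a dataset could be generated: pick a 2x2 pattern template for the class and jitter the pixel intensities. The templates below are an illustrative guess at what "checkerboard", "bright line", and "triangle" mean in 2x2 pixels, not the course's exact construction:

```python
import numpy as np

# Guessed 2x2 templates (1.0 = bright pixel); the course's exact pattern
# definitions may differ.
PATTERNS = {
    "checkerboard": [[[1, 0], [0, 1]], [[0, 1], [1, 0]]],
    "bright_line":  [[[1, 1], [0, 0]], [[0, 0], [1, 1]],
                     [[1, 0], [1, 0]], [[0, 1], [0, 1]]],
    "triangle":     [[[1, 1], [1, 0]], [[1, 1], [0, 1]],
                     [[1, 0], [1, 1]], [[0, 1], [1, 1]]],
}

def sample_image(kind, rng, noise=0.1):
    """Pick a random template for `kind` and jitter its pixel intensities."""
    templates = PATTERNS[kind]
    base = np.array(templates[rng.integers(len(templates))], dtype=float)
    return np.clip(base + rng.normal(0.0, noise, size=(2, 2)), 0.0, 1.0)
```

A 2x2 image flattens to just 4 input features, which keeps the upcoming neural-network code small enough to inspect by hand.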

Neural Networks For Your Dog - 3.2 Random Weights

In this lecture, we'll introduce the formal structure of a neural network, and then we'll code one up with random weights. (See the code on GitHub.)
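Structurally, a feed-forward network is just a list of (weight matrix, bias vector) pairs, one per layer, and a forward pass that alternates an affine map with a nonlinearity. A minimal sketch with random weights and sigmoid activations (an assumption; the course may choose a different activation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_network(layer_sizes, rng):
    """One random (weights, biases) pair per adjacent pair of layers."""
    return [(rng.normal(0, 1, (n_out, n_in)), rng.normal(0, 1, n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, network):
    """Feed x through the layers: a <- sigmoid(W @ a + b) at each layer."""
    a = x
    for W, b in network:
        a = sigmoid(W @ a + b)
    return a
```

With random weights the outputs are of course meaningless; the point is that the whole prediction machinery already works before any learning happens.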

Neural Networks For Your Dog - 3.3 Gradient Descent

In this lecture, we'll see how neural networks learn optimal weights via gradient descent (commonly called "backpropagation"). Then we'll build a neural network with logistic activation functions and a log loss objective function. (See the code on GitHub.)
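The core loop is: forward pass, compute the loss, push gradients backward with the chain rule, then step the weights downhill. A compact sketch for a one-hidden-layer network with logistic activations and log loss, trained on XOR (the architecture, learning rate, and epoch count here are illustrative choices, not the course's exact settings):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(hidden=4, epochs=5000, lr=1.0, seed=0):
    """One hidden layer, logistic activations, log loss, full-batch
    gradient descent on the XOR dataset."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    W1, b1 = rng.normal(0, 1, (2, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(0, 1, (hidden, 1)), np.zeros(1)
    losses = []
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)              # hidden activations
        p = sigmoid(h @ W2 + b2)              # predicted probabilities
        losses.append(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
        # backward pass: with log loss + sigmoid, dLoss/dz2 simplifies to p - y
        dz2 = (p - y) / len(X)
        dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
        dz1 = (dz2 @ W2.T) * h * (1 - h)      # chain rule through the sigmoid
        dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
        # gradient descent step
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return p, losses
```

The pairing of log loss with a logistic output is what makes the output-layer gradient collapse to the tidy p - y term; swapping either piece changes that expression.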