Deep Learning - Neural Network Concepts PDF
Techno India University, West Bengal
Summary
This document provides an overview of deep learning concepts and neural networks. It differentiates machine learning from deep learning and explains the learning process of a neural network, including forward and backward propagation, activation functions, loss functions, and optimizers. The document also includes multiple-choice questions on these topics.
Full Transcript
Deep Learning - Understanding Neural Network

DIFFERENCE BETWEEN MACHINE LEARNING AND DEEP LEARNING

Machine Learning involves creating algorithms that allow computers to learn from data and make predictions or decisions based on it. The models learn patterns from data using predefined features, often requiring human intervention for feature selection. ML uses algorithms such as decision trees, random forests, support vector machines (SVMs), k-nearest neighbours (KNN), and linear regression. These algorithms often require manual feature engineering. ML models can work well with smaller datasets and do not require extensive computational resources.

Deep Learning is a specialised form of ML that mimics the workings of the human brain (neural networks) to learn from vast amounts of data. DL models automatically learn features from data without explicit feature extraction. DL uses neural networks, particularly multi-layered ones (deep neural networks), such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). These models can automatically extract features as they process data. DL models typically require large amounts of data and significant computational power (e.g., GPUs) due to the complexity and number of layers in neural networks.

Use Machine Learning when:
- You have smaller datasets or a limited amount of labelled data.
- You need a simpler model that is easy to interpret and requires minimal computational resources.
- Your task is a relatively straightforward classification, regression, or clustering problem.

Use Deep Learning when:
- You have large amounts of data (especially unstructured data like images, audio, or text).
- You're working with tasks that involve complex patterns and large datasets where traditional ML models may not perform well.
- You require very high performance in areas like computer vision, natural language processing, or autonomous systems.

Examples of machine learning tasks: predicting house prices (linear regression).
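The house-price example above can be made concrete with a minimal linear-regression sketch. This is an illustration only: the toy dataset (floor area versus price) and the closed-form least-squares fit are assumptions introduced here, not part of the original notes.

```python
# Simple linear regression: price = m * area + c, fit by least squares.
# Toy data (floor area in m^2 -> price); values are made up for illustration.
xs = [50.0, 70.0, 90.0, 110.0, 130.0]
ys = [25.0, 34.0, 46.0, 55.0, 64.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form solution: slope m = cov(x, y) / var(x), intercept c = mean_y - m * mean_x.
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var_x = sum((x - mean_x) ** 2 for x in xs)
m = cov_xy / var_x
c = mean_y - m * mean_x

def predict_price(area):
    """Predict a price from floor area using the fitted line."""
    return m * area + c
```

Note that the "feature" here (floor area) is chosen by hand, which is exactly the manual feature engineering that distinguishes classical ML from deep learning.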
Further examples: customer segmentation (k-means clustering) and credit risk scoring (decision trees) for machine learning; image recognition and object detection (CNNs) for deep learning.

LEARNING PROCESS OF A NEURAL NETWORK

1. Forward Propagation - The propagation of information from the input layer to the output layer. Input layers connect to hidden layers through weighted channels. The input is multiplied by the weights and summed, and a bias is added at each hidden-layer neuron. This weighted sum is then passed through an activation function, which determines whether the neuron contributes to the next layer.

2. Backward Propagation - The propagation of information from the output layer back to the hidden layers is what makes a neural network learn by itself. During backpropagation, the network evaluates its performance using a loss function to quantify the deviation between the predicted and expected output. This information is sent backwards to adjust the weights and biases, improving accuracy.

ACTIVATION FUNCTIONS
An activation function introduces non-linearity into the network and decides whether a neuron can contribute to the next layer. Non-linearity allows DL models to perform complex operations.
A) Sigmoid Function
B) ReLU Function

LOSS FUNCTIONS
A loss function calculates the deviation of the predicted output from the actual output.

OPTIMIZERS
During training, we tweak and adjust the weights (parameters) to minimise the loss function, which makes the predictions as accurate as possible.
A) Gradient Descent - An iterative algorithm that starts at a random point on the loss function and travels down its slope in steps until it reaches the lowest point (minimum) of the function. It is fast, robust, and flexible.
B) Epochs - One epoch is a single pass of the entire dataset forward and backward through the neural network.

Quick MCQs

Question 1: What is the purpose of forward propagation in a neural network?
A) To adjust the weights and biases
B) To calculate the loss function
C) To propagate information from the input layer to the output layer
D) To optimise the learning rate

Question 2: What happens during backpropagation in a neural network?
A) Weights are initialized randomly
B) Input is multiplied by weights and passed through an activation function
C) Weights and biases are adjusted based on the loss function
D) Data is normalized to avoid overfitting

Question 3: Which of the following activation functions is known for having sparse activations?
A) Sigmoid
B) Tanh
C) ReLU
D) Softmax

Question 4: What is the primary role of the loss function in neural networks?
A) To control the learning rate
B) To calculate the deviation between predicted and actual output
C) To adjust the bias of neurons
D) To initialize the weights

Question 5: Which loss function is commonly used in binary classification tasks?
A) Squared Error
B) Huber Loss
C) Binary Cross Entropy
D) Kullback-Leibler Divergence

Question 6: Which of the following optimizers works by traveling down the slope of the loss function to find the minimum point?
A) Hinge Loss
B) Gradient Descent
C) Cross Entropy
D) Binary Cross Entropy

Question 7: An epoch in neural networks is defined as:
A) The process of adjusting weights and biases during training
B) The number of neurons in a hidden layer
C) A complete pass of the entire dataset through the network
D) The activation function used in a layer

Question 8: Which of the following is a non-linear activation function?
A) ReLU
B) Linear
C) Hinge Loss
D) Sigmoid

Question 9: Which loss function is typically used in regression problems?
A) Binary Cross Entropy
B) Multi-Class Cross Entropy
C) Squared Error
D) Hinge Loss

Question 10: During forward propagation, what happens to the input after it is multiplied by weights and summed?
A) It is passed to the output layer
B) It is passed through an activation function
C) It is sent back to the input layer
D) It is discarded if the loss is too high

Question 11: Which loss function is suitable for multi-class classification problems?
A) Squared Error
B) Multi-Class Cross Entropy
C) Huber Loss
D) Binary Cross Entropy

Question 12: What is the goal of the gradient descent optimizer?
A) To increase the loss function value
B) To decrease the loss function value by adjusting the weights
C) To initialize the weights randomly
D) To stop the training process when accuracy is achieved
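The learning process described in the notes (forward propagation through a weighted sum plus bias and an activation, a loss measuring the deviation between predicted and expected output, and gradient descent repeated over epochs) can be sketched with a single sigmoid neuron. This is a minimal illustration, not the document's own code: the OR-gate dataset, learning rate, and epoch count are assumptions chosen for the example.

```python
import math

# Activation functions mentioned in the notes.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

# Binary cross-entropy loss: deviation between predicted and actual output.
def bce(a, y):
    eps = 1e-12
    return -(y * math.log(a + eps) + (1 - y) * math.log(1 - a + eps))

# Tiny illustrative dataset (logical OR), chosen for this sketch.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.5  # weights, bias, learning rate (assumed values)

for epoch in range(2000):  # one epoch = one full forward/backward pass over the dataset
    gw, gb = [0.0, 0.0], 0.0
    for x, y in data:
        # Forward propagation: weighted sum plus bias, then activation.
        z = w[0] * x[0] + w[1] * x[1] + b
        a = sigmoid(z)
        # Backward propagation: for sigmoid + cross-entropy, dLoss/dz = a - y.
        d = a - y
        gw[0] += d * x[0]
        gw[1] += d * x[1]
        gb += d
    # Gradient descent: step down the slope of the loss function.
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

def predict(x):
    """Forward pass plus thresholding at 0.5."""
    return round(sigmoid(w[0] * x[0] + w[1] * x[1] + b))
```

After training, the neuron classifies all four OR inputs correctly. A real deep network repeats the same weighted-sum/activation step across many layers and uses the chain rule to propagate the loss gradient back through all of them.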