Deep Learning and its Variants: Session 1 (2024-01-14)
Deep Learning & its Variants (GGU DBA)
An Introduction to Neural Networks
Dr. Anand Jayaraman, Professor, upGrad; Chief Scientist, Soothsayer Analytics; Chief Data Scientist, Agastya Data Solutions

Profile: Dr. Anand Jayaraman
- Role: Chief Scientist, Soothsayer Analytics; Professor, upGrad UGDX
- Academic background: Ph.D. in Physics, University of Pittsburgh, USA; B.Tech. in Engineering Physics, IIT Bombay, India
- Expertise: financial markets, algorithmic trading, analytics in practice

Agenda
- A simple perceptron model
- How it is similar to linear/logistic regression
- Limitations of a perceptron
- The multi-layer perceptron as a way to overcome those limitations

Automation is the future. What is the best learning system known to us? The biological neural system.

Thinking is possible even with a "small" brain
- Pigeons were able to discriminate between Van Gogh and Chagall with 95% accuracy when presented with pictures they had been trained on.
- Discrimination was still 85% successful for previously unseen paintings by the two artists: pigeons as art experts (Watanabe et al. 1995).
- Sources: https://en.wikipedia.org/wiki/Marc_Chagall and https://en.wikipedia.org/wiki/Vincent_van_Gogh
- Mice can be trained to run mazes and to memorize the odors of contraband drugs, chemicals, and explosives.
- Sources: 1) https://www.biointeractive.org/classroomresources/mouse-uses-memory-navigate-maze 2) https://newsfeed.time.com/2012/11/16/israelicompany-trains-mice-to-detect-drugs-bombs/

Biological neural networks
- The fundamental units are termed neurons; the connections between neurons are synapses.
- The adult human brain consists of about 100 billion neurons and 1,000 trillion synaptic connections.
- So, how does the brain work? The signal travels along the axon, from the cell body (which houses the nucleus) out to the synapse.

Biological inspiration
The spikes travelling along the axon of the pre-synaptic neuron trigger the release of neurotransmitter substances at the synapse. The neurotransmitters cause excitation or inhibition in the dendrite of the post-synaptic neuron. The integration of the excitatory and inhibitory signals may produce spikes in the post-synaptic neuron. The contribution of each signal depends on the strength of the synaptic connection.

Learning in a biological neural network
In 1949 Donald Hebb postulated one way for the network to learn: if a synapse is used more, it gets strengthened and releases more neurotransmitter. That particular path through the network gets stronger, while paths that are not used get weaker. You might say that each connection has a weight associated with it: larger weights produce more stimulation, and smaller weights produce less.

The artificial (computational) neuron
- Input, weights, sum, threshold.
- Synaptic strengths represent the influence of one neuron on another; they are learnt by feedback, experience, and observation.
- A neuron's output becomes the input to the next neuron, scaled by the synaptic strength.
- Dendrites carry signals to the cell body, where they are summed; if the sum exceeds the threshold, the neuron fires (see the sketch below).
- Hebbian learning: a synapse that is used more gets strengthened, so that path through the network grows stronger while unused paths grow weaker.
- Machine learning: determine the synaptic strengths (weights) by finding the optimal weights consistent with the given data.
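To make the input-weights-sum-threshold picture concrete, here is a minimal Python sketch of the computational neuron just described. The weight, input, and threshold values are invented for illustration; they are not from the lecture.

```python
# A minimal sketch of the computational neuron described above:
# sum the weighted inputs, and fire if the sum exceeds a threshold.

def neuron_fires(inputs, weights, threshold):
    # Dendrites carry signals to the cell body, where they are summed,
    # each scaled by its synaptic strength (weight).
    total = sum(w * x for w, x in zip(weights, inputs))
    # The neuron fires only if the total stimulation exceeds the threshold.
    return total > threshold

# Illustrative values: three inputs with different synaptic strengths.
print(neuron_fires(inputs=[1.0, 0.0, 1.0],
                   weights=[0.4, 0.9, 0.3],
                   threshold=0.5))  # True, since 0.4 + 0.3 = 0.7 > 0.5
```

Learning, in the machine-learning sense above, then amounts to choosing the weights (and the threshold) so that these firing decisions agree with the given data.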
Perceptron: the first artificial neuron
- Idea: the neuron's structure as a model of decision making; it makes decisions by weighing up evidence.
- It is not meant as a complete model of decision making, but it captures the key characteristics: input, weights, sum, threshold.
- Notation: with vectors of inputs and weights, the neuron input is the weighted sum of the inputs, written as a dot product, z = w · x; the threshold becomes a bias term.

From perceptrons to sigmoid neurons
Perceptrons are brittle. Why does this matter? Consider how to train a perceptron, that is, how to learn from data: we try out different weights to minimize the error. But as we change a perceptron's weights, the change in output is spiky; most of the time nothing happens, and then at some point the output suddenly switches (it is not smooth). What is desirable is that a small change in a weight produces a small change in the output.

Sigmoid neuron
Idea: use a different activation function.
- When z is large and positive, the output is ~1.
- When z is large and negative, the output is ~0.
- For intermediate z, the output changes smoothly from 0 to 1.

Classification with logistic regression
In a logistic regression model for a (binary) classification task, Y = sigmoid(Wx + b), the weight coefficients are learned from the data:

Independent variable (x) | Weight coefficient (W)
Salary                   | w1
Credit rating            | w2
Age                      | w3
Education level          | w4
Account balance          | w5

Artificial neural model: the perceptron (with threshold θ).

Example: automatic or manual transmission (mtcars dataset)
Classification with a single logistic regression unit: using the mtcars dataset, estimate the probability of a vehicle being fitted with a manual transmission if it has a 120 hp engine and weighs 2800 lbs. Predicting on this new data (manual entry), there is a 64% probability of the car being fitted with a manual transmission. The fitted model's log-odds are z = 18.8663 - 8.08035·wt + 0.0363·hp, so the decision boundary is 0 = 18.8663 - 8.08035·wt + 0.0363·hp, and the predicted probability is sigmoid(z); a worked sketch follows below.
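As a check on the 64% figure, here is a short Python sketch that plugs the fitted coefficients above into the sigmoid. It assumes, as in the standard mtcars dataset, that wt is recorded in thousands of pounds, so 2800 lbs corresponds to wt = 2.8.

```python
import math

def sigmoid(z):
    # Squashes the log-odds z into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

def p_manual(wt, hp):
    # Fitted coefficients from the example above; wt is in 1000s of lbs.
    z = 18.8663 - 8.08035 * wt + 0.0363 * hp
    return sigmoid(z)

# 120 hp engine, 2800 lbs (wt = 2.8):
print(p_manual(wt=2.8, hp=120))  # ~0.645
```

The log-odds come out to z ≈ 0.597, and sigmoid(0.597) ≈ 0.645, which matches the 64% probability quoted in the example.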