Podcast
Questions and Answers
What is the main purpose of feature scaling in machine learning algorithms?
- To normalize the range of independent variables
- To ensure that each feature contributes approximately proportionately to the final distance (correct)
- To make gradient descent converge faster
- To facilitate the use of regularization in the loss function
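The two most common scaling schemes can be sketched directly with NumPy; a minimal sketch, using made-up age/income columns as the example data:

```python
import numpy as np

# Two features on very different scales: age in years, income in dollars.
X = np.array([
    [25.0,  40_000.0],
    [35.0,  85_000.0],
    [50.0, 120_000.0],
])

# Min-max normalization rescales each feature (column) into [0, 1].
x_min, x_max = X.min(axis=0), X.max(axis=0)
X_minmax = (X - x_min) / (x_max - x_min)

# Standardization rescales each feature to zero mean and unit variance.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
```

After either transform, both columns span comparable ranges, so neither feature dominates purely because of its units.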
Why does gradient descent converge faster with feature scaling?
- Because it reduces the number of iterations needed
- Because it standardizes the range of input feature values (correct)
- Because it reduces the computational complexity of the algorithm
- Because it reduces the likelihood of getting stuck in local minima
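The convergence effect can be demonstrated with a small experiment; a sketch (the data, target coefficients, and tolerance are illustrative choices, not from the source): gradient descent on a least-squares problem where one feature has a ~1000x larger scale, counting iterations until the gradient is tiny.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two independent features on very different scales.
X = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.0, 1000.0, n)])
y = X @ np.array([2.0, 0.003])  # noiseless linear target

def gd_iterations(X, y, tol=1e-8, max_iter=50_000):
    """Full-batch gradient descent on MSE for y ~ X @ w.

    The step size is the largest stable choice (1 over the largest
    eigenvalue of X.T @ X / n); convergence speed is then governed by
    the ratio of largest to smallest eigenvalue, i.e. the conditioning,
    which feature scaling improves."""
    H = X.T @ X / len(y)
    lr = 1.0 / np.linalg.eigvalsh(H)[-1]
    w = np.zeros(X.shape[1])
    for i in range(max_iter):
        grad = X.T @ (X @ w - y) / len(y)
        if np.linalg.norm(grad) < tol:
            return i
        w -= lr * grad
    return max_iter

X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

its_raw = gd_iterations(X, y)            # badly conditioned: extremely slow
its_scaled = gd_iterations(X_scaled, y)  # well conditioned: a handful of steps
```

The unscaled run needs orders of magnitude more iterations (here it exhausts the cap), because the usable step size is dictated by the large-scale feature while progress along the small-scale feature crawls.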
What could happen if one of the features has a broad range of values in a machine learning algorithm?
- It could dominate the calculation of the Euclidean distance (correct)
- It could cause the algorithm to underfit the data
- It could cause the algorithm to overfit the data
- It could prevent the algorithm from converging
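A toy calculation makes the dominance concrete; a sketch with invented (age, income) points, measuring what fraction of the squared Euclidean distance comes from the income feature before and after standardization:

```python
import numpy as np

# Three customers described as (age in years, annual income in dollars).
X = np.array([
    [25.0, 50_000.0],   # a
    [60.0, 51_000.0],   # b
    [26.0, 55_000.0],   # c
])
a, b = X[0], X[1]

# Unscaled: the squared income gap dwarfs the squared age gap, so income
# alone effectively decides which points count as "close".
sq = (a - b) ** 2
income_share_raw = sq[1] / sq.sum()      # > 99% of the squared distance

# After standardizing each column, both features contribute comparably.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
sq_s = (Xs[0] - Xs[1]) ** 2
income_share_scaled = sq_s[1] / sq_s.sum()
```

Before scaling, a 35-year age gap is invisible next to a $1,000 income gap; after scaling, the age difference carries most of the distance, which is the behavior a distance-based method like k-NN actually needs.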
Why is feature scaling important when regularization is used as part of the loss function?
- Because the penalty acts on coefficient magnitudes, which depend on each feature's units, so unscaled features are penalized unevenly (correct)
- Because regularization only works on values between 0 and 1
- Because scaling removes the need to tune the regularization strength
- Because unscaled features make the loss function non-differentiable
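One way to see the interaction is that ordinary least squares is invariant to rescaling a feature, while a penalized fit is not; a sketch with closed-form ridge regression on invented data (the scale factor and penalty strength are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, 1.0]) + rng.normal(0, 0.1, n)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Re-express feature 1 in 1000x smaller units (e.g. dollars, not $1000s).
X_rescaled = X.copy()
X_rescaled[:, 1] *= 1000.0

lam = 10.0
pred = X @ ridge_fit(X, y, lam)
pred_rescaled = X_rescaled @ ridge_fit(X_rescaled, y, lam)

# With lam = 0 (plain OLS) the predictions are identical either way: the
# coefficient just absorbs the unit change. With lam > 0 the L2 penalty
# |w|^2 depends on those units, so the fitted model changes.
pred_ols = X @ ridge_fit(X, y, 0.0)
pred_ols_rescaled = X_rescaled @ ridge_fit(X_rescaled, y, 0.0)
```

Because the penalty is applied per coefficient, an arbitrary choice of units silently reweights how strongly each feature is regularized; scaling first makes the penalty act evenly.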
In which general step of data processing is feature scaling typically performed?
- During data preprocessing, before the model is trained (correct)
- During model evaluation
- After the model has been trained
- During hyperparameter tuning
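In practice the preprocessing step also determines *where* the scaling statistics come from; a minimal sketch (with made-up train/test splits) of the standard convention: fit the statistics on the training split only, then reuse them on the test split so no test-set information leaks into preprocessing.

```python
import numpy as np

rng = np.random.default_rng(2)
X_train = rng.normal(50.0, 10.0, size=(80, 3))
X_test = rng.normal(50.0, 10.0, size=(20, 3))

# Fit: compute mean and std on the training split only.
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

# Transform: apply the same training statistics to both splits.
X_train_scaled = (X_train - mu) / sigma
X_test_scaled = (X_test - mu) / sigma  # reuse, do not refit on test data
```

The training split ends up exactly standardized; the test split is only approximately so, which is expected and correct.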