Podcast
Questions and Answers
What happens to the margin position of the hyperplane if the support vectors are deleted?
- The margin position increases (correct)
- The margin position shifts towards the support vectors
- The margin position decreases
- The margin position remains unchanged
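As a small illustration of why the answer turns on the support vectors alone, the sketch below (hypothetical 2-D points, not from the quiz) fixes a separating hyperplane and identifies the support vectors as the points at minimum perpendicular distance; only those points constrain the margin, so deleting any other point leaves the maximum-margin hyperplane unchanged, while deleting a support vector would change it on retraining.

```python
# Illustrative sketch with made-up data: identify which points are
# support vectors of the hyperplane w.x + b = 0 with w = (1, 0), b = 0.
import math

w, b = (1.0, 0.0), 0.0
points = {  # hypothetical 2-D samples with class labels
    "a": ((2.0, 1.0), +1),
    "b": ((1.0, 3.0), +1),   # closest positive point
    "c": ((-1.0, 0.0), -1),  # closest negative point
    "d": ((-3.0, 2.0), -1),
}

norm_w = math.hypot(*w)

def distance(x):
    """Perpendicular distance from point x to the hyperplane w.x + b = 0."""
    return abs(w[0] * x[0] + w[1] * x[1] + b) / norm_w

# Support vectors are the points achieving the minimum distance (the margin).
min_dist = min(distance(x) for x, _ in points.values())
support_vectors = [name for name, (x, _) in points.items()
                   if math.isclose(distance(x), min_dist)]
print(support_vectors)  # the points that define the margin
```

Here points "b" and "c" sit closest to the hyperplane and define the margin; removing "a" or "d" changes nothing, which is why only support vectors matter.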
In a linearly separable dataset, why aren't all points lying on the supporting hyperplanes considered support vectors?
- Because they do not influence the classifier's decision boundary (correct)
- Because they are not important for maximizing the margin
- Because they are outliers
- Because they are misclassified points
What is the main goal when searching for the largest margin classifier in SVM?
- To minimize the number of support vectors
- To maximize the distance between the classes (correct)
- To minimize the margin width
- To maximize the number of support vectors
How are the weights in SVM determined during optimization?
What does the hyperplane H0 represent in SVM?
What is the key difference between Maximum Margin Classifiers and Support Vector Classifiers?
Why are support vectors crucial in determining the decision boundary in SVM?
Why are Maximum Margin Classifiers very sensitive to outliers?
What is the purpose of allowing some misclassification in Support Vector Classifiers?
How does a hard margin differ from a soft margin in Support Vector Machines?
What trade-off is involved when using a soft margin in Support Vector Machines?
How is the best soft margin determined in Support Vector Machines?
What is the formula for the margin of a separating hyperplane in Support Vector Machines?
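A commonly used form of this margin, stated here as a sketch of the standard derivation under the canonical scaling of the constraints:

```latex
% Margin of a separating hyperplane w^T x + b = 0 under the canonical
% scaling \min_i y_i (w^\top x_i + b) = 1:
\[
  \text{margin} \;=\; \frac{2}{\lVert w \rVert},
\]
% so maximizing the margin is equivalent to minimizing
% \tfrac{1}{2}\lVert w \rVert^2 subject to y_i (w^\top x_i + b) \ge 1.
```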
Why should the decision boundary in SVM be as far away from the data of both classes as possible?
In SVM, how can a data point be identified as a support vector?
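For reference, in the hard-margin formulation a support vector is characterized by its dual coefficient; a sketch of the standard condition:

```latex
% A point x_i is a support vector iff its dual (Lagrange) coefficient
% is positive: \alpha_i > 0. By complementary slackness such points lie
% on a supporting hyperplane:
\[
  \alpha_i > 0 \;\Longrightarrow\; y_i\,(w^{\top} x_i + b) = 1 .
\]
% The converse need not hold: a point on the supporting hyperplane can
% still have \alpha_i = 0 and thus not be a support vector.
```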
What is the equation that corresponds to the decision boundary in SVM when trained on a dataset with 6 points, containing three samples with class label -1 and three samples with class label +1?
What is the characteristic of the W vector in SVM?
Why are methods like the Lagrange multiplier method used in practice to solve the SVM optimization problem?
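The usual reason is that the constrained primal converts into a tractable quadratic program over the dual variables; a sketch of the standard hard-margin Lagrangian dual:

```latex
% Hard-margin SVM dual obtained via Lagrange multipliers \alpha_i \ge 0:
\[
  \max_{\alpha \ge 0}\;
    \sum_i \alpha_i
    - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, x_i^{\top} x_j
  \qquad \text{s.t.}\; \sum_i \alpha_i y_i = 0,
\]
% with the weight vector recovered as w = \sum_i \alpha_i y_i x_i,
% so only points with \alpha_i > 0 (the support vectors) contribute.
```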