CS7641 Midterm: Biases in Supervised Learning
16 Questions

Created by
@EasiestMimosa

Questions and Answers

What is the purpose of choosing small, random values for the initial input weights in machine learning?

  • To increase the chances of finding a global maximum.
  • To enhance the capacity of the model to overfit the training data.
  • To ensure consistent results in every run of the algorithm.
  • To minimize the risk of falling into local minima. (correct)
Which kind of bias in support vector machines focuses on maximizing the margin?

  • Restriction bias
  • Curse of dimensionality
  • Capacity bias
  • Preference bias (correct)
In k-nearest neighbors (KNN), which statement accurately describes the 'smoothness' preference bias?

  • The algorithm treats all features equally.
  • The algorithm ignores distant neighbors completely.
  • The algorithm assumes that all local patterns are identical.
  • The algorithm averages values of k-nearest neighbors to expect smooth behavior. (correct)
What effect does the curse of dimensionality have on the k-nearest neighbors algorithm?

It diminishes the model's performance as the number of features grows, unless more data is available.

    In the context of boosting, what does restriction bias refer to?

The ensemble inherits the restriction bias of its underlying weak learners.

    What is one of the key requirements for k-nearest neighbors (KNN) to model data accurately?

A method to compute similarity between neighbors.

    Which statement best summarizes the role of smaller weights in the training of machine learning models?

They help avoid the complexity associated with larger weight values, which can lead to overfitting.

    Which aspect of k-nearest neighbors could lead to ineffective modeling due to dimensionality issues?

Equal feature weighting.

    What does the restriction bias of a supervised learning algorithm refer to?

The set of hypotheses that the algorithm can consider.

    What is preference bias in the context of supervised learning algorithms?

The algorithm's preference for certain representations.

    Which of the following is a characteristic of decision trees' preference bias?

Preferring trees with good splits near the top.

    What is a consequence of neural networks not achieving much restriction bias?

They are prone to overfitting.

    What is a method to address overfitting in neural networks?

Cross-validation.
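
For context, a minimal sketch (my own illustration, assuming scikit-learn; not from the course materials) of how cross-validation exposes overfitting: compare training accuracy against the held-out, cross-validated score.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, random_state=0)

# Hold data out in k folds: a large gap between training accuracy and
# the cross-validated score is the classic signature of overfitting.
net = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
cv_scores = cross_val_score(net, X, y, cv=5)
train_acc = net.fit(X, y).score(X, y)
print(train_acc, cv_scores.mean())  # a large gap suggests overfitting
```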

    What is the main difference between the training rules for neural networks?

Gradient descent is a more common choice than the perceptron rule.
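
For context on that contrast, a minimal NumPy sketch (my own illustration, not the course's code) of the two classic update rules: the perceptron rule updates on the thresholded output, while gradient descent (the delta rule) updates on the unthresholded one.

```python
import numpy as np

def perceptron_update(w, x, y, lr=0.1):
    """Perceptron rule: the update uses the thresholded output.
    Convergence is guaranteed only if the data are linearly separable."""
    y_hat = 1.0 if w @ x >= 0 else 0.0
    return w + lr * (y - y_hat) * x

def gradient_descent_update(w, x, y, lr=0.1):
    """Delta rule: the update uses the unthresholded activation, i.e. a
    gradient step on squared error; it converges (to a local optimum)
    even when the data are not linearly separable."""
    return w + lr * (y - w @ x) * x

w = np.array([-0.5, 0.2])
x = np.array([1.0, 1.0])
w = perceptron_update(w, x, y=1.0)  # w @ x = -0.3 < 0, so an update fires
print(w)  # [-0.4  0.3]
```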

    What is a general principle of Occam's Razor?

Simpler models are preferred, all else being equal.

    How many hidden layers are required to model arbitrary functions in neural networks?

Two.

    Study Notes

    Biases in Supervised Learning Algorithms

    • Restriction Bias: The representational power of an algorithm, or the set of hypotheses an algorithm will consider, defining what a model can represent.

    • Preference Bias: The favored representations of a supervised learning algorithm, influencing its choice of hypotheses, e.g., preferring simpler representations.

    Decision Trees

• Restriction Bias: The hypothesis space is the set of functions representable as decision trees.
• Preference Bias: Prefers shorter trees and trees with informative, high-information-gain splits near the top (see the sketch below).
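
To make the "good splits near the top" preference concrete, here is a minimal sketch (my own NumPy illustration, not from the course; the helper names are hypothetical) of the information-gain computation that ID3-style learners greedily maximize at each node:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, feature_values):
    """Entropy reduction from splitting `labels` on a discrete feature."""
    gain = entropy(labels)
    for v in np.unique(feature_values):
        mask = feature_values == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain

# Toy data: feature A splits the labels cleanly, feature B does not,
# so a greedy learner places A nearer the top of the tree.
y = np.array([0, 0, 1, 1])
A = np.array([0, 0, 1, 1])
B = np.array([0, 1, 0, 1])
print(information_gain(y, A))  # 1.0 bit
print(information_gain(y, B))  # 0.0 bits
```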

    Artificial Neural Networks (ANN)

• Restriction Bias: Can model a wide range of functions with minimal restriction, which makes it prone to overfitting.
• Preference Bias: Prefers low complexity (small weights, fewer and smaller hidden layers), achieved through:
  • Small, random initial weights, which break symmetry between units and reduce the risk of settling into poor local minima.
  • Keeping weights small during training, since large weights allow overly complex functions and overfitting (see the initialization sketch below).
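
A minimal sketch of the initialization idea above (my own illustration, assuming NumPy; the `init_layer` helper is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out, scale=0.01):
    """Small, random initial weights: randomness breaks symmetry
    between units, and the small scale starts the network at low
    effective complexity, reducing the risk of settling into a poor
    local minimum early and of overfitting via large weights."""
    W = rng.normal(0.0, scale, size=(n_in, n_out))
    b = np.zeros(n_out)
    return W, b

W1, b1 = init_layer(4, 8)
print(W1.std())  # ~0.01: weights start small
```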

    Support Vector Machines (SVM)

• Restriction Bias: Depends on the chosen kernel, which must provide a way to compute similarity between instances.
• Preference Bias: Seeks to maximize the margin between classes to avoid overfitting, relying on the Hyperplane Separation Theorem (see the sketch below).
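
As a hedged illustration of margin maximization (assuming scikit-learn; not the course's own code), a hard-margin linear SVM on two separable point clouds recovers the widest possible gap, whose width is 2/||w||:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable groups; a near-hard-margin linear SVM (large C)
# picks, among all separating hyperplanes, the one maximizing the margin.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
w = clf.coef_[0]
margin = 2.0 / np.linalg.norm(w)  # geometric margin width: 2 / ||w||
print(margin)  # ~2.0 here: the gap between x1=0 and x1=2
```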

    k-Nearest Neighbors (KNN)

• Restriction Bias: Nonparametric regression; given a distance metric, it can model virtually any function.
• Preference Bias: Assumes:
  • Locality: nearby points are similar.
  • Equality: all features matter equally.
  • Smoothness: averaging nearby points yields smooth behavior.
• Limitation: Affected by the "Curse of Dimensionality": the amount of data needed grows rapidly with the number of features (see the sketch below).
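
A minimal sketch (my own NumPy illustration) of the three assumptions in action: distance supplies the similarity, equal feature weighting is implicit in the Euclidean metric, and averaging the k nearest targets produces the smooth behavior described above:

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Nonparametric KNN regression: similarity is Euclidean distance,
    the prediction averages the k nearest targets (the 'smoothness'
    preference), and every feature contributes equally to the
    distance (the 'equality' assumption)."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

X = np.array([[0.0], [1.0], [2.0], [3.0], [10.0]])
y = np.array([0.0, 1.0, 2.0, 3.0, 10.0])
print(knn_predict(X, y, np.array([1.1])))  # averages y at x=0,1,2 -> 1.0
```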

    Boosting

• Restriction Bias: Same as that of the underlying weak learners.
• Preference Bias: Same as that of the underlying weak learners (see the sketch below).
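
A hedged sketch of how the ensemble inherits its weak learners' biases, assuming scikit-learn >= 1.2 (older releases name the argument `base_estimator` rather than `estimator`): with depth-1 stumps, every hypothesis the booster can form is a weighted vote over axis-aligned thresholds.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

# The ensemble's hypothesis space is built from the weak learner's:
# with depth-1 stumps, every member is an axis-aligned threshold,
# so the restriction bias is the stumps' own, combined by weighted vote.
stump = DecisionTreeClassifier(max_depth=1)
boosted = AdaBoostClassifier(estimator=stump, n_estimators=50, random_state=0)
print(boosted.fit(X, y).score(X, y))
```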

    Description

    Explore the concepts of preference and restriction biases in supervised learning algorithms. This quiz will help you understand how these biases influence the representational power of algorithms and the hypotheses they consider. Perfect for organizing your thoughts for the CS7641 midterm.
