AT Lecture 7

Questions and Answers

Which of the following are common applications of Convolutional Neural Networks (CNNs)?

  • Image classification
  • Object localization
  • Object detection
  • All of the above (correct)

Object localization involves identifying rectangular regions in an image where objects occur and is independent of object classification.

False (B)

What are the four numbers used to define a bounding box in object localization?

Coordinates of the top-left corner; length and width

In object detection, the number of classification and regression heads needed may not be known, making it unsuitable for direct use with object ________ architectures.

localization

What is the primary purpose of the Region Proposal module in R-CNN?

To generate category-independent region proposals, like candidate bounding boxes. (B)

Fast R-CNN is more computationally efficient than R-CNN because it avoids recomputing convolutional features for each region proposal.

True (A)

What is one reason why R-CNN is considered slow?

CNN-based feature extraction is performed separately on each candidate region.

Faster R-CNN introduces a ________ to predict region proposals, making the process more efficient and integrated.

Region Proposal Network

In the context of transfer learning, what is the 'source' task?

The task from which knowledge is transferred. (D)

Transfer learning is most effective when building machine learning models with limited training data.

True (A)

What is 'negative transfer' in the context of transfer learning?

Degraded performance compared to not using transfer learning

In transfer learning, a higher learning rate for layers being finetuned can lead to a loss of ________ from the source task.

information

What does the Predict to Learn (P2L) approach primarily aim to estimate?

The appropriateness of a pre-trained model for a new learning task. (C)

The Predict to Learn (P2L) approach involves fine-tuning all existing source models to determine the best one for a given task.

False (B)

According to P2L, what are the two attributes that determine whether a pre-trained model is suitable?

Similarity between source and target dataset; size of source dataset

In basic finetuning, it is common to replace the ______ layer of a pre-trained network with a randomly initialized one.

classification

What is a 'frozen embedding' in the context of finetuning a neural network for transfer learning?

The weights of the pre-trained layers that are kept fixed during training. (D)

In the context of transfer learning, using a smaller dataset typically necessitates training more layers of a pre-trained model.

False (B)

What is one reason to choose a smaller learning rate when finetuning layers of a pre-trained network?

Avoid losing information from the source task

According to research results, fine-tuning after a source model is chopped and retrained, improves ________.

generalization

Why might a network trained to classify a random subset of 500 classes achieve lower top-1 error than the same network trained on a 1000-class dataset?

The 500-class network is less prone to overfitting. (D)

A key step in semi-supervised learning involves labeling additional data from the web to supplement the existing training data.

False (B)

In semi-supervised learning as it relates to teacher-student methods, what is the role of the teacher model?

To label the unlabelled dataset

In semi-supervised learning, a teacher model is trained and then used to construct a new labeled dataset from previously ________ data for student model training.

unlabelled

What is the potential drawback of increasing the value of K (number of top samples) in semi-supervised learning when labeling unlabeled images?

It increases the probability of introducing false positives. (A)

In semi-supervised learning, the teacher model’s predictions are only used to rank images, not to assign them specific labels.

False (B)

What is one way that web data, such as Flickr, can be used to train visual recognition systems despite having noisy labels?

Leverage human intelligence to filter out noise

In weakly supervised learning, the convolutional network is trained to predict words that ________ with an image.

co-occur

Which of the following is a key consideration when using pseudo-labeling to improve transfer learning?

Calculating the distance from a set of named anchor points. (D)

An image can only belong to a single semantic class when using transfer learning with pseudo-labeling.

False (B)

For divergence-based pseudo-labelling, which is better: closer or farther anchor points?

Closer

A ________ Neural Network is a model comprised of two identical neural networks whose outputs can be combined to determine the degree of similarity between two images (or more abstractly, vectors).

Siamese

What is dissimilarity also referred to as?

Contrastive loss (C)

Contrastive loss is used to reduce the distance between dissimilar points and increase the distance between similar points.

False (B)

Why is a 'margin' needed in contrastive loss?

Separation of classes

In a general sense and without relating to the specific diagrams, it is observed with transfer learning that the more ________ the image, the better the results.

trees

Match the following terms with their descriptions:

  • Object Localization = Identifying the location of one or more objects in an image and drawing a bounding box around them
  • Object Detection = Identifying if an image contains an object or feature of interest
  • Image Classification = Assigning a label to the image based on its content
  • Semantic Segmentation = Classifying each pixel in an image into a pre-defined set of classes

Match the following transfer learning approaches with their corresponding description:

  • Fine-tuning = Adjusting the weights of a pre-trained model on a new dataset
  • Feature Extraction = Treating a pre-trained model as a fixed feature extractor and training a new classifier on top of the extracted features
  • Domain Adaptation = Adjusting the model to generalize well on a different target domain
  • Multi-Task Learning = Training a model to perform well on multiple related tasks

Which of the following is NOT a typical application of Convolutional Neural Networks (CNNs)?

Text summarization (C)

Object localization involves identifying all objects in an image regardless of the number of classes.

False (B)

In object localization, a bounding box is defined using the coordinates of the top-left corner and two ______.

dimensions

Why can't the object localization architecture be directly used for object detection?

The number of classification and regression heads needed is not known in object detection. (D)

Which of the following is the primary goal of the Region Proposal step in R-CNN?

To generate category-independent region proposals or candidate bounding boxes. (A)

In Fast R-CNN, feature extraction is performed separately on each candidate region, resulting in high computational efficiency.

False (B)

What is the primary advantage of Faster R-CNN over Fast R-CNN?

Faster R-CNN integrates a region proposal network, which makes generating region proposals faster.

Faster R-CNN uses a ______ to directly learn and propose regions, integrating this step into the network itself.

Region Proposal Network

What is the core idea behind transfer learning?

Reusing knowledge gained from solving previous tasks to improve performance on new tasks. (C)

In transfer learning, it is always beneficial to choose a source model trained on a very large dataset, regardless of its similarity to the target task.

False (B)

In transfer learning, the choice of layers to ______ and which to freeze in a pre-trained network is an important part of fine-tuning.

finetune

Match the transfer learning approaches to their descriptions:

  • Common intuition = Achieving high performance by reusing compact representations from source tasks in related target tasks
  • Instance based = Identifying and using appropriate data from source tasks to improve target task training
  • Feature representation based = Leveraging source task weight matrices for adapting pretrained models

What are the two attributes that P2L (Predict to Learn) accounts for when selecting source models?

Similarity between source and target dataset & size of source dataset (D)

Flashcards

Object Localization

Identifying rectangular regions in an image where a fixed set of objects occur.

Object Localization Task

Simultaneously classifying and localizing objects within an image.

Object Detection

Identifying all objects and their classes in an image that may contain a variable number of objects of different classes.

R-CNN Region Proposal

Generate and extract category independent region proposals, e.g. candidate bounding boxes.

R-CNN Feature Extractor

Extract features from each candidate region, e.g. using a deep convolutional neural network.

R-CNN Classifier

Classify features as one of the known classes, e.g. using a linear SVM classifier.

Transfer Learning

A technique that reuses knowledge gathered from source tasks for a target task.

Common Intuition in Transfer Learning

Networks reuse learned compact representations to achieve higher performance on a related target task

Feature Representation Based Approaches

Leverage source task weight matrices by fine-tuning or retraining final layers.

Selection of Source Model

What is the best source task/model to transfer knowledge for a 'target' task?

Similarity Between Source and Target Dataset

A source dataset that is similar to the target dataset in feature space is most likely to have a higher degree of transferability.

Predict to Learn (P2L)

Estimate the appropriateness of a previously trained model for use with a new learning task

Basic Finetuning

Take almost any deep network pre-trained on a large dataset of your choice

Frozen Embedding

Train only the new layer's weights, using the 'frozen' embedding

Automated Labeling of Images

Pseudo-labeling of unlabeled data for transfer learning.

Goal of pseudo labeling

Exploit data from the wild to capture rich representations in source models

Semi-Supervised Learning

Using supervised learning to train a teacher model and use it to classify unlabeled images

Semi-Supervised Learning step 1

Train a teacher model on labeled data D

Pseudo Labeling

Divergence-based pseudo-labeling to improve transfer learning.

Step one of Pseudo Labeling

Calculate the distance from a set of named anchor points representing known and labeled categories

Siamese Networks

A neural network architecture containing two or more identical subnetworks used to generate feature vectors for each input.

Contrastive Loss

Used with Siamese nets, it minimizes intra-class distance and maximizes inter-class distance.

Margin in Contrastive Loss

Ensures dissimilar data points are pushed farther apart in the embedding space.

Study Notes

CNN Applications

  • CNNs are utilized for image classification, object localization, and object detection.

Object Localization

  • Entails identifying rectangular regions where fixed objects occur in an image.
  • Four numbers are used to identify a bounding box; these specify the coordinates of the top-left corner, the length, and the width.
  • Object localization is often integrated with object classification.
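
As a minimal illustration (not from the lecture), the sketch below shows a PyTorch-style localization network with a classification head and a four-number bounding-box head; the backbone, layer sizes, and class count are assumptions made for the example.

    import torch
    import torch.nn as nn

    class LocalizationNet(nn.Module):
        """Sketch: one classification head plus a four-number bounding-box head."""
        def __init__(self, num_classes: int = 20):
            super().__init__()
            # Tiny stand-in backbone; a real model would use a pretrained CNN.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.classifier = nn.Linear(16, num_classes)   # object class scores
            self.box_head = nn.Linear(16, 4)               # top-left x, y, plus the two dimensions

        def forward(self, x):
            features = self.backbone(x)
            return self.classifier(features), self.box_head(features)

    logits, boxes = LocalizationNet()(torch.rand(2, 3, 224, 224))
    print(logits.shape, boxes.shape)  # (2, 20) and (2, 4)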

Object Detection

  • Used when there are a variable number of objects of different classes in an image.
  • Aims to identify all objects and their corresponding classes within the image.
  • Object localization architecture can't be used for object detection, as the required number of classification and regression heads is unknown.

R-CNN (Region Based Convolutional Neural Network)

  • The R-CNN approach initially generates category-independent region proposals, like candidate bounding boxes.
  • It then extracts features from each candidate region using a deep convolutional neural network.
  • Finally, it classifies these features as one of the known classes; linear SVM classifiers are commonly used.
  • It has three main modules: Region Proposal, Feature Extractor, and Classifier (sketched below).
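
To make the three modules concrete, here is a hypothetical, heavily simplified sketch; propose_regions is a stand-in for selective search, and the ResNet-18 backbone is an assumption (older torchvision versions use pretrained=True instead of weights="DEFAULT").

    import torch
    import torchvision.models as models
    import torchvision.transforms.functional as TF

    # Stand-in for the Region Proposal module (real R-CNN uses selective search,
    # which yields roughly 2,000 category-independent candidate boxes per image).
    def propose_regions(image):
        h, w = image.shape[1:]
        return [(0, 0, w // 2, h // 2), (w // 4, h // 4, w, h)]  # (x1, y1, x2, y2)

    backbone = models.resnet18(weights="DEFAULT")  # Feature Extractor
    backbone.fc = torch.nn.Identity()              # keep 512-d features, drop the ImageNet head
    backbone.eval()

    image = torch.rand(3, 480, 640)
    features = []
    with torch.no_grad():
        for (x1, y1, x2, y2) in propose_regions(image):
            crop = TF.resize(image[:, y1:y2, x1:x2], [224, 224])  # warp each region to a fixed size
            features.append(backbone(crop.unsqueeze(0)))          # one CNN pass per region -> slow

    # The Classifier module (e.g. one linear SVM per class) would then score these features.
    print(torch.cat(features).shape)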

Fast R-CNN

  • R-CNN is slow.
  • This slowness is due to running CNN-based feature extraction separately for each candidate region; there is no sharing of computation.
  • Approximately 2,000 proposed regions exist per image at test-time.

Faster R-CNN

  • Includes a Region Proposal Network (RPN) that learns to propose regions within the network itself (see the sketch below).
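
For reference, torchvision ships a pre-built Faster R-CNN; the sketch below assumes a recent torchvision (older versions take pretrained=True instead of weights="DEFAULT") and a dummy image.

    import torch
    import torchvision

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = torch.rand(3, 480, 640)   # dummy image with values in [0, 1]
    with torch.no_grad():
        predictions = model([image])  # one dict per input image

    # Each dict holds proposed boxes (x1, y1, x2, y2), class labels, and confidence scores.
    print(predictions[0]["boxes"].shape, predictions[0]["scores"][:5])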

Transfer Learning

  • Building strong machine learning models typically requires a large amount of training data.
  • Transfer learning is useful because small training jobs are common, and labeled data is often scarce.
  • Transfer learning reuses the knowledge gained from "source" tasks for a "target" task that lacks abundant labeled data.
  • A common intuition is that networks can reuse representations learned for one task on another.
  • Instance-based approaches leverage appropriate data from a source task to supplement target task training.
  • Feature representation-based approaches leverage source task weight matrices.
  • Using the trained weights of a source network, knowledge can be transferred by fine-tuning or retraining the final dense layer.

Improving Transferability in Transfer Learning

  • This involves the selection of a source model.
  • Similarity measures between source and target tasks help improve transferability.
  • Improper base dataset/model choices may result in degraded performance compared to no transfer learning due to negative transfer.
  • Degree of fine-tuning has to be tuned when using transfer learning.
  • This tuning includes selecting which layers to fine-tune or freeze and choosing the learning rate; higher rates can wash away what was originally in the network (see the sketch below).
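
One common way to express this is with per-layer learning rates. A minimal sketch, assuming a ResNet-18 source model and a 10-class target task (both assumptions):

    import torch
    import torchvision.models as models

    model = models.resnet18(weights="DEFAULT")             # assumed source model
    model.fc = torch.nn.Linear(model.fc.in_features, 10)   # new target head (10 classes assumed)

    # Small learning rate for pre-trained layers so source knowledge is not washed away;
    # a larger rate for the freshly initialized head.
    optimizer = torch.optim.SGD(
        [
            {"params": [p for n, p in model.named_parameters() if not n.startswith("fc.")], "lr": 1e-4},
            {"params": model.fc.parameters(), "lr": 1e-2},
        ],
        momentum=0.9,
    )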

Good and bad sources

  • Selecting a base model trained on data similar to the target task gives better performance.

Selection of Source Models

  • Training requires time and resources, so the goal is to avoid a lengthy optimization process.
  • The P2L (Predict to Learn) approach estimates the appropriateness of a previously trained model for use with a new learning task, therefore improving the speed of training.
  • P2L requires one forward pass of the target data set through a single reference model.
  • Two attributes account for the efficacy of P2L: Similarity between source and target datasets, and size of source dataset.

Transfer Learning: Basic Finetuning

  • A deep network pre-trained on a large dataset is typically used.
  • One replaces the last (classification) layer with a randomly initialized layer.
  • Train only the new layer's weights, using the "frozen" embedding.
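
A minimal PyTorch sketch of basic finetuning, assuming a ResNet-18 pre-trained on ImageNet and a 5-class target task (both assumptions; older torchvision versions use pretrained=True):

    import torch
    import torchvision.models as models

    model = models.resnet18(weights="DEFAULT")   # network pre-trained on a large dataset

    # Freeze the pre-trained weights: this is the "frozen" embedding.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the last (classification) layer with a randomly initialized one.
    model.fc = torch.nn.Linear(model.fc.in_features, 5)

    # Train only the new layer's weights.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)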

Transferability

  • Some networks are more transferable than others, based upon their architecture.

Learn to Transfer

  • One must choose the right source model.
  • One must choose the right method, such as shallow learning (e.g. SVMs on extracted features), fine-tuning, and single-source versus ensemble transfer.
  • One must choose which layers to freeze or fine-tune for the given architecture (ResNet101, GoogLeNet, VGG16, etc.).
  • One must choose the right curriculum.

IBM Visual Recognition Service

  • Cloud-based service that supports both image classification and object detection.
  • Supports using pretrained models or developing custom models.

Exploiting Web Data for Training

  • Web and social media are important sources of vision research data.
  • Datasets like ImageNet, PASCAL VOC, and MS COCO use human intelligence to filter out noisy labels.
  • Automated labeling can be used to do this without human intervention.

Automated Labeling of Images

  • This may include pseudo-labeling.
  • Exploit data from the wild to capture source representations.
  • This includes semi-supervised and weakly supervised methods to train and classify images.

Semi-supervised Learning

  • Involves training a teacher model and then running it on unlabeled data.

Labeling unlabeled images

  • Run the teacher model on each unlabeled example and use its softmax predictions as new labels (see the sketch below).
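
A sketch of this labeling step, assuming an already-trained teacher network and an in-memory batch of unlabeled images (the function name and K value are illustrative):

    import torch

    def pseudo_label(teacher, unlabeled_images, num_classes, k=100):
        """Rank unlabeled images by teacher confidence and keep the top-K per class."""
        teacher.eval()
        with torch.no_grad():
            probs = torch.softmax(teacher(unlabeled_images), dim=1)  # soft predictions

        new_dataset = []
        for c in range(num_classes):
            top = torch.topk(probs[:, c], k=min(k, len(probs))).indices  # larger K -> more false positives
            for i in top:
                new_dataset.append((unlabeled_images[i], c))  # an image may be replicated across classes
        return new_dataset  # used to train the student model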

Parameters P and K

  • K is the number of top-ranked samples kept per class when labeling unlabeled images; larger K increases the chance of false positives.
  • P allows an image to be replicated for (assigned to) more than one predicted class.

Performance of teacher-student learning

  • An important factor in performance is fine-tuning the model on the correct data.

Learning from weak labels

  • Social platforms such as Facebook contain vast amounts of such weakly labeled data.

Weakly Supervised Learning Architecture

  • Trains a convnet to predict the words that co-occur with each image.
  • This is a multi-label learning problem with very noisy labels.
  • A standard convnet architecture is used.
  • The loss is a multi-class logistic loss over roughly 100K word classes.

Experimental evaluation

  • The learned CNN features are evaluated by transferring them to other vision tasks.

Analyzing the word embeddings

  • The output layer is a word embedding.

Feature Vector Embeddings

  • Average over the activations from layer n.
  • For n=N-1, the vector is 4096 dimensions.
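
A sketch of such an embedding, assuming VGG16, whose penultimate fully connected layer gives a 4096-dimensional activation (older torchvision versions use pretrained=True):

    import torch
    import torchvision.models as models

    model = models.vgg16(weights="DEFAULT")
    # Drop the final classification layer so the output is the 4096-d penultimate activation.
    model.classifier = torch.nn.Sequential(*list(model.classifier.children())[:-1])
    model.eval()

    images = torch.rand(8, 3, 224, 224)   # stand-in for a dataset
    with torch.no_grad():
        per_image = model(images)         # shape: (8, 4096)
    embedding = per_image.mean(dim=0)     # average over the activations -> one 4096-d vector
    print(embedding.shape)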

Divergence in Feature Space

  • Measure dataset similarity through feature space.

Divergence based Pseudo Labeling to Improve Transfer Learning

  • For each unlabeled sample, calculate its distance (in feature space) from a set of named anchor points representing known and labeled categories, like animal, plant, tool
  • Construct pseudo-labels for unlabeled samples based on these distances
    • Pseudo-labels are a sequence of semantically descriptive names: e.g., <tool, plant>
  • Train a source model using these automatically generated pseudo-labels
  • Finetune the source model on the target dataset (a sketch of the labeling step follows)
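
A toy sketch of the distance-based labeling step, assuming anchor points are given as feature vectors for named categories (the names, dimensions, and top_n choice are illustrative):

    import torch

    def divergence_pseudo_labels(features, anchors, top_n=2):
        """Label each sample with the names of its top_n closest anchors in feature space."""
        names = list(anchors.keys())
        anchor_matrix = torch.stack([anchors[n] for n in names])   # (num_anchors, d)
        distances = torch.cdist(features, anchor_matrix)           # feature-space distances
        closest = distances.argsort(dim=1)[:, :top_n]              # closer anchors work better
        return [tuple(names[j] for j in row.tolist()) for row in closest]

    feats = torch.rand(5, 4096)                                    # unlabeled sample features
    anchors = {"animal": torch.rand(4096), "plant": torch.rand(4096), "tool": torch.rand(4096)}
    print(divergence_pseudo_labels(feats, anchors))                # e.g. [('tool', 'plant'), ...]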

Pseudo Label Datasets

  • The closer your training data is to your target, the better the results.

Siamese Networks

  • A two-branch neural network with shared weights; the loss is computed between the two streams rather than on a single output (see the sketch below).
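
A minimal Siamese sketch (input size, embedding size, and layers are assumptions): the same encoder processes both inputs, and the two embeddings are compared by their distance.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseNet(nn.Module):
        """Two identical, weight-shared encoders whose outputs are compared."""
        def __init__(self, embed_dim: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Flatten(),
                nn.Linear(28 * 28, 128), nn.ReLU(),
                nn.Linear(128, embed_dim),
            )

        def forward(self, x1, x2):
            z1, z2 = self.encoder(x1), self.encoder(x2)   # same weights for both streams
            return F.pairwise_distance(z1, z2)            # small distance = similar pair

    net = SiameseNet()
    print(net(torch.rand(4, 1, 28, 28), torch.rand(4, 1, 28, 28)))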

Contrastive Loss

  • A loss function used in Siamese networks to perform contrastive learning.
  • It operates on the similarity between two transformed data points.

Explaining Contrastive Loss Function

  • Aims to minimise intra-class distance and maximise inter-class distance

Why is a margin m needed in contrastive loss?

  • The margin enters the objective only for dissimilar pairs, pushing them at least a distance m apart in the embedding space (see the sketch below).
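
A minimal sketch of the standard contrastive loss, L = y * d^2 + (1 - y) * max(0, m - d)^2, where y = 1 for similar pairs and m is the margin (the margin value below is an assumption):

    import torch

    def contrastive_loss(distance, is_similar, margin: float = 1.0):
        """Pull similar pairs together; push dissimilar pairs at least `margin` apart."""
        positive = is_similar * distance.pow(2)
        negative = (1 - is_similar) * torch.clamp(margin - distance, min=0).pow(2)
        return (positive + negative).mean()

    # Toy usage: distances from a Siamese network for one similar and one dissimilar pair.
    print(contrastive_loss(torch.tensor([0.2, 0.4]), torch.tensor([1.0, 0.0])))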
