26 - Encoder Architectures and Sentence Embeddings
Questions and Answers

Why is it problematic to tie the size of the WordPiece embedding matrix to the size of the hidden layer in BERT?

WordPiece embeddings are meant to be context-independent, while the hidden layers learn context-dependent representations; tying the two sizes means that enlarging the hidden layer also inflates the vocabulary-sized embedding matrix, wasting parameters.

What is the solution proposed in ALBERT to address the issue of tying the size of the WordPiece embedding matrix to the size of the hidden layer?

Factorize the embedding: learn token embeddings in a lower-dimensional, separately chosen embedding space, then project them up to the hidden size.
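
A minimal PyTorch sketch of this factorization (the sizes are illustrative, not taken from the lesson): instead of one V×H embedding matrix, tokens are embedded in a small E-dimensional space and projected up to the hidden size H.

```python
import torch
import torch.nn as nn

V, E, H = 30_000, 128, 768  # vocab size, embedding size, hidden size (illustrative)

# BERT-style tied embedding: V * H parameters
tied = nn.Embedding(V, H)                      # 30k * 768 ≈ 23.0M parameters

# ALBERT-style factorized embedding: V * E + E * H parameters
factorized = nn.Sequential(
    nn.Embedding(V, E),                        # 30k * 128 ≈ 3.8M parameters
    nn.Linear(E, H, bias=False),               # 128 * 768 ≈ 0.1M parameters
)

token_ids = torch.randint(0, V, (2, 16))       # (batch, sequence length)
hidden_in = factorized(token_ids)              # shape: (2, 16, H)
print(hidden_in.shape)
```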

What technique does ELECTRA use to replace Masked Language Modeling?

Replaced Token Detection (RTD)

How does ALBERT achieve parameter reduction compared to BERT?

Matrix factorization decouples the embedding size from the hidden size, and cross-layer parameter sharing reuses the same parameters across all Transformer layers.
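
Cross-layer parameter sharing can be sketched as reusing a single Transformer layer at every depth; a hypothetical PyTorch illustration, not the lesson's code:

```python
import torch
import torch.nn as nn

H, num_layers = 768, 12

# One encoder layer whose parameters are reused at every depth (ALBERT-style sharing),
# instead of instantiating num_layers independent copies (BERT-style).
shared_layer = nn.TransformerEncoderLayer(d_model=H, nhead=12, batch_first=True)

def albert_style_encoder(x: torch.Tensor) -> torch.Tensor:
    for _ in range(num_layers):
        x = shared_layer(x)   # the same weights are applied repeatedly
    return x

x = torch.randn(2, 16, H)
print(albert_style_encoder(x).shape)  # torch.Size([2, 16, 768])
```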

What is the main goal of distillation in DistilBERT?

To compress the model: a smaller student network is trained to reproduce the behaviour of the larger BERT teacher.

What is the key difference between ALBERT and BERT in terms of the training objective?

ALBERT replaces Next Sentence Prediction (NSP) with Sentence Order Prediction (SOP).
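
A sketch of how SOP training examples could be built from two consecutive segments of the same document (the function name and texts are hypothetical):

```python
import random

def make_sop_example(segment_a: str, segment_b: str):
    """Build one Sentence Order Prediction example from two consecutive segments.

    Label 1: the segments appear in their original order (positive).
    Label 0: the same two segments with their order swapped (negative).
    """
    if random.random() < 0.5:
        return (segment_a, segment_b), 1
    return (segment_b, segment_a), 0

example, label = make_sop_example(
    "ALBERT shares parameters across layers.",
    "This makes the model much smaller than BERT.",
)
print(example, label)
```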

What are the two unsupervised pre-training tasks used in BERT?

Masked Language Modelling (MLM) and Next Sentence Prediction (NSP).
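
The MLM corruption step can be sketched as follows; the 80/10/10 replacement split follows the original BERT recipe, and the token ids are illustrative:

```python
import random

MASK_ID = 103  # id of [MASK] in the standard BERT WordPiece vocabulary

def mask_tokens(token_ids, vocab_size, mask_prob=0.15):
    """Return corrupted input ids and labels for Masked Language Modelling."""
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels.append(tok)          # predict the original token at this position
            r = random.random()
            if r < 0.8:
                inputs[i] = MASK_ID     # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the original token unchanged
        else:
            labels.append(-100)         # ignored by the loss
    return inputs, labels

print(mask_tokens([2023, 2003, 1037, 3231], vocab_size=30_522))
```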

What is the purpose of the [CLS] token in BERT?

Its final hidden state serves as the aggregate sequence representation used for classification tasks.
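
A minimal sketch of how the [CLS] position's final hidden state feeds a classification head (shapes and names are illustrative):

```python
import torch
import torch.nn as nn

H, num_classes = 768, 2
classifier = nn.Linear(H, num_classes)

# Pretend these are BERT's final-layer hidden states: (batch, seq_len, H).
hidden_states = torch.randn(4, 128, H)

cls_vector = hidden_states[:, 0]        # position 0 holds the [CLS] token
logits = classifier(cls_vector)         # (batch, num_classes)
probs = logits.softmax(dim=-1)
print(probs.shape)                      # torch.Size([4, 2])
```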

How does BERT handle the issue of bidirectional conditioning being non-trivial?

By using Masked Language Modeling (MLM): tokens are masked at random and predicted from both left and right context, so the model never directly sees the token it has to predict.

What are the two types of special tokens used in BERT?

[CLS], prepended for classification tasks, and [SEP], used to separate sentence pairs.

Which embeddings are combined to form BERT's input representation?

Token embeddings, segment embeddings, and position embeddings, summed element-wise.
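
The input representation can be sketched as the element-wise sum of three lookup tables (sizes are illustrative):

```python
import torch
import torch.nn as nn

V, H, max_len = 30_522, 768, 512
tok_emb = nn.Embedding(V, H)        # WordPiece token embeddings
seg_emb = nn.Embedding(2, H)        # segment A / segment B embeddings
pos_emb = nn.Embedding(max_len, H)  # learned position embeddings

token_ids   = torch.randint(0, V, (1, 8))
segment_ids = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 1]])
positions   = torch.arange(8).unsqueeze(0)

x = tok_emb(token_ids) + seg_emb(segment_ids) + pos_emb(positions)
print(x.shape)  # torch.Size([1, 8, 768])
```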

How did BERT impact the field of Natural Language Processing (NLP)?

It advanced the state of the art for 11 NLP tasks.

What is the final training objective of DistilBERT according to the text?

DistilBERT is trained with a distillation loss that forces it to match BERT's output distribution, combined with the original MLM loss and a cosine embedding loss that aligns the student's hidden state vectors with the teacher's.
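
A hedged sketch of how the three losses could be combined; the temperature and the equal weighting are illustrative, not DistilBERT's exact settings:

```python
import torch
import torch.nn.functional as F

def distilbert_style_loss(student_logits, teacher_logits, mlm_labels,
                          student_hidden, teacher_hidden, T=2.0):
    # 1) Distillation loss: match the teacher's softened output distribution.
    distill = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T

    # 2) Standard MLM loss on the masked positions (-100 labels are ignored).
    mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                          mlm_labels.view(-1), ignore_index=-100)

    # 3) Cosine embedding loss aligning student and teacher hidden states
    #    (assumes both models use the same hidden size).
    target = torch.ones(student_hidden.size(0) * student_hidden.size(1))
    cosine = F.cosine_embedding_loss(
        student_hidden.view(-1, student_hidden.size(-1)),
        teacher_hidden.view(-1, teacher_hidden.size(-1)),
        target,
    )

    return distill + mlm + cosine
```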

How does ELECTRA's training objective differ from masked token prediction?

Instead of predicting masked tokens, ELECTRA uses Replaced Token Detection: a small generator corrupts some tokens and a discriminator classifies every token as original or replaced, a GAN-inspired setup that turns pre-training into a classification task with a learning signal at every position.
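
A sketch of the Replaced Token Detection objective: the discriminator emits one logit per position and is trained with binary cross-entropy against labels marking which tokens the generator replaced (the names and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def rtd_loss(disc_logits, input_ids, original_ids):
    """disc_logits: (batch, seq_len), one logit per token position.
    A label of 1 means the generator replaced the token at that position."""
    labels = (input_ids != original_ids).float()
    return F.binary_cross_entropy_with_logits(disc_logits, labels)

disc_logits  = torch.randn(2, 6)
original_ids = torch.randint(0, 100, (2, 6))
input_ids    = original_ids.clone()
input_ids[0, 2] = 99                      # pretend the generator replaced one token
print(rtd_loss(disc_logits, input_ids, original_ids))
```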

What is the main idea behind Skip-thought vectors?

Skip-thought vectors use a GRU-based encoder-decoder: the current sentence is encoded, and two decoders predict the previous and the next sentence from that encoding.
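
A compact GRU encoder with two decoders, in the spirit of Skip-thought; the hyperparameters are illustrative and this is not the original implementation:

```python
import torch
import torch.nn as nn

class SkipThoughtSketch(nn.Module):
    """Encode the current sentence; two decoders predict the previous/next sentence."""
    def __init__(self, vocab_size=20_000, emb=300, hidden=600):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.dec_prev = nn.GRU(emb, hidden, batch_first=True)
        self.dec_next = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, cur, prev, nxt):
        _, h = self.encoder(self.embed(cur))            # h: sentence vector (1, batch, hidden)
        prev_states, _ = self.dec_prev(self.embed(prev), h)
        next_states, _ = self.dec_next(self.embed(nxt), h)
        return self.out(prev_states), self.out(next_states), h.squeeze(0)
```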

Explain the process of Sequential Denoising Autoencoders described in the text.

Sequential Denoising Autoencoders corrupt the input sentence, encode the corrupted version, and train the decoder to recover the original sentence, which makes the learned representations robust to noise.
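
A sketch of the corruption (noise) step: words are randomly dropped and adjacent words randomly swapped before encoding, while the decoder is trained to reconstruct the clean sentence (the probabilities are illustrative):

```python
import random

def corrupt(tokens, p_drop=0.1, p_swap=0.1):
    """Apply SDAE-style noise: random word deletion and adjacent-word swaps."""
    kept = [t for t in tokens if random.random() > p_drop]
    i = 0
    while i < len(kept) - 1:
        if random.random() < p_swap:
            kept[i], kept[i + 1] = kept[i + 1], kept[i]
            i += 2          # don't swap the same word twice
        else:
            i += 1
    return kept

clean = "the quick brown fox jumps over the lazy dog".split()
print(corrupt(clean))   # noisy input for the encoder; the decoder targets `clean`
```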

What is the extension of word2vec described in the text?

Paragraph Vectors (doc2vec) is an extension of word2vec that predicts a word from its neighbouring words and a learned paragraph vector unique to the paragraph.

What is the significance of treating the document id as a 'virtual word' in Paragraph Vectors?

Treating the document id as a 'virtual word' makes its vector available to every prediction within the paragraph, so it acts as a memory of the paragraph's content that supplements the local context window.
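
A minimal sketch of training Paragraph Vectors, assuming the gensim library (4.x API); each document's tag plays the role of the 'virtual word', and the tiny corpus and hyperparameters are purely illustrative:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    TaggedDocument(words="bert advanced many nlp tasks".split(), tags=["doc_0"]),
    TaggedDocument(words="paragraph vectors extend word2vec".split(), tags=["doc_1"]),
]

# dm=1 selects the PV-DM variant, which predicts a word from its context
# words plus the paragraph vector (the "virtual word").
model = Doc2Vec(corpus, vector_size=50, window=2, min_count=1, epochs=40, dm=1)

print(model.dv["doc_0"][:5])                              # learned paragraph vector
print(model.infer_vector("new unseen sentence".split())[:5])
```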
