
Understanding Large Language Models without Math
5 Questions


Created by @PopularOcarina


Questions and Answers

How many words was GPT-3 trained on?

  • 1 trillion words
  • 100 million words
  • 500 billion words (correct)
  • 10 quadrillion words

How does the performance of language models scale according to the 2020 OpenAI paper?

  • Exponentially with model size and dataset size
  • Linearly with model size and dataset size
  • As a power-law with model size, dataset size, and compute used for training (correct)
  • Logarithmically with model size and dataset size
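
The correct answer summarizes the empirical scaling laws reported by OpenAI in 2020 (Kaplan et al.). As a minimal sketch of what "power-law" means here, assuming that paper's notation (the exponents below are its approximate fitted values, quoted for illustration only):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Power-law scaling of test loss L, in the style of Kaplan et al. (2020).
% N = model parameters, D = dataset size in tokens, C = training compute;
% N_c, D_c, C_c and the alpha exponents are empirically fitted constants.
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \quad
  L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \quad
  L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
\]
% The reported exponents are small (roughly alpha_N ~ 0.076,
% alpha_D ~ 0.095, alpha_C ~ 0.050), so each doubling of a resource
% trims a fixed fraction of the loss rather than a fixed amount.
\end{document}
```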

How many parameters did GPT-1, OpenAI's first language model, have?

  • 117 million parameters (correct)
  • 12 million parameters
  • 768 million parameters
  • 12 billion parameters

Approximately how many words does a typical human child encounter by age 10?

  • 100 million words (correct)
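
The GPT-3 figure and the child-vocabulary figure above invite a quick back-of-the-envelope comparison, using only the numbers quoted in this quiz:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Ratio of GPT-3's training text to a child's linguistic input by
% age 10, using the two figures quoted in this quiz.
\[
  \frac{500 \times 10^{9}\ \text{words (GPT-3)}}
       {100 \times 10^{6}\ \text{words (child by age 10)}} = 5000
\]
% GPT-3 therefore saw roughly five thousand times more text than a
% typical ten-year-old has heard or read.
\end{document}
```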

What is one reason given for the surprising performance of large language models like GPT-3?

  • The sheer scale of training data and model size (correct)
