This is a blog post by Jay Alammar about how GPT3, the hottest topic right now, works. I ran it through machine translation to share with everyone.
Original article:
https://jalammar.github.io/how-gpt3-works-visualizations-animations/
The tech world is abuzz with GPT3 hype. Massive language models (like GPT3) are starting to surprise us with their abilities. While not yet completely reliable for most businesses to put in front of their customers, these models are showing sparks of cleverness that are sure to accelerate the march of automation and the possibilities of intelligent computer systems. Let’s remove the aura of mystery around GPT3 and learn how it’s trained and how it works.
A trained language model generates text.
We can optionally pass it some text as input, which influences its output.
The output is generated from what the model “learned” during its training period, where it scanned vast amounts of text.
The dataset of 300 billion tokens of text is used to generate training examples for the model. For example, these are three training examples generated from the one sentence at the top.
You can see how you can slide a window across all the text and make lots of examples.
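To make the windowing concrete, here is a minimal Python sketch (not OpenAI's actual pipeline) that slides over a tokenized sentence and produces (context, next-token) training pairs:

```python
# A minimal sketch of generating training examples by sliding a window over
# tokenized text: the model sees the tokens so far and learns to predict the next one.
text_tokens = ["Second", "law", "of", "robotics", ":", "a", "robot", "must", "obey"]

examples = []
for i in range(1, len(text_tokens)):
    context = text_tokens[:i]      # everything the model gets to see
    target = text_tokens[i]        # the token it should predict next
    examples.append((context, target))

for context, target in examples[:3]:
    print(context, "->", target)
```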
These numbers are part of hundreds of matrices inside the model. Prediction is mostly a lot of matrix multiplication.
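As a toy illustration of "prediction is mostly matrix multiplication", the sketch below pushes a small made-up vector through two random weight matrices; GPT3 does the same kind of thing with hundreds of far larger matrices holding its 175B parameters:

```python
import numpy as np

# Toy shapes only: an input vector multiplied through two weight matrices,
# with a simple nonlinearity in between. Not GPT-3's real dimensions.
rng = np.random.default_rng(0)

x = rng.normal(size=(1, 8))        # a token represented as a small vector
W1 = rng.normal(size=(8, 16))      # learned weights (tiny stand-ins)
W2 = rng.normal(size=(16, 8))

hidden = np.maximum(x @ W1, 0)     # matrix multiply + nonlinearity
output = hidden @ W2               # another matrix multiply
print(output.shape)                # (1, 8)
```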
In my Intro to AI on YouTube, I showed a simple ML model with one parameter. A good start to unpack this 175B monstrosity.
To shed light on how these parameters are distributed and used, we’ll need to open the model and look inside.
GPT3 is 2048 tokens wide. That is its “context window”. That means it has 2048 tracks along which tokens are processed.
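A small hedged sketch of what a fixed 2048-token window implies in practice: longer inputs have to be truncated and shorter ones padded to fill the tracks (the pad id below is just a placeholder, not GPT3's real padding scheme):

```python
# Fit a token sequence to a fixed-width context window.
CONTEXT_WINDOW = 2048

def fit_to_window(token_ids, window=CONTEXT_WINDOW, pad_id=0):
    token_ids = token_ids[-window:]                  # keep the most recent tokens
    padding = [pad_id] * (window - len(token_ids))   # fill the unused tracks
    return token_ids + padding

print(len(fit_to_window(list(range(100)))))   # 2048
```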
Let’s follow the purple track. How does a system process the word “robotics” and produce “A”?
High-level steps:
1. Convert the word to a vector (list of numbers) representing the word
2. Compute prediction
3. Convert resulting vector to word

You can see a detailed explanation of everything inside the decoder in my blog post The Illustrated GPT2.
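As a rough illustration of those three steps, here is a toy sketch with made-up shapes; the single matrix in the middle stands in for the whole decoder stack and is not GPT3's real computation:

```python
import numpy as np

# Step 1: token -> vector via an embedding lookup.
# Step 2: compute a prediction (a stand-in for the decoder stack).
# Step 3: map the resulting vector back to a token.
rng = np.random.default_rng(1)
vocab = ["robotics", ":", "A", "robot", "must"]

embedding = rng.normal(size=(len(vocab), 4))   # lookup table: one row per token
W = rng.normal(size=(4, 4))                    # stand-in for the decoder layers

token_id = vocab.index("robotics")
vector = embedding[token_id]                   # step 1: word -> vector
hidden = vector @ W                            # step 2: compute prediction
logits = hidden @ embedding.T                  # step 3: vector -> scores over the vocab
print(vocab[int(np.argmax(logits))])           # most likely next token (toy output)
```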
The difference with GPT3 is the alternating dense and sparse self-attention layers.
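One plausible reading of that layout, sketched below with the attention implementations themselves omitted, is a stack of layers that simply alternates between the two types (the exact ordering is an assumption on my part):

```python
# Sketch of an alternating dense/sparse layer layout; GPT-3 has 96 layers.
N_LAYERS = 96
layer_types = ["dense" if i % 2 == 0 else "sparse" for i in range(N_LAYERS)]
print(layer_types[:6])   # ['dense', 'sparse', 'dense', 'sparse', 'dense', 'sparse']
```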
This is an X-ray of an input and response (“Okay human”) within GPT3. Notice how every token flows through the entire layer stack. We don’t care about the output of the first words. When the input is done, we start caring about the output. We feed every word back into the model.
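The feedback loop described above can be sketched as a simple generation function; `model_predict_next` is a hypothetical stand-in for a full forward pass through the layer stack:

```python
# While processing the prompt we discard the model's outputs; once the prompt
# ends, each generated token is appended to the context and fed back in.
def generate(model_predict_next, prompt_tokens, max_new_tokens=10, stop_token="<end>"):
    context = list(prompt_tokens)      # outputs during this part are ignored
    generated = []
    for _ in range(max_new_tokens):
        next_token = model_predict_next(context)   # one full pass through the stack
        if next_token == stop_token:
            break
        generated.append(next_token)
        context.append(next_token)     # feed the new token back into the model
    return generated
```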
In the React code generation example, the description would be the input prompt (in green), in addition to a couple of examples of description=>code, I believe. And the React code would be generated like the pink tokens here, token after token.
My assumption is that the priming examples and the description are appended as input, with specific tokens separating the examples from the results, and then fed into the model.
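Under that assumption, prompt assembly might look something like the sketch below; the separator string and the field labels are purely illustrative, not what GPT3 actually uses:

```python
# Join a few description=>code priming examples and the new description
# into one long prompt string, using an arbitrary '###' separator.
def build_prompt(priming_examples, new_description):
    parts = []
    for description, code in priming_examples:
        parts.append(f"description: {description}\ncode: {code}")
    parts.append(f"description: {new_description}\ncode:")
    return "\n###\n".join(parts)

prompt = build_prompt(
    [("a button that says hello", "<button>hello</button>")],
    "a red heading that says welcome",
)
print(prompt)
```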
It’s impressive that this works like this. And just wait until fine-tuning is rolled out for GPT3; the possibilities will be even more amazing.
Fine-tuning actually updates the model’s weights to make the model better at a certain task.
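To contrast fine-tuning with prompting, here is a toy gradient step in which the weight matrix itself changes; the shapes and loss are made up purely for illustration:

```python
import numpy as np

# One gradient-descent step on a squared-error loss: the pretrained weights
# themselves are updated, unlike prompting, which leaves them fixed.
rng = np.random.default_rng(2)
W = rng.normal(size=(4, 4))            # a pretrained weight matrix (stand-in)
x = rng.normal(size=(1, 4))            # one fine-tuning example
target = rng.normal(size=(1, 4))

learning_rate = 0.01
prediction = x @ W
grad = x.T @ (prediction - target)     # gradient of the squared error w.r.t. W
W = W - learning_rate * grad           # the weights are nudged toward the task
```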
Written on July 27, 2020