preface
acknowledgments
about this book
about the author
about the cover illustration
1 Understanding large language models
1.1 What is an LLM?
1.2 Applications of LLMs
1.3 Stages of building and using LLMs
1.4 Introducing the transformer architecture
1.5 Utilizing large datasets
1.6 A closer look at the GPT architecture
1.7 Building a large language model
2 Working with text data
2.1 Understanding word embeddings
2.2 Tokenizing text
2.3 Converting tokens into token IDs
2.4 Adding special context tokens
2.5 Byte pair encoding
2.6 Data sampling with a sliding window
2.7 Creating token embeddings
2.8 Encoding word positions
3 Coding attention mechanisms
3.1 The problem with modeling long sequences
3.2 Capturing data dependencies with attention mechanisms
3.3 Attending to different parts of the input with self-attention
3.3.1 A simple self-attention mechanism without trainable weights
3.3.2 Computing attention weights for all input tokens
3.4 Implementing self-attention with trainable weights
3.4.1 Computing the attention weights step by step
3.4.2 Implementing a compact self-attention Python class
3.5 Hiding future words with causal attention
3.5.1 Applying a causal attention mask
3.5.2 Masking additional attention weights with dropout
3.5.3 Implementing a compact causal attention class
3.6 Extending single-head attention to multi-head attention
3.6.1 Stacking multiple single-head attention layers
3.6.2 Implementing multi-head attention with weight splits
4 Implementing a GPT model from scratch to generate text
4.1 Coding an LLM architecture
4.2 Normalizing activations with layer normalization
4.3 Implementing a feed-forward network with GELU activations
4.4 Adding shortcut connections
4.5 Connecting attention and linear layers in a transformer block
4.6 Coding the GPT model
4.7 Generating text
5 Pretraining on unlabeled data
5.1 Evaluating generative text models
5.1.1 Using GPT to generate text
5.1.2 Calculating the text generation loss
5.1.3 Calculating the training and validation set losses
5.2 Training an LLM
5.3 Decoding strategies to control randomness
5.3.1 Temperature scaling
5.3.2 Top-k sampling
5.3.3 Modifying the text generation function
5.4 Loading and saving model weights in PyTorch
5.5 Loading pretrained weights from OpenAI
6 Fine-tuning for classification
6.1 Different categories of fine-tuning
6.2 Preparing the dataset
6.3 Creating data loaders
6.4 Initializing a model with pretrained weights
6.5 Adding a classification head
6.6 Calculating the classification loss and accuracy
6.7 Fine-tuning the model on supervised data
6.8 Using the LLM as a spam classifier
7 Fine-tuning to follow instructions
7.1 Introduction to instruction fine-tuning
7.2 Preparing a dataset for supervised instruction fine-tuning
7.3 Organizing data into training batches
7.4 Creating data loaders for an instruction dataset
7.5 Loading a pretrained LLM
7.6 Fine-tuning the LLM on instruction data
7.7 Extracting and saving responses
7.8 Evaluating the fine-tuned LLM
7.9 Conclusions
7.9.1 What’s next?
7.9.2 Staying up to date in a fast-moving field
7.9.3 Final words
appendix A Introduction to PyTorch
A.1 What is PyTorch?
A.1.1 The three core components of PyTorch
A.1.2 Defining deep learning
A.1.3 Installing PyTorch
A.2 Understanding tensors
A.2.1 Scalars, vectors, matrices, and tensors
A.2.2 Tensor data types
A.2.3 Common PyTorch tensor operations
A.3 Seeing models as computation graphs
A.4 Automatic differentiation made easy
A.5 Implementing multilayer neural networks
A.6 Setting up efficient data loaders
A.7 A typical training loop
A.8 Saving and loading models
A.9 Optimizing training performance with GPUs
A.9.1 PyTorch computations on GPU devices
A.9.2 Single-GPU training
A.9.3 Training with multiple GPUs
appendix B References and further reading
appendix C Exercise solutions
appendix D Adding bells and whistles to the training loop
D.1 Learning rate warmup
D.2 Cosine decay
D.3 Gradient clipping
D.4 The modified training function
appendix E Parameter-efficient fine-tuning with LoRA
E.1 Introduction to LoRA
E.2 Preparing the dataset
E.3 Initializing the model
E.4 Parameter-efficient fine-tuning with LoRA
index