Mastering Large Language Models and Transformers: A Flashcard Guide

LLMs and Transformers Flashcards

What does LLM stand for?   drill llm_transformer

Front

What does LLM stand for in the context of AI?

Back

Large Language Model

Key components of a transformer architecture   drill llm_transformer

Component 1

Encoder

Component 2

Decoder

Component 3

Self-attention mechanism

Component 4

Feed-forward neural networks
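
To make these components concrete, here is a minimal sketch using PyTorch's built-in transformer modules, assuming the torch package is installed; the dimensions are arbitrary toy values, not tied to any particular model.

```python
import torch
import torch.nn as nn

# One encoder layer bundles two of the components above:
# multi-head self-attention followed by a feed-forward network.
enc_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=128, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=2)

# The full encoder-decoder architecture, as in the original transformer design.
model = nn.Transformer(d_model=64, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)

src = torch.randn(1, 10, 64)  # a batch of one source sequence: 10 token embeddings of size 64
tgt = torch.randn(1, 7, 64)   # a target sequence of 7 token embeddings
out = model(src, tgt)
print(out.shape)              # torch.Size([1, 7, 64])
```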

Purpose of self-attention in transformers   drill llm_transformer

Front

What is the main purpose of the self-attention mechanism in transformers?

Back

To allow the model to weigh the importance of different words in the input sequence when processing each word, capturing contextual relationships.
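
A minimal self-attention sketch in plain NumPy makes the idea concrete; this is a toy illustration with random vectors standing in for learned query/key/value projections, not any specific model's implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each token's value vector by how relevant it is to the token being processed."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                              # pairwise relevance between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)               # softmax over each row
    return weights @ V                                           # context-aware representation per token

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                  # 3 tokens, embedding dimension 4
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q, K, V all come from the same sequence
print(out.shape)                             # (3, 4): each token now mixes information from the others
```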

Advantage of transformers over RNNs   drill llm_transformer

Front

What is a key advantage of transformer models over Recurrent Neural Networks (RNNs)?

Back

Transformers process all input tokens in parallel rather than sequentially, which makes training on large datasets more efficient and allows them to capture long-range dependencies more effectively.
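
A small sketch of the contrast, assuming PyTorch is available (module sizes are arbitrary): the RNN advances one timestep at a time, while self-attention handles every position in a single batched operation.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 100, 64)  # one sequence of 100 token embeddings of size 64

# RNN: the hidden state at step t depends on step t-1, so time steps are processed sequentially.
rnn = nn.RNN(input_size=64, hidden_size=64, batch_first=True)
rnn_out, _ = rnn(x)

# Self-attention: every position attends to every other position in one matrix operation,
# so all 100 steps are computed in parallel and distant tokens interact directly.
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
attn_out, _ = attn(x, x, x)

print(rnn_out.shape, attn_out.shape)  # both torch.Size([1, 100, 64])
```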

BERT full name   drill llm_transformer

Front

What does BERT stand for?

Back

Bidirectional Encoder Representations from Transformers

GPT full name   drill llm_transformer

Front

What does GPT stand for?

Back

Generative Pre-trained Transformer

Tokens in LLMs   drill llm_transformer

Front

What are tokens in the context of LLMs?

Back

Tokens are the basic units of text that an LLM processes. They can be words, subwords, or individual characters, depending on the tokenization method used.
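
For a hands-on look at subword tokenization, here is a short example assuming the tiktoken package is installed; the vocabulary shown is just one of many, and other models use different tokenizers.

```python
import tiktoken

# "cl100k_base" is a byte-pair-encoding vocabulary used by some OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("Transformers tokenize text into subwords.")
print(tokens)                             # integer IDs, one per token
print([enc.decode([t]) for t in tokens])  # the subword string each ID maps to
```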

Purpose of fine-tuning   drill llm_transformer

Front

What is the purpose of fine-tuning an LLM?

Back

Fine-tuning adapts a pre-trained model to a specific task or domain by further training it on a smaller, task-specific dataset.
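
A minimal sketch of the idea, assuming PyTorch is installed: a hypothetical stand-in "pre-trained" backbone is frozen and only a small task-specific head is trained on a tiny synthetic dataset. Real fine-tuning would load actual pre-trained weights and real labeled data (and might update the backbone as well), but the training loop has the same shape.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained backbone; a real LLM would be loaded from a checkpoint.
backbone = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(), nn.Linear(64 * 8, 64))
head = nn.Linear(64, 2)  # new task-specific classification head

# Freeze the "pre-trained" weights; only the head is updated in this sketch.
for p in backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Tiny synthetic "task-specific dataset": 16 sequences of 8 token IDs with binary labels.
x = torch.randint(0, 1000, (16, 8))
y = torch.randint(0, 2, (16,))

for step in range(5):
    logits = head(backbone(x))
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.4f}")
```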

Prompt engineering   drill llm_transformer

Front

What is prompt engineering in the context of LLMs?

Back

Prompt engineering is the practice of carefully designing and refining input prompts to elicit desired responses from an LLM, optimizing its performance for specific tasks.
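
As a small, purely illustrative example (the wording and few-shot examples below are hypothetical, not tied to any specific model or API), compare a bare prompt with a more carefully engineered one that adds a role, few-shot examples, and an output-format constraint.

```python
review = "The battery died after two hours."

naive_prompt = f"Is this review positive? {review}"

engineered_prompt = (
    "You are a sentiment classifier. Answer with exactly one word: Positive or Negative.\n\n"
    "Review: I love how light this laptop is.\n"
    "Sentiment: Positive\n\n"
    "Review: The screen cracked within a week.\n"
    "Sentiment: Negative\n\n"
    f"Review: {review}\n"
    "Sentiment:"
)

print(engineered_prompt)  # role instruction + few-shot examples + constrained output format
```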

Hallucination in LLMs   drill llm_transformer

Front

What is hallucination in the context of LLMs?

Back

Hallucination is the phenomenon where an LLM generates plausible-sounding information that is false, nonsensical, or unsupported by fact or its training data.

Author: Jason Walsh

j@wal.sh

Last Updated: 2024-10-30 16:43:54