Set 1 · Question 1 · Chapter 1
What target does self-supervised LLM pretraining usually optimize?
- Next-token prediction on unlabeled text
- Human preference rankings only
- Image segmentation masks
- SQL query execution accuracy
Correct: Next-token prediction on unlabeled text
LLM pretraining typically optimizes next-token prediction on raw text: the model learns to predict each token from the preceding ones, so the training signal comes from the data itself rather than from manual labels.
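A minimal sketch of how this self-supervision works, assuming a toy sequence of made-up token IDs: the inputs and targets are the same sequence shifted by one position, so no human annotation is required.

```python
# Hypothetical token IDs from a piece of unlabeled text.
tokens = [5, 12, 7, 3, 9]

# Next-token prediction: inputs are all tokens but the last;
# targets are the same sequence shifted left by one.
inputs = tokens[:-1]   # [5, 12, 7, 3]
targets = tokens[1:]   # [12, 7, 3, 9]

# At each position the model is trained to predict targets[i]
# given the context inputs[:i+1] -- the text supervises itself.
for i, (x, y) in enumerate(zip(inputs, targets)):
    print(f"context {inputs[:i+1]} -> predict {y}")
```

The shifted-by-one pairing is the entire labeling scheme, which is why pretraining can scale to arbitrarily large unlabeled corpora.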