
Parameters in a Large Language Model (LLM) are internal numerical values (weights and biases) learned by the model during training. They determine the model’s behavior, shaping how it represents language and generates human-like text.

When we say a model uses x “parameters per token,” we mean that for each token it processes, the model applies a certain number of these parameters to predict the next token. In general, the more parameters a model has, the more complex and nuanced its representation of language can be, allowing it to make more accurate predictions from the tokens it processes.
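To make the notion of a parameter concrete, here is a minimal sketch (not part of the original entry) that counts the learned weights of a toy PyTorch model; the layer sizes are arbitrary illustrations and do not correspond to any particular LLM:

```python
# Minimal sketch: counting the learned parameters of a toy model.
# The layer sizes below are made up purely for illustration.
import torch.nn as nn

model = nn.Sequential(
    nn.Embedding(num_embeddings=50_000, embedding_dim=512),  # token embedding table
    nn.Linear(512, 2048),                                     # feed-forward layer
    nn.ReLU(),
    nn.Linear(2048, 512),
)

# Every weight and bias tensor in the model is a set of learned parameters.
total_params = sum(p.numel() for p in model.parameters())
print(f"Total learned parameters: {total_params:,}")
```

Real LLMs follow the same principle, just with far more layers and much larger matrices, which is how their counts reach into the billions.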

The architecture may contain millions or billions of these parameters. A related setting called “temperature” is not learned during training but chosen at inference time; it regulates the creativity of the AI’s responses, with higher values increasing diversity and lower values producing more deterministic output. Overall, LLM parameters are essential in shaping the AI’s linguistic abilities and effectiveness.
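As an illustration of how temperature shapes next-token sampling, the sketch below (with made-up logits, not tied to any specific model) divides the model’s output scores by the temperature before applying the softmax:

```python
# Minimal sketch: temperature scaling of next-token probabilities.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])  # hypothetical next-token scores

for temperature in (0.2, 1.0, 2.0):
    probs = softmax(logits / temperature)
    print(f"T={temperature}: {np.round(probs, 3)}")

# Low temperature concentrates probability on the top token (more deterministic);
# high temperature flattens the distribution (more diverse output).
```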

LLMs and their parameter counts (B stands for billion; “miljard” in Dutch):
