A foundation model refers to a specific instance or version of an LLM, such as GPT-3, GPT-4, or Codex (which was trained on code), that has been trained on a large corpus of text or code. It ingests training data in many different formats and uses a transformer architecture to build a general-purpose model. From there, adaptations and specializations can be derived for particular tasks via prompting or fine-tuning, as sketched below.
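As a concrete illustration of the prompting route, the sketch below specializes a general foundation model to a single task without updating any weights. It assumes the OpenAI Python SDK and an API key in the environment; the model name, system prompt, and example input are all illustrative placeholders, not a prescribed setup.

```python
# Minimal sketch: specializing a foundation model via prompting,
# assuming the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The foundation model itself stays general-purpose; the system prompt
# alone adapts it to one task (here, sentiment classification) with no
# change to the model weights.
response = client.chat.completions.create(
    model="gpt-4",  # a foundation model accessed through its API
    messages=[
        {
            "role": "system",
            "content": "You are a sentiment classifier. "
                       "Answer with exactly one word: positive or negative.",
        },
        {"role": "user", "content": "The product arrived late and broken."},
    ],
)
print(response.choices[0].message.content)  # e.g. "negative"
```

Fine-tuning is the heavier alternative: instead of steering the model with a prompt at inference time, it updates the model's weights on a set of task-specific examples.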