LangChain is an open-source framework for developing applications powered by large language models (LLMs). It enables applications that connect a language model to other sources of data and let the model interact with its environment.
LangChain provides components and chains for working with language models, as well as examples and guides for common use cases. LangChain is part of a rich ecosystem of tools that integrate with and build on top of it.
LangChain’s power lies in its six key modules, listed from least to most complex (short example sketches for each follow the list):
- Model I/O: Interface with language models; in other words, the building blocks for interfacing with any language model
  - Prompts
  - Language models
  - Output parsers
- Retrieval: Interface with application-specific data (the basis of retrieval-augmented generation, RAG)
  - Document loaders
  - Document transformers
  - Text embedding models
  - Vector stores
  - Retrievers
- Chains: Construct sequences of calls.
- Agents: Let chains choose which tools to use given high-level directives
- Memory: Persist application state between runs of a chain
- Callbacks: Log and stream intermediate steps of any chain
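To make the modules concrete, here is a minimal Model I/O sketch that strings a prompt template, a chat model, and an output parser together. It assumes the `langchain-openai` and `langchain-core` packages are installed and that `OPENAI_API_KEY` is set in the environment; the model name and prompt text are just placeholders.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Prompt: a template with a single input variable.
prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)

# Language model: any chat model works here; gpt-3.5-turbo is just an example.
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Output parser: turn the chat message returned by the model into a plain string.
parser = StrOutputParser()

# Compose the three pieces with the LangChain Expression Language (LCEL) pipe.
summarize = prompt | model | parser
print(summarize.invoke({"text": "LangChain connects language models to data and tools."}))
```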
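The Retrieval module covers the typical RAG pipeline: load documents, split them, embed the chunks, store them in a vector store, and query with a retriever. A minimal sketch, assuming `langchain-community`, `langchain-text-splitters`, `langchain-openai`, and `faiss-cpu` are installed; the file name `notes.txt` and the query are hypothetical.

```python
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Document loader: read a local text file into Document objects.
docs = TextLoader("notes.txt").load()

# Document transformer: split long documents into overlapping chunks.
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Text embedding model + vector store: embed the chunks and index them with FAISS.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Retriever: fetch the chunks most relevant to a query.
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
for doc in retriever.invoke("What does the document say about pricing?"):
    print(doc.page_content)
```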
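Chains compose multiple calls into a single runnable. Below is a sketch of a two-step chain where the output of the first prompt feeds the second; the prompts and the product description are made up for illustration.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(temperature=0)
parser = StrOutputParser()

# Step 1: propose a company name for a given product.
name_chain = (
    ChatPromptTemplate.from_template("Suggest one company name for a business that makes {product}.")
    | model
    | parser
)

# Step 2: feed that name into a second prompt that writes a slogan.
slogan_chain = (
    {"company": name_chain}
    | ChatPromptTemplate.from_template("Write a one-line slogan for a company called {company}.")
    | model
    | parser
)

print(slogan_chain.invoke({"product": "reusable water bottles"}))
```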
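Agents let the model decide which tool to call and with what input. The sketch below uses the classic `initialize_agent` helper with one custom tool; this API is deprecated in recent LangChain releases (LangGraph is the recommended replacement), so treat it purely as an illustration.

```python
from langchain.agents import initialize_agent, AgentType
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

llm = ChatOpenAI(temperature=0)

# The agent reasons about the question and decides when to call word_length.
agent = initialize_agent(
    tools=[word_length],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,  # print the intermediate reasoning steps
)
agent.invoke({"input": "How many letters are in the word 'LangChain'?"})
```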
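Memory keeps state across calls so a chain can refer back to earlier turns. A minimal sketch with the classic `ConversationChain` and `ConversationBufferMemory` (also deprecated in newer releases, but still the simplest illustration):

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI

# ConversationBufferMemory stores the full chat history and injects it into each prompt.
conversation = ConversationChain(
    llm=ChatOpenAI(temperature=0),
    memory=ConversationBufferMemory(),
)

conversation.invoke({"input": "Hi, my name is Ada."})
# Because the first exchange is kept in memory, the model can answer this.
print(conversation.invoke({"input": "What is my name?"})["response"])
```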
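Callbacks hook into the lifecycle of a run, which is how logging and token-by-token streaming are implemented. A sketch that streams tokens to stdout as they arrive, assuming the same OpenAI setup as above:

```python
from langchain_openai import ChatOpenAI
from langchain_core.callbacks import StreamingStdOutCallbackHandler

# The callback handler prints each new token as the model generates it.
model = ChatOpenAI(
    temperature=0,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
model.invoke("Explain in two sentences what a callback is.")
```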
Another way to segment LangChain is by its six core components:
- Schema: Defines the structure of the data flowing through the application (see the structured-output sketch after this list).
- Models: Machine learning models for various tasks.
- Prompts: Templates for the instructions and queries sent to a model.
- Indexes: Data structures for efficient information retrieval.
- Memory: Responsible for storing and recalling information.
- Chains: Sequences of actions or processes.
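As a taste of the Schema component, the sketch below defines a Pydantic model and uses `PydanticOutputParser` so the model’s answer comes back as structured data rather than free text. It assumes a recent `langchain-core` release that supports Pydantic v2 models; the `Movie` class and example query are invented for illustration.

```python
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

# Schema: the shape of the data we want back from the model.
class Movie(BaseModel):
    title: str = Field(description="the movie title")
    year: int = Field(description="the year the movie was released")

parser = PydanticOutputParser(pydantic_object=Movie)

# The format instructions tell the model how to produce parseable JSON.
prompt = ChatPromptTemplate.from_template(
    "Extract the movie information.\n{format_instructions}\n{query}"
).partial(format_instructions=parser.get_format_instructions())

chain = prompt | ChatOpenAI(temperature=0) | parser
movie = chain.invoke({"query": "The Matrix was released in 1999."})
print(movie.title, movie.year)
```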
See LangChain docs for more info.
In this LangChain Crash Course you will learn how to build applications powered by large language models.