There has been a lot of talk lately about the emergent behaviors observed when training LLMs on internet-scale data. One of the behaviors that makes LLMs especially useful is “in-context learning”.
Imagine that you are building an app that filters news stories based on a user’s preferences: the user decides that they would only like to see positive news stories, and they are only interested in tech stories. Building this app would typically require at least two ML models: one that classifies sentiment (positive/negative) and another that categorises stories (tech, finance, sports, etc.). You would need to train these models on large amounts of data and maintain them over time to ensure they remain accurate.
In-context learning refers to the ability of LLMs to learn what needs to be done from just a few examples, without any additional training. By showing the model what you are trying to achieve in the input itself, it can identify the user’s intent and produce a suitable output:
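To make this concrete, here is a minimal sketch of a few-shot prompt for the news-filtering task above. The example headlines and labels are made up for illustration; the point is that the examples “teach” the model the desired behavior inside the prompt, with no weights being updated:

```python
def build_prompt(story: str) -> str:
    """Build a few-shot classification prompt for an LLM.

    The labelled examples below are illustrative; in practice you
    would pick examples representative of your real data.
    """
    examples = [
        ("New chip delivers record AI performance", "tech / positive"),
        ("Stock market tumbles on rate fears", "finance / negative"),
        ("Local team wins championship in overtime", "sports / positive"),
    ]
    lines = ["Classify each headline by topic and sentiment.", ""]
    for headline, label in examples:
        lines.append(f"Headline: {headline}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The unlabelled query goes last; the model completes the label.
    lines.append(f"Headline: {story}")
    lines.append("Label:")
    return "\n".join(lines)


prompt = build_prompt("Startup raises funding for green energy tech")
```

Sending this prompt to any instruction-following LLM would typically yield a completion like “tech / positive”, replacing both of the purpose-trained models described above.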
This is described well in Stanford AI Lab’s blog post here: http://ai.stanford.edu/blog/understanding-incontext/.
In short, in-context learning is about steering AI models using prompts rather than training them from scratch. Instead of feeding vast amounts of data and adjusting weights (as in traditional training), in-context learning focuses on providing the model with a relevant context or prompt to guide its outputs.
Context is a guide:
- Statelessness: ML models are stateless, meaning they don’t remember past interactions; every prediction starts from a fresh slate. By providing context, we give the model a “memory” of sorts, allowing it to reference past interactions and produce more coherent outputs. Memory is a whole different topic that warrants exploring further.
- Relevance: Context ensures that the model’s responses are not just accurate but also relevant to the current situation or conversation.
- Guidance: A well-crafted context acts as a guiding light, ensuring the model stays on track and doesn’t veer into the realm of hallucinations.
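The statelessness point above can be sketched in code. Since the model itself retains nothing between calls, the caller replays the prior turns in every request; the format below (User/Assistant turns) is just one illustrative convention, not a required API:

```python
from typing import List, Tuple


def build_context(history: List[Tuple[str, str]], new_message: str) -> str:
    """Replay past (user, assistant) turns so a stateless model can
    "remember" the conversation. The turn format here is illustrative.
    """
    parts = []
    for user_msg, assistant_msg in history:
        parts.append(f"User: {user_msg}")
        parts.append(f"Assistant: {assistant_msg}")
    # Append the new message and leave the assistant turn open for
    # the model to complete.
    parts.append(f"User: {new_message}")
    parts.append("Assistant:")
    return "\n".join(parts)
```

Each call rebuilds the full context from scratch, which is why long conversations eventually run up against the model’s context-window limit.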