Directing LLMs without training

Zahid Parvez
3 min read · Sep 10, 2023

There has been a lot of talk lately about the emergent behaviors observed when LLMs are trained on internet-scale data. One of the behaviors that makes LLMs very useful is “in-context learning”.

Imagine you are building an app that filters news stories based on the user’s preferences. The user decides they would only like to see positive news stories, and they are only interested in tech stories. Building this app would typically require at least two ML models: one that classifies sentiment (positive/negative) and another that categorises stories (tech, finance, sports, etc.). You would need to train these models on large amounts of data and maintain them over time to ensure they remain accurate.

In-context learning refers to the ability of LLMs to learn what needs to be done from just a few examples, without any additional training. When you show the model what you are trying to achieve in the input, it can identify your intent and produce a suitable output:

Demonstration of in-context learning. © Stanford AI Lab

This is described well in Stanford AI Lab’s blog post here: http://ai.stanford.edu/blog/understanding-incontext/.
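To make this concrete, here is a minimal sketch of a few-shot prompt for the news-filtering app described above. The example headlines, labels, and prompt format are illustrative assumptions rather than anything from a specific model’s documentation; the point is that the labelled demonstrations in the prompt, not any weight updates, tell the model what to do.

```python
# A minimal sketch of in-context learning for the news-filtering example.
# The few-shot examples below are hypothetical placeholders.

FEW_SHOT_EXAMPLES = [
    ("New chip doubles smartphone battery life", "tech", "positive"),
    ("Stock market tumbles amid recession fears", "finance", "negative"),
    ("Local team wins championship after 20-year drought", "sports", "positive"),
]

def build_prompt(headline: str) -> str:
    """Build a few-shot prompt: labelled examples first, then the new case."""
    lines = ["Classify each headline by topic and sentiment.", ""]
    for text, topic, sentiment in FEW_SHOT_EXAMPLES:
        lines.append(f"Headline: {text}")
        lines.append(f"Topic: {topic}")
        lines.append(f"Sentiment: {sentiment}")
        lines.append("")
    # The model infers the task from the pattern and completes the labels.
    lines.append(f"Headline: {headline}")
    lines.append("Topic:")
    return "\n".join(lines)

print(build_prompt("Startup unveils open-source language model"))
```

Sent to a general-purpose LLM, a prompt like this will typically be completed with “tech” and “Sentiment: positive”, even though the model was never trained on this classification task.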

In short, in-context learning is about steering AI models with prompts rather than training them from scratch. Instead of feeding the model vast amounts of data and adjusting its weights (as in traditional training), you simply demonstrate the task in the input and let the model follow the pattern.
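Putting this together, the news-filtering app from earlier collapses into one prompt-driven classifier. This is a sketch under assumptions: `llm_complete` is a hypothetical placeholder for whatever LLM API you use, and it reuses the `build_prompt` helper from the previous snippet.

```python
def llm_complete(prompt: str) -> str:
    """Placeholder: swap in a real LLM call (a hosted API, a local model, etc.)."""
    raise NotImplementedError("connect this to the LLM API of your choice")

def filter_stories(headlines: list[str]) -> list[str]:
    """Keep only positive tech stories, matching the user's preferences."""
    kept = []
    for headline in headlines:
        completion = llm_complete(build_prompt(headline))
        # Expected completion shape: " tech\nSentiment: positive"
        topic, _, sentiment = completion.partition("Sentiment:")
        if "tech" in topic.lower() and "positive" in sentiment.lower():
            kept.append(headline)
    return kept
```

Notice what is missing: no labelled training set and no retraining when the user adds a new topic. You only change the examples in the prompt.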


I am an analyst with a passion for data, software, and integration. In my free time, I also like to dabble in design, photography, and philosophy.