Speak to me: How Many Words Can a Model Read?
LLMs have shown in recent months that they are proficient at a wide variety of tasks, all through a single mode of interaction: prompting.
Recently there has been a rush to extend the context window of language models. But how does a longer context actually affect a model?
This article is divided into sections, each answering one of these questions:
- What is a prompt, and how do you build a good one?
- What is the context window? How long can it be? What limits the length of a model's input sequence? Why does this matter?
- How can we overcome these limitations?
- Do models actually use the long context window?
Simply put, a prompt is how one interacts with a large language model (LLM). We interact with an LLM by providing instructions in text form. This textual prompt contains the information the model needs to produce a response: it can include a question, a task description, supporting content, and much more. Essentially, through the prompt we tell the model what our intent is and what kind of response we expect.
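To make this concrete, here is a minimal sketch of how a prompt is often assembled from those separate pieces (an instruction, supporting content, and a question) before being sent to a model. The component names and the template are illustrative assumptions, not part of any specific API.

```python
def build_prompt(instruction: str, context: str, question: str) -> str:
    """Combine the pieces a model needs into a single textual prompt.

    This template is an illustrative convention; real applications
    use many different layouts for the same components.
    """
    return (
        f"{instruction}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        f"Answer:"
    )

prompt = build_prompt(
    instruction="Answer the question using only the context below.",
    context="The context window is the maximum number of tokens "
            "a model can read at once.",
    question="What does the context window limit?",
)
print(prompt)
```

The resulting string is what the model actually "reads": every component, including the supporting content, counts toward the input length, which is why the size of the context window matters.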