In this article, we explore the concept of prompt engineering and provide an introduction on how to create effective prompts.
AI is revolutionising the way we do business, and prompt engineering is at the forefront of this development. Prompt engineering is a crucial mechanism for leveraging large language models, or LLMs (whether GPT-3 from OpenAI, PaLM or LaMDA from Google, or Galactica or OPT from Meta), to push humans up the cognitive hierarchy and make language work for us.
What is a ‘prompt’?
A ‘prompt’ is a piece of text used as a starting point for generating text with an LLM. A prompt can be anything from a single word to multiple paragraphs, and it serves as the ‘input’ from which the model generates its ‘output’, the text response we receive.
What is ‘prompt engineering’?
Prompt engineering is the art of curating a successful prompt. What counts as a successful prompt depends on the context and user’s intention, but a prompt should generally be unambiguous, direct, and relevant.
It is fundamental to understand that when prompting an LLM, you are, in some way, communicating with it. To communicate successfully with someone (or here, something), both parties must understand the context of the interaction and the intention behind what is said; the same applies here.
What is the context of this interaction? Well, presumably, you are a sentient being who directly interacts with the world and has a vast understanding of language, meaning, and social cues. You are communicating with an LLM, which is not a sentient being interacting with the world, but an algorithm with incredible predictive abilities. GPT-3, for instance, has ‘read’ hundreds of billions of words, and is able to look at a sequence of words (the prompt) and assign a probability to what should follow, given its training data. The output emerges from this prediction.
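To make the idea of ‘assigning a probability to what should follow’ concrete, here is a toy sketch: a bigram model that counts which word follows which in a tiny corpus and turns those counts into probabilities. This is, of course, nothing like GPT-3’s actual architecture; the corpus and function names are mine, purely for illustration.

```python
from collections import Counter, defaultdict

# A tiny hand-made corpus, split into words.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows each word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word_probs(word):
    """Turn the follow-counts for `word` into probabilities."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("sat"))  # 'on' always follows 'sat' in this corpus
print(next_word_probs("the"))  # several candidates, with different probabilities
```

An LLM does something analogous at vastly greater scale, over sequences rather than single words: given the prompt so far, it predicts a probability for each possible next token.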
From this we can deduce that things which are ‘implicit’ to us as humans won’t be to the LLM, which is often why people receive unsatisfactory outputs from the model. Intuitive social, contextual, and intentional cues must be made explicit to the model, hence my three main requirements: unambiguous, direct, and relevant.
It’s worth considering what type of information is available on the internet, and what biases and patterns emerge from it, to better understand what one can expect from the model: both its strengths and its limitations.
Unambiguous: If there is more than one way in which your prompt could be interpreted, it is not an optimal prompt!
“Make this paragraph better”
This could mean many things. ‘Better’ how? Longer? Shorter? Less boring? Clearer?
“Improve the paragraph above by correcting all grammatical errors”
This leaves little to no room for misinterpretation: I have clarified that my intention is to improve the paragraph and that my method of choice is to correct the grammar.
Clarifying jargon and acronyms falls under this category as well; it’s generally better not to assume the LLM knows what “gg” means!
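The habit of spelling out intention, method, and constraints can be captured in code. The sketch below is a hypothetical helper of my own devising, not part of any library: it assembles a prompt from explicit pieces rather than leaving them implicit.

```python
def build_prompt(task, method=None, constraints=()):
    """Assemble an unambiguous prompt by stating the task, the method,
    and any constraints explicitly, instead of leaving them implied."""
    parts = [task]
    if method:
        parts.append(f"Do this by {method}.")
    parts.extend(constraints)
    return " ".join(parts)

# Ambiguous: leaves 'better' entirely open to interpretation.
vague = build_prompt("Make this paragraph better.")

# Unambiguous: intention and method are spelled out.
precise = build_prompt(
    "Improve the paragraph above.",
    method="correcting all grammatical errors",
    constraints=["Do not change the paragraph's meaning."],
)
print(precise)
```

The point is not the helper itself but the discipline it encodes: every interpretive gap you close in the prompt is one the model no longer has to guess at.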
Direct: Your prompt should have a clear direction and focus; the less broad and general, the better.
“What can you tell me about being environmentally-friendly?”
Well, what would you like to know? It’s unclear what I’m trying to get at or the angle I’m coming from. This is not a bad prompt if you want a general, surface-level answer, but it won’t help with any specific guidance or information.
“Tell me 5 ways in which I can be more environmentally-friendly every day as someone who lives in a city.”
Here, I’ve clarified why I want to know about being ‘environmentally-friendly’, given the model clear boundaries on how much information I want, and offered a little context about where I live just to direct it further.
Relevant: All context and text in your prompt should be directly relevant to the task/desired output.
“I have 15 friends and am thinking about what birthday present to buy for one friend. What are some good gift ideas?”
The mention of 15 friends provides context, but is in no way relevant or helpful to the model in suggesting a good present. This additional information may confuse the model and redirect its focus onto that detail rather than the gift suggestion.
“My friend enjoys baking cakes, playing the piano and reading; her favourite book is “The Hitchhiker’s Guide to the Galaxy”. What is a good birthday present for her?”
All context here is relevant to buying a good birthday present, as it paints a picture of what my friend might like. The additional mention of a favourite book is helpful as it narrows down possible genres of literature, whilst remaining concise.
Another great way to get a desired output is to ask the model to ‘act as’ a certain character, further minimising ambiguity. If I were to ask it to explain quantum mechanics ‘simply’ to me, I couldn’t assume a universal agreement on what ‘simple’ is. However, if I ask the model to:
‘act as a primary school teacher and explain quantum mechanics to an 11 year-old’,
I’m narrowing the scope of misunderstanding and increasing the probability of getting my desired response.
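Many chat-style LLM interfaces accept a list of role-tagged messages, and a ‘system’ message is a natural home for an ‘act as’ instruction. The sketch below assumes that general message shape; the `persona_prompt` helper is hypothetical and purely illustrative.

```python
def persona_prompt(persona, audience, question):
    """Build a chat-style message list where a system message sets the
    persona and audience, and the user message carries the question."""
    return [
        {"role": "system",
         "content": f"Act as {persona} and explain things to {audience}."},
        {"role": "user", "content": question},
    ]

messages = persona_prompt(
    "a primary school teacher",
    "an 11 year-old",
    "Explain quantum mechanics.",
)
print(messages)
```

Separating the persona from the question also means the same ‘act as’ framing can be reused across many different questions without rewriting it each time.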
These principles provide a strong foundation for anyone interested in prompt engineering, yet this is still only a brief introduction. There are countless other features which make up a successful prompt and impact the output of an LLM, but I’m afraid those are beyond the scope of this article.
For anyone interested in how language, structure, and convention impact prompting (and our daily discourse!), I recommend looking at Wittgenstein, the “language games” we play and “the fact that the speaking of language is part of an activity, or of a form of life”. Each language game is governed by a specific set of rules, just as the game of chess is defined by the characteristics and limitations of each of its pieces, and their respective movements. These rules dictate the proper way to construct syntactically valid sentences or phrases, and are necessary in order to communicate effectively. With effective prompt engineering, we can transform the way we do business and better our understanding of how we use language in our daily lives.