Few-Shot Prompting

Few-shot prompting is a technique for guiding the responses of large language models (LLMs). With this technique, you provide a few examples of the desired input-output pairs at the beginning of the conversation.

These examples (the "shots") set the context and steer the model's subsequent responses: they show the model what kind of output it should generate, improving the quality and relevance of its answers. This technique is especially useful when you want the model to respond in a specific format or style.

For instance, suppose you are building an AI assistant that performs sentiment analysis on movie reviews. You can supply a few example reviews, each paired with an assistant response containing its sentiment, before giving the model a new review to classify:

let systemPrompt: PromptLiteral = "You are an expert in sentiment analysis, specializing in analyzing movie reviews. Your task is to determine the sentiment of each review as positive, negative, or neutral. Only respond with the sentiment, NOTHING MORE."
 
 
let userPrompt1: PromptLiteral = "A wonderful little production. The filming technique is very unassuming- very old-time-BBC fashion and gives a comforting, and sometimes discomforting, sense of realism to the entire piece. You can truly see the seamless editing guided by the references to Williams' diary entries, not only is it well worth the watching but it is a terrificly written and performed piece. A masterful production about one of the great master's of comedy and his life."
let assistantReply1: PromptLiteral = "Positive"
    
let userPrompt2: PromptLiteral = "Phil the Alien is one of those quirky films where the humour is based around the oddness of everything rather than actual punchlines. At first it was very odd and pretty funny but as the movie progressed I didn't find the jokes or oddness funny anymore.Its a low budget film (thats never a problem in itself), there were some pretty interesting characters, but eventually I just lost interest."
let assistantReply2: PromptLiteral = "Negative"
 
let userPrompt3: PromptLiteral = "This a fantastic movie of three prisoners who become famous. One of the actors is george clooney and I'm not a fan but this roll is not bad. Another good thing about the movie is the soundtrack (The man of constant sorrow). I recommand this movie to everybody. Greetings Bart"
 
let messages: [AbstractLLM.ChatMessage] = [
    .system(systemPrompt),
    .user(userPrompt1),
    .assistant(assistantReply1),
    .user(userPrompt2),
    .assistant(assistantReply2),
    .user(userPrompt3)
]
 
let model: OpenAI.Model = .gpt_4o
 
do {
    // `client` is assumed to be an already-initialized OpenAI client.
    let result: String = try await client.complete(
        messages,
        model: model,
        as: .string
    )
    
    return result
} catch {
    print(error)
}

The resulting response from the LLM will be "Positive", and nothing else.
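Because the model is only instructed, not guaranteed, to reply with a single word, it is prudent to normalize and validate the reply before using it downstream. Here is a minimal sketch of one way to do that; the `Sentiment` enum and `parseSentiment` helper are illustrative, not part of AbstractLLM:

```swift
import Foundation

/// The three labels the system prompt asks the model to choose from.
enum Sentiment: String {
    case positive, negative, neutral
}

/// Normalizes the model's raw reply (trimming whitespace, lowercasing)
/// before matching it against the expected labels. Returns nil if the
/// reply is not one of the three labels, so callers can retry or fall back.
func parseSentiment(_ reply: String) -> Sentiment? {
    let normalized = reply
        .trimmingCharacters(in: .whitespacesAndNewlines)
        .lowercased()
    return Sentiment(rawValue: normalized)
}
```

Treating an unrecognized reply as `nil` rather than crashing makes it easy to detect when the model drifts from the format the exemplars established.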

However, while few-shot prompting can be very effective, like all prompting techniques it is imperfect. The model does not always interpret the exemplars as expected, and the quality of the output is influenced by factors such as input content, formatting, spacing, and structure. Note also that few-shot prompting requires more input tokens, which can increase the cost of your API calls, especially if the exemplars are long documents or other large blocks of text.
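To get a feel for that cost, you can roughly estimate the input tokens your exemplars add to every request. The sketch below uses the common "about four characters per token" heuristic for English text; this is only an approximation, and the `estimatedTokenCount` helper is illustrative. For exact counts, use the tokenizer that matches your model.

```swift
/// Rough token estimate for a list of message strings, using the
/// "~4 characters per token" heuristic for English text.
/// This is an approximation, not an exact tokenizer.
func estimatedTokenCount(for messages: [String]) -> Int {
    let totalCharacters = messages.reduce(0) { $0 + $1.count }
    return totalCharacters / 4
}
```

Since every exemplar pair is resent on every call, a handful of long reviews in the prompt can noticeably increase per-request cost compared with a zero-shot prompt.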

© 2024 Preternatural AI, Inc.