Chat Completions Overview

Before Large Language Models (LLMs) reached mass adoption through the breakout ChatGPT product, they were essentially very powerful completion machines. Given a simple prompt such as “And the cow jumped over the moon,” an LLM could complete it into a full, original poem in the style of the famous "Hey, diddle, diddle" nursery rhyme by Mother Goose. It does this by repeatedly generating the next most likely word (or, more precisely, "token") based on the training data it has processed, which can be as vast as the entire internet.
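
To make that idea concrete, here is a purely illustrative sketch of greedy next-token generation. Nothing in it comes from the Preternatural API; `nextTokenProbabilities` and `complete` are hypothetical stand-ins for the model's forward pass and its decoding loop.

```swift
// Purely illustrative: an LLM repeatedly picks the most likely next token
// given everything generated so far. `nextTokenProbabilities` is a
// hypothetical stand-in for the model's forward pass, not a real API.
func nextTokenProbabilities(after context: [String]) -> [String: Double] {
    // A real model conditions a neural network on the full context window;
    // these hard-coded values are only for demonstration.
    ["moon,": 0.62, "fence,": 0.21, "hedge,": 0.17]
}

func complete(prompt: [String], maxTokens: Int) -> [String] {
    var tokens = prompt
    for _ in 0..<maxTokens {
        let probabilities = nextTokenProbabilities(after: tokens)
        // Greedy decoding: append the single most probable next token.
        guard let next = probabilities.max(by: { $0.value < $1.value })?.key else { break }
        tokens.append(next)
    }
    return tokens
}

print(complete(prompt: ["And", "the", "cow", "jumped", "over", "the"], maxTokens: 1)
    .joined(separator: " "))
// "And the cow jumped over the moon,"
```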

However, this raw completion functionality was not very usable on its own. So OpenAI trained a new model (a brilliant move, in retrospect), with the help of thousands of overseas human workers, to apply the LLM's completion ability within a User-Assistant conversation format. That chat interface became the breakout product of LLMs.
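
Here is a minimal sketch of that User-Assistant format, assuming nothing about the actual Preternatural API: the conversation is flattened into a single prompt, and the model simply completes it with the next assistant turn. The `Role` and `Message` types below are illustrative only.

```swift
// A minimal sketch of the User-Assistant chat format. These types are
// illustrative, not the actual Preternatural API.
enum Role: String {
    case system, user, assistant
}

struct Message {
    let role: Role
    let content: String
}

let conversation: [Message] = [
    Message(role: .system, content: "You are a helpful assistant."),
    Message(role: .user, content: "Finish this line: And the cow jumped over the"),
]

// What the "completion machine" actually sees: one block of text ending with
// an open assistant turn for the model to complete.
let prompt = conversation
    .map { "\($0.role.rawValue): \($0.content)" }
    .joined(separator: "\n") + "\nassistant:"
```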

Knowing this is important for us as developers: while most consumers think of LLMs as “Chat,” they are still completion machines under the hood, simply completing the input they receive based on probabilities. So when you experiment with Preternatural, remember to think of completions in this larger context.

For example, completions can be used to:

  • modify user input
  • fix grammar
  • aid in data cleaning and analysis
  • summarize complex data
  • return a JSON object in response to a natural language query (see the sketch after this list)
  • come up with relevant questions or information based on user input
  • generate images
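
As an example of the JSON use case, the sketch below instructs the model to reply with JSON only and decodes the reply with Codable. `complete(prompt:)` is a hypothetical stand-in for whatever completion call your client exposes, stubbed out here so the example is self-contained.

```swift
import Foundation

// Sketch of the "return a JSON object" use case: the model is told to reply
// with JSON only, and the reply is decoded with Codable.
struct CalendarEvent: Codable {
    let title: String
    let date: String
    let attendees: [String]
}

// Hypothetical LLM call. A real implementation would send the prompt to the
// model; this stub returns a canned reply so the example runs as-is.
func complete(prompt: String) async throws -> String {
    #"{"title": "Team sync", "date": "2024-06-03T10:00:00Z", "attendees": ["Ana", "Raj"]}"#
}

func extractEvent(from text: String) async throws -> CalendarEvent {
    let prompt = """
    Extract the calendar event from the text below and respond with only a \
    JSON object containing the keys "title", "date" (ISO 8601), and "attendees".

    \(text)
    """
    let reply = try await complete(prompt: prompt)
    return try JSONDecoder().decode(CalendarEvent.self, from: Data(reply.utf8))
}
```

Usage would look like `let event = try await extractEvent(from: "Sync with Ana and Raj next Monday at 10am")`, with the decoding step failing loudly if the model drifts from the requested schema.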

All of these use cases, and more, can dramatically enhance your app behind the scenes without the user ever directly interacting with the LLM's Chat functionality. Think of it as calling an unlimited database with access to what can loosely be called the entirety of human knowledge.
