2.9. Chaining outputs

The inputs and outputs we have discussed so far were relatively brief. Even when the prompts were complex, they spanned a few paragraphs, and the model’s responses were similar in length. But what if we wanted the AI to write an entire book? When I typed such a request into ChatGPT, I got just nine paragraphs of something the model considers a “book,” which it obviously is not.

So the question is, is there any way to guide the model into writing a book, even though it can’t do it on its own?

It turns out the answer is yes. If you, as a prompt engineer, know how to write a book, you can walk the model through the process step by step. First, you ask it to generate a few topics and choose the best one. Next, you have it write an elevator pitch for the book to pin down the approach to the topic. Then, you instruct it to create an outline based on this pitch. For each chapter in the outline, you have it develop a plan. Finally, for each bullet point, it writes a few paragraphs. The model can’t hold everything it has written in memory, so to keep the prompts short, each time you ask it to write a new chapter, you provide the outline and a summary of what it has written so far. And yes, it writes those summaries on its own, too.

As a result, at every step, the model generates a few paragraphs, then uses those paragraphs to generate more paragraphs, and so on. The resulting book might not be a masterpiece, since LLMs are not highly creative, but you have just guided the model through an extremely complex process.
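The book-writing steps above can be sketched in code. This is only an illustrative sketch: `complete` stands in for whichever LLM API you use, and the prompt wordings and the `write_book` function are my own assumptions, not prompts taken from this section.

```python
# Hypothetical sketch of the book-writing chain. `complete` is a
# placeholder for a real LLM call: it takes a prompt string and
# returns the model's text response.

def write_book(complete):
    """Drive an LLM through the book-writing steps, one prompt at a time."""
    topic = complete("Suggest a few book topics and pick the best one.")
    pitch = complete(f"Write an elevator pitch for a book about: {topic}")
    outline = complete(f"Create a chapter outline for this pitch:\n{pitch}")

    chapters, summary_so_far = [], ""
    for chapter in outline.splitlines():
        # Keep the prompt short: pass the outline plus a running summary
        # instead of everything written so far.
        plan = complete(
            f"Outline: {outline}\nSummary so far: {summary_so_far}\n"
            f"Write a bullet-point plan for the chapter: {chapter}"
        )
        text = complete(f"Expand each bullet into a few paragraphs:\n{plan}")
        chapters.append(text)
        # The model summarizes its own output for the next iteration.
        summary_so_far = complete(f"Summarize briefly:\n{summary_so_far}\n{text}")
    return "\n\n".join(chapters)
```

Each call consumes the output of an earlier call, which is exactly the chaining pattern discussed next.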

The technique we just used is called chaining: generating the desired output through a series of prompts executed in sequence. Each prompt addresses a subtask defined by the prompt engineer, and each one’s output feeds into the next, continuing down the chain of tasks until we arrive at the final result.
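At its core, a chain is just a loop in which each prompt template receives the previous output. The following minimal sketch assumes a hypothetical `complete` function and a `{previous}` placeholder convention; neither belongs to any particular library.

```python
# A minimal, generic chain runner (hypothetical API).

def run_chain(complete, templates):
    """Run prompt templates in order; each receives the previous output."""
    output = ""
    for template in templates:
        # str.format ignores `previous` if the template has no placeholder,
        # so the first prompt in the chain works unchanged.
        output = complete(template.format(previous=output))
    return output
```

A chain for this chapter’s example might then be expressed as three templates: summarize the post, list social media ideas based on `{previous}`, and write a thread based on `{previous}`.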

Time for a practical example.

We'll take the result of the summarization prompt from the previous section and build upon it by chaining it with a couple of other prompts.

Model input: Take the last summary you wrote and prepare a list of ideas for different social media sites based on it. Each idea should be tailored to the type of content that works best on each platform.

Model output: Based on the summary of the blog post, here are tailored ideas for different social media platforms:

Twitter: Create a thread explaining "productive fictions" with examples like Pascal's Wager and meritocracy, and how these unverified beliefs positively impact societal values and behaviors. Include engaging questions to encourage discussions.

Instagram: Post a carousel of visually appealing graphics that illustrate the concepts of Pascal's Wager, meritocracy, and their benefits. Use concise text overlays and captions to explain how these ideas contribute to societal balance and pragmatic optimism.

TikTok: Make a short video explaining the concept of "productive fictions" using simple terms and relatable examples. Use visual aids, animations, and on-screen text to make the content engaging. End with a question prompting viewers to share their own examples of beneficial beliefs.

(More ideas follow…)

You may have noticed that depending on the platform, the AI suggested different content forms. On Twitter, it’s text; on Instagram, it’s images; on TikTok, it’s videos. This is another reason why chaining can be extremely useful. In this example, we can take the ideas generated by the LLM and feed them to a visual model or an audio model.

For now, let's focus on just one modality. We’ll feed the model its own instruction to write a Twitter thread based on the summary it wrote.

Model input: Create a thread explaining "productive fictions" with examples like Pascal's Wager and meritocracy, and how these unverified beliefs positively impact societal values and behaviors. Include engaging questions to encourage discussions.

Model output: 1/10 🌟 Ever heard of "productive fictions"? These are beliefs that, while unverified, bring positive impacts to our lives and society. Let's dive into some examples and see how they work!

2/10 📜 One classic example is Blaise Pascal's Wager. Pascal argued that it's rational to believe in God, even without proof, because the potential benefits (eternal happiness) outweigh the potential costs.

(More tweets follow…)
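Wired together, the two steps we just ran look something like the sketch below. As before, `complete` is a placeholder for your model call, and the prompt strings and the `promote_post` helper are assumptions for illustration only.

```python
# Hypothetical two-step chain: summary -> platform ideas -> Twitter thread.

def promote_post(complete, summary):
    """Generate content ideas from a summary, then expand the Twitter one."""
    ideas = complete(
        "Take this summary and prepare a list of ideas for different "
        f"social media sites based on it:\n{summary}"
    )
    # Feed one of the model's own generated ideas back in as the next
    # instruction -- the defining move of chaining.
    twitter_idea = next(
        line for line in ideas.splitlines() if line.startswith("Twitter:")
    )
    return complete(twitter_idea.removeprefix("Twitter: "))
```

The Instagram and TikTok ideas could be routed the same way to image or video models instead of back into the text model.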

Great! We could use this simple chain, for example, in an app for content creators that automates self-promotion. As the app designer, you set up the process step by step, writing the prompts in a chainable way and instructing the model almost as you would instruct a human. The AI then takes care of the results.

Also, you can chain prompts with a human-in-the-loop approach for verification or corrections in mission-critical processes where hallucinations are unacceptable. For example, if you want the model to generate simple legal contracts, you might add a step that requires a human to approve the AI-generated content at a certain point in the chain. This way, only verified content flows into the next prompt.
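A human-in-the-loop step can be sketched as a pause in the chain: nothing proceeds until a reviewer approves or corrects the draft. In this hedged sketch, `complete` and `ask_reviewer` are both hypothetical placeholders, the latter standing in for whatever review UI your app provides.

```python
# Hypothetical human-in-the-loop step in a contract-drafting chain.

def draft_contract(complete, ask_reviewer, details):
    """Draft a contract, block until a human approves, then continue."""
    draft = complete(f"Draft a simple legal contract for: {details}")
    while True:
        # ask_reviewer returns ("approve", None) or ("edit", corrected_text).
        verdict, corrected = ask_reviewer(draft)
        if verdict == "approve":
            break
        draft = corrected  # the reviewer's text replaces the model's draft
    # Only reviewer-approved text continues down the chain.
    return complete(f"Summarize the key obligations in:\n{draft}")
```

The loop guarantees that the next prompt in the chain never sees unreviewed model output.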