The Chain of Density idea has been inspiring many of us to adopt a new, more iterative approach to prompting. Rather than accepting the first result, a prompt can invite the model to review and improve its own output, iterating until the quality is satisfactory.
This approach makes particular sense given the nature of LLMs: there is no planning, just token-by-token generation. The lack of planning sometimes causes unwanted results: the model commits to a certain path, and from that point it has to continue down that road until it reaches an ending. This is why some types of instructions, for example limiting the length of a generated text, often fail: you can only measure the length once the text is written, and at that point it is too late to change it.
Until recently the solution we adopted for these kinds of problems was to send multiple requests to the GPT API (or another model's API): first write something, then improve it, then perhaps write it again. This is still a valid approach when we want to change models between iterations: for example, quickly extract entities and summarise a long text with GPT-3.5-16K, then rewrite a better and more nuanced version with GPT-4.
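The multi-request version can be sketched as a simple loop. This is a minimal, illustrative sketch: `generate` is a hypothetical callable you would wrap around your API client of choice (and you could swap in a different model for the refinement calls).

```python
from typing import Callable

def iterate_offline(task: str, generate: Callable[[str], str], rounds: int = 2) -> str:
    """Multi-request iteration: draft once, then ask the model to improve
    its own draft in separate API calls."""
    draft = generate(task)
    for _ in range(rounds):
        draft = generate(
            f"Task: {task}\n\nCurrent draft:\n{draft}\n\n"
            "Review the draft against the task and rewrite an improved version."
        )
    return draft
```

Because `generate` is just a parameter, the same loop works whether the draft and the rewrite come from the same model or from two different ones.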
The iterative prompting approach gives you the advantages of multiple iterations in just one prompt. Next time you are writing a prompt to generate some text or code or other types of content, try this:
1. <<insert here your prompt, the task you need the model to deliver>>
2. After completing the first version of this task, review the instructions of step 1 and assess the quality of the result
3. Write some advice on how to improve the result
4. Create a new version of the result considering the advice received
5. Go back to step 2 until your result cannot be improved any further
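If you reuse this pattern often, it can help to generate the wrapper programmatically. A small sketch, assuming a hypothetical helper that takes your task as input (the step wording mirrors the list above and is illustrative, not a fixed recipe):

```python
def build_iterative_prompt(task: str) -> str:
    """Wrap a task in the self-review loop described above."""
    return "\n".join([
        f"1. {task}",
        "2. After completing the first version of this task, review the "
        "instructions of step 1 and assess the quality of the result.",
        "3. Write some advice on how to improve the result.",
        "4. Create a new version of the result considering the advice received.",
        "5. Go back to step 2 until your result cannot be improved any further.",
    ])
```

For example, `build_iterative_prompt("Summarise this article in 50 words.")` yields the full five-step prompt with your task slotted into step 1.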
If you are using the API, you can add the instruction to “print the final result enclosed in ###FINAL RESULT START### and ###FINAL RESULT END###”. This lets you easily extract the output you actually need from the API response.
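Extracting the marked section is a one-liner with a regular expression. A minimal sketch (the function name is mine; taking the last match guards against the model echoing the markers while reasoning):

```python
import re

def extract_final_result(response: str) -> str:
    """Pull the text between the ###FINAL RESULT START###/END### markers,
    keeping only the last match in case the markers appear more than once."""
    matches = re.findall(
        r"###FINAL RESULT START###(.*?)###FINAL RESULT END###",
        response,
        flags=re.DOTALL,
    )
    if not matches:
        raise ValueError("No final-result markers found in the response")
    return matches[-1].strip()
```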
I have been experimenting with this approach more informally by adding this custom instruction to my ChatGPT account:
When I ask you to “go deep”, use this approach to respond:
1. Write your first take
2. Analyse the first take
3. Give advice on how to improve
4. Write an improved take
This needs some improvement: it’s not super stable (ChatGPT sometimes starts doing it even when I have not asked it to “go deep”), but the results are good and it’s always interesting to see the model reason about how to improve the output quality. Of course, it’s much slower than any regular prompt.
Give it a shot, let me know how it works for you.