Chain-of-Draft (CoD) Is The New King Of Prompting Techniques
A deep dive into Chain-of-Draft (CoD) Prompting, a novel technique that outperforms Chain-of-Thought (CoT) Prompting while sharply reducing LLM inference cost and latency.
Reasoning LLMs are a hot topic in AI research today.
The field has come a long way, from GPT-1 all the way to advanced reasoners like Grok-3. This journey has been remarkable, with several important reasoning approaches discovered along the way.
One of them is Chain-of-Thought (CoT) Prompting (in both Few-shot and Zero-shot variants), which has driven much of the LLM reasoning revolution we see today.
Excitingly, there’s now an even better technique published by researchers from Zoom Communications.
This technique, called Chain-of-Draft (CoD) Prompting, outperforms CoT Prompting in accuracy while using as little as 7.6% of the reasoning tokens to answer a query.
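
To make the contrast concrete, here is a minimal sketch of how the two prompting styles differ in practice. The model name, the exact system-prompt wording, and the example question are illustrative assumptions rather than the paper's own setup; the core idea is simply that CoD asks the model to keep each reasoning step to a terse draft instead of writing out full sentences.

```python
# Minimal sketch: CoT-style vs. CoD-style prompting with the OpenAI Python SDK.
# The model name and system-prompt wording below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COT_SYSTEM = (
    "Think step by step to answer the question. "
    "Return the final answer after the separator ####."
)

COD_SYSTEM = (
    "Think step by step, but keep each thinking step to a minimal draft "
    "of at most five words. Return the final answer after the separator ####."
)

QUESTION = (
    "Jason had 20 lollipops. He gave Denny some lollipops. "
    "Now Jason has 12 lollipops. How many did he give to Denny?"
)

def ask(system_prompt: str, question: str) -> tuple[str, int]:
    """Send the question with the given system prompt; return the reply and its completion-token count."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content, response.usage.completion_tokens

for name, system in [("CoT", COT_SYSTEM), ("CoD", COD_SYSTEM)]:
    answer, tokens = ask(system, QUESTION)
    print(f"--- {name} ({tokens} completion tokens) ---\n{answer}\n")
```

Running a comparison like this on the same question typically shows the CoD-style prompt producing a far shorter reasoning trace while arriving at the same answer, which is exactly the cost and latency win the paper is after.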

This is a big win for reasoning LLMs that are currently very verbose, require lots of…