How ‘Chain of Thought’ Makes Transformers Smarter



Large Language Models (LLMs) like GPT-3 and ChatGPT exhibit exceptional capabilities in complex reasoning tasks such as mathematical problem-solving and code generation, far surpassing standard supervised machine learning techniques. The key to unlocking these advanced reasoning abilities lies in the chain of thought (CoT), which refers to the ability of the model to generate intermediate reasoning steps before arriving at the final answer, kind of like how we humans break down a complex problem into smaller steps in our head. This can be achieved through methods like training the model on examples enriched with intermediate reasoning steps or using few-shot prompting to instruct the model to generate a CoT.
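To make that concrete, here is a minimal sketch of few-shot CoT prompting (the worked example and the `generate` callback are illustrative stand-ins, not tied to any particular model or API):

```python
# A minimal sketch of few-shot CoT prompting. The worked example and the
# `generate` callback are hypothetical; plug in any LLM completion call.
COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each.
How many tennis balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

def ask_with_cot(question: str, generate) -> str:
    # The in-context example nudges the model to write out intermediate
    # reasoning steps before committing to a final answer.
    return generate(COT_PROMPT.format(question=question))
```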

Now, you might think that the content of these intermediate steps is what allows the model to reason better. Interestingly, though, the researchers found that even if the intermediate steps are incorrect or completely random, the mere act of generating them still helps the model substantially. It’s as if the model is being told “okay, think this through step by step,” and that alone drastically improves its reasoning ability.

So the researchers wanted to understand why this “chain of thought” approach is so powerful for transformers (the architecture behind GPT-3 and similar models). They drew on circuit complexity theory, using the language of computational complexity classes such as NC, AC, and TC to analyze the problem.

Essentially, they found that without a chain of thought, transformers can efficiently perform only parallel computations: they are restricted to problems that can be broken into independent sub-tasks and computed simultaneously.
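As a toy illustration (ours, not the paper’s), an associative operation like addition can be combined pairwise, so n inputs need only about log2(n) parallel rounds:

```python
def parallel_sum(xs):
    # Each round combines disjoint pairs; all pairs in a round are
    # independent, so they could run simultaneously. Rounds ~ log2(n).
    xs, rounds = list(xs), 0
    while len(xs) > 1:
        paired = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:            # carry an unpaired element forward
            paired.append(xs[-1])
        xs, rounds = paired, rounds + 1
    return xs[0], rounds

print(parallel_sum(range(16)))     # (120, 4): 16 inputs, only 4 rounds
```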

However, many complex reasoning tasks require inherently serial computations, where one step follows from the previous step. And this is where the chain of thought helps transformers a lot. By generating step-by-step reasoning, the model can perform many more serial computations than it could without CoT.
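Contrast that with an inherently serial task, sketched below (an illustrative toy, not the paper’s exact construction). Step t cannot be computed without the output of step t-1, and emitting the trace of intermediate states is exactly what a chain of thought does:

```python
# A toy inherently serial computation (illustrative, not the paper's
# exact benchmark): each state depends on the previous one, so there is
# no obvious way to split the work into independent chunks.
def iterate(x0: int, steps: int, p: int = 97):
    state, trace = x0, []
    for _ in range(steps):
        state = (state * state + 1) % p   # step t needs step t-1
        trace.append(state)               # this trace is the "chain of thought"
    return state, trace

answer, cot = iterate(3, 5)
print(cot, "->", answer)                  # [10, 4, 17, 96, 2] -> 2
```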

The researchers proved theoretically that while a basic transformer without CoT can only solve problems up to a certain complexity level, allowing a polynomial number of CoT steps makes transformers expressive enough, in principle, to solve any problem that can be computed by polynomial-size circuits.
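Stated loosely (this is our paraphrase of the flavor of the result, not the paper’s exact theorem statement), the expressivity jump looks like this:

```latex
% Informal paraphrase (our wording, not the paper's exact statement):
% a constant-depth transformer allowed T(n) chain-of-thought tokens can
% simulate Boolean circuits of size roughly T(n), so with polynomially
% many steps it captures problems with polynomial-size circuits:
\text{CoT with } \mathrm{poly}(n) \text{ steps}
  \;\Longrightarrow\;
  \text{all of } \mathsf{P/poly}.
```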

To back up their theory, they also ran experiments on several arithmetic tasks: some that can be parallelized and some that inherently require sequential computation. Sure enough, transformers struggled on the sequential tasks without CoT, but enabling CoT drastically boosted their performance, especially when the model was relatively small and shallow.
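To make the two task flavors concrete, here is a rough sketch of what such benchmarks look like (an assumed setup for illustration; the paper’s exact data generation may differ):

```python
# Assumed sketch of the two task flavors (for illustration only; the
# paper's exact data pipeline may differ). Modular addition is order-
# independent and thus parallelizable; composing permutations is serial.
import random

def modular_addition_instance(n: int = 8, p: int = 7):
    xs = [random.randrange(p) for _ in range(n)]
    return xs, sum(xs) % p                    # any grouping works: parallel

def permutation_composition_instance(n: int = 8, k: int = 5):
    perms = [random.sample(range(k), k) for _ in range(n)]
    state = list(range(k))                    # identity permutation
    for perm in perms:                        # each step needs the last result
        state = [state[perm[i]] for i in range(k)]
    return perms, state
```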

In essence, the chain of thought is a simple but powerful trick that vastly increases the reasoning capabilities of transformer models like GPT-3. It allows them to tackle complex tasks requiring sequential logic that parallel models would fail at. 

Check out the paper. All credit for this research goes to the researchers of this project.

