Google’s Chain of Thought Prompting
Google published details of a breakthrough technique that significantly improves its newest cutting-edge algorithms.
Google announced breakthrough research in natural language processing called Chain of Thought Prompting that raises the state of the art of advanced technologies like PaLM and LaMDA to what the researchers call a remarkable level.
The fact that Chain of Thought Prompting can improve PaLM and LaMDA by such significant margins is a big deal.
LaMDA and PaLM
The research ran experiments using two language models, the Language Model for Dialogue Applications (LaMDA) and the Pathways Language Model (PaLM).
LaMDA is a conversational model that can support conversational search and voice assistants, as well as other conversational applications.
PaLM is a model that follows what Google calls the Pathways AI architecture, in which a language model is trained to learn how to solve problems.
Previously, a machine learning model was trained to solve one kind of problem and essentially released to do that one thing really well. To do something else, Google would have to train a new model.
The Pathways AI architecture is a way to create a model that can solve problems it hasn’t necessarily seen before.
As the Google PaLM explainer puts it:
“…we’d like to train one model that can not only handle many separate tasks, but also draw upon and combine its existing skills to learn new tasks faster and more effectively.”
What it Does
The research paper lists three important breakthroughs for Chain of Thought reasoning:
1. It allows language models to break down complex multi-step problems into a series of intermediate steps
2. The chain of thought process lets engineers look into the reasoning, and when things go wrong it allows them to identify where the process failed and correct it
3. It can solve math word problems, can perform commonsense reasoning, and can (in principle) solve any word-based problem a human can
Multi-step Reasoning Tasks
The study gives an example of a multi-step reasoning task used to test the language models:
“Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 – 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9.”
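To make the prompting format concrete, below is a minimal Python sketch of how a chain of thought few-shot prompt can be assembled, using the cafeteria exemplar quoted above. The generate() call is a hypothetical stand-in for whatever model completion API is used; it is not something described in the paper.

```python
# A minimal sketch of chain of thought prompting.
# The exemplar pairs a question with a step-by-step worked answer,
# so the model imitates the reasoning before giving a final answer.

COT_EXEMPLAR = (
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch "
    "and bought 6 more, how many apples do they have?\n"
    "A: The cafeteria had 23 apples originally. They used 20 to "
    "make lunch. So they had 23 - 20 = 3. They bought 6 more "
    "apples, so they have 3 + 6 = 9. The answer is 9.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar to a new question so the model
    produces intermediate reasoning steps before the final answer."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "Roger has 5 tennis balls. He buys 2 more cans of 3 tennis "
    "balls each. How many tennis balls does he have now?"
)
# completion = generate(prompt)  # hypothetical language model call
print(prompt)
```

Without the worked exemplar, a standard prompt would only show the question and the bare answer; the chain of thought version differs solely in that the exemplar spells out the intermediate steps.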
PaLM is a state-of-the-art language model that is part of the Pathways AI architecture. It is so advanced that it can explain why a joke is funny.
As advanced as PaLM is, the researchers claim that Chain of Thought Prompting significantly improves these models, and that’s what makes this new study so remarkable.
Google explains it this way:
“Chain of thought reasoning allows models to decompose complex problems into intermediate steps that are solved individually.
Moreover, the language-based nature of chain of thought makes it applicable to any task that a person could solve via language.”
The paper goes on to note that standard prompting does not really improve as the scale of the model increases.
With this new approach, however, scale has a significant and noticeable positive effect on model performance.
Results
Chain of Thought Prompting has been tested on LaMDA and PaLM using two datasets of math word problems:
⦁ GSM8K
⦁ MultiArith
These datasets are used by researchers as a way to compare outcomes on similar problems for different language models.
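For readers who want to inspect one of these benchmarks, the sketch below loads GSM8K with the Hugging Face datasets library. It assumes the dataset is published on the Hub under the id "gsm8k" with a "main" config and "question"/"answer" fields, which is how it is commonly distributed; none of this is part of Google’s paper.

```python
# A sketch of inspecting the GSM8K benchmark.
# Assumes the Hugging Face `datasets` library and that GSM8K is
# available on the Hub as "gsm8k" with a "main" config.
from datasets import load_dataset

gsm8k = load_dataset("gsm8k", "main", split="test")

example = gsm8k[0]
print(example["question"])  # the math word problem
print(example["answer"])    # step-by-step solution ending in the final answer
```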
The paper includes graphs showing the results of using Chain of Thought Prompting with LaMDA.
Scaling LaMDA on the MultiArith dataset yields only a modest improvement, but LaMDA’s scores are significantly higher when scaling is combined with Chain of Thought Prompting.
The results on the GSM8K dataset show, at best, a modest improvement.
It is a different story with the PaLM language model.
As the paper’s charts show, the gains from scaling PaLM with Chain of Thought Prompting are huge, and they are huge for both datasets (MultiArith and GSM8K).
The researchers call the results remarkable and state-of-the-art:
“On the GSM8K dataset of math word problems, PaLM shows remarkable performance when scaled to 540B parameters.
…combining chain of thought prompting with the 540B parameter PaLM model leads to new state-of-the-art performance of 58%, surpassing the prior state of the art of 55% achieved by fine-tuning GPT-3 175B on a large training set and then ranking potential solutions via a specially trained verifier.
Moreover, follow-up work on self-consistency shows that the performance of chain of thought prompting can be improved further by taking the majority vote of a broad set of generated reasoning processes, which results in 74% accuracy on GSM8K.”
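As a rough illustration of that self-consistency idea, the sketch below samples several chain of thought completions and takes a majority vote over the final answers. The sample_cot_answer() function is a hypothetical stand-in for a sampled model call, and the answer extraction assumes completions end with “The answer is N.” as in the paper’s exemplars.

```python
# A rough sketch of self-consistency: sample several chain-of-thought
# completions and majority-vote on the extracted final answers.
import re
from collections import Counter

def sample_cot_answer(question: str) -> str:
    """Hypothetical stand-in: sample one chain-of-thought completion
    from the language model at a nonzero temperature."""
    raise NotImplementedError("plug in your model's sampling call here")

def extract_answer(completion: str) -> str | None:
    """Pull the final number, assuming the completion ends with
    a sentence like 'The answer is 9.'"""
    match = re.search(r"The answer is (-?\d+)", completion)
    return match.group(1) if match else None

def self_consistent_answer(question: str, num_samples: int = 20) -> str:
    votes = Counter()
    for _ in range(num_samples):
        answer = extract_answer(sample_cot_answer(question))
        if answer is not None:
            votes[answer] += 1
    # The answer agreed on by the most sampled reasoning paths wins.
    return votes.most_common(1)[0][0]
```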
Conclusion
The conclusion of a research paper is one of the most important parts to check when judging whether the research represents real progress, is a dead end, or needs further study.
The conclusion of Google’s research paper is very positive.
It notes:
“We have explored chain of thought prompting as a simple and broadly applicable method for enhancing reasoning in language models.
Through experiments on arithmetic, symbolic, and commonsense reasoning, we find that chain of thought reasoning is an emergent property of model scale that allows sufficiently large language models to perform reasoning tasks that otherwise have flat scaling curves.
Broadening the range of reasoning tasks that language models can perform will hopefully inspire further work on language-based approaches to reasoning.”
This means that Chain of Thought Prompting could give Google the ability to dramatically improve its various language models, which in turn could lead to significant improvements in the kinds of tasks Google can perform.