
Meet OPRO: Google DeepMind’s New Method that Optimizes Prompts Better than Humans

Prompt engineering and optimization is one of the most debated topics in working with large language models (LLMs). The term prompt engineering typically describes the task of refining a natural language instruction so that an LLM performs a specific task well. This work is usually done by humans, but what if AI could optimize prompts better? In a recent paper, researchers from Google DeepMind proposed a technique called Optimization by Prompting (OPRO) that addresses precisely this challenge.

The core idea of OPRO is to use the LLM itself as the optimization agent. As prompting techniques have evolved, LLMs have demonstrated strong performance across many domains, and their ability to understand natural language opens a new avenue for optimization. Instead of formally defining the optimization problem and programming a dedicated solver for the update steps, DeepMind takes a more intuitive approach: the optimization problem is described in natural language, and the LLM is directed to generate new solutions iteratively, drawing on the problem description and previously discovered solutions. Using an LLM as the optimizer also makes it easy to adapt to different tasks, since only the problem description in the prompt needs to change; solutions can be further customized by appending instructions that specify desired attributes.
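The iterative loop described above can be sketched as follows. Everything here is illustrative, not DeepMind's implementation: `query_llm` is a hypothetical stand-in for a real model call (replaced by a random choice so the sketch runs end to end), and the toy scorer stands in for what would, in OPRO, be task accuracy measured on a training set.

```python
import random

def query_llm(meta_prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call. A real optimizer LLM
    # would read the meta-prompt and propose a new instruction; here we
    # sample from a fixed pool so the loop is runnable without a model.
    candidates = [
        "Let's think step by step.",
        "Break the problem into smaller parts before answering.",
        "Take a deep breath and work on this problem step-by-step.",
    ]
    return random.choice(candidates)

def score_instruction(instruction: str) -> float:
    # Toy evaluator (assumption): in OPRO this would be the task accuracy
    # achieved by a scorer LLM when the instruction is added to the prompt.
    return len(instruction) / 100.0

def build_meta_prompt(task_description: str,
                      history: list[tuple[str, float]]) -> str:
    # Describe the problem in natural language and show previously
    # discovered solutions with their scores, worst first.
    lines = [f"Task: {task_description}",
             "Previously tried instructions and their scores:"]
    for text, score in sorted(history, key=lambda pair: pair[1]):
        lines.append(f"score {score:.2f}: {text}")
    lines.append("Propose a new instruction that achieves a higher score.")
    return "\n".join(lines)

def opro(task_description: str, steps: int = 5) -> tuple[str, float]:
    history: list[tuple[str, float]] = []
    for _ in range(steps):
        meta_prompt = build_meta_prompt(task_description, history)
        candidate = query_llm(meta_prompt)
        history.append((candidate, score_instruction(candidate)))
    # Return the best instruction found across all iterations.
    return max(history, key=lambda pair: pair[1])

best, best_score = opro("Solve grade-school math word problems.")
print(best, best_score)
```

Adapting this sketch to a different task would only require changing `task_description`, which is the adaptability advantage the paragraph above describes.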
