GEPA optimizes LLMs without costly reinforcement learning
Summary
A team of US-based researchers claim to have developed an improved method of optimising large language models that they say outperforms existing techniques in both runtime and efficiency.
GEPA (Genetic-Pareto) is a new artificial intelligence (AI) optimisation system for large language models (LLMs) that substantially outperforms reinforcement learning (RL)-based techniques.
GEPA learns from the natural language feedback it receives, diagnosing errors and iteratively improving the instructions given to the model.
The system can optimise complex AI chains or workflows, reducing the time taken to tune them and thus lowering computational costs.
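To make the idea concrete, the loop below is a deliberately simplified toy sketch of a genetic-Pareto optimisation cycle, not the authors' implementation: it keeps a pool of candidate instructions, scores each on a small set of tasks, "reflects" on a failure to produce a mutated candidate, and retains only candidates that no other candidate beats on every task. The `evaluate` and `reflect_and_mutate` functions are hypothetical stand-ins for running an LLM pipeline and for an LLM rewriting an instruction from natural language feedback.

```python
# Toy sketch of a GEPA-style genetic-Pareto loop (illustrative only).
TASKS = ["t1", "t2", "t3"]

def evaluate(candidate, task):
    # Stand-in for running the LLM workflow and scoring its output:
    # a candidate scores 1 on a task iff its "hints" cover that task.
    return 1 if task in candidate["hints"] else 0

def reflect_and_mutate(candidate, failed_task):
    # Stand-in for an LLM reading natural-language feedback about a
    # failure and rewriting the instruction to address it.
    return {"hints": candidate["hints"] | {failed_task}}

def pareto_front(pool):
    # Keep candidates not dominated (beaten or equalled on every task,
    # and strictly beaten on at least one) by another candidate.
    scores = [{t: evaluate(c, t) for t in TASKS} for c in pool]
    front = []
    for i, c in enumerate(pool):
        dominated = any(
            all(scores[j][t] >= scores[i][t] for t in TASKS)
            and any(scores[j][t] > scores[i][t] for t in TASKS)
            for j in range(len(pool)) if j != i
        )
        if not dominated:
            front.append(c)
    return front

def gepa_loop(iterations=5):
    pool = [{"hints": set()}]  # start from an empty instruction
    for _ in range(iterations):
        # Find a candidate that fails some task and mutate it via "reflection".
        for c in pool:
            failed = [t for t in TASKS if evaluate(c, t) == 0]
            if failed:
                pool.append(reflect_and_mutate(c, failed[0]))
                break
        pool = pareto_front(pool)
    return pool

best = gepa_loop()
print(max(sum(evaluate(c, t) for t in TASKS) for c in best))  # → 3
```

Each round improves one candidate using feedback from a concrete failure, and the Pareto filter prevents the pool from collapsing onto a single candidate that happens to score well on average but is weak on specific tasks.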
The researchers who developed GEPA say the system enables people with less AI expertise to build high-performing systems, and that it is particularly valuable for enterprises, making AI applications more reliable and adaptable.