Summary

  • A new paper by Google Research and the University of California, Berkeley demonstrates that simply scaling up sampling-based search can boost the reasoning skills of large language models.
  • The technique works by generating numerous candidate responses and using the model itself to verify the answers.
  • Even a minimalist implementation was found to surpass specialised reasoning models on popular benchmarks.
  • This challenges the common misconception that complicated architectures and highly specialised training are required to achieve the best results.
  • However, the costs associated with this type of sampling-based search can be prohibitive, with some questions costing up to $650 to answer.
  • Still, the researchers suggest that simpler verification methods and smaller models can bring these costs down.
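
The generate-and-verify loop described in the bullets above can be sketched in a few lines. This is an illustration only, not the paper's implementation: `generate_response` and `verify_response` are hypothetical stubs that, in a real system, would both be calls to a language model (the second asking it to judge a candidate answer's correctness).

```python
# Minimal sketch of sampling-based search with self-verification.
# `generate_response` and `verify_response` are hypothetical stubs
# standing in for actual language-model calls.

def generate_response(question: str, seed: int) -> str:
    """Sample one candidate answer (stub: cycles through a fixed pool)."""
    pool = ["4", "5", "4", "3", "4"]
    return pool[seed % len(pool)]

def verify_response(question: str, answer: str) -> float:
    """Score a candidate's likely correctness (stub: exact-match heuristic).
    In the technique described above, the model itself produces this score."""
    return 1.0 if answer == "4" else 0.0

def sampling_based_search(question: str, num_samples: int = 5) -> str:
    """Generate many candidates, verify each, and return the best-scoring one."""
    candidates = [generate_response(question, i) for i in range(num_samples)]
    return max(candidates, key=lambda a: verify_response(question, a))

print(sampling_based_search("What is 2 + 2?"))
```

The cost concern in the bullets follows directly from this structure: every extra sample multiplies both generation and verification calls, which is why cheaper verifiers and smaller models are the suggested remedy.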

By Ben Dickson
