🧀 BigCheese.ai


Researchers run high-performing LLM on the energy needed to power a lightbulb


UC Santa Cruz researchers created a billion-parameter large language model that runs on roughly the power of a lightbulb. By eliminating matrix multiplication from the network, the model draws only 13 watts on custom hardware, over 50 times more efficient than running on standard GPUs, while matching the performance of state-of-the-art models like Meta's Llama.

  • New AI model runs on 13 watts.
  • Eliminated costly matrix multiplication.
  • Performance on par with Llama.
  • 50 times more efficient than GPUs.
  • A step toward running such models on smartphones.
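The core idea behind removing matrix multiplication is to constrain weights to the ternary values {-1, 0, +1}, so each multiply collapses into an add, a subtract, or a skip. A minimal sketch of that trick (an illustration of the general technique, not the researchers' actual implementation; `ternary_matvec` is a hypothetical name):

```python
import numpy as np

def ternary_matvec(W, x):
    """Matrix-vector product where W holds only {-1, 0, +1}.

    Because every weight is -1, 0, or +1, the usual multiply-accumulate
    reduces to selective addition and subtraction: no multiplies needed.
    """
    out = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return out

# Tiny example: results match an ordinary matmul with the same weights.
W = np.array([[1, 0, -1],
              [0, 1, 1]])
x = np.array([2.0, 3.0, 4.0])
print(ternary_matvec(W, x))   # same values as W @ x
```

On hardware, the add/subtract/skip pattern maps naturally onto simple accumulators, which is what makes the watt-scale power budget plausible compared with multiply-heavy GPU kernels.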