UC Santa Cruz researchers have created a large language model that runs on about as much power as a lightbulb. By eliminating matrix multiplication, the billion-parameter model runs on just 13 watts, making it over 50 times more efficient than typical hardware, while matching the performance of state-of-the-art models such as Meta's Llama.
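The key idea behind eliminating matrix multiplication is to constrain weights to the ternary values -1, 0, and +1, so that the multiply-accumulate at the heart of a matrix product reduces to additions and subtractions. The sketch below is a minimal illustration of that principle in plain Python; the function name and the toy data are illustrative, not taken from the researchers' implementation.

```python
def ternary_matvec(W, x):
    """Matrix-vector product where W contains only -1, 0, or 1.

    No multiplications are performed: entries of x are added where
    the weight is +1 and subtracted where it is -1, which is the
    core trick that lets matmul-free models avoid multiplier circuits.
    """
    return [
        sum(xj for wj, xj in zip(row, x) if wj == 1)
        - sum(xj for wj, xj in zip(row, x) if wj == -1)
        for row in W
    ]

# Toy example: a 2x3 ternary weight matrix applied to a 3-vector.
W = [[1, 0, -1],
     [-1, 1, 1]]
x = [2.0, 3.0, 5.0]
print(ternary_matvec(W, x))  # [-3.0, 6.0]
```

Because additions are far cheaper than multiplications in silicon, this substitution is what allows such a model to run on low-power custom hardware rather than energy-hungry GPUs.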