🧀 BigCheese.ai

How to run an LLM on your PC, not in the cloud, in less than 10 minutes

The Register published a guide to running a large language model (LLM) on a local PC, rather than in the cloud, in under 10 minutes. Using a tool such as Ollama, you can set up an LLM on Windows, Linux, or macOS, with support for Nvidia and Apple M-series GPUs and a separate guide covering AMD Radeon cards. Once the server is running, a local model can also be queried from code, as shown in the sketch after the list below.

  • Ollama supports Nvidia and M-series GPUs.
  • AMD Radeon support guide available.
  • Without a supported GPU, LLMs run on AVX2-compatible CPUs.
  • Quantized models use less memory.
  • Native AMD GPU support is rolling out.
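
For those who want to script against a model once Ollama is running, here is a minimal sketch in Python using Ollama's local REST API on its default port, 11434. The model name "llama3" and the prompt are illustrative, and the model must be downloaded first (for example with `ollama pull llama3`); this is a sketch under those assumptions, not code from the article itself.

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes Ollama is installed and serving on http://localhost:11434, and that
# the model named below has already been pulled (e.g. `ollama pull llama3`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(model: str, prompt: str) -> str:
    """Send one prompt to the local Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Quantized variants of a model (smaller memory footprint) are selected
    # via tags in the Ollama library; "llama3" here is just an example name.
    print(ask("llama3", "In one sentence, why run an LLM locally?"))
```

Note that the API streams tokens by default; setting "stream" to false keeps the example to a single request-response round trip.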