🧀 BigCheese.ai


First Impressions of Early-Access GPT-4 Fine-Tuning


In early access to GPT-4 fine-tuning, Supersimple saw a performance improvement of over 50% compared to fine-tuned GPT-3.5 on their data question-answering use case, though at higher latency and cost. Even fine-tuned, GPT-4 shows diminishing returns and still struggles with broad, open-ended queries. Supersimple advocates combining multiple models with heuristics to improve performance.

  • GPT-4 FT outperforms GPT-3.5 by 56%
  • Davinci FT used as baseline for performance
  • Fine-tuned GPT-4 is slower, at 21.6 tokens/s
  • GPT-4 costs 15x more than GPT-3.5 for inference
  • Supersimple never generates SQL using LLMs
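The throughput and cost figures above can be put in perspective with a rough back-of-envelope calculation. Only the 21.6 tokens/s rate and the 15x cost multiplier come from the article; the GPT-3.5 throughput and the response length used here are assumed placeholders for illustration.

```python
# Back-of-envelope latency/cost comparison using the reported figures.
# ASSUMPTIONS (not from the article): GPT-3.5 throughput and response length.

GPT4_FT_TOKENS_PER_S = 21.6   # reported for fine-tuned GPT-4
GPT35_TOKENS_PER_S = 60.0     # assumed placeholder, not stated in the article
COST_MULTIPLIER = 15          # reported: GPT-4 inference costs 15x GPT-3.5

def generation_seconds(tokens: float, rate: float) -> float:
    """Wall-clock seconds to generate `tokens` tokens at `rate` tokens/s."""
    return tokens / rate

response_tokens = 500  # hypothetical response length
gpt4_latency = generation_seconds(response_tokens, GPT4_FT_TOKENS_PER_S)
gpt35_latency = generation_seconds(response_tokens, GPT35_TOKENS_PER_S)

print(f"GPT-4 FT: {gpt4_latency:.1f}s per response, {COST_MULTIPLIER}x base cost")
print(f"GPT-3.5:  {gpt35_latency:.1f}s per response, 1x base cost")
```

At these assumed numbers, a fine-tuned GPT-4 response takes roughly 23 seconds versus about 8 for GPT-3.5, which illustrates why the article weighs the 56% quality gain against latency and cost.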