#ai #langchain #python #reading-list

🔗 Lessons after a half-billion GPT tokens
kenkantzer.com

My startup, Truss (gettruss.io), released a few LLM-heavy features in the last six months. The narrative around LLMs that I read on Hacker News is now starting to diverge from my reality, so I thought I'd share some of the more "surprising" lessons after churning through just north of 500 million tokens, by my estimate.

Some details first:

  • we're using the OpenAI models; see the Q&A at the bottom of the full post if you want my opinion of the others
  • our usage is 85% GPT-4 and 15% GPT-3.5
  • we deal exclusively with text, so no gpt-4-vision, Sora, Whisper, etc.
  • we have a B2B use case, strongly focused on summarize/analyze/extract, so YMMV
  • 500M tokens actually isn't as much as it seems: it's about 750,000 pages of text, to put it in perspective (a back-of-envelope check follows this list)
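
For perspective, the 750,000-page figure falls out of a standard conversion. A minimal sketch in Python, assuming roughly 0.75 words per token (OpenAI's usual rule of thumb) and 500 words per page; neither constant is from the post:

```python
# Back-of-envelope check of the "500M tokens ~ 750,000 pages" claim.
# Both constants are assumptions, not figures from the post.
WORDS_PER_TOKEN = 0.75  # OpenAI's rough rule of thumb for English text
WORDS_PER_PAGE = 500    # a typical single-spaced page

def tokens_to_pages(tokens: int) -> float:
    """Convert a token count to an approximate page count."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

print(f"{tokens_to_pages(500_000_000):,.0f} pages")  # -> 750,000 pages
```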
continue reading on kenkantzer.com