
Four ways to optimize your AI feature post-launch

Tactics to analyze, evaluate, and fine-tune your AI features

As with any release, shipping an AI feature to production is only the starting point. Unlike the rest of your tech stack, though, LLMs introduce a non-deterministic element to your product. To keep their output consistent, you need a data-driven evaluation system in place.
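
To make that concrete, here's a minimal sketch of what such an evaluation loop might look like in Python. Everything here — the test cases, the model name, and the exact-match scoring — is an illustrative assumption, not a prescribed setup.

```python
# Minimal evaluation-loop sketch: run a fixed test set through the model
# and score outputs against expected answers. The test cases, model, and
# scoring rule below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A tiny, hand-written test set; in practice this would come from logged
# production traffic or a curated golden dataset.
test_cases = [
    {"prompt": "Classify the sentiment: 'I love this product.'", "expected": "positive"},
    {"prompt": "Classify the sentiment: 'This is the worst.'", "expected": "negative"},
]

def run_eval(model: str = "gpt-4o-mini") -> float:
    passed = 0
    for case in test_cases:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": case["prompt"]}],
            temperature=0,  # reduces, but does not eliminate, non-determinism
        )
        output = response.choices[0].message.content.strip().lower()
        passed += case["expected"] in output
    return passed / len(test_cases)

if __name__ == "__main__":
    print(f"pass rate: {run_eval():.0%}")
```

Running this on every prompt or model change gives you a pass rate to track over time, instead of eyeballing individual responses.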

In this week’s article, we share examples of how to analyze your AI features, tailor pre-trained models to your use case, and put your LLM data to work.

Topics covered in this article:

  • Analyze usage, performance, and costs of your AI features

  • Evaluate the effectiveness of prompt engineering

  • Prepare your data for fine-tuning models (see the sketch after this list)

  • Forward logs to any platform
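
As a concrete example of the data-preparation step, one common approach is to export high-quality logged exchanges into OpenAI's chat fine-tuning JSONL format, one training example per line. The log shape below is hypothetical; substitute whatever structure your logging layer actually stores.

```python
# Sketch: convert logged request/response pairs into OpenAI's chat
# fine-tuning JSONL format (one JSON object per line, each with a
# "messages" array). The `logs` list is a hypothetical shape standing
# in for your real log records.
import json

logs = [
    {
        "system": "You are a helpful support agent.",
        "user": "How do I reset my password?",
        "assistant": "Go to Settings > Account and click 'Reset password'.",
    },
]

with open("fine_tune_data.jsonl", "w") as f:
    for entry in logs:
        example = {
            "messages": [
                {"role": "system", "content": entry["system"]},
                {"role": "user", "content": entry["user"]},
                {"role": "assistant", "content": entry["assistant"]},
            ]
        }
        f.write(json.dumps(example) + "\n")
```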

Articles we enjoyed this week:

The launch of Figma AI: As power users of Figma, we’re excited to use the new AI features that automate time-consuming and tedious tasks. Like Apple’s AI announcement, these features illustrate AI capabilities that go far beyond chat.

Comparing fine-tuned models to OpenAI: An in-depth overview of how fine-tuned Mistral, Llama 3, and Solar models compare to OpenAI’s on performance. The HN comments debating the topic are also worth a scan.

New: Warehouse OpenAI logs to PostgreSQL | Learn more
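
For a sense of why warehousing logs is useful: once requests land in PostgreSQL, usage and cost questions become ordinary SQL. The table and column names in this sketch are illustrative assumptions, not a documented schema.

```python
# Hypothetical example: query warehoused OpenAI logs in PostgreSQL for
# per-model request and token volume over the last week. The table name
# (openai_logs) and columns are illustrative assumptions only.
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@localhost:5432/llm_logs")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT model,
               count(*)          AS requests,
               sum(total_tokens) AS tokens
        FROM openai_logs
        WHERE created_at > now() - interval '7 days'
        GROUP BY model
        ORDER BY tokens DESC
        """
    )
    for model, requests, tokens in cur.fetchall():
        print(model, requests, tokens)
```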