Melodi is the essential platform for AI teams who want to enhance customer engagement.

Unlike traditional product analytics tools that track screen-based clicks, Melodi is designed to capture metrics unique to AI agents, such as trends in user behavior across high volumes of diverse conversations.

Melodi is great for:

  • Product Managers who want to understand their customers and prioritize high-impact features to increase customer usage.

  • Operations Managers who want to report on the success and engagement of AI initiatives.

  • Data Scientists who need quick access to the right datasets for few-shot examples, fine-tuning, and custom testing.

Metrics Available in Melodi:

  • User engagement and retention: Track metrics including active users, new user growth, and retention rates. Monitor customer engagement through total sessions, sessions per user, and session duration. These engagement metrics provide critical insights into the value users derive from your AI product.

  • User intents: Automatically identify what users are trying to do when they interact with your AI. These user intents can either be defined manually or generated automatically after a minimum volume of interactions. Examples include intents like “troubleshoot integrations,” “book an appointment,” or “get a weather update.” Understanding what your users are trying to do is critical for building a valuable AI tool and knowing whether your model is working as expected.

  • User feedback: Collect direct feedback from users to understand how your AI models are performing and identify opportunities for improvement. This can be collected via the Melodi feedback widget or through your own feedback UI with Melodi’s feedback API.
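
If you collect feedback through your own UI, you would typically send it to Melodi over HTTP. The sketch below is only an illustration: the endpoint URL, payload fields (`threadId`, `sentiment`, `comment`), and the `MELODI_API_KEY` environment variable are hypothetical, not Melodi’s actual feedback API; consult the API docs for the real route, schema, and authentication.

```python
# Minimal sketch of posting user feedback from your own UI.
# Endpoint, payload shape, and auth are placeholders, not Melodi's real API.
import os
import requests

def send_feedback(thread_id: str, sentiment: str, comment: str | None = None) -> None:
    payload = {
        "threadId": thread_id,   # the conversation the feedback refers to
        "sentiment": sentiment,  # e.g. "positive" or "negative"
        "comment": comment,      # optional free-text detail from the user
    }
    response = requests.post(
        "https://api.example.com/feedback",  # placeholder URL
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['MELODI_API_KEY']}"},
        timeout=10,
    )
    response.raise_for_status()

# Example: a thumbs-down with a short explanation
send_feedback("thread_abc123", "negative", "The answer cited an outdated integration guide.")
```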

Specific data you can track:

  • Projects are the top-level container in Melodi. You can have multiple projects, each with its own threads, users, and feedback. In general, a project corresponds to a single product or feature.

  • Threads are a flexible data format that supports messages with customizable roles, including user, AI response, or RAG lookup (see the sketch after this list).

  • Feedback can be added to threads to help you understand how users think about your AI models and identify opportunities for improvement.

  • Users can contain additional metadata like name, email, and company.
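
Putting these pieces together, the sketch below shows how the hierarchy might be modeled in code. The class and field names (`Thread`, `Message`, `external_id`, and so on) are illustrative assumptions, not Melodi’s actual SDK or API schema; the point is only the shape of the data: a project contains threads, a thread holds role-tagged messages plus optional feedback, and a user carries metadata.

```python
# Illustrative model of the Melodi data hierarchy described above.
# Names and fields are hypothetical, not the real API schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Message:
    role: str          # e.g. "user", "assistant", or a custom role like "rag-lookup"
    content: str

@dataclass
class User:
    external_id: str
    name: Optional[str] = None
    email: Optional[str] = None
    company: Optional[str] = None

@dataclass
class Feedback:
    sentiment: str     # e.g. "positive" or "negative"
    comment: Optional[str] = None

@dataclass
class Thread:
    project: str                                      # the project this thread belongs to
    user: Optional[User] = None
    messages: list[Message] = field(default_factory=list)
    feedback: list[Feedback] = field(default_factory=list)

# Example: one thread from a hypothetical weather-assistant project
thread = Thread(
    project="weather-assistant",
    user=User(external_id="u_123", name="Ada", email="ada@example.com", company="Example Co"),
    messages=[
        Message(role="user", content="Will it rain in Boston tomorrow?"),
        Message(role="rag-lookup", content="forecast: light rain, high 58F"),
        Message(role="assistant", content="Yes, light rain is expected in Boston tomorrow."),
    ],
    feedback=[Feedback(sentiment="positive", comment="Accurate and quick.")],
)
```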