Lupr is currently under development.

AI Improvements

Lupr uses AI to analyze your feedback and suggest actionable improvements ranked by impact.

What are improvements?

When you collect feedback from reviewers, individual responses are useful — but patterns across many responses are where the real insights live. AI Improvements reads all feedback for a project, identifies recurring themes and pain points, and suggests structured improvements you can act on.

Each improvement includes

Title & description

A clear, concise name and a detailed explanation of the suggested change.

Category

Classified as UX, Feature, Performance, Bug, Content, Accessibility, or other relevant types.

Impact score

A numeric score estimating how much the improvement would benefit users, based on feedback volume and sentiment, so suggestions can be ranked against each other.

Evidence from feedback

Direct references to the feedback entries that support this improvement, so you can trace it back to real users.

Note: Improvements are suggestions, not directives. You decide which ones to accept, dismiss, or dispatch to your issue tracker.

Generating improvements

Improvements can be generated in two ways — manually when you want them, or automatically on a schedule.

Manual generation

Click the "Generate Improvements" button on your project detail page. AI analyzes all current feedback and produces a new batch of suggestions.

Automatic via cron

The /api/cron/generate-improvements endpoint runs on a schedule, automatically generating improvements for projects with new feedback since the last run.
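A minimal sketch of the selection step a scheduled run like this might perform. The `ProjectState` shape and the `projectsWithNewFeedback` helper are illustrative assumptions, not Lupr's actual data model: the idea is simply to regenerate only for projects whose newest feedback postdates their last run.

```typescript
// Hypothetical shape: when feedback last arrived vs. when improvements
// were last generated for a project (epoch milliseconds, 0 = never run).
interface ProjectState {
  id: string;
  lastFeedbackAt: number;
  lastGeneratedAt: number;
}

// A project qualifies for the cron run when feedback has arrived
// since the last generation run.
function projectsWithNewFeedback(projects: ProjectState[]): string[] {
  return projects
    .filter((p) => p.lastFeedbackAt > p.lastGeneratedAt)
    .map((p) => p.id);
}
```

Skipping unchanged projects keeps scheduled runs cheap, since each generation involves a model call over the project's full feedback.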

How generation works

Lupr sends all project feedback to Anthropic Claude with structured prompts designed to extract patterns, group related concerns, and produce actionable suggestions. The model returns structured JSON with categories, impact scores, and evidence references, which Lupr parses and stores as improvement records.
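The parse-and-validate step could look something like the sketch below. The field names (`title`, `category`, `impactScore`, `evidenceIds`) are assumptions for illustration, not Lupr's real schema; the point is that model output is checked before it is stored as improvement records.

```typescript
// Assumed improvement record shape, for illustration only.
interface Improvement {
  title: string;
  description: string;
  category: string;
  impactScore: number;
  evidenceIds: string[];
}

// Validate the model's structured JSON before storing anything.
function parseImprovements(raw: string): Improvement[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error("expected a JSON array");
  return data.map((item, i) => {
    if (typeof item.title !== "string" || typeof item.impactScore !== "number") {
      throw new Error(`malformed improvement at index ${i}`);
    }
    return {
      title: item.title,
      description: String(item.description ?? ""),
      category: String(item.category ?? "Other"),
      impactScore: item.impactScore,
      evidenceIds: Array.isArray(item.evidenceIds) ? item.evidenceIds.map(String) : [],
    };
  });
}
```

Rejecting malformed entries up front means a partially garbled model response never produces half-filled records.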

Reviewing improvements

Improvements appear in the Improvements tab on your project page. Each improvement is displayed as a card with all the context you need to make a decision.

What you see on each card

Title & description

The suggested improvement with a full explanation of what to change and why.

Category badge

A color-coded badge showing the category — UX, Feature, Performance, and more.

Impact score

A numerical score so you can prioritize high-impact changes first.

Supporting feedback

Links to the specific feedback entries that led to this suggestion.
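Ordering by that score is straightforward; one possible sketch (the card shape here is assumed) sorts descending by impact, with a stable tiebreak so the list doesn't reshuffle between renders:

```typescript
// Highest impact first; ties broken alphabetically for a stable display order.
function rankCards<T extends { title: string; impactScore: number }>(cards: T[]): T[] {
  return [...cards].sort(
    (a, b) => b.impactScore - a.impactScore || a.title.localeCompare(b.title),
  );
}
```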

Actions

Accept

Marks the improvement as actionable and keeps it on your active list.

Dismiss

Removes the improvement from your active list. Dismissed improvements are archived, not deleted.

Dispatch

Send the improvement directly to GitHub or Linear as a new issue, pre-filled with all the details.

Accepted improvements can feed into the Core Loop to automate the path from suggestion to pull request.
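The three actions can be read as status transitions. This is an illustrative sketch only; the status names are assumptions, not Lupr's actual model, and it encodes the rules above: dismissal archives rather than deletes, and a dispatched improvement stays dispatched.

```typescript
type Status = "pending" | "accepted" | "dismissed" | "dispatched";
type Action = "accept" | "dismiss" | "dispatch";

function applyAction(status: Status, action: Action): Status {
  switch (action) {
    case "accept":
      // Only a pending improvement moves to accepted.
      return status === "pending" ? "accepted" : status;
    case "dismiss":
      // Archived, not deleted; an already-dispatched item is left alone.
      return status === "dispatched" ? status : "dismissed";
    case "dispatch":
      // Dispatching sends the improvement to the tracker regardless of prior state.
      return "dispatched";
  }
}
```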

Dispatching improvements

Once you've identified high-value improvements, dispatch them to your issue tracker to turn suggestions into tracked work.

Dispatch to GitHub

Creates a GitHub issue in your connected repository with the improvement title, description, category, impact score, and supporting evidence.

Dispatch to Linear

Creates a Linear issue in your connected workspace with full improvement details and linked feedback.
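Either dispatch path boils down to assembling an issue payload from the improvement's fields. A sketch of that assembly step, with an assumed `Improvement` shape (this is not Lupr's actual dispatch code, and a real integration would then POST the payload to the tracker's API):

```typescript
// Assumed improvement shape, for illustration only.
interface Improvement {
  title: string;
  description: string;
  category: string;
  impactScore: number;
  evidenceIds: string[];
}

// Pre-fill an issue with the title, description, category, impact
// score, and supporting evidence, as described above.
function toIssue(imp: Improvement): { title: string; body: string } {
  const evidence = imp.evidenceIds.map((id) => `- feedback ${id}`).join("\n");
  return {
    title: imp.title,
    body: [
      imp.description,
      `**Category:** ${imp.category}`,
      `**Impact score:** ${imp.impactScore}`,
      `**Supporting feedback:**\n${evidence}`,
    ].join("\n\n"),
  };
}
```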

Auto-dispatch via rules

Configure rules that automatically dispatch improvements matching certain criteria — for example, automatically send all high-impact improvements to GitHub.
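A rule check like that might be sketched as follows; the `DispatchRule` shape, its thresholds, and the target names are illustrative assumptions rather than Lupr's actual rule format.

```typescript
// Example rule shape: dispatch improvements at or above a score threshold,
// optionally limited to certain categories.
interface DispatchRule {
  minImpact: number;
  categories?: string[]; // empty or omitted = any category
  target: "github" | "linear";
}

function matchesRule(
  rule: DispatchRule,
  imp: { category: string; impactScore: number },
): boolean {
  if (imp.impactScore < rule.minImpact) return false;
  if (rule.categories && rule.categories.length > 0 && !rule.categories.includes(imp.category)) {
    return false;
  }
  return true;
}
```

Under this sketch, "send all high-impact improvements to GitHub" is simply `{ minImpact: 8, target: "github" }` with no category filter.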

To set up GitHub, Linear, or other integrations, see the Integrations page.

How it works (technical)

Under the hood, Lupr uses the Anthropic Claude model to analyze feedback. All feedback for a project is collected and sent to Claude with structured prompts that instruct the model to identify recurring themes, categorize issues, estimate impact, and cite specific feedback as evidence. The model returns structured JSON, which Lupr validates, parses, and stores as improvement records tied to your project.

Every improvement is grounded in actual feedback evidence — the AI doesn't fabricate patterns. Each suggestion traces back to one or more real feedback entries, so you can always verify why a particular improvement was suggested.
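That grounding property is mechanically checkable. A minimal sketch, assuming evidence is stored as feedback-entry IDs (the names here are illustrative): a suggestion passes only if it cites at least one entry and every cited entry actually exists.

```typescript
// A suggestion is grounded when it cites at least one feedback entry
// and every cited ID refers to a real entry in the project's feedback.
function isGrounded(evidenceIds: string[], feedbackIds: Set<string>): boolean {
  return evidenceIds.length > 0 && evidenceIds.every((id) => feedbackIds.has(id));
}
```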

Improvements are only as good as your feedback. More diverse feedback from different perspectives leads to better, more nuanced suggestions. Invite reviewers with varied backgrounds to get the most out of AI Improvements.

Next steps

Now that you understand how AI Improvements work, explore these related features: