📊 issue-summarizer-mcp


MCP server for AI-powered issue summarization and prioritization - featuring LLM integration, trend analysis, label suggestions, project roadmap generation, and a comprehensive test suite.

What's New in v2.0.0

  • Complete Refactor - Modular architecture with 72% code reduction
  • 🧪 19 Unit Tests - Full test coverage with Vitest
  • 🎯 100% Type Safety - Eliminated all any types
  • 📚 Complete JSDoc - Comprehensive API documentation
  • 📦 MCP SDK 1.20.0 - Latest SDK with improved capabilities
  • 🗺️ Roadmap Generation - New AI-powered roadmap tool

Features

Core Features

  • 📝 summarize_issue - Summarize a single issue using AI to extract key insights, category, and suggested priority
  • 📊 summarize_all_open - Summarize all open issues in a repository with AI-generated summaries for each
  • 🎯 prioritize_issues - Prioritize issues based on AI content analysis with scores and reasoning
  • 🏷️ suggest_labels - Suggest appropriate labels for an issue based on AI analysis of content
  • 📈 analyze_trends - Analyze trends and patterns across multiple issues using AI to identify common topics and recurring problems
  • 🗺️ generate_roadmap - Generate a project roadmap based on issues with immediate, short-term, and long-term priorities

Advanced Features

  • 🤖 AI-powered categorization and priority assignment
  • 📊 Trend analysis (recurring topics, common problems)
  • 🔍 Multi-issue batch processing
  • 🎯 Score-based prioritization (0-100)
  • 📋 Structured JSON output for all operations
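
For illustration, a prioritization result might look like the following (the field names are illustrative, not a guaranteed output schema):

{
  "prioritized": [
    {"id": 1, "score": 92, "priority": "high", "reasoning": "Crash affects all users"},
    {"id": 2, "score": 35, "priority": "low", "reasoning": "Nice-to-have enhancement"}
  ]
}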

Installation

npm install
npm run build
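
To run the 19-test Vitest suite (assuming the package wires Vitest to the standard test script):

npm test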

Configuration

LLM environment variables:

  • LLM_API_URL - URL of the LLM API (Ollama, OpenAI, etc.); required
  • LLM_MODEL - Model name (default: llama3.2)
  • LLM_API_KEY - API key, if the provider requires one

Optional platform integration:

  • FORGEJO_URL - Forgejo instance URL
  • FORGEJO_TOKEN - Forgejo API token
  • GITHUB_TOKEN - GitHub API token
  • GITLAB_TOKEN - GitLab API token
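
To register the server with an MCP client such as Claude Desktop, an entry like the following can be added to the client's configuration (a minimal sketch; the build output path is an assumption):

{
  "mcpServers": {
    "issue-summarizer": {
      "command": "node",
      "args": ["/path/to/issue-summarizer-mcp/dist/index.js"],
      "env": {
        "LLM_API_URL": "http://localhost:11434",
        "LLM_MODEL": "llama3.2"
      }
    }
  }
}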

Usage

export LLM_API_URL="http://localhost:11434"
export LLM_MODEL="llama3.2"
npm start

Examples

Summarize an issue

{
  "tool": "summarize_issue",
  "arguments": {
    "issueText": "Bug: Application crashes when uploading large files...",
    "includeContext": true
  }
}

Prioritize issues

{
  "tool": "prioritize_issues",
  "arguments": {
    "issues": [
      {"id": 1, "title": "Critical bug", "body": "..."},
      {"id": 2, "title": "Feature request", "body": "..."}
    ]
  }
}

Suggest labels

{
  "tool": "suggest_labels",
  "arguments": {
    "issueTitle": "Add dark mode support",
    "issueBody": "Users have requested dark mode..."
  }
}
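
Generate a roadmap

The remaining tools (summarize_all_open, analyze_trends, generate_roadmap) follow the same call pattern. A sketch for the roadmap tool (the argument shape is an assumption, mirroring prioritize_issues):

{
  "tool": "generate_roadmap",
  "arguments": {
    "issues": [
      {"id": 1, "title": "Critical bug", "body": "..."},
      {"id": 2, "title": "Feature request", "body": "..."}
    ]
  }
}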

LLM Integration

This MCP server works with:

  • Ollama (local): Set LLM_API_URL=http://localhost:11434
  • OpenAI: Set LLM_API_URL=https://api.openai.com/v1 and provide API key
  • Any custom LLM API that follows the OpenAI-compatible format
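
For example, switching from Ollama to OpenAI only changes the environment (the model name here is illustrative):

export LLM_API_URL="https://api.openai.com/v1"
export LLM_API_KEY="sk-..."
export LLM_MODEL="gpt-4o-mini"
npm start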

License

MIT