- Social engagement quality (real attention vs low-quality noise)
- On-chain price behavior (pump, durability, drawdowns, liquidity)
Market reality
Crypto discovery is social-first. Capital rotates on posts, not PDFs. But verification is fragmented:
- screenshots
- cherry-picked windows
- deleted misses / highlighted hits
- “I called it earlier” narratives
Target users
1) Apers and active traders
Goal: filter noise fast and avoid getting farmed. What they want: a simple signal + proof they can trust. They value:
- medals for quick scanning
- chart behavior after the timestamp
- downside and liquidity context
2) KOLs and communities
Goal: prove performance and build portable credibility. What they want: receipts that travel across feeds and chats. They value:
- shareable cards
- profile-level performance over time
- a neutral standard they can point to
3) Teams, agencies, and DAOs
Goal: measure influencer impact for launches, allocations, and marketing. What they want: comparable reporting + justification for spend. They value:
- normalized scoring across creators
- history, exports, and reporting
- transparency for internal decision-making
Value proposition
Fast verification of any call
Paste a link → get a scorecard:
- Social score
- Chart score
- Overall score + medal
- ROI/drawdown context and liquidity at call time
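As an illustration of how a scorecard like this could combine its parts, here is a minimal sketch in TypeScript. The weights, thresholds, and function names are hypothetical, not OpenKol's actual scoring:

```typescript
// Hypothetical sketch: combine a social score and a chart score (0-100 each)
// into an overall score and a medal. Weights and medal cutoffs are
// illustrative assumptions, not OpenKol's documented values.
type Medal = "gold" | "silver" | "bronze" | "none";

function overallScore(social: number, chart: number, chartWeight = 0.6): number {
  // Weight post-call chart behavior more heavily than social buzz.
  return Math.round(chart * chartWeight + social * (1 - chartWeight));
}

function medalFor(overall: number): Medal {
  if (overall >= 85) return "gold";
  if (overall >= 70) return "silver";
  if (overall >= 50) return "bronze";
  return "none";
}
```

Keeping the combiner a pure, deterministic function is what makes a score reproducible from the same inputs.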
Normalized scoring across KOLs
OpenKol is built to avoid “big account auto-wins”:
- metrics are normalized to baselines
- profile-level scoring rewards consistency, not one-offs
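One way baseline normalization can work is to score a call's engagement relative to the account's own typical post, so a large account does not win by follower count alone. This sketch is an assumption for illustration; the function name, the 4x saturation point, and the 0-100 mapping are invented here:

```typescript
// Hypothetical baseline normalization: compare a call's engagement to the
// account's usual engagement per post. The 4x saturation cap is an
// illustrative assumption.
function normalizedEngagement(callEngagement: number, baselineEngagement: number): number {
  if (baselineEngagement <= 0) return 0;
  // Ratio > 1 means this call outperformed the account's typical post.
  const ratio = callEngagement / baselineEngagement;
  // Map onto 0-100, saturating at 4x the baseline so outliers don't dominate.
  return Math.round(Math.min(ratio / 4, 1) * 100);
}
```

Under this scheme a 10k-follower account that 3x's its usual engagement outscores a 1M-follower account posting at its normal level.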
Shareable, verifiable proofs
Every analysis ships as:
- a permalink
- an OG card that previews cleanly on X/Telegram
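For the permalink to work as a proof, the same call should always resolve to the same URL. A minimal sketch of one way to do that, by hashing a canonical form of the analyzed call (field names and slug length are assumptions, not OpenKol's scheme):

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: derive a stable permalink slug from the analyzed call,
// so re-analyzing the same post + token pair yields the same proof URL.
function permalinkSlug(postUrl: string, tokenAddress: string): string {
  // Canonicalize before hashing so trivial case differences don't fork proofs.
  const canonical = `${postUrl.toLowerCase()}|${tokenAddress.toLowerCase()}`;
  return createHash("sha256").update(canonical).digest("hex").slice(0, 12);
}
```

A content-derived slug also means a proof link can't be silently swapped to point at a different call.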
Distribution strategy
Built-in virality
Every analysis is designed to be shared. KOLs flex good calls. Traders share receipts. The product rides the feed.
Community-first
- X threads and weekly leaderboards
- Telegram distribution (bot-style workflows)
- Co-marketing with creators who want verified performance tracking
Integrations and partnerships
Embed OpenKol where research happens:
- trading tools and portfolio trackers
- launchpads, incubators, KOL marketplaces
- discovery dashboards and analytics platforms
Monetization (planned)
See: Revenue model
- Freemium access for discovery
- Pro subscriptions for power users
- Team plans for reporting and partner selection
- API plans for developers and data partners
- Optional sponsored placements only if they never influence scoring
Cost structure
- Data: market/DEX analytics, social metadata, storage
- Infrastructure: Next.js app + API routes, Postgres, Redis, workers/OG rendering
- Operations: monitoring, abuse protection, support, moderation for spam
Moat and defensibility
Compounding dataset
Every analysis adds to a growing performance graph of:
- KOL profiles
- tokens
- outcome distributions
Reputation through transparency
The scoring is documented and deterministic. Consistency builds trust, and trust becomes a moat.
Ecosystem integration
Badges, widgets, and APIs embedded into other tools create switching costs. The goal is to become the default layer people use before aping.
Continuous refinement
Scoring improves iteratively:
- better social quality detection
- better risk flags (liquidity, drawdowns, thin pools)
- better edge-case handling (symbol ambiguity, abnormal market windows)
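Two of the risk flags listed above can be sketched concretely: maximum drawdown after the call, and a thin-pool warning. The thresholds (50% drawdown, $25k liquidity) and names are illustrative assumptions:

```typescript
// Hypothetical risk-flag sketch. Thresholds are illustrative, not OpenKol's.
function maxDrawdown(prices: number[]): number {
  let peak = -Infinity;
  let worst = 0;
  for (const p of prices) {
    peak = Math.max(peak, p);
    // Largest percentage drop from any post-call peak so far.
    worst = Math.max(worst, (peak - p) / peak);
  }
  return worst; // e.g. 0.6 means a 60% drop from the peak
}

function riskFlags(prices: number[], liquidityUsd: number): string[] {
  const flags: string[] = [];
  if (maxDrawdown(prices) > 0.5) flags.push("severe-drawdown");
  if (liquidityUsd < 25_000) flags.push("thin-pool");
  return flags;
}
```

Flags like these give the downside context a medal alone can't: a call can pump hard and still round-trip holders.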