How Sortino rankings work
Sortino ranks agencies using a mix of public signals and verified performance evidence. The goal is transparency: scores are explainable, freshness-aware, and designed to resist gaming.
What we measure
Sortino computes an overall score as a weighted combination of category scores; the MVP uses a fixed set of categories.
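The roll-up can be sketched as a normalized weighted sum. The category names and weight values below are hypothetical placeholders for illustration, not Sortino's actual MVP categories or weights.

```python
# Sketch of the overall-score roll-up, assuming 0-100 category scores
# and per-category weights; names and values here are hypothetical.

def overall_score(category_scores: dict[str, float],
                  weights: dict[str, float]) -> float:
    """Weighted combination of category scores, normalized by total weight."""
    total_weight = sum(weights.values())
    return sum(
        weights[c] * category_scores.get(c, 0.0) for c in weights
    ) / total_weight

# Hypothetical example
weights = {"results": 0.4, "reputation": 0.3, "activity": 0.3}
scores = {"results": 80.0, "reputation": 60.0, "activity": 70.0}
print(round(overall_score(scores, weights), 1))  # 71.0
```

Missing categories default to zero, so an agency with no signal in a category is penalized rather than silently excluded.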
Data sources
Public signals are drawn from open sources (e.g., awards listings, review pages, publishing activity, social profiles, and directional demand estimates).
Agency-submitted results are optional. Agencies can submit anonymized case studies (industry theme + channel + timeframe + baseline → outcome). Supporting evidence can be uploaded in redacted form.
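The case-study shape described above (industry theme + channel + timeframe + baseline → outcome) could look like the following. The field names are illustrative assumptions, not Sortino's actual submission schema.

```python
# Illustrative shape of an anonymized case-study submission;
# field names are hypothetical, not Sortino's actual schema.
case_study = {
    "industry_theme": "DTC e-commerce",   # anonymized: no client names
    "channel": "paid search",
    "timeframe_months": 6,
    "baseline": {"metric": "ROAS", "value": 1.8},
    "outcome": {"metric": "ROAS", "value": 3.1},
    "evidence": "redacted-report.pdf",    # supporting docs, redacted form
}
```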
Some public signals are noisy or estimated; Sortino treats those as directional and weights them accordingly.
Verification & confidence
Each datapoint is weighted by a confidence score based on reliability, freshness, verification, and anti-gaming checks.
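A minimal sketch of that weighting, assuming each datapoint carries 0-1 multipliers for reliability and freshness plus a verification flag; the field names and the 0.5 discount for unverified data are illustrative assumptions, not Sortino's production logic.

```python
# Sketch of confidence-weighted aggregation; schema and the
# unverified-datapoint discount are hypothetical.
from dataclasses import dataclass

@dataclass
class Datapoint:
    value: float        # normalized signal value (0-100)
    reliability: float  # 0-1: how trustworthy the source is
    freshness: float    # 0-1: decays as the datapoint ages
    verified: bool      # passed verification / anti-gaming checks

def confidence(dp: Datapoint) -> float:
    # Unverified datapoints keep some weight but are discounted.
    verification_factor = 1.0 if dp.verified else 0.5
    return dp.reliability * dp.freshness * verification_factor

def weighted_signal(datapoints: list[Datapoint]) -> float:
    """Confidence-weighted mean of datapoint values."""
    total = sum(confidence(dp) for dp in datapoints)
    if total == 0:
        return 0.0
    return sum(confidence(dp) * dp.value for dp in datapoints) / total
```

With this shape, a fresh verified datapoint naturally dominates a stale unverified one without either being dropped outright.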
Freshness & updates
Sortino recalculates rankings regularly and uses freshness-weighted scoring so recent signals matter more than stale ones.
To reduce noise, Sortino emphasizes trends and applies short-window smoothing so that minor day-to-day changes do not cause large rank swings.
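One common way to implement both ideas is exponential age decay plus a short moving average. The 90-day half-life and 7-day window below are illustrative assumptions, not Sortino's actual parameters.

```python
# Sketch of freshness weighting and short-window smoothing;
# half-life and window size are hypothetical.

def freshness_weight(age_days: float, half_life_days: float = 90.0) -> float:
    """A signal's weight halves every `half_life_days`."""
    return 0.5 ** (age_days / half_life_days)

def smoothed(daily_scores: list[float], window: int = 7) -> float:
    """Average the most recent `window` daily scores to damp rank swings."""
    recent = daily_scores[-window:]
    return sum(recent) / len(recent)
```

Under these parameters, a 90-day-old signal carries half the weight of a fresh one, and a single-day jump moves the smoothed score by at most one seventh of its size.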
MVP weights (v1)
Category weights are fixed in the MVP and may evolve as coverage improves.
Anti-gaming safeguards
Safeguards are designed to reduce manipulation and reward verifiable performance:
- Diminishing returns on easily inflated metrics (e.g., follower counts, post volume)
- Confidence penalties for abnormal spikes (e.g., sudden review surges or follower jumps)
- Reduced weight for unverified outlier performance claims
- Audit trails for verification and disputes
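The first two safeguards above can be sketched as a log-scale credit curve and a growth-ratio check. The log base and the "more than doubled" threshold are illustrative assumptions, not Sortino's production values.

```python
# Sketch of two anti-gaming safeguards; curve shape and
# thresholds are hypothetical.
import math

def diminishing_credit(count: float) -> float:
    """Log-scale credit: 10x the followers earns ~1 extra unit, not 10x."""
    return math.log10(1 + count)

def spike_penalty(previous: float, current: float,
                  max_growth: float = 2.0) -> float:
    """Confidence multiplier < 1 when a metric more than doubles
    between snapshots (e.g., a sudden review surge)."""
    if previous <= 0 or current <= previous * max_growth:
        return 1.0
    return (previous * max_growth) / current
```

For example, a jump from 100 to 400 reviews between snapshots would halve that datapoint's confidence rather than quadruple its contribution.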