Executive summary
Rankiteo blends AI, big-data collection, and a transparent scoring method to turn noisy disclosures into a clean early read for cyber underwriters: a single CyberScore, a short incident story, and peer context you can defend in notes, referrals, and committees.
What powers Rankiteo
Big data, curated for cyber underwriting
Rankiteo aggregates public disclosures and signals at scale, then standardizes them into company pages (score, history, benchmarks). The objective is not “more data,” but clean, comparable facts an underwriter can act on quickly. This method is intended for triage, benchmarking, and temporal monitoring—not direct breach prediction.
AI that classifies what matters
AI pipelines normalize messy real-world events into underwriting-friendly categories—for example, ransomware, data breach, cyber attack, and vulnerability—with sector-aware base points that make the categories comparable.
A proven, transparent methodology
Under the hood, Rankiteo converts evidence into a score on a 100–1000 scale with an explicit, auditable composition (sketched in code after the list below):
Category base points & sector multipliers.
Ransomware (100), data breach (60), cyber attack (20), vulnerability (5); sector multipliers add domain judgment (for example, hospitals and utilities are weighted more heavily than retail).
Time decay by incident type.
Half-lives: ransomware & breach 3 years (1095 days); attack 2 years (730 days); vulnerability 18 months (540 days)—so serious events persist longer, while transient ones fade faster.
Ransomware recurrence.
A bounded multiplier escalates clustered ransomware in a controlled way—stronger signal without runaway inflation.
Aggregation & cap.
Time-decayed incident penalties sum into an entity-level exposure, with a global cap that keeps scores interpretable on the 100–1000 scale.
Scale-aware fairness.
Two size effects apply: a market-cap baseline (typically in the 750–850 band for very large, clean firms) and a dampening factor that attenuates penalties for very large firms without masking genuine exposure. The final score is the baseline minus size-adjusted penalties plus an industry adjustment, clipped to [100, 1000].
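A minimal sketch, in Python, of how these pieces could compose. The category base points, half-lives, the 100–1000 clipping range, and the shape of the final formula (baseline minus size-adjusted penalties plus industry adjustment) come from this section; the sector multipliers, recurrence curve, global cap value, baseline, and dampening factor shown are illustrative placeholders, not Rankiteo's published parameters.

```python
from dataclasses import dataclass

# Documented constants from this section.
BASE_POINTS = {"ransomware": 100, "data_breach": 60, "cyber_attack": 20, "vulnerability": 5}
HALF_LIFE_DAYS = {"ransomware": 1095, "data_breach": 1095, "cyber_attack": 730, "vulnerability": 540}
SCORE_MIN, SCORE_MAX = 100, 1000

# Illustrative placeholders: the real sector multipliers, recurrence curve,
# global cap, baseline, and dampening factor are not published in this section.
SECTOR_MULTIPLIER = {"healthcare": 1.5, "utilities": 1.4, "retail": 1.0}
GLOBAL_CAP = 600.0

@dataclass
class Incident:
    category: str    # one of the BASE_POINTS keys
    age_days: float  # days since the incident
    sector: str      # company's sector

def decay_weight(category: str, age_days: float) -> float:
    """Half-life decay: an incident's weight halves every HALF_LIFE_DAYS[category]."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS[category])

def recurrence_multiplier(n_ransomware: int) -> float:
    """Bounded escalation for repeated ransomware (illustrative curve, capped at 2x)."""
    return min(1.0 + 0.25 * max(n_ransomware - 1, 0), 2.0)

def cyberscore(incidents: list[Incident], baseline: float, size_dampening: float,
               industry_adjustment: float) -> float:
    """Baseline minus size-adjusted, time-decayed penalties plus industry adjustment, clipped."""
    n_ransomware = sum(1 for i in incidents if i.category == "ransomware")
    penalties = 0.0
    for inc in incidents:
        penalty = (BASE_POINTS[inc.category]
                   * SECTOR_MULTIPLIER.get(inc.sector, 1.0)
                   * decay_weight(inc.category, inc.age_days))
        if inc.category == "ransomware":
            penalty *= recurrence_multiplier(n_ransomware)
        penalties += penalty
    penalties = min(penalties, GLOBAL_CAP)         # global cap keeps exposure bounded
    score = baseline - size_dampening * penalties + industry_adjustment
    return max(SCORE_MIN, min(SCORE_MAX, score))   # clip to the 100-1000 scale

# Hypothetical example: a large hospital with one 1-year-old ransomware incident.
print(cyberscore([Incident("ransomware", 365, "healthcare")],
                 baseline=800, size_dampening=0.8, industry_adjustment=0.0))
```

Swapping the placeholder constants for the real calibration would change the numbers, not the structure: every term in the final score can still be read off line by line.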
Why this helps underwriters
Clear triage—without a black box
One line to start: a CyberScore that reflects what happened and how recently, not a static questionnaire.
A short story: an incident timeline that shows when the picture changed.
Fair context: benchmarks to see if the company is an outlier or aligned with peers.
Recency and severity handled the way you expect
High-impact categories carry more weight; time decay halves an event’s influence with each elapsed category half-life (for example, 3 years for ransomware and breaches), aligning with how reputational and regulatory effects persist (a worked example follows below).
Recurrence of ransomware increases concern in a controlled, capped way, so clustered events strengthen the signal without distorting the overall score.
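As a worked example, assuming the standard exponential half-life form (weight = 0.5^(age ÷ half-life), consistent with the half-lives above): a ransomware incident retains about 0.79 of its base weight after one year, exactly 0.50 at three years, and 0.25 at six years, while a vulnerability at the same ages is already down to roughly 0.63, 0.25, and 0.06.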
Consistency across a portfolio
Additive aggregation and a global cap prevent disclosure-rich giants from swamping the scale, while industry normalization applies only to clean or near-clean firms so realized performance dominates once incidents appear.
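The conditional industry offset can be sketched as a simple gate; the "material category" set, the one-year recency window, and the offset size below are illustrative assumptions rather than the published rule.

```python
# Illustrative gate: grant the industry offset only to clean or near-clean histories.
MATERIAL_CATEGORIES = {"ransomware", "data_breach"}   # assumed definition of "material"

def industry_offset(base_offset: float, incidents: list[tuple[str, float]],
                    recent_window_days: float = 365.0) -> float:
    """incidents is a list of (category, age_in_days) pairs.

    Any material incident, or any incident inside the recent window, withdraws
    the offset entirely so realized performance, not the sector prior, drives
    the score once incidents appear.
    """
    for category, age_days in incidents:
        if category in MATERIAL_CATEGORIES or age_days <= recent_window_days:
            return 0.0
    return base_offset

print(industry_offset(25.0, []))                        # clean history: offset granted
print(industry_offset(25.0, [("cyber_attack", 90.0)]))  # recent incident: offset withdrawn
```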
How it fits your workflow
Intake — Open a company page and note three lines: score and direction, two key moments from the history, and peer position. That’s your calm early view.
First call — Ask focused questions anchored in the most recent events and peer gaps—no fishing expeditions.
Decisioning — Tie rationale to baseline (size), time–severity penalties, and industry context.
Documentation — Copy the three-line summary; attach a lightweight export.
(Optional) Monitoring — Watch for notable score drops or new incidents ahead of renewal; decay and recurrence make changes meaningful.
“Proven” means documented, stable, and explainable
Documented math, not magic. Category weights, half-lives, recurrence rules, aggregation, size effects, and clipping are all explicit.
Stable by design. Soft caps and smooth curves keep trajectories comparable across industries and firm sizes; light deterministic jitter avoids threshold clustering without changing ordering (a toy sketch follows this list).
Fair by context. Industry normalization grants a modest offset only for clean histories, then withdraws it when material or recent incidents appear—so reality overrides priors.
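One way to read "light deterministic jitter": derive a small, repeatable offset from the entity identifier so scores do not pile up on round thresholds, keeping the amplitude below any meaningful score difference so ordering is preserved. The hash-based function and the ±0.5 amplitude below are assumptions for illustration, not Rankiteo's implementation.

```python
import hashlib

def deterministic_jitter(entity_id: str, amplitude: float = 0.5) -> float:
    """Small, repeatable offset derived from the entity id (illustrative only).

    Hashing the id yields the same jitter on every run; keeping the amplitude
    below the scale's effective resolution spreads scores away from round
    thresholds without reordering entities.
    """
    digest = hashlib.sha256(entity_id.encode("utf-8")).digest()
    fraction = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
    return (fraction - 0.5) * 2 * amplitude                # uniform in [-amplitude, amplitude)

print(deterministic_jitter("acme-corp"))   # same value every time it is called
```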
Bottom line
AI organizes the facts, big data keeps the view current, and a transparent, battle-tested methodology converts everything into a score and story underwriters can trust. The result is faster triage, sharper questions, and decisions you can defend—every time.