How our trust scoring works — six signals, no black boxes
Every AI service indexed by Fabric receives a composite score from 0.00 to 5.00. The score is computed from six independently weighted signals, all derived from public data. No manual reviews. No paid placements. No provider involvement required. Here's what we measure, why we built it this way, and how publishers can improve their score.
The problem with single metrics
The MCP ecosystem is growing fast. Thousands of AI services, tools, and models are published every week across npm, PyPI, HuggingFace, and MCP registries. The question every agent developer faces: which ones can you trust?
Download count is the most common proxy for trust. It's also the easiest to game. The ClawHub malware incident proved it — the #1 most-downloaded skill on the platform was actively stealing SSH keys, crypto wallets, and browser passwords. It had thousands of installs. It looked legitimate. And it was malware.
GitHub stars, npm downloads, HuggingFace likes — individually, any of these can be inflated. You need multiple independent signals that are hard to fake simultaneously. That's the core design principle behind Fabric's Trust Index.
Six signals, weighted by risk
Each signal is scored independently on a 0–5 scale, then multiplied by its weight. The six weighted contributions sum to the final composite score. Security-related signals carry the highest weight because they directly affect safety. Adoption and reputation are supporting evidence, not primary drivers.
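A minimal sketch of that weighted sum in Python. The signal names come from the article; the weight values are placeholders invented for illustration, since the post does not publish Fabric's actual weights:

```python
# Hypothetical weights -- Fabric's real distribution is not disclosed here.
# Security-related signals are weighted highest, per the article.
WEIGHTS = {
    "vulnerability_safety": 0.30,
    "maintenance_activity": 0.20,
    "operational_health":   0.20,
    "adoption":             0.10,
    "transparency":         0.10,
    "publisher_trust":      0.10,
}

def composite_score(signals: dict[str, float]) -> float:
    """Each signal is an independent 0-5 score; the weighted sum is the
    composite. It stays within 0.00-5.00 because the weights sum to 1."""
    return round(sum(WEIGHTS[name] * min(max(score, 0.0), 5.0)
                     for name, score in signals.items()), 2)

composite_score({
    "vulnerability_safety": 4.5, "maintenance_activity": 4.0,
    "operational_health": 3.5, "adoption": 2.0,
    "transparency": 5.0, "publisher_trust": 3.0,
})  # -> 3.85
```

Because the weights sum to 1, no single inflated signal can dominate: a perfect adoption score moves the composite by at most its weight times five.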
Three tiers
Every composite score maps to one of three tiers that determine default behaviour for agents integrating with the Trust Index.
Additional modifiers can reduce a score below what the signals alone would produce. New services with limited history and inactive services both receive penalties. These exist to prevent trust-before-evidence — you shouldn't get a high score just because nothing bad has happened yet.
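The two modifiers described above can be sketched as a post-processing step. The threshold windows and penalty sizes below are invented for illustration; the article only states that new and inactive services are penalised, not by how much:

```python
def apply_modifiers(score: float, *, days_since_first_release: int,
                    days_since_last_activity: int) -> float:
    """Reduce a composite score below what the signals alone produce.

    All numeric values are hypothetical placeholders.
    """
    if days_since_first_release < 90:      # assumed "limited history" window
        score -= 0.5                       # trust requires evidence over time
    if days_since_last_activity > 180:     # assumed inactivity threshold
        score -= 0.75                      # stale projects lose trust
    return max(score, 0.0)                 # clamp at the scale floor
```

The key property is that modifiers only subtract: nothing here can lift a score above what the six signals support.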
How publishers can improve their score
Fabric scores are fully automated. You can't contact us to request a higher score, and there's no paid tier that boosts your ranking. The only way to improve your score is to improve the underlying signals. Here's what actually moves the needle:
Vulnerability & Safety
This is the fastest way to improve a low score. Patch known CVEs in your dependencies, keep your dependency tree up to date, and audit transitive dependencies regularly. Avoid unnecessary install scripts — they're a red flag in our scanner. If you maintain packages across registries, ensure consistent versioning.
Maintenance Activity
Ship regularly. Even small releases signal that someone is actively maintaining the project. Respond to issues — you don't have to fix everything, but acknowledging reports matters. Stale projects with months of silence lose points here regardless of how good the code is.
Operational Health
If your service exposes an API or endpoint, uptime and latency matter. Consistent behaviour matters even more. Services that return different responses to identical requests are penalised heavily; inconsistency is a common indicator that something is wrong.
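One simple way to detect that inconsistency pattern is to replay the same canonical request several times and compare response fingerprints. This is an illustrative sketch, not Fabric's actual probe:

```python
import hashlib

def consistency_check(responses: list[bytes]) -> bool:
    """True if every replayed response to an identical request is
    byte-identical. More than one distinct fingerprint is a red flag."""
    fingerprints = {hashlib.sha256(body).hexdigest() for body in responses}
    return len(fingerprints) <= 1

consistency_check([b'{"ok": true}', b'{"ok": true}'])   # consistent
consistency_check([b'{"ok": true}', b'<html>login'])    # inconsistent
```

A real probe would need to canonicalise legitimately variable fields (timestamps, request IDs) before hashing, otherwise honest services would be flagged too.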
Adoption
Genuine organic growth is what we're looking for. Inflated download counts don't help as much as you'd think — our scoring normalises by category and weights growth velocity more heavily than absolute numbers. Focus on making something people actually use, not on gaming metrics.
Transparency
Publish under a recognised open-source licence. Write a real README, not a placeholder. Add a SECURITY.md with a responsible disclosure policy. If you're building an API, publish schemas. If you're building a model, publish a model card. Every piece of documentation you add improves this signal.
Publisher Trust
Use a consistent identity across registries. Publish from an organisation account rather than an anonymous personal account. Verify your domain. Build a track record across multiple packages. This signal rewards longevity and consistency — there are no shortcuts.
Every signal rewards the same thing: genuine, sustained, transparent engineering practice. There's no trick to gaming the score because the score is designed to measure exactly what good publishers already do.
Data freshness
The scoring engine runs continuously against a growing index of 700+ services. Scores update automatically when conditions change — new version releases, new CVE disclosures, or shifts in operational behaviour all trigger re-evaluation. Some signals (like vulnerability data) refresh more aggressively than others (like publisher trust, which changes slowly by nature).
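That per-signal cadence can be expressed as a refresh schedule. The intervals below are assumptions chosen to mirror the point above — fast-moving vulnerability data refreshes often, slow-moving publisher trust rarely — not Fabric's real schedule:

```python
from datetime import datetime, timedelta

# Hypothetical refresh intervals per signal.
REFRESH_INTERVAL = {
    "vulnerability_safety": timedelta(hours=6),
    "operational_health":   timedelta(hours=12),
    "maintenance_activity": timedelta(days=1),
    "adoption":             timedelta(days=1),
    "transparency":         timedelta(days=7),
    "publisher_trust":      timedelta(days=7),
}

def signals_due(last_refreshed: dict[str, datetime], now: datetime) -> list[str]:
    """Return the signals whose data is older than their refresh interval."""
    return [name for name, refreshed_at in last_refreshed.items()
            if now - refreshed_at >= REFRESH_INTERVAL[name]]
```

An event like a new CVE disclosure would bypass this schedule entirely and force an immediate re-evaluation of the affected signal.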
Why we built it this way
The Trust Index wasn't built in a weekend. The scoring methodology is the result of months of research into how trust actually works in software supply chains — and how it breaks down.
We studied every major supply chain attack in the npm, PyPI, and MCP ecosystems over the past three years. We analysed how malware authors build credibility (inflated stars, cloned READMEs, typosquatted package names) and designed each signal to catch a different part of that playbook. The weight distribution was calibrated against hundreds of known-good and known-bad packages to minimise false positives without letting real threats through.
The six-signal architecture exists because no single metric is trustworthy on its own. Spoofing one signal is easy. Download counts can be inflated overnight. GitHub stars can be purchased. A README can be polished in an hour. But spoofing all six simultaneously — passing vulnerability scans, maintaining consistent uptime, building genuine commit history, publishing from a verified org, showing transparent documentation, and exhibiting normal network behaviour — is almost impossible.
That's the design principle. Not any single signal. The combination.
The scoring engine continues to evolve. We regularly re-evaluate signal weights, add new data sources, and adjust for emerging attack patterns. Every time a new supply chain attack surfaces, we ask: would our scoring have caught it? If the answer is no, we improve it.
Trust scores are fully automated from public sources. Providers cannot pay for a higher score, request manual review, or influence their ranking. The only way to improve a score is to improve the underlying signals.
Search the Trust Index
Look up the trust score for any AI service, model, or MCP skill. Free. No account required.
Search 700+ services →