    Published rubric — Methodology v1.1

    Vendor Trust Score Methodology

    The exact 0–100 rubric we use to score every research compound vendor on this site. Five criteria, explicit weights, explicit pass thresholds. No editorial overrides, no pay-to-win, no hidden inputs. The computation is source-visible on GitHub.

    Last reviewed: April 24, 2026 · Applies to every vendor on /vendor-trust-scorecard.

• Third-Party COA Verification — 30% weight
• Payment Transparency — 20% weight
• Shipping Coverage & Transparency — 15% weight
• Community Reviews — 20% weight
• Active Listing Depth — 15% weight

    How the score is calculated

    Each criterion is a binary pass / fail. A pass earns the full weight. The Trust Score is the sum of earned weights divided by 100 (the maximum possible), rounded to the nearest integer.

    score = round(
      (coa.pass        × 30
     + payment.pass    × 20
     + reviews.pass    × 20
     + shipping.pass   × 15
     + listings.pass   × 15)
      / 100 × 100
    )

    A vendor that passes COA (30) + Payment (20) + Listings (15) but fails Shipping (15) and Reviews (20) scores (30 + 20 + 15) / 100 = 65.
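The formula above can be sketched in a few lines of TypeScript. This is an illustrative sketch of the published formula, not the actual `useVendorTrustScore.ts` source; the `CriterionResults` shape and the function name are assumptions.

```typescript
// Illustrative sketch of the published formula — not the production
// useVendorTrustScore.ts source. Input shape is assumed.
interface CriterionResults {
  coa: boolean;
  payment: boolean;
  reviews: boolean;
  shipping: boolean;
  listings: boolean;
}

const WEIGHTS: Record<keyof CriterionResults, number> = {
  coa: 30,
  payment: 20,
  reviews: 20,
  shipping: 15,
  listings: 15,
};

function trustScore(results: CriterionResults): number {
  // Each pass earns its full weight; the five weights sum to 100.
  const earned = (Object.keys(WEIGHTS) as (keyof CriterionResults)[]).reduce(
    (sum, key) => sum + (results[key] ? WEIGHTS[key] : 0),
    0,
  );
  return Math.round((earned / 100) * 100);
}
```

With the worked example above — COA, Payment, and Listings passing — `trustScore` returns 65.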

    What the score tiers mean

    75 – 100

    High trust

    Passes at least four of five criteria, typically including COA verification. This is the tier to shortlist from — but the final pre-order check is still on you (COA inspection, reading recent reviews, matching price-per-mg against market floor).

    50 – 74

    Moderate trust

    Passes some but not all core criteria. Usable with caution — often small vendors that haven't built out one category yet (e.g., clean COAs but thin review volume). Expect to do extra diligence before a first order.

    0 – 49

    Low trust

    Fails the majority of criteria. We still list these vendors because suppression is a worse failure mode than disclosure — but we do not recommend them without a specific reason. The published gaps are the reason.

    The five criteria

    Each criterion below is one section of the rubric — its weight, its pass threshold, the data source we use, what we actually check, what we do not check, and why the signal exists at all.

    Third-Party COA Verification

    30% weight

    Does the vendor publish independent lab certificates, and do at least half of them clear our verification check?

    Pass threshold

    ≥50% of tested batches have a verified third-party Certificate of Analysis on file.

    Data source

    Our vendor_coa_scores table — each row represents one batch + one lab report, flagged verified=true only after our editors confirm the PDF is authentic, the lab is real, the compound matches, and the date is legible.
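The ≥50% threshold over that table can be sketched as follows. The `CoaRow` shape is an assumption standing in for a `vendor_coa_scores` row; the real schema may differ.

```typescript
// Sketch of the COA pass check, assuming each vendor_coa_scores row
// carries a `verified` boolean. Hypothetical field names.
interface CoaRow {
  batchId: string;
  verified: boolean;
}

function coaPass(rows: CoaRow[]): boolean {
  if (rows.length === 0) return false; // no batches on file → fail
  const verified = rows.filter((r) => r.verified).length;
  // Pass when at least half of tested batches have a verified COA.
  return verified / rows.length >= 0.5;
}
```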

    What we check

    • Lab name is a real, accredited facility (Janoshik, Gourmet, Konawai, Legoney, etc.) — not a brand-owned internal lab
    • Batch / lot number on the COA matches the vendor's batch label or website
    • Compound identity matches what's advertised (mass spectrometry peak or HPLC retention time within tolerance)
    • Test date is present and within a reasonable window of the listing
    • Purity figure is legible and numeric — not rounded marketing prose
    • PDF has not been cropped, compressed to illegibility, or obviously altered

    What we deliberately don't check

    • We do not run our own mass-spec on vendor product — we verify the paper trail, not the molecule
    • We do not require 99.9%+ purity to pass; our pass/fail check is about authenticity, not perfection
    • Self-tests by the vendor's own lab do not count, even if the vendor owns the testing equipment

    Why it matters

    The single biggest failure mode in research-compound sourcing is under-dosed, mis-identified, or contaminated product. A real third-party COA is the only cheap signal that the vendor is running an actual QC loop. We weight it 30% — the largest single factor — because no other criterion substitutes for it.

    Payment Transparency

    20% weight

    Does the vendor accept at least two traceable payment methods, including a card-type method?

    Pass threshold

    vendor.payment_methods lists ≥2 distinct methods AND at least one of them is card-based (Visa / Mastercard / credit / debit / Apple Pay).
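A minimal sketch of this check, assuming the method names stored in `vendor.payment_methods` are lowercase strings — the actual values in production may differ:

```typescript
// Sketch of the payment-transparency check. The method-name strings
// are assumptions about how vendors.payment_methods is encoded.
const CARD_METHODS = new Set([
  "visa",
  "mastercard",
  "credit",
  "debit",
  "apple_pay",
  "google_pay",
  "shop_pay",
]);

function paymentPass(methods: string[]): boolean {
  const distinct = new Set(methods.map((m) => m.toLowerCase()));
  const hasCard = [...distinct].some((m) => CARD_METHODS.has(m));
  // ≥2 distinct methods AND at least one card-based method.
  return distinct.size >= 2 && hasCard;
}
```

Note that a crypto-plus-wire vendor fails despite having two methods, because neither is card-based.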

    Data source

    Our vendors.payment_methods array — an editor-curated list based on the vendor's public checkout page. Reviewed on listing and re-verified at least quarterly.

    What we check

    • At least one consumer-facing card method (Visa, Mastercard, Apple Pay, Google Pay, Shop Pay)
    • At least one additional method (PayPal, ACH, wire, crypto) — redundancy matters when processors drop adult-adjacent categories
    • Methods are actually on the live checkout page, not just advertised on a marketing page

    What we deliberately don't check

    • Crypto-only vendors automatically fail this criterion — not because crypto is illegitimate, but because a crypto-only checkout removes the consumer's chargeback protection
    • We do not score the processor's fees, reserve policies, or reliability — only the checkout-surface redundancy the buyer actually sees

    Why it matters

Card payments carry dispute rights under 15 U.S.C. §1666 (Fair Credit Billing Act). Crypto does not. A vendor that can only take crypto has either been cut off by their processor (bad sign) or has chosen an opaque checkout (also a bad sign). Two independent methods also indicate that the vendor has done the business-continuity work to survive a processor shutdown — a concrete operational competence signal.

    Shipping Coverage & Transparency

    15% weight

    Does the vendor publish where they ship to?

    Pass threshold

    vendor.shipping_regions lists ≥1 region (e.g., 'US', 'EU', 'UK', 'AU', 'CA', 'Worldwide').

    Data source

    Our vendors.shipping_regions array — populated from the vendor's published shipping policy page. Re-verified quarterly against the live policy.

    What we check

    • At least one named region or country on the vendor's shipping policy page
    • Policy page is accessible without account creation
    • Region claims match reality when we place orders internally for editorial verification

    What we deliberately don't check

    • We do not grade speed (next-day vs. 5-7 business days) — that varies by carrier and region
    • We do not punish small vendors for shipping to one country only — US-only is a legitimate business decision
    • We do not factor in per-order shipping cost (that belongs in price-per-mg, not trust)

    Why it matters

    Vague, missing, or contradictory shipping info is a classic fly-by-night pattern. A real vendor tells you where they ship; a fraud tells you 'worldwide discreet' and nothing else. The 15% weight reflects that this is a hygiene signal rather than a product-quality signal — necessary but not sufficient.

    Community Reviews

    20% weight

    Does the vendor have a real review footprint, and does it average at least 3.5 out of 5?

    Pass threshold

    Average rating ≥3.5 across at least one review. Data source priority: (1) BodyHackGuide community reviews from the vendor_reviews_public view, (2) fallback to our editor-curated Reddit rating aggregate on vendors.rating when no community reviews exist yet.

    Data source

    vendor_reviews_public — a filtered view of on-platform reviews (logged-in users, one review per vendor, text+rating). When the community view is empty we fall back to vendors.rating, a curated weighted average from r/Peptides, r/MoreThanYouWant, and the long-tail Reddit megathreads, refreshed on vendor onboarding.
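The documented fallback order — community reviews first, curated Reddit aggregate second — can be sketched like this. Field names are assumptions for illustration, not the production schema:

```typescript
// Sketch of the review check with the documented fallback order.
// Hypothetical field names.
interface ReviewInputs {
  communityRatings: number[]; // from vendor_reviews_public
  curatedRedditRating: number | null; // fallback: vendors.rating
}

function reviewsPass(input: ReviewInputs): boolean {
  if (input.communityRatings.length > 0) {
    const avg =
      input.communityRatings.reduce((a, b) => a + b, 0) /
      input.communityRatings.length;
    return avg >= 3.5;
  }
  // No community reviews yet — fall back to the curated aggregate.
  // A vendor with neither fails regardless.
  return input.curatedRedditRating !== null && input.curatedRedditRating >= 3.5;
}
```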

    What we check

    • Average rating against a 3.5/5 threshold — loosely 'more positive than negative'
    • Presence of any review at all — a brand-new vendor with zero reviews fails this check regardless
    • In the curated fallback: a minimum of three independent Reddit threads across at least two subreddits, not a single cherry-picked thread

    What we deliberately don't check

    • We do not publish review counts as trust signals when the count is small enough to be gamed (under ~5)
    • We do not weight verified-purchase reviews higher on this page (they contribute via the COA check instead)
• We do not factor in Trustpilot or similar off-platform aggregators for vendors we suspect have been review-bombed in either direction

    Why it matters

    Community reviews are the single hardest signal to fake at scale, and the easiest signal for a legitimate vendor to accumulate. A vendor that has been shipping for 18+ months and still has no real reviews is selling through a different channel than retail — or isn't really shipping. 3.5/5 is set low on purpose: we want this check to be a floor, not a popularity contest.

    Active Listing Depth

    15% weight

    Does the vendor have at least three active, tracked listings on our compare engine?

    Pass threshold

    COUNT(listings WHERE vendor_id = vendor.id AND active = true) ≥ 3.

    Data source

    Our listings table — populated from vendor price sheets and maintained by our price-snapshot pipeline, which re-fetches daily. Only listings marked active=true count.
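The count in the pass threshold, including the null-price and active-flag rules below, can be sketched over an in-memory listings array. The `Listing` field names are assumptions for illustration:

```typescript
// Sketch of the listing-depth check. Field names (active, priceUsd)
// are hypothetical stand-ins for the listings table columns.
interface Listing {
  vendorId: string;
  active: boolean;
  priceUsd: number | null; // null prices do not count
}

function listingsPass(listings: Listing[], vendorId: string): boolean {
  const countable = listings.filter(
    (l) => l.vendorId === vendorId && l.active && l.priceUsd !== null,
  );
  return countable.length >= 3;
}
```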

    What we check

    • At least three active product listings tied to compounds in our catalog
    • Listings have prices attached (null prices do not count)
    • Listings have not been flagged inactive by our editors (sold-out / stealth-discontinued)

    What we deliberately don't check

    • We do not reward breadth-for-breadth's-sake — a boutique vendor with 10 well-priced listings passes easily; the 3-listing floor is deliberately low
    • We do not factor category diversity — a GLP-1-only vendor is valid if they do GLP-1 well

    Why it matters

    Single-SKU 'flash' operations are a dropshipping-fraud tell. A vendor that sustains three or more priced listings through our daily price-snapshot pipeline has a catalog, a pricing process, and enough operational depth that we can actually compare them on this site. This is the only criterion that measures vendor commitment over time rather than a single snapshot.

    Vendor red flags

    Patterns we've seen repeatedly in scam and low-quality vendors. Any one red flag does not automatically mean 'avoid' — but three or more in combination, with a low Trust Score, means the base rate of buyer harm is high enough that we would not order.

    No COA, or only self-tested results from the brand's internal lab

    Internal QC is better than nothing, but it is not a third-party signal. If the lab on the PDF has the same street address as the vendor, it does not count.

    Crypto-only checkout

    Zero chargeback rights. Vendors that can only accept crypto have usually been dropped by payment processors — a piece of information they are not volunteering.

    No reviews anywhere, or exclusively 5-star reviews

A wall of 5-star reviews with no middle ratings is a tell: real review distributions have a visible 3-4 star tail, and that natural shape is hard to fake convincingly.

    Vague or missing shipping policy

    If the vendor will not tell you where they ship to without you creating an account, assume they do not ship to your jurisdiction reliably.

    No customer-service contact beyond a generic form

    Real vendors publish a support email on a static page. A contact-form-only operation with no reply SLA is a red flag for every downstream problem — short shipments, wrong compound, missing COA.

    Prices 30%+ below the lowest credible competitor on the same compound

    The floor for legitimately-sourced GLP-1 and long-chain peptides is set by upstream API cost. A vendor pricing dramatically below floor is either underdosing, mislabeling peptide length, or running a churn-and-burn charge model.

    Brand-new domain with an aggressive marketing launch

    WHOIS < 90 days + paid Instagram placement + 'launch discount' + no COA history is the single most common shell-vendor pattern we see.

    Copy-paste COAs (same scan, different batch numbers)

    Visually identical PDFs across batches — same watermark position, same compression artifacts — are a tell that the vendor is reusing a single real report across many batches.

    Compound list that includes obvious scam flags

    'BPC-157 pills', 'oral semaglutide drops', 'transdermal TB-500 cream' — compound forms that aren't bioavailable or aren't what the molecule actually is. A vendor who will sell you a fake form of one compound is selling you questionable everything.

    No editorial response when mistakes are called out publicly

    A vendor's response to a bad COA or a shipping issue on Reddit is more predictive than any marketing page. Silence, deletion, or legal threats are all worse than a clean explanation.

    What we deliberately don't score

    The honest side of a rubric: the things we could measure but choose not to, because folding them in would degrade the signal. Scoring is a disciplined act of leaving things out.

    Vendor marketing claims about themselves

    Taglines like 'the most trusted', '99.9% pure', 'doctor-founded' never enter the formula. We score what we can verify, not what the vendor asserts.

    Price level

    Cheapest and most expensive vendors can both score 100. Price belongs on the compare pages and on price-per-mg comparisons, not in the trust calculation. Conflating price with trust would punish premium small-batch operations and reward race-to-the-bottom sellers.

    Affiliate relationship with BodyHackGuide

    Whether we earn a commission on a vendor's sale does not enter the formula. Adera, our in-house brand, is scored on the same criteria as every competitor (see /affiliate-disclosure).

    Social-media follower count

    A vendor with 100k Instagram followers and no COA scores lower than a no-follower vendor with three clean COAs. Reach is a marketing metric, not a trust metric.

    How 'premium' the packaging looks

    Laser-etched glassware and embossed boxes are fully orthogonal to whether the peptide inside is what the label claims.

    How often the vendor posts on Reddit

    Active community presence is often a good sign — but it's easy to fake and we refuse to bake it into the formula. It can tip an editorial recommendation; it cannot move the score.

    The no-pay-to-win commitment

    No vendor can pay to raise their Trust Score. The score is deterministic — it is computed by code that runs in your browser against data every visitor can inspect. Editorial staff cannot manually adjust the number for any vendor, for any reason.

    Affiliate status is orthogonal. Whether we earn a commission on a vendor's sale does not change any criterion, any weight, or any threshold. Our in-house supplement brand Adera is scored on the identical rubric — see our Affiliate Disclosure for the full ownership disclosure.

    If we ever depart from this commitment, the departure will land in the public GitHub repo before it ships to this site. You can audit the scoring function at any time: src/hooks/useVendorTrustScore.ts.

    Methodology FAQ

    Can a vendor pay to raise their Trust Score?

    No. Trust Scores are computed by a deterministic formula from the five criteria described on this page. No human editorial override exists, no vendor partner tier grants score adjustments, and no amount of affiliate spend changes the number. This is audited by design: the computation lives in src/hooks/useVendorTrustScore.ts and runs client-side on data every visitor can inspect. If we ever change this, the change has to land in the public repo before it shows up on the site.

    How is the 0-100 score calculated exactly?

    Each of the five criteria is a binary pass/fail. A pass awards the full weight of that criterion. The score is the sum of earned weights divided by the total possible weight (100), rounded to the nearest integer. So a vendor that passes COA (30) + Payment (20) + Listings (15) but fails Shipping (15) and Reviews (20) scores (30 + 20 + 15) / 100 = 65.

    Why is COA weighted so heavily (30%) and Shipping so lightly (15%)?

    Weights are set by how much each signal predicts real buyer harm. A vendor shipping under-dosed or mis-identified product causes material protocol failure and, at the GLP-1 end, material safety risk. A vendor with terse shipping-policy copy causes inconvenience. The weights reflect the consequence gap, not the headline importance.

    What does a 75+ score actually mean? Is it safe to buy?

    75+ means the vendor passes at least four of five criteria including, in practice, COA verification. It means the vendor has cleared our hygiene floor across independent dimensions. It does not mean the product will be perfect on your specific batch, that your specific order will ship on time, or that your specific reconstitution will be contamination-free. Trust Score is a prior, not a guarantee. Your own COA inspection on arrival is still the final check — see /guides/coa-verification.

    Why isn't price part of the Trust Score?

    Because mixing price and trust destroys both signals. A cheap vendor can be trustworthy; an expensive vendor can be a scam. Mixing them produces a single number that no buyer can reason about. We keep price on the compare pages (where it's a first-class axis) and keep trust here (where it's the quality-of-counterparty axis). Buyers make the trade-off themselves, with both numbers visible.

    How often are Trust Scores recalculated?

    Continuously. COA counts update when editors log new verification results (typically same-week as batch drops). Payment and shipping data update on vendor-policy re-verification sweeps (quarterly at minimum; immediately when a reader flags a change). Community reviews update in real-time as logged-in users post. Listings update daily via the price-snapshot pipeline. Whenever you load /vendor-trust-scorecard the score is re-computed client-side from the current data.

    What if a vendor thinks their score is wrong?

    Email support@bodyhackguide.co with the vendor name and the specific criterion you believe is mis-scored. We'll reopen the input data, verify against your evidence, and either update the record or publish a short written explanation of why the original call stands. If the input data was wrong, the score updates automatically on the next page load. If we were wrong about methodology, we bump METHODOLOGY_VERSION on this page and document the change in the changelog below.

    Does Trust Score account for batch-to-batch variance?

    Partially — via the COA criterion, which looks at the verified/total ratio across every batch we have on file, not just the latest one. A vendor with 20 verified batches and 2 failed batches passes the 50% threshold comfortably; a vendor with 2 verified and 5 failed does not. This is deliberately more forgiving than a single-batch view, because occasional failed batches happen to real vendors — systemic failed-batch patterns are what we're measuring.

    What about vendors not yet listed on BodyHackGuide?

    They have no score. We only score vendors that have enough listings in our catalog to be comparable (hence the 3-listing floor on the Listings criterion). A vendor with fewer than 3 tracked listings is marked 'insufficient data' on the scorecard rather than being given a misleading low score.

    Is the source code for the scoring public?

    Yes. The deterministic scoring function is src/hooks/useVendorTrustScore.ts in our public GitHub repo at github.com/bodyhackguide/molecule-explorer. Every check function is a few lines of readable TypeScript and maps 1:1 to a section on this page.

    Methodology changelog

v1.1 — 2026-04-24
    • Published full rubric documentation at /vendors/methodology (this page).
    • No change to weights or pass thresholds — purely a transparency release of the existing formula.
    • Added explicit 'What we do not score' section to close a disclosure gap.
v1.0 — 2025-10-01
    • Initial five-factor rubric: COA (30), Payment (20), Reviews (20), Shipping (15), Listings (15).
    • Binary pass/fail per criterion; weighted sum / 100.
    • Deterministic client-side computation introduced in src/hooks/useVendorTrustScore.ts.

    See the rubric applied to every vendor

    The live scorecard ranks every vendor we track, with per-criterion pass/fail breakdowns you can expand inline.

    View scorecard

    Editorial Standards

    How we research, fact-check, and keep content current.

    Affiliate Disclosure

    Full FTC-compliant breakdown of how we make money.

    How to verify a COA yourself

    The visual walkthrough for inspecting a Certificate of Analysis on arrival.

BioChonch — Founder & Lead Researcher

    Independent researcher and founder of BodyHackGuide. Obsessed with evidence-based biohacking, peptide science, and nootropic protocols. Every recommendation is backed by PubMed citations and real-world testing.

    Methodology v1.1 · Last reviewed April 24, 2026 · Source: src/hooks/useVendorTrustScore.ts