How IndustryLens collects, verifies, and scores its data
Every claim in every report is auditable. Here's exactly how.
Last reviewed:
The principle
We publish competitive intelligence so teams can act on what their competitors are doing — but only if they can trust the claim. That's why every Key Finding in every report links to the specific evidence that supports it, every derived figure shows its baseline, and every coverage gap is disclosed up front.
If you're reading a report and you can't verify a statement, that's our bug. Tell us — see Corrections and disputes.
What we monitor
IndustryLens scrapes a fixed set of public sources for every competitor we track, on a weekly cadence. We don't use third-party data brokers, behavioural panels, or proprietary dashboards — every source is a public surface that any reader could verify themselves.
- LinkedIn: sponsored ads (via the EU Ad Library), organic posts from company pages, and engagement metrics
- Review platforms: G2 and Capterra reviews, including reviewer industry, role, and named-customer switching evidence
- Search advertising: Google Ads and Meta Ads creative, copy, and landing pages
- Owned web surfaces: pricing pages, comparison pages, product changelogs, and homepage diffs (we record every meaningful change week-over-week)
- Earned media: Google News, press releases, and category-relevant industry publications
- Social signals: Instagram, YouTube, Twitter, Reddit (where relevant for the vertical)
Per source, we capture the source URL, observation timestamp, and a structured extract (the actual ad copy, the actual review text, the actual price string). Nothing is paraphrased before storage — paraphrasing happens only at the analysis stage, with a link back to the verbatim source.
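To make that concrete, here is a minimal sketch of what one such observation record could look like. This is illustrative only: the field names, the `Observation` class, and the example URL are our own invention for this page, not IndustryLens's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)
class Observation:
    """One raw capture from a public source. The payload is stored
    verbatim; paraphrasing happens only later, at the analysis stage."""
    competitor: str
    source: str              # e.g. "g2", "linkedin_ads", "pricing_page"
    url: str                 # the exact URL a reader could visit to verify
    observed_at: datetime    # UTC timestamp of the scrape
    payload: dict[str, Any]  # structured verbatim extract

# Hypothetical example record (placeholder URL and values):
obs = Observation(
    competitor="Apollo",
    source="pricing_page",
    url="https://example.com/pricing",
    observed_at=datetime(2026, 4, 5, tzinfo=timezone.utc),
    payload={"plan": "Pro", "price_string": "$49/mo"},
)
```

The point of the shape: the `payload` holds the actual price string, not a summary of it, so any later claim can be traced back to verbatim text.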
The pipeline
Every report flows through five stages. None of them are manual.
- Scrape. The pipeline pulls from each source listed above for every tracked competitor. We record the raw response, not just our interpretation of it.
- Normalise. Raw responses are parsed into a consistent shape — every observation has a timestamp, a competitor, a source URL, and a structured payload.
- Analyse. An AI pass synthesises observations into insights — “Apollo launched a new pricing tier”, “Outreach is positioning against Salesloft on AI-orchestration”. Each insight links to the observations that produced it.
- Verify. A second AI pass — adversarial to the first — checks each insight for: (a) a verifiable source, (b) a sound derivation if it contains a number, (c) freshness, and (d) trust tier (see below). Insights that fail any check are flagged and don't make it into reports.
- Publish. Verified insights become Key Findings in a weekly report. Every finding cites the underlying insight(s); every derived figure shows its calculation in the expand panel.
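The five stages above can be sketched as a single function. The stubs below stand in for the real scrapers and the two AI passes (which we obviously can't reproduce here); everything else — the shape of the data and the filter at the verify stage — mirrors the description above.

```python
def scrape(competitor):
    # Stub: the real pipeline pulls every public source and records
    # the raw response, not just an interpretation of it.
    return [{"competitor": competitor, "raw": "<html>...</html>",
             "url": "https://example.com", "ts": "2026-04-05T00:00:00Z"}]

def normalise(raw):
    # Every observation gets the same shape: timestamp, competitor,
    # source URL, structured payload.
    return {"ts": raw["ts"], "competitor": raw["competitor"],
            "url": raw["url"], "payload": {"text": raw["raw"]}}

def analyse(observations):
    # Stub for the first AI pass: each insight links back to the
    # observations that produced it.
    return [{"claim": "launched a new pricing tier",
             "competitor": o["competitor"], "evidence": [o]}
            for o in observations]

def verify(insight):
    # Stub for the adversarial second pass: fail any insight whose
    # evidence lacks a verifiable source URL.
    return all(e.get("url") for e in insight["evidence"])

def run_weekly_report(competitors):
    observations = [o for c in competitors for o in scrape(c)]  # 1. Scrape
    normalised = [normalise(o) for o in observations]           # 2. Normalise
    insights = analyse(normalised)                              # 3. Analyse
    verified = [i for i in insights if verify(i)]               # 4. Verify
    return verified                                             # 5. Publish
```

Note where the filter sits: insights that fail verification are dropped before publication, never after.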
The confidence layer
Every insight carries four scores that combine into a single confidence rating:
- Source quality. Is this a primary source (the company's own site, an ad, a verified review) or a secondary one (analyst quote, third-party blog)? Primary sources weight higher.
- Evidence sufficiency. Is the claim backed by enough text to verify it? A one-sentence ad caption supporting a structural claim gets flagged.
- Claim verifiability. Can a human reader, given the source URL, reproduce the claim? If we say “33% undercutting”, the reader needs to be able to land on a price and check.
- Recency. How fresh is the underlying observation? A claim built on observations more than 6 weeks old gets discounted.
Each insight is tagged Tier 1 / Tier 2 / Tier 3 by the verifier. Reports use Tier 1 and Tier 2 only.
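In code terms, the scoring and tiering might look like the sketch below. The equal weighting and the tier thresholds are assumptions we've made for illustration; the actual weights and cut-offs aren't published.

```python
def confidence(source_quality, evidence_sufficiency, verifiability, recency):
    """Combine the four sub-scores (each 0.0-1.0) into one rating.
    Equal weights are an assumption, not the real formula."""
    return (source_quality + evidence_sufficiency + verifiability + recency) / 4

def tier(score):
    # Illustrative thresholds only.
    if score >= 0.8:
        return 1
    if score >= 0.5:
        return 2
    return 3

def publishable(score):
    # Reports use Tier 1 and Tier 2 only.
    return tier(score) <= 2
```

A fully verified insight (four scores of 1.0) lands in Tier 1; one that is fresh but weakly sourced can still clear Tier 2; anything below drops out of reports entirely.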
Claim reasoning auditability
The single biggest failure mode in AI-generated competitive intelligence is hallucinated figures — “Competitor X is 60% cheaper” with no traceable baseline. We hit this ourselves in early 2026, on a battlecard claiming “D5 Render undercutting V-Ray Solo by over 60%” (the actual undercut was 33%, baselined against the wrong V-Ray price).
Since April 2026, every derived figure (every percentage, every comparison, every multiplier) has had to record:
- The claim itself
- The baseline value used
- The source the baseline came from
- The exact calculation
- The verifier's confidence rating for that derivation
For the corrected D5 example, that record reads:
Claim: D5 Pro undercuts V-Ray Solo by 33%.
Baseline: V-Ray Solo $45/month (from chaos.com/vray/pricing, scraped on the report's effective date).
Derivation: D5 Pro $30/month (from d5render.com/pricing, same scrape window).
Calculation: (45 − 30) / 45 = 0.333 = 33%.
Confidence: Verified — both sides have a primary-source URL and a fresh scrape.
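As a sketch, the audit record above can be expressed as a data structure whose calculation the verifier simply re-runs. The class and field names are ours, chosen for this page; the figures are the corrected D5 / V-Ray numbers from the example.

```python
from dataclasses import dataclass

@dataclass
class Derivation:
    """Audit record for a derived figure. Field names are illustrative."""
    claim: str
    baseline_value: float    # the baseline the comparison is made against
    baseline_source: str
    compared_value: float
    compared_source: str

    def undercut_pct(self):
        # (baseline - compared) / baseline, e.g. (45 - 30) / 45 = 33%
        return round(100 * (self.baseline_value - self.compared_value)
                     / self.baseline_value)

d = Derivation(
    claim="D5 Pro undercuts V-Ray Solo by 33%",
    baseline_value=45.0,
    baseline_source="chaos.com/vray/pricing",
    compared_value=30.0,
    compared_source="d5render.com/pricing",
)
assert d.undercut_pct() == 33  # the verifier re-runs the calculation
```

Storing the baseline and both sources alongside the claim is what makes the 2026 failure mode impossible to repeat silently: a wrong baseline now fails the re-run instead of shipping.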
You see the same audit trail in the “View evidence” expand panel under any Key Finding that contains a derived figure. If a derivation looks wrong to you, the audit trail tells you what to check.
What we don't cover
Honest scope statement: IndustryLens covers B2B SaaS, public signals only, on a weekly cadence. That means:
- We don't cover B2C, consumer apps, or industries where the action lives on private platforms (e.g. enterprise sales cycles that play out in board rooms).
- We don't see anything behind logins (we don't scrape paywalled forums, customer-only pricing, sales-team collateral, or paid newsletter content).
- We are weekly, not real-time. A competitive move that happens on a Tuesday will land in our reports the following Monday, not within hours. If your team needs hour-grain signals, IndustryLens isn't the right tool.
- For any vertical, we will openly say if our data volume is low. The methodology card on each report shows item counts per source — if you see two LinkedIn ads and one G2 review supporting a finding, that's the truth and the finding is weighted accordingly.
Cadence and updates
Reports publish every Monday. The underlying scrape pipeline runs Sundays UTC, so each report draws on roughly a week of observations: the freshest is about a day old at publication, and the oldest about eight days. If you're reading a report on a Tuesday, the oldest observation it draws on is roughly nine days old.
Reports carry a dateModified field that updates whenever we materially edit content (correction, new evidence added, finding withdrawn). Browsers, search engines, and LLMs all see this — it's not just a label.
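For readers curious what "machines see it" means in practice: dateModified is a standard schema.org property that crawlers parse from a page's structured-data block. The fragment below is an illustrative example of that kind of metadata (expressed as a Python dict for consistency with the other sketches on this page), not IndustryLens's actual markup.

```python
import json

# Illustrative schema.org-style metadata; dates are placeholders.
report_metadata = {
    "@context": "https://schema.org",
    "@type": "Report",
    "datePublished": "2026-04-06",
    "dateModified": "2026-04-13",  # bumped on every material edit
}

# Serialised, this is the JSON-LD shape a search engine would read.
print(json.dumps(report_metadata))
```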
Corrections and disputes
If you see a derivation, a quote, or a finding that you believe is wrong: tell us. Email naveed@cheetahconversions.com or DM Naveed on LinkedIn with the report URL and the specific finding.
You'll get one of three responses, within five business days:
- Update with citation — we agree, the report is corrected, and we credit you in the dateModified note.
- Retract — we agree the finding shouldn't have shipped; we pull it and explain why on the report page.
- Stand by it with explanation — we disagree; we lay out the audit trail in detail so you can judge.
We don't silently edit reports. Every change shows in the dateModified field and in the report's audit history.
About IndustryLens
IndustryLens is built and run by Naveed Ratansi. The company exists to give B2B SaaS sales, marketing, and product teams a weekly competitive briefing they can act on, sourced from 30+ public data surfaces and verified end-to-end.
See the IndustryLens company page on LinkedIn or browse our latest reports.
