Editorial Standards


The AI Picker reviews and ranks AI tools for people trying to make a real purchase decision. Our recommendations influence how readers spend their time and money. We take that seriously. This page documents the rules we hold ourselves to on every piece of content we publish.


1. Research, Not Hands-On Testing

We are honest about what we do. The AI Picker is a research and comparison site. Our job is to do the deep research so you don’t have to.

What that means in practice:

  • We read the official documentation, changelogs, pricing pages, and release notes for every tool we cover.
  • We gather user feedback from multiple independent sources — community forums, review platforms, practitioner writing, and video walkthroughs published by working users.
  • We cross-check pricing, feature claims, and capability descriptions against the provider’s own source material before publishing.
  • We compare tools within the same category on a consistent set of category-specific data points, so the comparison is meaningful.

What we do not do:

  • We do not claim to have personally tested, signed up for, or used a tool unless that has actually happened and is explicitly noted.
  • We avoid phrases like “we tested”, “in our experience”, or “we found when using it” across our content. Our language is research-based: “we researched”, “we compared”, “based on our analysis”, “user feedback suggests”.
  • When a specific detail (output quality, UI responsiveness, edge-case behaviour) has not been independently verified, we say so rather than invent a verdict.

This is the single most important rule we hold ourselves to. The value of this site is doing the research legwork transparently — not pretending to have done more than we have.


2. Scoring Methodology — Full Transparency

Every tool we review is scored out of 100 across five weighted factors. The weightings are public, applied consistently, and published alongside every review.

Factor                   Weight
Core Performance         30%
Ease of Use              20%
Value for Money          25%
Output Quality           15%
Support & Reliability    10%

The full scoring methodology is documented on our How We Review page. Every review includes the individual factor scores, not just the overall number.

When an overall score includes a minor rounding adjustment (e.g. a calculated 83.5 displayed as 84 or 85), we note the adjustment explicitly in the review. No hidden rounding, no scores smuggled up or down.
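The weighted calculation above can be sketched in a few lines. This is an illustrative sketch only: the function name and the example factor scores are hypothetical, not taken from any published review; the weights match the table above.

```python
# Hypothetical sketch of the five-factor weighted score described above.
WEIGHTS = {                     # public weightings from the table above
    "Core Performance": 30,
    "Ease of Use": 20,
    "Value for Money": 25,
    "Output Quality": 15,
    "Support & Reliability": 10,
}

def overall_score(factor_scores: dict[str, float]) -> float:
    """Weighted average of per-factor scores, each on a 0-100 scale."""
    missing = set(WEIGHTS) - set(factor_scores)
    if missing:
        raise ValueError(f"missing factors: {missing}")
    return sum(WEIGHTS[f] * factor_scores[f] for f in WEIGHTS) / 100

# Hypothetical example: a tool scoring 85/85/82/80/85 across the five factors.
example = {
    "Core Performance": 85,
    "Ease of Use": 85,
    "Value for Money": 82,
    "Output Quality": 80,
    "Support & Reliability": 85,
}
raw = overall_score(example)   # 83.5
displayed = round(raw)         # 84 — any such adjustment is noted in the review
```

Because the weights are integer percentages divided once at the end, the raw score carries no floating-point surprises; the only rounding happens at display time, which is exactly the step the policy above requires us to disclose.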

When a tool’s score changes on a later review, the category leaderboard updates and the change date is noted on the tool’s review page. Historic scores remain visible for transparency.


3. AI Assistance in Our Research

We use AI tools to help us research and draft reviews. It would be absurd not to — this is a site about AI tools.

How AI is used in our workflow:

  • AI-assisted research: gathering and summarising documentation, user feedback, and comparative data from publicly available sources.
  • AI-assisted drafting: producing initial drafts that are then edited, fact-checked, and finalised by an editor.
  • AI-assisted scoring frameworks: structured templates that enforce consistent scoring across tools.

What an editor does before publish:

  • Verifies every factual claim against the underlying source material.
  • Confirms pricing, feature claims, and comparisons are current.
  • Applies our content-honesty principle: removes any language that implies hands-on testing that has not taken place.
  • Signs off the review for publication.

We disclose this because transparency is our core differentiator. Many review sites use AI assistance without disclosing it. We think readers deserve to know how the research was produced — and that the editor-review step is the quality gate, not the AI draft step.


4. Affiliate Disclosure

The AI Picker earns commissions when readers purchase tools through our affiliate links. This is how the site is funded.

Our rules on this:

  • Affiliate status never changes a score. A tool with a generous commission does not rank higher than a tool with no commission. Our scoring is based solely on the five-factor methodology.
  • Every article includes a clear disclosure at the top.
  • We link to the provider’s official site alongside the affiliate link when readers prefer to navigate there directly.
  • We cover tools with no affiliate program when they are the right recommendation. Midjourney is an example — they have no affiliate program, yet we cover them because they are among the best tools in their category.
  • Recommending a tool purely because it pays a generous commission would be failing our readers. This is a line we hold.

See our Affiliate Disclosure page for the full policy.


5. Conflicts of Interest

Tools we cover do not see reviews before publication. We do not send drafts to providers for approval, correction, or comment.

We do not accept payment for coverage. Sponsored reviews, paid placements, or “editorial partnerships” that influence scoring are not accepted. If a provider reaches out with an offer of this kind, we decline and publish the review on the same terms as any other.

We do not accept free trial extensions, premium access, or other consideration in exchange for coverage. If we use a paid plan as part of our research, it is paid from the site’s own budget.

If a relationship exists between The AI Picker and a tool provider beyond a standard affiliate arrangement (for example, a joint content collaboration), it is disclosed in the specific article.


6. Corrections and Updates

AI tools change. Pricing moves, features ship, models are replaced, providers are acquired. Reviews that were accurate six months ago may be stale now.

How we keep reviews current:

  • Each review shows a visible “last updated” date.
  • We revisit reviews on a rolling basis when major tool updates ship (new model releases, pricing changes, feature launches).
  • When a score changes, the change is reflected in the live review and on the category leaderboard the same day.
  • When a factual error is identified, we correct it immediately and add a brief correction note at the bottom of the article.

How to flag an error: If you spot something that is wrong, outdated, or misleading in any review, email us at theaipicker@gmail.com or use the contact form. We read every message.


7. Reader Safety and Trust

Tools we do not recommend:

  • Any tool with a pattern of user complaints around deceptive billing, difficult cancellation, or misleading marketing.
  • Any tool without a working customer support channel we can verify.
  • Any tool whose parent company has a current reputation for data misuse or privacy violations relevant to the use case.

When a tool we previously recommended develops a problem:

  • We update the review to reflect the new information.
  • If the problem is material enough that the tool should no longer be recommended, we change the verdict, adjust the score, and update the category leaderboard.
  • We do not quietly remove content. Reviews may be marked as historic or superseded, but the editorial change is noted transparently.

8. Who Writes This

This page and every review on The AI Picker are produced by our editorial team. Individual articles will carry their editor's byline as we roll out author attribution across the site. Until the byline system is live, all content is published under the collective editorial accountability of The AI Picker team.

Editorial oversight: Every article is reviewed against the standards above before publication. The publisher of record is John Sadler.


9. What These Standards Mean for You

You should be able to read any review on this site and trust that:

  • The research is real, done to our published standards, and based on publicly available information we have verified.
  • The score is a genuine assessment based on our weighted methodology, not a commercial judgment.
  • The recommendation is what we would genuinely advise — even when it means recommending a tool with no affiliate commission over one with a generous program.
  • The content is current, or clearly marked with the date of the last update.
  • We will correct mistakes quickly and publicly when we make them.

If we ever fall short of these standards, we want to hear about it. Contact us →


Last updated: 16 April 2026. These standards are reviewed quarterly and updated as our processes evolve.