How We Review AI Tools
Every review on The AI Picker follows the same process. No shortcuts, no exceptions. This page documents the full methodology so you can see exactly how each score is produced and what the scoring means.
If you want the full list of rules our editorial team holds itself to (conflicts of interest, corrections policy, AI disclosure), see our Editorial Standards page.
The Research Process — Four Steps, Every Time
Step 1: We Research Thoroughly
We dig into everything publicly available about each tool — official documentation, changelogs, release notes, user reviews across multiple platforms, community feedback on forums and practitioner communities, expert opinions, and case studies from working users.
We do not skim the features page and call it a review. We analyse pricing structures, compare feature sets against competitors, and look for the details most review sites skip — token accounting on credit-based tools, real workflow friction that marketing pages do not mention, and edge-case behaviours that matter in production.
We are honest about what this is. The AI Picker is a research and comparison site. Our job is to do the deep research so you do not have to. We use research-based language throughout our reviews — “we researched”, “we compared”, “based on our analysis” — and we do not claim hands-on testing unless that is what genuinely happened.
Step 2: We Compare Fairly
Every review includes context. How does this tool compare to the obvious alternatives? Is it worth the price difference? We maintain consistent scoring criteria across every tool in the same category so our comparisons are meaningful.
Each tool category has its own set of comparison data points — the specific features and capabilities that matter for that type of tool. An AI writing tool is compared on different criteria from an AI voice generator, because the things that matter are different. Our Category Definition files document the data points for every category we cover.
This means our comparison tables show you what is actually relevant, not generic feature lists.
Step 3: We Score Consistently — The Five-Factor System
Every tool is rated out of 100 based on five weighted factors:
| Factor | Weight | What We’re Looking At |
|---|---|---|
| Core Performance | 30% | Does it do its main job well? Is the primary output competitive with the category leaders? |
| Ease of Use | 20% | Can a non-technical user figure it out quickly? How steep is the learning curve to productive output? |
| Value for Money | 25% | Is the price fair for what you get? How does per-unit cost compare against category alternatives? |
| Output Quality | 15% | How good is the end result compared to competitors on like-for-like tasks? |
| Support & Reliability | 10% | Does it work consistently? Is help available when you need it? Is the provider responsive? |
Why these weights: Core Performance carries the heaviest weight because if a tool does not do its job, no amount of polish matters. Value for Money is second because affordability is a real constraint for most buyers. Ease of Use is third because a tool you cannot use quickly is a tool you will abandon. Output Quality differentiates the best from the good. Support & Reliability catches the issues that only surface after you have committed.
How to read a score: A tool scoring 85/100 is genuinely strong across all five factors. A tool scoring 72/100 has at least one meaningful weakness — the factor breakdown shows which one. The 100-point scale means we can show meaningful differences between tools. An 82 versus a 77 tells a clearer story than 4 stars versus 4 stars.
Rounding transparency: Sometimes the weighted total works out to a non-integer (for example, 83.5/100). When a small rounding adjustment of up to ±1 point is applied to reach the published whole-number score, the review notes the adjustment explicitly. No hidden rounding.
Step 4: We Write What We Think
Our reviews are not summaries of the features page. They are our honest assessment — what stood out, what fell short, what surprised us, and who we would recommend the tool to.
We always pick a winner in comparisons. “It depends” is not a review — it is a cop-out. We tell you what we would choose and why, then explain when the other option might be the better fit.
Example: How a Score Is Built
Take a review scoring 80.5/100 overall:
- Core Performance: 85/100 × 0.30 = 25.5
- Ease of Use: 90/100 × 0.20 = 18.0
- Value for Money: 70/100 × 0.25 = 17.5
- Output Quality: 80/100 × 0.15 = 12.0
- Support & Reliability: 75/100 × 0.10 = 7.5
- Total: 80.5/100 (rounds to 81/100)
You will see the same breakdown table in every single review and best-of entry on the site. Every factor score is visible. No hidden maths.
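To make the arithmetic concrete, here is a minimal sketch of the weighted-sum calculation in Python. The weights and factor order mirror the table above; the function name, data layout, and round-half-up step are our illustration of the method, not code lifted from our publishing system.

```python
# Sketch: computing a published score from the five weighted factors.
# Weights mirror the methodology table; names are illustrative only.

WEIGHTS = {
    "core_performance": 0.30,
    "ease_of_use": 0.20,
    "value_for_money": 0.25,
    "output_quality": 0.15,
    "support_reliability": 0.10,
}

def overall_score(factors: dict[str, float]) -> tuple[float, int]:
    """Return the raw weighted total and the published integer score."""
    raw = sum(factors[name] * weight for name, weight in WEIGHTS.items())
    # Round half up so 80.5 publishes as 81, matching the worked example.
    # (Python's built-in round() uses banker's rounding: round(80.5) == 80.)
    published = int(raw + 0.5)
    return raw, published

# The worked example above, with each factor scored out of 100:
example = {
    "core_performance": 85,
    "ease_of_use": 90,
    "value_for_money": 70,
    "output_quality": 80,
    "support_reliability": 75,
}
print(overall_score(example))  # (80.5, 81)
```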
What We Don’t Do
- Accept sponsored reviews. Nobody pays for placement on this site.
- Let companies preview articles. Our reviews are published without provider approval.
- Change ratings for higher commissions. We recommend what is best, not what pays best. Tools with no affiliate program at all (Midjourney is a current example) are reviewed and ranked on the same terms as everything else.
- Use fabricated hands-on claims. We do not claim to have tested, signed up for, or used a tool when that has not happened. Research-based language is a non-negotiable in every review.
- Bury the methodology. The five factors, their weights, and the individual scores are visible on every review. The methodology is never buried in a footnote.
How You Can Use Our Data
Category Leaderboards
We maintain a Category Leaderboard for each type of AI tool — a living, ranked table of every tool we have reviewed in that category, ordered by overall score. These leaderboards update whenever we publish or revise a review, so they always reflect current data.
Browse the leaderboards at /rankings/.
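Under the hood, a leaderboard is a filtered, sorted list and nothing more exotic. A minimal sketch, assuming a flat list of review records (the field names and entries are our illustration, not the site's actual schema):

```python
# Sketch: a category leaderboard is the reviewed tools in that category,
# sorted by overall score, highest first. Records are invented examples.

reviews = [
    {"name": "Tool A", "category": "ai-writing", "score": 84},
    {"name": "Tool B", "category": "ai-writing", "score": 77},
    {"name": "Tool C", "category": "ai-voice", "score": 81},
]

def leaderboard(reviews: list[dict], category: str) -> list[dict]:
    in_category = [r for r in reviews if r["category"] == category]
    return sorted(in_category, key=lambda r: r["score"], reverse=True)

for rank, tool in enumerate(leaderboard(reviews, "ai-writing"), start=1):
    print(f"{rank}. {tool['name']} ({tool['score']}/100)")
```

Publishing or revising a review simply changes the underlying records; the sorted view reflects it on the next render.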
Interactive Comparison Builder
The Comparison Builder lets you select 3-5 tools from the same category and view them side-by-side in a single table — showing all five scoring factors plus the category-specific data points (avatar library size for AI video, voice library for AI voice, and so on).
It is designed to help you reach a confident decision fast using data we have already gathered and scored.
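Conceptually, the builder pivots a handful of same-category review records into columns, one per tool, with the scoring factors as rows. A rough sketch with invented tools and scores:

```python
# Sketch: side-by-side view of three same-category tools.
# Tool names and factor scores are invented for illustration.

FACTORS = ["Core Performance", "Ease of Use", "Value for Money",
           "Output Quality", "Support & Reliability"]

tools = {
    "Tool A": [85, 90, 70, 80, 75],
    "Tool B": [80, 75, 85, 78, 82],
    "Tool C": [88, 70, 65, 90, 70],
}

# Header row, then one row per factor with each tool's score.
print(f"{'Factor':<24}" + "".join(f"{name:>10}" for name in tools))
for i, factor in enumerate(FACTORS):
    row = "".join(f"{scores[i]:>10}" for scores in tools.values())
    print(f"{factor:<24}" + row)
```

The real builder adds the category-specific data points described above as extra rows; the shape of the table is the same.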
Product Catalogue
The Product Catalogue is a searchable index of every tool we have reviewed, filterable by category, score, and price range.
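The filtering itself is straightforward. A minimal sketch, again with assumed field names and invented entries:

```python
# Sketch: catalogue filtering by category, minimum score, and maximum
# monthly price. Field names and values are illustrative assumptions.

catalogue = [
    {"name": "Tool A", "category": "ai-writing", "score": 84, "price": 20},
    {"name": "Tool B", "category": "ai-writing", "score": 77, "price": 49},
    {"name": "Tool C", "category": "ai-voice", "score": 81, "price": 11},
]

def search(items, category=None, min_score=0, max_price=None):
    """Yield entries passing every filter the caller actually set."""
    for item in items:
        if category is not None and item["category"] != category:
            continue
        if item["score"] < min_score:
            continue
        if max_price is not None and item["price"] > max_price:
            continue
        yield item

print(list(search(catalogue, category="ai-writing", min_score=80, max_price=30)))
# -> [{'name': 'Tool A', 'category': 'ai-writing', 'score': 84, 'price': 20}]
```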
Keeping Reviews Current
AI tools change fast. A tool that was brilliant six months ago might have raised prices, cut features, shipped a new model generation, or been overtaken by a competitor.
We revisit our published reviews regularly and update them when things change. Every article shows when it was last updated so you know you are reading current information. When a score changes, the category leaderboard updates automatically and the change is noted on the review.
When a factual error is identified, we correct it immediately and add a brief correction note at the bottom of the affected article.
AI in Our Workflow — Full Disclosure
AI tools are part of our research and drafting workflow. We disclose this openly because transparency is our differentiator.
An editor reviews, fact-checks, and signs off every article before it goes live. We believe the editor-review step is the quality gate, not the draft stage. For the full detail on how AI assistance is used, what the editor reviews, and how our content honesty rule (Principle 6) is enforced, see our Editorial Standards page.
Suggest a Tool or Flag an Error
Is there an AI tool you want us to review? Did you spot something wrong in a published review? We want to hear both.
Email: theaipicker@gmail.com
Last updated: 16 April 2026