The State of Tech Journalism in 2024

A 3-year Gadget Review investigation reveals that 85% of tech review publications are untrustworthy, with nearly half faking their product tests.
By Rex Freiberger and Christen da Costa
Updated Oct 10, 2024 12:11 AM

1. Introduction

We created the State of Tech Journalism Report after a three-year investigation into the trustworthiness of tech reporting—and what we found is shocking. A thriving, for-profit fake review industry is dominating the web, with fake product tests and deceptive practices infiltrating even major platforms like Google.

Our investigation covered 496 tech journalists, involved over 1,000 hours of work, and revealed that 45% of corporate-owned and small publishers produce fake product tests, with a staggering 85% classified as untrustworthy. For online shoppers and those who rely on journalism to guide their decisions, knowing who to trust has never been more critical.

At the heart of this report is our Trust Rating system, which powers the True Score, the web’s most accurate product quality rating system. Think of it as Rotten Tomatoes for products—except instead of movies or TV shows, we evaluate everything from electronics to home tech, backed by data and transparency.

We analyzed tech journalists across 30 product categories, focusing on electronics and home appliances. Using 55 indicators, we measured expertise, transparency, data accuracy, and authenticity. Each publication earned a score from 0 to 100, placing them into one of six classifications: Highly Trusted, Trusted, Mixed Trust, Low Trust, Not Trusted, or Fake Reviewers.

The chart below exposes a troubling reality: nearly half of the reviewed publications fall into the “Not Trusted” or “Fake Reviewers” categories. It’s a wake-up call for the industry and consumers alike, emphasizing the need for greater accountability and transparency. Our goal is simple: to help you shop smarter, trust the right sources, and avoid the pitfalls of fake reviews.

Publication Trust Rating Classifications

In the chart, only four publications made it into the ‘Highly Trusted’ category. That’s out of 496 total publications.

This year, our State of Tech Journalism report ranks Top 10 VPN as the most trusted in the Americas, with a score of 102.2%. RTINGs is second at 99.58% and VPN Mentor is third with 97.45%. HouseFresh finishes fourth with a score of 95.95%, closing out the ‘Highly Trusted’ group. Air Purifier First lands in fifth, earning a ‘Trusted’ rating at 88.65%.

The top three are all independent publications. Among the top 10, only one belongs to ‘Big Media’.

As for the most untrustworthy publishers, Cool Material and Turbo Future rank lowest among the fake reviewers, with Trust Ratings of 0% and 2.5% respectively. After them are LoLVV, Antenna Junkies and Reliant.co.uk.

In Section 6, we take a closer and more balanced look at the top 10 and bottom 10 publications, grouping them by scope—broad (16+ categories), niche (3-15 categories), and hyper-niche (1-2 categories). This approach ensures a fair comparison among similar types of sites, giving deeper insight into how trust varies across different levels of specialization.

This report also highlights the Fake Five: five publications that are widely perceived as trustworthy and draw some of the highest monthly traffic in the industry. Despite their reputations, their reliance on fake testing earned them the “Fake Tester” label.

The Fake Five’s Categories by Trust Rating Classification

As shown above, Forbes leads with 27 total categories, including 9 classified as “Fake Tester.” WIRED follows closely with 26 total categories, 15 of which are “Fake Tester.” Popular Mechanics has 24 total categories, with 10 classified as “Fake Tester.” Consumer Reports, with 23 total categories, shows significant signs of fake testing in 17 of them—that’s over half of their categories. Good Housekeeping has the fewest total categories at 18 but still includes 10 “Fake Tester” categories. This underscores the pervasive issue of categories with fake tested reviews across these famous publications.

These brands, with their enormous reach, have a duty to deliver trustworthy reviews. Yet, their reliance on fake testing continues to damage reader confidence, further highlighting the urgent need for accountability in tech journalism.

While the ‘Fake Five’ highlight some of the worst offenders, they’re just part of a much larger picture. The list below reveals the trust ratings for all 496 publications we evaluated, detailing how each performed across our indicators and the reviews we examined.

With the trust ratings of all 496 publications now laid out above, we identified some clear patterns when analyzing the results. Below, we’ve distilled the biggest findings from this investigation into key takeaways that underscore the state of trust in tech journalism today.

Key Takeaways

  1. 🛑 Almost half of all online tech reviews are fake: 45% of the 496 publications in our dataset, which span 30 categories, fake their product tests.
  2. 🤨 85% of online tech review publications are untrustworthy: Among the nearly 500 publications in our dataset, the vast majority failed to earn a trustworthy classification.
  3. 🚨 Five high-traffic sites with household names that people have traditionally trusted are fake reviewers (AKA the Fake Five). Together, these five (Consumer Reports, Forbes, Good Housekeeping, WIRED, and Popular Mechanics) bring in almost 260M monthly views – about 23% of the total traffic in our entire dataset.
  4. 🏢 The majority of corporate-owned publications suffer from fake testing. 54% of all the corporate-owned tech reviewers in our dataset have been labeled “fake reviewers”.
  5. 🔍 Fake reviews are alarmingly common on the first page of a Google search. For terms like “best office chairs” or “best computer monitors”, 22% of the results will link directly to websites that claim to test products but provide no proof of their testing or even fake it.
  6. 🏆 The only Highly Trusted publications (the highest trust classification a publication can earn) are independent, and there are very few of them. Just 4 publications – 0.8% of our dataset – earned our “Highly Trusted” classification.
  7. 📊 Not one of the 30 categories we researched has more trusted reviewers than untrusted ones.
  8. 🎥 Projectors are the least trustworthy category in our entire dataset: 66.7% of the tech reviewers we analyzed in this category are faking their testing.
  9. 🌐 While routers are the most trustworthy category in our dataset, only 19.5% of tech reviewers in this category were rated as “Trusted” or “Highly Trusted.” This highlights a generally low level of trust across all categories, despite routers leading the pack.

Why We Created This Report

The findings of this report highlight a widespread issue, but our intention is not to simply expose and dismiss these companies.

Instead, we aim to engage them constructively, encouraging a return to the fundamental purpose of journalism: to speak truth to power and serve the public.

Our goal is to hold powerful corporations and brands accountable, ensuring that consumers don’t waste their money and time on low-quality products.

We view this as part of a broader issue stemming from a decline in trust in media over recent decades, and we are committed to being part of the solution by helping restore that trust.

Methodology

Reliable statistical insights begin with a solid, transparent methodology, forming the foundation for every conclusion. Here’s how we approached our three-year investigation:

  1. Leverage Category Expertise: We started by identifying 30 core product categories, pinpointing the most critical criteria to test, determining how to test them, and defining the appropriate units of measurement. This ensured a comprehensive understanding of each category’s standards.
  2. Develop Trust Rating System: Next, we created a quantifiable framework to evaluate the trustworthiness of publications. The system measured transparency, expertise, rigorous testing practices, and more, providing an objective and reliable assessment of each review’s credibility.
  3. Collect Data & Conduct Manual Reviews: Using web-scraping tools, we gathered data from hundreds of tech and appliance review publications. Human researchers then manually reviewed the findings using the Trust Rating System to classify reviewers into categories like “Highly Trusted” or “Fake Reviewer.”
  4. Analyze Findings: Finally, we applied statistical and quantitative methods to uncover trends, identify patterns, and generate actionable insights. This rigorous analysis ensured every conclusion was grounded in reliable data.

To visualize the milestones of this investigation, the timeline below outlines our three-year journey:

The timeline above documents major milestones during our investigation which lasted from June 24, 2021, to June 21, 2024.

During this period, we developed multiple iterations of the Trust Rating system, beginning with version 1.0, and refining it to a more efficient yet comprehensive 1.5 methodology. We also expanded our pool of reviewed publications from an initial 102 to 496. Plus we gradually increased the number of product categories analyzed to ensure our Trust Ratings covered a diverse and comprehensive range of reader needs.

1. Category Expertise: The Foundation of Our Investigation

Our journey began with a key step: leveraging our expertise in electronic and appliance categories to create a road map for evaluating each product category. This meant diving deep into what makes a product category tick—understanding its key performance criteria, testing methods, and units of measurement. This groundwork was essential to building a consistent and accurate framework for assessing the trustworthiness of publication reviews.

We identified 30 core categories that broadly cover the tech and appliance landscape, including popular products like air conditioners, air purifiers, blenders, gaming chairs, gaming headsets, and more.

For the 12 categories live on Gadget Review, we developed and published comprehensive testing methodologies.

Additionally, we identified key performance criteria for 18 more categories, expanding our ability to assess product performance across a wide spectrum. These efforts ensured our Trust Rating system could adapt to the nuances of each category, from highly technical products like televisions to more straightforward ones like vacuum cleaners.

Each category demanded its own approach, and this initial research gave us the tools to evaluate them fairly and accurately.

2. Trust Rating System: A Rigorous Framework for Evaluating Credibility

We proceeded to develop the Trust Rating System, a proprietary system designed to measure the credibility and reliability of product reviews and reviewers. This system evaluates publications using 55 indicators across 8 subcategories, providing a detailed assessment of transparency and expertise within a specific product category.

The ratings use a logarithmic scale from 0 to 100, classifying publications into six categories: Highly Trusted, Trusted, Mixed Trust, Low Trust, Not Trusted, and Fake Reviewers.

Our Trust Ratings power the True Score, which we call the web’s most accurate product quality score. By filtering out fake reviews, the True Score delivers unparalleled reliability for consumers.

We brought in a statistician to validate and enhance our Trust Rating and True Score system. Using Bayesian hierarchical models, he improved our methodology to account for any gaps, enabling more accurate and adaptive scoring by pooling insights from both customer reviews and expert assessments.
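
To illustrate the idea of pooling customer and expert signals, here is a deliberately simplified, hypothetical sketch: a precision-weighted average rather than the full Bayesian hierarchical model described above. The function name and all numbers are illustrative assumptions, not the statistician’s actual code.

```python
# Simplified, hypothetical illustration of "pooling" expert and customer signals.
# A real Bayesian hierarchical model is richer; this only shows the core intuition:
# the noisier source (higher variance) gets less weight in the combined estimate.

def pooled_score(expert_score: float, expert_var: float,
                 customer_score: float, customer_var: float) -> float:
    """Precision-weighted combination of two noisy quality estimates."""
    w_expert = 1.0 / expert_var        # precision = 1 / variance
    w_customer = 1.0 / customer_var
    return (w_expert * expert_score + w_customer * customer_score) / (w_expert + w_customer)

# A product with only a few expert data points (high variance) is pulled toward
# the larger body of customer reviews (low variance):
print(round(pooled_score(expert_score=88.0, expert_var=25.0,
                         customer_score=74.0, customer_var=4.0), 1))  # -> 75.9
```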

Scoring, Trust Classification & Type of Sites:

  • Scoring: Each indicator was scored on a predefined scale, with higher scores reflecting greater trustworthiness. These scores were aggregated to generate an overall Trust Rating for each publication.
  • Trust Classification: Publications were categorized based on their Trust Ratings and the amount of fake testing we detected.
    1. Highly Trusted (90–100+ Trust Rating): Leaders in testing, offering comprehensive data, visuals, and deep insights.
    2. Trusted (70–89 Trust Rating): Solid and reliable, though often less rigorous than top-tier sites.
    3. Mixed Trust (60–69 Trust Rating): This group barely passes. They tend to have inconsistent testing with incomplete data and weak photo evidence.
    4. Low Trust (50–59 Trust Rating): Frequently unreliable, with minimal useful data.
    5. Not Trusted (0–49 Trust Rating): Lacking credible testing proof or substantiated claims.
    6. Fake Tester (30% of covered categories show signs of faked testing OR 3 covered categories show signs of faked testing; fulfilling either criterion triggers this classification): Sites that claim to test but rely on misleading tactics or fabricated results.
  • Type of Site Classification: We analyzed Scope, Content Focus, Review Types, Coverage Categories, and Publisher Type to ensure fair comparisons.
    1. Scope: Broad (16+ categories), Niche (3–15 categories), or Hyper-Niche (1–2 categories). Scope is important for determining percentiles and rankings, as it ensures leaderboards and trust ratings are more accurate and fair across diverse types of sites.
    2. Content Focus: Sites were categorized based on their primary, secondary, and tertiary focus, such as Reviews, News, or Videos.
    3. Categories of Coverage: Primary and secondary categories, like Tech or Lifestyle, helped define coverage priorities.
    4. Publisher Type: We distinguished between smaller Blogs (with fewer than three writers) and larger Publications (with more than three writers). Sites with primarily eCommerce content were marked as N/A under this criterion.

By incorporating these parameters, our Trust Rating System ensures a fair and accurate comparison between publications. This structured approach highlights how sites perform within their peer groups, giving readers the clearest picture of trustworthiness across the industry.
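
For readers who want to see how these rules fit together, below is a minimal sketch of the classification logic. The thresholds and Fake Tester criteria come from the lists above; the function itself, its inputs, and the assumption that the Fake Tester override is checked before the numeric thresholds are ours.

```python
# Minimal sketch of the classification rules described above; illustrative only.

def classify(trust_rating: float, categories_covered: int, faked_categories: int) -> str:
    # Fake Tester override: 30%+ of covered categories OR 3+ categories show faked testing.
    if categories_covered > 0 and (
        faked_categories / categories_covered >= 0.30 or faked_categories >= 3
    ):
        return "Fake Tester"
    if trust_rating >= 90:
        return "Highly Trusted"
    if trust_rating >= 70:
        return "Trusted"
    if trust_rating >= 60:
        return "Mixed Trust"
    if trust_rating >= 50:
        return "Low Trust"
    return "Not Trusted"

print(classify(57.06, 27, 8))   # TechRadar's figures from Section 4 -> "Fake Tester"
print(classify(65.66, 20, 1))   # hypothetical category counts -> "Mixed Trust"
```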

Indicators and Categories:

  • Indicators: The 55 indicators encompass aspects such as review authenticity, evidence of testing, reviewer expertise, transparency, and consistency.
  • Categories and Subcategories: Indicators are grouped into categories like Review Authenticity, Testing Evidence, Reviewer Expertise, and Transparency. Each category is further divided into subcategories for a more granular analysis.

| Trust Rating Category | Weight | Definition |
| --- | --- | --- |
| 1. Human Authenticity | 9% of Total Score | General Trust Grouping – Publication staff are real humans. |
| 2. Review System | 1.95% of Total Score | General Trust Grouping – Publication uses a thorough, numerical scoring system to differentiate products from each other. |
| 3. Integrity | 4% of Total Score (plus 0.4% Bonus Score) | General Trust Grouping – Publication promotes editorial integrity and prioritizes genuinely helping consumers. |
| 4. Helpfulness | 4.05% of Total Score (plus 0.8% Bonus Score) | General Trust Grouping – Content is structured to effectively communicate product information to consumers. |
| 5. Category Qualification | 4% of Total Score | Category Trust Grouping – The publication is actually claiming to test the category, whether directly or through implication. |
| 6. Category Expertise | 8% of Total Score | Category Trust Grouping – The reviewer and publication are experienced experts in the category. |
| 7. Visual Evidence | 24% of Total Score (plus 4% Bonus Score) | Category Trust Grouping – The publication provides visual evidence to show they’re testing and using products in real-world scenarios or testing labs. |
| 8. Data Science | 44% of Total Score | Category Trust Grouping – The reviewer tested the product and provided their own quantitative measurements from their testing. |

The point distribution was carefully calibrated to reflect the relative importance of different factors in establishing a publication’s trustworthiness. For example:

  • Human Authenticity accounted for 9% of the total score
  • Integrity contributed 4% to the total score
  • Visual Evidence was weighted heavily at 24% of the total score
  • Data Science, representing the core of testing practices, constituted 44% of the total score

This weighting system allowed us to create a nuanced trust rating that prioritized the most crucial aspects of reliable tech reviews, such as demonstrable testing practices and transparency.

While it may seem unusual that Human Authenticity and Integrity occupy so little of the scoring compared to Visual Evidence and Data Science, this is intentional for a simple reason: producing visual evidence and original test data is difficult and time-consuming, so a publication that does both gives far stronger confirmation of its validity and expertise than good authenticity and integrity scores alone.
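
As a rough illustration of how these weights combine into a single Trust Rating, the sketch below applies the stated weights to normalized subcategory scores. The subscores, the simple weighted sum, and the bonus handling are our own simplification of the actual rubric.

```python
# Illustrative only: the weights come from the table above; the subscores and the
# simple weighted-sum aggregation are a simplification of the actual rubric.

WEIGHTS = {
    "Human Authenticity": 9.0,
    "Review System": 1.95,
    "Integrity": 4.0,
    "Helpfulness": 4.05,
    "Category Qualification": 4.0,
    "Category Expertise": 8.0,
    "Visual Evidence": 24.0,
    "Data Science": 44.0,
}

def trust_rating(subscores: dict, bonus: float = 0.0) -> float:
    """Weighted sum of normalized subscores (each 0.0-1.0), plus any bonus points."""
    return sum(weight * subscores.get(name, 0.0) for name, weight in WEIGHTS.items()) + bonus

# A hypothetical publication that uses products but publishes little test data:
example = {
    "Human Authenticity": 0.9, "Review System": 0.2, "Integrity": 0.5,
    "Helpfulness": 0.4, "Category Qualification": 0.8, "Category Expertise": 0.7,
    "Visual Evidence": 0.6, "Data Science": 0.3,
}
print(round(trust_rating(example), 2))  # ~48.5, which would land in "Not Trusted"
```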

3. Data Collection & Manual Review of Data

This study employed a robust data collection process to evaluate the trustworthiness of tech journalism over a three-year period, from June 24, 2021, to June 21, 2024. By combining advanced tools and meticulous manual reviews, we ensured a thorough analysis of publications across the tech and appliance industries.

Sources and Methods:

  • Web Scraping to Locate Publications: We used web-scraping tools to surface hundreds of tech and appliance review publications and their review content across our 30 core categories.
  • Manual Assessment of Each Publication’s Reviews: Human researchers then manually evaluated each publication’s reviews against the Trust Rating System’s indicators to assign Trust Ratings and classifications.

4. Analysis Methods

With our trust ratings established, we employed a variety of analytical techniques to explore the data and uncover meaningful insights:

  1. Statistical Analysis: We utilized descriptive statistics to understand the distribution of trust ratings across publications. This included measures of central tendency (mean, median, mode) and dispersion (standard deviation, range) to characterize the overall landscape of tech review trustworthiness.
    • We conducted statistical analysis on our dataset to understand the distribution and spread of trust ratings across the 496 sites, ensuring our methodology captures meaningful trends (a minimal sketch of these computations follows this list).
    • The mean trust rating is 32.76, with a median of 31.02, indicating that most scores cluster around this range—which is concerning, since it means most sites in the dataset are failing and untrustworthy. The first quartile (13.90) and third quartile (47.10) highlight the range in which most trust ratings fall. Outliers like 102.20 and 99.58 showcase exceptional ratings.
  2. Regression Analysis: To explore relationships between various factors, we conducted regression analyses. For instance, we examined the correlation between a publication’s trust rating and its traffic data to determine if there was any relationship between a site’s credibility and its popularity.
  3. Quantitative Analysis: We performed in-depth quantitative analyses on several key aspects:
    • a) Trust Ratings: We analyzed the distribution of trust ratings across different types of publications (e.g., independent vs. corporate-owned) and across different product categories.
    • b) Traffic Data: We examined traffic data in relation to trust ratings and other factors to understand if there were any patterns in user engagement with more or less trustworthy sites.
    • c) Covered Categories: We quantified the breadth of coverage for each publication, analyzing how the number and type of product categories covered related to overall trustworthiness.
    • d) Testing Claims: We scrutinized the claims made by publications about their testing processes, cross-referencing these claims with our visual evidence and data science findings to verify their authenticity.
    • e) Parent Companies: We investigated the ownership structure of publications, analyzing how corporate ownership versus independent status correlated with trust ratings and testing practices.
  4. Comparative Analysis: We conducted comparative analyses across different subgroups of publications, such as comparing the top 10 most trustworthy publications against the 10 least trustworthy to identify key differentiating factors. We also performed a cluster analysis that grouped publications by trust score, revealing patterns in trustworthiness and surprising trends in score distributions.
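
As referenced in step 1, here is a minimal sketch of those descriptive statistics. The ratings list is a small placeholder, not the real 496-site dataset.

```python
# Sketch of the descriptive statistics from step 1; the ratings below are
# placeholder values, not the actual dataset.
import statistics

ratings = [102.20, 99.58, 65.66, 57.06, 47.10, 31.02, 13.90, 9.30]  # placeholders

print(f"mean   = {statistics.mean(ratings):.2f}")
print(f"median = {statistics.median(ratings):.2f}")
print(f"stdev  = {statistics.stdev(ratings):.2f}")
q1, _, q3 = statistics.quantiles(ratings, n=4)  # quartiles (Python 3.8+)
print(f"Q1 = {q1:.2f}, Q3 = {q3:.2f}")
```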

This multifaceted analytical approach allowed us to not only quantify the trustworthiness of individual publications but also to uncover broader trends and patterns in the tech review landscape.

By combining rigorous data collection with comprehensive analysis, we were able to provide a detailed, data-driven picture of the state of tech reviews and the factors that contribute to review integrity and consumer trust.

To analyze the collected data and derive meaningful insights, various tools were employed.

By employing these methodologies, this study provides a robust and comprehensive analysis of the trustworthiness of tech journalism, offering valuable insights into the prevalence of fake reviews and unreliable testing.


2. The Broader Issue/Problem

To understand the depth of this issue, we focused on five key areas: the overall decline in reviewer quality, the fake review industry on Google, the significant role of corporate-owned media in disseminating misinformation, the trust gap between corporate and independent publishers, and the issues with various product categories.

While corporate giants are often more manipulative, independents also struggle with credibility.

This report underscores the urgent need for a renewed commitment to transparent and honest journalism, ensuring that tech reviews genuinely serve consumers and restore public trust.


3. Google Is Serving Up Fake Reviews

Every day, Google processes around 8.5 billion searches. That’s a mind-blowing number. And with that amount of influence, Google plays a huge role in what we see online.

Despite their efforts to remove fake reviews from search results, our investigation into 30 tech categories shows that Google is still serving up a whole lot of fake reviews. These untrustworthy reviews are sitting right at the top of the search results on page 1, where most of us click without thinking twice.

Big names like CNN, Forbes, WIRED, Rolling Stone, and the most popular tech reviewers like Consumer Reports, TechRadar, and The Verge, along with independent reviewers, are all part of this huge problem of fake reviews.

Key Takeaways

  • Half of Google search results for tech reviews are untrustworthy. 49% of the results on page one of a Google search for terms like “best tv” will direct you to an unhelpful site with low trust, no trust, or even outright fake testing. Meanwhile, only 51% of the results will be trustworthy to some degree.
  • A quarter of the results on the first page of Google are fake. 24% of the results on the first page of a search for terms like “best office chair” will link directly to websites that claim to test products but provide no proof of their testing or even fake it.
  • More than half of the top results for computer keyboard searches are fake reviews. A staggering 58% of page one results belong to sites that fake their keyboard testing.
  • Google provides mostly helpful results when looking for 3D printers. An impressive 82% of page one results lead to trusted or highly trusted sites, with zero fake reviews in sight.

Trust Rating Classifications in Google Search Results

To figure out the key takeaways above, we had to first calculate the Trust Ratings across publications. Then, we Googled popular review-related keywords across the categories and matched the results with their respective Trust Ratings.

This allowed us to see, for example, how many of the results for “best air conditioners” on page 1 were fake, trusted, highly trusted, etc. We pulled it all together into the table below to make it easy to visualize. 

In the table, the percentages in the Classification columns are the share of results that fall into each trust class. They’re calculated by dividing the number of results in that class by the total results we found in each category (Total Results in Category column).
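
To make that calculation concrete, here is a small sketch using made-up counts for a single category; only the divide-by-total logic mirrors the table.

```python
# How the Classification-column percentages are derived: results in each trust
# class divided by the category's total results. The counts below are made up.

results_in_category = {
    "Highly Trusted": 6, "Trusted": 11, "Mixed Trust": 7,
    "Low Trust": 5, "Not Trusted": 12, "Fake Tester": 9,
}

total = sum(results_in_category.values())  # the "Total Results in Category" column
for cls, count in results_in_category.items():
    print(f"{cls}: {count / total:.1%}")
```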

As you can see below, these categories are swamped with fake or untrustworthy reviews. High-traffic, transactional keywords—where people are ready to buy—are overrun with unreliable reviews.

When you add up the total amount of Fake Tester, Not Trusted, and Low Trust results, half of the results are fake or untrustworthy. And it’s disappointing that 25% of the results are fake reviews, with no quantitative data backing up their testing claims.

Now, to Google’s credit, they do show some trustworthy reviews—26.49% of the results are trusted. However, only 13.74% of the results come from Highly Trusted publications. We wish that number was higher.

In reality, the split between trusted and untrusted results is almost even, but it’s concerning how much of the content is unreliable or fraudulent. With so many fake reviews dominating the top spots, it’s clear there’s a serious trust issue in the results behind the 33.4 million searches shoppers run for these terms every month.

Our Dataset of Keywords

To accurately reflect what a shopper is facing on Google for each of the 30 tech categories, we analyzed 433 transactional keywords that total 33.4 million searches per month.

These keywords were divided into three distinct types, each representing a unique aspect of shopper intent:

| Type of Keyword | Definition of Keyword | Examples |
| --- | --- | --- |
| Buying Guide | These keywords help shoppers find the best product for their needs based on a guide format, often used for comparisons. | best tv for sports, best gaming monitor, best drone with cameras |
| Product Review | Connect the user to reviews of individual products, often including brand or model names, targeting users seeking detailed product insights. | dyson xl review, lg 45 reviews |
| Additional Superlatives | Highlight specific features or superlative qualities of a product, helping users find products with specific attributes. | fastest drone, quietest air conditioner |

Together, these keyword types provide a comprehensive picture of how users search for and evaluate products, helping us reflect the challenges shoppers face when navigating Google’s crowded marketplace.

Below is a sample of the 433 keywords. We used a mix of keywords from each of the three types.

| Type of Keyword | Keyword | Monthly Search Volume |
| --- | --- | --- |
| Buying Guide | best free vpn | 57,000 |
| Buying Guide | best cordless vacuum | 48,000 |
| Buying Guide | best bluetooth speakers | 6,700 |
| Buying Guide | best portable monitor | 6,600 |
| Buying Guide | best gaming tv | 4,900 |
| Product Review | dyson am07 review | 1,200 |
| Product Review | unagi scooter review | 1,000 |
| Product Review | apple studio display review | 900 |
| Product Review | dji mini 2 review | 800 |
| Product Review | windmill air conditioner review | 600 |
| Additional Superlative | fastest electric scooter | 5,900 |
| Additional Superlative | fastest electric bike | 5,600 |
| Additional Superlative | quietest blender | 700 |
| Additional Superlative | quietest air purifier | 500 |
| Additional Superlative | largest 3d printer | 450 |

The 433 keywords spanned all three types across the 30 categories we investigated, generating a total of 5,184 search results.

Out of the 5,184 results, 1,491 were actual reviews from publications with Trust Ratings. The rest were mostly e-commerce pages, videos, and forums. Since those types need a different system to measure trust, we excluded them from our analysis to keep things accurate.

Diving Deeper into The Google Fake Review Problem

Researching products online has become a lot harder in recent years. Google’s constantly shifting search results and a steady drop in the quality of reviews from big outlets haven’t helped. Now, many reviews make bold testing claims that aren’t supported by enough or any quantitative test results.

We believe the 30 categories we analyzed paint a strong picture of tech journalism today. Sure, there are more categories out there, but given our timelines and resources, these give us a pretty accurate view of what’s really going on in the industry.

Nearly half the time, you’re dealing with unreliable reviews. And while some publications are faking tests, others may just be copying from other sites, creating a “blind leading the blind” effect. It’s almost impossible to tell who’s doing what, but it seriously undercuts the entire landscape of tech reviews.

So why are fake reviews such a big problem? Money. Real testing is expensive, and for some publishers, it’s easier—and cheaper—to cut corners or just fake it to get a better return on investment.

It’s not just small players doing this. The biggest names in the industry are guilty, too. These corporate giants are leveraging their influence and authority to flood the web with fake reviews, all in the name of bigger profits. Let’s break down how they’re fueling this problem.


4. The Corporate-Media Problem

While our dataset includes hundreds of publications, there’s a hidden layer often overlooked: the parent company.

At first glance, it might seem like individual websites are the main offenders, but the reality is far more interconnected. Many of these sites are owned by the same corporations. Imagine pulling fifteen publishers from a bucket—despite their unique names, several might belong to the same parent company.

Take Future PLC, for example. Of the 29 sites they own in our database, 17 are designated as Fake Reviewers, while another 8 are labeled Not Trusted. These aren’t obscure outlets, either. Future owns high-traffic sites like TechRadar, GamesRadar, and What Hi-Fi?, all of which are plagued with fake reviews.

This raises a troubling point: the benefits of being owned by a parent company—such as consistent branding and oversight—aren’t translating into what matters most: rigorous, objective testing. Instead, publications under the same corporate umbrella often share information and imagery, amplifying half-baked work or even misinformation across multiple platforms.

In the worst cases, parent companies use their reach unscrupulously, pushing products or agendas without accountability. Many publishers see brands as clients rather than entities to scrutinize, prioritizing ad revenue over unbiased reviews.

The result? Corporate-owned media outlets are significantly more likely to manipulate reviews compared to independents—though independents aren’t without their issues. Every major “Big Media” company in our dataset has more fake reviewers under its control than any other trust classification.

Key Takeaways

  1. Corporate publications are overwhelmingly unhelpful, untrustworthy, or outright fake. 89% of the corporate publications in our dataset are fake reviewers or labeled untrustworthy.
  2. You’re more likely to run into a fake reviewer when reading corporate-owned publications. 54% of the corporate-owned publications in our dataset have been classified as fake reviewers.
  3. No corporate publication is “Highly Trusted” according to our data and trust ratings. Out of the 201 corporate publications, there isn’t a single one that manages a Trust Rating of 90%. The highest Trust Rating a corporate publication earns is 82.4% (Sound Guys).
  4. Corporate publications dominate web traffic despite being extremely unhelpful. Of the 1.14 billion total monthly visits every site in our dataset sees combined, corporate publications receive 86% of that traffic.
  5. You have a better chance of getting useful information from an independent publication – but not by much. 6.8% of the independent sites we researched are trustworthy or highly trustworthy, while just 4.9% of corporate sites are trustworthy (and not a single one is highly trusted.)
    • Future owns the most publications out of any of the corporate media companies in our dataset and has the most fake reviewers as well. Of the 29 publications they have, 17 are fake reviewers, including major outlets like TechRadar, GamesRadar, What Hi-Fi? and Windows Central.
    • The two highest-traffic publications that Future PLC owns are Tom’s Guide and TechRadar, which together account for 63% of the traffic that Future PLC publications receive. Unfortunately, TechRadar is classified as a Fake Reviewer (57.06% Trust Rating), while Tom’s Guide is stuck with Mixed Trust (65.66% Trust Rating).
    • Dotdash Meredith features some of the highest aggregate traffic numbers and leads all of the parent companies in combined traffic. Unfortunately, 9 of the 13 publications they own in our dataset are Fake Reviewers.
    • A staggering 35% of the total traffic that Dotdash Meredith’s publications receive goes directly to Fake Reviewers, including websites like Lifewire, The Spruce and Better Homes & Gardens.
    • The two highest-traffic publications Dotdash Meredith owns, People and Allrecipes, are classified as Not Trusted. This is troubling – despite not claiming to test products, they fail to establish what limited trustworthiness they can.
    • The 9 Hearst publications we analyzed attract a hefty 38.8 million monthly visitors (source: Hearst), yet their average trust rating is just 32.65%. High traffic, but low trust.
      • Example: Good Housekeeping has a troubling amount of fake testing plaguing 10 of their 18 categories. 
    • Only one publication owned by Hearst manages a Trust Rating higher than 50%, and that’s Runner’s World. Unfortunately, even it can’t break 60%, with a Trust Rating of just 59.98%.
  6. 5 of the 6 brands we analyzed from billion-dollar conglomerate Condé Nast faked their product tests, including WIRED, GQ, and Epicurious.
    • WIRED is faking its reviews, with a 32.36% Trust Rating across the 26 categories it covers. Worse still, 15 of the categories we investigated show faked testing.
  7. Valnet is the only major parent company with fewer publications faking their testing than not. Unfortunately, the 5 they own that aren’t faking their testing often aren’t testing at all, and none of them are classified better than Not Trusted.
    • Valnet also earns the bizarre accolade of sending the least traffic to publications labeled Fake Reviewers, meaning consumers are less likely to be served fake testing – unfortunately, the rest of that traffic goes entirely to publications that are Not Trusted.

In our dataset, corporate-owned publications are those marked with “No” in the independent column. These are publications owned by larger media conglomerates, publicly traded companies, or those that have raised external capital.

Below is a table of the top 5 media conglomerates that dominate the product and tech review space. For our study, we analyzed publications from Future (the largest by number of publications), DotDash Meredith (the leader in traffic), Valnet (the youngest), Hearst (the oldest), and Condé Nast (the most well-known). This breakdown highlights their scale, influence, and estimated reach within the industry.

| Parent Company | Number of Publications | Estimated Annual Revenue | Monthly Estimated Traffic (Similar Web) |
| --- | --- | --- | --- |
| Future | 250+ (29 analyzed) | $986.1 million (source) | 321,587,741 |
| Dotdash Meredith | 40 (13 analyzed) | $1.6 billion | 653,411,620 |
| Valnet | 25+ (9 analyzed) | $534.1 million | 296,405,842 |
| Hearst | 176 (9 analyzed) | $12 billion (source) | 307,141,647 |
| Condé Nast | 37 (6 analyzed) | $1.7 billion (source) | 302,235,221 |

Trust Rating Distribution 

We analyzed 66 publications across 5 corporate media giants, and their Trust Ratings ranged from a low of 9.30% (Harper’s Bazaar) to a high of 78.10% (Mountain Bike Rider). Here are some statistics on the trust ratings of corporate-owned publications:

| Statistic | Value |
| --- | --- |
| Sample Size (n) | 66 |
| Mean | 40.14% |
| Median | 38.82% |
| Range | 9.30% to 78.10% |
| Standard Deviation | 16.17% |

The mean Trust Rating of just 40.14% highlights a significant trust deficit across the publications in this dataset. This already troubling average is compounded by an alarming minimum score of 9.30%, which suggests that some publications are almost entirely untrustworthy.

With such a low baseline, it’s clear that trust isn’t just inconsistent—it’s fundamentally broken for many of these outlets.

The graph below illustrates just how entrenched the problem of fake reviews is in the largest parent companies in our dataset. Every single corporation has more Fake Testers than any other Trust classification.

Let’s dive into our 5 shortlisted parent companies, starting with the biggest fake testers of them all, Future PLC.

4.1. The Largest Parent Company: Future PLC

You’ve likely encountered Future PLC’s sites, even if the company name doesn’t ring a bell. They own some of the biggest names in tech and entertainment, like TechRadar, Tom’s Guide, and Laptop Mag—popular destinations for phone, TV, and gadget reviews. With over 250 brands under their umbrella, Future is the largest parent company we investigated.

The Problem With Future’s Reviews

On the surface, Future’s brands appear trustworthy, but a deeper look tells a different story. Despite their massive reach—over 100 million monthly visitors and roughly $986 million in annual revenue—Future’s trustworthiness crumbles, earning a low average Trust Rating of 44.65% across the 29 publications we investigated.

The issue? Their reviews often lack the quantitative test results that prove a product’s true performance. For instance, you need a sophisticated colorimeter and calibration software to measure a monitor’s color gamut (in %). Readers need that kind of data to make informed decisions, so they might consult a site like TechRadar or Laptop Mag. One would expect to find detailed test results at either publication, but in reality the results are nowhere to be found—only product specs, as in the screenshot below from TechRadar’s HyperX Armada 27 review.

What’s even worse than the missing test results? Future has scaled its less trustworthy sites like TechRadar because it’s more profitable to do so, which also means that 17 of the publisher’s websites have earned “Fake Tester” labels. Skipping proper testing cuts costs, so Fake Testers not only minimize overhead but still draw massive traffic and profits, proving that fakery pays off.

Meanwhile, their smaller sites we do trust like Mountain Bike Rider and AnandTech—which was recently shut down—receive far less traffic and attention. Why? Building genuine trust is more expensive and harder to scale.

The bottom line? Future is chasing profits at the expense of their readers. To win back trust, they need to stop prioritizing quick cash and scale, and instead focus on real testing, transparency, and putting reader trust ahead of shareholder demands.

Their Trust Rating Breakdown: Mostly Fake Testers

Look at all of their trust ratings grouped by classifications below. See the huge group of sites labeled as Fake Testers versus the few that we actually trust? And to make it worse, the most trusted sites barely get any traffic, while the ones publishing fakery are raking in millions of visitors.

We then conducted a detailed statistical analysis of the Trust Ratings of Future’s 29 publications to identify patterns and inconsistencies. They have the highest mean of all five parent companies, and it’s still a failing Trust Rating.

| Statistic | Value |
| --- | --- |
| Sample Size (n) | 29 |
| Mean | 44.65% |
| Median | 40.95% |
| Range | 11.90% – 78.10% |
| Standard Deviation | 18.03% |

The wide range of trust ratings and high standard deviation of 18.03% show how inconsistent Future’s reliability is.

As for patterns, we noticed how Future brands score better in certain Trust Rating categories versus others.

They’re transparent about their staff, and their authors are award-winning tech journalists, which is why they score high in Authenticity and Expertise. And they actually use the products they review.

But the big problem is that they rarely provide quantitative test results, units of measurement, and testing equipment.

Their scoring systems also lack precision, so they scored poorly in the Scoring System and Testing & Data Science categories.

For more detail on how Future does in each Category of Performance, check out the following table.

| Category | Average Score (Points) | Total Possible Points | Average Score (%) | Category Assessment |
| --- | --- | --- | --- | --- |
| Authenticity | 19.37 | 22 | 88.02% | Future PLC has clear About Us pages, contact info, and team details. It’s easy to see who’s behind their reviews. |
| Scoring System | 0.75 | 4.8 | 15.68% | This is a weak spot for Future PLC. They aren’t consistently scoring categories of performance and performance criteria. Overall, their scoring system lacks precision. |
| Integrity | 7.92 | 15 | 52.83% | They’re doing okay here, but there’s room to grow. While there are affiliate disclosures, the presence of ads and inconsistent ethics statements take away from the ‘above-board’ feeling readers expect. |
| Helpfulness | 4.12 | 10 | 41.22% | Content could be sharper—product comparisons and buying guides feel a bit scattered, and the content lacks useful comparison tools. |
| Qualification | 4.18 | 5 | 83.65% | They frequently claim to test the products they review. |
| Expertise | 6.94 | 10 | 69.39% | Future PLC features experienced writers, but many lack long-term industry experience. More visible credentials would help turn ‘good’ expertise into ‘great’. |
| Visual Evidence | 17.47 | 30 | 58.24% | They’re using some real-world images, but the content could benefit from more original photos and testing visuals. Showing the equipment or methods used in reviews would build more trust with their audience. |
| Testing & Data Science | 15.95 | 55 | 29.00% | This is where Future PLC really falls short. Readers want to see solid testing with quantifiable results, and that’s just not happening. Incorporating more realistic usage scenarios would go a long way in earning trust. |


Millions of readers are coming to these sites expecting trustworthy reviews to guide important purchasing decisions, but the Trust Ratings say otherwise. When you’re serving content at that scale, missing the mark this often is a huge red flag.

Let’s take a look at an example from one of their publications to see the lack of test results in action.

TechRadar’s Illusion of Testing

TechRadar is one of the most popular tech sites in the world. They earned an overall Fake Reviewer classification with a 57.06% Publication Trust Rating. 

We investigated 27 of their product categories, and 8 received a Fake Reviewer classification such as gaming monitors and drones. We trust their coffee maker (85.27% Trust Rating) and VPN (81.00%) reviews, but we steer clear of them when it comes to fans (11%) and cell phone insurance (15%) reviews.

TechRadar’s review style often gives the impression of thoroughness—they definitely use the products they review. They almost always score well on Test Indicator 8.5, which looks for the reviewer using the product in a realistic scenario. But they tend to stop short of real performance testing, leaving out the test results and benchmarks needed to back up their testing claims.

They provide units of measurement only half the time and barely include quantitative test results. So sometimes TechRadar tests, but it’s not consistent enough.

Here’s what we found in their Gaming Monitor category for example. They earned a 39% Trust Rating in this category, and we found their claim to test to be untruthful (Test Indicator 8.11).

We investigated their HyperX Armada 27 review, and right off the bat you’ll notice that at the top of all their product reviews, TechRadar displays a message saying they test every product or service they review for hours.

So they’re setting our expectations immediately that we should see test results on the Armada 27 in this review.

We know that the author (John) definitely had this monitor in front of him at one point and used it thanks to all of the real photos. But we couldn’t find any quantitative test results in the review–only specifications, which the manufacturer already provides.

For monitors, you need to be capturing quantitative test results like brightness, input lag, color accuracy, and response time (Indicators 8.6 – 8.9).

If they were actually testing color gamut and brightness, they would be mentioning equipment like luminance meters, colorimeters, and calibration software, which we explain further in our computer monitor testing methodology.

They also didn’t get any points on Indicator 8.4, where we look for correct units of measurement on the test results they provide. There weren’t any mentions of nits or cd/m² (for brightness), milliseconds (input lag), etc., aside from the spec sheet.

The How We Test section at the bottom of the review isn’t helpful either. There’s no dedicated Monitor Testing Methodology to be found on TechRadar. That’s another indicator (8.1) that they never get points for across their categories.

The Armada 27 is actually a great gaming monitor, so John gives a Performance score that makes sense. But without units of measurement or test results to back up their claim to test, the review is unreliable by itself.

Again, this is a disappointing pattern across many of TechRadar’s other categories, where they also end up labeled as Fake Testers. You can dig into more of them in the table at the bottom of this section.

But TechRadar doesn’t have bad Trust Ratings all around. They still get some credit for testing in certain categories, like coffee makers (85.27% Trust Rating). They’re the third most Trusted publication for coffee maker reviews behind CNET and TechGear Lab.

Our team investigated their Zwilling Enfinigy Drip Coffee Maker review by Helen McCue. She definitely used the coffee maker to brew a full carafe plus provided her own quantitative test results.

In the screenshot below, she measured brewing speed by brewing a full carafe in about nine minutes. Notice how she included the unit of measurement (Indicator 8.4).

We generally recommend naming the equipment you use to test something, so while it’s obvious she used a timer or her phone for this, in other cases, like testing color gamut or luminance, knowing what software and/or hardware was used to test it is extremely helpful.

She also measured the temperature of the coffee (°F or °C) immediately after brewing and 30 minutes after sitting on a warming plate.

Again, normally we’d like to know what kind of testing equipment was used down to the model, but ubiquitous stuff (a timer, a scale, a thermometer) matters much less.

She answered two out of three Test Criteria Indicators, so the only test result missing was the flavor of the coffee brew (measured in pH or with total dissolved solids).

Helen’s review is still very helpful thanks to her test results and experience using the Zwilling coffee maker. Her review contributed to TechRadar’s great coffee maker Trust Rating.

For more of TechRadar’s reliable categories, like VPNs and printers, check out the table below.

GamesRadar? Same Smokescreen

Curious how this concerning pattern plays out across Future PLC’s other brands? Expand the section below for a deeper dive into GamesRadar’s review practices and why it earned a mediocre 39.01% Trust Rating.

We looked at 6 of their product categories, and half of them contain fake testing claims. We only trust one category of theirs—TVs (73.80% Trust Rating)—which had enough test results and correct units of measurement to pass. We’re definitely avoiding their router (23.00%) and office chair reviews, though.

GamesRadar’s reviews look legit at first. They’re usually written by expert journalists with at least 5 years of writing experience, and they definitely use the products. They take tons of real photos, so GamesRadar tends to score well on Test Indicators 7.1 and 7.2.

But they skimp on testing, providing no hard numbers or units of measurement despite claiming to test. They also barely have any category-specific test methodologies published (Test Indicator 8.1).

We found this fake testing claim and lack of evidence in their Router category, for example—hence the Fake Tester label.

We looked into their ASUS ROG Rapture GT-AX11000 Pro review, and right away, you’ll notice that while they claim to test their routers, there’s little evidence of test results.

At the bottom of the review, you’ll notice how the author Kizito Katawonga claims to test the router.

He explains his “testing” process: he set up the ASUS as his main router, connected 16 to 20 household devices, and divided them across different network channels. He then tested it through regular usage, including gaming and streaming, but this approach lacks the objective, data-driven testing needed for a comprehensive review.

If you scroll up to the Performance section, he admits he doesn’t have the equipment to properly test the router’s performance objectively.

So, if he isn’t able to test it properly… why is he saying he tested it? Now I’ve lost confidence in the reliability of this review.

If you scroll further up, he mentions the specifications in detail, but when it comes to actual performance data, nothing is provided. He simply talks about specs like maximum speeds and how the router should perform.

But he says nothing about how the router actually performed in terms of quantitative test results. The author should have tested the router’s download/upload speeds, latency, and range using tools like browser-based speed tests, ping tester apps, and heat-mapping software.

This lack of real testing isn’t just limited to routers—it’s a recurring issue in several of GamesRadar’s other categories with low trust scores. You’ll find similar patterns across the board in the table below.

However, not all of GamesRadar’s reviews are unreliable. We trust their TV category, for instance, which earned a passing Trust Rating of 73.80%.

A team member investigated this insightful LG OLED C1 review.

The author, Steve May, found the HDR peak brightness to be 750 nits, so he earned GamesRadar points for one Test Criterion and the correct units of measurement (Indicator 8.4).

The only thing missing for that measurement is what kind of luminance meter he used.

Same story with his input lag measurement of 12.6ms.

These measurements are helpful, and for even better transparency, we’d like to know the input lag tester and/or camera he used.

He provides some real photos of the TV’s screen and back panel.

So there’s even more evidence that he used this TV.

We’re overall more confident in the reliability of this review on the LG OLED C1 TV versus that ASUS router review. These test results are why GamesRadar’s claim to test in TVs was found to be truthful.

If you want to dig into the other 4 categories we investigated on GamesRadar, check out the table below.


What does the future hold for Future PLC?

As you see, Future’s reviews lack test results, units of measurement, and clear methodologies needed to back up their test claims. They’re still written by expert journalists who definitely use the products. But without the testing evidence, their credibility takes a big hit. Ultimately, this reveals a huge problem—Future is prioritizing profits over readers.

To rebuild trust, they need to make some changes. If they have the hard data, equipment, and methodologies, then simply show the work.

If Future can’t provide the evidence to back up the testing claims, it’s time to adjust the language in their reviews. Rather than saying products are “tested,” they should call these reviews “hands-on”, meaning that they’ve used the products without rigorous testing.

They should also remove the “Why Trust Us” banners at the top of every review on fraudulent sites like TechRadar and Digital Camera World.

These changes would eliminate any perception of fakery and bring a level of transparency that could help restore trust. Future still publishes valuable reviews, but they need to align with what they’re actually doing.

Transparency is key, and Future has the potential to lead with honest, hands-on reviews, even if they aren’t conducting full-on tests.

4.2. The Parent Company with the Most Traffic: Dotdash Meredith

Google receives over one billion health-related searches every day, and Health.com often tops the list of results when people look for advice.

It’s one of 40 brands under Dotdash Meredith, a media giant founded in 1902, now generating $1.6 billion annually. While they don’t have as many publications as Future PLC, they’re the biggest in terms of revenue.

But here’s the catch: money doesn’t always mean trust.

Expanding Beyond Educational Content… Into Affiliate Marketing

You’d think a company with such a strong legacy would deliver trustworthy content across the board. And to be fair, their home and wellness advice is generally solid. But when it comes to product reviews? They often miss the mark.

Dotdash Meredith’s Trust Rating reflects this gap, coming in at a mediocre 40.53% across 13 publications.

The reason? Many of their reviews are labeled as “tested,” but the testing isn’t real. Instead, they’ve prioritized speed and profitability, pumping out content that drives revenue rather than builds trust.

What used to be a focus on educational content has shifted. These brands are now leaning heavily into affiliate marketing, using product reviews as a quick cash grab. And let’s face it—thoroughly testing products takes time and money, two things that don’t fit neatly into this new strategy.

Trustworthy? Not so much. Profitable? Absolutely.

Dotdash’s Fraudulent “Tested” Product Reviews

Take Health.com, for example. It falls under the YMYL (Your Money or Your Life) category—a space where content can directly impact your health or finances. That kind of content should meet the highest standards, with expert verification and thorough research. But their “Best Air Purifier for Mold” article? It claims to test products, yet fails to provide critical data like ppb or µg/m³—essential criteria for measuring air quality. Instead, they’re just repeating what’s printed on the box, which we point out in the screenshot below.

Vague claims about the EnviroKlenz Mobile Air System’s performance don’t cut it without solid test results. “A noticeable reduction in congestion and allergy symptoms overnight”? What does that even mean? Was it tested on an actual person? And how severe were their allergies?

Without context, how can anyone trust this purifier to handle serious allergy problems? This declaration that the EnviroKlenz is the “best air purifier for mold” needs to be backed up with reliable filtration rate data. For a site claiming to offer health advice, this kind of oversight is a glaring red flag.

Sadly, this isn’t an isolated issue. Fraudulent reviews are popping up across several Dotdash publications. While Serious Eats stands out as trustworthy, it doesn’t get nearly as much traffic as sites like Health.com or The Spruce.

Those sites feature misleading “How We Test” sections (like in The Spruce’s “Best Roomba” guide below), claiming to evaluate key criteria but offering little to no real data to back it up.

It’s a bait-and-switch that deceives readers into thinking they’re getting expert insights, when they’re not.

Then there’s People.com where reviews stretch the definition of “testing.” They’ll measure something simple, like vacuum battery life, but skip over crucial tests like debris pick-up on carpets or hard floors. It’s testing for the sake of appearances, not usefulness.

Even Lifewire has its inconsistencies. Their journalists might use the products and share plenty of photos, occasionally even showing test results. But there’s little mention of the equipment used or a consistent approach to testing.

What does this mean for readers? Dotdash Meredith seems more focused on driving sales than delivering truly trustworthy content. When it comes to health or home advice, take a hard look—because it’s not just your trust on the line. It’s your life and your money, too.

Dotdash’s Trust Rating Breakdown Across Publications

The brands DotDash owns break down to:

As you can see above, Dotdash Meredith’s publications we investigated have a concerning pattern of trust issues. Better Homes & Gardens and Allrecipes both fall below the 50% threshold, indicating significant reliability concerns despite their household names.

The Spruce Eats and The Spruce Pets both cluster in the mid-30% range, with trust ratings that signal a lack of credibility in their content. Even niche brands like Trip Savvy and Food and Wine score alarmingly low, at 30.74% and 20.15%, respectively.

These numbers underscore a systemic problem across Dotdash Meredith’s portfolio, where only one brand—Serious Eats—rises above the threshold of truly trustworthy content.

Here’s a statistical analysis of DotDash’s Trust Ratings:

| Statistic | Value |
| --- | --- |
| Sample Size (n) | 13 |
| Mean | 40.53% |
| Median | 37.67% |
| Range | 20.15% – 70.87% |
| Standard Deviation | 13.67% |

The 40.53% Trust Rating isn’t just bad—it’s alarmingly low. Even worse, the median score of 37.67% shows that most of their publications fail harder than the average.

Their range, from 20.15% to 70.87%, doesn’t offer much hope either. And with a standard deviation of 13.67%, the message is clear: Dotdash’s performance isn’t just inconsistent—it’s consistently poor.

Their priorities are clear: speed and profit come first, while trust falls to the back of the line. This is painfully obvious in their approach to reviews.

Look at sites like Very Well Health and The Spruce. Both include a “How We Tested” section at the bottom of reviews. At first glance, this looks great. But dig deeper, and you’ll find a glaring issue—they never actually share the results of their so-called tests.

Let’s break this down with Very Well Health as an example of how misleading these practices can be.

How Very Well Health makes it seem like they’re testing.

Very Well Health is widely regarded as a trusted source for reliable, accessible health information crafted by healthcare professionals. On the surface, it’s a beacon of credibility. But dig a little deeper, and the cracks start to show—especially with their 24.70% Trust Rating across two categories out of the 30 we investigated.

So what went wrong with this so-called “trusted” resource?

In their air purifiers buying guide, the “How We Tested” section (screenshotted below) looks promising, suggesting they’ve evaluated key criteria like noise levels (dB) and air filtration rates (ACH or CADR)—what they vaguely refer to as “effectiveness.”

But here’s the issue: it stops at appearances. As we point out above, there’s no actual evidence or detailed results to back up these claims. For a site built on trust, that’s a major letdown.

But as you scroll through the guide, something becomes clear: the measurements they claim to collect are missing. Instead, you’re left with basic specs and little else.

Above, we screenshotted one of many written air purifier reviews featured in the guide. However, we noticed that there aren’t any individual reviews for the air purifiers linked anywhere in the guide. What you see above is as deep as it gets—surface-level summaries that don’t offer meaningful insights.

It doesn’t stop there. In their other category, vacuums, the Trust Rating is even lower than for air purifiers. Curious about how that played out? Take a look below.

Can Dotdash regain our trust?

Dotdash Meredith has two clear paths to rebuild trust.

  1. If their brands are truly testing the criteria they claim—like noise levels, debris pickup, or air filtration rate—they need to prove it. Show the data. Readers deserve to see the actual test results.
  2. The “How We Tested” sections either need to be removed or rewritten to clearly state these reviews are based on research or hands-on impressions—not testing.

If they’re not testing, they need to stop pretending. Reviews shouldn’t say “tested” when they’re just researched.

These aren’t complicated fixes, but they’re critical. Dotdash has a real opportunity to set things right. It starts with one simple thing: being honest about what they’re doing—or what they’re not.

4.3. The Youngest Parent Company: Valnet

Among the publishing giants we analyzed, Valnet is the youngest. Founded in 2012, this Canadian company has quickly built a portfolio spanning everything from comics to tech, with popular sites like CBR and MovieWeb under its wing. Owning over 25 publications, Valnet ranks as the fourth-largest parent company in our analysis.

We investigated 9 of Valnet's publications, and unfortunately, the company earned a poor average Trust Rating of 36.56%. That shouldn't be the case for a portfolio that includes Make Use Of and Android Police, brands long considered reliable in the tech world.

So what’s the problem? Despite clear testing guidelines, our investigation into several categories revealed troubling inconsistencies. Testing TVs and soundbars is undeniably challenging, but if you’re not doing it thoroughly, it’s better to avoid the claim altogether.

Let’s take a closer look at how the Trust Ratings break down across Valnet’s investigated publications.

Valnet’s publications show consistently low trust ratings, even for established names. Pocket Lint leads this group with 47.86%, yet still falls under Fake Reviewer classification. XDA Portal and Hot Cars, at 45.71% and 43.93%, highlight similar reliability concerns.

Review Geek and Android Police fare no better, both scoring around 40%, while Game Rant and How To Geek drop near 25%. Pocketnow trails significantly at 14.60%, marking the lowest trust score in Valnet’s portfolio.

These results underscore widespread credibility issues across the company’s brands.

Here’s our statistical analysis of the 9 Valnet publications’ Trust Ratings, which revealed the lowest standard deviation among all parent companies.

Sample Size (n): 9
Mean: 36.56%
Median: 40.92%
Range: 14.60% – 47.86%
Standard Deviation: 10.94%

Valnet’s trust ratings are all shockingly low, with an average score of just 36.56% and a median of 40.92%. That’s a troubling performance for a company of this size.

Their most trusted brand, Pocket-Lint, managed only 47.86%—meaning even their best got a failing trust rating. At the lower end, scores drop as low as 14.60% (Pocketnow), making it clear that credibility is a widespread issue across their portfolio.

The standard deviation of just 10.94%—the lowest among all parent companies—might suggest consistency, which can sometimes be a positive indicator. However, in this case, it only highlights how uniformly poor these trust ratings are across all their publications.

For the average reader, this means opening a Valnet-owned publication comes with serious trust concerns. And even if you stumble upon one of their honest publications, the content still doesn’t inspire confidence. The Trust Ratings are bad across the board.

Both Valnet’s tested and researched reviews are low-quality.

Valnet’s core problem isn’t just Fake Testers—though that’s still a major issue. Out of the 9 Valnet publications we analyzed, 4 are Fake Testers. But even beyond fake reviews, the rest of their content doesn’t fare much better. When they’re not serving up fake testing claims, they’re publishing low-quality reviews that fall short of being genuinely helpful.

Over half of their publications earned a Not Trusted classification. This means they either test truthfully but don’t test enough, or they skip testing entirely and offer lackluster researched reviews. It’s one thing to lie about testing, but it’s another to fail our 55-point inspection so badly that even a Low Trust score is out of reach. These publications aren’t just misleading—they’re failing to provide readers with useful, actionable information.

And Game Rant, one of Valnet's juggernauts and the brand with the lion's share of its traffic, falls flat even when they do test.

Their gaming mice reviews include truthful testing claims, but they only cover the bare minimum, like customization software. Sure, that tells you what the software can configure, but it’s hardly enough. Key criteria like click latency and sensitivity—the metrics that matter most to consumers—are nowhere to be found.

Game Rant isn’t entirely wrong in this category; they’re just not doing enough.

And that’s Valnet’s biggest issue: when they’re not faking reviews, they’re simply not putting in the effort required to earn trust.

But let’s not forget—fraudulent reviews are still a significant part of Valnet’s portfolio, and that’s a red flag that can’t be ignored.

The Fakery at Android Police

Android Police has built a reputation as a go-to source for Android news and reviews, popular among tech enthusiasts. But when it comes to their reviews, readers should tread carefully. With a 40.92% Trust Rating and a Fake Tester label, it’s clear their credibility doesn’t hold up. They’re one of four Fake Testers we uncovered in Valnet’s portfolio of nine publications.

Five of the 14 Android Police categories we investigated contained fake testing, including VPNs, which received a 17.75% Trust Rating. The issues with their VPN guide are hard to ignore.

For starters, there are no screenshots of the authors, Darragh and Dhruv, actually using the VPNs they recommend. In the screenshot below, their VPN reviews lack quantitative data like download speeds (Mbps) or latency (ms), essential metrics for evaluating VPN performance.

As you can see above, Android Police is doing the bare minimum with their VPN reviews, relying on specs to describe what the VPNs do. Reading through the guide feels more like skimming a collection of product listings than actual reviews. There’s no depth, no real analysis—just surface-level details.
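Collecting those numbers doesn't take a lab. Below is a rough sketch of how latency and download throughput could be measured with nothing but Python's standard library; the hostname and test-file URL are placeholders, and a real VPN review would run this once with the VPN off and once with it on, then compare.

```python
import socket
import time
import urllib.request

def tcp_latency_ms(host, port=443, attempts=5):
    """Average TCP connect time to a host, a rough stand-in for latency."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)

def download_mbps(url):
    """Download a test file and report throughput in megabits per second."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as response:
        data = response.read()
    elapsed = time.perf_counter() - start
    return (len(data) * 8 / 1_000_000) / elapsed

# Hypothetical usage with placeholder endpoints:
print(f"Latency: {tcp_latency_ms('example.com'):.1f} ms")
print(f"Throughput: {download_mbps('https://example.com/100MB.bin'):.1f} Mbps")
```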

If you’re curious about the other 13 categories we investigated at Android Police, take a look at the embed below. While we trust them in Drones (with a decent 72.82% Trust Rating), the vast majority of their reviews fall short of earning any real confidence.

How Valnet Can Bounce Back

Valnet's reviews are failing readers on nearly every front. Whether it's Fake Testers misleading audiences with false claims or "honest" reviews that skip the fake testing but lack depth and useful information, the result is the same: a loss of trust.

This isn’t just a problem—it’s a credibility crisis. To turn things around, Valnet needs to prioritize two key areas: transparency and effort.

First, stop faking testing. If a review claims to test, it should include measurable, quantitative results that back up those claims.

Second, for researched reviews, make them insightful and genuinely helpful. These should go beyond specs, offering detailed analysis and real value. Including original images—whether taken in-person or screenshots showing software or services in action—adds authenticity and trustworthiness.

Without these changes, Valnet risks losing what little trust they have left. It’s time for them to step up.

4.4. The Oldest Parent Company: Hearst Digital Media

Hearst is a truly massive publishing entity with a very long history that stretches all the way back to 1887. Every month, their network draws an impressive 307 million online visitors, making them one of the most influential players in the industry.

With ownership of over 175 online publications, Hearst ranks as the second-largest parent company in our dataset. But their history isn’t all positive. The founder, William Randolph Hearst, is infamous for his use of yellow journalism to build his empire. Today, Hearst Digital Media, their digital arm, seems to carry on that legacy, prioritizing sensationalism over substance.

Fake Testers dominate the Hearst publications we analyzed. Of the nine publications we reviewed, six were flagged for Fake Reviews. That’s a troubling statistic, especially considering Hearst’s enormous reach and the credibility its brands claim to uphold.

Here’s a closer look at the Trust Ratings for the nine Hearst publications we investigated.

The Trust Ratings for Hearst’s publications reveal widespread issues with credibility. Runner’s World leads with 59.98%, barely reaching “Low Trust,” while Car and Driver and Bicycling follow at 45.05% and 43.65%, failing to inspire confidence. Even Good Housekeeping, known for product recommendations, scores just 38.62% as a Fake Reviewer. At the lower end, Men’s Health has a shockingly low rating of 9.70%, highlighting the pervasive trust challenges within Hearst’s portfolio of brands.

Below is the statistical analysis on the Trust Rating data, with Hearst showing the lowest minimum Trust Rating across all parent companies analyzed. This means they own the most untrustworthy publication, Harper’s Bazaar (9.30%), of all 66 publications owned by the five parent companies.

Sample Size (n): 9
Mean: 32.65%
Median: 33.52%
Range: 9.30% – 59.98%
Standard Deviation: 15.62%

Hearst’s average Trust Rating is just 32.65%, far below the 60% threshold for credibility. The median score of 33.52% reinforces this, showing most publications are consistently underperforming.

Even their best rating of 59.98% falls short, while the lowest plummets to 9.30%. A standard deviation of 15.62% shows some variation, but the overwhelming pattern is one of unreliability.

Much of Hearst’s traffic comes from major names like Good Housekeeping, Men’s Health, and Harper’s Bazaar. Yet, Good Housekeeping and Men’s Health are flagged as Fake Testers, and Harper’s Bazaar isn’t far behind with its dismal 9.3% Trust Rating. These results raise serious concerns about Hearst’s editorial standards.

Take Good Housekeeping, for example. A trusted name for over a century, it’s famous for its product reviews and the iconic Good Housekeeping Seal seen on store shelves. So why does this reputable product reviewer have a 38.62% Trust Rating? Let’s break it down.

The Fall of Good Housekeeping

Good Housekeeping has long been a trusted name for consumers, offering advice on products since its founding in 1885—before Hearst even existed. But its legacy and iconic seal of reliability are taking a hit, with a failing average Trust Rating across the 18 categories we investigated.

The failures here are especially disappointing because Good Housekeeping was once a go-to for verifying product quality. They also heavily promote their unique testing labs as a cornerstone of their credibility, so the mediocre average Trust Rating and amount of fake testing we found were shocking to us.

Regarding fake testing, take their soundbars guide, shown in the screenshot below, as an example. They frequently mention their "testers" and testing, but there's no actual data to back up their claims. No scores, no measurements—just vague statements like testers being "blown away" by sound quality.

Who are these testers mentioned above? Do they have names? What soundbar testing tools were used? What data supports the claim that a soundbar’s sound is “fantastic” or “powerful”? There’s nothing on maximum volume or frequency response. It’s all fluff, accompanied by a small spec box that offers product details but no real testing insights.

The individual reviews in the guide also don't feature real images of the soundbars, which is strange for one of the biggest product testers in the world.

The only real image of the soundbars is in the How We Test section and the featured image. It gives the impression that they tested the soundbars, but it doesn’t convince us since they still have zero test results.

Speaking of the “How We Test” section, it lists performance criteria they supposedly evaluate, but there’s no mention of methods, tools, or data.



It’s all talk, with nothing quantitative to back it up. This kind of surface-level effort makes it hard to trust their recommendations.

That said, there are a few exceptions. We (barely) trust them for vacuums (61.40%), e-bikes (61.53%), and routers (61.40%). It's good to see vacuums pass, given their longstanding reputation in that category. But the bigger picture is grim—Good Housekeeping failed in 15 other categories and used fake testing in 10.

The corrections we demand from Hearst

Hearst has a chance to turn things around, but it requires real action. If the Good Housekeeping testing labs are functional, it’s time to prove it.

Readers need to see measurable results—data like sound levels, frequency response, or speed tests. Without that proof, their testing claims feel hollow.

Visual evidence is just as important. Real images of products, videos of testing processes, and screenshots of testing software would add much-needed transparency.

Naming their testers and sharing their credentials would also go a long way in building trust.

And if no testing is actually happening? They need to stop pretending. Reviews labeled as “tested” should instead be called “researched” or “hands-on.” Honesty matters.

Good Housekeeping and other major Hearst brands have built their reputations on trust. Now, that trust is hanging by a thread. The fixes are simple, but they require effort. Without them, Hearst risks losing not just their credibility—but their audience, too.

4.5. The Most Well-Known Parent Company: Condé Nast

Condé Nast, founded in 1909, built its reputation on glamorous publications like Vogue but has since expanded into tech and food with brands like WIRED and Bon Appétit. Across its portfolio, Condé Nast attracts over 302 million visitors each month, a testament to its broad influence.

However, the trustworthiness of their reviews is another story. In our dataset, we analyzed six of their publications, and the findings were troubling. Most of their traffic goes to Wired and Bon Appétit—two brands heavily plagued by fake reviews. This has dragged Condé Nast’s average Publication Trust Rating down to a failing 34.09%, far below the 60% benchmark for credibility.

Even their top performer, Ars Technica, narrowly misses a passing score at 59.48%, while Bon Appétit plummets to a dismal 16.15%. The rest, including WIRED and GQ, hover in the low 30s, revealing inconsistent and unreliable standards across their portfolio.



For a company of Condé Nast’s stature, these ratings are a serious red flag. Here’s a closer look at the Trust Ratings for the six Condé Nast publications we investigated.

Architectural Digest fares slightly better than the others in the lower range, with a rating of 36.55%, though it’s still classified as a Fake Reviewer. Epicurious, known for its culinary focus, sits at just 27.95%, falling far short of expectations for reliability. GQ, despite its global reputation, scores a mere 32.07%, indicating trust concerns even among its high-profile brands. These consistently low ratings underscore significant flaws in Condé Nast’s review practices across diverse categories.

To better understand these results, here’s the statistical breakdown of the Trust Ratings for Condé Nast’s publications below. With a sample size of just six, it has the smallest dataset among the parent companies analyzed, limiting the scope for variation.

Sample Size (n): 6
Mean: 34.09%
Median: 32.22%
Range: 16.15% – 59.48%
Standard Deviation: 13.02%

The numbers paint a bleak picture. Condé Nast’s range spans from 16.15% to 59.48%, with a median trust rating of 32.22%. A standard deviation of 13.02% highlights some variation, but the pattern is clear: most reviews lack credibility.

WIRED’s struggles are particularly concerning. As a long-established authority in tech, Wired has traditionally been trusted for insightful, well-tested reviews. Yet, their 32.36% Trust Rating paints a different picture. Their reviews often fail to back up testing claims with meaningful data, show limited real-world images of products, and over-rely on specs rather than genuine insights. This earns Wired the Fake Reviewer label across several categories, further tarnishing Condé Nast’s credibility.

WIRED’s Testing Claims Fall Apart Under Scrutiny

23 out of 26 of Wired’s categories we investigated earned a failing Trust Rating, like their webcam category (30.40% Trust Rating). The issues are glaring in their reviews, starting with a complete absence of custom imagery.

Consider this webcam review. It’s packed with claims of testing but offers no real proof. There are no photos of the webcam in use, no screenshots of the video quality it outputs, and not even a snippet of footage captured from it.

Simply put, there's no data to lean on. No real measurements, nothing that suggests the use of actual testing equipment. Just a qualitative assessment of a product.

Real-world use? Maybe, but even that's hard to confirm.

The pros section that they call “WIRED” (screenshotted below) mentions key performance criteria that make up webcam picture quality, implying they made detailed evaluations.

But if you follow the “test webcams” link that we highlighted above, you’ll find yourself on a generic “best webcam” guide (that we screenshot below), not a methodology page. It’s a misleading start that only raises more questions.

Even more frustrating, the review discusses autofocus (highlighted above)—one of the most critical features of a webcam. The reviewer claims it performs well, maintaining focus up to four inches away. Yet there’s no imagery or video to substantiate this claim. No screenshots, no test footage—nothing but words.

Instead of helpful visuals, the review relies on stock images from the manufacturer that we show in the screenshot below.

This stock image approach undercuts their testing claims entirely. While the text suggests the webcam may have been used, the lack of any real testing data or custom visuals makes it hard to take these reviews seriously.

Wired’s reliance on vague assertions and generic images doesn’t just weaken this particular review—it reflects broader credibility issues across all their categories. We trust them for blenders, coffee makers, and drones, but we recommend you steer clear of the other 23 categories, especially the 15 containing fake tests.

How Condé Nast Can Rebuild Trust

Condé Nast's trust problem is big, but it's fixable. The solution starts with transparency and honest communication. If they want to regain credibility, they need to make some changes to their review and editorial practices. They can take steps similar to those we laid out for the previous four parent companies:

  1. They need to show their work. Testing claims mean nothing without proof. Reviews should include real data, like performance metrics, screenshots, or even videos of products in action. Custom images, not stock photos, should back up every claim. Readers need to see the tools, testers, and processes used. No more vague promises—just clear, measurable evidence.
  2. If a product hasn’t been tested, they need to say so. Misleading phrases like “tested” should be replaced with honest descriptions like “researched” or “hands-on.” Readers value transparency, even if it means admitting a review is less thorough.

For a company as big as Condé Nast, these changes aren’t just a suggestion—they’re a necessity. Readers are watching. Rebuilding trust starts with doing the work and showing the proof.

Conclusion: Fakery is Prominent in Big Media

None of the parent companies have a decent average Trust Rating, as you can see below. None of them break 45%, let alone 50%, which is a sign of major trust issues. This isn’t just a one-off issue—it’s a systemic problem across the board.

For readers, this means approaching any publication under these conglomerates with caution. Whether it’s TechRadar, Good Housekeeping, Wired, etc., their reviews often lack the rigor and transparency needed to earn trust. Until these companies prioritize real testing and honest reporting, relying on their scores and recommendations is a gamble.

The bottom line? Trust needs to be earned, and right now, these major parent companies aren’t doing enough to deserve it.

Now you’ve seen how the five biggest parent companies are deceiving their audiences as their brands pull in a staggering 1.88 billion visitors a month. But what about individual publishers? Who are the five biggest fakers?


5. The Fake Five: These “Trusted” Publishers are Faking Product Tests

Millions of readers trust these five publications to guide their buying decisions, expecting reliable, data-backed recommendations. But what if that trust is misplaced? The truth is, some of the most famous, reputable names in the review industry are faking their testing claims.

Meet the Fake Five: Consumer Reports, Forbes, Good Housekeeping, Wired, and Popular Mechanics. Together, these five sites attract a staggering 259.76 million visitors a month—that's 23% of all traffic in our dataset of 496 sites. Nearly a quarter of the total traffic is going to reviews with little to no evidence of real testing.

Four of these sites are veterans in the review space, making their shift to fake testing especially frustrating. Forbes, on the other hand, has weaponized its financial credibility to churn out commerce-driven reviews that trade trust for clicks.

Let’s look at the key takeaways from these sites, offering an overview of how they mislead their massive audiences while profiting from fake reviews.

Key Takeaways

  1. 📚 The Fall of Once-Trusted Tester Consumer Reports
    • ⚠️ Widespread Fake Reviews: Consumer Reports earned a “Fake Reviewer” classification in 17 out of the 23 categories we analyzed, with only three showing credible testing evidence.
    • 📊 Failing Trust Rating: Despite its strong reputation and revenues, Consumer Reports scored just 45.49% on our Trust Rating, falling far below the benchmark for credible reviews.
    • 👯 Duplicated Reviews & Lack of Transparency: Their reviews rely on vague scores, repetitive language, and cookie-cutter content across products, offering little measurable data or visual proof to support claims of rigorous testing.
  2. 🏠 The Decline of Good Housekeeping’s Credibility
    • 🔬 Low Trust Rating Despite Test Labs: By earning a 38.62% Trust Rating in the 18 categories we evaluated, Good Housekeeping no longer lives up to the rigorous standards its seal once represented. The lack of quantitative test results, despite their renowned test labs, raises serious concerns about the rigor and honesty behind their reviews.
    • Falling Short in Key Categories: Despite its reputation, Good Housekeeping barely passed in categories like vacuums (61.40%) and e-bikes (61.53%), while air purifiers (13.40%) and drones (20.20%) scored disastrously low.
  3. ⚙️ Popular Mechanics: A Legacy Losing Steam
    • 📉 A Troubling Trust Rating: With a 28.27% Trust Rating, Popular Mechanics’ reviews fall far below expectations, undermining over a century of credibility. We trust them in only two (3D printers and e-scooters) out of the 24 categories we investigated.
    • 🚧 Widespread Fake Testing Concerns: Despite claiming to evaluate products in 13 categories, we found evidence of fake testing in 10 of them.
  4. 🖥️ WIRED: Tech “Expertise” Without the Testing Depth
    • 🔍 Lack of Transparency: Their reviews frequently omit critical testing data and real-world images, leaving readers questioning the thoroughness of their evaluations.
    • 📉 Falling Credibility: WIRED earned a disappointing 32.36% Trust Rating across 26 categories we investigated.
  5. 💵 Forbes: Deceiving Their Massive Audience
    • 🔍 With 181 million monthly visitors, Forbes earns a Fake Reviewer classification, with a shocking 9 out of 27 categories we investigated featuring faked testing. They’re the most popular publisher out of the Fake Five. This level of traffic amplifies the impact of their misleading reviews, eroding trust in their once-reputable name.
    • 📉 Barely Trusted in Anything: Earning an average 34.96% Trust Rating, Forbes’ non-financial reviews fail to uphold their credible legacy. We only trust them (barely) in 2 out of the 27 categories: e-scooters (62.60%) and routers (62.20%).

As you can see, the top five publications by traffic and fake testing paint a very troubling picture. Here’s some more high-level information on them below, including their average Trust Rating, independence status, Trust Classification, and more. The list is in order of highest average Trust Ratings, with Consumer Reports in the lead.

These five high-traffic publications, all classified as Fake Testers, represent a mix of independent and parent company-owned brands, each covering over 15 categories, placing them in our broad focus group along with RTINGs (99.58% Trust Rating), Your Best Digs (83.18%), and Wirecutter (80.38%).

  • Forbes stands out with the highest monthly traffic at 181.4 million visitors, leveraging its independent ownership under Forbes Media LLC. Despite its business credibility, Forbes has extended into commerce content, earning a 34.96% Trust Rating due to shallow reviews and a lack of rigorous testing.
  • Good Housekeeping and Popular Mechanics are both owned by Hearst Digital Media, collectively reaching over 42 million visitors per month. While Good Housekeeping has a slightly better Trust Rating at 38.62%, Popular Mechanics trails significantly at 28.27%, reflecting widespread fake testing concerns under the Hearst umbrella.
  • WIRED, owned by Condé Nast, garners 21.52 million visitors monthly, but with a 32.36% Trust Rating, its reviews lack the depth readers expect from a long-established tech authority.
  • Consumer Reports is the only truly independent publication on this list, though it has investors influencing its operations. Despite its reputation as a consumer advocate, its monthly traffic of 14.43 million and 45.49% Trust Rating indicate inconsistent testing practices and declining trust.

Together, these publications dominate traffic in their respective niches, but their broad coverage and questionable testing standards reveal significant gaps in credibility and consumer trust.

Let's dig into these fakers and find out why they earned this label in the first place.

5.1. Consumer Reports

CR faking tests

Many of you can remember a time when Consumer Reports was the trusted name in product reviews. Back then, if Consumer Reports gave a product the thumbs-up, you could buy with confidence. But these days, their reviews aren’t what they used to be.

As a nonprofit, they generate over $200 million annually, supported by nearly 3 million print magazine members and more than a dozen special-interest print titles covering autos, home appliances, health, and food.

With over 14 million monthly online visitors and 2.9 million paying members, they’ve built a massive audience—and they’re taking advantage of it.

Their content is distributed across multiple platforms, including mobile apps and social media channels.

Nowadays, there's ample circumstantial evidence indicating that Consumer Reports hides their test results and duplicates their reviews across different products. Their disappointing 45.49% Publication Trust Rating reflects that.

While their car reviews are still pretty reliable, their product reviews and buying guides in other categories lack the in-depth test evidence that once distinguished Consumer Reports.

We reached out to Consumer Reports in December 2023, and we learned that they’ll give the actual test results if you contact them for the data. So the testing is happening, but getting that information is very inconvenient.

Many reviews are templatized, repeating the same sentences across different products. See how the blender reviews for the Vitamix Professional Series 750 and the Wolf Gourmet High Performance WBGL100S below share the exact same written copy?

And on top of duplicated reviews, little to no visible test results to back up their claims make Consumer Reports’ reviews unreliable.

Subscribers shouldn’t be receiving these basic reviews nor have to jump through hoops to see the test results—especially when Consumer Reports used to set the standard for transparency and detailed product reviews.

17 of the 23 categories we investigated earned the Fake Reviewer class. There's repeated circumstantial evidence across categories that Consumer Reports is concealing their test results and duplicating their reviews.

Let's take a look at their TV reviews, for example, a category that earned a failing 47.60% Trust Rating.

TVs

TVs are the most difficult product category to test. The proper test equipment is also expensive, though that probably isn't an issue for Consumer Reports. So why did they get such a terrible Trust Rating? Let's look at their "Best TV" buying guide first.

Immediately in the subheadline, you see the author James K. Wilcox state that Consumer Reports tests a huge amount of TVs every year.

By seeing that subheadline, a reader expects to see test results in this TV buying guide from the best product testers in the world.

However, that's not the case at all. The guide lacks quantitative test results and visual evidence of the TVs being tested. Since the guide is missing test results, one might try finding them in the individual TV reviews, like the LG OLED65C2PUA TV review. Unfortunately, they aren't there either, as you can see in the screenshot below.

The Results section shows that CR rates TVs based on multiple criteria, many of which are very important, like picture quality, sound quality, and viewing angle.

At first glance, you’d think that this review of this LG TV looks pretty thorough and reliable. But a score out of 5 is pretty basic.

You may want to find out more info. How did this LG TV get a perfect score on Picture Quality? Luckily, there’s a tool tip that should go further in-depth and show the actual test results, right?

Unfortunately, when you mouse over the tooltips to get more information on their bizarrely simple scoring, there isn’t much beyond additional claims about the various criteria they tested.

Testing produces actual hard data, but there isn’t any here, just an explanation about what they were testing for.

Where’s the test data? You may try scrolling down the review to find those results. Then you’ll spot a Detailed Test Results section that must surely contain the test results to back up their scores. Right?

Even in the section that would seem most likely to provide “detailed results” there’s just qualitative language with no quantitative data. There are no color gamut graphs, no contrast screens, and no brightness measurements. Instead, we get statements like “brightness is very good” or “contrast is excellent.” 

If they were really testing contrast ratio (which they say is excellent), they'd provide a quantitative test result in an x:y format (Indicator 8.4, where we look for units of measurement). Consumer Reports never got points for Indicator 8.4 due to hiding their test results.

There’s also no mention of testing equipment like a luminance meter to measure the contrast or a colorimeter to measure color accuracy. There’s no TV testing methodology linked on the page either.
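To show how little it would take to satisfy Indicator 8.4, here is a sketch of the arithmetic behind a published contrast ratio. The luminance readings below are hypothetical stand-ins for what a reviewer would capture with a meter on white and black test patterns; they are not Consumer Reports' numbers.

```python
# Hypothetical luminance-meter readings in cd/m² (nits) taken on a TV's
# white and black test patterns in a darkened room.
white_nits = 412.0   # measured on a full-white patch
black_nits = 0.048   # measured on a full-black patch

contrast = white_nits / black_nits
print(f"Contrast ratio: {round(contrast)}:1")  # prints the x:y figure a review should include
```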

Even after you pay for a membership, you still don’t get any actual test results–you only get vague scores out of 5 for various criteria. You have to contact Consumer Reports to see the test results.

This is why they lost so many points in the Testing & Data Science category. Since we couldn’t see any of their quantitative test results, we had to mark their claim to test as “untruthful”. Their poor performance in that scoring category is what ultimately brought their Trust Score down below 60% in TVs. It’s the same story in many of their other categories. 

And even worse, their reviews are presented in a cookie-cutter format that uses templatized language.

Blenders was the worst case of these templatized reviews that we came across. Check out the reviews for the Vitamix Professional Series 750 and the Wolf Gourmet High Performance WBGL100S below. The "Detailed Test Results" sections? Exact duplicates, as you can see by the highlighted parts.

It seems whoever put these reviews together didn’t even try to change the wording up between the reviews. 

Still don't believe us? Take a look at these other review sections and see for yourself.

Let's also take a look at two different reviews for an LG OLED77G4WUA TV and a Samsung QN77S90C, both 77-inch OLED TVs.

The two TV reviews' Detailed Test Results paragraphs above are identical except for one sentence. Both highlight statements like "picture quality was excellent" and "color accuracy was excellent" without offering any distinctive insights or data points for each model. These cookie-cutter reviews offer generic product praise rather than meaningful analysis.

And again, TVs aren't the only problem area at CR. Let's take a look at some other categories where they claim to test despite publishing vague product reviews with no test results. The claim that they test products stands out most on pages where they ask for donations or memberships.

Routers

Routers is another problematic category with hidden test results and duplicated reviews. This category earned a pretty bad 45.20% Trust Rating.

Let’s take a look at the top of a single product review this time.

There aren’t any bold testing claims at the top of the review page for CR, unlike other sites.

But as you scroll, you’ll see the same Ratings Scorecard with the basic 5-point scoring system that they call “test results”.

Like we saw with how CR handled televisions, the test results section for their router reviews follows a very similar structure. Lots of different criteria are examined and supposedly tested, and that's how an item receives a score out of five per criterion. Unfortunately, the tooltips contain no useful information – just further explanation of what makes up any given criterion without actually providing test results.

If you keep scrolling, you’ll see that the detailed test results for routers are even more anemic than they were for televisions.

There isn't much to go on here beyond qualitative explanations of how the router performed. There's no information about actual download speeds, upload speeds, latency, or range testing.


Membership and Donation Pages

To top it all off, Consumer Reports promotes their product testing across their website, including pages where they solicit donations or memberships.

This emphasis on rigorous testing is out of sync with their anemic review content and lack of test data.


The Correction We Demand

For a brand that built its name on transparency, having to jump through hoops to get actual test results is frustrating and bizarre.

People expect real, tested insights—not vague claims or recycled templates. And when that trust cracks, it’s hard to rebuild.

If Consumer Reports doesn’t change course, they risk losing what made them different: the confidence readers felt knowing they were getting honest, thorough advice. Without that trust, what’s left? Just another review site in a crowded field.

The corrections we demand? Show the test results and stop copying and pasting the same basic paragraphs across different product reviews. Give users access to real numbers, side-by-side product comparisons, and actually helpful reviews.

That’s how Consumer Reports can reclaim its place as a reliable source and boost their Trust Ratings. Because trust comes from transparency, not just from a good reputation.

5.2. Good Housekeeping

If you’ve ever grabbed a product off the shelf with the Good Housekeeping Seal on it, you know the feeling. That seal wasn’t just a logo—it was a promise.

It meant the product had been rigorously tested by the experts at the Good Housekeeping Institute, giving you peace of mind, right there in the store aisle. But things aren’t the same anymore.

What used to be a symbol of trust now feels like it’s losing its edge. Since 1885, Good Housekeeping has been a trusted name in home appliances, beauty products, and more.

With 4.3 million print subscribers and 28.80 million online visitors every month, they’ve built a reputation that millions have relied on. And now they’re taking advantage of that trust and cutting corners in their reviews.

Good Housekeeping has a similar story to Consumer Reports–it seems they’re hiding the test results, which is a big reason for their awful 38.62% Trust Rating across 23 categories we evaluated.

For a brand that once set the gold standard in product testing, this shift hits hard. Without the transparency they were known for, it’s hard to trust their recommendations. And that’s a tough pill to swallow for a name that’s been synonymous with reliability for over a century.

In the table below, we show how we evaluated 23 of Good Housekeeping’s categories that we found out of the 30 total.

They barely passed in vacuums (61.40%), which is unexpected since they're known for testing appliances – it should be way higher. We trust them a little in e-bikes (61.53%) and routers (61.40%), thanks primarily to their actual use of the products, though without any test results those Trust Ratings are passing yet rather low.

In the majority of their categories, however, it’s hard to tell if the products were actually put to the test due to a lack of quantitative test results. Their worst categories are air purifiers (13.40%) and drones (20.20%).

The way that Good Housekeeping (54.8M monthly views) handled their TV reviews is part of what spurred the intense analysis we now perform on product review testing: their testing claims weren't reflected in the text they were publishing.

But the problems don’t stop at TVs – out of the 16 categories they claim to test, 11 were found to have faked testing. One of them is soundbars, which we dive into in the next section.

Soundbars

Good Housekeeping (GH) soundbar reviews earned them a rough 33.70% Trust Rating. How did this happen?

Let’s start with their “Best Soundbars” buying guide. GH makes an immediate claim about their testing in the title of the post, and has an additional blurb about it down below the featured image.

The expectation is clear: these 9 soundbars have supposedly been tested. Let’s keep reading further below.

The paragraph above dedicates itself to assuring the reader that testing is being performed by dedicated tech analysts who cover a variety of home entertainment equipment. The implication, of course, is obvious: soundbars are also covered.

The actual review portion (which we screenshot below), however, leaves a lot to be desired.

There’s no data despite clear mention of testers giving feedback, including direct quotes. No maximum sound levels are recorded, frequency response isn’t noted, and there’s no indication they tried to measure total harmonic distortion.

Ultimately, the testing claims fall flat without anything to back them up – instead, we just have qualitative language assuring us that the soundbar sounds “good” and gets loud. A small spec box accompanies the text, but specifications aren’t testing data. Anybody can get specs from the product listings.
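For reference, here is a rough sketch of how a reviewer could estimate total harmonic distortion from a recorded test tone with off-the-shelf tools. This is not Good Housekeeping's method (they don't publish one); the sample rate, tone frequency, and synthetic recording below are assumptions for illustration.

```python
import numpy as np

def estimate_thd(signal, sample_rate, fundamental_hz, num_harmonics=5):
    """Estimate total harmonic distortion of a recorded sine test tone."""
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

    def peak(target_hz):
        # Strongest bin within ±5% of the target frequency
        band = (freqs > target_hz * 0.95) & (freqs < target_hz * 1.05)
        return spectrum[band].max() if band.any() else 0.0

    fundamental = peak(fundamental_hz)
    harmonics = [peak(fundamental_hz * n) for n in range(2, num_harmonics + 2)]
    return np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# Synthetic stand-in for a measurement-mic recording: a 1 kHz tone with a
# small amount of second-harmonic distortion added on purpose.
sr = 48_000
t = np.arange(2 * sr) / sr
recording = np.sin(2 * np.pi * 1_000 * t) + 0.01 * np.sin(2 * np.pi * 2_000 * t)
print(f"Estimated THD: {estimate_thd(recording, sr, 1_000):.2%}")  # roughly 1% for this synthetic tone
```

Publishing even one figure like this, alongside a maximum SPL reading, would do far more for credibility than another quote about testers being "blown away."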

The dedicated “how we test” blurb further down the guide isn’t any better.

As you can read above, there’s plenty of mentions of important performance criteria that are supposedly being tested, but there’s no data or mention of tools being used to help facilitate this testing. It’s just a bunch of claims with no supporting data to show they did what they claimed to do. A dedicated soundbars testing methodology should’ve been linked here.

TVs

These unreliable reviews are a pattern across many of Good Housekeeping's categories. Take their TV category, for example, which received an even worse 21.70% Trust Rating than their soundbars. Let's walk through their "Best TV" buying guide.

The title doesn’t make any claims, so there’s nothing particularly out of place or unusual here.

The “How We Test” blurb makes a mountain of promises. Everything from measuring brightness with industry-standard patterns to investigating sound quality to look for “cinema-like” sound is mentioned. GH also notes they care a lot about qualitative performance criteria, in addition to the hard and fast numbers of things like brightness. Ease of use in day-to-day interactions with the TV is also part of their testing process. This is nice, but a TV that is great to use and extremely dim is not a particularly good television.

There isn't much of use when you get to the actual review text, though. Beyond explanations of how good the TV looks (which is purely qualitative), there's no data that suggests they actually tested. Mentioning how wide the color space is implies they tested the gamut, but there's nothing to back that up, because no percentages are given and no gamuts are named. Bright whites, deep blacks – there's no data to support this and no images either.


The Correction We Demand

The trust Good Housekeeping has spent generations building is at risk here. With a 38.62% average Trust Rating, there’s a clear gap between the testing they claim to do and the evidence they provide.

If they can't start showing their work, they need to get real about where thorough testing happens and where it doesn't.

If they can’t provide hard data, they need to state that their review is based on “research” instead of “testing.”

In some categories, like e-bikes and routers, their testing holds up. But in others—like air purifiers and Bluetooth speakers—it’s hard to tell if the products were actually put to the test.

Like Consumer Reports, Good Housekeeping has spent decades earning consumer trust. But leaning on that trust without delivering transparency is a risky move.

They could jeopardize the reputation they’ve built over the last century. And once trust is broken, it’s hard to win back.

5.3. Popular Mechanics

Popular Mechanics has been a staple in science and tech since 1902, known for its no-nonsense, hands-on advice and practical take on how things work.

With a total reach of 17.5 million readers in 2023—split between 11.9 million digital readers and 5.69 million print subscribers—it’s clear that they’ve got a loyal following. Every month, their website pulls in 15.23 million online visitors, all eager for insights on the latest tech, from 3D printers and gaming gear to electric bikes and home gadgets.

But lately, there’s been a shift.

Despite their history and resources, many of Popular Mechanics’ product reviews don’t quite measure up. They often skip the in-depth testing data that today’s readers are looking for, leaving a gap between their testing claims and the proof behind them.

This has landed them a disappointing average Trust Rating of 28.27% across the 24 categories we investigated, raising doubts about the depth of their reviews. For a brand with over a century of credibility, this shift makes you wonder if they’re still delivering the level of rigor that their readers expect.

Popular Mechanics claims to test products across 13 categories, but there is strong evidence of fake testing in 10 of those categories. You can see all 24 categories below along with their classifications:

Ten out of 24 categories with fake testing is a serious breach of trust. While Consumer Reports has a higher percentage of fake-tested categories overall (74%), nearly half of Popular Mechanics’ investigated categories have fraudulent testing.

What do we mean by fraudulent testing in the case of Pop Mech? Let's look at a few categories, starting with air conditioners.

Air Conditioners

Pop Mech’s air conditioners category earned a terrible 41.35% Trust Rating, the highest out of all their fake tested categories. To test this category well, it’s important to measure how long a unit takes to cool a space in seconds or minutes.

Take a look at their “Best Window Air Conditioner” guide.

Popular Mechanics makes claims to test right in the subheadline of their guide and has a dedicated “Why Trust Us?” blurb that covers their commitment to testing, which we show above.

The buying guide itself even has a small "How We Tested" segment (screenshotted below) promising that the air conditioners covered were tested in a real-life setting, with multiple important measurements taken, like cooling throw and temperature drops.

There’s a minor red flag in their blurb above, however: they note that some models weren’t tested, and to compensate, they “consulted engineers” and “scoured the specs”. The former is interesting – the general public usually can’t speak to engineers, but what an engineer says and what a product does aren’t necessarily aligned. The latter point, “scouring” specs, is something anyone can do and doesn’t involve testing: it’s just reading a spec sheet, often included with the air conditioner itself.

Let's continue reading the actual reviews in the guide. We provide a screenshot of one part below, about the Amana air conditioner. Unfortunately, the segments dedicated to how the AC performed don't include many quantitative measurements to support the testing claims.

There’s no data showing the temperature readings they supposedly took, nor any information about how long it takes for an A/C to cool a room of a certain size. This is the reason they received a No for our AC testing question (Indicator 8.5) that looks for quantitative tests of cooling and/or dehumidification speed. There aren’t even noise level measurements, just qualitative language saying, in effect, “Yeah, it’s decently quiet.”
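For context, here is a minimal sketch of the kind of cooling data the guide's claims imply but never show. The temperature log below is hypothetical, standing in for readings from a data-logging thermometer during a window-unit test; it is not Popular Mechanics' data.

```python
# Hypothetical temperature log: (minutes elapsed, room temperature in °F)
# recorded while the air conditioner runs in a closed test room.
readings = [(0, 86.0), (5, 82.4), (10, 79.1), (15, 76.8), (20, 75.2), (30, 74.1)]

def minutes_to_reach(log, target_f):
    """Return the first elapsed time at which the room reached the target temperature."""
    for minutes, temp in log:
        if temp <= target_f:
            return minutes
    return None  # target never reached during the test window

total_drop = readings[0][1] - readings[-1][1]
print(f"Temperature drop over {readings[-1][0]} minutes: {total_drop:.1f}°F")
print(f"Time to cool below 77°F: {minutes_to_reach(readings, 77.0)} minutes")
```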

For all the claims of measurement, there aren't any actual measurements to be found.

The spec data is the one place we can find hard numbers, but spec data is freely available to anyone and doesn’t involve any testing.

We also never found a dedicated air conditioner testing methodology, further hinting that their AC testing claims are fraudulent.

Vacuums

Vacuum cleaners is another fake tested category, where Pop Mech earned a mediocre 32.55% Trust Rating. We investigated their “Best Vacuum Cleaners” guide.

Popular Mechanics doesn’t make a testing claim this time, but their dedicated “Why Trust Us?” blurb leads to a page about their commitment to testing.

This time around, Popular Mechanics has dedicated space to the expert reviewing the vacuum cleaners, who also claims to have tested a wide variety of them. The list includes "several" vacuums that the expert personally tested. "Several" is an important word here, because it means that not every vacuum on the list has been personally tested. In fact, "several" doesn't even mean that most of them were tested. Leaning on research data to help make picks isn't inherently bad, but it isn't testing.

There's a lot in this image, but it all comes down to one thing: there's no test data. Despite claims of personally testing the vacuum, there's nothing in the actual guide to suggest they did. Where's the data on how much noise the vacuum makes? How much debris it picks up or leaves behind? Even the battery life, something that requires nothing more than a stopwatch, isn't given an exact measurement, just a rough approximation. Sure, battery life can be variable, but tests can be run multiple times to get an average, and we don't see that here.
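That kind of repeat testing is trivial to report. Here is a small sketch of how averaged runtimes could be presented; the stopwatch figures below are hypothetical examples, not Popular Mechanics' numbers.

```python
from statistics import mean, stdev

# Hypothetical stopwatch results (minutes) from running the same cordless
# vacuum from full charge to shutdown five times on its standard setting.
runs_minutes = [41.5, 39.8, 42.2, 40.6, 41.1]

print(f"Average runtime: {mean(runs_minutes):.1f} minutes "
      f"(spread of ±{stdev(runs_minutes):.1f} across {len(runs_minutes)} runs)")
```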


So what’s next for Pop Mech?

Pop Mech does a poor job of living up to their testing claims because there isn’t anything on the page that really hammers home that they did their homework and tested everything they said they tested. This ultimately damages Pop Mech’s authority in practical, hands-on advice.

To regain trust, they should commit to sharing clear, quantitative results from their product testing, making sure that their readers know exactly how a product performs. Otherwise, we ask them to change their “testing” claims to “researched”.

Coasting on your reputation as a trusted source and just letting your reviews begin to decay is exploitative, and means you’re just cashing in public trust and goodwill for an easy paycheck.

Fixing this problem doesn’t have to be difficult. But the choice is theirs on how or even if they decide to fix this massive breach of trust.

5.4. WIRED

If you’ve searched for the latest tech trends, you’ve probably run into WIRED. Since 1993, WIRED has made a name for itself as a go-to source for everything from the newest gadgets to deep dives into culture and science.

With 21.81 million visitors a month, WIRED reaches a huge audience, shaping opinions on everything from the latest gadgets to where artificial intelligence is headed.

They cover everything from laptops and gaming gear to electric bikes and smart home devices, making them a go-to for tech enthusiasts.

The problem is, despite their influence, a lot of WIRED’s reviews don’t dig as deep as today’s readers expect. With a 32.36% Trust Rating across 26 categories we investigated, their recommendations don’t exactly inspire confidence.

Their reviews often skip key testing data or even real-world images of the products they claim to test, leaving us wondering just how thoroughly these products were reviewed.


Over time, WIRED’s focus has changed. Their writing is still engaging, but their product reviews have become more surface-level, leaning heavily on impressions instead of detailed metrics.

That’s why we included them in the Fake Five—their massive online reach gives their reviews weight, but without solid testing data to back them up, it’s hard for readers to fully trust what they recommend.

For a brand that once set the bar in tech journalism, this shift has been frustrating for readers who expect more.

The way to reclaim the reputation they built their name on is the same as for the rest of the Fake Five: show their work. When they claim to test, provide proof. Photos, units of measurement, real-world undeniable proof that they're actually testing products.

It’s difficult to test thoroughly, but everyone wins when proper tests are performed. Alternatively, they could simply stop claiming to test. Their Trust Rating won’t improve much, but it’s better than lying, and that’s worth something.

Here's how WIRED stacks up across the 26 categories we investigated.

The results are alarming. Out of 26 categories, 15 fell into the Fake Reviewer classification—a shocking 58%. This includes essential products like routers, TVs, and electric bikes. Another 8 categories, like gaming headsets and keyboards, earned a Not Trusted classification.

We do trust them (a little) in 3 categories which managed a Mixed Trust classification: blenders (62.40%), coffee makers (61.07%), and drones (61.07%).

Let’s dive into a few of WIRED’s fake-tested categories to see exactly what fake testing looks like in their reviews.

Gaming Mouse

WIRED earned an abysmal 18.00% Trust Rating in the gaming mouse category, landing them a Fake Tester classification. Our investigation focused on their “Best Mouse” guide (screenshotted below), which revealed significant trust issues.

The authors Eric Ravenscraft and Jaina Grey make a testing claim right in the subheadline above. Immediately, we expect to find test results in the guide. But as we read through the guide, we can’t find any test results.

In the highlighted part above, those are specs, which are always great to include, but specs don’t indicate you tested. Anybody can get them from a product description or the back of the box.

As we read further down the guide, the highlighted parts below are close to a test – after all, battery life is extremely important for wireless mice.

However, the language around battery life is so evasive that it raises doubts. If you're measuring battery life as a proper test, you'd report not only the hours but also the minutes. That's what constitutes a good, rigorous test: concrete data, not approximations. Unfortunately, this is also the battery life claimed by the manufacturer, which is commonly a "best case" estimate under specific conditions that the user might not be able to replicate. Including information like the response time with no proof you tested it also doesn't help build a case for having actually performed any real testing.

The problem with WIRED’s buying guide is that there’s nothing to support their testing claims. If these mice were indeed tested, they’d have concrete battery life, click latency, DPI tests, and software customization covered, but we don’t get test data for these criteria, just specs. Mentioning the wireless receiver boosts the polling rate just doesn’t cut it, because there’s no explanation of how they got this data. Buying guides tend to be brief, but they can link out to reviews or test data to support the claims without sacrificing brevity.

Webcams

Let’s look at WIRED’s webcam reviews, another fraudulent category. This one’s trust rating wasn’t as low as gaming mice, but it’s still a failing score (30.40%).

The Insta360 Link review we investigated fell short in several areas.

The top of the page doesn’t show us any testing claims, just a basic subheadline, and a nice opening image.

The testing claims roll in once we hit the opening paragraph, and they’re even comparative in nature, with the testing claims suggesting the camera is among the nicest of the ones they’ve tested. The pros section suggests testing too, with key performance criteria mentioned. Despite what the “test webcams” link might have you think in context, it’s actually a link to a “best webcam” guide, not to a testing methodology.

Multiple highlighted portions in this paragraph suggest not only usage but clear testing. The autofocus in particular, one of the most important aspects of a webcam, is put through its paces and does an excellent job of maintaining focus up to 4 inches away – or so the reviewer claims. But there's no imagery to showcase this.

Instead of imagery that gives us an idea of how the camera performs (for example, showing off the excellent autofocus), we instead just get stock images from the manufacturer. This doesn’t build a convincing case for the testing claims that Wired is making in this review, and it casts serious doubt over the whole review. While the text suggests that the webcam may have been used, the total lack of any kind of testing imagery and data makes it very hard to believe.


Conclusion

For a brand that’s all about tech deep-dives, WIRED has some serious issues in how it backs up its product reviews, and with its 32.36% Trust Rating, the gaps are starting to show.

To get back on track, WIRED should bring more transparency into their testing—think real photos (without the unhelpful colorful backgrounds), hard data, named equipment, and detailed testing processes.

This would help them reconnect with readers who want the facts to make smarter buying decisions.

5.5. Forbes

Forbes, founded in 1917 by B.C. Forbes, has been a cornerstone of financial journalism for over a century, renowned for its trustworthy coverage of finance, industry, and business.

The magazine earned credibility with business professionals through initiatives like the “Forbes Richest 400” and by maintaining high standards of factual accuracy.

However, over the past five years, Forbes made a calculated move to leverage its brand equity and trust to gain prominent placement in Google’s search results, expanding into areas like “best CBD gummies” and “best gaming TVs.”

This strategy, known as “parasite SEO,” has led to reviews that lack the depth and expertise Forbes is known for, earning them a terrible Publication Trust Rating of 34.96% (out of the 27 categories we investigated) that raises concerns about the credibility of their non-financial content.

With 65M monthly visitors, Forbes lands a Fake Reviewer Classification with a shocking 9 categories featuring faked testing out of the 27:

Their Trust Rating reflects this poor performance: at 34.62%, even if they cleaned up their act, they’d still be deep in the red and would have a long way to go to get themselves to a more respectable classification.

Too often, Forbes claims to test without actually backing up those claims, and if they want their product reviews to carry the same authority as their financial coverage, there’s a lot of work to be done.

Testing, providing imagery to prove testing, real measurements, solid analysis – it’s a hard road to legitimacy, but one worth walking when everyone stands to gain from it.

Forbes has a lot of ground to cover, though – not only do they have a substantial number of categories where they’re faking tests, they’re simply not testing in the majority of the categories they cover with their reviews, which means they have to ramp up testing across the board.

Alternatively, they could simply stop claiming to test altogether and instead note that they’re publishing “first impressions” reviews, or research-based reviews built on data from around the web. They’ll have stopped lying, but they won’t earn a substantial Trust Rating for doing so.

TVs

Forbes’ TV reviews earned an 18.20% Trust Rating, placing this category squarely in Fake Tester territory. To uncover why, we examined their “Best Gaming TVs” guide (featured below), which revealed glaring issues in trustworthiness.

This testing claim is right in the title, front and center, so everyone can see it. Let’s keep scrolling down to get into the individual reviews.

The highlighted text above is worrying. Testing is mentioned multiple times, but in the same breath, Forbes also says that LG doesn’t share what the actual nits measurement is. Part of testing is doing that yourself – grab a colorimeter and measure the actual brightness. Publications that perform rigorous testing have the equipment to do so, but it’s clear from this text that Forbes isn’t actually testing – they’re just using the TV.

Methodology is huge for testing – it’s how you get consistent results and provide meaningful information to the reader.

The lack of numbers offered above is the first red flag. While the inclusion of a photo of the distortion is extremely helpful and absolutely a step in the right direction, the important thing about viewing angles is the angle. Showing that distortion can occur is good – but not giving the actual angle this happens at is a major misstep.

Now let’s move on to the “How I Tested The Best Gaming TVs” section that we screenshotted below. Here you get more claims about testing and expertise, as well as a few notes on what the reviewer finds important: great picture, solid refresh rate and console compatibility, along with good audio quality.

The issue is that “great picture” is a multi-faceted thing. It’s an entire category of performance, with multiple criteria under it that all have an impact on the overall picture quality. Brightness, color gamut, contrast… the list goes on, and it’s a very important list for getting a complete “picture” on picture quality.

The highlighted segments below are huge red flags. There’s just no data in the preceding text of the review to support the claims being made.

Things like color gamut, brightness, contrast ratio, and EOTF are all testable, measurable and important, but none of them are mentioned in the review. Instead, we get confirmation that specs were cross-checked and warranties were looked at, neither of which constitute an actual test.

Arguably, the most concerning part is the claim that not only is this testing, but this testing is enough to recommend that the televisions will last. Longevity tests can’t be done in an afternoon – they take time; months at the minimum, and years ideally. There’s no doubt Forbes actually used the TVs they got – there’s plenty of photo evidence of them being used. But they didn’t test them at all.

Gaming Headsets

Another category with significant signs of fake testing is gaming headsets, which earned a 30.60% trust rating. Let’s look at their gaming headset buying guide below.

The testing claim from Forbes is (once again) right in the title of the post. Forbes even claims they’ve put these headsets through their paces. Unfortunately, using a headset for hours isn’t quite the same as testing it – at least not entirely. There’s plenty you can learn about a headset by using it, especially given how much things like sound are a matter of taste, but other things like maximum volume and latency are firmly in the realm of measured testing, not simply wearing a headset while you play a few matches of CoD.

The first red flag comes up with the battery life. After stating how important battery life is, we simply get estimates instead of actual testing data showing how long the battery lasted. And given how much the reviewer stressed the importance of so many factors, getting only an approximation of battery life and no further concrete numbers (like maximum volume) is disheartening.

Ultimately, the biggest issue is the lack of measurements from Forbes. For all the subjective qualities a headset has (microphone quality, sound quality, comfort, feel) there are multiple objective qualities that require testing to determine. Battery life, latency, and range are all objectively measurable, but no data is provided. How far can you go with the headset before it cuts out? Without the data, it’s hard to say that Forbes actually tested any of the headsets it claims to have spent so much time on – though they definitely did use them. Even when Forbes seems to be providing an actual measurement (like charge time) the numbers match up one to one with what’s listed in a spec sheet from HyperX.


Conclusion

Simply put, Forbes is not worth trusting until they can get real testing results in front of readers and actually deliver on their claims of testing. They could also simply remove references to testing from categories where they were found to be faking it, but the less you test, the less you can climb.


6. Most Trusted Publications

After all that, it may seem like finding reliable consumer electronics and appliance reviews online is impossible these days. Between fake testing claims and profit-driven motives, who can you actually trust if the biggest names are faking it?

The good news? Not every publisher cuts corners. We’ve uncovered the top trusted testers during our investigation—publications scoring above a 60% Trust Rating that actually test their products and deliver honest, reliable insights.

You might have thought it was impossible, that the 55 indicators were too strict—but four publications, RTINGs, HouseFresh, Top 10 VPN, and VPN Mentor, proved otherwise during our investigation, achieving the status of Highly Trusted testers.

To keep things fair since publications can be diverse and complex, we split the Top 10 Trusted Publications into three groups based on their focus:

  • Broad: Covering over 15 product categories with versatility.
  • Niche: Focused on 3–15 categories for a mix of range and expertise.
  • HyperNiche (or Specialized): Laser-focused on 1–2 sub-categories for unmatched depth.

We investigated 163 Broad, 232 Niche, and 101 HyperNiche publications to find the best in each group. All 496 sites were evaluated against the same 55 indicators. Here’s an overview of the true testers we discovered.

Key Takeaways

  • 🛠️ Publications reviewing fewer categories tend to be more trustworthy:
    • 16.8% of Hyperniche publications (17 out of 101) earned a passing Trust Rating of over 60%. This surpasses Niche’s 10.8% success rate (25 out of 232) and Broad’s 8.6% (14 out of 163).
    • Hyperniche publications have the lowest percentage of Fake Testers (21.78%) compared to Niche (42.24%) and Broad (63.41%) groups. This underscores how a narrow focus helps maintain transparency and accountability, while broader scopes struggle with consistency and credibility.
  • 🏆 Top 8* Trusted Broad Publications
    • The Top 3: RTINGS takes the top spot with an impressive 99.58% Trust Rating, excelling in 13 categories we investigated for broad tech reviews. Your Best Digs (83.18%) and TechGear Lab (80.07%) follow closely, proving their trustworthiness across 7 and 21 categories we reviewed, respectively.
    • Traffic Leaders: Tom’s Guide pulls in an incredible 20.11 million visitors a month, making it the most visited publication on this list. RTINGS (9.10 million) and Tom’s Hardware (3.79 million) also command large audiences, showing their strong influence among readers seeking reliable reviews.
    • Three Exclusions: Wirecutter (80.38% Trust Rating), PCMag (66.85%), and Which? (65.70%) were excluded due to their Fake Reviewer classification. Wirecutter and PCMag were flagged in 3 categories each, while Which? had 7 categories with fake testing. *This left us with only 8 broad publications with at least a 60% Trust Rating.
  • 🏆 Top 10 Trusted Niche Publications
    • The Top 3: HouseFresh dominates with an outstanding 95.95% Trust Rating, earning it a Highly Trusted classification for air purifier reviews. E Ride Hero (83.80%) and Sound Guys (82.40%) round out the top three, showing reliability in electric scooter and audio reviews, respectively.
    • Traffic Leaders: Sound Guys leads the pack in traffic with 2.32 million visitors monthly, followed by BabyGearLab (500,186) and Motor 1 (339,700), reflecting their influence within their niches.
    • One Exclusion: Outdoor Gear Lab (81.35% Trust Rating) was excluded despite its score, as half of the reviews we investigated were flagged for fake testing, disqualifying it from the trusted leaderboard.
  • 🏆 Top 10 Trusted Hyper-Niche (or Specialized) Publications
    • The Top 3: Top 10 VPN leads with an exceptional 102.20% Trust Rating, standing out for its reliability in VPN reviews. VPN Mentor (97.45%) and Air Purifier First (88.65%) round out the top three, excelling in hyper-focused categories like VPNs and air purifiers.
    • Patterns in Specialization: A significant trend emerges in VPN reviews, with 3 of the top 10 publications (Top 10 VPN, VPN Mentor, and VPN Testing) specializing in this category. Other common areas of focus include e-scooters and consumer electronics.
    • No Exclusions: Unlike other focus groups, no publications were excluded from the hyperniche list. This reflects both the smaller number of categories we investigated and the rigorous specialization that these sites bring to their trusted reviews.

Let’s take a look at each focus group’s top trusted testers plus some examples of how they prove they’re testing unlike the many fake testers we discussed earlier.

Top 8 Trusted Broad Publications

The Broad group covers 16 or more product categories, which provides more opportunities to “mess up” in testing practices, like what happened with Wirecutter, Which? and PCMag. This is likely why we only have 8 viable top contenders in this group—there’s simply more room for inconsistencies to surface.

While we discussed the top 3 earlier, some publications in the Broad group still have room for improvement. Tom’s Hardware, Tom’s Guide, and Tech Pro Journal received Mixed Trust ratings of 67.79%, 65.66%, and 63.35%, respectively. While they secured spots on the top trusted leaderboard, their reviews often fall short of the consistency and transparency readers expect across the categories we investigated.

However, this makes the achievements of the leaders even more impressive. At the top of the Broad group is RTINGS, our most trusted publication with an outstanding 99.58% Trust Rating. In a group as challenging as this one, where consistency is hard to maintain across 16 or more categories, RTINGS is a clear leader that proves it’s possible to thoroughly test and be consistent about it. Their commitment to thorough, detailed testing (screenshotted below) sets a high standard that few can match.

Source: RTINGs’ Samsung S90C OLED TV Review

As you can see above and below, RTINGs includes multiple real-world photos of the Samsung S90C TV under review in their testing labs, plus images of the testing equipment being used.

Source: RTINGs’ Samsung S90C OLED TV Review

In the image above, they include several quantifiable test results collected for a single criterion, brightness. If you visit their Testing Methodology for brightness, you’ll learn that they use a Konica Minolta LS-100 Luminance Meter to obtain their measurements. And they have a Testing Methodology for practically every criterion they test, like color gamut, input lag, etc.

As we’ve seen, broad publications face unique challenges, tackling 16 or more product categories while striving for consistency.

Now, let’s turn our attention to the Top 10 Trusted Niche Publications, where specialization plays a key role in building trust.

Top 10 Trusted Niche Publications

The top 10 trusted niche publications showcase the strength of specialization, excelling across 3–15 categories with consistency and trust. This group tackled another challenge by balancing testing depth within a larger scope than hyperniche sites in order to secure a spot in the top 10. Despite one Fake Reviewer exclusion, we still had enough niche publications to qualify for the top 10.

After leaders like HouseFresh, E Ride Hero, and Sound Guys, publications like AV Forums (82.15%) and Food Network (81.85%) highlight excellence in focused areas like AV equipment and food.

Rounding out the list, BabyGearLab (72.90%) and AnandTech (71.40%) demonstrate that niche sites can maintain trust even with technical or everyday essentials. This group proves that success comes from mastering multiple categories, balancing specialization with consistency.

The top testers in the niche group are evaluated using the same 55-indicator framework as the broad and hyperniche groups. This means we’re looking for consistent testing evidence across the board, such as measured test results, testing charts, real images, named testing equipment, etc. Sound Guys demonstrated a thorough testing approach in the headphones category, where they earned a 90.50% Trust Rating.

Take their headphone attenuation test below, for example.

Source: Sound Guys’ AirPods Pro 2 review

The screenshot above from this AirPods Pro review shows how Sound Guys measures attenuation in decibels (dB), offering clear insights into a headphone’s noise isolation capabilities. This precise dB measurement demonstrates how much ambient sound is blocked, providing users with a quieter, more immersive listening experience.

With their specialized focus and commitment to transparent testing, the top 10 niche publications prove that maintaining testing depth across a moderate range of categories is achievable.

Now, let’s narrow the lens even further and explore the top 10 Trusted Hyper-Niche Publications, where mastery in 1–2 categories sets these sites apart.

Top 10 Trusted “Hyper-Niche” (or Specialized) Publications

HyperNiche publications focus on just 1–2 specific categories, making them the most specialized group we analyzed. This narrow scope allows these sites to demonstrate unmatched depth and expertise in their chosen areas, from VPNs to electric scooters to air purifiers.

With fewer categories to cover, hyperniche publications can devote their resources to thorough testing and transparency, often setting the gold standard for what trusted reviews should look like.

Take Top 10 VPN and VPN Mentor as examples. Both excel in VPN reviews, earning Trust Ratings of 102.20% and 97.45%, respectively. Their detailed performance metrics and testing transparency set them apart, proving that hyper-focus pays off. Similarly, Electric Scooter Guide and Air Purifier First specialize in highly specific categories, earning their spots by going deep where it counts.

What’s striking is how these publications demonstrate consistent excellence, even in categories where testing can be highly technical. For instance, Aniwaa brings precision to 3D printing reviews, while TFT Central dives into consumer electronics with thorough testing protocols. This attention to detail not only builds trust but also sets a benchmark for how hyperniche reviews should be done.

All hyperniche sites are held to the same 55-indicator evaluation framework as every other group, and the best of them consistently rise to the challenge. For example, Top 10 VPN provides detailed testing data, such as recorded download speed measurements (Mbps) in their ExpressVPN review, offering clear, actionable insights in their VPN comparisons.

Their reviews are effective, clear, and backed by real data—a trend that continues across the most trusted hyperniche publications.

The top 10 hyperniche publications prove that specialization is one path to trustworthiness. When you focus on doing one thing exceptionally well, you deliver value that readers trust—and return for time and again.

Looking at our Trust Rating results across all 496 sites, a pattern emerges across the groups: a publication’s scope plays a critical role in determining its trustworthiness.

Success Rate by Scope

While we investigated fewer hyperniche publications than broad or niche ones, the hyperniche group had the highest success rate of any group.

Out of the 101 hyperniche publications we investigated, 17 publications surpassed the 60% Trust Rating benchmark—a success rate of 16.8%. While this may seem low, it exceeds the success rates of broader focus groups.

For comparison, only 25 out of 232 niche publications passed (a 10.8% success rate), and just 14 out of 163 broad publications did (an 8.6% success rate).

The higher success rate of the hyperniche group is likely a direct result of their narrow focus on just one or two categories, allowing for deeper expertise and more reliable reviews. So there’s less room to “mess up” and fake test the products you review.

And it cascades down the groups. Niche publications have the second-highest success rate, and broad publications have the lowest. With many more categories to cover, there are simply more chances to be inconsistent in thorough testing.

As for the Fake Tester distribution by scope (a quick calculation sketch follows this list):

  • We had 22 Fake Testers out of 101 hyperniche publications (21.78%).
  • 98 out of 232 niche publications were Fake Testers (42.24%).
  • 104 out of 164 broad publications were Fake Testers (63.41%).
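Here’s that sketch: a minimal bit of Python (not the report’s actual tooling) that reproduces the scope-level percentages above from the raw counts quoted in this section.

```python
# Minimal sketch: recompute pass rates and Fake Tester shares by scope
# from the counts quoted in this section. Not the report's actual tooling.
counts = {
    # scope: (publications investigated, passed the 60% benchmark, Fake Testers)
    "Hyperniche": (101, 17, 22),
    "Niche": (232, 25, 98),
    "Broad": (163, 14, 104),  # the Fake Tester share above is quoted against 164
}

for scope, (total, passed, fake) in counts.items():
    print(f"{scope:<10}  pass rate: {100 * passed / total:5.1f}%   "
          f"Fake Testers: {100 * fake / total:5.1f}%")
```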

These numbers highlight a critical takeaway: the broader the scope, the harder it is to uphold trustworthiness.

Hyperniche sites demonstrate that focus and specialization often lead to higher-quality reviews, while broad publications face an uphill battle in delivering consistent and reliable testing.

That said, there are some exceptions to this takeaway. Broad-focused websites like RTINGS prove it’s possible to achieve Highly Trusted status, while Your Best Digs and TechGearLab show that Trusted is within reach thanks to consistent, high-quality reviews. These examples emphasize that while patterns exist, trust ultimately comes down to thorough testing and transparency.

Now, let’s shift gears and explore the least trusted publications by scope and try to identify any patterns.

The Least Trusted Publications

As we know, not every publication earns readers’ trust. We’ve discussed many famous publications and parent companies that earned failing trust ratings and were found to be faking their testing. But what about who’s at the very bottom of the barrel? Who has the lowest Trust Rating of all?

If you were curious, 86 publications earned under a 10% Trust Rating (17% of the dataset). Two publications (Cool Material and Reviewnery) scored a 0%. Across the broad, niche, and hyper-niche categories, a lot of publications stand out for all the wrong reasons—unreliable testing, fake reviews, and a lack of transparency.

These trust issues aren’t confined to one type of ownership either. Both independently owned sites and those backed by large corporations contribute to the problem.

In this section, we’ll highlight the least trusted publications by scope, showcasing the Top 10 Untrustworthy Broad, Niche, and Hyper-Niche publications. These are the sites with the lowest Trust Ratings in our dataset, reflecting how poor practices can undermine even specialized expertise.

Let’s take a closer look at how these sites failed to meet the standards readers deserve.

Key Takeaways

  • 🛑 Top 10 Untrustworthy Broad Publications
    • 📉 The Worst Performers: The bottom three publications—Cool Material (0.00%), HackerNoon (3.60%), and Reliant.co.uk (4.20%)—reflect a severe lack of trustworthiness. Cool Material stands out as the absolute worst, with no evidence of reliable testing or transparency, earning a Fake Reviewer classification.
    • 🤔 Independence Doesn’t Always Equal Trust: Out of the 10 untrustworthy publications, five are independently owned—HackerNoon, Reliant.co.uk, Reset Anything, Perform Wireless, and Frenz Lifestyle Hub. These results show that while independence offers the opportunity for unbiased testing, it doesn’t guarantee credibility.
  • 🛑 Top 11 Untrustworthy Niche Publications
    • 📉 The Absolute Worst: Reviewnery (0.00%) stands out as the least trustworthy niche publication, showing no evidence of reliable testing or transparency. Similarly, LoLVVV (2.50%) and Antenna Junkies (3.25%) fail to deliver credible reviews, with both earning a Fake Reviewer classification.
    • 🤔 More Independent Ownership Challenges: Out of these 11 untrustworthy sites, 8 are independently owned, including Reviewnery, iOSHacker, and Fizzness Shizzness. This trend highlights that independence doesn’t inherently lead to credibility—transparency and rigorous testing remain key.
    • 🤝 A Tie in Untrustworthiness: Wilderness Today and Tech Junkie both earned a 6.10% Trust Rating.
  • 🛑 Top 10 Untrustworthy Hyper-Niche Publications
    • 📉 The Worst of the Specialized: TurboFuture ranks as the least trustworthy hyper-niche publication, with a 2.50% Trust Rating and a Fake Reviewer classification.
    • 🤔 Independence Doesn’t Equal Reliability: 7 out of 10 of these publications are independently owned, including Best Double DIN Head Unit and The Audio Experts.

Across all three groups, every publication on the untrustworthy lists scored below a 7% Trust Rating. Let’s start off with the Broad group. These sites, with their wide-reaching focus and significant influence, failed to uphold the standards of transparency and rigorous testing that readers rely on.

Top 10 Untrustworthy Broad Publications

Broad publications typically cover 16 or more categories, giving them significant reach and influence over a vast array of consumer decisions. However, with such a wide scope comes the challenge of maintaining consistency and transparency across every category. The Top 10 failed to meet this challenge, with poor testing practices, fake reviews, and a lack of evidence undermining their credibility.

These sites, spanning everything from tech to lifestyle to general news, demonstrate how a lack of rigor across multiple categories can lead to widespread mistrust. Let’s take a closer look at how these publications fell short in upholding standards across their broad focus.

  • While the bottom three, Cool Material (0.00%), HackerNoon (3.60%), and Reliant.co.uk (4.20%), hit rock bottom, the rest of the list includes big names with serious trust issues.
  • The Economic Times (5.30%) and The Sacramento Bee (5.57%) are well-known news outlets, yet their reviews lack transparency. The Sacramento Bee even earned a Fake Reviewer classification, showing how even established brands can mislead readers.
  • Good Morning America (6.45%) is a household name, but its product reviews fall flat. Without credible testing to back them up, trust takes a hit.
  • In tech, Reset Anything (6.30%) and Perform Wireless (6.85%) claim expertise but fail to deliver. Both earned Fake Reviewer classifications for lacking testing proof.
  • Even popular eCommerce and lifestyle sites like HT Shop Now (5.70%) and Frenz Lifestyle Hub (6.85%) struggle with consistency. Their broad focus exposes readers to questionable recommendations across categories.

Out of the Top 10 Untrustworthy Broad Publications, 50% were classified as Fake Reviewers, and the other 50% were marked as Not Trusted. Additionally, 50% of these sites are independently owned, highlighting that independence doesn’t necessarily equate to trustworthiness in the Broad group.

Top 11 Untrustworthy Niche Publications (because of a tie)

Niche publications typically focus on 3 to 15 categories, making them more specialized than broad publications but less narrowly focused than hyper-niche sites. This balance offers them a unique opportunity: they can dive deeper into specific topics while still appealing to a broader audience. However, maintaining consistency across even a handful of categories can be challenging.

These publications demonstrate what happens when specialization doesn’t translate into trustworthiness. From Fake Reviewer classifications to a lack of transparency in testing, their shortcomings undermine their ability to deliver reliable recommendations.

  • Reviewnery (0.00%) stands out as the least trustworthy niche site in our analysis. With no evidence of reliable testing or transparency, it fails to meet even the lowest expectations.
  • LoLVVV (2.50%) and Antenna Junkies (3.25%) are also among the worst offenders. Both earned Fake Reviewer classifications, reflecting a lack of credible testing practices despite their focus on gaming and tech reviews.
  • GenderLess Voice (4.00%) represents a unique niche in voice training but struggles with credibility. As a collaborative effort, its lack of consistent testing protocols leaves readers questioning its reliability.
  • Removu (4.35%) and iOSHacker (5.25%) fail to deliver on their tech-focused promises. Both sites lack transparency and testing evidence, earning their places on this untrustworthy list.
  • The Gadget Nerds (5.45%) and Architizer (5.75%) fall short in their respective niches of tech and architecture. While their specialized focus should enable deeper insights, their low Trust Ratings suggest otherwise.
  • Fizzness Shizzness (6.05%) and Wilderness Today (6.10%) round out the list alongside Tech Junkie (6.10%). These sites highlight the ongoing challenge of maintaining credibility in niche markets, where readers expect expertise but often receive inconsistent or misleading information.

These results show that niche sites, while smaller in scope than broad publications, still face significant challenges in delivering trustworthy reviews. Specialization alone isn’t enough—without transparency and rigorous testing, even niche publications can fall flat.

Out of the Top 11 Untrustworthy Niche Publications, 36% were classified as Fake Reviewers, and 64% were marked as Not Trusted. Notably, 73% of these sites are independently owned, showing that independence alone doesn’t guarantee credibility either in the Niche group.

Top 10 Untrustworthy “Hyper-Niche” (or Specialized) Publications

Hyperniche publications focus on just one or two specific categories, which should allow for detailed, high-quality reviews. However, as this list of the Top 10 Untrustworthy Hyperniche Publications shows, specialization alone doesn’t guarantee trustworthiness.

  • TurboFuture (2.50%) earns the dubious honor of being the least trustworthy hyperniche site, flagged as a Fake Reviewer with no evidence of credible testing.
  • Other hyper-focused sites like Best Double DIN Head Unit (2.85%) and FPS Champion (3.65%) fail to deliver reliable reviews, despite their narrow scope in automotive and gaming categories, respectively.
  • Interestingly, WGB (3.60%) and Best Airpurifiers (4.20%) operate in technical categories where precision is crucial, yet both fall short of providing measurable, transparent testing.
  • The majority of these sites—70%—are independently owned, including Silvia Pan and The PC Wire. This high independence rate underscores that autonomy alone isn’t enough to ensure quality or trustworthiness.
  • At the higher end of this untrustworthy list, Aspire360 (6.05%) and Audio Direct (6.10%) still failed to provide the consistency and transparency necessary to gain reader trust.

Among the Top 10 Untrustworthy Hyperniche Publications, 10% were classified as Fake Reviewers, while the remaining 90% were marked as Not Trusted. Additionally, 70% of these publications are independently owned, highlighting that independence alone doesn’t guarantee trustworthiness.

Overall, looking at our top 10 untrustworthy publications across the three scope groups, it appears that independence alone doesn’t guarantee trust.

But we noticed a pattern that most of the Highly Trusted and Trusted publications are independent, which we explore further in the next section.


7. The Independent Testers

A significant chunk of our dataset was taken up by independent testers – publications that forged their own path without raising capital and have stayed free of the influence of large publishers and megacorps.

Unfortunately, this has not translated to excellence. Huge numbers of independent publications aren’t trusted — or worse, are fake reviewers — but they do fare better than the corporate-owned publications.

While independent publications overall suffer from a lower average Trust Rating (32.11% compared to 33.71% for corporate-owned publications), there are more independent Trusted and Highly Trusted publications.

In fact, independent publications are the only Highly Trusted publications in our dataset!

Beyond that, corporate-owned publications that earned a “Trusted” classification from us have a lower Trust Rating on average (74.56%) compared to “Trusted” independent publications (80.43%).

What does this mean? Independent publications are more trustworthy when they’re actually classified as Trusted.

More importantly, because independent publications are the only publications that actually managed Highly Trusted classifications, they’re the only publications that produce truly useful, actionable information and testing data.

So if consumers are trying to find useful information on products, they have very few resources to turn to – and none of them are owned by corporate media. The power of the independent publisher means not only are they your best bet for highly trustworthy practices and information, they’re your best bet for truth.

Key Takeaways

  1. You have a better chance of getting useful information from an independent publication – but not by much. 6.8% of the independent sites we researched are trustworthy or highly trustworthy, while just 4.9% of corporate sites are trustworthy (and not a single one is highly trusted.)
  2. The only Highly Trusted publications are independent, and there are not a lot of them. There are just 4 publications that managed a “Highly Trusted” classification – 0.8%.
  3. Among the aspects of trust ratings, Data Science contributes most significantly to independent publishers’ scores, followed by Visual Evidence. This suggests that independent publishers may excel in providing data-driven, evidence-based reviews.
  4. The wide range of trust ratings among independent publishers (0.00% to 102.20%) indicates substantial variation in trustworthiness. This variation might be explained by factors such as resources, expertise, niche focus, and individual publication practices.
  5. The chances of independent publications being fake reviewers are distressingly high. 39% of the independents in our dataset are fake reviewers, with testing claims that are not supported by testing data and custom imagery. It’s difficult to say exactly why there are so many – but pressure created by corporate media could be the culprit, effectively creating a “race to the bottom” where costs are minimized (along with useful information and testing) in order to keep ledgers in the black.

For this analysis, independent publishers are defined as tech review publications that have not raised external capital, have not acquired other sites, are not a division of a larger conglomerate, and are not publicly traded companies. In our dataset, these publishers are marked with a “Yes” in the independent column.

We have roughly a 60/40 split between Independent and Corporate publishers, tilted toward indies. Here are some fast facts (a short sketch of the math follows this list):

  • Of the 295 independent publishers we have in our dataset, 116 of them are Fake Reviewers. That means 39.32% of indies are fake testers.
  • By contrast, of the 201 corporate publishers in our dataset, 108 of them are Fake Reviewers. That means 53.47% of corporate-owned publishers are fake testers.
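Here is that sketch (illustrative only, not the report’s pipeline); the 60/40 split and the Fake Reviewer shares follow directly from those counts.

```python
# Sketch: derive the ownership split and Fake Reviewer shares from the
# counts quoted above. Illustrative only, not the report's pipeline.
independent_total, independent_fake = 295, 116
corporate_total, corporate_fake = 201, 108

dataset_total = independent_total + corporate_total  # 496 publications in total
print(f"Independent share of dataset: {100 * independent_total / dataset_total:.1f}%")
print(f"Fake Reviewers among indies: {100 * independent_fake / independent_total:.2f}%")
print(f"Fake Reviewers among corporate publishers: {100 * corporate_fake / corporate_total:.2f}%")
```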

Independent publications are also notable for largely gaining their trust thanks to their high level of specialization. Consider this table:

You’ll notice that aside from RTings, TechGearLab and Your Best Digs, most of the independent publications cover a single category.

This tells us two different things.

The first thing it tells us is that specialized sites are doing better than broader sites – but as we saw above, specialization doesn’t necessarily mean you’re guaranteed to do better (in fact, single category sites tend to do worse on average.) Consider Cool Blue – they claim to test televisions, but their testing process is qualitative, without hard numbers to back up what mostly comes across as first-impressions, hands-on reviews.

By comparison, TFT Central creates detailed charts and tables that cover how a monitor is actually performing, with a variety of response times being measured out down to the tenths of a millisecond. Their focus is on monitors, and they turn this narrow focus into expertise.

Source: TFT Central’s response time test results

Single category websites have the ability to perform extremely in-depth dives into a category and provide incredible information, but most prefer not to.

Secondly, it tells us that having a high average Trust Rating while testing multiple categories is an impressive feat – difficult to manage, but actually possible. Publications like RTings and TechGearLab, which cover over a dozen categories each, are particularly praiseworthy for successfully using rigorous testing methods across multiple product categories and providing useful testing data. Consider how RTings structured their input lag data – it’s exhaustive.

Source: RTings’ input lag test results

Meanwhile, places like TechGearLab provide excellent charts that not only show important testing data but also demonstrate how thoroughly they actually test, for instance by charting temperature readings across multiple brews with coffee makers.

Results like these are heartening to see, because they show that independent testing can produce exceptional results that actually illustrate how a product performs.

Not all independent testers do as good a job as places like RTings or TechGearLab (in fact, many do a terrible job), but the fact that only independent testers produce the highest-quality, most trusted data means the profit-over-people approach of so many corporate media companies is limited and short-sighted.


8. Product Categories in Focus

Trying to do research on products has become more difficult over the last several years. In addition to the ever-changing landscape of search results on the part of Google’s search empire, there has been a continuous shift (mostly a decline) in the quality of reviews being published by outlets. This isn’t to say they’re being written more poorly, but rather that many reviews now make testing claims that are either poorly supported or entirely unsupported.

Assumptions: we assume that the 30 categories we assessed are a good representation of tech journalism. We recognize that there are many other categories to cover, but given the timetables and cost, we felt these 30 are an accurate representation of what is occurring in the industry.

Implications of our findings: almost half the time, you’ll run into a fake review on the web, whether via Google or another source. We also found that publications faking their tests may simply be copying another website’s tests. The problem with this is that it can very well become the “blind following the blind,” and it’s very difficult to quantify who is doing it – which undermines the entire ecosystem.

Impetus and why fake reviews exist: the economics of the matter push some publications toward shortcuts. Testing is pricey, especially in more technical categories like TVs, and to get a better ROI as a publication, it’s cheaper to copy or fake tests.

8.1. The Downfall of Gaming Journalism

For this analysis, we focus on gaming hardware and peripherals, specifically gaming chairs, gaming headsets, and gaming mice. These products form a crucial part of the gaming experience and are frequently reviewed by tech publications. We investigated 111 individual gaming publications and 266 sets of reviews and guides across three individual categories.

Key Takeaways

  1. Gaming Reviews Lack Trust: A staggering 77% of gaming tech reviews are untrustworthy, highlighting the severe credibility issues in this space.
  2. Top Trusted Sources: Rtings and WePC stand out as reliable publications in the gaming category, earning our highest trust.
  3. Fake Testing Red Flags: 12% of gaming tech reviews exhibit fake testing practices. The biggest offenders? Forbes, Consumer Reports, and Digital Trends.
  4. Not Trusted Dominates: Nearly half (45%) of gaming reviews fall into the Not Trusted classification. This includes reviews from major names like GamesRadar, PC Gamer, and IGN Middle East.
  5. Trusted but Barely: Trusted gaming publications earn an average Trust Rating of 76%—just scraping the bottom half of the Trusted range (70–89%). There’s room for improvement.

How to Properly Test Gaming Gear

When reviewing gaming products, proper testing means focusing on measurable, meaningful results—not vague, unsupported claims. Our investigation is guided by our own expertise and trusted expert testing methodologies that prioritize data-driven insights and real-world performance.

  • For gaming headsets, the key criteria to test include audio quality, microphone clarity, and latency. Reliable testing requires tools like acoustic chambers, artificial ear simulators, and signal generators to evaluate performance. Without such tools, claims about audio precision or low latency lack credibility. Read our gaming headset testing methodology to learn more.
  • When testing gaming chairs, comfort and durability are critical, with a focus on ergonomics, adjustability, and weight limits. To ensure these are more than just buzzwords, testers should use pressure mapping systems and weighted testing equipment for objective results. We have a gaming chair testing methodology as well.
  • For gaming mice, we look at sensitivity (DPI/CPI), click latency (ms), and software customization. Proper testing involves DPI testers, latency analyzers, and software evaluation tools to measure how accurately and quickly the device responds to user input. We haven’t published a gaming mouse testing methodology yet, but our experts determined the most important gaming mouse criteria to test at the start of our investigation.

Using credible tools and transparent methods, gaming product reviews can achieve higher trust ratings that reflect true testing. After all, publications with gaming reviews that had a category-specific testing methodology have a higher average Trust Rating as a group (62.90%) than publications without one (42.28%).
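That comparison is a simple grouped average. Here is a minimal, hypothetical sketch of how it could be computed; the column names and sample rows below are placeholders, not our dataset.

```python
# Hypothetical sketch of the grouped-average comparison described above.
# Column names and sample rows are illustrative, not the report's data.
import pandas as pd

reviews = pd.DataFrame({
    "publication": ["Site A", "Site B", "Site C", "Site D"],
    "trust_rating": [72.5, 53.3, 38.1, 46.4],        # Trust Ratings in percent
    "has_methodology": [True, True, False, False],   # category-specific testing methodology?
})

# Average Trust Rating for publications with vs. without a category-specific methodology
print(reviews.groupby("has_methodology")["trust_rating"].mean())
```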

Now let’s take a closer look at which publications did and didn’t do their homework in the gaming product group.

Gaming Trust Rating & Classification Analysis

Now, let’s take a closer look at the gaming reviews dataset with this table that showcases the classifications of 111 publications across gaming headsets, chairs, and mice.

Gaming Chairs (81 publications)

Gaming Headsets (97 publications)

Gaming Mice (88 publications)

  • Fake Testers: 13 publications, such as Wirecutter, CNET, and The Outer Haven, make up 14.77%, the largest group of Fake Testers in a single category.
  • Highly Trusted: 1 publication, RTINGs, attained the highest trust rating.
  • Not Trusted: 44 publications, like Forbes, Popular Mechanics, and Mashable, represent 50%, marking the highest percentage of Not Trusted reviewers in the gaming category.
  • Trusted: Only 2 publications, Techgear Lab and Tom’s Hardware, were classified as Trusted, showing a severe shortage of reliable sources in this category.

Gaming categories face significant trust challenges, with Not Trusted publications dominating the landscape, especially in gaming mice (50%) and gaming headsets (40.21%). Fake Testers are less prevalent compared to other categories, but the low number of Trusted and Highly Trusted publications, particularly in gaming mice, leaves readers with limited reliable sources for making informed purchasing decisions.

Now, let’s examine the statistical overview for the gaming category. These numbers highlight key trends and variations across all 266 reviews and guides in this group, helping us understand its trust landscape and how it compares to the other groups (a quick sketch of how these summary statistics are computed follows the table).

  • Mean: 44.99%
  • Median: 46.85%
  • Range: 5.25% – 99.80%
  • Standard Deviation: 19.90%
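For readers who want to reproduce this kind of summary, these are standard descriptive statistics. A minimal sketch, assuming a group’s per-publication Trust Ratings are collected in a plain Python list (the values below are placeholders, not the report’s data):

```python
# Descriptive statistics over a group's Trust Ratings.
# The ratings list is a placeholder, not the report's gaming data.
from statistics import mean, median, pstdev

ratings = [5.25, 18.0, 30.6, 44.99, 62.9, 76.0, 99.8]  # placeholder percentages

print(f"Mean:               {mean(ratings):.2f}%")
print(f"Median:             {median(ratings):.2f}%")
print(f"Range:              {min(ratings):.2f}% – {max(ratings):.2f}%")
# The report doesn't state whether it uses population or sample standard deviation;
# swap pstdev for statistics.stdev if the sample version is intended.
print(f"Standard deviation: {pstdev(ratings):.2f}%")
```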

The gaming category stands out with a mean Trust Rating of 44.99%, slightly above other categories like small appliances and audio & visual.

The median of 46.85% further suggests a moderate level of trust, though the presence of “Fake Testers” keeps the average lower than it could be.

The range spans from 5.25% to 99.80%, showing that while a few exceptional publications excel, the majority struggle to meet trustworthiness benchmarks.

A standard deviation of 19.90% reflects a less volatile distribution compared to categories with broader variation, such as home office products.

While gaming reviews show marginally higher trust levels compared to some other categories, the prevalence of untrustworthy publications remains a significant issue. This highlights the need for more rigorous testing and transparency in the gaming space.

Next, we shift focus to the Home Office category, where reliability is critical for products that support everyday work and productivity. Let’s see how those publications stack up.

8.2. Fake Testing in Home Office Reviews

The home office segment of our analysis focuses on a few important home office products. These include computer monitors, keyboards, mice, office chairs, printers, routers and webcams. We feel these products are the cornerstone of any home office when it comes to tech, and they’re the most likely products to be reviewed by any tech publication.

  1. Computer monitors and keyboards have huge problems with fake reviews and fake testing. 42% of the reviewers covering computer monitors and keyboards showed evidence of faked testing.
  2. There is an alarming amount of untrustworthiness surrounding keyboard reviews. 92% of all of the reviewers in our dataset that covered keyboards were either faking their testing or were untrustworthy. Only a single publication earned a trust rating greater than 90%.
  3. Almost half of router reviewers are worth trusting to some extent. While reviewers with more mixed Trust Ratings make up more than half of these trustworthy-to-some-degree reviewers (34 of the 57), this still leaves routers as the category where a consumer is most likely to find reviews and reviewers worth trusting on some level.

Here’s a closer look at how the home office categories fared in our analysis. Below is an Airtable embed showcasing all 723 sets of reviews and buying guides we investigated across these seven critical product categories. This comprehensive dataset spans 208 individual office sites, with some publications covering just one category while others had reviews on up to all seven categories.

Computer Monitors (117 reviewers)

  • Fake Reviewers: 49 reviewers, comprising 41.88% of this category.
  • Highly Trusted: Just 1 reviewer achieved top-tier trustworthiness.
  • Not Trusted: 35 reviewers lacked credibility, further emphasizing trust issues.
  • Trusted: 18 reviewers provided reliable insights, offering a small but valuable resource.

Keyboards (123 reviewers)

  • Fake Reviewers: 52 reviewers, making up 42.28%—the largest proportion of Fake Reviewers in the home office group.
  • Highly Trusted: Only 1 reviewer stood out as highly trustworthy.
  • Not Trusted: 62 reviewers, highlighting significant reliability issues.
  • Trusted: Just 2 reviewers provided credible reviews, indicating a major gap in trustworthy content.

Mice (142 reviewers)

  • Fake Reviewers: 30 reviewers, or 21.13%—a smaller proportion compared to keyboards but still notable.
  • Highly Trusted: 1 reviewer achieved a top-tier rating.
  • Not Trusted: 70 reviewers—nearly half—demonstrate trust issues in this category.
  • Trusted: Only 5 reviewers offered reliable insights, underscoring a small pool of trustworthy sources.

Office Chairs (80 reviewers)

  • Fake Reviewers: 8 reviewers, representing 10%—the smallest proportion of Fake Reviewers in the home office group.
  • Not Trusted: 40 reviewers—half of the dataset—lacked credibility.
  • Trusted: 8 reviewers were found to be trustworthy, matching the number of Fake Reviewers.

Printers (66 reviewers)

  • Fake Reviewers: 11 reviewers, or 16.67%, highlighting fewer trust issues compared to other categories.
  • Highly Trusted: 2 reviewers stood out with top-tier ratings.
  • Not Trusted: 27 reviewers showed credibility concerns.
  • Trusted: 9 reviewers provided reliable and transparent reviews.

Routers (118 reviewers)

  • Fake Reviewers: 19 reviewers, making up 16.10%—a relatively low proportion for this group.
  • Highly Trusted: 2 reviewers achieved the highest trust ratings.
  • Not Trusted: 30 reviewers showed reliability concerns.
  • Trusted: 21 reviewers offered reliable and consistent content, the largest group of Trusted reviewers in the home office category.

Webcams (77 reviewers)

  • Fake Reviewers: 21 reviewers, comprising 27.27%.
  • Not Trusted: 26 reviewers, making up over one-third of this category.
  • Trusted: 12 reviewers provided credible reviews, showing a decent pool of trustworthy content compared to other categories.

The numbers reveal that while trustworthy reviewers exist, they’re vastly outnumbered by Fake and Not Trusted reviewers, leaving categories like keyboards and computer monitors particularly problematic. Only a handful of categories, like routers and printers, show consistent trust scores, highlighting a need for better transparency across home office reviews.

We conducted another statistical analysis on this group, encompassing all seven product categories, providing insights into overall trends and averages.

  • Mean: 40.56%
  • Median: 39.20%
  • Range: 2.45% – 101.40%
  • Standard Deviation: 22.44%

The home office category shows a slightly lower mean Trust Rating of 40.56% compared to gaming’s 44.99%, reflecting greater challenges in achieving consistent trustworthiness across its reviewers. The median Trust Rating of 39.20% for home office products also falls below gaming’s 46.85%, highlighting a broader issue with mid-range trust scores in this category.

The range of Trust Ratings in the home office category is wider (2.45% – 101.40%) than in gaming (5.25% – 99.80%), which points to more significant variability among reviewers. This variation could be attributed to the diverse and complex nature of products like computer monitors and keyboards, where testing standards are harder to maintain consistently.

The standard deviation of 22.44% in home office is slightly higher than gaming’s 19.90%, indicating more spread in reviewer reliability within the dataset. Overall, while both categories face trust challenges, home office shows slightly more inconsistency and lower overall trustworthiness, making it a more problematic group for consumers seeking reliable reviews.


8.3. Big Problems in Small Appliances

This product category is pretty wide as far as the types of products it can cover, but we focused on a small set that are frequently covered by tech publications. The categories include air conditioners, air purifiers, blenders, coffee makers, fans and vacuum cleaners.

  1. It’s difficult to trust publications for useful information about health-sensitive devices like air purifiers. Three out of every four publications covering air purifiers produce fake reviews or untrustworthy ones, either because they fake their tests or do not perform any at all. 23% of the sites we analyzed that covered air purifiers faked their testing, meaning you have a nearly 1 in 4 chance of being outright misled.
  2. A little over one in three vacuum reviewers have serious issues with faking testing. 36% of the reviewers analyzed exhibited clear signs of faked testing and fake reviews.
  3. Air conditioners are the most likely place to find trustworthy reviewers, but none of them are “Highly Trusted”. Almost 27% of the reviewers covering A/Cs are worth listening to, though almost half of those earn only “Mixed Trust”.

Let’s take a closer look at the numbers. We analyzed 555 total appliance reviews and buying guides from 205 individual sites. Some focused on one category, while others tackled all six categories. Here’s the full breakdown of our work in small appliances.

Vacuum Cleaners (126 reviewers)

  • Fake Reviewers: This category has 45 fake reviewers, making up 35.71% of this category – the largest proportion of Fake Reviewers across small appliances.
  • Highly Trusted: 7 reviewers achieved the highest trust ratings, highlighting some strong performers in this category.
  • Not Trusted: 42 reviewers, showing a significant portion that lacks credibility.
  • Trusted: 9 reviewers stood out with reliable and transparent practices.

Air Purifiers (125 reviewers)

  • Fake Reviewers: 29 reviewers, representing 23.20% of the group.
  • Highly Trusted: 4 reviewers, reflecting a rare but notable presence of trustworthy sources.
  • Not Trusted: 65 reviewers dominated this category, highlighting credibility issues with over 50% untrustworthy.
  • Trusted: 9 reviewers offered reliable insights, demonstrating a small pool of trust in the dataset.

Coffee Makers (106 reviewers)

  • Fake Reviewers: 25 reviewers, accounting for 23.58%, showing similar issues to air purifiers.
  • Highly Trusted: 2 reviewers achieved standout scores, proving excellence in this space is rare.
  • Not Trusted: 50 reviewers—nearly half the dataset—further emphasize trust concerns.
  • Trusted: 11 reviewers provided credible, thorough insights.

Blenders (82 reviewers)

  • Fake Reviewers: 17 reviewers, representing 20.73% of the group.
  • Highly Trusted: Just 1 reviewer reached the highest trust level.
  • Not Trusted: 39 reviewers, indicating this category also struggles with trust.
  • Trusted: 12 reviewers stood out as trustworthy, offering consumers a valuable resource.

Fans (64 reviewers)

  • Fake Reviewers: 18 reviewers, making up 28.13%—the largest proportion of Fake Reviewers outside of vacuum cleaners.
  • Not Trusted: 34 reviewers, showing major reliability gaps in this category.
  • Trusted: 8 reviewers provided credible and reliable reviews, though the pool remains small.

Air Conditioners (52 reviewers)

  • Fake Reviewers: 9 reviewers, or 17.31%—the smallest proportion of Fake Reviewers in this group.
  • Not Trusted: 24 reviewers, reflecting widespread issues with credibility.
  • Trusted: 8 reviewers stood out for their transparency and testing practices.

The small appliance categories reveal a mixed landscape of reliability. While vacuum cleaners and air purifiers lead in dataset size, their average Trust Ratings reflect only moderate trustworthiness. Fake Reviewers dominate several categories, underscoring the ongoing challenge of finding reliable sources. Despite this, a small but consistent pool of Trusted reviewers offers some hope for readers seeking honest recommendations.

Now, let’s dive deeper into the numbers with our statistical analysis, showcasing Trust Ratings across the small appliance group.

  • Mean: 38.39%
  • Median: 35.00%
  • Range: 2.85% – 101.40%
  • Standard Deviation: 24.45%

When analyzing the small appliances category, its statistical performance highlights notable issues in trustworthiness when compared to home office and gaming categories:

  • Trust Ratings: Small appliances had the lowest mean Trust Rating at 38.39%, trailing behind home office (40.56%) and gaming (44.99%). This lower average underscores widespread reliability challenges in small appliance reviews.
  • Median Trust Ratings: At 35%, the median for small appliances is also the lowest among the three categories, further emphasizing trust issues. Home office fares slightly better at 39.20%, while gaming leads with 46.85%, indicating that gaming publications are more consistently trusted.
  • Range and Consistency: While small appliances feature a broad Trust Rating range (2.85% to 101.40%), the standard deviation of 24.45% signals higher variability compared to gaming (19.90%) and home office (22.44%). This inconsistency makes it harder for consumers to rely on small appliance reviewers across the board.

The small appliances group reveals some of the most troubling trustworthiness issues across all product groups. With the lowest average and median Trust Ratings among the categories we’ve analyzed, paired with significant variability, this group clearly struggles with credibility.

As we transition to the audio & video category, we’ll explore whether these trends persist or if this segment shows greater promise in delivering trustworthy reviews.


8.4. Trust Issues in Audio & Video Reviews

The audio and video category is a much more focused category that has far fewer products in it, though it also features some of the most difficult to test products across all the categories we researched. Televisions, headphones, projectors, soundbars and speakers all live in this product category, and many of these require specialized equipment to properly test.

  1. Problems with fake testing and fake reviewers run rampant in audio/video tech reviews. Of the tech reviewers that covered audio & video equipment, 42% of them were faking their testing.
  2. Fake reviews are a major problem when it comes to projector reviews. Over 66% of the tech reviewers who covered projectors showed signs of faked testing.
  3. You can’t trust 4 out of every 5 TV reviews. Over 82% of the tech reviewers covering televisions have serious trust issues, either because they’re untrustworthy or show clear signs of fake testing and fake reviews.

Our analysis includes 769 sets of reviews and buying guides from 332 individual publications. Each publication could cover anywhere from one to all five of these categories, offering insights into the breadth and depth of their testing practices.

Below is an embedded Airtable showcasing all the data we gathered for these five categories.

Headphones (196 Reviewers)

  • Headphones had the largest dataset, yet only 17 reviewers (8.67%) achieved Trusted status, while 37 reviewers (18.88%) were Fake Reviewers.
  • Just over half—101 reviewers (51.53%)—fell into the Not Trusted classification, reflecting significant credibility issues in this market.

Projectors (93 Reviewers)

  • Projectors had the highest proportion of Fake Reviewers, with 62 reviewers (66.67%) classified as such.
  • Only 3 reviewers (3.23%) earned Trusted status, making it the most challenging category for consumers to find reliable reviews.
  • With just 1 Highly Trusted reviewer, it’s clear that credible projector reviews are extremely rare.

Soundbars (114 Reviewers)

  • Soundbars had a similarly dismal landscape, with 74 reviewers (64.91%) flagged as Fake Reviewers.
  • Trusted reviewers were limited to 4 (3.51%), and only 1 Highly Trusted publication emerged.
  • A staggering 31 reviewers (27.19%) were Not Trusted, showing a significant gap in quality and transparency.

Speakers (177 Reviewers)

  • Speakers followed the same troubling pattern, with 94 reviewers (53.11%) classified as Fake Reviewers and 63 reviewers (35.59%) as Not Trusted.
  • Trusted reviewers were few and far between, with just 3 reviewers (1.69%) achieving this classification, alongside 2 Highly Trusted reviewers.

TVs (189 Reviewers)

  • The television category stands out for having the highest number of Not Trusted reviewers: 87 (46.03%).
  • Fake Reviewers accounted for 69 reviewers (36.51%), while 15 (7.94%) were Trusted, and 3 (1.59%) earned Highly Trusted status.
  • TVs show slightly better performance in the Trusted classification compared to projectors and soundbars but still lag far behind consumer expectations for reliability.

The classification data makes it clear: finding reliable reviews in the audio and visual category is an uphill battle. With Fake Reviewers dominating every product type and Trusted publications making up only a small fraction, consumers are left sifting through questionable sources to make informed decisions.
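
The classification percentages quoted above are simple proportions of reviewer counts per category. As a minimal sketch, here is how the projector figures reported earlier work out (the 27 reviewers whose classifications aren’t broken out for projectors are grouped under a placeholder “Other” label):

    from collections import Counter

    # Projector reviewer classifications; the counts of 62, 3, and 1 come from this report,
    # and "Other" stands in for the classifications not broken out for this category
    labels = (
        ["Fake Reviewer"] * 62
        + ["Trusted"] * 3
        + ["Highly Trusted"] * 1
        + ["Other"] * 27
    )

    counts = Counter(labels)
    total = len(labels)  # 93 reviewers

    for label, count in counts.most_common():
        print(f"{label}: {count} reviewers ({count / total:.2%})")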

To further explore these trends, we conducted a statistical analysis of the Trust Ratings for this category.

Statistic             Value
Mean                  34.24%
Median                32.95%
Range                 0.00% – 101.40%
Standard Deviation    21.12%

Mean Trust Rating: The audio and visual category averaged 34.24%, the lowest among all categories analyzed so far. For comparison:

  • Small Appliances: 38.39%
  • Home Office: 40.56%
  • Gaming: 44.99%

This highlights how audio and visual products face more trust issues, even within an industry already plagued by reliability problems.

Median Trust Rating: The median of 32.95% suggests that over half of the reviewers fell well below even moderate levels of trustworthiness, further reinforcing the category’s challenges. This is lower than:

  • Small Appliances (35.00%)
  • Home Office (39.20%)
  • Gaming (46.85%)

Range: With Trust Ratings spanning from 0.00% to 101.40%, the audio and visual category shows extreme variability. While it has standout reviewers achieving highly trusted status, these are overshadowed by a large volume of publications at the bottom end of the spectrum.

Standard Deviation: A standard deviation of 21.12% indicates significant inconsistency in reviewer reliability. Although smaller than small appliances (24.45%) and home office (22.44%), it is slightly higher than gaming (19.90%), reflecting a category with widespread disparities in trustworthiness.


8.5. Innovation Meets Deception: All the Fake Emerging Tech Reviews

The “Emerging Tech” category covers drones, e-scooters, 3D printers, and e-bikes. As the name suggests, the products in this group are “emerging” in nature. Many are new technologies still seeing major refinements, like 3D printers, while others are becoming more accessible and affordable to the average consumer (3D printers again fall under this umbrella, as do drones, e-bikes, and e-scooters).

These categories are also unique in that they have very diverse testing methods, many of which are still being pioneered. Drones and 3D printers, for example, are still being iterated on and see dramatic improvements with each “generation” of the product, be it flight time and stability for drones or resolution and print complexity for 3D printers.

  1. Products considered “emerging” tech have the highest share of trustworthy reviewers of any group, but even that comes to only about one in three. Almost 59% of the reviewers covering products like e-bikes and drones aren’t trustworthy or show signs of fake testing, and 15% of the tech reviewers in the dataset are classified as Fake Reviewers.
  2. You’re most likely to find good reviewers and good data when reading reviews about electric bikes: 36% of the reviewers in our dataset earned “Mixed Trust” or higher.

Now that we’ve introduced the unique challenges and opportunities within the Emerging Tech category, let’s take a closer look at the data behind the reviews. Our analysis spans 359 sets of reviews and buying guides from 169 individual publications. Each publication may cover one to four of the categories within this product group.

Below is an embedded Airtable showcasing all our findings across these four categories.

Drones (100 Reviewers)

  • Nineteen reviewers, or 19%, were classified as Fake Reviewers. This is the largest proportion of Fake Reviewers across all emerging tech categories.
  • Forty-four reviewers, or 44%, were Not Trusted, showing a significant trust issue within this category.
  • Trusted reviewers made up 15% of the dataset, with only one Highly Trusted reviewer demonstrating credible and thorough testing practices.

3D Printers (70 Reviewers)

  • Eight reviewers, or 11.4%, were Fake Reviewers, the smallest proportion in the emerging tech group.
  • Nearly half the reviewers, or 45.7%, were classified as Not Trusted, reflecting the challenge of consistent quality in this niche.
  • Nine Trusted reviewers and one Highly Trusted reviewer stand out in a category where specialized testing methods are critical.

Electric Bikes (97 Reviewers)

  • Seventeen reviewers, or 17.5%, were Fake Reviewers, showing ongoing issues with fake testing in this category.
  • Thirty-seven reviewers, or 38.1%, were Not Trusted, though this is the smallest Not Trusted proportion in the group.
  • Sixteen Trusted reviewers and one Highly Trusted reviewer highlight that reliable reviews are available for consumers seeking accurate insights.

Electric Scooters (92 Reviewers)

  • Eleven reviewers, or 12%, were Fake Reviewers, a relatively low proportion compared to drones and electric bikes.
  • Forty-three reviewers, or 46.7%, were classified as Not Trusted, making this category the second highest in terms of untrustworthy reviews.
  • Thirteen Trusted reviewers and one Highly Trusted reviewer offer credible insights, though their presence is limited.

With the emerging tech categories revealing distinct trust challenges and varying proportions of credible reviewers, it’s crucial to assess the overall trust landscape statistically.

Statistic             Value
Mean                  43.15%
Median                42.60%
Range                 4.10% – 98.35%
Standard Deviation    23.88%

The statistics still point to real trustworthiness challenges. With a mean Trust Rating of 43.15% and a median of 42.60%, the group performs noticeably better than the audio & visual categories but remains far from reliably trustworthy. A range spanning from 4.10% to 98.35% emphasizes the stark contrast between the least and most trustworthy reviewers, while a standard deviation of 23.88% reveals significant variability within the group.

When compared to the other groups, emerging tech actually holds up reasonably well by mean Trust Rating, trailing only gaming (see the sketch after this list):

  • Gaming stands out as the most reliable group, with a mean Trust Rating of 44.99% and a median of 46.85%, thanks to relatively established review practices and fewer Fake Reviewers.
  • Home Office follows with a mean of 40.56% and a median of 39.20%, benefiting from mature product categories and more robust testing methods.
  • Small Appliances, with a mean of 38.39% and a median of 35.00%, trail emerging tech and still suffer from high proportions of Fake Reviewers.
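
To make the comparison concrete, here is a minimal sketch that ranks the five product groups by the mean Trust Ratings reported in this section:

    # Mean Trust Ratings by product group, as reported in this section
    group_means = {
        "Gaming": 44.99,
        "Emerging Tech": 43.15,
        "Home Office": 40.56,
        "Small Appliances": 38.39,
        "Audio & Video": 34.24,
    }

    for rank, (group, mean) in enumerate(
        sorted(group_means.items(), key=lambda kv: kv[1], reverse=True), start=1
    ):
        print(f"{rank}. {group}: {mean:.2f}%")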

The emerging tech category faces unique challenges due to the novelty and rapid evolution of its products. This creates gaps in standardized testing methods and increases the likelihood of untrustworthy reviews. While electric bikes and e-scooters show promise with higher proportions of Trusted reviewers, the group still trails gaming and shows wide variability, underscoring the difficulty of maintaining trust in newer product spaces.


9. Why Does All Of This Matter?

Because fake reviews aren’t just a nuisance—they’re actively harming readers.

For readers, it means sifting through countless reviews with no way to verify what’s real. They’re left questioning whether a product was actually tested or if the review is just another marketing ploy. Trusting a brand’s reputation alone isn’t enough when it comes to big purchases.

For corporate publishers, this is a wake-up call. They’ve turned the tech review industry into a race to the bottom. Instead of prioritizing rigorous testing, they’ve focused on churning out flashy, dishonest content in pursuit of more revenue.

But readers deserve better. They rely on these reviews to make informed decisions, and right now, the system is failing them and wasting their money.

For Google, the implications are massive. With 8.5 billion daily searches, Google has unparalleled influence over what people see. Yet, our findings show they’re delivering fake reviews straight to the top of search results. This undermines trust in their algorithms—and the internet itself.

The solution isn’t complicated. Honest, transparent, and thorough testing needs to be the standard, not the exception. That’s where independent publishers can shine. They’ve proven it’s possible to be trustworthy, even in a crowded and competitive field.

And if testing isn’t possible, we ask that publications label their reviews “researched” or “hands-on” rather than “tested.” We made that same change to our own reviews over a year ago.

We’re here to hold tech journalism accountable. We want to call in the “Not Trusted” and “Fake Testers” and push for real change. We aim to restore trust in an industry that millions depend on every day. Because accurate and honest reviews aren’t a luxury—they’re essential.

It’s time to fix tech journalism. For the readers, for the brands, and for the future.
