Why We Don't Rush to Review Hardware

By Norman Chan

Google's new Nexus 7 is the first Android tablet I've used that doesn't feel like it's playing catch-up to anything. It's light enough to use comfortably with one hand, has a beautiful screen, and is fast and responsive in Chrome and other Google apps. It also has that incredibly competitive price--you can buy one on Amazon right now for $230 and have it in your hands tomorrow. But I can't make a substantive evaluation of it yet, let alone a recommendation. I've only owned one for a day. And regardless of how good I feel about the tablet after that day of heavy use and testing, that's simply an insufficient amount of time to make a judgement call about whether someone should spend their money on one. That's not how we do things around here.

This is a philosophy that bears repeating, in light of how many reviews of the new Nexus 7 you can find on the web today. Google just announced the thing last week! And to the best of my knowledge, reviewers received test units the day of the announcement. Regardless of how many processor benchmarks, photo tests, and battery rundowns you're able to cram into a day or two of use, an evaluation is much more than the sum of those tests. Just as tech journalists latched on to the "specs aren't everything" mantra in parsing product announcements, the same must apply to the review of a product. Without having lived with a phone, tablet, camera, or notebook for an extended period of time, there's no way to grasp the experiential nuances of using a piece of hardware--or to uncover any potential manufacturing defects that don't reveal themselves in the first day or week of use.

Case in point: that new Nexus 7 I've been using? The LCD screen failed after about three hours, and it now has a column of dead pixels running down the left side of the screen. This could be (and probably is) an isolated incident, but it could also be symptomatic of a widespread defect. The point isn't that all reviewers should wait until every potential problem is vetted by the community at large--it's that rushing your review out ignores the possibility that surprises, good or bad, are still waiting to surface.

It's no secret why sites want to get reviews out as soon as possible. They want those reviews to be timely. They want those reviews to be relevant. And they want their reviews to be indexed by Google and aggregation sites like TechMeme. The way web search and social sharing work now, early reviews (i.e., stories that call themselves Reviews in the headline) are rewarded with more initial clicks and links, which entrench them in higher Google rankings. It's a game that all websites play because they think they have to. This--search engine optimization--is the reality of the modern web, and site runners argue they have to play along with it in order to survive. It's also bullshit.

The immediacy of web publishing affords opportunities for increased transparency and meaningful engagement between writers and readers. Those tools are not at odds with maintaining high editorial standards. But the moment your goal becomes getting a review out as soon as possible, you are compromising the quality of that review and your editorial credibility--the two goals are mutually exclusive. Those reviewers may as well camp out in a Best Buy, use a product on the shelf for a few hours, and call it a review. The sad thing is, I don't think we're too far away from that.

Reviews serve to give purchasing advice to readers who want to know if something is worth their money. Tests, benchmarks, and the sharing of a reviewer's personal experience with a product are the means to that end--but they are not the review. A review is a conclusive statement: buy this product or don't, and why. You get one opportunity for that; everything in support of that final evaluation, rating, or thumbs up/down is the story you're telling to back that recommendation up. The concept of an "evolving review" is also bullshit--it's the testing that evolves, not the conclusion.

And here's the great thing about writing on the internet: you can tell that story with as much depth as you want, in as many posts as necessary before the review. The opportunity to be transparent with your in-progress testing is a wonderful gift. Both readers and reviewers benefit from that exchange, because frankly, reviewers can learn a lot from their readers.

That's another important point about the reality of modern tech writing: you can't review a product in a vacuum. It's another false dichotomy in the minds of reviewers--that to be definitive, you also have to be detached. Some reviewers seem to pride themselves on being uninfluenced by others' opinions, but there's a big difference between letting someone else's review influence your evaluation and ignoring the revelations of the community at large. No single reviewer has more knowledge, expertise, and access to a large sample size of products than the user community. User (and fan) forums are where products get scrutinized en masse; they're where issues like the iPhone 4's antenna problem and the GoPro Hero 3's SD card defect were discovered--things that never made that first wave of early reviews. I'm not saying that reviewers have to wait for the community to vet hardware before publishing their pieces, but rushing a review out without acknowledging that resource is a disservice to your readership.

So how long do reviewers have to test a product before they can give a credible evaluation? There is no one answer to that. It depends on a whole bunch of factors--which, again, is why transparency and communication with readers are so important during the review process. For our process, a month is about right for big purchases, which gives us enough time to share testing experiences, get feedback from readers, and conduct research in user communities.

There are many people who I think are striking a great balance between timeliness and thoroughness. AnandTech is my favorite hardware site on the web because their reviews are so in-depth and never rushed. But even they are playing the Google game by calling their testing-impression posts "mini reviews". In the end, much of my disagreement with the modern review process boils down to semantics--what I believe the word "review" should mean. The loosening of that definition, whether reviewers want to admit it or not, takes away from the gravity of that word. And for sites whose business is making product recommendations, that's a very dangerous slippery slope.