You’re at a fast food restaurant, and you get a lengthy receipt. At the bottom, it has a website. “Fill this out,” it states, “and get a chance at a monetary reward!” It’s not a large reward, but hey, why not. You’re filling out the questions, and one asks, “How was your service?” Well, it wasn’t, like, the best. Everything was as expected, but hey, it’s fast food. So that answer gets a 4 out of 5.
And at that moment, you just gave the restaurant a critical (negative) review. Strange, right? 4 out of 5 is, like, 80%! Why would that count as a negative?
You’re on Amazon, reviewing something you bought. It’s competent. You enjoyed it, but you realize it’s kind of niche and might not be for everyone. Not fantastic. But not bad! Not bad at all. So you give it 3 stars. Three stars should be average, right?
And just like that, you’ve given the seller a critical, negative review.
It’s strange, in a way, how we’ve gotten to this point. At a time when we’d expect surveys to let us share our opinions and help shape the future of the things we love and enjoy (or fix the things we hate and abhor), you’d think something as simple as a rating system would make this straightforward and easy. Instead, it creates a muddled system that actively hurts those being graded while paradoxically confusing those relying on the ratings. Is a product gushing with 5-star reviews really that good? Is a product with three- and four-star reviews really that bad?
Well, in a word, yes. According to this article on Wired, Amazon’s star ratings aren’t as straightforward as they appear. Some things under the hood skew them, like the previous ratings left by the rater. But interestingly, it also shows how four- and five-star reviews are the ones that drive sales, while anything less is read as critical. Perhaps there’s something to that: after all, it could be seen as “critical” to say an item is “only average” or “only okay.”
One of the most prominent issues with surveys is their focus on the numbers rather than on comments or concerns. In a perfect world, a survey would let the customer reach out to management, praise good service or share concerns about a negative experience, and move on. Instead, it isn’t uncommon for corporate to spend more time worrying about the numbers and maintaining a high score. According to this post by Verde Group, the traditional customer survey can actually lead a company to make poor decisions. Suddenly, a “this was good but not great” experience becomes “this was not a great experience, so it should be weighted as if it were awful.”
Of course, this affects customers negatively as well. How can a customer trust a website that actively encourages such gaming? The Star Tribune wrote this article showing how these reviews have been weaponized, politicized, and in some ways rendered useless for their original purpose. Too many 5-star reviews and it’s hard to take the reviews seriously; too many one-star reviews and a product becomes novel and worth noting.
There, I feel, lies the issue. We want reviews to be meaningful, but they are often tied to systems that weigh them differently than we would. We say “three stars – kind of average.” The system hears “not 5 stars, absolute garbage.” This is a problem if you’re trying to support a product or producer. And of course, it also means that those playing by the supposed rules of the review ecosystem end up writing reviews that confuse casual observers, who become less likely to trust reviews at all, feeling that their time and money are being wasted as a result.
I’m personally not sure of the answer here. I do find that, whenever possible, explaining the review makes a huge difference. That way, a review isn’t just the stars but a short explanation alongside them. It would be nice if there were a way to hold these different ecosystems to a standard, or at least have their standards listed and easily accessible… but that’s on their end, not the user’s.