How companies can do the right thing with online reviews
TripAdvisor and Amazon have had to learn how to manage online reviews in ways that other companies can learn from
People have been leaving reviews for a very long time, as the following user-generated content about bars in a city near Naples demonstrates. “What a lot of tricks you use to deceive, innkeeper. You sell water but drink unmixed wine,” wrote one. “You can get a drink here for only one coin. You can drink better wine for two coins. You can drink Falernian for four coins,” wrote another, referring to a local wine.
These comments, translated by Kent State University professor Brian Harvey, are among graffiti found in the Roman city of Pompeii, preserved by the volcanic ash and pumice that enveloped it in AD79. Nearly two millennia later, reviewing has moved from scratching on stones to tapping on phones, making the process much easier and the results far more accessible.
Online reviews are used by many consumers to decide where to stay, drink or eat and what to buy, making them vital for hospitality and accommodation providers as well as retailers. Research carried out by market researcher GfK in 2015 with 3,729 Britons for the Competition and Markets Authority (CMA) found that 54% of UK adults read online reviews, and that these reviews influence about £23bn of annual UK consumer spending.
They are most used for choosing travel and hotels, with 24% of people making a purchase after reading an online review, followed by electronic items at 18%. The British Hospitality Association told the CMA that three-quarters of its members found user reviews to be very or quite useful in marketing and promotion.
But online reviews are open to abuse. While there is no guarantee that the Roman writers were being truthful, it is obvious they were at least present at the Pompeian bars. Depending on the rules of the online platforms used, a “reviewer” may never have visited the place or item in question. Some may have a reason to promote the business, such as a personal connection or a straightforward bribe, while negative ones may be written by competitors.
The question is whether those running online review sites can minimise abuse without discouraging the public from providing free content.
In September, UK consumer group Which? published an analysis of 247,277 reviews on travel platform TripAdvisor. Focusing on the top 10 ranked hotels in each of 10 tourist destinations, Which? said that one in seven of these hotels had what it called “blatant hallmarks of fake reviews”.
The research found that 79% of the five-star reviews for the “best” hotel in Cairo came from accounts that had no other reviews on the site, compared with 14% of those leaving a neutral three-star review. After Which? passed on its findings, TripAdvisor deleted some reviews and the hotel lost its “best hotel” status.
The consumer group found similar issues with top-rated hotels in Jordan and Las Vegas, as well as with two Travelodges in London, over which the budget hotel chain admitted “a breakdown in our internal communication”.
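Which? has not published its methodology in full, but the core signal it describes – five-star reviews coming from single-review accounts far more often than neutral reviews do – is simple to express. The following Python sketch is illustrative only; the data layout and the flagging threshold are assumptions, not Which?'s actual analysis.

```python
# Illustrative sketch of the single-review-account heuristic Which? describes.
# The record fields and the 3x ratio threshold are assumptions.

def single_review_share(reviews, stars):
    """Share of reviews with the given star rating whose author
    has no other reviews on the site."""
    subset = [r for r in reviews if r["stars"] == stars]
    if not subset:
        return 0.0
    lone = [r for r in subset if r["author_review_count"] == 1]
    return len(lone) / len(subset)

def looks_suspicious(reviews, ratio_threshold=3.0):
    """Flag a hotel when five-star reviews come from single-review
    accounts far more often than neutral three-star reviews do."""
    five = single_review_share(reviews, 5)   # e.g. 0.79 for the Cairo hotel
    three = single_review_share(reviews, 3)  # e.g. 0.14
    return three > 0 and five / three >= ratio_threshold
```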
Fraudulent reviews a significant problem
TripAdvisor said the Which? research was based on “a flawed understanding of fake review patterns and is reliant on too many assumptions, and too little data”. However, in a transparency report it published later the same month, it confirmed that fraudulent reviews were a significant problem, reckoning that 2.1% of the 66 million reviews submitted to the site in 2018 were written by someone unfairly trying to manipulate a business’s rating.
The company believes that 91% of fake reviews were biased positive ones, posted by a business owner, staff member or relative, with a further 6% biased negative ones from rivals or someone attempting blackmail. The final 3% were positive reviews from people and companies paid to do this – something that can be detected by looking for multiple reviewer accounts using the same device, among other measures.
TripAdvisor says it saw a spike in such fake reviews of restaurants in the 11 Russian host cities for the 2018 FIFA World Cup, gathering evidence of 18 paid review companies working in that country. It takes legal action against such companies, and globally has stopped more than 60 since 2015.
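TripAdvisor does not disclose the detail of its device checks, but the principle mentioned above – several reviewer accounts, often run by paid review companies, sharing one device – can be sketched as a simple clustering check. The opaque device identifier and the cluster threshold below are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical sketch: flag clusters of accounts sharing one device.
# Real systems fingerprint devices from many signals; here we assume
# each review record already carries an opaque device_id.

def accounts_per_device(reviews):
    devices = defaultdict(set)
    for r in reviews:
        devices[r["device_id"]].add(r["account_id"])
    return devices

def suspicious_clusters(reviews, max_accounts=3):
    """Return device IDs used by more accounts than a plausible
    household would explain (the threshold is an assumption)."""
    return {dev: accounts
            for dev, accounts in accounts_per_device(reviews).items()
            if len(accounts) > max_accounts}
```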
TripAdvisor looks for fraudulent reviews through a two-stage process. Its automated screening system last year rejected 2.1% of the reviews submitted and sent a further 4% for checking by staff, who rejected 62% of this group. In total, TripAdvisor rejected or removed 4.7% of reviews submitted in 2018, including ones with unintentional problems, such as referring to the wrong hotel.
“Our review analysis system has evolved a lot over the years,” says Becky Foley, the company’s senior director of trust and safety. “The technology we use is able to analyse hundreds of data points associated with each review prior to a review going live on the site.”
This uses machine learning based on hundreds of millions of previous reviews, says Foley. “A lot of the techniques we use – including fraud modelling, network forensics and behavioural analysis – are adapted from the techniques other industries, such as banking, use to catch fraud.”
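TripAdvisor has not published its model, but the two-stage flow it describes – automatic rejection at one extreme, human checking in the middle, publication at the other – maps onto a basic score-and-threshold triage. Everything in this sketch, including the thresholds and the idea of a single fraud score, is an illustrative assumption.

```python
from enum import Enum

class Verdict(Enum):
    PUBLISH = "publish"
    HUMAN_REVIEW = "human_review"
    REJECT = "reject"

# Hypothetical triage mirroring the two-stage process described above:
# a fraud model scores each review, the worst are rejected outright,
# borderline cases go to human agents, and the rest are published.
REJECT_ABOVE = 0.95    # assumed threshold
ESCALATE_ABOVE = 0.60  # assumed threshold

def triage(fraud_score: float) -> Verdict:
    """fraud_score is assumed to come from a model trained on
    the many data points associated with each review."""
    if fraud_score >= REJECT_ABOVE:
        return Verdict.REJECT
    if fraud_score >= ESCALATE_ABOVE:
        return Verdict.HUMAN_REVIEW
    return Verdict.PUBLISH
```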
The company sees a continuing need for people in screening. “Our aim is to ensure our human agents are looking at the most problematic submissions only,” says Foley. “If they are reviewing content that is ultimately OK to be posted to the site or could be easily identified as fake, then that amounts to time wasted, in my view.”
She adds that, at present, technology is good at spotting clues in data. “But when it comes to linguistic analysis, while it can master the basics, currently our technology can’t identify certain contextual nuances, such as sarcasm, in the way that a human agent could.”
Other companies that use online reviews employ similar techniques. Amazon also uses both automation and human investigation, looking at reviewer identity, seller and product history and making use of artificial intelligence in doing so. “Our objective is to catch and remove abusive reviews before a customer ever sees them and in the last month, over 99% of the reviews read by customers were authentic,” says an Amazon spokesperson.
The same is true for Texas-based Bazaarvoice, which collects customer feedback, including online reviews, for other companies. “Language patterns and the use of certain words are monitored to determine authenticity, alongside the data associated with each submission, such as velocity, geographic analysis and consumer characteristics,” says Chris Neenan, the company’s European vice-president.
He adds that the ideal is “a human moderation system, backed by fraud detection technology”.
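Neenan does not say how Bazaarvoice implements its velocity analysis, but a velocity signal usually means counting submissions from one source within a sliding time window. A minimal sketch, with the window length and limit assumed for illustration:

```python
from collections import deque
import time

# Minimal sliding-window velocity check: too many submissions from
# one source (account, IP address) in a short window is a fraud signal.
# The window length and limit are illustrative assumptions.

class VelocityCheck:
    def __init__(self, window_seconds=3600, max_submissions=5):
        self.window = window_seconds
        self.limit = max_submissions
        self.timestamps = {}  # source id -> deque of submission times

    def is_suspicious(self, source_id, now=None):
        now = now if now is not None else time.time()
        q = self.timestamps.setdefault(source_id, deque())
        q.append(now)
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit
```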
But external monitors of review sites believe these measures are failing to stop many dodgy reviews. Fakespot, a New York-based company that monitors and grades reviews on a range of sites for the public and clients, reckons that 30-35% of TripAdvisor reviews are not reliable, with the proportion rising during heavy booking times.
Chief executive Saoud Khalifah says rival Yelp does a better job, with about 20% of its reviews judged not reliable. “Yelp is more focused on reviews,” he says, pointing out that TripAdvisor is increasingly taking reservations. “You can see there’s a lack of monitoring of reviews as the focus has shifted,” he adds.
Fakespot generates a grade from A to F for the reviews of a product or service, based on a number of factors. If a reviewer has appeared before on any of the sites it monitors, all those reviews are considered as part of the process, along with natural language processing of review wording and other data. Khalifah says fake reviews written by humans are harder to detect than those generated by software, but machine learning helps.
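Fakespot's model is proprietary, but combining several per-product signals into a letter grade can be sketched as a weighted score mapped onto A-to-F bands. The signals, weights and band boundaries below are assumptions for illustration, not Fakespot's own.

```python
# Hypothetical A-to-F grader: combine per-product signals (each scaled
# to 0..1, higher meaning more trustworthy) into a weighted score,
# then map it onto letter bands. Weights and bands are assumptions.

WEIGHTS = {
    "reviewer_history": 0.4,     # reviewers seen behaving normally elsewhere
    "language_quality": 0.3,     # NLP score of review wording
    "rating_distribution": 0.3,  # natural vs suspiciously skewed ratings
}

BANDS = [(0.9, "A"), (0.75, "B"), (0.6, "C"), (0.4, "D"), (0.0, "F")]

def grade(signals: dict) -> str:
    score = sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)
    for floor, letter in BANDS:
        if score >= floor:
            return letter
    return "F"
```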
Why fake reviews happen
He says hotels and other businesses represented on review sites are generally focused on getting to the top of common searches, which usually means those with the highest ratings. For the sites, highlighting what looks like a highly recommended option is more likely to generate a sale or a commission, he says. “That’s why fake reviews happen.”
One option is to limit the opportunities for, and the nature of, customer responses. London-based feedback collector TruRating surveys its clients’ customers as part of the buying process through payment keypads, “purchase complete” webpages or follow-up emails.
Online, TruRating asks a single numerical question of each customer and claims completion rates of 50%, with an optional text box used by 10%. Although average results to questions can be published, textual responses are available only to clients, so there is little incentive to provide fake feedback.
TruRating founder and chief executive Georgina Nelson, who worked as a lawyer for Which?, says online reviews tend to be written by about 1% of customers. Most want to guide other people or help businesses improve, she says, adding: “I think it’s done with goodwill. Unfortunately, it’s just been ruined and I don’t think it can be trusted any more.”
Tommy Noonan, who ran an online review site for a decade, thinks there are other ways to make them work. He now runs ReviewMeta.com, which provides a free public check of online reviews on Amazon using 15 different tests, including tracking reviews that the site deletes.
Based on his research, Noonan estimates that 7-9% of Amazon reviews are not authentic, compared with the company’s claim of less than 1%. Fakespot puts the figure higher, at about 30% overall, with books at around 10% but Bluetooth headphones at 50-60%.
Electronics and accessories are particularly vulnerable to review hijacking: Amazon allows reviews of similar products to be pooled, and sellers exploit this by “hijacking” reviews from completely different products for their own listings.
Noonan agrees that feedback should only come from verified purchasers, with his research suggesting that reviews from unverified buyers are more than twice as likely to be deleted by Amazon. But he thinks Amazon should also have established users check the first few submissions from a new reviewer before they are posted.
He says it should also show when an item was purchased and received and when the review was posted, as an instant reaction has less credibility, along with how long someone has been a reviewer and their average awarded score. He also advocates Amazon employing in-house product testers to provide a trusted alternative view.
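Noonan's suggestions amount to publishing a handful of credibility signals alongside each review. One hypothetical way to compute them, with all field names and the one-day “instant reaction” cut-off assumed for illustration:

```python
from datetime import timedelta

# Hypothetical credibility signals along the lines Noonan suggests.
# All record fields and the one-day cut-off are assumptions.

def credibility_signals(review, reviewer):
    received = review["item_received"]  # datetime the item arrived
    posted = review["posted"]           # datetime the review went live
    ratings = reviewer["ratings"]       # all scores this reviewer has given
    return {
        "verified_purchase": review["verified_purchase"],
        # An instant reaction carries less weight than considered use.
        "instant_reaction": posted - received < timedelta(days=1),
        "reviewer_tenure_days": (posted - reviewer["joined"]).days,
        "reviewer_average_score": sum(ratings) / len(ratings),
    }
```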
Despite the difficulties in making them work, Noonan believes well-managed online reviews have value to customers and companies running them. “Nobody wants to look up a product and see zero reviews,” he says. “There’s a balance between getting as many reviews as you can – and making sure you know who each and every reviewer is.”