The bad news: Human beings are lousy at identifying deceptive reviews.
The good news: Cornell researchers are developing computer software that’s pretty good at it. In 800 Chicago hotel reviews, their software was able to pick out 90 percent of deceptive reviews. In the process, the researchers uncovered some key features to help determine if a review was spam, and even evidence of a correspondence between the linguistic structure of deceptive reviews and fiction writing.
The work was recently presented by Myle Ott, Cornell doctoral candidate in computer science, at the annual meeting of the Association for Computational Linguistics in Portland, Ore. The other researchers include Claire Cardie, Cornell professor of computer science, Jeff Hancock, Cornell associate professor of communication and information science, and Yejin Choi, a recent Cornell computer science doctoral graduate.
“While this is the first study of its kind, and there's a lot more to be done, I think our approach will eventually help review sites identify and eliminate these fraudulent reviews,” Ott said.
The researchers created what they believe to be the first benchmark collection of opinion spam by asking 400 people to deliberately write fake positive reviews of 20 Chicago hotels. These were compared with an equal number of randomly chosen truthful reviews.
As a baseline, the researchers submitted a subset of reviews to three human judges – volunteer Cornell undergraduates – who scored no better than chance in identifying deception. The judges also showed little agreement on which reviews were deceptive, a further sign that they were essentially guessing. Historically, Ott notes, humans suffer from a “truth bias,” assuming that what they are reading is true until they find evidence to the contrary. When people are trained to detect deception, they become overly skeptical and report deception too often, yet generally still score at chance levels.
The researchers then applied statistical machine learning algorithms to uncover the subtle cues to deception. Deceptive hotel reviews, for example, are more likely to contain language that sets the scene, like “vacation,” “business” or “my husband.” Truth-tellers use more concrete words relating to the hotel, like “bathroom,” “check-in” and “price.” Truth-tellers and deceivers also differ in their use of certain keywords, punctuation, and even how much they talk about themselves. In agreement with previous studies of imaginative vs. informative writing, deceivers also use more verbs and truth-tellers use more nouns.
To evaluate their approach, the researchers trained their algorithms on a subset of the true and false reviews, then tested them on the rest. The best results, they found, came from combining keyword analysis with the ways words are used in combination. This combined approach identified deceptive reviews with 89.8 percent accuracy.
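The combined approach described above can be illustrated with a toy classifier that scores both single keywords and word pairs. This is only a minimal sketch of the general idea, not the researchers' actual system: the model here is a simple Naive Bayes classifier over unigram and bigram features, and all of the sample reviews are invented for illustration.

```python
# Toy sketch: classify reviews as deceptive or truthful using
# unigram ("keyword") plus bigram ("words in combination") features.
# Training examples below are invented, not from the Cornell dataset.
import math
from collections import Counter

def features(text):
    """Extract unigram and bigram features from a review."""
    words = text.lower().split()
    bigrams = [" ".join(pair) for pair in zip(words, words[1:])]
    return words + bigrams

class NaiveBayes:
    def __init__(self):
        self.counts = {}        # label -> Counter of feature counts
        self.docs = Counter()   # label -> number of training docs
        self.vocab = set()

    def train(self, labeled_docs):
        for text, label in labeled_docs:
            self.docs[label] += 1
            c = self.counts.setdefault(label, Counter())
            for f in features(text):
                c[f] += 1
                self.vocab.add(f)
        self.totals = {lab: sum(c.values()) for lab, c in self.counts.items()}

    def predict(self, text):
        n_docs = sum(self.docs.values())
        v = len(self.vocab)
        best_label, best_lp = None, -math.inf
        for label in self.counts:
            # log prior + Laplace-smoothed log likelihoods
            lp = math.log(self.docs[label] / n_docs)
            for f in features(text):
                lp += math.log((self.counts[label][f] + 1)
                               / (self.totals[label] + v))
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

train_data = [
    ("my husband and i had a wonderful vacation at this hotel", "deceptive"),
    ("perfect for our business trip my husband loved the luxury", "deceptive"),
    ("the bathroom was clean and check-in was fast for the price", "truthful"),
    ("good price small bathroom but check-in took only a minute", "truthful"),
]

clf = NaiveBayes()
clf.train(train_data)
print(clf.predict("a dream vacation for my husband and me"))  # → deceptive
```

On this tiny example, the scene-setting vocabulary ("vacation," "my husband") pushes the review toward the deceptive class, mirroring the cues the article describes; the real system was trained on hundreds of reviews with far richer feature sets.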
Ott cautions that the work so far is only validated for hotel reviews, and for that matter, only reviews of hotels in Chicago. The next step, he said, is to see if the techniques can be extended to other categories, starting perhaps with restaurants and eventually moving to consumer products. He also wants to look at negative reviews.
“Ultimately, cutting down on deception helps everyone,” Ott says. “Customers need to be able to trust the reviews they read, and sellers need feedback on how best to improve their services.”
Review sites might use this kind of software as a “first-round filter,” Ott suggested. If one particular hotel gets a lot of reviews that score as deceptive, the site will know to investigate further.
Blaine Friedlander | Newswise Science News