Two doctoral candidates in France attempt to quantify the biases of U.S. film critics in an academic paper, which left “New York Post” film critic Kyle Smith amused.
Toulouse School of Economics Ph.D. candidates Fanny Camara and Nicolas Dupuis dissect the tendencies of movie critics in their paper “Structural Estimation of Expert Strategic Bias: The Case of Movie Reviewers.” The 55-page analysis uses sophisticated mathematics, including logarithms, to model film critics’ opinions after allegedly discovering biases.
Writes film critic Smith: “They assess the professional reputations of 35 veteran film critics (including me), whether we thought the movie was going to be good before we saw it, whether our opinions were ‘correct’ (i.e., in accord with the majority) and whether we were telling the truth about what we actually thought. The model meant to predict our responses didn’t work on me (I’m unpredictable), so they concluded I’m a liar. Among 35 critics surveyed, I came out as the leader in ‘misreporting’ my actual views.”
Let me say that an obvious problem with applying statistical analysis here is that all the arts are subjective. Movies are difficult to peg because of differences in personal taste, bias, cultural outlook and prejudice. Reasonable people can completely disagree about the same movie (song, TV show, stage play, video game, etc.).
The professional film critic space is also in turmoil: media outlets such as newspapers are dismissing their staff critics to cut costs, as consumers posting opinions online have devalued the professionals. Further, audiences for children’s, youth and horror movies are not particularly influenced by professional film critics; audiences for high-brow dramas are. The bottom line: I think this is an area that can’t be reduced to universally accepted statistical analysis.
Says “Expert Strategic Bias”: “The career of these reviewers is built on their reputation for accuracy.” I disagree. Accurate to what baseline? The reputations of film critics rely on their insights and on whether consumers find their reviews helpful in picking movies they will or won’t like.
“Expert Strategic Bias” cites mass-market website Rotten Tomatoes and Google Trends as its benchmarks—which I don’t consider pillars of truth or absolute wisdom. The paper also repeatedly references “reputational cheap-talk” without explaining it; judging from other sources, it is apparently a phrase popular with European academics denoting a tendency to misreport one’s true views.
Writes film critic Smith: “I confess I don’t understand the mathematical equations in the paper, which to me look like a box of spilled toothpicks, but I apply the cold calculus of GIGO: Garbage In, Garbage Out.”