
Video Games’ Ratings And Sales

Most of us assume that the better a product is, the better it will sell. Video games are no exception: the conventional wisdom is that the better a game is, the more copies it will sell. In a recent report, which I first learned about via a Joystiq post, SIG analysts Jason Kraft and Chris Kwak test this assumption.

To summarize the report: they collect data on 275 games published for the Sony PlayStation 2 console. Running a number of basic OLS regressions, they find no statistically significant relationship between ratings and sales. For those wondering about the specific methods and sample, follow the link to the Joystiq post above to receive a copy of the report; my opinion is that the analysis passes on methodological grounds (which does count for something). It should also be noted that they do some clever testing and sample groupings, even if I am not going to discuss them here.
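For readers who want a concrete picture of the kind of test involved, here is a minimal sketch of a ratings-versus-sales OLS regression in Python. The file name and column names are hypothetical stand-ins; I do not have Kraft and Kwak's actual data or code.

```python
# Minimal sketch of a ratings-vs-sales OLS regression.
# "ps2_games.csv", "rating", and "units_sold_millions" are hypothetical
# placeholders -- this is not Kraft and Kwak's dataset or methodology verbatim.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ps2_games.csv")            # one row per game
X = sm.add_constant(df["rating"])            # the review score as the lone predictor
y = df["units_sold_millions"]                # sales in millions of units

model = sm.OLS(y, X).fit()
print(model.summary())
# A p-value well above 0.05 on the rating coefficient is what "no statistically
# significant relationship between ratings and sales" means in this context.
```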

But I do have a criticism of their analysis, and it is one I frequently raise about statistical analyses in the social sciences. I do not want to tip my hand yet, because I am going to request their dataset and run some regressions myself, but you might be able to figure it out if you have been in my classes and/or if you study this chart taken from their report (the y-axis is millions of units sold).

[Chart from the report: ratings vs. units sold (millions)]

For now, however, let’s assume that their findings are solid. What explains the insignificance of the relationship? Unfortunately, and this is particularly odd given that Kraft and Kwak are in the business of predicting sales patterns so they can make investment recommendations, the report lacks an answer.

Perhaps the answer can be found in the movie industry. I have seen research arguing that the older people become, the less they rely on press reviews of movies and the more they rely on their friends' opinions. Might the same dynamic hold for video games? I suspect so.

Kraft and Kwak’s analysis does not let us test this hypothesis, although it is consistent with the argument. That is because the data they collected covers the past five years and only games published on the PS2. As has been widely reported, video games are now dominated by players in their late twenties and beyond (the average age of players is 30), rather than teens. This is the age group that goes to the movies based on what their friends say, not what the newspaper writes. Given this sample and the age-dependent argument, we would expect a weak or non-existent relationship between video games’ ratings and sales.

This, however, does not allow for any firm conclusions. Instead, we need a larger dataset. One approach would be to break the data down by the age of each game's primary players (a sketch of what that analysis could look like appears below). A less satisfactory approach would be to collect data on Nintendo GameCube games or data from the 1990s (as opposed to the 2000s). The problem with the GameCube data is that even that player group skews old. The problem with the 1990s data is that reviews of video games were minimal then, so there was a much smaller chance of players ever seeing ratings at all.
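If per-title player-age data did exist, the age breakdown I have in mind could look something like the following. This is purely illustrative: the "primary_player_age_group" column and the input file are invented, since no such field is in the SIG dataset as far as I know.

```python
# Hypothetical sketch: re-run the ratings-vs-sales regression within age groups.
# "games_with_age.csv" and "primary_player_age_group" are invented placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("games_with_age.csv")

for group, sub in df.groupby("primary_player_age_group"):
    X = sm.add_constant(sub["rating"])
    y = sub["units_sold_millions"]
    result = sm.OLS(y, X).fit()
    print(f"{group}: coefficient={result.params['rating']:.3f}, "
          f"p-value={result.pvalues['rating']:.3f}")
# If the age-dependence story is right, the rating coefficient should be
# significant for younger groups and fade for older ones.
```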

Unfortunately, building that ideal dataset would take a lot of work and time, neither of which I can afford. Despite these limitations, Kraft and Kwak's research is interesting, fun, and well done. I will be sure to report the results of my own analysis if I am granted access to their data.