A Short Quant Analysis of Video Game Ratings
A kind Redditor made a database of all video game reviews from IGN, a gaming website. Obviously I couldn’t let this go to waste, so I popped it into R to see what could be done. A histogram of game scores reveals two obvious biases:
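For the curious, the binning step can be sketched in a few lines of Python (I did the real thing in R, and the column name is an assumption about the dump – this just summarizes a vector of 0–10 scores at tenth-of-a-point granularity):

```python
import numpy as np

def score_summary(scores, bins=100):
    """Summarize a vector of 0-10 review scores: mean, median, and
    histogram counts at tenth-of-a-point granularity (100 bins)."""
    scores = np.asarray(scores, dtype=float)
    counts, edges = np.histogram(scores, bins=bins, range=(0.0, 10.0))
    return {
        "mean": scores.mean(),
        "median": float(np.median(scores)),
        "counts": counts,   # one count per bin; feed these to any plotting library
        "edges": edges,     # bin boundaries, 0.0 to 10.0
    }
```

Feeding the real score column through this (or R's `hist`) is what produces the picture described below.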
First, there seems to be a substantial upward bias in assessment. As a customer, I’d like a nice normal distribution centered around five; this one is centered around 6.9 and shifted well to the right, which seems to validate the intuition that reviewers are often overly generous with games. That said, this could reflect the data-generating process: reviewers do not randomly select the games they review, and may be non-randomly sampling from an underlying normal distribution. Second, reviewers seem to operate by flawed heuristics – the obvious spikes are at whole numbers and at half-points, suggesting that reviewers are working within a constrained space. We could probably eliminate ratings more finely-grained than a half-point, since the clustering suggests the extra precision isn’t tremendously informative.
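One quick way to put a number on that clustering (again a sketch – the 0–10, tenth-of-a-point scale is an assumption about the data) is to measure what share of scores land exactly on the half-point grid:

```python
import numpy as np

def share_on_coarse_grid(scores, step=0.5, tol=1e-9):
    """Fraction of scores landing exactly on a coarse grid (default:
    whole and half points).

    Only 2 of every 10 tenth-of-a-point values (x.0 and x.5) sit on the
    half-point grid, so if reviewers really used the full scale this
    share would be near 0.2; a share far above that is the round-number
    clustering visible as spikes in the histogram."""
    scores = np.asarray(scores, dtype=float)
    remainder = np.abs(scores / step - np.round(scores / step))
    return float(np.mean(remainder < tol))
```

If the real data came back with, say, most scores on the half-point grid, that would back up collapsing the scale to half-points.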
As a gamer, as opposed to a social science measurement guru, I’m also curious about something else – are some platforms systematically better than others? The answer seems to be, “not really – but some platforms are worse”. See the graph below for the effects – the line represents the mean score, and the distance from it reflects whether games on that platform are better than average (right of the line) or worse (left of the line). Most are worse! Few platforms have positive and statistically significant effects on predicted score – interestingly, Macintosh is one of them. Either Macs are a better gaming platform than they’re given credit for, or only good PC games get ported to Mac. On the other side, a number of categories produce truly awful games. DVD interactive “games” are rated horribly, which shouldn’t be surprising – but Wii, Game Boy, and Nintendo DS games are rated poorly too, and that is surprising: the coefficients on those are large, negative, and statistically significant. I wonder if that’s because their games are targeted towards kids.
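The regression behind that chart is just score on platform dummies – something like the sketch below (in Python rather than my R; the column names are hypothetical). With no other controls, each coefficient is simply that platform’s mean score relative to the omitted baseline platform, which is why the chart reads as distance from a line:

```python
import numpy as np
import pandas as pd

def platform_effects(df, score_col="score", platform_col="platform"):
    """OLS of score on platform dummies.

    Returns the intercept (mean score of the alphabetically-first,
    omitted platform) plus one coefficient per remaining platform:
    that platform's mean score minus the baseline's."""
    dummies = pd.get_dummies(df[platform_col], drop_first=True, dtype=float)
    X = np.column_stack([np.ones(len(df)), dummies.to_numpy()])
    beta, *_ = np.linalg.lstsq(X, df[score_col].to_numpy(dtype=float), rcond=None)
    return pd.Series(beta, index=["intercept", *dummies.columns])
```

Standard errors (for the “statistically significant” part) come from the usual OLS machinery, which a stats package like statsmodels would hand you directly.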
The data is kinda neat, but limited. It’d be nice if it were dated – with time series data I’d love to examine whether ratings are changing over time. Eyeballing the chart of which systems do well and poorly, I doubt they are. Curmudgeons love to claim that ratings are too generous now and that it was better back in the day; I suspect the data would prove them wrong. Next step might be learning how to use this hacked-together Metacritic API…