Study Finds That Video Game Loot Boxes Share Link With Problem Gambling
Are Loot Boxes Gambling?

A recent peer-reviewed survey study from David Zendle, published in PLOS ONE on November 21st, 2018 under the title “Video game loot boxes are linked to problem gambling”, delved into the tangled, gritty web of loot boxes and their potential tie-in with problem gambling. As some of you may know, unsanctioned or unlicensed gambling is illegal in most developed countries.

The results of the survey were interesting, to say the least. Drawing on responses from 7,422 gamers, it found a link between purchasing loot boxes and problem gambling. The gist of Zendle’s findings reads…

“This research provides empirical evidence of a relationship between loot box use and problem gambling. The relationship seen here was neither small, nor trivial. It was stronger than previously observed relationships between problem gambling and factors like alcohol abuse, drug use, and depression. Indeed, sub-group analyses revealed that an individual’s classification as either a non problem gambler or a problem gambler accounted for 37.7% of the variance in how much they spent on loot boxes.”
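For those wondering what “accounted for 37.7% of the variance” actually means in practice, here’s a quick, purely illustrative sketch using made-up numbers (this is not the study’s data or code): regress loot box spending on a binary problem-gambler classification and look at the R² value.

```python
# A minimal, synthetic illustration of "variance explained": the numbers
# below are invented for demonstration and are NOT the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
is_problem_gambler = rng.integers(0, 2, n)                    # 0 = non-problem, 1 = problem
spend = 10 + 40 * is_problem_gambler + rng.normal(0, 25, n)   # hypothetical loot box spend

# For a single binary predictor, R^2 is just the squared point-biserial correlation.
r = np.corrcoef(is_problem_gambler, spend)[0, 1]
print(f"Variance in spending explained by classification: {r**2:.1%}")
```

The closer that percentage gets to 100%, the more of the spread in loot box spending is tied to whether someone is classified as a problem gambler.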

Zendle counters the Entertainment Software Association – which has claimed that loot boxes are not gambling and that there is insufficient evidence to move forward with any regulatory action – by stating that the empirical evidence gathered in this particular study points to potential dangers in the relationship between loot box spending and problem gambling.

While the study might seem like a slam-dunk win against those aiming to proliferate loot boxes throughout the gaming industry, it comes with quite a few caveats.


First, Zendle concedes that participants who knew going in that the survey would be about loot boxes may have been biased, answering for or against certain questions with that in mind.

The report also acknowledges that the gamers recruited for the survey from Reddit may not have been representative of the core gaming demographic, but rather a community with a particular interest in the discussion, study, and analysis of loot boxes.

Then there’s the question of causation, with the report being up front that loot boxes could simply be luring in existing problem gamblers, as opposed to loot boxes encouraging people to become problem gamblers.

And finally, there was an issue with the way the survey was conducted – one not mentioned in the report itself, but pointed out by Kotaku In Action user Ask Me Who. They argued that the PGSI (Problem Gambling Severity Index) method of measuring problem gambling could very well label some people as problem gamblers even if they don’t actually gamble much, and vice versa. They write…

“In this test where you place on the gambling scale has no real relevance to how much of a gambling problem but does link to what game you play, and if you have a gambling problem (such as the whales) you’re disproportionately more likely to be attracted to games with those features.”
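For readers unfamiliar with the questionnaire being criticized here, the PGSI is a nine-item self-report scale where each answer is scored from 0 to 3 and the totals are bucketed into risk bands. The sketch below shows roughly how that tallying works – the cut-offs follow the commonly published PGSI scoring guide, and the example player is hypothetical – and it helps illustrate why someone answering with in-game currency in mind could still land in the “problem gambler” band.

```python
# A minimal sketch of how PGSI scoring is typically tallied (assumption:
# the standard nine-item scoring with 0-3 response values and the usual
# published cut-offs). Illustrative only; not the study's own code.

PGSI_RESPONSES = {"never": 0, "sometimes": 1, "most of the time": 2, "almost always": 3}

def pgsi_score(answers):
    """Sum nine item responses (each worth 0-3) into a total of 0-27."""
    assert len(answers) == 9, "The PGSI has nine items."
    return sum(PGSI_RESPONSES[a] for a in answers)

def pgsi_category(score):
    """Map a total score onto the commonly used PGSI risk bands."""
    if score == 0:
        return "non-problem gambler"
    if score <= 2:
        return "low-risk gambler"
    if score <= 7:
        return "moderate-risk gambler"
    return "problem gambler"  # 8 or higher

# Example: a player who answers "sometimes" to every item totals 9, which the
# standard cut-offs classify as a problem gambler -- even if, as the comment
# above argues, the "gambling" they had in mind was in-game currency.
score = pgsi_score(["sometimes"] * 9)
print(score, pgsi_category(score))
```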

This issue with the test raised doubts among some of the others within the Kotaku In Action community, who mostly value critical analysis and taking a skeptical, factual approach to verifying information. The community oftentimes breaks down shoddy reports from agitprop agents who attempt to push the narrative that violent games make people violent (they don’t), or push out studies that attempt to show that games featuring sexy characters make you a sexist (they don’t).

I did reach out to David Zendle to ask him about the potential problem with using the PGSI as a measure in a study of this kind, as raised by Ask Me Who. Zendle responded with a rather lengthy and comprehensive answer breaking down the relevance of the PGSI to this particular study, and how well it may hold up when bridging data between video games with loot boxes and gambling.

Zendle published his statements via Twitter on November 28th, 2018, writing…

“This is a really interesting question!

 

“Looking back at the thread, the crux is the following:

 

“This is the PGSI: http://www.ccgr.ca/en/projects/resources/ProblemGamblingSeverityIndex.pdf. It’s pretty much the standard questionnaire used to measure problem gambling.

 

“People are worried that it stops working in a gaming context

 

“Here’s an example from the OP: Someone could answer ‘Have you needed to gamble with larger amounts of money to get the same feeling of excitement?’ with a high score, despite never actually using real world money – referring instead to the OP’s “virtual funnymoney”. So, if a bunch of in-game behaviours that the OP deems fairly normal are enough to classify them as a problem gambler via the PGSI, despite them never actually spending real world cash, *does the PGSI still work in this domain*? Or is it broken for gamers?

 

“This is a really cool question. In the words of the OP, when you take players of games where “the grind and the gamble was the point of the game”, does the scale still make sense? Does it still work?

 

“Disclaimer before I begin: I can’t definitively answer this question.”

Even though Zendle mentioned that he couldn’t answer the question definitively, he did go into an explanation of how the PGSI retains validity even if it may not be 100% accurate for the topic at hand. Zendle went on to say…

“When I was about a year into my PhD, I became super obsessed with measuring things: How could we work out what was going on in people’s minds? How could we measure things like fun, addiction, etc? And how could we do this accurately?

 

“My supervisor pointed me to a really great book – Paul Kline’s “Psychometrics Primer”. It (and other related books – e.g. ‘Scale Development’ by DeVellis) set out a comprehensive set of practices for building scales.

 

“These books crucially made it clear that actually writing a questionnaire was only about 1/10th of the battle: Much more important was a process of validation: Working out if a scale actually measured the thing you thought it was measuring. In the language of psychometrics, what we have here is a validation problem: Is the PGSI always measuring what we think it’s measuring? Or does it measure something else a substantial portion of the time?

 

“There are a bunch of techniques for working out if this is the case. They’re usually statistical in nature.

 

“For instance, you’d expect two tests that predicted similar things to correlate with each other a certain amount if you ran a bunch of people through them. So, you run a bunch of people through the PGSI and some other gambling measure – maybe the (now slightly less popular) South Oaks Gambling Screen.

 

“You see if they correlate. If they do – it’s some evidence that the PGSI is measuring the right kind of thing. Another thing you can do is see if your questionnaire is predicting the right kind of thing.

 

“So run a bunch of people through the PGSI, and also get them to answer a bunch of questions about the kind of things problem gambling predicts (e.g. problems at work). If your questionnaire is predicting the right things (and NOT predicting the wrong things), there’s some more evidence that it’s valid.”
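To make Zendle’s point a little more concrete, here’s a rough, hypothetical sketch of those two checks – convergent validity (does the PGSI line up with another gambling screen such as the South Oaks Gambling Screen?) and criterion validity (does it predict things problem gambling should predict, and not things it shouldn’t?). The data below is simulated, not drawn from any real validation study, and the variable names are my own.

```python
# A rough sketch of the validity checks described above, run on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500
pgsi = rng.integers(0, 28, n)                     # PGSI totals, 0-27
sogs = 0.5 * pgsi + rng.normal(0, 3, n)           # another gambling screen (hypothetical scores)
work_problems = 0.2 * pgsi + rng.normal(0, 2, n)  # an outcome it *should* predict
shoe_size = rng.normal(42, 3, n)                  # an outcome it should *not* predict

for name, other in [("SOGS (convergent)", sogs),
                    ("work problems (criterion)", work_problems),
                    ("shoe size (discriminant)", shoe_size)]:
    r, p = stats.pearsonr(pgsi, other)
    print(f"PGSI vs {name}: r = {r:.2f}, p = {p:.3f}")
```

Strong correlations with the first two and a near-zero correlation with the third would be the pattern you’d want to see from a valid measure.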


According to Zendle, even though the PGSI’s validity is being called into question in this particular context, it has held up well in past validation studies, and this gives him some confidence that it retains its validity in this particular case…

“I think one of the reasons the PGSI is so popular is that it’s been validated a bunch. Studies like this one (http://www.ccgr.ca/en/projects/resources/CPGI-Final-Report-English.pdf) tell us that it seems to be working the way it should if it really does measure problem gambling.

 

“Crucially, however, I’m not aware of whether it has been re-validated on a specific sample of gamers. We have good evidence that it predicts the right things and correlates with the right things across people in general: But does it break down on this subgroup?

“I doubt this was a question anyone is asking. Why would they? It works everywhere else, so why would it break here?

 

“However, if OP’s experience is common across gamers, there might be something here. In other words, if the presence of in-game mechanics that relate to gambling causes gamers to answer PGSI questions so that their responses systematically do not predict ‘the right things’… the scale might be invalid in these contexts.

 

“The only way to find this out is to run a validation study:

 

“For instance, you could:
1. take a bunch of gamers
2. give them the PGSI
3. ask them some questions about the kinds of things it should predict
4. see if they correlate

 

“This is how psychometrics should operate – continually testing our measurement procedures, and making sure they work. And if they don’t work: Coming up with something better.

 

“As it stands, it’s not clear how big the threat to validity is here. OPs experience may indicate that the PGSI systematically misestimates problem gambling amongst gamers:

 

“But then again, the game OP plays may be an exception, and this misestimation may only occur…amongst a tiny % of gamers.

 

“Alternatively, the way OP interprets the questions may be unusual, and not generalise to other people (e.g. imagine if almost everyone else thought “gambling – obviously that means the horses, lottery, etc – not in-game”). So, it’s hard to quantify how big of a threat to validity we have here. It might be nothing much – on the other hand, it might be something important.

 

“If we’re accidentally measuring the wrong thing here *we need to know ASAP*.

 

“What should happen now?

 

“Science is supposed to be self-correcting. So, when we spot a potential bug, people who care should spin some studies up to look at the validity of the PGSI in gaming populations. It seems like an interesting question.

 

“I’m not sure if I’m convinced that it’s broken in these contexts. I’d require more concrete evidence.

 

“But it’s a very sensible question to ask. And the only way to find that more concrete evidence? Running a validation study.”
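For the curious, the validation study Zendle describes in steps 1 through 4 above could look something like the following back-of-the-envelope sketch. Everything here is simulated – the sample sizes, the harm measure, and the assumption that the PGSI-to-harm link might be weaker among gamers are all placeholders, not findings.

```python
# A back-of-the-envelope sketch of the proposed validation study: recruit
# gamers, give them the PGSI, ask about outcomes the scale should predict,
# and compare the resulting correlation with the general population.
# All data below is simulated; the effect sizes are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulate_sample(n, criterion_slope):
    """Simulate PGSI totals and a criterion outcome (e.g. gambling-related harm)."""
    pgsi = rng.integers(0, 28, n)
    harm = criterion_slope * pgsi + rng.normal(0, 3, n)
    return pgsi, harm

# Hypothetical scenario: the PGSI-harm link is weaker among gamers because
# some items get answered with in-game "funnymoney" in mind.
general_pgsi, general_harm = simulate_sample(2000, criterion_slope=0.30)
gamer_pgsi, gamer_harm = simulate_sample(2000, criterion_slope=0.10)

r_general, _ = stats.pearsonr(general_pgsi, general_harm)
r_gamers, _ = stats.pearsonr(gamer_pgsi, gamer_harm)
print(f"General population: r = {r_general:.2f}")
print(f"Gamers:             r = {r_gamers:.2f}")
# A markedly weaker correlation among gamers would be evidence the scale
# misbehaves in this subgroup; similar correlations would suggest it holds up.
```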

Of course, none of this changes the fact that the report itself repeatedly states that more research into loot boxes and gambling is needed before drawing any firm conclusions on the matter.

At the moment, the Federal Trade Commission is doing just that by launching a thorough investigation into loot boxes at the behest of a U.S. Senator. Australia has also begun investigating loot boxes, following initiatives taken by Belgium and the Netherlands, both of which took a hard stance against loot boxes in premium-priced games after their own independent investigations.

(Thanks for the news tip Lyle)

