melannen ([personal profile] melannen) wrote 2018-12-12 12:07 am

Let's Read A Scientific Paper: Is Social Media Bad For You?

Specifically, this one:

Melissa G. Hunt, Rachel Marx, Courtney Lipson, and Jordyn Young (2018). No More FOMO: Limiting Social Media Decreases Loneliness and Depression. Journal of Social and Clinical Psychology. Free preprint on ResearchGate.

For quite a while I've had a recurring desire to start a blog that was just me reading whatever scientific study was hitting the mainstream media that week, summarizing/analyzing the paper from a layperson's perspective, and then pointing out what the media coverage should have been saying instead. I tried doing this on a tumblr sideblog for a while; what I always forget is that it takes a long time to do. Even if I pick a simple paper that's available open access or preprint, isn't super long or technical, don't look up any of the citations, and just accept any math I can't figure out, it still takes more free time than I seem to consistently have these days (I say, as I look down my long trail of excessively long journal entries from this week...)

But this paper - about how social media is very bad for your psychological health - hit the media a few weeks ago, and the coverage made me so mad I either had to write this up or stew over it all night silently instead, and it seemed like a topic y'all would be interested in, so here we go, let's do this!

Abstract:
Introduction:
Given the breadth of correlational research linking social media use to worse well-being, we undertook an experimental study to investigate the potential causal role that social media plays in this relationship.
Method:
After a week of baseline monitoring, 143 undergraduates at the University of Pennsylvania were randomly assigned to either limit Facebook, Instagram and Snapchat use to 10 minutes, per platform, per day, or to use social media as usual for three weeks.
Results:
The limited use group showed significant reductions in loneliness and depression over three weeks compared to the control group. Both groups showed significant decreases in anxiety and fear of missing out over baseline, suggesting a benefit of increased self-monitoring.
Discussion:
Our findings strongly suggest that limiting social media use to approximately 30 minutes per day may lead to significant improvement in well-being.


Okay, let's go down the actual paper point-by-point, looking at what they're actually doing here. You may want to pull up the paper I linked above and read along - it's pretty readable as these things go - but you should be able to follow without it.

  • The experiment was done on 143 disproportionately female undergraduates who were taking psychology classes at an urban Ivy League university, who were iPhone owners and active users of Facebook, Snapchat, and Instagram, and who chose this social media study out of a list of other possible psychology studies to participate in to earn course credit. It was not a study of “humans in general”. This is such a problem in psychology studies that it’s become almost a joke, but it doesn’t stop researchers from continuing to do it, and most of the other studies they’re citing did the same thing. I can think of half a dozen reasons offhand why a study of psychological health and social media use of self-selected current undergraduates at an Ivy League university might not generalize well to other human beings; I bet you can think of even more.

  • They didn’t test “social media use”, they tested use of the Facebook, Instagram, and Snapchat iPhone apps. This is a very specific subset of social media, one that in some cases we know for a fact has a profit motive that has caused its owners to specifically (if not intentionally) design their technology to make people feel worse. It’s also specifically use of the mobile apps, which would generally apply to use when someone is away from their computer. I can think of half a dozen reasons offhand why a study of mobile use of Snapchat, Instagram, and Facebook might not generalize to all social media use; I bet you can think of even more.

  • They explicitly point out (good for them!) that nearly all previous studies have been correlational – that is, “less happy people use facebook more” – which doesn’t make it clear whether the unhappiness causes the facebook use or vice versa. They list two previous studies that do try to prove causation, but have issues with both of those studies. I’m not going to dig in enough to read those studies too, but based just on what’s in this paper, I have issues with the FOMO authors’ analyses of them.
    • Verduyn et al., 2015 apparently studied passive vs. active engagement on Facebook (reading vs. posting) and found that passive was worse, an observation that backs up my anecdata, but I’m not sure what that says about social media use in general. (I’d particularly like to see passive social media use compared to a) other passive media consumption and b) sitting in a room where other people are talking and you aren’t allowed to join in.) The FOMO authors’ objection, however, is that “the longer one spends on social media, the more one will be engaging with it in a passive way.” Have they. Have they never lost an entire night to a comment thread before? Do they just… not understand what active use of social media is? Sure, I can lose an hour scrolling Tumblr or Wonkette comments, but I can lose a day in having a discussion or writing a post or taking photos. I'm not even going to think about the amount of time I've spent on this post and I haven't even posted it yet. (Maybe they don’t think “content creation” counts as social media use?)
    • Tromholt, 2016 apparently randomly assigned people to completely abstain from Facebook for a week. I agree with most of what they say about this, but then I get to “many users have grown so attached to social media that a long-term intervention requiring complete abstention would be unrealistic.” Dude. People have done scientific studies requiring abstention from food, I think recruiting people to give up Facebook for a few months is doable. Especially given stats you quoted in this same paragraph that half of US adults don’t even use it regularly anyway. Either you’re saying that giving up Facebook would mess up people’s lives so badly that you can’t ethically do it – in which case you’ve already answered your research question about whether it’s a net good – or you’re saying that you don’t think you could get Ivy League undergraduates to do it for course credit, which is another thing entirely.

  • They chose the "well-being constructs" they were testing based on ones that had been shown in other studies to correlate with social media use. On one hand, this is good - it means that they're trying to double-check the results of previous studies. On the other hand, this is bad - the "well-being constructs" they're using are subjective self-reported questionnaires; by choosing specific questionnaires that have already been shown to correlate well with social media use, but changing the rest of the experiment, they're basically just double-checking those specific questionnaires. Rather than testing actual well-being, by filtering for questionnaires that they already know get the results they are looking for, they may just be testing against a certain layout of questionnaire that tends to get certain results from social media users. (Note: self-reported questionnaires in psychology are. not the most rigorously scientifically supported things, in general. They are sometimes the best we've got, but that's often not saying much.)

  • They tested well-being using seven different factors, all of which were measured using standard questionnaires. I’ll leave most questions about the specific questionnaires to people who know their current psychology better than I do, but an important thing to note is that self-reported questionnaires are very vulnerable to biased reporting – that is, the study participants think they know what result they should be getting, so without even realizing they’re doing it, they slightly round up their answers toward that result. This is why blinding is important – trying to make sure the participants don’t know what result they should be getting, in order to reduce the effect of this (this study is not even slightly blinded.)

  • They tracked whether the students really did limit their usage by having students send them screenshots of their phone’s battery usage by app. This means any usage not via the app on their own phone wasn’t tracked – which, to their credit, they point out as a weakness of the study. They also had a lot of trouble getting the screenshots sent consistently every day, so halfway through the study they changed the rules to weekly, which isn’t great in terms of the study design, and also meant they lost days’ worth of data whenever anybody’s phone battery ran out and cleared the records.

  • In the first half of the study, they had students do the depression inventory less often than the others, because “it was assumed that depression would not fluctuate much on a week-by-week basis.” Is that… is that a standard assumption in psychology? Because I remember being an undergrad; my perceived level of depression fluctuated wildly based on things like how many hours ago I’d last remembered to make some ramen – and the study authors changed their mind on this halfway through the study and started including the depression inventory a lot more often, so clearly they too realized they were wrong. It worries me that they had such a limited understanding of depression in their study population going into a study about depression that they changed their mind on something this basic halfway through the study.

  • “We found that baseline depression, loneliness, anxiety, perceived social support, self-esteem, and well-being did not actually correlate with baseline social media use in the week following completing the questionnaires.” – that is, people who used social media more before the study started weren't actually less happy. Which is interesting, because it seems to contradict those previous studies which all claimed it did. The paper proposes that this is because the correlation is not with actual social media use, but with perceived social media use – that is, unhappy people think they use social media more, when in fact they don’t – and that self-reported vs. objectively tested usage doesn’t actually match up all that well. This seems to me like the actual most interesting result in the paper, especially given that their well-being data was all self-reported and might be subject to the exact same bias, but they don’t dwell on it much.

  • “Baseline Fear of Missing Out, however, did predict more actual social media use.” Okay. You guys. Fear of Missing Out. This is such a buzzwordy thing. It was invented by ad execs in the late 90s to convince people to buy stuff, and has since become something that is basically used entirely for social media scaremongering. Which is to say, Fear of Missing Out has, in large part of late, been defined by “that thing that young people who use a lot of social media experience”, so of course it correlates with social media use. I’m not saying it’s not a valid psych concept, but I’d like to see a lot more about it outside of the context of social media use and/or kids these days before I stake very much on it, and see more of it in the context of other less-buzzwordy psychological factors.

  • The students were divided into a control group and an experimental group. They don’t seem to provide the numbers for exactly how many students were in each group, but with two groups and a total of 140, that means the number of students who limited their use was only around 70. Because of the changes halfway through the study, the number who limited their use and had their depression scores fully recorded would have been only around 30. This is the level where three or four outliers can drastically change the significance of your results, even if they are far less dramatic as outliers than Spiders Georg (and may not even be obvious as outliers - if a couple of kids in the control group caught the flu and none did in the experimental group, for example, that's within the range of chance for those kinds of numbers, but might have a big effect on your results beyond what your statistics are going to capture.)
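To make that concrete, here's a toy simulation (plain Python, entirely made-up numbers, nothing from the paper): two identical "score" distributions of 30 people each, then three flu-spiked scores added to one side, and a look at how far the two-sample t statistic moves.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample Welch t statistic: difference in means over its standard error."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(b) - mean(a)) / se

# Two identical toy "depression score" groups of 30 people each.
control = [40 + i for i in range(30)]
experimental = list(control)
print(welch_t(control, experimental))  # 0.0 -- no difference at all

# Pretend three kids in the experimental group had a bad flu week
# and their scores spiked (90 is an arbitrary invented spike).
experimental[:3] = [90, 90, 90]
print(round(welch_t(control, experimental), 2))  # ~1.73
```

Three outliers out of thirty move the statistic from zero to most of the way toward the conventional ~2.0 significance cutoff, without anything about social media changing at all.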


  • They found quite high significance for “loneliness”, which convinced me to look at the actual test they used for loneliness, which was the revised UCLA Loneliness Scale (Russell, Peplau, & Cutrona, 1980). What struck me, when I looked up the questionnaire, is that it asks things like “I have a lot in common with the people around me” or “there are people I can talk to”. Those are the kinds of questions that are not designed to acknowledge the existence of long-distance text-based friendships, and given that these are college undergrads away from home, they are likely to have a lot of long-distance friendships, and higher social media use probably corresponds to putting more effort into those long-distance friendships. (There’s discussion to be had about the quality of internet vs. in-person friendship, but this questionnaire, which was designed when email was barely a thing, is not the way to have it.) A difference between interacting more with local vs. long distance friends leading to differences in answers in the questionnaire about people “around me” could explain the entire difference in the loneliness scale. It wouldn't even require the students to feel like their online friendships were lonelier - because the questionnaire automatically assumes that fewer in-person interactions means more loneliness.

  • Mind you, they didn’t make it easy to figure out what the actual difference in loneliness score was. They give the p-values and state that it’s “significant” but “significant” in statistics doesn’t mean “large”, it just means “probably not an accident”. It looks from the poorly-labeled graph like there was an (average?) drop of about 6 points out of 80 possible, which is the equivalent of answering about one and a half of the questions differently – certainly within the range of what might have happened if even a couple of the questions were biased against online interactions. The only other actual data we get for loneliness is “F (1,111) = 6.896” without any context, and I will confess to not knowing what that means without any further explanation.
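For the curious: an F statistic with stated degrees of freedom can at least be turned into a p-value. As a sketch without a stats library, here's a quick Monte Carlo approximation (an F(df1, df2) variable is a ratio of two scaled chi-squared draws), which lands right around the p = .01 the paper reports for loneliness:

```python
import random

random.seed(0)

def f_tail_prob(f_obs, df1, df2, sims=20_000):
    """Monte Carlo estimate of P(F >= f_obs) under the null hypothesis,
    building F(df1, df2) as a ratio of two scaled chi-squared draws."""
    hits = 0
    for _ in range(sims):
        num = sum(random.gauss(0, 1) ** 2 for _ in range(df1)) / df1
        den = sum(random.gauss(0, 1) ** 2 for _ in range(df2)) / df2
        if num / den >= f_obs:
            hits += 1
    return hits / sims

print(f_tail_prob(6.896, 1, 111))  # roughly 0.01
```

So the bare "F (1,111) = 6.896" is just the p = .01 result restated in a different notation; it adds no effect-size information.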

  • For depression, they split the results into two groups – “high baseline depression” and “low baseline depression”. They don’t at any point say how many people were in each of these groups. But remember we’re starting with about 70, so if it split evenly, it would be at most 35 in each; it’s unlikely it was split evenly, so now we’re looking at even smaller numbers in each group. Also, remember they changed the way they measured depression halfway through the study, so it’s at most 15 in each group under the original methodology. At that point, just one kid gets the flu (physical illness tends to increase depression scores) and your numbers are off. And at no point do they discuss how they factored the change in methodology into their statistical analysis.

  • There is a graph for depression. I hate this graph. It appears to directly contradict what is said in the text. If it doesn't directly contradict the text, it is an incredibly bad graph that manages to give the impression it does (well, it's an incredibly bad graph either way.) So I have no idea whether to go by what the text says or what the graph says.

  • Either way, they do both agree that even the control group had a significant decrease in their depression scores, and the experiment group improved more. That seems to imply that simply taking part in the study reduced depression levels. This makes sense to me – for my personal depression, feeling like I’m accomplishing something and providing a net good to humanity is one of the best ways to improve my mood a little, as was (when I was an undergrad) successfully finishing homework, so getting a little hit of “I did the study!” every day for a month would probably reduce depression scores (not cure it, just reduce the scores a tiny bit), and in the experimental group, knowing that I had succeeded at meeting the study conditions of limiting my use would have helped even more. And given the other result that perceived social media use is more closely correlated to well-being than actual social media use, it could be that just being mindful of the time spent on social media improved mood, regardless of actual changes in use, because people weren’t overestimating time spent and beating themselves up for it. It would be interesting to see if there was a difference here between the people who reported every day and those who reported every week, or between people who were always successful in meeting the conditions and those who weren’t – but of course they don’t break that down at all. Probably because at that point the number of people in each group would be so small it would be impossible to treat it like good data.

  • They also saw a decline in FOMO and anxiety over the course of the study, but with no significant difference between control and experimental groups, which further backs up the possibility that just participating in the study increased well-being, or the possibility that the results are due to bias in self-reporting due to the study not being blinded (or due to taking the same tests repeatedly changing the way the students responded to the tests), or reversion to the mean (where people are more likely to sign up for a study like this if they’re doing worse than normal, and then get a little closer to normal over the course of the study, because most things do eventually get a little closer to normal.)

  • You’ll note that the paper’s title mentions FOMO but FOMO was not one of the things that showed a significant result for experimental vs. control. See what I meant about it being a buzzword?

  • The other three tests showed no significant difference either between experimental and control groups or between baseline and post-experiment. It seems to me that "social support, self-esteem, and well-being are completely unaffected by social media use" is almost a more interesting result, but for some reason, that part didn't show up in their abstract or press release. Wonder why.



  • Is it time to talk about p-values? I think it is. Almost all of their results are reported mostly as p-values. If you think of experiments as attempts to figure out if something interesting is happening, p-value is basically how most scientists score how interesting it really is.

  • P-values are relied on for this probably a lot more than they should be, for lots of complicated reasons - the word "bayesian" gets thrown around a lot - but statistics are not my favorite thing, so let's just accept p-values as the standard for now. So: p-values, explained:

    • Flip a coin once, it comes up heads. This is not very interesting. Flip it twice, it’s heads both times, it’s a little bit interesting, but not very, that could just be coincidence. Flip it ten times and it’s heads every time, then either there’s something up with the coin, something up with the way you’re flipping it, or you’re a character in some kind of postmodern theater production and may actually be dead, and regardless, you should probably try to figure out what’s going on.
    • The lower the p-value, the more heads in a row you’ve flipped. A p-value of .05 means you’ve flipped about 5 heads in a row – there’s about a one in twenty, or 5%, or .05, chance of that happening purely by coincidence. .05 is what people usually use as the cut-off for interesting (aka significant). A p-value of .001 means you’ve flipped 10 heads in a row, and you’re in Tom Stoppard territory - something is definitely up.
    • With a coin flip, we have a pretty good idea of what the results would be if nothing interesting was happening (we would get 50% heads if we tried enough times, and get closer to 50% the more often we tried.) With most things in actual science, we don't really know what the probability of something happening by chance really is - some of the numbers plugged into the statistics are always going to be best guesses (otherwise, we wouldn't need to do the science.) For something like a particle accelerator, most of those best guesses are going to be really good best guesses, based on a lot of previous science, and engineering tests, and very hard numbers. For something involving people, not so much.

  • For loneliness, where there was a 'significant' difference between control and experiment - their p-value was .01, which is somewhere around seven coin-flips coming up heads in a row. For FOMO and anxiety – where the change was around the same in the control group and the experimental group – they get p-values around 9-10 heads in a row. These are pretty interesting results - something is probably happening, even if we disagree about what it actually is.

  • For depression, they only give the p-value as “<.05” and not the actual value, which seems kind of sketchy, and usually means it was somewhere around .04999. That’s back to about 5 coin flips in a row, or a one-in-twenty shot of pure coincidence.

  • Remember, they tested seven different things, and at least some of those things they cut up into different categories, so they were basically doing more than one test on them. We know they tested change for high depression groups and low depression groups, in both the experimental and control groups, and tested both the experimental and control groups on six other measures, so that’s up to 16 tests, and we don’t know if they tried to divide up any of the other measures and didn’t see anything interesting so they didn’t report it. But already we’re up to the 1-in-20 chance coming up in one test out of 16 – suddenly, a 1-in-20 chance doesn’t sound that interesting. (see this xkcd comic if I lost you on that one.)
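The arithmetic behind that, as a back-of-the-envelope sketch (the paper's tests aren't truly independent, which changes the exact number but not the point):

```python
# Probability that at least one of 16 null tests comes up "significant"
# at p < .05, assuming (unrealistically) that the tests are independent:
p_any_fluke = 1 - 0.95 ** 16
print(round(p_any_fluke, 2))  # 0.56
```

Better-than-even odds of at least one false alarm, before anyone has touched any social media.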

  • This is why you need to share more data than just p-values. And why you need to replicate by having someone else perform the tests and see if they get the same results - the same 1-in-20 chance coming up twice in a row is suddenly a lot harder to write off. (This kind of psychology study is notorious for not getting the same results twice in a row.)
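The replication arithmetic is the mirror image: for the same 1-in-20 fluke to survive an independent replication, the flukes have to multiply.

```python
# One 1-in-20 fluke coming up twice in independent experiments:
print(0.05 * 0.05)  # 0.0025, a 1-in-400 event
```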

  • They wanted to get final follow-up data from the participants several months later. However, only about 20% of them responded, because they’d already earned their course credit and the semester was over (one of many reasons why using undergrads is not always the best choice.) Also, looking at the numbers elsewhere in the paper, it looks like for most of the tests they were only using data from about 110-120 of the 143 undergraduates who signed up. Why? What happened to the other 20-30? Did they drop out of the study, or was their data unusable for some reason, or did they reject it as outliers? Dunno, it’s not in the paper.

  • ALL THAT SAID - and it was a lot - this is not a particularly bad paper, as papers in this field go. It's probably significantly better than most - it's better than the last few I've read. The fact that I could shoot it full of holes is not so much on the scientists involved as on the field as a whole - on the pressure to publish something, on the fact that papers like this do get published regularly, on the fact that most of the things they did that I objected to are things that most psychology papers do.

  • Also, the free version of this paper is a pre-print version, which means the full version that's published in the journal might be slightly improved (not very much improved, though, probably, because this version was on the journal website for a while.) Some preprints are actual rough drafts, and I wouldn't pick them apart this hard - but this one had a press release to go with it, and if you're putting out a press release, you're saying your paper is ready for prime time.

  • The title and all the press coverage - including in the science blogosphere, which used to know better but has been getting worse and worse - vastly overstated the results and the extent to which the results could be generalized to all people and all social media. They always do this. It's terrible. It builds mistrust and misunderstanding of science, reinforces bad practices in the lab, and creates a space for actual fake science to come in. It needs to stop. I don't know how to stop it.

  • Actual interesting results from this paper that I would have put in an article about it if I was a science journalist:
    • People who are unhappy probably think they are wasting time on social media more than they actually are. (Or, perhaps, happy people think they're using it less.)
    • It's possible that the reason studies like this get positive results is that press coverage of studies like this convinces people that if they're unhappy they must be using more social media which means that when they participate in studies like this they report that they use more social media if they're unhappy which means the studies get press coverage and the cycle repeats, because science doesn't happen in an ivory tower.
    • Tracking your social media use - or possibly just any sort of small, achievable daily task that feels productive, such as tracking your social media use for a study - seems to make people less unhappy.
    • Reducing Facebook and Instagram and Snapchat use on mobile devices among mostly-female ivy league college undergraduates appears to improve their mental health (which, given that Facebook at least was invented in order to make ivy league college girls feel like shit about themselves, is not a surprising result regardless of your opinion of social media in general!) Given the mental health problems being seen among Ivy League college undergrads, this is probably a very useful thing to explore further even if it's kept to that limited scope.


  • ...so that is what I do whenever I see a "new scientific study" being reported in the media. It is probably something you could learn to do as well! Even if you don't want to make a hobby of it as I have, it's useful to remember that pretty much any scientific study that is supposed to be giving "amazing new results" is really just somebody saying "I tried a thing and this result looks kind of interesting and maybe worth following up but I dunno really," only with stats, and with grant money on the line.

    I leave this study apparently claiming that women think with their wombs for someone else to analyze. :P

[personal profile] alexseanchai 2018-12-12 05:19 am (UTC)(link)
blood is compulsory, you see

[personal profile] recessional 2018-12-12 05:25 am (UTC)(link)
I mean I sort of feel that "getting young women away from venues that are filled with the disproportionate harassment and targeting of young women makes their mental health improve" is not a particularly controversial or revolutionary finding. ODDLY.

[personal profile] peoriapeoriawhereart 2018-12-12 05:51 am (UTC)(link)
Bold of you to assume there is a difference between FB and an ILU.

I can think of half a dozen parameters that weren't in the part Jim's truck avoided a ruminant.

Swingline, Sweet Chariot.

[personal profile] peoriapeoriawhereart 2018-12-12 05:47 am (UTC)(link)
I'm declaring teal deer for now, but may read in greater detail when my spoons aren't all grapefruit (I've no Bucky icon so Blair will have to do.)

WTF?!

Psychology is supposed to be one of the sciences and this wouldn't pass muster (or shouldn't, I don't know what standards 'are' currently) over where social sciences are committed.

I'm glad you read this for us and presented it; I've got people who 'may know more' or will at least have Caustic Eyebrows about it.

[personal profile] peoriapeoriawhereart 2018-12-12 06:02 am (UTC)(link)
Tis the publish or perish. (it's more me, tonight, I'm just home from work and about to turn in once I've run down my coffee from earlier. Later, I'll want a good strop of my teeth; this should do nicely.)

Well, the whole gutting of the fourth estate (my dear inky wretches) and the garret-ing of the humanities have left no one watching the watchmen, let alone the silverware.


[personal profile] lannamichaels 2018-12-12 03:29 pm (UTC)(link)
Who needs humanities? Full STEM ahead!

[personal profile] peoriapeoriawhereart 2018-12-13 04:06 am (UTC)(link)
You get further when you include the arts. STEAM.

(I really need a battle cry for my fields such as they are. I spend most of my time Running with Engineers because it's easier for me to talk their shop than bring them up to speed to any thing over in history.)

[personal profile] peoriapeoriawhereart 2018-12-13 04:00 am (UTC)(link)
No, I know, during the Culture Wars (Bloom and Newt) I had a French major express utter disdain for science, all of them.

I'm all for STEAM. Which makes me think of an archaeology professor shared with the art department showing up to a pottery practicum in a heat suit (aka the baked potato beekeeper) so undergrads could engulf him in gouts of fire.

That's how you get interest in science. Great balls of fire.

Anthropology and History, classics minor. Fandom is my fandom.

[personal profile] recessional 2018-12-13 07:15 pm (UTC)(link)
I was deeply and perhaps weirdly (but hey it worked) affected in how I think about "STEAM" type stuff long before we even came up with the idea of the acronym when my (somewhat autie, very acidic, I hearted him but have always been afraid he just found me annoying) mediaeval history prof (of many very detailed high level seminars) pointed out that properly all knowledge is Humanities and that science and math are absolutely humanities, it's all natural philosophy, the division has been absurd since it started.

It's weird that THAT was the thing, maybe, that made my little historian brain go "OH" but it worked!

[personal profile] peoriapeoriawhereart 2018-12-14 04:43 pm (UTC)(link)
Math is a language and I took a weaving class specifically because I was early in my anthropology courses. The art majors seemed to think that was a weird reason to give. Later I got to explain to a history professor (he did appreciate these little nuggets) why old cloth was the width it was. He did teach American Industrialization, but was limited to steam powered looms.

The division came from sheer time constraints and people with limited capital needing to keep boilers from bursting before they could learn Latin.

Civics and civil engineering should be contained by the same body.

[personal profile] recessional 2018-12-13 01:49 am (UTC)(link)
Fuck "publish or perish" it is the worst.

THE. WORST.

[personal profile] recessional 2018-12-13 07:12 pm (UTC)(link)
Like I actually almost want to rant more about how the Worst it really is and how it is destroying knowledge and understanding and and and and . . . .

. . . .but like CLEARLY YOU ALREADY KNOW and also it's at the same time tiring and depressing and eventually one really does just end up at the point of that tumblr post that goes "reblog if AAAAAAAAAAHHHHHHHHHHHHH".

Kill it with fire okay.
peoriapeoriawhereart: Cartoon Stantz post-kafoom (Ray with marshmellow creme)

[personal profile] peoriapeoriawhereart 2018-12-14 04:19 pm (UTC)(link)
Like Bruce I'm always angry. I double-wield the English language pulling as much as I can from the fires through a narrow window and sprang it all into fanfiction.

Recharge at the little perfect moments. Like getting up from the dimpled bus roof and laughing at the bank you flew from.

landofnowhere: (Default)

[personal profile] landofnowhere 2018-12-12 06:10 am (UTC)(link)
Wow! Thank you for taking the time to go through all this!
nicki: (Default)

[personal profile] nicki 2018-12-12 06:24 am (UTC)(link)
This is a bad study and they should feel bad.

I have to consciously not skew my answers in a study. Even in a blind one, I can often figure out what they are trying to do just by the questions they are asking.

What they have told me is that if you have a bunch of people all of the same age, similar life stages, and similar social class in a compressed geographical area and encourage some of them to limit their screen time, they feel better. Probably because they interact more, since they have a high level of opportunity for direct interactions and lots of targets of opportunity for interaction, and so they are in a better mood. (If you are doing depression testing that often, you aren't getting actual depression levels, you are getting mood.) There are many studies on increased direct social interactions improving mental health.
ratcreature: The lurkers support me in email. (lurkers)

[personal profile] ratcreature 2018-12-12 09:06 am (UTC)(link)
I don't think it even has anything to do with interaction increasing. They told people to essentially fast a bit from an indulgence (the widely held assumption is that it is "good" not to be too attached to screens, which these college kids would have been told since childhood, via screen time limits imposed by their parents) and gave them a feedback method to perform their commitment and virtuousness (rather than having to chronicle their continued indulgence like the control).

I think a control group reporting they ate green vegetables once a day or drank water instead of a soft drink might have shown the same positive mental health effect, and the study may not say anything about social media as such at all.
recessional: a photo image of feet in sparkly red shoes (Default)

[personal profile] recessional 2018-12-12 08:19 pm (UTC)(link)
Do not get me started on the bs around "screen time" I will scream and scream and scream.

(It particularly gets me because there are actually specific, actionable dangers and potential dangers associated with overstimulation and distress caused by continual access to certain things that increase markedly in situations where there is unmediated access to things like smartphones or tablets etc etc etc but DOING ANYTHING ABOUT THESE REQUIRES ACTUALLY ENGAGING WITH MEANINGFUL ASSESSMENTS OF THESE THINGS AS TOOLS AND NOT WEIRD MORAL-CLASS-MARKER BS ABOUT "SCREEN TIME" THAT OBFUSCATES AND MAKES BULLSHIT OF EVERYTHING . . . .)

(I may have a Thing here.)
nicki: (Default)

[personal profile] nicki 2018-12-13 01:43 am (UTC)(link)
Could be other things. My assumption is based on the very very specific situation their group of participants was sourced from.
nicki: (Default)

[personal profile] nicki 2018-12-13 01:37 am (UTC)(link)
*shrug* My grad research design prof wouldn't have been happy with it because they were only looking at a very very very specific group of people in a very very very specific situation and it only would have gotten through my undergrad because we were all undergrads mostly living on or very very near campus.
fred_mouse: line drawing of sheep coloured in queer flag colours with dream bubble reading 'dreamwidth' (Default)

[personal profile] fred_mouse 2018-12-13 08:55 am (UTC)(link)
Picking up on the very specific group of people detail -- pretty sure the uni I did psych at only allowed the course credits approach to be done for getting Honours students their research subjects, and it was kind of assumed that those papers weren't going to pass muster for publication. There was a general sneer at Those Kinds Of Universities that actually used such studies to 'bolster their publication records' or some such. Which, as a young thing with no idea of the North American university system, I took at face value at the time. I'm not sure the Ivy League would appreciate being looked down upon by little foreign unis!
ratcreature: eyeroll (eyeroll)

[personal profile] ratcreature 2018-12-12 08:51 am (UTC)(link)
They should have had a control group that similarly did a small daily "fasting" kind of lifestyle limit that made them feel vaguely virtuous, and got tracked and reaffirmed. Like asking them to reduce their sugar intake, or take the stairs more often, or reduce "junk" food, or drink water instead of a soft drink at their meal, or whatever. Following something like that really tends to create a good feeling that is totally independent from whatever the habit or changing it accomplishes. Just successfully doing the fast gives that accomplishment kick, which makes you feel better about yourself and thus presumably would show up in self-reported depression levels.
ratcreature: eyeroll (eyeroll)

[personal profile] ratcreature 2018-12-12 05:25 pm (UTC)(link)
Yeah. And unlike changing the subject selection to be really representative of humans (or even the US population), this wouldn't have added much difficulty to the setup, and it would at least have given more hints about whether there's really something specific to engaging or not engaging with social networks, or whether it's just an effect of feeling that some self-improvement in general is happening.
ratcreature: RatCreature is thinking: hmm...? (hmm...?)

[personal profile] ratcreature 2018-12-12 07:37 pm (UTC)(link)
That controls for something different though. I didn't mean just any small accomplishment, I meant the performance of "virtue" of any kind is something that makes you feel good about yourself. Obviously some of the things commonly thought of as "virtuous" might also influence mood through other pathways (like I guess exercise would), but I specifically would want to choose something that people think would be good for them, like controlling for a placebo effect.
peoriapeoriawhereart: very British officer in sweater (Brigader gets the job done)

[personal profile] peoriapeoriawhereart 2018-12-15 04:09 am (UTC)(link)
Maybe something like "bring your own bag", or take recyclables to a bin if one isn't provided at an event?
genarti: Knees-down view of woman on tiptoe next to bookshelves (Default)

[personal profile] genarti 2018-12-14 02:22 am (UTC)(link)
Oooh, yes, that would have been a really interesting control!
schneefink: River walking among trees, from "Safe" (Default)

[personal profile] schneefink 2018-12-12 09:26 am (UTC)(link)
this is not a particularly bad paper, as papers in this field go. It's probably significantly better than most - it's better than the last few I've read.
This is worrying.

That said, the results you got from it are definitely interesting. And some of it lines up with my personal experience: when depressed I too feel accomplished and thus better when I manage to regularly track something. ...I should start doing that again.
recessional: a photo image of feet in sparkly red shoes (Default)

[personal profile] recessional 2018-12-12 08:21 pm (UTC)(link)
Given it's a formal part of behavioural therapy interventions etc I'm sure there has to be some studies on it. (Their quality, I cannot vouch for, but I'm sure they exist.)
extrapenguin: A collection of beakers filled with blue fluids. (science)

[personal profile] extrapenguin 2018-12-12 02:17 pm (UTC)(link)
Thank you for this! My field of expertise (thankfully?) is not one that is often reported on in public media, so there's less to be annoyed about in daily life. It's always nice to see someone who knows about the state of the field go through and deconstruct the claims of the article du jour.

given that Facebook at least was invented in order to make ivy league college girls feel like shit about themselves
Huh, this is the first I've heard of this! Do you have additional context?
isis: (bite me)

[personal profile] isis 2018-12-12 03:28 pm (UTC)(link)
The book Accidental Billionaires: The Founding of Facebook, a Tale of Sex, Money, Genius, and Betrayal by Ben Mezrich describes this (it's the book that eventually became the movie The Social Network). Basically, Eduardo Saverin and Mark Zuckerberg just wanted to get laid, so they crowdsourced a hotness database of female students.

(It's a terrible book, I don't recommend it, I DNFed it.)
Edited (darn parentheses) 2018-12-12 15:28 (UTC)
extrapenguin: Northern lights in blue and purple above black horizon. (Default)

[personal profile] extrapenguin 2018-12-12 05:11 pm (UTC)(link)
Ah, thanks! I'd heard (a slightly censored?) version where it was simply about trying to figure out which of the college girls were single and thus OK to hit on. The hotness database is new information.
extrapenguin: A collection of beakers filled with blue fluids. (science)

[personal profile] extrapenguin 2018-12-12 05:16 pm (UTC)(link)
Oh wow that is ... eep. Including some of the other shit he pulled, hacking into people's other accounts.

Huh, that RSS feed looks interesting! Thank you for bringing it to my attention; time to subscribe.
thenewbuzwuzz: converse on tree above ground (Default)

[personal profile] thenewbuzwuzz 2018-12-12 06:26 pm (UTC)(link)
Oh, YIKES. The way I read that Wiki article, though, they only ever compared people. The animals were a fleeting thought that wasn't implemented, sounds like.
cadenzamuse: Cross-legged girl literally drawing the world around her into being (Default)

[personal profile] cadenzamuse 2018-12-12 02:20 pm (UTC)(link)
I feel like there are a lot of assumptions in this paper*. (Also, wtf, with the not reporting any n or actual differences between things.) Honestly, what I really want to know is why J Soc Clin Psychol is accepting this peer-reviewed without poking for better numbers and more talk about limitations in the discussion section.

*Based on your analysis. I don't have the spoons for reading it straight right now.
stellar_dust: Stylized comic-book drawing of Scully at her laptop in the pilot. (Default)

[personal profile] stellar_dust 2018-12-12 02:39 pm (UTC)(link)
Congratulations, you have just won the opportunity to spend Christmas week commenting on rough drafts of my dissertation chapters! Enjoy!
lannamichaels: Astronaut Dale Gardner holds up For Sale sign after EVA. (Default)

[personal profile] lannamichaels 2018-12-12 03:27 pm (UTC)(link)
The experiment was done on 143 disproportionately female undergraduates who were taking psychology classes at an urban Ivy League university, who were iPhone owners and active users of Facebook, Snapchat, and Instagram, and who chose this social media study out of a list of other possible psychology studies to participate in to earn course credit. It was not a study of “humans in general”. This is such a problem in Psychology studies that it’s become almost a joke, but it doesn’t stop researchers from continuing to do it, and most of the other studies they’re citing did the same thing. I can think of half a dozen ways offhand why a study of psychological health and social media use of self-selected current undergraduates at an Ivy League university might not generalize well to other human beings; I bet you can think of even more.

I wanna complain about this stuff forever, especially since I was in that position. "You have to participate as a requirement of this course, here are things to do", and then you show up, you half-ass it, and then you leave. Or they recruit college students saying we'll give you 20 bucks, and then you half-ass it and you leave with some money to cover the food budget for the next couple days.

So, yeah, great population for your research, even on top of the problems of relying entirely on college students to generalize for humanity.
lannamichaels: Astronaut Dale Gardner holds up For Sale sign after EVA. (Default)

[personal profile] lannamichaels 2018-12-12 04:01 pm (UTC)(link)

CHILD PSYCHOLOGY MAKES ME SO MAD. Oh god I'm still not over this, but as part of my grad requirements, I had to take some kind of course, and nothing in my actual degree would fit my schedule, so I took one that was in early child development (think pre-verbal) and the PAPERS WE HAD TO READ. That was also the place I got into a fight with someone about how, no, not everyone speaks differently to men and women, and also with migration, talking about having traits that are endemic to a specific population of a place is just stereotype, so no, you can't say, you're prone to X because your father was Y, but you get X because your mother was a dolphin. Oh and I also got into a fight about how there's no one correct way to think and growing up does not include all learning to think in the same method as each other and you frankly don't necessarily ever know how some other person thinks. ...I am so not over this. It's been a decade.

But I'm never gonna let anyone tell me that a sample size of 5 children, none of whom can give you specific information, will tell you anything about how they THINK. Oh and of course it's not longitudinal, so if stuff comes up later, shrug emoji. Everything just seemed to be out of adult biases, I even remember one paper specifically talking about how they decided that a 6 month old was bored. Ugh, it's all adult biases, and their sample sizes are tiny, and made up of tiny people who are still learning how to communicate with the world.

susanreads: barcode of my username (barcode)

[personal profile] susanreads 2018-12-29 04:37 pm (UTC)(link)
Marshmallow experiment! (Here via [personal profile] jesse_the_k) I heard vaguely about the replications on ... some BBC science radio show or other ... and wondered whether they'd done anything about the massive holes in the original study; it sounded as tho they'd commented on the biggest problem but not paid enough attention to fix it, but of course that could be reporting bias.

As for the original: I don't remember when it was done or when I heard about it (even whether that was closer to 10 years ago or 40), but I remember my reaction. Soooooo ... this is supposed to be testing willpower in the form of delayed gratification? But it isn't. It's testing ... probably half a dozen entangled things, of which the most F-ING OBVIOUS is how much "subjects" trust the "experimenter". Also I'll eat my hat if any of these people grew up working class, let alone in any actually precarious or unstable home environment. CAN READ NO FURTHER MY HEAD WILL EXPLODE.

(It was never my field, but I used to read stuff.)
sylvaine: Dark-haired person with black eyes & white pupils. ([gen:science] learn it love it live it)

[personal profile] sylvaine 2018-12-12 08:44 pm (UTC)(link)
OH MY GOD ALL OF THIS. My endless goddamn frustration with the entire field is one reason I bloody well ditched that uni. "This is a serious problem in our field and we're going to dissect why in the first week... we will also continue perpetuating that exact problem in all of our studies and also uncritically and unquestioningly teach students about seminal studies that are just plain Bad Science. Also Ethics totally matter yup."
recessional: a photo image of feet in sparkly red shoes (Default)

[personal profile] recessional 2018-12-13 01:48 am (UTC)(link)
One of the things I love about UBC is that they literally did a whole Paper on how psychology is frequently misled because they're basically only studying the people who make it to undergraduate courses at universities.

(There are still a bunch of those kinds of studies that happen all around the university but I gather at this point there is also a solid awareness of "we're using this to teach you How To Do The Study, you should not kid yourself that this is actually a Significantly Meaningful Result based on THIS population.")
hlagol: (Default)

[personal profile] hlagol 2018-12-12 05:04 pm (UTC)(link)
Lord, reading this made me groan. I do not miss grad school, and library science wasn't even particularly egregious as a field!
hlagol: (Default)

[personal profile] hlagol 2018-12-12 07:37 pm (UTC)(link)
Yeah, they do alright, more or less!

I was always frustrated with the "jack of all disciplines, master of none" vibes, but I was in information science where I think that "STEM? Digital Humanities? Personal Info Management? Content Analysis? Quantitative or Qualitative? Computer Science or Scared-of-Statistics?" disjointedness was strongest.
recessional: a photo image of feet in sparkly red shoes (Default)

[personal profile] recessional 2018-12-13 01:46 am (UTC)(link)
Nah it's TOTALLY a hazard of the field - there were multiple occasions during my core where I flat out stuck my hand up and said "I know for a fact I know more about neuroscience and behavioural psychology than anyone else in this room and I know just enough to know that we do NOT know enough to be using these concepts/etc in this context with any meaning or rigour, please stahp."

(Why do I know it for a fact? Ask me about how many times I'd already gone "okay but no actually that's not how human behaviour works let me explain you with citations - " before that point. >.>)

I don't MIND the jack-of-all-trades-master-of-none aspect - I think it's actually appropriate to what a lot of what we do in the field, whether it's in the information sciences or the services side - but what drove me up the wall was the sense of not being aware that no really you are fucking around with the very surface superficial of a field that is HUGE AND VAST (be it stats, psych, behaviour, whatever . . . . ) and you need to be AWARE of that and of what you don't know?

. . . but I digress. >.>
hlagol: (Default)

[personal profile] hlagol 2018-12-13 05:05 pm (UTC)(link)
Yeah, you're very right about that. Definitely felt like colleagues were playing dress up, at times.
recessional: a photo image of feet in sparkly red shoes (Default)

[personal profile] recessional 2018-12-13 07:09 pm (UTC)(link)
Like, I love my colleagues! I love so much of their enthusiasm!

I WOULD REALLY LIKE US TO GET A PROPER ACTUAL TRAINED NEUROPSYCH UP IN HERE TO AT LEAST EXPLAIN THIS SHIT REALLY THOROUGHLY BEFORE WE START MAKING ANY KIND OF SOLID KNOWLEDGE-DECISIONS BASED ON SUPERFICIAL PODCAST READINGS OF THIRD HAND STUDIES, EVEN IF IT'S NOT FOR A PRACTICAL PURPOSE, BECAUSE DO YOU KNOW HOW HARD IT IS TO GET THE WRONG IDEAS OUT OF PEOPLE'S HEADS SOMETIMES.

(The vast majority of my most frustrating moments in library school were indeed entirely mired in "okay thanks to being neuroatypical, to twenty five years of living beside a very very neuroatypical sister with a trained medical doctor for a mother and MORE SELF-DRIVEN RESEARCH THAN I CAN EXPLAIN TO YOU RIGHT NOW, I can promise you that this article from a philosopher of education is bullshit because he has in fact literally asserted a factually. incorrect. thing. about how people learn and interact with stimulus and yes actually we do know that, we can look at brains now and that particular aspect we've studied a LOT so seriously, no.")

(One prof in particular found me very frustrating while trying to pretend he didn't. Oh well.)
recessional: a photo image of feet in sparkly red shoes (Default)

[personal profile] recessional 2018-12-13 07:19 pm (UTC)(link)

LOL. SOB. ;-;

I have a good imagination, I think I can colour in this sketch pretty well.

And all I can say is LOL. SOB.

hlagol: (Default)

[personal profile] hlagol 2018-12-13 08:38 pm (UTC)(link)
Lord. Doctor of Education = red flag on principal.
thenewbuzwuzz: converse on tree above ground (Default)

[personal profile] thenewbuzwuzz 2018-12-12 06:14 pm (UTC)(link)
Wonderful post, thank you!
recessional: a photo image of feet in sparkly red shoes (Default)

[personal profile] recessional 2018-12-12 08:23 pm (UTC)(link)
Also I keep thinking there has to be some way to set up a service where someone can, like, pay people like you and me to go through their intended study and point out all the bad assumptions or "citation needed" assumptions they're already making or whatever they're not controlling for, BEFORE they actually go through with it.
recessional: a photo image of feet in sparkly red shoes (Default)

[personal profile] recessional 2018-12-13 01:42 am (UTC)(link)
Oh I totally meant just as a service people MIGHT ACTUALLY WANT TO HAVE and do BEFORE they do the research - thus going through the INTENDED study, BEFORE they do all the work without addressing the issues in question, rather than afterwards.

This, obviously, is predicated on me assuming that even a major minority of people doing studies actually want to do serious, useful, field-helpful research on real things, and thus would LIKE someone to make sure their study design and assumptions and orientation makes sense before they go through the whole process of actually DOING IT. Peer review is, tbh, much more like being adjudicated as a performer: you've already done the work and hopefully it's not totally bullshit but you have to submit it to the panel and see what their judgement is.

I'm talking much more about the sense of what would be, in performance, a masterclass (except without claiming I'm a 'master' as such, because some elements of the metaphor don't go across directly) or a workshop: you know the performance piece well enough to run through it but you're still solidly early in figuring out all the details and directions.

Which among other things would hopefully actually SAVE MONEY AND WASTED TIME later on, before you ever get to the peer review judgement stage.
Edited 2018-12-13 19:28 (UTC)
sylvaine: Dark-haired person with black eyes & white pupils. ([gen:science] learn it love it live it)

[personal profile] sylvaine 2018-12-12 08:42 pm (UTC)(link)
Oh goodness, this gives me flashbacks to my own stint at uni in psychology. *scrubs this comment of multiple bitter, sarcastic asides*

This is an excellent analysis of the honestly many, many problems with that study. The one that I am still stuck on (I read this post on the way to school this morning; it is now nearly bedtime) is that they changed their methodology halfway through. I just. *hands* You can't do that!!! You can't do that and not somehow address that change in the data analysis I - *incoherent yelling*

Anyway, I love the idea that you had to dissect a study like this regularly, and I'd definitely be a delighted reader of any subsequent post like this you'd like to make, but I also very much understand that being rather a work-intensive undertaking.

Anyway, *points at icon in complete and utter lack of patience at the entire field of research psychology at this point*
dhampyresa: (Default)

[personal profile] dhampyresa 2018-12-12 10:52 pm (UTC)(link)
143 undergraduates
YOUR SAMPLE SIZE IS SMALL

Your methodology is bad AND YOU SHOULD FEEL BAD
sabotabby: (teacher lady)

[personal profile] sabotabby 2018-12-13 01:25 am (UTC)(link)
Here via a link and this is awesome and fascinating. Thank you for writing it!
loligo: Scully with blue glasses (Default)

[personal profile] loligo 2018-12-13 04:13 am (UTC)(link)
So this was an interesting post for me, because I have a graduate degree in social psychology, and quit after three years as a professor in part because I had come to believe that the research paradigm I was trained in (laboratory studies of group decision making, mostly using undergrads) was meaningless. So I am all for the critiquing! And yet... I don't want people to lose sight of the fact that *something* happened in this study. Something happened, and it was described in enough detail that various people on this post have been able to suggest intriguing alternative hypotheses that future researchers could test. This is still science! It's science that's probably taking the scenic route to usable insights, instead of heading directly there, but this study wasn't pointless.

I'm glad I got out of the field when I did. I support the efforts to replicate classic studies, but from what I've heard (in media reports, because I've lost touch with everyone who's still working in the area) a lot of it has been in a spirit of drama, wank, and back-biting, rather than a spirit of inquiry.

Because here's the thing: high-impact social psychology studies have always required certain skills in the researchers. You're trying to capture a complex human phenomenon and model it in a brief, safe time and space. That modeling is possible *if* you can get the participants to buy into it, and I don't know if anyone ever succeeded in quantifying or describing the contextual qualities that create that buy-in. Ugh, it's getting late and I'm not sure I know where I'm going with this anyway... I think maybe my point is that a failure to replicate doesn't necessarily mean that the original study was just a random fluke - it can just as likely mean that there were other social forces at work in the original study that were not specified in a way that makes them easily replicable... but something still happened that is worth understanding and pursuing, albeit maybe in a different direction.
recessional: a photo image of feet in sparkly red shoes (Default)

[personal profile] recessional 2018-12-13 07:25 pm (UTC)(link)
I completely agree with this, mind you?

It's kind of part of why I have that wistful sense of "like can you just like . . .give me 20$? and I'll run through your setup beforehand and point out all the places where even I as an autodidact with a broad knowledge range can tell you that this study is not actually doing what you think it's doing but is in fact doing something else due to this that and the other bias/perspective/lack of control/thing you appear not to have noticed?"

Sometimes they might even decide that what they were ACTUALLY going to study is useful! It just wasn't what they THOUGHT they were studying! Like that time a bunch of neuropsych people decided to scan the brain of some famous climber because they were in awe of his ability to be totally cool and collected in very dangerous climbing situations . . . . except they decided this meant that he must naturally have some physical abnormality in his brain that meant he Didn't Experience Fear?

. . . . .except that in interviews with him he talks, explicitly, about being unable to watch (say) Game of Thrones because it's upsetting and agitating, and he's CLEARLY anxious in social-dialogue situations, so actually you're not studying a guy who has NO fear response, you're studying a guy who has fear/arousal responses that do not match up with what a LOT of people experience and that in and of itself is fascinating, interesting and valuable! It would be a good thing to study!

It's just, you're not studying a guy "without fear". That's not what's going on here.


Also hooooo boy are you right about the actual attempts to replicate in many high profile cases. (The not-actually-like-that ones don't attract as much attention.)
mindstalk: (science)

[personal profile] mindstalk 2018-12-13 05:42 am (UTC)(link)
Thanks, interesting discussion!
fred_mouse: line drawing of sheep coloured in queer flag colours with dream bubble reading 'dreamwidth' (Default)

[personal profile] fred_mouse 2018-12-13 09:05 am (UTC)(link)
As both a psychologist (lapsed) and a statistician (fundamentalist) I alternated between laughter and the horror of 'oh, no, really, has no-one trained bloody psychs out of that *yet*' reactions. Particularly with social media research, which seems to start from the bias of 'healthy people would be Doing More Important Things if they Weren't On Facebook'.

Also -- I too, have contemplated a side blog on reading scientific papers. Although I'm not sure that I could get to this level of rant.
Also, also - that F-test statistic that is reported? I initially read it as bollocks, because I thought the 1,111 was a single number; as it is, I'm assuming that the 1 and the 111 are the degrees of freedom. But really, the psych obsession with putting their t- and F-values into papers instead of useful things like the mean and 95% confidence intervals on their estimated differences irks me.
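For anyone who hasn't had the stats indoctrination: the kind of reporting I'd rather see looks roughly like this. (A toy Python sketch with completely invented group summaries -- these numbers are NOT from the paper, just illustrating that a mean difference plus interval tells you the size and uncertainty of an effect in the scale of the actual questionnaire, which a bare F-value doesn't.)

```python
import math
from statistics import NormalDist

# Hypothetical summaries for two groups (mean, SD, n) -- invented numbers,
# standing in for e.g. depression-scale scores in limited-use vs. control.
m1, s1, n1 = 8.2, 5.1, 57   # "limited use" group
m2, s2, n2 = 10.5, 6.0, 56  # "use as usual" control group

diff = m1 - m2                            # estimated mean difference
se = math.sqrt(s1**2 / n1 + s2**2 / n2)   # Welch (unpooled) standard error
z = NormalDist().inv_cdf(0.975)           # ~1.96 for a 95% interval
ci = (diff - z * se, diff + z * se)

print(f"mean difference = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

(Strictly you'd use a t-distribution with the Welch degrees of freedom rather than the normal ~1.96, but at n above 50 per group the difference is cosmetic.)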

While I'm thinking of post-scripts: I have no idea who linked to this discussion, but I'm assuming I followed a link from somewhere.
genarti: Me covering my face with one hand. ([me] face. palm.)

[personal profile] genarti 2018-12-14 02:14 am (UTC)(link)
Ugh, what a messy study! Maybe not 100% useless (though it sounds like it's at least 95% useless without significant further follow-up, but so goes the course of much science, really. "Mixed results, some potentially significant things not tested or controlled for, needs more data and more replication and more analysis" is not inherently bad or anything.) But definitely not worth reporting on as the media apparently has -- which, of course, is nothing new with the media and scientific studies, especially in psych, but ooof.
Edited 2018-12-14 02:21 (UTC)
alessnox: Aless Nox - A writer (Default)

[personal profile] alessnox 2018-12-14 04:28 am (UTC)(link)
Thanks, this was interesting. I enjoyed your analysis and not having to read the paper myself. I happen to know that student mood tends to vary during the course of the semester, and there is a marked rise in mood after they pass. I would bet the missing data is from students who dropped the class.
I think the best way to test this is to make a cool mood-testing app that people want to use, then give it permission to monitor usage of other apps. Have the person tell in words which media sites they use most often, and use that to monitor social media, because in my circle of knowledge only old folks do most of their surfing on Facebook. It should monitor Pinterest, Tumblr, Twitter, Discord, etc., and it should be distributed to non-students. Just place it in the app store with all the permission forms preloaded after download.
Then again, I honestly don't care what the study says. I find that limiting access to toxic sites, and not missing too much sleep, is what has the most effect on mood. Not the old "In my day we had real friendships" kind of moralizing that papers like to report.