Facebook’s Role in Spreading Misinformation: New Study Sparks Controversy
For nearly a decade, concerns have grown over Facebook’s role in amplifying low-quality content and misinformation. Various studies have pointed to the platform’s algorithms as key contributors to the spread of false information, especially during pivotal moments such as elections and public health crises. These concerns have cast a long shadow over Facebook’s influence on public discourse and raised questions about the ethical responsibility of social media companies. However, in 2023, a study published in the prestigious journal Science seemed to challenge this narrative, asserting that Facebook’s algorithms were not significant drivers of misinformation during the 2020 U.S. election.
The study, funded by Facebook’s parent company, Meta, and co-authored by several Meta employees, was hailed as a victory for the tech giant. Nick Clegg, Meta’s president of global affairs, celebrated the findings, stating that they showed Facebook’s algorithms had “no detectable impact on polarization, political attitudes, or beliefs.” The research attracted widespread media attention and was presented as potentially exonerating Meta of long-standing accusations that it exacerbates political divisions and spreads misinformation.
However, a few months later, a different story began to unfold. A team of researchers led by Chhandak Bagchi from the University of Massachusetts Amherst published an eLetter in Science, raising serious concerns about the original study’s conclusions. Bagchi’s team argued that Facebook had altered its algorithm while the study was being conducted, skewing the results and undermining the findings. This revelation has now cast doubt on Meta’s claims, highlighting broader issues of transparency, bias, and the problematic influence of Big Tech on academic research.
The Controversy: Algorithmic Tweaks and Their Impact
The Science study that triggered this debate claimed that Facebook’s news feed algorithm reduced users’ exposure to untrustworthy content during the 2020 U.S. election. The research team conducted an experiment in which participants, all Facebook users, were randomly assigned to two groups. One group (the control group) used the regular, algorithm-driven news feed, while the other (the treatment group) saw content presented in reverse chronological order. This setup aimed to assess how different forms of content curation affected users’ exposure to misinformation.
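As a rough, purely illustrative sketch (not the study’s actual analysis; the data and column names below are invented), the headline comparison reduces to a difference in mean exposure between the two feed conditions:

```python
import pandas as pd

# Hypothetical exposure log, one row per participant: the share of posts in
# each user's feed that came from sources rated untrustworthy. All values
# and column names are invented for illustration.
exposure = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "feed": ["algorithmic", "algorithmic", "algorithmic",
             "chronological", "chronological", "chronological"],
    "untrustworthy_share": [0.03, 0.05, 0.04, 0.08, 0.07, 0.09],
})

# Mean exposure in each arm, and the difference attributed to the
# algorithmic feed relative to the reverse-chronological feed.
means = exposure.groupby("feed")["untrustworthy_share"].mean()
effect = means["algorithmic"] - means["chronological"]
print(means)
print(f"Estimated difference in exposure: {effect:+.3f}")
```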
The results, at first glance, seemed robust. The study concluded that users exposed to the algorithmic feed saw less untrustworthy news than those in the reverse chronological group. The authors praised the experiment’s design and emphasized that Meta had no prepublication approval rights, maintaining that the research was independent. However, they also acknowledged that Meta’s internal team had provided substantial support in executing the project.
The original study, however, failed to fully account for a crucial factor: during the brief period in which the study was conducted, Meta made temporary changes to its news feed algorithm that boosted more reliable news sources. Bagchi’s team pointed out that these changes effectively altered the “control” condition, making it impossible to draw meaningful conclusions about the typical functioning of Facebook’s algorithm. In other words, the reported reduction in misinformation exposure could be attributed to these short-lived adjustments rather than to Facebook’s algorithms as they normally operate.
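A toy calculation, using invented numbers, makes the objection concrete: if the algorithmic (“control”) feed was temporarily modified to boost reliable sources during the study window, the measured gap between the two feeds no longer describes the algorithm as it normally runs.

```python
# Illustrative only: every rate below is invented to show the logic of the
# objection, not to estimate anything from the actual study.

exposure_chronological = 0.07  # untrustworthy share under the reverse-chronological feed
exposure_typical_algo = 0.06   # algorithmic feed as it normally operates (assumed)
exposure_boosted_algo = 0.03   # algorithmic feed with temporary reliability boosts (assumed)

# What the experiment measured while the temporary changes were in effect:
measured_effect = exposure_boosted_algo - exposure_chronological

# What the comparison was meant to capture: the algorithm in typical operation.
intended_effect = exposure_typical_algo - exposure_chronological

print(f"Measured during the study window: {measured_effect:+.2f}")
print(f"Under the typical algorithm:      {intended_effect:+.2f}")
# The gap between these two numbers is the bias introduced by changing the
# "control" condition mid-study.
```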
In a subsequent response published in Science, the authors of the original study acknowledged that their results “might have been different” if Facebook had not changed its algorithm during the study. However, they stood by their findings, insisting that the experimental design remained valid. Despite this defense, the controversy has sparked widespread debate about the reliability of research funded and facilitated by tech giants like Meta, as well as the broader issue of corporate influence over scientific inquiry.
Big Tech’s Growing Influence in Academia: A Cause for Concern?
The debate over Facebook’s role in spreading misinformation is not an isolated incident. It highlights a growing trend in which Big Tech companies, such as Meta, Google, and Amazon, fund academic research on their platforms and products. This raises serious ethical concerns about potential conflicts of interest and the credibility of the resulting research.
Meta, in particular, has made significant investments in universities and academic institutions. Meta and its CEO, Mark Zuckerberg, have collectively donated hundreds of millions of dollars to over 100 colleges and universities across the United States. While such funding supports valuable research, it also opens the door to undue influence, where the results of studies could be biased in favor of the companies providing the financial backing.
The strategies employed by Meta mirror those used by the tobacco industry in the mid-20th century. As evidence mounted linking smoking to cancer and other health issues, tobacco companies launched a coordinated campaign to create doubt about the dangers of smoking. Rather than directly falsifying research, they funded studies that produced inconclusive or contradictory results, fostering uncertainty in the public mind. This allowed tobacco companies to maintain a public image of responsibility while delaying regulatory action.
Similarly, Meta’s funding of research that downplays the role of its algorithms in spreading misinformation could serve to distract from the platform’s real-world impact. By selectively funding studies that produce favorable results, Big Tech companies can control the narrative around their products, deflecting criticism and delaying calls for regulation.
Unprecedented Power: How Social Media Platforms Control the Narrative
One of the most troubling aspects of Meta’s involvement in the 2023 Science study is the unprecedented power that social media companies hold over the research conducted about them. Unlike traditional industries, such as tobacco or pharmaceuticals, social media platforms can directly influence public opinion by controlling both the content that appears on their platforms and the research that is conducted into their operations.
In the case of Facebook, the company not only funded the Science study but also provided the platform on which the research was conducted. This gives Meta an extraordinary level of control over both the experiment and the interpretation of its results. Additionally, Meta can promote the study’s findings through its platform, shaping public perception in real time.
This level of influence is unparalleled. Even the tobacco industry, with its deep pockets and well-oiled public relations campaigns, could not control public opinion as directly as social media platforms can today. Meta’s ability to control the narrative surrounding its algorithms and their impact on misinformation highlights the urgent need for greater transparency and independent oversight of tech companies.
The Need for Independent Oversight and Data Access
The controversy surrounding the Science study underscores the dangers of allowing tech companies to fund and control research into their own platforms. When these companies control access to the data and systems necessary for studying the effects of social media, they effectively control the science behind it. This not only undermines the credibility of the research but also allows platforms like Facebook to continue operating without sufficient accountability.
To address these issues, many experts are calling for greater independent oversight of social media platforms. This could involve mandating that companies provide large-scale data access to independent researchers, allowing for more transparent and objective analysis of their algorithms and their impacts. Companies should also disclose, in real time, any changes they make to their algorithms while research is underway, ensuring that studies accurately reflect the platforms as they typically operate.
Without such measures, platforms like Facebook will continue to prioritize profits over public welfare, allowing misinformation and political polarization to flourish unchecked. The debate over the Science study is just one example of how tech companies can divert attention away from their harmful practices by funding research that casts them in a more favorable light.
Conclusion: A Call for Greater Accountability
The 2023 Science study and the subsequent controversy over its findings represent a critical moment in the ongoing debate about the role of social media in spreading misinformation. While Meta has sought to use the study to defend its algorithms, the revelations about algorithmic changes during the experiment have cast doubt on the validity of the results. More broadly, this controversy highlights the need for greater scrutiny of Big Tech’s influence over academic research and the public narrative.
As social media platforms continue to shape public discourse in unprecedented ways, the need for independent oversight has never been more urgent. Without greater transparency and accountability, the true impact of platforms like Facebook on democracy, public health, and society as a whole will remain obscured. The public deserves to know the full story—and that will only be possible with independent, transparent research that is free from the influence of the companies it seeks to study.
Author
Lia Timis is one of our staff writers here at TechTime Media. She writes about the many ways technology is changing our lives, from environmental issues and financial technology to emerging uses of blockchain technology.