
Transparency is essential for effective social media regulation

By Mark MacCarthy (originally published at brookings.edu)

In response to the information disorder on social media platforms, governments around the world are imposing transparency requirements in the hopes that they will improve content moderation practices. In the U.S., for instance, a new California law would impose a range of disclosure and openness requirements. In Europe, the recently finalized Digital Services Act is filled almost entirely with transparency requirements.

But a surprising number of academics, observers, and policymakers say “meh.” One participant in a recent Georgetown University tech policy symposium said transparency is a “timid” solution. Another participant shared that disclosure rules imply that whatever the companies do is fine as long as they are transparent about it. Haven’t policymakers learned anything from the failures of privacy notices and consent rules? Transparency initiatives, the critics say, just distract from the hard work of developing and implementing more effective methods of control.

These critiques of transparency make two points. The first is that transparency requirements written into law wouldn’t ensure much useful disclosure. The second is that substantially increased disclosures wouldn’t do much to mitigate the information disorder on social media.

In a series of reports and white papers (here, here, and here), I’ve argued that transparency is a necessary first step in creating a regulatory structure for social media companies. The answer to the first criticism, that a legal requirement for disclosure will not produce useful disclosures, is to insist on the importance of a regulator. Disclosure requirements alone are not self-enforcing. A dedicated regulatory agency must define and implement them through rulemaking and must have full enforcement powers, including the ability to fine and issue injunctions. That will help to ensure that disclosure actually happens as mandated by law.

The response to the second criticism, that transparency by itself won’t do much to stem the tide of disinformation and hate speech, is that without transparency, no other regulatory measures will be effective. Whatever else governments might need to do to control social media misbehavior in content moderation, they have to mandate openness, which requires implementing specific rules governing these disclosures.

Regulatory oversight of transparency

Transparency is not a single policy tool; it has several dimensions. Roughly, they are disclosures to users, public reporting, and access to data for researchers.

Disclosure to users includes, among other things, the content moderation standards a social media company has in place, its enforcement processes, explanations of takedowns and other content moderation actions, and descriptions of complaint procedures. Each of these disclosures gives users the opportunity to complain about problematic content and to receive due process when social media companies take action against them.

While general requirements for disclosures to users can be written in statute, a regulator would have to determine the specifics, which might differ according to the characteristics of a company’s line of business. The regulator would have to specify, for instance, at what level of detail content rules and enforcement processes need to be disclosed to users, when and how often, and the adequacy and timing of notification and complaint procedures. This should be done through public rulemaking with input from civil society, industry, and academia. Without these regulatory specifications and enforcement, disclosures to users might well be useless.

The second dimension of transparency is transparency reporting. This includes reports on and internal audits of platform content moderation activity, the risks created by social media company activities, the role of algorithms in distributing harmful speech, and assessments of what the companies do about hate speech, disinformation, material harmful to teens, and other problematic content. Transparency reporting could also include a company’s own assessment of whether its activities are politically discriminatory, a favorite topic of political conservatives. For instance, a 2021 internal Twitter assessment disconfirmed conservative complaints of bias, finding instead greater algorithmic amplification of tweets by conservative political leaders and media outlets.
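To make that concrete, here is a toy sketch of the kind of amplification metric such an assessment might report: the ratio of a group’s impressions under algorithmic ranking to its impressions under a reverse-chronological timeline. This is written in the spirit of Twitter’s 2021 study rather than as its actual method, and every figure is invented for illustration.

```python
# Toy amplification metric: compare how often a group's posts are seen
# in algorithmically ranked timelines versus reverse-chronological ones.
# All figures below are invented for illustration only.
impressions = {
    # group: (views in ranked timelines, views in chronological timelines)
    "conservative_media": (4_000_000, 2_500_000),
    "liberal_media":      (3_000_000, 2_400_000),
}

for group, (ranked, chrono) in impressions.items():
    amplification = ranked / chrono  # ratio > 1 means the algorithm boosts reach
    print(f"{group}: amplification ratio {amplification:.2f}")
```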

Regulators are absolutely key in implementing transparency reporting duties. They have to specify what risks must be assessed and what statistics must be provided to assess those risks. The content of these reports cannot be left up to the companies, and the details cannot be specified in legislation. How to measure the prevalence of harmful material on social media is not immediately obvious. Is it views of hate speech, for instance, as a percentage of all views of content? Or is it hateful posts as a percentage of all posts?
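The difference between those two candidate metrics is not cosmetic. A minimal sketch, using a hypothetical moderation log with invented numbers, shows how far they can diverge on the same data:

```python
# Hypothetical moderation log: each post has a view count and a flag for
# whether it was classified as hate speech. All numbers are invented.
posts = [
    {"views": 1_000_000, "hateful": False},  # a viral benign post
    {"views": 50,        "hateful": True},   # low-reach hateful post
    {"views": 200,       "hateful": True},
    {"views": 10_000,    "hateful": False},
]

total_views   = sum(p["views"] for p in posts)
hateful_views = sum(p["views"] for p in posts if p["hateful"])
total_posts   = len(posts)
hateful_posts = sum(1 for p in posts if p["hateful"])

# Metric 1: share of all content views that land on hate speech.
prevalence_by_views = hateful_views / total_views

# Metric 2: share of all posts that are hateful.
prevalence_by_posts = hateful_posts / total_posts

print(f"By views: {prevalence_by_views:.4%}")  # ~0.02% -- looks tiny
print(f"By posts: {prevalence_by_posts:.1%}")  # 50% -- looks alarming
```

Both numbers are true of the same platform; which one a report must contain is exactly the kind of choice a regulator, not the company, should make.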

The metrics that must be contained in these reports have to be worked out by the regulator, in conjunction with the industry and with the researchers who will use this public information to assess platform success in content moderation. There might be a place here, although not a determinative one, for a social media self-regulatory group, similar to the Financial Industry Regulatory Authority in the broker-dealer industry, to define common reporting standards. Almost certainly the important and relevant statistics will change over time, so there must be regulatory procedures to review and update them.

The third element of transparency, access to data for researchers, is a very powerful tool, perhaps the most important one of all. It requires social media companies to provide qualified researchers with access to the internal company data that researchers need to conduct independent evaluations. These outside evaluations would not be under company control and would assess company performance on content moderation and the prevalence of harmful material. Data transparency would also allow vetted researchers to validate internal company studies, such as Twitter’s own assessment of political bias. The digital regulator, in conjunction with research agencies such as the National Science Foundation or the National Institutes of Health, would have to vet the researchers and the research projects. Researchers and civil society groups working with an industry self-regulatory organization can help define access parameters, but ultimately, they will have to be approved by a government agency.

The regulator must at a minimum ensure that companies do not frustrate the goals of data access by providing untimely or inaccurate data. But even assuming company goodwill in complying with the rules, there are many controversies in this area that only a regulator can resolve.

Regulators will need to decide which data researchers will be able to access; resolving this might require the regulator to make a balancing judgment weighing compliance burden against data utility. Another issue is whether the data must be provided to researchers in a form that technologically protects privacy, and if so, which form: differential privacy, k-anonymity, or some other technique? Alternatively, some research might demand access to identifiable data, in which case privacy can be assured only by contract, such that a researcher involved in a serious privacy violation would be banned from further access to social media data and might face financial penalties as well.
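As a rough illustration of what a technological privacy protection looks like, here is a minimal sketch of the standard Laplace mechanism for releasing a count under differential privacy. The epsilon parameter makes the burden-versus-utility trade-off explicit: smaller epsilon means stronger privacy but noisier, less useful answers. The query and all numbers are hypothetical.

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query changes by at most 1 when one person's data is
    added or removed (sensitivity 1), so adding Laplace noise with
    scale 1/epsilon suffices -- the standard Laplace mechanism.
    """
    scale = 1.0 / epsilon
    # Laplace(0, scale) sampled as the scaled difference of two
    # independent Exponential(1) draws.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# Hypothetical query: how many users saw a given hateful post?
print(dp_count(12_345, epsilon=0.1))   # strong privacy, noisy answer
print(dp_count(12_345, epsilon=10.0))  # weak privacy, near the truth
```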

