CREATe suggests improvements to upcoming DSA transparency database

Posted on 21 July 2023 by CREATe Team


By Stefan Luca, Konstantinos Stylianou, Aline Iramina, and Martin Kretschmer

CREATe has contributed to a public consultation by the European Commission on an upcoming transparency database under the EU Digital Services Act (DSA). This is the third in a series of public consultations held by the Commission this year on the implementation of the DSA; the first two concerned data access for researchers and independent audits under the new regulation.

The DSA is a landmark piece of legislation regulating digital platforms, aiming in particular to provide users with transparency, predictability and procedural guarantees when they are subjected to platform moderation. A key requirement of the DSA is for platforms to provide affected users with a statement of reasons for moderation decisions. A public database of all these (anonymised) decisions will be maintained by the Commission, to ensure accountability and, to some extent, enable research into moderation practices and challenges. It is the initial design of this public database that the Commission’s consultation addressed. Very Large Online Platforms and Very Large Online Search Engines will have to start submitting moderation decisions to the Commission’s database by August, with smaller platforms following suit later. The outcome of this consultation has the potential to shape moderation accountability and research in the EU for the foreseeable future, and to serve as a blueprint for other jurisdictions.

CREATe submitted comments and suggestions to the Commission on how to improve the DSA transparency database. These comments aim to help the Commission strike a balance between making the database as useful as possible and avoiding unnecessary burdens for the regulated platforms. In detail:

(1) The database requires platforms to designate whether the reason for moderation is a violation of the platform’s Terms of Service (ToS) or the illegality of the content. Even if the moderation decision was taken on the grounds of the ToS, platforms will still be required to determine whether the content was also illegal. We are concerned that the requirement to determine the legality of content for every moderation decision is both burdensome and ill-defined. Online platforms with billions of users make millions of moderation decisions per day, and their tools and staff are trained to assess content against their own ToS, not against (local or international) laws; the latter is a much more specialised, nuanced and difficult assessment, requiring the input of experts and often a full adjudicatory procedure. Although the database requires only a Yes or No response on legality, when the decision is taken on the grounds of a ToS violation the platform still has to maintain a sophisticated mechanism capable of reaching that conclusion. With no further guidance or limitations, this requirement not only creates a potentially unmanageable burden for platforms, but also introduces a significant measure of arbitrariness and inconsistency into the database, which undermines its standardised and transparent character.

(2) We suggested that the Commission consider expanding the information requested from platforms, and in particular: 

  • Add synthetic media to the list of content types concerned by the moderation decision (in addition to text, image & video). This will help track the propensity of AI-generated content to violate ToS, which is of interest to policy-makers and researchers alike. We appreciate that this obligation presupposes the existence of a mechanism for platforms to identify AI-generated content. The EU is working to include a relevant labelling provision in the Code of Practice on Online Disinformation, which could serve as a basis for how this might be done.
  • Request the date of the post subject to moderation, the date of the platform’s decision, and, if applicable, the date of any user or trusted flagger notice, so as to track response times.
  • The Commission already asks platforms whether the moderation process was triggered by a user notice (under Article 16 DSA). It would be helpful to also know the total number of notices received at the time of the moderation decision. This is desirable not only for consistency with the existing question, but also as a window into platforms’ systems for handling mass reporting of the same piece of content, which may affect freedom of expression.
  • Building on Article 17 DSA, require that platforms also report any decisions of their own adjudicatory bodies (e.g., Facebook’s Oversight Board) relied on when reaching moderation decisions. This not only adds colour to the decision, namely the rationale behind it in light of the platform’s own adjudication “caselaw”, but also legitimises internal review mechanisms, which should be welcomed.
  • Building on Article 17(c) DSA, require that platforms also provide an explanation of the steps automated mechanisms take in reaching moderation decisions. This can help improve the accountability of the operator (the platform, or any third party to which moderation is outsourced) when employing automated mechanisms.
  • Ask for more granularity about account-level moderation decisions, to capture two common moderation practices: strikes against accounts and account tiering. (1) ‘Strikes’ are a record of an account’s individual ToS violations, and the accumulation of a certain number of strikes may translate into an account suspension or termination. Communicating to users whether they have incurred a strike is in keeping with the DSA’s aim to bring transparency and predictability to content moderation. In that direction, the Commission could consider requiring disclosure of any strikes incurred by the account. (2) Tiering refers to placing accounts in categories that are afforded different treatment, which may include differences in visibility or discoverability at the account level. One moderation decision may remove a specific piece of content while at the same time restricting the visibility of an entire account’s existing and future content; in the current database design, only the former element would be captured. Examples of account-level visibility decisions include Twitter’s “do not amplify” list of accounts or labelling an account as “state media”. A requirement should be considered to capture account-level decisions short of suspension or termination. A rough sketch of how these and the other suggested fields could come together in a single database record is given below.
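
To make these suggestions more concrete, the following is a minimal, purely illustrative sketch of what an expanded (anonymised) statement-of-reasons record could look like if the additional fields were adopted. All field names, types and example values are hypothetical and chosen for illustration only; they do not reflect the Commission’s actual database schema or submission interface.

```python
# Hypothetical sketch of one anonymised statement-of-reasons record, extended
# with the additional fields suggested above. Names do NOT reflect the
# Commission's actual database schema.
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class ContentType(Enum):
    TEXT = "text"
    IMAGE = "image"
    VIDEO = "video"
    SYNTHETIC_MEDIA = "synthetic_media"  # suggested addition for AI-generated content


@dataclass
class StatementOfReasons:
    # Core fields along the lines of the current design
    decision_ground: str                        # "terms_of_service" or "illegal_content"
    content_type: ContentType
    automated_decision: bool

    # Suggested: explanation of automated steps (building on Article 17(c) DSA)
    automation_explanation: Optional[str] = None

    # Suggested: dates, so that response times can be tracked
    content_posted_on: Optional[date] = None
    notice_received_on: Optional[date] = None
    decision_taken_on: Optional[date] = None

    # Suggested: notice information (building on Article 16 DSA)
    triggered_by_notice: bool = False
    total_notices_at_decision: int = 0          # notices received for this item at decision time

    # Suggested: reference to internal adjudication "caselaw" (building on Article 17 DSA)
    internal_adjudication_reference: Optional[str] = None

    # Suggested: account-level information (strikes and tiering)
    strike_incurred: bool = False
    account_visibility_restricted: bool = False  # e.g. reduced discoverability of the whole account


# Hypothetical example: a ToS-based removal of AI-generated media flagged by users
example = StatementOfReasons(
    decision_ground="terms_of_service",
    content_type=ContentType.SYNTHETIC_MEDIA,
    automated_decision=True,
    automation_explanation="Classifier flagged the media; a human reviewer confirmed removal.",
    content_posted_on=date(2023, 7, 1),
    notice_received_on=date(2023, 7, 2),
    decision_taken_on=date(2023, 7, 3),
    triggered_by_notice=True,
    total_notices_at_decision=42,
    strike_incurred=True,
)
```

A flat record of this kind would keep submissions relatively simple for platforms while still capturing the timing, notice-volume, internal-review and account-level information discussed above.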