Media statement

Annual transparency reports reveal tech sector progress in the misinformation fight

Monday 29 May 2023 – Today eight major technology companies have released new data and insights into their management of mis- and disinformation in Australia, by filing their annual transparency reports under DIGI’s Australian Code of Practice on Disinformation and Misinformation. In line with the annual reporting period under the code, the reports cover data from the 2022 calendar year. 

All reports have been reviewed by an independent expert, Hal Crawford, who also developed the best practice guidelines that guide signatories’ transparency reporting. Crawford assesses each transparency report against those guidelines, asks signatories for improvements, and attests to claims prior to publication. 

On the 2022 transparency reports, Hal Crawford said: 

“The transparency reports filed this year for the most part continue to incrementally improve on the provision and presentation of information on the efforts of the Signatories to counter mis/disinformation on their platforms. All reports meet the formal requirements of the Code, and many reports have improved in complying with the Transparency Reporting Guidelines.

“It is clear that in the case of all big platforms, the reports represent summations of ongoing activity that is massively complex. Collating the information is a substantial task, and presenting it in a way that balances accuracy and readability is not easy. I am happy that most of the signatories have struck this balance. They have also heeded the request to include case studies in order to give life to general statements and policies.”

In addition to releasing the signatory transparency reports, DIGI has released its annual report on code governance. On the release of the 2022 transparency reports, DIGI Managing Director, Sunita Bose, said: 

“The third set of transparency reports generally contain more quantitative insights than ever before, demonstrating signatories’ commitments to providing interested stakeholders with insights into what their efforts to combat mis- and disinformation involve.”

“Transparency reporting helps create a culture of accountability, establishes benchmarks for future progress, and creates public resources that can provide researchers, civil society and Governments in particular with insights into the scale and management of mis- and disinformation.

“This was the first Australian federal election since the code was introduced, and we believe the code positively contributed to the Australian Electoral Commission’s determination that this election saw much lower levels of electoral mis- and disinformation than in other like-minded democratic elections across the globe.” 

The extensive electoral integrity work that signatories undertook to protect the 2022 Australian Federal Election is detailed in the relevant reports, with multiple signatories elevating official electoral information by directing Australians to the AEC, in addition to removing content. Signatories are now preparing similar work to prevent mis- and disinformation in relation to the referendum on an Aboriginal and Torres Strait Islander Voice to Parliament, which is expected to be reported on in the 2023 calendar year reports. 

The reports illustrate the volume of misinformation takedowns in Australia, and changes over time. For example:

  • TikTok has improved its ‘proactive removal’ rates for misinformation content – its removal of violating content before any users had viewed it increased from 37.6% in Q1 to 69.8% in Q4 of 2022. In Q4, 89.1% of harmful misinformation was removed before a user reported it.
  • Google disrupted a Chinese influence campaign known as ‘dragonbridge’ that perpetuates disinformation on news topics ranging from China’s handling of COVID-19 to the war in Ukraine. Despite this content receiving relatively low engagement from users, in 2022 Google disrupted over 50,000 instances of ‘dragonbridge’ activity across its services, and terminated over 100,000 accounts.

The reports detail the nature and effectiveness of interventions to counter misinformation. For example:

  • Meta has reported on its work to display warnings on content that is found to be false by independent third-party fact checkers; over 2022, it displayed warnings on over 9 million distinct pieces of content in Australia. 
  • Redbubble uses third-party machine learning software to detect users who deploy bots to create networks of multiple accounts and attempt to upload large volumes of images that may cause public harm.
  • Twitter details the shifts in its approach after its transition as a company in late October 2022, including the role of its ‘community notes’ feature, which enables Australian users to add context to Tweets and surface credible information. If enough contributors with different points of view rate a note as helpful, it is displayed on the Tweet.

The reports illustrate efforts to elevate reputable content and provide users with information to counter misinformation. For example:

  • Apple News informed and educated readers by sending push notifications for stories sharing important updates about the emergence of COVID-19 Omicron subvariants and other variants. 
  • Adobe released a suite of open-source developer tools based on the C2PA specification, enabling more developers to integrate content provenance across web, desktop, and mobile projects so that consumers can see the origins of the content they are viewing.
  • Microsoft’s search engine Bing applied defensive search interventions for over 45,000 distinct queries, impacting over one million impressions in Australia from February 2022 through 31 December 2022.

In December 2022, DIGI strengthened the Australian Code of Practice on Disinformation and Misinformation, making changes in response to stakeholder feedback received through a planned review of the code. Changes included an improved definition of ‘harm’ in relation to mis- and disinformation and additional commitments reflecting updates to the strengthened EU Code of Practice. DIGI also introduced more proportionate annual transparency reporting requirements for smaller platforms to encourage them to adopt the code, which can be flexibly applied to different types of digital service providers.   

These updates are the latest set in a series of improvements driven by DIGI and code signatories since the code was introduced in February 2021. In October 2021, DIGI introduced independent oversight and a complaints facility to increase accountability. In 2022, independent assessment and best practice reporting guidelines were introduced to drive improvements in the transparency reporting process. 

Looking ahead, DIGI has welcomed the Government’s announcement that the ACMA will receive greater oversight powers over the code, which will reinforce DIGI’s efforts and formalise its long-term working relationship with the regulator in combating misinformation online. 

Background

The Australian Code of Practice on Disinformation and Misinformation (ACPDM) was developed in response to Australian Government policy, announced in December 2019 following the ACCC Digital Platforms Inquiry, which asked the digital industry to develop a voluntary code of practice on disinformation. 

DIGI developed the ACPDM with assistance from the University of Technology Sydney’s Centre for Media Transition, and First Draft, a global organisation that specialises in helping societies overcome false and misleading information. 

The ACPDM was launched in February 2021 and its signatories are Apple, Adobe, Google, Meta, Microsoft, Redbubble, TikTok and Twitter. The ACPDM was further strengthened in December 2022 in response to stakeholder feedback received through a planned review of the code that included a six-week public consultation.

Mandatory code commitments include publishing and implementing policies on misinformation and disinformation, providing users with a way to report content that breaches those policies, and implementing a range of scalable measures that reduce the spread and visibility of such content (Mandatory commitment #1). Every signatory has agreed to publish annual transparency reports about those efforts, to improve understanding of both the management and scale of mis- and disinformation in Australia (Mandatory commitment #7). 

Additionally, there are a series of opt-in commitments that platforms adopt if relevant to their business model: (Commitment #2) addressing disinformation in paid content; (#3) addressing fake bots and accounts; (#4) transparency about the source of content in news and factual information (e.g. promotion of media literacy, partnerships with fact-checkers); (#5) transparency in political advertising; and (#6) partnering with universities and researchers to improve understanding of mis- and disinformation.

DIGI is a non-profit industry association that advocates for the interests of the digital industry in Australia. DIGI’s founding members are Apple, eBay, Google, Linktree, Meta, Spotify, Snap, TikTok, Twitter and Yahoo, and its associate members are Change.org, Gofundme, ProductReview.com.au and Redbubble. DIGI’s vision is a thriving Australian digitally-enabled economy that fosters innovation, a growing selection of digital products and services, and where online safety and privacy are protected. DIGI is a key industry, Government and community collaborator in efforts to address online harms, data and consumer protection online and to grow the digital economy. We work in a range of ways including advocacy for effective and implementable approaches to technology policy, code development, and partnerships.

All transparency reports are available at digi.org.au/disinformation-code/transparency/. DIGI’s annual report is available at digi.org.au/disinformation-code/governance/. For media enquiries, please email press@digi.org.au.