2022 Submission Report
This report explains the outcome of the 2022 code review, including the changes made to the code and how stakeholder feedback was addressed.
On February 22, 2021, DIGI launched a new code of practice that commits a diverse set of technology companies to reducing the risk of online misinformation causing harm to Australians.
The Australian Code of Practice on Disinformation and Misinformation has been adopted by Adobe, Apple, Facebook, Google, Legitimate, Microsoft, Redbubble, TikTok and Twitch.
All signatories commit to safeguards to protect Australians against harm from online disinformation and misinformation, and to adopting a range of scalable measures that reduce its spread and visibility.
Participating companies also commit to releasing an annual transparency report about their efforts under the code, which will help improve understanding of online misinformation and disinformation in Australia over time. Transparency reports were published in May 2021 and May 2022, and are available to read here.
DIGI developed this code with assistance from the University of Technology Sydney’s Centre for Media Transition and First Draft, a global organisation that specialises in helping societies overcome false and misleading information. The final code has been informed by a robust public consultation process.
The Code was developed in response to the Australian Government policy announced in December 2019, in which the digital industry was asked to develop a voluntary code of practice on disinformation, drawing on learnings from a similar code in the European Union. In October 2021, DIGI strengthened the code with independent oversight and a facility for the public to report breaches by signatories of their code commitments. In June 2022, DIGI launched a review of the code to inform its continued improvement.
This is the latest version of The Australian Code of Practice on Disinformation and Misinformation, updated on December 22, 2022 to reflect the outcome of DIGI’s review of the code. Previous versions of the code are linked below:
The Australian Code of Practice on Misinformation and Disinformation has been signed by eight major technology companies that are the founding signatories. The code is open to any company in the digital industry as a blueprint for best practice in combating mis- and disinformation online. If you are interested in adopting the code, please contact us at hello@digi.org.au.
The table below shows signatories’ current commitments to the code’s objectives. Objectives #1 and #7 are mandatory, and other commitments are opt-in, recognising the diversity of signatories’ products and services. For example, signatories may choose not to adopt #5 in relation to political advertising if their service does not offer political advertising.
For a more detailed breakdown of the outcomes under each objective that signatories have adopted, view the opt-in disclosures provided in 2021. The most recent information about signatories’ activities relating to each of their commitments, and any changes to those commitments, can be found in the most recent transparency reports.
The purpose of the code is to drive improvements in the measures that signatories take to address misinformation and disinformation, and the complaints handling approach is consistent with that aim. Through the complaints portal, DIGI accepts complaints from members of the Australian public who believe a signatory has breached their code commitments.
DIGI is not able to accept complaints about individual items of content on signatories’ products or services; these should be directed to the signatory via their reporting mechanisms or otherwise. We have included general information about how to report content on signatories’ services below; however, in many cases content can be reported directly from the page where you are viewing it.
Users may report violations of Adobe’s Terms of Use or Community Guidelines, including mis- and disinformation on Adobe products and services, by following the product-specific directions on this page. For any products and services not listed on this page, users may reach out to abuse@adobe.com to file a report with Adobe’s Trust & Safety team.
To report a concern:
1. While in a story, tap the More Actions button. On your Mac, click the Share button.
2. Tap or click Report a Concern.
3. Choose the reason you don’t want to see the story and provide more details.
4. Tap or click Send.
Ads
Step 1: Confirm that it’s a Google ad
The first step is to confirm that the ad you want to report is in fact a Google ad. Here are some of the different types of Google ads you might see.
Ads on Google Search:
These are ads you see on Google Search results pages and other Google services such as Google Shopping.
Ads on non-Google websites and apps:
You may see Google ads on non-Google websites and apps. You can identify them as Google ads if you see an AdChoices icon accompanied by an [X] icon that allows you to block the ad.
Ads on YouTube:
These are ads you see at the bottom of YouTube videos, or on the right side of YouTube videos.
Step 2: Report the ad
Complete the Report an ad form.
You’ll receive an email confirmation after you’ve submitted the form. Your report will be reviewed, and if appropriate, action will be taken on the ad.
This YouTube video also demonstrates how to complain about a Google-served ad.
Search
To complain about a result that appears in Google’s search index, please click on the three vertical dots that appear alongside the URL text and select “Send feedback”.
YouTube
There are a variety of ways to report content on YouTube depending on what device you are using. For more information and instructions, please visit https://support.google.com/youtube/answer/2802027?hl=en&co=GENIE.Platform%3DAndroid.
Users may report content in-app by following the instructions in the Facebook Help Centre or the Instagram Help Centre.
Meta’s Australian third-party fact-checking partners are also able to receive referrals from the public by contacting them directly:
Users with concerns related to disinformation or misinformation can report through the following mechanisms:
To report an artwork or design, scroll to the bottom of the page, where you will see a link titled “Report Content”. After clicking this link you will be able to select the reason for reporting the work, and leave a comment if necessary.
To report potentially violative content, including videos that may contain harmful misinformation, TikTok users can report it in-app.
Users can also use this online form to report content on TikTok.
If you come across a broadcaster or user on Twitch who you feel has violated Twitch’s Terms of Service (ToS) or Community Guidelines (CG), you have the ability to send a report to our Moderation team for review.
To report a Channel
Here you will find reports, produced or commissioned by DIGI, that have informed the code's initial development and evolution.
This report explains the outcome of the 2022 code review, including the changes made to the code and how stakeholder feedback was addressed.
This report provides research about Australians’ perceptions of misinformation. It contains information about how the code has evolved since it was initially launched.
This discussion paper provides background and specific questions and proposals to assist public consultation on the code review. It takes into account the ACMA’s report to the previous Government that was released in March 2022.
DIGI conducted public consultation on a draft code in October 2020 and closely reviewed all public submissions to inform changes to the first version published in February 2021.
Developed by UTS CMT for DIGI, this paper provides background research that DIGI released in October 2020 as part of its public consultation on the draft code.
We think misinformation is best understood as false or misleading information disseminated online which can, but may not be intended to, cause harm. For example, individuals can share harmful false information on social media that they genuinely believe to be true. Disinformation is false or misleading information that can cause harm and is disseminated online through spam or other kinds of manipulative, inauthentic bulk behaviours. For example, disinformation can be spread by malicious actors with the aim of deliberately damaging democratic political processes, such as elections, undermining public health initiatives, or harming marginalised or vulnerable groups.
Every company that signs this code is agreeing to safeguards to protect Australians from harmful dis- and misinformation online. That includes publishing and implementing policies on their approach, providing a way for their users to report content that may violate those policies, and implementing a range of scalable measures that reduce its spread and visibility online. The specific measures will vary depending on the type of digital service the signatory provides, but could include content labelling and removal, restricting inauthentic accounts and behaviours, partnerships with fact-checking organisations, and technology to help people check the authenticity of digital content.
The Code was developed in response to the Australian Government policy announced in December 2019, in which the digital industry was asked to develop a voluntary code of practice on disinformation. Voluntary codes of practice are broadly used in a range of industries, including the media and advertising. A self-regulatory approach means the code can evolve to address advances in threats and technology faster than legislation, which is important because perpetrators of disinformation are constantly updating their tactics to evade the responses of technology companies. This code requires constant, proactive efforts by signatories to meet their commitments.