On February 22, 2021, DIGI launched a new code of practice that commits a diverse set of technology companies to reducing the risk of online misinformation causing harm to Australians.

The Australian Code of Practice on Disinformation and Misinformation has been adopted by Adobe, Apple, Facebook, Google, Microsoft, Redbubble, TikTok and Twitch.

All signatories commit to safeguards to protect Australians against harm from online disinformation and misinformation, and to adopting a range of scalable measures that reduce its spread and visibility.

Participating companies also commit to releasing an annual transparency report about their efforts under the code, which will help improve understanding of online misinformation and disinformation in Australia over time. Transparency reports were published in May 2021 and May 2022, and are available to read here.

DIGI developed this code with assistance from the University of Technology Sydney’s Centre for Media Transition and First Draft, a global organisation that specialises in helping societies overcome false and misleading information. The final code has been informed by a robust public consultation process.

The Code was developed in response to the Australian Government policy announced in December 2019, in which the digital industry was asked to develop a voluntary code of practice on disinformation, drawing on lessons from a similar code in the European Union. In October 2021, DIGI strengthened the code with independent oversight and a facility for the public to report breaches by signatories of their code commitments. In June 2022, DIGI launched a review of the code to inform its continued improvement.

This is the latest version of The Australian Code of Practice on Disinformation and Misinformation, updated on December 22, 2022 to reflect the outcome of DIGI’s review of the code. Previous versions of the code are linked below:

  • October 11, 2021: This version was updated from the original to reflect the improvements DIGI made to the code’s governance.
  • February 22, 2021: The original version of the code, as launched.


Disinformation Code PDF


The Australian Code of Practice on Disinformation and Misinformation has been signed by eight major technology companies, the founding signatories. The code is open to any company in the digital industry as a blueprint for best practice in combating mis- and disinformation online. If you are interested in adopting the code, please contact us at hello@digi.org.au.

Signatories' Commitments

The table below shows signatories' current commitments to the code's objectives. Objectives #1 and #7 are mandatory; the other commitments are opt-in, recognising the diversity of signatories' products and services. For example, a signatory may choose not to adopt #5, which relates to political advertising, if its service does not offer political advertising.

For a more detailed breakdown of the outcomes under each objective that signatories have adopted, view the opt-in disclosures provided in 2021. The most recent information about signatories’ activities relating to each of their commitments, and any changes to those commitments, can be found in the most recent transparency reports.


The purpose of the code is to drive improvements in the measures that signatories take to address misinformation and disinformation, and the complaints handling approach is consistent with that aim. Through the complaints portal, DIGI accepts complaints from members of the Australian public who believe a signatory has breached its code commitments.

DIGI cannot accept complaints about individual items of content on signatories' products or services; these should be directed to the signatory via its reporting mechanisms or otherwise. We have included general information about how to report content on signatories' services below; however, the ability to report content is often available on the page where you are viewing it.

Adobe

Users may report violations of Adobe's Terms of Use or Community Guidelines, including mis- and disinformation on Adobe products and services, by following the product-specific directions on this page. For any products and services not listed on this page, users may contact abuse@adobe.com to file a report with Adobe's Trust & Safety team.

Apple

To report a concern in Apple News:
1. While in a story, tap the More Actions button. On your Mac, click the Share button.
2. Tap or click Report a Concern.
3. Choose the reason you are reporting the story and provide more details.
4. Tap or click Send.

Google

Step 1: Confirm that it's a Google ad
The first step is to confirm that the ad you want to report is in fact a Google ad. Here are some of the different types of Google ads you might see.

Ads on Google Search:
These are ads you see on Google Search results pages and other Google services such as Google Shopping.

Ads on non-Google websites and apps:
You may see Google ads on non-Google websites and apps. You can identify them as Google ads if you see an AdChoices icon accompanied by an [X] icon that allows you to block the ad.

Ads on YouTube:
These are ads you see at the bottom of, or to the right of, YouTube videos.

Step 2: Report the ad
Complete the Report an ad form.
You’ll receive an email confirmation after you’ve submitted the form. Your report will be reviewed, and if appropriate, action will be taken on the ad.
This YouTube video also demonstrates how to complain about a Google-served ad.

To complain about a result that appears in Google’s search index, please click on the three vertical dots that appear alongside the URL text and select “Send feedback”.

There are a variety of ways to report content on YouTube depending on what device you are using. For more information and instructions, please visit https://support.google.com/youtube/answer/2802027?hl=en&co=GENIE.Platform%3DAndroid.

Meta

Users may report content in-app by following the instructions in the Facebook Help Centre or the Instagram Help Centre.

Members of the public can also refer content directly to Meta's Australian third-party fact-checking partners.

Microsoft

Users with concerns related to disinformation or misinformation can report them through the following mechanisms:

  • Microsoft Bing: Report a Concern
  • Microsoft Start: users can find a feedback form by selecting “feedback” in the settings menu on the Start landing page.
  • Microsoft Advertising: Low quality ad submission & escalation – Microsoft Advertising
  • LinkedIn: members can report concerns using the in-product reporting mechanism represented by the three dots in the upper right-hand corner of a post on LinkedIn.

Redbubble

To report an artwork or design, scroll to the bottom of the page, where you will see a link titled "Report Content". After clicking this link, you will be able to select the reason for reporting the work and leave a comment if necessary.

TikTok

To report potentially violative content, including videos that may contain harmful misinformation, TikTok users can:

  1. Go to the video they wish to report.
  2. Press and hold on the video.
  3. Select Report and follow the instructions provided.

Users can also use this online form to report content on TikTok.

Twitch

If you come across a broadcaster or user on Twitch who you feel has violated Twitch's Terms of Service (ToS) or Community Guidelines (CG), you can send a report to Twitch's Moderation team for review.

To report a Channel

  • Click the 3 Vertical Dots icon in the bottom right below the video player on the channel to report the live stream itself (using Report Live Stream) or other attributes of the user such as their username or their avatar (using Report Something Else).
  • You can also initiate a report for a user’s chat messages or Whispers under the Report Something Else menu, but we recommend reporting directly from the chat message or the Whisper so that our team knows exactly what message you are reporting.
  • Clicking on either Report Live Stream or Report Something Else will open the reporting flow.
  • Follow the report flow to select the most appropriate category for your report and write a detailed description of the violation in the Tell Us More field. If the correct category isn't listed on the first page of the form, select Search to find the appropriate reason category.

Code development

Here you will find reports produced or commissioned by DIGI that have informed the code's initial development and evolution.

2022 Submission Report

This report sets out the outcome of the 2022 code review, explaining the changes made to the code and how stakeholder feedback was addressed.

Download PDF

2022 Annual Report

This report provides research about Australians’ perceptions of misinformation. It contains information about how the code has evolved since it was initially launched.

Download PDF

2022 Review Discussion Paper

This discussion paper provides background and specific questions and proposals to assist public consultation on the code review. It takes into account the ACMA’s report to the previous Government that was released in March 2022.

Download PDF

2021 Submission report

DIGI conducted public consultation on a draft code in October 2020 and closely reviewed all public submissions to inform changes to the first version published in February 2021. 

Download PDF

2020 Discussion paper

Developed by UTS CMT for DIGI, this paper provides background research that DIGI released in October 2020 as part of its public consultation on the draft code. 

Download PDF


What is the difference between misinformation and disinformation?

We think misinformation is best understood as false or misleading information disseminated online that can, but may not be intended to, cause harm. For example, individuals can share harmful false information on social media that they genuinely believe to be true. Disinformation is false or misleading information that can cause harm and is disseminated online through spam or other kinds of manipulative, aggressive bulk behaviours. For example, disinformation can be spread by malicious actors with the aim of deliberately damaging democratic political processes, such as elections, undermining public health initiatives, or harming marginalised or vulnerable groups.

What kinds of commitments are signatories making under the code?

Every company that signs this code agrees to safeguards to protect Australians from harmful dis- and misinformation online. That includes publishing and implementing policies on their approach, providing a way for users to report content that may violate those policies, and implementing a range of scalable measures that reduce its spread and visibility online. The specific measures will vary depending on the type of digital service the signatory provides, but could include content labelling and removal, restricting inauthentic accounts and behaviours, partnerships with fact-checking organisations, and technology to help people check the authenticity of digital content.

Why is this a voluntary code, not mandatory?

The Code was developed in response to the Australian Government policy announced in December 2019, where the digital industry was asked to develop a voluntary code of practice on disinformation. Voluntary codes of practice are broadly used in a range of industries, including the media and advertising. A self-regulatory approach means the code can evolve to address advances in threats and technology faster than legislation, which is important because perpetrators of disinformation are constantly updating their tactics to evade the responses of technology companies. This code requires constant, proactive efforts by signatories to meet their commitments.