Imagine scrolling through your social media feed and stumbling upon a post from a government agency warning you not to trust AI, only to learn later that the post itself was created by AI. That is exactly what happened when Fisheries Queensland, a department with more than 143,000 followers, shared AI-generated images on its Instagram and Facebook accounts without disclosing their origin. The department defended the images as being for 'illustrative purposes,' but experts argue the lack of transparency could erode public trust. As AI technology advances, distinguishing real from AI-generated content is becoming increasingly difficult, raising questions about accountability and ethics in public communication.
One particularly eyebrow-raising post featured a floating fishing rod paired with nonsensical text, captioned, 'Don’t trust AI for your fishing rules.' Curtin University professor Tama Leaver called it 'ironic': a post generated by AI warning against trusting AI. The ABC uncovered that at least four posts from late last year used AI-generated images, discussing serious topics like infringement notices and court cases. When tested, two images bore Google’s invisible AI watermark, while others showed telltale signs of AI creation. Yet none of these posts disclosed their AI origins, either in captions or alt text.
Should governments be upfront about using AI in their communications? Leaver, also a chief investigator at the ARC Centre of Excellence for the Digital Child, believes so. He argues that transparency is crucial, especially as AI becomes more sophisticated. 'It’s trivially easy to create cartoonish, representational, or photorealistic images with AI,' he said. 'We’re at a transitional moment where best practice would be full transparency—though it’s not yet a legal requirement.'
A spokesperson for the Department of Primary Industries, which manages Fisheries Queensland’s accounts, confirmed AI use but claimed the images were for 'illustrative purposes' where real images couldn’t be used due to privacy or legal concerns. They added, 'We have not received any concerns about the images being unclear or mistaken for real.' But is the absence of complaints enough to justify the lack of disclosure?
Queensland’s guidelines for generative AI encourage its use for productivity but recommend clearly identifying AI-produced content. Yet during the 2024 state election campaign, the LNP, which went on to win government, circulated a deepfake video on TikTok of the Labor leader dancing, a move it defended as 'clearly labeled.' This raises a broader question: are we holding government agencies to the same transparency standards as political campaigns?
Marketing professor Paul Harrison from Deakin University notes that more government agencies are turning to AI for efficiency. 'The public expects agencies to behave appropriately and be transparent,' he said. Yet, he criticizes the Fisheries Queensland posts as 'obviously AI-generated' and believes their failure to disclose feels 'lazy.' He warns that not disclosing AI use creates a risk of backlash, with people asking, 'Why didn’t you tell me it was AI?' From a marketing perspective, he adds, 'AI-generated images on social media are not very good. Was this the most effective way to grab attention?'
So, what do you think? Is it acceptable for government agencies to use AI without disclosure, or does transparency outweigh efficiency? As AI becomes more integrated into public communication, should there be stricter regulations? Let us know in the comments—this debate is far from over.