Generative artificial intelligence poses risks ranging from deepfakes that spread disinformation to impersonation attacks and unintentional employee misuse. Against this backdrop, communications leaders need to protect their organisation’s reputation from both external and internal gen AI threats.
Chief communications officers need to establish effective guardrails to balance the opportunities gen AI offers against the reputational threats it poses. According to Gartner research, there are five strategies CCOs should consider when protecting their organisation’s reputation from gen AI threats:
Enhance Social Media Monitoring Capabilities
A Gartner survey conducted between April and May 2024 found that 80% of consumers agreed gen AI has made it more difficult to distinguish what’s real online from what’s not. CCOs should have visibility into what is trending with audiences on social media and ensure that vendors monitoring misleading content can detect its spread in real time.
By establishing a human-in-the-loop protocol, CCOs can equip social media managers with the tools to monitor and manage reputational risks. A triage process and risk management protocol should also be adapted for gen AI-specific scenarios, including updating social media processes to alert IT partners and to report false or misleading content to social platforms.
Strengthen Owned Media Credibility
Disinformation and an erosion of trust remain major challenges in today’s media landscape. Therefore, it is imperative that communications leaders establish their own organisation as a source of accurate and reliable information. From trustworthy social media profiles for brands and executives to external transparency with online newsrooms, communications executives should drive credible and ethical use of gen AI.
Scenario Plan For Most Likely Attacks
Communications leaders play a vital role in identifying and flagging sensitive topics prone to disinformation and most likely to cause reputational damage. Leaders across functions should plan for gen AI-related attacks, identifying areas with the highest reputational risk. By pinpointing gaps in internal response processes, communications teams can incorporate gen AI considerations into crisis communications plans and develop counternarratives before an attack occurs.
Clarify Gen AI Use To Employees and Consumers
Consumers want transparency around how brands use gen AI in content and communications, with 75% agreeing that brands should disclose when they use gen AI to help produce their content. Leaders should ensure that such content is subject to human review and fact-checking, and that appropriate disclosures explain how AI supported its creation.
Employees should also be given guidance around gen AI, with relevant use cases and real-life examples that demonstrate its practical application.
Encourage Gen AI Experimentation Among Employees
By fostering a culture of safe gen AI experimentation, communications leaders can build their employees’ confidence around usage and ultimately encourage adoption. Experimentation opportunities should be focused on the most useful and lowest-risk use cases to minimise the risk of a mishap.