The rapid advancement and adoption of generative artificial intelligence (AI) is revolutionizing the communications sector. AI-powered tools can now generate compelling text, images, audio, and video from text prompts.
Generative AI is powerful and useful, but it also poses significant risks, including misinformation, bias, and privacy violations.
Generative AI is already causing serious communication problems. AI image generators have been used during political campaigns to create fake photos aimed at confusing voters and embarrassing opponents. AI chatbots have given customers inaccurate information, damaging organizations' reputations.
Deepfake videos of celebrities making inflammatory statements or endorsing stocks are going viral. Similarly, AI-generated social media profiles are being used in disinformation campaigns.
The rapid pace of AI development presents challenges. For example, AI-generated images are becoming much more realistic, making deepfakes much harder to detect.
Without clear AI policies in place, organizations risk producing misleading communications that undermine public trust, and enabling the misuse of personal data on an unprecedented scale.
Establishing AI guidelines and regulations
Several efforts are underway in Canada to develop approaches to AI regulation. The federal government introduced a controversial bill in 2022. If passed, the bill would outline how AI is regulated and how data privacy is protected.
In particular, the bill's Artificial Intelligence and Data Act (AIDA) has drawn strong criticism from 60 organizations, including the Assembly of First Nations (AFN), the Canadian Chamber of Commerce and the Canadian Civil Liberties Association. They are calling for it to be withdrawn and rewritten following wider consultation.
More recently, in November 2024, Innovation, Science and Economic Development Canada (ISED) announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI). CAISI aims to "support the safe and responsible development and deployment of artificial intelligence" by working with other countries to establish standards and expectations.
With CAISI, Canada joins the United States and other countries that have established similar institutions, in the hope of working together toward multilateral AI standards that promote responsible development while fostering innovation.
The Montreal AI Ethics Institute offers resources such as a newsletter, blog, and an interactive AI Ethics Living Dictionary. The Swartz Reisman Institute for Technology and Society at the University of Toronto and CARE-AI at the University of Guelph are examples of universities that are building academic forums to investigate ethical AI.
In the private sector, Telus is the first Canadian telecommunications company to publicly commit to AI transparency and accountability. Telus's Responsible AI division recently released its 2024 AI Report, which describes the company's commitment to responsible AI through customer and community engagement.
In November 2023, Canada was among 29 countries that signed the Bletchley AI Declaration after the first International AI Safety Summit. The purpose of this declaration was to reach consensus on how to assess and mitigate AI risks in the private sector.
Recently, the governments of Ontario and Quebec introduced legislation regarding the use and development of AI tools and systems in the public sector.
Looking ahead, the European Union’s AI law, dubbed “the world’s first comprehensive AI law,” is scheduled to come into force in January 2025.
Putting frameworks into action
As the use of generative AI becomes more widespread, the communications industry, including public relations, marketing, and digital and social media, will need to develop clear guidelines for its use.
While progress has been made by government, universities and industry, more work is needed to turn these frameworks into actionable guidelines that can be adopted by Canada’s communications, media and marketing sectors.
Industry associations such as the Canadian Public Relations Society, the International Association of Business Communicators and the Canadian Marketing Association should develop standards and training programs that meet the needs of public relations, marketing and digital media professionals.
The Canadian Public Relations Society is moving in this direction in partnership with the Chartered Institute of Public Relations, the professional body for public relations practitioners in the United Kingdom. Working through the CIPR's AI in PR Panel, the two professional bodies collaborated to create a practical guide for communicators who want to use generative AI responsibly.
Establishing standards for AI
Maximizing the benefits of generative AI while limiting its drawbacks requires the adoption of professional standards and best practices in the communications field. The use of generative AI over the past two years has revealed several areas of concern that should be considered when developing guidelines.
Transparency and disclosure. AI-generated content must be labeled. When and how generative AI is used should be disclosed. AI agents should not be presented to the public as humans.
Accuracy and fact checking. Professional communicators must uphold journalistic accuracy standards by fact-checking AI output and correcting mistakes. Communicators must not use AI to create or spread false or misleading content.
Fairness. AI systems should be regularly checked for bias to ensure they treat an organization's audiences fairly across variables such as race, gender, age and geographic location. To reduce bias, organizations must ensure that the datasets used to train generative AI systems accurately represent their audiences and users.
Privacy and consent. Users' right to privacy should be respected, and organizations must comply with data protection laws. Personal data must not be used to train AI systems without users' explicit consent, and individuals should be able to opt out of automated communications and data collection.
Accountability and oversight. AI decisions must always be subject to human oversight. Clear lines of accountability and reporting should be spelled out. Generative AI systems should be audited regularly. To enable these policies, organizations must appoint a standing AI task force that is accountable to the organization’s board of directors and members. An AI task force should monitor the use of AI and regularly report results to appropriate stakeholders.
Generative AI has immense potential to enhance human creativity and storytelling. By developing and following thoughtful AI guidelines, the communications sector can build public trust and maintain the integrity of public information that is essential for a thriving society and democracy.
This article is republished from The Conversation, a nonprofit, independent news organization that provides facts and trusted analysis to help you make sense of our complex world. Authors: Terry Flynn and Alex Sévigny, McMaster University.
Terry Flynn is a fellow and accredited member of the Canadian Public Relations Society.
Alex Sévigny is the lead judge for the Canadian Public Relations Society's national certification program in public relations and a member of the International Association of Business Communicators.