Unrestricted AI threatens public trust — here are some guidelines to maintain the integrity of communications

The rapid advancement and adoption of generative artificial intelligence (AI) is revolutionizing the communications sector. AI-powered tools can now generate compelling text, images, audio, and video from text prompts.

Generative AI is powerful and useful, but it also poses significant risks, including misinformation, bias and threats to privacy.

Generative AI is already causing serious communication problems. AI image generators have been used during political campaigns to create fake photos aimed at confusing voters and embarrassing opponents. AI chatbots have given customers inaccurate information and damaged organizations’ reputations.

Deepfake videos of celebrities making inflammatory statements or endorsing stocks are going viral. Similarly, AI-generated social media profiles are being used in disinformation campaigns.

The rapid pace of AI development presents challenges. For example, AI-generated images are becoming much more realistic, making deepfakes much harder to detect.

Without clear AI policies in place, organizations risk producing misleading communications that undermine public trust, and risk personal data being misused on an unprecedented scale.

The rapid pace of AI development poses challenges for both regulators and researchers. (Shutterstock)

Establishing AI guidelines and regulations

Several efforts are underway in Canada to develop approaches to AI regulation. The federal government introduced a controversial bill in 2022 that, if passed, would outline how AI is regulated and how data privacy is protected.

In particular, the bill’s Artificial Intelligence and Data Act (AIDA) has drawn strong criticism from 60 organizations, including the Assembly of First Nations (AFN), the Canadian Chamber of Commerce and the Canadian Civil Liberties Association, which are calling for it to be withdrawn and rewritten following wider consultation.

More recently, in November 2024, Innovation, Science and Economic Development Canada (ISED) announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI). CAISI aims to “support the safe and responsible development and deployment of artificial intelligence” by working with other countries to establish standards and expectations.

The creation of CAISI allows Canada to join the United States and other countries that have established similar institutions, in the hope of working together toward multilateral AI standards that promote responsible development while fostering innovation.

The Montreal AI Ethics Institute offers resources such as a newsletter, blog and an interactive AI Ethics Living Dictionary. The Schwartz Reisman Institute for Technology and Society at the University of Toronto and CARE-AI at the University of Guelph are examples of universities building academic forums to investigate ethical AI.

In the private sector, Telus is the first Canadian telco to publicly commit to AI transparency and accountability. Telus’ Responsible AI division recently released the 2024 AI Report, which describes the company’s commitment to responsible AI through customer and community engagement.

Read more: Bletchley Declaration: International agreement on AI safety is a good start, but ordinary people, not just elites, need a voice

In November 2023, Canada was among 29 countries that signed the Bletchley AI Declaration after the first International AI Safety Summit. The purpose of this declaration was to reach consensus on how to assess and mitigate AI risks in the private sector.

Recently, the governments of Ontario and Quebec introduced legislation regarding the use and development of AI tools and systems in the public sector.

Looking ahead, the European Union’s AI Act, dubbed “the world’s first comprehensive AI law,” is scheduled to come into force in January 2025.

Put frameworks into action

As the use of generative AI becomes more widespread, the communications industry, including public relations, marketing, and digital and social media, will need to develop clear guidelines for its use.

While progress has been made by government, universities and industry, more work is needed to turn these frameworks into actionable guidelines that can be adopted by Canada’s communications, media and marketing sectors.

Industry associations such as the Canadian Public Relations Society, the International Association of Business Communicators and the Canadian Marketing Association should develop standards and training programs that meet the needs of public relations, marketing and digital media professionals.

The Canadian Public Relations Society is moving in this direction in partnership with the Chartered Institute of Public Relations, the professional body for public relations practitioners in the United Kingdom. The two bodies have collaborated through the AI in PR panel to produce practical guidance for communicators who want to use generative AI responsibly.

Establishing standards for AI

Maximizing the benefits of generative AI while limiting its drawbacks requires the adoption of professional standards and best practices in the communications field. The use of generative AI over the past two years has revealed several areas of concern that should be considered when developing guidelines.

Transparency and disclosure. AI-generated content must be labeled. When and how generative AI is used should be disclosed. AI agents should not be presented to the public as humans.
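For organizations that publish through automated pipelines, the labelling requirement can also be made machine-readable. The short Python sketch below is illustrative only: the record fields and the disclosure wording are assumptions, not an established standard.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class ContentItem:
        body: str
        ai_generated: bool = False     # was any part produced by a generative model?
        ai_tool: Optional[str] = None  # which tool was used, if the disclosure names it
        published: date = field(default_factory=date.today)

    def with_disclosure(item: ContentItem) -> str:
        """Return publishable text, appending a plain-language label when AI was used."""
        if not item.ai_generated:
            return item.body
        tool = f" with {item.ai_tool}" if item.ai_tool else ""
        return f"{item.body}\n\n[Parts of this content were created{tool} using generative AI.]"

    print(with_disclosure(ContentItem("Quarterly update ...", ai_generated=True, ai_tool="an image model")))

Keeping the flag on the content record itself, rather than in a separate log, makes it harder for the disclosure to be dropped during editing and republication.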

Accuracy and fact checking. Professional communicators must uphold journalistic accuracy standards by fact-checking AI output and correcting mistakes. Communicators must not use AI to create or spread false or misleading content.

Fairness. AI systems should be regularly checked for bias to ensure they fairly represent an organization’s audiences across variables such as race, gender, age and geographic location. To reduce bias, organizations must ensure that the datasets used to train generative AI systems accurately represent their audiences and users.
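A representation check does not need to be elaborate; at its simplest, it compares group shares in a training dataset against the audience an organization actually serves. The sketch below assumes hypothetical audience benchmarks and an arbitrary 10-point tolerance; a real audit would use the organization’s own demographic data and thresholds.

    from collections import Counter

    # Share of each group in the organization's actual audience (assumed figures).
    audience_benchmark = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}

    # Age group recorded for each example in a (hypothetical) training dataset.
    training_examples = ["18-34", "18-34", "35-54", "18-34", "55+", "35-54", "18-34"]

    counts = Counter(training_examples)
    total = len(training_examples)

    for group, expected in audience_benchmark.items():
        actual = counts.get(group, 0) / total
        flag = "REVIEW" if abs(actual - expected) > 0.10 else "ok"  # illustrative tolerance
        print(f"{group}: dataset {actual:.0%} vs audience {expected:.0%} ({flag})")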

Privacy and consent. Users’ right to privacy should be respected, and organizations must comply with data protection laws. Personal data must not be used to train AI systems without the user’s explicit consent, and individuals should be able to opt out of automated communications and data collection.
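One way to operationalize consent and opt-out is to store both flags with each user record and filter on them before any data is reused. The field names in the sketch below are hypothetical and would need to reflect the data protection laws that actually apply.

    from typing import List, TypedDict

    class UserRecord(TypedDict):
        user_id: str
        message: str
        training_consent: bool  # explicit opt-in to having data used for model training
        comms_opt_out: bool     # has opted out of automated communications

    def usable_for_training(records: List[UserRecord]) -> List[UserRecord]:
        """Keep only records whose owners explicitly consented to training use."""
        return [r for r in records if r["training_consent"]]

    def automated_send_list(records: List[UserRecord]) -> List[str]:
        """User IDs that may still receive automated communications."""
        return [r["user_id"] for r in records if not r["comms_opt_out"]]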

Accountability and oversight. AI decisions must always be subject to human oversight. Clear lines of accountability and reporting should be spelled out. Generative AI systems should be audited regularly. To enable these policies, organizations must appoint a standing AI task force that is accountable to the organization’s board of directors and members. An AI task force should monitor the use of AI and regularly report results to appropriate stakeholders.

Generative AI has immense potential to enhance human creativity and storytelling. By developing and following thoughtful AI guidelines, the communications sector can build public trust and maintain the integrity of public information that is essential for a thriving society and democracy.

This article is republished from The Conversation, a nonprofit, independent news organization that provides facts and trusted analysis to help you make sense of our complex world. It was written by Terry Flynn and Alex Sévigny of McMaster University.

Terry Flynn is a fellow and certified member of the Canadian Public Relations Society.

Alex Sévigny is the lead judge for the Canadian Public Relations Society’s national public relations certification program and a member of the International Association of Business Communicators.
