As in the rest of Africa and elsewhere in the world, artificial intelligence (AI) poses a major challenge for media practitioners in Tanzania.
New research supported by the United Nations Educational, Scientific and Cultural Organization (UNESCO) paints a picture of an industry largely sitting on the fence, hampered by limited AI awareness and literacy, questions about the technology's potential impact on job security, and its ethical implications.
There is a consensus that AI and related technologies are already part of newsroom operations. But much of the discussion in Tanzania's newsrooms revolves around the technology's potential to amplify misinformation, disinformation and outright fake news.

“The power of AI is almost mystical. It makes the fake look real and the real look fake; it turns lies into truth and truth into lies,” says William Xiao, a veteran Tanzanian journalist.
“The more advanced AI becomes, the wiser and more careful we need to be. Otherwise, instead of us mastering it, it will get the better of us,” he adds.
The study, launched in Dar es Salaam on February 27, highlights the extent to which fast-evolving AI is affecting Tanzania's media space while remaining a deeply confusing subject for the average practitioner.
The final report, State of Artificial Intelligence for Media Development in Tanzania, was produced by local firm Tech & Media Convergency (TMC) in collaboration with UNESCO's International Programme for the Development of Communication (IPDC).
Most of the 350 journalists, editors and support staff interviewed across traditional and digital media platforms spoke of the need for more AI training and for clear policy guidelines on the responsible use of AI in the newsroom.
The survey found that up to 95% of respondents were keen to learn more about AI as a journalistic tool, but lacked access to structured training programs.
Less than a quarter of newsroom managers (22%) had formally discussed the introduction of AI policies with their staff.
Almost three-quarters of respondents (73%) saw AI as a genuine game changer for local journalism practice, while 40% were chiefly concerned about its potential for misinformation and disinformation, especially the spread of harmful political propaganda.
More than eight in ten (84%) of those interviewed said AI skills should be a priority in the curricula of local journalism schools and universities.
Newsroom efficiency
Rather than presenting AI as a replacement for human intelligence, the report strongly advocates it as a tool for improving newsroom efficiency, recommending that media houses and businesses “proactively” integrate AI tools while addressing concerns about misinformation and bias.
For example, it highlights how AI can automate routine content production tasks and strengthen research and fact-checking across vast streams of data.
At the same time, the study recommends that media companies take particular care to ensure that AI does not undermine content creation, job security, content originality or audience trust.
While acknowledging the growing demand among Tanzanian journalists for training in AI and digital journalism, the report notes that the absence of structured training programmes tailored to local journalistic needs has proven a major obstacle.
“Most of the existing digital courses are Western-centric and fail to address challenges specific to Tanzania, such as local datasets, Swahili-language AI tools, and AI-driven fact-checking tailored to regional misinformation trends. This creates a disconnect between global AI advancements and practical applicability in Tanzanian journalism.”
Furthermore, the study argues that the slow uptake of AI in Tanzanian newsrooms stems from “deeper challenges” in the change of mindset that adoption requires, beyond issues such as cost, skills and access to AI tools.
“Tanzanian journalists tend to view AI as a competitor rather than a tool, leading to reluctance to integrate it into their workflow,” it says. “While access to free courses and AI tools is a major incentive, trust in AI systems and clarity about their role in journalism remain important concerns.”
The report warns that AI-generated content such as deepfakes, synthetic media and automated news articles is being “abused for political propaganda, clickbait, or agenda-driven narratives,” and that generative AI models such as ChatGPT and Gemini can “generate unintended, false or biased content,” reinforcing echo chambers and misconceptions.
“There is also fear that AI-driven automation could replace traditional journalistic roles, particularly in content generation, editing and research. The lack of transparency in AI systems is an additional concern, fuelling mistrust of their contribution to editorial decision-making and of potential biases built into AI-generated content.”
The report comes two years after a government committee, tasked with assessing the financial position of media players and the economic welfare of individual journalists, recommended that AI integration guidelines be drawn up for the sector alongside regulatory frameworks.
The government has yet to act on that proposal, however, leaving the ball in the court of media practitioners themselves.