Imagine this: it’s 2025. A marketing intern uses an AI tool to generate content for your biggest client and accidentally hits send before anyone reviews it, hallucinated product features and all.

Chilling, isn’t it?
As the creator economy races to adopt generative AI tools, pausing to build proper content governance should be the next step.
Luckily for us, Kate O’Neill, author of What Matters Next and founder and CEO of KO Insights, shared her wisdom on navigating the wild west of AI-driven content creation before your organization faces its own content crisis.
This interview is part of G2’s Q&A series. For more content like this, subscribe to G2 Tea, our newsletter with SaaS-y news and entertainment.
To see the full interview, watch the video below.
Inside the industry with Kate O’Neill
Your latest book, What Matters Next, is all about making future-ready decisions. How does that apply specifically to content risk management?
I think future-ready decision making is about balancing business goals with human values. That plays out strongly in technology, because the scale and scope of technology decisions are so vast, and many leaders feel overwhelmed by how complicated those decisions have become.
In content risk management, what we’re seeing is the need to implement governance and policy, along with proactive approaches that go beyond regulatory compliance.
The key is to understand what matters in the present reality while anticipating what will matter in the future. All of that flows from a clear understanding of what your organization is trying to achieve and what defines its values.

When it comes to content risk, I think people would really benefit from focusing on developing robust internal frameworks, and those frameworks must be grounded in purpose and organizational values.
Speaking of content risks, what are the most significant hidden risks in content strategies that organizations usually overlook? And how can they stay ahead of them in the future?
When I worked at a large company on an intranet team, our focus was not only on disseminating content but also on maintaining content integrity, managing compliance, and preventing duplication. For example, different departments often keep their own copies of documents, like codes of conduct. But when those documents are updated, versions fall out of sync across departments, leaving “orphaned” or outdated content.
Another classic example I’ve seen many times is a work process that gets instantiated and codified into a document, but the document reflects one person’s idiosyncratic preferences, and those linger even after the person leaves. The organization ends up maintaining non-essential information without a clear reason. These are very modest kinds of risk; individually they’re low stakes, but they add up over time.
At the high-risk end, what we see is a lack of clarity and transparency across communications, where no one can tell which stakeholders are responsible for which content.
And now that generative AI is used within organizations, many people generate their own versions of content and send them to clients or external media organizations on behalf of the company, and those versions aren’t necessarily approved by the internal stakeholders who are supposed to have governance over those documents.
A comprehensive content strategy that addresses these issues at the regulatory, compliance, and business-engagement levels will go a long way toward mitigating these risks.
As content strategies go global, regulatory differences across markets show just how complex content risk management has become, especially with the advent of generative AI. What specific compliance issues should organizations be most concerned about?
I see this across many areas of AI. Generative AI comes into conflict with global regulations, particularly because of how widely it’s being used. Especially in regions like the United States, where deregulation is the prevailing trend, businesses face real challenges in establishing effective internal governance frameworks. Those frameworks matter: they ensure resilience in the global market and prevent problems like unvetted content circulating as though it represents the company’s values or official position, which undermines safety and security.
You need to think about resilience and future readiness from a leadership perspective. That means being able to say, “These are the right steps for our organization,” and it probably means those steps can adapt to any market. If you do business globally, you have to be prepared for your content to be consumed and engaged with in global markets.
“I think the right way to do this is to focus on developing values-driven frameworks that transcend specific regulations.”
Kate O’Neill
Founder and CEO of KO Insights
You need to be proactive about governance. That creates competitive advantage and resilience that will carry you through shifts in global markets and circumstances, because as soon as a government changes hands, the regulatory picture in that country can swing completely.
By focusing on long-term strategy, businesses can protect their content, their people, and their stakeholders, and stay prepared for changes in government policy and global market dynamics.
You’re very active on LinkedIn, where you often talk about the interplay between AI capabilities and human values. With that balance in mind, what frameworks would you recommend to ensure that AI-powered content tools stay aligned with human-centered values, and not the other way around?
Contrary to the belief that human-centric or value-driven frameworks suppress innovation, I believe they actually reinforce it. Understanding what an organization is trying to achieve and how it benefits both internal and external stakeholders makes innovation easier within these well-defined guardrails.
I recommend the “Next Continuum” framework from my book, What Matters Next. It involves identifying current priorities, planning for possible future outcomes, defining preferred outcomes, and working to close the gap between the possible and the preferred.
This exercise, applied through a human-centric lens, is the best approach I can think of for actually promoting innovation, because it not only allows you to move really quickly but also lets people know you’re not moving so fast that you hurt anyone. It creates a balance between technical capability and the ethical responsibility that comes with it, in a way that benefits both the business and human beings.
“Think about the balance between technical capability and ethical responsibility, and do it in a way that benefits people outside the business as well as the business itself.”
Kate O’Neill
Founder and CEO of KO Insights
What skills should content teams develop now to prepare for future content risks?
Content teams should focus on developing skills that blend technical understanding with ethical consideration until that integration becomes second nature. The other skill is proactive leadership. There’s so much uncertainty right now around geopolitics, climate, AI, and many other topics.
Given that uncertainty, I think people tend to get stuck. But this is actually the perfect time to figure out what matters now and what will matter in the future, and to do that integrated work across horizons from a year from now to 100 years from now.
The key is to pull those future considerations into current decisions, actions, and priorities. That integration of foresight is the essence of “what matters next,” and it’s a skill many people need right now.
If you enjoyed this insightful conversation, subscribe to G2 Tea for the latest technology and marketing thought leadership.
Follow Kate O’Neill on LinkedIn to learn more about AI ethics, content governance, and responsible technology.
Edited by Supanna Das