Artificial intelligence has had a transformative impact on the way we do business. As organizations adapt to AI-driven efficiency, a new evolution is emerging to redefine business operations: agentic AI, which may well become the foundation for the next generation of the technology.
Unlike earlier technologies, which are rule-based and have limited ability to act independently, agentic AI engages in complex, multi-step processes, often interacting with different systems to achieve a desired result. Imagine an AI-powered help desk that uses natural language processing to understand and triage support tickets: it autonomously resets passwords and installs software updates, and escalates the problem to human staff when necessary. Gartner predicts that by 2028, 33 percent of enterprise software applications will include agentic AI, up from nearly none in 2024, enabling at least 15 percent of day-to-day work decisions to be made autonomously.
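To make the help desk example concrete, here is a minimal sketch of what such triage logic might look like. The classifier and action functions are hypothetical stubs, not references to any particular product; a production agent would back them with an NLP model and real system integrations.

```python
# Minimal sketch of an agentic help-desk triage loop (hypothetical example).
# In a real system, classify_intent() would call an NLP/LLM service and the
# action functions would integrate with identity and patch-management tools;
# here they are stubs so the control flow runs end to end.

def classify_intent(ticket_text: str) -> tuple[str, float]:
    """Stub classifier: a production agent would use an LLM or NLP model."""
    text = ticket_text.lower()
    if "password" in text:
        return "password_reset", 0.95
    if "update" in text or "upgrade" in text:
        return "software_update", 0.90
    return "unknown", 0.30

def reset_password(ticket: str) -> str:
    return "Password reset link sent."           # placeholder for an identity API call

def install_update(ticket: str) -> str:
    return "Update scheduled for installation."  # placeholder for a patch-management call

def escalate_to_human(ticket: str, reason: str) -> str:
    return f"Escalated to support staff ({reason})."

def handle_ticket(ticket_text: str) -> str:
    intent, confidence = classify_intent(ticket_text)
    # Act autonomously only when the classifier is confident; otherwise escalate.
    if confidence < 0.8 or intent == "unknown":
        return escalate_to_human(ticket_text, reason=f"low confidence on '{intent}'")
    if intent == "password_reset":
        return reset_password(ticket_text)
    return install_update(ticket_text)

print(handle_ticket("I forgot my password and can't log in"))
print(handle_ticket("The VPN disconnects every few minutes"))
```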
But while the possibilities are exciting, the journey to implementation is not without hurdles. Companies need to be prepared to address several key issues before fully adopting agentic AI in order to ensure its reliability and effectiveness.
Logic and thought
Agentic AI operates through a network of autonomous agents, each with a distinct role. At the core, one agent acts as a "planner," coordinating the actions of multiple agents, while another model serves as a "critic," providing feedback on the planner's output and on the agents executing its instructions. This feedback loop sharpens the models' judgment over time, producing progressively better results.
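As a rough illustration, the planner/critic loop can be sketched as follows. The plan, execute, and critique functions are hypothetical stand-ins for LLM calls and tool integrations, not an implementation of any particular framework.

```python
# Sketch of a planner/critic feedback loop (hypothetical, framework-agnostic).
# plan(), execute() and critique() stand in for LLM calls and tool integrations.

def plan(goal: str, feedback: str | None) -> list[str]:
    """Planner agent: break the goal into steps, revising based on prior feedback."""
    steps = [f"gather data for: {goal}", f"draft result for: {goal}"]
    if feedback:
        steps.append(f"address feedback: {feedback}")
    return steps

def execute(steps: list[str]) -> str:
    """Worker agents: carry out each step and return the combined outcome."""
    return "; ".join(f"done({s})" for s in steps)

def critique(goal: str, outcome: str) -> tuple[bool, str]:
    """Critic agent: judge the outcome against the goal and explain any gaps."""
    ok = "address feedback" in outcome           # toy acceptance rule for this sketch
    return ok, "" if ok else "result lacks supporting detail"

def run(goal: str, max_rounds: int = 3) -> str:
    feedback = None
    for _ in range(max_rounds):
        outcome = execute(plan(goal, feedback))
        accepted, feedback = critique(goal, outcome)
        if accepted:
            return outcome                       # critic is satisfied; stop iterating
    return outcome                               # give up after max_rounds attempts

print(run("summarize last quarter's support tickets"))
```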
However, for this process to work reliably in real-world applications, the critic model must be trained on data that reflects reality as closely as possible. This includes extensive feedback along with detailed information about goals, plans, actions, and outcomes. Achieving this level of accuracy is no small task; in many cases, numerous iterations may be required to give the model enough data to act as an effective critic. Without this foundation, agentic AI risks generating inconsistent or unreliable output, limiting its potential as a dependable business tool.
Predictability and reliability
For decades, interacting with computers has been a fairly predictable process: users provide explicit instructions and the system follows them step by step. Agentic AI changes this dynamic by allowing teams to specify the outcome they want to achieve rather than the steps to get there. The agent then decides autonomously how to reach the goal, which introduces a degree of unpredictability into the process.
This randomness is not new. Early generative AI systems like ChatGPT faced similar challenges. Over the past two years, however, we have seen significant improvements in the consistency of generative AI output thanks to fine-tuning, human feedback loops, and sustained efforts to train and refine these models. A similar level of effort must now go into minimizing the randomness of agentic AI systems.
Data privacy and security
Some companies are reluctant to adopt agentic AI due to growing privacy and security concerns. These risks build on those already present in generative AI and other systems.
With large language models (LLMs), any data provided to the model becomes part of it; there is no way for the model to "forget" that information. Attacks such as prompt injection exploit this to extract proprietary or sensitive information. Agentic AI raises the stakes further because these systems often have broad access to multiple platforms, increasing the risk of exposing private data from a variety of sources.
To mitigate these risks, businesses need to take a structured, security-first approach to implementation. It is important to start small: companies should containerize their data as much as possible so that it is not exposed beyond the internal domains that require it. It is also important to anonymize data, obscure user identities, and strip any personally identifiable information from prompts before they are sent to the model.
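As an illustration of that last point, a minimal pre-processing step might redact obvious identifiers before a prompt leaves the company's boundary. The patterns below are illustrative only; a production system would rely on a dedicated PII-detection and redaction service rather than a handful of regular expressions.

```python
import re

# Illustrative-only PII scrubbing before a prompt is sent to an external model.
# Real deployments should use a dedicated PII-detection/redaction service;
# these regexes only cover a few obvious patterns.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # US SSN format
]

def scrub_prompt(prompt: str) -> str:
    """Replace obvious personally identifiable information with placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Customer jane.doe@example.com (555-123-4567) cannot reset her password."
print(scrub_prompt(raw))
# -> "Customer [EMAIL] ([PHONE]) cannot reset her password."
```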
At a high level, there are three types of agentic AI systems, each with its own security implications for business use:
Consumer agentic AI: An external AI model, typically accessed through an internal user interface. The company has no control over the AI itself.
Employee agentic AI: AI built and used in-house. While this setup minimizes risk, there is still a concern that it could expose highly sensitive information to unauthorized users within the company.
Customer agentic AI: AI systems that businesses build to serve their customers. Because customer interactions carry inherent risk, effective segmentation is essential to avoid exposing private customer data.
Data quality and applicability
Even with strong privacy measures in place, agentic AI is only as effective as the quality and relevance of the data it relies on.
Generative AI models often fail to deliver the expected results because they are disconnected from the most accurate, current data. Agentic AI systems face an added challenge: they interact with multiple platforms and data sources, pulling information dynamically as they perform tasks.
This is where data streaming platforms (DSPs) play a key role. By enabling real-time data integration, DSPs connect agentic AI to accurate, reliable information so it can provide relevant answers. Solutions such as Apache Kafka and Kafka Connect allow developers to ingest data from different sources, while Apache Flink facilitates seamless communication between models. These tools help keep agentic AI systems effective, curb hallucinations, and produce dependable results grounded in fresh, trustworthy data.
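As a simplified illustration, an agent could subscribe to a Kafka topic and fold the latest events into the context it reasons over. The sketch below uses the confluent-kafka Python client; the broker address, topic name, and build_agent_context function are assumptions made for the example, not part of any specific product, and a real pipeline would also involve Kafka Connect sources and possibly Flink jobs for enrichment.

```python
from confluent_kafka import Consumer

# Simplified sketch: keep an agent's context fresh from a Kafka topic.
# The broker address, topic name, and build_agent_context() are illustrative
# assumptions for this example.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "agent-context-updater",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["support-ticket-events"])   # hypothetical topic of fresh business events

recent_events: list[str] = []

def build_agent_context(events: list[str]) -> str:
    """Assemble the freshest events into a context block for the agent's prompt."""
    return "\n".join(events[-50:])              # keep only the most recent events

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue                            # no new data (or a transient error); keep polling
        recent_events.append(msg.value().decode("utf-8"))
        context = build_agent_context(recent_events)
        # ...pass `context` to the agent alongside the user's goal...
finally:
    consumer.close()
```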
The road ahead
AI remains new territory for many businesses, and it takes time and substantial investment to fully realize the benefits the technology offers. Many companies will need to buy new hardware such as GPUs and build new data infrastructure, particularly around memory management for caches and for short-term and long-term storage. Beyond the technical requirements, companies must build internal inference models and develop or hire talent with specialized AI skills. The return on investment takes time.
Despite these challenges, agentic AI is on track to follow the same rapid adoption curve as generative AI. Some AI technology vendors are already moving in this direction, and companies that prepare now for the agentic AI era will be best positioned to reap the rewards later. The upfront investment is significant, but the potential impact can far outweigh that of generative AI alone.
Image credit: istock.com/Bangon Pitipong