

João Freitas is general manager and vice president of engineering for AI and automation at Pager Service.
As the use of AI continues to evolve in large organizations, executives are increasingly looking for the next development that will generate major ROI. The latest wave of this trend is the adoption of AI agents. However, as with any new technology, organizations must ensure that they adopt AI agents in a responsible way that allows them to improve both speed and security.
More than half of organizations have already deployed AI agents to some extent, and more are expected to follow over the next couple of years. But many pioneers are now re-evaluating their approach. Four out of 10 technology leaders regret not having built a more solid governance foundation from the start, suggesting that they adopted AI quickly but left room for improvement in the policies, rules, and best practices designed to ensure the responsible, ethical, and legal development and use of AI.
As AI adoption accelerates, organizations must strike the right balance between their exposure to risk and the safeguards that ensure the safe use of AI.
There are three main areas to consider for safer adoption of AI.
The first is shadow AI: employees using AI tools without express authorization, bypassing approved tools and processes. IT must create the processes necessary for experimentation and innovation so that teams can introduce more effective ways of working with AI through sanctioned channels. Although shadow AI has been around as long as AI tools themselves, the autonomy of AI agents makes it easier for unauthorized tools to operate outside of IT's purview, which can introduce new security risks.
Second, organizations must close gaps in AI ownership and accountability to prepare for incidents or processes that go wrong. The strength of AI agents lies in their autonomy. However, if agents act unexpectedly, teams must be able to determine who is responsible for resolving issues.
The third risk arises when there is a lack of explainability for the actions taken by AI agents. AI agents are goal-oriented, but how they achieve their goals can be unclear. AI agents must have explainable logic underlying their actions so that engineers can trace and, if necessary, undo actions that could cause problems with existing systems.
None of these risks should delay adoption, but understanding and addressing them will help organizations adopt AI agents more securely.
Once organizations have identified the risks that AI agents may pose, they should implement guidelines and safeguards to ensure safe use. By following these three steps, organizations can minimize these risks.
1: Make human monitoring the default
Agentic AI continues to evolve at a rapid pace. However, human oversight is still needed when AI agents have the ability to act, make decisions, and pursue a goal that can impact key systems. A human should be kept in the loop by default, especially for business-critical use cases and systems. Teams using AI agents need to understand what actions the agents can take and where they may need to intervene. Start conservatively and, over time, increase the level of autonomy granted to AI agents.
At the same time, operations teams, engineers, and security professionals need to understand the role they play in overseeing AI agent workflows. Each agent should be assigned a specific human owner for clearly defined oversight and accountability. Organizations must also allow any human to flag or override an AI agent's behavior when an action has a negative outcome.
When considering tasks for AI agents, organizations should understand that while traditional automation is effective at managing repetitive, rule-based processes with structured data inputs, AI agents can handle much more complex tasks and adapt to new information more autonomously. This makes them an attractive option for a much wider range of tasks. But as AI agents are deployed, organizations need to control the actions agents can take, especially in the early stages of a project. Thus, teams working with AI agents should have approval pathways for high-impact actions to ensure that the scope of the agent does not extend beyond expected use cases, thereby minimizing risk to the overall system.
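To make the idea of an approval pathway concrete, here is a minimal Python sketch of a gate that blocks high-impact actions until a human owner signs off. The action names, risk tiers, and request_human_approval hook are hypothetical illustrations, not part of any particular agent framework.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1   # e.g., read-only queries
    HIGH = 2  # e.g., restarting services, changing configs

# Hypothetical allowlist mapping agent actions to risk tiers.
ACTION_RISK = {
    "summarize_incident": Risk.LOW,
    "restart_service": Risk.HIGH,
}

def request_human_approval(action: str, owner: str) -> bool:
    """Placeholder for a real approval workflow (chat prompt, ticket, etc.)."""
    print(f"Approval requested from {owner} for action: {action}")
    return False  # conservative default: deny until a human explicitly approves

def execute(action: str, owner: str) -> None:
    risk = ACTION_RISK.get(action)
    if risk is None:
        # Anything outside the approved scope is rejected outright.
        raise ValueError(f"Action '{action}' is outside the agent's approved scope")
    if risk is Risk.HIGH and not request_human_approval(action, owner):
        print(f"Blocked: '{action}' needs sign-off from {owner}")
        return
    print(f"Executing '{action}'")

execute("summarize_incident", owner="oncall-engineer")
execute("restart_service", owner="oncall-engineer")
```

Starting with a deny-by-default posture, as in this sketch, mirrors the advice above: grant the agent more autonomy only as confidence in its behavior grows.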
2: Bake in security
The introduction of new tools should not expose a system to new security risks.
Organizations should consider agent platforms that meet high security standards and are validated by enterprise-grade certifications such as SOC2, FedRAMP, or equivalent. Additionally, AI agents should not have free rein in an organization’s systems. At a minimum, an AI agent’s permissions and security scope should be aligned with the owner’s scope, and tools added to the agent should not allow broad permissions. Limiting AI agents’ access to a system based on their role will also ensure a smooth deployment. Keeping complete logs of every action taken by an AI agent can also help engineers understand what happened in the event of an incident and trace back the problem.
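As a rough illustration of these two safeguards, the following Python sketch assumes a hypothetical AgentSession that never grants the agent more permissions than its human owner holds and appends every attempted action to an audit log.

```python
import json
import time

class AgentSession:
    """Hypothetical wrapper that scopes an agent to its owner's permissions
    and records every action it attempts."""

    def __init__(self, agent_id: str, owner: str, owner_permissions: set[str]):
        self.agent_id = agent_id
        self.owner = owner
        # The agent can never hold permissions its owner does not have.
        self.permissions = set(owner_permissions)
        self.audit_log: list[dict] = []

    def act(self, action: str, required_permission: str, payload: dict) -> None:
        allowed = required_permission in self.permissions
        # Log every attempt, allowed or not, so engineers can trace incidents later.
        self.audit_log.append({
            "ts": time.time(),
            "agent": self.agent_id,
            "owner": self.owner,
            "action": action,
            "allowed": allowed,
            "payload": payload,
        })
        if not allowed:
            raise PermissionError(f"{self.agent_id} lacks '{required_permission}'")
        print(f"{self.agent_id} performed {action}")

session = AgentSession("triage-agent", owner="alice",
                       owner_permissions={"incidents:read"})
session.act("read_incident", "incidents:read", {"incident_id": "demo-123"})
print(json.dumps(session.audit_log, indent=2))
```

In a real deployment the audit log would go to durable, queryable storage rather than an in-memory list, but the principle is the same: scoped permissions plus a complete record of actions.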
3: Make the results explainable
The use of AI in an organization should never be a black box. The reasoning behind any action must be documented so that any engineer reviewing it can understand the context the agent used to make its decision and access the traces that led to it.
The inputs and outputs of each action must be recorded and accessible. This will help organizations establish an accurate picture of the logic behind an AI agent's actions, which is invaluable should something go wrong.
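One way to achieve this, sketched below in Python with hypothetical step names, is to wrap each agent step so that its inputs and outputs are captured in a trace that engineers can query after the fact.

```python
import functools
import json
import time

TRACE: list[dict] = []  # in practice this would be durable, queryable storage

def traced(step_name: str):
    """Record the inputs and outputs of each agent step so its reasoning
    can be reconstructed later."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"ts": time.time(), "step": step_name,
                      "inputs": {"args": args, "kwargs": kwargs}}
            result = fn(*args, **kwargs)
            record["output"] = result
            TRACE.append(record)
            return result
        return wrapper
    return decorator

@traced("classify_alert")
def classify_alert(alert_text: str) -> str:
    # Stand-in for a model call; purely illustrative logic.
    return "high" if "outage" in alert_text else "low"

classify_alert("Database outage in region eu-west-1")
print(json.dumps(TRACE, indent=2, default=str))
```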
AI agents offer a huge opportunity for organizations to accelerate and improve their existing processes. However, if they do not prioritize security and strong governance, they could expose themselves to new risks.
As AI agents become more common, organizations need systems in place to measure their performance and to intervene when the agents cause problems.