<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AIOps on Ortelius</title>
    <link>/tags/aiops/</link>
    <description>Recent content in AIOps on Ortelius</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Wed, 22 Apr 2026 10:02:58 -0600</lastBuildDate>
    <atom:link href="/tags/aiops/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The New Categories of Security in AI Systems</title>
      <link>/blog/2026/04/21/the-new-categories-of-security-in-ai-systems/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>/blog/2026/04/21/the-new-categories-of-security-in-ai-systems/</guid>
      <description>&lt;h1 id=&#34;the-new-categories-of-security-in-ai-systems&#34;&gt;The New Categories of Security in AI Systems&lt;/h1&gt;&#xA;&lt;p&gt;For years, cybersecurity has largely focused on a familiar set of questions: Is there a software bug? Is a dependency vulnerable? Has the patch been applied? Those questions still matter. But Artificial Intelligence (AI) systems, especially AI agents, introduce a broader set of risks.&lt;/p&gt;&#xA;&lt;p&gt;Unlike traditional software, AI agents do not simply execute fixed instructions. They interpret prompts, retrieve information, remember context, use tools, interact with people, communicate with other agents, and respond to changing environments. That means their security cannot be assessed solely by examining the software they contain. It also depends on how they think, what they remember, what they can access, and what influences them after deployment. The TrustAgent survey describes this shift by dividing agent security into internal and external modules (Yu et al., 2025). Internal modules include the agent’s brain, memory, and tools. External modules include the user, other agents, and the environment.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
