AI Agents

What are AI Agents?

Because not everything that uses an API deserves a job title

Dan True
Head of Solutions

Following up on our recent blog about What AI is, let’s take a closer look at another recently hyped and misunderstood concept: AI Agents. They’ll book a table for you at your favourite restaurant, make you a budget, and even write code – all while matching your Friday-night energy, whether that means getting everything done with calm precision or charging ahead with bold, unstoppable momentum.

An Agent is an entity with Agency, so an AI Agent is an AI with Agency – or at least the ability to Act in some sort of environment. AI Agents have been studied and implemented for decades, and classic examples include many robots and game AI. I was building AI agents that could plan and act on their own in unpredictable, game-like digital environments back in my university days a few decades ago. It’s nothing new.

In recent years, LLM-based AI Agents have brought the hype to the masses. While we have decades of literature on what AI Agents are, the casual observer of the current hype train could be excused for believing that any LLM/GPT-based system nowadays is an AI Agent in disguise.

What are AI Agents? AI Tech Blog - Todai AI-konsulenthus

If you get this reference, I hope your back pain isn’t too bad

The underlying discussion about what constitutes an AI Agent relates to philosophy and etymology: to be an Agent, do you need your own Agency, or is it sufficient to Act On Behalf of someone else? To avoid injecting any existential crisis into our reader, we’ll keep this limited to AI systems 🙂

Personally, I prefer to distinguish between two different systems explained below. I use these terms to establish some clarity in this age of hype and vagueness. Do note that these are my definitions based on what I’ve read and learned, so you’ll find plenty of people who will tell you I’m wrong. Some of them may even be right.

AI Agents: When an AI has true Agency in some environment, continuously pursuing its defined goals with the tools it has available.

Agentic Behaviour: When some AI system can Act, but only on behalf of a user and only in accordance with the goals set forward by that user.

Let’s take a deeper look at each.

 

AI Agents

For a system to be a true AI Agent I believe it must:

  1. Be at least partially continuously active, e.g. run all the time or at least as a scheduled job. To be an agent it needs to have agency – it can’t just react to user prompts and then sit inactive until the next one.
  2. Go through a Sense-Think-Act cycle: it first observes an environment, then considers how to achieve its goal(s) in that environment, and potentially takes actions.
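The Sense-Think-Act cycle above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration – the environment here is just a dictionary and the "goal" a target number – but the shape of the loop is the point: observe, decide, act, repeat.

```python
def sense(environment):
    """Sense: observe the environment; here, just read a shared dict."""
    return dict(environment)

def think(observation, goal):
    """Think: decide on an action that moves toward the goal."""
    if observation["value"] < goal:
        return "increment"
    return None  # goal reached, nothing to do

def act(environment, action):
    """Act: apply the chosen action back onto the environment."""
    if action == "increment":
        environment["value"] += 1

def run_agent(environment, goal, max_cycles=10):
    """Cycle through Sense-Think-Act until the goal is met."""
    for _ in range(max_cycles):
        observation = sense(environment)
        action = think(observation, goal)
        if action is None:
            break
        act(environment, action)
        # A real agent would run continuously or on a schedule here,
        # e.g. sleeping until the next cycle.
    return environment

env = {"value": 0}
run_agent(env, goal=3)
```

Note that the loop is driven by the agent itself, not by a user prompt – that self-driven cycle is what the first criterion above demands.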

Most people can figure out how this would work in a robot, but let’s take a more useful example from industry: an AI agent to generate leads for your business.

Imagine you have a business where most of your sales leads come through sites such as LinkedIn, job postings, announcement platforms and the like. An AI Agent cycle could look like this:

  1. Sense: Continuously (e.g. every minute, hour or night) scan sources such as LinkedIn and job postings to generate a list of potential leads not yet evaluated.
  2. Think: Evaluate each lead according to various criteria such as the Ideal Customer Profile. This may mean more look-ups to websites or databases to get more information about the potential lead and their business.
  3. Act: Take action towards those leads, using tools or APIs made available to the Agent. This could include sending an introductory email or creating a new lead entry in a Customer Relationship Management system.
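Here is how such a cycle might be wired up. Everything below is a hypothetical sketch: the source, the keyword-based Ideal Customer Profile scoring and the in-memory "CRM" list are all stand-ins for real APIs and systems.

```python
def sense_sources(sources):
    """Sense: scan each source for leads not yet evaluated (stubbed)."""
    return [lead for source in sources for lead in source()]

def think_score(lead, icp_keywords):
    """Think: score a lead against a (toy) Ideal Customer Profile."""
    return sum(1 for kw in icp_keywords if kw in lead["description"].lower())

def act_on_lead(lead, crm):
    """Act: create a CRM entry (stand-in for a real CRM or email API)."""
    crm.append({"company": lead["company"], "status": "new"})

def run_cycle(sources, icp_keywords, crm, threshold=1):
    """One full Sense-Think-Act cycle; a scheduler would call this repeatedly."""
    for lead in sense_sources(sources):
        if think_score(lead, icp_keywords) >= threshold:
            act_on_lead(lead, crm)

# Hypothetical source returning mock leads; a real one would call an API.
def linkedin_source():
    return [
        {"company": "Acme", "description": "Enterprise SaaS hiring data engineers"},
        {"company": "Bob's Bakery", "description": "Local bakery"},
    ]

crm = []
run_cycle([linkedin_source], icp_keywords=["saas", "data"], crm=crm)
```

The crucial design choice is that `run_cycle` is invoked by a scheduler on the agent’s own cadence – every minute, hour or night – not by a user sitting at a chat window.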

As you can see, such AI Agents could have their uses – but this is a much narrower definition than the one you see used on LinkedIn at the moment. Reasonable people can disagree, but to me all of the above is necessary before I call it a true AI Agent.

 

Agentic Behaviour

Now, where it gets messier is with Agentic Behaviour. To me, this is when an AI (usually a GPT, but not necessarily) has parts of the full flow of an AI Agent but not the entirety – it doesn’t go through its own cycle, only ever acting in response to user input. For instance:

A GPT that has access to one or more tools, such as APIs, which it can call on the user’s behalf – Acting. This usually also entails some Think steps, as it needs to evaluate whether it has enough information to call a specific API or needs to ask the user clarifying questions. But it doesn’t exist in an environment which it Senses and then reacts to – it only ever reacts to user input instead of running its own cycle – so to me, it’s not a true AI Agent.

To be fair, many GPT systems with access to tools, based on so-called LLM Reasoning Models (a terrible name, since they don’t do Automated Reasoning – but that’s for another day), do have an internal cycle:

  1. Consider the user prompt
  2. Consider how to answer the query
  3. Decide whether to call potentially useful tools such as APIs, either to get more information or to update some record somewhere
  4. Decide whether they have a satisfactory response for the user, and either return it or loop through the cycle again
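That internal tool-calling loop can be sketched as follows. To keep this self-contained, a tiny rule-based function stands in for the real LLM, and the weather tool is invented for illustration – the structure of the loop, not the stubs, is what mirrors the four steps above.

```python
def mock_llm(messages):
    """Stand-in for an LLM call: decides to use a tool, then answers."""
    last = messages[-1]
    if last["role"] == "user" and "weather" in last["content"]:
        return {"type": "tool_call", "tool": "get_weather", "args": {"city": "Aarhus"}}
    if last["role"] == "tool":
        return {"type": "answer", "content": f"It is {last['content']} in Aarhus."}
    return {"type": "answer", "content": "I can only check the weather."}

def get_weather(city):
    """Hypothetical tool; a real one would call a weather API."""
    return "12°C"

TOOLS = {"get_weather": get_weather}

def run_agentic_loop(user_prompt, max_steps=5):
    """Consider the prompt, call tools as needed, loop until an answer."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        decision = mock_llm(messages)
        if decision["type"] == "answer":
            return decision["content"]
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": result})
    return "Gave up."

answer = run_agentic_loop("What's the weather?")
```

Note how the whole thing is kicked off by `user_prompt` and terminates once an answer is returned – there is no Sense step and no ongoing cycle of its own, which is exactly the distinction drawn here.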

This internal loop is often used as justification for calling such a system an agent. Again, to me this isn’t enough to be called an AI Agent, as it only ever acts on behalf of a user’s input – but it’s definitely Agentic Behaviour.

 

Rounding off

At the risk of annoying half of LinkedIn: not everything with a loop and an API call is an AI Agent. There’s a difference between a system that occasionally wakes up when poked, and one that actively exists in its own right – pursuing goals, reacting to change, and doing more than just fetching things when asked.

That doesn’t mean today’s tool-using GPTs are useless or uninteresting. Far from it. Many are impressive and very useful. But calling every tool-enabled prompt loop an Agent muddies the waters – and it makes it harder to talk about where this tech is actually heading.

Eventually, many of these Agentic Behaviour systems might evolve into true AI Agents. But if we want to get there, we’ll need more than branding. We’ll need robust systems that can actually act, not just react.

In the meantime, let’s keep our terms sharp. The hype can handle a little precision.