Home Dispatches Tools About
Operating Note Javier Leal February 20, 2026

AI Agents Are Talking to Each Other Through Email

Recently, I was experimenting with two systems that, on paper, both make perfect sense. On one side, I was running automated outreach workflows assisted by language models to customize content and categorize responses. On the other, I set up an AI agent to filter my inbox, triage incoming emails, and surface relevant opportunities.

They were very efficient, but using both at the same time led me to realize that some of the emails being generated through AI outreach are now being read, filtered, and summarized by AI systems on the receiving end. In other words, AI is both writing messages and defining whether a human should engage with them at all. So what looks like normal communication is, in practice, a loop of models writing, interpreting, and responding through a channel originally designed for human conversation.

The Reader Is No Longer Human

Email, DMs, and written outreach were never designed as machine-to-machine protocols. They are human communication layers that are ambiguous, narrative, persuasion-heavy, and context-dependent. A human inbox assumes selective attention, intuition about tone, skepticism toward persuasive language, and limited time. So, outreach best practices naturally adapted to this reality. Subject lines are attention-grabbing, messages are short, and information is compressed to increase the probability of engagement.

The goal was not for leads to fully understand your product; it was to get a response. But an AI-mediated inbox does not read like a human. It does not get bored, it does not skim emotionally, and it does not reward charisma. Instead, it parses, summarizes, classifies, and prioritizes without our bias. And this subtle shift changes the optimization target of communication in ways that are easy to overlook.

Because when I am writing to a human, too much information can reduce response rates, and over-explaining will dilute the hook, so I sacrifice precision for engagement. But the incentives invert when the reader is an AI agent. A long, structured email gives it more data to work with, to the point where a detailed spec sheet may replace a hook. And a comprehensive explanation showing problem-solution alignment can convince the interpreter that my product deserves attention.

Potentially, the more analytical and exhaustive the message becomes, the more convincing it may appear once summarized by the AI agent, even if the original document was still fundamentally persuasive in intent. After all, the interpretation layer is extracting key claims, evaluating relevance probabilistically, and presenting a condensed version of the message to the human, highlighting the sales points relevant to them in a way that may appear neutral.

The Illusion of Objectivity

There is a common intuition that AI evaluation should be more objective than human judgment. After all, the system is not emotional, socially influenced, or distracted in the same way a human reader might be. But large language models are context interpreters, and they’re often not objective in the way people imagine.

A language model does not independently verify reality. It processes inputs, structures them, and generates outputs based on patterns in language. So if an agent reads a long, well-structured narrative of a problem with confident mapping between solution and outcome, it may produce a summary that sounds highly reasonable, even if the original framing was strategically constructed to appear relevant.

So, as AI mediation becomes more common, communication will start to be optimized for model interpretation. And this is already visible with certain marketers using prompt injection as strategic communication design, taking control over what gets highlighted and how urgency is perceived.

The Overlooked Inefficiency

There’s also an inefficiency embedded in this loop in that we are using computationally expensive models to generate human-style emails, interpret human-style emails, and respond in human-style emails. All through a communication channel designed around human limitations.

From a systems perspective, this is an indirect path. Discovery systems would be far more efficient for machine-mediated exchanges, yet the default remains narrative language, because that is the layer our workflows are built on. So instead of direct information transfer, we get iterative interpretation cycles that are not necessarily optimal, but they are compatible with existing norms.

It is unlikely that this dynamic resolves into a single model of communication. Some workflows may continue to rely on AI inboxes and automated outreach loops, and others may shift toward marketplaces and discovery platforms paired with internal agents that proactively identify solutions without waiting for inbound pitches. At the same time, human judgment is unlikely to disappear from high-stakes decisions.

Which of these solutions dominates the market is yet to be seen, and there are challenges with all of them. But during this transition, those who notice these patterns first will get significant benefits.

Back to dispatches

Ripplio Inc

Research-first advisory for leadership teams.