
What are AI agents – is this the beginning of a new stage in software development?

Published
Marcin Stasiak, 1 July 2025

In the world of technology, new buzzwords appear almost every minute: chatbots, co-pilots, cognitive automation… Many of them quickly disappear, but some promise real change.

AI agents (or agentic AI) are an example of a concept that is evolving from a buzzword into a practical direction for software development. More and more experts are pointing out that AI agents are more than just a passing fad – they could be the beginning of a new era in which software not only assists humans, but also plans and executes tasks independently. Below, we explain what AI agents are, how they differ from traditional chatbots, how they work “under the hood” and what impact they could have on business.

What is an AI agent?

An AI agent is a type of software equipped with a goal rather than just a single command. Unlike a regular programme or chatbot, which waits for a question and gives an answer, an agent acts as a digital co-worker – it can independently plan a sequence of actions and execute them from start to finish, using various tools and data sources. In other words, an AI agent not only informs, but actually performs a service. You can think of it as a virtual employee who is assigned a task and performs it with minimal human supervision.

According to Gartner’s definition, intelligent agent systems are goal-oriented software entities that use AI to independently achieve specified results. Such an agent can receive high-level instructions – e.g. “prepare a monthly sales report” – and decide for itself how to carry them out: what data to collect, what analyses to perform and in what form to present the results. It does not need detailed instructions each time – it can generate the next steps in the work process itself. This is a fundamental change in the approach to software: instead of a tool that requires constant human control, we have an autonomous executor with a certain degree of decision-making power.

Importantly, AI agents are no longer just theoretical, but are already in use. Gartner predicts that by 2028, as much as 33% of business software will contain elements of AI agents (currently less than 1%), enabling approximately 15% of daily decisions in organisations to be made automatically. By 2029, “agent AI” may independently handle 80% of typical customer queries without human intervention, leading to a ~30% reduction in operating costs for service departments. Such forecasts show that the concept of AI agents has a strong foundation – companies are pinning their hopes on it for the automation of more advanced tasks than ever before.

From chatbots to agents: a new level of autonomy

Most of us have already encountered chatbots, whether in customer service or in assistants such as ChatGPT. How does an AI agent differ from a typical chatbot? First and foremost, in its scope of operation and initiative. A classic chatbot is usually a system that responds to user questions: it provides information and answers specific questions. Its role ends with generating a response (text, voice, etc.). An AI agent goes a step further – it is not only a conversational partner, but also a task performer. Below are the key differences:

  • Reactivity vs proactivity: A chatbot waits for a query and responds only within its scope. An agent can act proactively – when it receives a goal, it takes the initiative to achieve it. For example, a chatbot will answer a question about the status of an order, but an agent can identify delayed orders and send notifications to customers without being asked.
  • Scope of action: A chatbot focuses on conducting a conversation (providing answers, possibly redirecting to a human). An agent can go beyond the conversation – it uses tools, systems and APIs to actually do something. For example, it can start a process in an internal system, update a database, perform analysis in a spreadsheet, etc.
  • End result: The chatbot mainly returns information (e.g. text, numbers, links). The agent is supposed to deliver a ready-made solution to a problem or complete a task. From the user’s point of view, the result of the agent’s work is, for example, an updated contract, a generated report, or a new entry on a website – something that an employee would normally produce.
  • Decision-making autonomy: A chatbot does not make decisions on its own – it responds according to pre-programmed patterns or a learned model based on the user’s question. An AI agent has limited autonomy – within the scope of its designated goal, it can decide what actions to take and in what order. Of course, the scope of this autonomy is controlled (the agent operates within set limits), but it is real.
  • Memory and context: Many chatbots (especially older ones) have a very short memory – they respond to the current question and easily lose the broader context. AI agents can maintain a longer context and learn from subsequent interactions. They have memory mechanisms that allow them to draw conclusions from previous steps and adapt their actions.

Simply put, a chatbot is a virtual interlocutor, and an AI agent is a virtual employee. A chatbot responds – an agent gets the job done.
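
To make the difference tangible, here is a minimal Python sketch (purely illustrative, not tied to any real framework or API): the chatbot maps a question to an answer and stops, while the agent receives a goal, gathers data through a tool, acts, and reports the outcome. All function names and data here are hypothetical.

```python
# Purely illustrative: the chatbot answers and stops, the agent acts on a goal.
# find_delayed_orders() and send_email() are hypothetical stand-ins for real
# integrations (order system, mailing API).

def chatbot(question: str) -> str:
    """Reactive: one question in, one answer out, nothing else happens."""
    faq = {"order status": "Your order is in transit and should arrive tomorrow."}
    return faq.get(question.lower(), "Sorry, I don't know that one.")

def find_delayed_orders() -> list:
    """Hypothetical tool: in reality this would query the order system."""
    return [{"id": 1017, "customer_email": "anna@example.com", "days_late": 3}]

def send_email(to: str, body: str) -> None:
    """Hypothetical tool: in reality this would call a mailing API."""
    print(f"Email to {to}: {body}")

def agent(goal: str) -> str:
    """Proactive: given a goal, it plans the steps and acts through tools."""
    if goal == "keep customers informed about delayed orders":
        for order in find_delayed_orders():                     # 1. gather data
            send_email(order["customer_email"],                 # 2. act
                       f"Sorry, order {order['id']} is {order['days_late']} days late.")
        return "All affected customers have been notified."     # 3. report back
    return "Goal not recognised."

print(chatbot("order status"))
print(agent("keep customers informed about delayed orders"))
```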

In practice, the transition from chatbots to agents means a change in how technology is used. For organisations, it is like moving from a “software as a tool” model to a “work as a service” model. Instead of buying a tool and using it ourselves, we can ultimately “hire” an agent to do a certain job for us. For example, instead of just using a marketing automation system, we can have an agent who creates and publishes posts according to campaign guidelines. Instead of an HR module for scheduling meetings, we can have an agent who conducts initial recruitment. This idea is still in its infancy, but the direction is clear: we are automating not the interface, but the execution of tasks.

How do AI agents work? (mechanism and architecture)

How is it possible for an AI agent to perform a complex task on its own? Let’s look at a simple scenario. Imagine that an agent receives a task: to resolve a customer complaint about a service delay. Such a customer service agent may take the following steps:

  • Gathering contextual data: The agent first collects the necessary information itself – e.g. it checks the customer’s order status in the system, the communication history, and the applicable procedures (SLA) regarding delays.
  • Analysing possible actions: Based on this data, the agent analyses the available scenarios for solving the problem. It may consider options such as offering compensation, rescheduling the delivery date, escalating the ticket to a higher level, etc. Here, it uses an AI model to assess which response and action will be best.
  • Taking action: After selecting the best scenario, the agent takes the appropriate action. For example, through integration with the CRM system, it can generate a discount coupon for the customer or change the delivery date in the order system. If it needs external tools, it uses them via an API (e.g. sending an apology email, updating the status in the database).
  • Communicating the result: The agent itself communicates the result to the customer. It can send a personalised message with information about the actions taken (e.g. “We apologise for the delay, your delivery date has been changed to … and we have granted a 10% discount …”).
  • Verification of the goal: Finally, the agent assesses whether the problem has been resolved – whether the goal (customer satisfaction/ticket closure) has been achieved. If not (e.g. the customer responds that this is not enough), the agent can repeat the cycle, taking into account additional information from the customer.

This is how an AI agent works: it does not stop at one answer, but strives to close the case. Several layers of technology operate behind the scenes of such an agent. First and foremost, at its core is usually an AI model, often a large language model (LLM) such as GPT-4, which is responsible for understanding commands, context and decision-making. In addition, the agent has access to tools and interfaces – it can perform actions through integrations with other systems (e.g. databases, internal applications, external service APIs). It also has a memory mechanism that allows it to store information about the status of a task and the results of individual steps (to learn on the fly and maintain a longer context than a single interaction). The architecture often includes a module referred to as an orchestrator or planner – it breaks down the goal into subtasks and assigns them for execution (sometimes there are even multiple specialised mini-agents for different tasks, supervised by a supervisor agent). All these elements work together so that the agent can reason, act and verify the effects in a loop until it achieves the desired result.
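
The reason–act–verify loop described above can be sketched in a few lines of Python. This is a simplified illustration, not a production pattern: call_llm() stands in for whichever LLM API is used, and the entries in TOOLS stand in for real CRM or order-system integrations.

```python
# Minimal agent loop: plan -> act -> verify, repeated until the goal is met.
# call_llm() and the TOOLS entries are placeholders for a real LLM API and
# real system integrations (CRM, order database, e-mail).

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; a real version would query a hosted model."""
    return "offer_discount"  # pretend the model chose this action

TOOLS = {
    "check_order": lambda ctx: {"status": "delayed", "days_late": 4},
    "offer_discount": lambda ctx: {"action": "10% discount coupon issued"},
    "reschedule_delivery": lambda ctx: {"action": "new delivery date confirmed"},
}

def run_agent(goal: str, max_steps: int = 5) -> list:
    memory = []                                    # context kept between steps
    context = TOOLS["check_order"](None)           # 1. gather contextual data
    memory.append(("context", context))
    for _ in range(max_steps):
        # 2. analyse possible actions: ask the model which tool to use next
        decision = call_llm(
            f"Goal: {goal}. Context: {context}. History: {memory}. "
            f"Choose one of {list(TOOLS)} or answer 'done'."
        )
        if decision == "done":
            break
        result = TOOLS[decision](context)          # 3. take action via a tool
        memory.append((decision, result))
        if "action" in result:                     # 4. communicate the result
            memory.append(("message_to_customer", result["action"]))
            break                                  # 5. goal reached, stop the loop
    return memory

print(run_agent("resolve a complaint about a delayed delivery"))
```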

Types of AI agents

We distinguish several categories of AI agents, depending on how they operate and their degree of autonomy. The most commonly described ones include:


Reactive agents

Reactive agents – the simplest form, responding to current stimuli according to established rules, without building a plan. Such an agent does not “think” about future steps – it acts a bit like a machine that selects the appropriate response or action based on the event that has occurred. Traditional script-based chatbots can be considered a form of reactive agent (they respond to specific questions without going beyond them).

Deliberative agents

Planning (deliberative) agents – more advanced, able to plan the sequence of actions needed to achieve a goal. Such an agent decides for itself what to do next. It uses AI models to reason and predict outcomes, divides the task into steps, and then executes them. Most of today’s innovative agents (e.g. those based on LLM such as GPT) are planning agents – the user gives them a goal and they determine the plan themselves (e.g. “find information, summarise it and generate a report”).

Multi-agent systems

Multi-agent systems – these are more complex platforms where many agents cooperate with each other or even negotiate roles among themselves. Each agent can specialize in a different task, and a meta-agent or orchestration mechanism oversees the whole process. The goal is to divide a very complex problem into parts between several “virtual employees.” This type of solution was presented, for example, by Unilever – in collaboration with Microsoft and Capgemini, the company created a network of AI agents on its e-commerce website, where one agent acts as an orchestrator, directing customer queries to the appropriate specialized agents (e.g., product recommendation agent, order handling agent, culinary assistance agent). Together, these agent-based microservices provide consistent, personalized customer service – something that a single chatbot would not be able to achieve.
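
A multi-agent setup of this kind can be illustrated with a very small sketch: an orchestrator inspects the incoming query and hands it to a specialised agent. The keyword-based routing and the three agents below are hypothetical stand-ins (a real orchestrator would typically use an LLM to classify the query), not a description of Unilever’s actual architecture.

```python
# Sketch of an orchestrator delegating customer queries to specialised agents.
# The keyword-based routing is a deliberately naive stand-in for what would
# normally be an LLM-based classifier.

def recipe_agent(query: str) -> str:
    return "Here is a dinner recipe using the products you mentioned: ..."

def product_info_agent(query: str) -> str:
    return "Here are the ingredients and nutritional values: ..."

def order_agent(query: str) -> str:
    return "The recommended products have been added to your basket."

SPECIALISTS = {
    "recipe": recipe_agent,
    "ingredient": product_info_agent,
    "buy": order_agent,
}

def orchestrator(query: str) -> str:
    """Route the query to the first specialised agent whose topic matches it."""
    for keyword, agent in SPECIALISTS.items():
        if keyword in query.lower():
            return agent(query)
    return "No specialised agent matched; escalating to a human."

print(orchestrator("What ingredients are in this soup mix?"))
```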

It is worth noting that the current capabilities of AI agents are largely due to recent advances in generative models. Just a few years ago, most automation was based on rigid rules or narrow ML models; today, models such as GPT-4 can “reason” effectively enough to control software actions. This is why Gartner ranks autonomous agents among the hottest trends – while emphasizing that we are still in the early stages of this journey (the technology needs further development in terms of reasoning, memory, and reliability). Nevertheless, the technical foundations are already strong enough for companies to start experimenting with agents in real-world processes.


Examples of implementations of AI agents in companies

If AI agents are more than just an industry novelty, it is worth asking: where are they already being used today? Below are a few specific examples of organizations that have incorporated AI agents into their operations:

GitHub (Microsoft)

GitHub (Microsoft) – the software world is buzzing about GitHub Copilot, initially a code suggestion assistant, now being developed into a full-fledged coding agent. The latest feature, “Copilot Agent,” can independently perform entire programming tasks based on reported issues – for example, generate code for a new feature, test it, and suggest a pull request with changes. The difference compared to a regular assistant is that this agent iterates over its own output (it corrects errors itself if the tests fail) and can close side threads of a task that the programmer did not explicitly mention. In practice, Copilot can, for example, automatically prepare the environment, clone the repository, make changes to the code, and push them for review – doing so asynchronously, in the background, while the programmer can do something else. GitHub has made this feature available to Enterprise customers, emphasizing that all agent changes are still subject to human review, but it is already clear that code can be written and corrected by AI working as a “co-programmer”. This is a huge change in the way developers work.

Vibe coding in GitHub Copilot: Agent mode.

Unilever

Unilever, a giant in the FMCG industry, has used AI agents to improve customer service on its brand websites (e.g., e-commerce sites selling cosmetics and food). In 2024, the company, in collaboration with Microsoft, presented a solution based on a multi-agent system instead of a single chatbot. When a consumer asks a question on the website (e.g., requests a recipe or advice on which product to choose), the query is sent to an AI orchestrator, which breaks it down into smaller tasks and forwards them to specialized agents: one agent searches for recipes using Unilever products, another provides information about ingredients and nutritional values, and yet another can add recommended products to the shopping cart. This entire “fleet” of agents works in harmony to provide the customer with a comprehensive, contextual response and a ready-to-go action (e.g., a shopping list based on the recipe) — something a traditional chatbot would not be able to provide. Unilever’s project has shown that multi-agent architecture can significantly reduce the time customers spend searching for information and products, making the experience more intuitive and personalized.

Impact on working methods and organizational structure

Since AI agents can take over entire tasks, a key question arises: how will this affect people and organizations? The most important change is the shift in the boundary between what an employee does independently and what they merely supervise. Until now, even advanced automation (RPA, classic algorithms, and even machine learning models) has been supportive in nature—it has sped up activities, organized data, and suggested decisions, but the human remained in charge of the process. Now, to a certain extent, the AI agent itself becomes the executor and decision-maker. Even if today these decisions concern relatively simple matters (e.g., granting standard compensation to a customer, generating a report according to a template), this is a qualitative novelty – the IT system operates within a given objective and chooses the means to achieve it on its own.

As a result, AI agents are taking on a new role in the work structure. They are not “living employees,” but they do not fit into the category of traditional tools either. This raises a number of organizational questions: How to supervise the agent’s actions? Who is responsible for mistakes made by AI? How can the company’s organizational culture and ethical principles be communicated to the agent? What tasks should not be delegated to a machine? Companies that implement agents already need to define new policies and procedures to use this technology responsibly.

The role of humans in working with AI agents is changing from that of a performer to a planner, trainer, and controller. Employees will increasingly set goals and verify results rather than personally performing each step of the process. For example, the marketer of the future will not create 100 variants of a mailing campaign themselves, but will supervise an agent generating these messages and select the best ones for implementation. An HR specialist will not manually sift through hundreds of resumes, but will teach an agent what criteria to use in the pre-selection process and then evaluate only the most promising candidates selected by AI. Instead of spending hours on routine work, people can focus on creative, strategic, and relational tasks—those that machines cannot replace. This is confirmed by analyses: according to an OpenAI study, approximately 80% of jobs may have at least 10% of their tasks automated by generative AI, but at the same time, new responsibilities will arise related to quality control, model tuning, and innovation management.

Organizations will likely have to redefine many roles and processes as AI agents advance. There will be a demand for new skills: from creating effective prompts and scenarios for agents, through analyzing AI-generated data, to designing entire ecosystems for human-agent collaboration. New positions may even emerge, such as “AI process designer,” “AI ethics trainer,” or “human-agent collaboration manager.” Importantly, it will not be only the IT department that is involved – interdisciplinary teams will be needed, combining technological expertise with domain business knowledge and an understanding of human factors.

The wider deployment of agents will also require trust and new forms of oversight. It is impossible to manually monitor millions of micro-decisions made by AI, which is why Gartner predicts that by 2028, 40% of CIOs will demand the implementation of so-called “guardian agents” – meta-systems that supervise other AI agents. These meta-agents are designed to ensure that the fleet of agents operates in accordance with company policy, does not violate security or privacy rules, and responds immediately if anomalies are detected (e.g., by shutting down a rogue agent). This shows the scale of the challenge: when AI permeates many processes, higher-level control is needed, otherwise managing it will exceed human capabilities.
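
What a “guardian agent” might look like in code is necessarily speculative, but the core idea of a meta-level policy check can be sketched simply: every action proposed by a worker agent passes through a guardian that enforces company policy before anything is executed. The policy rules and field names below are invented for illustration.

```python
# Speculative sketch of a guardian (meta) agent: it does no business work itself,
# it only reviews and, if necessary, blocks actions proposed by other agents.
# The policy values below are invented for the example.

FORBIDDEN_ACTIONS = {"delete_customer_data", "send_unapproved_offer"}
MAX_DISCOUNT_PERCENT = 15

def guardian_review(proposed_action: dict) -> bool:
    """Return True if the proposed action complies with policy, False to block it."""
    if proposed_action["name"] in FORBIDDEN_ACTIONS:
        return False
    if (proposed_action["name"] == "grant_discount"
            and proposed_action.get("percent", 0) > MAX_DISCOUNT_PERCENT):
        return False
    return True

def execute_with_oversight(proposed_action: dict) -> str:
    if not guardian_review(proposed_action):
        return f"Blocked by guardian: {proposed_action['name']}"
    return f"Executed: {proposed_action['name']}"

print(execute_with_oversight({"name": "grant_discount", "percent": 10}))
print(execute_with_oversight({"name": "grant_discount", "percent": 40}))
```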

In summary, the emergence of AI agents points to a direction in which companies are becoming hybrid organisms, with some of their “employees” being algorithms. These “virtual employees” can work 24/7, process vast amounts of data in seconds, and perform multiple tasks simultaneously. Companies that learn to take advantage of this will gain a competitive advantage not only in cost efficiency, but also in speed of adaptation and innovation. As Nvidia CEO Jensen Huang put it, we are facing a world where we will have “an entire population of digital workers” – in sales, service, administration – and just like human workers, they will need to be properly organized and coordinated to work with us to achieve the company’s missions. This is a huge cultural change: we need to learn to trust automation where human control was previously required, while establishing clear rules of responsibility and ethics for AI. It is no longer a question of whether AI agents will succeed, but how we will prepare our organizations for their arrival.

How to start implementing AI agents?

For many companies, the prospect of implementing AI agents may sound abstract or overwhelming. However, it is worth approaching the topic step by step – with the mindset of an experimenter, not a revolutionary who wants to change the entire business at once. Here are some practical tips on how to get started with AI agents in your organization:

  1. Choose a small, specific area to pilot. Ideally, this should be a process that is repetitive, data-driven, and relatively simple to automate. This could be generating a monthly report, qualifying sales leads, pre-screening resumes, or analyzing customer feedback. Make sure you have success metrics (KPIs) for this process – this will allow you to assess whether the agent has actually improved performance (lead time, costs, satisfaction, etc.).
  2. Give people a sense of control and clearly define roles. Before an agent joins the team, discuss with your employees which tasks will be taken over by AI and which still require human creativity or empathy. The key is to relieve people of monotonous tasks, not to replace them entirely. It is a good idea to appoint someone to supervise the agent – someone who will “keep them on a leash,” especially in the beginning. This will help the team feel that the tool is helping them, not threatening them.
  3. Treat the agent like a new employee. It may sound strange, but the “HR” approach really works. Design the agent’s job description – clearly define what it can do on its own and what it is not allowed to touch without approval. Onboard it: feed it data and business rules, and show it (through appropriate prompts and tests) what standards apply in the company. At the beginning, monitor and evaluate its “performance” – e.g., review the agent’s activity logs, verify the correctness of results, and collect feedback from employees working with the AI. Just like a new team member, the agent will need some training and correction before you can fully trust it (a minimal sketch of such a “job description” follows after this list).
  4. Take care of security and ethics. Even if you are starting with a small experiment, consider data protection, regulatory compliance, and ethical use of AI from the outset. Determine which systems the agent has access to and where you draw the line. Think about contingency mechanisms—for example, if the agent performs undesirable actions, how will you detect and correct them? Transparency and clear rules will help you gain management approval and team trust in new solutions.
  5. Learn iteratively and share knowledge. After implementing the first agent, analyze the results: what went well and where did problems arise? Draw conclusions and refine the operating model. Then select the next area for automation, or scale up the original solution if it has proven successful. Be sure to share your experiences within the organization – other departments can benefit from your lessons. Build AI competencies together by organizing, for example, workshops on creating prompts or hackathons for the best idea for using agents. This culture of experimentation will help your company transition more smoothly into the AI era.
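
As mentioned in point 3, one practical way to write down an agent’s “job description” and its limits is an explicit configuration that the surrounding code enforces (default-deny, human approval for sensitive actions, full logging). A minimal sketch under those assumptions follows; the field names and tool names are illustrative, not a standard.

```python
# Illustrative "job description" for an agent written as an explicit config.
# Field names and tool names are made up for the example; the point is that
# the agent's limits live in one reviewable place and the runtime enforces them.

AGENT_PROFILE = {
    "name": "report-assistant",
    "goal": "prepare the monthly sales report",
    "allowed_tools": ["read_sales_db", "generate_pdf", "send_to_reviewer"],
    "forbidden_tools": ["send_external_email", "modify_sales_db"],
    "requires_human_approval": ["send_to_reviewer"],
    "log_every_action": True,
}

def authorize(tool_name: str, profile: dict) -> str:
    """Decide how a requested tool call should be handled."""
    if tool_name in profile["forbidden_tools"]:
        return "deny"
    if tool_name in profile["requires_human_approval"]:
        return "ask_human"
    if tool_name in profile["allowed_tools"]:
        return "allow"
    return "deny"  # default-deny anything not explicitly listed

for tool in ["read_sales_db", "send_to_reviewer", "send_external_email"]:
    print(tool, "->", authorize(tool, AGENT_PROFILE))
```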

Finally, let’s remember: the era of AI agents is just beginning. Technology will evolve, but it is people who must decide how to use it. Companies that gain initial experience with autonomous agents today may be one step ahead of the competition tomorrow, with more adaptable, innovative, and routine-free teams. AI agents are not magic or a solution to every business problem. However, when treated wisely, they can become loyal “co-workers” who will perform many tedious tasks for us, leaving us with what is really important: creativity, customer relations, and a vision for growth. And that’s a pretty good reason to get started, isn’t it?

Author

Marcin Stasiak

Product Solution Advisor

A solutions architect who translates complex technology into real value for users and marketing teams. He focuses on ensuring that even the most advanced systems work efficiently, intuitively, and in line with business needs.


You have a system. You have data. You need someone to connect it all.

Instead of a major transformation, a well-thought-out pilot program. Instead of theory, an effective agent who gets the job done.

We will help you choose the process, design the solution, and tailor the AI model that fits your organization.
