
{"id":148472,"date":"2026-04-08T11:28:21","date_gmt":"2026-04-08T11:28:21","guid":{"rendered":"https:\/\/mycryptomania.com\/?p=148472"},"modified":"2026-04-08T11:28:21","modified_gmt":"2026-04-08T11:28:21","slug":"from-chatbots-to-agentic-ai-the-accountability-problem","status":"publish","type":"post","link":"https:\/\/mycryptomania.com\/?p=148472","title":{"rendered":"From Chatbots to Agentic AI: The Accountability Problem"},"content":{"rendered":"<p>On February 5, 2026, Anthropic and OpenAI each released a more autonomous kind of AI. Anthropic introduced Claude Opus 4.6 and wrote about \u201cagent teams,\u201d while OpenAI released GPT-5.3-Codex, which it describes as an agentic coding model for long-running technical work. Both launches happened on the same day. They were presented to the public mainly as product rollouts and the next stage of AI capability, while questions of responsibility and oversight stayed largely outside the\u00a0frame.<\/p>\n<p>In <a href=\"https:\/\/medium.com\/coinmonks\/the-intelligence-we-rent-517ac34c46ad\"><strong><em>The Intelligence We Rent<\/em><\/strong><\/a>, we argued that the AI we use every day is not a neutral tool. It is a centralized infrastructure built on behavioral extraction: a system designed to make itself indispensable by making you legible, predictable, and monetizable. We called it <em>rented intelligence<\/em>. While you use it, someone else owns it and profits from what it learns about\u00a0you.<\/p>\n<p>This article asks the next question. 
What happens when that rented intelligence stops just answering and starts\u00a0acting?<\/p>\n<h4><strong>When AI Starts\u00a0Acting<\/strong><\/h4>\n<p>A few years ago, large language models and generative AI were barely part of the public conversation, let alone something that could reshape how people work or handle everyday\u00a0tasks.<\/p>\n<p>Now attention is moving toward AI agents, or agentic AI: systems designed not just to respond, but to perceive, reason, and act with varying degrees of autonomy. Unlike the chatbots people have already grown used to, these systems connect to other software, carry out multi-step tasks, and keep operating with little or no direct human input. Agents can break an objective into steps, call APIs, write code, search databases, send requests, measure their own output, adjust their approach, and continue.<\/p>\n<p>Eighty-two percent of organizations plan to integrate AI agents within three years, and 10% are already using them. These figures come from Capgemini\u2019s <em>Generative AI in Organizations<\/em> report (<a href=\"https:\/\/www.capgemini.com\/insights\/research-library\/generative-ai-in-organizations-2024\/\">https:\/\/www.capgemini.com\/insights\/research-library\/generative-ai-in-organizations-2024\/<\/a>), published in July 2024, at a moment when the governance frameworks needed to manage these systems remained, in most organizations, nonexistent.<\/p>\n<p>Nearly half of the organizations running autonomous decision-making systems (systems that book appointments, resolve customer disputes, process medical documentation, and manage supply chains) have built no solid architecture for accountability.<\/p>\n<p>Responsibility starts getting harder to locate once systems move from answering questions to carrying out tasks. The model generates part of the output, the agent executes part of the process, and the software environment shapes part of the result. 
Then a human appears at the end, sometimes to approve, sometimes to absorb the risk, sometimes simply because the law still needs a name somewhere on the\u00a0line.<\/p>\n<p>That is also why \u201chuman in the loop\u201d is not enough on its own. A person reviewing a system after it has already gone through thousands of actions is not the same as a person who still controls what the system is doing. When the system moves faster than the review, oversight weakens into something much closer to a formality than the phrase suggests. MIT Sloan\u2019s 2026 coverage has already started warning that agentic AI is not ready for blind trust at scale, partly because hallucinations, prompt-injection risks, and operational errors do not go away just because the system feels more\u00a0usable.<\/p>\n<h4><strong>What Case Studies Leave\u00a0Out<\/strong><\/h4>\n<p>The case studies look impressive, no doubt. They are built around the numbers companies are most eager to publish: faster resolution, lower costs, hours saved, better throughput. But those numbers show only one side of the\u00a0story.<\/p>\n<p>They show what the system sped up. They say very little about what it mishandled, who noticed, how long it took to notice, or who ended up carrying the consequences once the mistake had already entered a real workflow.<\/p>\n<p>AtlantiCare deployed <a href=\"https:\/\/www.oracle.com\/health\/clinical-suite\/clinical-ai-agent\/\">Oracle\u2019s Clinical AI Agent<\/a> to handle medical documentation, and the reported result was a 41% reduction in documentation time, saving clinicians around 66 minutes per day. On paper, that sounds like exactly the kind of efficiency any healthcare system would want. But in healthcare, documentation errors can have severe consequences. A wrong entry in a patient record can turn into a wrong prescription, a missed diagnosis, or a preventable death.<\/p>\n<p>So who signs the documentation the agent produced? Who carries the liability if it is wrong? 
In most cases, it\u2019s still the clinician, even when the whole point of the system was to save them time and reduce the amount of direct attention the task would otherwise require.<\/p>\n<p>This is where the accountability problem becomes very concrete. The agent can produce the document, shape the record, and influence what happens next, but it carries no legal responsibility of its own. It cannot answer for an error, defend a decision, or bear liability. That responsibility stays with people and institutions, usually with the clinician who signs, the organization that deployed the system, and the patient who may have to live with the\u00a0result.<\/p>\n<h4><strong>The Terms That Blur Responsibility<\/strong><\/h4>\n<p>The common language around agents could be described as\u2026convenient. \u201cGuardrails\u201d is one example. The word suggests a contained problem. The fact is, a model can be bounded and still be inserted into a workflow where responsibility is vague, delayed, or quietly pushed onto whoever happens to sign the final document.<\/p>\n<p>The same goes for \u201corchestration\u201d. It may sound like someone is fully conducting the process. But often what it really means is that several agents, tools, permissions, and systems are now acting across the same chain, while the person supposedly overseeing them cannot fully inspect the chain end to end. 
<a href=\"https:\/\/www.mckinsey.com\/capabilities\/risk-and-resilience\/our-insights\/deploying-agentic-ai-with-safety-and-security-a-playbook-for-technology-leaders\">McKinsey\u2019s own guidance<\/a> gets closer to the real issue than a lot of the softer marketing language does: once systems move from generating content to making decisions and taking action at machine speed, governance has to define scope, inventory, ownership, and auditability.<\/p>\n<p>If companies want to deploy agents into consequential workflows, three conditions should be non-negotiable.<\/p>\n<p><strong>Traceability.<\/strong> Every decision an agent makes must be traceable, logged, and legible to a human who was not involved in building the\u00a0system.<\/p>\n<p><strong>Assigned responsibility.<\/strong> There must be a named human or institution that carries legal and ethical responsibility for the agent\u2019s actions. The agent acts on behalf of someone. That someone must be identifiable, reachable, and\u00a0liable.<\/p>\n<p><strong>Real interruption.<\/strong> A human must be able to stop the loop, effectively and immediately, not just in principle.<\/p>\n<p>As tech founder <strong>Yon Raz-Fridman<\/strong> observed in a recent discussion on the evolution of agentic AI: \u201cFor our entire lives, technology has been a tool. It\u2019s a puppet, and we\u2019re the puppet master. That era is coming to an\u00a0end.\u201d<\/p>\n<p>The industry presents this as inevitable. But is it? We believe it\u2019s a choice, or a series of choices, made by specific companies, often under specific governmental pressures, with specific consequences for everyone else. 
The problem is that right now, most of the choices setting the trajectory of your digital life and shaping your digital rights are being made without\u00a0you.<\/p>\n<p><em>To learn more about how SourceLess approaches AI within a wider ecosystem of digital identity, infrastructure, and user control, visit<\/em> <a href=\"http:\/\/sourceless.net\/\">http:\/\/sourceless.net<\/a>.<\/p>\n<p><a href=\"https:\/\/medium.com\/coinmonks\/from-chatbots-to-agentic-ai-the-accountability-problem-3b671c69df32\">From Chatbots to Agentic AI: The Accountability Problem<\/a> was originally published in <a href=\"https:\/\/medium.com\/coinmonks\">Coinmonks<\/a> on Medium, where people are continuing the conversation by highlighting and responding to this story.<\/p>","protected":false},"excerpt":{"rendered":"<p>On February 5, 2026, Anthropic and OpenAI each released a more autonomous kind of AI. Anthropic introduced Claude Opus 4.6 and wrote about \u201cagent teams,\u201d while OpenAI released GPT-5.3-Codex, which it describes as an agentic coding model for long-running technical work. Both launches happened on the same day. 
They were presented to the public mainly [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":148473,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-148472","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-interesting"],"_links":{"self":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/posts\/148472"}],"collection":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=148472"}],"version-history":[{"count":0,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/posts\/148472\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/media\/148473"}],"wp:attachment":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=148472"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=148472"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=148472"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}