In today’s hype cycle, “AI” has become shorthand for “Agents,” which is often shorthand for “Chatbots.” We celebrate large models that can write essays or pass CPA exams, but in financial services, the real question isn’t what they can say. It’s what they can do.
A few years ago, I made the case that “Little AI” was about accessibility. Now it’s about execution and governance. Banks are discovering that massive, passive LLMs are breathtaking in conversation but brittle in workflows that demand precision, audit trails, and policy alignment. A chatbot can sound brilliant and still be useless or, worse, non-compliant.
A digital banking executive captured it well: “My AI can write the perfect email about an overdraft fee, but it can’t reverse the fee or tell me why it happened or even prevent it from happening again.”
This is the Big AI trap: huge linguistic capability, no reliable connection to systems, and no ability to perform regulated actions with auditability. What institutions actually need isn’t a one-size-fits-all supermodel. It’s a properly governed stack where every component has a job and every action is supervised by people.
The new Little AI isn’t one model; it’s a focused multi-model system built to solve a specific problem: AI in Action.
The most effective AI architectures coming out of financial services today follow a clear division of labor:
The Calculator (Predictive ML): Auditable, fast, and mathematically precise. Built for credit, fraud, and risk.
The Translator (Small Language Model / SLM): Lightweight, verifiable, and controllable. Tuned for intent, policy interpretation, and compliant communication.
The Doer (Agent + MCP): The execution layer that actually updates systems, triggers workflows, moves money, or flags exceptions and always under human oversight.
Small parts, governed tightly, working together. That’s where reliability and solutions come from.
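That division of labor can be pictured as three narrow interfaces with a human gate. A minimal Python sketch follows; every name here (Case, calculator, translator, doer) is illustrative, not a real banking API, and the components are stubs standing in for a trained model, a tuned SLM, and an integrated agent:

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """A single work item moving through the governed stack."""
    customer_id: str
    description: str
    fraud_score: float = 0.0          # set by the Calculator
    draft_message: str = ""           # set by the Translator
    actions: list[str] = field(default_factory=list)  # the Doer's audit trail

def calculator(case: Case) -> Case:
    # Predictive ML: a fast, auditable score (hard-coded stub here).
    case.fraud_score = 0.92
    return case

def translator(case: Case) -> Case:
    # SLM: policy-constrained communication, not free-form generation (stub).
    case.draft_message = f"We are reviewing your report: {case.description}"
    return case

def doer(case: Case, human_approved: bool) -> Case:
    # Agent: touches systems only under human oversight; every action
    # is appended to the case so the trail is reviewable end to end.
    if human_approved:
        case.actions.append("ticket_opened")
    else:
        case.actions.append("escalated_to_reviewer")
    return case
```

The point of the sketch is the shape, not the stubs: each component has one job, hands a typed artifact to the next, and nothing reaches a system of record without the human gate.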
Let’s take a real-world example: a transaction dispute. A customer challenges a $400 steakhouse charge for a restaurant they claim they never visited.
The Big AI way: A massive LLM reads the chat, hallucinates a refund policy, and tells the customer, “Don’t worry, I’ve taken care of it,” without ever connecting to the core banking system. No math. No action. No real resolution.
The Little AI in Action way:
ML: Looks at customer transaction history and behavior and scores fraud likelihood at 92%.
SLM: Detects urgency, classifies the case in the ticketing system, drafts a compliant message per policy guardrails.
Agent: Initiates the dispute, issues a provisional credit, blocks the card, and escalates to a human reviewer.
Each step is controlled, every decision is auditable, and humans stay in the loop for oversight and governance.
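The three steps above can be sketched end to end. Everything in this Python sketch is an assumption for illustration: the 0.8 threshold, the function names, and the toy scoring heuristic (a real Calculator would be a trained model, not an average-spend ratio):

```python
def score_fraud(history: list[float], disputed_amount: float) -> float:
    # Calculator (ML stand-in): flag charges far above the customer's norm.
    avg = sum(history) / len(history)
    return min(0.99, disputed_amount / (avg * 10))

def classify_and_draft(score: float) -> tuple[str, str]:
    # Translator (SLM stand-in): route the case and draft a message
    # from pre-approved, policy-compliant templates.
    if score >= 0.8:  # illustrative threshold, not real policy
        return ("likely_fraud",
                "We've opened a dispute and issued a provisional credit "
                "while we investigate.")
    return ("needs_review",
            "We've logged your dispute and a specialist will follow up.")

def act(category: str) -> list[str]:
    # Doer (agent stand-in): every action is recorded for the audit
    # trail, and every path ends with a human reviewer.
    steps = ["open_dispute"]
    if category == "likely_fraud":
        steps += ["issue_provisional_credit", "block_card"]
    steps.append("escalate_to_human_reviewer")
    return steps

# The $400 steakhouse charge against a modest spending history:
score = score_fraud(history=[22.0, 35.0, 18.0, 41.0], disputed_amount=400.0)
category, message = classify_and_draft(score)
audit_log = act(category)
```

Notice that even the high-confidence path ends in `escalate_to_human_reviewer`: the agent accelerates the resolution, but a person still closes the loop.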
The Future of AI Is Small, Coordinated, Governed Systems Built to Resolve a Specific Flow, Not Giant Multipurpose Models.
The next era of AI in financial services won’t be led by one massive model attempting to do everything. It will be shaped by coordinated, verifiable components that specialize, cooperate, and leave a clear regulatory footprint. Little AI has become the foundation of Action Architecture.
To build real intelligence capital, institutions need predictable math, controllable language models, supervised agents, and a governance framework that ties them together.
The FinServs that win won’t be the ones with the biggest model; they’ll be the ones with the strongest intelligence capital: governed ML, controllable language models, supervised agents, and the discipline to connect them to outcomes.
The future of AI is little actions that lead to big results.
