From Bottleneck to Breakthrough: Transforming AI Approvals in Financial Services

Chris O'Brien and Scot Richardson
May 7, 2025

Financial services organizations are racing to implement AI solutions to revolutionize their operating models and develop differentiated client experiences. However, a critical barrier stands in their way: the approval gauntlet. For most innovations, product and business leaders spend enormous effort securing 50, 60, sometimes more than 100 signoffs across complex processes before going live. Many of these signoffs are chased only after the product is built, just before launch, putting extra pressure on teams and their leaders to push through myriad approval processes.

The advent of AI makes this problem more acute. While AI can dramatically reduce time to innovation, navigating approval processes not designed for this technology can halt progress just as quickly. Due to the highly regulated nature of financial services, these approval processes create significant friction that slows innovation.

The good news? This doesn't have to be your reality. By addressing both organizational readiness and technology architectural readiness—key components of Financial Services 3.0—institutions can transform their approval processes from barriers to enablers of innovation.

Reimagining the Approval Operating Model for AI Innovation

When it comes to approvals, AI shares challenges with other "inflection point" innovations like blockchain or cloud. First movers struggle most to navigate processes across the three lines of defense. The underlying pain points are numerous: approvers hesitant to sign off on unfamiliar technologies; "Catch-22" situations with circular dependencies; lack of knowledge about byzantine approval processes; and mismatched expectations between MVPs and "full production rollouts."

To overcome these challenges, institutions must implement strategic approaches across people, processes, and technology domains.

People: The Human Element of AI Readiness

The most effective approach is dedicating specific risk and control partners to AI implementations. These cross-functional teams should include compliance, legal, and various risk specialists selected for their comfort with change; senior leaders empowered as escalation points; and AI literacy across control functions.

Where controls need to be "codified" into the product itself (e.g., AML for payments, credit risk for loans), these specialists should be on the build team. For other approvers, institutions should designate consistent, AI-literate approvers for each domain.

Training risk and control approvers on AI requires creative approaches. Rather than "Mandatory AI Training by Video," hands-on workshops where stakeholders apply their specialties in mock scenarios or train an LLM prove much more engaging. One global banking client called these "enabler workshops," giving legal and compliance officers real-world experience designing Horizon 3 MVPs. This approach helps identify stakeholders who are curious about new technologies and may become informal champions.

Note that training goes both ways: those governing AI need to educate product and business leaders on their processes, expectations, and concerns, helping to maintain a two-way relationship as things evolve.

Processes: Streamlining the Path to Innovation

Consultants will say that you need to streamline processes before implementing the technology to support them. Unfortunately, many institutions have spent countless calories trying to streamline approvals processes without much to show for it. Process innovation for AI approvals should be seen as dependent on technology to enable and enforce new ways of working. Rather than viewing approvals processes as a set of committee meetings, leaders should see them as a productized digital service—designed for user-friendliness, flexibility, and scale.

With that in mind, institutions should define and publish best practices for “productized” approval processes, treating them as product requirements rather than afterthoughts.

Technology: Infrastructure to Enable Speed and Oversight

Successful AI implementation requires workflows with transparency for business, coding teams, and control functions. These systems document decision trails, record rationales, and maintain version control. Organizations benefit from starting with smaller, lower-risk innovations to build credibility and positive precedents.

Many institutions leverage labs and outside expertise to de-risk early-stage builds. These controlled environments allow for experimentation without immediately confronting the full weight of enterprise compliance requirements. The patterns and practices developed in these sandboxes can then be formalized and documented for broader adoption.

Technical Foundations for AI Approval Success

Governance of AI builds should be standardized and operate in real time. These technical foundations fall into three principal areas:

Explainability & Bias Mitigation

The black-box nature of AI has long been a concern. Forward-thinking institutions address this by assuming bias exists by default and continuously validating otherwise.

"Bias" in AI means the system consistently favors certain outcomes or groups. This can manifest as quantitative bias, where predictions consistently skew in one direction, or demographic bias, where the system treats protected classes differently. Bias detection systems should evolve from reactive checks to proactive safeguards that prevent biased models from reaching production.

Explainable AI techniques help decompose model features and understand which ones drive specific outcomes. By identifying the relative contribution of different factors, these techniques make AI decisions more transparent and defensible to regulators and internal governance bodies alike.
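One common model-agnostic way to estimate which features drive outcomes is permutation importance: shuffle one feature at a time and measure how much accuracy drops. The toy model and data below are illustrative assumptions, not a production scorer.

```python
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's prediction matches the label."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, seed=0):
    """Accuracy drop when each feature column is shuffled:
    a larger drop means the feature drives more of the model's decisions."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        drops.append(base - accuracy(model, X_perm, y))
    return drops

# Hypothetical model that only looks at the first feature.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
drops = permutation_importance(model, X, y, n_features=2)
# Shuffling the ignored second feature produces zero accuracy drop,
# so its importance is exactly 0.
```

Reports built on measurements like these give regulators and governance bodies a concrete, reproducible answer to "why did the model decide this?"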

Smart monitoring systems track model "drift" over time—when accuracy deteriorates because the real-world environment has changed since training. For example, a model trained on pre-pandemic financial behavior became less accurate as consumer spending patterns shifted during lockdowns. These systems enable timely recalibration before problems emerge in production. This dynamic oversight replaces the traditional static model approval approach with continuous validation, ultimately enabling faster innovation with greater confidence.
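A widely used drift metric is the population stability index (PSI), which compares the binned distribution of a feature at training time against live data. The sketch below is minimal, and the 0.2 alert threshold is a common rule of thumb, not a regulatory requirement.

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a training-time sample
    (`expected`) and a live sample (`actual`) of one feature."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def shares(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # small floor avoids log(0) on empty bins
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 100 for i in range(100)]       # training-time distribution
live = [0.5 + i / 200 for i in range(100)]  # shifted live distribution
drifted = psi(train, live) > 0.2            # trigger a recalibration review
```

Running a check like this on a schedule is one way to make "continuous validation" concrete: the model is re-examined against live data rather than approved once and forgotten.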

Secure AI Workflows & Infrastructure

Leading institutions apply zero-trust principles with continuous validation of access rights, acknowledging that AI systems often require broader data access than conventional applications.

Real-time AI observability enables quick intervention when unexpected behaviors occur. The most effective systems treat monitoring as an ongoing conversation between business, security, and compliance stakeholders rather than a pass/fail checkpoint.

Successful organizations balance governance with accessibility, avoiding overly restrictive policies that drive shadow AI adoption. They recognize that prohibitive policies often backfire, pushing innovation underground where it can't be monitored or managed. Instead, they focus on education and appropriate guardrails that channel innovation productively.

Standardization & AI Architecture Patterns

Creating reusable, pre-approved architectural patterns dramatically simplifies future approvals. Once an organization has successfully navigated approval for a particular pattern, subsequent implementations can reference this precedent.

Forward-thinking institutions develop blueprints for AI deployment, documenting technical specifications alongside control points and governance requirements. Creating a repository of pre-approved AI models and datasets provides safe starting points for new projects. Teams can build upon these foundations with confidence that the baseline components have already passed regulatory scrutiny, allowing them to focus their compliance efforts on novel elements.

Synthetic data proves invaluable for minimizing exposure risk while maintaining realistic testing environments. By generating artificial data that preserves statistical properties without exposing customer information, organizations can develop and test models more freely, accelerating the early stages of development before real data becomes necessary.

Centralized AI governance establishes a model registry that tracks lineage, risk status, and compliance metadata, similar to MLOps practices but adapted for generative AI. These systems document the full lifecycle of each model, from training data sources to approval decisions, creating an audit trail that builds trust with regulators and simplifies ongoing compliance verification.
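A minimal sketch of such a registry is shown below; field names like `risk_tier` and the committee name are hypothetical examples, not an established schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    name: str
    version: str
    training_data_sources: list
    risk_tier: str              # e.g. "low", "medium", "high"
    approved: bool = False
    history: list = field(default_factory=list)  # append-only audit trail

    def log(self, event):
        self.history.append((datetime.now(timezone.utc).isoformat(), event))

class ModelRegistry:
    def __init__(self):
        self._models = {}

    def register(self, record):
        record.log("registered")
        self._models[(record.name, record.version)] = record

    def approve(self, name, version, approver):
        record = self._models[(name, version)]
        record.approved = True
        record.log(f"approved by {approver}")

registry = ModelRegistry()
registry.register(ModelRecord("kyc-screener", "1.0",
                              ["synthetic-kyc-v2"], "high"))
registry.approve("kyc-screener", "1.0", "model-risk-committee")
```

Even a simple structure like this captures the two things auditors ask for most: who approved what, and when.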

AI Approvals Processes: The Best Starting Point for AI Innovation?

Across financial services organizations, innovation and technology leaders are being tasked to implement AI use cases to increase productivity. Given the impact on organizational agility and risk mitigation, why not make approvals processes one of the first priorities for applied AI, and in doing so “Trojan horse” solutions to their own problems? A natural starting point is new product approvals, but a modular, AI-driven approvals platform could be expanded to other approvals processes like data governance, loan approvals, vendor onboarding, and budget management.

Features that are currently easy to implement with AI include automated document generation and review; intelligent process and routing design based on risk profiles; "Next Best Action" recommendations for stakeholders; and cross-functional visibility that prevents duplicate requests and contradictory guidance.
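Risk-profile-based routing, for instance, can be as simple as mapping a request's risk tier to an approver chain. The tiers, scoring questions, and approver lists below are hypothetical illustrations, not a recommended control framework.

```python
# Hypothetical approver chains per risk tier.
ROUTES = {
    "low":    ["line-manager"],
    "medium": ["line-manager", "compliance"],
    "high":   ["line-manager", "compliance", "legal", "model-risk"],
}

def risk_tier(uses_customer_data, is_client_facing, is_generative):
    """Toy scoring: each risk factor present bumps the tier up."""
    score = sum([uses_customer_data, is_client_facing, is_generative])
    return ["low", "medium", "high", "high"][score]

def route(request):
    tier = risk_tier(request["uses_customer_data"],
                     request["is_client_facing"],
                     request["is_generative"])
    return tier, ROUTES[tier]

tier, approvers = route({"uses_customer_data": True,
                         "is_client_facing": True,
                         "is_generative": False})
# A client-facing use case on customer data is routed to the full chain,
# while a low-risk internal tool sees only one approver.
```

The point is not the scoring itself but that routing logic becomes explicit, versionable, and auditable instead of living in committee calendars.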

How Blend Helps Financial Institutions Transform Approval Processes

At Blend, we support financial institutions through this transformation with tailored service offerings.

Blend is an AI services company that provides its clients a combination of AI-related talent, data science and engineering, and intelligent application solutions. We operate across sectors and functions, including serving many of the largest financial services organizations in the world, and we work as a strategic partner to many of the most AI-relevant platforms today.

Transforming Approvals into Competitive Advantage

Financial institutions that transform their approval processes will gain a significant competitive advantage. AI will dramatically increase the pace of product builds, making it imperative to reduce approval cycle times. Consider that large financial services organizations can take 1.5-2 years to launch new client-facing products, with pre-launch approvals taking 3-6 months—and this is before the acceleration promised by GenAI.

As a precedent, Morgan Stanley once launched an onboarding process that let startup vendors complete onboarding in less than a week. Innovation leaders at competitors took notice, given the clear competitive advantage that quickly bringing in new capabilities could provide. Similar differentiation is possible for institutions that streamline their AI approval processes.

By addressing people, process, and technology elements of approvals, financial institutions create the foundation for faster innovation while maintaining appropriate risk management. The transformation requires initial investment but yields returns across the entire innovation portfolio, turning a bottleneck into a strategic capability.

The institutions leading this transformation will not only improve operational efficiency but fundamentally change how they deliver value through AI-powered experiences. Those that hesitate risk falling behind as more agile competitors redefine the industry.

Let us blend in, so you can stand out.
