Abstract: The 2024 CLA Lenders Summit Panel, “Everything You Always Wanted to Know About AI (But Were Afraid to Ask)”, explores AI’s role in industries like banking, focusing on lending applications. Experts from TD Bank’s Layer 6 AI lab discuss misconceptions, AI literacy, emerging trends like generative AI, and critical issues such as bias, privacy, and trustworthy AI. Key insights include the importance of explainability, robustness, and human oversight in AI systems, along with practical applications like fraud detection and governance strategies. 👉 Check out the full video here. 👀
Evolution of AI in Lending
Holly: AI has evolved significantly, but its application in lending remains limited. Initially, we relied on rule-based programming, which required extensive human input. Now, machine learning enables us to derive insights directly from data. Still, we face barriers to integrating advanced AI solutions throughout the credit lifecycle.
Jesse: For example, AI in lending started with batch models for collections. Over time, we moved to API-based solutions, and now we’re exploring pre-approvals and adjudication. However, the transition isn’t complete, and challenges remain.
What are some common misconceptions about AI?
Jesse: A big misconception is that deployed AI models are constantly changing or learning in real time. Most real-world models, especially in banking, are static. They are trained on fixed datasets, deployed, and monitored to ensure consistent performance over time.
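Monitoring a static model in production usually means watching for data drift rather than retraining on the fly. The panel doesn’t name a specific metric, but one common banking practice is the Population Stability Index (PSI), which compares the score distribution seen at deployment to current traffic. A minimal sketch:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples.

    Scores are bucketed into equal-width bins over the expected
    sample's range; PSI sums (a - e) * ln(a / e) over bin shares.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def shares(scores):
        counts = [0] * bins
        for s in scores:
            i = min(int((s - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # A small floor avoids division by zero in empty bins.
        return [max(c / len(scores), 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # scores at deployment
drifted = [min(1.0, s + 0.3) for s in baseline]    # shifted live traffic

print(psi(baseline, baseline))        # ≈ 0: no drift
print(psi(baseline, drifted) > 0.25)  # a common rule of thumb flags this
```

A PSI near zero means the live population still looks like the training-time population; values above roughly 0.25 typically trigger a review of the model rather than any automatic relearning.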
How can I improve my AI literacy to better engage with consultants and protect sensitive information?
Holly: Great question. For a general survey, the Vector Institute offers accessible courses on ethical AI and governance. They provide resources for both business users and researchers.
Jesse: Vector Institute is industry-focused, making it a good starting point for understanding practical applications of AI.
What are the key trends in AI?
Holly: Key trends include the shift toward real-time decisioning through cloud-based solutions, the rise of generative AI, and the use of foundation models. For instance, at TD, we use foundation models for tabular data, creating one robust base model with specialized “head models” for specific tasks.
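The base-model-plus-heads pattern Holly describes can be sketched as a shared encoder computed once per record, with lightweight task-specific scorers on top. This is a toy illustration: the field names, weights, and task names are invented for the example, not TD’s actual stack.

```python
class BaseModel:
    """Shared encoder: turns a raw tabular record into a feature vector."""
    def encode(self, record):
        # Toy featurization over two assumed fields.
        return [record["income"] / 100_000, record["utilization"]]

class Head:
    """Task-specific head: a linear scorer over the shared features."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def score(self, features):
        return sum(w * f for w, f in zip(self.weights, features)) + self.bias

base = BaseModel()
heads = {
    "pre_approval": Head([0.8, -0.5], 0.1),  # illustrative weights
    "collections": Head([-0.2, 0.9], 0.0),
}

record = {"income": 85_000, "utilization": 0.4}
features = base.encode(record)  # expensive shared work happens once
scores = {task: h.score(features) for task, h in heads.items()}
print(scores)
```

The design win is that the costly shared representation is built and maintained once, while each new task only adds a small head trained for that decision.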
How is corporate governance evolving to manage AI risks?
Jesse: Traditional governance structures need updates. For example, aligning privacy, compliance, and model validation under a unified AI risk framework can mitigate conflicts between different oversight groups.
How should we approach bias in AI models?
Jesse: Bias stems from data, not algorithms. Procedural fairness (treating everyone the same) often doesn’t ensure fair outcomes. Instead, substantive fairness focuses on achieving equitable results for different groups.
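Jesse’s distinction can be made concrete with synthetic numbers (not from the panel): applying one identical cutoff to everyone is procedurally fair, yet if historical data gave two groups shifted score distributions, approval rates still diverge. The demographic parity difference below is one common substantive-fairness measure.

```python
def approval_rate(scores, cutoff):
    """Share of applicants at or above the cutoff."""
    approved = [s >= cutoff for s in scores]
    return sum(approved) / len(approved)

# Synthetic groups whose historical data produced shifted score ranges.
group_a = [0.55, 0.62, 0.70, 0.74, 0.81, 0.90]
group_b = [0.40, 0.48, 0.52, 0.58, 0.66, 0.71]

cutoff = 0.60  # the same rule for everyone: procedurally "fair"

rate_a = approval_rate(group_a, cutoff)
rate_b = approval_rate(group_b, cutoff)

# Demographic parity difference: how far apart the approval rates are.
parity_gap = abs(rate_a - rate_b)
print(rate_a, rate_b, parity_gap)
```

Here the identical cutoff approves about 83% of one group but only 33% of the other, a 0.5 parity gap, showing how bias in the data surfaces even when the algorithm treats every application the same way.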
Trustworthy AI Framework
Jesse: Trustworthy AI includes principles like:
- Explainability: decisions can be interpreted and communicated to the people they affect.
- Robustness: models perform consistently under changing conditions.
- Human oversight: people remain accountable for high-stakes outcomes.
We’ve applied these principles in production, such as explainable lending models that improve customer interactions by providing insights into decisions.
How can AI help reduce fraud in real-time transactions?
Jesse: Fraud detection models have only milliseconds to make a decision. The challenge isn’t building the models but engineering the real-time infrastructure to deploy them effectively.
Holly: AI can also be trained to defer complex cases to human decision-makers, balancing automation with human oversight.
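Holly’s point about deferring to humans is often implemented as simple confidence-band routing: auto-decide only when the model is confident, and queue everything in between for review. The thresholds below are illustrative, not values from the panel.

```python
def route(prob_fraud, auto_low=0.05, auto_high=0.95):
    """Auto-decide only at high confidence; defer the rest to a human."""
    if prob_fraud >= auto_high:
        return "auto_block"
    if prob_fraud <= auto_low:
        return "auto_approve"
    return "human_review"

decisions = [route(p) for p in (0.01, 0.50, 0.99)]
print(decisions)  # ['auto_approve', 'human_review', 'auto_block']
```

Widening or narrowing the review band is how a lender tunes the balance between automation volume and the human oversight Holly describes.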
Tal Schwartz: Thank you, Holly and Jesse, for the insightful session. Audience, feel free to follow up with our speakers offline. Thank you, Canadian Lenders Association.
Sign up for our 2025 Summit Series