Strategic leader in banking and risk management with 12 years of experience, specializing in fraud prevention, AML, and advanced analytics. Expert at leveraging machine learning, AI, and GenAI to strengthen enterprise risk strategies, optimize customer authentication, and enhance compliance frameworks.
Proven ability to build and lead high-performing teams, driving delivery within agile models while embedding analytics as business-as-usual practice. Act as a trusted advisor to SVP and EVP leadership, aligning advanced analytics with business priorities, regulatory expectations, and organizational risk appetite.
Recognized for pioneering GenAI & Agentic AI innovation, designing copilots and autonomous agents that streamline investigative processes and create repeatable blueprints for enterprise AI adoption. Known for bridging risk management expertise with advanced analytics, consistently delivering measurable gains in performance, resilience, and growth across the organization.
Overview
12 years of professional experience
Work History
Data Science and Advanced Analytics Lead
TD Bank
01.2025 - Current
Team Leadership & Governance
Drive enterprise-wide data science programs optimizing authentication and digital funnel performance across TD’s fraud and risk domains in Canada and the U.S.
Lead a 20-person blended team (12 direct reports, 3 interns, 5 external consultants) spanning data science, ML engineering, BI/reporting, and business analysis; recruited and built the core team, hiring 5 data scientists, 1 ML engineer, 1 AI engineer, and 1 BI specialist.
Serve as a trusted advisor to SVP and EVP leaders, aligning fraud prevention, funnel optimization, and business growth strategies with enterprise priorities.
Designed and implemented a pod-based operating model, assigning SMEs to dedicated domains—improving delivery velocity, strengthening ownership, and enabling parallel execution of high-impact projects.
Established agile project governance frameworks (JIRA, Confluence, Microsoft Planner, SharePoint) that standardized sprint planning, documentation, and tracking—reducing ad hoc request turnaround time by 30% and improving transparency for stakeholders and auditors.
Maintain strong partnerships with ThreatMetrix, BioCatch, Interac, FraudPoint, Mitek, and TransUnion, ensuring alignment on data quality, integration requirements, and deployment timelines for critical fraud prevention capabilities.
Enterprise Optimization Initiatives & Project Support
Lead multi-country optimization programs in partnership with Deloitte and BCG, aligning fraud prevention with business growth across Canadian and U.S. portfolios.
Deloitte engagement (Canada): Build and institutionalize a Bayesian optimization framework to calibrate onboarding and authentication controls (logins and money transfers). Leverage ML/AI models to identify where optimization delivers the highest lift, targeting a 5–8% funnel performance improvement while keeping fraud capture rates within a 0.5% tolerance.
BCG engagement (U.S.): Establish a genetic algorithm–driven methodology to optimize 5 authentication controls across 3 LOBs and 9 products. Use ML/AI models to pinpoint the most impactful opportunities, embedding a BAU approach to balance fraud prevention with conversion uplift under the same 5–8% / 0.5% tolerance framework.
Established the methodology and workflow foundation in H2O.ai, building a centralized feature repository to accelerate reuse across both reporting and AI/ML initiatives and ensure consistent deployment across optimization projects.
Introduced A/B testing and real-time performance monitoring with fraud operations, enabling continuous evaluation of fraud capture rates and customer experience impacts post-deployment.
Manage and coordinate 5+ external consultants across both firms, streamlining delivery timelines, aligning workstreams, and driving consensus across stakeholders (Fraud Governance, DaaS, Fraud Ops, Business).
Support six CIAM initiatives by automating data provisioning, mapping requirements, and building performance monitoring pipelines—improving visibility into new product rollouts and closing fraud coverage gaps.
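The genetic-algorithm calibration used in these engagements can be sketched in miniature. The code below is a toy illustration, not the production methodology: the conversion and fraud-capture curves are invented stand-ins, and the baseline rate and penalty weight are arbitrary assumptions. It evolves five control thresholds to maximize conversion while keeping fraud capture within a 0.5% tolerance of an assumed baseline.

```python
import numpy as np

rng = np.random.default_rng(42)
N_CONTROLS = 5              # thresholds for 5 authentication controls, each in [0, 1]

def conversion(x):
    # Invented stand-in: looser thresholds (lower values) convert better
    return 1.0 - 0.4 * np.mean(x)

def fraud_capture(x):
    # Invented stand-in: tighter thresholds (higher values) capture more fraud
    return 0.90 + 0.09 * np.mean(x)

BASELINE_CAPTURE = 0.945    # assumed current capture rate
TOLERANCE = 0.005           # stay within 0.5% of baseline

def fitness(x):
    shortfall = max(0.0, (BASELINE_CAPTURE - TOLERANCE) - fraud_capture(x))
    return conversion(x) - 100.0 * shortfall   # heavy penalty for breaching tolerance

def evolve(pop_size=40, generations=60, mut_sigma=0.05):
    pop = rng.random((pop_size, N_CONTROLS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # truncation selection
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, N_CONTROLS)) < 0.5         # uniform crossover
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += rng.normal(0.0, mut_sigma, children.shape)  # Gaussian mutation
        pop = np.clip(children, 0.0, 1.0)
    return max(pop, key=fitness)

best = evolve()
```

The penalty formulation keeps the search unconstrained while making any breach of the fraud-capture tolerance dominate the conversion gain, which mirrors the stated 5-8% uplift / 0.5% tolerance framing.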
GenAI & Agentic AI Innovation
Leading the design and enterprise integration of two in-progress GenAI and Agentic AI solutions in collaboration with Layer6—pioneering the introduction of autonomous intelligence into fraud analytics workflows.
Alert Summarization Copilot (in progress): Designing an LLM-powered copilot that transforms raw alert metadata into structured, triage-ready summaries. Integrated with SharePoint and JIRA playbooks, the copilot is built to reduce manual triage workload, accelerate L2→L3 handoffs, and improve incident response efficiency across fraud, risk, and business stakeholders.
Post-Incident Intelligence Agent (in progress): Developing an autonomous agent that leverages telemetry features (PSI drift, geolocation anomalies, behavioral velocity signals) to generate incident narratives, root cause hypotheses, and audit-ready documentation—strengthening investigative rigor and reducing resolution timelines.
Infrastructure & Orchestration Blueprint: Defined and aligned infrastructure requirements to ensure enterprise-scale deployment, including:
Data & Compute: Databricks (PySpark pipelines), Azure cloud infrastructure, GPU-backed compute for LLM workloads.
GenAI Stack: Azure OpenAI for model serving; LangChain and Hugging Face for orchestration and fine-tuning; scikit-learn and PSI metrics for drift detection.
Integration Layer: APIs and connectors for SharePoint, Confluence, and JIRA to embed agent outputs directly into existing workflows.
Orchestration Platform: Built modular architecture supporting containerized deployment (Docker/Kubernetes), CI/CD pipelines with MLflow, and workflow automation for production readiness.
Governance & Compliance: Defined requirements for model monitoring, lineage tracking, audit logging, and role-based access controls to ensure outputs meet fraud, audit, and regulatory standards.
Acting as program lead, establishing the framework and infrastructure standards for deploying GenAI copilots and autonomous agents across TD fraud analytics as BAU practice.
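The PSI drift signal that the post-incident agent consumes can be computed in a few lines of NumPy. This is a generic sketch of the standard Population Stability Index, not TD's implementation; the decile binning and the 0.1 / 0.2 interpretation bands are common rules of thumb.

```python
import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index of `actual` against the `expected` baseline."""
    # Decile edges from the baseline distribution (inner edges only)
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))[1:-1]
    e_pct = np.bincount(np.searchsorted(edges, expected), minlength=bins) / len(expected)
    a_pct = np.bincount(np.searchsorted(edges, actual), minlength=bins) / len(actual)
    e_pct, a_pct = e_pct + eps, a_pct + eps        # avoid log(0) on empty bins
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # e.g. last quarter's behavioural velocity feature
stable   = rng.normal(0.0, 1.0, 10_000)   # no drift
shifted  = rng.normal(0.8, 1.0, 10_000)   # post-incident distribution shift

# Rule of thumb: PSI < 0.1 stable, 0.1-0.2 moderate drift, > 0.2 significant drift
```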
Fraud Intelligence & Performance Monitoring
Fraud Loss & Performance Visibility: Designed and implemented a fraud loss attribution methodology to quantify and trace dollar losses across the fraud lifecycle, uncovering both immediate and delayed impacts. Delivered an executive-facing Power BI dashboard powered by Databricks to support leadership oversight, root cause analysis, and regulatory audit readiness.
Executive Dashboards: Oversaw delivery of 7 strategic dashboards built by the BI Manager (direct report) and their team of 4 BI Analysts, providing visibility into fraud trends and funnel performance across Canadian and U.S. portfolios. These dashboards are actively used by SVP/EVP leaders to drive fraud prevention and growth strategies.
Alerting & Monitoring: Designed and deployed 60+ production alerts detecting anomalies linked to fraud actor behaviors or system/operational failures. Alerts cover funnel performance, geolocation/device anomalies, and control drift, ensuring early detection and rapid escalation of emerging risks.
Incident & Alert Management: Led the incident management workstream with Fraud, CIAM, and product teams—identifying control gaps from fraud actor behavior analysis and recommending data-driven solutions. Authored both the alert triage and incident management playbooks, streamlining escalation and reducing resolution time through automated routing of alerts to L2/L3 teams.
Data Automation & Monitoring: Automated the majority of data pipelines, reporting layers, and model workflows, enabling enterprise-wide scalability and faster AI/ML delivery. Developed and operationalized a model performance monitoring framework aligned with validation and audit requirements.
Senior Data Scientist
TD Bank
01.2024 - 01.2025
Led the design, development, and deployment of advanced machine learning models to improve Anti-Money Laundering (AML) detection performance and reduce false positives across multiple business segments.
Spearheaded the transition to a Databricks-based big data architecture using PySpark, accelerating processing pipelines, enabling real-time analytics, and shortening model refresh cycles.
Deployed LSTM-based NLP models to analyze unstructured communications linked to customer accounts, uncovering hidden AML risk indicators (e.g., suspicious transaction narratives, behavioral anomalies) and augmenting rule-based detection systems with context-aware insights.
Mentored a team of 8 junior data scientists, introducing agile workflows and model lifecycle documentation standards.
Advocated for ethical and explainable AI by implementing SHAP-based interpretability and aligning model monitoring with validation requirements.
Actively participated in TD’s cross-functional working group on regulatory readiness and enterprise model oversight, contributing to the development of the AML analytics roadmap aligned with OCC and OSFI mitigation strategies.
Partnered with compliance, enterprise risk, and legal teams to assess gaps in model governance, interpretability, and controls—ensuring risk models met regulatory expectations and audit scrutiny.
Collaborated with 20+ stakeholders across Data, Risk, and Technology to support the cloud migration of AML systems to Azure, ensuring scalability, lineage traceability, and operational alignment.
Data Scientist
TD Bank
01.2021 - 01.2024
Designed and deployed four AML optimization models across Canadian and U.S. portfolios (Retail and Commercial Banking), creating a scalable framework that surfaced high-risk cases earlier while deprioritizing noise—reducing L2 and L3 investigation workload by 25–30% and improving case quality.
Partnered with AML advisory teams to embed SME-aligned feature engineering, capturing nuanced monetary behaviors (e.g., structuring patterns, velocity anomalies, geographic risk factors) that enhanced model precision and reduced regulatory blind spots.
Built dynamic Tableau dashboards integrated with Databricks pipelines, delivering near real-time AML risk intelligence and investigation metrics—actively used by senior leadership, compliance officers, and regulators to monitor AML program performance.
Designed and operationalized a prioritization framework using XGBoost with SHAP explainability to rank alerts by business and fraud risk—empowering investigators to focus on the top 10–15% of alerts driving 70%+ of AML exposure.
Collaborated with engineering, business, and IT teams to resolve mapping and data lineage gaps across AML scenarios—strengthening audit trails, improving data quality, and aligning scenario logic with enterprise fraud and compliance standards.
Led a 4-person scenario calibration team, applying statistical threshold tuning and stability testing to align scenarios with TD’s evolving risk appetite, while maintaining compliance with OCC and OSFI expectations and avoiding unnecessary customer impact.
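The concentration effect behind the alert-prioritization framework (a small share of ranked alerts carrying most of the exposure) is easy to demonstrate on synthetic data. The sketch below uses an invented risk score in place of the XGBoost/SHAP model and made-up exposure amounts; it only illustrates the ranking mechanics, not the deployed system.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000

# Invented model score: most alerts look low-risk, a few score high
risk_score = rng.beta(1, 8, n)
# Invented ground truth: loss events concentrate among the highest-scoring alerts
is_loss = rng.random(n) < risk_score**2
exposure = np.where(is_loss, rng.lognormal(10.0, 1.0, n), 0.0)

# Rank alerts by model score and measure exposure captured by the top 15%
order = np.argsort(risk_score)[::-1]
top_k = int(0.15 * n)
captured = exposure[order[:top_k]].sum() / exposure.sum()
print(f"Top 15% of ranked alerts capture {captured:.0%} of total exposure")
```

With real alert data the concentration curve comes from the model's calibration, so the captured share is an empirical measurement rather than a construction as it is here.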
Data Science Consultant
Smith School of Business, Queen’s University
01.2017 - 01.2020
Designed and deployed a Bayesian/probabilistic reputation model for supplier–retailer networks, enabling dynamic trust scoring and strategic partnership optimization—boosting supplier profitability by 28%.
Led enterprise churn prediction for Shopify using Random Forest and Logistic Regression—reducing churn by 15% through targeted segmentation, retention incentives, and personalized messaging strategies.
Developed a dynamic programming optimization framework leveraging financial derivatives to determine long-term cut-off strategies for mining operations—increasing projected asset value by over 10%.
Conducted retention modeling and customer behavior segmentation for e-commerce and financial services clients, enabling data-driven pricing and campaign strategies.
Applied model fairness metrics (e.g., disparate impact analysis, SHAP, LIME) to identify and mitigate bias—proposing equitable decision thresholds to improve inclusion without sacrificing accuracy.
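A minimal sketch of the Bayesian trust-scoring idea behind the supplier reputation model: a Beta-Bernoulli update over each supplier's fulfilment rate, where the posterior mean serves as the trust score. The class name, prior, and outcomes are illustrative assumptions, not the deployed model.

```python
from dataclasses import dataclass

@dataclass
class SupplierReputation:
    """Beta-Bernoulli trust score: Beta(alpha, beta) prior over fulfilment rate."""
    alpha: float = 1.0   # pseudo-count of fulfilled orders (uniform prior)
    beta: float = 1.0    # pseudo-count of failed orders

    def update(self, success: bool) -> None:
        # Conjugate update: each observed order shifts the posterior by one count
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        # Posterior mean of the fulfilment rate
        return self.alpha / (self.alpha + self.beta)

rep = SupplierReputation()
for outcome in [True, True, True, False, True]:   # 4 of 5 orders fulfilled
    rep.update(outcome)
# posterior mean = (1 + 4) / (2 + 5) = 5/7
```

One advantage of this formulation is that new suppliers start near 0.5 and the score's certainty grows with transaction volume, which supports the dynamic scoring described above.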
Data Science Consultant
Sabanci University
01.2014 - 01.2017
Developed and implemented a Markov Decision Process (MDP) framework to optimize energy efficiency and revenue in data centers—reducing energy costs by 34% while maintaining service-level performance.
Engineered deep learning neural networks for predicting Ubiquitination and Phosphorylation protein sites—advancing computational biology research and supporting early-stage drug discovery pipelines.
Created high-performance bioinformatics workflows integrating feature extraction, sequence analysis, and cross-validation pipelines—enhancing prediction reliability in protein modification studies.
Presented findings at international AI and operations research conferences—establishing thought leadership in applying optimization and AI to cross-disciplinary, high-impact challenges in both industry and academia.
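The data-centre MDP above can be illustrated with a toy value iteration: two server power states crossed with two demand levels, with invented revenue, energy, and switching costs. The real model's state space and transition estimates were far richer; this only shows the solution mechanics.

```python
import numpy as np

# Toy data-centre MDP. State index: server_state * 2 + demand_level, where
# server_state is 0 = asleep, 1 = active and demand_level is 0 = low, 1 = high.
# Actions: 0 = power down / stay asleep, 1 = power up / stay active.
N_STATES, N_ACTIONS, GAMMA = 4, 2, 0.95

# Invented dynamics: low demand -> high with prob 0.3, high -> low with prob 0.4.
# Invented economics: revenue 10 for serving high demand while active; energy
# cost 3 while active, 1 while asleep; switching cost 2.
demand = np.array([[0.7, 0.3], [0.4, 0.6]])   # P(next demand | current demand)
P = np.zeros((N_ACTIONS, N_STATES, N_STATES))
R = np.zeros((N_ACTIONS, N_STATES))
for s in range(N_STATES):
    server, dem = divmod(s, 2)
    for a in range(N_ACTIONS):
        next_server = a
        for d2 in range(2):
            P[a, s, next_server * 2 + d2] = demand[dem, d2]
        revenue = 10.0 if (next_server == 1 and dem == 1) else 0.0
        energy = 3.0 if next_server == 1 else 1.0
        switch = 2.0 if next_server != server else 0.0
        R[a, s] = revenue - energy - switch

# Value iteration: repeatedly apply the Bellman optimality backup
V = np.zeros(N_STATES)
for _ in range(500):
    Q = R + GAMMA * P @ V   # Q[a, s] = immediate reward + discounted expected value
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)   # greedy policy at (near-)convergence
```

Under these invented costs the converged policy powers the server up whenever demand is high; the interesting trade-off (whether to sleep through low demand given the switching cost) is exactly what the discount factor and transition probabilities arbitrate.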
Skills
DevOps & MLOps: GitHub Actions, MLflow, Agile ML Guidelines, Bash/Linux
Enterprise Data Architecture & Ingestion: Familiar with enterprise data integration patterns, including batch processing, log-based ingestion, real-time streaming via Kafka topics, and API-based pipelines, supporting scalable ingestion of structured and semi-structured data such as vendor signals, operational logs, and real-time decisioning inputs
Business & Analytical Platforms: SAS, Oracle, Splunk