Managing bias in the actuarial sciences with Joshua Pyle, FCAS
Description
Joshua Pyle joins us for a discussion about managing bias in the actuarial sciences. Together with Andrew's and Sid's perspectives from the economic and data science fields, they deliver an interdisciplinary conversation about bias that you'll only find here.

OpenAI news plus new developments in language models. 0:03
- The hosts discuss the aftermath of OpenAI and Sam Altman's return as CEO.
- Tension between OpenAI's board and researchers on the push for slow, responsible AI development vs. fast, breakthrough model-making.
- Microsoft researchers find that smaller, high-quality data sets can be more effective for training language models than larger, lower-quality sets (Orca 2).
- Google announces Gemini, a trio of models with varying parameter counts, including an ultra-light version for phones.

Bias in actuarial sciences with Joshua Pyle, FCAS. 9:29
- Josh shares insights on managing bias in the actuarial sciences, drawing on his 20 years of experience in the field.
- Bias in actuarial work defined as differential treatment leading to unfavorable outcomes, with protected classes including race, religion, and more.

Actuarial bias and model validation in ratemaking. 15:48
- The importance of analyzing the impact of pricing changes on protected classes, and the potential for unintended consequences when using proxies in actuarial ratemaking.
- Three major causes of unfair bias in ratemaking (Contingencies, Nov 2023).
- Gaps in the actuarial process that could lead to bias, including the lack of a standardized governance framework for model validation and calibration.

Actuarial standards, bias, and credibility. 20:45
- Complex state-level regulations and limited data pose challenges for predictive modeling in insurance.
- Actuaries debate the definition and mitigation of bias in continuing education.

Bias analysis in actuarial modeling. 27:16
- The role of dislocation analysis in bias analysis.
- Analyzing two versions of a model to compare the predictive power of including vs. excluding a protected class (race); see the sketch after this description.

Bias in AI models in the actuarial field. 33:56
- Actuaries can learn from data scientists' tendency to over-engineer models.
- Actuaries may feel excluded from the Big Data era due to their need to explain their methods.
- Standardization is needed to help actuaries identify and mitigate bias.

Interdisciplinary approaches to AI modeling and governance. 42:11
- Sid hopes to see more systematic and published approaches to addressing bias in the data science field.
- Andrew emphasizes the importance of interdisciplinary collaboration between actuaries, data scientists, and economists to create more accurate and fair modeling systems.
- Josh agrees and highlights the need for better governance structures to support this collaboration, citing the lack of good journals and academic silos as a challenge.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
- LinkedIn - Episode summaries, shares of cited articles, and more.
- YouTube - Was it something that we said? Good. Share your favorite quotes.
- [email protected] - Keep those questions coming! They inspire future episodes.
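The two-model comparison mentioned in the bias analysis segment can be illustrated with a minimal sketch: fit one model that includes a protected-class indicator and one that excludes it, then compare predictive power. The sketch below uses synthetic data, hypothetical feature names, logistic regression, and AUC as the metric; these are illustrative assumptions, not the specific approach discussed in the episode.

```python
# Illustrative sketch only: compare predictive power of a rating model
# with vs. without a protected-class feature, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical rating features and a binary protected-class indicator.
driving_history = rng.normal(size=n)
vehicle_age = rng.normal(size=n)
protected_class = rng.integers(0, 2, size=n)

# Synthetic claim outcome (for demonstration only).
logit = 1.2 * driving_history - 0.5 * vehicle_age + 0.3 * protected_class
claim = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_full = np.column_stack([driving_history, vehicle_age, protected_class])
X_train, X_test, y_train, y_test = train_test_split(X_full, claim, random_state=0)

for name, cols in [("with protected class", slice(None)),
                   ("without protected class", slice(0, 2))]:
    model = LogisticRegression().fit(X_train[:, cols], y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test[:, cols])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

In practice this comparison would use real rating data and the insurer's actual model form, typically alongside dislocation analysis; the sketch only shows the with/without comparison structure.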
More Episodes
What if the secret to successful AI governance lies in understanding the evolution of model documentation? In this episode, our hosts challenge the common belief that model cards marked the start of documentation in AI. We explore model documentation practices, from their crucial beginnings in...
Published 11/09/24
Are businesses ready for large language models as a path to AI? In this episode, the hosts reflect on the past year of what has changed and what hasn’t changed in the world of LLMs. Join us as we debunk the latest myths and emphasize the importance of robust risk management in AI integration. The...
Published 10/08/24