[DATA SCIENCE TRACK] PRESENTATION: Risk mitigation through model transparency: improving model safety by decreasing bias | Kisaco Research

Bias in AI systems can lead to harmful outcomes. This session examines methods for increasing model transparency and explainability to detect, understand, and mitigate risks arising from bias. Techniques such as saliency maps, attention mechanisms, and adversarial testing can shed light on model behavior. Improving model transparency and reducing bias are key to developing safer, more trustworthy AI.
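As a concrete illustration of one of the transparency techniques named above, the sketch below computes a simple gradient-based saliency score with PyTorch. It assumes a classifier that returns raw logits; the toy linear model and random input are placeholders for illustration, not the speaker's actual setup.

```python
import torch
import torch.nn as nn

def gradient_saliency(model, inputs, target_class):
    """Return |d logit[target_class] / d input| for a single example.

    Larger values mark input features the prediction is most sensitive to,
    which can help surface features a biased model is over-relying on.
    """
    inputs = inputs.clone().detach().requires_grad_(True)
    logits = model(inputs)              # shape: (1, num_classes)
    logits[0, target_class].backward()  # gradient of the chosen class score
    return inputs.grad.abs().squeeze(0) # per-feature saliency

# Toy usage with a stand-in linear classifier over 8 features.
model = nn.Linear(8, 2)
x = torch.randn(1, 8)
predicted = model(x).argmax(dim=-1).item()
print(gradient_saliency(model, x, predicted))
```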

Session Topics: 
Risk Mitigation
Model Development
Speaker(s): 

Jon Bennion

Machine Learning Engineer and LLMOps
FOX

Session Job Focus: