Getting one or two AI models into production is very different from running an entire enterprise or product on AI, and as AI scales, problems can (and often do) scale with it. This session will explore:
- Standardizing how you build and operationalize models
- Focusing teams where they’re strongest
- Introducing MLOps and establishing best practices and tools to facilitate rapid, safe, and efficient development and operationalization of AI
Failure to adequately explain how a model was developed, how it works, and the outcomes it produces inherently invites both regulatory and customer scrutiny, especially when things go wrong. This panel will explore:
- The extent to which customers need to know how and why a particular outcome has been reached
- Do you need to understand black-box models, and if so, why?
- Where is explainability a luxury, and where is it an absolute necessity?
- Lessons learned from failures and how explainability could have helped
Significant amounts of data are required for AI in both training and operation, and it is vital to ensure both data quality and compliance. In an age where individuals are being granted ever more control over their data, new challenges continue to emerge post-GDPR. This panel will explore:
- How to ensure both high quality and compliant data
- Potential solutions for using highly sensitive data
- Opportunities for data collaboration and partnership
Even without intentionally prejudiced data or development practices, AI can produce inequitable results. How can organizations ensure they are mitigating bias at all levels and reducing the risk of reputational, societal, and regulatory harm?
AI has the potential to unlock significant long-term value, but trust needs to be felt both by the company deploying the AI and by the customers who will experience it and its outcomes. This discussion will explore the successes and failures organizations have faced with:
- Effectively building trust and conveying the net benefits for all parties
- Human circuit breakers as a safety mechanism
- Leveraging trust to create more desirable business outcomes
- The definition of trust in the context of AI – is it in the development process, the outcome or both?
- How to test, evaluate and analyze AI systems
- Adopting comprehensive test and evaluation approaches
- Which protocols can be applied and where new approaches are required
All systems fail at some point, no matter how much time and rigor are put into their design and development. AI is not immune: it is susceptible to attacks, exploitation, and unexpected failures. This session will be broken into two presentations to explore:
- Top tips for designing, building and ensuring robustness and resilience in AI
- Improving the robustness of AI components and systems
- Designing for security challenges and strategies for risk mitigation
How much does it matter that systems are uniquely tailored to your business case? Are you sure you can fully explain the AI models you are using, and do you even need to? There are clear pros and cons to both developing models internally and buying from a third party, but whichever route you choose, have you considered the risks, and if so, how are you mitigating them?