Security and Safety in AI: Adversarial Examples, Bias and Trust w/ Moustapha Cissé - TWiML Talk #108

By Sam Charrington

In this episode I’m joined by Moustapha Cissé, Research Scientist at Facebook AI Research (FAIR) in Paris. Moustapha’s broad research interests include the security and safety of AI systems, and we spend some time discussing his work on adversarial examples and on systems that are robust to adversarial attacks. More broadly, we discuss the role of bias in datasets and explore his vision for models that can identify these biases and adjust the way they train themselves in order to avoid taking them on.

Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join leading minds in AI including Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype from what's truly game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2!

The notes for this show can be found at twimlai.com/talk/108. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/blackinai2018.