The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

By Sam Charrington

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.

Episodes

AI’s Legal and Ethical Implications with Sandra Wachter - #521

Today we’re joined by Sandra Wachter, an associate professor and senior research fellow at the University of Oxford.  Sandra’s work lies at the intersection of law and AI, and focuses on what she likes to call “algorithmic accountability”. In our conversation, we explore algorithmic accountability in three segments: explainability/transparency, data protection, and bias, fairness and discrimination. We discuss how the thinking around black boxes changes when discussing applying regulation and law, as well as a breakdown of counterfactual explanations and how they’re created. We also explore why factors like the lack of oversight lead to poor self-regulation, and the conditional demographic disparity test that she helped develop to test bias in models, which was recently adopted by Amazon. The complete show notes for this episode can be found at twimlai.com/go/521.
23/09/21 · 49m 27s

Compositional ML and the Future of Software Development with Dillon Erb - #520

Today we’re joined by Dillon Erb, CEO of Paperspace.  If you’re not familiar with Dillon, he joined us about a year ago to discuss Machine Learning as a Software Engineering Discipline; we strongly encourage you to check out that interview as well. In our conversation, we explore the idea of compositional AI, and if it is the next frontier in a string of recent game-changing machine learning developments. We also discuss a source of constant back and forth in the community around the role of notebooks, and why Paperspace made the choice to pivot towards a more traditional engineering code artifact model after building a popular notebook service. Finally, we talk through their newest release Workflows, an automation and build system for ML applications, which Dillon calls their “most ambitious and comprehensive project yet.” The complete show notes for this episode can be found at twimlai.com/go/520.
20/09/21 · 41m 14s

Generating SQL Database Queries from Natural Language with Yanshuai Cao - #519

Today we’re joined by Yanshuai Cao, a senior research team lead at Borealis AI. In our conversation with Yanshuai, we explore his work on Turing, their natural language to SQL engine that allows users to get insights from relational databases without having to write code. We do a bit of compare and contrast with the recently released Codex Model from OpenAI, the role that reasoning plays in solving this problem, and how it is implemented in the model. We also talk through various challenges like data augmentation, the complexity of the queries that Turing can produce, and a paper that explores the explainability of this model. The complete show notes for this episode can be found at twimlai.com/go/519.
16/09/21 · 38m 28s

Social Commonsense Reasoning with Yejin Choi - #518

Today we’re joined by Yejin Choi, a professor at the University of Washington. We had the pleasure of catching up with Yejin after her keynote interview at the recent Stanford HAI “Foundation Models” workshop. In our conversation, we explore her work at the intersection of natural language generation and common sense reasoning, including how she defines common sense, and what the current state of the world is for that research. We discuss how this could be used for creative storytelling, how transformers could be applied to these tasks, and we dig into the subfields of physical and social common sense reasoning. Finally, we talk through the future of Yejin’s research and the areas that she sees as most promising going forward.  If you enjoyed this episode, check out our conversation on AI Storytelling Systems with Mark Riedl. The complete show notes for today’s episode can be found at twimlai.com/go/518.
13/09/21 · 51m 31s

Deep Reinforcement Learning for Game Testing at EA with Konrad Tollmar - #517

Today we’re joined by Konrad Tollmar, research director at Electronic Arts and an associate professor at KTH.  In our conversation, we explore his role as the lead of EA’s applied research team SEED and the ways that they’re applying ML/AI across popular franchises like Apex Legends, Madden, and FIFA. We break down a few papers focused on the application of ML to game testing, discussing why deep reinforcement learning is at the top of their research agenda, the differences between training agents on Atari games and modern 3D games, using CNNs to detect glitches in games, and of course, Konrad gives us his outlook on the future of ML for game testing. The complete show notes for this episode can be found at twimlai.com/go/517.
09/09/21 · 40m 21s

Exploring AI 2041 with Kai-Fu Lee - #516

Today we’re joined by Kai-Fu Lee, chairman and CEO of Sinovation Ventures and author of AI 2041: Ten Visions for Our Future.  In AI 2041, Kai-Fu and co-author Chen Qiufan tell the story of how AI could shape our future through a series of 10 “scientific fiction” short stories. In our conversation with Kai-Fu, we explore why he chose 20 years as the time horizon for these stories, and dig into a few of the stories in more detail. We explore the potential for level 5 autonomous driving and what effect that will have on both established and developing nations, the potential outcomes when dealing with job displacement, and his perspective on how the book will be received. We also discuss the potential consequences of autonomous weapons, if we should actually worry about singularity or superintelligence, and the evolution of regulations around AI in 20 years. We’d love to hear from you! What are your thoughts on any of the stories we discuss in the interview? Will you be checking this book out? Let us know in the comments on the show notes page at twimlai.com/go/516.
06/09/21 · 47m 12s

Advancing Robotic Brains and Bodies with Daniela Rus - #515

Today we’re joined by Daniela Rus, director of CSAIL & Deputy Dean of Research at MIT.  In our conversation with Daniela, we explore the history of CSAIL, her role as director of one of the most prestigious computer science labs in the world, how she defines robots, and her take on the current AI for robotics landscape. We also discuss some of her recent research interests including soft robotics, adaptive control in autonomous vehicles, and a mini surgeon robot made with sausage casing(?!).  The complete show notes for this episode can be found at twimlai.com/go/515.
02/09/21 · 45m 36s

Neural Synthesis of Binaural Speech From Mono Audio with Alexander Richard - #514

Today we’re joined by Alexander Richard, a research scientist at Facebook Reality Labs, and recipient of the ICLR Best Paper Award for his paper “Neural Synthesis of Binaural Speech From Mono Audio.”  We begin our conversation with a look into the charter of Facebook Reality Labs, and Alex’s specific Codec Avatar project, where they’re developing AR/VR for social telepresence. Of course, we dig into the aforementioned paper, discussing the difficulty in improving the quality of audio and the role of dynamic time warping, as well as the challenges of creating this model. Finally, Alex shares his thoughts on 3D rendering for audio, and other future research directions.  The complete show notes for this episode can be found at twimlai.com/go/514.
30/08/21 · 46m 1s

Using Brain Imaging to Improve Neural Networks with Alona Fyshe - #513

Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta.  We caught up with Alona on the heels of an interesting panel discussion that she participated in, centered around improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images that are used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands the language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation. The complete show notes for this episode can be found at twimlai.com/go/513.
26/08/21 · 36m 25s

Adaptivity in Machine Learning with Samory Kpotufe - #512

Today we’re joined by Samory Kpotufe, an associate professor at Columbia University and program chair of the 2021 Conference on Learning Theory (COLT).  In our conversation with Samory, we explore his research at the intersection of machine learning, statistics, and learning theory, and his goal of reaching self-tuning, adaptive algorithms. We discuss Samory’s research in transfer learning and other potential procedures that could positively affect transfer, as well as his work understanding unsupervised learning, including how clustering could be applied to real-world applications like cybersecurity and IoT (smart homes, smart city sensors, etc.) using methods like dimension reduction, random projection, and others. If you enjoyed this interview, you should definitely check out our conversation with Jelani Nelson on the “Theory of Computation.”  The complete show notes for this episode can be found at https://twimlai.com/go/512.
23/08/21 · 49m 58s

A Social Scientist’s Perspective on AI with Eric Rice - #511

Today we’re joined by Eric Rice, associate professor at USC, and the co-director of the USC Center for Artificial Intelligence in Society.  Eric is a sociologist by trade, and in our conversation, we explore how he has made extensive inroads within the machine learning community through collaborations with ML academics and researchers. We discuss some of the most important lessons Eric has learned while doing interdisciplinary projects, and how the social scientist’s approach to assessment and measurement differs from a computer scientist’s approach to assessing the algorithmic performance of a model.  We specifically explore a few projects he’s worked on, including HIV prevention amongst the homeless youth population in LA, a project he spearheaded with former guest Milind Tambe, as well as a project focused on using ML techniques to assist in the identification of people in need of housing resources, and ensuring that they get the best interventions possible.  If you enjoyed this conversation, I encourage you to check out our conversation with Milind Tambe from last year’s TWIMLfest on Why AI Innovation and Social Impact Go Hand in Hand. The complete show notes for this episode can be found at https://twimlai.com/go/511.
19/08/21 · 43m 47s

Applications of Variational Autoencoders and Bayesian Optimization with José Miguel Hernández-Lobato - #510

Today we’re joined by José Miguel Hernández-Lobato, a university lecturer in machine learning at the University of Cambridge. In our conversation with Miguel, we explore his work at the intersection of Bayesian learning and deep learning. We discuss how he’s been applying this to the field of molecular design and discovery via two different methods, with one paper searching for possible chemical reactions, and the other doing the same in 3D space. We also discuss the challenges of sample efficiency, creating objective functions, and how those manifest themselves in these experiments, and how he integrated the Bayesian approach to RL problems. We also talk through a handful of other papers that Miguel has presented at recent conferences, which are all linked at twimlai.com/go/510.
16/08/21 · 42m 27s

Codex, OpenAI’s Automated Code Generation API with Greg Brockman - #509

Today we’re joined by return guest Greg Brockman, co-founder and CTO of OpenAI. We had the pleasure of reconnecting with Greg on the heels of the announcement of Codex, OpenAI’s most recent release. Codex is a direct descendant of GPT-3 that allows users to do autocomplete tasks based on all of the publicly available text and code on the internet. In our conversation with Greg, we explore the distinct results Codex sees in comparison to GPT-3, relative to the prompts it’s being given, how it could evolve given different types of training data, and how users and practitioners should think about interacting with the API to get the most out of it. We also discuss Copilot, their recent collaboration with GitHub that is built on Codex, as well as the implications of Codex on coding education, explainability, and broader societal issues like fairness and bias, copyright, and jobs.  The complete show notes for this episode can be found at twimlai.com/go/509.
12/08/21 · 47m 17s

Spatiotemporal Data Analysis with Rose Yu - #508

Today we’re joined by Rose Yu, an assistant professor at the Jacobs School of Engineering at UC San Diego.  Rose’s research focuses on advancing machine learning algorithms and methods for analyzing large-scale time-series and spatiotemporal data, then applying those developments to climate, transportation, and other physical sciences. We discuss how Rose incorporates physical knowledge and partial differential equations in these use cases and how symmetries are being exploited. We also explore their novel neural network design that is focused on non-traditional convolution operators and allows for general symmetry, how we get from these representations to the network architectures that she has developed, and another recent paper on deep spatiotemporal models.  The complete show notes for this episode can be found at twimlai.com/go/508.
09/08/21 · 32m 11s

Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507

Today we’re joined by Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. Most folks know Bryan as one of the founders/creators of cuDNN, the accelerated library for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent overlap with AI, his current work on Megatron, a framework for training giant language models, and the basic approach for distributing a large language model on DGX infrastructure.  We also discuss the three kinds of parallelism that Megatron provides when training models: tensor parallelism, pipeline parallelism, and data parallelism. Finally, we cover his work on the Deep Learning Super Sampling project and the role it’s playing in the present and future of game development via ray tracing.  The complete show notes for this episode can be found at twimlai.com/go/507.
05/08/21 · 50m 33s

Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya - #506

Today we close out our 2021 ICML series joined by Lina Montoya, a postdoctoral researcher at UNC Chapel Hill.  In our conversation with Lina, who was an invited speaker at the Neglected Assumptions in Causal Inference Workshop, we explore her work applying Optimal Dynamic Treatment (ODT) rules to understand which kinds of individuals respond best to specific interventions in the US criminal justice system. We discuss the concept of neglected assumptions and how it connects to ODT rule estimation, as well as a breakdown of the causal roadmap, coined by researchers at UC Berkeley.  Finally, Lina talks us through the roadmap as applied to the ODT rule problem, how she’s applied a “superlearner” algorithm to this problem, how it was trained, and what the future of this research looks like. The complete show notes for this episode can be found at twimlai.com/go/506.
02/08/21 · 54m 20s

Constraint Active Search for Human-in-the-Loop Optimization with Gustavo Malkomes - #505

Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via their recent acquisition of SigOpt.  In our conversation with Gustavo, we explore his paper Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design, which focuses on a novel algorithmic solution for the iterative model search process. This new algorithm empowers teams to run experiments where they are not optimizing particular metrics but instead identifying parameter configurations that satisfy constraints in the metric space. This allows users to explore multiple metrics at once in an efficient, informed, and intelligent way that lends itself to real-world, human-in-the-loop scenarios. The complete show notes for this episode can be found at twimlai.com/go/505.
29/07/21 · 50m 38s

Fairness and Robustness in Federated Learning with Virginia Smith - #504

Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University.  In our conversation with Virginia, we explore her work on cross-device federated learning applications, including where the distributed learning aspects of FL are relative to the privacy techniques. We dig into her paper from ICML, Ditto: Fair and Robust Federated Learning Through Personalization, what fairness means in contrast to AI ethics, the particulars of the failure modes, the relationship between models, and the things being optimized across devices, and the tradeoffs between fairness and robustness. We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, how the proposed method makes heterogeneity beneficial in data, how the heterogeneity of data is classified, and some applications of FL in an unsupervised setting. The complete show notes for this episode can be found at twimlai.com/go/504.
26/07/21 · 36m 51s

Scaling AI at H&M Group with Errol Koolmeister - #503

Today we’re joined by Errol Koolmeister, the head of AI foundation at H&M Group. In our conversation with Errol, we explore H&M’s AI journey, including its wide adoption across the company in 2016, and the various use cases in which it's deployed like fashion forecasting and pricing algorithms. We discuss Errol’s first steps in taking on the challenge of scaling AI broadly at the company, the value-added learning from proof of concepts, and how to align in a sustainable, long-term way. Of course, we dig into the infrastructure and models being used, the biggest challenges faced, and the importance of managing the project portfolio, while Errol shares their approach to building infra for a specific product with many products in mind.
22/07/21 · 41m 17s

Evolving AI Systems Gracefully with Stefano Soatto - #502

Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA.  Our conversation with Stefano centers on recent research of his called Graceful AI, which focuses on how to make trained systems evolve gracefully. We discuss the broader motivation for this research and the potential dangers or negative effects of constantly retraining ML models in production. We also talk about research into error rate clustering, the importance of model architecture when dealing with problems of model compression, how they’ve solved problems of regression and reprocessing by utilizing existing models, and much more. The complete show notes for this episode can be found at twimlai.com/go/502.
19/07/21 · 49m 11s

ML Innovation in Healthcare with Suchi Saria - #501

Today we’re joined by Suchi Saria, the founder and CEO of Bayesian Health, the John C. Malone associate professor of computer science, statistics, and health policy, and the director of the machine learning and healthcare lab at Johns Hopkins University.  Suchi shares a bit about her journey to working in the intersection of machine learning and healthcare, and how her research has spanned across both medical policy and discovery. We discuss why it has taken so long for machine learning to become accepted and adopted by the healthcare infrastructure and where exactly we stand in the adoption process, where there have been “pockets” of tangible success.  Finally, we explore the state of healthcare data, and of course, we talk about Suchi’s recently announced startup Bayesian Health and their goals in the healthcare space, and an accompanying study that looks at real-time ML inference in an EMR setting. The complete show notes for this episode can be found at twimlai.com/go/501.
15/07/21 · 45m 22s

Cross-Device AI Acceleration, Compilation & Execution with Jeff Gehlhaar - #500

Today we’re joined by a friend of the show Jeff Gehlhaar, VP of technology and the head of AI software platforms at Qualcomm.  In our conversation with Jeff, we cover a ton of ground, starting with a bit of exploration around ML compilers, what they are, and their role in solving issues of parallelism. We also dig into the latest additions to the Snapdragon platform, AI Engine Direct, and how it works as a bridge to bring more capabilities across their platform, how benchmarking works in the context of the platform, how the work of other researchers we’ve spoken to on compression and quantization finds its way from research to product, and much more!  After you check out this interview, you can look below for some of the other conversations with researchers mentioned.  The complete show notes for this episode can be found at twimlai.com/go/500.
12/07/21 · 41m 54s

The Future of Human-Machine Interaction with Dan Bohus and Siddhartha Sen - #499

Today we continue our AI in Innovation series joined by Dan Bohus, senior principal researcher at Microsoft Research, and Siddhartha Sen, a principal researcher at Microsoft Research.  In this conversation, we use a pair of research projects, Maia Chess and Situated Interaction, to springboard us into a conversation about the evolution of human-AI interaction. We discuss both of these projects individually, as well as the commonalities they have, how themes like understanding the human experience appear in their work, the types of models being used, the various types of data, and the complexity of each of their setups.  We explore some of the challenges associated with getting computers to better understand human behavior and interact in ways that are more fluid. Finally, we touch on what excites both Dan and Sid about their respective projects, and what they’re excited about for the future.   The complete show notes for this episode can be found at https://twimlai.com/go/499.
08/07/21 · 48m 44s

Vector Quantization for NN Compression with Julieta Martinez - #498

Today we’re joined by Julieta Martinez, a senior research scientist at the recently announced startup Waabi.  Julieta was a keynote speaker at the recent LatinX in AI workshop at CVPR, and our conversation focuses on her talk “What do Large-Scale Visual Search and Neural Network Compression have in Common,” which shows that multiple ideas from large-scale visual search can be used to achieve state-of-the-art neural network compression. We explore the commonality between large databases and dealing with high-dimensional, many-parameter neural networks, the advantages of using product quantization, and how that plays out when using it to compress a neural network.  We also dig into another paper Julieta presented at the conference, Deep Multi-Task Learning for Joint Localization, Perception, and Prediction, which details an architecture that is able to reuse computation between the three tasks, and is thus able to correct localization errors efficiently. The complete show notes for this episode can be found at twimlai.com/go/498.
05/07/21 · 41m 18s

Deep Unsupervised Learning for Climate Informatics with Claire Monteleoni - #497

Today we continue our CVPR 2021 coverage joined by Claire Monteleoni, an associate professor at the University of Colorado Boulder.  We cover quite a bit of ground in our conversation with Claire, including her journey down the path from environmental activist to one of the leading climate informatics researchers in the world. We explore her current research interests, and the available opportunities in applying machine learning to climate informatics, including the interesting position of doing ML from a data-rich environment.  Finally, we dig into the evolution of climate science-focused events and conferences, as well as the Keynote Claire gave at the EarthVision workshop at CVPR “Deep Unsupervised Learning for Climate Informatics,” which focused on semi- and unsupervised deep learning approaches to studying rare and extreme climate events. The complete show notes for this episode can be found at twimlai.com/go/497.
01/07/21 · 42m 14s

Skip-Convolutions for Efficient Video Processing with Amir Habibian - #496

Today we kick off our CVPR coverage joined by Amir Habibian, a senior staff engineering manager at Qualcomm Technologies.  In our conversation with Amir, whose research primarily focuses on video perception, we discuss a few papers they presented at the event. We explore the paper Skip-Convolutions for Efficient Video Processing, which looks at training discrete variables end-to-end in visual neural networks. We also discuss his work on the FrameExit paper, which proposes a conditional early exiting framework for efficient video recognition.  The complete show notes for this episode can be found at twimlai.com/go/496.
28/06/21 · 47m 59s

Advancing NLP with Project Debater w/ Noam Slonim - #495

Today we’re joined by Noam Slonim, the principal investigator of Project Debater at IBM Research.  In our conversation with Noam, we explore the history of Project Debater, the first AI system that can “debate” humans on complex topics. We also dig into the evolution of the project, the culmination of seven years and over 50 research papers, which eventually became a Nature cover paper, “An Autonomous Debating System,” detailing the system in its entirety.  Finally, Noam details many of the underlying capabilities of Debater, including the relationship between systems preparation and training, evidence detection, detecting the quality of arguments, narrative generation, the use of conventional NLP methods like entity linking, and much more. The complete show notes for this episode can be found at twimlai.com/go/495.
24/06/21 · 51m 45s

Bringing AI Up to Speed with Autonomous Racing w/ Madhur Behl - #494

Today we’re joined by Madhur Behl, an assistant professor in the department of computer science at the University of Virginia.  In our conversation with Madhur, we explore the super interesting work he’s doing at the intersection of autonomous driving, ML/AI, and motorsports, where he’s teaching self-driving cars how to drive in an agile manner. We talk through the differences between traditional self-driving problems and those encountered in a racing environment, and the challenges in solving planning, perception, and control.  We also discuss their upcoming race at the Indianapolis Motor Speedway, where Madhur and his students will compete for 1 million dollars in the world’s first head-to-head fully autonomous race, and how they’re preparing for it.
21/06/21 · 51m 46s

AI and Society: Past, Present and Future with Eric Horvitz - #493

Today we continue our AI Innovation series joined by Microsoft’s Chief Scientific Officer, Eric Horvitz.  In our conversation with Eric, we explore his tenure as AAAI president and his focus on the future of AI and its ethical implications, the scope of the study on the topic, and how drastically the AI and machine learning landscape has changed since 2009. We also discuss Eric’s role at Microsoft and the Aether committee that has advised the company on issues of responsible AI since 2017. Finally, we talk through his recent work as a member of the National Security Commission on AI, where he helped commission a 750+ page report on topics including the Future of AI R&D, Building Trustworthy AI systems, civil liberties and privacy, and the challenging area of AI and autonomous weapons.   The complete show notes for this episode can be found at twimlai.com/go/493.
17/06/21 · 53m 53s

Agile Applied AI Research with Parvez Ahammad - #492

Today we’re joined by Parvez Ahammad, head of data science applied research at LinkedIn. In our conversation, Parvez shares his interesting take on organizing principles for his organization, starting with how data science teams are broadly organized at LinkedIn. We explore how they ensure time investments on long-term projects are managed, how to identify products that can help in a cross-cutting way across multiple lines of business, quantitative methodologies to identify unintended consequences in experimentation, and navigating the tension between research and applied ML teams in an organization. Finally, we discuss differential privacy, and their recently released GreyKite library, an open-source Python library developed to support forecasting. The complete show notes for this episode can be found at twimlai.com/go/492.
14/06/21 · 43m 51s

Haptic Intelligence with Katherine J. Kuchenbecker - #491

Today we’re joined by Katherine J. Kuchenbecker, director of the Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems.  In our conversation, we explore Katherine’s research interests, which lie at the intersection of haptics (physical interaction with the world) and machine learning, introducing us to the concept of “haptic intelligence.” We discuss how ML, mainly computer vision, has been integrated to work together with robots, and some of the devices that Katherine’s lab is developing to take advantage of this research. We also talk about hugging robots, augmented reality in robotic surgery, and the degree to which she studies human-robot interaction. Finally, Katherine shares with us her passion for mentoring and the importance of diversity and inclusion in robotics and machine learning.  The complete show notes for this episode can be found at twimlai.com/go/491.
10/06/21 · 38m 16s

Data Science on AWS with Chris Fregly and Antje Barth - #490

Today we continue our coverage of the AWS ML Summit joined by Chris Fregly, a principal developer advocate at AWS, and Antje Barth, a senior developer advocate at AWS.  In our conversation with Chris and Antje, we explore their roles as community builders prior to, and since, joining AWS, as well as their recently released book Data Science on AWS. In the book, Chris and Antje demonstrate how to reduce cost and improve performance while successfully building and deploying data science projects.  We also discuss the release of their new Practical Data Science Specialization on Coursera, managing the complexity that comes with building real-world projects, and some of their favorite sessions from the recent ML Summit.
07/06/21 · 40m 26s

Accelerating Distributed AI Applications at Qualcomm with Ziad Asghar - #489

Today we’re joined by Ziad Asghar, vice president of product management for Snapdragon technologies & roadmap at Qualcomm Technologies.  We begin our conversation with Ziad exploring the symbiosis between 5G and AI and what is enabling developers to take full advantage of AI on mobile devices. We also discuss the balance of product evolution and incorporating research concepts, the evolution of their hardware infrastructure Cloud AI 100, and their role in the deployment of Ingenuity, the robotic helicopter that operated on Mars earlier this year.  Finally, we talk about specialization in building IoT applications like autonomous vehicles and smart cities, the degree to which federated learning is being deployed across the industry, and the importance of privacy and security of personal data.  The complete show notes can be found at https://twimlai.com/go/489.
03/06/2139m 36s

Buy AND Build for Production Machine Learning with Nir Bar-Lev - #488

Today we’re joined by Nir Bar-Lev, co-founder and CEO of ClearML. In our conversation with Nir, we explore how his view of the wide vs deep machine learning platforms paradox has changed and evolved over time, how companies should think about building vs buying and integration, and his thoughts on why experiment management has become an automatic buy, be it open source or otherwise. We also discuss the disadvantages of using a cloud vendor as opposed to a software-based approach, the balance between MLOps and data science when addressing issues of overfitting, and how ClearML is applying techniques like federated machine learning and transfer learning to their solutions. The complete show notes for this episode can be found at https://twimlai.com/go/488.
31/05/2143m 24s

Applied AI Research at AWS with Alex Smola - #487

Today we’re joined by Alex Smola, Vice President and Distinguished Scientist at AWS AI. We had the pleasure to catch up with Alex prior to the upcoming AWS Machine Learning Summit, and we covered a TON of ground in the conversation. We start by focusing on his research in the domain of deep learning on graphs, including a few examples showcasing its function, and an interesting discussion around the relationship between large language models and graphs. Next up, we discuss their focus on AutoML research and how it's the key to lowering the barrier to entry for machine learning research. Alex also shares a bit about his work on causality and causal modeling, introducing us to the concept of Granger causality. Finally, we talk about the aforementioned ML Summit, its exponential growth since its inception a few years ago, and what speakers he's most excited about hearing from. The complete show notes for this episode can be found at https://twimlai.com/go/487.
27/05/2155m 55s

Causal Models in Practice at Lyft with Sean Taylor - #486

Today we’re joined by Sean Taylor, Staff Data Scientist at Lyft Rideshare Labs. We cover a lot of ground with Sean, starting with his recent decision to step away from his previous role as the lab director to take a more hands-on role, and what inspired that change. We also discuss his research at Rideshare Labs, where they take a more “moonshot” approach to solving the typical problems like forecasting and planning, marketplace experimentation, and decision making, and how his statistical approach manifests itself in his work. Finally, we spend quite a bit of time exploring the role of causality in the work at Rideshare Labs, including how systems like the aforementioned forecasting system are designed around causal models, whether driving model development with business metrics is more effective, challenges associated with hierarchical modeling, and much much more. The complete show notes for this episode can be found at twimlai.com/go/486.
24/05/2140m 26s

Using AI to Map the Human Immune System w/ Jabran Zahid - #485

Today we’re joined by Jabran Zahid, a Senior Researcher at Microsoft Research. In our conversation with Jabran, we explore their recent endeavor into the complete mapping of which T-cells bind to which antigens through the Antigen Map Project. We discuss how Jabran’s background in astrophysics and cosmology has translated to his current work in immunology and biology, the origins of the antigen map, and how the project’s focus shifted with the emergence of the coronavirus pandemic. We talk through the biological advancements, the challenges of using machine learning in this setting, some of the more advanced ML techniques that they’ve tried that have not panned out (as of yet), the path forward for the antigen map to make a broader impact, and much more. The complete show notes for this episode can be found at twimlai.com/go/485.
20/05/2141m 54s

Learning Long-Time Dependencies with RNNs w/ Konstantin Rusch - #484

Today we conclude our 2021 ICLR coverage joined by Konstantin Rusch, a PhD Student at ETH Zurich. In our conversation with Konstantin, we explore his recent papers, titled coRNN and uniCORNN respectively, which focus on a novel architecture of recurrent neural networks for learning long-time dependencies. We explore the inspiration he drew from neuroscience when tackling this problem, how the performance results compared to networks like LSTMs and others that have been proven to work on this problem and Konstantin’s future research goals. The complete show notes for this episode can be found at twimlai.com/go/484.
17/05/2137m 43s

What the Human Brain Can Tell Us About NLP Models with Allyson Ettinger - #483

Today we continue our ICLR ‘21 series joined by Allyson Ettinger, an Assistant Professor at the University of Chicago. One of our favorite recurring conversations on the podcast is the two-way street that lies between machine learning and neuroscience, which Allyson explores through the modeling of cognitive processes that pertain to language. In our conversation, we discuss how she approaches assessing the competencies of AI, the value of controlling for confounding variables in AI research, and how the pattern matching traits of ML/DL models are not necessarily exclusive to these systems. Allyson also participated in a recent panel discussion at the ICLR workshop “How Can Findings About The Brain Improve AI Systems?”, centered around the utility of brain inspiration for developing AI models. We discuss ways in which we can try to more closely simulate the functioning of a brain, where her work fits into the analysis and interpretability area of NLP, and much more! The complete show notes for this episode can be found at twimlai.com/go/483.
13/05/2138m 0s

Probabilistic Numeric CNNs with Roberto Bondesan - #482

Today we kick off our ICLR 2021 coverage joined by Roberto Bondesan, an AI Researcher at Qualcomm. In our conversation with Roberto, we explore his paper Probabilistic Numeric Convolutional Neural Networks, which represents features as Gaussian processes, providing a probabilistic description of discretization error. We discuss some of the other work the team at Qualcomm presented at the conference, including a paper called Adaptive Neural Compression, as well as work on Gauge Equivariant Mesh CNNs. Finally, we briefly discuss quantum deep learning, and what excites Roberto and his team about the future of their research in combinatorial optimization. The complete show notes for this episode can be found at https://twimlai.com/go/482.
10/05/2141m 28s

Building a Unified NLP Framework at LinkedIn with Huiji Gao - #481

Today we’re joined by Huiji Gao, a Senior Engineering Manager of Machine Learning and AI at LinkedIn. In our conversation with Huiji, we dig into his interest in building NLP tools and systems, including a recent open-source project called DeText, a framework for generating models for ranking, classification, and language generation. We explore the motivation behind DeText, the landscape at LinkedIn before and after it was put into use broadly, and the various contexts it’s being used in at the company. We also discuss the relationship between BERT and DeText via LiBERT, a version of BERT that is trained and calibrated on LinkedIn data, the practical use of these tools from an engineering perspective, the approach they’ve taken to optimization, and much more! The complete show notes for this episode can be found at https://twimlai.com/go/481.
06/05/2134m 43s

Dask + Data Science Careers with Jacqueline Nolis - #480

Today we’re joined by Jacqueline Nolis, Head of Data Science at Saturn Cloud, and co-host of the Build a Career in Data Science Podcast. You might remember Jacqueline from our Advancing Your Data Science Career During the Pandemic panel, where she shared her experience trying to navigate the suddenly hectic data science job market. Now, a year removed from that panel, we explore her book on data science careers, top insights for folks just getting into the field, ways that job seekers should be signaling that they have the required background, and how to approach and navigate failure as a data scientist. We also spend quite a bit of time discussing Dask, an open-source library for parallel computing in Python, as well as use cases for the tool, the relationship between Dask and Kubernetes and Docker containers, where data scientists sit with regard to the software development toolchain, and much more! The complete show notes for this episode can be found at https://twimlai.com/go/480.
03/05/2134m 59s

Machine Learning for Equitable Healthcare Outcomes with Irene Chen - #479

Today we’re joined by Irene Chen, a Ph.D. student at MIT.  Irene’s research is focused on developing new machine learning methods specifically for healthcare, through the lens of questions of equity and inclusion. In our conversation, we explore some of the various projects that Irene has worked on, including an early detection program for intimate partner violence.  We also discuss how she thinks about the long term implications of predictions in the healthcare domain, how she’s learned to communicate across the interface between the ML researcher and clinician, probabilistic approaches to machine learning for healthcare, and finally, key takeaways for those of you interested in this area of research. The complete show notes for this episode can be found at https://twimlai.com/go/479.
29/04/2136m 59s

AI Storytelling Systems with Mark Riedl - #478

Today we’re joined by Mark Riedl, a Professor in the School of Interactive Computing at Georgia Tech. In our conversation with Mark, we explore his work building AI storytelling systems, mainly those that try to predict what listeners think will happen next in a story, and how he brings together many different threads of ML/AI to solve these problems. We discuss how the theory of mind is layered into his research, the use of large language models like GPT-3, and his push towards being able to generate suspenseful stories with these systems. We also discuss the concept of intentional creativity and the lack of good theory on the subject, the adjacent areas in ML that he’s most excited about for their potential contribution to his research, his recent focus on model explainability, how he approaches problems of common sense, and much more! The complete show notes for this episode can be found at https://twimlai.com/go/478.
26/04/2141m 28s

Creating Robust Language Representations with Jamie Macbeth - #477

Today we’re joined by Jamie Macbeth, an assistant professor in the department of computer science at Smith College. In our conversation with Jamie, we explore his work at the intersection of cognitive systems and natural language understanding, and how to use AI as a vehicle for better understanding human intelligence. We discuss the tie that binds these domains together, whether the tasks are the same as traditional NLU tasks, and the specific things he’s trying to gain deeper insight into. One of the unique aspects of Jamie’s research is that he takes an “old-school AI” approach, and to that end, we discuss the models he handcrafts to generate language. Finally, we examine how he evaluates the performance of his representations if he’s not playing the SOTA “game,” what he benchmarks against, identifying deficiencies in deep learning systems, and the exciting directions for his upcoming research. The complete show notes for this episode can be found at https://twimlai.com/go/477.
21/04/2140m 4s

Reinforcement Learning for Industrial AI with Pieter Abbeel - #476

Today we’re joined by Pieter Abbeel, a Professor at UC Berkeley, co-Director of the Berkeley AI Research Lab (BAIR), as well as Co-founder and Chief Scientist at Covariant. In our conversation with Pieter, we cover a ton of ground, starting with the specific goals and tasks of his work at Covariant, the shift in needs for industrial AI application and robots, if his experience solving real-world problems has changed his opinion on end to end deep learning, and the scope for the three problem domains of the models he’s building. We also explore his recent work at the intersection of unsupervised and reinforcement learning, goal-directed RL, his recent paper “Pretrained Transformers as Universal Computation Engines” and where that research thread is headed, and of course, his new podcast Robot Brains, which you can find on all streaming platforms today! The complete show notes for this episode can be found at twimlai.com/go/476.
19/04/2158m 18s

AutoML for Natural Language Processing with Abhishek Thakur - #475

Today we’re joined by Abhishek Thakur, a machine learning engineer at Hugging Face, and the world’s first Quadruple Kaggle Grandmaster! In our conversation with Abhishek, we explore his Kaggle journey, including how his approach to competitions has evolved over time, what resources he used to prepare for his transition to a full-time practitioner, and the most important lessons he’s learned along the way. We also spend a great deal of time discussing his new role at Hugging Face, where he's building AutoNLP. We talk through the goals of the project, the primary problem domain, and how the results of AutoNLP compare with those from hand-crafted models. Finally, we discuss Abhishek’s book, Approaching (Almost) Any Machine Learning Problem. The complete show notes for this episode can be found at https://twimlai.com/go/475.
15/04/2136m 16s

Inclusive Design for Seeing AI with Saqib Shaikh - #474

Today we’re joined by Saqib Shaikh, a Software Engineer at Microsoft, and the lead for the Seeing AI Project. In our conversation with Saqib, we explore the Seeing AI app, an app “that narrates the world around you.” We discuss the various technologies and use cases for the app, how it has evolved since the inception of the project, how the technology landscape supports projects like this one, and the technical challenges he faces when building out the app. We also discuss the relationship and trust between humans and robots, and how that translates to this app, what Saqib sees on the research horizon that will support his vision for the future of Seeing AI, and how the integration of tech like Apple’s upcoming “smart” glasses could change the way the app is used. The complete show notes for this episode can be found at twimlai.com/go/474.
12/04/2135m 37s

Theory of Computation with Jelani Nelson - #473

Today we’re joined by Jelani Nelson, a professor in the Theory Group at UC Berkeley. In our conversation with Jelani, we explore his research in computational theory, where he focuses on building streaming and sketching algorithms, random projections, and dimensionality reduction. We discuss how Jelani thinks about the balance between the innovation of new algorithms and the performance of existing ones, and some use cases where we’d see his work in action. Finally, we talk through how his work ties into machine learning, what tools from the theorist’s toolbox he’d suggest all ML practitioners know, and his nonprofit AddisCoder, a four-week summer program that introduces high-school students to programming and algorithms. The complete show notes for this episode can be found at twimlai.com/go/473.
08/04/2133m 39s

Human-Centered ML for High-Risk Behaviors with Stevie Chancellor - #472

Today we’re joined by Stevie Chancellor, an Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota. In our conversation with Stevie, we explore her work at the intersection of human-centered computing, machine learning, and high-risk mental illness behaviors. We discuss how her background in HCC helps shape her perspective, how machine learning helps with understanding severity levels of mental illness, and some recent work where convolutional graph neural networks are applied to identify and discover new kinds of behaviors for people who struggle with opioid use disorder. We also explore the role of computational linguistics and NLP in her research, issues with using social media data as a data source, and finally, how people who are interested in an introduction to human-centered computing can get started. The complete show notes for this episode can be found at twimlai.com/go/472.
05/04/2140m 45s

Operationalizing AI at Dataiku with Conor Jensen - #471

In this episode, we’re joined by Dataiku’s Director of Data Science, Conor Jensen. In our conversation, we explore the panel he led at TWIMLcon, “AI Operationalization: Where the AI Rubber Hits the Road for the Enterprise,” discussing the ML journey of each panelist’s company, and where Dataiku fits in the equation. The complete show notes for this episode can be found at https://twimlai.com/go/471.
01/04/2123m 51s

ML Lifecycle Management at Algorithmia with Diego Oppenheimer - #470

In this episode, we’re joined by Diego Oppenheimer, Founder and CEO of Algorithmia. In our conversation, we discuss Algorithmia’s involvement with TWIMLcon, as well as an exploration of the results of their recently conducted survey on the state of the AI market. The complete show notes for this episode can be found at twimlai.com/go/470.
01/04/2126m 11s

End to End ML at Cloudera with Santiago Giraldo - #469 [TWIMLcon Sponsor Series]

In this episode, we’re joined by Santiago Giraldo, Director Of Product Marketing for Data Engineering & Machine Learning at Cloudera. In our conversation, we discuss Cloudera’s talks at TWIMLcon, as well as their various research efforts from their Fast Forward Labs arm. The complete show notes for this episode can be found at twimlai.com/sponsorseries.
29/03/2122m 20s

ML Platforms for Global Scale at Prosus with Paul van der Boor - #468 [TWIMLcon Sponsor Series]

In this episode, we’re joined by Paul van der Boor, Senior Director of Data Science at Prosus, to discuss his TWIMLcon experience and how they’re using ML platforms to manage machine learning at a global scale. The complete show notes for this episode can be found at twimlai.com/sponsorseries.
29/03/2122m 1s

Can Language Models Be Too Big? 🦜 with Emily Bender and Margaret Mitchell - #467

Today we’re joined by Emily M. Bender, Professor at the University of Washington, and AI Researcher, Margaret Mitchell.  Emily and Meg, as well as Timnit Gebru and Angelina McMillan-Major, are co-authors on the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. As most of you undoubtedly know by now, there has been much controversy surrounding, and fallout from, this paper. In this conversation, our main priority was to focus on the message of the paper itself. We spend some time discussing the historical context for the paper, then turn to the goals of the paper, discussing the many reasons why the ever-growing datasets and models are not necessarily the direction we should be going.  We explore the cost of these training datasets, both literal and environmental, as well as the bias implications of these models, and of course the perpetual debate about responsibility when building and deploying ML systems. Finally, we discuss the thin line between AI hype and useful AI systems, and the importance of doing pre-mortems to truly flesh out any issues you could potentially come across prior to building models, and much much more.  The complete show notes for this episode can be found at twimlai.com/go/467.
24/03/2154m 2s

Applying RL to Real-World Robotics with Abhishek Gupta - #466

Today we’re joined by Abhishek Gupta, a PhD Student at UC Berkeley.  Abhishek, a member of the BAIR Lab, joined us to talk about his recent robotics and reinforcement learning research and interests, which focus on applying RL to real-world robotics applications. We explore the concept of reward supervision, and how to get robots to learn these reward functions from videos, and the rationale behind supervised experts in these experiments.  We also discuss the use of simulation for experiments, data collection, and the path to scalable robotic learning. Finally, we discuss gradient surgery vs gradient sledgehammering, and his ecological RL paper, which focuses on the “phenomena that exist in the real world” and how humans and robotics systems interface in those situations.  The complete show notes for this episode can be found at https://twimlai.com/go/466.
22/03/2136m 10s

Accelerating Innovation with AI at Scale with David Carmona - #465

Today we’re joined by David Carmona, General Manager of Artificial Intelligence & Innovation at Microsoft.  In our conversation with David, we focus on his work on AI at Scale, an initiative focused on the change in the ways people are developing AI, driven in large part by the emergence of massive models. We explore David’s thoughts about the progression towards larger models, the focus on parameters and how it ties to the architecture of these models, and how we should assess how attention works in these models. We also discuss the different families of models (generation & representation), the transition from CV to NLP tasks, and an interesting point of models “becoming a platform” via transfer learning. The complete show notes for this episode can be found at twimlai.com/go/465.
18/03/2148m 36s

Complexity and Intelligence with Melanie Mitchell - #464

Today we’re joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans. While Melanie has had a long career with a myriad of research interests, we focus on a few: complex systems and the understanding of intelligence and complexity, and her recent work on getting AI systems to make analogies. We explore examples of social learning, how it applies to AI contextually, and defining intelligence. We discuss potential frameworks that would help machines understand analogies, established benchmarks for analogy, and whether there is a social learning solution to help machines figure out analogy. Finally, we talk through the overall state of AI systems, the progress we’ve made amid the limited concept of social learning, whether we’re able to achieve intelligence with current approaches to AI, and much more! The complete show notes for this episode can be found at twimlai.com/go/464.
15/03/2132m 48s

Robust Visual Reasoning with Adriana Kovashka - #463

Today we’re joined by Adriana Kovashka, an Assistant Professor at the University of Pittsburgh. In our conversation with Adriana, we explore her visual commonsense research, and how it intersects with her background in media studies. We discuss the idea of shortcuts, or faults in visual question answering data sets that appear in many SOTA results, as well as the concept of masking, a technique developed to assist in context prediction. Adriana then describes how these techniques fit into her broader goal of trying to understand the rhetoric of visual advertisements.  Finally, Adriana shares a bit about her work on robust visual reasoning, the parallels between this research and other work happening around explainability, and the vision for her work going forward.  The complete show notes for this episode can be found at twimlai.com/go/463.
11/03/2141m 40s

Architectural and Organizational Patterns in Machine Learning with Nishan Subedi - #462

Today we’re joined by Nishan Subedi, VP of Algorithms at Overstock.com. In our conversation with Nishan, we discuss his interesting path to MLOps and how ML/AI is used at Overstock, primarily for search/recommendations and marketing/advertisement use cases. We spend a great deal of time exploring machine learning architecture and architectural patterns, how he perceives the differences between architectural patterns and algorithms, and emergent architectural patterns that standards have not yet been set for. Finally, we discuss how the idea of anti-patterns was innovative in early design pattern thinking and if those concepts are transferable to ML, if architectural patterns will bleed over into organizational patterns and culture, and Nishan introduces us to the concept of Squads within an organizational structure. The complete show notes for this episode can be found at https://twimlai.com/go/462.
08/03/2157m 35s

Common Sense Reasoning in NLP with Vered Shwartz - #461

Today we’re joined by Vered Shwartz, a Postdoctoral Researcher at both the Allen Institute for AI and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. In our conversation with Vered, we explore her NLP research, where she focuses on teaching machines common sense reasoning in natural language. We discuss training using GPT models and the potential use of multimodal reasoning and incorporating images to augment the reasoning capabilities. Finally, we talk through some other noteworthy research in this field, how she deals with biases in the models, and Vered's future plans for incorporating some of the newer techniques into her future research. The complete show notes for this episode can be found at https://twimlai.com/go/461.
04/03/2137m 14s

How to Be Human in the Age of AI with Ayanna Howard - #460

Today we’re joined by returning guest and newly appointed Dean of the College of Engineering at The Ohio State University, Ayanna Howard.  Our conversation with Dr. Howard focuses on her recently released book, Sex, Race, and Robots: How to Be Human in the Age of AI, which is an extension of her research on the relationships between humans and robots. We continue to explore this relationship through the themes of socialization introduced in the book, like associating genders to AI and robotic systems and the “self-fulfilling prophecy” that has become search engines.  We also discuss a recurring conversation in the community around AI  being biased because of data versus models and data, and the choices and responsibilities that come with the ethical aspects of building AI systems. Finally, we discuss Dr. Howard’s new role at OSU, how it will affect her research, and what the future holds for the applied AI field.  The complete show notes for this episode can be found at https://twimlai.com/go/460.
01/03/2135m 48s

Evolution and Intelligence with Penousal Machado - #459

Today we’re joined by Penousal Machado, Associate Professor and Head of the Computational Design and Visualization Lab in the Center for Informatics at the University of Coimbra. In our conversation with Penousal, we explore his research in Evolutionary Computation, and how that work coincides with his passion for images and graphics. We also discuss the link between creativity and humanity, and have an interesting sidebar about the philosophy of Sci-Fi in popular culture. Finally, we dig into Penousal’s evolutionary machine learning research, primarily in the context of the evolution of various animal species’ mating habits and practices. The complete show notes for this episode can be found at twimlai.com/go/459.
25/02/2157m 19s

Innovating Neural Machine Translation with Arul Menezes - #458

Today we’re joined by Arul Menezes, a Distinguished Engineer at Microsoft. Arul, a 30-year veteran of Microsoft, manages the machine translation research and products in the Azure Cognitive Services group. In our conversation, we explore the historical evolution of machine translation, including breakthroughs in seq2seq and the emergence of transformer models. We also discuss how they’re using multilingual transfer learning and combining what they’ve learned in translation with pre-trained language models like BERT. Finally, we explore what they’re doing to achieve domain-specific improvements in their models, and what excites Arul about the translation architecture going forward. The complete show notes for this series can be found at twimlai.com/go/458.
22/02/2144m 25s

Building the Product Knowledge Graph at Amazon with Luna Dong - #457

Today we’re joined by Luna Dong, Sr. Principal Scientist at Amazon. In our conversation with Luna, we explore Amazon’s expansive product knowledge graph, and the various roles that machine learning plays throughout it. We also talk through the differences and synergies between the media and retail product knowledge graph use cases and how ML comes into play in search and recommendation use cases. Finally, we explore the similarities to relational databases and efforts to standardize the product knowledge graphs across the company and broadly in the research community. The complete show notes for this episode can be found at https://twimlai.com/go/457.
18/02/2143m 51s

Towards a Systems-Level Approach to Fair ML with Sarah M. Brown - #456

Today we’re joined by Sarah Brown, an Assistant Professor of Computer Science at the University of Rhode Island. In our conversation with Sarah, whose research focuses on Fairness in AI, we discuss why a “systems-level” approach is necessary when thinking about ethical and fairness issues in models and algorithms. We also explore Wiggum: a fairness forensics tool, which explores bias and allows for regular auditing of data, as well as her ongoing collaboration with a social psychologist to explore how people perceive ethics and fairness. Finally, we talk through the role of tools in assessing fairness and bias, and the importance of understanding the decisions the tools are making. The complete show notes can be found at twimlai.com/go/456.
15/02/2137m 33s

AI for Digital Health Innovation with Andrew Trister - #455

Today we’re joined by Andrew Trister, Deputy Director for Digital Health Innovation at the Bill & Melinda Gates Foundation. In our conversation with Andrew, we explore some of the AI use cases at the foundation, with the goal of bringing “community-based” healthcare to underserved populations in the global south. We focus on their COVID-19 response, improving the accuracy of malaria testing with a Bayesian framework, and a few other use cases, as well as challenges like scaling these systems and building out infrastructure so that communities can begin to support themselves. We also touch on Andrew's previous work at Apple, where he helped develop what is now known as ResearchKit, their ML for health tools that are now seen in Apple devices like phones and watches. The complete show notes for this episode can be found at https://twimlai.com/go/455.
11/02/2141m 55s

System Design for Autonomous Vehicles with Drago Anguelov - #454

Today we’re joined by Drago Anguelov, Distinguished Scientist and Head of Research at Waymo. In our conversation, we explore the state of the autonomous vehicles space broadly and at Waymo, including how AV has improved in the last few years, their focus on level 4 driving, and Drago’s thoughts on the direction of the industry going forward. Drago breaks down their core ML use cases, Perception, Prediction, Planning, and Simulation, and how their work has led to a fully autonomous vehicle being deployed in Phoenix. We also discuss the socioeconomic and environmental impact of self-driving cars, a few research papers submitted to NeurIPS 2020, and whether the sophistication of AV systems will lend itself to the development of tomorrow’s enterprise machine learning systems. The complete show notes for this episode can be found at twimlai.com/go/454.
08/02/2150m 52s

Building, Adopting, and Maturing LinkedIn's Machine Learning Platform with Ya Xu - #453

Today we’re joined by Ya Xu, Head of Data Science at LinkedIn and TWIMLcon: AI Platforms 2021 keynote speaker. We cover a ton of ground with Ya, starting with her experiences prior to becoming Head of DS, as one of the architects of the LinkedIn Platform. We discuss the “three phases” (building, adoption, and maturation) she keeps in mind when building out a platform, and how to avoid “hero syndrome” early in the process. Finally, we dig into the various tools and platforms that give LinkedIn teams leverage, their organizational structure, as well as the emergence of differential privacy for security use cases and whether it's ready for prime time. The complete show notes for this episode can be found at twimlai.com/go/453.
04/02/2149m 6s

Expressive Deep Learning with Magenta DDSP w/ Jesse Engel - #452

Today we’re joined by Jesse Engel, Staff Research Scientist at Google, working on the Magenta Project. In our conversation with Jesse, we explore the current landscape of creative AI, and the role Magenta plays in helping express creativity through ML and deep learning. We dig deep into their Differentiable Digital Signal Processing (DDSP) library, which “lets you combine the interpretable structure of classical DSP elements (such as filters, oscillators, reverberation, etc.) with the expressivity of deep learning.” Finally, Jesse walks us through some of the other projects the Magenta team undertakes, including NLP and language modeling, and what he wants to see come out of the work that he and others are doing in creative AI research. The complete show notes for this episode can be found at twimlai.com/go/452.
01/02/2139m 7s

Semantic Folding for Natural Language Understanding with Francisco Webber - #451

Today we’re joined by return guest Francisco Webber, CEO & Co-founder of Cortical.io. Francisco was originally a guest over 4 years and 400 episodes ago, when we discussed his company Cortical.io and their unique approach to natural language processing. In this conversation, Francisco gives us an update on Cortical, including their applications and toolkit, which cover semantic extraction, classifier, and search use cases. We also discuss GPT-3 and how it compares to semantic folding, the unreasonable amount of data needed to train these models, and the difference between the GPT approach and semantic modeling for language understanding. The complete show notes for this episode can be found at twimlai.com/go/451.
29/01/2155m 17s

The Future of Autonomous Systems with Gurdeep Pall - #450

Today we’re joined by Gurdeep Pall, Corporate Vice President at Microsoft. Gurdeep, who we had the pleasure of speaking with on his 31st anniversary at the company, has had a hand in creating quite a few influential projects, including Skype for Business (and Teams), and was part of the first team that shipped Wi-Fi as part of a general-purpose operating system. In our conversation with Gurdeep, we discuss Microsoft’s acquisition of Bonsai and how it fits in the toolchain for creating brains for autonomous systems with “machine teaching,” and other practical applications of machine teaching in autonomous systems. We also explore the challenges of simulation, and how simulators have evolved to make the problems that the physical world brings more tenable. Finally, Gurdeep shares concrete use cases for autonomous systems, how to get the best ROI on those investments, and of course, what’s next in the very broad space of autonomous systems. The complete show notes for this episode can be found at twimlai.com/go/450.
25/01/2153m 17s

AI for Ecology and Ecosystem Preservation with Bryan Carstens - #449

Today we’re joined by Bryan Carstens, a professor in the Department of Evolution, Ecology, and Organismal Biology & Head of the Tetrapod Division in the Museum of Biological Diversity at The Ohio State University. In our conversation with Bryan, who comes from a traditional biology background, we cover a ton of ground, including a foundational layer of understanding for the vast known unknowns in species and biodiversity, and how he came to apply machine learning to his lab’s research. We explore a few of his lab’s projects, including applying ML to genetic data to understand the geographic and environmental structure of DNA, what factors keep machine learning from being used more frequently in biology, and what’s next for his group. The complete show notes for this episode can be found at twimlai.com/go/449.
21/01/2135m 49s

Off-Line, Off-Policy RL for Real-World Decision Making at Facebook - #448

Today we’re joined by Jason Gauci, a Software Engineering Manager at Facebook AI. In our conversation with Jason, we explore their reinforcement learning platform, ReAgent (formerly Horizon). We discuss the role of decision making and game theory in the platform and the types of decisions they’re using ReAgent to make, from ranking and recommendations to their eCommerce marketplace. Jason also walks us through the differences between online/offline and on/off-policy model training, and where ReAgent sits in this spectrum. Finally, we discuss the concept of counterfactual causality, and how they ensure safety in the results of their models. The complete show notes for this episode can be found at twimlai.com/go/448.
18/01/211h 1m

A Future of Work for the Invisible Workers in A.I. with Saiph Savage - #447

Today we’re joined by Saiph Savage, a Visiting Professor at the Human-Computer Interaction Institute at CMU, director of the HCI Lab at WVU, and co-director of the Civic Innovation Lab at UNAM. We caught up with Saiph during NeurIPS, where she delivered an insightful invited talk, “A Future of Work for the Invisible Workers in A.I.” In our conversation with Saiph, we gain a better understanding of the “invisible workers,” the people doing the work of labeling for machine learning and AI systems, and some of the issues these jobs raise, including the lack of economic empowerment and emotional trauma. We discuss ways that we can empower these workers, and push the companies that employ them to do the same. Finally, we discuss Saiph’s participatory design work with rural workers in the global south. The complete show notes for this episode can be found at twimlai.com/go/447.
14/01/2138m 19s

Trends in Graph Machine Learning with Michael Bronstein - #446

Today we’re back with the final episode of AI Rewind, joined by Michael Bronstein, a professor at Imperial College London and the Head of Graph Machine Learning at Twitter. In our conversation with Michael, we touch on his thoughts about the year in machine learning overall, including GPT-3 and implicit neural representations, but spend a major chunk of time on the subfield of graph machine learning. We talk through the application of graph ML across domains like physics and bioinformatics, and the tools to look out for. Finally, we discuss what Michael thinks is in store for 2021, including graph ML applied to molecule discovery and non-human communication translation.
11/01/211h 14m

Trends in Natural Language Processing with Sameer Singh - #445

Today we continue the 2020 AI Rewind series, joined by friend of the show Sameer Singh, an Assistant Professor in the Department of Computer Science at UC Irvine. We last spoke with Sameer at our Natural Language Processing office hours back at TWIMLfest, and he was the perfect person to help us break down 2020 in NLP. Sameer tackles the review in four main categories: Massive Language Modeling, Fundamental Problems with Language Models, Practical Vulnerabilities with Language Models, and Evaluation. We also explore the impact of GPT-3 and Transformer models, the intersection of vision and language models, the injection of causal thinking and modeling into language models, and much more. The complete show notes for this episode can be found at twimlai.com/go/445.
07/01/211h 21m

Trends in Computer Vision with Pavan Turaga - #444

AI Rewind continues today as we’re joined by Pavan Turaga, Associate Professor in the Departments of Arts, Media, and Engineering and Electrical Engineering, and Interim Director of the School of Arts, Media, and Engineering at Arizona State University. Pavan, who joined us back in June to talk through his work from CVPR ’20, Invariance, Geometry and Deep Neural Networks, is back to walk us through the trends he’s seen in computer vision over the past year. We explore the revival of physics-based thinking about scenes, differentiable rendering, the best papers, and where the field is going in the near future. We want to hear from you! Send your thoughts on the year that was 2020 below in the comments, or via Twitter at @samcharrington or @twimlai. The complete show notes for this episode can be found at twimlai.com/go/444.
04/01/211h 9m

Trends in Reinforcement Learning with Pablo Samuel Castro - #443

Today we kick off our annual AI Rewind series, joined by friend of the show Pablo Samuel Castro, a Staff Research Software Developer at Google Brain. Pablo joined us earlier this year for a discussion about music & AI and his geometric perspective on reinforcement learning, as well as our RL office hours during the inaugural TWIMLfest. In today’s conversation, we explore some of the latest and greatest RL advancements coming out of the major conferences this year, broken down into a few major themes: Metrics/Representations, Understanding and Evaluating Deep Reinforcement Learning, and RL in the Real World. This was a very fun conversation, and we encourage you to check out all the great papers and other resources available on the show notes page.
30/12/201h 26m

MOReL: Model-Based Offline Reinforcement Learning with Aravind Rajeswaran - #442

Today we close out our NeurIPS series, joined by Aravind Rajeswaran, a PhD student in machine learning and robotics at the University of Washington. At NeurIPS, Aravind presented his paper MOReL: Model-Based Offline Reinforcement Learning. In our conversation, we explore model-based reinforcement learning, and whether models are a “prerequisite” to achieve something analogous to transfer learning. We also dig into MOReL and the recent progress in offline reinforcement learning, the differences in developing MOReL models versus traditional RL models, and the theoretical results they’re seeing from this research. The complete show notes for this episode can be found at twimlai.com/go/442.
28/12/2038m 1s

Machine Learning as a Software Engineering Enterprise with Charles Isbell - #441

As we continue our NeurIPS 2020 series, we’re joined by friend-of-the-show Charles Isbell, Dean, John P. Imlay, Jr. Chair, and professor at the Georgia Tech College of Computing. Charles delivered an invited talk at this year’s conference, You Can’t Escape Hyperparameters and Latent Variables: Machine Learning as a Software Engineering Enterprise. In our conversation, we explore the success of the Georgia Tech Online Masters program in CS, which now has over 11k students enrolled, and the importance of making education accessible to as many people as possible. We spend quite a bit of time speaking about the impact machine learning is beginning to have on the world, and how we should move from thinking of ourselves as compiler hackers and begin to see the possibilities and opportunities that have been ignored. We also touch on the fallout from Timnit Gebru being “resignated,” the importance of having diverse voices and different perspectives “in the room,” and what the future holds for machine learning as a discipline. The complete show notes for this episode can be found at twimlai.com/go/441.
23/12/2046m 22s

Natural Graph Networks with Taco Cohen - #440

Today we kick off our NeurIPS 2020 series, joined by Taco Cohen, a Machine Learning Researcher at Qualcomm Technologies. In our conversation with Taco, we discuss his current research in equivariant networks and video compression using generative models, as well as his paper “Natural Graph Networks,” which explores the concept of “naturality, a generalization of equivariance” which suggests that weaker constraints will allow for a “wider class of architectures.” We also discuss some of Taco’s recent research on neural compression and a very interesting visual demo of equivariant CNNs that Taco and the Qualcomm team released during the conference. The complete show notes for this episode can be found at twimlai.com/go/440.
21/12/2058m 23s

Productionizing Time-Series Workloads at Siemens Energy with Edgar Bahilo Rodriguez - #439

Today we close out our re:Invent series, joined by Edgar Bahilo Rodriguez, Lead Data Scientist in the industrial applications division of Siemens Energy. Edgar spoke at this year's re:Invent conference about Productionizing R Workloads, and the resurgence of R for machine learning in production. In our conversation with Edgar, we explore the fundamentals of building a strong machine learning infrastructure, and how they’re breaking down applications and using mixed technologies to build models. We also discuss their industrial applications, including wind, power production management, and systems aimed at decreasing the environmental impact of pre-existing installations, as well as their extensive use of time-series forecasting across these use cases. The complete show notes can be found at twimlai.com/go/439.
18/12/2041m 26s

ML Feature Store at Intuit with Srivathsan Canchi - #438

Today we continue our re:Invent series with Srivathsan Canchi, Head of Engineering for the Machine Learning Platform team at Intuit. As we teased earlier this week, one of the major announcements coming from AWS at re:Invent was the release of the SageMaker Feature Store. To our pleasant surprise, we came to learn that our friends at Intuit are the original architects of this offering and partnered with AWS to productize it at a much broader scale. In our conversation with Srivathsan, we explore the focus areas supported by the Intuit machine learning platform across various teams, including QuickBooks and Mint, TurboTax, and Credit Karma, and his thoughts on why companies should be investing in feature stores. We also discuss why the concept of the “feature store” has seemingly exploded in the last year, and how you know when your organization is ready to deploy one. Finally, we dig into the specifics of the feature store, including the popularity of GraphQL and why they chose to include it in their pipelines, the similarities (and differences) between the two versions of the store, and much more! The complete show notes for this episode can be found at twimlai.com/go/438.
16/12/2041m 3s

re:Invent Roundup 2020 with Swami Sivasubramanian - #437

Today we’re kicking off our annual re:Invent series, joined by Swami Sivasubramanian, VP of Artificial Intelligence at AWS. During re:Invent last week, Amazon made a ton of announcements on the machine learning front, including quite a few advancements to SageMaker. In this roundup conversation, we discuss the motivation for hosting the first-ever machine learning keynote at the conference, a bunch of details surrounding tools like Pipelines for workflow management, Clarify for bias detection, and JumpStart for easy-to-use algorithms and notebooks, and many more. We also discuss the emphasis placed on DevOps and MLOps tools in these announcements, and how the tools are all interconnected. Finally, we briefly touch on the announcement of the AWS feature store, but be sure to check back later this week for a more in-depth discussion on that particular release! The complete show notes for this episode can be found at twimlai.com/go/437.
14/12/2048m 44s

Predictive Disease Risk Modeling at 23andMe with Subarna Sinha - #436

Today we’re joined by Subarna Sinha, Machine Learning Engineering Leader at 23andMe. 23andMe handles a massive amount of genomic data every year from its core ancestry business but also uses that data for disease prediction, which is the core use case we discuss in our conversation. Subarna talks us through an initial use case of creating an evaluation of polygenic scores, and how that led them to build an ML pipeline and platform. We talk through the tools and tech stack used for the operationalization of their platform, the use of synthetic data, the internal pushback that came along with the changes that were being made, and what’s next for her team and the platform. The complete show notes for this episode can be found at twimlai.com/go/436.
11/12/2039m 44s

Scaling Video AI at RTL with Daan Odijk - #435

Today we’re joined by Daan Odijk, Data Science Manager at RTL. In our conversation with Daan, we explore the RTL MLOps journey, and their need to put platform infrastructure in place for ad optimization and forecasting, personalization, and content understanding use cases. Daan walks us through some of the challenges on both the modeling and engineering sides of building the platform, as well as the inherent challenges of video applications. Finally, we discuss the current state of their platform, the benefits they’ve seen from having this infrastructure in place, and why building a custom platform was worth the investment. The complete show notes for this episode can be found at twimlai.com/go/435.
09/12/2040m 28s

Benchmarking ML with MLCommons w/ Peter Mattson - #434

Today we’re joined by Peter Mattson, General Chair at MLPerf, a Staff Engineer at Google, and President of MLCommons. In our conversation with Peter, we discuss MLCommons and MLPerf, the former an open engineering group with the goal of accelerating machine learning innovation, and the latter a set of standardized machine learning benchmarks used to measure things like model training speed and inference throughput. We explore the target user for the MLPerf benchmarks, the need for benchmarks in the ethics, bias, and fairness space, and how they’re approaching this through the "People’s Speech" dataset. We also walk through the MLCommons best practices for getting a model into production, why it's so difficult, and how MLCube can make the process easier for researchers and developers. The complete show notes page for this episode can be found at twimlai.com/go/434.
07/12/2046m 4s

Deep Learning for NLP: From the Trenches with Charlene Chambliss - #433

Today we’re joined by Charlene Chambliss, Machine Learning Engineer at Primer AI. Charlene, who we also had the pleasure of hosting at NLP Office Hours during TWIMLfest, is back to share some of the work she’s been doing with NLP. In our conversation, we explore her experiences working with newer NLP models and tools like BERT and Hugging Face, as well as what she’s learned along the way about word embeddings, labeling tasks, debugging, and more. We also focus on a few of her projects, like her popular multilingual BERT project and a COVID-19 classifier. Finally, Charlene shares her experience getting into data science and machine learning from a non-technical background, what the transition was like, and tips for people looking to make a similar shift.
03/12/2045m 43s

Feature Stores for Accelerating AI Development - #432

In this special episode of the podcast, we're joined by Kevin Stumpf, Co-Founder and CTO of Tecton, Willem Pienaar, an engineering lead at Gojek and founder of the Feast Project, and Maxime Beauchemin, Founder & CEO of Preset, for a discussion on Feature Stores for Accelerating AI Development. In this panel discussion, Sam and our guests explored how organizations can increase value and decrease time-to-market for machine learning using feature stores, MLOps, and open source. We also discuss the main data challenges of AI/ML, and the role of the feature store in solving those challenges. The complete show notes for this episode can be found at twimlai.com/go/432.
30/11/2056m 16s

An Exploration of Coded Bias with Shalini Kantayya, Deb Raji and Meredith Broussard - #431

In this special edition of the podcast, we're joined by Shalini Kantayya, the director of Coded Bias, and Deb Raji and Meredith Broussard, who both contributed to the film. In this panel discussion, Sam and our guests explored the societal implications of the biases embedded within AI algorithms, covering examples of AI systems with disparate impact across industries and communities, what can be done to mitigate this disparity, and opportunities to get involved. Our panelists Shalini, Meredith, and Deb each share insight into their experiences working on and researching bias in AI systems and the oppressive and dehumanizing impact these systems can have on people in the real world. The complete show notes for this episode can be found at twimlai.com/go/431.
27/11/201h 24m

Common Sense as an Algorithmic Framework with Dileep George - #430

Today we’re joined by Dileep George, Founder and CTO of Vicarious. Dileep, who was also a co-founder of Numenta, works at the intersection of AI research and neuroscience, and famously pioneered hierarchical temporal memory. In our conversation, we explore the importance of mimicking the brain when looking to achieve artificial general intelligence, the nuance of “language understanding,” and how all the tasks that fall underneath it are interconnected, with or without language. We also discuss his work with Recursive Cortical Networks, Schema Networks, and what’s next on the path towards AGI!
23/11/2047m 52s

Scaling Enterprise ML in 2020: Still Hard! with Sushil Thomas - #429

Today we’re joined by Sushil Thomas, VP of Engineering for Machine Learning at Cloudera. Over the summer, I had the pleasure of hosting Sushil and a handful of business leaders across industries at the Cloudera Virtual Roundtable. In this conversation with Sushil, we recap the roundtable, exploring some of the topics discussed and insights gained from those conversations. Sushil gives us a look at how COVID-19 has impacted business throughout the year, and how the pandemic is shaping enterprise decision making moving forward. We also discuss some of the key trends he’s seeing as organizations try to scale their machine learning and AI efforts, including understanding best practices and learning how to hybridize the engineering side of ML with the scientific exploration of the tasks. Finally, we explore whether organizational models like hub-and-spoke vs. centralized are still organization-specific or whether that’s changed in recent years, as well as how to attract and retain good ML talent with giant companies like Google and Microsoft looming large. The complete show notes for this episode can be found at twimlai.com/go/429.
19/11/2046m 19s

Enabling Clinical Automation: From Research to Deployment with Devin Singh - #428

Today we’re joined by Devin Singh, Physician Lead for Clinical Artificial Intelligence & Machine Learning in Pediatric Emergency Medicine at the Hospital for Sick Children (SickKids) in Toronto, and Founder and CEO of HeroAI. In our conversation with Devin, we discuss some of the interesting ways he is deploying machine learning within SickKids, as well as the current structure of academic research, including how heavily research and publications are incentivized, how few of those research projects actually make it to deployment, and how Devin is working to flip that system on its head. We also talk about his work at HeroAI, where he is commercializing and deploying his academic research to build out infrastructure and deploy AI solutions within hospitals, creating an automated pipeline with patients, caregivers, and EHS companies. Finally, we discuss Devin’s thoughts on how he’d approach bias mitigation in these systems, and the importance of proper stakeholder engagement and design methodology when building ML systems. The complete show notes for this episode can be found at twimlai.com/go/428.
16/11/2043m 37s

Pixels to Concepts with Backpropagation w/ Roland Memisevic - #427

Today we’re joined by Roland Memisevic, return podcast guest and Co-Founder & CEO of Twenty Billion Neurons. We last spoke to Roland in 2018, and earlier this year TwentyBN made a sharp pivot to a surprising use case: Fitness Ally, a companion app that acts as an interactive, personalized fitness coach on your phone. In our conversation with Roland, we explore the progress TwentyBN has made on their goal of training deep neural networks to understand physical movement and exercise. We also discuss how they’ve taken their research on understanding video context and awareness and applied it in their app, including how recent advancements have allowed them to deploy their neural net locally while preserving privacy, and Roland’s thoughts on the enormous opportunity that lies in the merging of language and video processing. The complete show notes for this episode can be found at twimlai.com/go/427.
12/11/2034m 53s

Fighting Global Health Disparities with AI w/ Jon Wang - #426

Today we’re joined by Jon Wang, a medical student at UCSF, and former Gates Scholar and AI researcher at the Bill and Melinda Gates Foundation. In our conversation with Jon, we explore a few of the different ways he’s attacking various public health issues, including improving the electronic health records system through automating clinical order sets, and exploring how the lack of literature and AI talent in the non-profit and healthcare spaces, along with bad data, has further marginalized undersupported communities. We also discuss his work at the Gates Foundation, which included understanding how AI can be helpful in lower-resource and lower-income countries, building digital infrastructure, and much more. The complete show notes for this episode can be found at twimlai.com/go/426.
09/11/2035m 49s

Accessibility and Computer Vision - #425

Digital imagery is pervasive today. More than a billion images per day are produced and uploaded to social media sites, with many more embedded within websites, apps, digital documents, and eBooks. Engaging with digital imagery has become fundamental to participating in contemporary society, including education, the professions, e-commerce, civics, entertainment, and social interactions. However, most digital images remain inaccessible to the 39 million people worldwide who are blind. AI and computer vision technologies hold the potential to increase image accessibility for people who are blind, through technologies like automated image descriptions. The speakers share their perspectives as people who are both technology experts and blind, providing insight into future directions for the field of computer vision for describing images and videos for people who are blind. A video of this panel is available on the show notes page for this episode, which can be found at twimlai.com/go/425.
05/11/201h 0m

NLP for Equity Investing with Frank Zhao - #424

Today we’re joined by Frank Zhao, Senior Director of Quantamental Research at S&P Global Market Intelligence. In our conversation with Frank, we explore how he came to work at the intersection of ML and finance, and how he navigates the relationship between data science and domain expertise. We also discuss the rise of data science in the investment management space, examining the largely under-explored technique of using unstructured data to gain insights into equity investing, and the edge it can provide for investors. Finally, Frank gives us a look at how he uses natural language processing with textual data of earnings call transcripts and walks us through the entire pipeline. The complete show notes for this episode can be found at twimlai.com/go/424.
02/11/2044m 20s

The Future of Education and AI with Salman Khan - #423

In the final #TWIMLfest Keynote Interview, we’re joined by Salman Khan, Founder of Khan Academy. In our conversation with Sal, we explore the amazing origin story of the academy, and how coronavirus is shaping the future of education and remote and distance learning, for better and for worse. We also explore Sal’s perspective on machine learning and AI being used broadly in education, the potential of injecting a platform like Khan Academy with ML and AI for course recommendations, and if they’re planning on implementing these features in the future. Finally, Sal shares some great stories about the impact of community and opportunity, and what advice he has for learners within the TWIML community and beyond! The complete show notes for this episode can be found at twimlai.com/go/423.
28/10/2047m 5s

Why AI Innovation and Social Impact Go Hand in Hand with Milind Tambe - #422

In this special #TWIMLfest Keynote episode, we’re joined by Milind Tambe, Director of AI for Social Good at Google Research India, and Director of the Center for Research in Computation and Society (CRCS) at Harvard University. In our conversation, we explore Milind’s various research interests, most of which fall under the umbrella of AI for Social Impact, including his work in public health, both stateside and abroad, his conservation work in South Asia and Africa, and his thoughts on the ways that those interested in social impact can get involved.  The complete show notes for this episode can be found at twimlai.com/go/422.
23/10/2035m 32s

What's Next for Fast.ai? w/ Jeremy Howard - #421

In this special #TWIMLfest episode of the podcast, we’re joined by Jeremy Howard, Founder of Fast.ai. In our conversation with Jeremy, we discuss his career path, including his journey through the consulting world and how those experiences led him down the path to ML education, his thoughts on the current state of the machine learning adoption cycle, and if we’re at maximum capacity for deep learning use and capability. Of course, we dig into the newest version of the fast.ai framework and course, the reception of Jeremy’s book ‘Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD,’ and what’s missing from the machine learning education landscape. If you’ve missed our previous conversations with Jeremy, I encourage you to check them out here and here. The complete show notes for this episode can be found at https://twimlai.com/go/421.
21/10/201h 1m

Feature Stores for MLOps with Mike del Balso - #420

Today we’re joined by Mike del Balso, Co-founder and CEO of Tecton. Mike, who you might remember from our last conversation on the podcast, was a foundational member of the Uber team that created their ML platform, Michelangelo. Since his departure from the company in 2018, he has been busy building up Tecton and their enterprise feature store. In our conversation, Mike walks us through why he chose to focus on the feature store aspects of the machine learning platform, the journey, personal and otherwise, to operationalizing machine learning, and the capabilities that more mature platform teams tend to look for or need to build. We also explore the differences between standalone components and feature stores, whether organizations are taking their existing databases and building feature stores with them, and what a dynamic, always-available feature store looks like in deployment. Finally, we explore what sets Tecton apart from other vendors in this space, including enterprise cloud providers who are throwing their hat in the ring. The complete show notes for this episode can be found at twimlai.com/go/420. Thanks to our friends at Tecton for sponsoring this episode of the podcast! Find out more about what they're up to at tecton.ai.
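To ground the feature store concept discussed in this episode, here is a minimal, illustrative in-memory sketch (not Tecton's actual API; all names and methods here are hypothetical): feature transformations are registered once, materialized to an online store, and then served via fast key-value lookups so training and inference see consistent values.

```python
# Toy feature store sketch. Illustrative only -- real feature stores add
# offline/online sync, point-in-time correctness, and monitoring.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, Tuple


@dataclass
class FeatureStore:
    """Named feature transformations over raw entity records."""
    _features: Dict[str, Callable[[dict], Any]] = field(default_factory=dict)
    _online: Dict[Tuple[str, str], Any] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[dict], Any]) -> None:
        # Define the feature once, so training and serving share one definition.
        self._features[name] = fn

    def materialize(self, entity_id: str, record: dict) -> None:
        # Precompute all features so online lookup is a simple key-value read.
        for name, fn in self._features.items():
            self._online[(entity_id, name)] = fn(record)

    def get_online(self, entity_id: str, name: str) -> Any:
        return self._online[(entity_id, name)]


store = FeatureStore()
store.register("total_spend", lambda r: sum(r["purchases"]))
store.register("n_orders", lambda r: len(r["purchases"]))
store.materialize("user_1", {"purchases": [10.0, 25.0, 5.0]})
print(store.get_online("user_1", "total_spend"))  # 40.0
print(store.get_online("user_1", "n_orders"))     # 3
```

The key design point, echoed in the conversation, is that a single registered definition serves both batch (training) and online (inference) paths, avoiding training/serving skew.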
19/10/2045m 29s

Exploring Causality and Community with Suzana Ilić - #419

In this special #TWIMLfest episode, we’re joined by Suzana Ilić, a computational linguist at Causaly and founder of Machine Learning Tokyo (MLT). Suzana joined us as a keynote speaker to discuss the origins of the MLT community, but we cover a lot of ground in this conversation. We briefly discuss Suzana’s work at Causaly, touching on her experiences transitioning from linguist and domain expert to working with causal modeling, balancing her role as both product manager and leader of the development team for their causality extraction module, and the unique ways that she thinks about UI in relation to their product. We also spend quite a bit of time exploring MLT, including how they’ve achieved exponential growth within the community over the past few years and when Suzana knew MLT was moving beyond just a personal endeavor, her experiences publishing papers at major ML conferences as an independent organization, and what inspires her within the broader ML/AI community. And of course, we answer quite a few great questions from our live audience!
16/10/2054m 8s

Decolonizing AI with Shakir Mohamed - #418

In this special #TWIMLfest edition of the podcast, we’re joined by Shakir Mohamed, a Senior Research Scientist at DeepMind. Shakir is also a leader of Deep Learning Indaba, a non-profit organization whose mission is to Strengthen African Machine Learning and Artificial Intelligence. In our conversation with Shakir, we discuss his recent paper ‘Decolonial AI,’ the distinction between decolonizing AI and ethical AI, while also exploring the origin of the Indaba, the phases of community, and much more. The complete show notes for this episode can be found at twimlai.com/go/418.
14/10/2054m 3s

Spatial Analysis for Real-Time Video Processing with Adina Trufinescu - #417

Today we’re joined by Adina Trufinescu, Principal Program Manager at Microsoft, to discuss some of the computer vision updates announced at Ignite 2020.  We focus on the technical innovations that went into their recently announced spatial analysis software, and the software’s use cases including the movement of people within spaces, distance measurements (social distancing), and more.  We also discuss the ‘responsible AI guidelines’ put in place to curb bad actors potentially using this software for surveillance, what techniques are being used to do object detection and image classification, and the challenges to productizing this research.  The complete show notes for this episode can be found at twimlai.com/go/417.
08/10/2039m 41s

How Deep Learning has Revolutionized OCR with Cha Zhang - #416

Today we’re joined by Cha Zhang, a Partner Engineering Manager at Microsoft Cloud & AI.  Cha’s work at MSFT is focused on exploring ways that new technologies can be applied to optical character recognition, or OCR, pushing the boundaries of what has been seen as an otherwise ‘solved’ problem. In our conversation with Cha, we explore some of the traditional challenges of doing OCR in the wild, and the ways in which deep learning algorithms are being applied to transform these solutions.  We also discuss the difficulties of using an end-to-end pipeline for OCR work, whether there is a semi-supervised framing that could be used for OCR, the role of techniques like neural architecture search, how advances in NLP could influence the advancement of OCR problems, and much more.  The complete show notes for this episode can be found at twimlai.com/go/416.
05/10/2057m 31s

Machine Learning for Food Delivery at Global Scale - #415

In this special edition of the show, we discuss the various ways in which machine learning plays a role in helping businesses overcome their challenges in the food delivery space.  A few weeks ago Sam had the opportunity to moderate a panel at the Prosus AI Marketplace virtual event with Sandor Caetano of iFood, Dale Vaz of Swiggy, Nicolas Guenon of Delivery Hero, and Euro Beinat of Prosus.  In this conversation, panelists describe the application of machine learning to a variety of business use cases, including how they deliver recommendations, the unique ways they handle the logistics of deliveries, and fraud and abuse prevention.  The complete show notes for this episode can be found at twimlai.com/go/415.
02/10/2057m 49s

Open Source at Qualcomm AI Research with Jeff Gehlhaar and Zahra Koochak - #414

Today we're joined by Jeff Gehlhaar, VP of Technology at Qualcomm, and Zahra Koochak, Staff Machine Learning Engineer at Qualcomm AI Research.  If you haven’t had a chance to listen to our first interview with Jeff, I encourage you to check it out here! In this conversation, we catch up with Jeff and Zahra to get an update on what the company has been up to since our last conversation, including the Snapdragon 865 chipset and Hexagon Neural Network Direct.  We also discuss open-source projects like the AI efficiency toolkit and Tensor Virtual Machine compiler, and how these projects fit in the broader Qualcomm ecosystem. Finally, we talk through their vision for on-device federated learning.  The complete show notes for this episode can be found at twimlai.com/go/414.
30/09/2042m 13s

Visualizing Climate Impact with GANs w/ Sasha Luccioni - #413

Today we’re joined by Sasha Luccioni, a Postdoctoral Researcher at the MILA Institute, and moderator of our upcoming TWIMLfest Panel, ‘Machine Learning in the Fight Against Climate Change.’  We were first introduced to Sasha’s work through her paper on ‘Visualizing The Consequences Of Climate Change Using Cycle-consistent Adversarial Networks’, and we’re excited to pick her brain about the ways ML is currently being leveraged to help the environment. In our conversation, we explore the use of GANs to visualize the consequences of climate change, the evolution of different approaches she used, and the challenges of training GANs using an end-to-end pipeline. Finally, we talk through Sasha’s goals for the aforementioned panel, which is scheduled for Friday, October 23rd at 1 pm PT. Register for all of the great TWIMLfest sessions at twimlfest.com! The complete show notes for this episode can be found at twimlai.com/go/413.
28/09/2041m 32s

ML-Powered Language Learning at Duolingo with Burr Settles - #412

Today we’re joined by Burr Settles, Research Director at Duolingo. Most would acknowledge that one of the most effective ways to learn is one on one with a tutor, and Duolingo’s main goal is to replicate that at scale. In our conversation with Burr, we dig into how the business model has changed over time, the properties that make a good tutor, and how those features translate to the AI tutor they’ve built. We also discuss the Duolingo English Test, and the challenges they’ve faced with maintaining the platform while adding languages and courses. Check out the complete show notes for this episode at twimlai.com/go/412.
24/09/2055m 4s

Bridging The Gap Between Machine Learning and the Life Sciences with Artur Yakimovich - #411

Today we’re joined by Artur Yakimovich, Co-Founder at Artificial Intelligence for Life Sciences and a visiting scientist in the Lab for Molecular Cell Biology at University College London. In our conversation with Artur, we explore the gulf that exists between life science researchers and the tools and applications used by computer scientists.  While Artur’s background is in viral chemistry, he has since transitioned to a career in computational biology to “see where chemistry stopped, and biology started.” We discuss his work in that middle ground, looking at several of his recent works applying deep learning and advanced neural networks, like capsule networks, to his research problems.  Finally, we discuss his efforts building the Artificial Intelligence for Life Sciences community, a non-profit organization he founded to bring scientists from different fields together to share ideas and solve interdisciplinary problems.  Check out the complete show notes at twimlai.com/go/411.
21/09/2040m 25s

Understanding Cultural Style Trends with Computer Vision w/ Kavita Bala - #410

Today we’re joined by Kavita Bala, the Dean of Computing and Information Science at Cornell University.  Kavita, whose research explores the overlap of computer vision and computer graphics, joined us to discuss a few of her projects, including GrokStyle, a startup that was recently acquired by Facebook and is currently being deployed across their Marketplace features. We also talk about StreetStyle/GeoStyle, projects focused on using social media data to find style clusters across the globe.  Kavita shares her thoughts on the privacy and security implications, progress with integrating privacy-preserving techniques into vision projects like the ones she works on, and what’s next for Kavita’s research. The complete show notes for this episode can be found at twimlai.com/go/410.
17/09/2038m 9s

That's a VIBE: ML for Human Pose and Shape Estimation with Nikos Athanasiou, Muhammed Kocabas, Michael Black - #409

Today we’re joined by Ph.D. students Nikos Athanasiou and Muhammed Kocabas, along with Michael Black, Director of the Max Planck Institute for Intelligent Systems.  We caught up with the group to explore their paper VIBE: Video Inference for Human Body Pose and Shape Estimation, which they submitted to CVPR 2020. In our conversation, we explore the problem that they’re trying to solve through an adversarial learning framework, the datasets (AMASS) that they’re building upon, the core elements that separate this work from its predecessors in this area of research, and the results they’ve seen through their experiments and testing.  The complete show notes for this episode can be found at https://twimlai.com/go/409. Register for TWIMLfest today!
14/09/2043m 19s

3D Deep Learning with PyTorch 3D w/ Georgia Gkioxari - #408

Today we’re joined by Georgia Gkioxari, a research scientist at Facebook AI Research.  Georgia was hand-picked by the TWIML community to discuss her work on the recently released open-source library PyTorch3D. In our conversation, Georgia describes her experiences as a computer vision researcher prior to the 2012 deep learning explosion, and how the entire landscape has changed since then.  Georgia walks us through the user experience of PyTorch3D, while also detailing who the target audience is, why the library is useful, and how it fits in the broad goal of giving computers better means of perception. Finally, Georgia gives us a look at what it’s like to be a co-chair for CVPR 2021 and the challenges with updating the peer review process for the larger academic conferences.  The complete show notes for this episode can be found at twimlai.com/go/408.
10/09/2035m 16s

What are the Implications of Algorithmic Thinking? with Michael I. Jordan - #407

Today we’re joined by the legendary Michael I. Jordan, Distinguished Professor in the Departments of EECS and Statistics at UC Berkeley.  Michael was gracious enough to connect us all the way from Italy after being named IEEE’s 2020 John von Neumann Medal recipient. In our conversation with Michael, we explore his career path, and how influences from other fields like philosophy shaped it.  We spend quite a bit of time discussing his current exploration into the intersection of economics and AI, and how machine learning systems could be used to create value and empowerment across many industries through “markets.” We also touch on the potential of “interacting learning systems” at scale, the valuation of data, the commoditization of human knowledge into computational systems, and much, much more. The complete show notes for this episode can be found at twimlai.com/go/407.
07/09/2056m 33s

Beyond Accuracy: Behavioral Testing of NLP Models with Sameer Singh - #406

Today we’re joined by Sameer Singh, an assistant professor in the department of computer science at UC Irvine.  Sameer’s work centers on large-scale and interpretable machine learning applied to information extraction and natural language processing. We caught up with Sameer right after he was awarded the best paper award at ACL 2020 for his work on Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. In our conversation, we explore CheckList, the task-agnostic methodology for testing NLP models introduced in the paper. We also discuss how well we understand the cause of pitfalls or failure modes in deep learning models, Sameer’s thoughts on embodied AI, and his work on the now famous LIME paper, which he co-authored alongside Carlos Guestrin.  The complete show notes for this episode can be found at twimlai.com/go/406.
03/09/2041m 37s

How Machine Learning Powers On-Demand Logistics at Doordash with Gary Ren - #405

Today we’re joined by Gary Ren, a machine learning engineer for the logistics team at DoorDash.  In our conversation, we explore how machine learning powers the entire logistics ecosystem. We discuss the stages of their “marketplace,” and how using ML for optimized route planning and matching affects consumers, dashers, and merchants. We also talk through how they use traditional mathematics and classical machine learning, potential use cases for reinforcement learning frameworks, and the challenges of implementing these explorations.  The complete show notes for this episode can be found at twimlai.com/go/405! Check out our upcoming event at twimlai.com/twimlfest
31/08/2043m 15s

Machine Learning as a Software Engineering Discipline with Dillon Erb - #404

Today we’re joined by Dillon Erb, Co-founder & CEO of Paperspace. We’ve followed Paperspace since their origins offering GPU-enabled compute resources to data scientists and machine learning developers, to the release of their Jupyter-based Gradient service. Our conversation with Dillon centered on the challenges that organizations face building and scaling repeatable machine learning workflows, and how they’ve done this in their own platform by applying time-tested software engineering practices.  We also discuss the importance of reproducibility in production machine learning pipelines, how the processes and tools of software engineering map to the machine learning workflow, and technical issues that ML teams run into when trying to scale the ML workflow. The complete show notes for this episode can be found at twimlai.com/go/404.
27/08/2044m 33s

AI and the Responsible Data Economy with Dawn Song - #403

Today we’re joined by Professor of Computer Science at UC Berkeley, Dawn Song. Dawn’s research is centered at the intersection of AI, deep learning, security, and privacy. She’s currently focused on bringing these disciplines together with her startup, Oasis Labs.  In our conversation, we explore their goals of building a ‘platform for a responsible data economy,’ which would combine techniques like differential privacy, blockchain, and homomorphic encryption. The platform would give consumers more control of their data, and enable businesses to better utilize data in a privacy-preserving and responsible way.  We also discuss how to privatize and anonymize data in language models like GPT-3, real-world examples of adversarial attacks and how to train against them, her work on program synthesis to get towards AGI, and her work on privatizing coronavirus contact tracing data. The complete show notes for this episode can be found at twimlai.com/go/403.
24/08/2053m 24s

Relational, Object-Centric Agents for Completing Simulated Household Tasks with Wilka Carvalho - #402

Today we’re joined by Wilka Carvalho, a PhD student at the University of Michigan, Ann Arbor. In our conversation, we focus on his paper ‘ROMA: A Relational, Object-Model Learning Agent for Sample-Efficient Reinforcement Learning,’ which explores the challenge of object-interaction tasks, focusing on everyday, in-home functions. We discuss how he’s addressing the challenge of these tasks, and the biggest obstacles he’s run into along the way.
20/08/2041m 21s

Model Explainability Forum - #401

Today we bring you the latest Discussion Series: The Model Explainability Forum. Our group of experts and researchers explore the current state of explainability and discuss the key emerging ideas shaping the field. Each guest shares their unique perspective and contributions to thinking about model explainability in a practical way. We explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more.
17/08/201h 27m

What NLP Tells Us About COVID-19 and Mental Health with Johannes Eichstaedt - #400

Today we’re joined by Johannes Eichstaedt, an Assistant Professor of Psychology at Stanford University. In our conversation, we explore how Johannes applies his physics background to a career as a computational social scientist, and some of the major patterns in the data that emerged over the first few months of lockdown, including mental health, social norms, and political patterns. We also explore how Johannes built the process, and the techniques he’s using to collect, sift through, and understand the data.
13/08/2058m 44s

Human-AI Collaboration for Creativity with Devi Parikh - #399

Today we’re joined by Devi Parikh, Associate Professor at the School of Interactive Computing at Georgia Tech, and research scientist at Facebook AI Research (FAIR). In our conversation, we touch on Devi’s definition of creativity, explore multiple ways that AI could impact the creative process for artists, and help humans become more creative. We investigate tools like casual creator for preference prediction, neuro-symbolic generative art, and visual journaling.
10/08/2044m 32s

Neural Augmentation for Wireless Communication with Max Welling - #398

Today we’re joined by Max Welling, Vice President of Technologies at Qualcomm Netherlands, and Professor at the University of Amsterdam. In our conversation, we explore Max’s work in neural augmentation, and how it’s being deployed. We also discuss his work with federated learning and incorporating the technology on devices to give users more control over the privacy of their personal data. Max also shares his thoughts on quantum mechanics and the future of quantum neural networks for chip design.
06/08/2048m 48s

Quantum Machine Learning: The Next Frontier? with Iordanis Kerenidis - #397

Today we conclude our 2020 ICML coverage joined by Iordanis Kerenidis, Research Director at Centre National de la Recherche Scientifique (CNRS) in Paris, and Head of Quantum Algorithms at QC Ware. Iordanis’ research centers around quantum algorithms of machine learning, and was an ICML main conference Keynote speaker on the topic! We focus our conversation on his presentation, exploring the prospects and challenges of quantum machine learning, as well as the field’s history, evolution, and future. We’ll also discuss the foundations of quantum computing, and some of the challenges to consider for breaking into the field. The complete show notes for this episode can be found at twimlai.com/talk/397. For complete ICML series details, visit twimlai.com/icml20.
03/08/201h 2m

ML and Epidemiology with Elaine Nsoesie - #396

Today we continue our ICML series with Elaine Nsoesie, assistant professor at Boston University. In our conversation, we discuss the different ways that machine learning applications can be used to address global health issues, including infectious disease surveillance, and tracking search data for changes in health behavior in African countries. We also discuss COVID-19 epidemiology and the importance of recognizing how the disease is affecting people of different races and economic backgrounds.
30/07/2046m 59s

Language (Technology) Is Power: Exploring the Inherent Complexity of NLP Systems with Hal Daumé III - #395

Today we’re joined by Hal Daume III, professor at the University of Maryland and Co-Chair of the 2020 ICML Conference. We had the pleasure of catching up with Hal ahead of this year's ICML to discuss his research at the intersection of bias, fairness, NLP, and the effects language has on machine learning models, exploring language in two categories as they appear in machine learning models and systems: (1) How we use language to interact with the world, and (2) how we “do” language.
27/07/201h 2m

Graph ML Research at Twitter with Michael Bronstein - #394

Today we’re excited to be joined by return guest Michael Bronstein, Head of Graph Machine Learning at Twitter. In our conversation, we discuss the evolution of the graph machine learning space, his new role at Twitter, and some of the research challenges he’s faced, including scalability and working with dynamic graphs. Michael also dives into his work on differential graph modules for graph CNNs, and the various applications of this work.
23/07/2055m 20s

Panel: The Great ML Language (Un)Debate! - #393

Today we’re excited to bring ‘The Great ML Language (Un)Debate’ to the podcast! In the latest edition of our series of live discussions, we brought together experts and enthusiasts to discuss both popular and emerging programming languages for machine learning, along with the strengths, weaknesses, and approaches offered by Clojure, JavaScript, Julia, Probabilistic Programming, Python, R, Scala, and Swift. We round out the session with an audience Q&A (58:28).
20/07/201h 34m

What the Data Tells Us About COVID-19 with Eric Topol - #392

Today we’re joined by Eric Topol, Director & Founder of the Scripps Research Translational Institute, and author of the book Deep Medicine. We caught up with Eric to talk through what we’ve learned about the coronavirus since its emergence, and the role of tech in understanding and preventing the spread of the disease. We also explore the broader opportunity for medical applications of AI, the promise of personalized medicine, and how techniques like federated learning can offer more privacy in healthcare.
16/07/2042m 33s

The Case for Hardware-ML Model Co-design with Diana Marculescu - #391

Today we’re joined by Diana Marculescu, Professor of Electrical and Computer Engineering at UT Austin. We caught up with Diana to discuss her work on hardware-aware machine learning. In particular, we explore her keynote, “Putting the “Machine” Back in Machine Learning: The Case for Hardware-ML Model Co-design” from CVPR 2020. We explore how her research group is focusing on making models more efficient so that they run better on current hardware systems, and how they plan on achieving true co-design.
13/07/2045m 48s

Computer Vision for Remote AR with Flora Tasse - #390

Today we conclude our CVPR coverage joined by Flora Tasse, Head of Computer Vision & AI Research at Streem. Flora, a keynote speaker at the AR/VR workshop, walks us through some of the interesting use cases at the intersection of AI, CV, and AR technologies. We also discuss her current work, the origin of her company Selerio, which was eventually acquired by Streem, the difficulties associated with building 3D mesh environments and extracting metadata from those environments, the challenges of pose estimation, and more.
09/07/2040m 59s

Deep Learning for Automatic Basketball Video Production with Julian Quiroga - #389

Today we're joined by Julian Quiroga, a Computer Vision Team Lead at Genius Sports, to discuss his recent paper “As Seen on TV: Automatic Basketball Video Production using Gaussian-based Actionness and Game States Recognition.” We explore camera setups and angles, detection and localization of figures on the court (players, refs, and of course, the ball), and the role that deep learning plays in the process. We also break down how this work applies to different sports, and the ways that he is looking to improve it.
06/07/2041m 47s

How External Auditing is Changing the Facial Recognition Landscape with Deb Raji - #388

Today we’re taking a break from our CVPR coverage to bring you this interview with Deb Raji, a Technology Fellow at the AI Now Institute. Recently there have been quite a few major news stories in the AI community, including the self-imposed moratorium on facial recognition tech from Amazon, IBM and Microsoft. In our conversation with Deb, we dig into these stories, discussing the origins of Deb’s work on the Gender Shades project, the harms of facial recognition, and much more.
02/07/201h 20m

AI for High-Stakes Decision Making with Hima Lakkaraju - #387

Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University. At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of explainability techniques that center perturbations, such as LIME or SHAP, as well as how attacks on these models can be carried out, and what they look like.
29/06/2045m 18s

Invariance, Geometry and Deep Neural Networks with Pavan Turaga - #386

We continue our CVPR coverage with today’s guest, Pavan Turaga, Associate Professor at Arizona State University. Pavan gave a keynote presentation at the Differential Geometry in CV and ML Workshop, speaking on Revisiting Invariants with Geometry and Deep Learning. We go in-depth on Pavan’s research on integrating physics-based principles into computer vision. We also discuss the context of the term “invariant,” and Pavan contextualizes this work in relation to Hinton’s similar Capsule Network research.
25/06/2046m 0s

Channel Gating for Cheaper and More Accurate Neural Nets with Babak Ehteshami Bejnordi - #385

Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm. Babak is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into a few papers in great detail including one from this year’s CVPR conference, Conditional Channel Gated Networks for Task-Aware Continual Learning, covering how gates are used to drive efficiency and accuracy, while decreasing model size, how this research manifests into actual products, and more!
22/06/2055m 18s

Machine Learning Commerce at Square with Marsal Gavalda - #384

Today we’re joined by Marsal Gavalda, head of machine learning for the Commerce platform at Square, where he manages the development of machine learning for various tools and platforms, including marketing, appointments, and above all, risk management. We explore how they manage their vast portfolio of projects, and how having an ML and technology focus at the outset of the company has contributed to their success, tips and best practices for internal democratization of ML, and much more.
18/06/2051m 31s

Cell Exploration with ML at the Allen Institute w/ Jianxu Chen - #383

Today we’re joined by Jianxu Chen, a scientist at the Allen Institute for Cell Science. At the latest GTC conference, Jianxu presented his work on the Allen Cell Explorer Toolkit, an open-source project that allows users to do 3D segmentation of intracellular structures in fluorescence microscope images at high resolutions, making the images more accessible for data analysis. We discuss three of the major components of the toolkit: the cell image analyzer, the image generator, and the image visualizer.
15/06/2044m 16s

Neural Arithmetic Units & Experiences as an Independent ML Researcher with Andreas Madsen - #382

Today we’re joined by Andreas Madsen, an independent researcher based in Denmark. While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural Arithmetic Units,” we also spend time exploring his experience as an independent researcher, discussing the difficulties of working with limited resources, the importance of finding peers to collaborate with, and tempering expectations of getting papers accepted to conferences -- something that might take a few tries to get right.
11/06/2031m 48s

2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury - #381

Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible AI at Accenture. In our conversation with Rumman, we explored questions like:  • Why is now such a critical inflection point in the application of responsible AI? • How should engineers and practitioners think about AI ethics and responsible AI? • Why is AI ethics inherently personal and how can you define your own personal approach? • Is the implementation of AI governance necessarily authoritarian?
08/06/201h 1m

Panel: Advancing Your Data Science Career During the Pandemic - #380

Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel. In this conversation, we explore ways that Data Scientists and ML/AI practitioners can continue to advance their careers despite current challenges. Our panelists provide concrete tips, advice, and direction for those just starting out, those affected by layoffs, and those just wanting to move forward in their careers.
04/06/201h 7m

On George Floyd, Empathy, and the Road Ahead

Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest.
02/06/206m 19s

Engineering a Less Artificial Intelligence with Andreas Tolias - #379

Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine. We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture.
28/05/2046m 21s

Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez - #378

Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. In our conversation, we explore Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which looks at compute-efficient training strategies for models. We discuss the two main problems being solved: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, is it really improving any efficiency?
25/05/2052m 6s

The Physics of Data with Alpha Lee - #377

Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge. Our conversation centers around Alpha’s research, which can be broken down into three main categories: data-driven drug discovery, material discovery, and physical analysis of machine learning. We discuss the similarities and differences between drug discovery and material science, his startup PostEra, which offers medicinal chemistry as a service powered by machine learning, and much more.
21/05/2033m 59s

Is Linguistics Missing from NLP Research? w/ Emily M. Bender - #376 🦜

Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington. Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore if we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or is the progress we're making (e.g. with deep learning models like Transformers) just fine?
18/05/2052m 33s

Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks with Nataniel Ruiz - #375

Today we’re joined by Nataniel Ruiz, a PhD Student at Boston University. We caught up with Nataniel to discuss his paper “Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.” In our conversation, we discuss the concept of this work, as well as some of the challenging parts of implementing this work, potential scenarios in which this could be deployed, and the broader contributions that went into this work.
14/05/2042m 32s

Understanding the COVID-19 Data Quality Problem with Sherri Rose - #374

Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School. We cover a lot of ground in our conversation, including the intersection of her research with the current COVID-19 pandemic, the importance of quality in datasets and rigor when publishing papers, and the pitfalls of using causal inference. We also touch on Sherri’s work in algorithmic fairness, the shift she’s seen in fairness conferences covering these issues in relation to healthcare research, and a few recent papers.
11/05/2044m 17s

The Whys and Hows of Managing Machine Learning Artifacts with Lukas Biewald - #373

Today we’re joined by Lukas Biewald, founder and CEO of Weights & Biases, to discuss their new tool Artifacts, an end-to-end pipeline tracker. In our conversation, we explore Artifacts’ place in the broader machine learning tooling ecosystem through the lens of our eBook “The Definitive Guide to ML Platforms” and how it fits with the W&B model management platform. We also discuss what exactly “Artifacts” are, what the tool is tracking, and take a look at the onboarding process for users.
07/05/2054m 49s

Language Modeling and Protein Generation at Salesforce with Richard Socher - #372

Today we’re joined by Richard Socher, Chief Scientist and Executive VP at Salesforce. Richard and his team have published quite a few great projects lately, including CTRL: A Conditional Transformer Language Model for Controllable Generation, and ProGen, an AI Protein Generator, both of which we cover in-depth in this conversation. We also explore the balancing act between research investments, product requirements, and other priorities at a large product-focused company like Salesforce.
04/05/2042m 6s

AI Research at JPMorgan Chase with Manuela Veloso - #371

Today we’re joined by Manuela Veloso, Head of AI Research at J.P. Morgan Chase. Since moving from CMU to J.P. Morgan Chase, Manuela and her team have established a set of seven lofty research goals. In this conversation we focus on the first three: building AI systems to eradicate financial crime, safely liberate data, and perfect the client experience. We also explore Manuela’s background, including her time at CMU in the ‘80s, or as she describes it, the “mecca of AI,” and her founding role with RoboCup.
30/04/2046m 32s

Panel: Responsible Data Science in the Fight Against COVID-19 - #370

In this discussion, we explore how data scientists and ML/AI practitioners can responsibly contribute to the fight against coronavirus and COVID-19. Four experts: Rex Douglass, Rob Munro, Lea Shanley, and Gigi Yuen-Reed shared a ton of valuable insight on the best ways to get involved. We've gathered all the resources that our panelists discussed during the conversation, you can find those at twimlai.com/talk/370.
29/04/2058m 4s

Adversarial Examples Are Not Bugs, They Are Features with Aleksander Madry - #369

Today we’re joined by Aleksander Madry, faculty in the MIT EECS Department, to discuss his paper “Adversarial Examples Are Not Bugs, They Are Features.” In our conversation, we talk through what we expect these systems to do versus what they’re actually doing, whether we’re able to characterize these patterns and what makes them compelling, and whether the insights from the paper will help inform opinions on either side of the deep learning debate.
27/04/2041m 1s

AI for Social Good: Why "Good" isn't Enough with Ben Green - #368

Today we’re joined by Ben Green, PhD Candidate at Harvard and Research Fellow at the AI Now Institute at NYU. Ben’s research is focused on the social and policy impacts of data science, with a focus on algorithmic fairness and the criminal justice system. We discuss his paper “‘Good’ Isn’t Good Enough,” which explores the two things he feels are missing from data science and machine learning research: a grounded definition of what “good” actually means, and a “theory of change.”
23/04/2041m 39s

The Evolution of Evolutionary AI with Risto Miikkulainen - #367

Today we’re joined by Risto Miikkulainen, Associate VP of Evolutionary AI at Cognizant. Risto joined us back on episode #47 to discuss evolutionary algorithms, and today we get an update on the latest on the topic. In our conversation, we discuss use cases for evolutionary AI and the latest approaches to deploying evolutionary models. We also explore his paper “Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its Full Potential,” which digs into the historical evolution of AI.
20/04/2037m 57s

Neural Architecture Search and Google’s New AutoML Zero with Quoc Le - #366

Today we’re super excited to share our recent conversation with Quoc Le, a research scientist at Google. Quoc joins us to discuss his work on Google’s AutoML Zero, semi-supervised learning, and the development of Meena, the multi-turn conversational chatbot. This was a really fun conversation, so much so that we decided to release the video! On April 16th at 12 pm PT, Quoc and Sam will premiere the video version of this interview on YouTube, and answer your questions in the chat. We’ll see you there!
16/04/2054m 13s

Automating Electronic Circuit Design with Deep RL w/ Karim Beguir - #365

Today we’re joined by return guest Karim Beguir, Co-Founder and CEO of InstaDeep. In our conversation, we chat with Karim about InstaDeep’s new offering, DeepPCB, an end-to-end platform for automated circuit board design. We discuss challenges and problems with some of the original iterations of auto-routers, how Karim defines circuit board “complexity,” the differences between reinforcement learning being used for games and in this use case, and their spotlight paper from NeurIPS.
13/04/2035m 4s

Neural Ordinary Differential Equations with David Duvenaud - #364

Today we’re joined by David Duvenaud, Assistant Professor at the University of Toronto, to discuss his research on Neural Ordinary Differential Equations, a type of continuous-depth neural network. In our conversation, we talk through a few of David’s papers on the subject. We discuss the problem that David is trying to solve with this research, the potential that ODEs have to replace “the backbone” of the neural networks in use today, and David’s approach to engineering.
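At a high level, a continuous-depth network replaces a stack of discrete layers with an ODE dh/dt = f(h, t) whose solution at the end time is the network’s output. Here is a rough illustrative sketch of that idea (not David’s implementation; a fixed-step Euler integrator stands in for the adaptive solvers the papers actually use, and `f` is fixed rather than learned):

```python
import math

def odeint_euler(f, h0, t0, t1, steps=1000):
    """Integrate dh/dt = f(h, t) from t0 to t1 with fixed-step Euler."""
    h, t = list(h0), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        dh = f(h, t)
        h = [hi + dt * di for hi, di in zip(h, dh)]
        t += dt
    return h

# With f(h, t) = h, the exact solution at t = 1 is e * h(0),
# so the numerical result should land close to e ~ 2.71828.
out = odeint_euler(lambda h, t: h, [1.0], 0.0, 1.0)
```

In a Neural ODE, f would be a small neural network whose parameters are trained; the closed-form solution here just makes the integrator easy to check.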
09/04/2049m 22s

The Measure and Mismeasure of Fairness with Sharad Goel - #363

Today we’re joined by Sharad Goel, Assistant Professor at Stanford. Sharad, who also has appointments in the computer science, sociology, and law departments, has spent recent years focused on applying ML to understanding and improving public policy. In our conversation, we discuss Sharad’s extensive work on discriminatory policing, and The Stanford Open Policing Project. We also dig into Sharad’s paper “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning.”
06/04/2048m 29s

Simulating the Future of Traffic with RL w/ Cathy Wu - #362

Today we’re joined by Cathy Wu, Assistant Professor at MIT. We had the pleasure of catching up with Cathy to discuss her work applying RL to mixed autonomy traffic, specifically, understanding the potential impact autonomous vehicles would have on various mixed-autonomy scenarios. To better understand this, Cathy built multiple RL simulations, including track, intersection, and merge scenarios. We talk through how each scenario is set up, how human drivers are modeled, the results, and much more.
02/04/2035m 12s

Consciousness and COVID-19 with Yoshua Bengio - #361

Today we’re joined by one of the most cited computer scientists in the world, if not the most cited: Yoshua Bengio, Professor at the University of Montreal and the Founder and Scientific Director of Mila. We caught up with Yoshua to explore his work on consciousness, including how Yoshua defines consciousness, his paper “The Consciousness Prior,” as well as his current endeavor building a COVID-19 tracing application, and the use of ML to propose experimental candidate drugs.
30/03/2049m 4s

Geometry-Aware Neural Rendering with Josh Tobin - #360

Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning. We had the pleasure of sitting down with Josh prior to his presentation of his paper Geometry-Aware Neural Rendering at NeurIPS. Josh’s goal is to develop implicit scene understanding, building upon DeepMind’s neural scene representation and rendering work. We discuss the challenges he faced, the various datasets used to train his model, the similarities between VAE training and his process, and more.
26/03/2026m 53s

The Third Wave of Robotic Learning with Ken Goldberg - #359

Today we’re joined by Ken Goldberg, professor of engineering at UC Berkeley, focused on robotic learning. In our conversation with Ken, we chat about some of the challenges that arise when working on robotic grasping, including uncertainty in perception, control, and physics. We also discuss his view on the role of physics in robotic learning, and his thoughts on potential robot use cases, from the use of robots in assisting in telemedicine, agriculture, and even robotic COVID-19 testing.
23/03/201h 1m

Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee - #358

Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, the adaptation of the training process to incorporate additional visual information into BERT models, and where this research leads from the perspective of integration between visual and language tasks.
18/03/2027m 33s

Upside-Down Reinforcement Learning with Jürgen Schmidhuber - #357

Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, and in this conversation, we discuss some of the recent research coming out of his lab, namely Upside-Down Reinforcement Learning.
16/03/2034m 14s

SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen - #356

Beidi Chen is part of the team that developed a cheaper, algorithmic, CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. Beidi shares how the team took a new look at deep learning with the case of extreme classification by turning it into a search problem and using locality-sensitive hashing.
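The core trick behind that search formulation is locality-sensitive hashing: hash vectors so that similar inputs are likely to collide, turning "find the high-activation neurons" into a cheap bucket lookup. A toy sketch of one classic LSH family (random-hyperplane SimHash; purely illustrative, not the SLIDE codebase):

```python
import random

def simhash(vec, planes):
    """One bit per random hyperplane: the sign of the dot product with the input."""
    return [1 if sum(v * w for v, w in zip(vec, p)) >= 0 else 0 for p in planes]

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return sum(x != y for x, y in zip(a, b))

# Nearby vectors get near-identical signatures; opposite vectors do not.
planes = [[random.gauss(0, 1) for _ in range(2)] for _ in range(16)]
close = hamming(simhash([1.0, 0.0], planes), simhash([0.9, 0.1], planes))
far = hamming(simhash([1.0, 0.0], planes), simhash([-1.0, 0.0], planes))
```

Roughly speaking, SLIDE keeps hash tables over neuron weight vectors and activates only the neurons whose buckets an input hashes into, which is what lets the CPU implementation avoid the dense matrix multiplies a GPU would brute-force.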
12/03/2031m 59s

Advancements in Machine Learning with Sergey Levine - #355

Today we're joined by Sergey Levine, an Assistant Professor at UC Berkeley. We last heard from Sergey back in 2017, where we explored Deep Robotic Learning. Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” We caught up with Sergey at NeurIPS 2019, where Sergey and his team presented 12 different papers -- which means a lot of ground to cover!
09/03/2043m 8s

Secrets of a Kaggle Grandmaster with David Odaibo - #354

Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions. Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishment in computer vision competitions, and co-founder and CTO of Analytical AI.
05/03/2041m 9s

NLP for Mapping Physics Research with Matteo Chinazzi - #353

Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focuses on in his paper Mapping the Physics Research Space: a Machine Learning Approach. In addition to predicting the trajectory of physics research, Matteo is also active in the computational epidemiology field. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale.
02/03/2035m 8s

Metric Elicitation and Robust Distributed Learning with Sanmi Koyejo - #352

The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that Sanmi Koyejo, assistant professor at the University of Illinois, has dedicated his research to address. Sanmi applies his background in cognitive science, probabilistic modeling, and Bayesian inference to pursue his research which focuses broadly on “adaptive and robust machine learning.”
27/02/2056m 8s

High-Dimensional Robust Statistics with Ilias Diakonikolas - #351

Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison, and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, recipient of the NeurIPS 2019 Outstanding Paper award. The paper is regarded as the first progress made around distribution-independent learning with noise since the 80s. In our conversation, we explore robustness in ML, problems with corrupt data in high-dimensional settings, and of course, the paper.
24/02/2036m 5s

How AI Predicted the Coronavirus Outbreak with Kamran Khan - #350

Today we’re joined by Kamran Khan, founder & CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot has been the recipient of a lot of attention for being the first to publicly warn about the coronavirus that started in Wuhan. How did the company’s system of algorithms and data processing techniques help flag the potential dangers of the disease? In our conversation, Kamran talks us through how the technology works, its limits, and the motivation behind the work.
19/02/2051m 2s

Turning Ideas into ML Powered Products with Emmanuel Ameisen - #349

Today we’re joined by Emmanuel Ameisen, machine learning engineer at Stripe, and author of the recently published book “Building Machine Learning Powered Applications; Going from Idea to Product.” In our conversation, we discuss structuring end-to-end machine learning projects, debugging and explainability in the context of models, the various types of models covered in the book, and the importance of post-deployment monitoring.
17/02/2042m 21s

Algorithmic Injustices and Relational Ethics with Abeba Birhane - #348

Today we’re joined by Abeba Birhane, PhD student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics, which received the Best Paper award at the 2019 Black in AI Workshop at NeurIPS. In our conversation, we break down the paper and the thought process around AI ethics, the “harm of categorization,” how ML generally doesn’t account for the ethics of various scenarios and how relational ethics could address this, and much more.
13/02/2041m 8s

AI for Agriculture and Global Food Security with Nemo Semret - #347

Today we’re excited to kick off our annual Black in AI Series joined by Nemo Semret, CTO of Gro Intelligence. Gro provides an agricultural data platform dedicated to improving global food security, focused on applying AI at macro scale. In our conversation with Nemo, we discuss Gro’s approach to data acquisition, how they apply machine learning to various problems, and their approach to modeling.
10/02/201h 4m

Practical Differential Privacy at LinkedIn with Ryan Rogers - #346

Today we’re joined by Ryan Rogers, Senior Software Engineer at LinkedIn, to discuss his paper “Practical Differentially Private Top-k Selection with Pay-what-you-get Composition.” In our conversation, we discuss how LinkedIn allows its data scientists to access aggregate user data for exploratory analytics while maintaining its users’ privacy through differential privacy, and the connection between a common algorithm for implementing differential privacy, the exponential mechanism, and Gumbel noise.
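The connection mentioned above is the Gumbel-max trick: perturbing each (scaled) utility score with Gumbel noise and reporting the argmax yields exactly the softmax distribution the exponential mechanism defines, without ever computing the normalizing constant. A minimal illustration (the scores and names here are made up for the demo; this is not LinkedIn’s code):

```python
import math
import random

def exponential_mechanism(scores, epsilon, sensitivity=1.0):
    """Pick index i with probability proportional to exp(eps * score_i / (2 * sensitivity)),
    implemented by adding Gumbel(0, 1) noise to the scaled scores and taking the argmax."""
    noisy = [
        epsilon * s / (2 * sensitivity) - math.log(-math.log(random.random()))
        for s in scores
    ]
    return max(range(len(scores)), key=noisy.__getitem__)

# Higher-utility items are selected more often, but every item keeps some probability,
# which is what provides the differential privacy guarantee.
counts = [0, 0, 0]
for _ in range(20000):
    counts[exponential_mechanism([1.0, 2.0, 3.0], epsilon=2.0)] += 1
```

Because a single noisy argmax suffices, repeating the draw (with the previous winners removed) gives a simple top-k selection, which is the setting Ryan’s paper analyzes.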
07/02/2033m 43s

Networking Optimizations for Multi-Node Deep Learning on Kubernetes with Erez Cohen - #345

Today we conclude the KubeCon ‘19 series joined by Erez Cohen, VP of CloudX & AI at Mellanox, who we caught up with before his talk “Networking Optimizations for Multi-Node Deep Learning on Kubernetes.” In our conversation, we discuss NVIDIA’s recent acquisition of Mellanox, the evolution of technologies like RDMA and GPU Direct, how Mellanox is enabling Kubernetes and other platforms to take advantage of the recent advancements in networking tech, and why we should care about networking in Deep Learning.
05/02/2031m 31s

Managing Research Needs at the University of Michigan using Kubernetes w/ Bob Killen - #344

Today we’re joined by Bob Killen, Research Cloud Administrator at the University of Michigan. In our conversation, we explore how Bob and his group at UM are deploying Kubernetes, the user experience, and how those users are taking advantage of distributed computing. We also discuss if ML/AI focused Kubernetes users should fear that the larger non-ML/AI user base will negatively impact their feature needs, where gaps currently exist in trying to support these ML/AI users’ workloads, and more!
03/02/2025m 28s

Scalable and Maintainable Workflows at Lyft with Flyte w/ Haytham AbuelFutuh and Ketan Umare - #343

Today we kick off our KubeCon ‘19 series joined by Haytham AbuelFutuh and Ketan Umare, a pair of software engineers at Lyft. We caught up with Haytham and Ketan at KubeCon, where they were presenting their newly open-sourced, cloud-native ML and data processing platform, Flyte. We discuss what prompted Ketan to undertake this project and his experience building Flyte, the core value proposition, what type systems mean for the user experience, how it relates to Kubeflow, and how Flyte is used across Lyft.
30/01/2045m 22s

Causality 101 with Robert Osazuwa Ness - #342

Today Robert Osazuwa Ness, ML Research Engineer at Gamalon and Instructor at Northeastern University, joins us to discuss causality: what it means, how that meaning changes across domains and users, and our upcoming study group based around his new course sequence, “Causal Modeling in Machine Learning,” for which you can find details at twimlai.com/community.
27/01/2039m 32s

PaccMann^RL: Designing Anticancer Drugs with Reinforcement Learning w/ Jannis Born - #341

Today we’re joined by Jannis Born, Ph.D. student at ETH & IBM Research Zurich, to discuss his “PaccMann^RL” research. Jannis details how his background in computational neuroscience applies to this research, how RL fits into the goal of anticancer drug discovery, the effect DL has had on his research, and of course, a step-by-step walkthrough of how the framework works to predict the sensitivity of cancer drugs on a cell and then discover new anticancer drugs.
23/01/2042m 4s

Social Intelligence with Blaise Aguera y Arcas - #340

Today we’re joined by Blaise Aguera y Arcas, a distinguished scientist at Google. We had the pleasure of catching up with Blaise at NeurIPS last month, where he was invited to speak on “Social Intelligence.” In our conversation, we discuss his role at Google, his team’s approach to machine learning, and of course his presentation, in which he touches on today’s ML landscape, the gap between AI and ML/DS, the difference between intelligent systems and true intelligence, and much more.
20/01/2047m 57s

Music & AI Plus a Geometric Perspective on Reinforcement Learning with Pablo Samuel Castro - #339

Today we’re joined by Pablo Samuel Castro, Staff Research Software Developer at Google. We cover a lot of ground in our conversation, including his love for music, and how that has guided his work on the Lyric AI project, and a few of his papers including “A Geometric Perspective on Optimal Representations for Reinforcement Learning” and “Estimating Policy Functions in Payments Systems using Deep Reinforcement Learning.”
16/01/2044m 45s

Trends in Computer Vision with Amir Zamir - #338

Today we close out AI Rewind 2019 joined by Amir Zamir, who recently began his tenure as an Assistant Professor of Computer Science at the Swiss Federal Institute of Technology. Amir joined us back in 2018 to discuss his CVPR Best Paper winner, and in today’s conversation, we continue with the thread of Computer Vision. In our conversation, we discuss quite a few topics, including Vision-for-Robotics, the expansion of the field of 3D Vision, Self-Supervised Learning for CV Tasks, and much more!
13/01/201h 37m

Trends in Natural Language Processing with Nasrin Mostafazadeh - #337

Today we continue the AI Rewind 2019 joined by friend-of-the-show Nasrin Mostafazadeh, Senior AI Research Scientist at Elemental Cognition. We caught up with Nasrin to discuss the latest and greatest developments and trends in Natural Language Processing, including Interpretability, Ethics, and Bias in NLP, how large pre-trained models have transformed NLP research, and top tools and frameworks in the space.
09/01/201h 12m

Trends in Fairness and AI Ethics with Timnit Gebru - #336

Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss diversification of NeurIPS, with groups like Black in AI, WiML and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more.
06/01/2049m 44s

Trends in Reinforcement Learning with Chelsea Finn - #335

Today we continue to review the year that was 2019 via our AI Rewind series, and do so with friend of the show Chelsea Finn, Assistant Professor in the CS Department at Stanford University. Chelsea’s research focuses on Reinforcement Learning, so we couldn’t think of a better person to join us to discuss the topic. In this conversation, we cover topics like Model-based RL, solving hard exploration problems, along with RL libraries and environments that Chelsea thought moved the needle last year.
02/01/201h 8m

Trends in Machine Learning & Deep Learning with Zack Lipton - #334

Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, Professor at CMU. You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism. In today's conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more. We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai.
30/12/191h 19m

FaciesNet & Machine Learning Applications in Energy with Mohamed Sidahmed - #333

Today we close out our 2019 NeurIPS series with Mohamed Sidahmed, Machine Learning and Artificial Intelligence R&D Manager at Shell. In our conversation, we discuss two papers Mohamed and his team submitted to the conference this year, Accelerating Least Squares Imaging Using Deep Learning Techniques, and FaciesNet: Machine Learning Applications for Facies Classification in Well Logs. The show notes for this episode can be found at twimlai.com/talk/333/, where you’ll find links to both of these papers!
27/12/1939m 55s

Machine Learning: A New Approach to Drug Discovery with Daphne Koller - #332

Today we’re joined by Daphne Koller, co-Founder and former co-CEO of Coursera and Founder and CEO of Insitro. In our conversation, we discuss the current landscape of pharmaceutical drugs and drug discovery, including the current pricing of drugs, and an overview of Insitro’s goal of using ML as a “compass” in drug discovery. We also explore how Insitro functions as a company, their focus on the biology of drug discovery and the landscape of ML techniques being used, Daphne’s thoughts on AutoML, and more.
26/12/1943m 9s

Sensory Prediction Error Signals in the Neocortex with Blake Richards - #331

Today we continue our 2019 NeurIPS coverage, this time around joined by Blake Richards, Assistant Professor at McGill University and a Core Faculty Member at Mila. Blake was an invited speaker at the Neuro-AI Workshop, and presented his research on “Sensory Prediction Error Signals in the Neocortex.” In our conversation, we discuss a series of recent studies on two-photon calcium imaging. We talk predictive coding, hierarchical inference, and Blake’s recent work on memory systems for reinforcement learning.
24/12/1940m 29s

How to Know with Celeste Kidd - #330

Today we’re joined by Celeste Kidd, Assistant Professor at UC Berkeley, to discuss her invited talk “How to Know,” which details her lab’s research about the core cognitive systems people use to guide their learning about the world. We explore why people are curious about some things but not others, how past experiences and existing knowledge shape future interests, why people believe what they believe and how these beliefs are influenced, and how machine learning figures into the equation.
23/12/1953m 29s

Using Deep Learning to Predict Wildfires with Feng Yan - #329

Today we’re joined by Feng Yan, Assistant Professor at the University of Nevada, Reno, to discuss ALERTWildfire, a camera-based network infrastructure that captures imagery of wildfires. In our conversation, Feng details the development of the machine learning models and surrounding infrastructure. We also talk through problem formulation, challenges with using camera and satellite data in this use case, and how he has combined the use of IaaS and FaaS tools for cost-effectiveness and scalability.
20/12/1951m 12s

Advancing Machine Learning at Capital One with Dave Castillo - #328

Today we’re joined by Dave Castillo, Managing VP for ML at Capital One and head of their Center for Machine Learning. In our conversation, we explore Capital One’s transition from “lab-based” ML to enterprise-wide adoption and support of ML, surprising ML use cases, their current platform ecosystem, their design vision in building this into a larger, all-encompassing platform, pain points in building this platform, and much more.
19/12/1947m 3s

Helping Fish Farmers Feed the World with Deep Learning w/ Bryton Shang - #327

Today we’re joined by Bryton Shang, Founder & CEO at Aquabyte, a company focused on the application of computer vision to various fish farming use cases. In our conversation, we discuss how Bryton identified the various problems associated with mass fish farming, the challenges of developing computer vision algorithms that can measure the height and weight of fish and assess issues like sea lice, and how they’re developing interesting new features such as facial recognition for fish!
17/12/1937m 55s

Metaflow, a Human-Centric Framework for Data Science with Ville Tuulos - #326

Today we kick off our re:Invent 2019 series with Ville Tuulos, Machine Learning Infrastructure Manager at Netflix. At re:Invent, Netflix announced the open-sourcing of Metaflow, their “human-centric framework for data science.” In our conversation, we discuss all things Metaflow, including features, user experience, tooling, supported libraries, and much more. If you’re interested in checking out a Metaflow democast with Ville, reach out at twimlai.com/contact!
13/12/1956m 8s

Single Headed Attention RNN: Stop Thinking With Your Head with Stephen Merity - #325

Today we’re joined by Stephen Merity, an independent researcher focused on NLP and Deep Learning. In our conversation, we discuss Stephen’s latest paper, Single Headed Attention RNN: Stop Thinking With Your Head, detailing his primary motivations behind the paper, the decision to use SHA-RNNs for this research, how he built and trained the model, his approach to benchmarking, and finally his goals for the research in the broader research community.
12/12/1959m 3s

Automated Model Tuning with SigOpt - #324

In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo! This episode is best consumed by watching the corresponding video demo, which you can find at twimlai.com/talk/324.
09/12/1946m 13s

Automated Machine Learning with Erez Barak - #323

Today we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, Erez gives us a full breakdown of his AutoML philosophy, and his take on the AutoML space, its role, and its importance. We also discuss the application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into three key areas: Featurization, Learner/Model Selection, and Tuning/Optimizing Hyperparameters. We also discuss post-deployment AutoML use cases, and much more!
06/12/1942m 45s

Responsible AI in Practice with Sarah Bird - #322

Today we continue our Azure ML at Microsoft Ignite series joined by Sarah Bird, Principal Program Manager at Microsoft. At Ignite, Microsoft released new tools focused on responsible machine learning, which fall under the umbrella of the Azure ML 'Machine Learning Interpretability Toolkit.’ In our conversation, Sarah walks us through this toolkit, detailing use cases and the user experience. We also discuss her work in differential privacy, and in the broader ML community, in particular, the MLSys conference.
04/12/1938m 0s

Enterprise Readiness, MLOps and Lifecycle Management with Jordan Edwards - #321

Today we’re joined by Jordan Edwards, Principal Program Manager for MLOps on Azure ML at Microsoft. In our conversation, Jordan details how Azure ML accelerates model lifecycle management with MLOps, which enables data scientists to collaborate with IT teams to increase the pace of model development and deployment. We discuss various problems associated with generalizing ML at scale at Microsoft, what exactly MLOps is, the “four phases” along the journey of customer implementation of MLOps, and much more.
02/12/1939m 2s

DevOps for ML with Dotscience - #320

Today we’re joined by Luke Marsden, Founder and CEO of Dotscience. Luke walks us through the Dotscience platform and their manifesto on DevOps for ML. Thanks to Luke and Dotscience for their sponsorship of this Democast and their continued support of TWIML. Head to https://twimlai.com/democast/dotscience to watch the full democast!
26/11/1946m 51s

Building an Autonomous Knowledge Graph with Mike Tung - #319

Today we’re joined by Mike Tung, Founder, and CEO of Diffbot. In our conversation, we discuss Diffbot’s Knowledge Graph, including how it differs from more mainstream use cases like Google Search and MSFT Bing. We also discuss the developer experience with the knowledge graph and other tools, like Extraction API and Crawlbot, challenges like knowledge fusion, balancing being a research company that is also commercially viable, and how they approach their role in the research community.
21/11/1944m 6s

The Next Generation of Self-Driving Engineers with Aaron Ma - Talk #318

Today we’re joined by our youngest guest ever (by far), Aaron Ma, an 11-year-old middle school student and machine learning engineer in training. Aaron has completed over 80(!) Coursera courses and is the recipient of 3 Udacity nano-degrees. In our conversation, we discuss Aaron’s research interests in reinforcement learning and self-driving cars, his journey from programmer to ML engineer, his experiences participating in Kaggle competitions, and how he balances his passion for ML with day-to-day life.
18/11/1947m 45s

Spiking Neural Networks: A Primer with Terrence Sejnowski - #317

On today’s episode, we’re joined by Terrence Sejnowski, to discuss the ins and outs of spiking neural networks, including brain architecture, the relationship between neuroscience and machine learning, and ways to make NNs more efficient through spiking. Terry also gives us some insight into the hardware used in this field, characterizes the major research problems currently being undertaken, and discusses the future of spiking networks.
14/11/1949m 36s

Bridging the Patient-Physician Gap with ML and Expert Systems w/ Xavier Amatriain - #316

Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai, whose goal is to make healthcare accessible and scalable while bringing down costs. In our conversation, we touch on the shortcomings of traditional primary care and how Curai fills that gap, and some of the unique challenges his team faces in applying ML in the healthcare space. We also discuss the use of expert systems, how they train them, and how NLP projects like BERT and GPT-2 fit into what they’re building.
11/11/1939m 2s

What Does it Mean for a Machine to "Understand"? with Thomas Dietterich - #315

Today we have the pleasure of being joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. Tom recently wrote a blog post titled “What does it mean for a machine to ‘understand’?”, and in our conversation, he goes into great detail on his thoughts. We cover a lot of ground, including Tom’s position in the debate, his thoughts on the role of systems like deep learning in potentially getting us to AGI, the “hype engine” around AI advancements, and so much more.
07/11/1938m 20s

Scaling TensorFlow at LinkedIn with Jonathan Hung - #314

Today we’re joined by Jonathan Hung, Sr. Software Engineer at LinkedIn. Jonathan gave a presentation at TensorFlow World last week titled “Scaling TensorFlow at LinkedIn.” In our conversation, we discuss their motivation for using TensorFlow on their pre-existing Hadoop cluster infrastructure; TonY, or TensorFlow on YARN, LinkedIn’s framework that natively runs deep learning jobs on Hadoop; its relationship with Pro-ML, LinkedIn’s internal AI platform; and their foray into using Kubernetes for research.
04/11/1935m 20s

Machine Learning at GitHub with Omoju Miller - #313

Today we’re joined by Omoju Miller, a Sr. machine learning engineer at GitHub. In our conversation, we discuss: • Her dissertation, Hiphopathy, A Socio-Curricular Study of Introductory Computer Science • Her work as an inaugural member of the GitHub machine learning team • Her two presentations at TensorFlow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with Tensorflow.”
31/10/1943m 44s

Using AI to Diagnose and Treat Neurological Disorders with Archana Venkataraman - #312

Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University. Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. We explore her work applying machine learning to these problems, including biomarker discovery, disorder severity prediction, and more.
28/10/1946m 58s

Deep Learning for Earthquake Aftershock Patterns with Phoebe DeVries & Brendan Meade - #311

Today we are joined by Phoebe DeVries, Postdoctoral Fellow in the Department of Earth and Planetary Sciences at Harvard and Brendan Meade, Professor of Earth and Planetary Sciences at Harvard. Phoebe and Brendan’s work is focused on discovering as much as possible about earthquakes before they happen, and by measuring how the earth’s surface moves, predicting future movement location, as seen in their paper: ‘Deep learning of aftershock patterns following large earthquakes'.
25/10/1936m 0s

Live from TWIMLcon! Operationalizing Responsible AI - #310

An often-overlooked topic garnered high praise at TWIMLcon this month: operationalizing responsible and ethical AI. The topic drew an impressive panel of speakers, including Rachel Thomas, Director, Center for Applied Data Ethics at the USF Data Institute; Guillaume Saint-Jacques, Head of Computational Science at LinkedIn; and Parinaz Sobahni, Director of Machine Learning at Georgian Partners; moderated by Khari Johnson, Senior AI Staff Writer at VentureBeat.
22/10/1930m 40s

Live from TWIMLcon! Scaling ML in the Traditional Enterprise - #309

Machine learning and AI are finding a place in the traditional enterprise - although the path to get there is different. In this episode, our panel analyzes the state and future of larger, more established brands. Hear from Amr Awadallah, Founder and Global CTO of Cloudera, Pallav Agrawal, Director of Data Science at Levi Strauss & Co., and Jürgen Weichenberger, Data Science Senior Principal & Global AI Lead at Accenture, moderated by Josh Bloom, Professor at UC Berkeley.
18/10/1933m 39s

Live from TWIMLcon! Culture & Organization for Effective ML at Scale (Panel) - #308

TWIMLcon brought together so many in the ML/AI community to discuss the unique challenges to building and scaling machine learning platforms. In this episode, hear about changing the way companies think about machine learning from a diverse set of panelists including Pardis Noorzad, Data Science Manager at Twitter, Eric Colson, Chief Algorithms Officer Emeritus at Stitch Fix, and Jennifer Prendki, Founder & CEO at Alectio, moderated by Maribel Lopez, Founder & Principal Analyst at Lopez Research.
15/10/1927m 39s

Live from TWIMLcon! Use-Case Driven ML Platforms with Franziska Bell - #307

Today we're joined by Franziska Bell, Ph.D., Director of Data Science Platforms at Uber, who joined Sam on stage at TWIMLcon last week. Fran provided a look into the cutting-edge data science available company-wide at the push of a button. Since joining Uber, Fran has developed a portfolio of platforms, ranging from forecasting to conversational AI. Hear how use cases can strategically guide platform development, the evolving relationship between her team and Michelangelo (Uber’s ML platform), and much more!
10/10/1932m 16s

Live from TWIMLcon! Operationalizing ML at Scale with Hussein Mehanna - #306

The live interviews from TWIMLcon continue with Hussein Mehanna, Head of ML and AI at Cruise. From his start at Facebook to his current work at Cruise, Hussein has seen first hand what it takes to scale and sustain machine learning programs. Hear him discuss the challenges (and joys) of working in the industry, his insight into analyzing scale when innovation is happening in parallel with development, his experiences at Facebook, Google, and Cruise, and his predictions for the future of ML platforms!
08/10/1933m 42s

Live from TWIMLcon! Encoding Company Culture in Applied AI Systems - #305

In this episode, Sam is joined by Deepak Agarwal, VP of Engineering at LinkedIn, who graced the stage at TWIMLcon: AI Platforms for a keynote interview. Deepak shares the impact that standardizing processes and tools has on a company’s culture and productivity, and best practices for increasing ML ROI. He also details the Pro-ML initiative for delivering machine learning systems at scale, specifically looking at aligning improvements in tooling and infrastructure with the pace of innovation, and more.
04/10/1932m 24s

Live from TWIMLcon! Overcoming the Barriers to Deep Learning in Production with Andrew Ng - #304

Earlier today, Andrew Ng joined us onstage at TWIMLcon - as the Founder and CEO of Landing AI and founding lead of Google Brain, Andrew is no stranger to knowing what it takes for AI and machine learning to be successful. Hear about the work that Landing AI is doing to help organizations adopt modern AI, his experience in overcoming challenges for large companies, how enterprises can get the most value for their ML investment as well as addressing the ‘essential complexity’ of software engineering.
01/10/1934m 1s

The Future of Mixed-Autonomy Traffic with Alexandre Bayen - #303

Today we are joined by Alexandre Bayen, Director of the Institute for Transportation Studies and Professor at UC Berkeley. Alex's current research is in mixed-autonomy traffic to understand how the growing automation in self-driving vehicles can be used to improve mobility and flow of traffic. At the AWS re:Invent conference last year, Alex presented on the future of mixed-autonomy traffic and the two major revolutions he predicts will take place in the next 10-15 years.
27/09/1944m 2s

Deep Reinforcement Learning for Logistics at Instadeep with Karim Beguir - #302

Today we are joined by Karim Beguir, Co-Founder and CEO of InstaDeep, a company focusing on building advanced decision-making systems for the enterprise. In this episode, we focus on logistical problems that require decision-making in complex environments using deep learning and reinforcement learning. Karim explains the InstaDeep process and mindset, where they get their data sets, the efficiency of RL, heuristic vs learnability approaches and how explainability fits into the model.
25/09/1943m 53s

Deep Learning with Structured Data w/ Mark Ryan - #301

Today we're joined by Mark Ryan, author of the upcoming book Deep Learning with Structured Data. Working on the support team at IBM Data and AI, he saw a lack of general structured data sets people could apply their models to. Using the streetcar network in Toronto, Mark gathered an open data set that started the research for his latest book. In this episode, Mark shares the benefits of applying deep learning to structured data, his experience with a range of data sets, and details of his new book.
19/09/1939m 54s

Time Series Clustering for Monitoring Fueling Infrastructure Performance with Kalai Ramea - #300

Today we're joined by Kalai Ramea, Data Scientist at PARC, a Xerox Company. In this episode, we discuss her journey buying a hydrogen car and the paper that followed, assessing fueling stations. In her next paper, Kalai looked at fuel consumption at hydrogen stations and used temporal clustering to identify signatures of usage over time. With the number of fueling stations planned to increase dramatically in the future, establishing confidence in their performance is crucial.
18/09/1930m 6s

Swarm AI for Event Outcome Prediction with Gregg Willcox - TWIML Talk #299

Today we're joined by Gregg Willcox, Director of Research and Development at Unanimous AI. Inspired by the natural phenomenon called 'swarming', which uses the collective intelligence of a group to produce more accurate results than an individual alone, 'Swarm AI' was born: a game-like platform that channels the convictions of individuals to reach a consensus, using a behavioral neural network trained on people's behavior, called 'Conviction', to further amplify the results.
13/09/1941m 23s

Rebooting AI: What's Missing, What's Next with Gary Marcus - TWIML Talk #298

Today we're joined by Gary Marcus, CEO and Founder at Robust.AI, well-known scientist, bestselling author, professor and entrepreneur. Hear Gary discuss his latest book, ‘Rebooting AI: Building Artificial Intelligence We Can Trust’, an extensive look into the current gaps, pitfalls and areas for improvement in the field of machine learning and AI. In this episode, Gary provides insight into what we should be talking and thinking about to make even greater (and safer) strides in AI.
10/09/1947m 30s

DeepQB: Deep Learning to Quantify Quarterback Decision-Making with Brian Burke - TWIML Talk #297

Today we're joined by Brian Burke, Analytics Specialist with the Stats & Information Group at ESPN. A former Navy pilot and lifelong football fan, Brian saw the correlation between fighter pilots and quarterbacks in the quick decisions both roles make on a regular basis. In this episode, we discuss his paper: “DeepQB: Deep Learning with Player Tracking to Quantify Quarterback Decision-Making & Performance”, what it means for football, and his excitement for machine learning in sports.
05/09/1950m 49s

Measuring Performance Under Pressure Using ML with Lotte Bransen - TWIML Talk #296

Today we're joined by Lotte Bransen, a Scientific Researcher at SciSports. With a background in mathematics, econometrics, and soccer, Lotte has honed her research on analytics of the game and its players, using trained models to understand the impact of mental pressure on a player’s performance. In this episode, Lotte discusses her paper, ‘Choke or Shine? Quantifying Soccer Players' Abilities to Perform Under Mental Pressure’ and the implications of her research in the world of sports.
03/09/1934m 40s

Managing Deep Learning Experiments with Lukas Biewald - TWIML Talk #295

Today we're joined by Lukas Biewald, CEO and Co-Founder of Weights & Biases. Lukas founded the company after seeing a need for reproducibility in deep learning experiments. In this episode, we discuss his experiment tracking tool, how it works, the components that make it unique, and the collaborative culture that Lukas promotes. Listen in to how he got his start in deep learning and experiment tracking, the current Weights & Biases success strategy, and what his team is working on today.
29/08/1942m 17s

Re-Architecting Data Science at iRobot with Angela Bassa - TWIML Talk #294

Today we’re joined by Angela Bassa, Director of Data Science at iRobot. In our conversation, Angela and I discuss: • iRobot's re-architecture, and a look at the evolution of iRobot • Where iRobot gets its data from and how they taxonomize data science • The platforms and processes that have been put into place to support delivering models in production • The role of DevOps in bringing these various platforms together, and much more!
26/08/1948m 54s

Disentangled Representations & Google Research Football with Olivier Bachem - TWIML Talk #293

Today we’re joined by Olivier Bachem, a research scientist at Google AI on the Brain team. Olivier joins us to discuss his work on Google’s research football project, their foray into building a novel reinforcement learning environment. Olivier and Sam discuss what makes this environment different than other available RL environments, such as OpenAI Gym and PyGame, what other techniques they explored while using this environment, and what’s on the horizon for their team and Football RLE.
22/08/1942m 50s

Neural Network Quantization and Compression with Tijmen Blankevoort - TWIML Talk #292

Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, who leads their compression and quantization research teams. In our conversation with Tijmen we discuss: • The ins and outs of compression and quantization of ML models, specifically NNs • How much models can actually be compressed, and the best ways to achieve compression • A few recent papers, including “The Lottery Ticket Hypothesis.”
19/08/1950m 17s

Identifying New Materials with NLP with Anubhav Jain - TWIML Talk #291

Today we are joined by Anubhav Jain, Staff Scientist & Chemist at Lawrence Berkeley National Lab. We discuss his latest paper, ‘Unsupervised word embeddings capture latent knowledge from materials science literature’. Anubhav explains the design of a system that takes the literature and uses natural language processing to conceptualize complex material science concepts. He also discusses scientific literature mining and how the method can recommend materials for functional applications in the future.
15/08/1939m 57s

The Problem with Black Boxes with Cynthia Rudin - TWIML Talk #290

Today we are joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. In this episode we discuss her paper, ‘Please Stop Explaining Black Box Models for High Stakes Decisions’, and how interpretable models make for more comprehensible decisions - extremely important when dealing with human lives. Cynthia explains black box and interpretable models, their development, use cases, and her future plans in the field.
14/08/1948m 29s

Human-Robot Interaction and Empathy with Kate Darling - TWIML Talk #289

Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics, the social implications of how people treat robots, and the purposeful design of robots in our daily lives. We discuss measuring empathy, the impact of robot treatment on kids' behavior, the correlation between animals and robots, and why 'effective' robots aren’t always humanoid. Kate combines a wealth of knowledge with an analytical mind that questions the why and how of human-robot interaction.
08/08/1943m 57s

Automated ML for RNA Design with Danny Stoll - TWIML Talk #288

Today we’re joined by Danny Stoll, Research Assistant at the University of Freiburg. Danny’s current research can be encapsulated in his latest paper, ‘Learning to Design RNA’. In this episode, Danny explains the design process through reverse engineering and how his team’s deep learning algorithm is applied to train and design sequences. We discuss transfer learning, multitask learning, ablation studies, hyperparameter optimization, and the difference between chemical and statistics-based approaches.
05/08/1937m 17s

Developing a brain atlas using deep learning with Theofanis Karayannis - TWIML Talk #287

Today we’re joined by Theofanis Karayannis, Assistant Professor at the Brain Research Institute of the University of Zurich. Theo’s research is focused on brain circuit development and uses Deep Learning methods to segment the brain regions, then detect the connections around each region. He then looks at the distribution of connections that make neurological decisions in both animals and humans every day. From the way images of the brain are collected to genetic trackability, this episode has it all.
01/08/1937m 23s

Environmental Impact of Large-Scale NLP Model Training with Emma Strubell - TWIML Talk #286

Today we’re joined by Emma Strubell, currently a visiting scientist at Facebook AI Research. Emma’s focus is bringing state-of-the-art NLP systems to practitioners by developing efficient and robust machine learning models. Her paper, Energy and Policy Considerations for Deep Learning in NLP, examines the carbon emissions of training neural networks relative to their gains in accuracy. In this episode, we discuss Emma’s research methods, how companies are reacting to environmental concerns, and how we can do better.
29/07/1937m 22s

“Fairwashing” and the Folly of ML Solutionism with Zachary Lipton - TWIML Talk #285

Today we’re joined by Zachary Lipton, Assistant Professor in the Tepper School of Business. With a theme of data interpretation, Zachary’s research is focused on machine learning in healthcare, with the goal of assisting physicians through the diagnosis and treatment process. We discuss supervised learning in the medical field, robustness under distribution shifts, ethics in machine learning systems across industries, the concept of ‘fairwashing,’ and more.
25/07/191h 15m

Retinal Image Generation for Disease Discovery with Stephen Odaibo - TWIML Talk #284

Today we’re joined by Dr. Stephen Odaibo, Founder and CEO of RETINA-AI Health Inc. Stephen’s journey to machine learning and AI includes degrees in math, medicine and computer science, which led him to an ophthalmology practice before becoming an entrepreneur. In this episode we discuss his expertise in ophthalmology and engineering along with the current state of both industries that lead him to build autonomous systems that diagnose and treat retinal diseases.
22/07/1941m 11s

Real world model explainability with Rayid Ghani - TWiML Talk #283

Today we’re joined by Rayid Ghani, Director of the Center for Data Science and Public Policy at the University of Chicago. Drawing on his range of experience, Rayid saw that while automated predictions can be helpful, they don’t always paint a full picture. The key is the relevant context when making tough decisions involving humans and their lives. We delve into the world of explainability methods, necessary human involvement, machine feedback loops, and more.
18/07/1950m 34s

Inspiring New Machine Learning Platforms w/ Bioelectric Computation with Michael Levin - TWiML Talk #282

Today we’re joined by Michael Levin, Director of the Allen Discovery Center at Tufts University. In our conversation, we talk about synthetic living machines, novel AI architectures, and brain-body plasticity. Michael explains how our DNA doesn’t control everything and how the behavior of cells in living organisms can be modified and adapted. Drawing on research into the dynamic remodeling of biological systems, Michael discusses the future of developmental biology and regenerative medicine.
15/07/1925m 30s

Simulation and Synthetic Data for Computer Vision with Batu Arisoy - TWiML Talk #281

Today we’re joined by Batu Arisoy, Research Manager with the Vision Technologies & Solutions team at Siemens Corporate Technology. Batu’s research focus is solving limited-data computer vision problems, providing R&D for business units throughout the company. In our conversation, Batu details his group's ongoing projects, like an activity recognition project with the ONR, and their many CVPR submissions, which include an emulation of a teacher teaching students information without the use of memorization.
09/07/1941m 28s

Spiking Neural Nets and ML as a Systems Challenge with Jeff Gehlhaar - TWIML Talk #280

Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm. Qualcomm has a hand in tons of machine learning research and hardware, and in our conversation with Jeff we discuss: • How the various training frameworks fit into the developer experience when working with their chipsets. • Examples of federated learning in the wild. • The role inference will play in data center devices and much more.
08/07/1952m 34s

Transforming Oil & Gas with AI with Adi Bhashyam and Daniel Jeavons - TWIML Talk #279

Today we’re joined by return guest Daniel Jeavons, GM of Data Science at Shell, and Adi Bhashyam, GM of Data Science at C3, who we had the pleasure of speaking to at this year's C3 Transform Conference. In our conversation, we discuss: • The progress that Dan and his team have made since our last conversation, including an overview of their data platform • Adi's overview of the evolution of C3 and their platform, along with a breakdown of a few Shell-specific use cases.
01/07/1946m 8s

Fast Radio Burst Pulse Detection with Gerry Zhang - TWIML Talk #278

Today we’re joined by Yunfan Gerry Zhang, a PhD student at UC Berkeley and an affiliate of Berkeley’s SETI research center. In our conversation, we discuss: • Gerry's research on applying machine learning techniques to astrophysics and astronomy • His paper “Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach” • The types of data sources used for this project, challenges Gerry encountered along the way, the role of GANs, and much more.
27/06/1938m 34s

Tracking CO2 Emissions with Machine Learning with Laurence Watson - TWIML Talk #277

Today we’re joined by Laurence Watson, Co-Founder and CTO of Plentiful Energy and a former data scientist at Carbon Tracker. In our conversation, we discuss: • Carbon Tracker's goals, and their report “Nowhere to hide: Using satellite imagery to estimate the utilisation of fossil fuel power plants” • How they are using computer vision to process satellite images of coal plants, including how the images are labeled • Various challenges with the scope and scale of this project.
24/06/1941m 37s

Topic Modeling for Customer Insights at USAA with William Fehlman - TWIML Talk #276

Today we’re joined by William Fehlman, director of data science at USAA, to discuss: • His work on topic modeling, which USAA uses in various scenarios, including member chat channels. • How their datasets are generated. • Explored methodologies of topic modeling, including latent semantic indexing, latent Dirichlet allocation, and non-negative matrix factorization. • We also explore how terms are represented via a document-term matrix, and how they are scored based on coherence.
20/06/1944m 56s

Phronesis of AI in Radiology with Judy Gichoya - TWIML Talk #275

Today we’re joined by Judy Gichoya, an interventional radiology fellow at the Dotter Institute at Oregon Health and Science University. In our conversation, we discuss: • Judy's paper “Phronesis of AI in Radiology: Superhuman meets Natural Stupidity,” reviewing the claims of “superhuman” AI performance in radiology • Potential roles in which AI can have success in radiology, along with some of the different types of biases that can manifest themselves across multiple use cases.
18/06/1943m 33s

The Ethics of AI-Enabled Surveillance with Karen Levy - TWIML Talk #274

Today we’re joined by Karen Levy, assistant professor in the department of information science at Cornell University. Karen’s research focuses on how rules and technologies interact to regulate behavior, especially the legal, organizational, and social aspects of surveillance and monitoring. In our conversation, we discuss how data tracking and surveillance can be used in ways that can be abusive to various marginalized groups, including detailing her extensive research into truck driver surveillance.
14/06/1943m 3s

Supporting Rapid Model Development at Two Sigma with Matt Adereth & Scott Clark - TWIML Talk #273

Today we’re joined by Matt Adereth, managing director of investments at Two Sigma, and return guest Scott Clark, co-founder and CEO of SigOpt, to discuss: • The end to end modeling platform at Two Sigma, who it serves, and challenges faced in production and modeling. • How Two Sigma has attacked the experimentation challenge with their platform. • What motivates companies that aren’t already heavily invested in platforms, optimization or automation, to do so, and much more!
11/06/1946m 19s

Scaling Model Training with Kubernetes at Stripe with Kelley Rivoire - TWIML Talk #272

Today we’re joined by Kelley Rivoire, engineering manager working on machine learning infrastructure at Stripe. Kelley and I caught up at a recent Strata Data conference to discuss: • Her talk "Scaling model training: From flexible training APIs to resource management with Kubernetes." • Stripe’s machine learning infrastructure journey, including their start from a production focus. • Internal tools used at Stripe, including Railyard, an API built to manage model training at scale & more!
06/06/1942m 14s

Productizing ML at Scale at Twitter with Yi Zhuang - TWIML Talk #271

Today we continue our AI Platforms series joined by Yi Zhuang, Senior Staff Engineer at Twitter. In our conversation, we cover: • The machine learning landscape at Twitter, including the history of the Cortex team • Deepbird v2, which is used for model training and evaluation solutions, and its integration with TensorFlow 2.0 • The newly assembled “Meta” team, which is tasked with exploring the bias, fairness, and accountability of their machine learning models, and much more!
03/06/1946m 28s

Snorkel: A System for Fast Training Data Creation with Alex Ratner - TWiML Talk #270

Today we’re joined by Alex Ratner, Ph.D. student at Stanford, to discuss: • Snorkel, the open source framework that is the successor to Stanford's Deep Dive project. • How Snorkel is used as a framework for creating training data with weak supervised learning techniques. • Multiple use cases for Snorkel, including how it is used by companies like Google.  The complete show notes can be found at twimlai.com/talk/270. Follow along with AI Platforms Vol. 2 at twimlai.com/aiplatforms2.
30/05/1943m 38s

Advancing Autonomous Vehicle Development Using Distributed Deep Learning with Adrien Gaidon - TWiML Talk #269

In this, the kickoff episode of AI Platforms Vol. 2, we're joined by Adrien Gaidon, Machine Learning Lead at Toyota Research Institute. Adrien and I caught up to discuss his team’s work on deploying distributed deep learning in the cloud, at scale. In our conversation, we discuss: • The beginning and gradual scaling up of TRI's platform • Their distributed deep learning methods, including their use of stock PyTorch, and much more!
28/05/1948m 1s

Are We Being Honest About How Difficult AI Really Is? w/ David Ferrucci - TWiML Talk #268

Today we’re joined by David Ferrucci, Founder, CEO, and Chief Scientist at Elemental Cognition, a company focused on building natural learning systems that understand the world the way people do, to discuss: • The role of “understanding” in the context of AI systems, and the types of commitments and investments needed to achieve even modest levels of understanding. • His thoughts on the power of deep learning, what the path to AGI looks like, and the need for hybrid systems to get there.
23/05/1950m 7s

Gauge Equivariant CNNs, Generative Models, and the Future of AI with Max Welling - TWiML Talk #267

Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam and VP of Technologies at Qualcomm, to discuss: • Max’s research at Qualcomm AI Research and the University of Amsterdam, including his work on Bayesian deep learning, graph CNNs, and gauge equivariant CNNs, and power efficiency for AI via compression, quantization, and compilation • Max’s thoughts on the future of the AI industry, in particular, the relative importance of models, data, and compute.
20/05/191h 3m

Can We Trust Scientific Discoveries Made Using Machine Learning? with Genevera Allen - TWiML Talk #266

Today we’re joined by Genevera Allen, associate professor of statistics in the EECS Department at Rice University. Genevera caused quite the stir at the American Association for the Advancement of Science meeting earlier this year with her presentation “Can We Trust Data-Driven Discoveries?" In our conversation, we discuss the goal of Genevera's talk, the issues surrounding reproducibility in Machine Learning, and much more!
16/05/1942m 42s

Creative Adversarial Networks for Art Generation with Ahmed Elgammal - TWiML Talk #265

Today we’re joined by Ahmed Elgammal, a professor in the department of computer science at Rutgers, and director of The Art and Artificial Intelligence Lab. We discuss his work on AICAN, a creative adversarial network that produces original portraits, trained with over 500 years of European canonical art. The complete show notes for this episode can be found at twimlai.com/talk/265.
13/05/1938m 1s

Diagnostic Visualization for Machine Learning with YellowBrick w/ Rebecca Bilbro - TWiML Talk #264

Today we close out our PyDataSci series joined by Rebecca Bilbro, head of data science at ICX media and co-creator of the popular open-source visualization library YellowBrick. In our conversation, Rebecca details: • Her relationship with toolmaking, which led to the eventual creation of YellowBrick. • Popular tools within YellowBrick, including a summary of their unit testing approach. • Interesting use cases that she’s seen over time.
10/05/1941m 44s

Librosa: Audio and Music Processing in Python with Brian McFee - TWiML Talk #263

Today we continue our PyDataSci series joined by Brian McFee, assistant professor of music technology and data science at NYU, and creator of LibROSA, a Python package for music and audio analysis. Brian walks us through his experience building LibROSA, including: • The core functions provided in the library • His experience working in Jupyter notebooks • A typical LibROSA workflow, and more! The complete show notes for this episode can be found at twimlai.com/talk/26
09/05/1938m 19s

Practical Natural Language Processing with spaCy and Prodigy w/ Ines Montani - TWiML Talk #262

In this episode of PyDataSci, we’re joined by Ines Montani, co-founder of Explosion, co-developer of spaCy, and lead developer of Prodigy. Ines and I caught up to discuss her various projects, including the aforementioned spaCy, an open-source NLP library built with a focus on industry and production use cases. The complete show notes for this episode can be found at twimlai.com/talk/262. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.
07/05/1948m 49s

Scaling Jupyter Notebooks with Luciano Resende - TWiML Talk #261

Today we're joined by Luciano Resende, an Open Source AI Platform Architect at IBM, to discuss his work on Jupyter Enterprise Gateway. In our conversation, we address challenges that arise while using Jupyter Notebooks at scale and the role of open source projects like Jupyter Hub and Enterprise Gateway. We also explore some common requests like tighter integration with git repositories, as well as the python-centricity of the vast Jupyter ecosystem.
06/05/1933m 37s

Fighting Fake News and Deep Fakes with Machine Learning w/ Delip Rao - TWiML Talk #260

Today we’re joined by Delip Rao, vice president of research at the AI Foundation, co-author of the book Natural Language Processing with PyTorch, and creator of the Fake News Challenge. In our conversation, we discuss the generation and detection of artificial content, including “fake news” and “deep fakes,” the state of generation and detection for text, video, and audio, the key challenges in each of these modalities, the role of GANs on both sides of the equation, and other potential solutions.
03/05/1958m 45s

Maintaining Human Control of Artificial Intelligence with Joanna Bryson - TWiML Talk #259

Today we’re joined by Joanna Bryson, Reader at the University of Bath. I was fortunate to catch up with Joanna at the conference, where she presented on “Maintaining Human Control of Artificial Intelligence.” In our conversation, we explore our current understanding of “natural intelligence” and how it can inform the development of AI, the context in which she uses the term “human control” and its implications, and the meaning of and need to apply “DevOps” principles when developing AI systems.
01/05/1938m 16s

Intelligent Infrastructure Management with Pankaj Goyal & Rochna Dhand - TWiML Talk #258

Today we're joined by Pankaj Goyal and Rochna Dhand, to discuss HPE InfoSight. In our conversation, Pankaj gives a look into how HPE as a company views AI, from their customers to the future of AI at HPE through investment. Rochna details the role of HPE’s InfoSight in deploying AI operations at an enterprise level, including a look at where it fits into the infrastructure for their current customer base, along with a walkthrough of how InfoSight is deployed in a real-world use case.
29/04/1944m 33s

Organizing for Successful Data Science at Stitch Fix with Eric Colson - TWiML Talk #257

Today we’re joined by Eric Colson, Chief Algorithms Officer at Stitch Fix, whose presentation at the Strata Data conference explored “How to make fewer bad decisions.” Our discussion focuses on the three key organizational principles for data science teams that he’s developed while at Stitch Fix. Along the way, we also talk through various roles data science plays, exploring a few of the 800+ algorithms in use at the company spanning recommendations, inventory management, demand forecasting, and more.
26/04/1952m 14s

End-to-End Data Science to Drive Business Decisions at LinkedIn with Burcu Baran - TWiML Talk #256

In this episode of our Strata Data conference series, we’re joined by Burcu Baran, Senior Data Scientist at LinkedIn. At Strata, Burcu, along with a few members of her team, delivered the presentation “Using the full spectrum of data science to drive business decisions,” which outlines how LinkedIn manages their entire machine learning production process. In our conversation, Burcu details each phase of the process, including problem formulation, monitoring features, A/B testing and more.
24/04/1948m 49s

Learning with Limited Labeled Data with Shioulin Sam - TWiML Talk #255

Today we’re joined by Shioulin Sam, Research Engineer with Cloudera Fast Forward Labs. Shioulin and I caught up to discuss the newest report to come out of CFFL, “Learning with Limited Label Data,” which explores active learning as a means to build applications requiring only a relatively small set of labeled data. We start our conversation with a review of active learning and some of the reasons why it’s recently become an interesting technology for folks building systems based on deep learning.
22/04/1944m 13s
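Active learning's core loop — train, query the unlabeled point the model is least sure about, label it, retrain — fits in a few lines. This toy NumPy version is entirely made up for illustration (it is not from the CFFL report): it learns a 1-D decision threshold by repeatedly querying the point its current model finds most uncertain:

```python
import numpy as np

rng = np.random.default_rng(0)
pool = rng.uniform(-1, 1, size=200)   # unlabeled pool; the true label is x > 0

def predict_proba(threshold, x):
    """A toy probabilistic 'model': a logistic curve around a threshold."""
    return 1.0 / (1.0 + np.exp(-10 * (x - threshold)))

threshold = -0.8                      # deliberately bad starting model
labeled = []                          # (x, label) pairs acquired so far
for _ in range(10):                   # the active-learning loop
    p = predict_proba(threshold, pool)
    i = int(np.argmax(1 - 2 * np.abs(p - 0.5)))   # most uncertain point
    labeled.append((pool[i], pool[i] > 0))        # oracle provides the label
    pool = np.delete(pool, i)
    # Crude "retraining": place the threshold between the classes seen so far.
    pos = [x for x, y in labeled if y]
    neg = [x for x, y in labeled if not y]
    threshold = (min(pos, default=1.0) + max(neg, default=-1.0)) / 2
print(threshold)   # close to the true boundary at 0 after only 10 labels
```

The point of the exercise: by choosing *which* points to label, the model homes in on the decision boundary with a handful of labels instead of labeling the whole pool.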

cuDF, cuML & RAPIDS: GPU Accelerated Data Science with Paul Mahler - TWiML Talk #254

Today we're joined by Paul Mahler, senior data scientist and technical product manager for ML at NVIDIA. In our conversation, Paul and I discuss NVIDIA's RAPIDS open source project, which aims to bring GPU acceleration to traditional data science workflows and ML tasks. We dig into the various subprojects like cuDF and cuML that make up the RAPIDS ecosystem, as well as the role of lower-level libraries like mlprims and the relationship to other open-source projects like Scikit-learn, XGBoost and Dask.
19/04/1938m 10s
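cuDF's stated goal is to mirror the pandas API on the GPU, so a RAPIDS workflow reads like ordinary dataframe code. The sketch below uses toy, made-up data and runs with pandas itself, since no GPU is assumed here; with RAPIDS installed, swapping the import for `import cudf as pd` is intended to be largely drop-in:

```python
import pandas as pd   # with RAPIDS: `import cudf as pd` (API designed to match)

# Toy telemetry table -- illustrative data only.
df = pd.DataFrame({
    "device": ["a", "b", "a", "c", "b", "a"],
    "latency_ms": [12.0, 7.5, 13.1, 22.4, 8.0, 11.6],
})

# A typical ETL step: aggregate per device, then filter the laggards.
# This groupby/filter pattern is the kind of workload cuDF accelerates.
mean_latency = df.groupby("device")["latency_ms"].mean()
slow = mean_latency[mean_latency > 10.0].sort_index()
print(slow)   # devices "a" and "c" exceed the 10 ms threshold
```

Keeping the API identical is the design choice that lets existing pandas pipelines move to the GPU without a rewrite.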

Edge AI for Smart Manufacturing with Trista Chen - TWiML Talk #253

Today we’re joined by Trista Chen, chief scientist of machine learning at Inventec, who spoke on “Edge AI in Smart Manufacturing: Defect Detection and Beyond” at GTC. In our conversation, we discuss the challenges that Industry 4.0 initiatives aim to address and dig into a few of the various use cases she’s worked on, such as the deployment of ML in an industrial setting to perform various tasks. We also discuss the challenges associated with estimating the ROI of industrial AI projects.
18/04/1938m 35s

Machine Learning for Security and Security for Machine Learning with Nicole Nichols - TWiML Talk #252

Today we’re joined by Nicole Nichols, a senior research scientist at the Pacific Northwest National Lab. We discuss her recent presentation at GTC, which was titled “Machine Learning for Security, and Security for Machine Learning.” We explore two use cases, insider threat detection and software fuzz testing, discussing the effectiveness of standard and bidirectional RNN language models for detecting malicious activity, the augmentation of software fuzzing techniques using deep learning, and much more.
16/04/1941m 52s

Domain Adaptation and Generative Models for Single Cell Genomics with Gerald Quon - TWiML Talk #251

Today we’re joined by Gerald Quon, assistant professor at UC Davis. Gerald presented his work on Deep Domain Adaptation and Generative Models for Single Cell Genomics at GTC this year, which explores single cell genomics as a means of disease identification for treatment. In our conversation, we discuss how he uses deep learning to generate novel insights across diseases, the different types of data that were used, and the development of ‘nested’ Generative Models for single cell measurement.
15/04/1932m 21s

Mapping Dark Matter with Bayesian Neural Networks w/ Yashar Hezaveh - TWiML Talk #250

Today we’re joined by Yashar Hezaveh, Assistant Professor at the University of Montreal. Yashar and I caught up to discuss his work on gravitational lensing, which is the bending of light from distant sources due to the effects of gravity. In our conversation, Yashar and I discuss how ML can be applied to undistort images, the intertwined roles of simulation and ML in generating images, incorporating other techniques such as domain transfer or GANs, and how he assesses the results of this project.
11/04/1934m 21s

Deep Learning for Population Genetic Inference with Dan Schrider - TWiML Talk #249

Today we’re joined by Dan Schrider, assistant professor in the department of genetics at UNC Chapel Hill. My discussion with Dan starts with an overview of population genomics, looking into his application of ML in the field. We then dig into Dan’s paper “The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference,” which examines the idea that CNNs are capable of outperforming expert-derived statistical methods for some key problems in the field.
09/04/1949m 11s

Empathy in AI with Rob Walker - TWiML Talk #248

Today we’re joined by Rob Walker, Vice President of Decision Management at Pegasystems. Rob joined us back in episode 127 to discuss “Hyperpersonalizing the customer experience.” Today, he’s back for a discussion about the role of empathy in AI systems. In our conversation, we dig into the role empathy plays in consumer-facing human-AI interactions, the differences between empathy and ethics, and a few examples of ways empathy should be considered when building enterprise AI systems.
05/04/1940m 46s

Benchmarking Custom Computer Vision Services at Urban Outfitters with Tom Szumowski - TWiML Talk #247

Today we’re joined by Tom Szumowski, Data Scientist at URBN, parent company of Urban Outfitters and other consumer fashion brands. Tom and I caught up to discuss his project “Exploring Custom Vision Services for Automated Fashion Product Attribution.” We look at the process Tom and his team took to build custom attribution models, and the results of their evaluation of various custom vision APIs for this purpose, with a focus on the various roadblocks and lessons he and his team encountered along the way.
03/04/1950m 9s

Pragmatic Quantum Machine Learning with Peter Wittek - TWiML Talk #245

Today we’re joined by Peter Wittek, Assistant Professor at the University of Toronto working on quantum-enhanced machine learning and the application of high-performance learning algorithms. In our conversation, we discuss the current state of quantum computing, a look ahead to what the next 20 years of quantum computing might hold, and how current quantum computers are flawed. We then dive into our discussion on quantum machine learning, and Peter’s new course on the topic, which debuted in February.
01/04/191h 5m

*Bonus Episode* A Quantum Machine Learning Algorithm Takedown with Ewin Tang - TWiML Talk #246

In this special bonus episode of the podcast, I’m joined by Ewin Tang, a PhD student in the Theoretical Computer Science group at the University of Washington. In our conversation, Ewin and I dig into her paper “A quantum-inspired classical algorithm for recommendation systems,” which took the quantum computing community by storm last summer. We haven’t called out a Nerd-Alert interview in a long time, but this interview inspired us to dust off that designation, so get your notepad ready!
01/04/1940m 27s

Supporting TensorFlow at Airbnb with Alfredo Luque - TWiML Talk #244

Today we're joined by Alfredo Luque, a software engineer on the machine infrastructure team at Airbnb. If you’re interested in AI Platforms and ML infrastructure, you probably remember my interview with Airbnb’s Atul Kale, in which we discussed their Bighead platform. In my conversation with Alfredo, we dig a bit deeper into Bighead’s support for TensorFlow, discuss a recent image categorization challenge they solved with the framework, and explore what the new 2.0 release means for their users.
28/03/1940m 25s

Mining the Vatican Secret Archives with TensorFlow w/ Elena Nieddu - TWiML Talk #243

Today we’re joined by Elena Nieddu, PhD student at Roma Tre University, who presented on her project “In Codice Ratio” at the TF Dev Summit. In our conversation, Elena provides an overview of the project, which aims to annotate and transcribe Vatican secret archive documents via machine learning. We discuss the many challenges associated with transcribing this vast archive of handwritten documents, including overcoming the high cost of data annotation.
27/03/1943m 16s

Exploring TensorFlow 2.0 with Paige Bailey - TWiML Talk #242

Today we're joined by Paige Bailey, TensorFlow developer advocate at Google, to discuss the TensorFlow 2.0 alpha release. Paige and I talk through the latest TensorFlow updates, including the evolution of the TensorFlow APIs and the role of eager mode, tf.keras and tf.function, the evolution of TensorFlow for Swift and its inclusion in the new fast.ai course, new updates to TFX (or TensorFlow Extended), Google’s end-to-end ML platform, the emphasis on community collaboration with TF 2.0, and more.
25/03/1939m 57s

Privacy-Preserving Decentralized Data Science with Andrew Trask - TWiML Talk #241

Today we’re joined by Andrew Trask, PhD student at the University of Oxford and Leader of the OpenMined Project, an open-source community focused on researching, developing, and promoting tools for secure, privacy-preserving, value-aligned artificial intelligence. We dig into why OpenMined is important, exploring some of the basic research and technologies supporting Private, Decentralized Data Science, including ideas such as Differential Privacy, and Secure Multi-Party Computation.
21/03/1933m 47s

The Unreasonable Effectiveness of the Forget Gate with Jos Van Der Westhuizen - TWiML Talk #240

Today we’re joined by Jos Van Der Westhuizen, PhD student in Engineering at Cambridge University. Jos’ research focuses on applying LSTMs, or Long Short-Term Memory neural networks, to biological data for various tasks. In our conversation, we discuss his paper "The unreasonable effectiveness of the forget gate," in which he explores the various “gates” that make up an LSTM module and the general impact of getting rid of gates on the computational intensity of training the networks.
18/03/1932m 6s
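For readers who want to see exactly which “gates” an ablation like this touches, here is a minimal NumPy LSTM step (random toy weights, purely illustrative, not code from the paper). The forget gate f is the term that scales the previous cell state; fixing it to 1 shows what the update computes without it:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, use_forget_gate=True):
    """One LSTM step; W maps [x; h] to the four stacked gate pre-activations."""
    z = W @ np.concatenate([x, h])
    zi, zf, zo, zg = np.split(z, 4)
    i, o, g = sigmoid(zi), sigmoid(zo), np.tanh(zg)
    # The forget gate decides how much of the old cell state survives;
    # use_forget_gate=False fixes it at 1, so nothing is ever forgotten.
    f = sigmoid(zf) if use_forget_gate else np.ones_like(zf)
    c = f * c + i * g        # cell-state update
    h = o * np.tanh(c)       # hidden-state output
    return h, c

rng = np.random.default_rng(0)
d = 4                                      # toy hidden size
W = 0.1 * rng.normal(size=(4 * d, 2 * d))
h = c = np.zeros(d)
for _ in range(3):                         # run a short toy sequence
    h, c = lstm_step(rng.normal(size=d), h, c, W)
```

Each gate removed from this update reduces both the parameter count of W and the per-step compute, which is the training-cost trade-off the episode discusses.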

Building a Recommendation Agent for The North Face with Andrew Guldman - TWiML Talk #239

Today we’re joined by Andrew Guldman, VP of Product Engineering and R&D at Fluid to discuss Fluid XPS, a user experience built to help the casual shopper decide on the best product choices during online retail interactions. We specifically discuss its origins as a product to assist outerwear retailer The North Face. In our conversation, we discuss their use of heat-sink algorithms and graph databases, challenges associated with staying on top of a constantly changing landscape, and more!
14/03/1947m 48s

Active Learning for Materials Design with Kevin Tran - TWiML Talk #238

Today we’re joined by Kevin Tran, PhD student at Carnegie Mellon University. In our conversation, we explore the challenges surrounding the creation of renewable energy fuel cells, which is discussed in his recent Nature paper “Active learning across intermetallics to guide discovery of electrocatalysts for CO2 reduction and H2 evolution.” The AI Conference is returning to New York in April and we have one FREE conference pass for a lucky listener! Visit twimlai.com/ainygiveaway to enter!
11/03/1933m 42s

Deep Learning in Optics with Aydogan Ozcan - TWiML Talk #237

Today we’re joined by Aydogan Ozcan, Professor of Electrical and Computer Engineering at UCLA, exploring his group's research into the intersection of deep learning and optics, holography and computational imaging. We specifically look at a really interesting project to create all-optical neural networks which work based on diffraction, where the printed pixels of the network are analogous to neurons. We also explore practical applications for their research and other areas of interest.
07/03/1942m 24s

Scaling Machine Learning on Graphs at LinkedIn with Hema Raghavan and Scott Meyer - TWiML Talk #236

Today we’re joined by Hema Raghavan and Scott Meyer of LinkedIn to discuss the graph database and machine learning systems that power LinkedIn features such as “People You May Know” and second-degree connections. Hema shares her insight into the motivations for LinkedIn’s use of graph-based models and some of the challenges surrounding using graphical models at LinkedIn’s scale, while Scott details his work on the software used at the company to support its biggest graph databases.
04/03/1946m 28s

Safer Exploration in Deep Reinforcement Learning using Action Priors with Sicelukwanda Zwane - TWiML Talk #235

Today we conclude our Black in AI series with Sicelukwanda Zwane, a master’s student at the University of the Witwatersrand and graduate research assistant at the CSIR, who presented on “Safer Exploration in Deep Reinforcement Learning using Action Priors” at the workshop. In our conversation, we discuss what “safer exploration” means in this sense, the difference between this work and other techniques like imitation learning, and how this fits in with the goal of “lifelong learning.”
01/03/1953m 46s

Dissecting the Controversy around OpenAI's New Language Model - TWiML Talk #234

In the inaugural TWiML Live, Sam Charrington is joined by Amanda Askell (OpenAI), Anima Anandkumar (NVIDIA/Caltech), Miles Brundage (OpenAI), Robert Munro (Lilt), and Stephen Merity to discuss the controversial recent release of the OpenAI GPT-2 language model. We cover the basics, like what language models are and why they’re important, and why this announcement caused such a stir, then dig deep into why the lack of a full release of the model raised concerns for so many.
25/02/191h 5m

Human-Centered Design with Mira Lane - TWiML Talk #233

Today we present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft. Mira and I focus our conversation on the role of culture and human-centered design in AI. We discuss how Mira defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be scalably implemented across large engineering organizations.
22/02/1946m 48s

Fairness in Machine Learning with Hanna Wallach - TWiML Talk #232

Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research. Hanna and I really dig into how bias and a lack of interpretability and transparency show up across ML. We discuss the role that human biases, even those that are inadvertent, play in tainting data, and whether deployment of “fair” ML models can actually be achieved in practice, and much more. Hanna points us to a TON of resources to further explore the topic of fairness in ML, which you’ll find at twimlai.com/talk/232.
18/02/1948m 34s

AI for Healthcare with Peter Lee - TWiML Talk #231

In this episode, we’re joined by Peter Lee, Corporate Vice President at Microsoft Research responsible for the company’s healthcare initiatives. Peter and I met back at Microsoft Ignite, where he gave me some really interesting takes on AI development in China, which is linked in the show notes. This conversation centers around impact areas Peter sees for AI in healthcare, namely diagnostics and therapeutics, tools, and the future of precision medicine.
18/02/1956m 51s

An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection with Justice Amoh Jr. - TWiML Talk #230

Today, we're joined by Justice Amoh Jr., a Ph.D. student at Dartmouth’s Thayer School of Engineering. Justice presented his work on “An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection.” In our conversation, we discuss his goal of bringing low cost, high-efficiency wearables to market for monitoring asthma. We explore the challenges of using classical machine learning models on microcontrollers, and how he went about developing models optimized for constrained hardware environments.
11/02/1945m 39s

Pathologies of Neural Models and Interpretability with Alvin Grissom II - TWiML Talk #229

Today, we continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College. In our conversation, we dive into the paper he presented at the workshop, “Pathologies of Neural Models Make Interpretations Difficult.” We talk through some of the “pathological behaviors” he identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how we can improve model training with entropy regularization.
11/02/1932m 31s

AI for Earth with Lucas Joppa - TWiML Talk #228

Today we’re joined by Lucas Joppa, Chief Environmental Officer at Microsoft and Zach Parisa, Co-founder and president of Silvia Terra, a Microsoft AI for Earth grantee. In our conversation, we explore the ways that ML & AI can be used to advance our understanding of forests and other ecosystems, supporting conservation efforts. We discuss how Silvia Terra uses computer vision and data from a wide array of sensors, combined with AI, to yield more detailed estimates of the various species in our forests.
08/02/1956m 11s

AI for Accessibility with Wendy Chisholm - TWiML Talk #227

Today we’re joined by Wendy Chisholm, a principal accessibility architect at Microsoft, and one of the chief proponents of the AI for Accessibility program, which extends grants to AI-powered accessibility projects in the areas of Employment, Daily Life, and Communication & Connection. In our conversation, we discuss the intersection of AI and accessibility, the lasting impact that innovation in AI can have for people with disabilities and society as a whole, and the importance of projects in this area.
06/02/1950m 16s

AI for Humanitarian Action with Justin Spelhaug - TWiML Talk #226

Today we're joined by Justin Spelhaug, General Manager of Technology for Social Impact at Microsoft. In our conversation, we discuss the company’s efforts in AI for Humanitarian Action, covering Microsoft’s overall approach to technology for social impact, how his group helps mission-driven organizations best leverage technologies like AI, and how AI is being used at places like the World Bank, Operation Smile, and Mission Measurement to create greater impact.
04/02/1958m 50s

Teaching AI to Preschoolers with Randi Williams - TWiML Talk #225

Today, in the first episode of our Black in AI series, we’re joined by Randi Williams, PhD student at the MIT Media Lab. At the Black in AI workshop Randi presented her research on PopBots: An Early Childhood AI Curriculum, which is geared towards teaching preschoolers the fundamentals of artificial intelligence. In our conversation, we discuss the origins of the project, the three AI concepts that are taught in the program, and the goals that Randi hopes to accomplish with her work.
31/01/1943m 36s

Holistic Optimization of the LinkedIn News Feed - TWiML Talk #224

Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn. In our conversation, Tim describes the holistic optimization of the feed and we discuss some of the interesting technical and business challenges associated with trying to do this. We talk through some of the specific techniques used at LinkedIn, like multi-armed bandits and content embeddings, and also jump into a really interesting discussion about organizing for machine learning at scale.
28/01/1948m 2s

AI at the Edge at Qualcomm with Gary Brotman - TWiML Talk #223

Today we’re joined by Gary Brotman, Senior Director of Product Management at Qualcomm Technologies, Inc. Gary, who got his start in AI through music, now leads strategy and product planning for the company’s AI and ML technologies, including those that make up the Qualcomm Snapdragon mobile platforms. In our conversation, we discuss AI on mobile devices and at the edge, including popular use cases, and explore some of the various acceleration technologies offered by Qualcomm and others that enable them.
24/01/1951m 28s

AI Innovation at CES - TWiML Talk #222

A few weeks ago, I made the trek to Las Vegas for the world’s biggest electronics conference, CES. In this special visual only episode, we’re going to check out some of the interesting examples of machine learning and AI that I found at the event. Check out the video at https://twimlai.com/ces2019, and be sure to hit the like and subscribe buttons and let us know how you like the show via a comment! For the show notes, visit https://twimlai.com/talk/222.
21/01/192m 0s

Self-Tuning Services via Real-Time Machine Learning with Vladimir Bychkovsky - TWiML Talk #221

Today we’re joined by Vladimir Bychkovsky, Engineering Manager at Facebook, to discuss Spiral, a system they’ve developed for self-tuning high-performance infrastructure services at scale, using real-time machine learning. In our conversation, we explore how the system works, how it was developed, and how infrastructure teams at Facebook can use it to replace hand-tuned parameters set using heuristics with services that automatically optimize themselves in minutes rather than in weeks.
17/01/1946m 8s

Building a Recommender System from Scratch at 20th Century Fox with JJ Espinoza - TWiML Talk #220

Today we’re joined by JJ Espinoza, former Director of Data Science at 20th Century Fox. In this talk we dig into JJ and his team’s experience building and deploying a content recommendation system from the ground up. In our conversation, we explore the design of a couple of key components of their system, the first of which processes movie scripts to make recommendations about which movies the studio should make, and the second processes trailers to determine which should be recommended to users.
14/01/1934m 59s

Legal and Policy Implications of Model Interpretability with Solon Barocas - TWiML Talk #219

Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University. Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of the use of machine learning models. In our conversation, we explore the gap between law, policy, and ML, and how to build the bridge between them, including formalizing ethical frameworks for machine learning. We also look at his paper ”The Intuitive Appeal of Explainable Machines.”
10/01/1946m 52s

Trends in Computer Vision with Siddha Ganju - TWiML Talk #218

In the final episode of our AI Rewind series, we’re excited to have Siddha Ganju back on the show. Siddha, who is now an autonomous vehicles solutions architect at Nvidia shares her thoughts on trends in Computer Vision in 2018 and beyond. We cover her favorite CV papers of the year in areas such as neural architecture search, learning from simulation, application of CV to augmented reality, and more, as well as a bevy of tools and open source projects.
07/01/1932m 53s

Trends in Reinforcement Learning with Simon Osindero - TWiML Talk #217

In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind. We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We’ve packed a bunch into this show, as Simon walks us through many of the important papers and developments seen this year in areas like Imitation Learning, Unsupervised RL, Meta-learning, and more. The complete show notes for this episode can be found at https://twimlai.com/talk/217.
03/01/1952m 13s

Trends in Natural Language Processing with Sebastian Ruder - TWiML Talk #216

In this episode of our AI Rewind series, we’ve brought back recent guest Sebastian Ruder, PhD Student at the National University of Ireland and Research Scientist at Aylien, to discuss trends in Natural Language Processing in 2018 and beyond. In our conversation we cover a bunch of interesting papers spanning topics such as pre-trained language models, common sense inference datasets, and large document reasoning, and talk through Sebastian’s predictions for the new year.
31/12/1852m 54s

Trends in Machine Learning with Anima Anandkumar - TWiML Talk #215

In this episode of our AI Rewind series, we’re back with Anima Anandkumar, Bren Professor at Caltech and now Director of Machine Learning Research at NVIDIA. Anima joins us to discuss her take on trends in the broader Machine Learning field in 2018 and beyond. In our conversation, we cover not only technical breakthroughs in the field but also those around inclusivity and diversity. For this episode's complete show notes, visit twimlai.com/talk/215.
27/12/1851m 23s

Trends in Deep Learning with Jeremy Howard - TWiML Talk #214

In this episode of our AI Rewind series, we’re bringing back one of your favorite guests of the year, Jeremy Howard, founder and researcher at Fast.ai. Jeremy joins us to discuss trends in Deep Learning in 2018 and beyond. We cover many of the papers, tools and techniques that have contributed to making deep learning more accessible than ever to so many developers and data scientists.
24/12/181h 8m

Training Large-Scale Deep Nets with RL with Nando de Freitas - TWiML Talk #213

Today we close out our NeurIPS series joined by Nando de Freitas, Team Lead & Principal Scientist at DeepMind. In our conversation, we explore his interest in understanding the brain and working towards artificial general intelligence. In particular, we dig into a couple of his team’s NeurIPS papers: “Playing hard exploration games by watching YouTube,” and “One-Shot high-fidelity imitation: Training large-scale deep nets with RL.”
20/12/1855m 24s

Making Algorithms Trustworthy with David Spiegelhalter - TWiML Talk #212

Today we’re joined by David Spiegelhalter, Chair of the Winton Centre for Risk and Evidence Communication at Cambridge University and President of the Royal Statistical Society. David, an invited speaker at NeurIPS, presented on “Making Algorithms Trustworthy: What Can Statistical Science Contribute to Transparency, Explanation and Validation?” In our conversation, we explore the nuanced difference between being trusted and being trustworthy, and its implications for those building AI systems.
20/12/1823m 25s

Designing Computer Systems for Software with Kunle Olukotun - TWiML Talk #211

Today we’re joined by Kunle Olukotun, Professor in the department of EE and CS at Stanford University, and Chief Technologist at Sambanova Systems. Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine and deep learning, touching on multicore processor design, domain specific languages, and graph-based hardware. This was a fun one!
18/12/1855m 44s

Operationalizing Ethical AI with Kathryn Hume - TWiML Talk #210

Today we conclude our Trust in AI series with this conversation with Kathryn Hume, VP of Strategy at Integrate AI. We discuss her newly released white paper “Responsible AI in the Consumer Enterprise,” which details a framework for ethical AI deployment in e-commerce companies and other consumer-facing enterprises. We look at the structure of the ethical framework she proposes, and some of the many questions that need to be considered when deploying AI in an ethical manner.
14/12/1853m 44s

Approaches to Fairness in Machine Learning with Richard Zemel - TWiML Talk #209

Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the department of Computer Science at the University of Toronto and Research Director at Vector Institute. In our conversation, Rich describes some of his work on fairness in machine learning algorithms, including how he defines both group and individual fairness and his group’s recent NeurIPS poster, “Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer.”
12/12/1845m 31s

Trust and AI with Parinaz Sobhani - TWiML Talk #208

In today’s episode we’re joined by Parinaz Sobhani, Director of Machine Learning at Georgian Partners. In our conversation, Parinaz and I discuss some of the main issues falling under the “trust” umbrella, such as transparency, fairness and accountability. We also explore some of the trust-related projects she and her team at Georgian are working on, as well as some of the interesting trust and privacy papers coming out of the NeurIPS conference.
11/12/1846m 26s

Unbiased Learning from Biased User Feedback with Thorsten Joachims - TWiML Talk #207

In the final episode of our re:Invent series, we're joined by Thorsten Joachims, Professor in the Department of Computer Science at Cornell University. We discuss his presentation “Unbiased Learning from Biased User Feedback,” looking at some of the inherent and introduced biases in recommender systems, and the ways to avoid them. We also discuss how inference techniques can be used to make learning algorithms more robust to bias, and how these can be enabled with the correct type of logging policies.
07/12/1840m 44s
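A standard tool in this line of work is inverse propensity scoring (IPS): weight each observed click by the inverse of the probability the user examined that position. This self-contained NumPy simulation uses synthetic numbers chosen only to illustrate the bias and its correction, not data or code from the talk:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Position bias: an item shown at rank k is examined with propensity[k];
# an examined item is clicked with probability equal to its relevance.
propensity = np.array([1.0, 0.5, 0.25])   # examination probability by rank
relevance = np.array([0.2, 0.6, 0.2])     # true click-if-examined rate by rank
rank = rng.integers(0, 3, size=n)
examined = rng.random(n) < propensity[rank]
clicked = examined & (rng.random(n) < relevance[rank])

at_rank_1 = clicked[rank == 1]
naive = at_rank_1.mean()                   # biased low: ignores position bias
ips = (at_rank_1 / propensity[1]).mean()   # reweighting recovers relevance
print(naive, ips)                          # naive ≈ 0.30, ips ≈ 0.60
```

The catch, and part of what makes logging policy design matter: IPS only works when the propensities are known and logged, which is why the episode ties robustness to the correct type of logging policy.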

Language Parsing and Character Mining with Jinho Choi - TWiML Talk #206

Today we’re joined by Jinho Choi, assistant professor of computer science at Emory University. Jinho presented at the conference on ELIT, their cloud-based NLP platform. In our conversation, we discuss some of the key NLP challenges that Jinho and his group are tackling, including language parsing and character mining. We also discuss their vision for ELIT, which is to make it easy for researchers to develop, access, and deploy cutting-edge NLP tools and models on the cloud.
05/12/1847m 33s

re:Invent Roundup Roundtable 2018 with Dave McCrory and Val Bercovici - TWiML Talk #205

I’m excited to present our second annual re:Invent Roundup Roundtable. This year I’m joined by Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Val Bercovici, Founder and CEO of Pencil Data. If you missed the news coming out of re:Invent, we cover all of AWS’ most important ML and AI announcements, including SageMaker Ground Truth, Reinforcement Learning, DeepRacer, Inferentia and Elastic Inference, ML Marketplace and much more. For the show notes visit https://twimlai.com/ta
03/12/181h 7m

Knowledge Graphs and Expert Augmentation with Marisa Boston - TWiML Talk #204

Today we’re joined by Marisa Boston, Director of Cognitive Technology in KPMG’s Cognitive Automation Lab. We caught up to discuss some of the ways that KPMG is using AI to build tools that help augment the knowledge of their teams of professionals. We discuss knowledge graphs, how they can be used to map out and relate various concepts, and how they are used in conjunction with NLP tools to create insight engines. We also look at tools that curate and contextualize news and other text-based data sources.
29/11/1846m 57s

ML/DL for Non-Stationary Time Series Analysis in Financial Markets and Beyond with Stuart Reid - TWiML Talk #203

Today, we’re joined by Stuart Reid, Chief Scientist at NMRQL Research. NMRQL is an investment management firm that uses ML algorithms to make adaptive, unbiased, scalable, and testable trading decisions for its funds. In our conversation, Stuart and I dig into the way NMRQL uses ML and DL models to support the firm’s investment decisions. We focus on techniques for modeling non-stationary time-series, stationary vs non-stationary time-series, and challenges of building models using financial data.
26/11/1858m 29s

Industrializing Machine Learning at Shell with Daniel Jeavons - TWiML Talk #202

In this episode of our AI Platforms series, we’re joined by Daniel Jeavons, General Manager of Data Science at Shell. In our conversation, we explore the evolution of analytics and data science at Shell, discussing IoT-related applications and issues, such as inference at the edge, federated ML, and digital twins, all key considerations for the way they apply ML. We also talk about the data science process at Shell and the importance of platform technologies to the company as a whole.
21/11/1845m 19s

Resurrecting a Recommendations Platform at Comcast with Leemay Nassery - TWiML Talk #201

In this episode of our AI Platforms series, we’re joined by Leemay Nassery, Senior Engineering Manager and head of the recommendations team at Comcast. In our conversation, Leemay and I discuss just how she and her team resurrected the Xfinity X1 recommendations platform, including rebuilding the data pipeline, the machine learning process, and the deployment and training of their updated models. We also touch on the importance of A/B testing and maintaining their rebuilt infrastructure.
19/11/1847m 36s

Productive Machine Learning at LinkedIn with Bee-Chung Chen - TWiML Talk #200

In this episode of our AI Platforms series, we’re joined by Bee-Chung Chen, Principal Staff Engineer and Applied Researcher at LinkedIn. Bee-Chung and I caught up to discuss LinkedIn’s internal AI automation platform, Pro-ML. Bee-Chung breaks down some of the major pieces of the pipeline, LinkedIn’s experience bringing Pro-ML to the company's developers and the role the LinkedIn AI Academy plays in helping them get up to speed. For the complete show notes, visit https://twimlai.com/talk/200.
15/11/1847m 38s

Scaling Deep Learning on Kubernetes at OpenAI with Christopher Berner - TWiML Talk #199

In this episode of our AI Platforms series we’re joined by OpenAI’s Head of Infrastructure, Christopher Berner. In our conversation, we discuss the evolution of OpenAI’s deep learning platform, the core principles which have guided that evolution, and its current architecture. We dig deep into their use of Kubernetes and discuss various ecosystem players and projects that support running deep learning at scale on the open source project.
12/11/1849m 57s

Bighead: Airbnb's Machine Learning Platform with Atul Kale - TWiML Talk #198

In this episode of our AI Platforms series, we’re joined by Atul Kale, Engineering Manager on the machine learning infrastructure team at Airbnb. In our conversation, we discuss Airbnb’s internal machine learning platform, Bighead. Atul outlines the ML lifecycle at Airbnb and how the various components of Bighead support it. We then dig into the major components of Bighead, some of Atul’s best practices for scaling machine learning, and a special announcement that Atul and his team made at Strata.
08/11/1849m 44s

Facebook's FBLearner Platform with Aditya Kalro - TWiML Talk #197

In the kickoff episode of our AI Platforms series, we’re joined by Aditya Kalro, Engineering Manager at Facebook, to discuss their internal machine learning platform FBLearner Flow. FBLearner Flow is the workflow management platform at the heart of the Facebook ML engineering ecosystem. We discuss the history and development of the platform, as well as its functionality and its evolution from an initial focus on model training to supporting the entire ML lifecycle at Facebook.
06/11/1838m 38s

Geometric Statistics in Machine Learning w/ geomstats with Nina Miolane - TWiML Talk #196

In this episode we’re joined by Nina Miolane, researcher and lecturer at Stanford University. Nina and I spoke about her work in the field of geometric statistics in ML, specifically the application of Riemannian geometry, the study of curved surfaces, to ML. In our discussion we review the differences between Riemannian and Euclidean geometry, as well as her new Geomstats project, a Python package that simplifies computations and statistics on manifolds with geometric structures.
01/11/1843m 43s

Milestones in Neural Natural Language Processing with Sebastian Ruder - TWiML Talk #195

In this episode, we’re joined by Sebastian Ruder, PhD student studying NLP at the National University of Ireland and Research Scientist at text analysis startup Aylien. We discuss recent milestones in neural NLP, including multi-task learning and pretrained language models. We also look at the use of attention-based models, Tree RNNs and LSTMs, and memory-based networks. Finally, Sebastian walks us through his ULMFiT paper, which he co-authored with Jeremy Howard of fast.ai, whom I interviewed in episode 186.
29/10/181h 1m

Natural Language Processing at StockTwits with Garrett Hoffman - TWiML Talk #194

In this episode, we’re joined by Garrett Hoffman, Director of Data Science at Stocktwits. Stocktwits is a social network for the investing community which has its roots in the use of the $cashtag on Twitter. In our conversation, we discuss applications such as Stocktwits’ own use of “social sentiment graphs” built on multilayer LSTM networks to gauge community sentiment about certain stocks in real time, as well as the more general use of natural language processing for generating trading ideas.
25/10/1850m 56s

Advanced Reinforcement Learning & Data Science for Social Impact with Vukosi Marivate - TWiML Talk #193

In the final episode of our Deep Learning Indaba series, we speak with Vukosi Marivate, Chair of Data Science at the University of Pretoria and a co-organizer of the Indaba. My conversation with Vukosi falls into two distinct parts, his PhD research in reinforcement learning, and his current research, which falls under the banner of data science with social impact. We discuss several advanced RL scenarios, along with several applications he is currently exploring in areas like public safety and energy.
23/10/1846m 36s

AI Ethics, Strategic Decisioning and Game Theory with Osonde Osoba - TWiML Talk #192

In this episode of our Deep Learning Indaba Series, we’re joined by Osonde Osoba, Engineer at RAND Corporation. Osonde and I spoke on the heels of the Indaba, where he presented on AI Ethics and Policy. We discuss his framework-based approach for evaluating ethical issues and how to build an intuition for where ethical flashpoints may exist in these discussions. We also discuss Osonde’s own model development research, including the application of machine learning to strategic decisions and game theory.
18/10/1847m 3s

Acoustic Word Embeddings for Low Resource Speech Processing with Herman Kamper - TWiML Talk #191

In this episode of our Deep Learning Indaba Series, we’re joined by Herman Kamper, lecturer at Stellenbosch University in SA and a co-organizer of the Indaba. We discuss his work on limited- and zero-resource speech recognition, how those differ from regular speech recognition, and the tension between linguistic and statistical methods in this space. We also dive into the specifics of the methods being used and developed in Herman’s lab.
16/10/181h 1m

Learning Representations for Visual Search with Naila Murray - TWiML Talk #190

In this episode of our Deep Learning Indaba series, we’re joined by Naila Murray, Senior Research Scientist and Group Lead in the computer vision group at Naver Labs Europe. Naila presented at the Indaba on computer vision. In this discussion, we explore her work on visual attention, including why visual attention is important and the trajectory of work in the field over time. We also discuss her paper “Generalized Max Pooling,” and much more! For the complete show notes, visit twimlai.com/tal
12/10/1841m 33s

Evaluating Model Explainability Methods with Sara Hooker - TWiML Talk #189

In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain. I spoke with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means and nuances like the distinction between interpreting model decisions vs model function. We also talk about the relationship between Google Brain and the rest of the Google AI landscape and the significance of the Google AI Lab in Accra, Ghana.
10/10/181h 3m

Graph Analytic Systems with Zachary Hanif - TWiML Talk #188

In this, the final episode of our Strata Data Conference series, we’re joined by Zachary Hanif, Director of Machine Learning at Capital One’s Center for Machine Learning. We start our discussion with a look at the role of graph analytics in the ML toolkit, including some important application areas for graph-based systems. Zach gives us an overview of the different ways to implement graph analytics, including what he calls graphical processing engines, which excel at handling large datasets, and much more!
08/10/1854m 7s

Diversification in Recommender Systems with Ahsan Ashraf - TWiML Talk #187

In this episode of our Strata Data conference series, we’re joined by Ahsan Ashraf, data scientist at Pinterest. We discuss his presentation, “Diversification in recommender systems: Using topical variety to increase user satisfaction,” covering the experiments his team ran to explore the impact of diversification in users’ boards, the methodology his team used to incorporate variety into the Pinterest recommendation system, and much more! The show notes can be found at https://twimlai.com/talk/18
04/10/1844m 34s

The Fastai v1 Deep Learning Framework with Jeremy Howard - TWiML Talk #186

In today's episode we're presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai. This episode is being released today in conjunction with the company’s announcement of version 1.0 of their fastai library at the inaugural PyTorch DevCon in San Francisco. In our conversation, we dive into the new library, exploring why it’s important and what’s changed, the unique way in which it was developed, what it means for the future of the fast.ai courses, and much more!
02/10/181h 11m

Federated ML for Edge Applications with Justin Norman - TWiML Talk #185

In this episode we’re joined by Justin Norman, Director of Research and Data Science Services at Cloudera Fast Forward Labs. In my chat with Justin we start with an update on the company before diving into a look at some of recent and upcoming research projects. Specifically, we discuss their recent report on Multi-Task Learning and their upcoming research into Federated Machine Learning for AI at the edge. For the complete show notes, visit https://twimlai.com/talk/185.
27/09/1847m 44s

Exploring Dark Energy & Star Formation w/ ML with Viviana Acquaviva - TWiML Talk #184

In today’s episode of our Strata Data series, we’re joined by Viviana Acquaviva, Associate Professor at City Tech, the New York City College of Technology. In our conversation, we discuss an ongoing project she’s a part of called the “Hobby-Eberly Telescope Dark Energy eXperiment,” her motivation for undertaking this project, how she gets her data, the models she uses, and how she evaluates their performance. The complete show notes can be found at https://twimlai.com/talk/184.
26/09/1840m 12s

Document Vectors in the Wild with James Dreiss - TWiML Talk #183

In this episode of our Strata Data series we’re joined by James Dreiss, Senior Data Scientist at international news syndicate Reuters. James and I sat down to discuss his talk from the conference “Document vectors in the wild, building a content recommendation system,” in which he details how Reuters implemented document vectors to recommend content to users of their new “infinite scroll” page layout.
24/09/1840m 59s

Applied Machine Learning for Publishers with Naveed Ahmad - TWiML Talk #182

In today’s episode we’re joined by Naveed Ahmad, Senior Director of data engineering and machine learning at Hearst Newspapers. In our conversation, we discuss the role of ML at Hearst, including their motivations for implementing it and some of their early projects, the challenges of data acquisition within a large organization, and the benefits they enjoy from using Google’s BigQuery as their data warehouse. For the complete show notes for this episode, visit https://twimlai.com/talk/182.
20/09/1839m 57s

Anticipating Superintelligence with Nick Bostrom - TWiML Talk #181

In this episode, we’re joined by Nick Bostrom, professor at the University of Oxford and head of the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regard to AI safety and ethics. In our conversation, we discuss the risks associated with Artificial General Intelligence, advanced AI systems Nick refers to as superintelligence, openness in AI development, and more! The notes for this episode can be found at https://twimlai.com/talk/18
17/09/1844m 56s

Can We Train an AI to Understand Body Language? with Hanbyul Joo - TWIML Talk #180

In this episode, we’re joined by Hanbyul Joo, a PhD student at CMU. Han is working on what is called the “Panoptic Studio,” a multi-dimension motion capture studio used to capture human body behavior and body language. His work focuses on understanding how humans interact and behave so that we can teach AI-based systems to react to humans more naturally. We also discuss his CVPR best student paper award winner “Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies.”
13/09/1851m 53s

Biological Particle Identification and Tracking with Jay Newby - TWiML Talk #179

In today’s episode we’re joined by Jay Newby, Assistant Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta. Jay joins us to discuss his work applying deep learning to biology, including his paper “Deep neural networks automate detection for tracking of submicron scale particles in 2D and 3D.” He gives us an overview of particle tracking and a look at how he combines neural networks with physics-based particle filter models.
10/09/1845m 31s

AI for Content Creation with Debajyoti Ray - TWiML Talk #178

In today’s episode we’re joined by Debajyoti Ray, Founder and CEO of RivetAI, a startup producing AI-powered tools for storytellers and filmmakers. Deb and I discuss some of what he’s learned in the journey to apply AI to content creation, including how Rivet approaches the use of machine learning to automate creative processes, the company’s use of hierarchical LSTM models and autoencoders, and the tech stack that they’ve put in place to support the business.
06/09/1855m 15s

Deep Reinforcement Learning Primer and Research Frontiers with Kamyar Azizzadenesheli - TWiML Talk #177

Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine, who joins us to review the core elements of RL, along with a pair of his RL-related papers: “Efficient Exploration through Bayesian Deep Q-Networks” and “Sample-Efficient Deep RL with Generative Adversarial Tree Search.” To skip the Deep Reinforcement Learning primer conversation and jump to the research discussion, skip to the 34:30 mark of the episode. Show notes at https://twimlai.com/talk/177
30/08/181h 34m

OpenAI Five with Christy Dennison - TWiML Talk #176

Today we’re joined by Christy Dennison, Machine Learning Engineer at OpenAI, who has been working on OpenAI’s efforts to build an AI-powered agent to play the DOTA 2 video game. In our conversation, we get an overview of DOTA 2 gameplay and the recent OpenAI Five benchmark, then dig into the underlying technology used to create OpenAI Five, including their use of deep reinforcement learning, LSTM recurrent neural networks, and entity embeddings, plus some tricks and techniques they use to train the models.
27/08/1848m 25s

How ML Keeps Shelves Stocked at Home Depot with Pat Woowong - TWiML Talk #175

Today we’re joined by Pat Woowong, principal engineer in the applied machine intelligence group at The Home Depot. We discuss a project that Pat recently presented at the Google Cloud Next conference, which used machine learning to predict shelf-out scenarios within stores. We dig into the motivation for this system and how the team went about building it, their use of Kubernetes to support future growth in the platform, and much more. For complete show notes, visit https://twimlai.com/talk/175.
23/08/1845m 25s

Contextual Modeling for Language and Vision with Nasrin Mostafazadeh - TWiML Talk #174

Today we’re joined by Nasrin Mostafazadeh, Senior AI Research Scientist at New York-based Elemental Cognition. Our conversation focuses on Nasrin’s work in event-centric contextual modeling in language and vision including her work on the Story Cloze Test, a reasoning framework for evaluating story understanding and generation. We explore the details of this task, some of the challenges it presents and approaches for solving it.
20/08/1849m 20s

ML for Understanding Satellite Imagery at Scale with Kyle Story - TWiML Talk #173

Today we’re joined by Kyle Story, computer vision engineer at Descartes Labs. Kyle and I caught up after his recent talk at the Google Cloud Next Conference titled “How Computers See the Earth: A Machine Learning Approach to Understanding Satellite Imagery at Scale.” We discuss some of the interesting computer vision problems he’s worked on at Descartes, and the key challenges they’ve had to overcome in scaling them.
16/08/1856m 25s

Generating Ground-Level Images From Overhead Imagery Using GANs with Yi Zhu - TWiML Talk #172

Today we’re joined by Yi Zhu, a PhD candidate at UC Merced focused on geospatial image analysis. In our conversation, Yi and I take a look at his recent paper “What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks.” We discuss the goal of this research and how he uses conditional GANs to generate artificial ground-level images.
13/08/1838m 8s

Vision Systems for Planetary Landers and Drones with Larry Matthies - TWiML Talk #171

Today we’re joined by Larry Matthies, Sr. Research Scientist and head of computer vision in the mobility and robotics division at JPL. In our conversation, we discuss two talks he gave at CVPR a few weeks back, his work on vision systems for the first iteration of Mars rovers in 2004 and the future of planetary landing projects. For the complete show notes, visit https://twimlai.com/talk/171.
09/08/1843m 32s

Learning Semantically Meaningful and Actionable Representations with Ashutosh Saxena - TWiML Talk #170

In this episode I'm joined by Ashutosh Saxena, a veteran of Andrew Ng’s Stanford Machine Learning Group, and co-founder and CEO of Caspar.ai. Ashutosh and I discuss his RoboBrain project, a computational system that creates semantically meaningful and actionable representations of the objects, actions and observations that a robot experiences in its environment, and allows these to be shared and queried by other robots to learn new actions. For complete show notes, visit https://twimlai.com/talk/170.
06/08/1845m 55s

AI Innovation for Clinical Decision Support with Joe Connor - TWiML Talk #169

In this episode I speak with Joe Connor, Founder of Experto Crede. In our conversation, we explore his experiences bringing AI-powered healthcare projects to market in collaboration with the UK National Health Service and its clinicians, some of the various challenges he’s run into when applying ML and AI in healthcare, as well as some of his successes. We also discuss data protections, especially GDPR, and potential ways to include clinicians in the building of applications.
02/08/1842m 29s

Dynamic Visual Localization and Segmentation with Laura Leal-Taixé -TWiML Talk #168

In this episode I'm joined by Laura Leal-Taixé, Professor at the Technical University of Munich where she leads the Dynamic Vision and Learning Group. In our conversation, we discuss several of her recent projects including work on image-based localization techniques that fuse traditional model-based computer vision approaches with a data-driven approach based on deep learning, her paper on one-shot video object segmentation and the broader vision for her research.
30/07/1844m 57s

Conversational AI for the Intelligent Workplace with Gillian McCann - TWiML Talk #167

In this episode I'm joined by Gillian McCann, Head of Cloud Engineering and AI at Workgrid Software. In our conversation, which focuses on Workgrid’s use of cloud-based AI services, Gillian details some of the underlying systems that make Workgrid tick, their engineering pipeline & how they build high quality systems that incorporate external APIs and her view on factors that contribute to misunderstandings and impatience on the part of users of AI-based products.
26/07/1836m 39s

Computer Vision and Intelligent Agents for Wildlife Conservation with Jason Holmberg - TWiML Talk #166

In this episode, I'm joined by Jason Holmberg, Executive Director and Director of Engineering at WildMe. Jason and I discuss WildMe's pair of open source computer vision based conservation projects, Wildbook and Whaleshark.org. Jason kicks us off with the interesting story of how Wildbook came to be, the eventual expansion of the project, and the evolution of these projects’ use of computer vision and deep learning. For the complete show notes, visit twimlai.com/talk/166
22/07/1848m 24s

Pragmatic Deep Learning for Medical Imagery with Prashant Warier - TWiML Talk #165

In this episode I'm joined by Prashant Warier, CEO and Co-Founder of Qure.ai. We discuss the company’s work building products for interpreting head CT scans and chest x-rays. We look at knowledge gained in bringing a commercial product to market, including the gap between academic research papers and commercially viable software, the challenge of data acquisition, and more. We also touch on the application of transfer learning. For the complete show notes, visit https://twimlai.com/talk/165.
19/07/1836m 36s

Taskonomy: Disentangling Transfer Learning for Perception (CVPR 2018 Best Paper Winner) with Amir Zamir - TWiML Talk #164

In this episode I'm joined by Amir Zamir, Postdoctoral researcher at both Stanford & UC Berkeley, who joins us fresh off of winning the 2018 CVPR Best Paper Award for co-authoring "Taskonomy: Disentangling Task Transfer Learning." In our conversation, we discuss the nature and consequences of the relationships that Amir and his team discovered, and how they can be used to build more effective visual systems with machine learning. https://twimlai.com/talk/164
16/07/1847m 33s

Predicting Metabolic Pathway Dynamics w/ Machine Learning with Zak Costello - TWiML Talk #163

In today’s episode I’m joined by Zak Costello, post-doctoral fellow at the Joint BioEnergy Institute to discuss his recent paper, “A machine learning approach to predict metabolic pathway dynamics from time-series multiomics data.” Zak gives us an overview of synthetic biology and the use of ML techniques to optimize metabolic reactions for engineering biofuels at scale. Visit twimlai.com/talk/163 for the complete show notes.
11/07/1839m 38s

Machine Learning to Discover Physics and Engineering Principles with Nathan Kutz - TWiML Talk #162

In this episode, I’m joined by Nathan Kutz, Professor of applied mathematics, electrical engineering and physics at the University of Washington to discuss his research into the use of machine learning to help discover the fundamental governing equations for physical and engineering systems from time series measurements. For complete show notes visit twimlai.com/talk/162
09/07/1843m 8s

Automating Complex Internal Processes w/ AI with Alexander Chukovski - TWiML Talk #161

In this episode, I'm joined by Alexander Chukovski, Director of Data Services at Experteer, a Munich, Germany-based career platform. In our conversation, we explore Alex’s journey to implement machine learning at Experteer, the Experteer NLP pipeline and how it’s evolved, Alex’s work with deep learning based ML models, including models like VDCNN and Facebook’s FastText offering, and a few recent papers that look at transfer learning for NLP. Check out the complete show notes at twimlai.com/talk/161
05/07/1839m 42s

Designing Better Sequence Models with RNNs with Adji Bousso Dieng - TWiML Talk #160

In this episode, I'm joined by Adji Bousso Dieng, PhD Student in the Department of Statistics at Columbia University to discuss two of her recent papers, “Noisin: Unbiased Regularization for Recurrent Neural Networks” and “TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency.” We dive into the details behind both of these papers and learn a ton along the way.
02/07/1838m 22s

Love Love: AI and ML in Tennis with Stephanie Kovalchik - TWiML Talk #159

In the final show in our AI in Sports series, I’m joined by Stephanie Kovalchik, Research Fellow at Victoria University and Senior Sports Scientist at Tennis Australia. In our conversation we discuss Tennis Australia's use of data to develop a player rating system based on ability and probability, some of the interesting products her Game Insight Group is developing, including a win forecasting algorithm, and a statistic that measures a given player’s workload during a match.
29/06/1846m 50s

Growth Hacking Sports w/ Machine Learning with Noah Gift - TWiML Talk #158

In this episode of our AI in Sports series I'm joined by Noah Gift, Founder and Consulting CTO at Pragmatic Labs and professor at UC Davis. Noah and I discuss some of his recent work in using social media to predict which players hold the most on-court value, and how this work could lead to more complete approaches to player valuation. Check out the show notes at twimlai.com/talk/158
28/06/1850m 35s

Fine-Grained Player Prediction in Sports with Jennifer Hobbs - TWiML Talk #157

In this episode of our AI in Sports series, I'm joined by Jennifer Hobbs, Senior Data Scientist at STATS, a collector and distributor of sports data, to discuss the STATS data pipeline and how they collect and store different types of data for easy consumption and application. We also look into a paper she co-authored, Mythbusting Set-Pieces in Soccer, which was presented at the MIT Sloan Conference this year. https://twimlai.com/talk/157
27/06/1842m 48s

Targeted Ticket Sales Using Azure ML with the Trail Blazers w/ Mike Schumacher & Chenhui Hu - TWiML Talk #156

In today’s episode of our AI in Sports series I'm joined by Mike Schumacher, director of business analytics for the Portland Trail Blazers, and Chenhui Hu, a data scientist at Microsoft to discuss how the Blazers are using machine learning to produce better-targeted sales campaigns, for both single-game and season-ticket buyers.
26/06/1837m 28s

AI for Athlete Optimization with Sinead Flahive - TWiML Talk #155

This week we’re excited to kick off a series of shows on AI in sports. In this episode I'm joined by Sinead Flahive, data scientist at Dublin, Ireland-based Kitman Labs, to discuss Kitman’s Athlete Optimization System, which allows sports trainers and coaches to collect and analyze data for player performance optimization and injury reduction. Enjoy!
25/06/1840m 24s

Omni-Channel Customer Experiences with Vince Jeffs - TWiML Talk #154

In this, the final episode of our PegaWorld series I’m joined by Vince Jeffs, Senior Director of Product Strategy for AI and Decisioning at Pegasystems. Vince and I had a great talk about the role AI and advanced analytics will play in defining future customer experiences. We do this in the context provided by one of his presentations from the conference, which explores four technology scenarios from Pegasystems’ innovation labs. These look at a connected car experience, the use of deep learning for diagnostics, dynamic notifications, and continuously optimized marketing. We also get into an interesting discussion about how much is too much when it comes to hyperpersonalized experiences, and how businesses can manage this challenge. The notes for this show can be found at twimlai.com/talk/154. For more information on the Pegaworld series, visit twimlai.com/pegaworld2018.
21/06/1843m 0s

Workforce Intelligence for Automation & Productivity with Michael Kempe - TWiML Talk #153

In this episode of our PegaWorld series, I’m joined by Michael Kempe, chief operating officer at global share registry and financial services provider Link Market Services. In the interview, Michael and I dig into Link’s use of workforce intelligence software to allow it to track and analyze the performance of its workforce and business processes. Michael and I discuss some of the initial challenges associated with implementing this type of system, including skepticism amongst employees, and how it ultimately sets the stage for Link’s broader use of machine learning, AI, and so-called “robotic process automation” to increase workforce productivity. The notes for this show can be found at twimlai.com/talk/153. For more information on our PegaWorld series, visit twimlai.com/pegaworld2018.
20/06/1836m 26s

Data Platforms for Decision Automation at Scotiabank with Jim Saleh - TWiML Talk #152

In this show, part of our PegaWorld 18 series, I'm joined by Jim Saleh, Senior Director of process and decision automation at Scotiabank. Jim is tasked with helping the bank transition from a world where customer interactions are based on historical analytics to one where they’re based on real-time decisioning and automation. In our conversation we discuss what’s required to deliver real-time decisioning, starting from the ground up with the data platform. In this vein we explore topics like data lakes, data warehouses, integration, and more, and the effort required to take advantage of these. The notes for this show can be found at twimlai.com/talk/152. For more info on our PegaWorld 2018 series, visit twimlai.com/pegaworld2018.
19/06/1832m 31s

Towards the Self-Driving Enterprise with Kirk Borne - TWiML Talk #151

In this show, the first of our PegaWorld 18 series, I'm joined by Kirk Borne, Principal Data Scientist at management consulting firm Booz Allen Hamilton. In our conversation, Kirk shares his views on automation as it applies to enterprises and their customers. We discuss his experiences evangelizing data science within the context of a large organization, and the role of AI in helping organizations achieve automation. Along the way, Kirk shares a great analogy for intelligent automation, comparing it to an autonomous vehicle. We covered a ton of ground in this chat, which I think you’ll get a kick out of. The notes for this show can be found at twimlai.com/talk/151. For more info about our PegaWorld 2018 Series, visit twimlai.com/pegaworld2018.
18/06/1841m 17s

How a Global Energy Company Adopts ML & AI with Nicholas Osborn - TWiML Talk #150

On today’s show I’m excited to share this interview with Nick Osborn, a longtime listener of the show and Leader of the Global Machine Learning Project Management Office at AES Corporation, a Fortune 200 power company. Nick and I met at my AI Summit a few weeks back, and after a brief chat about some of the things he was up to at AES, I knew I needed to get him on the show! In this interview, Nick and I explore how AES is implementing machine learning across multiple domains at the company. We dig into several examples falling under the Natural Language, Computer Vision, and Cognitive Assets categories he’s established for his projects. Along the way we cover some of the key podcast episodes that helped Nick discover potentially applicable ML techniques, and how those are helping his team broaden the use of machine learning at AES. This was a fun and informative conversation that has a lot to offer. Thanks, Nick! The notes for this episode can be found at twimlai.com/talk/150.
14/06/1846m 9s

Problem Formulation for Machine Learning with Romer Rosales - TWiML Talk #149

In this episode, I'm joined by Romer Rosales, Director of AI at LinkedIn. We begin with a discussion of graphical models and approximate probability inference, and he helps me make an important connection in the way I think about that topic. We then review some of the applications of machine learning at LinkedIn, and how what Romer calls their ‘holistic approach’ guides the evolution of ML projects at LinkedIn. This leads us into a really interesting discussion about problem formulation and selecting the right objective function for a given problem. We then talk through some of the tools they’ve built to scale their data science efforts, including large-scale constrained optimization solvers, online hyperparameter optimization and more. This was a really fun conversation that I’m sure you’ll enjoy! The notes for this show can be found at twimlai.com/talk/149.
11/06/1850m 28s

AI for Materials Discovery with Greg Mulholland - TWiML Talk #148

In this episode I’m joined by Greg Mulholland, Founder and CEO of Citrine Informatics, which is applying AI to the discovery and development of new materials. Greg and I start out with an exploration of some of the challenges of the status quo in materials science, and what’s to be gained by introducing machine learning into this process. We discuss how limitations in materials manifest themselves, and Greg shares a few examples from the company’s work optimizing battery components and solar cells. We dig into the role and sources of data used in applying ML in materials, and some of the unique challenges to collecting it, and discuss the pipeline and algorithms Citrine uses to deliver its service. This was a fun conversation that spans physics, chemistry, and of course machine learning, and I hope you enjoy it. The notes for this show can be found at twimlai.com/talk/148.
07/06/1842m 24s

Data Innovation & AI at Capital One with Adam Wenchel - TWiML Talk #147

In this episode I’m joined by Adam Wenchel, vice president of AI and Data Innovation at Capital One, to discuss how Machine Learning & AI are being integrated into their day-to-day practices, and how those advances benefit the customer. In our conversation, we look into a few of the many applications of AI at the bank, including fraud detection, money laundering, customer service, and automating back office processes. Adam describes some of the challenges of applying ML in financial services and how Capital One maintains consistent portfolio management practices across the organization. We also discuss how the bank has organized to scale their machine learning efforts, and the steps they’ve taken to overcome the talent shortage in the space. The notes for this show can be found at twimlai.com/talk/147.
04/06/1845m 6s

Deep Gradient Compression for Distributed Training with Song Han - TWiML Talk #146

On today’s show I chat with Song Han, assistant professor in MIT’s EECS department, about his research on Deep Gradient Compression. In our conversation, we explore the challenge of distributed training for deep neural networks and the idea of compressing the gradient exchange to allow it to be done more efficiently. Song details the evolution of distributed training systems based on this idea, and provides a few examples of centralized and decentralized distributed training architectures such as Uber’s Horovod, as well as the approaches native to PyTorch and TensorFlow. Song also addresses potential issues that arise when considering distributed training, such as loss of accuracy and generalizability, and much more. The notes for this show can be found at twimlai.com/talk/146.
31/05/1846m 12s
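For listeners who want a feel for the core idea, here’s a minimal NumPy sketch of top-k gradient sparsification, the building block behind gradient compression. This is an illustration of the general technique only, not the paper’s implementation, which adds momentum correction and other refinements; all names here are mine.

```python
import numpy as np

def sparsify_gradient(grad, ratio=0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient
    entries for exchange; the rest are zeroed out and returned as a
    residual to be accumulated locally for the next iteration."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # indices of the k largest-magnitude entries
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    residual = flat - sparse  # carried over locally, not discarded
    return sparse.reshape(grad.shape), residual.reshape(grad.shape)

rng = np.random.default_rng(0)
g = rng.normal(size=(4, 5))
# exchange only 10% of the entries; keep the rest as residual
sparse_g, residual = sparsify_gradient(g, ratio=0.1)
```

Because the residual is accumulated rather than dropped, every gradient entry eventually gets communicated, which is what lets aggressive compression ratios preserve accuracy.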

Masked Autoregressive Flow for Density Estimation with George Papamakarios - TWiML Talk #145

In this episode, University of Edinburgh PhD student George Papamakarios and I discuss his paper “Masked Autoregressive Flow for Density Estimation.” George walks us through the idea of Masked Autoregressive Flow, which uses neural networks to produce estimates of probability densities from a set of input examples. We discuss some of the related work that’s laid the groundwork for his research, including Inverse Autoregressive Flow, Real NVP and Masked Auto-encoders. We also look at the properties of probability density networks and discuss some of the challenges associated with this effort. The notes for this show can be found at twimlai.com/talk/145.
28/05/1834m 37s
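To make the change-of-variables idea behind autoregressive flows concrete, here’s a toy sketch of a single flow layer’s log-density computation. The conditioner functions below are hypothetical stand-ins for the masked neural networks used in the actual model; this is illustrative only.

```python
import numpy as np

def maf_log_density(x, mu_fn, alpha_fn):
    """Log-density of x under one autoregressive flow layer.
    mu_fn(x, i) and alpha_fn(x, i) stand in for the masked network
    conditioners; each may depend only on x[:i]."""
    log_det = 0.0
    u = np.zeros_like(x)
    for i in range(len(x)):
        mu, alpha = mu_fn(x, i), alpha_fn(x, i)
        u[i] = (x[i] - mu) * np.exp(-alpha)  # invertible shift-and-scale
        log_det -= alpha                     # change-of-variables term
    # base density: standard normal on the transformed variables u
    log_base = -0.5 * np.sum(u**2) - 0.5 * len(x) * np.log(2 * np.pi)
    return log_base + log_det

# toy conditioners: mu depends on the previous element, alpha fixed at 0
mu_fn = lambda x, i: 0.5 * x[i - 1] if i > 0 else 0.0
alpha_fn = lambda x, i: 0.0
x = np.array([0.3, -1.2, 0.7])
logp = maf_log_density(x, mu_fn, alpha_fn)
```

Stacking several such layers, each with learned conditioners, is what gives the model its expressive density estimates.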

Training Data for Computer Vision at Figure Eight with Qazaleh Mirsharif - TWiML Talk #144

For today’s show, the last in our TrainAI series, I'm joined by Qazaleh Mirsharif, a machine learning scientist working on computer vision at Figure Eight. Qazaleh and I caught up at the TrainAI conference to discuss a couple of the projects she’s worked on in that field, namely her research into the classification of retinal images and her work on parking sign detection from Google Street View images. The former, which attempted to diagnose diseases like diabetic retinopathy using retinal scan images, is similar to the work I spoke with Ryan Poplin about on TWiML Talk #122. In my conversation with Qazaleh we focus on how she built her datasets for each of these projects and some of the key lessons she’s learned along the way. The notes for this show can be found at twimlai.com/talk/144. For series details, visit twimlai.com/trainai2018.
25/05/1821m 54s

Agile Data Science with Sarah Aerni - TWiML Talk #143

Today we continue our TrainAI series with Sarah Aerni, Director of Data Science at Salesforce Einstein. Sarah and I sat down at the TrainAI conference to discuss her talk “Notes from the Field: The Platform, People, and Processes of Agile Data Science.” Sarah and I dig into the concept of agile data science, exploring what it means to her and how she’s seen it done at Salesforce and other places she’s worked. We also dig into the notion of machine learning platforms, which is also a keen area of interest for me. We discuss some of the common elements we’ve seen in ML platforms, and when it makes sense for an organization to start building one. The notes for this show can be found at twimlai.com/talk/143. For more details on the TrainAI series, visit twimlai.com/trainai2018.
24/05/1838m 28s

Tensor Operations for Machine Learning with Anima Anandkumar - TWiML Talk #142

In this episode of our TrainAI series, I sit down with Anima Anandkumar, Bren Professor at Caltech and Principal Scientist with Amazon Web Services. Anima joined me to discuss the research coming out of her “Tensorlab” at Caltech. In our conversation, we review the application of tensor operations to machine learning and discuss how an example problem, document categorization, might be approached using three-dimensional tensors to discover topics and relationships between topics. We touch on multidimensionality, expectation maximization, and the Amazon products SageMaker and Comprehend. Anima also goes into how to tensorize neural networks and apply our understanding of tensor algebra to perform better architecture searches. The notes for this show can be found at twimlai.com/talk/142. For series info, visit twimlai.com/trainai2018.
23/05/1834m 6s
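As a rough illustration of where three-dimensional tensors come from in this setting, here’s a NumPy sketch of an empirical third-order moment tensor over a toy document-term matrix, the kind of object spectral tensor methods decompose to recover topics. The data and setup are hypothetical, not Anima’s actual pipeline.

```python
import numpy as np

# Toy document-term counts: 4 documents over a 3-word vocabulary.
X = np.array([[2, 1, 0],
              [0, 3, 1],
              [1, 0, 2],
              [2, 2, 2]], dtype=float)

# Empirical third-order moment tensor E[x (x) x (x) x]: average the
# threefold outer product of each document vector with itself.
M3 = np.einsum('ni,nj,nk->ijk', X, X, X) / X.shape[0]
```

Decomposing a (suitably adjusted) tensor like `M3` into a sum of rank-one terms is what lets these methods pull apart latent topics that pairwise co-occurrence statistics alone can’t disentangle.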

Deep Learning for Live-Cell Imaging with David Van Valen - TWiML Talk #141

In today’s show, I sit down with David Van Valen, assistant professor of Bioengineering & Biology at Caltech. David joined me after his talk at the Figure Eight TrainAI conference to chat about his research using image recognition and segmentation techniques in biological settings. In particular, we discuss his use of deep learning to automate the analysis of individual cells in live-cell imaging experiments. We had a really interesting discussion around the various practicalities he’s learned about training deep neural networks for image analysis, and he shares some great insights into which techniques from deep learning research have worked for him and which haven’t. If you’re a fan of our Nerd Alert shows, you’ll really like this one. Enjoy! The notes for this show can be found at twimlai.com/talk/141. For more information on this series, visit twimlai.com/trainai2018.
22/05/1837m 13s

Checking in with the Master w/ Garry Kasparov - TWiML Talk #140

In this episode I’m joined by legendary chess champion, author, and fellow at the Oxford Martin School, Garry Kasparov. Garry and I sat down after his keynote at the Figure Eight Train AI conference in San Francisco last week. Garry and I discuss his bouts with the chess-playing computer Deep Blue, which became the first computer system to defeat a reigning world champion in their 1997 rematch, and how that experience has helped shape his thinking on artificially intelligent systems. We explore his perspective on the evolution of AI, the ways in which chess and Deep Blue differ from Go and Alpha Go, and the significance of DeepMind’s Alpha Go Zero. We also talk through his views on the relationship between humans and machines, and how he expects it to change over time. The notes for this show can be found at twimlai.com/talk/140. For more information on this series, visit twimlai.com/trainai2018.
21/05/1832m 44s

Exploring AI-Generated Music with Taryn Southern - TWiML Talk #139

In this episode I’m joined by Taryn Southern, a singer, digital storyteller and YouTuber, whose upcoming album I AM AI will be produced completely with AI-based tools. Taryn and I explore all aspects of what it means to create music with modern AI-based tools, and the different processes she’s used to create her singles Break Free, Voices in My Head, and more. She also provides a rundown of the many tools she’s used in this space, including Google Magenta, Watson Beat, Amper, Landr and more. This was a super fun interview that I think you’ll get a kick out of. The notes for this show can be found at twimlai.com/talk/139.
17/05/1833m 4s

Practical Deep Learning with Rachel Thomas - TWiML Talk #138

In this episode, I'm joined by Rachel Thomas, founder and researcher at Fast AI. If you’re not familiar with Fast AI, the company offers a series of courses including Practical Deep Learning for Coders, Cutting Edge Deep Learning for Coders and Rachel’s Computational Linear Algebra course. The courses are designed to make deep learning more accessible to those without the extensive math backgrounds some other courses assume. Rachel and I cover a lot of ground in this conversation, starting with the philosophy and goals behind the Fast AI courses. We also cover Fast AI’s recent decision to switch their courses from TensorFlow to PyTorch, the reasons for this, and the lessons they’ve learned in the process. We discuss the role of the Fast AI deep learning library as well, and how it was recently used to help their team top a popular industry benchmark of training time and training cost by a factor of more than ten. The notes for this show can be found at twimlai.com/talk/138.
14/05/1844m 19s

Kinds of Intelligence w/ Jose Hernandez-Orallo - TWiML Talk #137

In this episode, I'm joined by Jose Hernandez-Orallo, professor in the department of information systems and computing at Universitat Politècnica de València and fellow at the Leverhulme Centre for the Future of Intelligence, working on the Kinds of Intelligence Project. Jose and I caught up at NIPS last year after the Kinds of Intelligence Symposium that he helped organize there. In our conversation, we discuss the three main themes of the symposium: understanding and identifying the main types of intelligence, including non-human intelligence, developing better ways to test and measure these intelligences, and understanding how and where research efforts should focus to best benefit society. The notes for this show can be found at twimlai.com/talk/137.
10/05/1844m 18s

Taming arXiv with Natural Language Processing w/ John Bohannon - TWiML Talk #136

In this episode I'm joined by John Bohannon, Director of Science at AI startup Primer. As you all may know, a few weeks ago we released my interview with Google legend Jeff Dean, which, by the way, you should definitely check out if you haven’t already. Anyway, in that interview, Jeff mentions the recent explosion of machine learning papers on arXiv, which I responded to jokingly by asking whether Google had already developed the AI system to help them summarize and track all of them. While Jeff didn’t have anything specific to offer, a listener reached out and let me know that John was in fact already working on this problem. In our conversation, John and I discuss his work on Primer Science, a tool that harvests content uploaded to arXiv, sorts it into natural topics using unsupervised learning, then gives relevant summaries of the activity happening in different innovation areas. We spend a good amount of time on the inner workings of Primer Science, including their data pipeline and some of the tools they use, how they determine “ground truth” for training their models, and the use of heuristics to supplement NLP in their processing. The notes for this show can be found at twimlai.com/talk/136.
07/05/1854m 17s

Epsilon Software for Private Machine Learning with Chang Liu - TWiML Talk #135

In this episode, our final episode in the Differential Privacy series, I speak with Chang Liu, applied research scientist at Georgian Partners, a venture capital firm that invests in growth stage business software companies in the US and Canada. Chang joined me to discuss Georgian’s new offering, Epsilon, a software product that embodies the research, development and lessons learned in helping their portfolio companies deliver differentially private machine learning solutions to their customers. In our conversation, Chang discusses some of the projects that led to the creation of Epsilon, including differentially private machine learning projects at Bluecore, Work Fusion and Integrate.ai. We explore some of the unique challenges of productizing differentially private ML, including business, people and technology issues. Finally, Chang provides some great pointers for those who’d like to further explore this field. The notes for this show can be found at twimlai.com/talk/135.
04/05/1846m 51s

Scalable Differential Privacy for Deep Learning with Nicolas Papernot - TWiML Talk #134

In this episode of our Differential Privacy series, I'm joined by Nicolas Papernot, Google PhD Fellow in Security and graduate student in the department of computer science at Penn State University. Nicolas and I continue this week’s look into differential privacy with a discussion of his recent paper, Semi-supervised Knowledge Transfer for Deep Learning From Private Training Data. In our conversation, Nicolas describes the Private Aggregation of Teacher Ensembles model proposed in this paper, and how it ensures differential privacy in a scalable manner that can be applied to Deep Neural Networks. We also explore one of the interesting side effects of applying differential privacy to machine learning, namely that it inherently resists overfitting, leading to more generalized models. The notes for this show can be found at twimlai.com/talk/134.
03/05/1859m 28s
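The aggregation step at the heart of the PATE model can be sketched in a few lines: each teacher votes on a label, and Laplace noise is added to the vote counts before taking the argmax. This is a simplified illustration under assumed parameters, not the paper’s full system, which also covers teacher training and student knowledge transfer.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, gamma=1.0, rng=None):
    """Noisy-max aggregation: add Laplace noise (scale 1/gamma) to each
    class's vote count, then return the argmax as the released label."""
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

# 10 teachers voting over 3 classes, with strong consensus on class 1
votes = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 2])
label = noisy_aggregate(votes, num_classes=3, gamma=2.0,
                        rng=np.random.default_rng(0))
```

When teachers agree strongly, the noise rarely changes the answer, which is why consensus queries spend very little of the privacy budget.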

Differential Privacy at Bluecore with Zahi Karam - TWiML Talk #133

In this episode of our Differential Privacy series, I'm joined by Zahi Karam, Director of Data Science at Bluecore, whose retail marketing platform specializes in personalized email marketing. I sat down with Zahi at the Georgian Partners portfolio conference last year, where he gave me my initial exposure to the field of differential privacy, ultimately leading to this series. Zahi shared his insights into how differential privacy can be deployed in the real world and some of the technical and cultural challenges to doing so. We discuss the Bluecore use case in depth, including why and for whom they build differentially private machine learning models. The notes for this show can be found at twimlai.com/talk/133
01/05/1838m 8s

Differential Privacy Theory & Practice with Aaron Roth - TWiML Talk #132

In the first episode of our Differential Privacy series, I'm joined by Aaron Roth, associate professor of computer science and information science at the University of Pennsylvania. Aaron is first and foremost a theoretician, and our conversation starts with him helping us understand the context and theory behind differential privacy, a research area he was fortunate to begin pursuing at its inception. We explore the application of differential privacy to machine learning systems, including the costs and challenges of doing so. Aaron also discusses quite a few examples of differential privacy in action, including work being done at Google, Apple and the US Census Bureau, along with some of the major research directions currently being explored in the field. The notes for this show can be found at twimlai.com/talk/132.
30/04/1842m 55s
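For listeners new to the topic, the canonical building block of differential privacy is the Laplace mechanism: release a statistic plus Laplace noise scaled to the query’s sensitivity divided by the privacy parameter epsilon. Here’s a minimal sketch with made-up numbers:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential
    privacy by adding Laplace noise of scale sensitivity/epsilon."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Counting query: how many records satisfy some predicate?
# Adding or removing one record changes the count by at most 1,
# so the sensitivity is 1.
true_count = 412
private_count = laplace_mechanism(true_count, sensitivity=1.0,
                                  epsilon=0.5,
                                  rng=np.random.default_rng(42))
```

Smaller epsilon means more noise and stronger privacy; the art, as the conversation explores, is spending a limited privacy budget across many such queries.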

Optimal Transport and Machine Learning with Marco Cuturi - TWiML Talk #131

In this episode, I’m joined by Marco Cuturi, professor of statistics at Université Paris-Saclay. Marco and I spent some time discussing his work on Optimal Transport Theory at NIPS last year. In our discussion, Marco explains Optimal Transport, which provides a way for us to compare probability measures. We look at ways Optimal Transport can be used across machine learning applications, including graphical, NLP, and image examples. We also touch on GANs, or generative adversarial networks, and some of the challenges they present to the research community. The notes for this show can be found at twimlai.com/talk/131.
26/04/1832m 37s

Collecting and Annotating Data for AI with Kiran Vajapey - TWiML Talk #130

In this episode, I’m joined by Kiran Vajapey, a human-computer interaction developer at Figure Eight. In this interview, Kiran shares some of what he’s learned through his work developing applications for data collection and annotation at Figure Eight and earlier in his career. We explore techniques like data augmentation, domain adaptation, and active and transfer learning for enhancing and enriching training datasets. We also touch on the use of Imagenet and other public datasets for real-world AI applications. If you like what you hear in this interview, Kiran will be speaking at my AI Summit April 30th and May 1st in Las Vegas, and I’ll be joining Kiran at the upcoming Figure Eight TrainAI conference, May 9th & 10th in San Francisco. The notes for this show can be found at twimlai.com/talk/130.
23/04/1840m 18s

Autonomous Aerial Guidance, Navigation and Control Systems with Christopher Lum - TWiML Talk #129

In this episode, I'm joined by Christopher Lum, Research Assistant Professor in the University of Washington’s Department of Aeronautics and Astronautics. Chris also co-heads the University’s Autonomous Flight Systems Lab, where he and his students are working on the guidance, navigation, and control of unmanned systems. In our conversation, we discuss some of the technical and regulatory challenges of building and deploying Unmanned Autonomous Systems. We also talk about some interesting work he’s doing on evolutionary path planning systems as well as a precision agriculture use case. Finally, Chris shares some great starting places for those looking to begin a journey into autonomous systems research. The notes for this show can be found at twimlai.com/talk/129.
19/04/1852m 35s

Infrastructure for Autonomous Vehicles with Missy Cummings - TWiML Talk #128

In this episode, I’m joined by Missy Cummings, head of Duke University’s Humans and Autonomy Lab and professor in the department of mechanical engineering. In addition to being an accomplished researcher, Missy also became one of the first female fighter pilots in the US Navy following the repeal of the Combat Exclusion Policy in 1993. We discuss Missy’s research into the infrastructural and operational challenges presented by autonomous vehicles, including cars, drones and unmanned aircraft. We also cover trust, explainability, and interactions between humans and AV systems. This was an awesome interview and I'm glad we’re able to bring it to you! The notes for this show can be found at twimlai.com/talk/128.
16/04/1843m 32s

Hyper-Personalizing the Customer Experience w/ AI with Rob Walker - TWiML Talk #127

In this episode, we're joined by Rob Walker, Vice President of decision management and analytics at Pegasystems, a leading provider of software for customer engagement and operational excellence. Rob and I discuss what’s required for enterprises to fully realize the vision of providing a hyper-personalized customer experience, and how machine learning and AI can be used to determine the next best action an organization should take to optimize sales, service, retention, and risk at every step in the customer relationship. Along the way we dig into a couple of key areas, specifically some of the techniques his organization uses to allow customers to manage the tradeoff between model performance and transparency, particularly in light of new laws like GDPR, and how all this ties to an enterprise’s ability to manage bias and ethical issues when deploying ML. We cover a lot of ground in this one and I think you’ll find Rob’s perspective really interesting. The notes for this show can be found at twimlai.com/talk/127.
12/04/1841m 40s

Information Extraction from Natural Document Formats with David Rosenberg - TWiML Talk #126

In this episode, I’m joined by David Rosenberg, data scientist in the office of the CTO at financial publisher Bloomberg, to discuss his work on “Extracting Data from Tables and Charts in Natural Document Formats.” Bloomberg is dealing with tons of financial and company data in pdfs and other unstructured document formats on a daily basis. To make meaning from this information more efficiently, David and his team have implemented a deep learning pipeline for extracting data from the documents. In our conversation, we dig into the information extraction process, including how it was built, how they sourced their training data, why they used LaTeX as an intermediate representation and how and why they optimize on pixel-perfect accuracy. There’s a lot of interesting info in this show and I think you’re going to enjoy it. The notes for this show can be found at twimlai.com/talk/126.
09/04/1845m 36s

Human-in-the-Loop AI for Emergency Response & More w/ Robert Munro - TWiML Talk #125

In this episode, I chat with Rob Munro, CTO of the newly branded Figure Eight, formerly known as CrowdFlower. Figure Eight’s Human-in-the-Loop AI platform supports data science & machine learning teams working on autonomous vehicles, consumer product identification, natural language processing, search relevance, intelligent chatbots, and more. Rob and I had a really interesting discussion covering some of the work he’s previously done applying machine learning to disaster response and epidemiology, including a use case involving text translation in the wake of the catastrophic 2010 Haiti earthquake. We also dig into some of the technical challenges that he’s encountered in trying to scale the human-in-the-loop side of machine learning since joining Figure Eight, including identifying more efficient approaches to image annotation as well as the use of zero-shot machine learning to minimize training data requirements. Finally, we briefly discuss Figure Eight’s upcoming TrainAI conference, which takes place on May 9th & 10th in San Francisco. At TrainAI you can join me and Rob, along with a host of amazing speakers like Garry Kasparov, Andrej Karpathy, Marti Hearst and many more, and receive hands-on AI, machine learning and deep learning training through real-world case studies on practical machine learning applications. For more information on TrainAI, head over to figure-eight.com/train-ai, and be sure to use code TWIMLAI for 30% off your registration! For those of you listening to this on or before April 6th, Figure Eight is offering an even better deal on event registration. Use the code figure-eight to register for only 88 dollars. The notes for this show can be found at twimlai.com/talk/125.
05/04/1848m 26s