Humanitarian AI, PyTorch Models, and Saliency Maps
Description
Kyle discusses "How can AI help in a humanitarian crisis?" https://www.independent.co.uk/news/science/artifical-intelligence-disaster-response-humanitarian-crisis-ai-help-a8319361.html Lan discusses Captum, a model interpretability library for PyTorch: https://medium.com/pytorch/introduction-to-captum-a-model-interpretability-library-for-pytorch-d236592d8afa George's paper this week is "Sanity Checks for Saliency Maps." This work takes stock of a group of techniques that generate local interpretability and assesses their trustworthiness through two sanity checks. From this analysis, Adebayo et al. demonstrate that a number of these tools are invariant to the model's weights and could lead a human observer into confirmation bias.
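The paper's model parameter randomization test can be sketched on a toy example (a hypothetical illustration, not the authors' code): a saliency method that faithfully reflects the model should produce a different map when the model's weights are randomized, while a weight-invariant method, which the paper likens to an edge detector, produces the same map regardless and so fails the check.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_saliency(w, x):
    # For a linear model f(x) = w . x, the input gradient is w itself,
    # so this saliency map genuinely depends on the model's weights.
    return np.abs(w)

def edge_detector_saliency(w, x):
    # A weight-invariant "saliency": it only looks at the input, ignoring w.
    # Adebayo et al. show some popular methods behave much like this.
    return np.abs(np.diff(x, append=x[-1]))

x = rng.normal(size=8)          # a fixed input
w_trained = rng.normal(size=8)  # stand-in for trained weights
w_random = rng.normal(size=8)   # weights after randomization

# Faithful method: the map changes when weights are randomized (passes the check).
print(np.allclose(gradient_saliency(w_trained, x),
                  gradient_saliency(w_random, x)))   # False

# Weight-invariant method: identical maps either way (fails the check).
print(np.allclose(edge_detector_saliency(w_trained, x),
                  edge_detector_saliency(w_random, x)))  # True
```

The second sanity check in the paper, label randomization, follows the same logic: retrain on shuffled labels and see whether the saliency maps change.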
More Episodes
This week we are back with our regular panelists! Kyle brings us a short article exploring how science fiction shapes views of AI, titled "Survey Finds Science Fiction One of Many Factors Impacting Views of AI Technology." George brings us an article about using thousands of computers from universities,...
Published 09/16/20
We are back with another guest this week! We have NLP/ML research scientist Fredrik Olsson joining us. He discusses the work "Why You Should Do NLP Beyond English." Lan brings us a news item, "Research News: DNA Storage." George talks about the article "Discovering Symbolic Models from Deep...
Published 09/10/20
Rachel Bittner, a research scientist at Spotify, joins us in our discussion this week! She brings us the paper "Few-Shot Sound Event Detection." Lan discusses an article about fairness of search results. George talks about a blog post from England about an algorithm grading exams and the...
Published 09/01/20