How AI works is often a mystery — that's a problem
Description
Many AIs are 'black boxes', meaning that part or all of their underlying structure is obscured: intentionally, to protect proprietary information; because of the sheer complexity of the model; or both. This can be problematic when people harmed by AI-driven decisions are left without recourse to challenge them. Many researchers in search of solutions have coalesced around a concept called Explainable AI, but this too has its issues, notably that there is no real consensus on what it is or how it should be achieved. So how do we deal with these black boxes? In this podcast, we try to find out. Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis, free in your inbox every weekday.
More Episodes
The oft-repeated claim that "80% of the world's biodiversity is found in the territories of Indigenous Peoples" appears widely in policy documents and reports, yet seems to have sprung out of nowhere. According to a group of researchers, including those from Indigenous groups, this baseless...
Published 09/06/24
In this episode:

00:45 Why a 'nuclear clock' is now within researchers' reach

Researchers have made a big step towards the creation of the long-theorized nuclear clock, by getting the most accurate measurement of the frequency of light required to push thorium nuclei into a higher energy state....
Published 09/04/24