Description
The learning ability of AI systems means they no longer have to be fully programmed; they can continually learn on their own, changing their behavior in response to data from the outside world.
AI holds great promise. But as adoption of the technology increases, its liabilities also emerge. When algorithms fail to perform as expected, the result can be economic loss and business interruption, personal injury and property damage, professional liability, medical malpractice, and cyber exposure.
The European Union is working on a comprehensive legal framework for AI and a harmonized liability regime specifically for AI systems. Meanwhile, general civil law concepts like the duty of care and product liability govern who will be liable if damage occurs when these systems fail.
For this podcast episode, we've once again invited Dominic “Doc” Ligot from Data Ethics Philippines and Cirrolytix as he walks us through algorithmic liability and its significance to our digital ecosystem.
---
This episode is sponsored by
· Anchor: The easiest way to make a podcast. https://anchor.fm/app
---
Send in a voice message: https://anchor.fm/edgar-angeles/message