Algorithmic Liabilities
Description
The learning ability of AI systems means they no longer have to be fully programmed in advance; they can continuously learn on their own, changing their behavior in response to data from the outside world. AI holds great promise, but as adoption of the technology increases, its liabilities also emerge. If algorithms fail to perform as expected, the result can be economic loss and business interruption, personal injury and property damage, professional liability, medical malpractice, and cyber exposure. The European Union is working on a comprehensive legal framework for AI and a harmonized liability regime specifically for AI systems. Meanwhile, general civil law concepts such as the duty of care and product liability govern who is liable when these systems fail and damage occurs. For this episode, we've once again invited Dominic "Doc" Ligot from Data Ethics Philippines and Cirrolytix as he walks us through algorithmic liabilities and their relevance to our digital ecosystem.

---

This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

---

Send in a voice message: https://anchor.fm/edgar-angeles/message