AI can’t handle the truth when it comes to the law
Almost one in five lawyers is using AI, according to an American Bar Association survey. But a growing number of legal horror stories involve tools like ChatGPT, because chatbots have a tendency to make stuff up — such as legal precedents from cases that never happened. Marketplace’s Meghan McCarty Carino spoke with Daniel Ho at Stanford’s Institute for Human-Centered Artificial Intelligence about the group’s recent study on how frequently three of the most popular language models from OpenAI, Meta and Google hallucinate when asked to weigh in or assist with legal cases.
More Episodes
Robotics company Boston Dynamics announced this month that it is retiring its humanoid robot known as “Atlas.” The 6-foot-2, 330-pound robot was considered a quantum leap in robotics and was famous for parkour stunts and awkward dance moves. Debuting more than a decade ago in 2013, the Atlas robot was a part...
Published 04/29/24
The noncompete clause is dead! American tech workers are poised to benefit from the Federal Trade Commission’s new crackdown on the agreements, which prevent a company’s ex-employees from working for its rivals for a specified period. Also, Tesla’s profits crashed 55%. As electric vehicle sales...
Published 04/26/24