LLMs are not magic: Finding ways to make AI generate trustworthy content | Tim Leers
Description
Can we rely on LLMs to repurpose our content in social media?
To end our first season of the AI and Digital Transformation Podcast, we talked to dataroots R&D engineer Tim Leers about two very popular topics in 2023: LLMs, and content creation.
In this age of content creation and social media, journalists now have an extra role to fill: sharing their work and the news through their own social media accounts. With tools like ChatGPT and Midjourney in popular use, many now turn to LLMs to repurpose their news content for social media.
This comes at a price. By relying solely on AI, journalists, like content creators, risk sharing repurposed content that is biased, polarizing, and misleading.
Listen to this episode to learn how you can make LLMs more trustworthy when repurposing your existing content.
Who is Tim Leers?
Tim started his AI journey in neuroscience and psychology, studying the parallels between human & machine minds.
Four years ago, he shifted his focus from brains to bytes, joining dataroots, a leading company in AI and data-driven solutions, as an AI engineer. In this role, he assists organizations in the research, development, and deployment of cutting-edge AI systems.
Tim is now primarily focused on how to effectively and responsibly utilize generative AI, agents, and LLMs, and on advising decision-makers, engineers, and end-users on how to navigate the expanding role of AI in work, life, and society.
Check out our show notes to know more about Tim, his work, and dataroots.
Time Stamps
(00:00:00) Trailer
(00:00:53) About Tim
(00:03:51) AI Use Case - Smart News Assistance
(00:05:48) Challenges in repurposing content using LLMs
(00:07:52) LLM text-to-audio
(00:09:13) LLM workflow: Interactive process vs. automation
(00:11:26) LLMs are not magic: summarizing & humans in the loop
(00:14:49) Journalist’s perception of AI: authenticity, trust and quality
(00:18:09) Is this the end of outsourcing a press agency for content?
(00:20:25) Search engine and algorithms: detecting unique news content
(00:26:14) Risk of Conspiracies and Prompt Governance
(00:29:47) What makes dataroots’ smart news assistance tool different compared to ChatGPT?
(00:31:48) Do I need to fine-tune LLMs?
(00:34:18) Can open-source models replace ChatGPT?
(00:37:38) Adapting LLMs in businesses: Usability, APIs, Hardware vs. Cloud
(00:46:16) Future of Work, Critical thinking, LLM being a digital glue
(00:55:01) Recap, closing remarks and book recommendation
---
More on G.M.S.C. Consulting
Follow us on our socials:
LinkedIn
YouTube
Book an appointment with us.
Sign up to our newsletter.
---
Music credits: storyblocks.com
Logo credits: Joshua Coleman, Unsplash
---
Send in a voice message: https://podcasters.spotify.com/pod/show/gmsc-consulting/message