Description
Artificial intelligence chatbots have come a very long way in a very short time.
Each release of ChatGPT brings new features, like voice chat, along with updates to the training data fed into the systems, intended to make them smarter.
But are more leaps forward a sure thing? Or could the tools actually get dumber?
Today, Aaron Snoswell from the generative AI lab at the Queensland University of Technology discusses the limitations of large language models like ChatGPT.
He explains why some observers fear ‘model collapse’, where more mistakes creep in as the systems start ‘inbreeding’: consuming more AI-created content than original human-created work.
Aaron Snoswell says these models are essentially pattern-matching machines, which can lead to surprising failures.
He also discusses the massive amounts of data required to train these models and the creative ways companies are sourcing this data.
The AI expert also touches on the concept of artificial general intelligence and the challenges in achieving it.
Featured:
Aaron Snoswell, senior research fellow at the generative AI lab at the Queensland University of Technology
Key Topics:
Artificial Intelligence
ChatGPT
Large Language Models
Model Collapse
AI Training Data
Artificial General Intelligence
Responsible AI Development
Generative AI
We want to hear from you; how can we make our podcast even better? Please take a few minutes to complete our listener survey. Find the link on the ABC News Daily website.