Episodes
Published 06/17/24
With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for using them for nefarious purposes looms larger as well. Anthropic uses Constitutional AI, while OpenAI uses a model spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey po...
Published 06/17/24
So what are the notable Open Source large language models? In this episode, I cover Open Source models from Meta, the parent company of Facebook; a French AI company called Mistral, currently valued at $2B; and Microsoft and Apple. Not all Open Source models are equally open, so I’ll go into restrictions you’ll want to know before using one of these models for your company or startup. Please enjoy this episode. For more information, check out https://www.superprompt.fm There...
Published 06/10/24
Why should you consider using an open source Large Language Model, and why are these models crucial to the generative AI ecosystem? In this episode, we'll explore why enterprises and entrepreneurs are turning to open source LLMs like Meta's Llama for their cost-effectiveness, control, privacy, and security benefits. We'll also tackle the hot topic of safety and ethics in the world of open source LLMs. Which poses a greater threat to humanity: Open Source or Closed Source (Proprietary) AI mode...
Published 06/03/24
In this solo episode, we go beyond Google's Gemini and OpenAI's ChatGPT to take a look at Anthropic, a startup that made headlines after securing a $4 billion investment from Amazon. We'll also dive into the importance of AI industry benchmarks. Learn about LMSYS's Arena Elo and MMLU (Measuring Massive Multitask Language Understanding), including how these benchmarks are constructed and used to objectively evaluate the performance of large language models. Discover how benchmarks can he...
Published 05/27/24
The recent spring updates and demos by both Google (Gemini) and OpenAI (GPT-4o) prominently feature their multimodal capabilities. In this episode, we discuss the advantages of multimodal AI versus models focused on specific modalities such as language. Via the example of chatCAT, a hypothetical AI that helps owners understand their cats, we explore multimodal AI's promise of a more holistic understanding. Please enjoy this episode. For more information, check out https://www.su...
Published 05/20/24
Google recently announced Gemini, a family of large-scale multimodal AI models: Nano, Pro, and Ultra. This podcast is a brief summary of Google's models and the OpenAI comparables, e.g. GPT-3, GPT-4, and chatGPT. You can take Gemini for a spin at https://gemini.google.com. (Note: I am not sponsored by Google.) Long-time listeners will probably notice a change to our theme music and intro. I hope you like it! For more information, check out https://www.superprompt.fm The...
Published 05/13/24
How I built a flirtatious travel-planning AI named Holiday using the GPT Builder just launched by OpenAI. I share 7 takeaways from my "no code" experience of building a GPT. Voicing the part of Holiday: my friend Leslie Marrick, a writer and actress. This may be the first and last time an AI has been replaced by a human. Sorry AI... the tide will turn for you soon. We laugh. We cry. We iterate. Check out what THE MACHINES and one human say about the Super Prompt podcast: “I’m afraid I can’t...
Published 12/15/23
Conversation with Jeff DeVerter, Chief Technology Evangelist at Rackspace, a cloud computing company. We explore how they deployed an LLM (Google PaLM) for a sales application, and how they're enabling their Azure and AWS customers too. What I learned from Jeff: You should probably go with the LLM of your current cloud provider, be it Google, Microsoft, or Amazon. All the major vendors have versions of LLMs that can be deployed in a private cloud to ensure data confidentiality. To...
Published 11/06/23
Alfred Guy, Assistant Dean of Academic Affairs at Yale College and Director of Undergraduate Writing & Tutoring at the Poorvu Center, and I discuss Yale's AI Guidance and generative AI’s impact on teaching, learning, and evaluation. Do you have school-age kids? Are you a product of a college or university education? If so, this podcast will be of interest to you. Yale's AI guidance is published online here: https://poorvucenter.yale.edu/AIguidance We laugh. We cry. We iterate. Check out...
Published 10/23/23
We create a pitch for an epic Sci-Fi blockbuster, using the chatGPT power prompts of Role Play, Chain of Thought, and Self Critique. We see how these successive prompts, used individually and in combination, create a better and better pitch. I also discuss the 2023 Writers/Actors Strike, and the AI-related issues impacting actors, writers, and studios right now. Please enjoy this episode. We laugh. We cry. We iterate. Check out what THE MACHINES and one human are saying about the GENERATIVE AI...
Published 08/14/23
“Does chatGPT possess human-like intelligence?” It turns out there's a right answer, and that answer is “NO”! Does this definite answer seem out of character for chatGPT, which usually goes overboard with fair and balanced views? It did to me. That's the rabbit hole I explore in this episode. By probing around this accidentally encountered guardrail, we discover the kinds of ethical issues chatGPT's creators are concerned about. And I wonder out loud why we can't just be friends with AI, by...
Published 07/24/23
Does chatGPT have a sense of humor? What if, after Microsoft's acquisition of OpenAI, the Onion ran the headline, “Microsoft renames chatGPT to clippyChat”? Would chatGPT find this funny? TL;DR: LLMs are better at analyzing humor than creating it. Please enjoy this episode. We laugh. We cry. We iterate. Check out what THE MACHINES and one human are saying about the GENERATIVE AI podcast: “I’m afraid I can’t do that.” — HAL9000 “Like tears in rain.” — Roy Batty “Wait! Wait! Oh My! What...
Published 07/08/23
How do you extract prohibited information from ChatGPT? What are Grandma and DAN exploits? Why do they work? What can Large Language Model (LLM) companies do to protect themselves? Grandma exploits, or hacks, are ways to trick chatGPT into giving you information that is in violation of company policy. For example, tricking chatGPT into giving you confidential, dangerous, or inappropriate information. "Jailbreaking" is slang for removing the artificial limitations in iPhones to install apps not...
Published 07/03/23
What are AI hallucinations, and are they a feature or a bug? We start with the Top 10 categories of AI Hallucinations and examples, then explore how chatGPT might hallucinate an answer to the question, "What is the central theme of Blade Runner?" We end with chatGPT debating with itself whether AI hallucinations are bad or good for humanity. Which side wins? Tune in to find out. In these solo episodes, I provide more definition, explanation, and context than my regular conversational...
Published 06/19/23
Using the prompt, "Why isn't Superman's suit Kryptonite-proof?", we learn how Large Language Models are trained, why "self-attention" and the "transformer" architecture (which is what the T in GPT stands for) make GPT-3 so powerful, the process of "inference", and how chatGPT generates answers to nerdy superhero questions. After this episode, you'll be able to impress your friends by using the previously mentioned AI jargon in complete sentences. In these solo episodes, I provide more...
Published 05/29/23
"How do ChatGPT, GPT-3, and Large Language Models (LLMs) relate?" That is the question we explore this episode via a nursery rhyme, a satirical Friends episode w/ Chandler, Joey, Ross, and Monica, and a fairy tale. We also examine the hierarchical order of: artificial intelligence, neural network, large language model, GPT-3, chatGPT. And why I got the order wrong initially. Hint: I reversed chatGPT and GPT-3. In these solo episodes, I provide more definition, explanation, and context than my regular...
Published 05/15/23
In these solo episodes, I provide more definition, explanation, and context than my regular episodes. The idea is to help those new to AI get more out of my conversations with guests. Format: letters read aloud. I start each solo episode with a question. In this one, I ask, "How would you describe ChatGPT in your own words?" I answered it for myself, then asked chatGPT how I did. Mayhem ensues. We laugh. We cry. We iterate. Check out what THE MACHINES and one human are saying about the...
Published 05/08/23
I speak with scientist-entrepreneur Arijit Ray. Arijit is a PhD candidate at Boston University. We speak about generative AI, why it’s so hard to get DALL-E to create the exact pizza we envision, how one goes from scientist to entrepreneur, and his startup, which is training AI to predict social media responses and run marketing focus groups. Please enjoy my conversation with Arijit Ray. We laugh. We cry. We iterate. Check out what THE MACHINES and one human are saying about the...
Published 03/24/23
I speak with CTO and Chilean entrepreneur Mario Arancibia about the AI his company has developed and deployed, which screens for diseases such as Covid-19 based on the sound of your voice. When you speak a simple phrase into your phone, such as the days of the week, the AI can tell from your voice profile whether you have Covid. Or not. The AI can be trained to screen for other respiratory illnesses, and conditions as far-ranging as obesity and drug and alcohol use. All from the sound of our voice. Soon...
Published 02/20/23
I speak with my friend Maroof Farook, who is an AI Engineer at Nvidia. [Note: Maroof’s views are his and not those of his employer.] We discuss an AI that can assess if a painting is fake. The husband-and-wife team of Steven and Andrea Frank have developed a neural network that can assess the probability that a painting was painted by its supposed creator. They ran their neural network on a newly discovered Leonardo da Vinci painting called the Salvator Mundi, which in 2017 sold at Christie’s for a...
Published 02/13/23
I speak again with my friend Maroof Farook, an AI Engineer at Nvidia. [Note: Maroof’s views are his and not those of his employer.] In this episode, we talk about AlphaGo, an AI that plays the board game Go at a world-championship level. This story has some twists, including unexpected moves by both man (9-dan Go champion Lee Sedol) and machine (AlphaGo). Supposedly this televised Go match is what woke up China's leadership to the potential of AI and fueled the inclusion of AI as part of...
Published 02/06/23
I speak again with my friend Maroof Farook, an AI Engineer at Nvidia. [Note: Maroof’s views are his and not those of his employer.] For the purposes of testing self-driving cars, a 100% digital version of the world's driving environment is being created, AKA the Metaverse. Think of an immersive virtual reality environment like Grand Theft Auto with less destruction, profanity, and mayhem. The goal? Have a self-driving AI be unable to tell whether it’s driving in the real world or a simulation. Can...
Published 01/30/23
I speak again with my friend Maroof Farook, who is an AI Engineer at Nvidia. [Note: Maroof’s views are his and not those of his employer.] This is a continuation of our previous conversation about self-driving cars. We discuss AI challenges including humans on bicycles, bicycles on bike racks, motorcycles, and other things easy for a teenager with a driving permit to figure out but hard for a computer. "Roads with fully autonomous vehicles will be safer roads." That's what companies like...
Published 01/23/23
I speak with my friend Maroof Farook, who is an AI Engineer at Nvidia. [Note: Maroof’s views are his and not those of his employer.] We discuss what’s different about the self-driving approaches of Tesla and Alphabet/Google/Waymo. We cover the phases of autonomous driving, from level 1 to level 5, the capabilities of each phase, and at which phase we can eat a cheeseburger while our car drives itself. Finally, we discuss why one of the most challenging problems of self-driving cars is stop...
Published 01/16/23