Power and Responsibility of Large Language Models | Safety & Ethics | OpenAI Model Spec + RLHF | Anthropic Constitutional AI | Episode 27
Description
With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for using them for nefarious purposes looms larger as well. Anthropic uses Constitutional AI, while OpenAI uses a Model Spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey po...
More Episodes
Published 06/17/24
So what are notable open source large language models? In this episode, I cover open source models from Meta, the parent company of Facebook, and from Mistral, a French AI company currently valued at $2 billion, in addition to Microsoft and Apple. Not all open source models are equally open, so...
Published 06/10/24
Why should you consider using an open source large language model, and why are these models crucial to the generative AI ecosystem? In this episode, we'll explore why enterprises and entrepreneurs are turning to open source LLMs like Meta's Llama for their cost-effectiveness, control, privacy,...
Published 06/03/24