Simon Willison joined Bryan and Adam to discuss a recent article maligning open source large language models. Simon has so much practical experience with LLMs, and brings so much clarity to what they can and can't do. How do these systems work? How do they break? What open and proprietary LLMs are out there?
Recorded 1/15/2024
We've been hosting a live show weekly on Mondays at 5p for about an hour, and recording them all; here is the recording.
Some of the topics we hit on, in the order that we hit them:
- IEEE Spectrum: Open-Source AI Is Uniquely Dangerous
- Newsroom Robots with Simon Willison
- OxF: Another LPC55 ROM Vulnerability
- Simon Willison: Stuff we figured out about AI in 2023
- llama.cpp
- Mistral AI
- France’s Mistral AI blows in with a $113M seed round at a $260M valuation to take on OpenAI
- Simon again: The AI trust crisis
- Reply All: Is Facebook Spying on You?
- Universal and Transferable Adversarial Attacks on Aligned Language Models
- New York Times Sues OpenAI
- Lycos
- ChatGPT Can Be Broken by Entering These Strange Words, And Nobody Is Sure Why
Simon posted a follow-up blog article in which he explains using MacWhisper and Claude, via his LLM tool, to pull out a few of his favorite quotes from this episode:
If we got something wrong or missed something, please file a PR! Our next show will likely be on Monday at 5p Pacific Time on our Discord server; stay tuned to our Mastodon feeds for details, or subscribe to this calendar. We'd love to have you join us, as we always love to hear from new speakers!