Computers are learning to read our minds

The home team chats with Gašper Beguš, director of the Berkeley Speech and Computation Lab, about his research into how LLMs—and humans—learn to speak. Plus: how AI is restoring a stroke survivor’s ability to talk, concern over models that pass the Turing test, and what’s going on with whale brains.
Medical research made understandable with AI (Ep. 601)

On today’s episode we chat with CEO Dipanwita Das and CTO Hellmut Adolphs of Sorcero, which uses AI and large language models to make medical texts more discoverable and readable, helping knowledge spread more easily and increasing the chances that doctors and patients will find the solutions they need.

Semantic search without the napalm grandma exploit (Ep. 600)

Ben and senior software engineer Kyle Mitofsky are joined by two people who worked on the launch of OverflowAI: director of data science and data platform Michael Foree and senior software developer Alex Warren. They talk about how and why Stack Overflow launched semantic search, how to ensure a knowledge base is trustworthy, and why user prompts can make LLMs vulnerable to exploits.
