Deep Papers

Science & Technology News

Deep Papers is a podcast series featuring deep dives on today’s most important AI papers and research. Hosted by Arize AI founders and engineers, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.

Location: United States

Language: English


Episodes

Watermarking for LLMs and Image Models

7/30/2025
In this AI research paper reading, we dive into "A Watermark for Large Language Models" with the paper's author John Kirchenbauer. This paper is a timely exploration of techniques for embedding invisible but detectable signals in AI-generated text. These watermarking strategies aim to help mitigate misuse of large language models by making machine-generated content distinguishable from human writing, without sacrificing text quality or requiring access to the model’s internals. Learn more about the paper, A Watermark for Large Language Models. Learn more about agent observability and LLM observability, join the Arize AI Slack community or get the latest on LinkedIn and X.
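
The core mechanism the paper proposes can be sketched in a few lines: before sampling each token, a pseudorandom "green list" is derived from the previous token and green-token logits get a small boost; a detector then counts green tokens and computes a z-score. The sketch below follows that recipe, but the vocabulary size, gamma, and delta values are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of the "green list" soft watermark described in the paper.
# VOCAB_SIZE, GAMMA, and DELTA are illustrative assumptions.
import torch

VOCAB_SIZE = 50_257   # assumed GPT-2-style vocabulary
GAMMA = 0.25          # fraction of the vocabulary placed on the green list
DELTA = 2.0           # logit bias added to green tokens

def green_list(prev_token_id: int) -> torch.Tensor:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    g = torch.Generator()
    g.manual_seed(prev_token_id)
    perm = torch.randperm(VOCAB_SIZE, generator=g)
    return perm[: int(GAMMA * VOCAB_SIZE)]

def watermarked_sample(logits: torch.Tensor, prev_token_id: int) -> int:
    """Bias green-list logits by DELTA, then sample the next token."""
    biased = logits.clone()
    biased[green_list(prev_token_id)] += DELTA
    probs = torch.softmax(biased, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

def detect(token_ids: list[int]) -> float:
    """Return a z-score; large values suggest a watermarked sequence."""
    hits = sum(
        int(tok in set(green_list(prev).tolist()))
        for prev, tok in zip(token_ids, token_ids[1:])
    )
    n = len(token_ids) - 1
    return (hits - GAMMA * n) / (n * GAMMA * (1 - GAMMA)) ** 0.5
```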

Duration:00:42:56

Self-Adapting Language Models: Paper Authors Discuss Implications

7/8/2025
The authors of the new paper *Self-Adapting Language Models (SEAL)* shared a behind-the-scenes look at their work, motivations, results, and future directions. The paper introduces a novel method for enabling large language models (LLMs) to adapt their own weights using self-generated data and training directives — “self-edits.” Learn more about the Self-Adapting Language Models paper. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
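
As a rough mental model of the loop the authors describe (not their code): the model writes its own self-edit, a lightweight finetune is applied, and the edit is kept only if it improves a downstream evaluation. Every helper in this sketch is a hypothetical placeholder.

```python
# Loose sketch of a SEAL-style adaptation loop. All helper functions and the
# `model` interface are hypothetical placeholders, not the authors' code.

def generate_self_edit(model, context: str) -> str:
    """Hypothetical: ask the model to restate `context` as training data
    plus directives (e.g. implications, QA pairs, augmentation choices)."""
    return model.generate(f"Rewrite as training data and directives:\n{context}")

def finetune(model, text: str):
    """Hypothetical: a lightweight (e.g. LoRA-style) update on `text`,
    returning the adapted model."""
    return model.update_on(text)

def seal_step(model, context: str, eval_fn):
    """One round: propose a self-edit, adapt, keep it only if it helps."""
    baseline = eval_fn(model)
    edit = generate_self_edit(model, context)
    candidate = finetune(model, edit)
    # The comparison acts as the reward signal that reinforces better
    # self-edit generation over successive rounds.
    return candidate if eval_fn(candidate) > baseline else model
```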

Duration:00:31:26

The Illusion of Thinking: What the Apple AI Paper Says About LLM Reasoning

6/20/2025
This week we discuss The Illusion of Thinking, a new paper from researchers at Apple that challenges today’s evaluation methods and introduces a new benchmark: synthetic puzzles with controllable complexity and clean logic. Their findings? Large Reasoning Models (LRMs) show surprising failure modes, including a complete collapse on high-complexity tasks and a decline in reasoning effort as problems get harder. Dylan and Parth dive into the paper's findings as well as the debate around it, including a response paper aptly titled "The Illusion of the Illusion of Thinking." Read the paper: The Illusion of Thinking. Read the response: The Illusion of the Illusion of Thinking. Explore more AI research and sign up for future readings. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
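
To make the benchmark idea concrete, here is a small sketch of a controllable-complexity puzzle in the spirit of the paper's setup, using Tower of Hanoi: difficulty is a single dial (number of disks) and the ground-truth solution is exactly computable, so model outputs can be checked step by step. The prompt wording is an illustrative assumption.

```python
# Controllable-complexity puzzle sketch: Tower of Hanoi with a verifiable
# ground truth. The prompt text is illustrative, not the paper's exact prompt.

def hanoi_moves(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    """Optimal move sequence for n disks; its length is 2**n - 1."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, src, dst, aux)   # move n-1 disks onto the auxiliary peg
        + [(src, dst)]                      # move the largest disk
        + hanoi_moves(n - 1, aux, src, dst) # move n-1 disks onto the target peg
    )

def make_puzzle(n_disks: int) -> dict:
    """Bundle a prompt with its verifiable ground truth and a complexity score."""
    return {
        "prompt": f"Solve Tower of Hanoi with {n_disks} disks on pegs A, B, C. "
                  "List every move as 'peg -> peg'.",
        "ground_truth": hanoi_moves(n_disks),
        "complexity": 2 ** n_disks - 1,  # minimum number of moves required
    }

# Complexity scales cleanly: 3 disks needs 7 moves, 10 disks needs 1023.
print(len(make_puzzle(3)["ground_truth"]), len(make_puzzle(10)["ground_truth"]))
```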

Duration:00:30:35

Accurate KV Cache Quantization with Outlier Tokens Tracing

6/4/2025
Join us as we discuss Accurate KV Cache Quantization with Outlier Tokens Tracing, a deep dive into improving the efficiency of LLM inference. The authors enhance KV Cache quantization, a technique for reducing memory and compute costs during inference, by introducing a method to identify and exclude outlier tokens that hurt quantization accuracy, striking a better balance between efficiency and performance. Paper: https://arxiv.org/abs/2505.10938 Slides: https://bit.ly/45wolpr Join us for Arize Observe: https://arize.com/observe-2025/ Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
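
The gist of the method, sketched loosely below (this is not the authors' implementation): quantize most key/value vectors to int8 with per-token scales, but trace the small fraction of outlier tokens with unusually large magnitudes and keep those in full precision. The outlier rule and tensor shapes are illustrative assumptions.

```python
# Rough sketch of KV cache quantization with outlier token tracing.
# Shapes and the outlier heuristic are illustrative assumptions.
import torch

def quantize_kv(kv: torch.Tensor, outlier_frac: float = 0.01):
    """kv: (seq_len, head_dim). Returns int8 codes, per-token scales, and
    full-precision entries for the traced outlier tokens."""
    norms = kv.abs().amax(dim=-1)                        # per-token magnitude
    k = max(1, int(outlier_frac * kv.shape[0]))
    outlier_idx = torch.topk(norms, k).indices           # traced outlier tokens
    scale = norms.clamp(min=1e-8) / 127.0
    q = torch.clamp((kv / scale[:, None]).round(), -127, 127).to(torch.int8)
    return q, scale, outlier_idx, kv[outlier_idx].clone()

def dequantize_kv(q, scale, outlier_idx, outlier_vals):
    kv = q.to(torch.float32) * scale[:, None]
    kv[outlier_idx] = outlier_vals                       # restore exact outliers
    return kv

kv = torch.randn(128, 64)
print((dequantize_kv(*quantize_kv(kv)) - kv).abs().max())  # small reconstruction error
```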

Duration:00:25:11

Scalable Chain of Thoughts via Elastic Reasoning

5/16/2025
In this week's episode, we talk about Elastic Reasoning, a novel framework designed to enhance the efficiency and scalability of large reasoning models by explicitly separating the reasoning process into two distinct phases: thinking and solution. This separation allows for independent allocation of computational budgets, addressing challenges related to uncontrolled output lengths in real-world deployments with strict resource constraints. Our discussion explores how Elastic Reasoning contributes to more concise and efficient reasoning, even in unconstrained settings, and its implications for deploying LRMs in resource-limited environments. Read the paper here: https://arxiv.org/pdf/2505.05315 Sign up for the next discussion & see more AI research: arize.com/ai-research-papers Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
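
A minimal sketch of the two-phase budgeting idea, assuming a generic `generate(text, max_tokens, stop)` completion function and `<think>` tags; both are illustrative stand-ins rather than the paper's exact interface.

```python
# Sketch of elastic, two-phase generation: the thinking phase has its own
# token budget, and the solution phase always receives its full budget even
# if thinking is cut off. Tags and the `generate` interface are assumptions.

THINK_END = "</think>"

def elastic_generate(generate, prompt: str, think_budget: int, solution_budget: int) -> str:
    """`generate(text, max_tokens, stop)` is any text-completion callable."""
    # Phase 1: reasoning, capped independently of the answer.
    thinking = generate(prompt + "<think>", max_tokens=think_budget, stop=THINK_END)
    # Phase 2: solution generation with its own budget, conditioned on the
    # (possibly truncated) reasoning trace.
    solution = generate(
        prompt + "<think>" + thinking + THINK_END,
        max_tokens=solution_budget,
        stop=None,
    )
    return solution
```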

Duration:00:28:54

Sleep-time Compute: Beyond Inference Scaling at Test-time

5/2/2025
What if your LLM could think ahead—preparing answers before questions are even asked? In this week's paper read, we dive into a groundbreaking new paper from researchers at Letta, introducing sleep-time compute: a novel technique that lets models do their heavy lifting offline, well before the user query arrives. By predicting likely questions and precomputing key reasoning steps, sleep-time compute dramatically reduces test-time latency and cost—without sacrificing performance. We explore new benchmarks—Stateful GSM-Symbolic, Stateful AIME, and the multi-query extension of GSM—that show up to 5x lower compute at inference, 2.5x lower cost per query, and up to 18% higher accuracy when scaled. You’ll also see how this method applies to realistic agent use cases and what makes it most effective. If you care about LLM efficiency, scalability, or cutting-edge research, this episode is for you. Explore more AI research, or sign up to hear the next session live: arize.com/ai-research-papers. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
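
A loose sketch of the pattern, assuming a generic `llm` callable and illustrative prompt wording: offline, guess likely questions about a stored context and cache worked-out answers; online, answer the real query on top of those precomputed notes.

```python
# Sketch of sleep-time compute: anticipate questions while idle, cache the
# expensive reasoning, reuse it at query time. `llm` is any text-in/text-out
# callable; prompts are illustrative assumptions.

def sleep_time_pass(llm, context: str, n_questions: int = 5) -> dict[str, str]:
    """Offline: guess likely questions and cache worked-out answers."""
    questions = llm(
        f"Context:\n{context}\n\nList {n_questions} questions a user is likely to ask."
    ).splitlines()
    return {
        q: llm(f"Context:\n{context}\n\nWork out the answer to: {q}")
        for q in questions if q.strip()
    }

def answer(llm, context: str, cache: dict[str, str], user_query: str) -> str:
    """Online: lean on the precomputed notes instead of reasoning from scratch."""
    notes = "\n".join(f"Q: {q}\nA: {a}" for q, a in cache.items())
    return llm(
        f"Context:\n{context}\n\nPrecomputed notes:\n{notes}\n\nUser question: {user_query}"
    )
```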

Duration:00:30:24

LibreEval: The Largest Open Source Benchmark for RAG Hallucination Detection

4/18/2025
For this week's paper read, we actually dive into our own research. We wanted to create a replicable, evolving dataset that can keep pace with model training so that you always know you're testing with data your model has never seen before. We also saw the prohibitively high cost of running LLM evals at scale, and have used our data to fine-tune a series of SLMs that perform just as well as their base LLM counterparts, but at 1/10 the cost. So, over the past few weeks, the Arize team generated the largest public dataset of hallucinations, as well as a series of fine-tuned evaluation models. We talk about what we built, the process we took, and the bottom line results. 📃 Read the paper: https://arize.com/llm-hallucination-dataset/ Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.

Duration:00:27:19

AI Benchmark Deep Dive: Gemini 2.5 and Humanity's Last Exam

4/4/2025
This week we talk about modern AI benchmarks, taking a close look at Google's recent Gemini 2.5 release and its performance on key evaluations, notably Humanity's Last Exam (HLE). In the session we covered Gemini 2.5's architecture, its advancements in reasoning and multimodality, and its impressive context window. We also talked about how benchmarks like HLE and ARC AGI 2 help us understand the current state and future direction of AI. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.

Duration:00:26:11

Model Context Protocol (MCP)

3/25/2025
We cover Anthropic’s groundbreaking Model Context Protocol (MCP). Though it was released in November 2024, we've been seeing a lot of hype around it lately, and thought it was well worth digging into. Learn how this open standard is revolutionizing AI by enabling seamless integration between LLMs and external data sources, fundamentally transforming them into capable, context-aware agents. We explore the key benefits of MCP, including enhanced context retention across interactions, improved interoperability for agentic workflows, and the development of more capable AI agents that can execute complex tasks in real-world environments. Learn more about AI observability and evaluation in our course, join the Arize AI Slack community or get the latest on LinkedIn and X.
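
Under the hood, MCP is built on JSON-RPC 2.0, and the request shapes are simple enough to show directly. The sketch below constructs a `tools/list` request and a `tools/call` request; the tool name and arguments are hypothetical, and transport details (stdio or HTTP) and capability negotiation are omitted.

```python
# Sketch of the JSON-RPC 2.0 message shapes MCP builds on. The tool name and
# arguments are hypothetical examples, not part of any real server.
import json

# Ask the server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invoke one of those tools with arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_docs",                       # hypothetical tool
        "arguments": {"query": "quarterly revenue"}, # hypothetical arguments
    },
}

print(json.dumps(list_tools, indent=2))
print(json.dumps(call_tool, indent=2))
```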

Duration:00:15:03

AI Roundup: DeepSeek’s Big Moves, Claude 3.7, and the Latest Breakthroughs

2/28/2025
This week, we're mixing things up a little bit. Instead of diving deep into a single research paper, we cover the biggest AI developments from the past few weeks and break down the key announcements. Stay ahead of the curve with this fast-paced recap of the most important AI updates. We'll be back next time with our regularly scheduled programming. Learn more about AI observability and evaluation in our course, join the Arize AI Slack community or get the latest on LinkedIn and X.

Duration:00:30:23

How DeepSeek is Pushing the Boundaries of AI Development

2/21/2025
This week, we dive into DeepSeek. SallyAnn DeLucia, Product Manager at Arize, and Nick Luzio, a Solutions Engineer, break down key insights on a model that has been dominating headlines for its significant breakthrough in inference speed over other models. What’s next for AI (and open source)? From training strategies to real-world performance, here’s what you need to know. Read a summary: https://arize.com/blog/how-deepseek-is-pushing-the-boundaries-of-ai-development/ Learn more about AI observability and evaluation in our course, join the Arize AI Slack community or get the latest on LinkedIn and X.

Duration:00:29:54

Multiagent Finetuning: A Conversation with Researcher Yilun Du

2/4/2025
We talk to Google DeepMind Senior Research Scientist (and incoming Assistant Professor at Harvard), Yilun Du, about his latest paper "Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains." This paper introduces a multiagent finetuning framework that enhances the performance and diversity of language models by employing a society of agents with distinct roles, improving feedback mechanisms and overall output quality. The method enables autonomous self-improvement through iterative finetuning, achieving significant performance gains across various reasoning tasks. It's versatile, applicable to both open-source and proprietary LLMs, and can integrate with human-feedback-based methods like RLHF or DPO, paving the way for future advancements in language model development. Read an overview on the blog. Watch the full discussion. Learn more about AI observability and evaluation in our course, join the Arize AI Slack community or get the latest on LinkedIn and X.
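
A loose sketch of one round of the loop described above (a simplification, not the authors' code): several copies of a model answer the same problems, a consensus answer is formed, and each copy is finetuned only on its own outputs that agree with the consensus, which preserves diversity across agents. The agent and `finetune` interfaces are hypothetical.

```python
# Simplified multiagent finetuning round. `agents` are objects with an
# .answer(problem) -> str method and `finetune(agent, data)` returns an
# updated agent; both interfaces are hypothetical placeholders.
from collections import Counter

def majority_answer(answers: list[str]) -> str:
    return Counter(answers).most_common(1)[0][0]

def multiagent_finetune_round(agents: list, problems: list[str], finetune):
    per_agent_data = [[] for _ in agents]
    for problem in problems:
        answers = [a.answer(problem) for a in agents]
        consensus = majority_answer(answers)
        for i, ans in enumerate(answers):
            if ans == consensus:           # keep each agent's own "good" traces
                per_agent_data[i].append((problem, ans))
    # Each agent trains only on its own data, so the agents stay diverse.
    return [finetune(a, data) for a, data in zip(agents, per_agent_data)]
```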

Duration:00:30:03

Training Large Language Models to Reason in Continuous Latent Space

1/14/2025
LLMs have typically been restricted to reason in the "language space," where chain-of-thought (CoT) is used to solve complex reasoning problems. But a new paper argues that language space may not always be the best for reasoning. In this paper read, we cover an exciting new technique from a team at Meta called Chain of Continuous Thought—also known as "Coconut." The paper, "Training Large Language Models to Reason in a Continuous Latent Space," explores the potential of allowing LLMs to reason in an unrestricted latent space instead of being constrained by natural language tokens. Read a full breakdown of Coconut on our blog. Learn more about AI observability and evaluation in our course, join the Arize AI Slack community or get the latest on LinkedIn and X.
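
The core trick can be sketched compactly: during latent reasoning steps, the model's last hidden state is appended back to the input embeddings directly instead of being decoded into a word token. The model interface below is an assumption modeled on a standard Hugging Face decoder that accepts `inputs_embeds` and returns `last_hidden_state`.

```python
# Simplified sketch of "continuous thought": feed the last hidden state back
# as the next input embedding. The model interface is an assumption.
import torch

def latent_reasoning(model, input_embeds: torch.Tensor, n_latent_steps: int) -> torch.Tensor:
    """input_embeds: (1, seq_len, hidden). Returns the embeddings extended
    with n_latent_steps continuous thoughts."""
    embeds = input_embeds
    for _ in range(n_latent_steps):
        hidden = model(inputs_embeds=embeds).last_hidden_state  # (1, t, hidden)
        thought = hidden[:, -1:, :]           # last hidden state = one latent thought
        embeds = torch.cat([embeds, thought], dim=1)            # feed it back in
    return embeds
```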

Duration:00:24:58

LLMs as Judges: A Comprehensive Survey on LLM-Based Evaluation Methods

12/23/2024
We discuss a major survey of work and research on LLM-as-Judge from the last few years. "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods" systematically examines the LLMs-as-Judge framework across five dimensions: functionality, methodology, applications, meta-evaluation, and limitations. The survey gives us a bird's-eye view of the framework's advantages and limitations, along with methods for evaluating its effectiveness. Read a breakdown on our blog: https://arize.com/blog/llm-as-judge-survey-paper/ Learn more about AI observability and evaluation in our course, join the Arize AI Slack community or get the latest on LinkedIn and X.
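
For readers new to the pattern the survey studies, here is a bare-bones sketch of an LLM-as-Judge call: a judge model scores a candidate answer against a rubric and returns a structured verdict. The rubric, scoring scale, and `judge_llm` callable are illustrative assumptions.

```python
# Bare-bones LLM-as-Judge sketch. `judge_llm` is any chat-completion callable;
# the rubric and JSON schema are illustrative assumptions.
import json

RUBRIC = "Rate correctness and helpfulness from 1 (poor) to 5 (excellent)."

def judge(judge_llm, question: str, answer: str) -> dict:
    prompt = (
        f"{RUBRIC}\n\nQuestion: {question}\nCandidate answer: {answer}\n\n"
        'Respond with JSON: {"score": <1-5>, "rationale": "<one sentence>"}'
    )
    return json.loads(judge_llm(prompt))  # e.g. {"score": 4, "rationale": "..."}
```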

Duration:00:28:57

Merge, Ensemble, and Cooperate! A Survey on Collaborative LLM Strategies

12/10/2024
LLMs have revolutionized natural language processing, showcasing remarkable versatility and capabilities. But individual LLMs often exhibit distinct strengths and weaknesses, influenced by differences in their training corpora. This diversity poses a challenge: how can we maximize the efficiency and utility of LLMs? A new paper, "Merge, Ensemble, and Cooperate: A Survey on Collaborative Strategies in the Era of Large Language Models," highlights collaborative strategies to address this challenge. In this week's episode, we summarize key insights from this paper and discuss practical implications of LLM collaboration strategies across three main approaches: merging, ensemble, and cooperation. We also review some new open source models we're excited about. Learn more about AI observability and evaluation in our course, join the Arize AI Slack community or get the latest on LinkedIn and X.
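
As a concrete taste of the "merge" branch of the survey's taxonomy, the sketch below linearly interpolates the weights of two models that share an architecture; real merging methods (task vectors, TIES, and others) add more machinery, so treat this as the simplest possible instance.

```python
# Simplest weight-merging sketch: linear interpolation of two compatible
# state dicts. Not a full merging toolkit, just the basic operation.
import torch

def linear_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """Weighted average of two state dicts from models with identical architectures."""
    assert state_a.keys() == state_b.keys(), "models must share an architecture"
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k] for k in state_a}

# Usage: merged = linear_merge(model_a.state_dict(), model_b.state_dict(), alpha=0.3)
```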

Duration:00:28:47

Agent-as-a-Judge: Evaluate Agents with Agents

11/22/2024
This week, we break down the “Agent-as-a-Judge” framework—a new agent evaluation paradigm that’s kind of like getting robots to grade each other’s homework. Where typical evaluation methods focus solely on outcomes or demand extensive manual work, this approach uses agent systems to evaluate agent systems, offering intermediate feedback throughout the task-solving process. With the power to unlock scalable self-improvement, Agent-as-a-Judge could redefine how we measure and enhance agent performance. Let's get into it! Learn more about AI observability and evaluation in our course, join the Arize AI Slack community or get the latest on LinkedIn and X.

Duration:00:24:54

Introduction to OpenAI's Realtime API

11/12/2024
We break down OpenAI’s realtime API. Learn how to seamlessly integrate powerful language models into your applications for instant, context-aware responses that drive user engagement. Whether you’re building chatbots, dynamic content tools, or enhancing real-time collaboration, we walk through the API’s capabilities, potential use cases, and best practices for implementation. To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.

Duration:00:29:56

Swarm: OpenAI's Experimental Approach to Multi-Agent Systems

10/29/2024
As multi-agent systems grow in importance for fields ranging from customer support to autonomous decision-making, OpenAI has introduced Swarm, an experimental framework that simplifies the process of building and managing these systems. Swarm, a lightweight Python library, is designed for educational purposes, stripping away complex abstractions to reveal the foundational concepts of multi-agent architectures. In this podcast, we explore Swarm’s design, its practical applications, and how it stacks up against other frameworks. Whether you’re new to multi-agent systems or looking to deepen your understanding, Swarm offers a straightforward, hands-on way to get started. To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.
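
A minimal example in the spirit of Swarm's documented interface: two agents, where one hands the conversation off to the other by returning it from a function. Details reflect the library around its release and may have changed, so treat this as a sketch rather than a definitive reference.

```python
# Minimal Swarm-style sketch: a triage agent that can hand off to a refunds
# agent by returning it from a function.
from swarm import Swarm, Agent

def transfer_to_refunds():
    """Handoff: returning an Agent switches who handles the conversation."""
    return refunds_agent

refunds_agent = Agent(
    name="Refunds Agent",
    instructions="Help the user process a refund.",
)

triage_agent = Agent(
    name="Triage Agent",
    instructions="Route the user to the right agent.",
    functions=[transfer_to_refunds],
)

client = Swarm()
response = client.run(
    agent=triage_agent,
    messages=[{"role": "user", "content": "I want a refund for my order."}],
)
print(response.messages[-1]["content"])
```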

Duration:00:46:46

KV Cache Explained

10/24/2024
In this episode, we dive into the intriguing mechanics behind why chat experiences with models like GPT often start slow but then rapidly pick up speed. The key? The KV cache. This essential but under-discussed component enables the seamless and snappy interactions we expect from modern AI systems. Harrison Chu breaks down how the KV cache works, how it relates to the transformer architecture, and why it's crucial for efficient AI responses. By the end of the episode, you'll have a clearer understanding of how top AI products leverage this technology to deliver fast, high-quality user experiences. Tune in for a simplified explanation of attention heads, KQV matrices, and the computational complexities they present. To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.
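
A compact sketch of why the cache matters: without it, every decoding step recomputes keys and values for the entire prefix; with it, only the newest token's key and value are computed and appended. Shapes are illustrative (a single attention head, no masking or multi-layer details).

```python
# Single-head decoding loop with a KV cache: each step computes K and V for
# ONE new token and attends over everything cached so far.
import torch

def attend(q, K, V):
    """q: (1, d), K/V: (t, d). Standard scaled dot-product attention."""
    scores = (q @ K.T) / K.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ V

d = 64
W_k, W_v = torch.randn(d, d), torch.randn(d, d)
K_cache, V_cache = torch.empty(0, d), torch.empty(0, d)

for _ in range(5):                             # decoding loop
    x = torch.randn(1, d)                      # embedding of the newest token
    K_cache = torch.cat([K_cache, x @ W_k])    # append one new key...
    V_cache = torch.cat([V_cache, x @ W_v])    # ...and one new value
    out = attend(x, K_cache, V_cache)          # attend over all cached K/V
```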

Duration:00:04:19

The Shrek Sampler: How Entropy-Based Sampling is Revolutionizing LLMs

10/16/2024
In this byte-sized podcast, Harrison Chu, Director of Engineering at Arize, breaks down the Shrek Sampler. This innovative Entropy-Based Sampling technique--nicknamed the 'Shrek Sampler'--is transforming LLMs. Harrison talks about how this method improves upon traditional sampling strategies by leveraging entropy and varentropy to produce more dynamic and intelligent responses. Explore its potential to enhance open-source AI models and enable human-like reasoning in smaller language models. To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.
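
A simplified sketch of the signal this family of samplers builds on: measure the entropy (how uncertain the model is about the next token) and varentropy (how uneven that uncertainty is) of the logits, then branch the sampling strategy accordingly. The thresholds and branching policy here are illustrative, not the entropix implementation.

```python
# Entropy/varentropy-driven sampling sketch. Thresholds and the branching
# policy are illustrative assumptions.
import torch

def entropy_varentropy(logits: torch.Tensor) -> tuple[float, float]:
    logp = torch.log_softmax(logits, dim=-1)
    p = logp.exp()
    ent = -(p * logp).sum()                     # expected surprisal
    varent = (p * (-logp - ent) ** 2).sum()     # variance of surprisal
    return float(ent), float(varent)

def adaptive_sample(logits: torch.Tensor) -> int:
    ent, varent = entropy_varentropy(logits)
    if ent < 1.0:                               # confident: just take the argmax
        return int(logits.argmax())
    temperature = 1.0 + 0.3 * varent            # uncertain: explore more broadly
    probs = torch.softmax(logits / temperature, dim=-1)
    return int(torch.multinomial(probs, 1))
```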

Duration:00:03:31