Meeting Prep
Walk in prepared — walk out with a research opportunity
This page has two layers: general conversation tips by research area, and professor-specific talking points derived from each lab's published work. Use both to prepare for a first meeting.
General Tips by Research Area
Robotics
- Ask about their hardware platforms — which robots does the lab use?
- Discuss sim-to-real transfer challenges in their work
- Ask about dataset collection: how do they gather training data?
- Mention any hands-on hardware experience you have (Arduino, ROS, etc.)
Reinforcement Learning
- Ask about the gap between simulation and real-world RL deployment
- Discuss offline vs online RL and which the lab focuses on
- Mention any RL projects or course assignments you've completed
- Ask about their benchmark environments and evaluation methodology
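A concrete, minimal artifact makes the "mention your RL projects" point much easier to deliver. As one option, here is a toy tabular Q-learning sketch on a hypothetical 5-state chain MDP (illustrative only, not any lab's code) that you could adapt and then talk through line by line:

```python
import random

# Toy chain MDP (hypothetical, for illustration): states 0..4, goal at 4.
# Action 1 moves right (toward the goal, reward 1 on arrival); action 0 moves left.
N_STATES, GOAL = 5, 4

def step(state, action):
    """One environment transition: returns (next_state, reward, done)."""
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore with probability eps, otherwise act greedily.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Standard Q-learning update: move q[s][a] toward the bootstrapped target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
```

After training, the learned values prefer moving right in every non-goal state, which matches the optimal policy for this chain; a self-contained example like this also gives you a natural opening to ask about the lab's offline vs online focus.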
Computer Vision
- Ask about their approach to data augmentation and self-supervised learning
- Discuss recent advances in diffusion models or generative approaches
- Mention any computer vision projects (course projects, Kaggle, etc.)
- Ask about compute requirements and what GPU resources are available
Foundation Models
- Ask about fine-tuning vs prompting strategies in their research
- Discuss scaling challenges they face with large models
- Mention experience with transformers, attention mechanisms, or LLM APIs
- Ask about evaluation: how do they measure model quality beyond benchmarks?
AI Safety
- Ask about their threat model — what failure modes concern them most?
- Discuss the relationship between alignment and capabilities research
- Mention any coursework in security, ethics, or formal methods
- Ask about concrete safety benchmarks or metrics they use
ML Theory & Stats
- Ask about the practical implications of their theoretical results
- Discuss which open problems in ML theory excite them most
- Mention relevant math coursework (real analysis, probability, optimization)
- Ask about their proof techniques and mathematical toolkit
ML Systems
- Ask about their production deployment stack and infrastructure
- Discuss bottlenecks in current ML training/serving pipelines
- Mention systems programming experience (C++, Rust, CUDA, distributed systems)
- Ask about their benchmarking methodology for system performance
Professor-Specific Talking Points
Each lab's key papers, and why mentioning them matters — derived from the lab's current research direction.
RAIL Lab
Prof. Sergey Levine · EECS
Key papers to mention
- Octo: An Open-Source Generalist Robot Policy
The lab's flagship 2024 work on generalist robot policies — mentioning this shows you understand their current direction
- Bridge Data V2: A Dataset for Robot Learning at Scale
Core large-scale dataset enabling offline RL and generalist policy training at RAIL Lab
- Cal-QL: Calibrated Offline Reinforcement Learning
Foundational offline RL method from the lab — key to understanding how they train robot policies without online interaction
Robot Learning Lab (RLL)
Prof. Pieter Abbeel · EECS
Key papers to mention
- Apprenticeship Learning via Inverse Reinforcement Learning
Abbeel's seminal IRL paper — understanding this framework is foundational to the lab's philosophy of learning from demonstrations
- Foundation Models for Robot Planning and Manipulation
Recent direction of the lab on using large pretrained models for robot planning — shows current research trajectory
- Language-Conditioned Imitation Learning for Robot Manipulation Tasks
Illustrates the lab's approach to language-conditioned robot control, a core active research area
AUTOLAB
Prof. Ken Goldberg · EECS
Key papers to mention
- Dex-Net 4.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
The lab's landmark grasping system — understanding Dex-Net is essential for AUTOLAB conversations
- FogROS2: An Adaptive Platform for Cloud and Edge Robotics Using ROS 2
Cloud robotics infrastructure the lab is actively developing — shows the systems side of AUTOLAB research
- RT-Grasp: Reasoning Transferable Grasps from Large Language Models
Most recent LLM-for-robotics direction — demonstrates the lab's current focus on combining LLMs with physical manipulation
BAIR Vision Lab
Prof. Jitendra Malik · EECS
Key papers to mention
- Learning Agile Locomotion Skills with a Mentor
Recent work on sim-to-real transfer for agile quadruped locomotion — representative of the lab's current robotics focus
- Sim-to-Real Transfer of Agile Locomotion for Quadruped Robots
Foundational sim-to-real locomotion paper — core methodology for the lab's ongoing legged robot projects
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Influential vision transformer paper — foundational architecture work that the lab's current vision research builds on
InterACT Lab
Prof. Anca Dragan · EECS
Key papers to mention
- Legibility and Predictability of Robot Motion
Seminal work on designing robot motion that communicates intent — foundational to the lab's HRI philosophy
- Cooperative Inverse Reinforcement Learning
CIRL framework co-authored with Stuart Russell — foundational alignment paper the lab built on
- Planning with Trust for Human-Robot Collaboration
Shows the lab's current direction on trust-aware planning — directly relevant to ongoing HRI research
Jordan Lab
Prof. Michael I. Jordan · EECS
Key papers to mention
- Minimax Optimal Rates for Poisson Inverse Problems with Physical Constraints
Recent theoretical work demonstrating the lab's statistical ML approach — shows current research direction
- An Introduction to Variational Methods for Graphical Models
Jordan's foundational survey on variational inference — essential background for understanding the lab's Bayesian ML work
- Decision-Making under Uncertainty with Reinforcement Learning and Active Learning
Illustrates the lab's intersection of economics, decision theory, and ML — core research theme
Darrell Group / Berkeley DeepDrive
Prof. Trevor Darrell · EECS
Key papers to mention
- LLaVA: Visual Instruction Tuning
Landmark vision-language instruction-tuning work — central context for the lab's current multimodal research
- Open-Vocabulary Object Detection Using Captions
Influential open-vocabulary detection work from the lab — directly applicable to autonomous driving perception
- Learning Transferable Visual Models From Natural Language Supervision
CLIP paper — foundational to the vision-language pretraining paradigm the lab builds on
Visual Learning Group
Prof. Alexei Efros · EECS
Key papers to mention
- Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Seminal CycleGAN paper from the lab — core contribution to the image translation field and foundational to their generative work
- Image-to-Image Translation with Conditional Adversarial Networks
pix2pix paper — shows the lab's foundational contribution to conditional image generation
- The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
LPIPS perceptual metric paper — widely used evaluation metric for image generation research
CHAI
Prof. Stuart Russell · EECS
Key papers to mention
- Human Compatible: Artificial Intelligence and the Problem of Control
Russell's book laying out the human-compatible AI framework — foundational reading for understanding CHAI's research vision
- Cooperative Inverse Reinforcement Learning
CIRL framework — the core technical contribution underlying CHAI's alignment research
- Scalable and Safe Planning Using Constrained MDPs
Shows CHAI's approach to safe planning — key for understanding how they formalize safety constraints
Song Lab
Prof. Dawn Song · EECS
Key papers to mention
- Universal and Transferable Adversarial Attacks on Aligned Language Models
Landmark paper on jailbreaking aligned LLMs — central to the lab's LLM safety research
- SoK: Certified Robustness for Deep Neural Networks
Comprehensive survey on certified robustness — shows the lab's systematic approach to provable AI safety
- Towards Federated Learning at Scale: A System Design
Influential federated learning paper relevant to the lab's privacy-preserving ML research direction
Hardt Lab
Prof. Moritz Hardt · EECS
Key papers to mention
- Equality of Opportunity in Supervised Learning
Foundational fairness paper introducing equalized odds — one of the most cited works on algorithmic fairness
- Performative Prediction
The lab's key theoretical contribution — a must-read to understand their approach to socially-aware ML
- Train Faster, Generalize Better: Stability of Stochastic Gradient Descent
Seminal work connecting SGD stability to generalization — foundational to the lab's statistical ML theory
Yu Lab
Prof. Bin Yu · Statistics
Key papers to mention
- Veridical Data Science: The Practice of Responsible Data Analysis and Decision Making
Foundational paper on the lab's PCS framework — essential reading to understand their philosophy of responsible data science
- Three Principles of Data Science: Predictability, Computability, and Stability
Core PCS framework paper — the theoretical foundation underlying the lab's approach to reproducible ML
- Hierarchical Interpretations for Neural Network Predictions
ACD (agglomerative contextual decomposition) method from the lab — representative of their interpretable ML tools
Ma Lab
Prof. Yi Ma · EECS
Key papers to mention
- White-Box Transformers via Sparse Rate Reduction
CRATE architecture — the lab's flagship work deriving transformers from first principles via rate reduction
- ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction
Foundational work on rate reduction for principled deep network design — core to understanding the lab's approach
- Robust Principal Component Analysis?
Seminal RPCA paper — classic contribution that established the lab's expertise in low-dimensional structure
Jiao Lab
Prof. Jiantao Jiao · EECS
Key papers to mention
- Minimax Estimation of Functionals of Discrete Distributions
Foundational information-theoretic estimation paper — core contribution establishing the lab's statistical estimation approach
- Towards Efficient and Unified Compression for Modern Large Language Models
Most recent LLM compression work — shows the lab's active direction in applying information theory to LLMs
- Information-Theoretic Understanding of Population Risk Improvement with Model Compression
Theoretical bridge between model compression and generalization — key to the lab's information-theoretic ML work
Sky Lab
Prof. Ion Stoica · EECS
Key papers to mention
- Ray: A Distributed Framework for Emerging AI Applications
Foundational Ray paper — the distributed computing framework the lab built that is now widely used in ML
- Efficiently Programming Large Language Models using SGLang
SGLang system from the lab — shows their active work on LLM inference optimization
- vLLM: Efficient Memory Management for Large Language Model Serving with PagedAttention
vLLM's PagedAttention from Sky Lab — one of the most impactful recent contributions to LLM serving infrastructure
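If you want to show you understand why PagedAttention matters, it helps to be able to sketch the underlying idea: the KV cache is carved into fixed-size blocks, and each sequence keeps a block table mapping its logical positions to physical blocks, so memory is granted on demand instead of pre-reserved for the maximum length. A simplified illustration (the class and method names here are made up for this sketch, not vLLM's API):

```python
BLOCK_SIZE = 4  # tokens per physical cache block (toy value)

class PagedKVCache:
    """Toy allocator illustrating paged KV-cache bookkeeping (not vLLM's API)."""

    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))  # pool of free physical block ids
        self.tables = {}                     # seq_id -> list of physical block ids
        self.lengths = {}                    # seq_id -> number of tokens stored

    def append_token(self, seq_id):
        """Reserve cache space for one new token, allocating a block when needed."""
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:              # current block full, or sequence is new
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def release(self, seq_id):
        """Return a finished sequence's blocks to the free pool."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=8)
for _ in range(6):                # 6 tokens need ceil(6/4) = 2 blocks
    cache.append_token("seq-A")
```

Because blocks are handed out one at a time and returned on completion, many concurrent sequences can share one pool with little internal waste — the key contrast with pre-allocating a max-length buffer per sequence, and a good hook for a question about serving bottlenecks.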
Prof. Joseph Gonzalez · EECS
Key papers to mention
- Inductive Representation Learning on Large Graphs
GraphSAGE paper — foundational contribution to inductive graph neural networks, core to Gonzalez's GNN work
- Orca: A Distributed Serving System for Transformer-Based Generative Models
Influential LLM serving system — context for the lab's work on efficient transformer inference
- Gorilla: Large Language Model Connected with Massive APIs
LLM tool-use paper from the lab — shows the direction on connecting LLMs to external systems
MSC Lab
Prof. Masayoshi Tomizuka · ME
Key papers to mention
- Safe Motion Planning for Autonomous Driving in Dynamic Environments
Core motion planning paper from the lab — foundational to understanding their autonomous driving safety approach
- Deep RL-Based Motion Planning for Autonomous Driving
RL for motion planning from MSC Lab — shows the lab's approach to learning-based autonomous driving
- Simultaneous Learning and Planning with Temporal Logic Constraints
Temporal logic for safe planning — demonstrates the formal methods side of the lab's autonomous vehicle research
Hybrid Robotics Lab
Prof. Koushil Sreenath · ME
Key papers to mention
- Control Barrier Functions: Theory and Applications
Foundational CBF survey paper — essential reading to understand the safety-critical control framework the lab is built on
- Safety-Critical Control of Active Spine-Based Quadruped Robot
Application of CBFs to quadruped robots — shows how the lab applies formal safety methods to legged locomotion
- Rapid Locomotion via Reinforcement Learning
Influential RL-based agile locomotion work — useful context for the lab's sim-to-real research on dynamic legged movement
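If CBFs come up in conversation, being able to state the core condition precisely signals real preparation. As a sketch of the standard formulation: for a control-affine system $\dot{x} = f(x) + g(x)u$ with safe set $\mathcal{C} = \{x : h(x) \ge 0\}$, the function $h$ is a control barrier function if

```latex
\sup_{u \in U} \big[ L_f h(x) + L_g h(x)\,u \big] \;\ge\; -\alpha\big(h(x)\big)
```

where $L_f h$ and $L_g h$ are Lie derivatives and $\alpha$ is an extended class-$\mathcal{K}$ function; any controller satisfying this inequality at each state renders $\mathcal{C}$ forward invariant, which is the sense in which CBF-based control is "safety-critical."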
HiPeRLab
Prof. Mark Mueller · ME
Key papers to mention
- Stability Analysis for Linear Systems with Arbitrary Time-Varying Delays
Theoretical control systems work from Mueller — foundational to understanding the lab's rigorous control approach
- Fault-tolerant Control of a Quadrocopter with Various Physical Damage
Landmark fault-tolerant flight paper — central to the lab's safety and robustness research for aerial robots
- A Computationally Efficient Motion Primitive for Quadrocopter Trajectory Generation
Efficient trajectory planning for quadrocopters — core methodology used for real-time high-speed flight planning
Embodied Dexterity Group (EDG)
Prof. Hannah Stuart · ME
Key papers to mention
- Compliant Contact-Implicit Model Predictive Control for Soft Robot Locomotion
Contact-implicit MPC for soft robots — shows the lab's approach to controlling compliant robotic systems
- Scaling Up Dexterous Manipulation for Hand-Arm Systems: A Tactile Feedback Approach
Tactile feedback for dexterous manipulation — core to understanding the lab's sensor-rich manipulation research
- Soft Robotic Grippers for Biological Sampling on Deep Reefs
Real-world deployment of soft grippers — illustrates the lab's applied bio-inspired robotics research