The First Hit is Free
Using AI freely while staying in the driver's seat.
My colleague James Tompkin showed me his Claude Code setup for writing a research proposal. I was floored. Scott Alexander challenged his readers to deeply integrate AI into their work, and I wish I had followed his advice sooner. Yes, I’m an AI researcher and an expert in language and robotics, but I was still amazed, and immediately addicted. I used it to write out some math that I had been trying (and failing) to convince a student to do for more than ten years. It wrote the LaTeX, it implemented the algorithm in Python, and it fixed errors. I asked it, “Please use an Unscented Kalman Filter instead of an Extended Kalman Filter,” and it said, “Yes, ma’am!” Soon I was using it every day for writing of every kind. My AI policy in my course at Brown this semester is that students can use AI however they want, up to and including delivering their course presentations, and several took me up on it. They used AI to generate slides, code, and video presentations about their work, turning the class into a laboratory for automating AI research with AI.
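(An aside for the curious: the difference between the two filters is that an EKF handles a nonlinear model by linearizing it with Jacobians, while a UKF pushes a small set of “sigma points” through the nonlinearity and re-fits a Gaussian. Below is a minimal numpy sketch of that unscented transform, the core step of a UKF. It is an illustration of the technique, not the code Claude wrote for me; the alpha, beta, and kappa defaults are the standard Merwe scaled sigma-point choices.)

```python
import numpy as np

def unscented_transform(x, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean x, covariance P) through a
    nonlinear function f using Merwe scaled sigma points."""
    n = x.shape[0]
    lam = alpha**2 * (n + kappa) - n

    # 2n+1 sigma points: the mean, plus and minus the scaled
    # columns of a matrix square root of the covariance.
    L = np.linalg.cholesky((n + lam) * P)
    sigmas = np.vstack([x, x + L.T, x - L.T])  # rows of L.T = columns of L

    # Weights for reconstructing the mean and covariance.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    # Push each sigma point through the nonlinearity, then
    # re-fit a Gaussian to the transformed points.
    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc * diff.T) @ diff
    return y_mean, y_cov
```

Propagating a polar (range, bearing) estimate through the polar-to-Cartesian map is the classic case where this beats linearization, and swapping it in for an EKF’s Jacobian code is exactly the kind of mechanical-but-fiddly edit an assistant can turn around in seconds.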
This raises the question: what are we doing when we use AI to write for us, and to read other people’s writing for us? Does it degenerate into a pointless exercise? My answer comes back to fundamentals. Why do we write, make slides, and deliver presentations? Fundamentally, we are putting our ideas into other people’s brains. We produce artifacts to communicate more effectively with people: to teach them, challenge them, empower them, and build relationships with them. If AI helps us produce better artifacts, or produce them more quickly, then it makes human-human interaction more efficient and effective. It helps us move faster, and in my lab, it helps us make robots do things they couldn’t do before.
But this only works if you already know what you’re doing. One reason I can use AI effectively is that my job already consists of prompting my students to go off and do tasks, and I’ve been practicing for many years. And before that, I spent many hours sitting next to the robot to make it do the thing, so I deeply understand every level of the robot hardware and software stack. To get to know Claude, I used it to convert our old ROS1 program to ROS2 to resurrect an AIBO for my outreach work. The project wouldn’t build. Claude suggested fixes to the source files, but I knew the problem was in the package.xml, and I had to redirect it several times before we found and fixed it (still much faster than doing it on my own). But later, I pointed it at my lab’s recent papers and asked it to suggest new research ideas. They were terrible! Human guidance is still critical. In our department, we are weighing AI policies for our courses: how do we help students build the deep knowledge necessary to solve hard problems? The New Yorker compares this to the difference between using a forklift to move a pallet and using a forklift to lift weights. I don’t have an answer, but it’s a crucial question.
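(A concrete footnote on that ROS port: a ROS1-to-ROS2 conversion changes both the build metadata, which is where our failure was hiding, since the package.xml must declare ament rather than catkin as the build tool, and the node API itself. Here is a minimal rclpy sketch of the API side; the node and topic names are invented for illustration and are not our AIBO code.)

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class Talker(Node):
    """Minimal ROS2 publisher. In ROS1 this was rospy.init_node()
    plus a rospy.Publisher; in ROS2 everything hangs off a Node
    object and rclpy owns the lifecycle."""

    def __init__(self):
        super().__init__('talker')  # node name, invented for this example
        self.pub = self.create_publisher(String, 'chatter', 10)  # 10 = queue depth
        self.create_timer(1.0, self.tick)  # replaces ROS1 rospy.Rate loops

    def tick(self):
        msg = String()
        msg.data = 'hello from ROS2'
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = Talker()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```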
For our blog, we’ve landed on a specific stance: use AI freely and take full responsibility. We are adopting effectively the same policy as the Linux kernel: we will use AI to produce content for this blog however we like, and we (David and Stefanie) are responsible for reviewing all generated content, ensuring compliance with licensing requirements (e.g., copyright), and taking full responsibility for the contribution. The robots are helping, but we’re in the driver’s seat.