Qualitative Simulation of Swine Production
Lessons from Matt Mason’s Undergraduate Thesis
More than a decade ago, I ran into Jerry Sussman’s office, bursting with excitement because I had met Matt Mason at a conference. I was a lowly postdoc, and Matt was the director of the Robotics Institute at Carnegie Mellon University. Matt and I had lunch together at Chez Ashton[1] in Quebec City! Jerry stores all the theses of all his students on his bookshelves, so he immediately pulled out Matt’s undergraduate thesis, entitled “Qualitative Simulation of Swine Production,”[2] written in 1976. For some reason, this work was never stored as a tech report at MIT, so I scanned it and sent it back to Matt - but more than a decade later, it still hasn’t made it into the academic record.
In these deep, dark days, it is instructive to look backwards, and Matt’s thesis is a wonderful example of “Good Old Fashioned AI.” He uses a domain-specific language to simulate a concrete domain using lifted symbolic expressions in an expert-system approach. Specifically, he models swine production, building on his experience at his family’s pig farm. Matt writes, “The greatest difficulty in writing the hog-farm simulation was the representation of time,” pointing to an early recognition of the importance of space and time in modeling real-world problems.

In fact, he is gesturing at an often misunderstood feature of human language, which is that it can express both goals and actions or trajectories. Actions or trajectories are what LLM approaches to language understanding output, but a goal is often what a person means. For example, consider a toy problem such as “Go to the red room.” The robot might need to open a door to go to the red room. Unfortunately, the door is locked, and it needs to find a key. But the lock has seized, so it needs to find WD40 to dissolve the rust in the lock. There is no WD40, so now it is on its way to the hardware store, but to get there it needs to find the car keys, all to get into the red room. (Not that this happened to me recently...) A goal specifies an end state, and it is the robot’s job to figure out how to achieve that state; it may need to take arbitrary actions to be successful. There is a stack involved. In contrast, an action such as “drive 1 meter north” translates more directly to a motor command (but of course this is just a goal at another level, specifying a target for a motor controller to achieve relative to the odometry sensor). Similarly, Matt’s thesis specifies desired end states and an implicit planning tree to connect start states to end states in order to answer questions about the simulation.
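To make the stack concrete, here is a minimal Python sketch of backward chaining on goals. To be clear, this is not code from the thesis: the METHODS table, the fact strings, and the achieve function are all invented for illustration, and a real planner would search over alternative methods rather than follow one fixed recipe.

# Each method says: to achieve this goal, first achieve these subgoals.
# Backward chaining pushes the subgoals onto a stack (here, the call stack).
METHODS = {
    "in(red_room)": ["open(door)"],
    "open(door)": ["have(key)", "lock_turns"],
    "lock_turns": ["have(wd40)"],
    "have(wd40)": ["at(hardware_store)"],
    "at(hardware_store)": ["have(car_keys)"],
}

def achieve(goal, facts, depth=0):
    """Recursively achieve `goal`, printing the stack of subgoals as it grows."""
    if goal in facts:
        return facts
    print("  " * depth + "need:", goal)
    for subgoal in METHODS.get(goal, []):
        facts = achieve(subgoal, facts, depth + 1)
    print("  " * depth + "done:", goal)
    return facts | {goal}   # pretend the primitive action for `goal` succeeded

achieve("in(red_room)", facts={"have(car_keys)"})

Running it prints the growing stack of subgoals - the key, the seized lock, the WD40, the trip to the hardware store - before the original goal of reaching the red room is finally marked done.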
A second feature of this thesis is its use of lifted variables for pattern matching and inference. For example, one of Matt’s rules is:
(law vet-cost ((v* vet) hogs rate cost)
((= (hogs-present !?v*) !>hogs)
(= (rate !?v*) !>rate)
(= (cost !?v*) !>cost))
(equation 'cost '(sn&* hogs rate) ))
This law introduces a new equation for the vet’s cost, calculated by multiplying the number of hogs by the vet’s rate. The law is triggered once its associated variables are defined: hogs-present, rate, and cost. This approach foreshadows STRIPS-style planning, which works via declared preconditions and effects. Most of the existing work on behavior cloning and large behavior models uses skills parameterized with language or images, that is, language-conditioned or image-conditioned skills. Yet many of the places we want our robots to integrate into rely on formal, structured tasks, such as fulfilling an order from a website or assembling a kit for the next model coming down the assembly line. So, in addition to language-conditioned and goal-conditioned tasks, we need skills that take formal parameters and make promises about the entire parameter space.
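Here is a rough Python analogue of that precondition-and-effect pattern. It is a sketch, not STRIPS itself and not Matt’s DSL: the Operator class, the predicate strings, and the way I have rendered the vet-cost law are all my own illustrative choices.

from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset   # facts that must hold before the operator fires
    effects: frozenset         # facts added once the operator fires

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        return state | self.effects

# "Lifted": the facts mention a variable v rather than a particular vet;
# a grounding step would substitute each concrete vet before planning.
vet_cost = Operator(
    name="vet-cost(v)",
    preconditions=frozenset({"defined(hogs-present v)",
                             "defined(rate v)",
                             "defined(cost v)"}),
    effects=frozenset({"equation(cost v = hogs * rate)"}),
)

state = {"defined(hogs-present v)", "defined(rate v)", "defined(cost v)"}
if vet_cost.applicable(state):
    state = vet_cost.apply(state)
print(state)

The payoff of the lifted variable v is that the law is written once and can be grounded against whichever vets show up in the simulation, which is what the !?v* pattern variable appears to buy Matt in the rule above.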
Unstructured natural languages such as English can express the heights of philosophy, the intricacies of science, and the whimsy of folk tales. Formal languages, in contrast, are limited to the precisely specified grammar, syntax, and semantics of the language, plus whatever a programmer can add within those constraints. The Church-Turing thesis tells us that any programming language boils down to a Turing Machine, one way or another. We still don’t have a formal language that captures the full power and nuance of English while preserving the precision of a formal language. Yet formal languages - from Python to JAX to Linear Temporal Logic - provide powerful safety guarantees, the ability to safely and robustly compose large systems, and clearly interpretable answers and constraints. Figuring out how to make them play nice with our neural models is an ongoing challenge!
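As a small example of the kind of guarantee a formal language can state, here is a toy “globally” (always) check over a finite trace, in the spirit of Linear Temporal Logic but drastically simplified; the trace, the distance field, and the threshold are all made up.

def globally(predicate, trace):
    """True iff the predicate holds in every state of the trace
    (LTL's 'G' operator, restricted to a finite trace)."""
    return all(predicate(state) for state in trace)

# A made-up trace of robot states; the 0.3 m clearance threshold is also invented.
trace = [
    {"distance_to_obstacle": 1.8},
    {"distance_to_obstacle": 0.9},
    {"distance_to_obstacle": 0.4},
]

safe = globally(lambda s: s["distance_to_obstacle"] > 0.3, trace)
print("safety constraint satisfied:", safe)

A neural policy can propose the trajectory, while a checker like this - or a real LTL monitor - gets the final word on whether the constraint is respected.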
[1] My advisor recommended Chez Ashton. I thought it was some kind of fancy French place, but actually it was like McDonald’s but for poutine - delicious!
[2] I was excited that “Elephants Don’t Write Sonnets” was a 4-gram not present on Google before our first blog post. Neither is “Qualitative Simulation of Swine Production”!



