Last updated on 06/03/2024
I’m technically Dr. Jordan Thayer. I have a doctorate in the philosophy of computer science, focusing on AI. I have a weird relationship with my doctorate. Sometimes, it’s useful to credential myself: I know a lot about this, look at the letters! Other times, it gets in the way: I can have the degree and still be wrong; I need you to call me out!
I think both reactions are born of people not being that familiar with the process. Most of us don’t do PhDs, and many of us don’t know someone with a doctorate. To paraphrase the great Tom Lehrer, “Be that as it may, some of you may have had occasion to run into PhDs and to wonder therefore how they got that way.” Allow me to illuminate by describing what, exactly, a doctorate is.
An Apprenticeship
First and foremost, a PhD is an opportunity to learn. “Of course!” I hear you exclaim. I don’t just mean the subject matter, though we do learn that. No, I mean we learn how to run a university. This comes in a few distinct pieces:
- Learning how to teach and mentor
- Learning how to conduct novel research
- Learning how to care for and feed an institution
The subject of the PhD is critical. More important, though, is learning how to do the above. It’s so rare that anyone needs your specific, super-narrow expertise. More often, they’re relying on your ability to conduct novel research. Novel research isn’t useful unless you can explain it to others. And that, in a nutshell, is the job that PhDs are most often hired to do. We’re expected to learn new things and share them with others.
A Trial
Grad school is also a bit of a trial. The pay is lousy. The hours are long. The expectations are high. Lots of people want those positions, so you’re pretty replaceable. And, as in a fraternity or a social club, shared suffering builds camaraderie. When I talk to other folks who went through grad school, I know we have a large common set of experiences to talk about. From the outside, it’s a big signifier of commitment. Anyone with a PhD has shown a willingness to go to the mat to solve a problem.
Piled Higher and Deeper
When I look back on my doctoral program, I’m not proud of the accolades, the grants, or the certificates. The thing that matters the most to me is that I added my little rock to the pile of human knowledge. Early on, I found a question I didn’t know how to answer. It turned out that it was interesting enough for the government to pay for me to study it. Then I learned that, well, no one really knew how to answer that question. I was lucky enough to get to spend five years looking for an answer. Then, I took what I found and told everyone about it.
Six years boiled down to a pile of publications and a footnote in a chapter of “AI: A Modern Approach”. It doesn’t sound like much, but I’m immensely proud of it. AI: AMA is the textbook that undergraduate AI courses are taught from. In some very small way, my work was important enough to be presented to every college student who studies AI.
Well, What Was Your PhD?
So, what was that work about? Well…
There’s a class of problems that are interesting both commercially and mathematically. These are called NP-hard problems, and you’ve encountered some of them in your life: Sudoku, logistics for shipping products from warehouses to homes, and a billion other things.
There’s a type of algorithm you use to solve those, called heuristic search. A* is the best known of these. Well, A* is an optimal search. That means that if there’s a solution to a problem, it will find the cheapest one, and it can prove that the solution it found is the cheapest. The downside is that optimal solving is expensive. As problem sizes grow, the cost of solving them grows exponentially. For problems of interest, we would often need multiple human lifetimes to find an answer.
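To make that concrete, here’s a minimal sketch of A* in Python. The toy graph, the heuristic values, and the function names are my own illustration for this post, not code from the dissertation.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A*: expand nodes in order of f(n) = g(n) + h(n).

    neighbors(n) yields (successor, edge_cost) pairs; h(n) estimates the
    remaining cost to the goal. With an admissible h (one that never
    overestimates), the first time the goal is popped, its cost is optimal.
    """
    frontier = [(h(start), 0, start, [start])]  # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None, float("inf")

# A toy routing problem: edge costs plus a hand-made admissible heuristic.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
h = {"A": 2, "B": 2, "C": 1, "D": 0}
print(a_star("A", "D", lambda n: graph[n], lambda n: h[n]))
# (['A', 'B', 'C', 'D'], 3) -- the provably cheapest path
```

The guarantee is great; the price is that A* has to expand every node whose f-value is below the optimal cost, and on hard problems that set is enormous.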
It’s just not practical to solve every NP-hard problem optimally. However, there’s a class of algorithms that can help: bounded suboptimal search algorithms. They relax the guarantee on solution optimality in exchange for reduced compute. For example, you might say, “I’m willing to accept a solution up to 1.5 times the optimal cost if it means finding one faster.” These algorithms let you do that for any factor of optimal, called a suboptimality bound.
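The classic example of a bounded suboptimal algorithm is weighted A*: multiply the heuristic by the bound w before searching. Continuing the toy sketch above (again, my own illustration, not code from the thesis):

```python
def weighted_a_star(start, goal, neighbors, h, w=1.5):
    """Weighted A*: order expansions by g(n) + w * h(n).

    Inflating an admissible heuristic by the bound w trades quality for
    speed: the returned solution costs at most w times the optimal cost.
    """
    return a_star(start, goal, neighbors, lambda n: w * h(n))
```

The inflated heuristic pulls the search much more aggressively toward the goal, which is where the speedup comes from on large problems.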
I studied algorithms in that class. In particular, I was interested in how algorithms could learn from their own behavior during search. I thought that by imbuing algorithms with introspection, we could find answers faster. It took a minute to explore that idea, but it turns out I was right. There are quite a few ways we can use the behavior of an algorithm to improve its performance in situ.
What Do I Do Now That I’ve Defended My Thesis?
I’m in charge of the AI practice for a software product consultancy. My job is roughly to have a broad command of the body of research that’s been conducted in AI. When a colleague wants to learn about AI, I know how and what to teach. When a client comes to us with a problem, I identify what kinds of AI approaches might help solve it. My knowledge of the literature is critical to being effective and efficient. I can relate a problem to a body of research and learn both how to solve the problem in front of me and what missteps to avoid. And if the problem is truly novel? Well, I have some experience in solving those too.