What I learned taking one of MIT’s hardest graduate computer science classes (besides lots of math)
This past fall, I had the rare opportunity to take a graduate engineering course at MIT without being enrolled in a degree program. I knew the class would be hard, but being a student at the best technical school in the world, if only for a couple of months, was something I had to experience. I talked to some students and recent grads, picked a class, and registered.
It’s hard to explain what the class was about. It was called “Algorithms for Inference”, and it focused on probabilistic graphical models, a framework for dealing with complicated probability distributions. There are no analogies I can use to explain what that means that would make sense to someone who hasn’t taken a graduate-level probability and statistics course. Essentially, the course covered a lot of the mathematical techniques that underlie cutting-edge artificial intelligence algorithms. Despite being esoteric, the topic has become pretty trendy in certain computer science communities. The class is also known by students to be one of the hardest graduate subjects offered by MIT’s elite Electrical Engineering & Computer Science department.
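If you want a concrete flavor of what a probabilistic graphical model looks like, here’s a minimal sketch of the classic rain/sprinkler/wet-grass network. The structure and probabilities are made up for illustration; this isn’t an example from the course, just the simplest possible case of the kind of inference the class studies.

```python
# A tiny Bayesian network: Rain -> GrassWet <- Sprinkler.
# All probabilities below are illustrative, not from the course.

# Prior probabilities of the two root causes.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}

# Conditional probability table: P(GrassWet=True | Rain, Sprinkler).
P_wet = {
    (True, True): 0.99,
    (True, False): 0.90,
    (False, True): 0.80,
    (False, False): 0.05,
}

def posterior_rain_given_wet():
    """P(Rain=True | GrassWet=True), by brute-force enumeration."""
    # Joint P(Rain=r, GrassWet=True), marginalizing out Sprinkler.
    joint = {}
    for r in (True, False):
        joint[r] = sum(
            P_rain[r] * P_sprinkler[s] * P_wet[(r, s)]
            for s in (True, False)
        )
    # Condition on the evidence by normalizing.
    return joint[True] / (joint[True] + joint[False])

print(round(posterior_rain_given_wet(), 3))  # prints 0.645
```

Brute-force enumeration like this blows up exponentially as the network grows; much of the course is about algorithms (belief propagation, variable elimination, sampling) that exploit the graph structure to do this kind of computation efficiently.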
The first lecture was fun. The professor was great and rather brilliantly outlined the course and its significance in the larger field of artificial intelligence and machine learning. It wasn’t until I sat down with the first problem set that I realized just how much trouble I was in. I remember spending an entire Saturday on the first question and barely making progress. My old college textbooks and Google searches were all but useless. After hitting the library and getting help from the TAs and my PhD co-workers, I finally started grasping the fundamentals. But things were going painfully slowly. If you’ve ever wondered how smart MIT students are, they’re crazy smart. Scary smart. One-in-a-thousand smart. I wouldn’t be surprised at all if the average IQ in that classroom was 140. Being so thoroughly outmatched was a new experience for me.
A few problem sets into the semester, I started to get the hang of things. Still, I had to spend about 20 hours per week outside of the classroom reading and working on problems. I came to appreciate that struggling with the workload is a fundamental learning tool. The expectations on an MIT graduate student are intense. This course packed far more content into a semester than my bachelor’s and master’s courses had. You can’t learn material at this depth by simply attending lectures and flipping through slides. Only by working through the problems, which always seem impossible at first, can you develop a profound understanding of the mathematical nuances that make these algorithms and models work. My classmates knew this and rarely complained. I found that the students weren’t just intellectually gifted but also ridiculously hard-working. I struggled at times to motivate myself, but I put in the work, managed roughly average performance, and made it through the class.
So what did I learn (besides probabilistic graphical models)?
When it comes to intellectual pursuits, striving to be the best is an impossible goal. This might sound like blasphemy to the over-achiever culture, but let’s be real. There will always be someone better than you in some way: smarter, faster, or harder working. Rather than worrying about being the biggest fish in your pond, focus instead on finding the right pond. Lots of people dream about getting a PhD at MIT because we’ve all heard stories about amazing people conducting impactful, innovative research. No one aspires to “just get by” at an elite institution. Find a place in the world where you’re challenged and need to put forth your best, but also where you’re good enough to make a real difference. And if you keep working hard and learning, who knows where you’ll end up.
This is a young field, but it’s not purely esoteric. Predictive models like these are what technologies such as Google’s self-driving cars and Facebook’s News Feed are built on, and this math will only be used more widely in the future.