It’s been a while since I’ve posted here, so I figured I’d make a post giving a bit of an update on what I’ve been up to.
To start off, I graduated from Harvey Mudd College and am now living in San Francisco! Right now I’m working at Cruise Automation, a self-driving car company, developing machine-learning models for predicting what people will do next.
Also, a paper that I worked on with other students at HMC has been accepted to NeurIPS 2018! The project was a collaboration between HMC and Intel Corporation, with the goal of designing a method for separating and locating sound events in a large, noisy outdoor space. We ended up building a probabilistic model of sound propagation and inter-microphone phase differences, and performed separation and localization using approximate Bayesian maximum a posteriori inference under that model.
The basic idea that makes this work is that, when two microphones are placed very close to one another, sounds arrive at one microphone slightly before the other, with a delay that depends on the angle the source makes with the microphone pair. Since the microphones are close together, this time delay shows up as a phase shift in the audio signal, which can be detected using the short-time Fourier transform. If you have multiple microphone pairs, you can predict what combination of phase shifts you would see for each possible source location (the probabilistic model). You can then “work backwards” (Bayesian inference) to figure out which sources and locations are most likely (maximum a posteriori) to have produced the recordings you made, using some assumptions about source and location smoothness to ensure you get a good separation. If that sounds interesting to you, you can find more information here (the paper and poster should be available there in the next few weeks).
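To make the delay-to-phase relationship concrete, here is a minimal sketch in Python. It assumes a single far-field source at a given angle relative to a two-microphone pair separated by `mic_spacing` meters; the function names and the direct cross-spectrum comparison are illustrative, not the actual inference procedure from the paper.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second, in air at room temperature

def expected_phase_difference(freqs, mic_spacing, source_angle):
    """Expected inter-microphone phase difference for a far-field source.

    freqs: STFT bin frequencies in Hz
    mic_spacing: distance between the two microphones, in meters
    source_angle: source angle relative to the array broadside, in radians
    """
    # Sound arrives at one microphone slightly before the other; the delay
    # depends on the spacing projected along the source direction.
    delay = mic_spacing * np.sin(source_angle) / SPEED_OF_SOUND
    # A pure time delay appears as a phase shift that grows linearly
    # with frequency.
    return -2.0 * np.pi * freqs * delay

def observed_phase_difference(stft_a, stft_b):
    """Phase difference actually measured between two channels' STFTs."""
    # Per time-frequency bin: the angle of the cross-spectrum.
    return np.angle(stft_a * np.conj(stft_b))
```

Comparing the observed phase differences against the expected pattern for each candidate location is, loosely speaking, the “work backwards” step; the paper does this inside a full probabilistic model rather than by direct matching.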
I’ll be presenting this paper as a poster at NeurIPS 2018 in Montreal in December. Looking forward to meeting people there and talking about my work in more detail!
Recently, I submitted a paper titled “Learning Graphical State Transitions” to the ICLR conference. (Update: My paper was accepted! I will be giving an oral presentation at ICLR 2017 in Toulon, France. See here for more details.) In it, I describe a new type of neural network architecture, called a Gated Graph Transformer Neural Network, that is designed to use graphs as an internal state. I demonstrate its performance on the bAbI tasks as well as on some other tasks with complex rules. While the main technical details are provided in the paper, I figured it would be worthwhile to describe the motivation and basic ideas here.
Note: Before I get too far into this post, if you have read my paper and are interested in replicating or extending my experiments, the code for my model is available on GitHub.
Another thing that I’ve noticed is that almost all of the papers on machine learning are about successes. This is an example of an overall trend in science to focus on positive results, since they are the most interesting. But it can be very useful to discuss negative results as well. Learning what doesn’t work is in some ways just as important as learning what does, and can save others from repeating the same mistakes. During my development of the GGT-NN, I went through multiple iterations of the model, all of which failed to learn anything interesting. The version of the model that worked was thus the product of an extended cycle of trial and error. In this post I will try to describe the failed models as well, and give my speculative theories for why they may not have been successful.
This summer, I had the chance to do research at Mudd as part of the Intelligent Music Software team. Every year, under the advisement of Prof. Robert Keller, members of the team work on computational jazz improvisation, usually in connection with the Impro-Visor software tool. Currently, Impro-Visor uses a grammar-based approach for most of its generation capabilities. The goal this summer was to try to integrate neural networks into the generation process.
At Harvey Mudd, I’m taking a very interesting class called “Discrete and Computational Geometry”, a special topics course taught by Professor Satyan Devadoss. In lieu of normal problem sets, we do a series of freeform group projects; the basic instructions are “go make something related to this class”. Here are a couple of the projects my group made:
This is the second part of a three-part writeup on my augmented Shanahan project. You may want to start with part one. For some background math, you may also want to read the post about my 5C Hackathon entry.
From the detection and tracking algorithms, I was able to generate a 2D perspective transformation matrix that mapped coordinates on the wall to pixels in the image. In order to do 3D rendering, however, I needed a 3D transformation matrix. The algorithm I ended up using is based on this post on Stack Exchange, this gist on GitHub, and this Wikipedia page. I will attempt to describe how the algorithm works in the rest of this post.
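For concreteness, here is a minimal sketch of what applying that 2D transformation looks like, assuming it is a standard 3×3 homography `H` mapping wall coordinates to image pixels (the function name and conventions here are illustrative, not taken from my actual code):

```python
import numpy as np

def wall_to_pixel(H, wall_point):
    """Map a 2D wall coordinate to an image pixel via a 3x3 homography H."""
    x, y = wall_point
    # Work in homogeneous coordinates: append w = 1, multiply by H, then
    # divide by the resulting w to undo the perspective scaling.
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

The division by `w` is what lets a 3×3 matrix encode perspective; the challenge described in the rest of this post is lifting this 2D mapping into a full 3D transformation suitable for rendering.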
In order to actually use my whiteboard drawing bot, I needed a way to generate the commands to be run by the Raspberry Pi. Essentially, this came down to figuring out how to translate a shape into a series of increases and decreases in the positions of the two stepper motors.
The mechanical design of the contraption can be abstracted as simply two lines: one bound to the origin point and one attached to the end of the first. The position of the pen is then the endpoint of the second arm.
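In code, the forward kinematics of this abstraction might look like the following sketch (the names and units here are hypothetical; angles are in radians, with the second arm's angle measured relative to the first, as described below):

```python
import math

def pen_position(theta1, theta2, len1, len2):
    """Pen location for the two-arm abstraction described above."""
    # Elbow: the end of the first arm, anchored at the origin.
    elbow_x = len1 * math.cos(theta1)
    elbow_y = len1 * math.sin(theta1)
    # Pen: the end of the second arm, whose absolute angle is theta1 + theta2
    # because the second arm's angle is relative to the first's.
    pen_x = elbow_x + len2 * math.cos(theta1 + theta2)
    pen_y = elbow_y + len2 * math.sin(theta1 + theta2)
    return pen_x, pen_y
```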
The arms cannot move freely, however. The first arm can only sit at discrete intervals, dividing the circle into the same number of steps as the stepper motor has. The second arm can only be in half that many positions: it has the same interval between steps, but can only sweep out half a circle. Furthermore, the second arm's angle is relative to the first's: if each step is 5 degrees and the first arm is on its third step (15 degrees), then the second arm's step zero points along the first arm's direction, also at 15 degrees.
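These constraints are easy to capture when converting step counts into the angles used above. Here is a small sketch under the same assumptions, with a hypothetical 72-step motor (so 5 degrees per step):

```python
import math

STEPS_PER_REV = 72  # hypothetical stepper resolution: 360 / 72 = 5 degrees

def arm_angles(step1, step2):
    """Convert motor step counts to (theta1, theta2) in radians.

    step1: first arm's step index, 0 <= step1 < STEPS_PER_REV (full circle)
    step2: second arm's step index, 0 <= step2 < STEPS_PER_REV // 2 (half circle)
    theta2 is measured relative to the first arm, so step2 == 0 means the
    second arm points along the first arm's direction.
    """
    step_size = 2.0 * math.pi / STEPS_PER_REV
    return step1 * step_size, step2 * step_size

# With 5-degree steps and the first arm on step 3 (15 degrees), the second
# arm's step zero also points at 15 degrees in absolute terms:
theta1, theta2 = arm_angles(3, 0)
assert math.isclose(math.degrees(theta1 + theta2), 15.0)
```

These angles can then be fed directly into the pen_position sketch above to recover where the pen ends up for any pair of step counts.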