Now that I’m done with my first year of college (wow!), I’ve actually had significant amounts of free time. I’ve been using some of that time to compose electronic music in Ableton Live, and I’ve managed to accumulate enough songs that I figured I should put them online somewhere. So, without further ado, my new SoundCloud account! (ta-daaa!)
So that’s pretty cool. Hope you enjoy!
(For those of you paying attention, the artwork that goes with each of the songs is actually an exported image from my MotionCells experiment. The two seem to go together quite well!)
A few weekends ago I went to the TreeHacks hackathon at Stanford. It was a lot of fun, and was probably the best hackathon I’ve been to so far, so good job Stanford people! While there, I got a chance to work with a Meta 1 developer kit. I used it to make a 3D augmented reality replacement for alt-tab, where the windows on your computer would fly out into 3D space.
This is the third part of a three-part writeup on my augmented Shanahan project. If you would like to read about the algorithms behind the augmented reality, you should probably start with part one.
In this final part of the writeup, I will be discussing the augmented Shanahan snake game itself. The game is made of two parts, the client app, which runs on the mobile device, and the server app, which runs on my website server. The client is responsible for doing all of the detection, tracking, and extrapolation that I discussed in the first two parts. It also accepts user input, and renders the current game state. The server receives the user input from each client device, processes the snake game, and then broadcasts the updated game states to each client.
This is the second part of a three-part writeup on my augmented Shanahan project. You may want to start with part one. For some background math, you may also want to read the post about my 5C Hackathon entry.
From the detection and tracking algorithms, I was able to generate a 2D perspective transformation matrix that mapped coordinates on the wall to pixels in the image. In order to do 3D rendering, however, I needed a 3D transformation matrix. The algorithm I ended up using is based on this post on Stack Exchange, this gist on GitHub, and this Wikipedia page. I will attempt to describe how the algorithm works in the rest of this post.
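For a concrete (if simplified) reference point, here is a sketch of the standard planar-homography pose decomposition, one common way to recover a 3D pose from a 2D homography when the camera intrinsics K are known. It is not necessarily the exact algorithm described in the rest of this post, and the helper names are my own:

```js
// Sketch of the standard planar-homography pose decomposition.
// Matrices are stored as arrays of rows.
const col = (M, j) => [M[0][j], M[1][j], M[2][j]];
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const norm = (a) => Math.sqrt(dot(a, a));
const scale = (a, s) => a.map((x) => x * s);
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const matMul = (A, B) => A.map((row) => row.map((_, j) => dot(row, col(B, j))));

function invert3x3(M) {
  const [[a, b, c], [d, e, f], [g, h, i]] = M;
  const det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
  return [
    [(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
    [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
    [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det],
  ];
}

// H maps wall coordinates (a plane, z = 0) to image pixels, so K^-1 * H is
// proportional to [r1 r2 t]. Normalizing by the length of the first column
// recovers the scale, and the third rotation column is r1 x r2.
function poseFromHomography(K, H) {
  const M = matMul(invert3x3(K), H);
  const h1 = col(M, 0), h2 = col(M, 1), h3 = col(M, 2);
  const lambda = 1 / norm(h1); // may need a sign flip so the wall ends up in front of the camera
  const r1 = scale(h1, lambda);
  const r2 = scale(h2, lambda);
  const r3 = cross(r1, r2); // in practice you would re-orthonormalize [r1 r2 r3]
  const t = scale(h3, lambda);
  return { rotationColumns: [r1, r2, r3], translation: t };
}
```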
When I was thinking of ideas for the 5C Hackathon back in November, I thought it would be really cool to do augmented reality on buildings. They are large, somewhat unnoticed features of the environment, and they are relatively static, not moving or changing over time. They seemed like the perfect thing to overlay with augmented reality. And since they are so large, they open up the possibility of multiple people interacting with the same wall simultaneously.
Note: This is mostly an explanation of how I implemented detection. If you prefer, you can skip ahead to the 3D extrapolation and rendering in part 2, or to the game logic in part 3.
If you instead want to try it out on your own, click here. The augmented reality works on non-iOS devices with a back-facing camera and a modern browser, but you have to be in the Shanahan building to get the full effect. If you deny camera access or visit on an incompatible device, the page falls back to a static image of the building instead of the device camera. It works best in evenly-lit conditions, and image quality and detection may be slightly better in Firefox than in Chrome, which doesn’t autofocus very well.
This is an experiment I started working on many months ago, but didn’t actually finish until recently. It was based on some doodles I used to make, where I would draw a bunch of lines, continuing each one until it intersected the others. The experiment does the same thing: it creates a bunch of lines at random, extends them until they intersect, and then colors the regions between the lines.
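If you’re curious, a minimal sketch of the “extend until it hits something” step might look like this (castRay and the segment format are just for illustration, not the experiment’s actual code):

```js
// Sketch: cast a ray from (px, py) in direction (dx, dy) and return the
// nearest intersection with any existing segment, or null if it hits nothing.
function castRay(px, py, dx, dy, segments) {
  let best = Infinity;
  for (const [x1, y1, x2, y2] of segments) {
    // Solve p + t*d = a + u*(b - a) for the ray parameter t and segment parameter u.
    const rx = x2 - x1, ry = y2 - y1;
    const denom = dx * ry - dy * rx;
    if (Math.abs(denom) < 1e-9) continue; // parallel: no intersection
    const t = ((x1 - px) * ry - (y1 - py) * rx) / denom;
    const u = ((x1 - px) * dy - (y1 - py) * dx) / denom;
    if (t > 1e-9 && u >= 0 && u <= 1 && t < best) best = t;
  }
  return best === Infinity ? null : [px + best * dx, py + best * dy];
}
```

Each new line then gets clipped to the nearest hit (or the canvas edge), and the resulting segments bound the regions that get colored.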
I’ve been at Harvey Mudd College for almost three months now, and I’m enjoying it a lot. So far, I’ve participated in two hackathons: first, MuddHacks, a hardware hackathon, and then last weekend, the 5C Hackathon, a software hackathon that spans the 5 Claremont Colleges (Harvey Mudd, Pomona, Scripps, Pitzer, Claremont McKenna). In this post, I’m going to be talking about the project I made for the second of the two hackathons.
But wait! You haven’t talked about the first one yet! You can’t do a part 2 without doing a part 1! I’d like to tell you all about how I got caught in a time machine that someone built at the hardware hackathon, so that my past self posted about the first hackathon in this timeline’s future, but then I’d have to get the government to come brainwash you to preserve national security, and besides, I’d be lying.
So instead I’ll explain that I haven’t actually finished the project from the first hackathon yet. Both of these were 12-hour hackathons, and about 10.5 hours into the first one, I realized that our chosen camera angle introduced interference patterns that made our image analysis code completely ineffective. I’m hoping to find some time this weekend or next to finish that up, and I’ll write that project up then.
In CS 42 we are starting a unit on low-level computing. As a part of that, we are using Logisim, a program that allows you to build virtual circuits. Of course, after experimenting with it, I had some ideas. So, without further ado, my modular Conway’s Game of Life implementation in Logisim!
“Modular”? How so? Basically, I designed the circuit as a grid of cells. Each cell is a square subcircuit, and the cells connect to their neighbors along their edges. There is also a clock signal that is propagated through the grid to synchronize the updates.
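In JavaScript terms (not Logisim, obviously), the combinational logic each cell subcircuit computes on a clock tick is roughly:

```js
// Conway's rule for one cell: alive and the neighbor values are 0/1 wires.
function nextCellState(alive, neighbors) {
  const count = neighbors.reduce((sum, n) => sum + n, 0); // sum of the 8 neighbor wires
  return count === 3 || (alive && count === 2) ? 1 : 0;
}
```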
I’m really happy with how this experiment turned out. It’s a clock that spells out the time with squares, and then shifts those squares into their new positions whenever the time changes. It draws the time text into a hidden tiny canvas element at each minute, then uses getImageData to extract the individual pixels. Any pixel that has been drawn with an alpha > 0.5 is set as a destination for the next squares animation. The animations themselves are performed using d3.js.
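A rough sketch of that pixel-sampling step, with the function name and dimensions made up for illustration:

```js
// Sketch: render the time string into an offscreen canvas and collect the
// coordinates of every pixel whose alpha exceeds 0.5; each coordinate becomes
// a destination square for the next animation.
function timeToTargets(timeText, width, height) {
  const canvas = document.createElement('canvas'); // never added to the page
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.font = `${height}px sans-serif`;
  ctx.textBaseline = 'top';
  ctx.fillText(timeText, 0, 0);

  const { data } = ctx.getImageData(0, 0, width, height);
  const targets = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const alpha = data[(y * width + x) * 4 + 3] / 255; // alpha channel of pixel (x, y)
      if (alpha > 0.5) targets.push([x, y]);
    }
  }
  return targets;
}
```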
The picture above is of the black-on-white version. There is also a grey-on-black version, if you prefer that color scheme.
After completing the basic design and software for the whiteboard drawing bot, I decided to make an interactive simulator and shape editor to allow people to generate their own commands. I thought it would be cool to share it as well, in case other people wanted to play with the stroking/filling algorithms or use it to run their own drawing robots or do something else with it entirely.
For the simulator, I wrote a parser for the command sequence, and then animated the commands being drawn out onto the screen. The parser is written with PEG.js, which I’ll be discussing a bit later. The parameters for the generation and rendering are controlled using DAT.gui, and the drawing itself is done with two layered canvases: the bottom one holds the actual drawing, which persists from frame to frame, and the top one renders the arms, and is cleared and redrawn each frame. I separated them because I did not want to have to re-render the entire drawing every time the simulator drew anything new.
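A minimal sketch of that two-layer setup (the element ids and the drawPenSegment/drawArms helpers are hypothetical stand-ins for the real rendering code):

```js
// Two stacked canvases: "drawing" persists from frame to frame, while "arms"
// is cleared and redrawn on every frame on top of it.
const drawingCtx = document.getElementById('drawing').getContext('2d');
const armsCtx = document.getElementById('arms').getContext('2d');

// Hypothetical stand-ins for the real stroke and arm rendering.
function drawPenSegment(ctx) {}
function drawArms(ctx) {}

function frame() {
  // Only the newest bit of the drawing is added; old strokes persist for free.
  drawPenSegment(drawingCtx);

  // The arm overlay is transient, so wipe it and redraw it from scratch.
  armsCtx.clearRect(0, 0, armsCtx.canvas.width, armsCtx.canvas.height);
  drawArms(armsCtx);

  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```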