hexahedria

Daniel Johnson's personal fragment of the web

Transcend 3D Window Switcher – TreeHacks

A few weekends ago I went to the TreeHacks hackathon at Stanford. It was a lot of fun, and was probably the best hackathon I’ve been to so far, so good job Stanford people! While there, I got a chance to work with a Meta 1 developer kit. I used it to make a 3D augmented reality replacement for alt-tab, where the windows on your computer would fly out into 3D space.

Read more...

Augmented Shanahan: Game Logic

This is the third part of a three-part writeup on my augmented Shanahan project. If you would like to read about the algorithms behind the augmented reality, you should probably start with part one.

In this final part of the writeup, I will be discussing the augmented Shanahan snake game itself. The game is made of two parts: the client app, which runs on the mobile device, and the server app, which runs on my web server. The client is responsible for all of the detection, tracking, and extrapolation that I discussed in the first two parts. It also accepts user input and renders the current game state. The server receives the user input from each client device, advances the snake game, and then broadcasts the updated game state to each client.
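For a sense of the architecture, here is a minimal sketch of the server's role, assuming a WebSocket transport (the post doesn't specify which transport the project actually used); the state layout and the helpers applyInput and tick are illustrative stand-ins, not the project's real code.

    // Hypothetical authoritative game server: clients send only their
    // input, and the server advances the game and broadcasts the result.
    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    let gameState = { snakes: {} };  // placeholder state layout

    function applyInput(state, input) {
      // Record the latest direction requested for that player's snake.
      const snake = state.snakes[input.id];
      if (snake) snake.dir = input.dir;
    }

    function tick(state) {
      // Move each snake one cell in its current direction.
      for (const s of Object.values(state.snakes)) {
        s.x += s.dir.x;
        s.y += s.dir.y;
      }
    }

    wss.on('connection', (socket) => {
      socket.on('message', (data) => {
        applyInput(gameState, JSON.parse(data.toString()));
      });
    });

    setInterval(() => {
      tick(gameState);
      const update = JSON.stringify(gameState);
      // Broadcast the updated state to every connected client, which
      // then renders it on top of its own camera view.
      for (const client of wss.clients) {
        if (client.readyState === WebSocket.OPEN) client.send(update);
      }
    }, 100);

Keeping the server authoritative like this means every device sees the same game, no matter how its own camera view of the wall differs.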

Read more...

Augmented Shanahan: Extrapolation

This is the second part of a three-part writeup on my augmented Shanahan project. You may want to start with part one. For some background math, you may also want to read the post about my 5C Hackathon entry.

From the detection and tracking algorithms, I was able to generate a 2D perspective transformation matrix that mapped coordinates on the wall to pixels in the image. In order to do 3D rendering, however, I needed a 3D transformation matrix. The algorithm I ended up using is based on this post on Stack Exchange, this gist on GitHub, and this Wikipedia page. I will attempt to describe how the algorithm works in the rest of this post.
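For readers who want the gist without following those links, here is a rough sketch of the standard decomposition those sources describe, in my own notation and assuming a simple pinhole camera with focal length f and principal point (cx, cy). Since the wall is a plane, the homography H satisfies H ∝ K [r1 r2 t], so multiplying by K⁻¹ and normalizing recovers the rotation and translation:

    // Sketch only: recover a 3D pose from a wall-to-image homography H,
    // where [u, v, 1]^T ~ H [x, y, 1]^T for points on the wall plane.
    function poseFromHomography(H, f, cx, cy) {
      const Kinv = [[1 / f, 0, -cx / f],
                    [0, 1 / f, -cy / f],
                    [0, 0, 1]];
      const col = (M, j) => [M[0][j], M[1][j], M[2][j]];
      const mul = (A, v) =>
        A.map((row) => row[0] * v[0] + row[1] * v[1] + row[2] * v[2]);
      const m1 = mul(Kinv, col(H, 0));
      const m2 = mul(Kinv, col(H, 1));
      const m3 = mul(Kinv, col(H, 2));

      // The first two columns should be unit-length rotation columns, so
      // the average of their norms recovers the unknown projective scale.
      const norm = (v) => Math.hypot(v[0], v[1], v[2]);
      const s = 2 / (norm(m1) + norm(m2));
      const r1 = m1.map((x) => x * s);
      const r2 = m2.map((x) => x * s);
      // The third rotation column is the cross product of the first two.
      const r3 = [r1[1] * r2[2] - r1[2] * r2[1],
                  r1[2] * r2[0] - r1[0] * r2[2],
                  r1[0] * r2[1] - r1[1] * r2[0]];
      const t = m3.map((x) => x * s);
      // A full version would also flip the sign if t[2] < 0 (so the wall
      // sits in front of the camera) and re-orthonormalize [r1 r2 r3].
      return { R: [r1, r2, r3], t };  // R given as its three columns
    }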

Read more...

Augmented Shanahan: Detection

When I was thinking of ideas for the 5C Hackathon back in November, I thought it would be really cool to do augmented reality on buildings. They are large, somewhat unnoticed features of the environment, and they are relatively static, neither moving nor changing much over time. They seemed like the perfect thing to overlay with augmented reality. And since they are so large, they open up the possibility of multiple people interacting with the same wall simultaneously.

Note: This is mostly an explanation of how I implemented detection. If you prefer, you can skip ahead to the 3D extrapolation and rendering in part 2, or to the game logic in part 3.

If you instead want to try it out on your own, click here. The augmented reality works on non-iOS devices with a back-facing camera and a modern browser, but you have to be in the Shanahan building to get the full effect. If you deny camera access or open the page on an incompatible device, it will fall back to a static image of the building instead of the live camera feed. It works best in evenly-lit conditions. Image quality and detection may be slightly better in Firefox than in Chrome, which doesn’t autofocus very well.
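The fallback behavior is roughly what you would get from a sketch like the one below, written against the modern getUserMedia API (the original code likely used the prefixed variants available at the time); the element ids and 'building.jpg' are placeholders, not the demo's actual assets.

    // Request the back-facing camera; on denial or failure, swap in a
    // static photo of the building so the demo still works.
    const video = document.getElementById('camera');
    navigator.mediaDevices
      .getUserMedia({ video: { facingMode: 'environment' } })
      .then((stream) => {
        video.srcObject = stream;
        video.play();
      })
      .catch(() => {
        document.getElementById('fallback-image').src = 'building.jpg';
      });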

Read more...

Subdivision

This is an experiment I started working on many months ago, but didn’t actually finish until recently. It was based on some doodles I used to make, where I would draw a bunch of lines, continuing each one until it intersected with the others. The experiment creates a bunch of lines at random, and then extends them until they intersect, coloring the regions between the lines.
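A minimal sketch of the line-growing step might look like the following; the constants and details are my own guesses, and the region coloring is omitted. Each line starts as a point with a random direction and grows from both ends until it hits another line or the edge of the canvas.

    // Grow random lines until they collide with each other or the canvas.
    const W = 800, H = 600, STEP = 2, COUNT = 40;

    // Standard proper-intersection test using signed cross products.
    function segmentsIntersect(p1, p2, p3, p4) {
      const cross = (o, a, b) =>
        (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
      return ((cross(p3, p4, p1) > 0) !== (cross(p3, p4, p2) > 0)) &&
             ((cross(p1, p2, p3) > 0) !== (cross(p1, p2, p4) > 0));
    }

    const lines = [];
    for (let i = 0; i < COUNT; i++) {
      const angle = Math.random() * Math.PI;
      const origin = { x: Math.random() * W, y: Math.random() * H };
      lines.push({
        a: { ...origin }, b: { ...origin },  // the two growing endpoints
        dx: Math.cos(angle) * STEP, dy: Math.sin(angle) * STEP,
        growA: true, growB: true,
      });
    }

    // Try to extend one endpoint; return false once it gets blocked.
    function grow(line, end, sign) {
      const next = { x: end.x + sign * line.dx, y: end.y + sign * line.dy };
      if (next.x < 0 || next.x > W || next.y < 0 || next.y > H) return false;
      const other = end === line.a ? line.b : line.a;
      for (const o of lines) {
        if (o !== line && segmentsIntersect(other, next, o.a, o.b)) return false;
      }
      end.x = next.x;
      end.y = next.y;
      return true;
    }

    let growing = true;
    while (growing) {
      growing = false;
      for (const line of lines) {
        if (line.growA) growing = (line.growA = grow(line, line.a, -1)) || growing;
        if (line.growB) growing = (line.growB = grow(line, line.b, +1)) || growing;
      }
    }

Once every endpoint is stuck, the segments partition the canvas into regions, which is where the coloring step would pick up.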

Read more...

Hackathon writeup: Augmented Reality Snake

I’ve been at Harvey Mudd College for almost three months now, and I’m enjoying it a lot. So far, I’ve participated in two hackathons: first, MuddHacks, a hardware hackathon, and then last weekend, the 5C Hackathon, a software hackathon that spans the 5 Claremont Colleges (Harvey Mudd, Pomona, Scripps, Pitzer, Claremont McKenna). In this post, I’m going to be talking about the project I made for the second of the two hackathons.

But wait! You haven’t talked about the first one yet! You can’t do a part 2 without doing a part 1! I’d like to tell you all about how I got caught in a time machine that someone built at the hardware hackathon, and so my past self posted about the first hackathon in this timeline’s future, but then I’d have to get the government to come brainwash you to preserve national security. Okay, I’d be lying. So instead I’ll explain that I haven’t actually finished the project from the first hackathon yet. Both of these were 12-hour hackathons, and about 10.5 hours into the first one, I realized that our chosen camera angle introduced interference patterns that made our image analysis code completely ineffective. I’m hoping to find some time this weekend or next to finish that up, and I’ll write that project up then.

Anyways, about this project. My partner Hamzah Khan and I wanted to do some sort of computer vision/image analysis based project. Originally, we considered trying to detect certain features in an image and then using those as the basis for a game (I thought it would be really cool to play snake on the walls of HMC, which are mostly made of square bricks). But feature detection is pretty difficult, and we decided it wasn’t a good choice for a 12-hour project. Instead, we came up with the idea of doing an augmented-reality-based project, detecting a very specific marker pattern. We also wanted to do it in JavaScript, because we both knew JavaScript pretty well and also wanted to be able to run it on mobile devices.

Read more...
