hexahedria

Daniel Johnson's personal fragment of the web

Posts tagged "image processing"

Augmented Shanahan: Detection

When I was thinking of ideas for the 5C Hackathon back in November, I thought it would be really cool to do augmented reality on buildings. They are large, somewhat unnoticed features of the environment, and they stay essentially static over time. They seemed like the perfect thing to overlay with augmented reality. And since they are so large, that opens up the possibility for multiple people to interact with the same wall simultaneously.

Note: This is mostly an explanation of how I implemented detection. If you prefer, you can skip ahead to the 3D extrapolation and rendering in part 2, or to the game logic in part 3.

If you instead want to try it out on your own, click here. The augmented reality works on non-iOS devices with a back-facing camera and a modern browser, but you have to be in the Shanahan building to get the full effect. If you deny camera access or visit on an incompatible device, the page falls back to a static image of the building instead of using the device camera. It works best in evenly-lit conditions. Image quality and detection may be slightly better in Firefox than in Chrome, which doesn’t autofocus very well.

Read more...

Hackathon writeup: Augmented Reality Snake

I’ve been at Harvey Mudd College for almost three months now, and I’m enjoying it a lot. So far, I’ve participated in two hackathons: first, MuddHacks, a hardware hackathon, and then last weekend, the 5C Hackathon, a software hackathon that spans the 5 Claremont Colleges (Harvey Mudd, Pomona, Scripps, Pitzer, Claremont McKenna). In this post, I’m going to be talking about the project I made for the second of the two hackathons.

But wait! You haven’t talked about the first one yet! You can’t do a part 2 without doing a part 1! I’d like to tell you all about how I got caught in a time machine that someone built in the hardware hackathon, and so my past self posted about the first hackathon in this timeline’s future, but then I’d have to get the government to come brainwash you to preserve national security. Actually, I’d be lying. So instead I’ll explain that I haven’t actually finished the project from the first hackathon yet. Both of these were 12-hour hackathons, and about 10.5 hours into the first one, I realized that our chosen camera angle introduced interference patterns that made our image analysis code completely ineffective. I’m hoping to find some time this weekend or next to finish that up, and I’ll write that project up then.

Anyways, about this project. My partner Hamzah Khan and I wanted to do some sort of computer vision/image analysis project. Originally, we considered trying to detect certain features in an image and then using those as the basis for a game (I thought it would be really cool to play snake on the walls of HMC, which are mostly made of square bricks). But feature detection is pretty difficult, and we decided it wasn’t a good choice for a 12-hour project. Instead, we came up with the idea of doing an augmented-reality project, detecting a very specific marker pattern. We also wanted to do it in Javascript, because we both knew Javascript pretty well and also wanted to be able to run it on mobile devices.
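The post doesn’t show the detection code itself, but the usual starting point for this kind of in-browser vision work is raw RGBA pixel data, like what you get from drawing a video frame to a canvas and calling `getImageData()`. Here is a minimal, hypothetical sketch (the function names and threshold are my own illustration, not the actual project code): it converts pixels to luminance and finds the bounding box of dark pixels, a crude way to locate a dark marker against a lighter wall.

```javascript
// Assumes RGBA pixel data like canvas getImageData().data returns:
// a flat array, 4 bytes per pixel, row-major order.

// Luminance of one RGBA pixel, using the standard Rec. 601 weights.
function luminance(data, i) {
  return 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
}

// Bounding box of all "dark" pixels (luminance below a threshold).
// Returns null if no pixel is dark enough.
function findDarkRegion(data, width, height, threshold = 80) {
  let minX = width, minY = height, maxX = -1, maxY = -1;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4; // byte offset of pixel (x, y)
      if (luminance(data, i) < threshold) {
        if (x < minX) minX = x;
        if (x > maxX) maxX = x;
        if (y < minY) minY = y;
        if (y > maxY) maxY = y;
      }
    }
  }
  return maxX < 0 ? null : { minX, minY, maxX, maxY };
}
```

In a browser you would feed this from a `<video>` element via `ctx.drawImage(video, 0, 0)` followed by `ctx.getImageData(0, 0, width, height).data`; a real marker detector would go further and check the shape and pattern inside the box, but thresholding is the common first step.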

Read more...
