hexahedria

Daniel Johnson's personal fragment of the web

A Tale of Two Hackathons, Part 2: Augmented Reality Snake


I’ve been at Harvey Mudd College for almost three months now, and I’m enjoying it a lot. So far, I’ve participated in two hackathons: first, MuddHacks, a hardware hackathon, and then last weekend, the 5C Hackathon, a software hackathon that spans the 5 Claremont Colleges (Harvey Mudd, Pomona, Scripps, Pitzer, Claremont McKenna). In this post, I’m going to be talking about the project I made for the second of the two hackathons.

But wait! You haven’t talked about the first one yet! You can’t do a part 2 without doing a part 1! I’d like to tell you all about how I got caught in a time machine that someone built at the hardware hackathon, which is why my past self will post about the first hackathon in this timeline’s future, but then I’d have to get the government to come brainwash you to preserve national security. Also, I’d be lying. So instead I’ll explain that I haven’t actually finished the project from the first hackathon yet. Both of these were 12-hour hackathons, and about 10.5 hours into the first one, I realized that our chosen camera angle introduced interference patterns that made our image analysis code completely ineffective. I’m hoping to find some time this weekend or next to finish that up, and I’ll write that project up then.

Anyways, about this project. My partner Hamzah Khan and I wanted to do some sort of computer vision/image analysis project. Originally, we considered trying to detect certain features in an image and then using those as the basis for a game (I thought it would be really cool to play Snake on the walls of HMC, which are mostly made of square bricks). But feature detection is pretty difficult, and we decided it wasn’t a good choice for a 12-hour project. Instead, we came up with the idea of doing an augmented-reality project: detecting a very specific marker pattern. We also wanted to do it in JavaScript, because we both knew JavaScript pretty well and wanted to be able to run it on mobile devices.

Read more…

Modular Conway’s Game of Life in Logisim


In CS 42 we are starting a unit on low-level computing. As a part of that, we are using Logisim, a program that allows you to build virtual circuits. Of course, after experimenting with it, I had some ideas. So, without further ado, my modular Conway’s Game of Life implementation in Logisim!

“Modular”? How so? Basically, I designed the circuit as a grid of cells. Each cell is a square subcircuit, and they connect to each other on the sides. There is also a clock signal that is propagated through the grid to synchronize the updates.
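For reference, the rule each cell’s logic implements is just Conway’s standard one. Here is a software sketch of what the circuit computes (an analogue of its behavior, not the circuit itself; the function names are made up):

```javascript
// Conway's Game of Life rule for one cell: a live cell survives with 2 or 3
// live neighbors; a dead cell becomes live with exactly 3.
function nextState(alive, liveNeighbors) {
  return liveNeighbors === 3 || (alive && liveNeighbors === 2);
}

// One synchronized update of the whole grid, mimicking the clock pulse:
// every cell reads its neighbors' *old* states before any cell changes.
function step(grid) {
  return grid.map((row, r) =>
    row.map((cell, c) => {
      let n = 0;
      for (let dr = -1; dr <= 1; dr++)
        for (let dc = -1; dc <= 1; dc++)
          if ((dr || dc) && grid[r + dr] && grid[r + dr][c + dc]) n++;
      return nextState(cell, n);
    })
  );
}
```

The clock signal in the circuit plays the role of the single `step` call here: it latches every cell’s new state simultaneously, so no cell ever sees a half-updated neighbor.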

Read more…

Shift Clock


I’m really happy with how this experiment turned out. It’s a clock that spells out the time with squares, and then shifts those squares into their new positions whenever the time changes. Each minute, it draws the time text into a hidden tiny canvas element, then uses getImageData to extract the individual pixels. Any pixel drawn with an alpha > 0.5 becomes a destination for the squares’ next animation. The animations themselves are performed using d3.js.
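The extraction step might look roughly like this (a sketch of the approach, not the clock’s actual source; `extractTargets` is a made-up name):

```javascript
// Given ImageData from a canvas (data is a flat RGBA array with one byte per
// channel), collect the (x, y) coordinates of every pixel whose alpha exceeds
// 0.5, i.e. an alpha byte above 127. These become the squares' destinations.
function extractTargets(imageData) {
  const { width, height, data } = imageData;
  const targets = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const alpha = data[(y * width + x) * 4 + 3]; // 4th byte of each RGBA pixel
      if (alpha / 255 > 0.5) targets.push([x, y]);
    }
  }
  return targets;
}
```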

The picture above is of the black-on-white version. There is also a grey-on-black version, if you prefer that color scheme. Read more…

Whiteboard Drawing Bot – Part 3: Editor


After completing the basic design and software for the whiteboard drawing bot, I decided to make an interactive simulator and shape editor to allow people to generate their own commands. I thought it would be cool to share it as well, in case other people wanted to play with the stroking/filling algorithms or use it to run their own drawing robots or do something else with it entirely.

For the simulator, I wrote a parser for the command sequence, and then animated the commands being drawn out onto the screen. The parser is written with PEG.js, which I’ll be discussing a bit later. The parameters for generation and rendering are controlled using DAT.gui, and the drawing itself is done using two layered canvases: the bottom one holds the actual drawing, which persists from frame to frame, and the top one renders the arms, which are cleared and redrawn each frame. I separated them because I did not want to re-render the entire drawing every time the simulator drew anything new. Read more…

Whiteboard Drawing Bot – Part 2: Algorithms


In order to actually use my whiteboard drawing bot, I needed a way to generate the commands to be run by the Raspberry Pi. Essentially, this came down to figuring out how to translate a shape into a series of increases and decreases in the positions of the two stepper motors.

The mechanical design of the contraption can be abstracted as simply two lines: one bound to the origin point and one attached to the end of the first. The position of the pen is then the endpoint of the second arm.

The arms cannot move freely, however. The first arm can only sit at discrete intervals, dividing the circle into the same number of positions as the stepper motor has steps. The second arm can actually only be in half that many positions: it has the same interval between steps, but can only sweep out half a circle. Furthermore, the second arm’s angle is relative to the first’s: if each step is 5 degrees and the first arm is on its third step (at 15 degrees), then the second arm’s step zero is also at 15 degrees, with its later steps continuing from there. Read more…
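Under that abstraction, recovering the pen position from the two step counts is simple trigonometry. A sketch of the forward kinematics (the function name and parameters are illustrative, not the bot’s real code):

```javascript
// Pen position for a two-arm linkage driven by stepper motors.
// stepsPerRev: discrete positions per full revolution (e.g. 72 for 5-degree steps).
// step1: first arm's step index (0 .. stepsPerRev - 1, a full circle).
// step2: second arm's step index (0 .. stepsPerRev / 2, half a circle),
//        measured relative to the first arm's angle.
function penPosition(step1, step2, len1, len2, stepsPerRev) {
  const stepSize = (2 * Math.PI) / stepsPerRev;
  const angle1 = step1 * stepSize;          // absolute angle of the first arm
  const angle2 = angle1 + step2 * stepSize; // second arm is relative to the first
  return {
    x: len1 * Math.cos(angle1) + len2 * Math.cos(angle2),
    y: len1 * Math.sin(angle1) + len2 * Math.sin(angle2),
  };
}
```

With 5-degree steps, `penPosition(3, 0, len1, len2, 72)` puts both arms at 15 degrees, matching the example above; generating commands then amounts to inverting this function to find the step pair closest to each desired pen position.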

Whiteboard Drawing Bot – Part 1


For the last couple of weeks, I have been working on creating a “whiteboard drawing bot”, a Raspberry-Pi-powered contraption that can draw shapes and text on a whiteboard. After four redesigns and about a thousand lines of code, I’m finally finished. Tada!

Anyways, now that I have finished building it, I am going to write a bit about how I did so in a few posts. For this first post, I’m going to be talking about the hardware behind the bot and a little bit of the software on the Raspberry Pi. Then, later, I’ll talk about the custom software I wrote to translate shapes and lines into commands for the Pi.

Read more…

Trying out Ableton Live

A few weeks ago, I downloaded a trial version of Ableton Live and fiddled around with it. I wrote two short songs, and I thought I may as well put them up here. So, without further ado, here are the two songs I have written:

Street Corners

Fission

I’ve been exploring some different styles that I like, so the two are quite different in content, and I haven’t really edited them that much. Even so, I think they are pretty cool. I kinda pulled the names out of thin air, so don’t expect any deep metaphors or anything. Read more…

Refraction


This is an experiment I made recently. It displays the path that light would take when refracting through variously-shaped objects, color-coded by the initial angle of emission. To do this, it casts a series of rays and uses Snell’s law to determine how each ray bends when it crosses an object’s boundary. It then iteratively casts more rays between any two rays that get too far apart or behave differently (if one hits an object and the other misses it, for instance) to get higher accuracy. The raycasting is performed in JavaScript, and then the light intensity is interpolated between rays in WebGL.
Read more…
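The core Snell’s-law step can be written as a standard vector formula (this is the textbook formulation, not necessarily the experiment’s actual source):

```javascript
// Refract a unit direction vector through a surface with unit normal
// (normal pointing against the incoming ray), using Snell's law:
// n1 * sin(theta1) = n2 * sin(theta2).
// Returns the refracted unit direction, or null on total internal reflection.
function refract(dir, normal, n1, n2) {
  const ratio = n1 / n2;
  const cosI = -(dir.x * normal.x + dir.y * normal.y); // cosine of incident angle
  const sinT2 = ratio * ratio * (1 - cosI * cosI);     // sin^2 of transmitted angle
  if (sinT2 > 1) return null;                          // total internal reflection
  const cosT = Math.sqrt(1 - sinT2);
  return {
    x: ratio * dir.x + (ratio * cosI - cosT) * normal.x,
    y: ratio * dir.y + (ratio * cosI - cosT) * normal.y,
  };
}
```

The `null` case (total internal reflection) is exactly the sort of behavioral difference between neighboring rays that triggers the adaptive subdivision described above.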

Infinite Triangles


This experiment was one of my first experiments with WebGL. The story behind this one starts with a camping trip I took with some friends. We didn’t camp in the wilderness because of logistical issues, so we ended up camping close to a beach. On the beach, people started to build sand sculptures, so I began working on a Sierpinski-esque triangle design, but with recursion in the inner triangle as well. Ultimately, a few other people ended up working on the increasingly intricate design. After the camping trip ended, the triangular fractals (affectionately called “tringles” by my friend Sarah) became a sort of running joke and obsession, and we ended up doodling them all over the place. Eventually I decided to make a version with code that would be infinite.
Read more…

QR Clock


A ridiculously useless but rather cool-looking clock. Every second, it creates a QR code that encodes the current time and date, and then animates each pixel of the QR code to its new state. It tries to move dark pixels between locations, sliding each from its old position to a new one, but if a given pixel cannot be animated with motion, it simply fades away. You’ll need a QR reader to actually use it.
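One plausible way to implement that move-or-fade logic (a sketch under assumptions; this is not the clock’s actual code) is to greedily pair each newly dark pixel with the nearest still-unclaimed dark pixel from the previous frame, and fade whatever remains unmatched:

```javascript
// Pair each target (newly dark) pixel with the nearest unused source
// (previously dark) pixel. Returns { moves, fadeOut, fadeIn }: moves slide
// from a source to a target; unmatched sources fade out, unmatched targets
// fade in. Positions are [x, y] pairs.
function planTransitions(oldDark, newDark) {
  const unused = oldDark.slice();
  const moves = [], fadeIn = [];
  for (const target of newDark) {
    let best = -1, bestDist = Infinity;
    unused.forEach(([x, y], i) => {
      const d = (x - target[0]) ** 2 + (y - target[1]) ** 2;
      if (d < bestDist) { bestDist = d; best = i; }
    });
    if (best >= 0) moves.push({ from: unused.splice(best, 1)[0], to: target });
    else fadeIn.push(target);
  }
  return { moves, fadeOut: unused, fadeIn };
}
```

Greedy nearest-neighbor matching isn’t optimal, but for the small pixel counts of a QR code it is fast and produces visually sensible motion.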

Copyright © 2014 hexahedria. All Rights Reserved.