hexahedria

Daniel Johnson's personal fragment of the web

Posts tagged "javascript"

Discrete and Computational Geometry Projects

At Harvey Mudd, I’m taking a cool class called “Discrete and Computational Geometry”, a special topics course taught by Professor Satyan Devadoss. It’s a very interesting class. In lieu of normal problem sets, we do a bunch of group projects, each one very freeform. The basic instructions are “go make something related to this class”. Here are a couple of the projects my group made:

Read more...

TIME || ƎMIT: A game for js13k

Story time: So a few years ago, after playing Portal and loving it, and after making my iOS word game Wordchemy, I was kind of in a game-design zone. I brainstormed a ton of ideas: from memory games, to arcade-style games, to physics-based puzzle games, and many more. One idea, however, stuck with me. The basic idea was this: what if you could go backwards in time? Not as in traditional time travel, where you appear in the past, but actually backwards? Everything else would seem to be moving in reverse: things would fall up, people would be walking backwards, and you might even see your (relative) past self undoing your actions!

I ended up never making most of those games, but that time-travel idea stuck. I started designing levels for it, and actually started to build it once, but I never got it to a playable state. But every once in a while, I’d start thinking about it again. I decided that one of these days I was going to make it.

Read more...

Augmented Shanahan: Game Logic

This is the third part of a three-part writeup on my augmented Shanahan project. If you would like to read about the algorithms behind the augmented reality, you should probably start with part one.

In this final part of the writeup, I will be discussing the augmented Shanahan snake game itself. The game is made of two parts: the client app, which runs on the mobile device, and the server app, which runs on my website server. The client is responsible for doing all of the detection, tracking, and extrapolation that I discussed in the first two parts. It also accepts user input and renders the current game state. The server receives the user input from each client device, processes the snake game, and then broadcasts the updated game state to each client.
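
The post doesn't include the networking code, but here is a minimal sketch of that input-in, state-out loop under some loud assumptions: a WebSocket transport via the `ws` package, and made-up state and message shapes that stand in for the real game.

```javascript
// Hypothetical sketch, not the actual project code: the server accepts
// direction input from each client and broadcasts the updated game state.
const WebSocket = require('ws');
const server = new WebSocket.Server({ port: 8080 });

const gameState = { snakes: {} }; // one snake per connected client
let nextId = 0;

function stepSnakes(state) {
  // Move each snake's head one cell in its current direction (no growth here).
  const delta = { up: [0, -1], down: [0, 1], left: [-1, 0], right: [1, 0] };
  for (const snake of Object.values(state.snakes)) {
    const [hx, hy] = snake.body[0];
    const [dx, dy] = delta[snake.direction];
    snake.body.unshift([hx + dx, hy + dy]);
    snake.body.pop();
  }
}

server.on('connection', (socket) => {
  const id = nextId++;
  gameState.snakes[id] = { body: [[0, 0]], direction: 'right' };

  // Clients only send input; the server owns the authoritative state.
  socket.on('message', (msg) => {
    const input = JSON.parse(msg);
    if (input.direction) gameState.snakes[id].direction = input.direction;
  });

  socket.on('close', () => delete gameState.snakes[id]);
});

// Advance the game and broadcast the result to every client at a fixed rate.
setInterval(() => {
  stepSnakes(gameState);
  const update = JSON.stringify(gameState);
  server.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) client.send(update);
  });
}, 100);
```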

Read more...

Augmented Shanahan: Detection

When I was thinking of ideas for the 5C Hackathon back in November, I thought it would be really cool to do augmented reality on buildings. They are large, somewhat unnoticed features of the environment, and they are relatively static, not moving or changing over time. They seemed like the perfect thing to overlay with augmented reality. And since they are so large, it opens up the possibility for multiple people to interact with the same wall simultaneously.

Note: This is mostly an explanation of how I implemented detection. If you prefer, you can skip ahead to the 3D extrapolation and rendering in part 2, or to the game logic in part 3.

If you instead want to try it out on your own, click here. The augmented reality works on non-iOS devices with a back-facing camera and a modern browser, but you have to be in the Shanahan building to get the full effect. If you deny camera access or open the page on an incompatible device, it falls back to a static image of the building instead of the live camera feed. It works best in evenly-lit conditions, and image quality and detection may be slightly better in Firefox than in Chrome, which doesn’t autofocus very well.

Read more...

Subdivision

This is an experiment I started working on many months ago, but didn’t actually finish until recently. It was based on some doodles I used to make, where I would draw a bunch of lines, continuing each one until it intersected with the others. The experiment creates a bunch of lines at random, then extends them until they intersect, coloring the regions between the lines.
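
The post doesn't show the geometry code, but the core step is a standard ray-segment intersection test: grow each new line only until it first hits an existing segment. A rough sketch, with illustrative names rather than the experiment's actual code:

```javascript
// Find the distance t along a ray (origin + t * dir) at which it crosses the
// segment from a to b, or null if it misses. Used to stop a growing line at
// the first existing segment it touches.
function raySegmentT(origin, dir, a, b) {
  const ex = b.x - a.x, ey = b.y - a.y;
  const denom = dir.x * ey - dir.y * ex;
  if (Math.abs(denom) < 1e-9) return null; // parallel, no crossing
  const ox = a.x - origin.x, oy = a.y - origin.y;
  const t = (ox * ey - oy * ex) / denom;       // distance along the ray
  const s = (ox * dir.y - oy * dir.x) / denom; // position along the segment
  return t >= 0 && s >= 0 && s <= 1 ? t : null;
}

// Extend a line from its origin in direction dir until it hits the nearest
// existing segment (or reaches maxLen if nothing is in the way).
function extendUntilHit(origin, dir, segments, maxLen) {
  let best = maxLen;
  for (const [a, b] of segments) {
    const t = raySegmentT(origin, dir, a, b);
    if (t !== null && t < best) best = t;
  }
  return { x: origin.x + dir.x * best, y: origin.y + dir.y * best };
}
```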

Read more...

Hackathon writeup: Augmented Reality Snake

I’ve been at Harvey Mudd College for almost three months now, and I’m enjoying it a lot. So far, I’ve participated in two hackathons: first, MuddHacks, a hardware hackathon, and then last weekend, the 5C Hackathon, a software hackathon that spans the 5 Claremont Colleges (Harvey Mudd, Pomona, Scripps, Pitzer, Claremont McKenna). In this post, I’m going to be talking about the project I made for the second of the two hackathons.

But wait! You haven’t talked about the first one yet! You can’t do a part 2 without doing a part 1! I’d like to tell you all about how I got caught in a time machine that someone built in the hardware hackathon, and so my past self posted about the first hackathon in this timeline’s future, but then I’d have to get the government to come brainwash you to preserve national security. (I’d be lying.) So instead I’ll explain that I haven’t actually finished the project from the first hackathon yet. Both of these were 12-hour hackathons, and about 10.5 hours into the first one, I realized that our chosen camera angle introduced interference patterns that made our image analysis code completely ineffective. I’m hoping to find some time this weekend or next to finish that up, and I’ll write that project up then.

Anyways, about this project. My partner Hamzah Khan and I wanted to do some sort of computer vision/image analysis based project. Originally, we considered trying to detect certain features in an image and then use those as the basis for a game (I thought it would be really cool to play snake on the walls of HMC, which are mostly made of square bricks). But feature detection is pretty difficult, and we decided it wasn’t a good choice for a 12-hour project. Instead, we came up with the idea of doing an augmented-reality-based project, detecting a very specific marker pattern. We also wanted to do it in JavaScript, because we both knew JavaScript pretty well and also wanted to be able to run it on mobile devices.

Read more...

Shift Clock

I'm really happy with how this experiment turned out. It's a clock that spells out the time with squares, and then shifts those squares into their new positions whenever the time changes. Each minute, it draws the time text into a tiny hidden canvas element, then uses getImageData to extract the individual pixels. Any pixel drawn with an alpha > 0.5 becomes a destination for the squares' next animation. The animations themselves are performed using d3.js.

The picture above is of the black-on-white version. There is also a grey-on-black version, if you prefer that color scheme.
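
For reference, here is a rough sketch of the destination-extraction step described above; the function name, font, and canvas sizes are made up for illustration, but the getImageData / alpha-threshold idea is the one from the post.

```javascript
// Draw the current time into a tiny scratch canvas and collect the grid
// positions of every pixel that was actually drawn; those positions become
// the destinations for the squares' next animation.
function timeToDestinations(text, width, height) {
  const canvas = document.createElement('canvas'); // hidden scratch canvas
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.font = `${height}px sans-serif`;
  ctx.fillText(text, 0, height * 0.8);

  // getImageData returns RGBA bytes, so an alpha of 0.5 corresponds to ~127.
  const data = ctx.getImageData(0, 0, width, height).data;
  const destinations = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (data[(y * width + x) * 4 + 3] > 127) destinations.push({ x, y });
    }
  }
  return destinations;
}

// e.g. timeToDestinations('12:34', 40, 10) -> the cells to slide squares into
```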

Read more...

Whiteboard Drawing Bot – Part 3: Editor

After completing the basic design and software for the whiteboard drawing bot, I decided to make an interactive simulator and shape editor to allow people to generate their own commands. I thought it would be cool to share it as well, in case other people wanted to play with the stroking/filling algorithms or use it to run their own drawing robots or do something else with it entirely.

For the simulator, I wrote a parser for the command sequence, and then animated the commands being drawn out onto the screen. The parser is written with PEG.js, which I'll be discussing a bit later. The parameters for the generation and rendering are controlled using DAT.gui, and the drawing itself is done using two layered canvases: the bottom one holds the actual drawing that persists from frame to frame, and the top one renders the arms, which are cleared and redrawn each frame. I separated them because I did not want to have to re-render the entire drawing each time the simulator drew anything new.
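
The post doesn't include the rendering code, but the layering idea is easy to sketch. In this illustrative snippet (the element ids and function name are assumptions), the bottom canvas only ever gains ink, while the top canvas is cleared and redrawn with the arm pose every step:

```javascript
// Two stacked canvases: 'drawing-layer' keeps the accumulated drawing,
// 'arms-layer' is wiped and redrawn on each simulated step.
const drawingCtx = document.getElementById('drawing-layer').getContext('2d');
const armsCtx = document.getElementById('arms-layer').getContext('2d');

function renderStep(origin, elbow, pen) {
  // Persistent layer: just add the newest bit of ink, never clear.
  drawingCtx.fillRect(pen.x, pen.y, 2, 2);

  // Overlay layer: clear it and redraw the two arms from scratch.
  armsCtx.clearRect(0, 0, armsCtx.canvas.width, armsCtx.canvas.height);
  armsCtx.beginPath();
  armsCtx.moveTo(origin.x, origin.y);
  armsCtx.lineTo(elbow.x, elbow.y);
  armsCtx.lineTo(pen.x, pen.y);
  armsCtx.stroke();
}
```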

Read more...

Whiteboard Drawing Bot – Part 2: Algorithms

In order to actually use my whiteboard drawing bot, I needed a way to generate the commands to be run by the Raspberry Pi. Essentially, this came down to figuring out how to translate a shape into a series of increases and decreases in the positions of the two stepper motors.

The mechanical design of the contraption can be abstracted as simply two lines: one bound to the origin point and one attached to the end of the first. The position of the pen is then the endpoint of the second arm:

[Diagram: the two arm segments, with the pen at the endpoint of the second arm]

The arms cannot move freely, however. The first arm can only sit at discrete intervals, dividing the circle into the same number of positions as the stepper motor has steps. The second arm can actually only be in half that many positions: it has the same interval between steps, but can only sweep out half a circle. Furthermore, the second arm's angle is relative to the first's: if each step is 5 degrees and the first arm is on its third step (at 15 degrees), then the second arm's step zero is aligned with the first arm at 15 degrees, and its later steps continue on from there.
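
Putting that into code, the forward kinematics is just two rotations added end to end. The step count and arm lengths below are illustrative values rather than the real hardware's:

```javascript
const STEPS_PER_REV = 200;                        // e.g. a 1.8-degree stepper
const STEP_ANGLE = (2 * Math.PI) / STEPS_PER_REV; // radians per step
const ARM1 = 100, ARM2 = 100;                     // arm lengths (arbitrary units)

// step1: 0 .. STEPS_PER_REV - 1 (first arm, full circle around the origin)
// step2: 0 .. STEPS_PER_REV / 2 (second arm, half circle, measured from the first arm)
function penPosition(step1, step2) {
  const theta1 = step1 * STEP_ANGLE;
  const theta2 = theta1 + step2 * STEP_ANGLE; // relative to the first arm

  const elbowX = ARM1 * Math.cos(theta1);
  const elbowY = ARM1 * Math.sin(theta1);
  return {
    x: elbowX + ARM2 * Math.cos(theta2),
    y: elbowY + ARM2 * Math.sin(theta2),
  };
}
```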

Read more...

Refraction

This is an experiment I made recently. It displays the path that light would take when it refracts through variously-shaped objects, color-coded based on the initial angle of emission. In order to do this, it casts a series of rays and uses Snell's law to determine how they bend as they pass through objects. It then iteratively casts more rays between rays that get too far apart or act differently (if one ray hits one object and its neighbor hits another, for instance) to get higher accuracy. The raycasting is performed with JavaScript, and then the light intensity is interpolated between rays in WebGL.
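
The Snell's law step can be written directly in vector form. Here is a small sketch of what that calculation might look like (the function and parameter names are mine, not the experiment's):

```javascript
// Refract a unit direction vector through a surface with unit normal n,
// going from refractive index n1 into n2. Returns null when the ray is
// totally internally reflected instead of transmitted.
function refract(dir, normal, n1, n2) {
  const eta = n1 / n2;
  const cosI = -(dir.x * normal.x + dir.y * normal.y); // normal faces the ray
  const sinT2 = eta * eta * (1 - cosI * cosI);
  if (sinT2 > 1) return null;                          // total internal reflection
  const k = eta * cosI - Math.sqrt(1 - sinT2);
  return { x: eta * dir.x + k * normal.x, y: eta * dir.y + k * normal.y };
}

// A ray entering glass (n = 1.5) head-on keeps its direction:
// refract({ x: 0, y: 1 }, { x: 0, y: -1 }, 1.0, 1.5)  ->  { x: 0, y: 1 }
```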

Read more...

QR Clock

A ridiculously useless but rather cool looking clock. Every second, it creates a QR code that encodes the time and date, and then animates each pixel of the QR code to its new state. It attempts to slide the dark pixels from their old locations to their new ones, but if a given pixel cannot be animated using motion, it simply fades away. You’ll need a QR reader to actually use this.
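
The post doesn't say how pixels are paired between the old and new QR codes, but one plausible scheme is a greedy nearest-neighbor match, with unmatched pixels fading in or out. A sketch of that idea:

```javascript
// Pair each dark module in the new code with the nearest still-unclaimed dark
// module in the old code; anything left unpaired fades in or out instead.
function planTransitions(oldDark, newDark) {
  const moves = [];
  const appear = [];
  const unused = oldDark.slice();
  for (const dest of newDark) {
    let bestIdx = -1, bestDist = Infinity;
    unused.forEach((src, i) => {
      const d = (src.x - dest.x) ** 2 + (src.y - dest.y) ** 2;
      if (d < bestDist) { bestDist = d; bestIdx = i; }
    });
    if (bestIdx >= 0) moves.push({ from: unused.splice(bestIdx, 1)[0], to: dest });
    else appear.push(dest); // no source left: fade this module in
  }
  return { moves, appear, disappear: unused }; // leftover old modules fade out
}
```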

Read more...

Speech -> Music

This is a program I wrote to explore the connection between the patterns in poetry and speeches and the patterns in music. Unfortunately, I cannot upload it here (the WebMIDI API integration does not seem to work anymore, and it was built with a very specific soundfont, so viewing it elsewhere would not be equivalent), so instead of the actual program you can watch a video of it.

The program, written in JavaScript and utilizing the WebMIDI API, first runs through the poem and extracts phonemes using the excellent Carnegie Mellon Pronouncing Dictionary. It then links together words that share phonemes, assigning each pair a strength depending on the number of matched phonemes (plus a bonus if the words are identical). Finally, it begins to play the poem syllable-by-syllable, assigning each word a note based on its first letter. When the playhead reaches a word that has links to other words, it creates a secondary playhead that plays the matched word simultaneously, producing temporary chords and repetition that die off over time.
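
As an illustration of the linking step, here is a simplified sketch; the `phonemesOf` lookup stands in for the CMU dictionary, and the exact strength formula and identical-word bonus are assumptions rather than the program's real values.

```javascript
// Link every pair of words that share phonemes, weighting each link by how
// many phonemes they share (plus a bonus when the words are identical).
function linkWords(words, phonemesOf) {
  const links = [];
  for (let i = 0; i < words.length; i++) {
    for (let j = i + 1; j < words.length; j++) {
      const a = new Set(phonemesOf(words[i]));
      const shared = phonemesOf(words[j]).filter((p) => a.has(p)).length;
      if (shared === 0) continue;
      const bonus = words[i] === words[j] ? 3 : 0; // illustrative bonus value
      links.push({ from: i, to: j, strength: shared + bonus });
    }
  }
  return links;
}
```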

In this video, the program is playing its rendition of “The Raven” by Edgar Allan Poe.

Read more...

Textflow

This is a pretty dumb experiment, really. I got the idea one day when I was absentmindedly typing into a blank page. I noticed that if I pressed and held multiple keys, it would repeatedly type all of them. I did this for some time, and discovered that when a document consists of lots of repeated sequences, inserting text at the start makes the sequences seem to dance and jitter due to the irregularities in letter width. So, of course, I wrote up a quick script in JavaScript to simulate this, and then made it colorful. Don’t expect anything profound. It’s literally just text moving across the screen.

Read more...

Stacked Clock

This is one of my favorite experiments. I had the original idea during a vacation to Hawaii. Basically, I wanted to make a clock where the hands were connected end-to-tip instead of all being connected to the center of the clock, making a sort of time arm (second hand starting where minute hand ended, etc). I made a version of that using Rainmeter on my desktop for a while.
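
The end-to-tip arrangement is simple to compute: each hand gets its usual clock angle, but its base is the tip of the previous hand. A small sketch (the function name and hand lengths are mine, not the experiment's):

```javascript
// Return the joint positions of an hour -> minute -> second chain of hands,
// each one starting where the previous hand ends.
function stackedHandPoints(hours, minutes, seconds, cx, cy) {
  const angles = [
    ((hours % 12) / 12) * 2 * Math.PI, // hour hand
    (minutes / 60) * 2 * Math.PI,      // minute hand, based at the hour tip
    (seconds / 60) * 2 * Math.PI,      // second hand, based at the minute tip
  ];
  const lengths = [50, 40, 30];
  const points = [{ x: cx, y: cy }];
  angles.forEach((angle, i) => {
    const prev = points[points.length - 1];
    points.push({
      x: prev.x + lengths[i] * Math.sin(angle),
      y: prev.y - lengths[i] * Math.cos(angle), // angles measured from 12 o'clock
    });
  });
  return points; // connect consecutive points to draw the stacked hands
}
```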

Read more...

Motion Cells

This is an experiment I made a long time ago, inspired by a flocking cellular automaton I saw on Rectangle World. In this experiment, the canvas is divided into a bunch of cells, each of which maintains a vector. Every frame, each cell attempts to make its vector closer to the vectors of its neighbors, and the vectors are “normalized” so that the average length is 1 (some are longer, some are shorter). On top of this, I built 19 different variations, each of which displays the results in a different way.

My first few variations display the actual vectors, either just as lines or by mapping direction to hue and length to brightness. In most of the other variations, the code places particles “on top” of the cells, and they move in the direction that the vectors point. In a few of the variations, I connect the particles to each other and draw lines between them, making a sort of web effect. This was one of the first experiments I made with the HTML5 Canvas, and I like how it looks.
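
A minimal sketch of the underlying update rule, independent of any of the display variations (the grid shape, wrap-around neighborhood, and blending factor are illustrative choices, not the experiment's actual values):

```javascript
// grid is a 2D array of { x, y } vectors. Each step pulls every cell toward
// the average of its four neighbors, then rescales the whole field so that
// the average vector length is 1.
function stepCells(grid) {
  const h = grid.length, w = grid[0].length;
  const next = grid.map((row, y) => row.map((v, x) => {
    const neighbors = [
      grid[(y + h - 1) % h][x], grid[(y + 1) % h][x],
      grid[y][(x + w - 1) % w], grid[y][(x + 1) % w],
    ];
    const avgX = neighbors.reduce((sum, n) => sum + n.x, 0) / neighbors.length;
    const avgY = neighbors.reduce((sum, n) => sum + n.y, 0) / neighbors.length;
    return { x: v.x + 0.1 * (avgX - v.x), y: v.y + 0.1 * (avgY - v.y) };
  }));

  // "Normalize": scale so the average length over the grid is 1.
  let total = 0;
  next.forEach((row) => row.forEach((v) => { total += Math.hypot(v.x, v.y); }));
  const scale = total > 0 ? (w * h) / total : 1;
  next.forEach((row) => row.forEach((v) => { v.x *= scale; v.y *= scale; }));
  return next;
}
```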

Read more...
