Daniel Johnson's personal fragment of the web

TIME || ƎMIT: A game for js13k


Story time: So a few years ago, after playing Portal and loving it, and after making my iOS word game Wordchemy, I was kind of in a game-design zone. I brainstormed a ton of ideas: from memory games, to arcade-style games, to physics-based puzzle games, and many more. One idea, however, stuck with me. The basic idea was this: what if you could go backwards in time? Not as in traditional time travel, where you appear in the past, but actually backwards? Everything else would seem to be moving in reverse: things would fall up, people would be walking backwards, and you might even see your (relative) past self undoing your actions!

I ended up never making most of those games, but that time-travel idea stuck. I started designing levels for it, and actually started to build it once, but I never got it to a playable state. But every once in a while, I’d start thinking about it again. I decided that one of these days I was going to make it.

Read more…

Composing Music With Recurrent Neural Networks


It’s hard not to be blown away by the surprising power of neural networks these days. With enough training, so-called “deep neural networks”, with many nodes and hidden layers, can do impressively well at modeling and predicting all kinds of data. (If you don’t know what I’m talking about, I recommend reading about recurrent character-level language models, Google Deep Dream, and neural Turing machines. Very cool stuff!) Now seems like as good a time as ever to experiment with what a neural network can do.

For a while now, I’ve been floating around vague ideas about writing a program to compose music. My original idea was based on a fractal decomposition of time and some sort of repetition mechanism, but after reading more about neural networks, I decided that they would be a better fit. So a few weeks ago, I got to work designing my network. And after training for a while, I am happy to report remarkable success!

Here’s a taste of things to come:

Read more…

Squared 3.0 for Pebble Time

For a few years, I’ve had a Pebble, and I recently upgraded to a Pebble Time. So far I really like it: it’s more comfortable, it looks cooler, and it has a color display! However, my favorite watchface for Pebble (Squared by lastfuture) was only black-and-white, and it seemed like a waste to use a non-color watchface on my color Pebble. So, naturally, since the watchface was open-source and Pebble has great development tools, I was able to make a color version!

I present, Squared 3.0:

The code is available on GitHub, and you can download it from the Pebble App Store if you have a Pebble Time yourself. Enjoy!

Creations of the musical variety

Now that I’m done with my first year of college (wow!), I’ve actually had significant amounts of free time. I’ve been using some of that time to compose electronic music in Ableton Live, and I’ve managed to accumulate enough songs that I figured I should put them online somewhere. So, without further ado, my new SoundCloud account! (ta-daaa!)

So that’s pretty cool. Hope you enjoy!

(For those of you paying attention, the artwork that goes with each of the songs is actually an exported image from my MotionCells experiment. The two seem to go together quite well!)

Transcend 3D Window Switcher – TreeHacks


A few weekends ago I went to the TreeHacks hackathon at Stanford. It was a lot of fun, and was probably the best hackathon I’ve been to so far, so good job Stanford people! While there, I got a chance to work with a Meta 1 developer kit. I used it to make a 3D augmented reality replacement for alt-tab, where the windows on your computer would fly out into 3D space.
Read more…

Augmented Shanahan: Game Logic


This is the third part of a three-part writeup on my augmented Shanahan project. If you would like to read about the algorithms behind the augmented reality, you should probably start with part one.

In this final part of the writeup, I will be discussing the augmented Shanahan snake game itself. The game is made of two parts: the client app, which runs on the mobile device, and the server app, which runs on my web server. The client is responsible for doing all of the detection, tracking, and extrapolation that I discussed in the first two parts. It also accepts user input, and renders the current game state. The server receives the user input from each client device, processes the snake game, and then broadcasts the updated game states to each client.
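To make that split concrete, here is a minimal sketch of what the server’s side of the loop could look like. This is an assumed shape, not the project’s actual code: `GameState`, `stepGame`, the message format, and the use of the `ws` WebSocket package are all stand-ins for whatever the real server does.

```typescript
// Hypothetical server loop: remember each client's latest input, advance the
// shared snake game on a timer, and broadcast the new state to every client.
import { WebSocketServer, WebSocket } from 'ws';

interface GameState { snakes: Record<string, [number, number][]> }

function stepGame(state: GameState, inputs: Map<WebSocket, string>): GameState {
  // Placeholder for the actual snake rules (movement, collisions, growth).
  return state;
}

let state: GameState = { snakes: {} };
const latestInput = new Map<WebSocket, string>();

const wss = new WebSocketServer({ port: 8080 });
wss.on('connection', (socket) => {
  latestInput.set(socket, 'up');
  // Each client just reports its most recent direction ("up", "left", ...).
  socket.on('message', (data) => latestInput.set(socket, data.toString()));
  socket.on('close', () => latestInput.delete(socket));
});

setInterval(() => {
  state = stepGame(state, latestInput);          // process the snake game
  const message = JSON.stringify(state);
  for (const client of wss.clients) {            // broadcast to every client
    if (client.readyState === WebSocket.OPEN) client.send(message);
  }
}, 100);
```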
Read more…

Augmented Shanahan: Extrapolation

Window wireframes

This is the second part of a three-part writeup on my augmented Shanahan project. You may want to start with part one. For some background math, you may also want to read the post about my 5C Hackathon entry.

From the detection and tracking algorithms, I was able to generate a 2D perspective transformation matrix that mapped coordinates on the wall to pixels in the image. In order to do 3D rendering, however, I needed a 3D transformation matrix. The algorithm I ended up using is based on this post on Stack Exchange, this gist on GitHub, and this Wikipedia page. I will attempt to describe how the algorithm works in the rest of this post.
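Before getting into the 3D math, here is a minimal sketch of what that 2D matrix already does (an illustration, not code from the project): a 3x3 homography H, stored row-major, maps a point on the wall plane to a pixel in the image, finishing with the perspective divide. The 3D rendering step needs the analogous full projection matrix so that points with a depth component can be handled the same way.

```typescript
// Applying a 2D perspective transform (homography) to a wall coordinate.
// Illustrative only; the matrix layout and function name are assumptions.
type Mat3 = number[]; // 9 entries, row-major

function wallToPixel(H: Mat3, x: number, y: number): [number, number] {
  const u = H[0] * x + H[1] * y + H[2];
  const v = H[3] * x + H[4] * y + H[5];
  const w = H[6] * x + H[7] * y + H[8];
  // The division by w is what makes this perspective rather than affine.
  return [u / w, v / w];
}
```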

Read more…

Augmented Shanahan: Detection

Extrapolated squares

When I was thinking of ideas for the 5C Hackathon back in November, I thought it would be really cool to do augmented reality on buildings. They are large, somewhat unnoticed features of the environment, and they are relatively static, not moving or changing over time. They seemed like the perfect thing to overlay with augmented reality. And since they are so large, it opens up the possibility for multiple people to interact with the same wall simultaneously.

Note: This is mostly an explanation of how I implemented detection. If you prefer, you can skip ahead to the 3D extrapolation and rendering in part 2, or to the game logic in part 3.

If you instead want to try it out on your own, click here. The augmented reality works on non-iOS devices with a back-facing camera and a modern browser, but you have to be in the Shanahan building to get the full effect. If you deny camera access or open it on an incompatible device, it falls back to a static image of the building instead of the live camera feed. It works best in evenly-lit conditions, and image quality and detection may be slightly better in Firefox than in Chrome, which doesn’t autofocus very well.
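For the curious, that camera-or-static-image behavior can be sketched like this in a current browser (the original project is older, so the element ids, the image path, and the exact API used here are my assumptions, not its code):

```typescript
// Try the back-facing camera; if access is denied or unsupported (e.g. iOS
// at the time), fall back to a static photo of the building instead.
async function startVideoSource(): Promise<void> {
  const video = document.getElementById('camera') as HTMLVideoElement;
  const fallback = document.getElementById('fallback') as HTMLImageElement;
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: 'environment' }, // prefer the back-facing camera
      audio: false,
    });
    video.srcObject = stream;
    await video.play();
  } catch {
    fallback.src = 'shanahan-static.jpg'; // hypothetical fallback image path
    fallback.hidden = false;
    video.hidden = true;
  }
}
```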

Read more…



This is an experiment I started working on many months ago, but didn’t actually finish until recently. It was based on some doodles I used to make, where I would draw a bunch of lines, continuing each one until it intersected with the others. The program does the same thing: it creates lines at random, extends each one until it hits another, and then colors the regions between them.
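The growing step can be sketched roughly like this (the names, the pixel-by-pixel extension, and the canvas bounds are illustrative assumptions, and the region coloring is left out):

```typescript
// Grow each random ray until it first crosses a previously placed segment
// (or leaves the canvas), then keep it at that length.
type Pt = { x: number; y: number };
type Seg = { a: Pt; b: Pt };

// True if segments pq and rs properly intersect (collinear cases ignored).
function segmentsIntersect(p: Pt, q: Pt, r: Pt, s: Pt): boolean {
  const cross = (o: Pt, a: Pt, b: Pt) =>
    (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
  return cross(p, q, r) * cross(p, q, s) < 0 &&
         cross(r, s, p) * cross(r, s, q) < 0;
}

function growLines(count: number, width: number, height: number): Seg[] {
  const segs: Seg[] = [];
  for (let i = 0; i < count; i++) {
    const a: Pt = { x: Math.random() * width, y: Math.random() * height };
    const angle = Math.random() * Math.PI * 2;
    const dir: Pt = { x: Math.cos(angle), y: Math.sin(angle) };
    let b: Pt = { ...a };
    // Extend the line a little at a time until it would cross an existing one.
    while (
      b.x >= 0 && b.x <= width && b.y >= 0 && b.y <= height &&
      !segs.some((s) => segmentsIntersect(a, b, s.a, s.b))
    ) {
      b = { x: b.x + dir.x, y: b.y + dir.y };
    }
    segs.push({ a, b });
  }
  return segs;
}
```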

Read more…

Copyright © 2015 hexahedria. All Rights Reserved.