# From WordPress to Jekyll

Until now, my blog has been happily running on WordPress, hosted on a VPS provided by DigitalOcean. Recently I decided I didn’t really need all of the complexity of running a VPS just to host my website. So instead, I’m migrating my blog over to Jekyll, a static site generator used by GitHub Pages.

# Discrete and Computational Geometry Projects

At Harvey Mudd, I’m taking a cool class called “Discrete and Computational Geometry”, a special-topics course taught by Professor Satyan Devadoss. In lieu of normal problem sets, we do a bunch of group projects, each one very freeform. The basic instructions are “go make something related to this class”. Here are a couple of the projects my group made:

# TIME || ƎMIT: A game for js13k

Story time: So a few years ago, after playing Portal and loving it, and after making my iOS word game Wordchemy, I was kind of in a game-design zone. I brainstormed a ton of ideas: from memory games, to arcade-style games, to physics-based puzzle games, and many more. One idea, however, stuck with me. The basic idea was this: what if you could go backwards in time? Not as in traditional time travel, where you appear in the past, but actually backwards? Everything else would seem to be moving in reverse: things would fall up, people would be walking backwards, and you might even see your (relative) past self undoing your actions!

I ended up never making most of those games, but that time-travel idea stuck. I started designing levels for it, and actually started building it once, but I never got it to a playable state. But every once in a while, I’d start thinking about it again. I decided that one of these days I was going to make it.

# Reading an Artificial Mind: Visualizing the Neural Net

In my previous post, I described my music-composing neural network. Since then, I’ve extracted some interesting images to visualize the neural network’s internal state. This is what I got:

# Composing Music With Recurrent Neural Networks

(Update: A paper based on this work has been accepted at EvoMusArt 2017! See here for more details.)

It’s hard not to be blown away by the surprising power of neural networks these days. With enough training, so-called “deep neural networks”, with many nodes and hidden layers, can do impressively well at modeling and predicting all kinds of data. (If you don’t know what I’m talking about, I recommend reading about recurrent character-level language models, Google Deep Dream, and neural Turing machines. Very cool stuff!) Now seems like as good a time as any to experiment with what a neural network can do.

For a while now, I’ve been kicking around vague ideas for a program that composes music. My original idea was based on a fractal decomposition of time and some sort of repetition mechanism, but after reading more about neural networks, I decided they would be a better fit. So a few weeks ago, I got to work designing my network. And after training it for a while, I am happy to report remarkable success!

Here’s a taste of things to come: