Here are all the posts I’ve made, in reverse chronological order. Some of these are imported from my old blog, so they go back to 2016 (when I was an undergrad!).
A code critique I posted to the Critical Code Studies Working Group 2024.
I defended my dissertation! In this post, I’m going to describe its key unifying question: how do we evaluate algorithms for problems involving aesthetic phenomena in computer vision?
Recent crises regarding bias and injustice in AI raise serious questions about our research methods. How should we change our methods to prevent these issues in the future?
I’m about to defend my dissertation, and I’ve been procrastinating on other writing work, so I’m going to write a series of “explainer” posts where I discuss a concept I wish I had understood before grad school. In this post, I’m going to write a bit about a pair of words I learned recently, nomothetic and idiographic.
How did we end up with the red, green and blue color encodings that we use for digital cameras, screens and images? It turns out the answer involves an argument started by Newton and Goethe, 17 British "trichromats" and the ostracization of German scientists after World War I.
Subjectivity is a bit of a dirty word in machine learning. We aspire to be a highly objective, mathematical, scientific discipline. That means we want to make our algorithms and evaluations as objective as possible. In many cases, we can formally define the problem at hand and prove our solutions correct. In other cases, we can't really define the problem, so we collect data and use that data to evaluate our methods empirically. But does evaluation based on data really work when the underlying problem is fundamentally subjective?
If you've visited my website before, you'll probably notice it's changed! While I still like the old look and feel of the page, I set it up six years ago and didn't really know what I was doing design-wise, so it was time for a change.
If you've spent any time reading computer vision research papers over the past few years, you've probably noticed a big change in the way papers look, driven by the deep learning revolution and subsequent boom in computer vision. We wrote a paper about these changes, and how they both reflect and shape computer vision.
Over the past two years, I've been exploring a category of subjective computer vision problems and how we should approach them. In this post, I'd like to make the case for a more humanistic approach to these problems.
In this post, I’d like to examine an unusual text, the label on a designer jacket produced by ZA/UM, the art collective and video game studio behind the hit 2019 game, Disco Elysium.
Well, you might believe it. They mostly just gave advice on how to design websites. But the thing I found absolutely fascinating in these books is the way they gave that advice. In fact, we found this style of writing so interesting that we wrote a paper about it, which is now available.
In mathematics, optimization is a technique for finding the highest or lowest value of a function. In machine learning we use optimization as a tool for fitting models to data, but it is more than that. In many ways, computer scientists engage with optimization less as a tool and more as an ideology.
I made some visualizations of art images over time using an interesting algorithm! I think they have some provocative qualities which are worth discussing. My code is on GitHub so you can make plots like these too if you want.
Over the past few months, I've been digging into a strange subfield of computer vision called "image aesthetic quality assessment." I've become absolutely infatuated with this research topic, not because I think that what they are doing is good or right, but because I think their work is a really good way to approach a difficult issue at the core of all the topics I study.
I procedurally generated some musical accompaniment for a video taken from the perspective of my Roomba!
I made a little website on NeoCities to promote our recent CHI paper and wanted to post it here and point out some of the ideas behind my designs.
I wrote a short article for hyperallergic.com. As it's a website for a general audience, the article is quite short and omits technical details. As a companion here, I wanted to write out some more of the details about deep learning algorithms for image colorization.
Over the past two years, I've been studying the history of web design. I wrote an article about our work for The Conversation, and wrote a post on this blog about some of that work, but I haven't actually written about the history there yet. The following is my 1500 word summary of the history of web design.
I did a generative art project based on research into color theory!
In my free time, I made some art with GANs.
I was wondering whether neural networks trained to detect cute things actually understand cuteness. I did some experiments and found that the answer was "yes…sort of?"
In this section, we're going to start by looking at how to measure the difference between two colors, then define what a color scheme is in our quantitative framework, and finally look at how to extend that difference measure to color schemes.
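To give a sense of the kind of measure involved, here is a minimal sketch of one standard color-difference metric: Euclidean distance in the CIELAB color space (the CIE76 "Delta E"). This is an illustrative assumption about the sort of metric discussed, not necessarily the one the post ultimately uses, and the example color values are approximate.

```python
# A minimal sketch of one common color-difference measure: Euclidean distance
# in CIELAB (the CIE76 "Delta E"). Illustrative only; the post's actual metric
# may differ, and the example Lab values below are approximate.
from math import sqrt

def delta_e_76(lab1, lab2):
    """Euclidean distance between two colors given as (L*, a*, b*) tuples."""
    return sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Example: pure sRGB red vs. a nearby reddish orange (approximate CIELAB values).
print(delta_e_76((53.2, 80.1, 67.2), (60.3, 55.9, 70.8)))
```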
I've been busy with grad school recently, but I thought the work I've been doing merited a blog post. I've been looking at changes in web design over time using image analysis: we want to find metrics which capture why websites look similar to one another, and use those to identify design trends. In this line of inquiry, I ran into an interesting question: how do you measure the difference between two color schemes?
I've had some free time lately (I recently left my job at RTI and am starting a PhD at Indiana University), so I've been reading and programming for fun again. Some of that reading and programming tied itself nicely into a coherent project that I've written up below. If you just want to skip to the programming, I've made a webapp available here.
Now that I'm updating my blog again, it occurred to me that I never posted about my musical studies capstone thesis. Rather than post a bloggy version of the things I was thinking about, like I did while working on my honors research, I'm going to keep things brief and only discuss one of the computational methods that I used, which is a neat data visualization technique.
Last week, I discovered a wonderful Twitter account that posts art generated from images using Primitive.lol by Michael Fogleman. Since following it, I have been bombarded with beautiful images of shapes arranged to resemble real things.
A week ago, I spewed some of my thoughts about The Incredibles in a post. Now I've graduated college and am back home with more free time. Hopefully, in this post, I can go through the rest of my ideas about the first part of Giacchino's fantastic score.
It's May now, and I'm done with classes! Commencement at Oberlin is next Monday, so I have some free time, and I decided to write a post. Rather than spend more time on my honors research, I figured I'd go into something else I've been thinking about and researching lately, which is Michael Giacchino's score to the Disney Pixar film The Incredibles. I wrote a paper for a music theory class on this topic, but I had a lot of thoughts about the music that didn't fit nicely into a paper, so I'm going to try to fit more content into a less structured blog post.
Well, it's been a month, and I finally have a free day to write up the answers to the survey! People had been asking whether they got the right answers, and since my results were anonymous, I couldn't just look it up. That said, I figured it would be fun to make a post with some graphs.
Despite never getting the chance to post about generation schemes or survey design (which I plan to do retroactively at some point!), I was able to complete my research and submit my thesis on time.
As promised in a previous post, I wanted to write a summary of the DeepBach paper, published in December, which achieves a result similar to what I want to achieve with this project. This article saw some news coverage when it was published, so it may sound familiar. It is currently, as far as I know, the state-of-the-art model for generating Bach-like music.
After finishing the individual expert models, I moved on to building a product model. The concept comes from Geoffrey Hinton, and this particular application was the idea of Daniel Johnson, my colleague from summer research. Basically, instead of training naively on untransposed pitches, you preprocess the pitch data in several different ways, train an "expert" model on each preprocessed version, and then multiply together the probability distributions output by the experts and renormalize.
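To make that combination step concrete, here is a minimal sketch of the product-of-experts idea over discrete pitch distributions. The expert outputs below are made up for illustration; the real experts in the project are trained models, and this assumes each expert produces a probability distribution over the same set of candidate pitches.

```python
# A minimal sketch of combining expert predictions as a product of experts.
# The distributions here are hypothetical stand-ins for trained models' outputs;
# each expert is assumed to score the same candidate pitches.
import numpy as np

def product_of_experts(expert_distributions):
    """Multiply per-expert probability distributions elementwise and renormalize."""
    combined = np.ones_like(expert_distributions[0])
    for dist in expert_distributions:
        combined *= dist
    return combined / combined.sum()

# Example: three experts scoring four candidate pitches.
experts = [
    np.array([0.1, 0.6, 0.2, 0.1]),
    np.array([0.2, 0.5, 0.2, 0.1]),
    np.array([0.3, 0.3, 0.3, 0.1]),
]
print(product_of_experts(experts))  # consensus sharpens around the second pitch
```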
The next step towards machine composition of four-part harmony, after working with a naive generative model, was to try to implement other models that better preprocessed training data. My ultimate goal with these is to apply Geoffrey Hinton's Product of Experts and multiply together the output probability distributions of several models trained on different aspects of the training data to get a more nuanced result.
Rather than dive immediately into complex neural network architectures, I wanted to create a first model that predicts the absolute pitches of one voice given the absolute pitches in other voices. My goal was not to create the best possible predictive model of music so much as it was to create a control against future experiments that pre-process the training data further. If the dataset I had found could be modeled with high accuracy without significant pre-processing, then any more sophisticated model would be unnecessary and I would know to find a less homogeneous dataset before proceeding.
My goal with this project is to investigate methods for procedurally generating counterpoint/voice leading by learning trends in existing music, rather than starting from pre-existing rules of counterpoint/voice leading. My reasons for this are two-fold. First, while undergrad composition and music theory classes in the conservatory model have strict composition rules as their bread and butter, mature composers of the past and present by and large compose music by recreating music they enjoy or find interesting, often by breaking the established rules of their chosen form. Second, there are many examples in the music composition literature of prescriptive, rule-based models for stochastic composition (see Hiller, Tenney, Xenakis, Eno, etc.). As far as I know, there are only a few people who have tried to do composition using machine learning techniques.