views: 2137
answers: 15

I've been putting a lot of thought into procedural generation of content for a while and I've never seen much experimentation with procedural music. We have fantastic techniques for generating models, animations, textures, but music is still either completely static or simply layered loops (e.g. Spore).

Because of that, I've been thinking up optimal music generation techniques, and I'm curious as to what other people have in mind. Even if you haven't previously considered it, what do you think will work well? One technique per answer please, and include examples where possible. The technique can use existing data or generate the music entirely from scratch, perhaps on some sort of input (mood, speed, whatever).

+10  A: 

Cellular automata. There is published work on using them to generate music, and Stephen Wolfram's WolframTones project lets you try the idea out in your browser: http://tones.wolfram.com

thekidder
Wolfram is so smart! Great job making this kind of thing so accessible too...
defmeta
+1  A: 

The technique I've been considering is to create small musical patterns, up to a bar or so. Tag these patterns with feeling identifiers such as 'excitement', 'intense', etc. When you want to generate music for a situation, pick a few patterns based on these tags and pick an instrument you want to play it with. Based on the instrument, figure out how to combine the patterns (e.g. on a piano you may be able to play it all together, depending on hand span, on a guitar you may play the notes in rapid succession) and then render it to PCM. In addition, you could change key, change speed, add effects, etc.

Cody Brocious
A: 

Back in the late 90's, Microsoft created an ActiveX control called the "Interactive Music Control" which did exactly what you're looking for. Unfortunately, they seem to have abandoned the project.

James Curran
That's because the music it created completely blew, which is a pretty common characteristic for algorithmically-composed music.
MusiGenesis
The Wolfram Tones project referenced by @thekidder above is remarkably successful at not blowing. I was expecting the usual random noodling...
defmeta
+1  A: 

The specific technique you're describing is something Thomas Dolby was working on ten or fifteen years ago, though I can't remember now what he called it so I can't give you a good search term.

But see this Wikipedia article and this Metafilter page.

Robert Rossney
You're thinking of "generative music", and a program named "Koan".
MusiGenesis
+4  A: 

An easy and somewhat effective algorithm is to use 1/f noise aka "pink noise" to select durations and notes from a scale. This sounds sort of like music and can be a good starting point.
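One common way to get 1/f behaviour is the Voss-McCartney "dice" trick: keep several dice, re-roll die k only every 2^k steps, and use their sum to index into a scale. A minimal Python sketch (the number of dice, the six-sided dice, and the C-major scale are all arbitrary choices of mine, not part of the original description):

```python
import random

def voss_pink_sequence(length, num_dice=4, seed=None):
    """1/f-ish integer sequence via the Voss-McCartney algorithm:
    die k is re-rolled every 2**k steps, and the dice are summed."""
    rng = random.Random(seed)
    dice = [0] * num_dice
    out = []
    for n in range(length):
        for k in range(num_dice):
            if n % (1 << k) == 0:       # re-roll die k every 2**k steps
                dice[k] = rng.randrange(6)
        out.append(sum(dice))
    return out

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def pink_melody(length, scale=C_MAJOR, seed=None):
    """Map the pink-noise values onto a scale to get note names."""
    return [scale[v % len(scale)] for v in voss_pink_sequence(length, seed=seed)]
```

The same sequence generator can drive durations as well as pitches, which is what gives the result its loosely musical, self-similar feel.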

A better algorithm is to use "Markov chains": scan some example music and build a table of probabilities. In the simplest case, it would be something like "C is 20% likely to follow A". To make this better, look at the sequence of the past few notes; for example, "C A B" is 15% likely to be followed by B, 4% likely to be followed by a Bb, and so on. Then just pick notes using the probabilities conditioned on the previously chosen notes. This remarkably simple algorithm generates pretty good results.
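A minimal sketch of that Markov approach in Python, using raw counts as the probability table (the training melody and the dead-end handling are my own placeholder choices):

```python
import random
from collections import Counter, defaultdict

def build_chain(notes, order=1):
    """Count how often each note follows each `order`-length context."""
    chain = defaultdict(Counter)
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])
        chain[context][notes[i + order]] += 1
    return chain

def generate(chain, order, length, seed=None):
    """Walk the chain, picking each next note with its counted probability."""
    rng = random.Random(seed)
    out = list(rng.choice(list(chain)))            # start from a random context
    while len(out) < length:
        counts = chain.get(tuple(out[-order:]))
        if not counts:                             # dead end: jump to a random context
            counts = chain[rng.choice(list(chain))]
        notes, weights = zip(*counts.items())
        out.append(rng.choices(notes, weights=weights)[0])
    return out[:length]
```

Raising `order` trades variety for fidelity to the training material: order 1 wanders, while order 3 or 4 starts quoting the source melodies almost verbatim.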

Markov chains for music generation

+3  A: 

My software uses applied evolutionary theory to "grow" music. The process is similar to Richard Dawkins' The Blind Watchmaker program - MusiGenesis adds musical elements randomly, and then the user decides whether or not to keep each added element. The idea is to just keep what you like and ditch whatever doesn't sound right, and you don't have to have any musical training to use it.

The interface blows, but it's old - sue me.
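That keep-or-ditch loop is ordinary cumulative selection. A toy sketch of the idea (the MIDI scale, the single-note mutation, and the `accept` callback standing in for the human listener are all my assumptions, not how MusiGenesis itself works):

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, MIDI note numbers

def propose(melody, rng):
    """Mutate by randomly appending one musical element."""
    return melody + [rng.choice(SCALE)]

def evolve(steps, accept, seed=None):
    """Grow a melody; `accept(old, new)` plays the role of the listener
    deciding whether the added element sounds right."""
    rng = random.Random(seed)
    melody = []
    for _ in range(steps):
        candidate = propose(melody, rng)
        if accept(melody, candidate):
            melody = candidate
    return melody
```

With a human in the `accept` slot this is interactive evolution; swap in a scoring function and it becomes an ordinary hill climber.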

MusiGenesis
So THATs where you get your name!!! Aha!
RCIX
+2  A: 

Research on non-boring procedural music generation goes way back. Browse old and new issues of Computer Music Journal http://204.151.38.11/cmj/ (no real domain name?). It has serious technical articles of actual use to music-synthesis tinkerers, soldering-iron jockeys, bit herders and academic researchers, not fluffy reviews and interviews like several of the mags you can find in major bookstores.

DarenW
I should mention that my knowledge of this magazine is based on a subscription I let lapse a few years ago. I assume it's still as good!
DarenW
+13  A: 

The most successful system will likely combine several techniques. I doubt you'll find one technique that works well for melody, harmony, rhythm and bass sequence generation across all genres of music.

Markov chains, for instance, are well suited for melodic and harmonic sequence generation. This method requires analysis of existing songs to build the chain transition probabilities. The real beauty of Markov chains is that the states can be whatever you want.

  • For melody generation, try key-relative note numbers (e.g. if the key is C minor, C would be 0, D would be 1, D# would be 2 and so on)
  • For harmony generation, try a combination of key-relative note numbers for the root of the chord, the type of the chord (major, minor, diminished, augmented, etc.) and the inversion of the chord (root, first or second)

Neural networks are well suited to time series prediction (forecasting), which means they're equally suited to 'predicting' a musical sequence when trained against existing popular melodies/harmonies. The end result will be similar to that of the Markov chain approach. I can't think of any benefit over the Markov chain approach other than reducing the memory footprint.

In addition to pitch you will need duration to determine the rhythm of the generated notes or chords. You can choose to incorporate this information into the Markov chain states or neural network outputs, or you can generate it separately and combine the independent pitch and duration sequences.

Genetic algorithms can be used to evolve rhythm sections. A simple model could use a binary chromosome in which the first 32 bits represent the pattern of a kick drum, the second 32 bits a snare, the third 32 bits a closed hi hat and so on. The downside in this case is that they require continuous human feedback to assess the fitness of the newly evolved patterns.
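A sketch of that chromosome encoding, with single-point crossover and bit-flip mutation; the three-track layout follows the description above, while the mutation rate and the text rendering are arbitrary choices of mine (fitness is left to the human listener, as noted):

```python
import random

PATTERN_BITS = 32  # 32 sixteenth-note slots per track

def random_genome(rng):
    """Three 32-bit tracks: kick, snare, closed hi-hat."""
    return [rng.getrandbits(PATTERN_BITS) for _ in range(3)]

def crossover(a, b, rng):
    """Single-point crossover applied track by track."""
    cut = rng.randrange(1, PATTERN_BITS)
    mask = (1 << cut) - 1
    return [(x & mask) | (y & ~mask) for x, y in zip(a, b)]

def mutate(genome, rng, rate=0.03):
    """Flip each bit independently with probability `rate`."""
    flips = [sum(1 << i for i in range(PATTERN_BITS) if rng.random() < rate)
             for _ in genome]
    return [g ^ f for g, f in zip(genome, flips)]

def render(genome):
    """Text view: one row per track, hits marked K/S/H."""
    return [''.join(ch if (track >> i) & 1 else '.' for i in range(PATTERN_BITS))
            for track, ch in zip(genome, "KSH")]
```

The human-in-the-loop fitness problem mentioned above is why most such systems keep populations tiny: nobody will audition a thousand drum loops per generation.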

An expert system can be used to verify sequences generated by the other techniques. The knowledge base for such a validation system can probably be lifted from any good music theory book or website. Try Ricci Adams' musictheory.net.

Richard Poole
+2  A: 

Dmitri Tymoczko has some interesting ideas and examples here :

http://music.princeton.edu/~dmitri/whatmakesmusicsoundgood.html

interstar
A: 

Not quite what you're after, but I knew someone who looked at automatically generating DJ sets, in a project called Content-Based Music Similarity.

Peter K.
A: 

If you're into deeper theories about how music hangs together, Bill Sethares site has some interesting twists.

Peter K.
A: 

I've been looking into doing project proposal 8.1 from the "Theory and praxis in programming languages" research group at the University of Copenhagen's department of CS:

8.1 Automated Harvesting and Statistical Analysis of Music Corpora

Traditional analysis of sheet music consists of one or more persons analysing rhythm, chord sequences and other characteristics of a single piece, set in the context of an often vague comparison of other pieces by the same composer or other composers from the same period.

Traditional automated analysis of music has barely treated sheet music, but has focused on signal analysis and the use of machine learning techniques to extract and classify within, say, mood or genre. In contrast, incipient research at DIKU aims to automate parts of the analysis of sheet music. The added value is the potential for extracting information from large volumes of sheet music that cannot easily be done by hand and cannot be meaningfully analysed by machine learning techniques.

This, as I see it, is the opposite direction from your question, but the data generated could, I imagine, be used in some instances of procedural generation of music.

svrist
A: 

I have always liked the old Lucasarts games that used the iMuse system, which produced a never-ending, reactive soundtrack for the game and was very musical (because most of it was still created by a composer). You can find the specs (including the patent) here: http://en.wikipedia.org/wiki/IMUSE

Nintendo seems to be the only company to still use an approach similar to iMuse to create or influence the music on the fly.

Unless your project is very experimental, I would not abandon the use of a composer: a real human composer will produce much more musical and listenable results than an algorithm.

Compare it to writing a poem: you can easily generate nonsense poems which sound very avant-garde, but replicating Shakespeare with an algorithm is difficult, to put it mildly.

Galghamon
Very true, but I think users would be far more interested in "ok" or "decent" music that reacts to gameplay than the same 5 "great" tracks over and over again...
RCIX
@RCIX: Have you ever played a game with the iMuse system? It reacts to loads of things, it's very subtle or obvious, as required, but it uses music written by a human composer. It doesn't generate completely new, never before heard music, but it does great transitions between cues, it can alter arrangements (bring in new instruments, blend out others), it can speed up or slow down, all without ever missing a beat. This is very far from "the same 5 great tracks over and over again". I would call it "one continuous stream of music shaped to fit the mood of the game at the present moment".
Galghamon
A: 

My opinion is that generative music only works when it goes through a rigorous selection process. David Cope, an algorithmic music pioneer, would go through hours of musical output from his algorithms (which I think were mostly Markov Chain based) to pick out the few that actually turned out well.

I think this selection process could be automated by modeling the characteristics of a particular musical style. For instance, a "disco" style would award lots of points for a bassline that features offbeats and drum parts with snares on the backbeats but subtract points for heavily dissonant harmonies.
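A scoring function like that could be as simple as set arithmetic over onset positions. The step numbering (16 slots per bar), the specific offbeat/backbeat slots, and the point weights below are all invented for illustration:

```python
def disco_score(bass_onsets, snare_onsets, steps=16):
    """Toy style model: reward bass hits on the offbeat eighths and
    snare hits on the backbeats of a 16-step bar; penalize stray snares."""
    offbeats = {2, 6, 10, 14}   # the "and" of each beat
    backbeats = {4, 12}         # beats 2 and 4
    score = 0
    score += 2 * len(offbeats & set(bass_onsets))
    score += 3 * len(backbeats & set(snare_onsets))
    score -= len(set(snare_onsets) - backbeats)   # stray snares cost a point each
    return score
```

Plugged in as the fitness function of a generate-and-test loop, this kind of model automates the culling that Cope reportedly did by ear.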

The fact is that the music composition process is filled with so many idiomatic practices that they are very difficult to model without specific knowledge of the field.

gregsabo
+3  A: 

There are over 50 years of research into these techniques, much of it overlooked by developers unfamiliar with the history of computer music and algorithmic composition. Numerous examples of systems and research that address these issues can be found here:

http://www.algorithmic.net

flexatone