This paper attempts to provide a framework for automatic music generation. It correctly identifies a fundamental problem in the field: there is no agreed-upon standard for what an automatic music generation system should do. (One could argue there is a bigger problem in that there is no universal definition of music itself, but we'll leave that discussion for another day.)
A concept map for automatic music generation systems is presented that organizes music generation in terms of various properties such as narrative, composition, melody, and so on. Each of these properties is explained in detail. Following this is a fairly thorough history of automatic music generation, in which each system is categorized by the specific feature(s) of music it generates. Each feature is then mapped onto the aforementioned concept map, yielding the functional taxonomy of the paper's title.
The paper is very interesting, mostly for its history of music generation. However, two issues are not fully resolved. The first is that most music generation systems will map to multiple areas on the concept map; how is that ambiguity resolved? The second is that while labeling systems with the functional taxonomy does provide a nice framework, what does one do with the result? It is not clear how the taxonomy "sets the stage for new breakthroughs," as the abstract claims. A worked example of applying the taxonomy would make its utility clear.