Computer music







Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the very first experiments and innovations with electronic instruments at the turn of the 20th century.


In the 2000s, with the widespread availability of relatively affordable home computers that have a fast processing speed, and the growth of home recording using digital audio recording systems ranging from GarageBand to Pro Tools, the term is sometimes used to describe music that has been created using digital technology.




Contents






  • 1 History


  • 2 Advances


  • 3 Research


  • 4 Computer-generated music


    • 4.1 Music composed and performed by computers


    • 4.2 Computer-generated scores for performance by human players


    • 4.3 Computer-aided algorithmic composition




  • 5 Machine improvisation


    • 5.1 Statistical style modeling


    • 5.2 Uses of machine improvisation


    • 5.3 Implementations


    • 5.4 Musicians working with machine improvisation




  • 6 Live coding


  • 7 See also


  • 8 References


  • 9 Further reading





History






CSIRAC, Australia's first digital computer, as displayed at the Melbourne Museum


Much of the work on computer music has drawn on the relationship between music and mathematics, a relationship which has been noted since the Ancient Greeks described the "harmony of the spheres".


Musical melodies were first generated by the computer originally named the CSIR Mark 1 (later renamed CSIRAC) in Australia in 1950. There were newspaper reports from America and England, both early on and more recently, that computers may have played music earlier, but thorough research has debunked these stories, as there is no evidence to support them (some of the reports were obviously speculative). Research has shown that people speculated about computers playing music, possibly because computers would make noises,[1] but there is no evidence that they actually did so.[2][3]


The world's first computer to play music was the CSIR Mark 1 (later named CSIRAC), which was designed and built by Trevor Pearcey and Maston Beard from the late 1940s. Mathematician Geoff Hill programmed the CSIR Mark 1 to play popular musical melodies from the very early 1950s. In 1950 the CSIR Mark 1 was used to play music, the first known use of a digital computer for the purpose. The music was never recorded, but it has been accurately reconstructed.[4][5] In 1951 it publicly played the "Colonel Bogey March",[6] of which only the reconstruction exists. However, the CSIR Mark 1 played standard repertoire and was not used to extend musical thinking or composition practice in the way Max Mathews later did, which is the hallmark of current computer-music practice.


The first music to be performed by a computer in England was a rendition of the British national anthem, programmed by Christopher Strachey on the Ferranti Mark 1 late in 1951. Later that year, short extracts of three pieces were recorded there by a BBC outside broadcast unit: the national anthem, "Baa, Baa, Black Sheep", and "In the Mood". This is recognised as the earliest recording of a computer playing music, as the CSIRAC music was never recorded; the recording can be heard on the Manchester University website. Researchers at the University of Canterbury, Christchurch, declicked and restored this recording in 2016, and the results may be heard on SoundCloud.[7][8][9]


Two further major 1950s developments were the origins of digital sound synthesis by computer, and of algorithmic composition programs beyond rote playback. Max Mathews at Bell Laboratories developed the influential MUSIC I program and its descendants, further popularising computer music through a 1963 article in Science.[10] Amongst other pioneers, the musical chemists Lejaren Hiller and Leonard Isaacson worked on a series of algorithmic composition experiments from 1956 to 1959, manifested in the 1957 premiere of the Illiac Suite for string quartet.[11]


In Japan, experiments in computer music date back to 1962, when Keio University professor Sekine and Toshiba engineer Hayashi experimented with the TOSBAC computer. This resulted in a piece entitled TOSBAC Suite, influenced by the Illiac Suite. Later Japanese computer music compositions include a piece by Kenjiro Ezaki presented during Osaka Expo '70 and "Panoramic Sonore" (1974) by music critic Akimichi Takeda. Ezaki also published an article called "Contemporary Music and Computers" in 1970. Since then, Japanese research in computer music has largely been carried out for commercial purposes in popular music, though some of the more serious Japanese musicians used large computer systems such as the Fairlight in the 1970s.[12]




The programming computer for Yamaha's first FM synthesizer GS1. CCRMA, Stanford University


Early computer-music programs typically did not run in real time, although the first experiments on CSIRAC and the Ferranti Mark 1 did. Programs would run for hours or days on multimillion-dollar computers to generate a few minutes of music.[13][14] One way around this was to use a 'hybrid system', in which a microprocessor-based system controls an analog synthesizer; the most notable example was the Roland MC-8 Microcomposer, released in 1978.[12] John Chowning's work on FM synthesis from the 1960s to the 1970s allowed much more efficient digital synthesis,[15] eventually leading to the development of the affordable FM-synthesis-based Yamaha DX7 digital synthesizer, released in 1983.[16] In addition to the Yamaha DX7, the advent of inexpensive digital chips and microcomputers opened the door to real-time generation of computer music.[16] In the 1980s, Japanese personal computers such as the NEC PC-88 came installed with FM synthesis sound chips and featured audio programming languages such as Music Macro Language (MML) and MIDI interfaces, which were most often used to produce video game music, or chiptunes.[12] By the early 1990s, the performance of microprocessor-based computers reached the point that real-time generation of computer music using more general programs and algorithms became possible.[17]
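The efficiency Chowning exploited comes from the FM equation itself: a single modulating sine wave adds a whole family of sidebands to the carrier, where additive synthesis would need one oscillator per partial. A minimal sketch in Python (assuming NumPy is available; the carrier, modulator, and index values are arbitrary illustrative choices, not Chowning's):

```python
import numpy as np

def fm_tone(fc, fm, index, duration=1.0, sr=44100):
    """One FM tone: a carrier at fc Hz whose phase is modulated
    by a sine at fm Hz with the given modulation index."""
    t = np.arange(int(duration * sr)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# A rich, bell-like spectrum from just two sine functions,
# far cheaper than summing dozens of partials individually.
samples = fm_tone(fc=440.0, fm=660.0, index=5.0)
```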


Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear. In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound.[18]



Advances


Advances in computing power and software for manipulation of digital media have dramatically affected the way computer music is generated and performed. Current-generation micro-computers are powerful enough to perform very sophisticated audio synthesis using a wide variety of algorithms and approaches. Computer music systems and approaches are now ubiquitous, and so firmly embedded in the process of creating music that we hardly give them a second thought: computer-based synthesizers, digital mixers, and effects units have become so commonplace that use of digital rather than analog technology to create and record music is the norm, rather than the exception.[19]



Research


Despite the ubiquity of computer music in contemporary culture, there is considerable ongoing activity in the field, as researchers continue to pursue new and interesting computer-based synthesis, composition, and performance approaches. Throughout the world there are many organizations and institutions dedicated to the study and research of computer and electronic music, including the ICMA (International Computer Music Association), C4DM (Centre for Digital Music), IRCAM, GRAME, SEAMUS (Society for Electro Acoustic Music in the United States), CEC (Canadian Electroacoustic Community), and a great number of institutions of higher learning around the world.



Computer-generated music


Computer-generated music is music composed by, or with the extensive aid of, a computer. Although any music which uses computers in its composition or realisation is computer-generated to some extent, the use of computers is now so widespread (in the editing of pop songs, for instance) that the phrase computer-generated music is generally used to mean a kind of music which could not have been created without the use of computers.[citation needed]


We can distinguish two groups of computer-generated music: music in which a computer generated the score, which could be performed by humans, and music which is both composed and performed by computers. There is a large genre of music that is organized, synthesized, and created on computers.[citation needed]



Music composed and performed by computers




Later, composers such as Gottfried Michael Koenig had computers generate the sounds of the composition as well as the score. Koenig produced algorithmic composition programs which were a generalisation of his own serial composition practice. This differs from Xenakis's work, in which mathematical abstractions were used and explored for how far they could be taken musically. Koenig's software translated the calculation of mathematical equations into codes which represented musical notation. This could be converted into musical notation by hand and then performed by human players. His programs Project 1 and Project 2 are examples of this kind of software. Later, he extended the same kind of principles into the realm of synthesis, enabling the computer to produce the sound directly. SSP is an example of a program which performs this kind of function. All of these programs were produced by Koenig at the Institute of Sonology in Utrecht in the 1970s.[20]


Procedures such as those used by Koenig and Xenakis are still in use today.[citation needed] Since the invention of the MIDI system in the early 1980s, for example, some people have worked on programs which map MIDI notes to an algorithm and then can either output sounds or music through the computer's sound card or write an audio file for other programs to play.[citation needed]


Some of these simple programs are based on fractal geometry, and can map MIDI notes to specific fractals or fractal equations. Although such programs are widely available and are sometimes seen as clever toys for the non-musician, some professional musicians have also given them attention. The resulting 'music' can be more like noise, or can sound quite familiar and pleasant. As with much algorithmic music, and algorithmic art in general, more depends on the way in which the parameters are mapped to aspects of these equations than on the equations themselves. Thus, for example, the same equation can be made to produce both a lyrical, melodic piece of music in the style of the mid-nineteenth century and a fantastically dissonant cacophony more reminiscent of the avant-garde music of the 1950s and 1960s.[citation needed]
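As a purely illustrative sketch of this kind of mapping, the logistic map can stand in for "a fractal equation"; the scale and parameter choices below are hypothetical, not taken from any particular program:

```python
# Map iterates of the logistic map onto a pentatonic scale.
SCALE = [60, 63, 65, 67, 70]  # MIDI note numbers (C minor pentatonic)

def logistic_notes(r=3.9, x=0.5, n=32):
    notes = []
    for _ in range(n):
        x = r * x * (1 - x)                 # chaotic regime for r near 3.9
        notes.append(SCALE[int(x * len(SCALE))])
    return notes

print(logistic_notes())
```

Changing the mapping (a different scale, or mapping x to rhythm instead of pitch) changes the musical result far more than changing the equation, which is the point made above.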


Other programs can map mathematical formulae and constants to produce sequences of notes. In this manner, an irrational number can give an infinite sequence of notes where each note is a digit in the decimal expansion of that number. This sequence can in turn be a composition in itself, or simply the basis for further elaboration.[citation needed]
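A minimal sketch of the digit-to-note idea (the digits shown are the opening decimals of π; the scale mapping is an arbitrary illustrative choice):

```python
# Each decimal digit of pi selects a degree of a two-octave C major scale.
PI_DIGITS = "14159265358979323846"  # digits after the decimal point
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72, 74, 76]  # ten degrees, one per digit

melody = [C_MAJOR[int(d)] for d in PI_DIGITS]
print(melody)
```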


Operations such as these, and even more elaborate ones, can also be performed in computer music programming languages such as Max/MSP, Reaktor, SuperCollider, Csound, Pure Data (Pd), Keykit, and ChucK. These programs now easily run on most personal computers, and are often capable of more complex functions than those which would have required the most powerful mainframe computers several decades ago.[citation needed]


There exist programs that generate "human-sounding" melodies by using a vast database of phrases. One example is Band-in-a-Box, which is capable of creating jazz, blues and rock instrumental solos with almost no user interaction. Another is Impro-Visor, which uses a stochastic context-free grammar to generate phrases and complete solos.[citation needed]
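Impro-Visor's actual grammar is its own; purely to illustrate the general mechanism of a stochastic context-free grammar, a toy version might expand phrase symbols into note tokens like this (all rules here are invented for illustration):

```python
import random

# A toy stochastic context-free grammar: each nonterminal maps to
# weighted alternatives; terminals are pitch/duration tokens.
GRAMMAR = {
    "PHRASE": [(["MOTIF", "MOTIF"], 0.6), (["MOTIF", "PHRASE"], 0.4)],
    "MOTIF":  [(["c4/8", "e4/8", "g4/4"], 0.5), (["g4/8", "f4/8", "e4/4"], 0.5)],
}

def expand(symbol):
    if symbol not in GRAMMAR:              # terminal: an actual note token
        return [symbol]
    rules, weights = zip(*GRAMMAR[symbol])
    rule = random.choices(rules, weights=weights)[0]
    return [tok for s in rule for tok in expand(s)]

print(" ".join(expand("PHRASE")))
```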


Another 'cybernetic' approach to computer composition uses specialized hardware to detect external stimuli, which are then mapped by the computer to realize the performance. Examples of this style of computer music can be found in the mid-1980s work of David Rokeby (Very Nervous System), where audience/performer motions are 'translated' into MIDI segments. Computer-controlled music is also found in the performance pieces by the Canadian composer Udo Kasemets, such as the Marce(ntennia)l Circus C(ag)elebrating Duchamp (1987), a realization of the Marcel Duchamp process piece Erratum Musical using an electric model train to collect a hopper-car of stones to be deposited on a drum wired to an analog-to-digital converter, mapping the stone impacts to a score display (performed in Toronto by pianist Gordon Monahan during the 1987 Duchamp centennial), or his installations and performance works (e.g. Spectrascapes) based on his Geo(sono)scope (1986), a 15x4-channel computer-controlled audio mixer. In these latter works, the computer generates soundscapes from tape-loop sound samples, live shortwave radio, or sine-wave generators.[citation needed]



Computer-generated scores for performance by human players


Many systems for generating musical scores existed well before the time of computers. One of these was the Musikalisches Würfelspiel (musical dice game; 18th century), a system which used throws of the dice to randomly select measures from a large collection of small phrases. When patched together, these phrases combined to create musical pieces which could be performed by human players. Although these works were not actually composed with a computer in the modern sense, they used a rudimentary form of the random combinatorial techniques sometimes used in computer-generated composition.[citation needed]
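The combinatorial core of such a dice game is easy to restate in modern terms; a sketch in Python (the measure labels are placeholders, modeled on the 16-position, two-dice layout of the well-known games, not on any historical table):

```python
import random

# Each position in the piece has a table of interchangeable measures;
# two dice pick one option per position, as in the 18th-century game.
measure_tables = [
    ["m%d_opt%d" % (pos, opt) for opt in range(11)]  # 11 outcomes of 2 dice (2..12)
    for pos in range(16)                             # a 16-measure piece
]

piece = [table[random.randint(1, 6) + random.randint(1, 6) - 2]
         for table in measure_tables]
print(piece)
```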


The world's first digital computer music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard, although it was used only to play standard tunes of the day. Subsequently, one of the first composers to write music with a computer was Iannis Xenakis. He wrote programs in the FORTRAN language that generated numeric data that he transcribed into scores to be played by traditional musical instruments. An example is ST/48 of 1962. Although Xenakis could well have composed this music by hand, the intensity of the calculations needed to transform probabilistic mathematics into musical notation was best left to the number-crunching power of the computer.[citation needed]


Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present-day exponent of this technique is David Cope, who wrote computer programs that analyse the works of other composers to produce new works in a similar style. He has used these programs to great effect with composers such as Bach and Mozart (his program Experiments in Musical Intelligence is famous for creating "Mozart's 42nd Symphony"), and also within his own pieces, combining his own creations with those of the computer.[21]


Melomics, a research project from the University of Málaga, Spain, developed a computer composition cluster named Iamus, which composes complex, multi-instrument pieces for editing and performance. In 2012 Iamus composed a full album, appropriately named Iamus, which New Scientist described as "the first major work composed by a computer and performed by a full orchestra".[22] The group has also developed an API for developers to use the technology, and makes its music available on its website.



Computer-aided algorithmic composition




Diagram illustrating the position of CAAC in relation to other generative music systems


Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use. The label computer-aided composition lacks the specificity of using generative algorithms. Music produced with notation or sequencing software could easily be considered computer-aided composition. The label algorithmic composition is likewise too broad, particularly in that it does not specify the use of a computer. The term computer-aided, rather than computer-assisted, is used in the same manner as computer-aided design.[23]



Machine improvisation



Machine improvisation uses computer algorithms to create improvisation on existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. In order to achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern matching algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic reinjection.
This is different from other improvisation methods with computers that use algorithmic composition to generate new music without performing analysis of existing music examples.[24]



Statistical style modeling


Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's Illiac Suite for string quartet (1957) and Xenakis's use of Markov chains and stochastic processes. Modern methods include the use of lossless data compression for incremental parsing, prediction suffix trees, and string searching by the factor oracle algorithm (basically, a factor oracle is a finite state automaton constructed in linear time and space in an incremental fashion[25]).
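A first-order Markov chain over pitches is about the simplest instance of this kind of statistical style model; a minimal sketch (the training corpus here is a made-up fragment, not real repertoire):

```python
import random
from collections import defaultdict

def train(notes):
    """Count first-order transitions between successive pitches."""
    model = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=20):
    out = [start]
    for _ in range(length - 1):
        # Follow an observed transition; restart anywhere on a dead end.
        out.append(random.choice(model.get(out[-1]) or list(model)))
    return out

corpus = [60, 62, 64, 62, 60, 67, 65, 64, 62, 60]  # toy 'style' to imitate
print(generate(train(corpus), start=60))
```

The output reuses only transitions seen in the corpus, which is the sense in which the variations stay "in the style" of the original material.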



Uses of machine improvisation


Machine improvisation encourages musical creativity by providing automatic modeling and transformation structures for existing music.[citation needed] This creates a natural interface with the musician without the need to code musical algorithms. In live performance, the system re-injects the musician's material in several different ways, allowing a semantics-level representation of the session and a smart recombination and transformation of this material in real time. In the offline version, machine improvisation can be used to achieve style mixing, an approach inspired by Vannevar Bush's imaginary memex machine.[citation needed]



Implementations


The first system implementing interactive machine improvisation by means of Markov models and style modeling techniques was the Continuator,[26] developed by François Pachet at Sony CSL Paris in 2002,[27][28] based on earlier work on non-real-time style modeling.[29][30]
A MATLAB implementation of the Factor Oracle machine improvisation can be found as part of the Computer Audition toolbox. There is also an NTCC implementation of the Factor Oracle machine improvisation.[31]
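For illustration, the incremental factor oracle construction mentioned above (one state per prefix of the sequence, forward transitions, and suffix links, built in a single left-to-right pass) can be sketched as follows; this follows the published algorithm in outline, not any particular toolbox's code:

```python
def factor_oracle(seq):
    """Build a factor oracle over seq: state i corresponds to the
    prefix of length i; sfx holds suffix links (-1 means none)."""
    trans = [{} for _ in range(len(seq) + 1)]
    sfx = [-1] * (len(seq) + 1)
    for i, sym in enumerate(seq):
        trans[i][sym] = i + 1              # forward transition
        k = sfx[i]
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i + 1          # extra transition from suffix state
            k = sfx[k]
        sfx[i + 1] = 0 if k == -1 else trans[k][sym]
    return trans, sfx

# Improvisation then walks forward transitions (verbatim recall) or jumps
# along suffix links to recombine phrases that share a common context.
trans, sfx = factor_oracle(list("abbcabc"))
```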


OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov, and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the OMax Brothers) in the IRCAM Music Representations group.[32]



Musicians working with machine improvisation


Gerard Assayag (IRCAM, France),
Jeremy Baguyos (University of Nebraska at Omaha, US)
Tim Blackwell (Goldsmiths College, Great Britain),
George Bloch (Composer, France),
Marc Chemillier (IRCAM/CNRS, France),
Nick Collins (University of Sussex, UK),
Shlomo Dubnov (Composer, Israel / US),
Mari Kimura (Juilliard, New York City),
Amanuel Zarzowski (Composer Los Angeles/San Diego),
George Lewis (Columbia University, New York City),
Bernard Lubat (Pianist, France),
François Pachet (Sony CSL, France),
Joel Ryan (Institute of Sonology, Netherlands),
Michel Waisvisz (STEIM, Netherlands),
David Wessel (CNMAT, California),
Michael Young (Goldsmiths College, Great Britain),
Pietro Grossi (CNUCE, Institute of the National Research Council, Pisa, Italy),
Toby Gifford and Andrew Brown (Griffith University, Brisbane, Australia),
Davis Salks (jazz composer, Hamburg, PA, US),
Doug Van Nort (electroacoustic improviser, Montreal/New York)



Live coding



Live coding[33] (sometimes known as 'interactive programming', 'on-the-fly programming',[34] or 'just-in-time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop musicians who, live coders often feel, lack the charisma and pizzazz of musicians performing live.[35]


Generally, this practice stages a more general approach: one of interactive programming, of writing (parts of) programs while they are interpreted. Traditionally most computer music programs have tended toward the old write/compile/run model which evolved when computers were much less powerful. This approach has locked out code-level innovation by people whose programming skills are more modest. Some programs have gradually integrated real-time controllers and gesturing (for example, MIDI-driven software synthesis and parameter control). Until recently, however, the musician/composer rarely had the capability of real-time modification of program code itself. This legacy distinction is somewhat erased by languages such as ChucK, SuperCollider, and Impromptu.[citation needed]
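A minimal sketch of the idea (a real live-coding language such as ChucK or SuperCollider does this inside a sound engine; here the 'instrument' merely prints note numbers, and the performer rebinds pattern at the interpreter prompt while the loop keeps running):

```python
import itertools, threading, time

def pattern(beat):
    return [60, 64, 67][beat % 3]      # the part the performer live-edits

def player():
    for beat in itertools.count():
        print("note", pattern(beat))   # always calls the *current* pattern
        time.sleep(0.5)

threading.Thread(target=player, daemon=True).start()
# At the REPL, rebinding `pattern` changes the music without stopping it:
# >>> def pattern(beat): return [62, 65, 69, 72][beat % 4]
```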


TOPLAP, an ad-hoc conglomerate of artists interested in live coding, was formed in 2004, and promotes the use, proliferation and exploration of a range of software, languages and techniques to implement live coding. This is a parallel and collaborative effort with, for example, research at the Princeton Sound Lab, the University of Cologne, and the Computational Arts Research Group at Queensland University of Technology.[citation needed]



See also











References





  1. ^ "Algorhythmic Listening 1949-1962 Auditory Practices of Early Mainframe Computing". AISB/IACAP World Congress 2012. Retrieved 18 October 2017..mw-parser-output cite.citation{font-style:inherit}.mw-parser-output .citation q{quotes:"""""""'""'"}.mw-parser-output .citation .cs1-lock-free a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/6/65/Lock-green.svg/9px-Lock-green.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .citation .cs1-lock-limited a,.mw-parser-output .citation .cs1-lock-registration a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/d/d6/Lock-gray-alt-2.svg/9px-Lock-gray-alt-2.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .citation .cs1-lock-subscription a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Lock-red-alt-2.svg/9px-Lock-red-alt-2.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration{color:#555}.mw-parser-output .cs1-subscription span,.mw-parser-output .cs1-registration span{border-bottom:1px dotted;cursor:help}.mw-parser-output .cs1-ws-icon a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/12px-Wikisource-logo.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output code.cs1-code{color:inherit;background:inherit;border:inherit;padding:inherit}.mw-parser-output .cs1-hidden-error{display:none;font-size:100%}.mw-parser-output .cs1-visible-error{font-size:100%}.mw-parser-output .cs1-maint{display:none;color:#33aa33;margin-left:0.3em}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration,.mw-parser-output .cs1-format{font-size:95%}.mw-parser-output .cs1-kern-left,.mw-parser-output .cs1-kern-wl-left{padding-left:0.2em}.mw-parser-output .cs1-kern-right,.mw-parser-output .cs1-kern-wl-right{padding-right:0.2em}


  2. ^ "MuSA 2017 - Early Computer Music Experiments in Australia, England and the USA". MuSA Conference. 9 July 2017. Retrieved 18 October 2017.


  3. ^ Doornbusch, Paul (2017). "Early Computer Music Experiments in Australia and England". Organised Sound. Cambridge University Press. 22: 297–307 [11]. doi:10.1017/S1355771817000206. Retrieved 29 August 2017.


  4. ^ Fildes, Jonathan (2008-06-17). "Oldest computer music unveiled". BBC News Online. Retrieved 2008-06-18.


  5. ^ Doornbusch, Paul (March 2004). "Computer Sound Synthesis in 1951: The Music of CSIRAC". Computer Music Journal. 28 (1): 11–12. doi:10.1162/014892604322970616. ISSN 0148-9267. Archived from the original on 2004.


  6. ^ Doornbusch, Paul. "The Music of CSIRAC". Melbourne School of Engineering, Department of Computer Science and Software Engineering. Archived from the original on 18 January 2012.


  7. ^ "First recording of computer-generated music – created by Alan Turing – restored". The Guardian. 26 September 2016. Retrieved 28 August 2017.


  8. ^ "Restoring the first recording of computer music - Sound and vision blog". British Library. 13 September 2016. Retrieved 28 August 2017.


  9. ^ Fildes, Jonathan (June 17, 2008). "'Oldest' computer music unveiled". BBC News. Retrieved 4 December 2013.


  10. ^ Bogdanov, Vladimir (2001). All Music Guide to Electronica: The Definitive Guide to Electronic Music. Backbeat Books. p. 320. Retrieved 4 December 2013.


  11. ^ Lejaren Hiller and Leonard Isaacson, Experimental Music: Composition with an Electronic Computer (New York: McGraw-Hill, 1959; reprinted Westport, Conn.: Greenwood Press, 1979).
    ISBN 0-313-22158-8.[page needed]



  12. ^ a b c Shimazu, Takehito (1994). "The History of Electronic and Computer Music in Japan: Significant Composers and Their Works". Leonardo Music Journal. MIT Press. 4: 102–106 [104]. doi:10.2307/1513190. Retrieved 9 July 2012.


  13. ^ Cattermole, Tannith (May 9, 2011). "Farseeing inventor pioneered computer music". Gizmag. Retrieved 28 October 2011.

    "In 1957 the MUSIC program allowed an IBM 704 mainframe computer to play a 17-second composition by Mathews. Back then computers were ponderous, so synthesis would take an hour."



  14. ^ Mathews, Max (1 November 1963). "The Digital Computer as a Musical Instrument". Science. 142 (3592): 553–557. doi:10.1126/science.142.3592.553. Retrieved 28 October 2011.

    "The generation of sound signals requires very high sampling rates.... A high speed machine such as the I.B.M. 7090 ... can compute only about 5000 numbers per second ... when generating a reasonably complex sound."



  15. ^ Dean, R. T. (2009). The Oxford handbook of computer music. Oxford University Press. p. 20. ISBN 0-19-533161-3.


  16. ^ a b Dean, R. T. (2009). The Oxford handbook of computer music. Oxford University Press. p. 1. ISBN 0-19-533161-3.


  17. ^ Dean, R. T. (2009). The Oxford handbook of computer music. Oxford University Press. pp. 4–5. ISBN 0-19-533161-3.

    "... by the 90s ... digital sound manipulation (using MSP or many other platforms) became widespread, fluent and stable."



  18. ^ Loy, D. Gareth (1992). Roads, Curtis, ed. The Music Machine: Selected Readings from Computer Music Journal. MIT Press. p. 344. ISBN 0-262-68078-5.


  19. ^ Doornbusch, Paul (2009). "Chapter 3: Early Hardware and Early Ideas in Computer Music: Their Development and Their Current Forms". In Dean, R. T. The Oxford handbook of computer music. Oxford University Press. pp. 44–80. doi:10.1093/oxfordhb/9780199792030.013.0003. ISBN 0-19-533161-3.


  20. ^ Berg, P (1996). "Abstracting the future: The Search for Musical Constructs". Computer Music Journal. MIT Press. 20: 24–27 [11].


  21. ^ Baofu, Peter (2013-01-03). The Future of Post-Human Performing Arts: A Preface to a New Theory of the Body and its Presence. Cambridge Scholars Publishing. ISBN 9781443844857.


  22. ^ "Computer composer honours Turing's centenary". New Scientist. 5 July 2012.


  23. ^ Christopher Ariza: An Open Design for Computer-Aided Algorithmic Music Composition, Universal-Publishers Boca Raton, Florida, 2005, p. 5


  24. ^ Mauricio Toro, Carlos Agon, Camilo Rueda, Gerard Assayag. "GELISP: A Framework to Represent Musical Constraint Satisfaction Problems and Search Strategies", Journal of Theoretical and Applied Information Technology 86, no. 2 (2016): 327–331.


  25. ^ Jan Pavelka; Gerard Tel; Miroslav Bartosek, eds. (1999). Factor oracle: a new structure for pattern matching; Proceedings of SOFSEM’99; Theory and Practice of Informatics. Springer-Verlag, Berlin. pp. 291–306. ISBN 3-540-66694-X. Retrieved 4 December 2013. Lecture Notes in Computer Science 1725


  26. ^ [1]


  27. ^ Pachet, F., The Continuator: Musical Interaction with Style. In ICMA, editor, Proceedings of ICMC, pages 211-218, Göteborg, Sweden, September 2002. ICMA. Best paper award.


  28. ^ Pachet, F. Playing with Virtual Musicians: the Continuator in practice. IEEE Multimedia,9(3):77-82 2002.


  29. ^ G. Assayag, S. Dubnov, O. Delerue, "Guessing the Composer's Mind : Applying Universal Prediction to Musical Style", In Proceedings of International Computer Music Conference, Beijing, 1999.


  30. ^ S. Dubnov, G. Assayag, O. Lartillot, G. Bejerano, "Using Machine-Learning Methods for Musical Style Modeling", IEEE Computers, 36 (10), pp. 73-80, Oct. 2003.


  31. ^
    M Toro, C Rueda, C Agón, G Assayag. NTCCRT: A concurrent constraint framework for soft-real time music interaction.
    Journal of Theoretical & Applied Information Technology Vol. 82 Issue 1, p184-193. 2015



  32. ^ "The OMax Project Page". omax.ircam.fr. Retrieved 2018-02-02.


  33. ^ Collins, N.; McLean, A.; Rohrhuber, J.; Ward, A. (2004). "Live coding in laptop performance". Organised Sound. 8 (03). doi:10.1017/S135577180300030X.


  34. ^ Wang G. & Cook P. (2004) "On-the-fly Programming: Using Code as an Expressive Musical Instrument", In Proceedings of the 2004 International Conference on New Interfaces for Musical Expression (NIME) (New York: NIME, 2004).


  35. ^ Collins, N. (2003). "Generative Music and Laptop Performance". Contemporary Music Review. 22 (4): 67–79. doi:10.1080/0749446032000156919.




Further reading



  • Ariza, C. 2005. "Navigating the Landscape of Computer-Aided Algorithmic Composition Systems: A Definition, Seven Descriptors, and a Lexicon of Systems and Research." In Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association. 765-772. Internet: https://web.archive.org/web/20070927001256/http://www.flexatone.net/docs/nlcaacs.pdf

  • Ariza, C. 2005. An Open Design for Computer-Aided Algorithmic Music Composition: athenaCL. Ph.D. Dissertation, New York University. Internet: https://web.archive.org/web/20110606061708/http://www.flexatone.net/docs/odcaamca.pdf

  • Berg, P. 1996. "Abstracting the future: The Search for Musical Constructs" Computer Music Journal 20(3): 24-27.


  • Boulanger, Richard, ed. (March 6, 2000). The Csound Book: Perspectives in Software Synthesis, Sound Design, Signal Processing, and Programming. The MIT Press. p. 740. ISBN 0-262-52261-6. Retrieved 3 October 2009.


  • Chadabe, Joel. 1997. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, New Jersey: Prentice Hall.

  • Chowning, John. 1973. "The Synthesis of Complex Audio Spectra by Means of Frequency Modulation". Journal of the Audio Engineering Society 21, no. 7:526–34.


  • Collins, Nick (2009). Introduction to Computer Music. Chichester: Wiley. ISBN 978-0-470-71455-3.


  • Dodge, Charles; Jerse, Thomas A. (1997). Computer Music: Synthesis, Composition and Performance (2nd ed.). New York: Schirmer Books. p. 453. ISBN 0-02-864682-7.

  • Doornbusch, P. 2015. "A Chronology / History of Electronic and Computer Music and Related Events 1906 - 2015" http://www.doornbusch.net/chronology/

  • Doornbusch, P. 2017. "MuSA 2017 - Early Computer Music Experiments in Australia, England and the USA" https://www.academia.edu/34234640/MuSA_2017_Conference_-_Early_Computer_Music_Experiments_in_Australia_England_and_the_USA


  • Heifetz, Robin (1989). On the Wires of Our Nerves. Lewisburg Pa.: Bucknell University Press. ISBN 0-8387-5155-5.


  • D. Herremans; C.H. Chuan; E. Chew (2017). "A Functional Taxonomy of Music Generation Systems". ACM Computing Surveys. 50 (5): 69:1–30. doi:10.1109/TAFFC.2017.2737984.


  • Manning, Peter (2004). Electronic and Computer Music (revised and expanded ed.). Oxford Oxfordshire: Oxford University Press. ISBN 0-19-517085-7.

  • Perry, Mark, and Thomas Margoni. 2010. "From Music Tracks to Google Maps: Who Owns Computer-Generated Works?". Computer Law and Security Review 26: 621–29.


  • Roads, Curtis (1994). The Computer Music Tutorial. Cambridge: MIT Press. ISBN 0-262-68082-3.

  • Supper, M. 2001. "A Few Remarks on Algorithmic Composition." Computer Music Journal 25(1): 48-53.


  • Xenakis, Iannis (2001). Formalized Music: Thought and Mathematics in Composition. Harmonologia Series No. 6. Hillsdale, NY: Pendragon Pr. ISBN 1-57647-079-2.








