Algorithmic Music Lab

A project dedicated to the artistic and technological dissemination of Algorithmic Musical Composition


Algorithmic Composition

Since the appearance of the first computers in the second half of the 20th century, their potential for musical creation and expression has been explored. This has led to the development of [more...]

Algorithmic Music Lab

A project created in 2012 by composers and researchers Gustavo Díaz-Jerez, Jesús L. Álvaro and Luis Robles, although it remains completely open [more...]

Events and Activity

Here you can find information about all the activity of the Algorithmic Music Lab [more...]

Projects

We present the main projects and software tools linked to the Algorithmic Music Lab. All of them are established projects, accessible to the general public, presented through [more...]


News

Conference: Iamus: AI at the service of contemporary music composition. 15 February 2019

Located in the Technological Park of Andalusia in Malaga, Iamus is a powerful computing cluster. It is the first system dedicated entirely to the composition of contemporary classical music. Iamus's compositional approach is biologically inspired: each work is encoded in a genome that undergoes evo-devo (evolutionary developmental) dynamics and genetic operations such as mutation and recombination. Iamus not only composes the music, but also delivers a score fully written in standard musical notation, ready to be performed. All of this is done without human intervention.

Speaker: Gustavo Díaz-Jerez

Where: Fundación Telefónica. C/Fuencarral, 3. Madrid

Free admission with registration.

Tickets: https://espacio.fundaciontelefonica.com/evento/charla-iamus-inteligencia-artificial-al-servicio-de-la-musica/
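
To give a flavor of the kind of evolutionary process described above, here is a minimal, hypothetical sketch in Python of a genome-based loop with mutation and recombination over note sequences. The pitch range, parameters and toy fitness function are illustrative assumptions only and do not reflect Iamus's actual evo-devo model, which is far richer.

```python
import random

# Minimal, hypothetical sketch of a genome-based evolutionary loop for
# melody generation. Names, parameters and the fitness function are
# illustrative assumptions; they are not taken from Iamus itself.

PITCHES = list(range(60, 73))      # MIDI pitches C4..C5
GENOME_LENGTH = 16                 # notes per candidate melody
POPULATION_SIZE = 40
GENERATIONS = 200
MUTATION_RATE = 0.1

def random_genome():
    return [random.choice(PITCHES) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Toy criterion: prefer mostly stepwise motion (penalize large leaps).
    intervals = [abs(b - a) for a, b in zip(genome, genome[1:])]
    return -sum(i for i in intervals if i > 2)

def mutate(genome):
    # Each note has a small chance of being replaced by a random pitch.
    return [random.choice(PITCHES) if random.random() < MUTATION_RATE else p
            for p in genome]

def recombine(parent_a, parent_b):
    # Single-point crossover, analogous to genetic recombination.
    cut = random.randrange(1, GENOME_LENGTH)
    return parent_a[:cut] + parent_b[cut:]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION_SIZE // 2]
    offspring = [mutate(recombine(random.choice(survivors), random.choice(survivors)))
                 for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + offspring

print(max(population, key=fitness))   # best melody as a list of MIDI pitches
```

In a real system such as Iamus, the genome encodes far more than a pitch list, and the developmental (evo-devo) stage that maps genome to score is itself a substantial part of the model.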


Lecture: Composition and Musical Design using Algorithms, March 2017

The Universidad Autónoma de Madrid hosts this lecture by Luis Robles. It will offer an overview of Algorithmic Composition, with a historical introduction and an assessment of the state of the art.

DeepMind presents WaveNet, a striking sound synthesis system, September 2016

DeepMind is a British Artificial Intelligence company that was acquired by Google in 2014. It has recently presented one of its latest projects, WaveNet, which applies Artificial Intelligence to sound synthesis from a novel approach, aimed especially at speech synthesis. The results are striking, and Google intends to take advantage of them in its upcoming human-machine interaction applications.

Even more surprising, the model has also been applied to music synthesis, with remarkable results that can be heard on the project's website, which also offers an academic paper with detailed technical information on the process.