Overview
`eikosany` is an R package of tools for algorithmic composition with Erv Wilson's Combination Product Sets (Narushima 2019, chap. 6). It is meant to complement other microtonal composition tools, not replace any of them.
About the name: an Eikosany is a 20-note scale Erv Wilson derived by taking the products of all three-element combinations of six harmonic factors. Although any six factors can be used, the most commonly encountered Eikosany uses the first six odd numbers: 1, 3, 5, 7, 9 and 11.
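As an illustration of that construction, here is a minimal sketch in base R. The ratio and cents arithmetic is standard tuning math, not this package's API; the helper `octave_reduce` is my own name for the reduction step.

```r
# The 1-3-5-7-9-11 Eikosany: each of the choose(6, 3) = 20 degrees is
# the product of three distinct factors, reduced into a single octave.
factors <- c(1, 3, 5, 7, 9, 11)

# products of every 3-element combination of the factors
products <- combn(factors, 3, prod)

# divide by 2 until the ratio lies in [1, 2)
octave_reduce <- function(x) {
  while (x >= 2) x <- x / 2
  x
}

ratios <- sort(sapply(products, octave_reduce))
cents  <- 1200 * log2(ratios)   # scale degrees in cents above 1/1

length(ratios)  # 20 scale degrees
```

Because all the products are odd, no two of them differ by a power of two, so the octave reduction yields 20 distinct pitches.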
Other tools
- Scala. Note: this is not the Scala multi-paradigm programming language that runs on the Java Virtual Machine. This is a specialized tool for working with musical scales.
- Sevish's Scale Workshop. This is a web-based tool for working with musical scales.
- Leimma and Apotome. These tools, by Khyam Allami and Counterpoint, are browser-based applications for creating microtonal scales and making generative music with them.
- ODDSound MTS-ESP. This is a plugin for digital audio workstations (DAWs) that facilitates production of microtonal music. I own a copy, and if you're making microtonal electronic music, you should too. The Eikosany and other scales Erv Wilson developed all ship with MTS-ESP, so you don't really need my R package to compose with them.
- Entonal Studio. Entonal Studio is a user interface package for microtonal composition. It can operate as a standalone application, a plugin host or a plugin. I own a copy of Entonal Studio and recommend it highly.
- Infinitone DMT. From the Infinitone DMT FAQ: "Infinitone DMT is a DAW plugin and standalone that empowers musicians to easily use micro-tuning within their own workflow. … As a plugin, Infinitone DMT is inserted in your DAW as a MIDI effect. … The standalone can be used separately from a DAW, or it can be used in conjunction with a DAW by routing MIDI data from the DAW to the standalone (and back)."
- Universal Tuning Editor. Universal Tuning Editor is an application for computing and visualizing microtonal scales and tunings, and includes tools to interface with hardware and software synthesizers.
- Wilsonic. This is a free app that runs on iOS devices. I don't have any iOS devices, so I've never used it. There is also a version of Wilsonic in development for use with ODDSound MTS-ESP; see https://wilsonic.co/downloads/downloads-mts-esp/ for the details.
- Surge XT. Surge XT is a full-featured open-source software synthesizer. The Surge XT community has invested a significant level of effort into supporting alternate tuning systems.

See the Xenharmonic Wiki's List of microtonal software plugins for more ways of making microtonal music.
Some history
On February 4, 2001, composer Iannis Xenakis passed away. I’ve been a fan of experimental music, especially musique concrète, algorithmically composed music, microtonal music, and other avant-garde genres since I was an undergraduate. Xenakis was one of the major figures in algorithmic composition.
Reading the first edition of Tuning, Timbre, Spectrum, Scale rekindled my appreciation for the microtonal music of Harry Partch. And so, armed with copies of Sethares (1998), Formalized Music, and Genesis of a Music, I embarked on a path that led to When Harry Met Iannis.
When Harry Met Iannis premiered at a microtonal music festival in El Paso, Texas in late October 2001. The Bandcamp release is essentially identical to the festival version; the source code is on GitHub at https://github.com/AlgoCompSynth/when-harry-met-iannis.
At the festival, I met a number of composers who were working in microtonal music and just intonation, and one name kept coming up: Erv Wilson. Wilson was a theoretician who developed keyboards, scales and tuning systems that several composers were using at the time and are still using today. Terumi Narushima's Microtonality and the Tuning Systems of Erv Wilson (Narushima 2019) comprehensively documents Wilson's work and is the basis for much of the code in this package.
Motivation
I have two main motivations:
- There's an old saying that if you really want to learn something, teach a computer to do it. In the case of Erv Wilson's musical constructs, teasing the construction processes out of his and others' writings on the subject is a non-trivial task.

  For example, much of Wilson's work consists of multi-dimensional graph structures drawn on flat paper. He did build physical three-dimensional models of some of them, but some can't be rendered in three dimensions at all. And the graph-theory operations that generated them, and the musical ways to traverse them, are not at all obvious.

- The 20th anniversary of Xenakis' passing and of When Harry Met Iannis occurred in my second year of virtual isolation because of COVID-19. During 2021, I acquired two synthesizers capable of mapping their keyboards to arbitrary microtonal scales: an Ashun Sound Machines Hydrasynth Desktop and a Korg Minilogue XD.

  The Hydrasynth ships with tuning tables for many of Erv Wilson's scales already in the firmware. For the Minilogue XD, the user can load up to six custom scales with a software librarian program.

  But I'm not a keyboard player, and even if I were, the remapping process leaves only middle C where a musician would normally expect to find it. All the other notes are somewhere else. So I need a translator for the music I want to write, one that doesn't involve a lot of trial-and-error fumbling around on a remapped synthesizer or on-screen keyboard. CPS scales are aimed at harmonic musical structures like chords, and finding chords on a remapped keyboard is tedious and error-prone.

Music composed using Wilson's musical structures is mostly played on instruments custom-built for them. There are keyboards designed for Wilson's and other microtonal music; indeed, Wilson himself designed microtonal keyboards (Narushima 2019, chap. 2). But they're quite expensive and, like the instruments, custom-built. I need tools to work with what I have.
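To make the remapping problem concrete, here is a sketch under an assumed (hypothetical) mapping scheme: scale degree 0 sits on MIDI note 60 (middle C) and a 20-note scale repeats at the 2/1 octave. Real synthesizer firmware may map keys differently, and the placeholder tuning below is 20-equal, standing in for a real Eikosany tuning table.

```r
# placeholder scale: 20 equal divisions of the octave, in [1, 2)
ratios <- 2^((0:19) / 20)

# frequency of a MIDI key under the assumed remapping: middle C keeps
# its pitch, every other key steps through the 20 scale degrees
key_to_freq <- function(midi_note, ratios,
                        base_freq = 261.6256, base_note = 60) {
  n      <- midi_note - base_note
  degree <- (n %% length(ratios)) + 1   # which scale degree
  octave <- floor(n / length(ratios))   # which repetition of the scale
  base_freq * ratios[degree] * 2^octave
}

key_to_freq(60, ratios)  # middle C is unchanged
key_to_freq(67, ratios)  # the "G" key is no longer a fifth above C
```

Seven keys above middle C now sounds 2^(7/20) above it, roughly 420 cents instead of a 700-cent fifth, which is why every note except middle C is "somewhere else."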
Developer notes
Project status update 2023-09-06:
I presented the project in its current state at the Cascadia R Conference on August 19th, 2023. The slides and some sample data are at https://github.com/AlgoCompSynth/eikosany-slides. I have now begun what I am calling “The Great Refactor”. Approximate road map:
- "Finish" `consonaR`. I am moving most of the scale, keyboard, interval, spectral analysis and synthesis functionality in `eikosany` to `consonaR`. I am also adding functionality to `consonaR` to facilitate algorithmic composition in the frequency domain. This may include a real-time synthesis capability if I can find a way to make that work on Windows, macOS, Ubuntu 22.04 LTS and Raspberry Pi OS. It will definitely include the current synthesis functionality based on `seewave` and `tuneR`.
- Replace much of the low-level functionality in `eikosany` with calls to the equivalents in `consonaR`.
- Remove the MIDI functionality from `eikosany`. First of all, performing composed microtonal music on hardware and software synthesizers that support it is a solved problem, using the other tools listed above. Second, MIDI is a terrible score language for the kind of music I want to make. Open Sound Control (OSC) may be better, but I'm not convinced. I'd much prefer a language that facilitates live coding / performance as a human/computer interface over a communication protocol like MIDI or OSC. CLAMS is my approach.
Moving forward
If you’re interested in helping with the development of this package, a few notes:
- While you can install this package via `remotes::install_github("AlgoCompSynth/eikosany")`, it will be easier for me if you fork the repository https://github.com/AlgoCompSynth/eikosany.git and install it via `devtools::install(dependencies = TRUE)`. I don't recommend using the package without RStudio; there may well be hidden dependencies on it. I regularly run `R CMD check` and you should too.
- I am tracking Wilsonic MTS-ESP and Surge XT and am more or less constantly re-scoping this project to avoid duplicating their capabilities. So if I get a feature request that's covered by one of them, I'll most likely send you there.
- I have another project in the works that will be ramping up in September: CLAMS, a Forth-based real-time synthesis toolset for embedded environments. My development time will be divided between the two projects. There will be integrations between them, mostly so I can use R Markdown and Quarto for literate programming and documentation, and so the synthesizer can make algorithmic microtonal music.