ASCII Controlled Polysynth

A year ago, my software “ASCII Controlled Polysynth” was first presented at the symposium Invisible Places – Sounding Cities. The software transforms an ASCII graphic into sound, creating an auditory representation of the image. It can be used as a sonification tool, for information coding, or for some weird kind of additive synthesis.

“ASCII Controlled Polysynth” automatically detects the number of lines in an ASCII text and generates a pure tone for each line. Each tone is assigned an individual pitch using an exponential function: the lowest line has a frequency of around 30–40 Hz, the highest around 12–13 kHz, and the frequencies of the lines in between increase exponentially. The software reads all lines of the text simultaneously from left to right and modulates the amplitude of each sine wave depending on the characters in the line it represents. The amplitude value is determined by the surface area covered by the individual character: “M” has the highest amplitude, “.” the lowest.
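A minimal Python sketch of this pitch mapping might look like the following. The exact endpoints (30 Hz and 12 kHz) and the per-character amplitude weights are assumptions based on the ranges and examples quoted above, not values taken from the actual software:

    def line_frequencies(num_lines, f_low=30.0, f_high=12000.0):
        """Assign an exponentially spaced frequency to each line,
        from f_low for the lowest line up to f_high for the highest."""
        if num_lines < 2:
            return [f_low]
        ratio = f_high / f_low
        return [f_low * ratio ** (i / (num_lines - 1)) for i in range(num_lines)]

    # illustrative amplitude weights by covered glyph area (assumed values)
    CHAR_AMPLITUDE = {'M': 1.0, '@': 0.9, '8': 0.7, 'o': 0.4, ':': 0.1, '.': 0.02}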

In the video shown above, an ASCII representation of a Moscow street map was fed into the programme. The image was first converted into a text file in which every segment of the picture was represented by one of 32 characters from the ASCII character set (M, &, @, B, W, Q, 0, E, b, 8, Z, 9, 6, A, I, U, 2, o, z, n, 1, S, t, C, X, 7, x, c, v, i, : and .). “M” was assigned to black areas, “.” to white ones. The other characters represent different shades of grey depending on how much surface they cover.
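The conversion tool used for the video is not named; as an illustration of the same idea (grayscale image in, 32-character ramp out), a rough Pillow-based sketch could look like this:

    from PIL import Image

    # the 32-character ramp from the text, darkest ("M") to lightest (".")
    RAMP = "M&@BWQ0Eb8Z96AIU2ozn1StCX7xcvi:."

    def image_to_ascii(path, width=120):
        img = Image.open(path).convert("L")  # 8-bit grayscale
        w, h = img.size
        # halve the height to compensate for the tall aspect ratio of glyphs
        img = img.resize((width, max(1, (h * width) // (w * 2))))
        lines = []
        for y in range(img.height):
            lines.append("".join(RAMP[img.getpixel((x, y)) * len(RAMP) // 256]
                                 for x in range(img.width)))
        return "\n".join(lines)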

“Swarm-like” sound texture modeling using cellular automata

I’ve been wondering whether it would be possible to create an abstract model of real-world organic sound textures such as those produced by insect or bird swarms, rain showers, wind-blown leaves, stones clashing against each other, and so on. Because simply splicing together single, unrelated sounds did not result in a convincing “swarm sound”, I came to the conclusion that realistic swarm sounds must be more than a mere accumulation of similar acoustic events overlapping in time: I think the crucial factor constituting a swarm sound texture is the interaction of the singular acoustic events.

In the attempt shown above, the “interaction” or “swarm behaviour” factor is provided by a cellular automaton, a model in which single cells adopt different states over time according to the states of their neighbouring cells.

As a first attempt, I implemented a simple and popular cellular automaton, Conway’s Game of Life, in Max/MSP. Every row in the automaton’s grid constantly produces a sequence of short bandpass-filtered pink noise bursts. The center frequencies of the bandpass filters increase from the bottom row to the top row, so the uppermost row has the highest frequency. The more live cells a row contains, the denser its sequence of noise bursts. The position of the cells within a row (left, right or centered) determines the placement of the noise bursts in the stereo field.
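The original patch lives in Max/MSP; the following Python sketch only transliterates the control logic (one Game of Life generation plus the row-to-parameter mapping). The frequency endpoints are my own assumptions, and the actual noise synthesis is left out:

    def life_step(grid):
        """One generation of Conway's Game of Life on a 2D list of 0/1 cells,
        with wrap-around (toroidal) edges."""
        rows, cols = len(grid), len(grid[0])
        nxt = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
        return nxt

    def row_controls(grid, f_low=100.0, f_high=8000.0):
        """Per-row synthesis parameters: bandpass center frequency,
        burst density (0..1) and stereo pan (-1 = left, +1 = right)."""
        rows = len(grid)
        out = []
        for r, row in enumerate(grid):
            # row 0 is the top row and gets the highest center frequency
            freq = f_low * (f_high / f_low) ** ((rows - 1 - r) / max(rows - 1, 1))
            live = [c for c, cell in enumerate(row) if cell]
            density = len(live) / len(row)
            pan = (sum(live) / len(live)) / max(len(row) - 1, 1) * 2 - 1 if live else 0.0
            out.append((freq, density, pan))
        return out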

While this attempt to synthesize sound textures is oriented towards actual swarm behaviour, there are other approaches that start from the human perception of textures instead. See McDermott and Simoncelli for further information.

Navigatrix – multiparametric step sequencer

This is “Navigatrix”, a simple yet quite flexible software step sequencer. I created it because I wanted to sequence a multitude of sound parameters in real time via a plain, intuitively usable interface. The controllable parameters are: pitch, amplitude, portamento, envelopes, filter cutoff frequency and resonance, LFO, and frequency spectrum shift.
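As an illustration of what “multiparametric” means in practice, here is a hypothetical Python sketch of the per-step data such a sequencer has to carry and the loop that plays it back. None of the field names, defaults or the `send` interface come from Navigatrix itself:

    import itertools
    import time
    from dataclasses import dataclass

    @dataclass
    class Step:
        # one sequencer step; all fields and defaults are illustrative
        pitch: float = 60.0          # MIDI note number
        amplitude: float = 0.8       # 0..1
        portamento: float = 0.0      # glide time in seconds
        attack: float = 0.01         # envelope attack in seconds
        release: float = 0.3         # envelope release in seconds
        cutoff: float = 2000.0       # filter cutoff in Hz
        resonance: float = 0.5       # 0..1
        lfo_rate: float = 0.0        # LFO frequency in Hz
        spectrum_shift: float = 0.0  # frequency shift in Hz

    def run(steps, bpm=120.0, send=print):
        """Cycle through the steps at sixteenth-note rate, handing each
        parameter set to `send` (e.g. an OSC or MIDI bridge to a synth)."""
        dur = 60.0 / bpm / 4
        for step in itertools.cycle(steps):
            send(step)
            time.sleep(dur)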