Synth Sins

Warm analogue it ain’t. I knew when I started coding my synth-sequencer, Foldy, a few months ago, that it’d be harshly digital and crude sounding. I was inspired by tracker software as well as by two old PC-only music programs, Drumsynth and Hammerhead (which were the basis of my beat-creating project last year).

I’m releasing it today and calling it version 1.0. It works, but some iffy design decisions mean I won’t keep developing it.

That said, the code quality is a step up from my last release, the experimental art tool MoiréTest. I was able to go back and make big changes in Foldy, without the whole thing crumbling, which is always a good sign.

For the rest of this post I’ll explain what the program does, then what questionable decisions I made and how I’d approach them differently next time.

(To try it yourself, download Foldy.jar from here and double click on it. If that doesn’t work try the further instructions in the readme.)

Foldy takes in a musical sequence, which you can type into a box in the app window. Notes are entered as MIDI note numbers, separated by commas: A440 is note 69, and notes range from 0 to 127. A rest is -1.

(By the way, did you know that, incredibly annoyingly, there is no industry standard for numbering the octaves of MIDI notes? The frequencies are agreed on, but one manufacturer’s C3 is another’s C4… how sad. This doesn’t impact Foldy though, I just work from the frequencies.)
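Since the synth only cares about frequencies, the mapping is just the standard equal-tempered formula. A minimal sketch (not the actual Foldy code, but the same maths):

```java
// Standard equal-tempered tuning: A440 is MIDI note 69 and each semitone
// is a factor of 2^(1/12). Not the actual Foldy code, just the formula.
class Pitch {
    static double midiToFrequency(int note) {
        return 440.0 * Math.pow(2.0, (note - 69) / 12.0);
    }
}
```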

The speed at which notes are played is altered using tempo and beat-subdivision controls. All the other parameters in the window modify the sound of individual notes. Only one note can play at a time. This kept things a bit simpler, though with the Java Sound API it wouldn’t be much harder to open another output line or mix two signals together.
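For anyone curious what that looks like, here’s roughly how one mono 16-bit output line works with the Java Sound API (a sketch with my own names, not the Foldy source):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

// Sketch only: write 16-bit signed little-endian mono samples at 44.1 kHz
// to a single SourceDataLine.
class MonoOutput {
    static void play(short[] samples) throws LineUnavailableException {
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        try (SourceDataLine line = AudioSystem.getSourceDataLine(format)) {
            line.open(format);
            line.start();
            byte[] bytes = new byte[samples.length * 2];
            for (int i = 0; i < samples.length; i++) {
                bytes[2 * i] = (byte) (samples[i] & 0xFF);            // low byte first
                bytes[2 * i + 1] = (byte) ((samples[i] >> 8) & 0xFF); // then high byte
            }
            line.write(bytes, 0, bytes.length);
            line.drain();   // block until playback finishes
        }
    }
}
```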

I was going to include a choice of mathematical curves, possibly Bezier curves, for the amplitude envelope, out of a perverse desire to avoid the bog-standard Attack-Decay-Sustain-Release model, which is suited to a keyboard instrument where a note is attacked, held and released. I was thinking this synth could be more percussive, inspired by the basic sample-playback model of drum machines and trackers (a type of sampler software originally made for Amiga computers and associated with the demoscene).

Unfortunately I didn’t finish the Bezier stuff, but in any case it probably wasn’t suitable. (For one thing, Bezier curves can easily have two y values for one x value.) In fact, I didn’t add any extra envelope options, partly because envelopes typically drive filters or modulations, which my architecture doesn’t allow. If there’s an obvious v1.1 feature, extra envelope curves is it.

One feature that did make it in is “wave-folding”. To get more complex waveforms, I cut a sine wave at a certain level and fold anything above that level back down, reflecting it about the cut point. This can be done multiple times to add a lot of harmonics.

Adding harmonics to a sine wave by folding it at 1/2, then 1/4 amplitude
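In code, the fold is just a reflection about the cut level, applied as many times as you like. A sketch of the idea (my reconstruction, not the Foldy source):

```java
// Reflect anything beyond the fold level back towards zero; folding again
// at a lower level piles on more harmonics. A sketch, not the Foldy source.
class Folder {
    static double fold(double sample, double level) {
        if (sample > level)  return 2 * level - sample;
        if (sample < -level) return -2 * level - sample;
        return sample;
    }

    // e.g. a unit sine folded at 1/2 and then 1/4 amplitude, as in the figure
    static double foldedSine(double phase) {
        return fold(fold(Math.sin(phase), 0.5), 0.25);
    }
}
```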

However, this is a restrictive technique with a distinctive grinding, mechanical sound. All we’re doing here is shaping a waveform which is then repeated exactly at the period of the note frequency. The ear instantly picks up the lack of complexity.

I remember when I was a teenager, having the following bright idea: if I can see that the recorded waveform from my bass consists of repeated bumps, can’t I just sample one of those and repeat it/change the speed of it to get any bass note I want?

Why, chap, it’s simply a bunch of bumps (by the way, don’t record bass in stereo like I did here)

This is the basic concept of wavetable synthesis. However, when done as simply as that, it sounds completely artificial, not at all like a bass guitar. The sound of any real instrument has complexities like propagating resonances, changes in pitch, string rattle and other distortions/energy loss.

(E.g. listen to the low note in this sampled bassline – it starts really sharp, then settles back to pitch. That’s because plucking a stringed instrument raises the pitch of the note momentarily, especially on an open string – I think this was an open E string on the original sampled recording, it’s just been pitched up here.)

Foldy has no capability for such modulations. I could try to put them in, but here we run up against the compromises I made at the start.

Because I was afraid that rounding errors would mount up and give me grief, I decided to keep everything as whole numbers, taking advantage of the fact that digital audio ultimately is whole numbers: a series of amplitudes or “samples”, each expressed as, for example, a 16-bit (“short”) integer. (Most studios mix at 24-bit these days, but CD audio, say, only goes up to 16-bit precision.)

This informed the basis of the synth. Desired frequencies and tempos are approximated by a wavelength and a subdivision length expressed in whole samples. 44100 samples per second might seem fairly precise, but for musical pitches, it isn’t. So I found a compromise that bounded pitch error to about 20 cents:

Foldy tries to fit multiple wave cycles within a whole number of samples, for example 3 cycles in 401 samples. This gives a bit more precision, because the wavelength is 401/3 = 133.667 samples, in between the 133- and 134-sample wavelengths that would otherwise be my only options.
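The search itself is simple enough. Here’s a sketch of the idea with hypothetical names (the real Foldy code is organised differently): try whole numbers of cycles, round each to a whole number of samples, and keep whichever pair lands closest in pitch.

```java
// For a target frequency, find a (cycles, samples) pair of whole numbers
// whose implied pitch is closest to the target, measured in cents.
// Hypothetical names; a sketch of the idea rather than the Foldy source.
class ChunkFit {
    static int[] bestChunk(double freqHz, double sampleRate, int maxCycles) {
        double targetWavelength = sampleRate / freqHz;   // in samples
        int bestCycles = 1;
        int bestSamples = (int) Math.round(targetWavelength);
        double bestErrorCents = Double.MAX_VALUE;
        for (int cycles = 1; cycles <= maxCycles; cycles++) {
            int samples = (int) Math.round(cycles * targetWavelength);
            double actualFreq = sampleRate * cycles / samples;
            double errorCents = Math.abs(1200.0 * Math.log(actualFreq / freqHz) / Math.log(2));
            if (errorCents < bestErrorCents) {
                bestErrorCents = errorCents;
                bestCycles = cycles;
                bestSamples = samples;
            }
        }
        return new int[] { bestCycles, bestSamples };  // e.g. {3, 401} near 330 Hz
    }
}
```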

I then use these bits of audio, which I call “chunks”, and which could contain a single cycle or a handful of cycles, in the same way I was using single wave cycles originally. So every note contains hundreds of them. Then I decided I could reuse this division to store amplitude envelopes – I gave each chunk a starting amplitude, and interpolated between these. (Of course, this is redundant at the moment because my overall envelopes are merely a linear interpolation from maximum to zero! But with a curved envelope, the result would be to store the curve as a few dozen or a few hundred points, with straight lines from point to point.)
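As a sketch (hypothetical names, not the actual Foldy classes), a chunk ends up looking something like this, with the amplitude ramping linearly from its own start level to the next chunk’s:

```java
// Each chunk holds a whole number of wave cycles plus a starting amplitude;
// rendering scales samples along a straight line towards the next chunk's
// start level. Hypothetical names, not the actual Foldy classes.
class Chunk {
    short[] samples;        // a whole number of wave cycles
    double startAmplitude;  // 0.0 to 1.0 at the chunk's first sample

    short[] render(double nextStartAmplitude) {
        short[] out = new short[samples.length];
        for (int i = 0; i < samples.length; i++) {
            double t = (double) i / samples.length;
            double amp = startAmplitude + t * (nextStartAmplitude - startAmplitude);
            out[i] = (short) Math.round(samples[i] * amp);
        }
        return out;
    }
}
```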

Ugh… I don’t even want to write about it anymore. It wasn’t well conceived and caused me a lot of hassle. It precluded any of the more intriguing synthesis techniques I like, such as frequency modulation, because pitch in this system is fixed for each note (and imprecise).

Long story short, when I opened up the source code of Drumsynth recently, I realised that… it just uses floats and gets along fine. For modulation, it simply keeps track of phase as another float. I should’ve done that.
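For anyone who hasn’t seen it, the float-and-phase approach boils down to a phase accumulator. This is my own minimal sketch, not Drumsynth’s code, but it shows why modulation becomes trivial: the frequency can change on any sample without a click, because the phase carries over.

```java
// Minimal phase-accumulator oscillator: phase (in cycles) persists between
// samples, so the frequency passed to next() can change freely.
// A sketch of the general technique, not Drumsynth's actual code.
class Oscillator {
    private final double sampleRate;
    private double phase = 0.0;   // in cycles, wraps at 1.0

    Oscillator(double sampleRate) { this.sampleRate = sampleRate; }

    double next(double frequencyHz) {
        double sample = Math.sin(2.0 * Math.PI * phase);
        phase += frequencyHz / sampleRate;   // advance by one sample's worth
        if (phase >= 1.0) phase -= 1.0;
        return sample;
    }
}
```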

(That said, I think Drumsynth’s sound quality is far from pristine. This isn’t from rounding errors, I’m certain, but from not doing more complex stuff like supersampling. But, that’s out of my ability level right now anyway.)

Using floats, I still would have had trouble with the timing for the sequencer, probably… but that would have led me to the realisation that I was biting off too much!

It’s not a complete loss. I really enjoyed trying to calculate sine waves while sticking to integer arithmetic. I found out about Bhaskara’s approximation, implemented it, and then found some really nice code using bitshifts to do a Taylor series approximation of a sine wave. (I wish I had the chops to come up with it myself!)
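For the curious, Bhaskara I’s formula works out very neatly in integers if the angle is kept in whole degrees: sin(d) is roughly 4d(180 - d) / (40500 - d(180 - d)) on 0 to 180, and it’s exact at 0, 30, 90, 150 and 180. A sketch of the formula (not the exact code in Foldy):

```java
// Bhaskara I's sine approximation using only integer arithmetic: angle in
// whole degrees, output scaled to the given amplitude (e.g. 32767 for a
// full-scale 16-bit sample). A sketch of the formula, not the Foldy code.
class IntSine {
    static short sine(int degrees, int amplitude) {
        int d = ((degrees % 360) + 360) % 360;
        boolean negative = d >= 180;
        if (negative) d -= 180;                            // sin(x + 180) = -sin(x)
        int n = d * (180 - d);
        long value = (4L * n * amplitude) / (40500 - n);   // scale before dividing
        return (short) (negative ? -value : value);
    }
}
```

The obvious drawback is the coarse one-degree resolution; a real oscillator would use a finer fixed-point angle, but the shape of the trick is the same.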

Reading the source of Drumsynth also completely changed my approach to the GUI code. I originally had all of the classes that make up the synth – Note, Chunk, Sequence and so on – also be GUI elements by inheriting Java Swing component classes. I think I picked this up from some book or tutorial, but it’s obviously not good. It breaks the basic principle of decoupling.

Drumsynth blew my mind with its simplicity. There are no classes as it’s written in C, an imperative language. The synthesis is just one long function! I almost didn’t know you could do that, having spent a year studying Java and OOP. But given that the app is non-realtime (meaning that there is a third of a second pause to calculate the sound before you can hear it)… this is the sensible approach. Logically, it is one long straight task that we’re doing.

So I ripped out the GUI code from my main classes, and stuck it into one class called Control. Drumsynth’s GUI is even more decoupled: it’s written in a different language – a Visual Basic form that calls DLLs to access the synth functions!
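In Java terms the new shape is roughly this (illustrative names and fields, not the actual Foldy classes): the synth objects are plain data, and only Control knows anything about Swing.

```java
import javax.swing.JPanel;
import javax.swing.JSlider;

// Illustrative only: a plain synth class with no Swing inheritance, and a
// single Control panel that owns the widgets and pushes values across.
class Note {
    double frequencyHz;
    double amplitude;
}

class Control extends JPanel {
    private final JSlider amplitudeSlider = new JSlider(0, 100, 100);

    Control() {
        add(amplitudeSlider);
    }

    void apply(Note note) {          // GUI -> synth, one direction only
        note.amplitude = amplitudeSlider.getValue() / 100.0;
    }
}
```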

(Yes, I know this is pretty out-of-date inspiration – heck, Drumsynth even cheekily uses INI files for configuration even though they were officially deprecated – but I think the lesson on directness and decoupling stands.)

My overall lessons from this project are:

  • Do normal stuff rather than trying to reinvent things.
  • Find exactly what draws you to a project and make that the focus. E.g. with this I would’ve been better off making something smaller and more conventional but which allowed me to try some unusual FM stuff.
  • Even though I’ve so, so much further to go, I kinda like low-level stuff. I mean, okay, nothing in Java is actually low-level, but still I was dealing with buffers, overflows, even endianness! Those are fun errors to fix.
  • Read other people’s code!

Even more generally, there’s a kind of tricky question here. This project showed me that it’d be a huge amount of work to approach the quality level of some of the audio programming frameworks out there such as JSFX, VST/Synthmaker, or JUCE. If I’m interested in actually programming synths for musical purposes, I should use one of those.

On the other hand, these are all coded in C or C++ (maybe with another abstraction layer, such as the EEL scripting language in the case of JSFX). If I really want to understand fundamentals, I should learn C.

But, it’s not very likely I’ll get a job doing high performance programming of that sort, considering the competition from those with as much enthusiasm as me for flash graphics or cool audio, but much more chops! I’m at peace with that – I quit music to get out of a profession that is flooded with enthusiasts.

Stuff to mull over.

Beats, Windows 98-Style

It’s been a while since I blogged here. In the meantime I’ve been working a lot on my rock band Mescalito… but some blog ideas have been simmering in the back of my mind.

Today’s post is a quick chat about a creativity-boosting project I thought of. I’ll be making a drumloop a day, every day of December 2016 and uploading them to my Soundcloud.

I was recently producing beats for my trio with Dyl Lynch and Max Zaska. I enjoyed trying to imitate the likes of Madlib, using compressor and EQ plugins etc. to make our live performances as fat as possible. For this month’s project, though, I’ll just focus on drum programming. I’m inspired by another bandmate, Ben Prevo, whose song-a-day project used whatever was at hand to make a more-or-less finished product each day.

To avoid the rabbit hole of tweaking FX plugins, and for a healthy dose of nostalgia, I’ll only use software available in the year 2000!

Hammerhead Rhythm Station (Bram Bos, 2000)

Drumsynth 2.0 (Paul Kellett, 2000)

To me, these programs evoke a different world. I imagine bedroom tinkerers sharing coding techniques, knowledge of analog and digital hardware, and a love of dance music. Bram Bos’ program even displays his student email address, from a Dutch university. The last days of a smaller, less consolidated internet.

The intro screen for Hammerhead

If you had a PC back then, your music-making options were limited to MIDI sequencing, basic layering of samples, trackers – or free programs like these.

Massiva, another program I was messing around with around the year 2000. This screenshot took a bit of effort to find – it’s easy for the history of a scene like PC music software to disappear into the ether.

The nicest thing about (my fantasy of) the 90s is the DIY mentality. The tools are by amateurs and rely on no-one else’s file formats or software. These guys saw a problem, coded up a solution and gave it to the world. That still happens, but you’re far less likely to hear of it in today’s hyped and moneyed tech/startup landscape.

Admittedly, some of those pioneers monetised their work. Drumsynth 2 is now bundled with FL Studio.

I say “pioneers”, but the reason there was space for pioneering is that the professional music world had little time for PCs. PC music was a nerdy little field, obsessed with emulating “realer”, cooler sounds – a vibe you can pick up by browsing old magazines.

The presets in Drumsynth 2 do try to emulate iconic drum machines – but the little synth can’t really hack it and the noises are crude. I kind of like that though. To recap, I’m using 20-year-old free software to get a sound roughly (but not convincingly) like 40-year-old drum machines.

Having a small number of samples in Hammerhead, my drum machine (20 presets, 6 custom slots, only 6 channels), forces me to listen closely to how sounds work together. The absence of delay or reverb makes me strive for other ways of creating depth: volume differences, layered and interlocking syncopations, and expressive, varied timbres.

I’ll be pushing the software past what it was designed to do. Hammerhead does 4/4 beats in 16th notes only. By using odd numbers of bars, though, this can be worked around (e.g. 5 bars of 4/4 can serve as 4 bars of 5/4). Similarly, the shuffle control can be abused for some beat-bending tricks if the given 4/4 grid is disregarded.

So in a humble way this project might represent some DIY values from the hacker and demo scenes of my idealised 90s – which were all about overcoming computational limitations.

By the way, those 4/4 grids are how I first learned rhythm, at the age of 12 or so (first in a MIDI sequencer, then in Hammerhead). Here is my first ever beat, from 2001:

And here is the first drumline of my month of beats, Windows 98-style. (Try this direct link if the SoundCloud embed doesn’t display below.)