I simplified the interface a lot, mostly by removing unnecessary options. There are no longer two instances of the additive synthesis engine bundled together – a more flexible way to experiment with that stacking effect is to use multiple tracks in your DAW.
For example, you can get instant deep drones by making three identical tracks with a long MIDI note, and then setting track 2’s instance to “First overtone” and track 3’s instance to “Second overtone”. This will get you a chord tuned according to golden ratio intervals! The sound is a little harsh but it’s amazing with the well-known free delay effect NastyDLA providing some dusty air.
I also fixed the polyphony/retriggering issue so notes will behave as expected. And I fixed a bug in the 8th voice and standardised the startup values.
As always with additive synthesis, watch out, it can get very loud.
I made a VST software instrument that uses the Golden Ratio to generate frequencies off a given note. You can download it here if you want to lash it into your music program and try it out. It’s unfinished though – more details below.
This didn’t come from any particular intention – it was a discovery I made while messing about in Synthedit many months ago. I was trying to make a normal additive synth where you mix the relative levels of each harmonic within a single note. But I found that if I took the frequency multipliers for the harmonics (which are normally just 1, 2, 3, 4 – so the second harmonic is twice the frequency of the first/root, the third is three times its frequency, fourth four times and so on) and raised them to a particular power around 0.6, a cool sound came out.
“Around 0.6” turned out to be 0.618034 – the “Conjugate Golden Ratio” (or one over the Golden Ratio).
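To make the maths concrete, here's a little Python sketch (not the synth's actual code – Golden is built in Synthedit) of how the voice frequencies come out of that exponentiation:

```python
# Normal harmonics multiply the root frequency by 1, 2, 3, 4...
# Here each multiplier n is raised to the conjugate Golden Ratio instead.
PHI_CONJUGATE = 0.618034  # 1/phi, which also equals phi - 1

def voice_frequencies(root_hz, n_voices, exponent=PHI_CONJUGATE):
    """Frequency of each voice: root * n**exponent for n = 1..n_voices."""
    return [root_hz * n ** exponent for n in range(1, n_voices + 1)]

# Eight voices on an A root; exponent 1.0 would give ordinary harmonics.
freqs = voice_frequencies(220.0, 8)
print([round(f, 1) for f in freqs])
```

With the exponent set to 1.0 you get the normal overtone series back, which is a nice sanity check.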
Now, it’s not possible to discover some “alternate harmonic series”, because harmonics are a physical phenomenon: if you have a vibrating object with a fundamental frequency, whole-number multiples of that frequency can also form waves in it. So, each half of a guitar string vibrates one octave higher than the open string, each third vibrates one fifth higher than that, and so on. Our sense of hearing subconsciously interprets the presence and tuning of harmonics as derived from physical properties: material, size and density. No other frequency series could have this same effect.
Nonetheless, the Golden Ratio seems more musical and harmonious than any other I could get by that exponentiating technique – actually it sounds like a jazzy chord. And it has what Gerhard Kubik calls “timbre-harmonic” aspects – like a Thelonious Monk chord, the harmony bleeds into the perceived timbre. My synth (on default settings) has a silky, bonky, dense timbre. (That territory between noise and harmony is where I like to be, musically. Check out the sounds I used for my drum programming experiment, for example.)
I could hear that it wasn’t in tune to equal tempered notes (nor to the non-equal tempered ratios found in the natural overtone series). But it was tuneful enough to sound concordant rather than discordant. If you download the synth and try the other ratios in the drop down menu you’ll hear the difference, I hope.
Here are the ratios: Golden Ratio conjugate on top, then normal harmonics, then the non-inverse Golden Ratio. You can see that the Golden Ratio conjugate results in a somewhat out of tune minor 11th chord – definitely jazzy! (The normal overtone series results in a dominant chord.)
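If you want to check the tuning yourself, here's a quick Python sketch that folds each voice's interval into one octave and shows its offset from the nearest equal-tempered semitone (just the arithmetic – nothing from the synth itself):

```python
import math

PHI_CONJ = 0.618034

# Interval of each voice above the root, in cents (100 cents = 1 semitone),
# plus its offset from the nearest equal-tempered semitone within the octave.
for n in range(1, 9):
    cents = 1200 * PHI_CONJ * math.log2(n)
    within_octave = cents % 1200
    nearest_semitone = round(within_octave / 100)
    offset = within_octave - nearest_semitone * 100
    print(f"voice {n}: {cents:7.1f} cents "
          f"(~semitone {nearest_semitone % 12}, {offset:+.0f}c)")
```

Voice 2, for instance, lands about 42 cents sharp of a perfect fifth, and the upper voices fill in the out-of-tune minor 11th colour.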
I whipped up some little riffs so you can hear the synth. It’s very digital-sounding, like additive synths generally are, and also reminiscent of the stacked, sometimes exotic overtones of FM synthesis at its icier end.
Note I didn’t sequence any chords in these – the “harmony” is from the voices of the synth. And there are no effects added.
I’ll evaluate the musical aspect at the end of this post. For now I want to discuss the synth-making software I used: Synthedit.
When I first started messing with production as a teen, the free synths I downloaded were mostly built in Synthedit. I soon got to know its characteristic signs – exuberant amateur graphics, slightly misplaced buttons and sliders due to the software’s drag-and-drop interface, and I guess a lack of quality. I remember one bass synth that was pitched way off A=440 – rank sloppiness. I used it anyway. The Flea, it was called.
Most freeware Synthedit VSTs were like that: knock-off bass synths or delay effects, easy and obvious stuff, frequently derided by snobs on forums.
Synthedit enabled a flood of low-quality, imitative software synths by lowering the barrier to entry. Instead of coding C++, you could (and can today) just drag and drop components, add in other people’s custom components, and instantly see/hear the result in your DAW interfacing with your MIDI gear and other FX.
I was blown away when I first did this a couple of days ago. I clicked export, set some easy options, and then couldn’t find the exported file. Irritated, I went back to REAPER, my production software – and there was my synth just sitting there! And it worked! And nothing crashed!
Having studied programming for the last year, I know how hard it is to make software like that. The default mode of enthusiast-made nerdy software is to fail aggressively until you figure out some subtle, annoying configuration stuff.
So, today’s post is a celebration of a great tool, very much like the one I did about Processing. Once again, I want to emphasise how great it is that people make such programming tools for beginners, where the hard and horrid configuration stuff is done for you.
This is priceless. It can change culture, like Synthedit changed bedroom production culture and marked my adolescence.
Amazingly, the program is developed by a single man called Jeff McClintock. He is active on the forum and from reading a few of his posts I get an impression of someone who takes any user’s difficulty as a sign to improve the program. I really admire that. And it shows in the robustness of the app (even the old free version I’m using).
To make a synth, you drag connections between “modules” that provide a tiny bit of functionality or logic. It’s like wiring up a modular synth. The downside is that, if you already know how to code, it’s a drag having to do repetitive fixes or changes that in a programming language could be handled with a single line. Also, when a module you want isn’t available, you are forced to make silly workarounds, download third party stuff or change your idea. In Java or Python you could just do it yourself.
All told, I enjoyed the experience of making Golden (so I have baptised my synth). The best part is having impressively reliable access to powerful, mainstream standards: MIDI and VST. That made it a lot more fun than my previous synth which took in melodies as comma separated values and outputted raw audio data. It was brilliant to have all the capabilities of my DAW – clock/tempo, MIDI sequencing, parameter automation – talking to my little baby.
The drag-and-drop interface builder is also great. Once again, amazingly, McClintock hides all the donkey work of making interfaces: the boilerplate code, the update logic, the event handling. You just put the slider where you want it, and it works. The downside is being locked into standard interface elements unless you want to go much more advanced. For example, I wanted one envelope to take its values from another at the flick of a switch, but I couldn’t manage it. (I’m sure it can be done, but I couldn’t find out how online. In general, the documentation for Synthedit is weak and online tutorials are scanty. I think that’s due to the narrow niche served – people nerdy enough to make synths, but not nerdy enough to code.)
Although I had a great time with Synthedit, I’d like to keep learning and do this work in a procedural or OOP language next time.
Let’s finish. Do I think this Golden Ratio thing has musical value? Yes, and I would like to use it soon in a hip hop beat or tracker music production. (It could also serve as root material for spectral composition, I strongly suspect.) Is my synth very good as is? No, the envelopes don’t work nicely for immediately consecutive notes (I should make it polyphonic to fix that) and I’m not happy with the use of….
Actually, I should quickly explain the synth’s features.
At the top are overall options: the choice of exponent, then various tuning knobs. “Exponent fine tuning” lets you alter the exponent, “Voice shift” is an interval cumulatively added to each voice, “Keyscaled flattening” is a hack-y tuning knob that applies more to higher notes. Use these to massage the microtonality into sitting better with other harmony/instruments.
Then there are two instances of the basic synth, as you can see, each with 8 voices you can mix. You can turn each one up or down with the little knob on its left end, and change its tone with the big lowpass filter knob.
The idea of the two synth engines in one was to be able to double voices at Golden Ratio intervals. Sorry if this only makes sense in my head, but I thought these dank Golden Ratio sounds should be harmonised using their own kind of interval rather than standard fifths or thirds. So, by selecting an interval in one synth instance’s drop-down box, you can set it apart from the other by one of those intervals. Selecting “First overtone” with “Golden Ratio Conjugate” set in the Exponent menu will, therefore, displace the 8 voices of that instance upwards by a perfect fifth + 42 cents.
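You can verify that “+42 cents” figure with a couple of lines of Python (just the arithmetic, nothing Synthedit-specific):

```python
import math

PHI_CONJ = 0.618034

# "First overtone" with the Golden Ratio Conjugate exponent multiplies
# every voice's frequency by 2**PHI_CONJ.
ratio = 2 ** PHI_CONJ
cents = 1200 * math.log2(ratio)  # simplifies to exactly 1200 * PHI_CONJ
print(f"ratio {ratio:.4f} = {cents:.1f} cents")
print(f"a perfect fifth is 700 cents, so the offset is {cents - 700:+.1f} cents")
```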
Finally, to create some simple motion within the sound, I use two ADSR envelopes for each engine and linearly interpolate between them. The bottom envelope directly affects the bottom voice, and the top one the top voice (always voice 8, by the way – I wanted it to detect how many voices are in use, but had to abandon that; one of those workarounds I was talking about). Voices in between are blended between these two, unless you click the “Link Envelopes” switch, in which case only the bottom envelope is used.
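Here's a rough Python sketch of that blending scheme (illustrative only – the real thing is wired up as Synthedit modules):

```python
def blend_envelopes(bottom, top, n_voices=8, link=False):
    """Per-voice envelope level: voice 1 follows `bottom`, voice 8 follows
    `top`, and voices in between are linear blends of the two.
    `bottom` and `top` are the two envelopes' current outputs (0.0 to 1.0).
    With link=True, only the bottom envelope is used for every voice."""
    if link:
        return [bottom] * n_voices
    out = []
    for v in range(n_voices):
        t = v / (n_voices - 1)  # 0.0 for the bottom voice, 1.0 for the top
        out.append((1 - t) * bottom + t * top)
    return out

# Bottom envelope fully open, top envelope closed: a fade across the voices.
print(blend_envelopes(1.0, 0.0))
```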
And each engine has an LFO which affects the exponent, and therefore has a greater effect on the higher voices.
… I can see why they say writing docs is hard! Hope you could withstand that raw brain dump.
As I was saying, this synth is rough, but hey I’ve seen rougher on KVR Audio so it’s a release.
I’ve been listening to SNES-era game soundtracks, so I’m tempted to try to make some dreamy, pretty melodies using Golden. I think it might also be good for some woozy house or hip hop.
If I were to develop the synth, the first thing to change would be the two-envelopes idea – really it should be some more sophisticated morphing. I saw an additive synth where each voice had its own envelope, but that’s too much clicking. Some intelligent system would be nice – interpolating using a selection of curves rather than linearly, or maybe something like setting percentages of each voice over time while overall amplitude is determined by a single envelope.
It also badly needs some convenience stuff: overall volume and pitch, an octave select, polyphony.
I’m leaving Golden as a nice weekend project. I’ll come back when I have some chops in C++, I would think.
Well, thanks for reading if you made it this far. You get a “True Synth Nerd” badge! If you want to talk about the Golden Ratio or synths, get in touch 🙂 And don’t hesitate to try out the instrument.
Warm analogue it ain’t. I knew when I started coding my synth-sequencer, Foldy, a few months ago, that it’d be harshly digital and crude sounding. I was inspired by tracker software as well as by two old PC-only music programs, Drumsynth and Hammerhead (which were the basis of my beat-creating project last year).
I’m releasing it today and calling it version 1.0. It works, but some iffy design decisions mean I won’t keep developing it.
That said, the code quality is a step up from my last release, the experimental art tool MoiréTest. I was able to go back and make big changes in Foldy, without the whole thing crumbling, which is always a good sign.
For the rest of this post I’ll explain what the program does, then what questionable decisions I made and how I would do it again.
(To try it yourself, download Foldy.jar from here and double click on it. If that doesn’t work try the further instructions in the readme.)
Foldy takes in a musical sequence, which you can type into a box in the app window. Notes are numbered as MIDI notes, where A=440 is 69; notes range from 0 to 127 and are separated by commas. A rest is -1.
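For reference, the standard conversion from MIDI note number to frequency looks like this (a Python sketch for clarity – Foldy itself is written in Java):

```python
def midi_to_hz(note):
    """Standard MIDI tuning: note 69 = A = 440 Hz, one semitone = 2**(1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# A Foldy-style sequence: comma-separated MIDI note numbers, -1 for a rest.
sequence = [60, 64, 67, -1, 69]
freqs = [None if n == -1 else midi_to_hz(n) for n in sequence]
print([None if f is None else round(f, 2) for f in freqs])
```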
(By the way, did you know that, incredibly annoyingly, there is no industry standard for numbering the octaves of MIDI notes? The frequencies are agreed on, but one manufacturer’s C3 is another’s C4… how sad. This doesn’t impact Foldy though, I just work from the frequencies.)
The speed at which notes are played is altered using tempo and beat subdivision controls. All the other parameters in the window modify the sound of individual notes. Only one note can play at a time. This kept things a bit simpler, though with the Java Sound API, opening another output line or mixing two together wouldn’t have been much harder.
I was going to include a choice of mathematical curves, possibly Bezier curves, for the amplitude envelope, out of a perverse desire to avoid the bog-standard Attack-Decay-Sustain-Release model, which is suited to a keyboard instrument where a note is attacked, held and released. I was thinking this synth could be more percussive, inspired by the basic sample-playback model of drum machines and trackers (a type of sampler software originally made for Amiga computers and associated with the demoscene).
Unfortunately I didn’t finish the Bezier stuff, but in any case it probably wasn’t suitable. (For one thing, Bezier curves can easily have two y values for one x value.) In fact, I didn’t do any extra envelope options, partly because envelopes typically drive filters or modulations, but these are not allowed by my architecture. If there’s an obvious v1.1 feature, extra envelope curves is it.
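To see that two-y-values problem concretely, here's a tiny Python sketch with hypothetical control points:

```python
# A quadratic Bezier whose x(t) is non-monotonic: it overshoots past x = 1
# and comes back, so some x values are hit at two different t's -- useless
# as an amplitude-versus-time envelope.
P0, P1, P2 = (0.0, 0.0), (2.0, 1.0), (1.0, 0.0)

def bezier(t):
    x = (1 - t) ** 2 * P0[0] + 2 * t * (1 - t) * P1[0] + t ** 2 * P2[0]
    y = (1 - t) ** 2 * P0[1] + 2 * t * (1 - t) * P1[1] + t ** 2 * P2[1]
    return x, y

# Here x(t) = 4t - 3t^2, which peaks at t = 2/3 (x = 4/3) before falling
# back to 1.0, so every x between 1.0 and 4/3 has two y values.
print(bezier(0.5), bezier(1.0))
```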
One feature that did make it in is “wave-folding”. To get more complex waveforms, I cut a sine wave at a certain amplitude, and invert anything above that amplitude. This can be done multiple times to add a lot of harmonics.
However, this is a restrictive technique with a distinctive grinding, mechanical sound. All we’re doing here is shaping a waveform which is then repeated exactly at the period of the note frequency. The ear instantly picks up the lack of complexity.
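In case “wave-folding” is unclear, here's a minimal Python sketch of the technique (Foldy's actual Java code may differ in the details):

```python
import math

def fold(sample, threshold):
    """Reflect anything beyond +/-threshold back inward: the clipped part of
    the wave is inverted rather than flattened, adding harmonics."""
    while abs(sample) > threshold:
        sample = math.copysign(2 * threshold, sample) - sample
    return sample

# One cycle of a sine, folded at 60% of full amplitude: the peaks fold
# back down into range, making the waveform more complex.
N = 16
wave = [fold(math.sin(2 * math.pi * i / N), 0.6) for i in range(N)]
print([round(s, 2) for s in wave])
```

Lowering the threshold (or folding repeatedly) adds more and more harmonics, which is exactly the grinding character described above.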
I remember when I was a teenager, having the following bright idea: if I can see that the recorded waveform from my bass consists of repeated bumps, can’t I just sample one of those and repeat it/change the speed of it to get any bass note I want?
This is the basic concept of wavetable synthesis. However, when done as simply as that, it sounds completely artificial, not at all like a bass guitar. The sound of any real instrument has complexities like propagating resonances, changes in pitch, string rattle and other distortions/energy loss.
(E.g. listen to the low note in this sampled bassline – it starts really sharp, then reverts to normal. That’s because plucking a stringed instrument raises the pitch of the note momentarily, especially on an open string – I think this was an open E string on the original sampled recording, just pitched up here.)
Foldy has no capability for such modulations. I could try to put them in, but here we come up against the compromises I made at the start.
Because I was afraid that rounding errors would mount up and give me grief, I decided to keep everything as whole numbers, taking advantage of the fact that digital audio ultimately is whole numbers: a series of amplitudes or “samples”, each expressed as, for example, a 16-bit (“short”) integer. (Most studios mix at 24-bit these days, but CD audio, say, only goes up to 16-bit precision.)
This informed the basis of the synth. Desired frequencies and tempos are approximated by a wavelength and a subdivision length expressed in whole samples. 44100 samples per second might seem fairly precise, but for musical pitches, it isn’t. So I found a compromise that bounded pitch error to about 20 cents:
Foldy tries to fit multiple wave cycles within a whole number of samples, for example 3 cycles in 401 samples. This gives a bit more precision, because the wavelength is 401/3 = 133.667 samples, in between the 133- and 134-sample wavelengths that are all I could get otherwise.
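Here's a Python sketch of that fitting idea (illustrative only – not Foldy's actual Java code):

```python
import math

SAMPLE_RATE = 44100

def best_fit(freq_hz, max_cycles=3):
    """Fit `cycles` whole wave cycles into a whole number of samples,
    picking the combination with the smallest pitch error in cents."""
    best = None
    for cycles in range(1, max_cycles + 1):
        samples = round(cycles * SAMPLE_RATE / freq_hz)
        actual_hz = cycles * SAMPLE_RATE / samples
        cents = abs(1200 * math.log2(actual_hz / freq_hz))
        if best is None or cents < best[2]:
            best = (cycles, samples, cents)
    return best

# For 330 Hz this finds the "3 cycles in 401 samples" fit mentioned above.
cycles, samples, cents_err = best_fit(330.0)
print(cycles, samples, round(cents_err, 2))
```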
I then use these bits of audio, which I call “chunks”, and which could contain a single cycle or a handful of cycles, in the same way I was using single wave cycles originally. So every note would contain hundreds of them. Then I decided I could reuse this division to store amplitude envelopes – I gave each chunk a starting amplitude, and interpolated between these. (Of course, this is redundant at the moment because my overall envelopes are merely a linear interpolation from maximum to zero! But with a curved envelope, the result would be to store the curve within a few dozen or hundred points, with straight lines from point to point.)
Ugh… I don’t even want to write about it anymore. It wasn’t well conceived and caused me a lot of hassle. It precluded any of the more intriguing synthesis techniques I like, such as frequency modulation, because pitch in this system is fixed for each note (and imprecise).
Long story short, when I opened up the source code of Drumsynth recently, I realised that… it just uses floats and gets along fine. For modulation, it simply keeps track of phase as another float. I should’ve done that.
(That said, I think Drumsynth’s sound quality is far from pristine. This isn’t from rounding errors, I’m certain, but from not doing more complex stuff like oversampling. But that’s beyond my ability level right now anyway.)
Using floats, I still would have had trouble with the timing for the sequencer, probably… but that would have led me to the realisation that I was biting off too much!
It’s not a complete loss. I really enjoyed trying to calculate sine waves while sticking to integer arithmetic. I found out about Bhaskara’s approximation, implemented it, and then found some really nice code using bitshifts to do a Taylor series approximation of a sine wave. (I wish I had the chops to come up with it myself!)
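For anyone curious, Bhaskara I's 7th-century formula is just a ratio of polynomials with integer coefficients, which is what makes it so friendly to integer-only arithmetic (a Python sketch for clarity; Foldy is in Java):

```python
import math

def bhaskara_sin(deg):
    """Bhaskara I's approximation of sin(x) for 0 <= x <= 180 degrees.
    Numerator and denominator are integers for integer inputs."""
    num = 4 * deg * (180 - deg)
    den = 40500 - deg * (180 - deg)
    return num / den

# Compare against the real sine: exact at 0, 30, 90 and 180 degrees,
# and within about 0.002 everywhere in between.
for deg in (0, 30, 45, 90):
    print(deg, round(bhaskara_sin(deg), 4),
          round(math.sin(math.radians(deg)), 4))
```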
Reading the source of Drumsynth also completely changed my approach to the GUI code. I originally had all of the classes that make up the synth – Note, Chunk, Sequence and so on – also be GUI elements by inheriting Java Swing component classes. I think I picked this up from some book or tutorial, but it’s obviously not good. It breaks the basic principle of decoupling.
Drumsynth blew my mind with its simplicity. There are no classes as it’s written in C, an imperative language. The synthesis is just one long function! I almost didn’t know you could do that, having spent a year studying Java and OOP. But given that the app is non-realtime (meaning that there is a third of a second pause to calculate the sound before you can hear it)… this is the sensible approach. Logically, it is one long straight task that we’re doing.
So I ripped out the GUI code from my main classes, and stuck it into one class called Control. Drumsynth’s GUI is even more decoupled: it’s written in a different language – a Visual Basic form that calls DLLs to access the synth functions!
(Yes, I know this is pretty out-of-date inspiration – heck Drumsynth even cheekily uses INI files for configuration though they were officially deprecated – but I think the lesson on directness and decoupling stands.)
My overall lessons from this project are:
Do normal stuff rather than trying to reinvent things.
Find exactly what draws you to a project and make that the focus. E.g. with this I would’ve been better off making something smaller and more conventional but which allowed me to try some unusual FM stuff.
Even though I’ve so, so much further to go, I kinda like low-level stuff. I mean, okay, nothing in Java is actually low-level, but still I was dealing with buffers, overflows, even endianness! Those are fun errors to fix.
Read other people’s code!
Even more generally, there’s a kind of tricky question here. This project showed me that it’d be a huge amount of work to approach the quality level of some of the audio programming frameworks out there such as JSFX, VST/Synthmaker, or JUCE. If I’m interested in actually programming synths for musical purposes, I should use one of those.
On the other hand, these are all coded in C or C++ (maybe with another abstraction layer such as EEL scripting language in the case of JSFX). If I really want to understand fundamentals, I should learn C.
But, it’s not very likely I’ll get a job doing high performance programming of that sort, considering the competition from those with as much enthusiasm as me for flash graphics or cool audio, but much more chops! I’m at peace with that – I quit music to get out of a profession that is flooded with enthusiasts.
Today I finished up a coding project, which was to make a simple synthesiser written in Java. I noted though, as I dashed off a quick readme file and uploaded the repository for the last time, a distinct sense of anticlimax, even disappointment. The app didn’t work out that well. Instead of writing an account of what I was trying to make and how I set about it, then, this evening I want to reflect on ways to avoid that letdown, specifically with side projects where you have total creative freedom.
This is about accepting and optimising for the way my brain works. Which is: it creates lots and lots of ideas at the start of a creative process. Here are some of the ideas I had for my synth:
doing additive synthesis by mixing together multiple instances of my basic instrument
doing additive synthesis by creating custom waveforms containing the desired harmonics
generating just intonated and 5-limit tunings
generating tunings from two overlapped harmonic series
generating 12-tone equal temperament and quarter-tone (24-tone) tunings mathematically
using the stuttering/echoing sound of buffer underruns as a musical effect
adding frequency modulation synthesis
having the synth be about a limited subset of sounds like bells, marimbas and bass
avoiding certain assumptions from the MIDI standard, e.g. removing note duration in favour of focusing on note onsets only (an idea influenced by my study of African-derived musics which have this emphasis)
using the synth in some kind of persistent, low-key online game as a way for players to leave melodies for each other to find
using rules to generate different melodies from the same basic info (say an array of integers), for example with different tunings or in different modes and scales
generating scales, chords and perhaps melodic cells or fragments geometrically, using relations I know from my study of music theory back in the day (e.g. a chord is every odd member of a scale, and a scale is an approximation to 7 stacked fifths in a chromatic space generated using the twelfth root of two)
making an interface to expose these options interactively
And so on. The obvious common factor among all of these, I would say, is that there is no common factor. They are an extremely heterogeneous bunch suggesting a whole lot of varied perspectives. Some are from a coder’s perspective (fixed-point calculation), some are about synthesis technique, some reflect my own limitations of ability, some are more like observations about the nature of music. Many have a distinctly contrarian flavour.
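As an aside, the geometric relation in that second-to-last idea fits in a few lines of Python (note that 7 fifths stacked from C strictly give the Lydian mode; start the stack a fifth lower for the plain major scale):

```python
# 12-tone equal temperament: a semitone is the twelfth root of two, so
# pitch classes can be treated as integers 0-11 (0 = C).
# Stacking 7 fifths (7 semitones each) from C gives a 7-note scale,
# and taking every odd member of the sorted scale gives a 7th chord.
fifths = sorted((7 * k) % 12 for k in range(7))  # C G D A E B F#
print(fifths)   # the C lydian scale as pitch classes
chord = fifths[::2]  # 1st, 3rd, 5th and 7th scale degrees
print(chord)    # C E G B, a Cmaj7 chord
```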
You can probably see the problem. There is no way to ever succeed at a project “defined by” such a list. And I’m not talking about success defined externally, but even just personal satisfaction. There are far too many requirements, many of which are contradictory.
The ideas there which are more rooted and achievable have another issue: they are technical challenges only, which makes them arbitrary. There’s no way to know if the solution arrived at is good, because there’s no agreed-upon way to measure performance. Should my Bezier curve envelope allow two values for a given x input (which an unconstrained quadratic Bezier can easily have), or forbid such curves? There’s no right answer, because I haven’t defined what I’m trying to do.
This is in stark contrast to the school projects I’ve done for my higher diploma course: making a banking app, or an e-commerce website. Even if the work could get boring with those, the remit is clear and it’s satisfying when such a system comes together.
How did I manage to put a few weeks into my synth without realising that I hadn’t fixed upon a goal?
Or do I even need a goal? There’s nothing wrong with just fiddling about with stuff for fun; there’s even a fancy phrase to make it sound more official: “stochastic tinkering”. However, I know that I get my fun as a programmer in quite a specific way – by turning stuff that doesn’t work into stuff that does. When definitions are too loose, there’s no way to decide whether something works.
I’ve come up with a few pointers on how I might avoid this looseness in future.
The first is to design something conventional with a fictional, “normal” user in mind. This was how my school projects got done. This is good because convention (shopping carts in an e-commerce site, account selection in a banking site) guides you. This leverages the part of my mind that can’t stand to be wrong, and that likes tidying up: as long as the project fails to meet conventional expectations, I’ll be nettled into improving it.
However, finding the motivation to develop without the ego boost of originality would be hard for me. I know from the experience of finishing up my synth that work done just for the sake of appearing competent to strangers who come across my Github profile isn’t very sustaining. The school projects had the virtue of being compulsory.
The second solution is to design something I’d like to use. This is… hard actually. It requires some self-awareness and honesty. I made a synth because I thought it’d be cool… or so I thought. Yet if I truly believed synths are cool, I’d probably have used one in the last few months; I haven’t done any synth-based music-making though in that time (despite having dozens of software synths installed on my computer). My conclusion is that I find synths to be a pleasantly intricate subject for mental distraction, but that I don’t actually have much desire to use them.
And similarly with the pixel art app I made before the synth. I like thinking about pixel patterns and generating them, yet if I liked making pixel art I’d be making some.
So, thinking honestly about one’s interests and requirements isn’t all that easy.
A third approach is to make something new, but very small. This worked well with something I made last year, an interactive sine wave visualiser. I actually made use of it, just for a second, while working on my synth to help me think about differentiating sines.
I’ve read advice to programmers about making tools that do one thing very well, and I can see the sense of it.
A fourth thing that has worked well for me is collaborating. When I’m working closely with others, my desire to appear right is a strong motivator. The hard part though for a side project is putting the energy into finding collaborators and the contradictory twin fears of not being good enough versus working with someone I feel is holding me back.
Those are actually familiar negative thoughts from my musician days.
Well, that’s what I wanted to write. Conclusion: even though my brain likes nothing better than lashing out idea after idea, finding the right one takes courage and deliberation. And it seems likely that good project ideas will combine a couple of the following: doing one thing only; being conventional; solving a real problem I have; being collaborations.
Today’s post analyses a composition by Tim Follin from the soundtrack of a 1994 Sega Mega Drive game, Time Trax. (I found it on this sweet playlist.) I wanted to find out how it succeeds in being so improbably funky.
Chiptune music has been rising in cultural prominence with the predictability of any nostalgic trend. A mate of mine recently put me on to the quite expensively produced Diggin’ in the Carts series, for example. I guess what’s fun about the music, beyond just hearing things from your childhood, is the musical meaning conveyed within harsh technical limits. Somehow, cheaply synthesised noises that don’t sound at all like brass, bass guitar, a string section, or whatever, can cheekily evoke just those things. So I want to examine that dialogue across the chasm of failed simulation, where the ludicrousness of the attempt at orchestral grandeur or, in this case, funk jamming, is part of the aesthetic.
You won’t remember this one from your childhood, because this game was never actually released and only a prototype of it emerged online in 2013. “The game is notable for its use of a relatively advanced sound driver designed by Dean Belfield for Follin,” segaretro.org tells us. I get the impression that this was a technical peak of sound design on the Megadrive.
Not to get too nerdy – let’s save that for later – but this style of synthesis is associated with, roughly, the 16-bit generation of consoles as opposed to earlier 8-bit. It is called frequency modulation synthesis and it tends towards a distinctive metallic, clanging, bell-like, brassy tone. (Which Tim Follin’s sound design actually disguises pretty effectively, at least until the heavy distorted riff sections.) You may also recognise the sound if you ever played MIDI files on a laptop with a cheap soundcard, like my Dell Latitude.
Let’s get to the music!
So, apart from the dinky sounds, we have a medium tempo funk-rock groove tune. The first thing I was curious about was the structure. As is typical for game soundtracks, this one is designed to loop interminably. However this isn’t really an issue either way as players were not likely to stay long on the ‘Mission Briefing’ screen where this track is played. (In this playthrough video the player spends 40 seconds.)
In any case, there’s a 90-bar structure lasting about 3 and a half minutes. The basic principle is one found in a lot of groove music – Wayne Shorter’s ‘Beauty and the Beast’ is my canonic example – which we could call “on or off”: you’re either in the groove or getting ready to get back in.
Follin uses key changes to shape his track. (He mentions his “fondness for random key changes” here.) As shown above, we get the excitement of going up a b3rd, then a gradual floating down to the home key – a good scheme for a track whose opening will be heard more than its ending.
The key change is smooth on a number of counts.
Okay, the lead line doesn’t voice-lead, but in general its D major pentatonic melodic/modal colour goes strongly to G minor (i.e. it’s the elemental I major to IV minor alternation, which has strong gravity in both directions). The (faint) bass voice ends on a D before a strong G bassline comes in, so that works. And the inner voices have a general upwards sliding of a semitone. Nice.
There are some other nice applications of harmonic colour, suiting this rock/funk context. In the intro we get some Cs, the b6 of the key, giving a suitably earnest diatonic natural minor mood of cop show epic – after that these are thrown out in favour of minor-7th/sus-chord funk colours. There’s some strong use of the 9th, F#, in a couple of places, e.g. the brass pentatonic build.
(I recognise that my names for the instrumental sounds are arbitrary. I just don’t want to put quotations around “organ” and “elec. piano” for the whole blog post. Your interpretation of what the instruments are meant to be is just as valid.)
This ambiguity of instrumental sounds is crucial for what I consider the secret sauce to this track. I’m talking about the inner voices that comp all the various organ solos and can be most clearly heard in the breakdown at 3:00. Quietly, with a warm electric piano-like sound, they add some rhythmic action that interlocks nicely with the rest, and fills out the middle part of the sonic spectrum. At the end of bars 2 and 4, every time, they feature some bluesy parallelism of a type I discussed a long time ago on this blog. A bII I movement, and next time it’s bIII IV. How does the bII I fit so smoothly in a minor key? I think it’s really a bV IV – a classic blues side-slip – in the V key!
Here is where I’d normally show a little transcription (I did attempt one bar of it above – the dotted eighths in the second bar). However, these inner voices are incredibly hard to transcribe due to two peculiarities of the medium: overtones generated by FM synthesis; and the need to swap instrument sounds mid-flow to maximise channel use.
The first issue means that, although I think only two channels are used for these inner voices, we get a fleeting impression of triads in the passing chords – I believe I can hear the thirds. This is due to the way frequency modulation synthesis works – it adds overtones called side-bands, in various proportions, around the original fundamental. As the 3rd and 5th (actually 3rd+2 octaves and 5th+1 octave) are part of the overtone series, frequency modulation can generate a kind of major chord. So, for instance, while the most audible line in the inner voices starts b7 b7 8, or D D E at 3:00, I think this is actually the 5th, and that below there’s a b3 b3 4, G G A, which is the fundamental.
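As a rough sketch of that mechanism (not Follin’s actual patch data, just textbook two-operator FM): with a 1:1 carrier-to-modulator ratio, the sidebands at carrier ± n × modulator fold back into a plain harmonic series, and harmonics 3 and 5 of that series are exactly the “fifth plus an octave” and “major third plus two octaves” mentioned above.

```python
import math

def fm_sidebands(fc, fm, order=5):
    """Sideband frequencies of simple FM: fc +/- n*fm.
    Negative frequencies fold back (they reappear phase-inverted)."""
    freqs = set()
    for n in range(order + 1):
        for f in (fc + n * fm, fc - n * fm):
            if f != 0:
                freqs.add(abs(f))
    return sorted(freqs)

def interval_name(ratio):
    """Nearest equal-tempered interval for a frequency ratio."""
    semitones = round(12 * math.log2(ratio))
    names = ["unison", "m2", "M2", "m3", "M3", "P4", "tritone",
             "P5", "m6", "M6", "m7", "M7"]
    octaves, step = divmod(semitones, 12)
    return f"{names[step]}+{octaves}oct" if octaves else names[step]

# 1:1 carrier:modulator ratio => a harmonic spectrum
fund = 100.0
for f in fm_sidebands(fund, fund, order=5):
    print(f"{f:6.1f} Hz  harmonic {f/fund:.0f}  {interval_name(f/fund)}")
```

Harmonic 3 comes out as P5+1oct and harmonic 5 as M3+2oct – root, third and fifth all present in a single FM voice, hence the ghost triads.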
What makes it trickier is the fact that Follin may not be keeping the sound (or “patch” in synthesiser terminology) consistent from note to note. In the intro, we note a changing modulation on the synth stabs. And if you look at the visualisation, you’ll see that a channel may switch from brass to bass instantly (e.g. when the groove kicks in at 1:05), or whatever. (Follin mentions this as a “basic trick” here.)
So, even though a bit of hunting around on old-school, nerdy websites got me some tools to extract MIDI data from the game files, I still can’t, after a decent effort, unambiguously notate these lines, because they might be transposed or change timbre at any time, and in any case they’re definitely using timbres with at least a strong 5th above the root.
What does all this mean musically? Just that it’s a full-sounding comping pattern with some sonic depth and mystery, and which, especially during those passing chords, is subtly but unmistakeably bluesy! Because blues uses that ambiguity between harmony and timbre all the time (so does electronic dance music, funk, jazz…), particularly for cliched parallel chord movements.
Let’s talk about the other sounds! Some commenters on YouTube have gotten into the nitty-gritty of Follin’s techniques – in particular, his use of clipping/overdriving the signal to get otherwise impossible waveforms. I don’t know enough to comment there, so I’ll just praise the sounds from a musical perspective and from what I can see in the visualisations.
Firstly, the very effective drum sounds are all a single instrument/patch: a high note for the snare, a low note for the kick, and a really high note for the hats. (Listen in the breakdown sections and you’ll hear the hat sound is kind of like a snare.) This is clear in the intro fill, which sounds like it’s on the toms – but later those same notes function as kick and snare in the main beat, and in the mix they’re convincing. The beefy snare takes up some bass register quite effectively.
The distorted sounds later on, and the brass in the intro, are even less “realistic” but still sound good. I really like the bleep on beats 2 and 4 in the intro – here Follin uses a classic technique of “fake delay,” repeating the tone more quietly 3 16th notes later to give the impression of a classic tempo-synced delay effect. Then the bells/glockenspiel in the middle are a really nice timbral contrast. In fact timbral contrast is one of Follin’s main tools.
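The “fake delay” trick is simple enough to sketch in a few lines. This is my own toy event-list version (times on a 16th-note grid; the note numbers and gain are made up for illustration), not Follin’s actual sequencing format:

```python
def add_fake_delay(notes, delay_16ths=3, gain=0.5):
    """Cheap chip-music 'delay': repeat each note later and quieter.

    notes: list of (time_in_16ths, pitch, velocity) tuples.
    Returns the original notes plus their echoes, sorted by time.
    """
    echoes = [(t + delay_16ths, p, max(1, int(v * gain)))
              for t, p, v in notes]
    return sorted(notes + echoes)

# A bleep on beats 2 and 4 of a 4/4 bar (16th-grid positions 4 and 12)
bleeps = [(4, 72, 100), (12, 72, 100)]
print(add_fake_delay(bleeps))
# the quieter repeats land 3 16ths later, at positions 7 and 15
```

Because the echo is just more note data on the same channel, it costs nothing extra in hardware – the dotted-eighth spacing does all the work of suggesting a tempo-synced delay.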
There are some cool sequencing tricks. The time feel changes from straight 16ths when it’s only hats, to swung 16ths when the groove kicks in. Also, there are some nice dynamic changes in instrumental sounds: the volume swell for the 2nd pads chord in the section starting 0:05, and the changing timbre of the synth stabs in the following section (accomplished by dialling in the degree of modulation of the carrier wave).
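For anyone curious how that straight-to-swung switch works at the data level, here’s a minimal sketch (my own formulation, assuming the common convention where swing delays every second 16th within each 8th note):

```python
def swung_time(pos_16ths, swing=2/3):
    """Time in beats of a 16th-note grid position, with swing applied.

    Even 16ths stay on the 8th-note grid; odd 16ths are delayed so each
    8th note is split swing : (1 - swing) instead of 50:50.
    swing=0.5 is straight 16ths; 2/3 is full triplet swing.
    """
    pair, offset = divmod(pos_16ths, 2)
    return pair * 0.5 + (swing * 0.5 if offset else 0.0)

# the first beat's four 16ths, straight vs swung
for pos in range(4):
    print(pos, swung_time(pos, 0.5), swung_time(pos, 2/3))
```

Note that only the off-16ths move, so a hats-only straight pattern and a swung full groove can share the same grid positions on the strong subdivisions.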
Of course, the centrepiece is the organ melodies. Although not very memorable as themes, they’re definitely funky, using tricks like staccato pedal tones, 32nd-note blues scale ornamentation, and (not idiomatic for organ, as I mentioned) pitchbends. As Follin says, “I also liked the playing styles used by folk musicians, all the twiddles and little arpeggios, which were again relatively easy to reproduce.” In general, these organ lines are built using rhythmic groups of 3 and either I minor pentatonic or V minor pentatonic shapes.
There’s one characteristic of the programming which is more to do with expediency – there’s a lot of reused material. The underlying drum pattern has no variations until it switches to a disco beat; the last minute is mostly just one riff in various orchestrations; and all of the organ bits use the same “answer” phrase in bars 3-4 and 7-8. As Follin recounts, these tracks were made by typing notes into a text editor, which I’d say is why he copied and pasted a lot. It’s not a major problem functionally: the up-and-down arc of dynamics keeps a meaningful directionality even though much of the groove stays unchanged for multiple sections. However, once you know about them, some of the 4-bar exact repeats (e.g. in the middle of the organ solo bits) become a little jarring.
This track was evidently made quickly, within the strictures of commercial production. Nonetheless it’s remarkably crafted, especially the sounds, which are not only skilfully programmed but gel together in a very fat “band sound.” And this was done without any mixing in the normal sense of applying EQ, compression or reverb. My personal yardstick is that I repeatedly found myself tapping my foot as I analysed it. No surprise that Follin states, “My own preference in my early teens (squashed by peer pressure) was for Quincy Jones.”
The actual game Time Trax, BTW, “is a straightforward platformer that sticks to the 16-bit platforming formula rather than innovate.” It’s clearly Follin’s composing work – which he says was something too nerdy and embarrassing to mention to friends and acquaintances at the time – that has kept it in the limelight. It’s nice to see that he’s only getting more recognition with the years.
I hope you enjoyed this jaunt into some different territory for the blog! If you have any insights into VGM or synthesis, feel free to comment!