I simplified the interface a lot, mostly by removing unnecessary options. There are no longer two instances of the additive synthesis engine bundled together – a more flexible way to experiment with that stacking effect is to use multiple tracks in your DAW.
For example, you can get instant deep drones by making three identical tracks with a long MIDI note, and then setting track 2’s instance to “First overtone” and track 3’s instance to “Second overtone”. This will get you a chord tuned according to golden ratio intervals! The sound is a little harsh but it’s amazing with the well-known free delay effect NastyDLA providing some dusty air.
I also fixed the polyphony/retriggering issue so notes will behave as expected. And I fixed a bug in the 8th voice and standardised the startup values.
As always with additive synthesis, watch out, it can get very loud.
I finished another programming project and I think it’s my strongest yet, thanks to me finally getting serious about testing! I called my app HuffmanRevisited because it implements the classic Huffman coding algorithm from 1951, and also because I had previously tried to program this algorithm a few months ago.
The app does one of two things:
load a text file, encode (compress) it and save the encoded version, or else
load a previously compressed file, decode it and save the original content back as a text file
A few aspects of my approach helped make coding this an enjoyable task: I tried to specify it rigorously, I wrote a lot of tests, and I tackled a clear, well-known problem.
Before Christmas, I read a book – well, no, I skimmed a book after reading the prologue (which took multiple attempts) – called How To Design Programs. It’s a textbook published by MIT Press, and you can read it here. I recommend it.
My paraphrase of the book’s prologue is: “precisely specifying what a program does is the hardest part of making it.”
Of course, I had encountered this sentiment in my Object Oriented Software Engineering module in the National College of Ireland. But the MIT textbook avoids the familiar paraphernalia of design patterns, verbose conventions and profuse diagrams. Instead, the book’s prologue challenges you to specify a problem by answering two questions: how is the data represented (in terms of data types that the machine can understand, and at all steps of the program)? And what are the functions (in terms of operations on those types)?
I mean, when I write it out, it’s dull and obvious. Yet, time and again I’ve found myself skimping on this specification step. Because, yes, it is the hardest part and one’s brain tries to escape it.
Even after my epiphany from reading the MIT book, I still evaded it. I specified most but not all of my Huffman encoding app before I started, thinking that the remainder was “simple”. But the simple part is never simple and if you haven’t thought it through, the solution will be incoherent even if it seems to work.
I failed to specify all of HuffmanRevisited, but at least I knew that this failure was to blame when I witnessed complexity mushrooming in front of my eyes as I tried to solve new, small problems that kept cropping up.
BTW, I’ll mention a few of those little problems to see if you spot a pattern that kind of pleased me:
accessing individual bits from an array of bytes
packing an unpredictable amount of bits into an array of bytes
turning Java’s int and char types into a byte representation (not just casting which truncates them)
saving a compact representation of a binary tree containing characters in its leaf nodes (the ‘Huffman tree’)
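For a flavour of what those problems involve, here’s a rough Java sketch of the kind of helpers they call for. The class and method names are my own inventions for illustration, not code from HuffmanRevisited:

```java
// Sketch of low-level bit helpers. Names are my own, not from the app.
public class BitUtils {

    // Read bit i (0 = most significant bit of byte 0) from a byte array.
    public static int getBit(byte[] data, int i) {
        int b = data[i / 8] & 0xFF;           // mask off sign extension
        return (b >> (7 - (i % 8))) & 1;
    }

    // Pack a run of bits (given as 0/1 ints) into bytes,
    // padding the final byte with zeros.
    public static byte[] packBits(int[] bits) {
        byte[] out = new byte[(bits.length + 7) / 8];
        for (int i = 0; i < bits.length; i++) {
            if (bits[i] != 0) {
                out[i / 8] |= (byte) (1 << (7 - (i % 8)));
            }
        }
        return out;
    }

    // Full 4-byte representation of an int, high byte first.
    // (A plain (byte) cast would keep only the lowest 8 bits.)
    public static byte[] intToBytes(int n) {
        return new byte[] {
            (byte) (n >>> 24), (byte) (n >>> 16),
            (byte) (n >>> 8),  (byte) n
        };
    }
}
```

Note the `& 0xFF` mask: Java bytes are signed, so without it the shift would drag the sign bit along.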
Yeah… the pattern I spotted is that I’m doing low-level stuff and pushing somewhat against the limitations of Java. This is nice because I’ve been looking for an excuse to study a more low-level language!
The other thing that went really well with this project was testing. I’d written tests in JUnit before, but to be honest I was doing it to fulfil obligations in school assignments. Just like with specifying rigorously, I knew that tests are a great idea but was lazy about writing them.
I totally changed my tune once I had the framework up and running. (I used JUnit 5, Maven and NetBeans 11, and I mention this combination because I had no joy with JUnit 4 or with Ant.) I realised I’ve always done a lot of testing, but amateurishly: printing variables to the console all the time. That works okay… until your program has a decent number of methods on the call stack (i.e., functions that call functions that call functions that call functions…) and you spend your time trying to remember which method your gnomic text originated in. Plus, all those print statements mess up your code.
Not to sound too much like a new convert, but using a test framework was a delight after all that. It’s much more organised. (And with my let’s say “wide-ranging” mind, I need all the organisation I can get!) It’s just like what I was doing before, except you only see debug text if there’s a problem, you get to choose any range of values you like to test, you can test as small or as large a section of the program as you like (as long as you’ve made the program itself reasonably articulated), and all of this business lives in separate files. Oh, and you get a cheerful screen of green when you pass your tests!
Okay, so specification and testing are non-negotiable aspects of real-world software development, but the last aspect I want to discuss can be more of a luxury: a clearly defined problem.
Until I get a start in a programming job, I can’t be sure, but my impression is that even communicating what a problem is, never mind completing a rigorous specification, can be hard in a typical business context.
However, I did this project for self-study so I got to choose exactly what to work on.
(I was helped in this by a comp sci book called The Turing Omnibus that a mentor very kindly lent me! It has a chapter on Huffman coding. The hook of the book, I would say, is that it conversationally introduces topics but doesn’t take you through every nuance. For example, unlike the Wikipedia article on Huffman coding, it mentions neither the need to pad bytes with zeros, nor any scheme for storing a binary tree.)
I was so glad I chose such an old chestnut of an algorithm to implement! When I was refactoring my way out of that mushrooming complexity I mentioned earlier, the clarity of my app’s intention was a godsend.
Even better was the lack of edge cases. I could be certain my program had worked when it took a text file, compressed it into a smaller file, and then decompressed that encoded version into the exact same data I started with!
That’s intrinsically neater than some other areas I’ve boldly attempted, for example digital audio or vector graphics, where you need good control of sampling and rounding.
When I do go back to such complex topics, I’ll have a crucial bit of extra experience with the exact weapons needed to contain the ambiguity: testing and full specification.
So, I’ll sign off there. My app could easily absorb some more effort. The next thing to work on would be efficiency. Do some profiling, and just comb through it for wastages. I can also think of some cool ways to present it, but no point hyping work I may not get around to.
Anyway, I’m pleased with it already. Jaysusin’ thing works, and the code structure is fairly sensible.
Thanks for reading. Take care in these hard times!
I love classic Japanese console RPG soundtracks like Final Fantasy VII and Secret of Mana. The idea of writing in that style appeals to me. But one thing that saps my confidence is when I struggle to find a section to follow a fragment I’ve already written. I tend to grab at the first possibility, even when the connection is weak or forced.
It would be great to have a general idea of how sections are shaped and connected in these songs.
So today I’ve analysed ten of my fave tracks by Nobuo Uematsu (Final Fantasy VII) and Hiroki Kikuta (Secret of Mana, Seiken Densetsu 3). I wanted to know:
How long are the songs and sections?
What elements are repeated, and how many times?
What textural and harmonic progressions connect sections?
I ended up with this giant chart (which you can see in full size here):
Let me try to explain this crazy chart. As you can see, I laid out a timeline for each song, boiling everything in it down to the following abstract categories:
beats & riffs – 1-4 bars long: Sometimes riffs change their note content to match a chord progression. But I still view it as the same riff. E.g., the synth arpeggio in ‘Prelude’.
phrases – the units of melody, 2-6 bars long: Of course, the judgement of phrase length can be arbitrary. I just decided these cases intuitively, trying to avoid fussiness. So, my chart doesn’t show every little motif.
sections – the large-scale divisions
If you’re familiar with sequencer software, you’ll recognise where I got the idea for all this. It’s how these tracks would look in a sequencer’s “Arrange” window: horizontal lanes containing MIDI clips: short, repeated grooves and beats, and longer melodic or chordal themes.
However, I’m not representing every instrument. I use the “Ostinato” lane for any repeating figure or combination of repeating figures, and anything that I deem to be a melody or theme (whether single note, harmonised, counterpoint or chordal) goes in the “Phrases” lane.
I’ve done the jazz musician thing and reduced the harmony to chord symbols. I don’t condone this in general. It’s just to sketch out what’s going on for skimming purposes. And while I’m confessing sins, I also used mode names to describe chords. In a past post I complained about overuse of modes as an explanatory device. However, I think modes are the best explanation for aspects of Hiroki Kikuta’s music.
Now, these are game soundtracks and the structure is first and foremost determined by having to loop indefinitely. Every one of these tunes has a section, the loop, that will cycle for as long as your game character stays in that location or game state (e.g. the battle screen). Four of the songs also have a preceding section that I call the intro.
The looping is part of the aesthetic, providing a hypnotic dreaminess, a melancholy, an escapism into something both boundless and yet safely predictable.
Obviously, looped music needs both variety and smoothness if it’s to avoid annoying the listener.
I never completed Final Fantasy VII (or even played either Secret of Mana or Seiken Densetsu 3) but I remember songs getting annoying when you had to redo a task too many times, like the Chocobo race. Or even the battle music, sometimes it’s the last thing you want.
Entrances and starts of sections are almost all square and on the beat. Melodic pickups are used for sure, and drum fills, but there’s never a sensation of skipping the downbeat or disturbing the start of a section. The music, after all, shouldn’t demand too much attention. It should provide drama and atmosphere, and depth for repeated listening, without snagging the ear. This doesn’t prohibit dissonance, strange sounds or unusual time signatures. But they must be safely contained in comfortable box-like structures.
Changes in instrumentation or texture are obviously important to provide diversity within the short loops. I tried to depict the instrumentation changes in the following chart:
Again, I’m not happy with this chart. The bars look like a bar chart, but although I am depicting the song structures chronologically from left to right, longer bars don’t represent longer time periods: instead, they represent songs that have more instrumentation changes.
That’s confusing and I’d like to improve on this in future.
Generally there’s a lot of keyboards, woodwinds, strings, mallets. Bit of voice, reed instruments, plucked strings. And a leaning towards kitsch things like barrel organ, accordion, music box.
The orchestration is not dense. I counted at most five different instruments at any time. This has to do with available tech, of course. These tracks are in a sample-based format, similar to tracker music, with (I’m guessing) 8 or 16 simultaneous samples permitted at once.
I presume the instruments were sampled from Yamaha digital synths. It can be hard to tell if something is meant to sound “like a synth”, or like a synthesised version of something real. That kind of stuff gives a lot of the aura of these soundtracks. I’ve spoken about it a bit before.
All right, let’s get onto the structures!
About half of the tunes have a loop length below a minute, while half have a length from 1:30 to 2:30. If you are composing in this idiom, you’ll be writing stuff shorter than a short pop song. Maybe that’s part of the appeal: a bijou version of generally long-winded genres like classical, prog rock and fusion.
‘Prelude’ and ‘Tifa’s Theme’ (Uematsu) and ‘Still of the Night’, ‘Fond Memories’, ‘A Curious Happening’ and ‘Raven’ (Kikuta): all these have a roughly ternary form for the loop. Kikuta in particular uses an AAB form with no variation between the As, a couple of times.
‘Few Paths Forbidden’ (Kikuta) and ‘Anxious Hearts’ (Uematsu) have four equal sections in the loop. ‘Sending a Dream into the Universe’ (Uematsu) has only two, but the theme’s phrase form is compensatorily more complex. ‘Now Flightless Wings’ (Kikuta) is a special case which I’ll discuss later.
Five of the tunes use repetitions with variation. Strategies for variation are all very familiar:
add (or remove) a countermelody, as in ‘Prelude’ and the second part of ‘Now Flightless Wings’
octave shifts, that old classic
change instrumentation, like flute to oboe in ‘Tifa’s Theme’
Most of the tunes centre around a continuous chunk of thematic melody of around 30-50 seconds’ length. It depends on the tempo, but often that’s 16 bars long. Perhaps because I chose a lot of melancholy and pensive and nostalgic pieces, many of these tracks have a similar moderate 4/4 tempo. Both games feature some 3/4 or 6/8, but less than I expected.
8 out of the 10 tunes have an ostinato of some kind, so that’s definitely a technique to reach for. Of those 8, 6 have it basically throughout.
Finally, let’s mention rests and breaks. All of the songs except for ‘Still of the Night’ and ‘Tifa’s Theme’ and the tiny loop of ‘Now Flightless Wings’, feature a tag or a breakdown to rhythmic hits. This provides a relief from the main melody, within the loop. ‘Raven’ has two different rhythmic breakdown sections.
‘Tifa’s Theme’, ‘Few Paths Forbidden’, ‘Now Flightless Wings’, ‘Anxious Heart’ and ‘Sending a Dream into the Universe’ (all lyrical, emotive ones!) feature prolongations of melody endings by a bar or two, either of a V chord or a I. Nothing too surprising, but another little technique for the toolbox.
In the end, I think I’ve reached the limitations of this kind of analysis. I could try to eke out some conclusions about the phrase divisions of these melodies, but we’d learn more by transcribing a couple and talking about them as, you know, melodies.
Okay, time to wrap up with individual comments on each tune.
I apologise for presenting the tunes in no sensible ordering. It’s because I (rashly) chose LibreOffice Calc to lay out my data. Putting the tunes in a sensible ordering would involve too much layout hacking to be worth it.
I gotta say, I haven’t been too impressed with Calc. I encountered a fair few tiny glitches and the export functions are unfinished: I couldn’t find a way to choose what page or what cells to export to image, and the pagination options in the PDF export appear to do nothing.
Anyway. Now comes the fun part!
‘Prelude’ (C major) – Nobuo Uematsu, Final Fantasy VII
This is the first thing you hear when you start the game. Confidently, for 16 bars it features only solo synth arpeggios that climb and fall through 4 octaves with a calm wave-like effect. The synth is warm and woody in its lower registers and chime-like at the very top. An echo effect adds magic dust. The triads are decorated with 9s and, at the end, 7s, providing a bit of extra colour.
Harmonically, it’s a four-chord trick until the parallel minor chords – all familiar but powerful stuff. The mood is mystical but noble. After that full round of synth, a majestic theme, with full chords, in strings and woodwinds, begins.
One smart detail is the order of the theme variants: first a version with ascending countermelodies in the accompaniment, then a plainer version without countermelodies, providing some easing and rest.
‘Still of the Night’ (A minor) – Hiroki Kikuta, Secret of Mana
This isn’t a million miles away from the hypnotic, chimey, magical mood of Prelude, yet Kikuta’s style is distinctive. It’s more mysterious and warmer, cheekier. This stems from a static dorian modality, alternating with major chords off flattened degrees like bII, bVI and even bI. That sense of mystery comes from the ambiguous voicings (there isn’t a clear bass note) and tensions created by the shifting, slow ostinato against a droning tonic note.
This particular tune is very open in texture though we’ll see him do busier stuff elsewhere. Sonically, we’re in chimey, dreamy land again, but Kikuta’s sounds are warmer. He famously crafted the samples himself rather than leaving it to an engineer, and the result is gorgeous.
‘Tifa’s Theme’ – Nobuo Uematsu, Final Fantasy VII
Wow, this is such a catchy theme, I’ve had it in my head all day. Like Prelude, it’s in a major key with some colourful chords from the parallel minor. Also like Prelude, the progression is basic and powerful. Legato orchestral sounds plus a near-constant vibes arpeggio combine in a mood I’d call soulful.
The strings are done in a bit of a hurry, I think, but we get some contrary motion from variations in the vibes. There’s some not-particularly-subtle symbolism in the melody textures, that nonetheless drew a tear from me, about how Tifa wants a man to love and a return to the happiness she had with her childhood friend Cloud: flute and oboe together, then flute alone, then oboe an octave lower with flute finally rejoining.
The loop back to the start harmonically goes to I from a II, although the melody does strongly lean on the V note. It’s as if the theoretically necessary, bridging V7 chord is only briefly hinted at.
‘Few Paths Forbidden’ – Hiroki Kikuta
What a groover! This one has an awesome syncopated drums and bass guitar groove, a warm hooting synth harmonised melody, with wheeling syncopated marimba riffage in the background.
We’re getting into Kikuta’s secret sauce here: notice how the marimba has a quiet lower harmony line which subtly contributes some pulsing bass activity alongside the expertly sparse bass guitar throbs. The slapback echo adds texture and emphasises the woody quality while pleasantly obscuring that lower line – just another example of Kikuta’s gorgeous (yet economical) sonic layering – pleasant depth like a bed of bracken.
The slightly out of tune mallet sound adds flavour and realism.
The pumping bass uses the slab-like weight of bass guitar as a powerful device in itself. This is a composer who gets it.
‘Now Flightless Wings’ (Ab major) – Hiroki Kikuta, Secret of Mana
This one’s a special case. From reading the Youtube comments, I glean that it’s the last song heard in the game and it’s there to deliver an emotional payoff at the story’s end. Tense string chords get harmonically warmer, resolving into an infinite loop of gorgeous glowing barrel-organ and music box sounds. I haven’t played the game but even so the bittersweet “life is sad” loveliness affects me. The extreme shortness and simplicity of the loop makes it like a lullaby, childish, vulnerable and ephemeral. That said, some subtle counterpoint and harmonic variations bring depth and ornamentation so it’s not too plain. Brilliant stuff.
‘Anxious Heart’ (F minor) – Nobuo Uematsu, Final Fantasy VII
This one starts with cinematic string swells. The harmony is tenser than in the other Uematsu pieces we’ve seen: minor to parallel major shifts with roots moving in thirds, featuring that shift from a major to minor 3rd which signals a mood of awe and transcendence. A lot of emotional payoffs in music happen on these type of big, simple colour shifts. So good!
The intro is in 5/4, I think, just to lengthen out the chords.
‘A Curious Happening’ (C minor) – Hiroki Kikuta, Secret of Mana
Swung sixteenths sleazy freaky noir funk. There’s probably something that could be said here about Japan’s relationship to African-American culture, but I amn’t informed enough to grasp it.
This track has very funky timbres. Both the synth and the xylophone in the intro vamp are primarily sonic/timbral. Although they’re outlining a Im6 to I-7b5 jazzy chord alternation, what we’re most aware of is the warm, nearly buzzing fatness from the synth, and dry niggling woody oddness from the percussion. Both are staccato sounds, putting that African emphasis (speaking very, very, very broadly) on note onset (and hence rhythmic expression) over the continuous pure tones of classical music.
In this context, the simple clave rhythm for the breakdown was the perfect choice.
‘Sending a Dream into the Universe’ (C minor) – Nobuo Uematsu, Final Fantasy VII
This one has, I dunno, maybe “Celtic New Age” instrumentation? Keening woodwind, acoustic accompaniment, slow rock drums and synth pads.
There’s a cool programmatic sequence in the harmony. Three times, we change to a minor key a fifth above, via a pivot chord sitting a third away from each key. E.g. Cm Eb Gm. Then Gm Bb Dm. The effect is simultaneously uplifting and sad. Doing it three times in a row emphasises the theme of the title, with a feeling of hopefully, nobly surging upwards. Nice work, Uematsu-san.
‘Fond Memories’ (C major) – Hiroki Kikuta, Secret of Mana
It’s little wonder people get nostalgic about these games… they were made with a clear-eyed understanding of the mechanics and value of nostalgia! This sparkling gauze of single-note piano and faint accordion, with its shimmering delay effect, just gets right down to the business of plucking your heartstrings. Nice balance between the 4-bar major part and the 16-bar minor part. The harmony is triadic, diatonic then relative minor and finally just a bit of parallel minor in the form of a bVII to get us to a colourful and rather inexplicable, but definitely good VI7 chord before going back to the tonic.
‘Raven’ (A minor) – Hiroki Kikuta, Seiken Densetsu 3
This one’s a pure groove/riff tune. A foot tapper! Like in ‘Few Paths Forbidden’, Kikuta does his dorian two-part harmonising thing in the marimba, and also in the woodwinds. This tune just stays on one chord though, with a stomping rhythmic breakdown followed by an ominous, pulsing, pizz strings and flute tag, for variation.
Thanks so much for joining me. Hope these classic JRPG songs warmed your heart! And I hope I put these lessons to use some day soon myself!
I want to bust a real quick one today on my recent experiences of dipping a toe into alternate and smaller-scale web platforms.
Of course, this article itself is hosted on a dominant web platform, WordPress. And I use Facebook daily for mundane purposes, mostly keeping up with people. (Twitter, on the other hand, gets no love from me.) I’m not writing to rag on big platforms, but to acknowledge a cultural moment when a lot of people are contemplating this switch.
I’ve been reading Hacker News (itself a big platform – they’re everywhere!) for a couple of years and quickly grew familiar with “bring back the old web” sentiments there. I would guess programmers, with their love of the esoteric and the stripped-down, have been saying such things forever. The argument, if I may sum it up crudely, is that personal webpages (whether self-hosted or on services like Geocities) and pre-Web-2.0 media like blogs, newsletters and forums, fostered a more diverse, friendly, expressive, open culture online.
Part of that nostalgia is people remembering a period when only nerds were online – no racist uncles or Karens, to reach for current stereotypes. Also, I’ve the impression that a lot of good memories come from participation in subcultures like MP3 blogs or Flash games, that would obviously have drawn together like-minded folks.
Fast forward to 2021, then, and it makes sense that the many current revivals of the old-school web favour nerdiness over mass appeal. I’ll discuss that a bit more below when I get to my actual experiences.
Another driver of interest in alternative platforms is the manifest inadequacies of Facebook, Twitter and so on. Those companies have the impossible task of trying to please everyone. High-profile bans and legal challenges show that the security, conflict-of-interest and privacy problems of ad-driven social media are out in the open these days.
That recently drove a lot of people from WhatsApp onto the competitor app Signal, including myself.
I also started my own personal website, kevinhiggins.dev, to have an online outlet where the form as well as the content are in my control.
Finally, and mostly inspired by one guy I follow called JP LeBreton, a mild-mannered, leftist game dev, I joined Mastodon, the platform I call “Twitter for nerds”.
I feel much freer to post on Mastodon than on FB, because I don’t have, nominally, 1000 people who know who I am and might be following my posts. The lack of an audience (I’ve no followers on it yet and only got a couple of transient likes) is okay by me. Same with Drum Chant, I never focused on driving traffic to here. This gets right to my perhaps idiosyncratic stance on web publishing of any kind: for me, “putting it out there” is more important than getting a reaction.
I know why this is, it’s a quirk in my personality whereby things feel much realer to me if I’ve written them down. (Hence this blog – and privately, I also journal and keep a half-dozen diaries and logs for various activities.)
Hmm. I’d thought this article might be an encouragement to others to try out alternate platforms, yet now I’m persuading myself that they’re for people like me who are mostly into organising an archive of their thoughts over hanging out with others.
That’s not to say I don’t want the hangs. My own motivation to try out these venues of expression was very simple: lockdown is very lonely and I’m hoping to meet new, like-minded people.
And there are some such on Mastodon, for sure. But rather than starting conversations, for now anyway, I’m taking the shy fellow’s tactic of crafting the feed I’d like to follow.
It’s been fun, and I especially like posting abrupt juxtapositions of content, e.g. counterpoint exercises one minute, rap lyrics the next. I feel free to perform an intense, multipotentialite persona there.
When it comes to my site, I imposed more structure, to present a neater picture for, say, a prospective employer. (Check out the site icon!) However, I chose a serif font and some moody colours specifically to hint at 90s web mischief. The links section is intended to send readers off into a maze of esoteric personal pages. Mixing business with pleasure.
I’ll wrap up today with a related trend I’ve noticed and then some blue-sky ideas for more alternate platforms I might try.
A lot of the writing that affected me most last year came by email newsletter. When I contacted the author of one of these to say hi, he mentioned in his answer that he’d found the supposedly old-fashioned format unprecedentedly effective.
I list the three newsletters I follow in the links page of my site.
And to finish… two more avenues for expressing myself online that I’ve been considering are Neocities and Project Gemini. The first is a user-friendly webpage-hosting and linking service, explicitly about recreating the old-school web. I think they might even have, whatchamacall those things, link rings? Webrings!
That could be a place to do something pseudonymous and weird. Prose poetry? Moodboards? Naughty fiction? Something warm and indulgent, anyhow.
(I already have one or two pseudonymous outlets, I recommend it. Though I’m ignorant of the whole web culture of “alts” built on the concept!)
Project Gemini is different. It’s a whole new web protocol, a communication format for online interchange like the Hypertext Transfer Protocol that underlies the whole web. So, instead of an address like https://kevinhiggins.dev, you’d have gemini://gemini.circumlunar.space/servers/
You need special software to view content using this protocol, and it’s text only. It took me more than an hour to find an app that worked, but when I did, it was weirdly fun to read people’s random posts by such a covert, strange route. I remember one person seemed to write only about guitar tunings they were exploring. That kind of thing.
If I publish in “geminispace”, I’d like to write about spirituality and wisdom literature, to lend my own brand of esotericism to the initiative. (Since the Christmas holidays I’ve been reading Chinese philosophy every day, and I’m also a big fan of the likes of M. Scott Peck… and I read a bit of Western philosophy too, until my brain gets tired.) That won’t be under a pseudonym and I’ll let you know here on Drum Chant if I get round to it!
Oh, last thing, I never said anything about Signal. Well, it’s very much like WhatsApp except I found the setup to be a bit more fiddly and tricky – getting stuck in loops asking for permissions on the phone, not immediately importing contacts. It also uses a spaced-repetition technique to get you to learn off your PIN, which is super-nerdy. (Though probably a good idea, I’m sure.) Nothing too surprising there.
This week I made some progress towards coding 3D rendering!
I remember when I was in my early teens and a bit bored on holidays at my grandparents’, trying to code an image of a road with 1-point perspective. I asked my grandfather to show me how to load the version of BASIC he had on his ancient Amstrad PC (it was GW-BASIC).
Back then, I didn’t get the basic idea of 3D perspective, but it isn’t actually very difficult: if objects are in a space in front of you where X is across, Y up and Z forward (and you are at 0, 0, 0), dividing their X and Y coordinates by the Z coordinate will create the necessary distortion.
(It seems that, typically, a small number is added to the Z component first to reduce the strength of the distortion, otherwise things get madly stretched off screen when the Z approaches zero. I bumped into that problem when making my Pink Sparks demo the other week.)
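In code, the whole idea fits in a few lines. This is a minimal sketch (Java, with my own naming, rather than my actual demo code), and the 0.5 offset is an arbitrary value I picked for illustration:

```java
// Minimal perspective projection: screen position is x/z, y/z,
// with a small offset added to z to tame the distortion near z = 0.
public class Project {
    static final double Z_OFFSET = 0.5;  // arbitrary, stops blow-up near z = 0

    // Returns {screenX, screenY} for a point at (x, y, z), viewer at (0, 0, 0).
    public static double[] project(double x, double y, double z) {
        double d = z + Z_OFFSET;
        return new double[] { x / d, y / d };
    }
}
```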
The real issue is that, to keep the code organised, manipulations of 3D data are best done using matrices. This way, a command can become data. Instead of running some code for each manipulation (such as rotating or resizing a shape), you have one piece of code that obeys a data representation of the desired command. This data is a transformation matrix.
You can then conveniently store these commands, for example the operations “Rotate by 30 degrees around the X axis, mirror in a plane with the same orientation as the X-Z plane but 5 units above it, resize to 80% scale” could be represented as three 3×3 matrices.
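To sketch what “command as data” means here (in Java, with my own naming, not the demo code): there is one generic apply routine, while each operation is nothing but a matrix.

```java
// One generic routine applies any 3x3 transformation matrix to a point,
// so each operation (rotate, mirror, resize) is just data.
public class Transform3D {

    // The single piece of code: matrix times column vector.
    public static double[] apply(double[][] m, double[] v) {
        double[] out = new double[3];
        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 3; col++) {
                out[row] += m[row][col] * v[col];
            }
        }
        return out;
    }

    // Rotation by `angle` radians around the X axis, as data.
    public static double[][] rotateX(double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[][] {
            { 1, 0,  0 },
            { 0, c, -s },
            { 0, s,  c }
        };
    }

    // Uniform resize, e.g. scale(0.8) for 80% scale.
    public static double[][] scale(double k) {
        return new double[][] {
            { k, 0, 0 },
            { 0, k, 0 },
            { 0, 0, k }
        };
    }
}
```

So the stored commands from the example above become `rotateX(Math.toRadians(30))`, a mirror matrix, and `scale(0.8)`, all fed through the same `apply`.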
If you use homogeneous coordinates, which are like ordinary coordinates but containing an extra element by which all the others are divided, then the Z-divide for perspective correction can be represented by a matrix which copies the Z value into that extra, dividing element (typically called W).
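A quick sketch of that matrix (again my own naming, not the demo code): the bottom row copies Z into W, and the divide-by-W step then performs the perspective distortion.

```java
// Homogeneous coordinates: a point is (x, y, z, w); after transforming,
// divide x, y, z by w to get back to ordinary coordinates.
public class Homogeneous {

    // 4x4 matrix whose bottom row copies z into the w slot: w_out = z_in.
    public static final double[][] PERSPECTIVE = {
        { 1, 0, 0, 0 },
        { 0, 1, 0, 0 },
        { 0, 0, 1, 0 },
        { 0, 0, 1, 0 }
    };

    public static double[] apply(double[][] m, double[] v) {
        double[] out = new double[4];
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                out[row] += m[row][col] * v[col];
        return out;
    }

    // The divide step: this is where the z-divide actually happens.
    public static double[] divideByW(double[] v) {
        return new double[] { v[0] / v[3], v[1] / v[3], v[2] / v[3] };
    }
}
```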
But enough of my attempts at understanding linear algebra! Let’s talk implementation details.
Perhaps surprisingly, I decided not to use WebGL, the graphics-card-accelerated renderer that now comes with all browsers. I’ve been having a good time with WebGL tutorials, but connecting up buffers and typed arrays introduces more places to make a mistake and lose time debugging. I’ll return to WebGL someday because the raw power, the basic idea of shader language, and the depth-buffer and texture manipulation capabilities all attract me deeply. But for now there was, again, a clear best option: HTML5 Canvas.
This is a graphics API with higher-level features such as line-drawing commands – precisely what I wanted for my demos!
The first one I made demonstrated linear transformations in two dimensions. As you can see if you click here, these all operate around the origin (the centre point where the two axes meet), or to put it another way they preserve the origin. I used setInterval(..) to make the animation – not a good choice as we’ll see in a sec.
Then, I made a demo of affine transformations – a larger category which includes the linear transformations, as well as translations (i.e. just moving shapes around with no distortion) and mixtures of linear transforms and translations. To show the way that affine transformations can occur around any point, I added some quick interactivity to let the user set the centre point and choose a transformation. I also used matrix multiplication to iteratively apply the same transformation to my shape.
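The "around any point" trick is just translate-transform-translate-back, and iterating is repeated matrix application. A sketch in Python (the demo itself is JS; these names are mine):

```python
import math

def mat_mul(a, b):
    """3x3 matrix product (row-major lists of rows)."""
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def apply(m, p):
    """Apply a 3x3 affine matrix to a 2D point (implicit w = 1)."""
    x, y = p
    return [m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2]]

def rotation_about(cx, cy, angle):
    """Affine rotation around (cx, cy): translate, rotate, translate back."""
    c, s = math.cos(angle), math.sin(angle)
    to_origin = [[1, 0, -cx], [0, 1, -cy], [0, 0, 1]]
    rot = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    back = [[1, 0, cx], [0, 1, cy], [0, 0, 1]]
    return mat_mul(back, mat_mul(rot, to_origin))

# Iteratively apply the same quarter-turn, as in the demo.
m = rotation_about(1.0, 1.0, math.pi / 2)
p = (2.0, 1.0)
for _ in range(4):
    p = tuple(apply(m, p))
# Four quarter-turns bring the point back to where it started.
```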
Around here I started thinking of possibilities for game graphics:
a stick person who squashes a bit before and after jumping
a stick person who leans back before and at the end of a run (wind-up and braking), done with a shear transformation
explosions setting off shockwaves that pass through numerous characters on the screen, squashing each one along the direction of travel as they pass
interactive objects that cause the player character to change size, Alice In Wonderland-style
(I was definitely channeling some old Flash games from my teens… stickdeath.com, anyone? I think that was the URL.)
I’ll get back to these ideas in a second to discuss what I think would actually be the hard part about making them…
My final demo was in actual 3D. Still working off Gregg Tavares’s nice WebGL tutorials (though NOT following his convention for ordering matrix elements in a 1D array), I implemented homogeneous coordinates and a Z-divide. My first attempt had an annoying error in the Y-axis. Turned out everything was working; I had just put my transformation matrices in the wrong order, so the up-and-down bobbing was happening after perspective had been applied!
If you look at the working demo, you may see the star seemingly spinning the wrong way, despite the perspective cues, a classic illusion. I think this is just a general fault of wireframe graphics.
BTW The animation here is handled with the preferred modern JS technique requestAnimationFrame(..).
All this demo-making raises the question: could anything here become reusable software?
The matrix stuff is eminently reusable. To make it convenient, I would need to make an engine or interface allowing a programmer to load geometrical data, transform it and display it, through well-documented, user-friendly functions, while hiding inner workings.
So the last thing I did this week was some design work on a personal 3D library. Eventually this should be in WebGL, but to test the design I might do it in Canvas and maybe just with wireframes. The crucial point is that geometry exists in all these different spaces before it’s fully processed:
object space, that is, vertexes positioned relative to the centre of the object they represent
world space, so now that object is positioned in a world
camera space, now the world is spun around to face the camera
screen space, now anything visible is referred to by the position it takes up on the screen (in this case, the rectangular Canvas on a webpage)
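The spaces above chain together, one transform per hop. A toy Python sketch of the idea (translations only; the sizes and offsets are invented):

```python
def translate(p, d):
    """Move a point by an offset (standing in for a full matrix transform)."""
    return tuple(a + b for a, b in zip(p, d))

def to_screen(p, width=320, height=240, distance=4.0):
    """Perspective divide, then map to pixel coordinates (y flipped)."""
    x, y, z = p
    sx = x / (z + distance)
    sy = y / (z + distance)
    return (width / 2 + sx * width / 2, height / 2 - sy * height / 2)

# A vertex defined relative to its object's centre (object space)...
vertex = (1.0, 0.0, 0.0)
world = translate(vertex, (0.0, 0.0, 4.0))   # object placed in the world
camera = translate(world, (0.0, 0.0, 0.0))   # camera at the origin here
pixel = to_screen(camera)                    # screen space at last
```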
All of these have potential for interesting experimentation. What exactly defines an object is an open question – can an object be composed of others, and in what ways might those sub-objects be transformed? Once in camera space, what are the possibilities for fish-eye effects or non-Euclidean geometry? And of course screen space is the traditional domain of the visual artist, the flat sheet.
Well that’s some big talk on the back of a spinning star. Baby steps though!
Last Monday I decided I would work on a different coding-related skill each day, for a week. These days I’m familiar enough with my own brain to know that swapping subject areas and deep-diving into topics suit my attentional style. Because motivation can be hard to come by when jobhunting and self-studying during pandemic restrictions, I thought I’d stimulate myself with novelty. It might even help me get psyched up to work on longer-term ambitions, like releasing my ritual Android app Candle Shrine, and also making a portfolio website.
Day 1 – C
On Monday I worked on C programming – an area I haven’t touched since 1999, ha! Yes as a kid I did a summer course which touched on some C. Anyhow, the aim was to set up the compiler and get console input and output. I used a build of Tiny C Compiler nabbed from the Tiny C Games bundle. My project is a simulation of life as experienced by our family cat, Goldie. Actually I’m pretty pleased with this, my sister played it through and enjoyed it. I was inspired by Robert Yang’s concept of “local level design” – a design aesthetic celebrating small-scale social meanings rather than top-down formalism. (This suited me because I don’t yet know enough C to write anything other than this ladder of if statements. Still, it works!)
Day 2 – Linux
The next mini-project was Linux shell commands. I used VirtualBox to dip my toes in – it lets you run any distro on emulated hardware, from a Windows desktop. It nearly locked up on me a couple of times, but in fairness my computer was running two OSes at once so I forgave it. It never fully crashed.
I’d hoped to get into shell scripting, which is the power user technique of saving listings of shell commands (a shell being a console for directly running programs or OS utilities, etc. – like the ol’ Command Prompt in Windows) as text files to be invoked as, effectively, little programs.
But all I had time for was to learn about 20 standard shell commands. However, I really liked this stuff. I can see why there’s a stereotype that devs use Linux. It’s rather satisfying to install stuff, edit files, and set up the file system via typed commands and not all that intimidating either.
Day 3 – Pink Sparks
This one was fun. I made a 3D particle demo – a spinning cube made of flying pink sparks. My focus here was to prove I could make a simple particle system, which indeed wasn’t hard, cribbing off Jonas Wagner’s Chaotic Particles and leveraging the extremely handy Canvas feature in HTML5. I also wanted to do 3D perspective, which was hard. However here I used the simplest possible version, a bare z-divide where x and y coordinates are divided by distance from the viewer. The proper way to do this involves matrices and transforms, but I’m not there yet.
If I get back to this the two things I’ll do are change it to defining line-shaped sources of sparks rather than point-shaped, and make some nicer data than a cube, like say a chunky letter ‘K’. This wouldn’t be particularly hard. Making a display of my initial fits with my interest in what I call ceremonial coding which I believe will be an emerging cultural field in years to come. As life goes online, we’re already finding the need to program and design software for celebrations and community rituals – an example being my graduation from my computer science course, which is being held on Zoom. I am certain that techniques from game design and aesthetics from digital culture will be important to create spiritual meanings and affirmations of identity on computers. My upcoming ritual app for Android phones expresses this conviction.
Again, Robert Yang’s post is very close to the spirit of this: “What if we made small levels or games as gifts, as tokens, as mementos?”
Day 4 – Huffman Coding
I hit a roadblock on Thursday. Huffman coding compresses text, by taking advantage of the fact that certain symbols (for example, ‘z’) occur far less often than others. To make a Java app implementing this was a meaty challenge, requiring binary buffer manipulations, a binary tree, sorting and file I/O. Still should have been achievable – but I let myself down by not rigorously figuring out the data representations at the start. This meant I threw away work, for example figuring out how to flatten the tree into an array and save that to disk, only to twig that the naive representation I’d used created a file far bigger than the original text file.
Though I was working from a textbook with an understandable description of the Huffman coding technique, that was nowhere near enough. I still needed to design my program and I failed to.
So I ran out of motivation as poor design decisions kept bubbling up. This was a stinger and a reminder that no project is too small to require the pencil-and-paper stage. On the plus side, I did implement a tree with saving and loading to disk, plus text analysis.
Hopefully I can reuse these if I come back to this. It’s a fun challenge, particularly the raw binary stuff and the tree flattening (although I don’t know yet if I want to store the tree in my compressed file, I think probably just the symbol table needed to restore the original.)
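For the curious, the core of Huffman coding fits in a page. Here's a sketch in Python rather than the Java app described above, and it builds the code table directly instead of flattening a tree to disk:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table: rarer symbols get longer bit-strings."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreaker, {symbol: code_so_far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, low = heapq.heappop(heap)    # the two rarest subtrees...
        f2, _, high = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in low.items()}   # ...get a bit prepended
        merged.update({s: "1" + c for s, c in high.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
# 'a' is by far the most common symbol here, so it gets the shortest code.
```

The resulting table is prefix-free, which is what lets the decoder read the bit stream unambiguously.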
Day 5 – WebGL Black Triangle
Now this was pure fun. I used a tutorial to learn how to display graphics on a webpage using WebGL. I… frankly love the feel of OpenGL Shader Language. The idea of using a harshly constrained programming language to express some low-level color or geometry calculations, which is then compiled and run on your graphics card so you can feed it astronomical amounts of data for ultra-fast processing, is so satisfying. (Actually, especially the compilation process, and the narkiness of the parser where for example 1 is different to 1.0… it feels like you’ve loaded a weapon when you’ve successfully compiled at last.) I love graphical magic, but previously have been doing it at several removes, using wrappers on wrappers like Processing. I will definitely be doing more of this.
Day 6 – RESTful API
Another failure! I wanted to make a RESTful API demo as a bullet point on my CV, and host it for free on Heroku. But although I did some good revision on API design, when I got into implementation I totally got tangled up in trying to get libraries to work. Blehh!
I wanted my system to be standards-compliant, so I tried libraries that would let me use JSON:API instead of raw JSON. Some people say raw JSON is not good for APIs because it has no standard way to include hyperlinks, which are, in fairness, central to the concept of REST (i.e. instead of the client knowing to use certain hardcoded URLs, each response from the API includes fresh hyperlinks that the client can choose to follow).
But I got stuck when the examples for the library I chose wouldn’t compile because they required a new version of a build tool, Gradle, and despite trying some things off forums my IDE failed repeatedly to automatically install this.
If I get back to this I’ll use the build tooling I had working already for school projects. Life’s too short!
I wonder if the area of web services – so essential, stolid, bland – might be a natural home for rather pedantic personalities. The type who would make a typology of all things and publish it as a web standard. In any case, wading through some comments and blog posts and Wikipedia pages gave me a stronger understanding of state in web services than I had before.
Day 7 – Pathfinding in JS
My plan was to implement a classic pathfinding algorithm, Dijkstra’s algorithm. But I wanted to have it so a little monster would chase your cursor around elements on a webpage! Well, as usual, I should’ve thought this through more. The fact is, web page elements are not intended to be processed as 2D shapes. HTML is semantic – web content is structured and manipulated as elements like paragraphs, headings, links that have meaningful relationships in the context of the document itself – with the final presentation of these elements done on the fly according to the user’s needs.
Anyway… my point is, I had to compromise to get this going. My original vision was of words arrayed around the page randomly, at different sizes, with a monster sprite threading his way around them.
My solution doesn’t have the words, just boring blocks, and though I think I could do words at a fixed size, having splashy words in different sizes could be quite a hassle.
Nor did I get around to the monster although I have the sprite for when I do:
But the thing that worked well for this mini-project was: web standards! In particular, I made the excellent decision not to hack CSS positioning from JS, but instead take the time to revise the CSS Grid system. Which, as you might imagine from the name, was actually perfect for this use case. Those numbered cells above are arranged by Grid.
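The pathfinding itself is the textbook algorithm. A sketch in Python rather than the JS of the actual demo, on an invented grid where 1 means a blocked cell:

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path on a grid of cells, where 1 = blocked and 0 = walkable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue   # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = dijkstra(grid, (0, 0), (0, 2))
# The route goes down, around the wall of 1s, and back up.
```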
I listened to this sweet mix of Madlib beats recently, and was reminded of my firm conviction that he is the greatest beatmaker. Maybe not the greatest living DJ in the sense of all-round hip hop artist, I’d hand that to DJ Premier for his epochal work with Gang Starr (deepest and best hip hop act of all time, for my money) and for making my no. 1 track of all time. Madlib doesn’t aim that high artistically, I think. But his stuff is the funkiest of all.
I had to turn up the mix to neighbour-bothering levels numerous times. It’s that good. So here are some notes I took for myself to try improve my own hip hop beats – a form I’ve been dabbling in for many years. Hope you find something you like.
Madlib excels with pickups. If you don’t know that term, it generically means a melody that enters a few beats before the perceived top of the musical form. However, I use it specifically to describe the funky structure wherein a line – melody, drum fill, vocal sound, whatever – leads the ear through a break to the downbeat. Think reggae drum rolls and jazz horn breaks. This kind of pickup provocatively holds onto or toys with the time/groove in the gap before a beat drop. I noticed that Madlib can use almost any kind of material in this role. Strings/vocal top layer mush, guitar or horn stabs, vocal snippets, anything.
(Something cool I noticed is that this use of chordal stabs/slices in particular as fills or pickups, can be ambiguously interpreted as both harmonic, a meaningful chord change, and as a passing dissonant sound.)
From this follows a more general principle: any sample, any instrument sound can and should be broken or undercut. (See my article on funky structures for more on undercutting.)
This makes me want to revise my comfortable habit of making a 4- or 8-bar loop, quite detailed and full, and then arranging it by basically muting and unmuting, maybe filtering or echoing, parts. Madlib eschews this techno type approach. His tracks are live-feeling and changeable, also quite unlike traditional hip hop like mid-90s DJ Premier or Lord Finesse productions. In those tracks, there’s some muting and breaks and cuts, but everything is based off a main verse groove (and perhaps a chorus change). By contrast, Madlib’s stuff turns and crawls like a beast.
Often this organic development lets an already existing sound flower and manifest its potential, e.g. from happening once every two beats to twice or letting in previously-filtered-out highs. Or switching octaves of a synth bass part here and there – very effective. This is about finding the right degree of saliency (a term I learned from an otherwise fairly boring composing book by Alan Belkin) – not smooth enough to be subconscious or background, but not jarring either. I’d like to learn how to hit that sweet spot.
Madlib’s beats are often pretty sophisticated harmonically – the root movements and chord changes from his source material emerge in the final product. I’m inspired to simply take more care with the chordal content of my samples and productions.
“Taking care” really sums up this music. Madlib never seems content to phone anything in. Every sample is present for a reason, never “just because” – even fundamentals like hats and snares are left out or drastically varied. Also, every sample, without exception, is so, so fat. Like, dripping from the speaker. It’s absolutely incredible.
That’s not achieved by narrowly homing in on perfect synth or EQ or compression settings like a techno producer. The fatness comes in wildly varying flavours, e.g. from very subby, electro kicks/bass to earthy, turfy, crackling ones or quite distorted and processed, depending on what each beat needs.
This one’s a bit intangible, but Madlib’s tracks often seem to have a pregnant space. He can make you wait. These grooves are head-nodding yet sound like they haven’t fully kicked in, over long periods. This comes from space and the confidence to use it… and also making every element add to the funk.
Here are some specific things I want to try in my productions…
When using the classic hip hop technique of splitting sampled material into a bass layer and a top layer using filtering, don’t expect the bass layer to sound anything like a solo bassline. It’ll sound like a muffled version of the original sample with all its instruments, and that’s fine, it’s idiomatic. I used to think you had to try literally remove everything but the fundamentals of the bass notes, but this just results in a vague thrumming. That’s not the way!
Madlib has a distinct approach to the other side of the coin, the high frequencies: frequently his strings and vocals and chordal mush gleam hazily over the gritty, present beat. Perhaps some reverb on the top layer, and smart compression somewhere, contribute to this?
Actually, there’s a lot of woozy modulation in Madlib’s music (though it’s not formulaic like in your modern day chill hop/study beats electric piano sound) and I’m gonna grab a tape emulator to try get some wow and flutter and noise into my sounds.
Also there’s liberal use of loud and woofy synth bass, often with tasty (non-diatonic) note choices or chords. I think because I’m still in psychological recovery from quitting bass playing a year ago, I haven’t been focusing on basslines in my productions.
Well, that’s all I got. I would’ve liked to discuss the idea of “beatmaking” a bit – this cultural manifestation of the 2010s, pretty much, that markets aspects of hip hop culture as a hobby which now seems like it could take over much of music. (Especially in these awful, socially-distanced times.) And of course, there’s plenty of black culture stuff we could dig into, metaphors around music as sonic substance (“fatness”), the aesthetic of “taking care” and its gender coding (maternal energies in highly masculinist music), sexual metaphors around cutting, the groove, also the slave sublime (distorted voices, screams), manifesting/smuggling, and so on. But you can find those in any deeply funky music. I hope today’s narrow focus on techniques was worthwhile.
I made a VST software instrument that uses the Golden Ratio to generate frequencies off a given note. You can download it here if you want to lash it into your music program and try it out. It’s unfinished though – more details below.
This didn’t come from any particular intention – it was a discovery I made while messing about in Synthedit many months ago. I was trying to make a normal additive synth where you mix the relative levels of each harmonic within a single note. But I found that if I took the frequency multipliers for the harmonics (which are normally just 1, 2, 3, 4 – so the second harmonic is twice the frequency of the first/root, the third is three times its frequency, fourth four times and so on) and raised them to a particular power around 0.6, a cool sound came out.
“Around 0.6” turned out to be 0.618034 – the “Conjugate Golden Ratio” (or one over the Golden Ratio).
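The exponentiation is easy to reproduce. A rough Python sketch (the names are mine) of the multiplier series and the resulting interval sizes in cents:

```python
import math

PHI_CONJUGATE = 0.618034  # 1 / phi, the exponent the synth settled on

def voice_multipliers(n, exponent=PHI_CONJUGATE):
    """Frequency multipliers: the harmonic numbers 1..n raised to the exponent."""
    return [(k + 1) ** exponent for k in range(n)]

def cents(ratio):
    """Size of a frequency ratio in cents (100 cents per equal-tempered semitone)."""
    return 1200 * math.log2(ratio)

mults = voice_multipliers(8)
# The first overtone lands at 2 ** 0.618034, roughly 1.535 -
# about 742 cents, i.e. some 40 cents sharp of a perfect fifth (700 cents).
first_overtone = cents(mults[1])
```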
Now it’s not possible to discover some “alternate harmonic series” because harmonics are a physical phenomenon: if you have a vibrating object with a fundamental frequency, whole-number multiples of that frequency can likely also form waves in it. So, each half of a guitar string vibrates one octave higher than the open string, and each third vibrates one fifth higher than that, and so on. Our sense of hearing subconsciously interprets the presence and tuning of harmonics as derived from physical properties: material and size and density. No other frequency series could have this same effect.
Nonetheless, the Golden Ratio seems more musical and harmonious than any other I could get by that exponentiating technique – actually it sounds like a jazzy chord. And it has what Gerhard Kubik calls “timbre-harmonic” aspects – like a Thelonious Monk chord, the harmony bleeds into the perceived timbre. My synth (on default settings) has a silky, bonky, dense timbre. (That territory between noise and harmony is where I like to be, musically. Check out the sounds I used for my drum programming experiment, for example.)
I could hear that it wasn’t in tune to equal tempered notes (nor to the non-equal tempered ratios found in the natural overtone series). But it was tuneful enough to sound concordant rather than discordant. If you download the synth and try the other ratios in the drop down menu you’ll hear the difference, I hope.
Here are the ratios: Golden Ratio conjugate on top, then normal harmonics, then the non-inverse Golden Ratio. You can see that the Golden Ratio conjugate results in a somewhat out of tune minor 11th chord – definitely jazzy! (The normal overtone series results in a dominant chord.)
I whipped up some little riffs so you can hear the synth. It’s very digital-sounding, like additive synths generally are, and also reminiscent of the stacked, sometimes exotic overtones of FM synthesis at its icier end.
Note I didn’t sequence any chords in these – the “harmony” is from the voices of the synth. And there are no effects added.
I’ll evaluate the musical aspect at the end of this post. For now I want to discuss the synth-making software I used: Synthedit.
When I first started messing with production as a teen, the free synths I downloaded were mostly built in Synthedit. I soon got to know its characteristic signs – exuberant amateur graphics, slightly misplaced buttons and sliders due to the software’s drag-and-drop interface, and I guess a lack of quality. I remember one bass synth that was pitched way off A=440 – rank sloppiness. I used it anyway. The Flea, it was called.
Most freeware Synthedit VSTs were like that: knock-off bass synths or delay effects, easy and obvious stuff, frequently derided by snobs on forums.
Synthedit enabled a flood of low-quality, imitative software synths by lowering the barrier to entry. Instead of coding C++, you could (and can today) just drag and drop components, add in other people’s custom components, and instantly see/hear the result in your DAW interfacing with your MIDI gear and other FX.
I was blown away when I first did this a couple of days ago. I clicked export, set some easy options, and then couldn’t find the exported file. Irritated, I went back to REAPER, my production software – and there was my synth just sitting there! And it worked! And nothing crashed!
Having studied programming for the last year, I know how hard it is to make software like that. The default mode of enthusiast-made nerdy software is to fail aggressively until you figure out some subtle, annoying configuration stuff.
So, today’s post is a celebration of a great tool, very much like the one I did about Processing. Once again, I want to emphasise how great it is that people make such programming tools for beginners, where the hard and horrid configuration stuff is done for you.
This is priceless. It can change culture, like Synthedit changed bedroom production culture and marked my adolescence.
Amazingly, the program is developed by a single man called Jeff McClintock. He is active on the forum and from reading a few of his posts I get an impression of someone who takes any user’s difficulty as a sign to improve the program. I really admire that. And it shows in the robustness of the app (even the old free version I’m using).
To make a synth, you drag connections between “modules” that provide a tiny bit of functionality or logic. It’s like wiring up a modular synth. The downside is that, if you already know how to code, it’s a drag having to do repetitive fixes or changes that in a programming language could be handled with a single line. Also, when a module you want isn’t available, you are forced to make silly workarounds, download third party stuff or change your idea. In Java or Python you could just do it yourself.
All told, I enjoyed the experience of making Golden (so I have baptised my synth). The best part is having impressively reliable access to powerful, mainstream standards: MIDI and VST. That made it a lot more fun than my previous synth which took in melodies as comma separated values and outputted raw audio data. It was brilliant to have all the capabilities of my DAW – clock/tempo, MIDI sequencing, parameter automation – talking to my little baby.
The drag-and-drop interface builder is also great. Once again, amazingly, McClintock hides all the donkey work of making interfaces, the boilerplate code and updating and events. You just put the slider where you want it, then it works. The downsides are being locked into standard interface elements unless you want to go much more advanced. So, I wanted to have one envelope take the values from another at the flick of a switch, but I couldn’t. (I’m sure it can be done, but I couldn’t find it easily online. In general, the documentation for Synthedit is weak, and online tutorials scanty. I think that’s due to the narrow niche served – people nerdy enough to make synths, but not nerdy enough to code.)
Although I had a great time with Synthedit, I’d like to keep learning and do this work in a procedural or OOP language next time.
Let’s finish. Do I think this Golden Ratio thing has musical value? Yes, and I would like to use it soon in a hip hop beat or tracker music production. (It could also serve as root material for spectral composition, I strongly suspect.) Is my synth very good as is? No, the envelopes don’t work nicely for immediately consecutive notes (I should make it polyphonic to fix that) and I’m not happy with the use of….
Actually, I should quickly explain the synth’s features.
At the top are overall options: the choice of exponent, then various tuning knobs. “Exponent fine tuning” lets you alter the exponent, “Voice shift” is an interval cumulatively added to each voice, “Keyscaled flattening” is a hack-y tuning knob that applies more to higher notes. Use these to massage the microtonality into sitting better with other harmony/instruments.
Then there are two instances of the basic synth, as you can see, each with 8 voices you can mix. You can turn each one up or down with the little knob on its left end. You can also change its tone with the lowpass filter big knob.
The idea of the two synth engines in one was to be able to double voices at Golden Ratio intervals. Sorry if this only makes sense in my head, but I thought that these dank Golden Ratio sounds should be harmonised using their own kind of interval rather than standard fifths or thirds, so by selecting the interval in one synth instance’s drop-down box you can set it apart from the other by one of those intervals. Selecting “First overtone” with “Golden Ratio Conjugate” set in the Exponent menu will, therefore, displace the 8 voices of that synth instance upwards by a perfect fifth + 42 cents.
Finally, to create some simple motion within the sound, I use two ADSR envelopes for each engine and linearly interpolate between them. The bottom one directly affects the bottom voice, the top one the top voice (always voice 8 BTW – I wanted it to detect how many voices are in use but had to abandon it – one of those workarounds I was talking about) – and voices in between are blended between these two, unless you click the “Link Envelopes” switch in which case only the bottom envelope is used.
And each engine has an LFO which affects the exponent, and therefore has a greater effect on the higher voices.
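The two-envelope blend is simple linear interpolation across the voice index. A sketch of the idea in Python (the synth is a Synthedit patch, not code, and these names are invented):

```python
def blend_envelopes(bottom, top, n_voices):
    """Per-voice envelope levels, interpolated between two envelopes.

    `bottom` and `top` are equal-length lists of levels (a stand-in for
    sampled ADSR output). Voice 0 follows `bottom`, the last voice
    follows `top`, and the voices in between are blended linearly.
    """
    out = []
    for v in range(n_voices):
        t = v / (n_voices - 1)  # 0.0 at the bottom voice, 1.0 at the top
        out.append([lo * (1 - t) + hi * t for lo, hi in zip(bottom, top)])
    return out

# Bottom envelope fades out while the top fades in; middle voices blend.
levels = blend_envelopes([1.0, 0.5, 0.0], [0.0, 0.5, 1.0], 8)
```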
… I can see why they say writing docs is hard! Hope you could withstand that raw brain dump.
As I was saying, this synth is rough, but hey I’ve seen rougher on KVR Audio so it’s a release.
I’ve been listening to SNES-era game soundtracks so I’m tempted to try make some dreamy, pretty melodies using Golden. I think it might also be good for some woozy house or hip hop.
If I was to develop the synth, the first thing to change would be the two envelopes idea – really it should be some more sophisticated morphing. I saw an additive synth where each voice had its own envelope but that’s too much clicking. Some intelligent system – interpolating but using a selection of curves rather than linear, or maybe something like setting percentages of each voice over time while overall amplitude is determined by a single envelope – would be nice.
It also badly needs some convenience stuff: overall volume and pitch, an octave select, polyphony.
I’m leaving Golden as a nice weekend project. I’ll come back when I have some chops in C++, I would think.
Well, thanks for reading if you made it this far. You get a “True Synth Nerd” badge! If you want to talk about the Golden Ratio or synths, get in touch 🙂 And don’t hesitate to try out the instrument.
I’ve come back to Python a few times in the past year. Before September 2019 I’d never touched it (or any modern programming language, really). Right now I’m using it to build a 3D game level generator, and very much enjoying myself!
I feel I’m close enough to the start of the Python learning curve to give a personal perspective on why this language suits dabblers and students.
This is not a prospectus of Python’s features or design decisions – I don’t know enough to talk about those. Just a happy acknowledgement of what’s been making my life easier in the past couple of days.
I’ll compare it to Java, the language I know best, which is also probably the closest to a standard for Year 1 of comp sci courses.
The look Instead of the curly brackets of C, C++, Java and other much-used languages, Python uses the indentation at the start of a line to determine which “block” that line belongs to (plus a colon at the end of any line that declares a new block). There’s something reassuringly basic and solid about that, to a newbie. You don’t need to learn the little skill of counting brackets (nor do you need to use semicolons) while discounting whitespace. Instead, how it looks in your editor tells you what’s going on directly.
Simple string manipulation Python’s elementary text manipulations are simpler than Java’s. When you’re starting out, I think that helps keep your mind on the actual task. For example, getting input from the console in Java generally requires creating a “Scanner” – a powerful bit of software for carving up and selectively serving text from an incoming stream. Those capabilities are irrelevant for basic text input. So students end up depending on something whose quirks and abilities they don’t understand – including some unintuitive behaviour that I’ve seen even experts admit is confounding. Also on this point, Python requires you to convert numbers to strings explicitly when joining them into text. Which can be annoying. However, this arguably makes the underlying reality clearer than performing the same conversions invisibly as Java does.
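A tiny example of where the explicit conversion bites (concatenation needs `str()`, while `print()` given separate arguments converts for you):

```python
count = 3
# Concatenation needs an explicit conversion; "files: " + count is an error.
label = "files: " + str(count)
# ...whereas print() converts each argument for you.
print("files:", count)
```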
It doesn’t force Object Oriented Programming We’re getting close to the topic of many boring flame wars on which paradigm is better, but let’s keep this to an inoffensive point: Python affords procedural, functional or OOP styles of programming. So if you have experience of any of these, you can start from there. And if you don’t, you’ll be saved the laborious and ritualistic aspects of standard Java style – private member variables, getters and setters, etc. – until you’re ready.
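As a rough illustration (the function and figures are my own, purely for the sketch), a small task in plain procedural style – no class, no private fields, no getters or setters, just a function operating on plain data:

```python
# Sum prices in cents and add tax, using integers to avoid
# floating-point surprises. No class ceremony required.
def total_cents(prices_cents, tax_percent):
    subtotal = sum(prices_cents)
    return subtotal + subtotal * tax_percent // 100

print(total_cents([1000, 2000], 10))  # -> 3300
```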
Those data structures I first read about Python in Jay Barnson’s delightful old piece “How To Build A Game In A Week From Scratch With No Budget”, which I have returned to many times since. In it, he eulogises Python’s list and dictionary data structures. And he’s damn right, they’re great. Instead of wrapping data in near-pointless objects – a practice so widespread in the Java world it has its own name, “anemic objects” – you’ll find yourself creating, stacking and passing around these built-in structures, as I’ve done pretty much throughout my level generator app.
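Here’s the flavour of it (the room names and coordinates are invented for this sketch, not lifted from my actual generator): plain dicts and lists, no wrapper classes anywhere.

```python
# A "level" as a dict of named rooms, each room a list of (x, y)
# tile coordinates -- built-in structures instead of wrapper objects.
level = {
    "entrance": [(0, 0), (0, 1)],
    "hall": [(1, 0), (1, 1), (2, 1)],
}
level["vault"] = [(3, 3)]  # adding a room is one assignment

tile_count = sum(len(tiles) for tiles in level.values())
print(tile_count)  # -> 6
```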
It’s mellowed by age This one cuts both ways. You can find articles on Medium arguing that Python is past its peak, and getting less fashionable or even relevant by the year. Yet the advantages include having plenty of 3rd-party libraries (though TBH I haven’t dipped into these yet) and also a smoothing away of rough edges. For example, in Python 3.x (available since 2008) every class is automatically a “new-style class”, which allows for conveniences such as property annotations and getting info on a class or method at runtime… I don’t fully understand that stuff yet, but the point is, whereas a few versions ago I would’ve had to use a special syntax (inheriting explicitly from object) to get a new-style class, now that happens automatically. One less thing to think about.
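A toy example of one of those conveniences (the class is my own invention): the @property decorator lets a computed value read like a plain attribute. In old Python 2 code, this would have required writing `class Circle(object):`; in Python 3 the plain form works.

```python
# @property turns a method into a read-only computed attribute.
class Circle:
    def __init__(self, radius):
        self.radius = radius

    @property
    def diameter(self):
        return 2 * self.radius

c = Circle(5)
print(c.diameter)  # -> 10, no parentheses needed
```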
You don’t have to split files Another really small thing that nonetheless will be felt by newbies: Python scripts can have multiple classes, or none, in one file (called a “module”). Whereas even a minor task in Java might push you to make a few classes, and so have to save a few different files.
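For instance (everything below is an invented sketch), a single module can happily hold two small classes plus a loose helper function – in Java that would likely mean two or three separate files:

```python
# shapes.py -- one module, two classes, one top-level function.
class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

class Rect:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def area(self):
        return self.w * self.h

def total_area(shapes):
    return sum(s.area() for s in shapes)

print(total_area([Square(2), Rect(3, 4)]))  # -> 16
```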
The end result of all these things together is: even if you’re not very good at it yet, solving problems in Python feels fast. I don’t myself have the depth of understanding to explain this one. But ever since I first tried a maths problem or scripting text templates with it, I was pleased by that feeling of getting things done.
BTW, I wouldn’t call Python a “beginner language” in the same sense I applied to Processing a few posts back. Python doesn’t have every interaction carefully shepherded so as to hide complexity. Nor does it provide a simplifying framework for graphics or what-not. It’s a full-featured language (although with a favoured domain of data analytics, science, scripting, and stuff like that, for sure).
Last thing. For some reason, I pronounce Python as PIE-THON, rather than PIE-thn. Does anyone else out there do this? Let me know I’m not alone.
Apologies for the clickbait format, which is hardly in keeping with the concepts I’ve been absorbing from Ville-Matias “Viznut” Heikkilä’s remarkable recent article. Think of it not as a push for attention on an ephemeral feed, but respectfully memorialising another’s inspiring vision here on my own site.
Today I will summarise some of that piece’s most remarkable insights for you. I’ll react to quotes, picked for their awesomeness, in turn.
(My WordPress stats suggest that most visitors are here for the jazz content. If that’s you, you are most welcome to stick to that stuff. But consider reading on to ponder alternative visions of the internet and entertainment technology that makes this very blog possible.)
BTW, Viznut is not writing in his first language, and uses “would” where “should” might be more idiomatic, when discussing idealistic futures.
1. Computers have been failing their utopian expectations. Instead of amplifying the users’ intelligence, they rather amplify their stupidity. Instead of making it possible to scale down the resource requirements, they have instead become a major part of the problem. Instead of making the world more comprehensible, they rather add to its incomprehensibility.
Pessimistic, yet I agree. “Amplifying stupidity” is quite precisely what Twitter does, intentionally spreading wildfires of outrage through our nerves and networks. ICT is projected to take up between 8% and 20% of all energy worldwide by 2030. And incomprehensibility… Jesus. I feel so strongly about how non-technical folk (my parents, for a start) are made fearful and humiliated by corporate tech like antivirus software, operating systems, bank and telecoms billing, touchscreen interfaces, and so on. Yet technologists (I’m one myself) always blindly return to their comfort zone: abstractions, services, always-on internet, new languages and upgrades and frameworks. “Increased controllability and resource use.” And increased incomprehensibility, infantilisation and frustration for everyone else.
Am I being hypocritical? Totally. I depend on myriad frameworks and the seemingly-invisible, actually aggressively-corporate-sponsored development work that keeps big platforms, and our whole civilisation, going. The point is not to deny that but rather observe it and judge it from a dispassionate viewpoint, asking what do we really need, in the long term?
2. Permaculture trusts in human ingenuity in finding clever hacks for turning problems into solutions, competition into co-operation, waste into resources. Very much the same kind of creative thinking I appreciate in computer hacking.
So, Viznut turns to permaculture, a gardening philosophy. Actually, my long list of article ideas for this site includes one about how my grandfather manages his large garden, despite being in his mid-80s. The point was that due to an inherent rightness in his methods and tools, and a humble reliance on nature to do the work, his garden is still productive and pleasant no matter how physically weak he gets. His work is opportunistic and adaptive. Son-in-law visiting? Make him sharpen my tools. Grandson loafing about the house? Get him to plant lettuces, or pull down vines. Can’t walk much? Put a trailer on the lawnmower. Even when sinking into decay, everything still works, just at a lower level. His old greenhouse, lean-tos and cages are merely waiting for when he has the energy to put one or the other to use.
What has that to do with staring at a screen and tapping away at a keyboard?
3. Any community that uses a technology should develop a deep relationship to it. Instead of being framed for specific applications, the technology would be allowed to freely connect and grow roots to all kinds of areas of human and non-human life.
Could technology – or one or a few specific, locally chosen technologies – fit into our lives like a well-stewarded garden? Like leaving a garden to grow in rain and sun, we would let it do what it’s good at. When resources were at hand we would apply them, if not we could wait. We could deploy it in new ways all the time, like using a garden for meals, sunbathing, athletics, meditation, crafting, cooking, drawing, retreat, nature watching and so on. Even with minimal maintenance it would function, while occasional bouts of serious group work would provide exercise, catharsis and new directions.
Dream on, Kevin.
But I’m basing these ideas off a real scene, as Viznut does with the demoscene. Since the age of about 12 or 13 I’ve been interested in Quake modding, a scene in which enthusiasts create new levels, monster types, versions and toolchains for the first-person shooter game Quake (1996, id Software). There’s something more than a little amazing about how this online community has grown while nurturing a set of powerful, well-maintained software tools, and releasing hundreds of fun things to play. Which also provides a strong, common base for engineering experiments. All with no money changing hands!! Just people doing things out of pleasure and dedication, making the world better.
The DOOM community, based around a similar but earlier and simpler game, is if anything even more broadly creative and supportive.
I won’t go on – I think you get how I feel about this.
4. At times of low energy, both hardware and software would prefer to scale down…. At these times, people would prefer to do something else than interact with computers.
This is where the radicalism comes in. Viznut doesn’t believe our current civilisation can continue. His is a worldview directly in opposition to values we absorb in school, college courses, news, and so on. (In my one-year computer science course, for example, it was absolutely unquestioned that ever-increasing virtualisation and cloud storage, or working in a monopolistic platform giant, were desirable things.) None of my close friends, who work in engineering or finance, would find it digestible. I haven’t read up myself on degrowth ideologies, although I did learn a lot from the fearsomely knowledgeable Dutchman Kris De Decker, who runs Low Tech Magazine. But the highly unpalatable idea is that we’ll all have to stop depending on things we’re used to: unlimited flashy content, new phones and personal gadgets, and quite a lot more; because they take too much energy, which ruins the planet.
5. People would be aware of where their data is physically located and prefer to have local copies of anything they consider important.
There are countless ways, most of them still undiscovered, to make low and moderate data complexities look good…. For extreme realism, perfection, detail and sharpness, people would prefer to look at nature.
My quick take on this is I don’t know. I don’t know if Viznut is right. However, my intuition says yes, it is healthier to check out some bark patterns, dewdrops and butterflies in your local park, than clicking through 1080p videos on YT. And that yes, something doesn’t add up when Google offers to host gigs and gigs of my data forever on a server for free, even though it would be a notable responsibility and an effort if I resolved to keep it safe on a disc at home.
(Y’know, on that seemingly facetious point about going outside: I think that could be the unexpected philosophical realisation from our constant exposure to high-quality computer graphics – yes, we human beings like looking at realistic, crisp, crunchy visuals… and they’re all around us, all day long, lit by the sun for our convenience.)
More broadly: maybe the saturation of network bandwidth and processor power that now surrounds us is neither necessary nor desirable? Maybe this thing that we’ve had for the last ten years and not in the preceding ten millennia isn’t yet being used right. Maybe we don’t benefit enough from guaranteed industrial-strength computing and data streaming at our fingertips day and night, to justify the environmental cost.
6. Integrated circuit fabrication requires large amounts of energy, highly refined machinery and poisonous substances. Because of this sacrifice, the resulting microchips should be treasured like gems or rare exotic spices.
A great way of putting it! The demoscene that Viznut came from is all about getting the utmost from old technology and systems instead of relying on Moore’s Law. So he has come up with a sound justification for this aesthetic interest, which can often otherwise relapse into mere nostalgia. He’s careful not to tie himself to “junk fetishism” as an end in itself.
7. The space of technological possibilities is not a road or even a tree: new inventions do not require “going forward” or “branching on the top” but can often be made from even quite “primitive” elements.
And here’s a justification for playing with old tech, from the point of view of innovation. It does make sense. Again, what I like about Viznut’s writing is the confident, autodidactic, outsider’s perspective. From there I can look at computing, whether enterprise systems or game modding or web content management, quite afresh.
8. Computer systems should make their own inner workings as observable as possible.
Another lofty ideal. I am strongly, instinctively behind this one. In all the software I’ve coded, I came back to real-time feedback as a tool again and again. Observing changes in a feedback loop suits my short attention span. In my computer science course, I most enjoyed the sensation of tunneling into the depths of a system and making them comprehensible and useful. Even a routine backend database like I made for my e-commerce project gives me this pleasurable feeling.
9. Any community that uses computers would have the ability to create its own software.
I interpret this not as a call for us all to be hackers, or for teaching “kids to code”. Rather, I think it’s a call for a smooth continuum of complexity to be available, from newbie use to full control of building the software. For example, I would say Excel formulas, Access pivot tables, and any kind of macros are an absolutely legit place to start programming. Same with game modding, or shell scripting, LaTeX, whatever. (This philosophy developed from ideas in the lovely, now-defunct blog by James Hague.)
The tricky part is for each level of complexity to bleed naturally into the next, tempting the learner to try new things.
This is where gated platforms, whether that’s FB posts or software on the cloud, can be the enemy of creativity. I’ve discussed that issue before.
10. The ideal wieldiness [of a program] may be compared to that of a musical instrument. The user would develop a muscle-memory-level grasp of the program features, which would make the program work like an extension of the user’s body (regardless of the type of input hardware).
Not much to say to that, except that most of the programs we use day to day haven’t reached that standard.
11. Artificial intellects should not be thought about as competing against humans in human-like terms. Their greatest value is that they are different from human minds and thus able to expand the intellectual diversity of the world.
Viznut’s interest in AI was perhaps the most disconcerting part of his article and the one that changed my outlook the most.
For the last few years I’ve viewed AI as a tech buzzword whose visible manifestations (neural upscaling, Google DeepMind, GPT-3) are distinguished by aesthetic hideousness. And as you might gather, fear underlies that dismissal. I found the thought of AI disturbing.
Viznut gave me a different view. While emphasising the computational expense of training machine-learning systems, he mostly views AI as a welcome new type of entity for us to exist with. Criticising it for being inhuman isn’t saying anything. Rather it can be judged by how well it helps us humans to survive. Pragmatic, yet (in a nice change from how we started this piece) optimistic stuff!