Revisiting A Classic

I finished another programming project and I think it’s my strongest yet, thanks to me finally getting serious about testing! I called my app HuffmanRevisited because it implements the classic Huffman coding algorithm from 1951, and also because I had previously tried to program this algorithm a few months ago.

This time round, I coded it in Java, not JavaScript. It took about a solid week of work. And unlike my earlier attempt, I made a finished app. It can:

  • load a text file, encode (compress) it and save the encoded version, or else
  • load a previously compressed file, decode it and save the original content back as a text file

(You can skim my code here if you like.)

A few aspects of my approach helped make coding this an enjoyable task: I tried to specify it rigorously, I wrote a lot of tests, and I tackled a clear, well-known problem.

Before Christmas, I read a book – well, no, I skimmed a book after reading the prologue (which took multiple attempts) – called How To Design Programs. It’s an MIT Press textbook; you can read it here. I recommend it.

My paraphrase of the book’s prologue is: “precisely specifying what a program does is the hardest part of making it.”

Of course, I had encountered this sentiment in my Object Oriented Software Engineering module at the National College of Ireland. But the MIT textbook avoids the familiar paraphernalia of design patterns, verbose conventions and profuse diagrams. Instead, the book’s prologue challenges you to specify a problem by answering two questions: how is the data represented (in terms of data types that the machine can understand, and at all steps of the program)? And what are the functions (in terms of operations on those types)?
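For HuffmanRevisited, my answers to those two questions might be sketched like this (a hypothetical outline, not my actual class layout):

```java
import java.util.Map;

// Data representation: a frequency table, plus a binary tree
// with characters at the leaves (the Huffman tree).
class Node {
    char symbol;      // meaningful only at leaf nodes
    int frequency;
    Node left, right; // both null at a leaf
}

// Functions: operations that carry one representation to the next.
interface HuffmanCodec {
    Map<Character, Integer> countFrequencies(String text);
    Node buildTree(Map<Character, Integer> frequencies);
    Map<Character, String> deriveCodes(Node root); // e.g. 'e' -> "101"
    byte[] encode(String text, Map<Character, String> codes);
    String decode(byte[] encoded, Node root);
}
```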

I mean, when I write it out, it’s dull and obvious. Yet, time and again I’ve found myself skimping on this specification step. Because, yes, it is the hardest part and one’s brain tries to escape it.

Even after my epiphany from reading the MIT book, I still evaded it. I specified most but not all of my Huffman encoding app before I started, thinking that the remainder was “simple”. But the simple part is never simple and if you haven’t thought it through, the solution will be incoherent even if it seems to work.

I failed to specify all of HuffmanRevisited, but at least I knew that this failure was to blame when I witnessed complexity mushrooming in front of my eyes as I tried to solve new, small problems that kept cropping up.

BTW, I’ll mention a couple of those little problems to see if you spot a pattern that kind of pleased me (there’s a code sketch after the list):

  • accessing individual bits from an array of bytes
  • packing an unpredictable number of bits into an array of bytes
  • turning Java’s int and char types into a byte representation (not just casting, which truncates them)
  • saving a compact representation of a binary tree containing characters in its leaf nodes (the ‘Huffman tree’)
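To make the first and third of those concrete, here’s a hedged sketch in plain Java (my own method names, not necessarily what’s in the app):

```java
// Hypothetical helpers – one way to do it with only standard Java.
class BitUtils {
    // Read bit i (most significant bit first) from an array of bytes.
    static boolean bitAt(byte[] data, int i) {
        int b = data[i / 8] & 0xFF;             // mask off Java's sign extension
        return ((b >> (7 - (i % 8))) & 1) == 1;
    }

    // Turn an int into four bytes without truncation (big-endian).
    static byte[] intToBytes(int value) {
        return new byte[] {
            (byte) (value >>> 24),
            (byte) (value >>> 16),
            (byte) (value >>> 8),
            (byte) value
        };
    }
}
```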

Yeah… the pattern I spotted is that I’m doing low-level stuff and pushing somewhat against the limitations of Java. This is nice because I’ve been looking for an excuse to study a more low-level language!

The other thing that went really well with this project was testing. I’d written tests in JUnit before, but to be honest I was doing it to fulfil obligations in school assignments. Just like with specifying rigorously, I knew that tests are a great idea but was lazy about writing them.

I totally changed my tune once I had the framework up and running. (I used JUnit 5, Maven and NetBeans 11, and I mention this combination because I had no joy with JUnit 4 or with Ant.) I realised I’ve always done a lot of testing, but amateurishly: printing variables to the console all the time. That works okay… until your program starts to have a decent number of methods on the call stack (i.e., functions that call functions that call functions that call functions…) and you spend your time trying to remember which method your gnomic text originated in. Plus, all those print statements mess up your code.
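For anyone who hasn’t tried a framework: a JUnit 5 test is just an annotated method in a separate file. A minimal sketch, reusing the hypothetical BitUtils helper from above:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class BitUtilsTest {
    @Test
    void intToBytesIsBigEndian() {
        byte[] bytes = BitUtils.intToBytes(0x12345678); // hypothetical helper
        assertEquals((byte) 0x12, bytes[0]);            // highest byte first
        assertEquals((byte) 0x78, bytes[3]);            // lowest byte last
    }
}
```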

Not to sound too much like a new convert, but using a test framework was a delight after all that. It’s much more organised. (And with my let’s say “wide-ranging” mind, I need all the organisation I can get!) It’s just like what I was doing before, except you only see debug text if there’s a problem, you get to choose any range of values you like to test, you can test as small or as large a section of the program as you like (as long as you’ve made the program itself reasonably articulated), and all of this business lives in separate files. Oh, and you get a cheerful screen of green when you pass your tests!

It’s enough to warm anyone’s heart

Okay, so specification and testing are non-negotiable aspects of real-world software development, but the last aspect I want to discuss can be more of a luxury: a clearly defined problem.

Until I get a start in a programming job, I can’t be sure, but my impression is that even communicating what a problem is, never mind completing a rigorous specification, can be hard in a typical business context.

However, I did this project for self-study so I got to choose exactly what to work on.

(I was helped in this by a comp sci book called The Turing Omnibus that a mentor very kindly lent me! It has a chapter on Huffman coding. The hook of the book, I would say, is that it conversationally introduces topics but doesn’t take you through every nuance. For example, unlike the Wikipedia article on Huffman coding, it mentions neither the need to pad bytes with zeros, nor any scheme for storing a binary tree.)

I was so glad I chose such an old chestnut of an algorithm to implement! When I was refactoring my way out of that mushrooming complexity I mentioned earlier, the clarity of my app’s intention was a godsend.

Even better was the lack of edge cases. I could be certain my program had worked when it took a text file, compressed it into a smaller file, and then decompressed that encoded version into the exact same data I started with!
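That round trip is exactly the kind of property a test can pin down. A sketch, assuming a made-up Huffman API rather than my actual class names:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class RoundTripTest {
    @Test
    void decodingAnEncodedTextRestoresTheOriginal() {
        String original = "abracadabra";  // any text with repeated characters
        Huffman huffman = new Huffman();  // hypothetical API
        byte[] compressed = huffman.encode(original);
        assertEquals(original, huffman.decode(compressed));
    }
}
```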

That’s intrinsically neater than some other areas I’ve boldly attempted, for example digital audio or vector graphics, where you need good control of sampling and rounding.

When I do go back to such complex topics, I’ll have a crucial bit of extra experience with the exact weapons needed to contain the ambiguity: testing and full specification.

So, I’ll sign off there. My app could easily absorb some more effort. The next thing to work on would be efficiency: do some profiling, and just comb through it for wastage. I can also think of some cool ways to present it, but there’s no point hyping work I may not get around to.

Anyway, I’m pleased with it already. Jaysusin’ thing works, and the code structure is fairly sensible.

Thanks for reading. Take care in these hard times!

The header image “Massachusetts Institute of Technology Parking Lot, Memorial Drive, Viewed from Graduate Housing” by MIT-Libraries is licensed under CC BY-NC 2.0. I chose it because David A. Huffman might have seen such a view as he wrote the term paper in which he invented his compression technique.

Golden Ratio Synthesis

I made a VST software instrument that uses the Golden Ratio to generate frequencies off a given note. You can download it here if you want to lash it into your music program and try it out. It’s unfinished though – more details below.

This didn’t come from any particular intention – it was a discovery I made while messing about in Synthedit many months ago. I was trying to make a normal additive synth where you mix the relative levels of each harmonic within a single note. But I found that if I took the frequency multipliers for the harmonics (which are normally just 1, 2, 3, 4 – so the second harmonic is twice the frequency of the first/root, the third is three times its frequency, fourth four times and so on) and raised them to a particular power around 0.6, a cool sound came out.

“Around 0.6” turned out to be 0.618034 – the “Conjugate Golden Ratio” (or one over the Golden Ratio).
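In code terms, the whole trick is a single call to Math.pow. A minimal sketch (in Java simply because that’s what I’ve been writing lately – Synthedit itself is drag-and-drop):

```java
// Voice k's frequency multiplier is k raised to the chosen exponent.
public class GoldenVoices {
    static final double CONJUGATE = 0.618034; // 1 / Golden Ratio

    public static void main(String[] args) {
        double root = 220.0; // root note in Hz (A3)
        for (int k = 1; k <= 8; k++) {
            double harmonic = root * k;                    // normal overtone series
            double golden = root * Math.pow(k, CONJUGATE); // the jazzy version
            System.out.printf("voice %d: %.1f Hz vs %.1f Hz%n", k, harmonic, golden);
        }
    }
}
```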

Now it’s not possible to discover some “alternate harmonic series” because harmonics are a physical phenomenon: if you have a vibrating object with a fundamental frequency, whole-number multiples of that frequency can likely also form waves in it. So, each half of a guitar string vibrates one octave higher than the open string, and each third vibrates one fifth higher than that, and so on. Our sense of hearing subconsciously interprets the presence and tuning of harmonics as derived from physical properties: material and size and density. No other frequency series could have this same effect.

Nonetheless, the Golden Ratio seems more musical and harmonious than any other I could get by that exponentiating technique – actually it sounds like a jazzy chord. And it has what Gerhard Kubik calls “timbre-harmonic” aspects – like a Thelonious Monk chord, the harmony bleeds into the perceived timbre. My synth (on default settings) has a silky, bonky, dense timbre. (That territory between noise and harmony is where I like to be, musically. Check out the sounds I used for my drum programming experiment, for example.)

I could hear that it wasn’t in tune to equal tempered notes (nor to the non-equal tempered ratios found in the natural overtone series). But it was tuneful enough to sound concordant rather than discordant. If you download the synth and try the other ratios in the drop down menu you’ll hear the difference, I hope.

Here are the ratios: Golden Ratio conjugate on top, then normal harmonics, then the non-inverse Golden Ratio. You can see that the Golden Ratio conjugate results in a somewhat out of tune minor 11th chord – definitely jazzy! (The normal overtone series results in a dominant chord.)

Here are the ratios for exponents of: 1/Golden Ratio, 1, and the Golden Ratio

I whipped up some little riffs so you can hear the synth. It’s very digital-sounding, like additive synths generally are, and also reminiscent of the stacked, sometimes exotic overtones of FM synthesis at its icier end.

Note I didn’t sequence any chords in these – the “harmony” is from the voices of the synth. And there are no effects added.

I’ll evaluate the musical aspect at the end of this post. For now I want to discuss the synth-making software I used: Synthedit.

When I first started messing with production as a teen, the free synths I downloaded were mostly built in Synthedit. I soon got to know its characteristic signs – exuberant amateur graphics, slightly misplaced buttons and sliders due to the software’s drag-and-drop interface, and I guess a lack of quality. I remember one bass synth that was pitched way off A=440 – rank sloppiness. I used it anyway. The Flea, it was called.

Most freeware Synthedit VSTs were like that: knock-off bass synths or delay effects, easy and obvious stuff, frequently derided by snobs on forums.

Synthedit enabled a flood of low-quality, imitative software synths by lowering the barrier to entry. Instead of coding C++, you could (and can today) just drag and drop components, add in other people’s custom components, and instantly see/hear the result in your DAW interfacing with your MIDI gear and other FX.

I was blown away when I first did this a couple of days ago. I clicked export, set some easy options, and then couldn’t find the exported file. Irritated, I went back to REAPER, my production software – and there was my synth just sitting there! And it worked! And nothing crashed!

Having studied programming for the last year, I know how hard it is to make software like that. The default mode of enthusiast-made nerdy software is to fail aggressively until you figure out some subtle, annoying configuration stuff.

So, today’s post is a celebration of a great tool, very much like the one I did about Processing. Once again, I want to emphasise how great it is that people make such programming tools for beginners, where the hard and horrid configuration stuff is done for you.

This is priceless. It can change culture, like Synthedit changed bedroom production culture and marked my adolescence.

Amazingly, the program is developed by a single man called Jeff McClintock. He is active on the forum and from reading a few of his posts I get an impression of someone who takes any user’s difficulty as a sign to improve the program. I really admire that. And it shows in the robustness of the app (even the old free version I’m using).

To make a synth, you drag connections between “modules” that provide a tiny bit of functionality or logic. It’s like wiring up a modular synth. The downside is that, if you already know how to code, it’s a drag having to do repetitive fixes or changes that in a programming language could be handled with a single line. Also, when a module you want isn’t available, you are forced to make silly workarounds, download third party stuff or change your idea. In Java or Python you could just do it yourself.

All told, I enjoyed the experience of making Golden (so I have baptised my synth). The best part is having impressively reliable access to powerful, mainstream standards: MIDI and VST. That made it a lot more fun than my previous synth which took in melodies as comma separated values and outputted raw audio data. It was brilliant to have all the capabilities of my DAW – clock/tempo, MIDI sequencing, parameter automation – talking to my little baby.

The drag-and-drop interface builder is also great. Once again, amazingly, McClintock hides all the donkey work of making interfaces: the boilerplate code for updates and events. You just put the slider where you want it, and it works. The downside is being locked into standard interface elements unless you want to go much more advanced. So, I wanted to have one envelope take the values from another at the flick of a switch, but I couldn’t. (I’m sure it can be done, but I couldn’t find it easily online. In general, the documentation for Synthedit is weak, and online tutorials scanty. I think that’s due to the narrow niche served – people nerdy enough to make synths, but not nerdy enough to code.)

Although I had a great time with Synthedit, I’d like to keep learning and do this work in a procedural or OOP language next time.

Let’s finish. Do I think this Golden Ratio thing has musical value? Yes, and I would like to use it soon in a hip hop beat or tracker music production. (It could also serve as root material for spectral composition, I strongly suspect.) Is my synth very good as is? No, the envelopes don’t work nicely for immediately consecutive notes (I should make it polyphonic to fix that) and I’m not happy with the use of….

Actually, I should quickly explain the synth’s features.

My beautiful interface, in resplendent “Default Blue”. I’m not even sure it’s possible to change skins without paying for the full version of Synthedit. Which is entirely fair – I got a lot out of this free version.

At the top are overall options: the choice of exponent, then various tuning knobs. “Exponent fine tuning” lets you alter the exponent, “Voice shift” is an interval cumulatively added to each voice, “Keyscaled flattening” is a hack-y tuning knob that applies more to higher notes. Use these to massage the microtonality into sitting better with other harmony/instruments.

Then there are two instances of the basic synth, as you can see, each with 8 voices you can mix. You can turn each one up or down with the little knob on its left end. You can also change its tone with the big lowpass filter knob.

The idea of the two synth engines in one was to be able to double voices at Golden Ratio intervals. Sorry if this only makes sense in my head, but I thought that these dank Golden Ratio sounds should be harmonised using their own kind of interval rather than standard fifths or thirds. So, by selecting the interval in one synth instance’s drop-down box, you can set it apart from the other by one of those intervals. Selecting “First overtone” with “Golden Ratio Conjugate” set in the Exponent menu will, therefore, displace the 8 voices of that synth instance upwards by a perfect fifth + 42 cents.
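(Checking that arithmetic: the first overtone’s multiplier becomes 2^0.618034 ≈ 1.535, which is 1200 × 0.618034 ≈ 742 cents, i.e. an equal-tempered fifth of 700 cents plus those 42 cents.)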

Finally, to create some simple motion within the sound, I use two ADSR envelopes for each engine and linearly interpolate between them. The bottom one directly affects the bottom voice, and the top one the top voice (always voice 8 BTW – I wanted it to detect how many voices are in use but had to abandon that – one of those workarounds I was talking about). Voices in between are blended between these two, unless you click the “Link Envelopes” switch, in which case only the bottom envelope is used.
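The blend itself is plain linear interpolation; roughly this, if it were written out (a sketch of the idea, not Synthedit’s internals):

```java
// Blend two ADSR outputs across the voices by linear interpolation.
class EnvelopeBlend {
    // envBottom and envTop are the current output levels (0..1) of the two envelopes.
    static double voiceLevel(int voice, int voiceCount, double envBottom, double envTop) {
        double t = (voice - 1) / (double) (voiceCount - 1); // 0 at voice 1, 1 at voice 8
        return (1 - t) * envBottom + t * envTop;            // weighted mix of the two
    }
}
```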

And each engine has an LFO which affects the exponent, and therefore has a greater effect on the higher voices.

… I can see why they say writing docs is hard! Hope you could withstand that raw brain dump.

As I was saying, this synth is rough, but hey I’ve seen rougher on KVR Audio so it’s a release.

I’ve been listening to SNES-era game soundtracks so I’m tempted to try to make some dreamy, pretty melodies using Golden. I think it might also be good for some woozy house or hip hop.

If I was to develop the synth, the first thing to change would be the two envelopes idea – really it should be some more sophisticated morphing. I saw an additive synth where each voice had its own envelope but that’s too much clicking. Some intelligent system – interpolating but using a selection of curves rather than linear, or maybe something like setting percentages of each voice over time while overall amplitude is determined by a single envelope – would be nice.

It also badly needs some convenience stuff: overall volume and pitch, an octave select, polyphony.

I’m leaving Golden as a nice weekend project. I’ll come back when I have some chops in C++, I would think.

Well, thanks for reading if you made it this far. You get a “True Synth Nerd” badge! If you want to talk about the Golden Ratio or synths, get in touch 🙂 And don’t hesitate to try out the instrument.

Harping On

I made an online toy in JavaScript, called TextHarp. Try it out (it needs a computer rather than a phone because it uses mouse movements).

The idea popped into my head a few weeks ago. It won’t leave the prototype stage because this combination of technologies – pure HTML, CSS and JS (although I do use one library to synthesise sounds) – doesn’t robustly support what I wanted.

I aimed to turn a piece of text into an instrument, where moving the cursor over any letter which corresponds to a musical note – so, ‘c’, ‘d’, ‘e’, ‘f’, ‘g’, ‘a’, ‘b’ – would pluck the letter like the string of a harp, and play that note as a sound!

At the time, I was thinking of a few possibilities:

  • adding audio feedback for testing web pages, so that a developer/designer could hear if an element was malformed or missing information (aspects which are often invisible to the eye)
  • sonification, a concept which I think is rapidly going out of date as people realise its enormous limitations, but which was about turning reams of data into continuous sound patterns that would somehow reveal something within the data, but which I think were usually just third-rate electronic music or else showed no more than a good graph could, and basically made clear that the humanities PhD system sucks in people who’d be better off elsewhere… sorry I seem to have gotten into a rant here
  • simple enrichment and adding magic to the experience of visiting a webpage

That last is out of favour for web design nowadays. Instead, minimalism, accessibility and function are the buzz words. Fair enough… but also ominously envisaging the web as merely where stressed and harried folk get updates from a corporate or government source, staring down at their little phone screen.

Well. My little toy isn’t going to do anything to overturn that paradigm. Still, let’s take a short tour of the challenges in making it work.

I used basic JavaScript mouse events to change elements in the webpage, modifying what’s called the Document Object Model, which is nothing more than how your browser perceives a page: as a hierarchy of bits containing other bits, any of which can be scripted to do stuff.

My script caused each paragraph to detect when the mouse was over it. Then it cut the paragraph into one-character chunks and placed each of these single letters into a <span></span> HTML tag, so that it became its own card-carrying member of the Document Object Model.

Not very elegant at all! Also, despite span tags being supposedly invisible, throwing in so many of them causes the paragraphs to twitch a little, expanding by a couple of pixels, which wouldn’t be good enough for a production page.

However, it works. I set each of the single-letter chunks to play a synthesized tone when the mouse goes over them, and that’s it. Also, when the mouse leaves that paragraph’s zone, I stitch the letters back together, the way it was.

The downsides are that any HTML tags used to format or structure the text tend to get damaged by the process, usually resulting in piles of gibberish or text disappearing cumulatively. It would be possible to improve that, but only with a lot of manual work. And the browser’s attempts to be clever by healing broken tags actually cause a lot of difficulties here.

Defining some new kind of object that held the text and knew the location of each letter would be a better bet.

However, I’m turned off this avenue of enquiry for the moment, because dealing with audio in browsers is a pain. Not for the first time, musical and sensual uses of technology have been left in the gutter while visuals get all the investment.

There are two big problems with web audio. First, JavaScript doesn’t offer precise timing. I see precision, whether in first person computer games, input devices, or in this case reactivity of audio, as inherently empowering – inviting us as humans to raise our game and get skilled at something. Sadly, much of our most vaunted current technology crushes this human drive to excel and be stylish, with delays and imprecision: touchscreens, cloud services, bloated webpages…

Where was I? Yes, the second problem is that Google Chrome made it standard that websites can’t play sound on loading, but only after the user interacts with them. Well-meaning, but really shit for expressivity – and quite annoying to work around. My skillz are the main limitation of course, but even trying out two libraries meant to fix the issue, I couldn’t make my audio predictably avoid error messages or start up smoothly.

No tech company would forbid web pages from showing images until the user okays it. But sound is the second class citizen.

When I know my JS better, I’ll hopefully find a solution. But the sloppy timing issue is discouraging. Some demos of the library I used show that you can do some decent stuff, although the one I experimented with took a good idea – depict rhythm as a cycle – and managed to fluff it with two related interface gripes. They made the ‘swing’ setting adjustable for each bar of a multi-bar pattern – pointless and unmusical. And they made the sequencer switch from bar to bar along with the sound being played – theoretically simple and intuitive, but – especially with the above-mentioned time imprecision of web interfaces – actually resulting in loads of hits being clicked into the wrong bar. (And if I say a drum machine is hard to use, it probably is – I’ve spent so much time fooling around with software drum machines I ought to put it at the top of my CV.)

But what am I saying! That demo’s way more polished than mine.

Perhaps even a little too polished! Visually anyway. All of the examples on that site are rather slick and clean-looking, perhaps because, I believe, the whole library has some affiliation with Google.

Ah, I’m being a prick now and a jealous one too, but one scroll down that demos page would make any human sick. The clean grids. The bright colours. The intuitive visualisations – yes, technology now means that you too can learn music, it’s just a bit of fun! Practice, time feel, listening, gear, human presence – nah!! And then the serious, authoritative projects – generated-looking greyscale, science-y formal patterns and data…. bleh.

My next JavaScript project is an exploration of a visual style which I explicitly intend to raise a middle finger to those kind of polished, bland graphics. I’ll be taking lessons from my 90s/00s gaming past to experiment with pixel art but without the hammed-up console-nostalgia cutesiness.

And I’ll be using standard web technologies – JS, SVG – to make anything I come up with 100% reusable by non-programmers.

Thanks for reading!

A Bass Practice Setup in Reaper

A satisfying practice session can involve many subtasks. I’ve been using the music production program Reaper to conveniently manage some of these. In this post I’ll go through my setup. It’s a work in progress. Eventually, I want to have a friendly and supportive digital environment for my creative mind, something to help sustain the musical work I’m doing and minimise clicking around on the computer.

My setup uses one free VST plugin, some drum samples I found for free, three plugins that came with Reaper, the webcam software that was bundled with my (Dell, Windows) laptop, and Reaper itself. An unlimited licence for Reaper costs €60 for personal or small-business use; that’s the only thing I paid for. Here’s what it looks like in action:

It took me a while to figure out the arrangement of screen space, so I’ll go through it bit by bit. The aim was to minimise mouse clicks and maximise time with my hands on my bass. This setup is what I leave running as I play.

  1. These are the basic track controls for the recording of my bass. Sometimes I use monitoring, i.e. listening to the bass sound as it comes out of Reaper through my speakers rather than my bass amp – but usually not. Using monitoring would allow the use of effects, but there’s still perceptible latency (in the low tens of milliseconds) which I don’t like. I record everything and throw it out after. I keep my amp plugged into my soundcard all the time. I suspect this habit of recording everything may have led to some recent slight corruption errors on my hard drive, because recording involves constant drive access and I left it running for a few hours at a time more than once, by accident. So I put a recording time limit of 45 minutes in my default project options.
  2. My teacher in Amsterdam years ago, David de Marez Oyens, recommended using the waveform of recorded bass as a visual aid to check one’s playing, but I only realised how powerful it is recently. Seeing the waveform instantly gives information on note length and attack, timing and perhaps most of all dynamics. The consistency of my playing has improved from routinely having the waveform on the screen.
  3. The webcam image of my lovely self provides a check on my posture and particularly hand position (especially fretting hand wrist angle and finger curvature). As I’ve had health issues in the past from bad technique, this is a bit of a godsend.
  4. Reaper has a handy tap tempo function so I can click here to change the project tempo (i.e. if I want a slightly different metronome tempo).
  5. Assuming I pressed record at the start, this shows how long my practice session has lasted.
  6. Transport controls to start and stop recording, say if I’m listening back to myself or whatever. Eh, my point is that I don’t allow any of the other windows to cover this up.
  7. This is a cool little thing I discovered recently. You can “expose parameters”, or as I like to say “expose the knobs”, which means putting a little dial in the track control which will control a parameter in one of the track’s FX plugins. In this case, this little dial controls what pattern my drum sequencer is on – here 0, which is an empty pattern and so plays nothing. But I can load up the sequencer with various patterns like a dance beat, claves, hip hop beat or whatever, and choose between them with this knob, without having to keep the sequencer window open.
  8. Track controls for the drums and metronome, if I need to adjust levels or whatever.
  9. I have lost probably about ten electronic tuners in my life. I just leave them behind routinely at gigs. So a digital solution is nice to have. Reaper’s standard “ReaTune” plugin works grand for bass once you turn up the window size to 100 milliseconds to allow for those big fat bass wavelengths (a low E at ~41 Hz takes about 24 ms per cycle, so the analysis window needs room for a few cycles).
  10. For drums and click I use the bundled plugin “JS: MIDI Sequencer Megababy”, which is a nice piece of software. It takes a bit of learning as it uses a lot of keyboard shortcuts and some of its design choices aren’t immediately evident, but it’s great and minimises the clicks needed to input a rhythm (because you don’t have to put in a new MIDI item). The controls could easily be used to manage polymetrically related click tempos (“okay, put the metronome once every two and a half bars of 4”).
  11. This purple horizontal bar is the click rhythm, in case I wanted to throw in a clave or something here. I could similarly display the current drum machine sequence, but it would take more screen space than this single bar, and also I don’t want my practising to be derailed by drum programming. For the same reason, I haven’t prioritised ease of adding or replacing drum samples – another rabbit hole.

To summarise, this setup lets me have the following functions available at all times as I play:

  • Tuner
  • Metronome
  • Drum machine with preset beats
  • Waveform visualisation
  • Video of myself

The plugins I use are:

  • JS: MIDI Sequencer Megababy (Cockos)
  • ReaTune (Cockos)
  • shortcircuit (Vember Audio) (a nice sampler)

Another function I haven’t tried yet would be putting in sound files to play along with (in full or looped). I used to use Audacity for this but it’d be easily done in Reaper.

The main downside from a user interface point of view is that each time after I change anything in Reaper or start recording, I have to click on the webcam software to open up that window again. Another thing is that changing tempo confuses things if done mid-recording and so necessitates a stop and a few clicks, although I could perhaps change some options to mitigate that.

Okay that’s it, I hope you enjoyed the tour. Feel free to comment about any software or configurations you use for practising!

The Joyless Medium

Today a non-music post following on from some other posts: Beats, Windows 98-Style, Are Videogames The New Jazz, and an upcoming piece about how listeners interact with groove music e.g. at house parties.

Basically, last night in bed I woke up and started imagining how those communal grooving/listening situations might happen online.

Take the typical social media comments section, and substitute the comments with layered music tracks in a loop… so whereas in Soundcloud you can put a text comment on a precise moment of a song, e.g. “sick bass drop yo”, what if you could drop in a clap or bell pattern, precisely in time, to someone else’s music… or maybe some VST- or SFXR-type customisable synth sounds.

Nice stuff to fantasise about. There seem to be a couple of projects hinting at this kind of functionality. But definitely nothing taking off.

That made me think about the expressive channels currently available on my main social network, Facebook. That’s when I made the connection to my 90s throwback article which celebrated the techno-creative possibilities we had in the late 90s. I realised that FB intentionally forbids a spectrum of modes of expression and features that were actually taken for granted two decades ago.

This isn’t a technophobic post. I’ve no problem with people spending hours staring at screens. If I’m criticising anything here, it’s greed, and also blind faith in free markets + engineers’ optimisation to make people happier.

Here are some ways you can’t express yourself on FB:

  • pixel art or high-resolution art (because FB resizes and compresses all images)
  • ASCII art (because text layout can’t be controlled and you can’t switch to a monospaced font)
  • decorative backgrounds
  • choosing the colour of elements, choosing a colour palette
  • making buttons or a user interface, trompe l’oeil/mimicking visual elements
  • laying out a page (the only option is, like with long posts on Twitter, to make a screenshot and share as a picture, but that loses the text data)
  • sharing sound snippets
  • italics, bold text, underlining

You are even discouraged from making your own smilies because they won’t register with the system that converts them to a little cartoon.

20 years ago, anyone making a personal webpage had all of these features at their fingertips. Forums and other communities allowed some of them too.

How about more mundane capabilities?

  • proper hyperlinks (FB lets you put links but without changing the text, and encourages one link per post by allowing a single preview pane; linking to other posts is limited/bogey in a number of ways… sponsored posts can’t be linked to, preview panes are generated in comments but not in news posts, and linking to an old post of yours presents the content with the text removed)
  • searchable posts (because FB’s model is based on feeding you algorithmically selected new material or else you stalking people’s profiles… so they can’t give you ways to find old posts)
  • choosing what you see, not just blocking vaguely defined content or blocking people
  • tags (unlike the other features I’ve mentioned, this is modern, from 2007)
  • metrics i.e. how many views you get (obviously, FB want you to pay for this information by buying sponsored posts)
  • publicly editable posts a la Wiki

Will this change? I doubt it. Facebook have something that makes money for them. Perhaps the mass market (which is obviously what a social media site aims for) will never care enough to want those features. But if they were there, we’d be spending our time in a space that felt a lot less grim and robotic, and maybe, if we could play with and surprise each other, we’d be less grim and robotic.

Rant over. As usual, I’d love to hear your comments!

Let me anticipate a couple of objections. Yes, there are hundreds or thousands of websites where you can express yourself in these ways. But a lot of them work on the same formulaic, business-like assumptions as Facebook – that we are all just trying to promote and brand ourselves. Anyway, I think it’s fair to criticise a site where we spend a lot of time and which makes every effort to keep us there.

Oh and I should say that I recognise how useful many of Facebook’s features are, e.g. events and band pages. (I think that intersection of personal scale with a small organisation’s or business’s scale is where the site works best.) I just think we’d be better off if we could pay for those features straight out rather than by participating in the rote “interaction” of sharing itemised, cling-wrapped content.

Beats, Windows 98-Style

It’s been a while since I blogged here. In the meantime I’ve been working a lot on my rock band Mescalito… but some blog ideas have been simmering in the back of my mind.

Today’s post is a quick chat about a creativity-boosting project I thought of. I’ll be making a drumloop a day, every day of December 2016 and uploading them to my Soundcloud.

I was recently producing beats for my trio with Dyl Lynch and Max Zaska. I enjoyed trying to imitate the likes of Madlib, using compressor and EQ plugins etc. to make our live performances as fat as possible. For this month’s project, though, I’ll just focus on drum programming. I’m inspired by another bandmate, Ben Prevo, and his song-a-day project, where he used whatever was at hand to make a more-or-less finished product each day.

To avoid the rabbit hole of tweaking FX plugins, and for a healthy dose of nostalgia, I’ll only use software available in the year 2000!

Hammerhead Rhythm Station (Bram Bos, 2000)

Drumsynth 2.0 (Paul Kellett, 2000)

To me, these programs evoke a different world. I imagine bedroom tinkerers sharing coding techniques, knowledge of analog and digital hardware, and a love of dance music. Bram Bos’ program even displays his student email address, from a Dutch university. The last days of a smaller, less consolidated internet.

The intro screen for Hammerhead

If you had a PC back then, your music-making options were limited to MIDI sequencing, basic layering of samples, trackers – or free programs like these.

This screenshot took a bit of effort to find – it’s easy for the history of a scene like PC music software to disappear into the ether… Massiva, another program I was messing around with circa 2000

The nicest thing about (my fantasy of) the 90s is the DIY mentality. The tools are by amateurs and rely on no-one else’s file formats or software. These guys saw a problem, coded up a solution and gave it to the world. That still happens today but you are far less likely to hear of it in the hyped and moneyed tech/startup landscape of today.

Admittedly, some of those pioneers monetised their work. Drumsynth 2 is now bundled with FL Studio.

I say “pioneers”, but the reason there was a space for pioneering, is that the professional music world had little time for PCs. PC music was a nerdy little field, obsessed with emulating “realer”, cooler sounds – a vibe you can pick up by browsing old magazines.

The presets in Drumsynth 2 do try to emulate iconic drum machines – but the little synth can’t really hack it and the noises are crude. I kind of like that though. To recap, I’m using 20-year-old free software to get a sound roughly (but not convincingly) like 40-year-old drum machines.

Having a small number of samples (20 preset, 6 custom, only 6 channels) in Hammerhead, my drum machine, forces me to listen closely to how sounds work together. The lack of delay or reverb makes me strive for other ways of creating depth: volume differences, layered and interlocking syncopations, and expressive, varied timbres.

I’ll be pushing the software past what it was designed to do. Hammerhead does 4/4 beats in 16th notes only. By using odd numbers of bars, though, this can be got around (e.g. 5 bars of 4/4 can be 4 bars of 5/4). Similarly, the shuffle control can be abused for some beat-bending tricks, if the given 4/4 grid is disregarded.

So in a humble way this project might represent some DIY values from the hacker and demo-scenes of my idealised 90s – which were all about overcoming computational limitations.

By the way, those 4/4 grids are how I first learned rhythm, at the age of 12 or so (first in a MIDI sequencer, then in Hammerhead). Here is my first ever beat, from 2001:

And here is the first drumline of my month of beats, Windows 98-style. (Try this direct link if the soundcloud embedding doesn’t display below.)