Synth Update

Just a real quick one today. I’ve made a new version of my software synth Golden (that uses the golden ratio as its sound source!) and you can check it out here:

I simplified the interface a lot, mostly by removing unnecessary options. There are no longer two instances of the additive synthesis engine bundled together – a more flexible way to experiment with that stacking effect is to use multiple tracks in your DAW.

For example, you can get instant deep drones by making three identical tracks with a long MIDI note, and then setting track 2’s instance to “First overtone” and track 3’s instance to “Second overtone”. This will get you a chord tuned according to golden ratio intervals! The sound is a little harsh but it’s amazing with the well-known free delay effect NastyDLA providing some dusty air.
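
For the curious, the arithmetic behind those intervals is simple. Here's a rough sketch – the names and the exact spacing are my illustration, not Golden's actual code – assuming each "overtone" setting multiplies the fundamental by the golden ratio φ:

```javascript
// Illustrative sketch (not Golden's internals): overtones spaced by
// powers of the golden ratio phi.
const PHI = (1 + Math.sqrt(5)) / 2; // ≈ 1.618

// Frequency of the nth golden-ratio overtone (n = 0 is the fundamental).
function goldenOvertone(fundamental, n) {
  return fundamental * Math.pow(PHI, n);
}

// The three-track drone trick: root, first overtone, second overtone.
const chord = [0, 1, 2].map(n => goldenOvertone(220, n));
```

Because φ is irrational, none of these intervals line up with the harmonic series – which is part of why the sound is harsh in that distinctive way.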

I also fixed the polyphony/retriggering issue so notes behave as expected, fixed a bug in the 8th voice, and standardised the startup values.


As always with additive synthesis, watch out, it can get very loud.

That’s it from me, have fun if you download it!

Harping On

I made an online toy in JavaScript, called TextHarp. Try it out (it needs a computer rather than a phone because it uses mouse movements).

The idea popped into my head a few weeks ago. It won’t leave the prototype stage, because this combination of technologies – pure HTML, CSS and JS (although I do use one library to synthesise sounds) – doesn’t robustly support what I wanted.

I aimed to turn a piece of text into an instrument, where moving the cursor over any letter which corresponds to a musical note – so, ‘c’, ‘d’, ‘e’, ‘f’, ‘g’, ‘a’, ‘b’ – would pluck the letter like the string of a harp, and play that note as a sound!
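
Boiled down, the mapping is something like this (the frequencies are standard equal-tempered values I'm using for illustration; the names are mine):

```javascript
// Note letters mapped to frequencies (equal-tempered, around middle C –
// illustrative values, not necessarily what TextHarp ships with).
const NOTE_FREQS = {
  c: 261.63, d: 293.66, e: 329.63, f: 349.23,
  g: 392.00, a: 440.00, b: 493.88,
};

// Return a frequency for a note letter, or null for any other character.
function letterToFreq(ch) {
  const f = NOTE_FREQS[ch.toLowerCase()];
  return f === undefined ? null : f;
}
```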

At the time, I was thinking of a few possibilities:

  • adding audio feedback for testing web pages, so that a developer/designer could hear if an element was malformed or missing information (aspects which are often invisible to the eye)
  • sonification, a concept which I think is rapidly going out of date as people realise its enormous limitations. The idea was to turn reams of data into continuous sound patterns that would somehow reveal something within the data, but the results were usually just third-rate electronic music, or showed no more than a good graph could, and basically made clear that the humanities PhD system sucks in people who’d be better off elsewhere… sorry, I seem to have gotten into a rant here
  • simple enrichment and adding magic to the experience of visiting a webpage

That last one is out of favour in web design nowadays. Instead, minimalism, accessibility and function are the buzzwords. Fair enough… but that framing also ominously envisages the web as merely a place where stressed and harried folk get updates from a corporate or government source, staring down at their little phone screens.

Well. My little toy isn’t going to do anything to overturn that paradigm. Still, let’s take a short tour of the challenges in making it work.

I used basic JavaScript mouse events to change elements in the webpage, modifying what’s called the Document Object Model – which is nothing more than how your browser perceives a page: as a hierarchy of bits containing other bits, any of which can be scripted to do stuff.

My script caused each paragraph to detect when the mouse was over it. Then it cut the paragraph into one-character chunks and placed each of these single letters into a <span></span> HTML tag, so that it became its own card-carrying member in the Document Object Model.
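
In sketch form, the chopping step looks something like this (names are mine, not TextHarp's actual code, and I've added a bit of escaping so characters like ‘&’ survive the round trip):

```javascript
// Wrap every character of a paragraph's text in its own <span>, so each
// letter becomes an addressable node in the DOM.
function wrapLetters(text) {
  const esc = ch =>
    ch === '&' ? '&amp;' : ch === '<' ? '&lt;' : ch === '>' ? '&gt;' : ch;
  return [...text].map(ch => `<span>${esc(ch)}</span>`).join('');
}

// In the page itself, roughly:
//   p.addEventListener('mouseenter', () => {
//     p.dataset.original = p.innerHTML;
//     p.innerHTML = wrapLetters(p.textContent);
//   });
```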

Not very elegant at all! Also, despite span tags being supposedly invisible, throwing in so many of them causes the paragraphs to twitch a little, expanding by a couple of pixels, which wouldn’t be good enough for a production page.

However, it works. I set each of the single-letter chunks to play a synthesized tone when the mouse goes over it, and that’s it. When the mouse leaves that paragraph’s zone, I stitch the letters back together, restoring the paragraph the way it was.

The downside is that any HTML tags used to format or structure the text tend to get damaged by the process, usually resulting in piles of gibberish or text disappearing cumulatively. It would be possible to improve that, but only with a lot of manual work. And the browser’s attempts to be clever by healing broken tags actually cause a lot of difficulties here.

Defining some new kind of object that held the text and knew the location of each letter would be a better bet.
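
Something like this, maybe – a sketch of an object that keeps the text whole and just records where the note letters sit:

```javascript
// Sketch: keep the original text intact, record positions of playable
// letters instead of rewriting the DOM. Names and shape are my guess.
class LetterMap {
  constructor(text, noteLetters = 'abcdefg') {
    this.text = text;
    this.positions = []; // [{ index, letter }]
    for (let i = 0; i < text.length; i++) {
      const ch = text[i].toLowerCase();
      if (noteLetters.includes(ch)) this.positions.push({ index: i, letter: ch });
    }
  }

  // Which note letter, if any, lives at character index i?
  letterAt(i) {
    const hit = this.positions.find(p => p.index === i);
    return hit ? hit.letter : null;
  }
}
```

The mouse position would then be translated to a character index (via the geometry of the rendered text) rather than relying on thousands of throwaway span elements.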

However, I’m turned off this avenue of enquiry for the moment, because dealing with audio in browsers is a pain. Not for the first time, musical and sensual uses of technology have been left in the gutter while visuals get all the investment.

There are two big problems with web audio. First, JavaScript doesn’t offer precise timing. I see precision – whether in first-person computer games, input devices, or, in this case, the reactivity of audio – as inherently empowering, inviting us as humans to raise our game and get skilled at something. Sadly, much of our most vaunted current technology crushes this human drive to excel and be stylish, with delays and imprecision: touchscreens, cloud services, bloated webpages…
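
For sequenced playback, at least, there's a well-known workaround: use the sloppy JavaScript timer only to wake up, and stamp exact event times from the Web Audio clock, a little way ahead. A sketch (the clock is injected so this runs anywhere; in a real page it would be `() => audioCtx.currentTime`):

```javascript
// Lookahead scheduler: each tick schedules every note that falls within
// the next `lookahead` seconds at a sample-accurate time, so timer
// jitter stops mattering.
function makeScheduler(now, interval, lookahead, scheduleNote) {
  let nextTime = now();
  return function tick() {
    while (nextTime < now() + lookahead) {
      scheduleNote(nextTime); // e.g. osc.start(nextTime) in a real page
      nextTime += interval;
    }
  };
}
```

The tick function would be driven by `setInterval`, firing more often than `lookahead` so no note is ever missed.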

Where was I? Yes, the second problem is that Google Chrome made it standard that web sites can’t play sound upon loading up, but only after the user interacts with them. Well meaning, but really shit for expressivity – and quite annoying to work around. My skillz are the main limitation of course, but even trying out two libraries meant to fix the issue, I couldn’t make my audio predictably avoid error messages or start up smoothly.
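
For the record, the usual shape of the workaround is small – resume the AudioContext once, on the first user gesture. A sketch with the context passed in (so it's testable; in a page it would be the real AudioContext):

```javascript
// Tame the autoplay policy: resume a suspended AudioContext exactly
// once, on the first user gesture.
function makeAudioUnlocker(ctx) {
  let unlocked = false;
  return function onFirstGesture() {
    if (unlocked) return;
    unlocked = true;
    if (ctx.state === 'suspended') ctx.resume();
  };
}

// In a page, wiring it up looks roughly like:
//   const unlock = makeAudioUnlocker(audioCtx);
//   document.addEventListener('pointerdown', unlock, { once: true });
```

Getting this to behave consistently across browsers and libraries is the part that, as I said, I couldn't make predictable.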

No tech company would forbid web pages from showing images until the user okays it. But sound is the second class citizen.

When I know my JS better, I’ll hopefully find a solution. But the sloppy timing issue is discouraging. Some demos of the library I used show that you can do decent stuff, although the one I experimented with took a good idea – depicting rhythm as a cycle – and managed to fluff it in two related ways. It made the ‘swing’ setting adjustable for each bar of a multi-bar pattern – pointless and unmusical. And it made the sequencer switch from bar to bar along with the sound being played – theoretically simple and intuitive, but, especially with the above-mentioned time imprecision of web interfaces, actually resulting in loads of hits being clicked into the wrong bar. (And if I say a drum machine is hard to use, it probably is – I’ve spent so much time fooling around with software drum machines I ought to put it at the top of my CV.)

But what am I saying! That demo’s way more polished than mine.

Perhaps even a little too polished! Visually, anyway. All of the examples on that site are rather slick and clean-looking – perhaps because (I believe) the whole library has some affiliation with Google.

Ah, I’m being a prick now and a jealous one too, but one scroll down that demos page would make any human sick. The clean grids. The bright colours. The intuitive visualisations – yes, technology now means that you too can learn music, it’s just a bit of fun! Practice, time feel, listening, gear, human presence – nah!! And then the serious, authoritative projects – generated-looking greyscale, science-y formal patterns and data…. bleh.

My next JavaScript project is an exploration of a visual style explicitly intended as a middle finger to that kind of polished, bland graphics. I’ll be taking lessons from my 90s/00s gaming past to experiment with pixel art, but without the hammed-up console-nostalgia cutesiness.

And I’ll be using standard web technologies – JS, SVG – to make anything I come up with 100% reusable by non-programmers.

Thanks for reading!