Pages

2018-04-07

The nature of mixing

Just a little thought in between, while I'm writing a longer post...

The concept of mixing audio, applying filters, processors and whatnot to perfect and polish the sound, is actually quite artificial, at least to the extent that it's being done these days.

If you think about how music was performed for most of humanity's existence, all you had were the "raw" sounds of the performers. There were some limited ways to tweak those, like "hey you, don't hit those drums so hard" or "you, come to the front, so we can hear you better", but that's about it.
And while people surely noticed the difference that the environment made to the experience, if they didn't have a cathedral at hand to give the choir that impressive volume, they would still enjoy the performance in a normal room.

The ability to micromanage sounds, in the way we can do it now in the studio, feels like a qualitative step beyond what we used to be able to do in the past. Like going from some salt and the herbs that grow in your backyard for spicing your food, to glutamate and a whole catalog of E-something chemicals used for industrialized food production.

Yes, those tools that we have today are great, and it's even greater that they're now available to anyone with a computer. But be careful and conscious about how you apply them.
At least when I make a pizza myself at home, I don't want it to taste like a frozen pizza from some factory.

2018-04-03

LV2 confusion with internal presets

I managed to get the Helm synth working as an LV2 in noisicaä, including UI. Which is great, because that's a very nice synth.
But I ran into an issue, which caused me some head scratching, because I thought that I was doing something wrong. As it turned out, this is really a problem with (at least) LV2 and Helm's feature to load presets from within its own UI.

Helm's internal state is determined by the values of the 100+ control input ports, plus some additional state that isn't captured by those ports (e.g. which controls are modulated by an LFO).
Now when you load a preset, Helm sets all of its controls to specific values, but it has no way to tell the host about those values. Which means the host is still feeding the previous values into all control input ports. Helm seems to have some logic to ignore those values from the host until you actually change them from outside of Helm (e.g. via automation or the generic UI). If you change some control manually in Helm's UI, then the value is posted from the UI to the host, so it can update the value that it feeds into the respective control port.

But if you now close the project and open it again, Helm's state gets all messed up, because for most controls the host never learned the right value from the preset that you selected, and Helm gets initialized with effectively random values. Things get even more confusing because the control values are also stored in Helm's 'internal state' blob, and depending on the order in which control ports and internal state are initialized you get different results.
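The failure mode can be sketched with a toy model in plain Python (no real LV2 involved; ToyPlugin, ToyHost and the control names are all made up for illustration, and Helm's "ignore the host until touched" logic is left out for simplicity):

```python
# Toy model of the host/plugin control-port desync. Not real LV2 code,
# just an illustration of why plugin-internal presets confuse the host.

class ToyPlugin:
    """A plugin with host-visible control ports plus its own preset loader."""

    def __init__(self):
        self.controls = {"cutoff": 0.5, "resonance": 0.1}

    def load_internal_preset(self, preset):
        # The plugin changes its own controls, but has no channel to
        # report the new values back to the host.
        self.controls.update(preset)

    def run(self, host_values):
        # Each cycle the host writes its (possibly stale) values into
        # the control ports.
        self.controls.update(host_values)
        return dict(self.controls)


class ToyHost:
    """A host that only knows the values it set itself."""

    def __init__(self, plugin):
        self.plugin = plugin
        self.port_values = dict(plugin.controls)  # snapshot at instantiation

    def process(self):
        return self.plugin.run(self.port_values)


plugin = ToyPlugin()
host = ToyHost(plugin)

# The user picks a preset inside the plugin's own UI...
plugin.load_internal_preset({"cutoff": 0.9, "resonance": 0.7})

# ...but on the next cycle the host clobbers it with its stale values,
# and those stale values are also what gets saved with the project.
print(host.process())  # {'cutoff': 0.5, 'resonance': 0.1}
```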

Turns out that Ardour has the same issue, so I'm not alone with it.
I don't know if e.g. the VST API solves this issue, or if it was generally a bad idea to have presets that change the values of control ports and are managed within the plugin itself. If the presets were handled in the UI, then LV2 has the API to set the control ports... or is it just a bug in Helm that this API isn't used when switching presets?

So the lesson is to not touch the preset browser in Helm, but instead use the preset menu of the host - which has the same list of presets, and knows how to set the control ports correctly when loading a preset.
But noisicaä doesn't know about presets yet...
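When noisicaä eventually grows preset support, the host-side approach avoids the desync, because the host itself updates the values it feeds into the control ports. A minimal sketch of that idea (made-up names, not the real LV2 preset API):

```python
# Toy sketch of host-managed presets. Not real LV2 code; the point is
# only that the host applies the preset, so it always knows the values.

class Host:
    def __init__(self, num_controls=3):
        # The values the host writes to the plugin's control ports
        # on every processing cycle.
        self.port_values = {f"control_{i}": 0.0 for i in range(num_controls)}

    def load_preset(self, preset):
        # The host applies the preset itself, so what it feeds into the
        # control ports always matches what the plugin plays.
        self.port_values.update(preset)

    def save_project(self):
        # What gets persisted is exactly what the plugin was hearing.
        return dict(self.port_values)


host = Host()
host.load_preset({"control_0": 0.9, "control_2": 0.4})
saved = host.save_project()
print(saved["control_0"])  # 0.9
```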

2015-12-03

Sins of my Youth

I mentioned the Deluxe Music Construction Set before. I used that mostly to transcribe sheet music, which I found in the public library. Some classical pieces, e.g. https://youtu.be/UqOOZux5sPE - because that movie is awesome - and, if I remember correctly, some of the older Genesis songs. I can't remember if I actually tried to write my own music with it - whatever files I saved back then are gone forever.

But I also played around with "trackers" - I think it was mostly NoiseTracker and OctaMED. And strangely enough, even after ~25 years I still have those files sitting on my hard disk. And there's even software that can play them back or convert them to slightly more modern formats.

Just for the giggles, I uploaded some of those to SoundCloud:



I can't claim that my artistic skills have changed, let alone improved, in the meantime.
But I'm probably a better coder now.

2015-11-29

Introducing noisicaä - A simple music editor

I don't know much about music, but it's a topic that interests me.
I don't play any instruments and I'm too old or too lazy to learn one. Probably both.
But I'm one of those computer persons, so I would naturally use a computer to make music - or at least attempt to.

So I looked around at what software is out there, but the stuff I found didn't get me terribly excited. Now I have to add that I'm using Linux (Kubuntu on my desktop, to be more specific) and while I also have Windows on my machine, I wouldn't want to use it - except as a bootloader for Steam. I also didn't look at commercial software, just what's out there in the open source world. That limits the selection. And the stuff that actually exists in that small subsection of the market is either too simple or overwhelms me with a ton of options and possibilities that I don't know how to use, because I don't actually know what that stuff means.
On the other hand I still remember playing around with the Deluxe Music Construction Set on my Amiga back in the day. And that level of complexity (or the lack thereof) feels like exactly the thing that would be appropriate for me today.

So because I couldn't find any existing software that I liked, I started writing my own. Also writing code is fun, so I don't really need a reason.

I'm also using this opportunity to play with some fancy tools that I haven't used before: Python 3 (it's about time), Cython (sounded like a cool toy, but I never had a reason to use it before) and Qt5 (I've been using wx for UIs in the past, but I like Qt way more now).
I'm also giving git another try. Well, we'll see. That has been quite a lackluster experience in the past.

What's different about noisicaä?

noisicaä uses classical musical notation: putting notes onto staves, with clefs, measures, etc. Most "modern" software seems to focus on a sequencer-style interface. My little bit of knowledge about music is stuck in the time when I had music in school, and continuing from that point seems to be easiest for me. Perhaps later I'll figure out that the piano-roll UI of sequencers is indeed more powerful than the system that was used by Bach and Mozart back in the day. But the software can make that transition together with me, as I acquire more skills.

Another thing that seems popular, especially in the realm of commercial software (which I can only tell from seeing random screenshots on random websites, since I never used any of it), is that the UI simulates existing hardware. That might be nice for people with existing experience in making music using real instruments and studio hardware. But for me that's just another obstacle, because in addition to learning about the musical domain itself, I would also have to deal with a user interface that doesn't follow known guidelines. At least known to someone who never set foot in a music studio, but has worked with computers for the past 25 years.
So noisicaä's UI rather follows established schemes that you would also find in IDEs, office suites, and the like.

Features

Not a lot yet. This is still pre-alpha state software.
There's some basic editing and you can play it back. The instruments are currently rendered using FluidSynth, i.e. it uses soundfonts. I have vague plans to also support plain .wav files as samples, or use synthesizers to create instruments. Csound seems like the kind of arcane but powerful system that I could like a lot.

Of course the music should eventually be rendered as .flac, .ogg, etc. files, but that doesn't exist yet.
There should be some support for filters and general mixing/production features, so I get a little bit more than the typical MIDI sounds. LADSPA/LV2 plugins and again Csound seem to be the right tools for that.

I'll also do something with MIDI input. There's a little MIDI controller here on my desk that wants to be entertained. But probably not real-time recording, because I couldn't play anything in real time that's worth recording.

And in the further future I'm also thinking about some hybrid system which mixes composed tracks with recorded tracks - where the source of the recording will probably be my wife playing the guitar or bass.

And who knows what. That'll depend a lot on where my learning curve takes me. I will use this project to implement those things that I learn about music, so I can apply them.

What, music software in Python?

That's right. People seem to make a big fuss about latency, real-time support in the kernel and that kind of stuff. They probably know better than I do, but I'll ignore that advice anyway. Most of the coding is around the UI, persistence, etc., and not involved in the playback at all. And despite not having spent a lot of time on optimizing anything, I haven't observed any buffer underruns during playback yet. Perhaps using a totally overpowered desktop is just good enough (though it's already been a few years since that machine was "high end").
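For what it's worth, the latency contributed by the audio buffer itself is simple arithmetic; a sketch, with a buffer size picked purely as an example:

```python
def buffer_latency_ms(buffer_size, sample_rate):
    """Time span one audio buffer covers, in milliseconds. The audio
    callback must produce each buffer within this window to avoid an
    underrun."""
    return buffer_size / sample_rate * 1000.0

# A common setup: 512 frames at 44.1 kHz.
print(round(buffer_latency_ms(512, 44100), 1))  # 11.6
```

Roughly 12 ms per buffer is a lot of time for a modern desktop CPU, which is one reason the Python overhead hasn't hurt yet.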

Also I don't see noisicaä as a tool to be used on stage for live performances. So occasional glitches do not worry me a lot.

And then there's Cython, which I started using mostly for interacting with C libraries. I already use it for some of the bit-crunching parts - not because those already needed optimization, but just because I wanted to play with it a bit more (a.k.a. premature optimization).

So... you want to make music?

That's the weak spot of this whole enterprise. I seriously doubt that I have the creativity and artistic skills to produce something that could actually be called music - or warrant all the effort I'm now putting into this. Even after I've learned all the theory there is to learn and created the greatest software that any artist could dream of.

Bah, who cares...

Initial commit on GitHub

I've been working on this project for a while already, but only now made it public on GitHub. It's not yet in a state where it's usable though - that'll still take a while.
And I guess I should also describe what this is about at some point...