listening from a distance

i was once told that a painter spends about 80% of the time looking at the painting from a distance. somehow, emotionally detaching yourself from your work and perceiving it as somebody else would seems helpful when making creative decisions. here are a few techniques that i use to listen from a distance.

the first one is obvious: let a track rest, work on other music for a few days, and then come back to it. you won’t be wrapped up in all the details, and usually things will be much clearer.

another technique is to do something that demands your attention while your music is playing in the background. doing the dishes, for instance. personally, i like to listen to my unfinished tracks while cruising the city on my barfoot longboard. because i have to be very alert to what happens around me, i only get the big picture of the track. it’s a good way to check whether something drags on for too long or could be stretched out.

you can also get some distance by asking others for feedback. i always ask my girlfriend. she isn’t into electronic music at all and knows nothing about creating it, but her reaction is very valuable. it’s usually more direct than feedback from someone in the know. if she says a sound is teh suck, it probably is. she won’t be impressed by ‘but that’s an fm8 sound i programmed from scratch’.

sending tracks for feedback to other electronic musicians can be helpful too. but for me, nothing beats having one of my peers in my studio. i usually ask that person to sit in my chair and actively listen to the whole track. i just sit in the back and observe.

somehow, this is always a revealing experience. it’s almost as if the other person becomes me and vice versa. i feel it’s the closest i get to listening through someone else’s ears. observing helps too: unconscious foot tapping tells me the thing has got groove. rapper mad skillz called it the nod factor; i call it the tap factor.

giving useful criticism is really hard, and not many people can do it. so i treasure the few people who have helped me every time they gave feedback. i just hope this post doesn’t make them too self-conscious the next time i ask them for a reaction.


mid/side processing in ableton live

mid/side (or m/s) processing is a technique that i use in almost all my tracks. it’s pretty simple to set up in ableton live and you can get stunning results that are otherwise impossible to achieve. i’ve always thought it was pretty standard, but when i show it to people, i often get looks as if i pulled something from an ancient audio alchemist’s handbook. not quite so.

first a bit of theory: the difference between mono and stereo is that in a mono signal the left and right channels are identical, while in a stereo signal they differ. the familiar way to encode a stereo signal is as a left and a right channel. but there’s another way: m/s encoding. m/s also consists of 2 signals, called mid and side. the mid signal contains everything that left and right have in common, and the side signal contains everything that differs between left and right.
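for the code-minded, the encoding boils down to two lines of arithmetic. here’s a minimal sketch in python/numpy (my own throwaway code, nothing from live; the halving is one common convention, other tools scale differently):

```python
# a minimal sketch of m/s encoding and decoding
import numpy as np

def ms_encode(left, right):
    mid = (left + right) / 2   # what both channels have in common
    side = (left - right) / 2  # what differs between them
    return mid, side

def ms_decode(mid, side):
    return mid + side, mid - side  # back to left, right

# round trip: decoding the encoded signal gives back the original
left, right = np.random.randn(44100), np.random.randn(44100)
mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, side)
assert np.allclose(left, l2) and np.allclose(right, r2)
```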

m/s encoding is quite common, for instance in the stereo audio on vinyl records. the left-right movement of the needle describes the mid signal and the up-down movement describes the side signal. this is decoded to l/r before the signal leaves your turntable.

and now the practice: in live there’s an easy way to turn l/r into m/s. if you set the width of the utility effect to 0%, you get the mid signal; setting it to 200% gives you the side signal. all that’s left to do is insert these in separate chains in an audio effect rack to use them in parallel.

but of course you have to do some processing on either the mid or the side signal to take advantage of m/s. an example is to put a hpf on the side signal. this will remove the stereo part from your bass, making it mono. this can focus the bass and is essential when doing stuff for vinyl, which has a hard time handling stereo bass.
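here’s a sketch of what that does to the signal, reusing the ms_encode/ms_decode helpers from above (the 150 hz cutoff and 4th-order filter are arbitrary choices, just for illustration):

```python
from scipy.signal import butter, sosfilt

def mono_bass(left, right, cutoff_hz=150, sr=44100):
    mid, side = ms_encode(left, right)
    # removing lows from the side signal means left and right become
    # identical below the cutoff, i.e. the bass collapses to mono
    sos = butter(4, cutoff_hz, btype='highpass', fs=sr, output='sos')
    return ms_decode(mid, sosfilt(sos, side))
```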

but you can get more creative. you can make any frequency less stereo by cutting that frequency in the side signal. boosting frequencies in your side signal makes your mix wider at those frequencies. a wider track can also be achieved by compressing the whole side signal. be careful and use your ears, because your mix can easily become too wide and lose coherence. here’s a rack i use a lot on my master channel. it features a high-pass filter and a compressor in the side signal.

but you don’t necessarily have to apply m/s processing to the whole mix. imagine having a stereo clap. encode it as m/s and only apply reverb to the side signal. you can get a pretty big reverb without drowning the clap and losing the impact. you can find that rack here. try applying the same reverb on the l/r stereo clap. the difference will surprise you.
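the same trick as a sketch (again reusing the helpers from above; the decaying noise burst is a crude stand-in for a real reverb, just to show the routing):

```python
import numpy as np
from scipy.signal import fftconvolve

def side_only_reverb(left, right, sr=44100, decay_s=1.0, wet=0.5):
    mid, side = ms_encode(left, right)
    # crude stand-in reverb: exponentially decaying noise as an impulse response
    t = np.arange(int(sr * decay_s)) / sr
    ir = np.random.randn(t.size) * np.exp(-5 * t)
    ir /= np.sqrt(np.sum(ir ** 2))  # keep the tail at a sane level
    side = side + wet * fftconvolve(side, ir)[:side.size]
    # the mid signal, and with it the clap’s impact, stays untouched
    return ms_decode(mid, side)
```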

for those who don’t work in ableton live, voxengo makes a good free m/s encoder/decoder that can give you the same results. it’s a bit more elaborate to set up and you have to explicitly decode the signal back to l/r again after processing.

just bear in mind that m/s encoding in itself is not an effect. but what makes m/s processing so interesting is that you can split your signal into unusual parts, process those parts with effects, and then put the signal back together. and there are other useful ways to split your signal, which i will talk about in future posts.


free preset bank for korg polysix

a softsynth i use a lot is the korg polysix, an emulation of the 1981 6-voice polyphonic synthesizer by the same name. it’s a pretty basic synth, and the range of sounds that can be coaxed from it is limited. but what it does, it does rather well. i made a free preset bank for it, and i couldn’t resist also writing a little personal review.

first the goods: the minz polysix bank. feel free to change the presets and use them in any track. but if you want to redistribute the whole bank, please link to this blog post and leave the file name as it is.

[image: korg polysix]

and now the talking. let’s start with the obvious question for any emulation: does the software polysix sound like the real thing? frankly, i couldn’t care less. the people at korg tried to capture the quality that made the hardware a classic, and to my ears they have made something that sounds good. because it sounds like a hardware polysix? no, because it sounds good. for me, it’s as simple as that. i’m not here to impress people with the fact that i use a real polysix or something that sounds 100% the same. i’m here to impress them with good sounds. and the virtual polysix never fails to amaze me.

in comparison to its original competitor, the roland juno60, i’d say that the sound of the polysix emulation is a bit softer and more akin to string synths. the juno60 sounds more brassy and can be quite piercing, probably also because of its ridiculously fast attack. both sound warm, fit into a mix easily, and are able to deliver simple but powerful analog-style basses, stabs and pads. sometimes it’s easy to forget that these are actually one-oscillator synths. korg and roland added effects to cover that up, but in my experience, the virtual polysix needs them less often than the juno60.

the original polysix had nothing in the way of elaborate modulation, and neither does the emulation. when programming the presets, i occasionally missed the ubiquitous modulation matrix. yet i’m glad korg didn’t add it, as it would radically change the character of the synth. and getting back to basics can actually be quite fruitful. korg made a few changes though.

four parameters can now be externally modulated, scaled along the keyboard and controlled by velocity. this means that sounds can be way more expressive. there’s also the obvious ‘sync to clock’ option for the lfo and arpeggiator. my favorite change is that the filter tracking is now accurate. this means that if you crank up the resonance with the filter tracking set to 100%, you can get some sweet bell-like sounds that can be played in tune across the keyboard. this is not possible on the original polysix.

then there’s the unison function with spread and detune. often, this is the quickest way to turn any sound into a bog-standard trance lead. but when applied with taste, it’s nice for creating fat and wide sounds. usually i prefer to make a big sound without unison and then use it to make that sound even bigger. you’ll find most of my presets don’t use the unison function, which also leaves the user a bit of room to easily fatten up a sound.

the final addition in the emulation is the illustrious ‘analog’ knob. i seldom use it. as i see it, any real polysix that sounded like the emulation with the ‘analog’ knob at 3 or higher would immediately be brought in for repair. i imagine that if you crank the knob up to 10, you hear what a polysix sounds like when its guts are being eaten away by leaking battery acid. which, according to a repair tech i spoke to, is exactly why a lot of hardware polysix synths out there are dying at the moment. oh, the wonders of analog.

finally, a cool trick that is typical for the polysix: put the polysix in poly mode, make sure polyphony is set to 3 or greater, hold a minor chord on the keyboard, and then press the chord button. now play some notes and party like it’s 1991.


more news from namm 2009

the more i learn about the upcoming max for live, the happier i get. it seems you can indeed load any max/msp patch. and even jitter. probably with a bit of adaptation, but a conversion tool shouldn’t take long. the promised integration i raved about yesterday might extend even further than i thought. through the use of custom max objects, it will allow people to use open source stuff like processing or ruby inside max for live. some of these objects have already been created for max, and no doubt people will be inspired to adapt them for max for live or create new objects where there are none.

why is all this so exciting? being a closed system, live offers the stability you need to focus on making music. but you also get forced into a mold. max for live offers a backdoor into the system. there’s a strict door policy, but once you’re in, enjoy the party. effectively, you get the stability of a closed system with all the possible craziness of an open one. it will surely result in many new and inspiring ways of using live.

other cool namm news was volta by motu. it’s a plugin that allows you to send control signals to cv inputs on analog synths. all you need is an audio interface. volta has built-in lfos and sequencers that can be beat-synced. it can also pass on automation from your host. this is one of those products that makes you wonder why it took so long.
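the underlying idea is simple: a control voltage is just a very slow audio signal, so a dc-coupled audio output can carry it. here’s a made-up sketch of rendering a beat-synced lfo as an audio buffer (volta’s internals obviously aren’t public; this just illustrates the principle):

```python
import numpy as np

def lfo_cv(bpm=120.0, beats_per_cycle=4, seconds=8.0, sr=44100):
    # one lfo cycle spans a fixed number of beats, so it stays beat-synced
    cycle_hz = bpm / 60.0 / beats_per_cycle
    t = np.arange(int(sr * seconds)) / sr
    # a -1..1 signal; a dc-coupled output turns it into a voltage
    return np.sin(2 * np.pi * cycle_hz * t)
```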

[video: Volta First Look from stretta on Vimeo]

it’s mac only for now, which is bad news for me. but i expect other manufacturers to join in and hopefully improve on volta. i really don’t get why this has to be done through an audio interface. maybe so people can use hardware they already have, but i’d prefer a plugin interfaced to a dedicated usb hardware box with, say, 16 outputs and 16 inputs. that should be cheaper than a 16-in/16-out audio interface. why the inputs? to me, modular synthesis is all about controlling everything with everything. controlling the rate of a software step sequencer with a hardware lfo, anyone? and what about controlling parameters of vst synths from your hardware modular?

this might even put a dent in the market for hardware control modules. buy some cool audio generating and processing modules and do all the control from the computer. no doubt we’ll get discussions about how analog control signals just sound warmer, but to most it means being able to buy a fully featured analog modular for less.

now if somebody just made that box and allowed it to be used by max for live…


Soundcloud FTW?

ableton live. summing bus.

throw those 2 terms at google and within seconds you’ll find yourself in the middle of one of the many wars on the intertoobz. on one side you’ll find people who claim that live’s summing bus sounds crap compared to, for instance, apple logic’s. on the other side are people who claim that this is simply not true. or that, if it is true, it doesn’t matter. for those who don’t know what this is all about: summing happens when 2 or more signals are mixed together. the master channel of your daw is the most obvious example. if your summing bus is bad, every track you do suffers from it.

i’m serious about mixing and want the best possible sound. within my budget, of course. currently i mix in live. but i also own logic, and if logic’s summing bus indeed sounds better, i’ll switch immediately. so yesterday i ran a little test.

i exported every channel of one of my tracks to a separate audio file, imported all those audio files into both live and logic, and bounced both to a stereo mix. listening to the bounces didn’t reveal any differences. but i’m far from having the best ears in the industry, so i used the infamous null test to objectively compare the outputs of live and logic. i loaded both bounces into wavelab, inverted the phase on the logic bounce and mixed them together. the theory is that whatever the two bounces have in common cancels out, gets ‘nulled’. any difference doesn’t get nulled, and you should be able to hear it.
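if you want to try this at home without wavelab, the null test is a few lines of python (the file names are made up, and the bounces have to be aligned sample-accurately for the result to mean anything):

```python
import numpy as np
import soundfile as sf

live, sr = sf.read('bounce_live.wav')
logic, _ = sf.read('bounce_logic.wav')

n = min(len(live), len(logic))   # guard against off-by-one lengths
residual = live[:n] - logic[:n]  # subtracting = mixing with inverted phase

peak = np.max(np.abs(residual))
if peak > 0:
    print(f"residual peaks at {20 * np.log10(peak):.1f} dbfs")
else:
    print("perfect null: the bounces are sample-identical")
```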

this is a generally accepted way to see whether 2 signals are the same or not. if you do find a difference, that doesn’t mean one signal is worse; it just means they’re different. but if you find no difference, any claim that one of the signals is better or worse has no factual basis.

so i mixed the live bounce with the phase-inverted logic bounce and i heard… absolutely nothing. human hearing has its limits, so i grabbed the excellent free voxengo span spectrum analyzer. nothing. not even with the amplitude scale going down to -130db. luckily live has a built-in spectrum analyzer that can go even further, and with the amplitude scale going down to -160db, it finally showed a small difference.

i ran the test again with a different track and got similar results.

so there is a difference. let’s assume it’s not due to fft metering limitations, and that it is really there. fine. but is it significant? i’m well aware that seemingly inaudible subtleties can influence audio quality. but i don’t think this subtle difference is proportionate to the certainty with which some claim live’s summing bus is made of fail. we’re talking about a maximum value of -140db here.

to give you an idea: the softest possible sound on a cd is at -96db. due to how the decibel scale works, the 44db gap down to -140db works out to roughly 150 times smaller in amplitude, or about 25000 times less power. and it is -140db at 50hz, where the human ear is at its most insensitive. as robert henke said in a lecture about live’s sound quality, this is purely academic.
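for the curious, that’s just the decibel formula at work:

```python
# the 44 db gap between -96 dbfs and -140 dbfs, spelled out
db_gap = 140 - 96
amplitude_ratio = 10 ** (db_gap / 20)  # ~158 times smaller in amplitude
power_ratio = 10 ** (db_gap / 10)      # ~25000 times less power
print(amplitude_ratio, power_ratio)
```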

so are those who claim live’s summing bus sucks talking out of their ass? nope. i have no doubt people hear a difference between live and logic. but the reason is almost certainly not the summing bus. so what is it?

the placebo effect is always a good one. you’ll hear what you think you should hear. the funny thing about the placebo effect is that a more expensive placebo works better. maybe doubling the price of apple logic will make it sound even better.

then there’s peer pressure. if you follow some of the discussions, you’ll quickly discover that among real professionals™ it’s quite common to consider live a toy, not suited for real professional work™. if you want to be one of the boys, you’d better hear that live indeed sounds bad. and have some revealing personal experience to back it up. and if you don’t have that, any hearsay will do.

all irony aside, i do think there is also a real difference in sound between live and logic. i worked in logic for about 7 years and have been using live for 4 years now. due to how both programs are organized, it’s clear to me that you’ll make music differently in live. with sonic consequences.

to prevent this post from getting too long, i’ll post a more detailed explanation of that in a few days.
