How is that sound achieved? The vocoder analyses the character of a ‘modulator’ signal (ie your voice) and uses it to filter the ‘carrier’ (a waveform produced by an oscillator, in this case the microKORG’s oscillator). The result: you sound like Sparky’s Magic Piano. Or a Dalek. Or Darth Vader.
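For the curious, the textbook version of that idea can be sketched in a few lines: split both signals into frequency bands, follow the modulator’s loudness in each band, and use those envelopes to shape the matching bands of the carrier. This is a minimal channel-vocoder sketch, not the microKORG’s actual algorithm; the band count, band edges and the stand-in ‘voice’ signal are all assumptions for illustration.

```python
# Minimal channel-vocoder sketch (assumption: the textbook idea, not
# the microKORG's internal implementation).
import numpy as np
from scipy.signal import butter, lfilter

def vocode(modulator, carrier, fs, n_bands=8, f_lo=100.0, f_hi=4000.0):
    """Impose the modulator's per-band loudness envelope on the carrier."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype="band", fs=fs)
        mod_band = lfilter(b, a, modulator)
        car_band = lfilter(b, a, carrier)
        # Envelope follower: rectify, then low-pass at ~50 Hz
        be, ae = butter(2, 50.0, btype="low", fs=fs)
        env = lfilter(be, ae, np.abs(mod_band))
        out += car_band * env
    return out

fs = 16000
t = np.arange(fs) / fs                          # one second of audio
saw = 2.0 * (110.0 * t % 1.0) - 1.0             # 110 Hz sawtooth carrier
# Stand-in 'voice': a wobbling tone (a real recording would go here)
voice = np.sin(2 * np.pi * 440 * t) * np.sin(2 * np.pi * 3 * t)
robot = vocode(voice, saw, fs)
```

In hardware terms: `voice` is the mic input, `saw` is the oscillator, and `robot` is what comes out of the vocoder block.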
For ÄUTODIDACT, what I had in mind wasn’t too fancy: a flat robotic sound, perhaps with a little delay to round it out.
First thing was deciding which waveform to use. This involved lots of speaking into the mic whilst dialling through different waveforms. In the end it came down to a choice between Sawtooth and Vox Wave. Vox Wave simulates a waveform similar to human vocal cords, but it sounded a little ‘too good’: too human, not enough robot. I went for the sawtooth.
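There’s a reason the sawtooth is a classic vocoder carrier: it contains every harmonic of the fundamental, so the vocoder’s filter bank has energy in every band to shape, whereas a purer tone leaves some bands empty. A small sketch (the frequencies and threshold are arbitrary choices for illustration) counts how many of the first eight harmonics carry audible energy:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                # one second, so FFT bins are 1 Hz apart
f0 = 110.0
saw = 2.0 * (f0 * t % 1.0) - 1.0      # naive sawtooth: harmonics fall off ~1/n
sine = np.sin(2 * np.pi * f0 * t)     # a single harmonic, for comparison

def harmonics_present(x, f0, n=8, thresh=0.01):
    """Count which of the first n harmonics exceed a small amplitude threshold."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return sum(spec[int(round(k * f0))] > thresh for k in range(1, n + 1))

harmonics_present(saw, f0)    # all 8 harmonics present
harmonics_present(sine, f0)   # only the fundamental
```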
You could almost stop there: you’ve got your modulator (voice) and your carrier (oscillator producing a sawtooth wave), and everything else is refinement. But the sound was still a bit basic.
I mixed in some noise (ie output from the microKORG’s noise generator) to make the sound a little edgier, and boosted the resonance of the filter a little to give the sound more character (too much made it sound like an alien).
Next was specifying how much of the signal was the modified oscillator and how much was my own voice. This one was easy: I wanted 100% vocoder, with 0% of my unmodified voice coming through. To give the sound a bit more ‘oomph’, I turned on distortion for the oscillator and the noise generator.
The final thing was to throw in some L/R delay (where the delay is output to left and right channels alternately). The song’s tempo is 120BPM, so I set the microKORG’s internal tempo to 120 and locked the vocoder’s delay to that. Then it was a case of twiddling the delay depth, which controls the volume of the delay, and the feedback amount (the number of echoes).
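The arithmetic behind tempo-synced delay is simple: at 120BPM a quarter note lasts 60 ÷ 120 = 0.5 seconds, and shorter note divisions scale accordingly. A tiny sketch (the helper name is my own, not a microKORG parameter):

```python
def delay_time_s(bpm, note_division=4):
    """Delay time in seconds for one note of the given division
    (4 = quarter note, 8 = eighth note) at the given tempo."""
    return (60.0 / bpm) * (4.0 / note_division)

quarter = delay_time_s(120)     # 0.5 s at the song's 120 BPM
eighth = delay_time_s(120, 8)   # 0.25 s
```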
The final result:
And in the context of the song from 0:32 – of course it’s also been through the Mini KP….