Recently, while looking for some audio-to-MIDI plug-ins, I came across a demo video in which someone uses a plug-in called ‘audiotomidi’. What they did reminded me of one of my Max/MSP assignments from my freshman year. Check out their work first:
So what they did was use their voice as the input, convert it to MIDI, and then trigger a VST instrument to play.
My project is very similar: I built a Max/MSP patch that receives the vocal signal, then builds different chords/arpeggios based on the user's settings. In the end, it triggers whatever VST instrument you have, and you can create your own a cappella.
And it is pretty simple, with only three main parts:
This is the A/D conversion part, which is based on the yin~ object offered by IRCAM. Basically, it tracks your vocal's pitch and converts the analog signal into a frequency in Hertz.
And this is the craziest, or dumbest, part: I split the frequency into several ranges, each mapped to its corresponding MIDI note.
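In the patch this mapping is done with a chain of hardcoded frequency ranges, but the same idea can be expressed with the standard frequency-to-MIDI formula. Here is a minimal Python sketch (the function name is mine, not from the patch); rounding to the nearest note is equivalent to giving each note a range of ±50 cents around its center frequency:

```python
import math

def freq_to_midi(freq_hz):
    """Map a frequency in Hz to the nearest MIDI note number.

    MIDI note 69 is A4 = 440 Hz, and each semitone is a factor
    of 2^(1/12). Rounding reproduces the "split into ranges"
    behavior: each note owns everything within +/- 50 cents.
    """
    return round(69 + 12 * math.log2(freq_hz / 440.0))

print(freq_to_midi(440.0))   # A4 -> 69
print(freq_to_midi(261.63))  # middle C -> 60
```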
Once we have the root note as a MIDI number, we can store a bunch of chord definitions in a coll object and build the correct interval relationships on top of the root.
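The coll lookup essentially stores semitone intervals per chord type and adds them to the root. A rough Python sketch of that idea (the chord table here is my own illustrative subset, not the exact contents of the patch's coll):

```python
# Semitone intervals above the root for a few common chord types,
# mirroring what the patch stores in a coll object.
CHORDS = {
    "maj":  [0, 4, 7],
    "min":  [0, 3, 7],
    "maj7": [0, 4, 7, 11],
    "dom7": [0, 4, 7, 10],
}

def build_chord(root_midi, chord_name):
    """Return the MIDI note numbers of a chord built on the given root."""
    return [root_midi + interval for interval in CHORDS[chord_name]]

print(build_chord(60, "min"))  # C minor -> [60, 63, 67]
```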
In the end, set up the delay and velocity for each note, or just randomize them, and then route the notes into the VST instrument.
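The randomization step can be sketched like this in Python (the function and parameter names are hypothetical, chosen just to illustrate the delay/velocity idea): each chord note gets a random onset delay, which creates an arpeggio-like feel, and a random velocity, before being sent on to the instrument.

```python
import random

def humanize(notes, max_delay_ms=120, vel_range=(60, 110)):
    """Give each note a random onset delay and a random velocity,
    roughly what the patch does before routing notes to the VST."""
    return [
        {"note": n,
         "delay_ms": random.randint(0, max_delay_ms),
         "velocity": random.randint(*vel_range)}
        for n in notes
    ]

for event in humanize([60, 63, 67]):
    print(event)
```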
So here is a demo video showing that this patch can do something similar to what that person did, but in real time, which makes it better suited for live performance. (If you really want to perform with it, make sure to use an audio interface, and spend some time on the session/VST instrument arrangement.)