minor optimizations

2026-04-24 11:31:37 -04:00
parent c57126f4b5
commit 72cfbe841f
25 changed files with 12296 additions and 20491 deletions

Synth School: Part 1
Analogue Oscillators, Filters & LFOs
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published June 1997
After all the political talk in recent years about a return to traditional values, Paul Wiffen kicks off a major new series on synth programming by arguing the Analogue Fundamentalist Party case: that an understanding of the basic elements of traditional analogue synths is essential to fully exploit the various types of synthesis available today. This is the first article in a 12-part series.
Back when I wrote my first series on the basics of synthesis (longer ago than I care to remember), there was really no need to mention the word analogue, as it was the only type of synthesis commercially available (except for the odd bit of harmonic addition on prohibitively expensive computer systems like the Fairlight). As a result, anyone who knew anything at all about synthesis would be familiar with the basic building blocks of waveforms, filters and envelopes. This meant that when a new synthesis technique came along, some of the elements in it would be familiar, even if it didn't use all the same components to build up a sound. So FM synthesis (or Frequency Modulation, which will be covered in a future part of this series), for example, might not have filters, but it used sine waves and envelopes. Sampling might not use regular waveforms, but most samplers had filters and envelopes on them, and so on.
While I may sound like a right-wing politician attempting to claim the moral high ground, I still maintain that anyone who has a good grounding in the principles of analogue synthesis will not take long to get their heads around any new system that comes along, simply because several of its elements will probably be familiar to them, so all they need to do is spot how the unfamiliar elements are used to do the job of the missing analogue stages.
While experienced analogue synthesists can get into the right ballpark when imitating acoustic sounds, it's clear that this way lies considerable frustration.
Five years ago, such an insistence on starting with analogue might have been greeted with scorn, as few people were using analogue synths for music making. Now, though, whether through the use of original analogue instruments bought on the second-hand market, authentic recreations of the way the sound was made (like the Novation Bass Station), hurriedly adapted PCM-based systems like the Yamaha CS1x and Roland MC303, or even the computational muscle of DSP-based physical models of analogue such as the Korg Prophecy and recently released Roland JP8000 and Yamaha AN1x, the analogue sound and programming style are back in a big way. Perhaps it's the pre-millennium retro vein in all forms of music, from techno to straight rock. But it does mean that starting this series with analogue makes me hipper now than I've ever been accused of being in my life. Even if analogue synthesis hadn't made a huge comeback, I'd still be starting with it. I just wouldn't look so cool!
Another side benefit is that those of you buying brand new physical models of analogue synthesis (three of which will ship this year to swell the growing numbers already out there) need not worry about how the sound is achieved internally (any more than those of you using the genuine article or PCM-based copies). The controls still use the same terminology, the very terminology we will be exploring in these first few articles.
Subtraction: That's The Name Of The Game
Most other forms of synthesis are additive in nature: they take simple elements and add them together to build up the more complex sounds which our ears find interesting. The most obvious example of this is additive synthesis, which takes sine waves (possibly the most uninteresting sound of all) and sums them to imitate the harmonic series found in nature. Even FM synthesis, which multiplies sine waves together in an attempt to generate complex waveforms more quickly, tends to add several of these products together to get to its more effective results (which is why 6-operator FM sounds better than 4-operator FM, because you can add more products together).
Analogue, or subtractive, synthesis (as it is sometimes called in academic circles) does the opposite. It starts with more than you need, and you take away bits until you're left with the sound you want. This makes it more analogous to sculpture (where the sculptor knocks lumps off a big block until the shape he wants is revealed) than painting (where the image is built up from individual brush strokes).
To continue the sculpture analogy, where do we get our sonic block of stone and what form does the audio chisel take? Let's take the block first. If we're to remove frequencies from sound, presumably we need to start with a sound that has more frequencies than we need. There are two possibilities here. Firstly, we could take a sound with all the audible frequencies contained in it, and many analogue synths do have the ability to generate this sound, the technical term for which is... noise. It should be reassuring to any absolute beginners that they are already familiar with this term, even if they would normally associate it with a generic description of non-musical sound. Indeed, if you just listen to the noise setting on an analogue synth, without any filtering or enveloping, it is a fairly unmusical sound. Usually referred to as white noise (meaning that it contains all frequencies at equal volume), it takes a fairly severe amount of processing to remove enough frequencies from this to leave you with a musical sound (although, as we will see next month, it can be done, by using resonance).
It is the variable nature of the pulse wave which makes it my favourite as a starting point for analogue synth sounds.
Fortunately, there are other sound sources which contain lots of frequencies suitable for selective removal, but which also sound more musical to begin with. Although the sine wave we mentioned earlier in conjunction with additive and FM synthesis contains only a single frequency, other commonly recognised regular waveforms (square, sawtooth and pulse) contain whole families of frequencies in mathematical relationships to each other, known as harmonic series. In lay terms, this means that the human ear perceives them as a single pitch whose tonal quality is determined by the exact mix of related harmonics present.
This is because these harmonic series are naturally occurring and are produced by traditional 'pitched' musical instruments. What we hear as a single note from a flute, piano or violin is actually a whole series of sounds which are related to each other. The actual pitch we hear in musical sounds is known as the fundamental and this is usually the lowest and loudest frequency present (although not necessarily so). The other frequencies present in the aforementioned waveforms (and many natural sounds) are all multiples of the fundamental's frequency (two times, three times, four times, and so on). These are referred to as the second harmonic, third harmonic and so on (the fundamental itself counting as the first), making up the harmonic series. Guitar (and other stringed instrument) players actually use these harmonics as a part of their repertoire of timbres. By touching the string halfway, or a third or quarter of the way, along its length they cause it to vibrate in two, three or four sections, at twice, three times or four times the frequency respectively. Many wind instruments achieve the higher octaves in their range by a similar technique, blowing harder to split the vibrating column of air into sections. Indeed, brass instruments before the introduction of valves could only play the pitches in the harmonic series (hence the reason why standard military bugle calls are variations on the higher harmonics).
Of course, all these acoustically produced 'harmonics' actually contain their own harmonic series, from the new fundamental that has replaced the original. Few sounds in nature consist of a single frequency, as the energy used to create any particular frequency usually spills over into creating its related harmonics at lower volumes. The closest sound you might get to a sine wave produced acoustically is wetting your finger and running it round the rim of a wine glass till it begins to resonate (a great way to liven up a dull dinner party).
Unfortunately, the 'pure' sound of a sine soon bores the ear (unless combined with others by additive synthesis or FM; see future episodes), so what are your waveform choices if you want a whole raft of related frequencies instead of a single one?
Meet The Candidates
Fortunately for subtractive synthesis, waveforms such as sawtooth, square and pulse, which are easily produced by electronic oscillators, contain a whole heap of harmonics which determine their characteristic timbres. Indeed, the sawtooth waveform (so called because the slow rise/fast fall of the cycle when traced out resembles the teeth of a saw) contains all the harmonics within the human hearing range, although not in the same quantities. In fact, the loudness of each harmonic is inversely proportional to its frequency. So the harmonic with double the frequency of the fundamental is at half the volume, three times the frequency is at a third the volume, and so on. This makes this waveform ideal for producing fuller sounds, as it contains all the frequencies related to the fundamental.
The square wave (so called because the trace it makes looks like square blocks, or the tops of castle walls) is the one electronic waveform which has always produced a murmur of recognition on first hearing. It contains only the odd-numbered harmonics (ie. every other one), again in inverse proportion to their frequency, and the 'hollow' sound this produces is extremely reminiscent of the clarinet. Presumably this is because the resonant characteristics of the body of the clarinet accentuate the odd-numbered harmonics and mute the even-numbered harmonics. The patch charts supplied with old analogue synths always had a clarinet patch (square wave with wide open filter), and this was also a common preset when technology became available to recall synth settings instantly.
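If you want to hear these recipes for yourself away from a synth, here is a minimal sketch (not from the article) that builds a sawtooth and a square wave additively, using exactly the proportions described above. The pitch, sample rate and number of harmonics are arbitrary illustrative choices.

import numpy as np

SAMPLE_RATE = 44100
FUNDAMENTAL = 110.0          # an arbitrary test pitch (A2)
N_HARMONICS = 40             # enough partials to show the characteristic shapes

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second of time points

def additive(harmonics):
    # Sum sine waves at multiples of the fundamental.
    # `harmonics` is a list of (harmonic_number, amplitude) pairs.
    wave = np.zeros_like(t)
    for n, amp in harmonics:
        wave += amp * np.sin(2 * np.pi * FUNDAMENTAL * n * t)
    return wave

# Sawtooth: every harmonic n present, at amplitude 1/n.
saw = additive([(n, 1.0 / n) for n in range(1, N_HARMONICS + 1)])

# Square: odd-numbered harmonics only, again at amplitude 1/n.
square = additive([(n, 1.0 / n) for n in range(1, N_HARMONICS + 1) if n % 2 == 1])

Write either array out to a WAV file (or plot a cycle of it) and you'll see the saw-tooth and castle-wall shapes emerge as more harmonics are added.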
The other common waveform on analogue synths is the pulse wave and this is a bit of a chameleon. You can't describe its timbre, nor even list the waveform's harmonic content, as this varies with the width of the pulse. Yes folks, unlike the staid old sawtooth waveform, which is unvarying in its harmonic content, you can change the harmonics and their proportion in the dynamic go-ahead pulse waveform by changing the width of the pulse. Indeed, the aforementioned square wave is actually a special case pulse wave, where the negative and positive sections of the cycle are of equal length.
It is the variable nature of the pulse wave which makes it my favourite as a starting point for analogue synth sounds. This enduring love affair started on the day when I twisted the width control on a Wasp for the first time with a pulse waveform selected on the oscillators (before that I had assumed that the width control must be broken, because it didn't seem to do anything). The moving harmonic spectrum which greeted my ears really transformed my interest in synthesis from a cerebral one to an emotional one. In that brief sweep many different harmonic spectra came and went, and I realised that analogue synthesis could hold as much sonic interest as any naturally produced sound. While the human ear cannot always pick out the static presence of particular harmonics, it's extremely sensitive to changes in their levels (as we'll see when we come to additive synthesis in a later article). The fantastic thing about the pulse wave is that not only are there thousands of variations of harmonic spectra available as starting points for sounds, at the tweak of the width knob, but also, most analogue synths will let you automate the moving of the pulse width. This technique is referred to, unsurprisingly, as Pulse Width Modulation, or PWM for short.
Why should we limit ourselves on electronic instruments to things that occur in the real world?
The width parameter actually refers to the duration of the positive component in proportion to that of the complete cycle. So a 10% pulse wave means that the positive segment only lasts one tenth of the cycle length before dropping to the negative segment. A 50% pulse wave (aka square wave) means that the positive and negative segments are of the same duration.
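The relationship between width and harmonic content can be written down quite compactly: for a pulse with duty cycle d, the level of the nth harmonic is proportional to sin(n x pi x d)/n. Here is a minimal sketch (not from the article) that prints those levels for a few widths; note how the even harmonics vanish at 50%, giving the square wave.

import math

def pulse_harmonic_levels(duty_cycle, n_harmonics=8):
    # Relative amplitudes of the first few harmonics of a pulse wave.
    return [abs(math.sin(n * math.pi * duty_cycle)) / n
            for n in range(1, n_harmonics + 1)]

for width in (0.10, 0.30, 0.50, 0.70):
    levels = ", ".join(f"{a:.2f}" for a in pulse_harmonic_levels(width))
    print(f"{int(round(width * 100))}% pulse: {levels}")
# The 30% and 70% rows come out identical, which is why some later synths
# only bother sweeping the width control from around 5% up to 50%.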
We've already looked at the harmonic content of the square wave (all the odd harmonics decrease in volume as they go up, in case you weren't paying attention earlier) and whilst it's not feasible to describe the spectra at every possible width setting, the fundamental and lower harmonics become weaker the further from the central setting you venture. This leads to a bright but thin sound which, at the extremes, starts to sound as if it is moving up several octaves before disappearing altogether. Some analogue synths prevent this from happening, by restricting the width control to between 5% and 95% or even 10% and 90%, providing a sort of set of 'training wheels' for fledgling synthesists, but on other machines you can completely silence the oscillator by turning the width control too far.
Later analogue synths (usually those with presets) feature width knobs which only vary between 0 or 5% at one end of their range and 50% (square) at the other, as their designers started listening to the result and noticed that a 30% pulse wave sounds the same as a 70% one. So if the analogue synth you have access to doesn't have graphics or numbers next to the width knob to indicate the width at that position, try the following procedure to find out which range you have. Move the width knob (with pulse wave selected on at least one oscillator, unless you want to repeat my error of all those years ago) until you hear the signature 'hollow' sound of the square wave (you may even have a preset square wave to compare it to). This will probably be either the central position or the maximum: if it's central, the knob covers the full sweep of widths either side of square; if it's at the maximum, you have the 0-50% type.
Oscillators: The Other Use
Pulse width modulation, the automatic movement of pulse width by the synth in a repeated cycle, is as good a way as any of being introduced to the other type of oscillator used in analogue synthesis: the Low Frequency Oscillator. The LFO is one of the many tools first invented for analogue synthesis which have found their way into other synthesis styles, just because they're so useful. The low frequency at which this type of oscillator cycles is below the range of human hearing, so it's no use routing an LFO through the audio pathway of the synth. Instead we use an LFO to control the regular, repeated change of settings on the synth (the jargon term for this is modulation, because 'change' would just be too easy to understand!). The LFO can be routed to control, amongst other things, the pitch of the audio oscillators (for vibrato), or as here, the width of the pulse wave. Hardy souls may prefer to move the width control for their pulse wave themselves, but for the busy player (using all 10 fingers on the keyboard) and the lazy (more my style), LFO control of PWM (aren't all these three-letter abbreviations great?) is the best thing since sliced bread (no, actually, it's more satisfying than that!).
Minimoog Model D: The third audio oscillator (a luxury few analogue synths boast, whatever their price point) can be set to operate as an LFO.
On the Minimoog and Memorymoog, the third audio oscillator (a luxury few analogue synths boast, whatever their price point) can be set to operate as an LFO, but this example of switching between audio device and modulation device is fairly rare. Normally audio oscillators are audio oscillators and LFOs are LFOs and ne'er the twain shall meet. Audio oscillators are usually labelled as OSC 1, OSC 2, and so on, and LFOs as LFO 1, LFO 2, and so on. The waveforms these low-frequency oscillators can adopt vary slightly from those used by their audio cousins. The sine wave, for example, often eschewed by analogue audio oscillators because of its rather thin, single-frequency sound, really comes into its own on an LFO because of its gentle undulating nature.
Most of the time you want LFO changes to be gradual and without sudden jumps. Sudden or instant movement of parameters tends to introduce an 'event' into a sound which the ear often perceives as a new note. Gradual changes, such as those brought about by the smooth cycle of a sine wave, maintain an interest in the sound without demanding the full attention of the listener, as abrupt changes do. Thus it is that the classic pulse width modulation effect uses a sine wave on a slow LFO to vary the width setting. Particularly on low bass notes or string ensemble sounds, this makes for the most sensual sound an analogue synth can produce, with the slow ebb and flow of the harmonic content making for a subtle but intoxicating effect. The best-known example of this is the original Moog Taurus pedals, which featured a special preset with this effect hardwired in. Beloved of many a prog-rock band, this sound has yet to re-surface in the analogue vocabulary of dance music, probably because there is more interest in the real sub-bass end, which is somewhat concealed by the PWM movement higher up the harmonic series. However, anyone who has heard Taurus pedals through a big arena PA cannot doubt for a second that the real low end is definitely present. If you want to try out this effect for yourself at home, it's fairly simple to set up.
Route a slow LFO (no more than one cycle per second) to the pulse width of your oscillator, and crank the depth of the modulation up.
Play a low note and you should hear a continuous movement in the sound as the harmonics come and go.
If you want to use the sound higher up, you may find the effect a little lost, as many of the harmonics will have moved out of the audio range, but you can compensate for this by speeding up the LFO a little (not too fast, though, or it can end up sounding out of tune).
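For those without an analogue synth to hand, here is a minimal sketch (not from the article) of that recipe in code: a slow sine LFO sweeping the pulse width around 50%. The note, LFO rate and depth are illustrative values; the clamp keeps the width away from the extremes where the sound would vanish, as discussed below.

import numpy as np

SAMPLE_RATE = 44100
NOTE_FREQ = 55.0        # a low bass note, where the effect is most obvious
LFO_FREQ = 0.5          # no more than one cycle per second, as suggested above
LFO_DEPTH = 0.35        # modulation depth; much more and the width hits the extremes

t = np.arange(4 * SAMPLE_RATE) / SAMPLE_RATE            # four seconds of output
width = 0.5 + LFO_DEPTH * np.sin(2 * np.pi * LFO_FREQ * t)
width = np.clip(width, 0.05, 0.95)                       # the 'training wheels' limit

phase = (NOTE_FREQ * t) % 1.0                            # oscillator phase, 0..1
pulse = np.where(phase < width, 1.0, -1.0)               # +1 during the 'positive' segment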
While I would always rather have other types available, if your analogue synth only has low-pass filtering, you will still be able to get the majority of 'standard' analogue sounds
One word of caution when setting up your own PWM effect: just as you can set the static width of a pulse wave to be so narrow that the sound disappears altogether, so an LFO set to too great a modulation depth can take the pulse width in and out of the same territory, so that the sound temporarily disappears. If you hear this happening, just back off the LFO depth a little. Sometimes this can happen just once every few minutes, but in that case, you can be sure it will happen right in the middle of your best take or the highlight of your solo. Here's one solution I've found which avoids the need to decrease the amount of PWM in your sound.
If both slightly detuned oscillators of an analogue synth are set to pulse wave and their widths are modulated by different LFOs, set to slightly different speeds, then not only does the richness of the PWM effect increase as the two shifting harmonic patterns interact, but the chances of your sound going AWOL at the critical moment are slimmer than those of winning the lottery jackpot. Of course, there are some who might describe this technique as over-egging the pudding (usually insensitive producers trying to get some other instrument to fight its way past my overblown synth sound), but I've never subscribed to the 'less is more' philosophy (being more of a 'too much is never enough' kind of guy!).
Other uses for the LFO, such as vibrato (modulating the pitch) and tremolo (modulating the volume) are also best used with the sine wave settings (indeed some synths don't offer a choice, their LFO waveform being fixed to sine wave). Its near-relative, the triangle wave, sometimes available as an alternative, is subtly different, making the variations linear instead of exponential (straight up and down instead of slowing towards the extremes before going back to the centre). If you've got both on your synth, see if you can hear the difference. Even with a slow LFO speed, it's a subtlety easily lost in a mix. If you don't have it, don't feel too hard done by. It's a bit like New Labour and the Conservatives: 9 out of 10 voters can't tell the difference.
Filter Tips
Once you've selected the waveforms that give you the mix of harmonic content you want to represent your virgin sculptor's block, you need the sonic equivalent of a hammer and chisel to 'chip away' the unwanted bits. This is the filter which, as its name implies, removes unwanted frequencies and also allows you to boost certain frequencies if required (a capability not implied in its name, admittedly). Which frequencies are removed and which are left depends on the type of filter used. Most analogue synths only have one filter per voice (except modular designs, of course) and a good many of those are limited to the low-pass type. Others may have a switchable type, but even then it will be the low-pass setting which gets most use.
The low-pass filter attenuates (lowers the volume of) the frequencies above its cutoff point (the frequency at which it is set to work either manually or automatically). It lets frequencies lower than this cutoff pass through to the audio output (hence its name). The reason why this is the most commonly used type of filter is that for most musical purposes we need to hear the fundamental frequency of the oscillator, and a low-pass filter will not remove this until it is closed down nearly all the way (ie. until the cutoff frequency is moved to the bottom of its range). So even when some pretty drastic filtering is going on, we can still hear the fundamental pitch. That's why many manufacturers decided it was the only filter type needed. While I would always rather have other types available, if your analogue synth only has low-pass filtering, you will still be able to get the majority of 'standard' analogue sounds. It may limit your ability to venture into the weird and wonderful, but it shouldn't restrict your mainstream analogue palette too much.
The cutoff frequency of the filter is perhaps a slightly misleading term, as it actually refers to the frequency at which the filter starts to do its job of attenuation. However, analogue filters can only gradually reduce frequencies in proportion to the distance from the cutoff. Slope-off might actually be a more accurate term, if it didn't imply someone leaving work early. Indeed the measurement of how quickly a filter attenuates is known as the slope or gradient of the filter. On conventional analogue synths (and many modern ones) this is either 12 or 24dB per octave, so each time the frequency doubles, anything at that frequency is reduced by another 12 or 24dB.
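Put as arithmetic: the attenuation at a given frequency is the slope multiplied by the number of octaves that frequency sits above the cutoff. A minimal sketch (not from the article) of that idealised calculation:

import math

def lowpass_attenuation_db(freq_hz, cutoff_hz, slope_db_per_octave=24):
    # Approximate attenuation for an ideal filter slope; real analogue filters
    # round off gently near the cutoff rather than switching on abruptly.
    if freq_hz <= cutoff_hz:
        return 0.0                      # below the cutoff: passed unchanged
    octaves_above = math.log2(freq_hz / cutoff_hz)
    return slope_db_per_octave * octaves_above

print(lowpass_attenuation_db(2000, 1000))                            # one octave above: 24dB down
print(lowpass_attenuation_db(4000, 1000, slope_db_per_octave=12))    # two octaves above a 12dB filter: 24dB down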
If you're interested in making new sounds, you'd do well to look for analogue synths with high-pass and band-pass filters.
The characteristics of a filter change subtly, depending on the degree of attenuation it offers. Aficionados of the more drastic slopes (those on the Minimoog or ARP Odyssey, for example) praise the punchiness of the resulting sound, whilst those who favour the gentler gradient filters (on Roland instruments, for example) speak of a smoother, rounder sound. It's all a matter of taste, and you'll have to compare analogue synths to see which suits yours, if these vague descriptions don't immediately strike a chord with you.
You may come across another way of referring to a filter's attenuation capabilities: the terms 2-pole and 4-pole. These refer to the number of circuits the filters originally used to do the job, each pole representing 6dB per octave of attenuation. Don't worry too much about this, though (if you're buying second-hand the information may not even be available); just listen to the sound as you move the filter about and see if you like it. Those who need scientific accuracy in the description of their filters may do better to look at some more modern DSP models of filters, which are very precisely documented.
By this point the more perceptive of the uninitiated will be saying to themselves "Never mind all this dB/oct stuff; why use waveforms full of harmonics if all you're going to do is take half of them out again?" Why indeed? The answer lies in the fact that the filter's cutoff frequency can be controlled in real time, either manually or via devices like the LFO (which we have already looked at) or the envelopes (which we will cover next month, as they're used in all types of synthesis). So you can start with all the frequencies present but close down the filter quickly, taking out progressively more frequencies as you go, so that the tail end of the sound is much duller, lacking the top end. This is a fair approximation of how plucked strings act in the real world. As the string is struck, much of the harmonic series is generated, giving a very bright attack. But as the energy present in the system dissipates, it's the higher frequencies which die away fastest, leaving the lower harmonics to ring on until only the fundamental is left.
Again, while the imitative role of analogue synthesis is much reduced, the ear still gravitates to sounds which, although not exactly the same as naturally occurring sounds, nevertheless have some of the same characteristics. So a previously unheard bright sound dying away is more easily assimilated by the ear, as it shares the same overall timbral characteristics as more familiar sounds. In a similar way, sounds whose harmonic content stays roughly the same, or rises and falls more slowly as a means of expression, are also familiar, as the ear recognises these characteristics from bowed strings and wind instruments. Here, too, the player can make a note last as long as (s)he wants (provided they have the stamina) and bow/blow harder or softer for expression. The sound which starts dull and gets brighter/louder is a much rarer phenomenon in nature, and as a result synth sounds like this have that 'backwards tape' character.
We'll look in detail at how envelopes shape these timbral (and other) variations in the sound next month, but to conclude this article, I'd just like to acquaint you with the rarer types of filter, as some of them are in danger of extinction (notwithstanding some brave preservation work being done by the DSP engineers at Emu Systems on the Emulator Operating System). Whilst they will never help you in your search for piano, strings and brass, they are creative tools which should appeal to those interested in less run-of-the-mill sound design (see Figure 3, which illustrates the three types of filter you're likely to encounter).
The high-pass filter does the opposite of its more common brother and removes the frequencies below the cutoff point. So a sweep of the filter in the upwards direction will remove the fundamental first and then the lower harmonics, leaving the upper harmonics sounding till last. Again this is a fairly unnatural situation, and may sound strange to the ears, but why should we limit ourselves on electronic instruments to things that occur in the real world? Why not do things which are unusual or impossible in nature, and if we like them, use them? Let's face it: most of the current uses of sampling are hardly naturalistic!
The band-pass filter is a combination of the operation of low-pass and high-pass, in that it attenuates frequencies both above and below the cutoff (leaving only those around the actual cutoff frequency). In some analogue synths band-pass operation was actually achieved by running low-pass and high-pass filters in series (usually splitting the available poles of filtering between them). Some of the more interesting and unique filter configurations were based on this principle. Several ancient Korg solo synths had a great device, called a Traveller, which consisted of two sliders, one of which controlled the low-pass cutoff and the other the high-pass cutoff. Although they could be moved apart to widen the frequencies allowed through, they had a physical restraint to prevent the high-pass cutoff being pushed above the low-pass cutoff, which would have filtered out all frequencies, leaving no sound.
The OSCar had a similar system, but in band-pass mode the two cutoff frequencies were swept in tandem from one knob (with two poles of filtering on each, instead of the 4-pole filtering on high and low pass), with a second knob, labelled Separation, which governed the distance between them. This allowed some interesting vocal effects, as this is a fairly crude model of the way the human vocal system works (those interested in this type of thing should look at Emu's formant filtering on Morpheus, UltraProteus and their samplers, as it is a much more sophisticated version of the same principle!).
However, most band-pass filters, when available at all, did not offer this degree of control. The single cutoff parameter applied to both high-pass and low-pass elements, and frequencies either side were attenuated equally and immediately. Its principal effect was to make the waveform sound as if it were coming down a telephone line (as an analogue phone cannot reproduce lows or highs, it can be considered a primitive band-pass filter). But clever use of even simple band-pass filters still produces interesting, if more esoteric, timbral changes. These kinds of facilities are what fascinate me most about analogue synthesis, and if you're interested in making new sounds rather than just imitating acoustic ones, you'd do well to look for analogue synths with high-pass and band-pass filtering on them.
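As a rough illustration of the 'telephone line' idea, here is a minimal sketch (not from the article, and far cruder than any real analogue filter model): a one-pole high-pass running into a one-pole low-pass, so only the band between roughly 300Hz and 3kHz survives.

import math
import numpy as np

SAMPLE_RATE = 44100

def one_pole_lowpass(signal, cutoff_hz):
    a = math.exp(-2 * math.pi * cutoff_hz / SAMPLE_RATE)
    out = np.zeros_like(signal)
    prev = 0.0
    for i, x in enumerate(signal):
        prev = (1 - a) * x + a * prev     # smooth the signal: the highs are lost
        out[i] = prev
    return out

def one_pole_highpass(signal, cutoff_hz):
    # Whatever the low-pass would keep is removed instead.
    return signal - one_pole_lowpass(signal, cutoff_hz)

def telephone(signal):
    # High-pass at 300Hz, then low-pass at 3kHz: a crude band-pass.
    return one_pole_lowpass(one_pole_highpass(signal, 300.0), 3000.0)

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
saw = ((110.0 * t) % 1.0) * 2.0 - 1.0     # a naive sawtooth at 110Hz to filter
phoned = telephone(saw)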
Next month I'll look at how resonance accentuates a filter's action, and I'll cover the way in which an envelope works and how it can be used to shape a sound's pitch, volume and harmonic content in real time. This is a staple analogue technique, but its application is universal to programming, as it's a standard tool in any type of synthesis. Until then, if you have an analogue synth, experiment with manual tweaking of filtering (especially quick movements of cutoff), as you'll understand the need for automatic control via envelopes better when you've tried to do things manually.
Which Waveform?
When I used to do my Adult Education classes on Electronic Music for the late lamented Greater London Council, the question most asked was "Which waveform do I use to make a flute/violin/piano sound?" (Delete as applicable). This was, of course, before sampling released analogue synthesizers from the tyranny of having to imitate acoustic sounds, so the first instinct was to try and make recognisable timbres. Whilst there are some immediately noticeable resemblances (the square wave, as mentioned in the body of this article, sounds a lot like a clarinet), and experienced analogue synthesists can get into the right ballpark when imitating acoustic sounds, it's clear that these days that way lies considerable frustration. After all, on a PCM synth, you can dial up a multisound actually sampled from the instrument you want to imitate. In the same way that the invention of photography changed forever the other visual arts, giving them a more interpretive, abstract role, sampling and PCM ROM have provided a short-cut to the slavish reproduction of acoustic instruments. Having said that, sometimes an 'analogue' of an acoustic sound can bring a breath of fresh air to a track.
Analogue Waveforms: A Pictorial Representation
SINE WAVE: contains fundamental pitch only; main use in analogue synthesis is for LFO modulation.
TRIANGLE WAVE: contains the fundamental and a few weak odd harmonics, far quieter than the square wave's. Normally only found in analogue as variant on sine wave for LFOs.
SQUARE WAVE: contains all the odd-numbered harmonics in inverse proportion to their number in the harmonic series.
PULSE WAVE: contains differing harmonic levels depending on the exact width of the pulse.
PULSE WIDTH MODULATION: moves through all the harmonic profiles of the various pulse widths.
SAWTOOTH: contains all the harmonics in inverse proportion to their number in the harmonic series.
RISING SAWTOOTH: only differentiated on LFOs.
FALLING SAWTOOTH: only differentiated on LFOs.
The Rise & Fall Of The Sawtooth
Rising and/or falling sawtooth waves often appear on LFOs and, while there would be no change in harmonic content between these two on an audio oscillator, on an LFO there is a world of difference. One gives you events in the sound with a sharp attack and slow decay (the falling sawtooth), whereas the other gives events with a slow attack and fast decay (rising). The falling sawtooth is probably more useful, as it can create rhythmic elements with volume, tone or pitch which can sound like a repeated note. These days, however, you are probably better off doing this using a repeated envelope, arpeggiator or sequencer, unless you have the fairly rare facility of sync'ing the LFO to your track. The rising sawtooth usually tends to sound like something recorded onto tape backwards and is included on more exhaustively equipped analogue synths for completeness rather than for practical musical applications.

Synth School: Part 10
Modelling Electric Instruments
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published August 1998
The Technics WSA1; its excellent electric piano models attracted many traditional keyboard players.
Last month, Paul Wiffen looked at how virtual synthesis can emulate analogue synths whilst going beyond their hardware-based limitations. Now he looks at its applications for imitating and exceeding older instruments such as electric piano and organ. This is the 10th article in a 12-part series.
As I explained last month, virtual synthesis consists, in principle, of recreating what happens inside a 'real world' instrument in the ethereal domain of Digital Signal Processing. The technology involved can be viewed as an extrapolation of effects processing (with which we're all reasonably familiar) back to the point where the sound is first generated. It applies to real mechanical musical instruments as much as to electric or electronic ones where the sound is generated and modified by discrete analogue components.
Driver & Modifier
Korg's Z1 provides the musician with authentic Rhodes and Wurlitzer patches.
When modelling analogue synthesis, software engineers replace each element of the synthesis process (oscillators, filters, envelopes, and so on) with software routines which interact in exactly the same way as their analogue counterparts. The process of modelling earlier musical instruments is actually simpler, in theory, as it separates the process into just two sections, although the implementation of each of these sections may well be much more complex than the modelling of the individual elements of analogue synthesis. The technical terms for these two sections are 'driver' and 'modifier'.
In the simplest terms, you could think of the driver as what actually produces the sound in the first place <20> or, to be slightly more scientific, how the energy is initially put into the system. In the case of a guitar, the driver would be the finger or plectrum hitting the string; in a wind instrument, it would be the breath passing through the mouthpiece; in a violin, it would be the bow scraping across the string. These are all actions which produce the initial vibration, and as such they 'drive' the systems.
The modifier is fairly easy to comprehend: it is the part of the musical instrument which takes the initial vibration and changes it into what we recognise as the sound of that instrument. This would be the bridge and sound box on a guitar or violin, the tubing on a wind instrument, and so on.
But before we look at the modelling of traditional western classical orchestral instruments, which are somewhat complex sound-production systems, let's look at how the theory of driver and modifier is applied to some rather simpler electronic instruments which pre-date analogue synthesis.
Technics took an alternative route to giving modelling technology a reasonable amount of polyphony at an affordable price.
New Model Piano
Figure 1: Korg Z1 Wurlitzer patch.
Although the conventional acoustic piano is such a complex system that an authentic model would cost a fortune in DSP hardware, the somewhat simpler system developed for electric pianos is much more feasible to physically model, and as a result there have been some quite successful models of electric pianos, by Technics on the WSA1 (see the 'Higher Polyphony The Technics Way' box), and Korg on the Z1 (many of which have been bought by traditional keyboard players because of the authenticity of their Rhodes and Wurlitzer patches). The driver in the electric piano system is, of course, the hammer hitting the tine, a physical action. The modifier is the pickup placed over the tine to capture and amplify its sound, and this part of the process is electrical. It may be worth recalling at this point that in instruments referred to as electric (electric guitar or electric piano), the source of the sound is a physical event and the mechanism for amplifying it is electrical. In instruments referred to as electronic (such as the organ or synth), the entire sound-generation process is electrical.
Having decided what our driver is, in the case of the electric piano, we have to create a model of what happens when the hammer hits the tine. Clearly, there is a degree of timbral change in the initial sound, based on how hard the key is struck, so not only do we need to vary the volume of the sound but also to create a different harmonic series based on the velocity of the key-strike. The increase in the proportional level of higher harmonics on harder key-strikes is a fairly well documented phenomenon which conforms to the natural increase in brightness which many musical systems exhibit when more energy is put in. This is because higher harmonics require more energy to generate at a given volume (because there are more cycles per second), so when there's less energy present in the system, the amount converted into higher frequencies is reduced disproportionately. This not only explains why a low-velocity key-strike produces a duller sound, but also why the initial strike produces the brightest point in the sound, after which the sound quickly becomes duller. An electric piano sound very quickly approximates to a sine wave at the fundamental frequency of the note. This is fairly standard stuff and will not cause too many problems for any software DSP engineer worth his salt.
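To make the idea concrete, here is a minimal sketch (emphatically not Korg's or Technics' actual algorithm) of the behaviour just described: a harder key-strike tilts energy towards the higher harmonics, and every harmonic then decays, the higher ones fastest, so the tone collapses towards a sine wave at the fundamental. All the constants are illustrative guesses.

import math

def tine_harmonic_level(harmonic_n, velocity, time_s):
    # Relative level of one harmonic of a struck tine.
    # velocity: 0.0 (softest) to 1.0 (hardest key-strike).
    strike_tilt = 1.0 / (harmonic_n ** (2.0 - velocity))   # harder strikes flatten the roll-off
    decay = math.exp(-time_s * harmonic_n * 3.0)           # higher harmonics die away faster
    return strike_tilt * decay

# At t=0 a hard strike is much brighter than a soft one...
print([round(tine_harmonic_level(n, 1.0, 0.0), 3) for n in range(1, 6)])
print([round(tine_harmonic_level(n, 0.2, 0.0), 3) for n in range(1, 6)])
# ...but a second later both have collapsed towards the fundamental alone.
print([round(tine_harmonic_level(n, 1.0, 1.0), 3) for n in range(1, 6)])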
Figure 1 shows the parameters for the Electric Piano Model in the Korg Z1. The settings were programmed by producer Martyn Phillips for a Wurlitzer sound. If you look at the Hammer parameters, you'll see that the Wurlitzer is fairly velocity sensitive (76, where 0 equates to no velocity sensitivity and 99 is incredibly velocity sensitive), but generates very little attack noise. Rhodes patches created with this model tend to have at least a setting of 35 for click, unless they're emulating the DynoMyRhodes electronics, in which case a setting of 75 is more appropriate.
The most interesting part of the electric piano model is the modifier. This is to be expected, as the driver part of the electric piano, the struck tine, is a very small, uninteresting sound (which is why it was easily covered by a sample in the Technics WSA1). The most successful electric pianos used a fair amount of electronic processing to turn this sound into something more interesting to the ear. Clearly, an in-depth analysis of how the sound is modified by such electronics is more the domain of the software engineer creating the model than the musician using the model to recreate his electric piano timbres. Indeed, many of the terms used for the parameters are drawn from electronic circuit design. However, each separate physical model tends to have one key parameter which leaves you in no doubt about the authenticity of the model (as you'll see next instalment, when I talk about Rosin Amount for bowed string and Embouchure for brass/reed instruments). In the case of Electric Piano models, this key parameter is clearly Pickup Position, which appears in both the Technics WSA1 and Korg Z1 electric piano models.
Anyone who owned a Rhodes or Wurlitzer piano in the '70s should remember the fashion for opening them up and individually adjusting the position of the pickup over the tine. It was a very time-consuming process, but was perhaps the best way of customising your sound, as it really did bring about major changes in the timbre of the instrument. At one extreme it was possible to achieve a very bright, thin sound which would cut through anything, while moving the pickup to the other end of its travel yielded a plummy, mellow sound (a bit like the difference between the bridge and neck pickups on an electric guitar).
The joy of physically-modelled electric pianos is not only that this Pickup Position parameter allows you to change the apparent pickup position without all that tedious mucking about inside the instrument with a screwdriver: you can also do it in real time, while you're playing. On the Technics WSA1, pickup position is available on the unsprung mod wheel, while most electric piano patches on the Korg Z1 have the pickup position mapped to the Y component of the X-Y pad. This means that in both cases you can fiddle with pickup position until you get the sound you like and then leave it there (using the X-Y Hold switch on the Z1).
Organ Transplant
Figure 2: Korg Z1 Jazz Organ patch and effect settings.
Many people are familiar with the fact that the electronic organ works as a sort of primitive additive synthesizer. Drawbars control the level of a series of tone wheels, each of which (in theory, at least) should produce a sine wave representing one of the harmonics in the natural series. These form the driver component of the system, with the rotation of the tone wheels being the original source of the sonic energy in the system. This, of course, dates back to how pipe organs (perhaps the first additive synthesizers) changed the timbre of the sound by adding together pipes of related pitches to create a fuller sound. Electronic organs had as many as 10 drawbars, which gave the ability to mix together the lower pitches in the harmonic series to create different registrations (the latter is originally pipe organ terminology, referring to a series of stops for each of the sets of pipes which were either in or out, ie. on or off). Nowadays, we would probably refer to them as presets, as they essentially change the timbre of the instrument.
This is rather a simplification of what happens inside the most enduring versions of the electronic organ, and we must not, of course, forget the major 'external processor' involved: the Leslie speaker, which modulated the organ sound, making it sonically more 'interesting'. As so often happened with early analogue applications of technology, the actual product departed from what it should have been according to its 'on paper' design, but was none the worse for that. Indeed, the organs which came closest to producing pure sine waves were the ones often referred to as 'cheesy' these days. The distortion produced in the classic Hammond sound, often a product of ageing tone wheels and abused circuitry, added greater harmonic complexity than simple harmonic addition ever could, often in a similar way to the complex but aurally pleasant distortion produced by guitar amps and distortion pedals. Clearly, a physical model of electronic organs which could only recreate the theoretically pure organ sound would only be of interest to those recreating kitsch '60s lounge music. So organ models need to recreate the more complex phenomena which resulted in the more enduring organ timbres.
The first instrument to use modelling technology to recreate electronic organ sounds was the Technics WSA1. This instrument does not use physical modelling in the purest sense of the term, as the basic source of most driver sounds is samples (see the 'Higher Polyphony The Technics Way' box for a more complete description of Technics 'acoustic modelling' technology). However, for electronic organ sounds, single-cycle waveforms could be added together to model how the basic organ timbre is built up using tone wheels at related frequencies.
In Organ mode, the WSA1's backlit LCD display changes to give a representation of drawbars, which can then be modified with the sliders next to them. This means that harmonic content can be changed in real time, just like in all those Keith Emerson solos (although I have yet to see the modulation parameter for routing virtual knives into the cabinet...).
On the Korg Z1, although the assignable knobs below the display can be set to control the level of up to five drawbars (or groups thereof), the way in which organs are modelled is slightly different. Each oscillator can have a different model loaded into it, but the Organ model only has three drawbars (although there are three different variations on a sine wave or triangle wave for each drawbar to control the level of). The best way to make a complex organ sound is therefore to switch both oscillators to the Organ model and then use each one to produce three different drawbar harmonics. The Sub Osc can also be used to produce the fundamental, so that the six drawbars can be set to higher harmonics. The Jazz Organ patch in Figure 2 demonstrates this very clearly: the Sub Osc is set to the fundamental (16' in classical pipe organ terms), Osc 1 is set to the second, sixth and twelfth harmonics (8', 2 2/3' and 1 1/3') and Osc 2 has two drawbars set to the eighth harmonic (2') and detuned slightly; the third drawbar is doubling up the fundamental.
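The drawbar principle is just additive synthesis with footages as the frequency labels: each footage sounds at (16 divided by the footage) times the 16' fundamental, and the drawbar setting is simply its level. A minimal sketch (not from the article) using the registration described above, with drawbar levels chosen purely for illustration:

import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

def drawbar_organ(fundamental_hz, drawbars):
    # `drawbars` maps a footage (in feet, relative to 16') to a 0..1 level.
    tone = np.zeros_like(t)
    for footage, level in drawbars.items():
        freq = fundamental_hz * (16.0 / footage)
        tone += level * np.sin(2 * np.pi * freq * t)
    return tone

jazz_organ = drawbar_organ(65.4, {   # C2 as the 16' fundamental
    16.0: 1.0,        # fundamental (the Sub Osc's job)
    8.0: 0.8,         # second harmonic
    8.0 / 3: 0.6,     # 2 2/3', the sixth harmonic
    4.0 / 3: 0.4,     # 1 1/3', the twelfth harmonic
    2.0: 0.5,         # 2', the eighth harmonic
})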
Leslie Fillip
Figure 3: Korg Z1 Pipe Organ patch and effect settings.
The other section of the Z1 you can see in Figure 2 is brought into play all the time for electric organ sounds: it's the rotary speaker effect algorithm. This gives the Leslie effect, which, as I mentioned earlier, is synonymous with enduring organ timbres. (If you're interested in how the Leslie cabinet works, or the history of Hammond organs, take a look at our 'Vital Organ' feature in the October 1997 issue of SOS). Indeed, where such organ sounds are concerned, the Leslie effect is the major part of the modifier, in that (apart from some distortion caused by knackered circuitry, key-clicks caused by worn contacts, and so on) it is the rotary effect which gives the sound its character and charm. This is where the line between physical modelling and DSP effects blurs to the point where one can be seen as part of the other. In fact, a physical modelling instrument which could not apply a rotary speaker effect could hardly be said to properly cover organ modelling. Fortunately, the Z1 and WSA1 (the only two synths which claim to cover organ modelling) both have effects algorithms for rotary speaker simulation.
As you can see from Figure 2, the proper modelling of a Leslie speaker includes parameters for the rate and acceleration of both the rotor and the horn, as well as for the distance and spread of the virtual microphone which is picking up the sound. Mod switch 2, just next to the X-Y pad on the Z1, is normally used to swap between the slow and fast rotation rates.
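The rate and acceleration parameters boil down to this: when you flip from slow to fast, each rotor glides towards its new target speed rather than jumping, and the light horn gets there sooner than the heavy bass rotor. A minimal sketch of that idea (not the Z1's actual algorithm; all figures are illustrative):

def rotor_speed(current_hz, target_hz, accel_hz_per_s, dt_s):
    # Move the rotation rate towards its target by at most accel * dt per step.
    step = accel_hz_per_s * dt_s
    if current_hz < target_hz:
        return min(current_hz + step, target_hz)
    return max(current_hz - step, target_hz)

horn, bass = 0.8, 0.7            # slow ('chorale') rates in revolutions per second
for _ in range(100):             # one second after switching to fast ('tremolo')
    horn = rotor_speed(horn, 6.8, accel_hz_per_s=8.0, dt_s=0.01)
    bass = rotor_speed(bass, 5.7, accel_hz_per_s=1.5, dt_s=0.01)
print(horn, bass)                # the horn has arrived; the bass rotor is still ramping up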
The Organ models on both the WSA1 and Z1 are not just restricted to the modelling of electronic instruments. Both are extremely adept at pipe organs of the ecclesiastical variety, although in both cases the rotary speaker is best eschewed in favour of the largest hall reverb available on the machine. Figure 3 shows a typical Classical Pipe Organ patch. You will notice that all modulations have been switched off and that the click component, so common in electronic organs, is also defeated. Then it simply remains to select the required footages (remembering, again, that the Sub Osc can be used to add in an extra footage at the bottom end) and give the hall reverb its largest possible setting.
Orchestral Manoeuvres
The Technics WSA1R rack modelling synth.
This move from the electronic to the acoustic world (albeit still within the digital domain) leads rather nicely into the remaining chapter on physical modelling, coming your way next month. I'll be looking at plucked string algorithms (which produce both acoustic and electric guitars, harpsichords, and other plucked string instruments, such as dulcimers and spinets), and at the three most widespread uses of physical modelling in the acoustic world: brass, reeds and bowed strings.
Although the balance of this piece has been based around Korg's MOSS system (with a small contribution from Technics' acoustic modelling), next time I'll be broadening the palette to include the Yamaha VL system in its many incarnations, including the cheapest physical modelling unit to date, the VL70M. Until then, see if you can lay your hands on a WSA1 or Z1 to try out some of the electric piano and organ sounds we've been looking at. If not, a Korg Prophecy will be good preparation for next month, as it also covers plucked strings, brass and reeds.
Higher Polyphony The Technics Way
The Technics KN5000 virtual drawbar acoustic modelling, first used on the WSA synths.
The main obstacle to the development of affordable physical modelling synths over the last five years has been the expense of the DSP hardware. The flood of polyphonic DSP-based machines which hit the market last year (Yamaha AN1x, Roland JP8000, Korg Z1, Nord Lead 2) was very much due to recent decreases in cost and increases in power of DSP chips. Before this price/performance breakthrough, companies working on the development of physical modelling, such as Yamaha and Korg, were forced to limit the polyphony of their early instruments (the VL1 and the Prophecy) to just one or two voices. This wasn't too much of a problem, as these instruments were principally designed to recreate the voicing of monophonic instruments such as brass and woodwind or lead and bass synths, and it allowed all the power available to be concentrated into a single powerful voice.
However, Technics took an alternative route to giving modelling technology a reasonable amount of polyphony at an affordable price. They realised that the greatest amount of DSP power was taken up by producing the driver, the original sound before the resonator modifies the harmonic content. By replacing a modelled (and therefore processor-intensive) driver with a PCM sample, they could save an enormous amount of processing power: power which could be ploughed back into increasing polyphony. As a result, when other modelling synths on the market were duophonic at best, the WSA1 had 32 voices of polyphony, a figure still not achieved by the most powerful current modelling synths. (With the optional 6-voice expansion, the Korg Z1 still only clocks in at 18 voices, for about the same retail price that the WSA1 had on its release in 1995.)
To make PCM samples work as drivers for a modelling synth (rather than as the more 'finished' sound you'd normally expect from the PCM sound sources in an S+S synth), Technics had to record the samples in as primitive a way as possible, removing as much of the resonator component as they could from the samples. So strings were miked as close to the string and as far away from the sound hole of violin or guitar as possible, while woodwind reeds were sampled without the resonating column component. This makes the raw samples in the WSA1 rather less polished and exciting than those in the average PCM-based synth.
A perfect example is the source samples for the electric pianos, which have much of the toy musical box about them when heard unmodified.
Fortunately, no-one is expected to listen to these raw samples as they were recorded. When the sampled driver is passed through the DSP resonator section, the acoustic modelling process recreates the same timbral and enveloping changes which take place inside the instrument once the initial sound has been triggered. So what's the advantage in this? Why not just use a sample which has the final sound of the instrument?
The answer to this lies in the increased expressivity which can be achieved by modifying the resonating component with real-time controls. The amount that can be done with 'finished' samples is fairly limited, especially with fairly crude 'analogue'-style filters and envelopes, which can make the sound brighter or duller, and end sooner or later, but can't make it fundamentally different. The harmonic variation which modelled modifiers can introduce is much more akin to the kind of filtering offered by Emu's Z-plane synthesis, as represented by Morpheus and UltraProteus, for example. Modifications are at the same sort of level as multi-band graphic EQs with serious amounts of cut and boost available, or even the more complicated parametrics which can precisely tailor frequency components.
The much more complex and subtle variations which can be produced by modelling the modifier component of an instrument system get to the heart and soul of an instrument's expressivity, without the problems that cross-switching or cross-fading samples brings. In the case of the electric pianos on the WSA1, you can control parameters such as pickup position and pickup sensitivity, as well as the timbral effects of the electronic circuitry itself. Using DSP for the modifier side of the process meant that much of the subtlety and precision of physical modelling could be introduced without the cost of the drivers being produced entirely by DSP.
Sadly, the WSA1 (and its rack counterpart, the WSA1R) were not major hits in the S+S-dominated market. They were launched at a time when the demand for subtlety of expression for players and authenticity of real sounds was at an all-time low, and the market requirement was for sounds suited to techno and related dance styles. Having discovered some potential for the type of real-time modulation favoured by the dance producers in Technics' acoustic modelling process, this author participated in the production of an expansion board full of drum loops and analogue oscillator timbres, in an attempt to save this developing technology from becoming obsolete before it matured. Unfortunately this Dance board was released too late to save the instrument from the ignominy of the discounted blow-out, and the last WSAs were sold off at a quarter of the original RRP, complete with the Dance board. However, anyone who picked one up at this final price (or buys second-hand at a similar price) secured an amazing deal, as the value for money of the original RRP has still to be equalled, in terms of polyphony if not fidelity, by the modelling synths of the present day.
Technics chose not to continue the technology of acoustic modelling in the form of follow-up WSA products, but one component at least lives on: the way in which the WSA's electronic organ sounds were programmed has been used in Technics' KN5000, and it's from this instrument that we've sourced the LCD screen illustrating the modelling of organs via virtual drawbars.

Synth School: Part 11
Modelling Strings & Wind Instruments
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published September 1998
The VL70m made Yamaha's modelling technology widely affordable.
In the penultimate part of his series on synthesizer technology, Paul Wiffen turns his attention to the problem of emulating acoustic instruments in which the sound is produced by a string or reed, and amplified and modified by the body of the instrument. This is the 11th article in a 12-part series.
Those who were paying attention last time will remember that although the critical thing about physical modelling is that the parameters involved change depending on the instrument being modelled, the underlying theory breaks the process of modelling down into two main constituents: the driver and the modifier. The driver is the point where the energy is put into the system: the bow or plectrum hitting the string, or air being blown through reed or mouthpiece. The modifier is the part of the system where the quality of the sound is changed by resonance and amplification. For strings, this would be the bridge transmitting the vibration of the string to the wooden case; for wind instruments, the column of vibrating air being modified by the size and shape of the tubing.
The main reason for making this distinction between driver and modifier is that, broadly speaking, the driver tends to be the thing which is constantly changed to modulate the pitch and introduce expression, whereas the modifier tends to be the more constant factor which gives the instrument its recognisable character. The problem with sampling is that it cannot separate these two elements from the final sound. As a result, the speeding up or slowing down of the sample which is needed to change the pitch produces unfortunate effects like changing the apparent size of the resonating case or column. This doesn't grate too much on the ear when the pitch change is small (a few semitones), but once you exceed half an octave the sampled instrument changes radically. This is why multisampling was developed: to change the source sample often enough across the keyboard to minimise the amount of repitching needed to cover all the required pitches.
Sharp ears will have noticed, however, that some instruments respond much better to multisampling than others, and that as a result some instruments have to be sampled much more frequently across their range than others. This is usually because of the complexity of their resonant component (ie. that part which doesn't change when the pitch is changing). The more complex this part of the system, the more the vibration of the driver is changed from the original input energies. As a result, the re-pitched sample loses its authenticity very quickly, maybe within a minor third. Less complex resonant systems may allow sampled versions to be transposed as much as an octave before the altered resonance gives the game away. The most complex system, and the one which is by far the most resistant to multisampling, is the human voice. This is because its most important components by far are the non-pitch-related changes in its resonant characteristics (as we will see later, modelling systems can make a fair stab at human vowel shapes because the resonant characteristics can be kept separate from the pitching control).
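To put some rough numbers on this, here is a quick Python sketch of my own (with made-up figures rather than measurements from any real instrument): repitching a single sample by a playback-rate ratio of 2 to the power of semitones/12 scales its duration and every resonance of the recorded body by that same ratio, which is exactly the drift that multisampling tries to keep small.

    # Illustration only: hypothetical figures, not measurements from a real instrument.
    original_duration_s = 2.0     # length of the recorded note
    body_resonance_hz = 1500.0    # a fixed resonance of the instrument body

    for semitones in (0, 3, 7, 12):
        ratio = 2 ** (semitones / 12)            # playback-rate ratio for this transposition
        duration = original_duration_s / ratio   # the note shortens as it is pitched up
        shifted = body_resonance_hz * ratio      # the 'body' resonance moves with it
        print(f"+{semitones:2d} st: rate x{ratio:.3f}, duration {duration:.2f}s, body peak {shifted:.0f}Hz")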
Eastern Wind (And Strings)
The first commercially available synth to offer physical modelling of wind and string instruments was Yamaha's flagship (but pricey) VL1.
So far, there are really only two manufacturers using modelling technology for the emulation of wind and string instruments: Yamaha and Korg (although Roland do use modelling components in their Virtual Guitar system, they are applying DSP modifier techniques to the actual sound of a real string, which replaces the virtual driver component). Yamaha were first on the scene with the somewhat pricey VL1, a duophonic 4-octave keyboard, four years ago. Although the price has come down dramatically, they have not really expanded the polyphony of their implementation at all. The latest versions of what they call Virtual Acoustic synthesis are the monophonic £499 VL70m module and a single VL voice in the EX5 workstation and EX5R rack (see box on Combining Modelling with PCM Synthesis). This emphasises the strength of physical modelling for solo acoustic instruments and the fact that the DSP power required to produce polyphonic modelling is still very expensive.
So much so, in fact, that Korg's much written-about OASYS modelling system has never actually been made commercially available (despite several appearances at international trade shows like Frankfurt and NAMM), but has instead metamorphosed into a development platform, from which the technology is trickled down in more affordable packages. This represents a marked change in manufacturer philosophy; 10 or even 20 years ago, they might have tried to sell a limited number at a high price for keyboard stars like Keith Emerson or Stevie Wonder to buy (remember the Yamaha GX1 or the Crumar GDS). These days, however, even the stars prefer to keep their cash in a high-interest account until the more commercially viable versions appear.
The first trickled-down version of this technology (which Korg refer to as MOSS, or Multi Oscillator Synthesis System) was the Prophecy Solo keyboard, released three years ago. Like the VL1, it featured a shorter-than-normal keyboard, clearly showing its solo synth status (most solo acoustic instruments only offer a range of around three octaves) and was monophonic. However, it did considerably broaden the range of physical modelling, adding plucked string, brass and reeds to the various analogue configurations we looked at in the first part on physical modelling, all for (just) under a grand on its release.
In the UK the Prophecy was a massive success, mainly due to the rediscovery of analogue synthesis as an important factor in the emerging Dance music scene (and perhaps the fact that all the old pomp rockers live here, although the biggest market for their music is still in the US). In other territories, it was a disaster as Korg distributors, scratching their heads for potential markets, attempted to sell it as a lead instrument to put on top of home organs. As a result, huge stockpiles of Prophecies built up throughout the world and these have recently been cleared through the UK for as little as £499. If you search the dealer ads in this issue, you may still find a few for sale.
Polyphonic physical modelling of strings, brass and woodwind finally hit the market last year in the form of the Korg Z1, as DSP chips came down in price to the point where 12-voice polyphony (and 6-part multitimbrality) could be provided for £1699. The models were expanded from those in the Prophecy to a total of 13, with the addition of electric pianos and organs (which we looked at last time) and bowed strings (the Prophecy had only featured a plucked string algorithm).
...the future probably lies in hybrid instruments combining the expression and real-time control of modelling with the authenticity of PCM.
Modelling The Computer Way
Yamaha's VL Visual Editor allows you to construct "hybrid" virtual instruments by combining drivers and modifiers from different instrument models.
One of the problems with physical modelling, especially once you break out of dedicated analogue re-creation, is that the number of parameters involved is huge and can be very tricky to program from the front panel of the instrument. If programming the Prophecy from its rather obtuse user interface is proving frustrating for anyone out there, it may be some consolation that the great factory presets were created using custom software running on some obscure Japanese computer platform, rather than on the Prophecy itself.
In fact, the complexity of real instrument modelling is definitely something which benefits from computer software control, not just for the programming of sounds, but also for simply comprehending what is going on. Thankfully, then, both Yamaha and Korg have released programming software for slightly more widely available computer platforms to aid in sound programming. This has the extra advantage that I can use screen dumps from the different sound models to illustrate my descriptions!
Yamaha produce three different editors, Visual, Analogue and Expert, in versions for the different implementations on the VL1, VL1M, VL7 and VL70m (the fundamental difference being that the VL70m has only one element available per voice, whilst the more expensive units have two).
The Visual Editor is an ideal introduction to the concepts of physical modelling. By allowing you to mix and match drivers and modifiers, it really underlines the fact that Yamaha's modelling system will let you take the output of, say, a reed and modify it through the resonant characteristics of a non-wind instrument body, like that of a cello. By pointing and clicking at the energy input device (bow, reed, finger, mouthpiece) and the resonator (horn, f-hole body, etc), you can design your own hybrid instruments and then make them more bizarre still by processing the sound through something even more (conventionally) inappropriate like a humbucking pickup. Alternatively, you could be boringly conventional and put a bowed string through a violin body or a trumpet mouthpiece through a horn.
Once you have set up the basic configuration of your revolutionary instrument and decided whether you want an alto or tenor voice version (that's high or low to you), there are nice simple editing parameters which allow you to 'tweak' the brightness, thickness, distance, breath feel and reverberation characteristics of the sound. In fact these simple controls are hooked in software to multiple parameters in the VL system, but they provide a 'no fear' editing system. Clearly, the simplification of the parameters means that you cannot get the full capability of the VL system by using this editor, but it can provide an introduction to physical modelling which is free of technical jargon.
Don't make the mistake of trying to do everything with one type of synthesis: give yourself as big a palette of sonic generation as possible!
Experts Only Need Apply
Yamaha's VL synths use identical parameters to model string and brass instruments.
The Expert Editor is just the opposite, and within seconds of loading it you have access to the most alarmingly-named parameters: Slit Saturation Feedback Balance and Graham Function Argument had an old bluffer like me in a flat panic (a little research in the Penguin Dictionary of Physics tells me that the latter refers to Graham's law of diffusion). This editor is definitely not for the faint-hearted, because it really throws you in at the deep end, allowing access to every single parameter in the VL system via four or five tall windows (this software was clearly written on an A4 DTP monitor). However, as with most editing software, the best way to learn about it (or any type of synthesis, in my view) is through grabbing the parameter bar, waggling it about, and seeing the effect it has on the sound.
I was a little confused at first to find that the string model had parameters for Conical Horn Insertion and other clearly brass-related terminology, but this turned out to be because the parameters for string and brass modelling on the VL are identical. This is apparently because the characteristics of a vibrating string are very similar to those of a vibrating column of air (see the box on Karplus-Strong synthesis). However, Yamaha's programmers have realised that this might be a barrier to thinking clearly about how you want to change your model to be more like a particular instrument, so there is a menu which lets you change the displayed parameter names between string and wind terminology. Thus the Slit Saturation Feedback Balance legend in a Wind model becomes Friction Function Feedback if you switch to String Terminology (whether or not this does help you to get your head round what you are trying to achieve is a debatable point!).
I have to say that this program really does deserve its 'expert' designation, if only for the terminology. But don't let the jargon confuse you: it is fairly easy to use to get the results you want. The one problem I found was that the Expert editor does not allow you to keep several different windows open at the same time. This means that making simultaneous changes to the driver and the modifier is not possible, though keeping the driver and the modifier in separate windows at least leaves you in no doubt about the effect of each on the final sound.
The third piece of software from Yamaha, the Analogue Editor, really falls outside the scope of this piece; briefly, it allows you to turn the VL into a fairly simple analogue synthesizer, with all the standard components you would expect.
Z1 To The Macs
Many of the parameters in Korg's Z1 Plucked String model can be modulated in real time.
Korg's Z1 editor for the Macintosh is very different from the various VL editors which Yamaha offer, being a much more integrated program. Analogue and acoustic instrument modelling are both covered in the same piece of software and, at another level, parameters for both driver and modifier are all covered in one window. The plus side of this is that you can see all the parameters for the Reed model at once. The minus side is that if you are unaware of the driver/modifier side of physical modelling theory, this software will not make you aware of it; indeed, no knowledge of physical modelling theory is required at all to use this software.
For the most part the parameters (of which there are substantially more than in the Yamaha Visual Editor) are named much as a player of the instrument in question, rather than a physicist, would refer to them (see Bell Resonance and Lip Character in the Brass Model screenshot on page 190, for example), and even when more technical terms like Bow Differential or String Dispersion have to be used, you need only try switching it on and off to see what it does. The fact that everything within the individual model is available in one window also means that the mix and match approach of the Yamaha Visual Editor is not possible (no putting a cello bow across a flute tube in this implementation!). This is presumably because Korg's models are actually very different from each other, whereas Yamaha's VL is based around a single model which covers the territory between string and wind modelling.
Full Of Pluck
The Z1's Reed and Brass models are divided into a number of sub-models, in which parameters for the length and shape of the instrument body are preset.
The Plucked String model made its first public appearance on the Korg Prophecy, but as that was a monophonic instrument its use was limited to things like bass and lead guitars. On the Z1, the polyphony allows its use to be broadened to include strummed guitar chords and violin pizzicato as well as other instruments which you don't immediately think of as having plucked strings, like harpsichord and clavinet. Most of the parameters are fairly obvious (see screenshot opposite), with such factors as the position of the string pluck and harmonic stopping (as well as electric pickup if used), the force of the strike, the amount of damping and the dispersion within the string all not only accessible, but able to be modulated by keyboard tracking and/or all the real-time controllers.
Another String To Korg's Bow
The new string model in the Z1 is the Bowed String model, which of course means mainly members of the violin family. The real blessing of this is that finally strings which are both really responsive and authentic are available polyphonically (Yamaha's VL series have an excellent solo violin which can be duophonic on the VL1). Sample technology gave really authentic strings in one bowing style (Marcato, Legato, Sforzando, etc) but by using the real-time controllers to change the Bow Speed and Pressure, you can now make smooth changes between these different playing styles without worrying about the artifacts which come from crossfading between different samples.
Amongst The Reeds
Both the Reed and Brass Models on the Z1 actually have a number of instrument sub-models (to take account of the individual differences between instruments). The parameters used do not change between different sub-models; the sound, however, changes fairly significantly as you switch from one to the next with exactly the same parameter settings. This is because the Korg models do not have parameters to describe the exact shape and length of the tube. Instead these are preset for each conventional instrument within the sub-model. This, again, fits in with the Korg implementation of physical modelling which aims for accuracy in the modelling of real instruments, rather than the ability to 'morph' between different instrument configurations as you can on the Yamaha. Here are the Reed sub-models available on the Z1:
HardSax1
HardSax2
HardSax3
SoftSax1
SoftSax2
Double Reed1
Double Reed2
Bassoon
Clarinet
Flute1
Flute2
Pan Flute
Ocarina
Shakuhachi
Harmonica1
Harmonica2
Reed Synth
Because the size and shape of the instrument is fixed in the sub-model, the parameters which can be adjusted are principally things which may change due to the playing style, such as breath pressure. This means the expression available can be tailored very precisely to a player's technique or the style of music the instrument will be used for. Clearly, the way a clarinet sounds in classical music will differ greatly from its sound in jazz, yet the same physical instrument is used for both. It follows therefore, that it is the playing style which must differ. The parameters you see in the Reed model are what allow this difference to be made.
Where There's Z1, There's Brass
As with the Reed model, the Brass model copes with different sizes and shapes of instrument by having sub-models which you switch between. The user-alterable parameters are once again designed to elicit expression and feel from the model, by routing modulation from real-time controllers like the X-Y pad and the soft knobs below the display. The Z1 Brass sub-models are:
Brass1
Brass2
Brass3
Horn1
Horn2
Reed Brass
Here there are fewer sub-models than in the Reed Model, presumably because there are fewer differences between the different brass instruments than there are variations on the reed theme.
Future Modelling
The Z1 probably represents the pinnacle of modelling achievement to date, not just because of its 18-note polyphony or multitimbrality (although these are where the bulk of the DSP horsepower is expended), but because of its versatility. It covers the same analogue territory as Yamaha's AN1x, Roland's JP8000 and Clavia's Nord Leads, but allows more flexible imitations because it can have two models at once (as we saw in the first part on physical modelling); it does electric pianos and organs as well as the Technics WSA1 we covered last time; and it now adds more specialised modelling of acoustic instruments in a similar vein to the Yamaha VL series. If you want to get a feel for the breadth of sounds and expression that physical modelling can cope with right now, the Z1 defines the current boundaries. If you want to really explore physical modelling for authentic sounds, then beg, steal or borrow a Z1 and a Mac to run the editor on. Those of you who want to experiment with the grey areas between specific models and get into the more experimental side of modelling should look at some member of Yamaha's VL family, either a second-hand VL1/7 or the current VL70m module with one or more of the software editors (Visual if you want fast results or Expert if you really like a challenge).
I get the feeling that, in 10 years' time, the Z1 will look as relevant as the DX7 does now: an instrument that represented a quantum leap forward at the time of its introduction, especially in terms of allowing a player's individuality and expression to come through. Given another 10 years of DSP development, we can expect to find instruments that have the power and speed to tackle the really tricky timbres like the acoustic piano, modelling the interactions between the struck strings and the undamped ones authentically in real time.
Until then, the future probably lies in hybrid instruments like the Yamaha EX5 for monophonic instruments and the Korg V3 for polyphonic ones (see box, left), combining the expression and real-time control of modelling with the authenticity of PCM for the big ensemble sounds whose sonic complexity modelling still can't match.
Author's Message
As we near the end of this year-long round up of the different synthesis styles which have been made available commercially over the years, there is a thought I would like to share with you. My recent experiments in combining modelling technology with PCM synthesis (see box) served to underline a lesson I learnt years ago when first combining samples with analogue and digital synthesis, a technique which manufacturers eventually refined into the PCM-based synths of today. No one type of sound generation will give you all the different timbres and expressiveness you want. Don't make the mistake of trying to do everything with one type of synthesis: give yourself as big a palette of sonic generation as possible! Mix and match synthesis types to play to their strengths and cover their weaknesses. Mistrust those ads which tell you any one product will give you all the sounds you need, but encourage manufacturers who combine technologies within individual machines like the Yamaha SY99 or EX5, the Technics WSA1 or the Korg V3, as well as those who persevere with the more esoteric forms of synthesis like Kawai and Waldorf. It will be a very dull world, sonically speaking, if we all end up using PCM-based synthesis for everything (something which looked a very real danger a few years back, but which has now receded somewhat thanks to physical modelling and the re-emergence of analogue synths in dance music and the like).
Next time, we will finish off by taking a look at some more esoteric types of synthesis, like granular synthesis and resynthesis, which are emerging from the less commercially driven areas of computer shareware and the Internet, further expanding the palette of sonic creativity. In the meantime, get your hands on physical modelling in some shape or form if you possibly can (remember you can now get a VL70m or a Prophecy for under £500), and don't forget to try combining it with the other synthesis types to which you have access, either in sequences or individual program combinations. Your music will be the more expressive for it.
Strong Plucked Predecessor
Long before anyone succeeded in properly modelling plucked strings (see main text), there came the Karplus-Strong synthesis algorithm (after Messrs Kevin Karplus and Alex Strong, who developed it at Stanford University in California). A description of this algorithm was first published by its developers in the Computer Music Journal Vol 7 Part 2 in 1983. It is now often identified as one of the first physical modelling algorithms, as this technique anticipates modelling by defining the required stages using terms coined by physicists analysing components of a vibrating string. Essentially, the way it works is to introduce a noise burst into a delay line whose time determines the resonant frequency of the string, pass this through a low-pass filter to simulate the energy loss caused by the reflection of the wave in the string, and then feed the result back into the delay line.
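For the curious, here is a minimal Python sketch of that description (an illustration of the basic idea, not anyone's production code): a burst of noise goes into a delay line whose length sets the pitch, and a simple two-point average acts as the low-pass filter in the feedback path, with a little extra decay added for control.

    import random
    import struct
    import wave

    def karplus_strong(freq_hz=110.0, seconds=2.0, sample_rate=44100, decay=0.996):
        """Noise burst -> delay line -> averaging low-pass -> feedback."""
        delay_len = int(sample_rate / freq_hz)                        # delay time sets the pitch
        buf = [random.uniform(-1.0, 1.0) for _ in range(delay_len)]   # the initial noise burst
        out = []
        for _ in range(int(seconds * sample_rate)):
            out.append(buf[0])
            filtered = decay * 0.5 * (buf[0] + buf[1])   # two-point average = crude low-pass
            buf = buf[1:] + [filtered]                   # feed the result back into the delay line
        return out

    with wave.open("pluck.wav", "w") as f:               # write the result as a mono 16-bit file
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(44100)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in karplus_strong()))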
The original version of the Karplus-Strong algorithm would produce two or three 'moderately realistic plucked string sounds' (to quote the humble Kevin Karplus) simultaneously in real time on an 8080A processor (imagine what it could do on a modern processor) and gained several US patents. It was licensed by several companies who have yet to produce a stand-alone product from it (although Kevin Karplus reports that a few companies have tried to market the technique without paying royalties).
Apparently, if the decay element from the filter is taken out, then it performs a reasonable impression of a vibrating column of air in a tube open at both ends. This perhaps goes some way towards explaining the similarity of Yamaha's string and wind models, in which some parameters do exactly the same things but are given different names relating to the physical attributes of the instruments being emulated.
Most people who are familiar with Karplus-Strong synthesis will know it from its inclusion in Digidesign's seminal sample editing program from the '80s, Sound Designer (before it transmogrified into the proto-hard disk editor, Sound Designer II). Unfortunately, this implementation of Karplus-Strong, whilst producing some quite nice timbres, suffers from not being real-time. Once the computation has been done off-line, it is rendered as a sample so that it can be transferred across to whichever sampler your version of Sound Designer was supporting. This means that it suffers from the same problems in playback as all samples, ie. it gets longer the lower you play it and shorter the higher up the keyboard you go. It does, however, give you quite a nice flavour of the potential of the algorithm as a historical step on the road to current physical modelling techniques, so those of you who can track down the original version of Sound Designer (it was produced in customised applications on the Mac for the Emulator II, Prophet 2000, Akai S900 and E-max among others) can have some fun generating mutant guitars and mandolins.
Modular synthesizers were originally developed in the '50s and '60s and were frequently called wallpaper synths because of the sheer size of the things, which often stretched across an entire wall. (If you wanted a system like this nowadays, it wouldn't cost quite as much as the Lord Chancellor's famous wallpaper, but not far off...). Modular synths came into their own and into popular culture in the 1970s, with bands such as Tangerine Dream, Kraftwerk, and Tonto's Expanding Head Band, and artists such as Tomita, Keith Emerson, Rick Wakeman and, of course, the ultimate modular evangelist, Walter/Wendy Carlos.
A typical modular system consists of banks or blocks of sound-generating, sound-modifying and controller modules such as oscillators, filters, amplifiers, envelope generators, modulators, mixers and sequencers. Every module has input and output sockets that are used for interconnecting with the others. They don't have MIDI, memories or presets and they very rarely have hard-wired connections internally <20> everything is connected across the front of the modules using patch cords.
The underlying principle of modular synthesis is Voltage Control. For example, a typical analogue keyboard generates a different voltage (CV, or Control Voltage) for each key, plus a separate on/off voltage for each key, called a Gate or Trigger. The CV signal can be used to control a Voltage Controlled Oscillator (VCO) to produce different pitches, while the gate control signal is used to trigger an envelope generator (ADSR: Attack, Decay, Sustain, Release) to give dynamics to the sound. So to produce a basic playable sound you would need a keyboard controller, a source such as a VCO, a VCF (Voltage Controlled Filter) to add tonal variation to the sound of the VCO, and an envelope shaper connected to a VCA (Voltage Controlled Amplifier) to vary the dynamics of the sound.
Another fundamental aspect of modular synthesis is that there is little or no difference between audio and 'modulation' signals, and practically any input or output can be connected to anything else. The audio output of a VCO can be used to modulate the control input of a second VCO, a VCA can be used to modulate a control voltage, and a mixer can mix CV signals just as an audio mixer would.
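As a purely illustrative Python sketch of that 'anything into anything' principle (the module names are my own, nothing to do with any real system), you can treat every module as something which turns streams of numbers into streams of numbers, so a keyboard CV, an LFO and an audio signal are all interchangeable:

    import math

    SR = 44100   # sample rate for the whole 'patch'

    def vco(pitch_cv, base_hz=55.0):
        """Voltage-controlled oscillator: the CV raises the pitch by one octave per unit."""
        phase, out = 0.0, []
        for v in pitch_cv:
            phase += 2 * math.pi * base_hz * (2.0 ** v) / SR
            out.append(math.sin(phase))
        return out

    def vca(signal, gain_cv):
        """Voltage-controlled amplifier: one signal simply scales another."""
        return [s * g for s, g in zip(signal, gain_cv)]

    def lfo(freq_hz, length):
        """Low-frequency oscillator, output 0..1, usable as a control signal anywhere."""
        return [0.5 + 0.5 * math.sin(2 * math.pi * freq_hz * i / SR) for i in range(length)]

    n = SR                                    # one second of sound
    key_cv = [1.0] * n                        # a held note, one 'volt' above the base pitch
    wobble = [v + 0.02 * (l - 0.5) for v, l in zip(key_cv, lfo(6.0, n))]   # LFO into the pitch CV = vibrato
    audio = vca(vco(wobble), lfo(2.0, n))     # the same kind of LFO into the VCA = tremolo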
Combining Modelling With PCM Synthesis
Whilst last year saw the release of a whole slew of new physical modelling instruments, including the Yamaha AN1x, Korg Z1, Roland JP8000 and Nord Lead II, this year has been very quiet. Apart from the rack version of the Roland, the 8080, which was shown for the first time at Summer NAMM in Nashville (see this month's News pages), the only really new development is the rackmount Supernova from Novation (reviewed in last month's SOS) which sadly I have yet to get my hands on. However, this does not mean that physical modelling is about to go away. On the contrary, the big news this year is that modelling is now being integrated into workstation synthesizers with a vengeance, allowing the solo instrument and analogue sounds at which it is particularly strong to be used alongside PCM-based synthesis. This year has already seen two Japanese manufacturers further extend the workstation concept with modelling and more must surely follow.
Actually, this is not that new an idea; two years ago, Korg's Trinity Plus, Pro and ProX included a solo board (available as the SOLO-TRI option), which added the monophonic capability of the Prophecy modelling synth to the Trinity's PCM-based synthesis. As the name implied, you could add a solo instrument, say a lead or bass sound, over the top of a polyphonic Combi or Sequence. As most of the sounds in the Prophecy were already designed for this kind of use, it made the Trinity workstation that bit more versatile especially for keyboard soloists who found PCM-based sounds great for backing tracks, but lacking the expressiveness needed when the spotlight fell on them.
However, users soon found that you often needed more than one sound from the S Bank (where the sounds were stored). The Solo board did great analogue or plucked string basslines as well as solo reeds and woodwinds or lead synths. The more creative Trinity owners added the HDR options which gave them four tracks of hard disk recording. This meant they could record four tracks of solo sounds to a SCSI hard drive and then have the fifth play back from the internal MIDI sequencer. Whilst not ideal, as only the last one was instantly available for editing, you could always keep the original MIDI solo tracks muted, so if they needed re-editing, you didn't have to play them in again from scratch. Then you would just re-record them to the hard drive once you had edited them.
This year saw a new range of workstation synths from Yamaha, the EX series (see review of EX5 in May's SOS). Along with their vast PCM-based polyphony there was also a duophonic AN synthesis capability (monophonic on the EX7) and a VL synthesis capability (on the EX5 and EX5R only). This meant that EX owners could now also add an analogue lead or bass line, or a solo reed or string sound, to their sequences, perfect for bringing that expressiveness which only physical modelling can give to the most noticeable elements of a sequence.
However, just as FM became truly usable and sonically pleasing (to this author's ears, at least) only when it could be combined polyphonically with AWM & User Samples on the Yamaha SY99 (my all-time favourite Yamaha product), the next big step for physical modelling will be when there is the DSP capability available within a synth to allow the polyphonic layering of a modelled sound with a PCM-based one. There was a certain element of this in the Technics WSA1 which we looked at last time, but this was still a bit of a compromise, as the driver element of physical modelling was replaced, rather than augmented, by samples. In the past few months, I have been experimenting with this using a Trinity and a Z1 MIDIed together, and got some great results (once I thought to disable MIDI Program Change Receive on the Z1; imagine the frustration when you haven't saved your editing on the Z1 and you change programs on the Trinity, thereby selecting a new Program on the Z1 and losing your edits).
In the way that the future has of becoming the present sooner than you think, then, imagine my joy when the Trinity V3 turned up at Korg (for whom I consult as a part-time product specialist/tech support person) with a new six-voice MOSS synthesis capability (identical in structure to that of the modelling on the Z1) in addition to its 32-note PCM-based polyphony. This made my experimentation with modelling/PCM combination synthesis much easier (no more losing edits by changing programs) and I was also able to introduce my own samples into the equation from the PBS-TRI option. I have finally been able to get that elusive orchestral string sound, where the expressiveness of the bowed string model in changing from light strokes to the fierce 'digging-in' of marcato bowing can be combined in the same program with the rich texture of an entire string section which, for the time being, only samples can capture. The nearest I had before was on the SY99, where I used the FM element to get the variety in the playing style, but it never quite had the authenticity that you get from modelling. This instalment of Synth School was originally due to be published last month, when I wouldn't have been allowed to mention this latest development, but happily I waxed so lyrical about the modelling of electric instruments that the piece had to be split in two, and the V3 has now been publicly announced (and is reviewed on page 150 of this issue of SOS).
Perhaps one day the amount of DSP power available will be enough to generate the richness of texture which comes from 20 or 30 string players in unison, without needing samples and effects to fatten the sound up, but for the time being I am quite happy with this new combination of modelling and sampling which gets me closer than ever before. As a failed second violin player and would-be composer/conductor of orchestral music, this is the closest to heaven I have yet come. Those of you fortunate enough to have access to both modelling and PCM-based synthesis really should try combining the two, whether via MIDIed 'additive synthesizers' or internally within one instrument.

View File

@@ -0,0 +1,60 @@
Synth School: Part 12
The Way Ahead
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published October 1998
The Kawai K5000, the only additive synth still in production.
Will physical modelling continue to be at the leading edge of synthesis, or are there other methods moving up on the inside track? Paul Wiffen winds up the Synth School series with a little crystal ball-gazing. This is the last article in a 12-part series.
There are many lessons to be learnt from the various technologies we have examined in Synth School over the last year or so. The history of FM teaches us that a method of synthesis can go from being the be-all and end-all of the professional synth market to the lowest common denominator of computer video games in a relatively short time (and that despite this, Yamaha are probably making more money out of FM today than they ever did in the heyday of the DX7). The elevation of the fat analogue sound to the modern Holy Grail, when 10 years ago you couldn't give analogue-sounding machines away, warns of the dangers of selling off old gear in pursuit of the latest sonic fashion. But perhaps the most important lesson is a general one on how the relentless development of VLSI technology driven by the computer industry (to which we are but a very small sideshow) turns today's impossibility or very expensive luxury into tomorrow's staple product (which doesn't really get anyone excited anymore).
Take additive synthesis as a classic example; it is a much more powerful technology in its only current production incarnation, the Kawai K5000, than in the infinitely more restricted non-real-time implementations which the Fairlights and Waveterms offered 10-15 years ago. Even the early real-time implementations like the K5 and the never-released Technos Acxel caused more of a stir than something wonderful which you can now buy for around £1000. Sampling is another classic example; the early Fairlight which turned the whole industry on its head had lower sample quality than the most despised Soundblaster-compatible PC soundcard. The former would have cost you £25,000+, the latter you can pick up for under a ton.
As far as physical modelling is concerned, I feel we are midway between these two extremes. Yamaha, who released the first commercially available physical modelling synth, the VL1, have now adapted that same technology to a £500 module or an even cheaper plug-in card for their computer-based system. Korg's OASYS, perhaps the most powerful modelling synth exhibited to date, has never been released, because the days of even megastars shelling out thousands and thousands of pounds for the first implementation of a new technology are over. This hasn't prevented the technology it contained from being extremely successful (in this country at least) in the Prophecy. Korg's current Z1 covers more territory than any other physical modelling synth, from analogue and FM-type synthesis through to a host of string and wind instruments, but I often hear people complaining about it because it can only achieve 18-note polyphony and 6-part multitimbrality (PCM-based synthesis has made people blasé about amounts of polyphony, sample memory and multitimbrality which would have seemed like science fiction five to 10 years ago).
The current state of DSP technology means that certain areas of imitative synthesis are still no-go zones simply because of the sheer amount of DSP power required. But DSP technology is now progressing so fast that I suspect it won't be that long before all the sympathetic harmonic interactions between strings on that most complex of instruments, the grand piano, will succumb to the computational power of the microchip.
The real challenge these days for physical modelling is not the perfect recreation of acoustic instruments or even the biggest sounding, most powerful analogue-style synth ever, but making the technology easy to operate by people who have never even learnt the basics of analogue synthesis (none of whom are amongst SOS readers, I am sure). The various solutions to this, from the increasing use of dedicated front-panel knobs or X-Y pads or ribbon controllers, through to SysEx control by computer programs, have helped expand the market for physical modelling, but I still feel that this is just another example of 'dumbing down' technology so it can be sold. For the time being at least, the development of physical modelling seems to be its consolidation into more marketable versions of the technology, and its integration into workstations (see last month's sidebar on "Combining Physical Modelling with PCM"). So what other contenders are there for the Future of Synthesis?
Resynthesise
Emu's Emax samplers were capable of a unique type of synthesis involving the combination of two different samples.
An old chestnut which periodically turns up is the concept of resynthesis. This is the name given to a generic process whereby an analysis of the sound (usually sampled) is made in an attempt to break it down into its constituent parts, which can then be recreated piecemeal from basic building blocks. These building blocks are usually hundreds of sine waves which are used to build up the harmonic content of the sound, the sound having been analysed in the first place via a Fast Fourier Transform. Those of you who saw Duran Duran's 'The Reflex' video will have seen Fairlight displays of FFTs on its samples, usually compared to a plot of a mountain range or the seabed. The Fairlight was not the only system which could produce pretty FFT displays. They were even possible on the cult UK sampler Lynex in the late '80s, which ran on the Atari ST. However, all these systems had one thing in common; they could produce a lovely picture from a sample, but they wouldn't let you change the harmonic content, because they couldn't actually turn the sound into its constituent harmonics, let alone convert it back to a sample.
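As a rough sketch of the principle in Python (using NumPy, and far cruder than any of the systems mentioned here): analyse a stretch of sound with an FFT, keep the loudest partials, and rebuild the sound from sine waves at those frequencies and amplitudes. Repitching then becomes a matter of scaling the frequencies before rebuilding, without touching the duration.

    import numpy as np

    def resynthesise(samples, sample_rate=44100, n_partials=32, pitch_ratio=1.0):
        """Crude resynthesis: FFT the sound, keep the loudest bins, rebuild from sines.
        pitch_ratio scales the partial frequencies without changing the length."""
        spectrum = np.fft.rfft(samples)
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        loudest = np.argsort(np.abs(spectrum))[-n_partials:]
        t = np.arange(len(samples)) / sample_rate
        out = np.zeros(len(samples))
        for k in loudest:
            amp = 2.0 * np.abs(spectrum[k]) / len(samples)
            out += amp * np.cos(2 * np.pi * freqs[k] * pitch_ratio * t + np.angle(spectrum[k]))
        return out

    # A simple two-harmonic test tone stands in for a real sample here.
    sr = 44100
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
    rebuilt = resynthesise(tone, sr)                       # same pitch, same length
    up_a_fifth = resynthesise(tone, sr, pitch_ratio=1.5)   # repitched, still the same length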
Because FFT analysis breaks the sound down into harmonic content, it made sense that the first systems which could attempt a reconstruction would be additive synthesisers. In fact, one of the earliest commercially available systems was a Dr T's program for the K5 that ran on the Atari. Although there were not really enough harmonics and envelopes available on the K5 to cope with really complex sounds, it would produce recognisable versions of simple sounds which made good starting points for sound design, rather than having to set all the harmonics manually from scratch (in fact, if anyone out there still has a copy of this software, perhaps they would contact me via SOS, as I would love to get my hands on it once again). Of course, if someone were to do something similar for the current Kawai, the K5000, which has a much more flexible implementation of additive synthesis, this would probably get a lot closer to a useable resynthesis system.
Perhaps the best resynthesis I ever heard was on the Technos Acxel, a system which originally came from Canadian academia, but which went through the initial phases of commercial marketing. It had a flexible additive structure which could assign more or fewer harmonics to each voice as required (although this meant more complex sounds had less polyphony), and at the Paris show in about 1989 they had got the resynthesizer analysis working. I heard a very respectable resynthesis of a flute sound, complete with the more demanding breath component (a flute on its own wouldn't have been that impressive, as the pitched component is a fairly simple harmonic series). However, I never got sufficient hands-on time to evaluate the potential of the system on really demanding sounds. I believe Jean-Michel Jarre bought that unit, but the company went into liquidation shortly afterwards and very few units were actually shipped.
Over three years ago, our venerable editor wrote a piece about Oberheim Electronics (now owned by Gibson) having developed a similar system in conjunction with Berkeley, Stanford, MIT and IRCAM (see the January '95 issue) under the unlikely name of G-Wiz Labs, but we have no more recent information, so either the development process is taking longer than they thought or the project has been abandoned. Again, as its name implied (FAR, or Fourier Analysis and Resynthesis), it seems to have used an FFT analysis of the source sample to set up harmonic components. One potential problem with resynthesis, the recreation of unpitched noise components, was dealt with rather elegantly by comparing the result with the original and then creating shaped noise to fill out the differences. At $10,000 plus a Macintosh, it was not cheap, but Paul's report mentioned a recognisable line from Suzanne Vega's 'Tom's Diner' being replayed at different pitches and tempos without any of the normal drawbacks of sampling. Certainly, resynthesis is one of the few systems which seems to have the potential for synthesising vocal performances.
The appeal of resynthesis is that it would have all the advantages of sampling, in that any sound which can be played into the system could be reproduced, but without the disadvantage of samples playing back at different lengths when repitched. When a resynthesis is triggered at different pitches on something like the Oberheim FAR system, the replay time would be constant and noise elements in the sound would not be repitched at all. Looping would also no longer be a problem; you would merely extend the duration of the harmonic series in the sustain phase of the sound. Of course, the repitching would not necessarily remove all the problems associated with sampling. Sounds which have been shaped by some sort of resonant chamber (human voice, bowed strings, guitars, etc) would have the harmonic boosts/dampening repitched, which introduces the Pinky and Perky/Carlsberg effect that often forces multi-sampling. This is where physical modelling triumphs, as it splits the sound into the driver (which is usually repitched) and the modifier or resonator (which usually doesn't change).
Perhaps the ideal resynthesis system would be one which does not simply reduce each slice of sound to its constituent harmonics, but would instead look for the effect of a constant resonator in a longer sample of an instrument playing across its range, and would then recreate the harmonic spectrum of the driver separately from the resonant amplifier of the modifier. It might be referred to as 'remodelling'.
One drawback with resynthesis or 'remodelling' is that it would leave nothing for the programmer to do. Just play the sound in, let the computer do its number-crunching and hey presto, your sound can be played back from the keyboard. Of course, if the sound has been broken down into constituent harmonics, then the levels of these could be edited or adjusted in real time for creating new sounds or adding expression, but it still reeks of the increasing dominance of factory presets and lack of user editing and personalisation of the sounds. 'Remodelling' would be better, as you could adjust the parameters of the model to make new sounds. But still I find I miss the challenge of 'pure' synthesis, where you have to be the brains and do the analysis of the sound yourself and then recreate it with the parameters available (or even make up a completely new sound).
Where Do You Want To Go Tomorrow?
So if you are interested in synthesis and sound design for its own sake, rather than having specific timbres to recreate or gigs to do with the minimum number of synths, then where are the new frontiers? Where can you rediscover the thrill of finding a new way of doing things, or even a technology to misuse or trick into doing something unique? The answer to this question, as with so many these days, seems to concern computers and the Internet. In fact, most new types of synthesis since the '80s have been developed at their theoretical and experimental stages through computers. Generally speaking, a designer/engineer had an idea, or came across a phenomenon when doing something else, which he thought had potential. The cheapest way to investigate further was to set up some computations on a generic system, ie. a computer, which could be programmed to simulate (often not in real time) the effect which would be produced when certain novel configurations and/or processes were tried. He then took this to an electronic music company and tried to persuade them to take it a stage further. This sometimes took the form of developing specific hardware which was fast enough to do things in real time (like Yamaha's development of John Chowning's FM) or, alternatively, adding it to an existing generic product like a sampler. A good example of the latter is Emu's addition of Transform Multiplication as part of the SE software upgrade to their Emax samplers (see 'Transforming Samples' box opposite).
DIY Synthesis
So back then to our own computers and their umbilical link to the repository of human knowledge that is the Internet. Modern personal computers' CPUs are now so fast that they rival the computational power of systems that only major manufacturers or universities could afford 10 or 20 years ago. You also now have a direct link to the people in educational establishments who are trying to push back the boundaries. Lacking any other public forum in which to publish their ideas, many academics now post their ideas and sonic experiments on the Internet, just for the satisfaction of airing their concepts to a wider audience who can try their techniques out (indeed, it is difficult to see how some of these methods could be implemented in a traditional commercial synthesizer). As a result, you can get into more or less esoteric forms of sound generation at the leading edge of academia via that PC or Mac sat in the corner of your living room. One that has been coming to SOS's attention over the last few months is Granular synthesis, explained elsewhere in this article.
The main lesson, however, is that it has never been easier to get into weird and wonderful forms of synthesis yourself. With a computer and an Internet connection, you can do your own research, download examples and descriptions, and then, with a sampler or generic synth, you can recreate some of the things described and try them for yourself. New types of synthesis without expensive new keyboards: sounds great to me. So Synth School is not exactly coming to a close but transferring to the Internet (a sort of Open University for the new millennium). Get your search engines in gear and you can try three impossible methods of synthesis before breakfast.
And so we reach the end of the final instalment of Synth School. I have thoroughly enjoyed writing this series and I am particularly grateful to all those of you who have cornered me at trade shows or product launches and been kind enough to say how useful you have found it. Perhaps the most important message I have tried to put across is this: refuse to use factory presets and make up your own sounds using whatever tools come to hand. Your music will be the better, or at least the more individual, for it. If you have been led by any of these articles to try out new ways of creating sounds (or even to return to some old ones you thought you had left behind), then these articles have done their job.
Sprinkle On The Granules
There are numerous references on the Web to Granular synthesis, a method which builds completely new timbres out of very small snippets of sound stuck together. Having been informed by various authorities (including Leon Zadorin from some Antipodean seat of learning or other: www.academy.qut.edu.au/music/new...) that the content of the granules is less important than their size and shape (or, as he put it, "human perception of frequency, duration and amplitude tends to reside within a practical minimum"), I decided to dig out my sampler and have a bash myself. As long as your sampler does not restrict the smallest loop length you can have (as some of the early Akais and Rolands did), pretty much any sampler will do. The length of these 'sonic grains' (as each small snippet is known) should apparently be less than 100 milliseconds, because anything larger than that starts to reveal the source sound.
I started by cutting and pasting a small snippet of sound (less than 1/10th of a second) to itself until I realised that way was going to take forever. Then it dawned on me that I could use the loop length to replay the small snippet over and over. As long as you keep the loop length very short, the granulated sound bears absolutely no apparent relationship to the source sample. To begin with I used the auto zero-crossing feature on the Prophet 2000 to make loops with a smooth cycle crossing in them, which tended to produce very pure sounds with not too many harmonics present, but then I realised that was spoiling the fun. So then I turned to the Roland S760, which doesn't automatically find zero crossings, and things got really interesting. By setting the loop points almost randomly, you get some fantastically twisted, angular timbres. I then found a way to move the fixed loop length around quickly within the sample, which made a very quick way of changing the timbre radically.
Reading further with my faceless Australian mentor, I discovered that another factor is the 'density' of the grains (ie. how much silence there is between them). So then I started to cut and paste some silence in at the end of the loop and found that this tended to make the timbres slightly more acceptable to those of a nervous disposition. Basically, adding silence between the grains seemed to act like adding water to Scotch, making the sound more palatable to the sensitive soul. Mind you, I never got to any sounds I could have played to my mother, but then isn't that what rock & roll is all about? In these days of techno and other industrial types of dance music, this technique seems to have a lot going for it. I strongly recommend experimenting with it, if you have a sampler and a couple of hours to kill.
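If you would rather experiment on a computer than a sampler, here is a minimal Python sketch of the same idea (my own illustration, not any particular program): chop a source into grains well under 100 milliseconds, fade their edges to avoid clicks, and pad them with silence to thin out the density.

    import random

    def granulate(source, sample_rate=44100, grain_ms=40, silence_ms=10, n_grains=200):
        """Chop a source into short grains and splice them end to end with gaps between."""
        grain_len = int(sample_rate * grain_ms / 1000)       # keep grains well under 100ms
        gap = [0.0] * int(sample_rate * silence_ms / 1000)   # the 'density' control
        out = []
        for _ in range(n_grains):
            start = random.randrange(len(source) - grain_len)   # pick grain positions at random
            grain = source[start:start + grain_len]
            fade = min(64, grain_len // 2)                      # short fades avoid clicks at the splices
            for i in range(fade):
                grain[i] *= i / fade
                grain[-(i + 1)] *= i / fade
            out.extend(grain + gap)
        return out

    sr = 44100
    saw = [2.0 * ((110.0 * i / sr) % 1.0) - 1.0 for i in range(sr)]   # any source will do; a sawtooth stands in
    texture = granulate(saw, sr)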
Transforming Samples
Transform Multiplication was a form of synthesis unique to Emu's Emax range of samplers, which used some heavy computational algorithms to combine two samples in a unique but time-consuming way. The process came up with some weird and wonderful sounds ideal for futuristic timbres and sound effects, but it suffered from the same problem as many non-real-time implementations of synthesis: the process of tweaking a promising first try into a satisfying sound could take days. When a typical computation duration exceeds thirty minutes, the problem is not so much that creating a completely new sound from a set of parameters takes a long time (although this will deter the superficial user), but that each minute adjustment of those parameters, or to use the technical term, 'tweak', takes exactly the same time. So refining a promising sound can be soul-destroying, especially if you are at the experimental stage where you do not know exactly what each of the parameters will do. Changing a parameter in the 'wrong' direction, or altering the 'wrong' parameter altogether, means that you have sentenced yourself to another long wait just to get back in the right direction.
Indeed, to become as familiar with Transform Multiplication as I am sure many of you are with the other forms of synthesis we have looked at might well take a lifetime, unless someone comes up with a real-time implementation. Gerry Basserman, who did the demos for Emu for years, might well have reached the stage where he was confident of the effect that individual parameter changes to Transform Multiplication would have, but I suspect that there are precious few others. My experimentation with this technique often produced some fascinating results, but I never really felt like I was doing anything more than randomly combining samples, which sometimes had serendipitous results. I certainly never felt completely on top of the method.
However, if Emu or anyone else were to come out with a real-time implementation of this style of synthesis, you can bet I'd be first in the queue to master the technique. Sadly, the cynic in me suspects that the market for synthesis styles which create new sounds rather than attempt to duplicate old ones is not large enough to prompt Emu or anyone else to produce the expensive hardware this would need (probably leaving physical modelling far behind in terms of the raw horsepower required). In the meantime, if you can get your hands on an Emax SE, Transform Multiplication will certainly satisfy an appetite for new weird and wonderful sounds.
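The exact algorithm behind Transform Multiplication isn't described here, so the following Python sketch (using NumPy) is emphatically not Emu's method: it simply multiplies the spectra of two samples together, a generic frequency-domain way of combining two sources which gives a broadly similar flavour of 'neither one thing nor the other' hybrid timbres.

    import numpy as np

    def spectral_multiply(a, b):
        """Combine two samples by multiplying their spectra (a stand-in, not Emu's method)."""
        n = max(len(a), len(b))
        spec = np.fft.rfft(a, n) * np.fft.rfft(b, n)     # frequencies present in both survive strongest
        out = np.fft.irfft(spec, n)
        return out / (np.max(np.abs(out)) or 1.0)        # normalise so the result stays in range

    # Two simple test sources stand in for real samples.
    sr = 44100
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
    noise = np.random.uniform(-1.0, 1.0, sr)
    hybrid = spectral_multiply(tone, noise)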

View File

@@ -0,0 +1,71 @@
Synth School: Part 2
Resonance, Envelopes & Routing
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published August 1997
This month, Paul Wiffen looks at ways of modifying a filter's shape, both in terms of frequency response and over time, and considers the importance of routing in connecting together a synth's various sound-generating and -modifying components. This is the second article in a 12-part series.
Having established the basic principle of analogue (or 'subtractive') synthesis in the first part of this series, back in June's issue (start with a sound containing more than you need, ie. a waveform which contains lots of harmonics, and whittle it down using a filter to remove the unwanted harmonics), we can now come on to ways of refining this process and automating it. If you have been trying the manual filter frequency manipulations I suggested at the end of the first piece, you will have noticed that small movements of the filter cutoff are not that noticeable, and that to get a marked effect you need to sweep the filter over a sizeable portion of its range. Although later on in this instalment we will look at ways to do this automatically, without spraining your wrist every time you move the knob quickly, it is sometimes more appropriate to accentuate a small filter movement than to make the movement itself bigger.
A Greater Emphasis
This is done by amplifying the frequencies around the cutoff point. This means that instead of having to detect the filter's position by noticing what is not there, we can actually hear more of the frequencies around the cutoff point because their presence is exaggerated. There are perhaps more synonyms for this feature of analogue synthesis than for any other, and this can make it difficult for beginners. If the terminology for this parameter on the front panels of two synths is different, how are you supposed to know they both do the same thing? The most self-explanatory of the terms used is Emphasis, which probably explains why it is the least common. All too often, manufacturers try to mystify the processes they use, so more scientific terms, like Resonance and Q, are much more common. But whether the control is labelled Emphasis, Resonance, or Q, it does the same thing.
At the point where the filter cutoff slope begins, there is a very narrow band in which the frequencies are actually boosted. The higher this control level is set, the more the frequencies at the cutoff point are amplified. When the filter is static (ie. the cutoff point is not moving), the effect can sometimes be difficult to spot, possibly because there are few frequencies in the filtered waveform around the cutoff point. Sometimes, when you turn the resonance up on a static filter you hear it quite clearly (because there are frequencies around the cutoff point and they are being boosted), other times not. But the surest way to hear the effect of resonance on a filter is to sweep it, even by a small amount.
If you have access to a filter with resonance, select a sawtooth wave (or some other harmonically-rich source if you don't have analogue waveforms available) and try adjusting the resonance on a static filter setting first. If you don't immediately hear certain frequencies being picked out, just move the cutoff a little bit. Then do the same with the resonance set to zero. The difference will be very clear. As the filter with resonance is moved, the individual harmonic components in the source waveform(s) will be picked out one by one. This, for me, is another one of the great joys of analogue synthesis. Quite often, the sonic interest created by this slow sweep through the frequencies on a single note is worth a thousand played notes with unvarying harmonic content, especially if you sweep in a low register, where all the associated harmonics are within the audible range.
The most common use of resonance is with low-pass filters, but on synths with high-pass and even band-pass filters (see June's instalment for more on these), you usually find that the resonance control is still available, and sometimes it can be very effective when used with such filters, especially for creating 'vocal'-type movement in a sound (see the 'Vowel Play' box).
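For those who like to see the arithmetic, here is a minimal Python sketch of a resonant low-pass filter (a textbook state-variable design, not the circuit of any particular synth): the lower the damping term, the more a narrow band around the cutoff is boosted, which is all that an Emphasis, Resonance or Q knob is really doing.

    import math

    def resonant_lowpass(signal, cutoff_hz, resonance, sample_rate=44100):
        """Textbook state-variable low-pass; more resonance = less damping = a bigger boost at the cutoff."""
        f = 2.0 * math.sin(math.pi * cutoff_hz / sample_rate)   # cutoff coefficient
        damping = 1.0 - resonance                                # resonance reduces the damping
        low = band = 0.0
        out = []
        for x in signal:
            low += f * band
            high = x - low - damping * band
            band += f * high
            out.append(low)
        return out

    sr = 44100
    saw = [2.0 * ((110.0 * i / sr) % 1.0) - 1.0 for i in range(sr)]   # a harmonically-rich source
    gentle = resonant_lowpass(saw, 800.0, resonance=0.1)    # mild colouring around 800Hz
    peaky = resonant_lowpass(saw, 800.0, resonance=0.95)    # harmonics near the cutoff jump out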
Oscillator sync is ideal on lead synth sounds, where it can make the synth scream like a distorted lead guitar...
Of course, resonance has many other uses. You can use it whenever you want a sound to catch the ear in a busy mix, where it has to fight its way past a lot of other attention-grabbing sounds. It is also useful to alert the ear to the presence of basslines, when you know (or suspect) that the music is going to be heard on systems that cannot accurately reproduce the bass end (AM radio, older TV sets, and so on). A bit of resonance will bring out the higher harmonics which are in the bandwidth of the playback system, and listeners' ears will extrapolate to the fundamental and 'fill in' the missing frequencies.
Filters As Oscillators
On some analogue synths, if you turn the resonance right up, the filter starts to howl in a way that is very similar to guitar feedback. This is known as 'going into oscillation' and happens because the resonance is up so high that a clearly distinguishable frequency is created, with the harmonic characteristics of a sine wave (ie. very little else except the fundamental pitch). Sadly, some analogue manufacturers and many of those currently producing PCM-based synths felt/feel that you need to be protected from this extreme effect, so you may find that you can't get this to happen on your synth. If you can, try using full resonance with the audio oscillators set to generate white noise (if available on your synth). This is the extreme example of subtractive synthesis I referred to in the first part of this series, where you start with all frequencies present, but hack most of them away, until you are left with just a raw oscillation of a very narrow band, amplified to screaming level. You can then use the filter frequency as a sort of very rough pitch control. While it is unlikely that you will find a use for this technique in a sensitive ballad, sometimes it is just the thing for the climax to a full-frontal sonic assault. This technique will really make ears bleed, and also offers the synthesist one of the few ways with which to fight a guitarist stuck in front of a Marshall stack with all six strings feeding back (you can hear Brian Eno making excellent use of the technique on Roxy Music's early, well, music). I've not heard self-oscillation being used in techno yet, but I'm sure it would fit right in with that 'machinery on overload' vibe.
Introducing Envelopes
In the course of discussing the effect of resonance, we've seen that it brings out movement in the filter cutoff. So far, we have assumed that this movement will be induced manually by the performer... and so it often is. For me, the difference between a great player and a greater synthesist is that the latter often does more with the parameter knobs during a solo than with the keyboard. Listen to Larry Fast with Peter Gabriel or the aforementioned Brian Eno on early Roxy Music albums and you won't hear a bewildering flurry of notes, but complex changes in timbre which are far more interesting than 'chops'. However, there are many filter movements which are too fast to be produced manually for every note played. Wouldn't it be nice if there were a way to automate these filter movements, leaving both hands free to play the keyboard? Well, the good news is that there are several. We already saw one of them in last month's instalment: the Low Frequency Oscillator, or LFO, which can be used to induce regular repeated variations in the sound. The first applications we saw were in using the LFO to control pitch (adding vibrato) or volume (for a tremolo effect). By routing the LFO to the filter cutoff frequency (more on the concept of routing in a minute), you can constantly vary the harmonic content of the sound, an effect which is particularly pleasing at very slow LFO frequencies. If you then also increase the resonance, the harmonics will be emphasised in turn as the cutoff sweeps back and forth.
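As a rough illustration of routing an LFO to the cutoff (the rate and depth figures below are arbitrary assumptions, not settings from any synth), you generate a very slow sine-wave LFO and let it move the cutoff up and down, then feed that cutoff, sample by sample, into a filter loop like the one sketched earlier.

import numpy as np

fs = 44100
t = np.arange(fs * 4) / fs
lfo_rate = 0.5                                 # very slow: one full sweep every two seconds
lfo = np.sin(2 * np.pi * lfo_rate * t)         # swings between -1 and +1
cutoff = 800 + 600 * lfo                       # cutoff sweeps between 200Hz and 1400Hz
# feed 'cutoff' sample by sample into the filter loop from the earlier sketch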
There is another way to vary the cutoff, which is based not on repeated effects, but happens automatically each time you trigger a note. This means you can set up the same shape of filter movement for each note, even when you are playing very quickly or polyphonically. This sound-shaper is not only the lynch-pin of analogue synthesis, but a mainstay of all other types of synthesis as well, and is called an Envelope. It allows us to automatically shape sound over time, beginning from the start of each new note. By taking care of the changes we require on every note we play, it leaves us free to worry about what we are playing. I've introduced the concept here by explaining how envelopes can alter filter cutoff over time, but they may also be used to control any other aspect of the sound which we want to affect each note played, such as the volume level or pitch. This is what makes the envelope such a universally useful synthesis tool, not just for analogue filtering, but for overall volume (which we need to control in any type of synthesis). The envelope is also important in other synthesis methods, for controlling frequency modulation (or FM) amount or the level of different harmonic groups in FM or additive synthesis respectively (more on FM and additive synthesis next month).
The most common type of envelope in traditional analogue synthesizers is called the ADSR. This is an abbreviation for the four stages the envelope can pass through, namely Attack, Decay, Sustain & Release. While these are not universally implemented by any means (on cheaper machines you may find only Attack, Decay and Release, and on more recent synths there may be additional parameters available), the ADSR is the most common type, and a good place to start understanding the idea behind envelopes. Three of the four standard envelope parameters refer to the times taken to move between specific levels (Attack, Decay and Release). The remaining parameter, Sustain, is different, as this sets the level at which the envelope remains until the key is released.
Attack is the time taken for the envelope to move from the initial zero level to the maximum level. The higher this parameter is set, the longer it takes to reach that maximum level; so if the Attack Time is at zero, the full level should be achieved instantly (in fact, it does take a small amount of time to reach full level, and this time varies from synth to synth; this variation in the minimum attack time is what can make one synth sound punchier than another). The Decay parameter sets how long it takes for the envelope level to drop from the maximum to the variable Sustain level. If this Sustain level is set to maximum, the Decay parameter has no effect, and if the Sustain level is zero, the level will drop to zero at the rate set by the Decay if the key is held long enough. Setting the Sustain level to maximum means that once the attack portion of the envelope has happened, there will be no change in the sound until the key is released. The lower the sustain is set, the more the level is allowed to decay while the note is still held. Once you have let go of the key, the Release parameter governs how quickly the level drops to zero from that set by the Sustain value. If this is set to a short time, then the level will drop very quickly.
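Because the ADSR is really just a level moving through a few straight-line segments, it is easy to sketch. The helper below is purely illustrative (the name adsr and the way the hold time is handled are assumptions of this sketch, not any manufacturer's implementation): attack, decay and release are times, sustain is a level, and hold_time stands in for how long the key is held.

import numpy as np

def adsr(attack, decay, sustain, release, hold_time, fs=44100):
    a = np.linspace(0.0, 1.0, max(1, int(attack * fs)), endpoint=False)   # rise to full level
    d = np.linspace(1.0, sustain, max(1, int(decay * fs)), endpoint=False)  # fall to sustain
    s_len = max(0, int(hold_time * fs) - len(a) - len(d))                 # remainder of the held note
    s = np.full(s_len, sustain)
    r = np.linspace(sustain, 0.0, max(1, int(release * fs)))              # fall to zero after release
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.01, decay=0.3, sustain=0.6, release=1.0, hold_time=1.5)

Plot the result and you will see the familiar shape: a ramp up, a fall to the sustain level, a flat portion while the 'key' is held, and a ramp back to zero on release.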
It is fairly easy to understand how these levels work if you imagine the envelope being assigned to control the overall volume of a sound. A slow Attack will fade the sound in instead of it appearing instantly, a fast Decay will make it die to the Sustain level more quickly, a high Sustain level will keep the sound at high volume until the key is released, and a long Release means the sound will take a while to die away once you have let go of the key. All analogue synths will have a volume envelope (as will 99% of all other synthesizers) so you can very quickly acquaint yourself with the effect of these controls on the volume by adjusting the parameters and seeing how they affect the sound. Of course, if you don't have all four parameters, just learn the effect of those you do have. Those of you using synths with more complex envelopes will have to wait until later in the series to fully understand how they work, when we will look at those synthesis styles which use more stages.
Envelopes & Filters
Of course, envelopes may be applied to the filter as well as the volume of a sound (this is where we came in), and this is important when creating sounds that appear 'natural' to our ears. In acoustic instruments, the harmonic content of the sounds generated often changes radically over time, as well as just the volume: a plucked string starts off very bright, but quickly dies away to just the fundamental. Even bowed or blown instruments, which can maintain a steady harmonic content over time, tend to have a harmonically brighter attack as the player accentuates the beginning of the new note. Even if you're not seeking to directly copy acoustic sounds (I've already mentioned what a non-starter this is with most analogue synths), the ear still likes to hear familiar patterns in sounds. However, when it comes to applying an envelope to the filter cutoff, things get a little bit more complicated. A volume envelope will always start from silence and return to it (otherwise the synth would be sounding even when you hadn't played anything), but this is not necessarily the case with the filter envelope. The filter cutoff may not start from completely closed, nor may it be returned to that position. In fact, most of the time the volume envelope is used to silence the sound long before the filter envelope might achieve the same result.
For me, the difference between a great player and a greater synthesist is that the latter often does more with the parameter knobs during a solo than with the keyboard.
However, in certain cases, you may want to use the filter envelope to remove all frequencies. In this case you would use the manual filter control to close the filter completely, and then set the envelope to open it and return it to the closed position at the end of the Release phase of the envelope. Remember to make sure that the release on the volume envelope goes on long enough to let you hear the effect of the filter envelope. It is also best if you set the volume attack to minimum and the volume sustain to maximum. In general, you should use the manual filter cutoff to set the start and end position of the filter. Remember that if the manual filter cutoff is set to fully open the filter, there is no way the envelope can affect the filter any further (unless you have one of the more flexible synths which allow for negative settings of the filter envelope). So make sure that the filter is at least partially closed before you start trying to hear the effect of the filter envelope. You will also need to set the amount of effect that the filter envelope has on the cutoff position (look for the parameter on your synth labelled Filter Env Amount, or perhaps just Filter Amount). If this is set to zero, you might spend all day adjusting the filter envelope parameters without hearing any difference! The Filter Amount control determines how much movement the envelope will induce in the cutoff frequency. If you set a large amount, the filter will probably be fully open at the end of the Attack phase of the envelope, and lesser amounts will cause it to open up less.
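To make the Filter Env Amount idea concrete, here is a minimal, self-contained sketch (all the figures are arbitrary assumptions): the envelope output, scaled by the amount control, is simply added to the manual cutoff setting to give the cutoff value used on each sample.

import numpy as np

fs = 44100
base_cutoff = 300.0                 # manual cutoff: the filter starts partially closed
filter_env_amount = 3000.0          # how far, in Hz, the envelope is allowed to open it
# a simple envelope shape: instant attack, 0.4s decay to a low sustain, 0.5s release
env = np.concatenate([
    np.linspace(1.0, 0.2, int(0.4 * fs)),   # decay from full down to sustain level 0.2
    np.full(int(0.6 * fs), 0.2),            # held at the sustain level
    np.linspace(0.2, 0.0, int(0.5 * fs)),   # release back to zero
])
cutoff = base_cutoff + filter_env_amount * env   # per-sample cutoff for a filter loop
# with a negative filter_env_amount the envelope would close the filter instead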
To imitate the natural harmonic decay heard in 'plucked' acoustic sounds, you should set the attack of the filter envelope to zero, so that when you play a note, the filter will open up fully straight away. If you use a slower attack, the note will sound more like an instrument being bowed or blown softly to start with and then increasingly harder. Again, these are just examples from the acoustic world to help you understand what you are doing, not attempts to make exact copies of 'real' sounds. The great thing about analogue synthesis is that you can create lots of sounds which don't exist naturally, and if you have access to more comprehensive analogue synths, you should also experiment with envelope control of band-pass and/or high-pass filtering. Similarly, if it is possible to set a negative envelope amount to the filter on your synth, check out the effect that this gives. In this case, you should set the manual cutoff to the most open position that you want it to be, as the negative envelope will close the filter to start with, and then return it to the most open position at the end of its cycle.
It is always a good idea when experimenting like this to work with fairly long attack, decay and release times, with the sustain level at about half way. This gives the untrained ear more time to follow what is happening to the sound during each phase of the envelope. When you feel comfortable with the slow movements, reduce the times so that the cycle happens more quickly. Once you have heard a filter opening slowly and then sped it up bit by bit, you will soon recognise the characteristic sweep, however fast it is happening in a sound - if you have trouble, you can always turn up the resonance, which will help pick out the filter movements.
Routing And Misrouting
Of course, envelopes can be used to control much more than volume and filter cutoff, but how much you can experiment with this will be determined by how much routing you can do on your synth. The most basic analogue synths will be hard-wired to the sort of signal path shown in Figure 3. Usually two oscillators (sometimes one, sometimes three) are mixed together and passed through a filter, also known as a VCF or DCF (Voltage Controlled or Digitally Controlled Filter), and then the volume amplifier or VCA/DCA (Voltage Controlled or Digitally Controlled Amplifier). Normally the filter and amplifier will each be controlled by an envelope (on some more basic synths you may have to share one envelope between volume and filter) and you will often find that your envelope(s) cannot be set to control anything else. A single LFO will probably be available to control the pitch of both oscillators (vibrato), the pulse width of one or both (PWM), or the filter cutoff. If you find yourself able to do more than this, then your synth is definitely above average. Additional routing possibilities include envelope to pitch (for automatic bend effects), pulse width and LFO amount (to delay vibrato till after the note has been held for a second, and so on), and switching a third oscillator between normal audio and LFO operation. On some synths (such as the EDP Wasp and OSCar) you may even find that you can switch the envelope to repeat its cycle, allowing for the creation of custom LFO waveforms using the ADSR shape.
Attack is the time taken for the envelope to move from the initial zero level to the maximum level.
At the opposite end of the scale, you may have access to modular analogue synthesizers whose routing possibilities are completely up to you; with these, you use patch cords to connect the different parts of the sound-generation and -shaping architecture together in any order you like. The degree of complexity is directly proportional to the number of patching points in the system (and the number of patch cables you have - a steadily decreasing number in my experience!). On big modular systems, not only are the routing possibilities infinite (even discounting those which do not produce an audible result), but the actual number of oscillator, filter and envelope modules is variable (assuming you have the money - so if you want another oscillator, you go out and get another oscillator module), and you can build up ridiculously complicated routings. There comes a point where the law of diminishing returns is clearly applicable, but unless you are very experienced, long before this point you will lose all grasp of what is actually happening to the sound in your mega-patch.
A good compromise between the fixed architecture of the basic analogue synth and the totally open system of gigantic modular systems is something like the Korg MS20, which has enough patching points to be flexible, but not so many as to be unmanageable or incomprehensible. This was perhaps the most successful of the 'patchable' analogue machines (even though the single-oscillator MS10 was much cheaper). As a result, there are a decent number of these machines floating around out there (whilst house-hunting in Carshalton recently, I spotted one left behind by a teenage son when deserting the parental abode), although their price on the second-hand market has risen drastically of late because of the renewed interest in all things analogue. However, once you have mastered the fixed routing of the simpler analogue synths, such 'patchable but simple' analogues are ideal for learning the more advanced applications of analogue synthesis - if you can track one down.
So, when the routing of the analogue signals is left up to you, what are you going to do with your new-found freedom? Well, as we so often discover when all constraints are removed, many of the possibilities opened up actually lead nowhere at all or, to be more literal in this case, result in silence. So you should actually start by recreating the signal path shown in Figure 3: one, two or three oscillators routed into the mixer, with the result put first through a low-pass filter and then an amplifier, with one ADSR envelope controlling the filter's cutoff point and another controlling the amplifier's level. This advice is not so conservative as it sounds; it's not so much 'don't try this at home, children' as 'it pays to learn the rules before you break them!'. LFOs can be routed initially to oscillator pitch and pulse width (if pulse wave is selected on one or more of the oscillators, that is), or filter cutoff and amplifier level (for wah-wah and tremolo-type effects). Then try moving one connection at a time and see the way the sound changes; start with the points to which LFOs and envelopes are routed, as these are much less likely to make the sound disappear altogether.
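Putting that standard signal path together in code makes the routing explicit. The sketch below is illustrative only - two detuned oscillators into a mixer, through a resonant low-pass filter and then an amplifier, with one simple envelope opening the filter and another shaping the level; none of the names or figures are taken from a real synth.

import numpy as np

fs = 44100
n = 2 * fs
t = np.arange(n) / fs

# two detuned oscillators into a mixer
osc1 = 2 * ((110.0 * t) % 1.0) - 1                # sawtooth
osc2 = np.sign(np.sin(2 * np.pi * 110.7 * t))     # square, slightly detuned
mix = 0.5 * (osc1 + osc2)

def simple_env(attack, decay, sustain, release, length, fs=44100):
    # a crude ADSR whose sustain simply lasts long enough, then gets trimmed
    a = np.linspace(0, 1, max(1, int(attack * fs)), endpoint=False)
    d = np.linspace(1, sustain, max(1, int(decay * fs)), endpoint=False)
    r = np.linspace(sustain, 0, max(1, int(release * fs)))
    return np.concatenate([a, d, np.full(length, sustain), r])[:length]

filt_env = simple_env(0.0, 0.5, 0.2, 0.5, n)      # 'filter envelope'
amp_env = simple_env(0.01, 0.2, 0.8, 0.3, n)      # 'amplifier envelope'
cutoff = 200 + 4000 * filt_env

# resonant low-pass (state-variable, as before), then the amplifier stage
low = band = 0.0
out = np.empty(n)
for i, x in enumerate(mix):
    f = 2 * np.sin(np.pi * cutoff[i] / fs)
    low += f * band
    high = x - low - 1.0 * band                   # fixed, moderate resonance
    band += f * high
    out[i] = low * amp_env[i]                     # VCA stage: envelope scales the level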
Don't think it's the end of the world if you don't have an analogue synth with physical patching facilities, either. Although Sequential Circuits never went as far as offering patching cables, the Poly-Mod sections on everything from their Pro One up to the Prophet T8 give you some pretty wild routing capabilities which allow you to get away from the standard analogue setup, and most modern synthesizers have pretty flexible internal routing capabilities now that such things can be done in software. So even if your PCM-based synth doesn't have the most authentic analogue oscillator sounds, it can still teach you a great deal about the way routing works. Particularly good examples of very flexible routings are Emu's rack units, from the Proteus onwards. The only real problem with software routing is that you may have to become familiar with a lot of abbreviations, as sometimes there is not enough room in digital displays to list out the parameters and their settings fully. So be prepared to decipher combinations of numbers and letters like OSC1 PWM ENV or FIL TYP: BPF in the display. Whatever access you can get to more flexible routing synths, whether via patch cables or software switching, don't be afraid to experiment with bizarre routings. The more advanced techniques discussed below both evolved from people plugging things in where they weren't supposed to go! Who knows, maybe you will be the first to discover a new routing technique which will be as full of character as these two.
Ring Modulation & Oscillator Sync
The first of these, Ring Modulation, is a process for modulating one frequency with another in such a way as to produce only sum and difference frequencies, but none of the original fundamental. The original ring modulation circuit has its origins in radio communications, and was originally based around a couple of transformers and a diode bridge or ring (hence the name). Subjectively similar effects can be created by routing an oscillator operating at an audible frequency into the LFO input of another audio oscillator, which is possibly how the effect was first discovered. This would probably originally have been done on a modular system, but it is also possible on the classic MiniMoog/MemoryMoog design, which allows oscillator 3 to be switched between audio and LFO function. By switching to the LFO routing, but keeping the frequency in the audio range, you can modulate the pitch of the other oscillator so fast that you produce new frequencies which are multiples of the two source oscillator frequencies, many of which are not in the normal harmonic series of either oscillator's fundamental frequency. This produces a range of sounds with a metallic quality, and is therefore useful for making bell sounds or more abstract timbres. Whether the sound has a slight metallic edge to it or is completely atonal depends on whether the frequencies of the two oscillators are closely related or not, as well as whether the pitch of one is being moved in real time as you play it (by an envelope or LFO, for example). As very small adjustments to a ring-modulated oscillator's frequency can make a major difference to the timbre produced, you will find the results can be unpredictable but very rewarding.
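The 'sum and difference only' behaviour falls straight out of the mathematics of multiplying two sine waves, as this minimal sketch shows (the frequencies are chosen arbitrarily):

import numpy as np

fs = 44100
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 220 * t)
modulator = np.sin(2 * np.pi * 80 * t)
ring = carrier * modulator     # equals 0.5*cos(2*pi*140*t) - 0.5*cos(2*pi*300*t)

Because 140Hz and 300Hz are not harmonics of either 220Hz or 80Hz, the result already has the metallic character described above; detune one oscillator slightly and the partials shift against each other in exactly the unpredictable way mentioned in the text.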
Another technique which produces major changes in the harmonic content of the sound, but is less radical in terms of those harmonics' mathematical relationship to the fundamental, is oscillator sync. In this specific configuration, one oscillator's cycle is synchronised to that of a second. This forces the waveform of the sync'ed oscillator to restart its cycle each time the other one crosses the zero point going from negative to positive. As a result, the fundamental frequency of the slave oscillator is kept the same, but the waveform is radically changed. The pitch of the controlling oscillator is not normally added into the audio mix, but instead can be shifted by pitch-bend, envelope, aftertouch or LFO. This makes radical changes to the harmonic content of the synchronised oscillator, but without making the fundamental pitch as weak as ring modulation does; instead, the higher harmonics around the pitch of the moving oscillator are picked out. Oscillator sync is ideal on lead synth sounds, where it can make the synth scream like a distorted lead guitar, or on bass sounds, where it makes the bassline stand out with a really hard edge. Oscillator sync is to be found on many analogue synths, from the classic Prophets and Moogs to the more recent Novation BassStation Rack and Yamaha AN1x. It is another one of my favourite features on analogue synths, giving unparalleled expression to the sound when the pitch of the controlling oscillator is linked to aftertouch or one of the mod wheels. However, like ring modulation, oscillator sync is not, strictly speaking, a 'subtractive' technique, in that it adds to the frequencies originally present in the oscillator waveforms (although you shouldn't let that stop you making good use of it!). As such, these techniques make a good bridge from 'straight' subtractive techniques to other calculation-based styles of synthesis, which use multiplication and waveform manipulation to produce frequencies outside of the normal harmonic series, such as Frequency Modulation and Phase Distortion. In the next part of this series, we will look at the most successful of these 'multiplication' synthesis types, Frequency Modulation, or FM.
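Hard sync is also easy to sketch: keep a phase counter for each oscillator and reset the slave's phase whenever the master's wraps around. In the minimal example below (all figures arbitrary assumptions), the slave's own frequency is swept upwards, which is what produces the classic screaming sync timbre while the perceived pitch stays locked to the master.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
master_freq = 110.0
slave_freq = np.linspace(150.0, 440.0, len(t))   # sweeping this gives the classic sync scream

master_phase = (master_freq * t) % 1.0
slave_phase = 0.0
out = np.empty(len(t))
for n in range(len(t)):
    if n > 0 and master_phase[n] < master_phase[n - 1]:  # master wrapped: restart the slave
        slave_phase = 0.0
    out[n] = 2 * slave_phase - 1                          # sawtooth output from the slave
    slave_phase = (slave_phase + slave_freq[n] / fs) % 1.0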
Erratum
Apologies go to readers for the mistake which crept into the diagrams illustrating Paul Wiffen's first article in the Synth School series, and our thanks to those four observant readers who contacted us to point it out. The fundamental frequency in any waveform is of course the same as the first harmonic in the harmonic series, and should not have been illustrated as two separate components. The correct harmonic series for the sine, sawtooth, square and pulse waveforms are displayed left.
Analogue Recreations With Physical Modelling
Although many people swear by the original analogue synths, some of which are now changing hands for more than their original retail prices, a new generation of synths is recreating the analogue sound via the state-of-the-art technique of physical modelling. Using raw processing power, DSP chips (first used for effects processing) are now being used to simulate the exact stages of the sound modification procedure which occur in analogue synthesizers, from oscillator waveforms to filter action to envelope shaping, all entirely in the digital domain. The principal advantages of these modern recreations are that they boast rock-solid tuning (never original analogue's strong point), hundreds of presets and user programs, and all the advantages of MIDI for sequencing and SysEx communication. Korg's Prophecy did not restrict itself to just analogue sounds, but the analogue models it did feature were extremely reminiscent of the classic monosynths of yesteryear. The first polyphonic synth to recreate analogue sounds was Clavia's Nord Lead, which allowed real-time control with dedicated analogue controls, and this machine had the market to itself for over a year (and was recently upgraded to the Nord Lead II). However, the Japanese manufacturers have responded strongly in the last few months, with Roland's JP8000 (a thorough recreation of that company's classic Jupiter 8) coming first. This was swiftly followed by Yamaha's AN1x, a 10-voice synth with particularly good sync sounds - see the AN1x review starting on page 166 of this issue. You can also read Gordon Reid's preview of the very latest contender, the 12-voice polyphonic Korg Z1, elsewhere in this issue. Whether any or all of these machines can be seen as authentic replacements for the classic synths of yesteryear is a matter of personal opinion, and no doubt the debate on this point will rage long and hard. What is beyond question is that as the second-hand market runs out of bargains (as owners wise up to the value of the pearls they have been sitting on), these new machines offer a very viable alternative, particularly in the modern MIDI setup.
Vowel Play
Sometimes analogue impressions of vocal sounds can work better than sampled vocals in a track, because the frequencies affected by the filtering are not directly related to the pitch of the note you are triggering, but dependent only on the filter cutoff. The human vocal cords apply the same resonant filtering effect, and they don't vary this just because you sing a different pitch. Instead, the variation is used to create different vowel sounds, independent of the note being sung. When you play a new note with even the most accurate samples, the resonant frequencies shift in strict mathematical relationship to the transposition from the original pitch. So when you transpose a sampled voice by even a semitone, it sounds more like a different person singing the new pitch, not the same vocal cords. Whilst an analogue synthesizer will rarely be mistaken for human voices, it may well give you a more organic impression of voices used as an ambient background than a sampler whose pitch-related variations in timbre jar on the ear. As always, I advise people to steer clear of the idea of using analogue synthesis in direct imitation of a sound. However, analogue synths can be excellent for giving the general impression or feel of conventional instruments without being slavish imitators, especially when placed further back in the mix and given their own ambient space.

When trying to produce a vocal effect on an analogue synth, the best results tend to come from those which have a band-pass filter setting or a high-pass and low-pass in series (essentially the same thing). Set the resonance to just under the point where it is about to go into self-oscillation, and then move the cutoff frequency (or frequencies if you're using low-pass and high-pass filters in series) around slowly. With luck you will find a point where a distinct throaty element creeps in. Patience is a definite virtue in the search for this elusive effect, and if the synth you're using has user memories, be ready to save as soon as you find it. If not, then be ready to record the part you want the sound for, as the sound can drift all too quickly on unstable old machines.

My favourite machine for this is the Elka Synthex, which had two different widths of band-pass filter, a very stable resonant response, and a ton of user memories. My 'Choirboy' patch, a serendipitous find on that machine, has fooled many an untrained ear (I'm thinking mainly of TV and film directors with that 'untrained' reference, by the way) to the point where I could probably have got away with billing them for a session with Aled Jones or whoever the current pre-pubescent warbler was! The dual filter of the OSCar is another winner for this (moving the Separation parameter controlling the distance between the two resonant peaks can create vowel sounds which give the impression of singing in a foreign language), as are any of the early Korg synths featuring the splendidly-named 'Traveller' (they don't make parameter names like that any more, do they?), which is a disguised high-pass and low-pass filter in series.
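For the curious, the 'throaty' band-pass trick can be approximated in a few lines: take the band-pass output of a resonant filter, set the damping very low (just short of self-oscillation) and let the centre frequency drift slowly around typical vowel formant regions. Everything in this sketch is an assumption for illustration, not a recipe from any of the synths mentioned.

import numpy as np

fs = 44100
t = np.arange(fs * 4) / fs
saw = 2 * ((98.0 * t) % 1.0) - 1                    # harmonically rich source
centre = 500 + 300 * np.sin(2 * np.pi * 0.2 * t)    # drift between roughly 200Hz and 800Hz
q = 0.1                                             # very low damping: near self-oscillation

low = band = 0.0
out = np.empty_like(saw)
for n, x in enumerate(saw):
    f = 2 * np.sin(np.pi * centre[n] / fs)
    low += f * band
    high = x - low - q * band
    band += f * high
    out[n] = band                                   # band-pass output: throaty, vowel-like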


@@ -0,0 +1,99 @@
Synth School: Part 3
Digital Synthesis (FM, PD & VPM)
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published September 1997
Having completed his study of analogue synthesis last month, Paul Wiffen takes a look at FM and its related digital synthesis types, which rocked the synth world throughout the 1980s. This is the third article in a 12-part series.
Incredible as it now seems, 10 years ago, Frequency Modulation (or FM) synthesis ruled the world - well, the musical world at least. Having already sold a phenomenal number of FM-driven DX7s, Yamaha were in the process of creating cheaper and cheaper FM synths (with increasingly large model numbers) so that even the most impoverished musos could have their own. A couple of years after this, Yamaha launched their (in my opinion) best-ever synths with the hybrid SY series, which combined 16 FM voices with 16 PCM sample-based voices. My treasured SY99 is actually my favourite Yamaha product of all time (for reasons which will become clear elsewhere - see the 'Which FM Synth Is Best?' box).
These days, from a synth player or programmer's point of view, you could almost be forgiven for thinking that FM had never existed, and that Yamaha had discreetly drawn a curtain over that whole period of synth history (they dropped FM from the SY85 a few years back, and then quietly discontinued the synth itself); in short, the profile of FM keyboard synths on the market is at an all-time low. But, in fact, FM synthesis is more widespread than ever; hidden in the bowels of every IBM-compatible computer now being sold is a miniature FM synth. Back in the days when PCM-based synthesis (or wavetable synthesis, as the computer industry incorrectly refer to it) was still too expensive to put on soundcards, Yamaha sold their FM technology to Creative Labs as the sonic engine on the original SoundBlaster, in the form of the OPL chip. Due to the phenomenal success of this soundcard and its descendants, the OPL chip is now part of the ubiquitous SoundBlaster spec which games and other multimedia applications alike use for their sonic accompaniment. So every SoundBlaster-compatible card now produced has to have the OPL chip (even if it also features a high-quality PCM sample-based synth as well) - and these days you can't sell a PC without a SoundBlaster card built in. This means that there are actually more FM synthesizers out there than ever, although they no longer sport Yamaha's unique combination of brown and green, nor the label DX. In fact, the huge numbers of FM synths sold in the '80s has probably now been eclipsed by the number included in the PCs that have been sold worldwide!
Capital FM
As the FM sound engines on PC soundcards use a less sophisticated version of FM, we will concentrate on the implementation of FM first made available on Yamaha's DX7 (and their much more expensive DX1). If you are serious about using FM in your sonic arsenal, try to get your hands on an FM machine with the same implementation as this, because the scope of what you can achieve with the less complex implementation, sonically speaking, is rather more limited. In addition to the thousands of DX7s out there (including 1988's DX7 MkII), there are also TX802 and TX816 racks, SY77 & SY99 workstations, all now going second-hand for a fraction of their original asking price, and all with the 'more evolved' version of FM. This implementation is generally referred to as 6-operator FM (as opposed to 4-operator FM on the DXs with higher model numbers and, of course, their OPL chip descendants in PC soundcards). Straight away, we've run into our first piece of FM-related jargon; operator. However, there is no need to worry about this, as when I tell you that all operators do is produce sine waves, you will (hopefully) recognise them from the previous two parts of this series as being closely akin to the oscillators in an analogue synth. What's different in FM synthesis is the way these sine-wave generators interact to create sounds.
Don't expect to master FM synthesis as quickly as you did analogue.
So, why are they called operators? The answer lies in history... Unlike analogue synthesis, which was developed and refined in parallel by several different individuals and companies over some years (Bob Moog of Moog Electronics, Dave Cockerell of EMS, Dave Smith of Sequential Circuits, and Dave Rossum of Emu, to name but a few), FM has a much more singular parentage. In the early '70s, a team at Stanford University in California, led by Dr John Chowning, discovered a pure synthesis application for Frequency Modulation, which had been in use as a high-quality audio broadcast transmission system for some years. The new form of synthesis allowed the creation of sounds which had previously been beyond the ability of most analogue synths (for example, reasonably realistic brass, electric piano, and bell sounds, as well as other 'metallic-sounding' timbres); and so, emphasising this particular strong point of the new discovery, Chowning shopped FM synthesis around several manufacturers, and, after a few refusals from American companies (who must have later felt like the man at Decca who turned down the Beatles), signed an exclusive agreement with Yamaha for them to develop the new method and bring it to the market. As a result, Chowning and Yamaha were able to develop their own jargon, and decided that operator, rather than oscillator, was the term for them. Actually, it wasn't a bad decision, as it meant that nobody expected to use operators in the same way as oscillators on an analogue synth.
So, how does FM synthesis use operators to create sound? (Warning: more incoming jargon, albeit slightly more familiar!) Firstly, it is important to note that on any FM synth, each operator is known as either a Carrier or a Modulator, depending on the role that the sine wave produced plays in the creation of your FM sound. Those of you who know something about the way radio transmissions work may recognise the terms Carrier and Modulator. In broadcasting, it was discovered that if you modulated the frequency of a waveform instead of its amplitude (as in the earlier method of radio broadcasting, Amplitude Modulation, or AM), you could encode more audio information more accurately for transmission, which resulted in better reception, and a greater bandwidth and dynamic range. Those of you reading this in the Shires will be capable of listening to Mr Branson's fine radio station in hissy old AM (if you haven't given up on it because it's too much like listening down an old telephone), while us Greater London types have it in glorious FM stereo (although I can assure you that Michael Bolton's bleatings do sound worse on FM). The frequency of the carrier, in the case of Virgin Radio FM, is 105.8 MHz (ie. what you tune your radio to - or away from if Mr Bolton is playing) and that of the modulator varies according to the audio signal being transmitted. When we then tune into an FM radio signal, the carrier signal is taken out and we listen to the modulator.
However, in FM synthesis, it is the carrier we listen to (although with the effect of the modulator still present). In other words, we are actually interested in the interaction of the carrier and modulator, rather than using one as a means to get the other from A to B.
In fact, those of you who remember the difference between audio oscillators and low-frequency oscillators (LFOs) from the first two parts of this series should now be able to understand exactly the relationship between the carrier and modulator. A carrier is like an audio oscillator; its modulated output is sent to the mixer and thence to the speakers. The modulator is more like an LFO. You don't actually hear it, but you do hear the effect it has on the carrier.
The main difference between the modulator and an analogue synth's LFO is the speed of operation. As implied in its name, an LFO operates at low frequencies, nearly always below the audio range. There is no such restriction placed on a modulator; it may operate at frequencies considerably higher than audio, within the audio range, or below it. In this respect, the way it is used is similar to the way analogue synth oscillators can be employed to create ring modulator-style effects, where you set up one oscillator to modulate the pitch of another (this was covered in more detail at the end of the last instalment in this series). The frequencies produced by this analogue cross-modulation are ideal for bell-type sounds and other metallic timbres - exactly the timbres which FM later became famous for. This is no big surprise when you consider that pitch is merely the musical way of referring to frequency.
So, what's the difference between cross-modulation as practised on an analogue synth and the frequency modulation in the Chowning/Yamaha implementation? Well, to start with, FM operators can only ever produce sine waves, unlike an analogue oscillator. At first, this may seem a bit limiting when you remember that a sine wave only produces a fundamental frequency, but in fact the interaction of just two sine waves can produce incredibly complex timbres. This is because in FM, you don't simply mix two frequencies. Frequency Modulation actually produces multiple frequencies based on the sum and the difference of the original frequencies. The greater the modulation depth, the louder these new frequencies are in relation to the original frequency of the carrier. When you also consider that in Yamaha's FM, modulated carriers can be further modulated or themselves used as modulators, you should see that it is possible to quickly build up large numbers of frequencies in harmonically-related and/or harmonically-unrelated series. In general, the former will give the purer tones, and the latter the more complex, more inharmonic sounds, until all at last becomes noise if the relationships get too complex.
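Two-operator FM itself takes only a couple of lines. The sketch below uses the phase-modulation form as a reasonable stand-in for what the text describes (it makes no claim to reproduce Yamaha's exact mathematics); the index value is the modulation depth, and raising it brings up more and more sidebands at sums and differences of the carrier and modulator frequencies.

import numpy as np

fs = 44100
t = np.arange(fs * 2) / fs
carrier_freq = 440.0
ratio = 2.0                               # modulator frequency as a multiple of the carrier
index = np.linspace(0.0, 8.0, len(t))     # deepening modulation = brighter, more sidebands

modulator = np.sin(2 * np.pi * carrier_freq * ratio * t)
out = np.sin(2 * np.pi * carrier_freq * t + index * modulator)   # the carrier is what we hear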
This also explains why 6-operator FM is more versatile than its lesser, 4-operator relative. If you think about it, there are so many more ways you can configure six operators as carriers, modulators, modulated carriers, carrying modulators, re-modulated modulators, and so on, than you can four. This brings us neatly to the pretty pictures screen-printed on the front panel of the DX7 (or on illuminated displays if you are the proud owner of a DX1), which tell you how the operators are currently configured. These configurations are known (jargon alert!) as Algorithms.
Slave To The Algorithm
On modular analogue synths, you could create your own audio signal routings, allowing you to plumb the most bizarre signal paths together. Most analogue synth manufacturers preferred to set the possible paths in stone to some extent, so that you didn't end up with routings which produced no noise or led to the direst rackets (both of which tended to trigger lots of calls to modular manufacturers complaining of "broken" machines). Yamaha saw this problem coming a mile off, and quite sensibly chose to limit how the six available operators on the DX1 and DX7 could be hooked together, permitting only those algorithms liable to produce the most musical results, and thus giving the end user reasonable flexibility without the suffering. 6-operator FM offered 32 algorithms (its 4-operator junior had just eight - another plus point for 6-operator FM). Most DX owners complained that the synths were too complicated, but can you imagine the complexity if all routings had been freely user-definable? Or what the user interface would have been like? The mind boggles!
The conventions Yamaha use in the diagrams of these algorithms are fairly simple once they are explained. Whilst we do not have the room to reproduce the layout of all 32 algorithms in this article, 11 are pictured in Figure 1, on the first page of this article, as examples, which will allow you to visualise some of the ways DX operators can be arranged.
Algorithms are 'read' from top to bottom, and the operators on the bottom row of any algorithm are the carriers. These are what actually produce the sound you hear coming from the machine, mixed together like the signals from analogue oscillators before they enter their synth's filter. Anything above the bottom row is a modulator, and the lines joining them to the carriers show which is modulating which. You will see from Figure 1 that in many of the first few algorithms, one carrier is being modulated by two or even three modulators. In the final 16 algorithms on the DX7 (numbers 17 to 32), one modulator is able to modulate several carriers. Of course, some algorithms (numbers 3 and 4, for example) have a third row on top, and this is where you find the modulators modulating modulators (stay awake at the back there, please). Algorithms 1, 2 and 16 (in the diagram) even have a fourth level to re-modulate a modulated modulator. In case I have failed to lose you so far, I must also mention that each algorithm has a feedback loop somewhere, with which one or several operators can be set to modulate itself/themselves. As an example of the latter type, take a look at algorithm numbers 6 and 4 (again, see Figure 1), where chains of two and three operators respectively are fed back on themselves.
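One way to make the idea of an algorithm concrete is to write out a small chain in code. The sketch below is purely illustrative and is not a literal DX7 algorithm: operator 3 modulates operator 2, operator 2 modulates operator 1 (the carrier, the only thing we hear), and operator 3 also feeds back on itself, which is why the loop has to run one sample at a time.

import numpy as np

fs = 44100
n_samples = fs
f1, f2, f3 = 220.0, 440.0, 660.0        # carrier and two modulators
i2, i3, fb = 2.0, 1.5, 0.3              # modulation indices and feedback amount

p1 = p2 = p3 = 0.0                       # phase accumulators
prev3 = 0.0
out = np.empty(n_samples)
for n in range(n_samples):
    op3 = np.sin(2 * np.pi * p3 + fb * prev3)     # self-modulating top operator
    op2 = np.sin(2 * np.pi * p2 + i3 * op3)       # modulated by operator 3
    op1 = np.sin(2 * np.pi * p1 + i2 * op2)       # the carrier: what we hear
    out[n] = op1
    prev3 = op3
    p1 = (p1 + f1 / fs) % 1.0
    p2 = (p2 + f2 / fs) % 1.0
    p3 = (p3 + f3 / fs) % 1.0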
I hope you can now see that the algorithm selected has the most fundamental role to play in terms of the sound the DX produces. Randomly switching the algorithm on any of the preset sounds will soon convince you of this. In fact, if you are not careful, you can end up removing all the FM from your sound!
DX In Not-All-FM Horror Shock
Now it's time to let you in on a big secret. Some of the algorithms on a DX7 have hardly any FM in them at all, which you might feel is a bit like finding out the Pope is not Catholic after all. Algorithm 32 has no modulators at all (except that operator 6 can modulate itself, thanks to the feedback loop) and is actually more like additive synthesis (next month's topic of investigation) in that it just mixes together the sine wave audio output of all six operators, a bit like drawbars on an organ. Algorithm 31 is little better, with one modulator tacked on to carrier 5 like an afterthought. If you are scared of getting lost in over-complicated modulation paths, I suggest you start your experimentation with algorithms 31 or 32. The more adventurous among you should dive straight in on the triple modulation loop of algorithm 4.
Incredible as it now seems, 10 years ago, Frequency Modulation (or FM) synthesis ruled the world...
By the way, not all the operators featured in an algorithm are necessarily being used. Firstly, whenever you are in Voice Edit Mode on the DX, you will see what looks like a 6-bit binary number in the upper half of the display. This actually shows the status for each operator (1 for On, 0 for Off). These can be toggled using the first six of the 32 green switches on the right-hand side of the machine. Clearly, if a carrier or modulator is switched off, its sound (in the case of carriers) or effect (in the case of modulators) will not be heard.
Secondly, if the modulation depth of a particular modulator is set to zero, its effect will also not be heard. To check this, first use the Operator Select switch to step through to the operator you want to check. The various parameters for controlling the modulation amount (initial depth, envelope, velocity, aftertouch, LFO, and so on) will then all be available under their respective switches to the right.
Real-Time Timbre Change
Without going into more detail than we have room for here, this last sentence has quietly summed up one of the real strengths of Yamaha's FM synthesis implementation, and also explains why it had such a massive impact on the synth market of the mid-'80s. Up until then, most analogue synths had not been velocity sensitive, and those that were tended to be very expensive. The Prophet T8 first shipped around the same time as the DX7 (some two years after it was first shown), and its eight (admittedly very big) voices set one back a not-so-cool £5000, compared to the DX7's much less wallet-savaging release price of £1549. Even when a manufacturer managed to produce a more cost-effective machine, the limited number of voices was still a problem - at the same Frankfurt show where I first saw the DX7, Siel launched the velocity-sensitive Opera 6 for slightly less money, but as its name implies, it still only had six voices.
Not only did the DX have 16 voices (the first affordable synth with more voices than people have fingers), but on each, velocity and aftertouch could be routed to control some or all frequency modulation depths. This brought pianists (many of whom had sneered at the limited polyphony and real-time expression of synths for years) flocking in droves to hand over their hard-earned for a DX, particularly those jazz players for whom a chord is not a chord unless it has a dozen notes on top of those in the straight major or minor triad.
Suddenly, voices on a synth could change quite radically, depending on how hard you hit them and whether you leant on the note afterwards. This meant that you could keep your left hand (previously needed to push the mod wheel up every time you wanted a bit of vibrato or other form of expression) free to play basslines or a further five/six notes of jazz harmony (or, as we classical types like to refer to it, dissonance!). Previously, polyphonic synthesis had been an exclusive club, firstly thanks to its price, and secondly because you had needed to learn new ways to introduce expression into your playing. Suddenly, you could use all the techniques learnt in those years of piano lessons without modification. The sustain pedal could even be used unsparingly, as you had enough notes to ring on while you flourished up and down the keyboard (so to speak). Suddenly, all the real players wanted a DX7 <20> or a DX1, if they had money to burn.
Meanwhile, die-hard synthesists were still reeling from the shock. Of course, the drawback was that even the most experienced analogue programmer had to learn how to program a DX from scratch, and for many, unfamiliar terms like operator, feedback and algorithm proved a hefty culture shock. Fortunately, there were a couple of familiar terms amongst the alien parameters on the front panel.
It's OK - FM Has LFOs & EGs Too
Certainly, when my eyes first fell on the DX's front-panel EG label, I breathed a sigh of relief. OK, so the DX had no filters, but at least it had envelope generators... and an LFO as well. Phew. Everything I had previously learnt about analogue synthesis had not gone out of the window. In your first close encounters with FM, I suggest you keep this thought in mind, as FM synthesis can seem like another universe after the familiarity of analogue, particularly when the envelopes don't seem to have the familiar Attack, Decay, Sustain and Release phases. We will look at how DX envelopes work, but first let's look at the altogether more familiar use of the LFO. The DX7's LFO parameters, Wave, Speed and Delay, should be self-explanatory (what waveform you want, how fast it cycles and when it comes in), and PMD and AMD are just abbreviations for pitch modulation depth (vibrato) and amplitude modulation depth (tremolo). Of course, if the operator to which these two are applied is a modulator, the audible effect will not be vibrato and tremolo, but instead a variation in the modulation frequency or depth, the end result of which will be changes in the harmonic content of your sound. Despite these differences, the DX7's LFO section will be fairly familiar territory to the average analogue programmer. If you have access to a DX7, try setting up fairly radical LFO modulations (for radical read unmusical) on operators 2, 4 and 6, with algorithm 32 selected (where all six operators are behaving as unmodulated carriers), and then switch to algorithm 5 or 6; you will be surprised at how much more interesting (and musical) the timbre will suddenly become as these wild audio sweeps are converted into frequency modulations. Switching algorithms really is the fastest way to sonically, rather than intellectually, appreciate the relationship between carrier and modulator. As you get more proficient, you can try using the more radical algorithms, like numbers 16, 17 or 18.
Pushing The Envelope - Rates & Levels
A similar technique can be applied as you find your way around the DX's initially unfamiliar envelopes. These allow a greater degree of fine-tuning of the operator level than the simple analogue ADSR type, but they are fundamentally the same. If it helps at first, try thinking of a carrier's EG as an amplifier envelope which changes the volume in real time (because that's what it is), and a modulator's EG as a filter envelope which changes the harmonic content over time (because that is the closest analogy). It is by increasing the level of the modulator with its envelope that you can simulate the changes in brightness which occur in real plucked or blown sounds.
The way to approach rate/level envelopes is to see them as a customisable ADSR for the DIY enthusiast. The level is the point at which each segment of the envelope changes to the next and the rate is the time it takes to make this transition. Yamaha thoughtfully provided a diagram of the envelope on the front panel to the left of the algorithms (see Figure 2 on the previous page), and if you take a look at this, you should see straight away that rate 1 and level 1 provide the equivalent of the ADSR's Attack phase. The only difference is that the attack does not necessarily have to go to the maximum level available, as with an ADSR. By making level 2 higher than level 1, you can have a secondary attack phase, whereas making it lower would put you into a more conventional decay situation (sorry if this is starting to sound like strategic military jargon!). The third phase of the envelope can be another increasing or decreasing segment, depending on whether level 3 is higher or lower than level 2. Care must be taken with level 3, however, as it is also the sustain level - that is, once this level has been reached, you're stuck with it until the key is released. Rate 4 gives you the equivalent of an ADSR's release time, but again this need not necessarily go to zero (just as rate 1 need not necessarily go to maximum). This is ideal on a modulator if you want to keep the harmonic complexity as your sound fades out, but be careful when using this on a carrier, or the note may go on forever (or until you change the sound, at least!).
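A rate/level envelope can be sketched as a chain of line segments, each heading for its own target level. In the illustrative helper below the 'rates' are simply expressed as segment times in seconds - not how the DX7 actually scales its rate values, but enough to show the shape - and the segment flagged as the sustain point is held until the (simulated) key is released.

import numpy as np

def rate_level_env(levels, times, sustain_index, hold_time, fs=44100):
    # levels: target level for each segment; times: seconds taken to reach each target
    segs, current = [], 0.0
    for i, (lvl, sec) in enumerate(zip(levels, times)):
        segs.append(np.linspace(current, lvl, max(1, int(sec * fs)), endpoint=False))
        current = lvl
        if i == sustain_index:                       # hold the sustain level while the key is down
            segs.append(np.full(int(hold_time * fs), current))
    return np.concatenate(segs)

env = rate_level_env(levels=[1.0, 0.7, 0.4, 0.0], times=[0.01, 0.2, 0.3, 1.0],
                     sustain_index=2, hold_time=1.0)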
Don't expect to master FM synthesis as quickly as you did analogue. If using analogue synthesis is like learning Spanish, with a simple structure and warm to the ear, then FM is more like German or Russian, complicated and often harsh-sounding (anyone spotted what I studied at university yet?). Some people never feel confident that they know what parameter changes to make with FM, and just busk it, or give up altogether. But don't be put off. Investing time pays dividends with FM, and it is possible to achieve an expressivity rare in the world of synthesis, especially if you are a jazz player and need to create the sort of sounds that don't over-fill the frequency spectrum when you play a dozen notes at once. Some people I know claim that they are still getting new sounds from FM (although when I've heard them, 'new' is perhaps a little strong). Techno/industrialist enthusiasts will love some of the metallic 'klangs' you can produce, and it is the only type of synthesis to appeal to bell-ringers (pun intended).
Yamaha's SY-series of synths contained, to my mind, the ultimate implementation of FM...
Those of you whose appetite for knowledge has been whetted by this all-too-brief résumé of Yamaha's FM can find much more detail, programming advice and exhaustive analysis in my colleague Martin Russ' series on FM programming which ran from May to October 1988 in the pages of this fine publication. There are also many fine books on the subject, including the excellent tome by Howard Massey, The Complete DX7. Those of you who like a more surreal approach should try the alternative DX7 manual from Rittor Music, translated rather literally from the Japanese (if it's still available), which features several pages on operators which produce a 'POOO' sound and a section on what happens to a 'tickled' operator when it is being 'tickled' by the 'tickler' (presumably the carrier modulated by the modulator). I was reduced to tears of laughter by this Ken Dodd approach to FM synthesis.
Casio, PD & The CZ Range
For several years, Yamaha had the digital synthesis field all to themselves, until competition came from the most unlikely source. I can still remember driving round the North Circular Road to Casio (yes, they of digital watch, calculator and VL-TONE fame), wondering what I had done to so upset the Editor of a certain now-long-deceased hi-tech music and recording publication that he would send me on a punitive mission to look at a Casio mini-keyboard. I came away a total Casio CZ-series convert, due in no small part to the enthusiasm of Richard Young, who persuaded me to 'listen without prejudice' to the CZ101 and its grown-up equivalent, the CZ1000. What I heard was reminiscent of FM in its clarity and expressivity, but it had the warmth I felt the DX lacked (I was fairly anti-FM at the time). I ended up writing a book with Dave Crombie on the CZ range and the Phase Distortion (or PD) system it used, and I still use IPD sounds (the later, advanced version of Phase Distortion) in my Casio PG MIDI Guitar - it works really well for plucked timbres.
There is not a huge amount of room left in this slot to explore Phase Distortion fully, but as it's much simpler in structure than FM, we can have a brief look at the principles behind it. Like FM, it eschews the use of subtractive techniques such as filtering, preferring to create its harmonic complexity and real-time timbral changes with a mathematical process which makes complex sounds by altering simpler waveforms (although not quite as simple as plain old sine waves, as in Yamaha's FM). This is not done by using one waveform to modulate another, as in Yamaha's synths, but rather by changing the speed at which the waveform is read out within the duration of each individual cycle. This has the effect of distorting the phase of the waveform (hence the name) and thereby changing its shape and harmonic content. The greater the amount by which the speed is shifted, the more the waveform changes shape, and the more complex the resulting harmonic content.
There is actually a wider range of raw waveforms on offer for Phase Distortion than in analogue synthesis; there are the old favourites like Square, Sawtooth and Pulse, but also some hybrid forms like double sine, saw pulse and three different resonant waveforms. You can also combine these to produce more complex harmonic structures before you even begin distorting the phase. However, to try and help you understand PD a little better, we will look at what happens to a cosine wave when its phase is distorted (don't worry about the fact that it's a cosine wave - this is simply a sine wave starting from a different point in its cycle, and doesn't actually sound any different). If you look at Figure 3 on the previous page, you will see the cosine waveshape and the corresponding phase angle throughout its cycle.
If we now speed up the readout of this waveform in the first half of the cycle, making a steeper phase angle gradient, and then slow down the readout in the second half, so that the total cycle time does not change (this keeps the pitch constant), the cosine wave will be altered as shown in Figure 4. As you can see, the whole waveshape leans forward, producing a much more complicated set of harmonics than just the fundamental contained in the original cosine. Of course, the wider the variation in the readout speed, the bigger the kink put in the phase angle graph, and the more radical the harmonic content.
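If you fancy seeing this for yourself at a computer, here's a rough Python sketch of the principle (my own illustration, using an invented piecewise phase map, not Casio's actual DCW maths): one cycle of a cosine is read out quickly up to a break point and slowly thereafter, and the spectrum fills up with harmonics as the distortion depth increases.

    import numpy as np

    SAMPLES = 1024                               # samples in one cycle

    def distorted_cosine(depth):
        """One cycle of a cosine read out at a varying speed.
        depth = 0 gives the plain cosine; higher values read the first
        part of the cycle out faster and the rest slower, bending the
        phase and adding harmonics. Illustrative only."""
        t = np.linspace(0.0, 1.0, SAMPLES, endpoint=False)    # normalised time
        break_point = 0.5 - 0.49 * depth                      # end of the fast section
        phase = np.where(t < break_point,
                         0.5 * t / break_point,
                         0.5 + 0.5 * (t - break_point) / (1.0 - break_point))
        return np.cos(2.0 * np.pi * phase)

    for depth in (0.0, 0.5, 0.9):
        wave = distorted_cosine(depth)
        spectrum = np.abs(np.fft.rfft(wave)) / (SAMPLES / 2)
        relative = spectrum[1:6] / spectrum[1]                 # harmonics 1-5 vs the fundamental
        print(f"depth {depth}: {np.round(relative, 3)}")

Sweeping that depth value over the course of a note, rather than leaving it fixed, is essentially the job of the envelope described next.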
By using an envelope to control the amount of phase distortion (Casio refer to this as the DCW ENV, the Digitally Controlled Waveform envelope), you can achieve the kind of real-time harmonic synthesis changes which all synths must have if they're to produce something more interesting to the ear than organ sounds. As you will see immediately from Figure 5, left, the envelopes Casio used have double the number of Rate and Level components found in Yamaha's FM envelopes (8 instead of 4), but the same basic principles apply, with level 5 being the Sustain component (the one which is maintained while the key is held down). As well as the envelope controlling the Phase Distortion, there are also the DCO ENV for pitch and the DCA ENV for volume.
The main advantage of PD synthesis over FM for the would-be programmer is the fact that much of the terminology and many of the concepts are closer to traditional analogue synthesis. In fact, if you think of the DCW envelope as akin to the filter envelope on an analogue machine, you will be knocking out your own sounds in no time. However, the principal advantage of PD to the listener is that the sounds have a much warmer quality, without resorting to the kind of chorus detuning of the TX816 or DX-MAX, or the combination with PCM sounds and effects of Yamaha's SY series. But PD's success in the mid-'80s was probably due more to the fact that the CZ synths were amongst the first to have a proper implementation of multitimbral operation via MIDI Mono Mode; you could actually trigger different timbres on different MIDI Channels. This led Vince Clarke to use half-a-dozen CZ101s hooked up to an old UMI sequencer on the BBC Micro in his late-'80s setup (before he went right back to analogue stuff triggered from a Roland MC4 at the turn of the '90s).
Casio followed up the success of the CZ models with the VZ range, expanding PD into IPD (Interactive Phase Distortion), which increased the expressivity available through aftertouch and velocity. Although there were far fewer VZ synths made, I would thoroughly recommend them if you are looking for the most developed implementation of Phase Distortion. Those of you wanting to make your way further into the subject can read more in Phil South's two-part SOS article which ran in the July and August 1987 issues, or even in the book Dave Crombie and I wrote, which rejoiced in the wonderfully imaginative title The Casio CZ Book (if you can still find it).
Having looked at the forms of synthesis based on more complex mathematical operations this month, you may be relieved to hear that next time I will be concentrating on simple addition (so you can put those wet towels and bottles of aspirin away now). We will be looking at the way additive synthesis builds up complex timbres, and will examine the dominance of another Japanese company, namely Kawai, in this field. Till then, happy modulating (or tickling)!
FM Without A Yamaha Synth
A little-known fact these days is that you can experiment with FM-style programming if you own an Oberheim analogue synth of the right vintage. Both the seminal Xpander and the herculean Matrix 12 offer FM-style oscillator cross-modulation, and have the added flexibility of being able to use waveforms other than simple sines as both carrier and modulator. So, although the Oberheim version of FM is, strictly speaking, only 2-operator (ie. the two oscillators), you can arrive at very complex sounds like bells and tuned percussion very quickly, because of the additional harmonic content in the 'operator' waveforms which produce a very complex harmonic spectrum when cross-modulated. If you are looking to mix and match synthesis types, you could do far worse than acquire one of these vintage analogue synths (designed by Marcus Ryle and Michel Doidic, now of Alesis fame). They sound excellent and are incredibly versatile - and the Matrix 12 is also one of the few analogue synths to be truly multitimbral.
The only company still producing synthesizers which can make sounds along FM lines is Korg, whose version of FM, VPM (or Variable Phase Modulation) features both in the highly successful Prophecy monosynth and, for those of a more polyphonic and multitimbral frame of mind, the brand-new Z1 (the subject of Gordon Reid's exclusive SOS preview last month, and due for a full SOS investigation in the very near future).
Both machines use the (by now) familiar terminology of carrier and modulator and, like the Oberheim versions, only have two oscillators, but these can be set to produce all the available waveforms, not just sine waves. Once again, this gives you a shortcut to the more complex timbres which would be created with several levels of modulation in Yamaha's FM implementation, although it does reduce the sheer width of sounds you can create, because you don't have the choice of different algorithms.
It would be fair to say, I think, that whilst Yamaha's FM offers as much programming potential as a big modular analogue system (German synthesis specialists Jellinghaus once made a DX7 remote programmer which had a physical knob for every DX parameter; it covered about an acre), what Oberheim and Korg's versions offer is more akin to what a hard-wired analogue synth can produce. Nevertheless, you can still obtain all the staple sounds (that means the bells, electric pianos, tuned percussion and metallic sounds which made FM famous). However, if you mean to dig really deep and lose yourself like an explorer in the jungle of FM, there really is no substitute for Yamaha's 6-operator implementation. Just don't forget to tell someone where you are going and when you expect to be back, or you may never be seen again!

Synth School: Part 4
Additive Synthesis
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published October 1997
Throughout the '80s, additive synthesis was the Holy Grail for synth purists; many machines aspired to it, but only one achieved it successfully. Paul Wiffen explains how additive works and looks at the various implementations, including the newly updated Kawai version.
In previous instalments of this feature, I've used various analogies from the visual arts to help illustrate how different types of sound generation work. Analogue or subtractive synthesis I likened to sculpture, where the artist starts with more 'stuff' than they ultimately need and removes large chunks of it until they are left with what they actually want. Sampling is more like photography, with a snapshot of the required timbre being taken; in PCM-based machines (often known as sample + synthesis) that snapshot is tweaked for the final result in much the same way that a photograph is manipulated during development and printing. It can be altered a little, but it will always be a photograph of the same subject.
To continue in this vein, additive methods of synthesis are closest to the oldest of the visual arts, painting. The sound is built up from its constituent parts, just as a painter mixes together different hues to achieve the required colour, and then lots of different colours are used to create the final picture. Additive synthesis uses combinations of harmonics to create the basic tone colours or 'timbres' and on more sophisticated systems several of these timbres can be combined to make the overall sound. On later additive synths it's not uncommon to find filters, borrowed from subtractive synthesis (just as you find them on many PCM-based machines), used to highlight the unique harmonic content of the waveforms that additive can create.
All-Artificial Additives
When we looked at the basic waveforms used in subtractive synthesis (see SOS June '97), we found that certain common electronically generated waveshapes - square, sawtooth, sine, and so on - could be described in terms of their harmonic content. A sawtooth contains all harmonics in inverse proportion to their number, a square wave all the odd harmonics in the same ratio, a sine wave only the fundamental or first harmonic. Additive synthesis turns this arrangement on its head and uses those very harmonics as the building blocks for much more complex waveforms. Where real-time timbral changes are possible, these are achieved by varying the levels of individual harmonics or groups of harmonics, often using devices we have come across before, such as envelopes and LFOs.
Because the sine wave is the purest waveform, in that it only contains the fundamental, and because it is the easiest to generate electronically, being very simple to describe mathematically, the sine wave is used as the basic building block of additive synthesis. A whole series of sine waves (whose frequencies are related to each other in exact correspondence with the harmonic series we used to analyse analogue waveforms previously) are 'summed', or mixed together. The second sine wave is double the frequency of the first, the third three times that of the fundamental, and so on. This makes it very easy to know the frequency of the harmonic in relation to the fundamental - for example, if your fundamental is good ol' A440, then the frequency of its fifth harmonic is 2200Hz.
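For the numerically inclined, the whole process boils down to a few lines of Python (my own sketch, not any particular manufacturer's implementation): each harmonic is a sine wave at an integer multiple of the fundamental, mixed in at its own level.

    import numpy as np

    SAMPLE_RATE = 44100

    def additive(levels, fundamental, seconds=1.0, rate=SAMPLE_RATE):
        """Sum sine-wave harmonics: levels[0] scales the fundamental,
        levels[1] the 2nd harmonic, and so on. A bare-bones sketch."""
        t = np.arange(int(seconds * rate)) / rate
        wave = np.zeros_like(t)
        for n, level in enumerate(levels, start=1):
            wave += level * np.sin(2.0 * np.pi * n * fundamental * t)
        return wave / max(np.max(np.abs(wave)), 1e-12)         # normalise to +/-1

    # The harmonic frequencies follow directly from the fundamental:
    # the fifth harmonic of A440 sits at 5 x 440 = 2200Hz.
    print([n * 440 for n in range(1, 6)])                      # [440, 880, 1320, 1760, 2200]

The static levels here are only the starting point, of course; as the rest of this piece explains, the interest comes from moving those levels around while the note plays.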
One of the measures of power in an additive synth is how many harmonics are available per note of polyphony. Although in natural and synthesized sound the greater proportion of the harmonics present are the lower ones, if the upper harmonics (normally only present in very small amounts) are not there at all, the sound is perceived as dull or distant, because distance and obstacles remove the higher frequencies first. This is why, when that flash car masquerading as a mobile disco pulls up next to you at the lights, all you can hear through your closed windows is the bottom end of whatever dubious taste in music the occupant has decided to share with you. Open your windows and the full glory of the unvarying hi-hat pattern becomes clear.
So, in order to create bright, interesting sounds, an additive synthesizer must be able to produce more than just the lower harmonics. This is particularly true in the lower ranges, where more and more of the harmonics of a sound are brought down into the audio spectrum. On higher fundamental frequencies, the higher harmonics quickly move into ranges which can only be appreciated by dogs.
Any self-respecting additive synth should be able to manage a minimum of 32 harmonics. Any less and the proper term for it is 'an organ'. In fact, strictly speaking, the tonewheel organ is the first additive synthesizer, allowing you to mix ten or more sine waves at related harmonic frequencies via the drawbars. Of course, if the tonewheels are creating pure sine waves, the sound will be very thin and uninteresting. Those organs which tend to sound the most pleasing to the ear are those where, through age or deliberate design, the tonewheels are putting out more complex waveforms, augmented by percussion, overdrive and a rotary speaker. Ten sine waves on their own (whatever unique mix of levels you come up with) do not a full sound make. In fact, 32 harmonics is an absolute minimum, and many additive synths provide 64. On the really well-specified machines, it is often possible to go to 128 by halving polyphony (ie. the second voice is used to create harmonics 65-128).
It's an interesting exercise (and proof that the theory I have been spouting is based in fact) to use additive synthesis to recreate the standard waveform timbres of analogue synths (although if this is all you ever plan to do with additive you will be drastically under-using its potential and should give up now!). By setting the second harmonic to half the level of the first, the third to a third of the level, the fourth to a quarter, and so on up the series, you will soon hear the familiar timbre of the sawtooth wave emerging as if a filter were being opened up slowly on it. In fact, as long as you're able to set the levels precisely enough, this will probably give you a more accurate sawtooth than most analogue synths. If you don't recognise the sawtooth timbre from your analogue synth, it's probably because the synth is only producing an approximation, with a bunch of extra frequencies not technically supposed to be present adding the extra character (just like the more interesting organs I referred to earlier).
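Carrying on from the additive() sketch a few paragraphs back, the sawtooth recipe just described is simply the 1-over-n level series; thin it down to the odd-numbered harmonics at the same levels and the code gives you a square-wave timbre instead.

    # Sawtooth recipe from the text: harmonic n at 1/n of the fundamental's level.
    saw_levels = [1.0 / n for n in range(1, 33)]               # 32 harmonics
    saw_approx = additive(saw_levels, fundamental=110.0, seconds=0.5)

    # Keep only the odd harmonics at the same 1/n levels for a square-ish tone.
    square_levels = [1.0 / n if n % 2 else 0.0 for n in range(1, 33)]
    square_approx = additive(square_levels, fundamental=110.0, seconds=0.5)

The more harmonics you include in the series, the closer the result gets to the textbook waveshape, and the brighter it sounds.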
Herein lies an early warning of one of the main dangers of additive synthesis. Without care it can sound weak and thin, and on simple implementations the best you can hope for is some pure tones with a glass-like transparency. If you're looking to additive for rip-roaring sounds which cut through everything else and grab the ear, you'd better make sure that your additive synth offers enough complexity and real-time operation to vary the harmonic content enough to demand the ear's attention (or that it 'cheats' by adding subtractive filters or PCM snippets to its sonic arsenal). I actually think there's little point in additive synthesis if you're going to stick to imitations of waveforms which are produced in analogue synths, or - worse still - attempt to recreate 'real' sounds. Having said this, theoretically speaking, any sound can be broken down into its constituent sine waves and therefore could be recreated by a sufficiently powerful additive synth. This theory was often advanced by many of the hobbit-like academics who lurked in the aisles of smaller stands at '80s trade shows, waiting to ensnare innocent journalists who weren't forewarned by previous encounters of this type. The real problem was that, after half an hour of theory along these lines, when you finally persuaded them to play you a sound from this system of theirs (which was going to change the history of synthesis forever), it was always the same thin-sounding pipe organ patch they came up with (hardly surprising, since the pipe organ was the first additive system).
On many of the computer music systems of the early '80s which offered multiple methods of sound generation, additive synthesis was the poor relation, the 'also ran'. The Fairlight had its sampling, the PPG its wavetables, the Synclavier its FM, and these were the glamorous aspects of these machines. They all also had some form of additive capability, yet somehow this was rarely mentioned, and used even less often. There were two reasons for this. The first was that the other means of sound production offered by a given system was more or less unique to the system (in the early days, at least) and therefore its promoters would always emphasise that side. Secondly, the other ways of working offered far more in terms of instant gratification than the additive side, which suffered from what I always refer to as the 'Compute' syndrome.
Stone-Age Additive
Early implementations of additive synthesis were not real-time implementations. The actual computational power in these computer music systems was pretty puny by today's standards (the current average PC with a soundcard outstrips the sonic potential of the original Fairlight by several powers of 10). As a result, they couldn't perform the level changes you might make to different harmonic components in real time, but had to go off-line to compute the new waveform. So, when adjusting the relative levels of the harmonics, you would be flying blind in terms of what the result would be. Actually, deaf is probably a better term than blind, as most of these systems had some pretty fancy graphics to show you what you were doing; my favourite was the PPG Waveterm, which superimposed the waveforms for new harmonics on top of the current waveform, complete with amplitude representation of level. Then, when you pressed 'Compute', it merged these together (eventually) into a single new waveform. However, no matter how pretty these displays were, unless you were very experienced they told you little about how the final product would sound. As a result, the process of creating an additive waveform could be very long-winded, unless you just went for the serendipitous approach of bunging in a load of harmonics with random levels, pressing Compute and hoping for a gem sooner or later.
But the amount of time taken by these early systems to create an additive waveform wasn't the only drawback. Because they were computed rather than generated in real time, these waveforms were set in stone when it came to playing them back. In fact, they were just like samples - indeed, the additive waveform the Fairlight created was loaded into the sample RAM for playback, in exactly the same way as a sample. Even when you used a merge facility to move from one additive waveshape to another, the result was still a fixed calculated product and the speed of the transition would increase as you went up the keyboard and decrease as you descended. When an additive capability was provided really cheaply by Digidesign's Turbosynth software for the Mac and Atari, the same restrictions applied. The resulting sound could only really be played effectively by MIDI sample-dumping it across to a sampler, with all the restrictions that implies. Of all the first generation of additive-capable systems, the sounds generated on the PPG Waveterm were probably the most useful, as they could be played back with real-time movement between different waveshapes in the wavetable (if you had the time to create several and then compute the transitions between them) or analogue filtering (if you didn't).
Oxford Synthesizer Company OSCar
The OSC OSCar had the capability to generate new waveforms using additive principles.
I'm proud to say that the first commercially available synth with the capability to alter additive waveforms in real time was British, and the present author had the honour (if not the financial reward, for there never was any) of being the midwife at the birth. The OSCar, which was mentioned in a previous instalment of this series for the flexibility of its filtering system, also had the capability to generate new waveforms using additive principles. What was unique at the time was that you could actually hear the harmonics being added or removed in real time. I vaguely remember saying to Chris Huggett, during the OSCar gestation period, that if he was going to put additive capability on the OSCar, it had better be more usable than on other machines I had tried. I had clearly been traumatised by my singular lack of success in coaxing something interesting in the additive vein out of Oxford University Music Department's Fairlight on my sole encounter with it, making a mockery of the lengths of bribery and corruption I had gone to in order to gain access to it.
Although the system Chris came up with for defining the mix of harmonics was perhaps a little unscientific (each key on the keyboard represented a harmonic, and pressing it repeatedly in additive waveform creation mode increased its proportion in the overall result), it was fairly intuitive and gave you real-time feedback. If you didn't like the immediate change in timbre when you added a new harmonic in, you could just take it out again, without all that tedious mucking about with computing. The actual process of building up a waveform was so pleasing to the ear (as harmonics came and went) that several artists used it unadorned as intros to tracks. You can hear a clear example of this on Jarre's Revolutions (perhaps the most OSCar-intensive album ever made, although Ultravox's Lament comes a close second and their 'Love's Great Adventure' takes the award for most OSCar-laden single).
Unfortunately, this real-time change in harmonics during waveform creation could not be reproduced during playback, but two waveforms created like this could be played back at once, and then mixed or filtered to create real-time timbral change. I always found the mixture of an additive waveform with a conventional analogue one to be most useful in imparting a little bite and unique character to the traditional analogue synth sound.
Other real-time additive implementations started to appear, mainly from the realms of academe, and they were usually lamentable both in terms of sound quality and of playability - not to mention the poor appearance and hygiene of the member of the design team who had been let out of the lab to do the demo at the trade show where they were previewing. Mercifully, very few of these systems made it to commercial release, but one of the few that did (and proved to be one of the more successful implementations) was the Technos Axcel shown above. Of French-Canadian origins, it had a splendid multi-LED touch-sensitive user interface which made it possible to draw harmonic levels, waveforms and envelopes with a single sweep of the hand. This made it terrifically easy to use but also horrendously expensive (probably the main factor in its short life - a little over a year of intermittent commercial availability).
It also had the capability to load a sample, analyse it and produce an approximation to it built up from sine waves. While this was not very close in terms of fidelity, it made a great starting point for new sound creation (another of additive's traditional drawbacks is the amount of time it takes you to set all the harmonic levels and envelopes to get an interesting sound going - this made for a great shortcut). However, the Axcel's main strength was that it could set the amplitude envelope separately for each harmonic (or vary the level from other controllers), so you could get really interesting timbral changes in a sound in real time, and in this respect it pointed the way forward. The Axcel's weakness was that the more harmonics you used (ie. the more complex the sound), the more polyphony suffered (the best sounds were monophonic or duophonic), and this, coupled with its high price, led to its early extinction.
The Land Of The Rising Synth
It fell almost inevitably to Japan to produce the first implementation of additive synthesis which was both real-time and affordable without sacrificing polyphony. The Kawai K5, when I first came across it in 1987, was a revelation, and its sound and facilities still stand up pretty well today. Offering 8-note polyphony (only the DX series had ever offered more at the time), it nevertheless managed up to 64 harmonics per note (128 if you used two notes per voice) and, most important of all, real-time control of the levels of various harmonic groupings. I fell in love with it for its speed and flexibility, and for the fact that I had always known that there must be something in this additive synthesis business - I just hadn't managed to find it until then. If you can find one of these wonderful machines on the second-hand market (it also came in rackmount form as the K5M), it's well worth the paltry sum you will probably have to pay to make it yours. It makes a fine introduction to additive synthesis and is only bettered by Kawai's current K5000 range.
Also worth looking out for is the Dr T's Atari program which took Akai S900 samples and analysed them, for subsequent downloading, via MIDI, into the K5 as additive impressions. First seen on the Axcel, this capability would never fool anything but the most untrained ear, but it made for excellent sounds and a great starting point for new sound development.
Let's look at how the K5 allowed the individual level of harmonics to be controlled in real time, as this synth is one of the best models for successful additive synthesis (and one Kawai have expanded on in the K5000 series). As I mentioned earlier, one of the drawbacks of additive synthesis can be how long it takes to make a sound, simply because of the sheer number of parameters that need to be set. There's the starting volume level of each harmonic, to begin with (in the earlier non-real-time systems, this was all you could do, because, having been computed, those levels then couldn't be changed). Just setting the level of each of 64 harmonics could take 20 minutes (more if you decided you didn't like the original level you had set). The K5 cut out a lot of the donkey work, by first showing you all harmonics at once, with a bar representing the level of each in the LCD display, and then allowing you to select groups of harmonics whose values could be adjusted simultaneously. These groupings include Odd, Even, Octaves (2, 4, 8, 16, and so on), 5th intervals (3, 6, 12, 24, and so on), or a user-definable Range specifying the lowest and highest harmonics you want to affect. Once these are selected, turning the increment dial raises or lowers the level of those harmonics in proportion. This may seem simple enough, but before the K5, no-one had streamlined the process to this extent. The Axcel's touch-sensitive interface made it quick to set the levels individually, but grouping harmonics was Kawai's innovation.
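The grouping idea is easy enough to mock up in a few lines of Python; the names and groupings below are my own shorthand, loosely in the spirit of the K5's front panel rather than a copy of its actual parameter set.

    def harmonic_group(name, count=64, low=1, high=64):
        """Return the harmonic numbers belonging to a named group."""
        if name == "odd":
            return [n for n in range(1, count + 1) if n % 2 == 1]
        if name == "even":
            return [n for n in range(1, count + 1) if n % 2 == 0]
        if name == "octaves":                                   # 2, 4, 8, 16...
            return [n for n in range(2, count + 1) if (n & (n - 1)) == 0]
        if name == "fifths":                                    # 3, 6, 12, 24...
            return [n for n in range(3, count + 1)
                    if n % 3 == 0 and ((n // 3) & (n // 3 - 1)) == 0]
        if name == "range":                                     # user-definable span
            return list(range(low, high + 1))
        raise ValueError(name)

    def adjust(levels, group, amount):
        """Raise or lower every harmonic in the group in proportion."""
        for n in group:
            levels[n - 1] = min(1.0, max(0.0, levels[n - 1] * (1.0 + amount)))
        return levels

    levels = [1.0 / n for n in range(1, 65)]                    # start from a sawtooth recipe
    levels = adjust(levels, harmonic_group("even"), -0.5)       # pull the even harmonics down

In spirit, that one adjust() call is what a single twist of the K5's increment dial achieves for a whole family of harmonics at once.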
Of course, this is only the beginning of making an additive sound on the K5. To actually change the harmonic levels over time, we return to our old friend the envelope. It would have been possible to make do with the traditional ADSR-type envelope, but Kawai opted for the more flexible rate/level style, with six stages, and the settings of these rates and levels are all visible at once, which saves flipping between screens all the time. Each harmonic (or group of harmonics) can be assigned to one of the four envelopes. There are even short-cuts for the programming of these envelopes, to speed up the process of setting them up. Higher-numbered envelopes can 'shadow' or take on the settings of the lower-numbered envelopes. So you can set up the first envelope and then tweak the higher-numbered ones using the settings of the first envelope as your starting point, rather than having to do each one from scratch.
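As a very rough sketch of how a rate/level envelope differs from a simple ADSR (my own toy code, not Kawai's actual parameter ranges or curves), each stage is a time and a target level, the envelope holds at the sustain stage while the key is down, and a second envelope can 'shadow' the first by copying its stages and changing only what differs:

    def rate_level_envelope(stages, sustain_stage, held_for, step=0.001):
        """Sample a multi-stage time/level envelope every step seconds.
        stages is a list of (seconds, target_level) pairs played in order;
        the envelope holds at the end of sustain_stage for held_for seconds
        (the key being held) before moving on. Toy code only."""
        out, level = [], 0.0
        for i, (seconds, target) in enumerate(stages):
            steps = max(1, int(seconds / step))
            for k in range(1, steps + 1):
                out.append(level + (target - level) * k / steps)   # linear segment
            level = target
            if i == sustain_stage:                                 # hold while the key is down
                out.extend([level] * int(held_for / step))
        return out

    # Six stages: attack, two decay steps, sustain, and a two-stage release.
    env1_stages = [(0.01, 1.0), (0.1, 0.7), (0.2, 0.5), (0.05, 0.5), (0.3, 0.2), (0.5, 0.0)]
    env2_stages = list(env1_stages)
    env2_stages[1] = (0.3, 0.8)                 # 'shadow' env1, tweaking only one stage
    env1 = rate_level_envelope(env1_stages, sustain_stage=3, held_for=1.0)
    env2 = rate_level_envelope(env2_stages, sustain_stage=3, held_for=1.0)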
In addition to the four harmonic level envelopes, there are three more: one for the overall level of the sound, one for its pitch, and one for the filter. To the additive purist, this last word is probably the equivalent of blasphemy, but Kawai realised that sometimes there is just no substitute for the sheer speed of using a filter. Having said that, the filter is a very accurate digital one, with a unique set of parameters to control it. In addition to the normal cutoff frequency, the point around which the filter operates, you can specify the 'flat level' (ie. the amount of signal that is passed below the cutoff frequency). By reducing this to zero you can achieve the same sort of result as a band-pass filter; with it set to maximum you get a normal filter response without resonance; and in between, the frequencies immediately around the cutoff are passed at a higher level (very similar to the effect of resonance). The final parameter, Slope, actually gives a degree of control over how steep the transition is between the cutoff frequency point and the flat level. This is equivalent to changing the number of poles in an analogue filter (ie. increasing the dB/octave cut), and at very steep settings gives a similar result to high resonance settings.
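If a picture helps, here's one loose way to model that behaviour as a static gain curve in Python. To be clear, this is my conceptual reading of the description above, not Kawai's actual filter mathematics: below the cutoff the gain heads towards the flat level, at the cutoff it reaches full level, and above the cutoff it rolls off at a steepness set by the slope.

    import numpy as np

    def k5ish_response(freqs, cutoff, flat_level, slope_db_per_oct):
        """Conceptual magnitude curve only (not Kawai's DSP). Below the
        cutoff the gain falls towards flat_level; above it, it rolls off
        at slope_db_per_oct; around the cutoff it passes at full level."""
        octaves = np.log2(freqs / cutoff)               # distance from cutoff in octaves
        gain = np.ones_like(freqs)
        below, above = octaves < 0, octaves > 0
        gain[below] = np.maximum(flat_level, 10 ** (slope_db_per_oct * octaves[below] / 20))
        gain[above] = 10 ** (-slope_db_per_oct * octaves[above] / 20)
        return gain

    freqs = np.logspace(np.log10(20), np.log10(20000), 200)
    lowpass  = k5ish_response(freqs, cutoff=1000, flat_level=1.0, slope_db_per_oct=12)  # plain LP
    resonant = k5ish_response(freqs, cutoff=1000, flat_level=0.3, slope_db_per_oct=24)  # bump at cutoff
    bandpass = k5ish_response(freqs, cutoff=1000, flat_level=0.0, slope_db_per_oct=24)  # band-pass-like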
Not content with that filter, Kawai also added a digital formant filter, which works on 11 bands set an octave apart. Although a full examination of formant filtering is really a subject for a future Synth School, the effect of this filter is very similar to that of a graphic equaliser, where the amount by which each octave of frequency range is boosted can be independently set. This dovetails very nicely with additive synthesis, because the harmonics are also related to the octaves above the fundamental - so you know, for example, that boosting the third octave will affect the harmonics centred around number 8 (if you play the lowest C). Kawai rounded off their real-time implementation of additive by making sure that it was not only the envelopes which could affect the harmonic levels, but also parameters such as keyboard scaling and velocity, giving additive an expressive feel for the first time ever.
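The octave-band-to-harmonic relationship mentioned there is just powers of two, which a couple of lines make obvious:

    # Octave band n above the fundamental is centred on harmonic 2**n,
    # which is why boosting the third octave band lifts the harmonics
    # clustered around number 8.
    for band in range(1, 6):
        print(f"octave band {band} -> harmonics around {2 ** band}")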
Adding It All Up
Reading this piece might leave you with the impression that you're actually in the middle of a review (or a eulogy) of Kawai synths. But to talk about additive synthesis without mentioning Kawai would be like covering FM without reference to Yamaha, or analogue synthesis without Bob Moog. Yes, other people made FM synths, and analogue synthesis existed before Bob came along, but the DX7 and the Minimoog produced the most cost-effective and manageable versions of those types of synthesis, and for me the K5 is in the same league. It took a previously interesting but unwieldy type of synthesis and made it available in a form that was quick and easy to use. Sadly, no other manufacturer has picked up this ball and run with it. For 10 years the K5 has really been the only additive synth to sell in any quantity, and only Kawai's recent re-investigation of the concept has saved additive synthesis from being consigned to the history books (see 'Current Additive Possibilities' box).
The great thing about additive synthesis is that, unlike many of the other methods we have covered in these Synth School pieces, it has not been done to death. It is perhaps one of the most flexible types of synthesis, and is particularly well suited to the creation of abstract sounds rather than imitative ones. In terms of its usage in commercial music, the surface of what additive can do has hardly been scratched, and now that there is a new generation of additive synthesis on the market, I'm optimistic that we may see a revival in its fortunes. If you're looking to add a bit of originality to your music, whatever the style, additive will enable you to depart from the fixed sounds of PCM and the well-trodden timbres of analogue. You certainly won't exhaust its potential in a hurry.
Current Additive Possibilities
Until late last year (that is, when Kawai re-entered the additive arena with a totally updated additive synth, the K5000W, which I had the pleasure of reviewing in January's SOS), it looked as though the Kawai K5 might be the final full stop in the history of additive. You should refer to the K5000W review if you want precise details of how Kawai have used the extra processing power available in the late '90s to update their concept, but let me just broadly cover how the additive synth's potential has been expanded by the K5000 series:
There are now individual envelopes for each of the 64 harmonics.
The formant filter has 128 bands (adjusted on a semitone rather than an octave basis) and can be swept using the LFO or envelope.
The more standard filters now allow high- or low-pass configuration.
Envelopes can now be looped to cycle complex harmonic changes.
It is now possible to morph between harmonic snapshots in real time (an updated version of the old Merge capability on the Fairlight, which was so interesting but unfulfilling because it was frozen into a sample format).
The last major addition is the ability to add DSP effects to the sounds, something no modern synth can afford to be without.
Since looking at the K5000W, which featured some other facilities in the auto-accompaniment vein which I felt were of peripheral interest to synth aficionados, my excitement at Kawai's development of the additive strain has further increased with the release of the K5000S and K5000R. These two units forgo auto-accompaniment in favour of the real added value of an arpeggiator and - joy of joys - hard-wired and assignable knobs for real-time manual performance-parameter control. Having a dedicated knob to tweak the balance between Odd and Even Harmonics, adjust the Low and High Harmonics, move the Bias (centre frequency) and LFO speed and amount of the Formant Filter, not to mention the Cutoff and Resonance of the standard Filter and the main envelope parameters, is, for me, the icing on the cake that Kawai have been baking for ten years now. Those of a more scientific frame of mind will appreciate the ability to assign the four user knobs to the parameters of their choice, and additive sounds are ideal for triggering from an arpeggiator.

Synth School: Part 5
The Origins Of Sample & Synthesis (S&S)
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published December 1997
Roland D50 multitimbral S&S synthesizer.
At the January NAMM show in 1987, Roland launched their D50, which mixed synthesis and sampled sounds in one package, a combination which has remained popular to the current day. Paul Wiffen examines how S&S evolved into the most widespread form of sound generation on the market. This is the fifth article in a 12-part series.
Until Roland launched the D50, sampling and synthesis had been perceived as two wholly different disciplines, almost like competitive ways of doing the same thing. Some people favoured sampling because it gave you a more accurate representation of actual instruments (the holy grail of piano, strings and brass, for example), while others stuck to the various styles of synthesis because they offered greater expressivity and speed of use. There had been massive improvements in the sampling arena in the preceding few years. It was no longer just the province of rich stars and well-paid programmers. The Ensoniq Mirage had solved the expense problem, the Prophet 2000 had made state-of-the-art fidelity affordable (12-bit linear as opposed to 8-bit companded) and shortly thereafter the Akai S900 had made fidelity relatively easy to use as well (thereby drastically reducing my income, as I had been making a nice living out of operating first Emu and then Sequential samplers for people who found them difficult to use!). So sampler ownership was reaching a much wider market than in the early '80s.
Sampling's principal remaining drawback in the late '80s was the amount of time it took to load sounds. As a result, the majority of people playing live, and those who were frightened by the idea of using computer technology (RAM, floppy disks and hard drives), were still using the various competing forms of synthesis we have examined in previous instalments of this series, because even if less sonic authenticity was available from these forms of synthesis, they responded better to velocity and aftertouch and (most importantly, I suspect) you could switch sounds instantaneously. The great debate raged between the two opposing schools of thought, often with things getting a bit personal. The great irony was that the whole situation was about to be resolved, by these two supposedly conflicting technologies being merged together (a bit like this year's shock announcement that Bill Gates was putting money into Apple).
Before The D50
It has to be said that sampling and analogue synthesis were not existing in glorious isolation anyway; as early as the PPG Waveterm it had been possible to make a sample and then play it back on the Wave synth through analogue filters. The Emulator II added analogue filtering and enveloping to sampling technology, and this was carried over into more affordable samplers such as the Mirage, Prophet 2000 and Akai S900 (although many people never used the facility). And even on synthesizers there had been the odd attempt to increase the fidelity of certain sounds by using small PCM samples loaded into ROM. (This was how the Ensoniq ESQ1 provided its drum sounds.) But all these half measures meant that synthesis and sampling were seen as mutually exclusive fields - until the D50 came along.
The D50 used a much larger amount of PCM ROM (separate from that holding the operating system of the synth) to store a significant number of samples, allowing the expressive performance of a wide range of sounds previously only possible with any fidelity on a sampler. Although the D50 itself didn't have a sequencer, this approach paved the way for a new breed of instruments known as 'workstations', which were designed to perform a wide range of musical tasks - for example, playing drum, bass, piano and string parts simultaneously using an internal sequencer. These sounds were the ones which were the most difficult to make with analogue or digital synthesis, the ones which had previously only been possible by loading a disk or two's worth of data into a sampler. Although Ensoniq had already released the first instrument worthy of the workstation title, it was Korg who had the breakthrough success with the M1, not because its sequencer was notably easy to use, but because of the sheer size of its palette of sounds. The reason was that the samples in the M1 were larger than those in the D50, in the same way that the D50's samples had been larger than the percussion snippets in the ESQ1.
How Did They Do That?
It's still well worth looking at how the D50 generated sounds, because in the course of taking a few enforced shortcuts (dictated by the budget they were working to), the Roland engineers came up with some techniques which changed synthesis forever.
The D50 was actually a hybrid of three previously distinct technologies:
Analogue (or subtractive) synthesis
Digital sampling
Digital signal processing (DSP) for effects
These core technologies met in the D50, perhaps not for the first time, but certainly in the most affordable and usable way. Although each on its own would not have been enough to make a viable instrument (the samples were too short, the synthesis too restricted and the effects too primitive), the combination of the three made an instrument people couldn't wait to get their hands on. And although its imitative capabilities have long since been surpassed, as a synthesizer it still has much to recommend it today.
Let's first analyse the strengths and weaknesses of the D50's three component technologies:
Analogue synthesis took electronically generated waveforms and used filtering to shape the harmonic content of the sound over time. While such a process was excellent for creating rich and interesting sound timbres, its imitative capabilities were limited, especially for inexperienced users. In addition, polyphony was usually limited, due to the need for discrete circuitry for each voice.
The digital sampler used the technique of digitising sound and storing it in computer memory to allow real instruments to be recorded and played back from the keyboard. This provided an instant realism unavailable from traditional synthesizers, but with a loss of expressivity (the only nuances available being those 'frozen' in the recording). The amount of sample recording time available was very limited, due to the high cost of computer memory. In addition, instruments had to be sampled every few notes along the keyboard for authentic reproduction, which used up the available memory even more quickly. Looping (repeating an unvarying section of the recorded sound for as long as the key was held down) helped reduce memory usage enormously. Techniques such as fading or switching between samples depending on how hard the key was hit increased expressivity, as did the introduction of analogue components such as filters and envelopes, to allow the timbre of the recording to be changed by playing style.
Digital signal processing, the third component, had reached the stage where a single DSP chip could be programmed to imitate many different analogue effects, such as chorus, flanging, reverb and echo, and even combine two or more of these effects simultaneously.
Roland used the term Linear Arithmetic Synthesis (or LA Synthesis, for short) to promote this combination of technologies, although the irony is that the term gives very little clue as to how the D50 works, based, as it is, on one often-overlooked part of the process which determines how two waveforms are combined. The options are to sum the waveforms together (like two oscillators in analogue synthesis) or multiply them (as in ring modulation or various digital synthesis techniques). It will become clear that this part of the process played little or no part in the D50's phenomenal success. It seems likely that the term was coined more for its two-letter acronym value (like FM, PD, and so on).
So what were the key components which gave LA synthesis its appeal?
The Sampled Attack
What let analogue synthesizers down more than anything in terms of imitation was that they could not create the extremely complex sets of harmonics present at the beginning of most acoustically produced sounds. The first few milliseconds of sound, when a piano hammer strikes a string or a bow begins to move on a cello, have a huge harmonic content not available in traditional electronic waveforms. If we do not hear these short-lived frequencies, we perceive the sound as lacking in authenticity.
The Roland engineers realised that if they could use a digital recording to produce the initial attack, this would go a long way towards creating realistic instrument sounds. In addition, very little computer memory would be needed to store these very short 'attacks'. Pulse Code Modulation was used to record the attacks, and they were stored on Read Only Memory (ROM) chips, which did not lose their contents when the power was shut off, unlike the Random Access Memory (RAM) chips used in samplers. This did away with the need for floppy disk drives (although a memory card could be used to make further PCM samples available).
The Sustain Loop
The Roland engineers turned to waveforms that had more in common with analogue synths in order to produce the sustained portion of the sound. The looped portions of samples often sounded very similar to traditional synthesizer timbres. They were known as single-cycle loops, as they contained only one of the repeating patterns which make up the timbre of an electronic oscillator. The small amount of data contained in such loops meant that they took up very little room in the ROM chips, which meant that notes could be held indefinitely without using up valuable RAM space.
Partials
Obviously, the fact that different parts of the sound were being created by different PCM waveforms meant that it was necessary to control these sources separately. The solution that was devised to allow this was called the Partial. There were four Partials available for each sound program in the D50. Each Partial could be loaded with a PCM waveform and then combined with its fellows. Each Partial used one voice of the D50's polyphony, so it was often better not to use all four Partials unless really necessary (more complex sounds would result in less polyphony). Most of the more realistic sounds used at least two Partials (one for the sampled attack and one for the loop section). However, some of the sounds had four Partials, using two pairs of two to create the effect of two sounds layered together (piano and strings, for example). Perhaps the sounds which characterised the D50 the best were those which used all four Partials independently, to create complex evolving timbres. These did not require much playing, but simply sustaining a note or chord while different elements faded in and out. Such sounds were almost unknown to the average musician before the D50, being only possible on professional systems such as the Synclavier or PPG.
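As a toy illustration of the attack-plus-loop idea (my own sketch in Python, not Roland's actual voice architecture), imagine one Partial holding a short noisy 'strike' and another holding a single-cycle loop; splicing them with a short crossfade already gets you most of the way to the trick the D50 was pulling.

    import numpy as np

    RATE = 32000                                    # sample rate of our toy voice

    def attack_plus_loop(attack, single_cycle, note_seconds):
        """Splice a short PCM 'attack' onto an endlessly repeated single-cycle
        loop, crossfading over one cycle so the join is not heard. Toy code."""
        loop_len = int(note_seconds * RATE) - len(attack)
        reps = int(np.ceil(loop_len / len(single_cycle))) + 1
        loop = np.tile(single_cycle, reps)[:loop_len + len(single_cycle)]
        out = np.concatenate([attack, loop[len(single_cycle):]])
        fade = np.linspace(1.0, 0.0, len(single_cycle))
        out[len(attack) - len(single_cycle):len(attack)] = (
            attack[-len(single_cycle):] * fade + loop[:len(single_cycle)] * (1.0 - fade))
        return out

    # A noisy 30ms 'hammer strike' and a mellow single-cycle loop at 200Hz.
    rng = np.random.default_rng(0)
    strike_len = int(0.03 * RATE)
    attack = rng.normal(0, 0.3, strike_len) * np.linspace(1.0, 0.2, strike_len)
    cycle_len = RATE // 200                         # exactly one cycle of 200Hz
    cycle_t = np.arange(cycle_len) / RATE
    single_cycle = np.sin(2 * np.pi * 200 * cycle_t) + 0.3 * np.sin(2 * np.pi * 600 * cycle_t)
    voice = attack_plus_loop(attack, single_cycle, note_seconds=1.0)

On the real thing, of course, each Partial also has its own envelopes and filtering to shape it, and the built-in effects do much of the work of gluing the two halves together, as described under 'Blending Partials' below.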
Add Or Multiply?
Although I've played down the importance of the way the Partials were combined in the success of LA synthesis, it is actually one of the things which makes the LA process of interest as a synthesis type today, now that it has been superseded by systems with more memory. The standard way of combining the different elements of sound since the beginning of synthesis was simply to mix them <20> ie. sum them in a linear fashion. This was just one way of combining the sounds on the D50 (admittedly the one used by the majority of its sounds). The other was to multiply the waveforms together, which tends to create metallic or bell-like timbres, as the normal harmonic series is supplemented by less usual frequencies. This is where (for me at least) the really interesting sounds from the D50 were created. Although the success of the instrument can be attributed more to the new level of authenticity it brought to conventional keyboard sounds than any revolutionary new timbres it created, it is in this latter area that purchasers of second-hand D50s today may want to look for its unique character.
The combination of the four Partials on the D50 was determined by Algorithms. Although there were nowhere near as many as offered by 6-operator FM, they were illustrated on the front panel of the instrument, just like on the DX7 (see picture above). These algorithms allowed you to determine which sounds were added together and which multiplied. You could, for example, multiply two pairs of Partials and then add the results, or multiply one pair and add the result to two other Partials. However, this feature of the machine was perhaps the least exploited, with 95% of sounds created by simply adding together whichever Partials were selected.
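You can hear the difference between the two combination modes with a couple of lines of Python; this is a generic illustration of mixing versus ring-modulation-style multiplication, not the D50's exact internal arithmetic.

    import numpy as np

    rate = 44100
    t = np.arange(rate) / rate                      # one second of time
    a = np.sin(2 * np.pi * 440 * t)                 # Partial 1
    b = np.sin(2 * np.pi * 587 * t)                 # Partial 2, roughly a fourth above

    mixed = (a + b) / 2                             # linear sum: both pitches, a plain mix
    ring = a * b                                    # multiplication: sum and difference tones

    # Multiplying two sines puts the energy at 587-440 = 147Hz and 587+440 = 1027Hz,
    # neither of which belongs to the harmonic series of either note; hence the
    # clangorous, bell-like character.
    spectrum = np.abs(np.fft.rfft(ring))
    strongest = np.sort(np.argsort(spectrum)[-2:])  # the two biggest bins (1Hz resolution)
    print(strongest)                                # [ 147 1027]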
Enveloping
The combination of sampled attack and single-cycle loops meant that realistic strings, brass and other sustained sounds could be easily created. However, pianos and other timbres produced by striking do not sustain forever, but die away gradually. A single-cycle loop, therefore, has to be 'faded out' to simulate this effect. The more sophisticated samplers had already borrowed envelopes from analogue synthesis to deal with this, and the D-series engineers followed suit. Because the D-series used sampled attacks along with single-cycle loops, a more complex envelope system than the conventional ADSR was employed, with six independently adjustable times and levels. In this the D50, once again, had more in common with the DX7 than conventional analogue synthesizers.
Because the D50 featured analogue-style filtering (albeit digitally controlled, hence the term DCF), these envelopes could be used to adjust not just the volume of the loop over time but also the harmonic content. This was absolutely critical to the realism of LA synthesis, as you might not want the looped part of the sound to retain its full harmonic content throughout a decay. Without the ability to vary harmonic content, any decaying sounds, from pianos and acoustic guitars through to tuned percussion, would have had little realism.
Blending Partials
Even very careful adjustment of the basic envelopes could not prevent the attack and looped segments sounding like two different sounds being triggered at the same time. What was needed was something to 'fuse' the sounds together so that they became one instead of two distinct timbres. To achieve this, the Roland engineers used DSP effects such as reverb and chorusing to blend the parts together. Reverb tends to make the point where a sound ends difficult to perceive, so this was perfect for hiding the fact that the sampled attack had suddenly stopped and the looped portion was the only part remaining. Chorus was also good for adding some timbral movement to the single-cycle loops, which can be a little static to the ear.
Because the sounds which needed this 'smearing' really did need it, in order to be usable at all, the Roland engineers made the effects setting part of the basic patch, so that it was always automatically selected with the program, becoming one with the sound and therefore a fundamental component of the synthesizer for the first time. Previous synths had often featured a chorus unit, but usually as a separate item (it was not usually tied in permanently and selected simultaneously with the timbre, except in the case of the Roland Jupiters and Elka Synthex). Although the inclusion of full-scale DSP effects may originally have been decided upon to mask problems with this style of sound creation, it changed the face of synthesis forever. Today it is virtually impossible to sell anything but the most basic of monosynths without a built-in effects capability. This is because once a programmable DSP effect section has been added to a synth, there is no reason not to make available as many different effect algorithms as possible.
The Wider Effect
Once the DSP chip was inside the unit, there was no reason to limit its use to disguising the shortcomings of the D50's synthesis system. Reverb and chorus had a very pleasing cosmetic effect on any sound and were used on virtually every patch. Effects like distortion and ring modulation could take the most bland source waveforms and turn them into complex, expressive sounds. As a result, the D50 caused an even bigger change in the world of synthesis than the processing of sampled sounds through analogue-style synthesis. With the exception of a few professionals, who would pointedly ask to hear synths on demo with the effects bypassed, the majority of purchasers simply accepted that this was an improvement in the final sound they could obtain from a synth, without the need to hook up expensive external effects. Soon built-in effects were the norm, not the exception, for synthesizers.
The Korg M1
The next refinement of the PCM-based synthesizer was the Korg M1, which burst upon the world one year later at the 1988 NAMM show in Los Angeles. The price of memory had come down since the launch of the D-series, allowing Korg to increase the amount of memory within their new instrument. The major advantage of this was that instead of having to split samples into attack portions and single-cycle loops, they could use samples which moved naturally from the attack into a longer looped section, in exactly the same way as in a sampler. This meant that it was no longer necessary to disguise the join between the attack and the loop, because there no longer was one.
Another advance the M1 made was that only one voice of polyphony was required to play back each entire sound. This meant that polyphony did not vary from one sound to the next quite as dramatically as on the D50 and, at the same time, the sounds were not so reliant on the built-in effects to make them sound natural. Sound-stacking could be used to make very complex timbres, rather than being necessary just to create authentic simpler ones.
Of course, this did not mean that all the M1's sounds were perfect reproductions of the instruments from which they had been sampled. With hindsight, a lot of the original sounds in the M1 used perhaps too short a segment of attack sound, and the loops came too early for authentic reproduction of timbres like pianos, guitars and other sounds which die away gradually (although anything which could sustain indefinitely, like strings and brass, was extremely authentic). As a result, the M1 produced a 'compressed' sound which became very popular in certain styles of dance music. The M1 Piano, in particular, became a staple of house remixes because it was artificially bright and 'in your face', and the organ sound on it was a similar staple for garage music. Whilst you would rarely use an M1 piano today for a 'straight' piano sound, at the time the M1 brought an unprecedented level of authenticity to sample-based synths.
The other thing which the M1 offered, over and above the authenticity of its sounds, was multitimbrality (the ability to play numerous different timbres at the same time). This could be done from the on-board sequencer or via MIDI, assigning each timbre to a different channel for triggering. Multitimbrality wasn't new; Sequential Circuits had introduced it at the end of 1983 in the Six-Trak and Ensoniq had made it a major feature of all their products since the ESQ1 in 1985. However, the M1 was the first instrument with a good range of really authentic sounds to offer this facility. As such, it was perhaps the first synth whose on-board demos sounded like complete pieces of music, because it had everything from authentic drums and basses, to piano, strings and brass, guitars and synthesizer sounds, all in one unit.
Many previous keyboards had featured sequencers, but the usefulness of these was limited by the number of timbres they could produce simultaneously, or by the limited range of the synthesis type they featured. The full PCM multitimbrality of the M1 meant that the sequencer became much more than a sketch-pad or demo facility. It was a compositional tool which was hooked directly to the sounds. This meant that people who had no computer sequencing facilities or knowledge of MIDI could sit down and play something, record it, and overdub more tracks, with different sounds on each one. Whilst those who had mastered MIDI and computer sequencing would find nothing remarkable about this, it was a real revelation to those who had never experienced the power of MIDI sequencing.
The term 'workstation' was borrowed from the computer industry to market this concept, as well as a floppy disk drive as standard, so that M1 sounds and sequences could be recorded, saved and loaded back into the machine. As a result of all these features, Korg found themselves with the best-selling synthesizer of 1988/9.
The DSP Effect
The only area which caused a bit of difficulty for M1 users (and still does today, as one in every 10 calls to the Korg technical helpline still bears out) was the allocation of effects in multitimbral mode. Like the D50, the M1 features built-in DSP for a wide variety of effects, which are also memorised and selected by the individual sound Programs. So whilst using a single timbre, it is possible to obtain sounds which are dramatically altered by the DSP. However, when Combi mode (which allows multiple timbres to be available simultaneously) is selected, the chances are that these sounds will suddenly become very flat and uninteresting. This is because the DSP circuitry in the M1 can only produce one effect setup at a time. Unless the Combi setup has exactly the same effects and routing setup as the individual Program, there's bound to be a noticeable difference to a Program when it's selected as just one of the sounds in Combi mode. The degree of difference is determined by two factors.
The first of these is how close the settings of the individual Program are to those selected in the Combi. If reverb is selected for both, and only the amount of early reflection or size of space has been changed, the difference will be subtle and not noticed by many novice users. If the Combi and Program settings have different effects selected, such as chorus and echo, the difference will be far more noticeable even to the most untrained ear.
However, the second factor is usually what alerts the novice user to the problem. If, in Combi mode, the user selects a Program where a complex DSP effect is actually creating the timbre from a very simple source (such as a distorted guitar sound), suddenly all the character of the distorted guitar or synth disappears and is replaced by a thin plucked sound. This often happens after three or four backing tracks have already been recorded and the user wants a really exciting lead instrument to play over the top. The general reverb used for the other sounds just does not work as a substitute for the distortion the guitar Program uses. The change in some timbres is so radical that some users at the time even contacted their dealers to report that their synth was not working properly. Many dealers received M1s back for repair, only to find that there was nothing wrong with them, except for an inability to faithfully reproduce several different Programs simultaneously in Combi mode. This is because the M1 does not have multiple effects processors (as do most other multitimbral workstations). M1 users have to try and make Combi setups for multitimbral sequencing which can share the same effects setup. This may mean backing off the reverb on that string sound, so that the bassline doesn't disappear in woolly mush, or leaving the chorus off the guitar, because it makes the piano sound like a honky-tonk.
The best way to deal with this situation is to try and plan which sounds you want to use simultaneously in advance. It is usually obvious which sounds really need the effects to retain their inherent character and which ones only use the DSP for a little sugar-coating. Then you need to reach a compromise between the amount of effect needed to give one sound its character and the amount which will not render the others unrecognisable.
Ideally, each M1 sound would have effect 'send' amounts, but this capability was not introduced by Korg until much later, on workstations such as the X-series. The result was not the same as if separate effects were available on each part individually, but it did give the best compromise available. This effects limitation was only finally resolved last year on the Trinity, which does have enough DSP horsepower to allocate 'insert' effects separately to each part during multitimbral operation, and also to offer an overall master effect, such as reverb, which is available to every part in amounts determined by effect 'sends'.
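For readers who find such routing easier to follow as code, here's a rough sketch in Python of the send/master idea. The part names, signal values and send amounts are my own inventions, and the 'reverb' is just a stand-in that scales the signal, not anything resembling Korg's actual DSP.

    # One shared master effect fed by per-part 'send' amounts (illustrative only).
    def master_reverb(signal):
        # Stand-in for a real DSP reverb: it simply returns a scaled 'wet' copy.
        return [s * 0.5 for s in signal]

    parts = {
        "strings": {"dry": [0.9, 0.8, 0.7], "send": 0.6},  # pad keeps a generous send
        "bass":    {"dry": [1.0, 1.0, 1.0], "send": 0.0},  # bassline send backed off to zero
    }

    # Every part shares the single effect; only the send amount differs.
    wet_bus = [0.0, 0.0, 0.0]
    for part in parts.values():
        for i, sample in enumerate(part["dry"]):
            wet_bus[i] += sample * part["send"]

    wet = master_reverb(wet_bus)
    mix = [sum(p["dry"][i] for p in parts.values()) + wet[i] for i in range(3)]
    print(mix)

The point is simply that the bass stays dry without robbing the strings of their reverb, which is exactly the compromise the M1's single effect setup could not offer.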
The Rest Of The D-Series
As with the Yamaha DX7, the previous universally successful synthesizer, the D50 spawned a whole series of descendants, both larger and smaller, from the flagship D70 to the D5 and D10 at the lower end of the market. None of the D50's descendants really added anything to the basic principle of LA synthesis, nor to the basic architecture of their antecedent. The D70 was the only machine which expanded at all on the spec of the D50, and that was in practical areas like the number of keys on the keyboard, and controller functions. So the D50 remains the definitive example of LA synthesis.
LA Synthesis In The '90s
While the fidelity and sonic quality of LA synthesis have long since been surpassed, the role of the D50 need not be a purely historical one. Unlike the DX7, which can be replaced by more recent FM synths from Yamaha and others, the D50 can still make certain sounds which no other single synth on the market can emulate, and many of its trademark sounds still have a place in modern music.
One of the things for which the D50 became famous, and eventually infamous, was its looped sounds featuring a rhythmic element and shifting harmonic overlays, previously seen only on the Prophet VS and PPGs. These are now a staple of most synthesizers, and for this we really do have to thank the D50. The most famous D50 sound of this type was 'Digital Native Dance', a slowly evolving combination of synth timbres and a percussion loop. Although this particular sound was done to death as an intro on recordings by many artists in 1987, including the great Wacko himself, the other complex looped/ambient sounds on the D50 still have a certain charm. You can even create your own combinations by mixing and matching synth timbres with percussion loops.
Interestingly, these PCM loops pre-date the use of sampled drum loops, now omnipresent in most modern recordings, but they have the same drawback as sample loops in that their tempo cannot be changed without re-pitching the loop, and they therefore cannot be synchronised to other instruments within the track (hence their use as intros and ambient backgrounds). The ReCycle approach developed by Steinberg cannot be used, as there is no way to download PCM samples into the computer, and even if there were, the results cannot be loaded back in nor sync'ed to MIDI Clock. However, if you're fortunate enough to find a pitch at which the loop fits harmonically and rhythmically with your track, re-triggering at the beginning of each bar (or every few bars) can be quite effective. If not, you may need to use the loop as a starting point or inspiration for a track and then fade it out when the other elements kick in. It must be said, though, that there are better systems available now for creating interesting synthesized loops in your music.
Ironically, it is in the setup of the often-overlooked Linear/Arithmetic algorithms that the biggest potential for creating unique sounds on the D50 remains. Avoiding the linear summing of partials and opting instead for arithmetic combining brings you into the sort of territory you could otherwise only explore with ring modulation or FM, to which the LA process is related. It's the same procedure of building complex sounds very quickly by multiplying simpler waveforms together, but what's unique about the D50 is that the source samples are no longer just sine waves (as in the case of Yamaha FM) or other analogue waveforms (as in basic ring modulation) but complex, sample-derived timbres which already have many harmonic characteristics before you combine them. Multiplying such sources together can give unpredictable but fascinating results: occasionally beautiful, often angular, and even ugly, but never dull. Even timbres which are unpleasant when dry acquire an interesting character when processed through effects, so don't write off even the ugliest sounds until you've smoothed them out with some chorus or reverb.
I find it particularly interesting to try this with one of the aforementioned percussive loops on one Partial and a sustain loop on the other, as this imparts a crunchy rhythmic feel to a sustained timbre. Persistent experimentation is the key here, as the result will not always be usable on first combination (unless you're at the most industrial end of the techno movement, in which case the first thing you try will probably fit right in).
One tip, though: steer clear of sampled attacks for things like this. If both Partials you're combining are just attacks, any nuances of arithmetic combination will probably not have time to come through and if only one of the Partials uses a sampled attack, the other will sound even more dull and lifeless once the attack is over. Pair arithmetically combined sounds of similar duration, and then add attacks or longer sustains linearly, for a more even result.
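If it helps to hear (or at least see) what the arithmetic combining described above actually does to a signal, the following Python sketch multiplies two waveforms sample by sample, which is the essence of ring modulation. The sine-wave sources and figures are my own stand-ins, not the D50's sample-derived Partials.

    import math

    SAMPLE_RATE = 44100

    def sine(freq, length):
        return [math.sin(2 * math.pi * freq * n / SAMPLE_RATE) for n in range(length)]

    # Stand-ins for two source timbres; on the D50 these would be complex samples.
    source_a = sine(440.0, 1024)
    source_b = sine(660.0, 1024)

    # Multiplying sample by sample produces sum and difference frequencies
    # (here 1100Hz and 220Hz) rather than either original pitch.
    combined = [a * b for a, b in zip(source_a, source_b)]
    print(combined[:8])

With complex, sample-derived sources in place of the sines, every harmonic in one waveform produces sum and difference products with every harmonic in the other, which is why the results are so unpredictable.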

View File

@@ -0,0 +1,110 @@
Synth School: Part 6
Building On PCM: The Next Generations
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published February 1998
Ensoniq's VFX-SD workstation.
The ball of S&S synthesis had been thrown, and most of the big names in synthesis caught it and ran with it, scoring some notable goals in the process. Paul Wiffen continues his chronicle of modern synthesis with a look at the state of play from the late '80s to the present day. This is the sixth article in a 12-part series.
When we left the PCM-based synth story in the last instalment of this series, Korg's M1 workstation had superseded the Roland D50 and was in a dominant market position. But technology marches on, and there were many new developments in sample-based synthesis and hybrid systems still to come, not only from Ensoniq and Yamaha (amongst others), but also from Korg themselves.
An American Tale
Sampling and FM synthesis technology in one box: Yamaha's SY99.
The next company to develop PCM-based synthesis was the American manufacturer Ensoniq. In fairness to Ensoniq, they were actually the first company to put sample-based waveforms in synthesizers. Back in 1985, the ESQ1 had a few small PCM samples built in to allow drums and strings to sound more authentic. In fact, these samples, combined with a built-in sequencer, made the ESQ1 a candidate for the title of first workstation synth.
However, the VFX was really the first machine from Ensoniq which could be compared with the Korg M1, in that it had quality samples and effects, multi-stage envelopes and multitimbrality. Introduced in late 1988, the VFX lacked only one thing to qualify it as a workstation: a sequencer. This was added in the VFX-SD the next year. As the suffix implies, Ensoniq added not only a Sequencer but a floppy Disk drive to this model, to enable saving and loading of sequences and programs.
The VFX architecture is well worth examining, because for the first time it made a PCM-based synth as easy to set up for live use as the split/layer keyboards of the late '70s and early '80s. Combining M1 Programs into Combis had always been a bit of an effort, and certainly not the sort of thing you tackled on stage halfway through a gig. For the VFX, though, a system was developed which made the process easy enough to contemplate in front of an audience.
Live Or Programmed?
Emu's Proteus 1.
Ensoniq achieved the task by adding a different section to the VFX, from which combinations of sounds could be played live. This was separate from the setup used for multitimbral access by internal or external sequencer. The arrangement made perfect sense, because the parameters you need to change quickly when layering two or three sounds together live are very different to the parameters you might need to adjust during the playback of a multitimbral sequence. The VFX allowed the user to quickly select three different patches (the second and third by double-clicking) and then see and adjust their respective volumes, pan, keyboard range, effect amounts and other important parameters related to live presentation. This meant that VFX users could very quickly assemble a complex split/layer setup, with maybe two sounds under the right hand and a bass sound under the left, balance them, and position them in the stereo mix, without the minutes of parameter adjustment that would have been required to do the same with an M1 Combi, for example.
As a result, a whole new breed of players was encouraged to start programming, because the surface layer of the VFX gave easy access to the combining of programs into performances without the need to develop an in-depth knowledge of how the machine worked. Whether any of them were encouraged by this to delve deeper into the machine's architecture is open to debate, as the story of synthesis seems to be one of more and more user-accessible parameters being accessed by fewer and fewer users. It seems to have taken the return to popularity of simple analogue synths to encourage people back into knob-tweaking for themselves.
Inside The VFX
Korg's T2 workstation.
Those intrepid users who did venture into the structure of the individual VFX patch were rewarded with a voice structure laden with possibilities. The VFX's voice architecture was actually more reminiscent of that of the D50 than that of the M1 (which had tended to sound OK with one or two base samples, so Korg's engineers hadn't needed to develop so complex a voice architecture). The VFX, however, now allowed up to six voice components instead of the four components (Partials) of the Roland machine. Not all of these had to be used, and many sounds used only one or two source waveforms, especially as the VFX did not split individual sounds into attack and loop segments. Using all six components meant that you could create some of the most complex, evolving sounds ever possible on a synthesizer, especially as the multi-stage envelopes available on the VFX could be used to control the level of each component individually. As a result, the VFX could produce sounds of such complexity that they made the D50 sound like an old two-oscillator synth. If three patches were layered together, up to 18 oscillators could be triggered from a single key. However, that many oscillators piled together can be rather overpowering in everyday sounds (not to mention the fact that they exhausted the synth's polyphony very quickly).
Yamaha's SY99 took things a stage further, by adding a disk drive and the ability to load samples into RAM. This meant that users could actually take their own samples and combine them with FM sounds.
Within each component of a sound the possibilities were even more complex. In another development reminiscent of the D50, a series of waveforms could be strung together to create a loop, and these were no longer fixed by the manufacturer as on the D50; users could now specify the starting and finishing source waveforms for their loops. They were, however, restricted to using them in an order defined by Ensoniq, so the best results still came from the serendipitous sequence of waveforms at the design stage. However, it was possible to add or take away waveforms from the beginning or the end of a sequence, or even to move to another part of the waveform ROM completely, giving a lot more control to the user than the D50's fixed loops did. Playback rate was, of course, still fixed, but at least the rhythmic patterns created could be changed to some considerable extent.
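A crude way to picture this is as a fixed list of waveforms from which the user chooses a start and an end point. The Python sketch below (with waveform names I've made up, nothing to do with Ensoniq's actual ROM) simply cycles round whichever slice of the list has been selected, at a fixed rate.

    # The 'ROM order' of waveforms is fixed; only the slice the loop uses is user-defined.
    wave_rom = ["breath", "pluck", "bell", "vox", "metal", "noise", "organ"]

    def build_loop(start, end):
        return wave_rom[start:end + 1]   # playback order always follows the ROM order

    loop = build_loop(2, 5)              # 'bell' through 'noise'
    for step in range(10):               # cycle round the chosen slice at a fixed rate
        print(loop[step % len(loop)])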
As Ensoniq had also taken multi-stage envelopes further than before, with multiple rates, levels and loop points, the potential for creating sounds of unprecedented complexity was great (as were the chances of getting completely confused and giving up). However, in the hands of Ensoniq's creative team of developers astounding results were achieved, some almost qualifying as pieces of music in their own right. A selection of breathtaking programs shipped with the VFX, some still evolving and bringing in new components a couple of minutes after being triggered. This only led to one problem: how should you use them in a track? It was back to the old story of intros and quiet middle sections (where the synthesist in a band has too often been banished before). Problems with sync'ing rhythmic elements to the tempo of the song still existed, so keyboard players stuck to the basic pads and imitative instruments unless they were doing film, TV or ambient music where such restraints are less common. It would be left to Korg, a few years later, to solve this problem and allow complex changing timbres with rhythmic elements to be sync'ed to the tempo of the song.
Performance Controls
To make the VFX's six available components within the voice more versatile for performance (and to prevent enthusiastic programmers exhausting the polyphony too quickly), Ensoniq used an expanded set of real-time controllers to bring different voices in and out of play. So, instead of combining together voices with radically different elements, it was possible to group together voices which were very similar but with slight variations. Additional switches provided the means to mute and un-mute different components. For example, a flute patch might have a straight flute sound as one of its components, one with extra breath noise as its second, a third with a 'flutter-tongue' effect, and so on. Switching these in and out in different combinations meant that the user could circumvent one of the biggest problems with sample-based synthesizers, the fact that the source PCM waveform cannot be altered in real-time.
This was an important development in PCM-based synthesizers. It has been taken on by many other manufacturers, whether in keyboards which feature expanded performance controllers, like Korg's most recent PCM-based machine, the Trinity, or Emu's range of modules featuring extensive modulation routings, which allow the user to make the most of the standard MIDI Continuous Controller inputs. A raw PCM sample can only ever be a 'snapshot' of an instrument at one moment in time, played in one way. Only by mixing between different snapshots in real time can any sense of the motion and change that is part of the nature of real instruments be conveyed by a PCM-based synth.
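To make the idea concrete, here's a minimal Python sketch of crossfading between two static 'snapshots' under a controller such as the mod wheel. The flute buffers and values are dummies of my own, not anything taken from a real instrument's ROM.

    # Mixing between two static PCM snapshots under a 0-127 controller value.
    plain_flute   = [0.2, 0.4, 0.6, 0.4]   # dummy sample buffers
    breathy_flute = [0.5, 0.5, 0.5, 0.5]

    def morph(cc_value):
        mix = cc_value / 127.0              # 0 = plain, 127 = fully breathy
        return [(1.0 - mix) * p + mix * b for p, b in zip(plain_flute, breathy_flute)]

    print(morph(0))     # plain snapshot only
    print(morph(64))    # roughly halfway between the two
    print(morph(127))   # breathy snapshot only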
The VFX also improved on the amount of multi-channel MIDI access that a user could have to PCM-based synthesis. Twelve patches could be set up very quickly to be controlled on any MIDI channels, with similar key range, transposition and controller setups to the performance mode. As computer-based sequencing became more and more important to the majority of users, it was a major plus that 12 of the VFX's programs could be used simultaneously on different MIDI channels. Hence, the Multi was born (as opposed to the Combi, a Korg invention). In general, these days, PCM-based keyboards tend to feature both a Combi-derived mode for performance (it may even be called Performance Mode) and a Multi-derived mode for internal or external sequencing (usually accessed by a switch marked 'Seq' or 'Multi'). The jargon may vary from manufacturer but the two are usually easily distinguished, as one gives quick access to a lesser number of sounds, while the other gives much more complete access to at least 16 different timbres (several manufacturers now have schemes, especially on PCM-based modules, for 32 different timbres to be used simultaneously).
For under £1000, Emu's Proteus gave the user access to many of the sounds which had only previously been attainable from a £2000 sampler.
Ensoniq & Effects
Of course, the old problem of different effects being an integral part of sounds and not being available simultaneously reared its head when MIDI was used to sequence multiple programs. Whilst Ensoniq didn't have the solution to this (which can only be the expense of the extra hardware to provide a separate effect circuit for each multitimbral voice, as on the Korg Trinity), its Copy routines did allow the most important effect in any combination of programs to be quickly copied to the effects buss of the Multi section. This meant that the procedure described last month when I talked about the M1's effects (for copying the effect of the sound most reliant on it and assigning some of the other sounds to it as appropriate) could not only be carried out on the VFX, but it could also be achieved more quickly. It is noticeable that nowadays most PCM-based synths allow this procedure; the main difference between synths is whether it is made easier or more difficult by the architecture of the machine in question.
This capability was equally useful on the VFX-SD, which had the necessary on-board sequencer and floppy disk drive to qualify it as a workstation. Now that the chances were increased that the user might be trying to create his/her entire piece on the one instrument, it was even more important that the ability to share the two simultaneously available effects between up to 12 musical parts was as easy and flexible as possible. In addition, the sequencer could record any changes to this shared effects capability.
Although they have not made such a giant leap forward again since the release of the VFX, Ensoniq have continued to include the originally unique aspects of the VFX design in their subsequent synthesizers. After the VFX-SD came the TS10 and its 76-note keyboard variant, the TS12. These had even more complete sequencers and so took over the 'top-of-the-range' workstation mantle. They also added the ability to load samples from the Ensoniq library as source samples, allowing the user to customise the basic set of sounds in the machine. (One of the principal problems with PCM-based synths is that if you don't have a kazoo sample in the basic waveform ROM, you are unlikely ever to get a kazoo sound out of the machine.) For the smaller budget, Ensoniq also produced the SQ1 and SQ2, which came with a less powerful sequencer and synthesis architecture. Those who are interested in the combination of multiple sound sources within a single program, however, will still find the same ability in these machines.
EMU Exploit Their Sound Library
Around this time there was a major change in direction from Emu Systems, who, until this point, had concentrated almost exclusively on samplers. Although Emu's samplers had much of the subtractive synthesis architecture of the machines we have already looked at here (in fact, the Emulator II was the first sampler which included a full complement of filters, envelopes and other subtractive standards), they could not really be called synthesizers because all source waveforms had to be loaded from disk into RAM. It should be noted, though, that Emu samplers come much closer to the traditional synthesizer than those from Akai or even Roland.
Emu had been making a big investment in sound sampling for their machines for more than 10 years by this stage, and they suddenly realised they had a marketable asset (outside the library disks which they had been selling or bundling for their end users). They decided that if they were to design a small rackmount module which had the synthesis capabilities of their samplers, but with lots of short samples pre-loaded into masked ROM, they too could join the PCM synthesizer revolution.
Thus was Proteus born, named after the Greek god who could change his shape at will (a reference to the number of different instrument multisamples in ROM, which the user could instantly switch between). For under £1000, it gave the user access to many of the sounds (albeit in much shorter samples and loops) which had only previously been attainable from a £2000 sampler, and then only after waiting minutes for the sample data to load from floppy or hard disk. The result was an overnight success, and thousands of these modules are still in use.
But the real appeal of Proteus for those interested in synthesis was not the instantaneous availability of quality Emu sounds but the very real synthesizer architecture of the machine. It inherited all the standard Emu synthesis capabilities, with proper filtering, enveloping, and a modulation routing system to die for.
As computer-based sequencing became more and more important to the majority of users, it was a major plus that 12 of the Ensoniq VFX's programs could be used simultaneously on different MIDI channels.
Not only could Proteus change its shape from program to program, but also from machine to machine. Emu hit on the idea of selling their extensive library piecemeal, divided into categories for the needs of different users; so whereas the original Proteus gave you a wide sweep of sounds for general use, subsequent machines became more targeted to specific music styles. The first of these was Proteus 2 Orchestral, a big hit with film and TV composers who, even if they didn't use it in their final mixes, found it invaluable for composing and trying out arrangements and orchestrations. Proteus 3 World satisfied a growing demand for ethnic samples after the influence of Peter Gabriel's Real World label started to make itself felt in the crossover markets, and the Procussion gave drummers and synthesists the same editable access to a huge drum library. The real payoff, though, came with the Vintage and Classic Keys models (by now the numbering system had been abandoned). These offered sampled synthesizer waveforms from classic synths of yore, which could be properly filtered, modulated and enveloped through a real synthesizer voice circuit. If you want an example of how Samples & Synthesis can be a really creative tool, take a look at these two modules.
The Proteus heritage still continues today in the UltraProteus (a sort of greatest hits with some extra filtering capabilities from Emu's own Morpheus, which we will look at next time) not to mention the increasingly bizarrely named modules (Orbit, Planet Phatt and Carnaval) which court the dance market. Other manufacturers have clearly learnt the lessons of marketing specific sound sets at different target groups (Roland and Akai in particular), but for me the joy of Proteus remains in the synthesis rather than the sample side, which is why those machines loaded with synth waveform samples give the synthesist the greatest creative potential!
Korg's Response
After the success of the M1 and its rack relations, the M1R and M3R, the next series of synths from Korg, the T1, T2 and T3, addressed the growing criticism the M1 had started to suffer for the compressed nature of its more percussive instruments, such as piano and guitar. Korg did this by taking advantage of the continually dropping price of ROM and allocating twice as much memory to the storage of the PCM samples. This meant that the sample could be longer before the loop needed to start on sounds which decayed, and the loops could also be longer, if required, on sustained timbres. As a result, the piano and guitar sounds gained much more natural decays and so could be used for a wider range of music styles, rather than just the fast repetitive triggering of dance music, where subtlety was not required. The T-series appealed much more to the performer, and this was why the flagship T1 had a full-range weighted keyboard, to allow traditional pianists to feel more at home with it.
Of course, as Winston Churchill remarked, you can't please all of the people all of the time, and many people in the emerging field of dance music complained that the T-series didn't sound like the M1, and so didn't use them. The reason was simple. The compressed nature of the sounds in the M1 made them ideal for the no-holds-barred sound of dance music, where everything needed to cut through and be louder than everything else. Pre-compressed sounds such as the M1 pianos and guitars were ideal, especially if the user didn't have the money (or the awareness of the need) for a separate compressor. As a result, Korg have made the original M1 samples available several times in more recent products to court the dance market (the X5DR and Trinity PBS options, as we shall see later).
Another key feature of the T-series was that the user could load his or her own samples into the machine for processing through the instrument's filtering and enveloping. Like the Ensoniq TS-series, this gave the user a way around the main limitation of PCM-based synthesis: if the waveform ROM does not contain a multisample approximating to the sound you need, you'll be hard-pressed to drag said sound out of the machine. Now Korg users could at least expand and customise the source waveforms with samples to take care of their less mainstream needs. On the T1 the sample RAM to do this came as standard, whereas on the T2 and T3 it was an optional extra.
The 01/W series added more refinements; the major step forward was the doubling of polyphony to 32 notes. The module version, the successful 05R/W, also helped develop the PCM-based multitimbral module into a commonplace item in any setup. One of the most important improvements that arrived with the 05R/W was the implementation of effects via a send amount system. This allowed you to remove effects from sounds like the bassline (which often didn't need them, especially if the effect was reverb) by setting the effects send amount on your bass sound to zero.
The story of synthesis seems to be one of more and more user-accessible parameters being accessed by fewer and fewer users.
Next came the X-series, which made the concept more accessible at the low-budget end and gave more user control. The module versions, the X5D and X5DR, added some of the original sounds from the M1 because these had become so important for certain styles of dance music that they became a major selling point (despite being less authentic than the more recent versions of pianos and organs which used more generous allocations of RAM).
The M1 tradition culminated at the beginning of 1996 with the Korg Trinity, still using PCM samples as its main source of sounds (now at a 48kHz sample rate, allowing this synth to be used in all-digital systems with the addition of S/PDIF or ADAT interfaces), but with the addition of separate DSP circuits to finally get around the problem of changing effects when it was being used as a multitimbral instrument. The Trinity comes in various versions, but nevertheless still uses the same fundamental technology that Korg introduced back in 1988 with the M1.
Apart From PCM...
Clearly, other companies have made extremely good use of PCM technology in their synthesizers (including newcomers to synthesis Alesis, as well as Kawai, Roland, and Akai), but it is the quality and type of the source samples rather than the innovation of their synthesis architecture which makes them useful. However, the mid-'90s saw several developments in synthesis, including Emu's Morpheus and Korg's Wavestation, and we'll focus on these transitional types of synthesis next time, as well as looking at some of the predecessors, such as the PPG Wave and the Prophet VS, which inspired them.
Yamaha Join The PCM Party
So great was the success of Yamaha's FM (Frequency Modulation) synthesis in the early '80s that the company spent most of that decade 'trickling down' the technology into cheaper and cheaper synthesizers. As a result, they were the last of the 'big names' to introduce PCM-based technology into their synthesizers, and it was initially a supplement to their FM technology, not a replacement for it. This meant that the sounds which FM had proved very good at producing (electric pianos, tuned percussion and woodwind) could still be provided by the FM circuitry, but the sounds which were better produced from PCM samples, like acoustic piano, strings and other fuller sounds, could be generated using the more recent technology.
However, the reason why the SY77 (the first machine to combine FM and S&S) proved so popular with its more professional users was not the ability to produce sounds with one or other of the two complementary technologies, but the ability to produce hybrid sounds from the combination of the two. PCM's biggest weakness was still the difficulty of adding expression to the performance. The basic sample would sound very authentic but somewhat static. FM was the perfect antidote for this, as it has always been very responsive and expressive, although not the most authentic way of reproducing the fundamental timbre of sounds.
Of course, the two technologies did not necessarily always sit together well in the mix. Fortunately, the SY77 also had the necessary DSP hardware to produce effects. This meant that the same 'smearing' techniques as used on the Roland D50 (see December '97's Synth School) could be used to bind the two sounds together. The only difference was that, instead of using the effects to join together an attack with the sustain portion of the sound, both the FM and PCM parts would sound simultaneously but, being of different characters, they would stand out from each other. The effects would be used to blend the two sounds together.
Of course, many of the programs would only use one sound or the other, so in this case the effects would be used just to add reverb or chorus, in the normal way. The SY77 also had the ability to sequence multitimbrally, but the effects had to be shared between all the different programs being triggered. It therefore allowed you to use the effect/s from whichever program seemed to need it most and then assign the other sounds to those effects where appropriate.
The SY99 took things a stage further, by adding a disk drive and the ability to load samples into RAM. This meant that users could actually take their own samples and combine them with the FM sounds. At the time, however, there was not much general cross-platform support for reading other manufacturers' disks, so unless you wanted to contend with MIDI sample dump, your main option was sample disks for Yamaha's own TX16W sampler (a 12-bit machine which had been able to do stereo samples but only at 33kHz, whereas mono ones were available at 48kHz). One of the interesting side-effects of this was that some use was finally made of a TX16W sample library which the old Yamaha R&D Centre in London's Conduit Street had given me a splendid budget to create. Yamaha distributors all over the world finally had a use for the piles of beautifully bound disk sets for the rather overlooked sampler, which had been in stock for years. For me personally, it meant the ability to load all these great sounds, which no-one else had ever given me a budget to record and edit properly, exactly as I had set them up, but in a synthesizer (without two hours wasted on MIDI sample dump and basic editing, which tends to de-motivate me seriously!). This isn't the only reason why the SY99 is my favourite Yamaha synth of all time, but it certainly goes a long way towards it. The SY99 is certainly unique in allowing you to combine FM, subtractive synthesis, and your own user samples all within a single machine.
Even when Yamaha eventually dropped the FM capability and produced their first purely PCM-based synthesizer, the SY85, they still kept the ability to load user samples into RAM, but using the much cheaper method of SIMMs (Single In-Line Memory Modules), just like modern samplers. As a result, a colleague (now the manager) at a certain retail store I used to manage was able to use his library of dance loops and drum samples to cheaply customise that machine for the evolving dance market. The rack version of this synth, the TG500, actually went a stage further, using Flash ROM to store samples so that they were retained in memory after power-down (I suspect because it had no disk drive to quickly reload from, so MIDI Sample Dump was the only way to get them in). Korg are now offering this as an option for the Trinity in the form of the PBS-TRI option, and it really expands the usefulness of a PCM-based synth, especially for live applications, where loading samples even from hard disk is altogether too long a procedure. Nothing beats turning a machine on and finding your own personalised samples ready to go.
Yamaha's most recent PCM-based synths unfortunately no longer have the ability to load samples into RAM or Flash ROM. Despite the fact that they retain the DSP effects capabilities of all the previous Yamaha workstations and the ability to sequence multitimbrally, they're not as exciting, for me, as the glorious hybrids of the late '80s and early '90s. The really notable and innovative products from Yamaha at the moment are in the physical modelling arena.

View File

@@ -0,0 +1,104 @@
Synth School: Part 7
Transitional Synthesis
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published April 1998
Fairlight's Computer Music Instrument (CMI) was one of the first systems to offer a form of transitional synthesis, but the feature was never really exploited due to the technological limitations of the day.
Between the extremes of the broad brushstrokes of subtractive synthesis and the painstaking detail of additive, there have existed many hybrid styles of synthesis combining the speed of the former with the precision of the latter. Paul Wiffen traces the development of this middle ground through its successes and heroic failures. This is the seventh article in a 12-part series.
Throughout much of the last 20 years, there has been a strain of synthesis which, although it has never challenged the dominant variety at any point, has always provided a worthy alternative for the synthesist looking for that little bit extra control over the timbre of the source waveform without having to go to all the effort of specifying the shifting level of each harmonic individually, as in additive synthesis. While individual manufacturers have coined many terms for their variation on the theme (Wavetable Synthesis, Vector Synthesis, Wave Sequencing, and so on), the overall term which seems to best fit this broad category is Transitional synthesis, because the sound, broadly speaking, starts with the specific harmonic content of one or more waveforms, and evolves, through various means, to end with a different harmonic spectrum (rather than decaying to fundamentals, as with a closing analogue-style filter). How this is achieved varies from one implementation to another, but what all the forms of this type of synthesis have in common is that they offer the user greater control of the harmonic content of the sound as time passes, by allowing him/her to specify the waveform at given moments in the sound's development.
In this they are very different from analogue synthesis, where the fundamental nature of the timbre throughout its development is determined by the basic waveform selected. All that can happen is that some or most of the frequencies this waveform contains can be removed by the filter cutoff or exaggerated by resonance; no radical shift in harmonic content can be achieved. In PCM-based synthesis, the harmonic content of the sound is dictated by the frequencies present when the recording was made. Although these can also be modified by cutoff and resonance, new frequencies, again, cannot be introduced.
Timbral Evolution In A Fair Light
PPG's Wave (below) with the Waveterm sampling hardware on top.
Perhaps the earliest manifestation of the kind of Transitional synthesis I'm talking about this month was on the original CMI (Computer Music Instrument) from the innovative Australian company Fairlight. Press and media coverage of the instrument made much of its light pen and the facility to draw single-cycle waveforms that it offered. Those who tried this method, however, soon found that, without analogue filters to run through the harmonic content of waveforms, picking out and exaggerating their differing compositions, most hand-drawn waveforms sounded rather ordinary and often bland, despite the revolutionary way in which they were created. The simple fact of the matter is that the human ear is sensitive to change in harmonic content, and tends to be unimpressed by a static harmonic content, however complex. The secret of the success of the enveloped filter, as a mainstay of synthesis over the years, is that it's an exceptionally quick and easy way to vary this harmonic content.
Lacking any such filtering capability, the Fairlight engineers had to look for another way to make harmonic content change. Obviously, the samples the Fairlight made could contain timbral changes but only if they were present in the source being recorded. Introducing timbral change on the machine itself would be a tougher job. The system they eventually came up with was perhaps the only function on the CMI which really made use of its computational power, all its other facilities being simple RAM storage and replay tasks, whether of sample data or sequences.
...if you can get your hands on a PPG, Prophet VS or Yamaha SY22, you'll discover a style of synthesis which is perhaps the most powerful of all the non-imitative styles.
Having created two waveforms, the user could place one at the beginning of the available sound memory and the other at the end. The computer would then calculate a waveform for every other memory location in between, by interpolating between all the corresponding points on the two waveforms (this process was known, a little inaccurately, as a Merge). As a result, each waveform played back in the course of a sound made in this way was similar to the one which preceded it but with subtle changes as the waveform was slowly altered to evolve towards the final result. The important thing was that these changes were entirely different to those which a filter would give, as they were produced by a mathematical method which was not in any way restricted by how sound behaves in the real world. Exciting new timbres emerged which had never been heard before. These could be radical (if two completely dissimilar waveforms were specified as start and end points) or subtle (if the two waveforms were closer together in appearance). There were changes for the ear to pick up on, and these changes were also unpredictable and different.
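For the programmers among you, the principle is easy to sketch in a few lines of Python: given a start and an end waveform, every intermediate memory location is filled with a weighted average of the two. The waveform values and the number of locations here are invented purely for illustration; the real CMI worked on far larger tables.

    # Fairlight-style 'Merge': interpolate between a start and an end waveform.
    start_wave = [0.0, 1.0, 0.0, -1.0]     # a crude single cycle
    end_wave   = [0.0, 0.5, 1.0,  0.5]     # a different harmonic shape

    def merge(start, end, locations):
        table = []
        for loc in range(locations):
            t = loc / (locations - 1)       # 0.0 at the first location, 1.0 at the last
            table.append([(1 - t) * s + t * e for s, e in zip(start, end)])
        return table

    for cycle in merge(start_wave, end_wave, 5):
        print(cycle)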
Unfortunately, although the CMI's method might sound like a sound designer's dream, the technology of the time had some major limitations which restricted its usefulness in mainstream musical applications. Firstly, memory size was limited, so the transition from one waveform to the other happened fairly quickly at the nominal original pitch. This meant that it was no good for sustaining sounds, where a gradual change in timbre works wonders; the sound was always of finite length. Secondly, the memory into which the transitional sound was loaded achieved pitch changes in exactly the same way as a sampler: by replaying at different rates. So the higher up the keyboard you triggered the sound, the shorter it became, and the quicker the timbral change happened. A sound triggered lower in the keyboard would last longer, and its timbral change would take longer to happen.
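The arithmetic behind that limitation is simple enough to show. The figures in the sketch below are purely illustrative, but they make the point that doubling the readout rate to go up an octave halves the length of the sound.

    # Replaying the same memory at different rates couples pitch to duration.
    frames = 64000
    base_rate = 32000                        # frames per second at the nominal pitch

    for semitones in (-12, 0, 12):
        rate = base_rate * 2 ** (semitones / 12)
        print(semitones, "semitones:", round(frames / rate, 2), "seconds")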
Of course, there were times when this unalterable relationship was fortuitous. Occasionally the short high notes would have enough punch and character to stand out well over a pad sound, which made up for their brevity. More often, the longer, slower notes worked well as sustained low-end sounds, with the timbral change accentuated because the higher harmonics were more audible.
But most Fairlight users lost patience with these limitations, at least in part because the CMI had nowhere near enough computational power to perform its operations in real time (which is why the computations had to be stored in memory and played back just like samples). As a result, they would stick to the (at the time) unique sampling and rhythm-sequencing capabilities of the Fairlight.
Very little use of the CMI's Merge facility was recorded for posterity, and I have never heard a sound on record which I could positively identify as having been created in this way. I live in hope that someone will do a real-time implementation of this exciting feature of the granddaddy of all digital systems, since today's computational power could create the interpolated waveforms on the fly without even breathing hard, and if the calculations were done in real time, the speed change needed to vary the pitch would disappear. The bright, metallic sounds I found the CMI created would suit current musical styles, like techno, down to the ground.
Wave Goodbye To Pitch Limitations
The successor to Sequential's Prophet VS, designed by the same team of engineers and programmers: the Yamaha SY22.
The next digital instruments to venture into the territory of harmonic transition were the PPG Wave series. Happily, this system, invented by Wolfgang Palm, did not rely on computation in real time, so the Wave synthesizers did not suffer the problem of the evolution of the sound being linked to its replay pitch. As a result, our old friend the envelope generator could be used to control the speed and direction of the movement between waveforms.
This was possible because the waveforms were created at the factory and loaded, in 'family' groupings, into so-called 'wavetables', sets of digital memory locations exactly like single-cycle sampled waveforms. These wavetables allowed a style of harmonic transition which was very similar to the Merge facility on the Fairlight, in that each waveform was only slightly different to the one on either side of it; but over the 32 locations within each wavetable, wide timbral changes were possible.
Of course, those wavetable groupings were decided by the manufacturer, removing the element of serendipity available on the user-defined Fairlight implementation. But this was more than made up for by the fact that the results were usable in a real-time mainstream format.
The synthesist was able to specify the wavetable used by each oscillator, and the starting waveform. However, there was then no obligation to make use of the wavetable's harmonic flexibility. The specified waveform could be used through the sound's duration, complete with normal amplifier and filter enveloping, exactly as on an analogue synth (although the waveform was generated digitally). In fact, one of the PPG's wavetables contained the standard sine, sawtooth, square and pulse waveforms, so that you could make sounds in exactly the same way as with analogue synths, although they never sounded quite the same.
However, nobody bought PPG Wave synths for their ability to duplicate the analogue synthesis process, but rather for the fact that they could supersede it. Once you had specified the initial harmonic content with the starting waveform, an envelope or LFO could be used to change that harmonic content, by moving around inside the wavetable in much the same way that envelopes and LFOs can change the filter cutoff from its initial harmonic-content setting in analogue synthesis. The greater the envelope depth or LFO amount, the further away it was possible to move from the original waveform in the wavetable. The speed of that movement was determined by the attack, decay and release times of the envelope, or the frequency of the LFO.
Despite the fact that the wavetables were factory-preset, this gave the PPG Wave synthesizers a much broader timbral range than standard analogue synths, especially as enveloped analogue filters could also be brought to bear on the sound after the wavetable synthesis had done its unique job. The closest analogy for those of you who have only heard analogue synthesizers is Pulse Width Modulation (PWM): the timbre changes without any movement on the part of the filter as the waveform moves between different variations of the basic waveshape. It was when I first heard Pulse Width Modulation that synthesis came alive for me, and the PPG system offered this same kind of movement, but with a host of different timbral groups in the various wavetables.
Just as with PWM, you could choose to set a constant timbral motion, with an LFO moving the wave readout evenly on each side of the starter waveform, or set up a more tailored single harmonic movement using the attack, decay, sustain and release phases of an envelope. You could use the attack to move quickly from the initial waveform to another further along the wavetable, move back a portion of that distance using the decay, hold on one particular waveform for the sustain segment, and then move slowly back to the original waveform during the release phase.
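Reduced to a sketch (again in Python, with a dummy 32-entry table and made-up envelope values), wavetable scanning is really just an envelope offsetting the readout position from the starting waveform:

    # An envelope value offsets the readout position within a 32-entry wavetable.
    wavetable = [f"wave_{i:02d}" for i in range(32)]   # stand-ins for 32 single cycles
    start_position = 4

    def position(envelope_level, depth):
        index = start_position + int(envelope_level * depth)
        return max(0, min(31, index))                   # clamp to the ends of the table

    # A crude attack/decay/sustain/release shape sampled at a few points in time:
    envelope = [0.0, 0.5, 1.0, 0.7, 0.5, 0.5, 0.2, 0.0]
    print([wavetable[position(level, depth=20)] for level in envelope])

The greater the depth, the further the readout travels from the starting waveform; the envelope times set how quickly it gets there and back.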
The beauty of Vector Synthesis was that it was very 'hands on' (to use the modern jargon) and simple to grasp (figuratively and literally).
More Power To The User
The mighty Waldorf Wave.
As stated earlier, the PPG system was considerably more musically useful than the Merge function of the Fairlight, but was restricted to the waveforms provided by PPG in the Wave synthesizer. PPG's Waveterm changed all this, by providing the computational power for users to create their own waveforms (and, incidentally, make samples) and download these into the Wave 2.2 synth for use just like the factory-preset wavetables. On the Wave 2.3, the whole memory could be used to download and play back 12-bit samples linearly.
There were two versions of the Waveterm (A & B), easily distinguished externally by the fact that most 'A's had 8-inch floppy drives, while the 'B's used the newer 5.25-inch disks. More importantly, the 'B's were improved internally by 16-bit resolution and better A/D conversion for the sampling side. Of course, sample playback through the analogue filters of the Wave 2.3 drew most attention (not surprisingly, as it pre-dated PCM-based synthesis by seven or eight years) but more creative users latched onto the fact that with the Waveterm they could build their own wavetables and turn them into custom sounds on the 2.3 synth.
Of course, time eventually catches up with any technological innovation, and PPG's fortunes faltered with the arrival of cheap samplers from Ensoniq, Sequential and Akai. Ironically, these never attempted to cover wavetable synthesis, but nevertheless the writing was on the wall for the Wave system. Despite ground-breaking new product designs, which were the first attempts anywhere in the world at stand-alone hard disk recording and virtual synthesis (called the HDR and the Realizer), PPG finally went bankrupt in 1987 (see the 'Thoroughly Modern Wave' box for what happened next).
Following The Sequential Vector
Waldorf's Microwave II.
The next company to go in for a system which allowed you to change the harmonic content of the source sound in real time, before the filter section, was Sequential Circuits. However, instead of changing the waveform that an oscillator was generating, their system allowed you to set up four different waveforms on four different oscillators and then mix between them by means of a joystick. This was clearly a much cheaper system: the Prophet VS, the synth which used this technology, was released with a price tag of around £2000 instead of the £3000-£4000 price tag the PPG Waves had carried. Of course, the resulting sound was not quite as smooth as that produced by the PPG, where the harmonic content of every waveform was closely related to that of the one either side in the wavetable. On the VS you could choose to mix between waveforms with vastly different harmonic contents, which made many of the resulting sounds a little harsh to the average ear.
Sequential dubbed this technology Vector Synthesis, which was perhaps a bit of a misnomer. A vector is a straight line between two points, but the VS's joystick allowed you to take any indirect path between the starting and end positions of the oscillator mix. No doubt Sequential thought that Vector Synthesis sounded better than Cartesian Synthesis, or any other more accurate name.
Apart from the waveforms supplied as standard (which included sine, sawtooth, square and various widths of pulse, so you could produce standard analogue timbres through the filter), the VS also allowed you to create your own waveforms, through a basic form of additive synthesis (which had only been available in the PPG system if you could afford a Waveterm to go with your keyboard). On the VS, this was done by stepping through the various harmonics and specifying a level for each. What's more, you could actually hear the resulting change in real time (unlike with the Waveterm, which had to compute when all harmonic levels had been set).
Once you'd created your waveforms, or just selected the factory-preset ones you wanted to use, you could place two pairs of them on the X and Y axes of the joystick. This meant that left/right movement would control the mix between one pair and up/down movement would simultaneously do the same for the other pair. Any position of the joystick thus gave a unique mix of the four oscillators, and as a result, extremely complex timbral changes could be produced as part of a real-time performance. This was something the PPG could only achieve through programming. Of course, there were envelopes on the VS, to allow this mix to be altered automatically during the playback of a note, but to make life even easier the VS could record a manual joystick movement, and use this as the model for automatic change in the mix.
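In code terms (a Python sketch with momentary oscillator outputs reduced to single numbers, which is obviously not how the VS itself worked), the joystick is just a pair of crossfades summed together:

    # Joystick X fades between oscillators A and B; joystick Y fades between C and D.
    def vector_mix(x, y, a, b, c, d):
        # x and y run from 0.0 to 1.0 (fully left/down to fully right/up)
        return ((1 - x) * a + x * b) + ((1 - y) * c + y * d)

    osc_a, osc_b, osc_c, osc_d = 0.2, 0.9, -0.5, 0.4    # momentary oscillator outputs

    print(vector_mix(0.0, 0.0, osc_a, osc_b, osc_c, osc_d))   # corner: A and C only
    print(vector_mix(0.5, 0.5, osc_a, osc_b, osc_c, osc_d))   # centre: equal blend of all four
    print(vector_mix(1.0, 0.3, osc_a, osc_b, osc_c, osc_d))   # mostly B, with some C and D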
The beauty of Vector Synthesis was that it was very 'hands on' (to use the modern jargon) and simple to grasp (figuratively and literally). There were no difficult concepts to get your head round. Everyone understands the concept of 'mixing', and a couple of minutes with the joystick made it very easy to understand the possibilities for unique sound creation.
So if it was such a great idea, why was the Prophet VS the last synth Sequential made, before going bankrupt and being taken over by Yamaha? Well, it was the usual combination of poor mechanical reliability and other developments in the industry with more mass-market appeal. Sequential had their problems with quality control: one particular quirk with the case design made aftertouch stop working if you put the keyboard at an angle on an A-frame stand. The joystick on the first VS I used to reacquaint myself with Vector Synthesis for this piece had partially dropped inside the case and was held in place by tape.
The other reason the VS remained a specialist taste was what I refer to as the 'Piano, Strings and Brass Effect'. The VS couldn't do any of these at all authentically. Unfortunately, it came out at around the same time as the D50, which had the rudiments of PCM-based synthesis, so it could get close, and it had built-in digital effects to boot. The fact that it was also cheaper than the VS was the final clincher. Sequential returned to sampling technology in the Prophet 3000, the first 16-bit sampler to hit the market, but this came to market too late to save them.
The Japanese Pick Up The Baton
Fortunately, the design talents at Sequential were not scattered to the four winds, as Yamaha stepped in and kept Dave Smith and his team together. They took over Sequential's building in San José, and although little was seen from the team in the year after the takeover, they were put to work on a more commercial implementation of Vector Synthesis. This eventually emerged a couple of years later as the Yamaha SY22, its ancestry clear from the joystick on the front panel. The synth included some PCM source waveforms (to take care of the Piano, Strings and Brass Effect) but lost out on the ability to build custom waveforms through additive synthesis, as this was felt to be too marginal.
But the main advantage the SY22 had was that it was built by Yamaha. The case design was much more solid and the reliability a thousand times better. In addition, Japanese manufacturing techniques had brought the price down to well under £1000. As a result, the SY22 sold in much greater numbers, and if you find one on the second-hand market, the chances are that it will be in much better condition than a VS and will continue to work properly for many years to come (even if there are those, like myself, who would argue that the Yamaha version misses out on much of the uniqueness and character of the original VS).
Curiously, by the time the SY22 hit the market, Yamaha had already been without the Sequential team for almost a year; inscrutably, they had parted company with the ex-Sequential personnel almost as quickly as they had moved to keep them together. However, another Japanese manufacturer, Korg, stepped in to preserve the unity of the design team, and Korg have continued to use their talents as an R&D facility ever since (two current Korg products which owe their existence to this facility are the 1212 I/O PCI card and the Z1 synth). Ironically, the first product they presented to Korg, another implementation of some of the concepts first introduced in the Prophet VS, was developed so quickly that it was launched at the same NAMM show which saw the introduction of the Yamaha SY22. We'll look at this instrument, the Korg Wavestation (perhaps the most successful of all the transitional synthesizers), in the next instalment of Synth School, as well as the most powerful implementation yet from Emu Systems, in the form of the Morpheus.
In the meantime, if you can get your hands on a PPG, Prophet VS or Yamaha SY22, you'll discover a style of synthesis which is perhaps the most powerful of all the non-imitative styles: no use at all if you want authentic piano, strings and brass sounds, but all the better for that if you want to come up with truly unique and personalised synth timbres.
Thoroughly Modern Wave
Fortunately, when PPG ceased to be, its Wave technology was not lost forever. Wolfgang Düren, who had masterminded worldwide sales for PPG in their heyday, decided, at the end of the '80s, to recruit designer Wolfgang Palm. The aim was to use the new LSI (Large Scale Integrated) circuit technology to produce a MIDI-controllable rackmount version of the Wave system. In an inspired moment they decided to call it the Microwave, and this instrument is still available in an updated form today: the Microwave and the Microwave II boast the original wavetables from the PPG instruments. Rumour has it that the new company spent over a year trying to make digital filtering sound as good as the original analogue filters of the 2, 2.2 and 2.3, but, in a move reflecting the original Wave keyboard's design years before, they were forced to go with analogue filters to keep the sound authentic. Distinguished visually by a large, bright-red parameter value dial (reminiscent of Comic Relief's Red Nose), these instruments have brought the price of wavetable technology down to around the £1000 mark, without sonic compromise, thanks to the modern economy of single-parameter access.
A few years later the same team followed the Microwave with the impressive Waldorf Wave keyboard, boasting a front panel which can be raised up, Minimoog style, for ease of use when programming. In addition to the trademark big red dial, there are scores of smaller red knobs and switches to make programming as quick and easy as possible. Unfortunately, all this instantaneous parameter access has its downside: the price. The Waldorf Wave is one of the most expensive synthesizers on the market, but this hasn't stopped the production being pre-sold for years in advance.
As a result of Waldorf's efforts, if this piece has whetted your appetite for wavetable synthesis, you're not obliged to brave the second- (and third-) hand marketplace. You can purchase a current Wave synth in either a very affordable (Microwave) or very expensive (Waldorf Wave) form depending on your budget, but either way you will have perhaps the most authentic recreation of a vintage technology on the market. If you do go, instead, for an original PPG, make sure you know a good service engineer.


@@ -0,0 +1,78 @@
Synth School: Part 8
Wave Sequencing To Z-Plane Synthesis
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published June 1998
Paul Wiffen continues to examine transitional synthesis, covering the Wave Sequencing facility, first introduced on the innovative Korg Wavestation, and concluding with Emu's Z-plane technique, which may be regarded as bridging the gap between S&S and today's physical modelling.
In the previous part of this series (see SOS April 1998), I began to talk about what I've termed 'transitional' synthesis methods, where, broadly speaking, a sound begins with a given harmonic content and evolves, to end with a different harmonic spectrum. This type of synthesis was exemplified, in different forms, by the Fairlight CMI, the PPG Wave series, the Sequential Circuits Prophet VS, and the Yamaha SY22, which was designed by the Sequential team for Yamaha after Sequential's demise. At the end of our last exciting episode, the ex-Sequential personnel had parted company with Yamaha and had been taken under Korg's wing instead, where they continued to develop their concepts further.
Crossfade To Wave Sequencing
The transition from waveform to waveform in Sequential's Vector Synthesis, first seen on the Prophet VS, was a simple crossfade, and although two of these crossfades could be controlled or programmed by the joystick which was so integral to the Vector Synthesis system, the maximum number of waveforms which could be involved in a single sound was four. However, the San José-based team's next development, termed Wave Sequencing, allowed up to 255 different waves to be involved. This innovation was introduced on the Korg Wavestation, which still featured joystick-controlled Vector Synthesis, but added the much greater potential for transitional synthesis that wave sequencing gives.
Korg Wavestation
The closest precursor of wave sequencing was the PPG system of wavetable synthesis, where related single-cycle waveforms were stored in a group of 32. The user could pick a starting waveform and then use an envelope or LFO to move around in the wavetable, causing timbral changes as the waveform being read out changed. Differences between adjoining waveforms were fairly slight, so the degree of timbral change was determined by how far and how fast the readout moved from the original starting point.
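As a rough illustration of the wavetable idea (a minimal sketch only, not PPG's actual firmware; all names here are invented for the example), a table of related single-cycle waves is scanned by an index which an envelope or LFO moves over time, while the current wave is read out cyclically at the note's pitch:

import math

SAMPLE_RATE = 44100
TABLE_LEN = 256          # samples per single-cycle waveform
NUM_WAVES = 32           # a PPG-style wavetable holds 32 related waves

def build_wavetable():
    """Toy wavetable: wave 0 is a sine, each later wave adds one more harmonic,
    so adjoining waves differ only slightly (as in the article)."""
    table = []
    for w in range(NUM_WAVES):
        wave = []
        for n in range(TABLE_LEN):
            phase = 2 * math.pi * n / TABLE_LEN
            wave.append(sum(math.sin(phase * (h + 1)) / (h + 1) for h in range(w + 1)))
        table.append(wave)
    return table

def render_note(freq, duration, wavetable, scan_rate=4.0):
    """Read out the wavetable at 'freq' while an imaginary envelope/LFO
    (here just a linear ramp of 'scan_rate' waves per second) moves the
    readout position away from the starting waveform."""
    out = []
    phase = 0.0
    for i in range(int(duration * SAMPLE_RATE)):
        wave_index = min(NUM_WAVES - 1, int(scan_rate * i / SAMPLE_RATE))
        out.append(wavetable[wave_index][int(phase) % TABLE_LEN])
        phase += freq * TABLE_LEN / SAMPLE_RATE
    return out

samples = render_note(110.0, 2.0, build_wavetable())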
In the case of wave sequencing, coming 10 years after wavetable synthesis, there was much less economic restriction on memory for storing waveforms. As a result, instead of access being limited to 32 single-cycle waveforms, full PCM samples were available, and up to 255 could be 'on-line' for use by an oscillator in a sound. Each stage in the wave sequence could be occupied by a PCM sound radically different from the one before or after it in the sequence. The potential for striking sonic change is therefore much greater in wave sequencing, especially since the PCM waveforms can be deliberately moved around by the user to contrast as much as possible with their neighbours.
Not only can a wave sequence contain up to 255 steps, but at each step the user can also determine the PCM or single-cycle waveform to be played, as well as the duration of that wave and of the crossfade to the next. As a result, a greater degree of fine-tuning is possible than in any preceding form of transitional synthesis. Of course, this also means that it can take a great deal of time to create a really complex wave sequence.
Figure 1: A 7-step wave sequence, as shown in all Korg Wavestation manuals.
To get a better idea of exactly how wave sequencing works, let's take a look at Figure 1, which is the diagram Korg supplied in all their Wavestation manuals, depicting a 7-step wave sequence. It shows how each of the steps can have an entirely different waveform assigned to it. Some are clearly PCM samples, such as Steps 1 and 6, some are standard analogue waveshapes, such as Steps 2 and 4, and others are more complex single-cycle waveforms, such as Step 7.
When the wave that each of these steps uses has been set, the level (volume) of each step can be individually set, as can the semitone (+/- 24) and fine (+/-100 cents) tuning for each. In the diagram you'll see that Step 2's level is set louder than the others, and that Steps 4, 6 and 7 are quieter. The duration of each step is set as an arbitrary value between 1 and 499 or to Gate (which means that it lasts for as long as the key is held down). If the scale between 1 and 499 is not right for your needs, there's a neat little utility which lets you compress or expand the overall timescale of the sequence by up to 200%. This means that you can instantly make the wave sequence last twice as long or a fraction of its former length.
The transition between each step and the next is set by a crossfade parameter with another arbitrary value range, of 0 (no crossfade) to 998. This allows the timbral change between one step and the next to be instantaneous or to occur smoothly over whatever time interval you choose. This is the real power of wave sequencing: that these timbral changes can be as sudden or as gradual as you like.
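To make the step parameters concrete, here is a minimal sketch (not Korg's implementation; the class and field names are invented) of a wave-sequence step carrying the values described above, plus a simple linear crossfade from one step's waveform into the next:

from dataclasses import dataclass

@dataclass
class WaveSeqStep:
    waveform: list        # PCM or single-cycle sample data for this step
    level: float = 1.0    # per-step volume
    coarse_tune: int = 0  # +/- 24 semitones
    fine_tune: int = 0    # +/- 100 cents
    duration: int = 100   # arbitrary units, 1-499 ('Gate' would be handled elsewhere)
    crossfade: int = 0    # 0 (instant switch) up to 998 (long, smooth fade)

def crossfade_steps(prev_samples, next_samples, fade_len):
    """Linear crossfade between the tail of one step and the head of the next."""
    out = []
    for i in range(fade_len):
        mix = i / max(1, fade_len - 1)            # 0.0 -> 1.0 over the fade
        a = prev_samples[i % len(prev_samples)]
        b = next_samples[i % len(next_samples)]
        out.append((1.0 - mix) * a + mix * b)
    return out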
Having set up some really interesting shifts of timbre, you may want to have them repeat. The Wavestation allows you to cycle around as many steps of the wave sequence as you want, in either a forwards or forwards/backwards loop. The number of repeats can be set from 1-126, or you can specify infinite repeat, until the amplifier envelope fades the volume down completely.
The cycling of the steps still does not exhaust the possibilities of wave sequencing. Once the wave sequence is set, complete with crossfades and loop if required, the point at which playback of the wave sequence starts can be controlled by a variety of modulation sources. These include velocity, which can be set up so that harder keystrokes start playback from early in the wave sequence and gentler ones later on. This technique can be used with wave sequences which include harsher, brighter waveforms in the early steps and softer timbres in the later ones, to create a natural increase in harmonic content on faster keystrikes and a gentler sound on a lighter stroke. Alternatively, you can set a dynamic modulation source like mod wheel or aftertouch to change the step number of the wave sequence. In this case a start step is specified and this stage of the wave sequence is held until the modulation source is activated. Then the movement within the wave sequence is controlled by the mod wheel or aftertouch, so that timbral changes can be introduced as a real-time expression factor. It is this type of facility which makes wave sequencing such a powerful form of synthesis, especially for lead synthesizer work.
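The velocity-to-start-step idea can be sketched in a few lines (again a hypothetical mapping, not Korg's code): harder keystrokes map to the earlier, brighter steps, gentler ones to the later, softer steps.

def start_step_from_velocity(velocity, num_steps):
    """Map MIDI velocity (1-127) to a wave-sequence start step.
    High velocity -> step 0 (bright, early waves); low velocity -> later steps."""
    velocity = max(1, min(127, velocity))
    position = 1.0 - (velocity / 127.0)       # 0.0 for the hardest hit, near 1.0 for the softest
    return int(position * (num_steps - 1))

print(start_step_from_velocity(127, 8))   # 0 -> starts on the harshest wave
print(start_step_from_velocity(20, 8))    # 5 -> starts among the softer waves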
More Fun In The Waves
Wave sequencing is only one of the techniques available on the Wavestation synthesizers. The instrument can be reduced to the simplest of analogue-style architecture, with just one or two oscillators playing back single-cycle waveforms through standard subtractive synthesis filters, but complete with specialist analogue techniques like Hard Sync (for those who remember back to our second instalment of Synth School). However, the number of oscillators can be set to four, and then they can be mixed, either in the normal way, or by using Vector Synthesis via a live joystick or the Mix Envelope, which stores this two-dimensional mix as an envelope over time. Add to this the fact that any or all of the four oscillators can be set to play back wave sequences with their own filters and envelopes and you can see how complex each Patch can become (if the programmer has the time to set it all up). And since the Wavestation is multitimbral, it's possible to combine up to eight Patches into a Performance or 16 Patches on different MIDI channels in a Multi. At Patch, Performance or Multi level, the entire sonic result is passed through two effects, which are as good as those available on any synthesizer at the time (indeed, the effects were considered so good that the later, rackmount, Wavestation AD allowed external sound sources to be processed through them).
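The joystick mix at the heart of Vector Synthesis can be illustrated with a toy function (names and corner layout invented for the example, not Korg's algorithm): a two-dimensional position is turned into four oscillator levels, one per corner of the vector square.

def vector_mix(x, y):
    """Convert a joystick position (x, y each in -1.0 .. +1.0) into levels for
    four oscillators sitting at the left, top, right and bottom of the vector
    square. The centre position gives all four equal weight."""
    x = max(-1.0, min(1.0, x))
    y = max(-1.0, min(1.0, y))
    left   = (1.0 - x) * 0.5
    right  = (1.0 + x) * 0.5
    top    = (1.0 + y) * 0.5
    bottom = (1.0 - y) * 0.5
    total = left + right + top + bottom
    return [w / total for w in (left, top, right, bottom)]

print(vector_mix(0.0, 0.0))    # centre: four equal levels
print(vector_mix(-1.0, 0.0))   # hard left: the 'left' oscillator dominates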
Although the sales of the Wavestation series of synths (including the much cheaper 1U Wavestation SR) never challenged the success of straight PCM-based machines such as the Korg M1 or its successors, many people now declare the Wavestation to be their favourite Korg synth, or even their favourite synth of all time. It certainly has the potential to be an inexhaustible source of inspiration for real synthesis aficionados, allowing access to traditional subtractive synthesis, vector synthesis and wave sequencing. Not surprisingly, all three members of the Wavestation family hold their value extremely well on the second-hand market, but if you can find one within your budget, it's a machine whose potential you are unlikely ever to exhaust.
Leading On A Z-Plane
Figure 2: An example of an Emu Morpheus filter configuration.
The next type of synthesis I'll be looking at, Z-plane synthesis from Emu Systems, fits broadly into the category of transitional synthesis. However, the transition does not happen between different oscillator waveforms but in the filter section of the synth. Z-plane synthesis was first implemented in the wittily-named Morpheus (the name has nothing to do with the figure from Greek mythology but refers to 'morphing', a term which means to change from one thing to another), and its use of interpolation between two filter shapes is very reminiscent of how the Fairlight 'merged' from one waveform to another. Extremely complex filter shapes are created through the use of up to eight filter components, each of which is comparable to the traditional low-pass, band-pass or high-pass filters or parametric equaliser bands (see Figure 2 for one configuration example). The resulting sculpting of the sound is far more precise and subtle than in any previous type of synthesis. In addition to the basic function of the filter, starting by removing the high and/or low end, peaks and notches can be placed at will anywhere across the entire audible frequency range.
Figure 3: The Morpheus filter can change its function over time, as this graph from the original Morpheus manual shows. Here filter characteristic A morphs into Filter characteristic B over time (the axis labelled 'Morph' here).
Once you've managed to get your head round this, brace yourself, because we still haven't scratched the surface of Z-plane synthesis. In fact, the basic Morph parameter on its own might be thought of as X-axis synthesis. Another parameter, Frequency Tracking, introduces the equivalent of a Y-axis into the equation. This is the closest parameter to the conventional filter cutoff, in that it moves the complex Morph filter up and down the frequency range (Figure 3).
In combination with the Morph parameter, Frequency Tracking gives two-dimensional control over the filter shape (as illustrated in Figure 4). Unlike a conventional filter cutoff, though, the Frequency Tracking parameter cannot be moved in real time, but must be set at Note On (presumably because there has to be some limit on the processing power required). This makes it suitable for hooking to parameters like keyboard tracking and velocity, but unavailable for controlling from aftertouch or envelopes. However, the real-time Morph parameter allows much more radical effects than filter cutoff movement, and thus more than makes up for the fact that you have to fix the Frequency Tracking at Note On.
Amazing Transformations
Figure 4: Two-dimensional control over filter shape is provided by the combination of the Morph parameter and the Frequency Tracking parameter.
The observant amongst you will have spotted that I've still not mentioned the 'Z' axis that completes Z-plane synthesis: a third parameter, Transform 2. The function of this varies from Z-plane filter to Z-plane filter, but one example of what it can do is increase the size of the peaks and notches in the filter contour (similar to the individual peak which is increased in a conventional filter by the resonance control). Now that we've introduced the Z-plane into the equation, the three-dimensional variations possible in the resulting filter contour are best visualised as the cube shown in Figure 5 (above), rather than the square in Figure 4.
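One way to picture the cube is as interpolation between stored filter 'frames' along three axes. The sketch below is purely illustrative (Emu's actual coefficient interpolation is far more involved, and the names are invented): each corner of the cube is treated as a list of band gains, blended trilinearly from the Morph, Frequency Tracking and Transform 2 positions.

def lerp(a, b, t):
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

def zplane_frame(corners, morph, freq_track, transform2):
    """Trilinear blend of eight corner 'frames' (each a list of band gains).
    corners is indexed [morph][freq_track][transform2], each index 0 or 1;
    morph, freq_track and transform2 are 0.0-1.0 positions inside the cube."""
    # blend along the Morph (X) axis first...
    m00 = lerp(corners[0][0][0], corners[1][0][0], morph)
    m01 = lerp(corners[0][0][1], corners[1][0][1], morph)
    m10 = lerp(corners[0][1][0], corners[1][1][0], morph)
    m11 = lerp(corners[0][1][1], corners[1][1][1], morph)
    # ...then along Frequency Tracking (Y)...
    f0 = lerp(m00, m10, freq_track)
    f1 = lerp(m01, m11, freq_track)
    # ...and finally along Transform 2 (Z).
    return lerp(f0, f1, transform2)

# two made-up 3-band frames, used at all corners for brevity
flat = [1.0, 1.0, 1.0]
peaky = [0.2, 3.0, 0.2]
corners = [[[flat, flat], [flat, flat]], [[peaky, peaky], [peaky, peaky]]]
print(zplane_frame(corners, 0.5, 0.0, 0.0))   # halfway along the Morph axis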
The Transform 2 parameter, like the Frequency Tracking parameter, is also fixed at Note On, but this actually gives you more flexibility than most traditional filtering, where there is rarely any automatic control of resonance at all and you have to make do with the fixed setting whatever the note played or its velocity.
Figure 5: The Transform 2 parameter introduces the Z-plane into Z-plane synthesis, giving three-dimensional variations in filter contour. The concept is shown here as a cube.
Figure 5 shows the result when velocity is used to set Transform 2 and keyboard position used to set Frequency Tracking. Not all of the 197 filter types in the original Morpheus feature this third Transform 2 parameter, but about half do (so technically there are around 100 Z-plane filter configurations in Morpheus). All the filter configurations are individually described in the manual, complete with comments and suggestions for specific uses, so there's no danger that you'll be left to yourself to try and work out where to use them (although I find that random assignment leads to some of the most exciting results - but then I've always been a great believer in serendipity, or the 'happy accident', ever since Jon Pertwee explained what it meant in an episode of Doctor Who!).
You really can make some major timbral alterations to your source waveform, changing it almost beyond recognition. In fact, the sheer range of filter types and the way they can be altered in performance, the technology used to create and modify the filter contours on an individual basis, and the resulting sonic variations in the sound, make Z-plane synthesis a real precursor to last year's buzz technology, physical modelling (also known as virtual synthesis or acoustic modelling). This uses shedloads of DSP power to modify source waveforms in the same way that the physical modifiers of the real instrument (shape and size of resonating case or vibration column, for example) affect the input sound. Many of the Morpheus' filters are described in these terms - for example, F097 ("designed to make possible a set of piano presets that sound like they were recorded with the sustain pedal down"), or F105 ("designed to emulate some of the resonant characteristics of an acoustic guitar body"). As such, the Morpheus probably represents the missing link between instruments which just use DSP to add some effects sparkle, and those which create the entire sound through raw DSP, as in physical modelling instruments such as the Yamaha VL series or the Korg Prophecy or Z1.
Of course, we haven't really looked yet at the source waveforms that Morpheus allows you to filter in this radical way. Although the standard analogue waveforms we know and love from the very first Synth School (sawtooth, square and pulse in various widths) are available, these are crammed in with 48 sampled sounds, 22 harmonic waveforms (built additive synthesis-style), 92 single-cycle samples from organs and synths, and 68 percussion sounds. So while Morpheus has something in common with PCM-based synthesis, it also adds elements of analogue, additive and other types of synthesis along the way. If you're looking for a synth that will yield hours of experimentation and sonic creativity, Morpheus is a monster, but like so many of the best synths ever made, don't look to it for piano, strings, brass and drums (unless you like these with a twisted edge).
The real power of wave sequencing is that timbral changes can be as sudden or as gradual as you like.
Physical Relationship
With Z-plane synthesis, we've started to touch on the technology used in physical modelling, which brings us up to date, as this is currently where all the big strides in synthesis are being made. From purely analogue models (those on the Roland JP8000 or Clavia Nord Lead, for example) through those which are based on other synthesis styles (such as FM on the Yamaha AN1x or other electronic instruments such as organs and electric pianos on the Korg Z1), to models of purely acoustic instruments (such as brass and woodwind from the Yamaha VL series, or plucked and bowed strings on the Z1), physical modelling is playing a greater and greater part in sound production on modern synthesizers. And it will become more and more prevalent as DSP gets more powerful and cheaper to implement.
Next time, we'll look at how physical modelling can not only imitate but sometimes go beyond the type of synthesis from which it draws its inspiration, to create even more exciting possibilities for those who are constantly searching for that something extra from a synth.
VFX PCM Loops: A Step In The Right Direction
The Ensoniq VFX, while not offering the flexibility of wave sequencing, can give you a taste of the possibility of using a string of PCM waveforms as part of your sound. Although you cannot determine the order of the PCM sounds, which is strictly governed by the order in which they were loaded into ROM by Ensoniq, you can set the sample from where the string of samples starts reading out and how many samples will be included. There's no potential for looping each individual sample and setting how long it lasts, let alone crossfading between one sample and the next, but it is possible to set the string of selected samples to loop. Looping allows you to start to create rhythmic patterns which can be used either as the basis for a patch, or as an element to fade in and out via an envelope.
As the percussion samples are all stored together, it's quite often possible to find some really neat loops in this area of ROM. Some of the areas with brass and woodwind samples produce loops which sound like the worst sort of avant-garde jazz, but by messing around with the start point and the number of steps in the loop, you can come up with some unexpectedly musical results, especially if you want to create sounds which evolve and change their fundamental nature over time.
If you can't get access to a Korg Wavestation, some time spent with this facility on an Ensoniq VFX will certainly give you a taste of what can be done with wave sequencing. Incidentally, later Ensoniq synths, such as the TS10, expanded on what the VFX offered, allowing both user selection and ordering of samples.


@@ -0,0 +1,88 @@
Synth School: Part 9
The Imitation Of Analogue
Synthesizers > Synthesis / Sound Design
By Paul Wiffen
Published July 1998
Korg's Z1, like all modelling synths, requires masses of DSP horsepower.
Physical Modelling and Virtual Synthesis have been buzzwords for several years now, especially when it comes to imitating analogue synthesis. But what are their advantages and disadvantages, and how do they work? Paul Wiffen explains. This is the ninth article in a 12-part series.
About 12 years ago, I was taken by a guy with whom I was working on an Atari sampler to the engineering lab where he moonlighted as a Cambridge research doctor in DSP for audio. There, from a computer which filled half a decent-sized room, I was played a series of brass and woodwind sounds which I assumed to be samples. They certainly had an authenticity I had previously only heard from sampling. But the more I listened, the more admiration I had for the guy who had made the multisamples. I couldn't hear the loops, nor the points up and down the range where one sample stopped and the next one started. I knew he couldn't have used positional crossfading, because that always gives a 'doubled', chorus-like effect. What's more, sometimes the effect of velocity (from the MIDI master keyboard being used to trigger the sounds) changed the sound subtly, in a way that velocity crossfades could not. I was flummoxed. "How's it being done?" I asked. "Physical modelling," came the reply. "One day all synthesis will be done like this!"
Two years later, at a US NAMM music fair, I was helping out on the stand of my room-mate in California. My main contribution had been to use a Roland MC500 to sequence the backing for his demonstrator, ex-Berlin guitarist Dave Diamond, and sync it to the PPG HDR, the world's first stand-alone hard disk recorder, so that Dave had something to record his guitar and vocals alongside. I had assumed that the HDR ($17,000 with an 85MB hard drive) was the most advanced piece of technology I was going to see during that show, but when the German designer, Wolfgang Palm, emerged from the internal booth, saying "Andy, I think I have it working again," we all huddled inside to hear an even bigger and more expensive box do a passable imitation of a Minimoog. Then he flipped a switch and it produced the kind of non-electric piano which only FM can be responsible for. The box was called the Realizer and when I asked how it was being done, the dour German replied "Virtual Synthesis." It took me years to equate this with what I'd heard two years before in Cambridge.
Let's just make this clear: Virtual Synthesis is another name for Physical Modelling. One term describes where it is done, the other how, but the procedure is the same.
So before we go any further, let's just make this clear: Virtual Synthesis is another name for Physical Modelling. One term describes where it is done, the other how, but the procedure is the same. So don't let any boffins or, worse still, marketing men, hoodwink you: they are two terms for one technology.
But what is the technology, exactly? Again, you may well receive several different answers depending on who you ask. Here are just a few of them: "masses and masses of DSP horsepower"; "software models of the way real instruments produce their sound"; "built-in DSP FX taken to its logical conclusion"; and "the sonic equivalent of virtual reality". The trouble is that there is an element of truth in all of these: it does take a huge amount of Digital Signal Processing to undertake realistic physical modelling; the software involved does attempt to recreate the way sounds are made in the real world; instead of just changing the basic sounds through effects processing, the sound is created from scratch by the same sort of chips which have been producing the effects in synthesizers for years; and the level of realism involved these days beats anything I have seen on a virtual reality system into a cocked hat.
She's A Model...
Korg's Prophecy.
Let's return to first principles and the word 'modelling', because this is the key to the technology. All the other methods of synthesis we have looked at over the last year have one thing in common: the parameters involved with each type of synthesis don't change depending on the type of sound you're trying to get. There's a filter attack parameter on an S&S (Sample & Synthesis) synth whether you're trying to produce a piano, strings, or a synth bass. There are harmonic levels on an additive synth whether you're making a brass sound or a harpsichord. The wave sequencing parameters on a Wavestation are always there, whether you use them or not!
The same is not true of a current multi-model synthesizer such as the Korg Prophecy/Z1 or a Yamaha VL-series synth. Look for the same parameters you used to make a flute sound when using the Bowed String model and you'll be out of luck: the parameters change depending on the model you have selected. This is why the time it takes to change patches on a modelling synth is often perceptible, because so many different parameters need to be broken down and re-configured. Quite often when you change models, you are quite literally changing synths. This can make physical modelling as a method of synthesis quite challenging to define, which is why the DSP effects analogy is quite useful. We expect the parameters to change when we switch a multi-effects unit from reverb to flanging or distortion; the multi-modelling synth is the same, only more so. Think of changing from a tenor sax to a soprano as akin to changing from a hall reverb to a room; changing to a violin is like selecting a phaser effect instead. The only real difference is one of scale: the amount of DSP power is greater in a modelling synth by at least an order of magnitude or two.
However, this really doesn't help you understand how physical modelling does what it does, in the same way that most people don't have any idea how DSP is used to create effects. In fact, the principle is the same as with digital reverb. The designer attempts to work out what happens in the real world, and then uses mathematical calculations to attempt to recreate this in software. The degree of realism achieved depends on two things: how accurate his analysis or 'model' of what happens in the real world is, and how closely the DSP algorithms he then writes reproduce this analysis. If the designer has misunderstood how the sound is produced in the real world, then, however good his DSP code is, it's unlikely that he'll make a very realistic-sounding reverb or plucked string instrument (although he may create some great new effect or sound which can't be produced in the real world). On the other hand, however great his understanding of the processes involved, if he doesn't have the necessary DSP horsepower to hand he may get into the right ballpark, but he isn't going to fool anyone that this is a real hall or a real guitar. For this reason, I still haven't heard a halfway decent model of a grand piano, because it's still prohibitively expensive to provide the amount of DSP power needed to recreate what's going on inside a 9ft Bösendorfer (even after you've spent a lifetime analysing exactly what that is). I would hesitate to say that it will never happen, but I think we're probably still a few years away from a great physical model of an acoustic piano. (However, the rate of acceleration of technology we're currently experiencing, coupled with the falling price and increasing power of DSP chips, might make it sooner than I think!)
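For a taste of how little code the simplest physical model can take (and how much the realism then depends on the accuracy of the model rather than of the code), here is the classic Karplus-Strong plucked-string algorithm. This is a textbook example only, not the method used by any of the instruments discussed here.

import random

SAMPLE_RATE = 44100

def plucked_string(freq, duration, damping=0.996):
    """Karplus-Strong: a delay line filled with noise (the 'pluck') is repeatedly
    averaged and fed back, modelling energy loss along a vibrating string."""
    period = int(SAMPLE_RATE / freq)                     # delay length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(int(duration * SAMPLE_RATE)):
        out.append(buf[0])
        # averaging the two oldest samples acts as a crude low-pass 'string loss' filter
        new_sample = damping * 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [new_sample]
    return out

samples = plucked_string(220.0, 1.0)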
Think of changing from a tenor sax to a soprano as akin to changing from a hall reverb to a room; changing to a violin is like selecting a phaser effect instead.
Analysing Analogue
Roland's DSP-based JP8000 modelling synth (top) offers much of the look and feel of the analogue Jupiter 8 (above).
The easiest instrument for the programmer to figure out, with a view to recreating what's going on inside it, is the analogue synthesizer. The reason for this is that these instruments were designed by people who knew the physics of what they were creating (unlike Stradivarius, who refined his craft empirically, by observation and experimentation). It's thus easy to break the analogue synthesis system down into components for recreation in software, and cheaper to provide the necessary DSP power to do this. So even if the modelling synth programmer doesn't know how an analogue filter works, he can get a book on filter design and read up on it. He can then recreate this with a fairly limited amount of DSP. This explains why we currently have more affordable modelling analogue synths on the market, but also why their polyphony is fairly restricted (at least in comparison to today's PCM-based synths, if not to the original analogue machines on which they are 'modelled'!). Once you move into the recreation of acoustic instruments such as brass and strings, however, as with Korg's MOSS system (Prophecy/Z1) and Yamaha's VL multi-modelling synths, things get a lot more complicated and therefore a lot more expensive (or a lot less polyphonic, to save money).
I'm going to leave the modelling of acoustic instruments until next time, and concentrate this month on the advantages that the modelling of analogue synthesis gives. First, I'll make sure that everyone has understood exactly how the model works. The designer starts by analysing how the analogue synth breaks down into its component sections: oscillators, filters, envelopes, and so on. If you don't know this stuff by now, you've clearly not been paying attention. But worry not: you can re-enrol in Synth School simply by contacting our back issues department and re-ordering the June and August 1997 issues of SOS, which dealt with the components of an analogue synth.
Once the designer has a block diagram of an analogue synth in his head, he simply goes about replacing each component section with a software engine to accomplish the same task. In fact, this has been happening to analogue-style synthesizers over the last 20 years anyway. First, synths like the EDP Wasp, the OSC OSCar, the Elka Synthex and the Korg Poly 61 replaced analogue oscillators with digital ones (with greater or lesser success). Instead of the waveform being produced by analogue components, it was read out through a D-A (Digital-to-Analogue) converter as a string of numbers which acted in roughly the same way (the degree of roughness tended to determine how good the machine sounded), before being fed through an analogue filter. Many envelopes were generated digitally from right back in the early 1980s, but it took until the end of the '80s before analogue filters were replaced with digital ones, Roland and Emu being amongst the first to achieve respectable sounding digital versions of this central component of analogue synthesis.
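The 'string of numbers' idea amounts to a phase-accumulator oscillator: a stored single cycle is stepped through at a rate set by the desired pitch, and the resulting numbers are sent to the D-A converter. A minimal sketch follows (illustrative only, not any particular manufacturer's design):

SAMPLE_RATE = 44100
TABLE_LEN = 1024
# one stored cycle of a sawtooth, as a digital oscillator might hold in ROM
SAW_TABLE = [2.0 * n / TABLE_LEN - 1.0 for n in range(TABLE_LEN)]

def digital_oscillator(freq, num_samples, table=SAW_TABLE):
    """Phase-accumulator readout: each output sample advances the read position
    by freq * TABLE_LEN / SAMPLE_RATE, so the step size sets the pitch."""
    out = []
    phase = 0.0
    step = freq * len(table) / SAMPLE_RATE
    for _ in range(num_samples):
        out.append(table[int(phase) % len(table)])
        phase += step
    return out

samples = digital_oscillator(440.0, SAMPLE_RATE)   # one second of A440 sawtooth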
However, these digital replacements still tended to be implemented in 'discrete' circuitry. You could still point to the bit of the circuit board where the filtering was done or the waveforms generated (if you were a suitably-trained engineer with a circuit diagram, of course). Often the signals were passed around the board in the analogue domain between these sections (less and less often as time went by, of course) and from them to the increasingly present DSP effects chips.
From Real To Virtual
Even though by the time we reach the modern S&S synth almost everything inside is done digitally, it is not a virtual synth, as parts of the process still take place in the physical world. The transition to virtual synthesis takes place when all these elements are replaced in code which runs on general-purpose DSP hardware chips, and when it is no longer possible to say just where the filtering or enveloping takes place (except in a particular line or lines of code). The true virtual synth may look like its analogue or digital equivalent when represented as a block diagram, but look inside the case and all you will see is a bunch of (usually identical) DSP chips, with a very simple circuit board layout that allows them all to be harnessed together as one mass of DSP horsepower.
Of course, many of today's DSP-based modelling synths look almost more like analogue synths than analogue synths do! The Roland JP8000 modelling synth lovingly creates the front panel of the analogue Jupiter 8 (JP8), even down to the sliders I never liked the first time around. Clavia's Nord Leads (I & II) give you a knob for every parameter, so you can twist again like you did last decade but two (if you were born then), and even a synth like the Korg Z1, which doesn't have room on its entire surface to provide a knob for every parameter (it takes 17 Mac windows to fit them all in), offers you the most commonly tweaked ones as dedicated hardware knobs.
Dedicated these knobs may be (and should be, in my opinion: there's nothing worse than a knob with an identity crisis when you're stood performing in front of a crowd of people), but don't think this means that there's any physical connection with the parameter they control. The pot is simply read, and the value derived applied to the appropriate software parameter. A new operating system could very quickly make nonsense of the parameters printed on the front panel (perhaps this is how DSP engineers play practical jokes on mere mortals, who see a label and believe it). The Yamaha AN1x makes this point pretty forcefully, as you need to check which of the colour-coded modes is selected and then look for the parameter name in the matching colour. This does enormously reduce the number of knobs required, but it makes use on a darkened stage a bit exciting!
Unique Attributes
Mac editing software for Korg's Z1, showing the sub-oscillator being used as the sync source.
Apart from the return of knobs, what other advantage does the modelling of analogue synths offer, especially over the original machines, which tended to have knobs aplenty? Well, let's start with the mundane but very important quality which digital synths have been offering for a while: stability, especially in tuning and in the accurate recreation and recall of sounds. The younger members of my audience may not value this very highly, but they've probably never played a Minimoog or Prophet 5 in a hot club. With modelling, you could emulate the tuning inconsistencies of analogue machines, though there's little artistic use for such things (unless you are going for the nostalgia vote), but there are some modelling systems which can introduce the slight random changes in pitch which fatten up a multi-oscillator analogue synth sound.
The exact matching of filter settings is something else which digital technology has brought, and physical modelling makes the exact polyphonic reproduction of the same sound more precise, as well as allowing the recall of timbres and their porting from instrument to instrument. Again, though, this is old news for anyone who owns a synth made in the last 10 years.
The designer attempts to work out what happens in the real world, and then uses mathematical calculations to attempt to recreate this in software.
So what are the advantages of physical modelling of analogue which we have never seen (or should that be heard?) before? Put simply, the synthesizer can be reconfigured for different ways of combining components in a way which compares to a modular synth, although new components can be created with less cost than in a modular system. The most obvious example is on the Korg Z1. Instead of having one model for analogue synthesis, it actually has six different models, with different configurations.
Let's look at the Sync Osc model as a case in point. This model runs on a single oscillator (so, for example, you can have a different model on the Z1's second oscillator). Those of you who were paying attention to the instalment of Synth School which covered oscillator sync (SOS August '97) should be raising one eyebrow, Spock-like, by now, wondering how you can sync an oscillator if you haven't got a second oscillator to sync it to. Well, fear not. The Z1's designers have made it so that you can use the sub-oscillator as the sync source (see the accompanying screen dump from the Mac Z1 software). You then don't need to actually listen to what this oscillator is doing. In fact, the other oscillator can have a completely different model on it (in the patch shown in the screen dump, the second oscillator is set to the VPM model, Korg's equivalent to FM). The real advantage of this is that when you take the pitch of the sub-oscillator which is controlling the sync right up, the sound of the sync'ed oscillator, while very interesting, becomes very thin. Because the second oscillator is still available for playing another waveform, you can keep a solid basis to the sound, even when you're making the sync'ed oscillator squeal by taking the control oscillator pitch right up.
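Hard sync itself is easy to sketch: the audible oscillator has its phase reset every time the sync source completes a cycle. The toy function below uses invented names and a plain sawtooth, and is in no way Korg's MOSS code; it simply shows the phase-reset mechanism.

SAMPLE_RATE = 44100

def hard_sync(slave_freq, master_freq, num_samples):
    """Sawtooth 'slave' oscillator whose phase is reset whenever the (silent)
    master/sub-oscillator wraps around - the essence of oscillator sync."""
    out = []
    slave_phase = 0.0
    master_phase = 0.0
    for _ in range(num_samples):
        out.append(2.0 * slave_phase - 1.0)          # only the slave is heard
        slave_phase += slave_freq / SAMPLE_RATE
        master_phase += master_freq / SAMPLE_RATE
        if master_phase >= 1.0:                      # master completed a cycle...
            master_phase -= 1.0
            slave_phase = 0.0                        # ...so reset the slave's phase
        elif slave_phase >= 1.0:
            slave_phase -= 1.0
    return out

samples = hard_sync(880.0, 110.0, SAMPLE_RATE)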
The Z1 has different models for Cross Modulation and Ring Modulation, which (as I pointed out when we originally covered these techniques in the August 1997 issue of SOS, as above) were previously often only possible on modular synths which allowed you to patch any source to any control point - techniques developed from what the original designer might have seen as 'mis-patching.' Of course, there were analogue synth keyboards which started to bring in these routings as standard, but they offered nothing like the complexity of reconfiguration of the standard analogue setup which is possible with Korg's MOSS physical modelling system. As the 'components' are only DSP software routines linked by more software, re-routing is not as difficult as when one had to switch control voltages coming from one part of the system to a completely different part of the system, often where the original designer had not expected them to go. Back then, a physical wire was needed to connect the source to the destination, but in the virtual world of physical modelling the designer only has to think it there, change a line of code for the address a datastream is sent to, and hey presto, it's done. Of course, you're still limited by the imagination of the designer and his fixed ideas on what you might want to do (the machines with a knob for every parameter tend to limit you in much the same way as the original synths did), but with the advent of software editors for machines like the Z1 and the AN1x, the possibilities really open up.
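Ring modulation, for its part, is nothing more than multiplying the two oscillator signals, which is one reason it is so cheap to offer once everything is virtual. A brief illustration (sine inputs assumed, names invented):

import math

SAMPLE_RATE = 44100

def ring_mod(freq_a, freq_b, num_samples):
    """Ring modulation: the output is the product of the two inputs, yielding
    sum and difference frequencies rather than either original pitch."""
    return [math.sin(2 * math.pi * freq_a * n / SAMPLE_RATE) *
            math.sin(2 * math.pi * freq_b * n / SAMPLE_RATE)
            for n in range(num_samples)]

samples = ring_mod(440.0, 290.0, SAMPLE_RATE)   # a clangorous, bell-like product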
Modelling The Future
Seer Systems Reality software synth.
I've only really scratched the surface of physical modelling in the imitation of analogue here: while analogue imitation may be the most widespread use of modelling at the moment, and brings certain advantages even over the original classics, it is but the visible tenth of the modelling iceberg. As time goes by and DSP chips become cheaper than the cholesterol-filled variety, we'll see physical modelling expand to take over some of the market share currently dominated by S&S machines. As I said earlier, the acoustic grand piano may be some way off, but there are already great woodwind and brass, bowed and plucked string, electric piano and organ models out there. Currently it's Yamaha and Korg who have the 'virtual' monopoly, and it is their models of these instruments which we'll be looking at next time, as well as an early 'close but no cigar' attempt by Technics, whose WSA1 synth is very close to this writer's heart. But who knows: at any moment, any of the other major manufacturers (or even a brand-new name) may burst onto the market with a revolutionary modelling system which will replace the entire orchestra.
In the meantime, see if you can get access to one of the analogue modelling instruments described here, as they are particularly rewarding for those of an experimental frame of mind. I've gone from being a sceptic to a total convert in less than 18 months (but then my favourite 'analogue' synths all had well-programmed digital oscillators anyway). The authenticity of the sound quality is, of course, purely a matter of opinion, but most people seem to find one of the current crop of analogue imitators which they can live with.
Computer Love
With the advent of the powerful CPU (Central Processing Unit) chips in personal computers, there are now also physical modelling programs which actually run on the main processors of Macs and PCs. The first of these that I became aware of a few years back was by Seer Systems, for the Pentium. Those of a Sherlock Holmes mentality may be able to guess the identity of the synth guru behind Seer (another word for Prophet). It was the brainchild of Dave Smith, who left Korg's R&D facility in San José some years back (where he had set them on the track that would lead to the Prophecy and Z1) to develop the same kind of technology as software-only programs running on PCs. I'm afraid that my admiration for Dave, great though it is, was not enough to make me buy a PC to run his software on, but the demonstrations I heard (and played) at several NAMM shows in succession were enough to convince me of the validity of Seer's code, if not their choice of computer platform. Those of you who have the patience to install a soundcard (because, of course, you need its D-A converters to listen to the sound created in the digital domain) into a PC (something for which my life is definitely not long enough) can check this software out.
I was cheered, on Apple's own stand at this year's NAMM show, to find a company called BitHeadz, who have produced a very similar product for the Macintosh, called the Retro AS1. I wasn't able to spend long with this program, but the authenticity of the analogue sound was certainly there, even through the basic Apple Sound Manager (greater fidelity is available through PCI digital audio cards). Have a look at this product on the company's web site (www.bitheadz.com) and see if they have any audio examples to download. In the course of writing this piece, I spoke with BitHeadz, who have been shipping Retro AS1 for a little while now. PC people (if they haven't stopped reading already, because of my partisan copy!) will be pleased to know that the PC version will ship this summer, although it will apparently need much faster Pentiums to produce the same results as on the PowerPC chips.
While no proper analysis of either of these programs is possible here, this is developing technology, and anyone really interested in synthesis should watch this space as it looks as though cyberspace is probably where the exciting development in physical modelling will come from. Something common to both the Seer and BitHeadz products is the fact that polyphony and numbers of oscillators per voice are fluid and entirely dependent on how much CPU power you have and how much of it you want to dedicate to synthesis. This means, with the accelerating pace of CPU speeds and MIPS capacities, that the 6-oscillator-per-voice, 1000-voice multitimbral analogue synth is only a matter of time. I can't wait!

File diff suppressed because it is too large

File diff suppressed because it is too large

Binary file not shown.


@@ -1 +1 @@
{"book_files": ["Books\\Music\\Articles\\Gilmore.txt", "Books\\Music\\Articles\\Gilmore2.txt", "Books\\Music\\Articles\\Gilmore3.txt", "Books\\Music\\Articles\\Gilmore4.txt", "Books\\Music\\Articles\\Gilmore5.txt", "Books\\Music\\Articles\\Jeff Beck 1.txt", "Books\\Music\\Articles\\Jeff Beck 2.txt", "Books\\Music\\Articles\\Jeff Beck 3.txt", "Books\\Music\\Articles\\Satriani.txt", "Books\\Music\\Articles\\Satriani2.txt", "Books\\Music\\Articles\\Satriani3.txt", "Books\\Music\\Articles\\Satriani4.txt", "Books\\Music\\Articles\\Satriani5.txt", "Books\\Music\\Articles\\Satriani6.txt", "Books\\Music\\Books\\Jazz Theory From Basics To Advanced.txt", "Books\\Music\\Books\\Strange Beautiful Music - Joe Satriani.txt", "Books\\Music\\Books\\The Jazz Theory Book - SHER Music.txt", "Books\\Music\\Production\\Jimmy Page Guitar World 1993.txt", "Books\\Music\\Soloing\\BuildingSolosWithMotifs_1.txt", "Books\\Music\\Soloing\\BuildingSolosWithMotifs_2.txt", "Books\\Music\\Song Queues\\Seans Mission.txt", "Books\\Music\\Song Queues\\The Strong Willed Man Lyrics.txt", "Books\\Music\\Song Queues\\The Strong Willed Man.txt", "Books\\Music\\Song Queues\\The Undecided Man.txt", "Books\\Music\\SongWriting\\Songwriting - Bernstein, Samuel.txt", "Books\\Music\\SongWriting\\Writing Better Lyrics - Pattison, Pat.txt", "Books\\Music\\Technique\\Legato.txt", "Books\\Music\\Technique\\StringBending.txt", "Books\\Music\\Theory\\7thChordMap.txt", "Books\\Music\\Theory\\BendingIntervalReference.txt", "Books\\Music\\Theory\\CAGED.txt", "Books\\Music\\Theory\\Chord Tones and Tensions.txt", "Books\\Music\\Theory\\Foundations.txt", "Books\\Music\\Theory\\Fretboard Theory 2008 E-Book - Desi Serna.txt", "Books\\Music\\Theory\\Guitar_ The Circle of Fifths fo - Joseph Alexander.txt", "Books\\Music\\Theory\\Modal Interchange.txt", "Books\\Music\\Theory\\Modal_Scale_Reference.txt", "Books\\Music\\Theory\\Mode Formulas.txt", "Books\\Music\\Theory\\mode_lookup.txt", "Books\\Music\\Theory\\mode_rules.txt", "Books\\Music\\Theory\\PentatonicScaleReference.txt"], "file_sizes": {"Books\\Music\\Articles\\Gilmore.txt": 7450, "Books\\Music\\Articles\\Gilmore2.txt": 709, "Books\\Music\\Articles\\Gilmore3.txt": 3707, "Books\\Music\\Articles\\Gilmore4.txt": 21460, "Books\\Music\\Articles\\Gilmore5.txt": 8430, "Books\\Music\\Articles\\Jeff Beck 1.txt": 53725, "Books\\Music\\Articles\\Jeff Beck 2.txt": 8456, "Books\\Music\\Articles\\Jeff Beck 3.txt": 14345, "Books\\Music\\Articles\\Satriani.txt": 79253, "Books\\Music\\Articles\\Satriani2.txt": 5774, "Books\\Music\\Articles\\Satriani3.txt": 4084, "Books\\Music\\Articles\\Satriani4.txt": 4190, "Books\\Music\\Articles\\Satriani5.txt": 2177, "Books\\Music\\Articles\\Satriani6.txt": 1949, "Books\\Music\\Books\\Jazz Theory From Basics To Advanced.txt": 511297, "Books\\Music\\Books\\Strange Beautiful Music - Joe Satriani.txt": 497968, "Books\\Music\\Books\\The Jazz Theory Book - SHER Music.txt": 336003, "Books\\Music\\Production\\Jimmy Page Guitar World 1993.txt": 47444, "Books\\Music\\Soloing\\BuildingSolosWithMotifs_1.txt": 3217, "Books\\Music\\Soloing\\BuildingSolosWithMotifs_2.txt": 2458, "Books\\Music\\Song Queues\\Seans Mission.txt": 1108, "Books\\Music\\Song Queues\\The Strong Willed Man Lyrics.txt": 1194, "Books\\Music\\Song Queues\\The Strong Willed Man.txt": 758, "Books\\Music\\Song Queues\\The Undecided Man.txt": 811, "Books\\Music\\SongWriting\\Songwriting - Bernstein, Samuel.txt": 221980, "Books\\Music\\SongWriting\\Writing Better Lyrics - Pattison, Pat.txt": 445637, 
"Books\\Music\\Technique\\Legato.txt": 5209, "Books\\Music\\Technique\\StringBending.txt": 6956, "Books\\Music\\Theory\\7thChordMap.txt": 3867, "Books\\Music\\Theory\\BendingIntervalReference.txt": 1966, "Books\\Music\\Theory\\CAGED.txt": 2249, "Books\\Music\\Theory\\Chord Tones and Tensions.txt": 11506, "Books\\Music\\Theory\\Foundations.txt": 3433, "Books\\Music\\Theory\\Fretboard Theory 2008 E-Book - Desi Serna.txt": 57092, "Books\\Music\\Theory\\Guitar_ The Circle of Fifths fo - Joseph Alexander.txt": 7266, "Books\\Music\\Theory\\Modal Interchange.txt": 34428, "Books\\Music\\Theory\\Modal_Scale_Reference.txt": 4917, "Books\\Music\\Theory\\Mode Formulas.txt": 5825, "Books\\Music\\Theory\\mode_lookup.txt": 962, "Books\\Music\\Theory\\mode_rules.txt": 1522, "Books\\Music\\Theory\\PentatonicScaleReference.txt": 4848}}
{"book_files": ["Books\\Music\\Articles\\Chord Progressions Berklee Press.txt", "Books\\Music\\Articles\\Gilmore.txt", "Books\\Music\\Articles\\Gilmore2.txt", "Books\\Music\\Articles\\Gilmore3.txt", "Books\\Music\\Articles\\Gilmore4.txt", "Books\\Music\\Articles\\Gilmore5.txt", "Books\\Music\\Articles\\Jeff Beck 1.txt", "Books\\Music\\Articles\\Jeff Beck 2.txt", "Books\\Music\\Articles\\Jeff Beck 3.txt", "Books\\Music\\Articles\\Satriani.txt", "Books\\Music\\Articles\\Satriani2.txt", "Books\\Music\\Articles\\Satriani3.txt", "Books\\Music\\Articles\\Satriani4.txt", "Books\\Music\\Articles\\Satriani5.txt", "Books\\Music\\Articles\\Satriani6.txt", "Books\\Music\\Articles\\Tonal Ambiguity In Axis Progressions.txt", "Books\\Music\\Books\\Jazz Theory From Basics To Advanced.txt", "Books\\Music\\Books\\The Jazz Theory Book - SHER Music.txt", "Books\\Music\\Production\\Jimmy Page Guitar World 1993.txt", "Books\\Music\\Soloing\\BuildingSolosWithMotifs_1.txt", "Books\\Music\\Soloing\\BuildingSolosWithMotifs_2.txt", "Books\\Music\\Song Queues\\Seans Mission.txt", "Books\\Music\\Song Queues\\The Strong Willed Man Lyrics.txt", "Books\\Music\\Song Queues\\The Strong Willed Man.txt", "Books\\Music\\Song Queues\\The Undecided Man.txt", "Books\\Music\\SongWriting\\Songwriting - Bernstein, Samuel.txt", "Books\\Music\\SongWriting\\Writing Better Lyrics - Pattison, Pat.txt", "Books\\Music\\Synthesis\\How to Make a Noise_ Analog Syn - Cann, Simon.txt", "Books\\Music\\Technique\\Legato.txt", "Books\\Music\\Technique\\StringBending.txt", "Books\\Music\\Theory\\7thChordMap.txt", "Books\\Music\\Theory\\BendingIntervalReference.txt", "Books\\Music\\Theory\\CAGED.txt", "Books\\Music\\Theory\\Chord Tones and Tensions.txt", "Books\\Music\\Theory\\Fretboard Theory 2008 E-Book - Desi Serna.txt", "Books\\Music\\Theory\\Guitar_ The Circle of Fifths fo - Joseph Alexander.txt", "Books\\Music\\Theory\\Modal Interchange.txt", "Books\\Music\\Theory\\Modal_Scale_Reference.txt", "Books\\Music\\Theory\\Mode Formulas.txt", "Books\\Music\\Theory\\mode_lookup.txt", "Books\\Music\\Theory\\mode_rules.txt", "Books\\Music\\Theory\\PentatonicScaleReference.txt"], "file_sizes": {"Books\\Music\\Articles\\Chord Progressions Berklee Press.txt": 9932, "Books\\Music\\Articles\\Gilmore.txt": 7450, "Books\\Music\\Articles\\Gilmore2.txt": 709, "Books\\Music\\Articles\\Gilmore3.txt": 3707, "Books\\Music\\Articles\\Gilmore4.txt": 21460, "Books\\Music\\Articles\\Gilmore5.txt": 8430, "Books\\Music\\Articles\\Jeff Beck 1.txt": 53725, "Books\\Music\\Articles\\Jeff Beck 2.txt": 8456, "Books\\Music\\Articles\\Jeff Beck 3.txt": 14345, "Books\\Music\\Articles\\Satriani.txt": 79253, "Books\\Music\\Articles\\Satriani2.txt": 5774, "Books\\Music\\Articles\\Satriani3.txt": 4084, "Books\\Music\\Articles\\Satriani4.txt": 4190, "Books\\Music\\Articles\\Satriani5.txt": 2177, "Books\\Music\\Articles\\Satriani6.txt": 1949, "Books\\Music\\Articles\\Tonal Ambiguity In Axis Progressions.txt": 60570, "Books\\Music\\Books\\Jazz Theory From Basics To Advanced.txt": 511297, "Books\\Music\\Books\\The Jazz Theory Book - SHER Music.txt": 338444, "Books\\Music\\Production\\Jimmy Page Guitar World 1993.txt": 47444, "Books\\Music\\Soloing\\BuildingSolosWithMotifs_1.txt": 3217, "Books\\Music\\Soloing\\BuildingSolosWithMotifs_2.txt": 2458, "Books\\Music\\Song Queues\\Seans Mission.txt": 1108, "Books\\Music\\Song Queues\\The Strong Willed Man Lyrics.txt": 1194, "Books\\Music\\Song Queues\\The Strong Willed Man.txt": 758, "Books\\Music\\Song Queues\\The Undecided Man.txt": 811, 
"Books\\Music\\SongWriting\\Songwriting - Bernstein, Samuel.txt": 221980, "Books\\Music\\SongWriting\\Writing Better Lyrics - Pattison, Pat.txt": 445637, "Books\\Music\\Synthesis\\How to Make a Noise_ Analog Syn - Cann, Simon.txt": 192080, "Books\\Music\\Technique\\Legato.txt": 5209, "Books\\Music\\Technique\\StringBending.txt": 6956, "Books\\Music\\Theory\\7thChordMap.txt": 3867, "Books\\Music\\Theory\\BendingIntervalReference.txt": 1966, "Books\\Music\\Theory\\CAGED.txt": 2249, "Books\\Music\\Theory\\Chord Tones and Tensions.txt": 11506, "Books\\Music\\Theory\\Fretboard Theory 2008 E-Book - Desi Serna.txt": 57092, "Books\\Music\\Theory\\Guitar_ The Circle of Fifths fo - Joseph Alexander.txt": 7266, "Books\\Music\\Theory\\Modal Interchange.txt": 34428, "Books\\Music\\Theory\\Modal_Scale_Reference.txt": 4917, "Books\\Music\\Theory\\Mode Formulas.txt": 5825, "Books\\Music\\Theory\\mode_lookup.txt": 962, "Books\\Music\\Theory\\mode_rules.txt": 1522, "Books\\Music\\Theory\\PentatonicScaleReference.txt": 4848}}