Well, not all of them all the time, but in quite a few cases analog still reigns supreme over plugins. The funny thing is, it's not voodoo magic or gut feeling - it's facts and science that tip the scales in favor of analog.
Note: I'm not saying you can't make a No. 1 hit "in the box" - lots of them have been and will be made that way, and that's great! But if you're after the best, deepest, most open and most pleasant sound possible, analog is still the way to go. That's also the reason mixanalog.com was created: we believe you shouldn't be forced to make compromises while creating a piece of art unless it's absolutely necessary!
And a foreword before you dive in. Because the devil is in the details, and these details unfortunately dwell in the murkiest corners of digital signal processing math and electric circuit modeling, some paragraphs can get seriously nerdy. But please don't freak out just yet - I promise I'll try my very best to provide explanations that even my mom would understand, at least in principle.
All systems have limits. But analog circuits aren't bound to a 22.05kHz or 48kHz ceiling (the Nyquist limits of working digitally at 44.1kHz or 96kHz sample rates), and even if the combined in-to-out bandwidth is only 20kHz, individual branches of that same circuit can have bandwidths in the megahertz range or even higher!
But why would this be important? We only hear up to 20kHz, and only if we're very lucky and still pretty young, right?
It's important because during the processing itself, especially when fast time constants are involved in dynamics processing (I'm looking at you, 1176...), the side-products of synthesizing the control signals - or even the control signal itself - can have a much greater bandwidth than 20Hz-20kHz. Compared to basic digital processing algorithms, this allows analog circuits to react to "inter-sample peaks" and derive control signals from frequency content that greatly exceeds the human hearing range or the Nyquist limit of a digital system.
To be honest, this doesn't come in handy on every occasion - a circuit with greater bandwidth can, if not designed really carefully, be more prone to oscillations at frequencies far above our hearing range. That phenomenon can cause reduced headroom, weird distortion and intermodulation, and none of those sound exactly pleasant.
But on the other hand, if the circuit is designed properly, the "built-in" true peak processing is nothing to scoff at and can save you quite a few dB of headroom when going into the AD converter!
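If you're curious what an "inter-sample peak" actually looks like, here's a minimal numpy sketch: a sine whose recorded samples all read -3 dBFS while the underlying waveform peaks at full scale between them. The signal and the 4x upsampling factor are arbitrary illustrative choices, not what any particular converter or plugin does:

```python
import numpy as np

fs = 48_000
n = np.arange(512)

# A sine at exactly fs/4, phase-shifted so every sample lands at +/-0.707:
# the sample peak reads about -3 dBFS, but the underlying waveform peaks
# at 1.0 *between* the samples.
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
sample_peak = np.max(np.abs(x))  # ~0.707

# Estimate the true peak by 4x upsampling: zero-pad the spectrum and
# inverse-transform (band-limited interpolation).
X = np.fft.rfft(x)
X_padded = np.concatenate([X, np.zeros(len(x) * 2 + 1 - len(X))])
x_up = np.fft.irfft(X_padded, n=len(x) * 4) * 4
true_peak = np.max(np.abs(x_up))  # ~1.0, almost 3 dB higher

print(f"sample peak: {sample_peak:.3f}, true peak: {true_peak:.3f}")
```

A digital peak meter looking only at the raw samples would happily report 3 dB of headroom that simply isn't there in the reconstructed analog waveform.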
Analog gear also doesn't have to somehow stitch together different mathematical curves to simulate saturation, distortion and other nonlinearities - things digital sound processing has the hardest time getting right.
Instead, analog gear simply produces the effect itself with vastly superior accuracy - it even gets the imperfections perfectly right! No tricks like oversampling needed, no CPU hogging, and it still works after you update your OS or plugins.
Oversampling* is in principle a process of guessing what the missing audio samples in between the actually recorded ones would have been if the audio was recorded at a higher sample rate.
*The correct terms for this process when done digitally/ITB would be upsampling, processing and downsampling, but because the effect of increased bandwidth is the same as if the signal had originally been oversampled when converted to digital, most plugin manufacturers use that term to describe the process.
This process has to include some carefully designed filtering in plugins to avoid producing frequency content that wasn't there in the original audio and consequently inducing some weird behavior and sound artefacts.
But even oversampling will only get you so far towards perfect accuracy compared to a well-designed analog circuit. This is especially significant in time-dependent processing. The difference is therefore much more apparent in dynamics processors, where the accuracy of the attack and release timing curves decidedly influences the sonic character of the compression, limiting, expansion or gating. Plugins need to oversample at least both the audio path and the timing circuits to have a shot at being convincing.
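To make the upsample/process/downsample idea concrete, here's a toy numpy sketch. The FFT-based resampler, the tanh saturation curve and the 4x factor are all illustrative stand-ins, not taken from any real plugin:

```python
import numpy as np

def resample_fft(x, factor_num, factor_den=1):
    """Band-limited resampling by zero-padding / truncating the spectrum.
    Truncating on the way down doubles as a brick-wall anti-alias filter."""
    n_out = len(x) * factor_num // factor_den
    X = np.fft.rfft(x)
    bins_out = n_out // 2 + 1
    if bins_out > len(X):                        # upsample: pad spectrum
        X = np.concatenate([X, np.zeros(bins_out - len(X))])
    else:                                        # downsample: drop top bins
        X = X[:bins_out]
    return np.fft.irfft(X, n=n_out) * (n_out / len(x))

fs, n = 48_000, np.arange(480)
x = 0.9 * np.sin(2 * np.pi * 14_000 * n / fs)    # hot 14 kHz tone

# Naive: saturate at the base rate -- harmonics above Nyquist alias back down.
naive = np.tanh(x)

# Oversampled: saturate at 4x, where the 28/42 kHz harmonics are
# representable, then discard them while downsampling back to 48 kHz.
over = resample_fft(np.tanh(resample_fft(x, 4)), 1, 4)
```

Comparing the spectra of `naive` and `over` around 6 kHz (where the aliased 3rd harmonic of 14 kHz lands) shows why plugin makers bother with all this extra work.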
I've touched upon that subject briefly in the first section, but let's take a closer look.
Every time something nonlinear happens to the signal, it generates additional frequency content. Energy cannot vanish into thin air, and if you lop a peak off, everything above the cut has to go somewhere - and that somewhere is higher frequencies.
In the analog world, those higher frequencies either vanish in the circuit, heating all sorts of resistances, or get all the way to the A/D converter and die a peaceful death there in the anti-aliasing filter.
With plugins, things get complicated. Being ever so faithful to the sampling theorem, a digital algorithm has to manage this energy redistribution with a much more limited set of available frequencies.
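A quick numpy experiment makes this energy redistribution visible: hard-clip a pure sine and watch odd harmonics appear out of nowhere. The clip level and frequencies here are arbitrary illustrative choices:

```python
import numpy as np

fs, n = 48_000, np.arange(480)
x = np.sin(2 * np.pi * 1_000 * n / fs)      # pure 1 kHz sine (FFT bin 10)

clipped = np.clip(x, -0.7, 0.7)             # lop the peaks off

spectrum = np.abs(np.fft.rfft(clipped)) / len(n)
# The "lost" peak energy reappears as odd harmonics at 3, 5, 7 kHz...
for k in (10, 30, 50):
    print(f"{k * 100:>5} Hz: {spectrum[k]:.4f}")
```

Symmetric clipping produces only odd harmonics; make the clipping asymmetric and even harmonics show up too.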
What are the options?
So instead of taking the first choice - nice, pleasant 2nd and 3rd harmonics - when saturation occurs on a 14kHz component of an "S" sound at a 48kHz sample rate, it has to resort to more drastic measures. The resulting 28kHz and 42kHz frequencies simply "don't exist" at a 48kHz sample rate because they exceed the 24kHz Nyquist limit - that's just how the physics of sampling works.
If not taken proper care of, those higher harmonics will falsely appear at 20kHz and 6kHz. That phenomenon is called aliasing, and it's bad - not only in theory: you can hear it as a weird metallic or plasticky quality in the processing. A good solution to these problems in plugins is not trivial and always requires trade-offs in filter design and CPU usage.
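The folding arithmetic itself is simple enough to sketch in a few lines of Python - this is the generic textbook formula, not any plugin's code:

```python
def alias(f, fs):
    """Frequency (Hz) at which a component at f appears after sampling at fs."""
    f = f % fs                    # fold into one sampling period
    return min(f, fs - f)         # reflect around the Nyquist frequency

# Harmonics of a saturated 14 kHz tone at a 48 kHz sample rate:
print(alias(28_000, 48_000))  # 2nd harmonic -> 20000 (20 kHz)
print(alias(42_000, 48_000))  # 3rd harmonic -> 6000 (6 kHz)
print(alias(14_000, 48_000))  # fundamental stays put -> 14000
```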
Without properly implemented oversampling and anti-aliasing filters, things can get pretty ugly in the high frequency end of the spectrum when it comes to any kind of saturation or distortion. White Sea Studio made a great practical video about that:
Want to make a faithful analog recreation? Get ready for trouble. It's not going to be easy.
First, a basic overview of the tech swamp that follows, in simple terms.
Modeling an analog circuit means figuring out how the circuit behaves with respect to varying input signal level and frequency content, so we know what will come out on the other end for any given input. It's not enough to try the emulation with one test signal and declare "yes, this will work just as well for all sounds" - you have to make it work with many, and that becomes a balancing act when coding plugins.
To do it properly, we need to know the electrical components of that circuit (resistors, capacitors, transistors, tubes, inductors etc.), and the more accurate we want the model to be, the better we have to describe those elements. Sure, a resistor is just a wire that dislikes having a lot of electrons pushed across it, but does it behave the same for all kinds of audio signals? And what if it lives next to a bulky inductor or a superheated tube?
Every time we make a better, more complete description for every element to achieve higher accuracy, the math gets more intricate and complicated. It soon becomes so complex that it's either nigh impossible to navigate or too hard to calculate in real time. Don't believe me? Keep reading, there's an example of the math spaghetti further below.
Needless to say, it takes much more development time (= money) to get that math right, so many plugins get a simplified version of it instead. It works in real time, from a distance it kind of resembles what a real circuit would do, but falls short on the nitty-gritty details.
Stuff is missing in the algorithms and we can hear it. If you really want to entrust your tracks to a half-attempt, that's cool. But I believe many of us would rather have the real deal.
More in depth: where does the trouble come from?
One example is the side-chain circuit of dynamics processors such as compressor or limiter plugins. These circuits use diodes (amongst many other elements) to generate the control voltage from an audio signal. A diode is a "one-way valve" for electric current, and every time this valve opens or closes, it does so very fast and consequently produces a short burst of high-frequency noise - similar to the "click" that a missing or bad sample produces in a digital recording.
That's no problem for an analog circuit, whose naturally limited bandwidth simply filters that noise out, but in digital plugins you have to take good care of phenomena like that so they don't cause more trouble than they're worth (like the aliasing from the previous part).
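To see how much extra frequency content rectification creates, here's a toy numpy sketch of an idealized full-wave diode rectifier - a deliberate simplification of a real side-chain, for illustration only:

```python
import numpy as np

fs, n = 48_000, np.arange(480)
x = np.sin(2 * np.pi * 1_000 * n / fs)   # 1 kHz audio into the side-chain

rectified = np.abs(x)                    # idealized full-wave diode bridge

spectrum = np.abs(np.fft.rfft(rectified)) / len(n)
# The rectifier output contains DC (the level estimate the side-chain is
# after) plus strong even harmonics at 2, 4, 6 kHz and beyond -- content an
# analog circuit's limited bandwidth quietly filters away.
print(f"DC: {spectrum[0]:.3f}, 2 kHz: {spectrum[20]:.3f}")
```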
And then there's the nonlinearity in every third component - different for every single transistor, tube, transformer and many other elements. Again, analog doesn't give a flying duck about math equations. It just does its thing, and it does it with smooth transitions, complex curves or weird cut-offs, with zero latency. Beat that, computer!
Nonlinearities in digital processing are usually implemented with more or less complex polynomial transfer functions - curves that tell you what level an output sample should be for any given input sample. Sometimes a couple of different ones have to be stitched together to saturate differently at different levels, or for different polarities if the saturation is asymmetric. Those equations and their curves are usually approximations of what a single transistor, tube, transformer or a whole electrical circuit might do - but that's exactly what they are: approximations.
The more accurate you want those approximations to be, the longer and more elaborate the equations you have to come up with - which means more complex math, longer computing times and more room for error. So in most cases it's a compromise between accuracy, CPU usage and the sanity of the DSP engineer. That's the missing "last few percent".
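Here's what such a stitched transfer function can look like in code - a made-up toy curve for illustration only, not modeled on any real circuit:

```python
import numpy as np

def asymmetric_shaper(x):
    """Two curves stitched together at zero: a soft cubic for positive
    half-waves, a harder tanh saturation for negative ones.
    Purely illustrative -- not an approximation of any actual device."""
    x = np.asarray(x, dtype=float)
    pos = np.clip(x, 0.0, 1.0)    # positive half: y = x - x^3/3
    neg = np.clip(x, -1.0, 0.0)   # negative half: y = tanh(2x)/2
    return (pos - pos ** 3 / 3) + np.tanh(2.0 * neg) / 2.0
```

Both pieces pass through zero with the same slope, so the stitch is smooth - but the two halves saturate differently, which is exactly what generates even harmonics in an asymmetric circuit.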
No approximations! I want the real deal!
Ok, no shortcuts. Let's make a true electrical model of the circuit that we love for its warm sound, depth and nice punch when pushed a bit harder.
Warning!!! This is the nerdiest part of the blog!
But it's not meant to serve as electrical engineering study material. You don't have to understand every bit of it; its purpose is just to illustrate the complexity of true analog modeling, so you can cherish the good-sounding plugins even more and get a notion of why so many still fall short.
Not for the faint-hearted.
So we start off with Kirchhoff's equations, which are used to calculate the currents and voltages inside electrical circuits, and get to work! After days of running simulations and wading through 20cm-long equations over several pages of paper, we finally get a 7th-order transfer function (because a fairly small audio circuit can easily have 7 capacitors). Yay, success!
Well, not so fast. Every capacitor is also a tiny bit of an inductor, and it has some leakage (parallel resistance) and some series resistance (ESR). That practically doubles the number of reactive parts in the circuit - and with it, the order of the transfer function.
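For a sense of scale, here's the simplest possible case sketched in Python: a 1st-order RC low-pass with made-up component values. Every extra reactive element - parasitics included - stacks another order on top of this:

```python
import math

# One resistor plus one capacitor gives a 1st-order low-pass with
# H(f) = 1 / (1 + j*2*pi*f*R*C).  Component values below are purely
# illustrative, not from any specific circuit.
R = 10_000        # ohms
C = 1.59e-9       # farads

f_c = 1 / (2 * math.pi * R * C)   # -3 dB corner frequency, ~10 kHz here
gain_at_fc = 1 / math.sqrt(1 + (2 * math.pi * f_c * R * C) ** 2)  # 1/sqrt(2)

print(f"corner: {f_c:.0f} Hz, gain there: {gain_at_fc:.3f}")
```

A 7th-order transfer function is this same idea with a 7th-degree polynomial in the denominator - and the coefficients are tangled functions of every component value in the circuit.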
A slight feeling of anxiety emerges, but we don't want to give up! Let's do the math again...
Oh, and I forgot to mention: different operational amplifiers have different slew rates, which translate into level-dependent high-cut behaviour.
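Here's a crude sketch of why slew-rate limiting acts like a level-dependent high-cut - a toy model with an arbitrary step limit, nothing like a real op-amp's internals:

```python
import numpy as np

def slew_limit(x, max_step):
    """Crude slew-rate model: the output can only move max_step per sample
    toward the input. Illustrative only, not a real op-amp model."""
    y = np.empty_like(x)
    state = 0.0
    for i, target in enumerate(x):
        state += np.clip(target - state, -max_step, max_step)
        y[i] = state
    return y

fs, n = 48_000, np.arange(480)
slow = np.sin(2 * np.pi * 1_000 * n / fs)
fast = np.sin(2 * np.pi * 15_000 * n / fs)

# The same limiter barely touches the 1 kHz tone but mangles the 15 kHz one:
# the faster and louder the signal, the harder the slew limit bites.
max_step = 0.5
loss_slow = np.max(np.abs(slow - slew_limit(slow, max_step)))
loss_fast = np.max(np.abs(fast - slew_limit(fast, max_step)))
```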
Don't have op-amps, only transistors or tubes? Parasitic capacitances, non-linear transfer functions, leakage resistances between N- and P-layers, ...
Despair starts to creep up on you. The math seems to explode with every proper model that you want to incorporate. You only wanted to do good, why this?!?
Did I see an input, interstage and output transformer in this lovely Neve 33609? Ooooh, one of them even has a tertiary winding - splendid!
You can hear the universe laughing back at you for even trying.
Before you even think of applying only the transfer ratio and some saturation... there's also the inter-winding parasitic capacitance that forms a resonant loop with the inductance of the winding, the series resistance of every winding, and probably a little under ten other parasitic sidekick nuisances that you have to take into account if you set out to build a perfect transformer model.
The core saturation and hysteresis characteristics depend on its alloy, production process, shape, construction process and possible physical imperfections like an unintended air gap in a core that's not stacked together perfectly.
Some transformers and inductors even start to physically produce sound because the magnetic forces move them enough so that you can hear them! That is one more non-linear loss of energy that a perfect model has to take into account.
By the way, most of the above applies to inductors used in lots of vintage EQ designs (Pultec EQP-1A for instance), to some extent to magnetic tape and in a lot of ways to tape heads.
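That resonant loop follows the textbook formula f = 1/(2π√(LC)). A quick sketch with purely illustrative (not measured) values shows why it matters - the resonance can land just above the audio band:

```python
import math

# A winding's inductance plus its parasitic inter-winding capacitance form
# a resonant loop at f = 1 / (2*pi*sqrt(L*C)). These values are invented
# for illustration, not taken from any particular transformer.
L = 100e-3     # 100 mH winding inductance
C = 100e-12    # 100 pF parasitic inter-winding capacitance

f_res = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"parasitic resonance near {f_res / 1000:.0f} kHz")
```

A resonance around 50 kHz is inaudible on its own, but it shapes the phase response and ringing at the top of the audio band - exactly the kind of detail a simplified model throws away.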
A bitter conclusion
Now, with all the above put into perspective, how about properly modeling a vintage, discrete class A, transformer-balanced tape machine?
Did I hear you say you'd rather chew off your own foot?
Guessed as much - that's a sane response!
So as you can see, digital audio tools have the most trouble with imperfections. But as luck would have it, we kind of love those imperfections, because we've figured out they make recordings sound fuller, smoother, more euphonic - subjectively better in many respects.
Our ears and brains have also learned to like the specific sonic imprint of the analog devices used on our favourite songs: because we love the songs, the sounds the engineers and producers picked become part of the combined aesthetic. When you're making a track, those sounds are the reference you want to match and exceed.
With that in mind, it's no coincidence that a lot of audio engineers running hybrid setups have converged on a workflow that goes something along the lines of "digital for surgical and corrective, analog for colour". It takes the best of both worlds, using each technology for the tasks it performs best!