Tuesday, October 26, 2010

We've moved to
www.tkmajor.com/mbo/

Friday, September 24, 2010

Why you shouldn't let the music biz spoil your love of music...

One of the burdens of being "young and talented" is the potential for getting confused about why you are pursuing your efforts. People can fill your head up with a lot of making it big nonsense.

It's usually put there both by well-meaning folks who don't get it and, all too often, by 'music biz pros' intent on exploiting the artist's ambitions and dreams -- often for the short term goal of simply extracting money for various production and promotional services. It's a dirty little secret that this is considerably more than a cottage industry within the larger music business, and that it is also a steady ancillary source of revenue for 'legit' service providers in the industry, too. As a former studio engineer at the low end of the food chain, I can tell you that the attitude of "If they're stupid enough to spend the money, I'm smart enough to take it" is a highly prevalent one.

That can really get in the way of having a clear relationship with your music and your writing -- and then the burn-out from that and from all the usual little unpleasant brushes with the user/loser denizens of the music biz can sometimes drive a wedge between an artist and his work.

Which is a damn shame, because the problem is not with music. The problem is often with people. Both ourselves, if we're unclear about why we're pursuing music and writing, and, of course, with those who would, knowingly or not, lead us down the garden path to "big dreams" of success, a life of glamor, wealth, and no day jobs...


If musicians keep their heads straight about their relationship with music and love of it, they can survive with their love of music intact. But I've known far too many people who, one day, sometimes in their late 20s, sometimes in their 30s, just put the guitar in a closet and forget to pull it back out for, sometimes, years...

And that is very sad. 

Monday, September 6, 2010

A reader shares info on getting an indie band into Apple's new Ping social media site

After reading a post elsewhere on Apple's exceedingly rocky launch of Ping, the new social media extension to iTunes, a correspondent sent this in to my KS2 Problema news blog -- sharing a post he had made offering info on getting independent bands into Ping:

“I had fundamental questions more related to how the independent musician can actually make use of the system, as we all had hoped Ping might replace MySpace.
I finally have gotten some actual directions from Apple about how to proceed getting a profile approved for an Indie artist and I have a contact email for people to write to. I’ve also gotten a comment back from one of TuneCore’s founders:
Whether they actually develop this into something significant or not, at least everyone needs to be able to come to the party. Apple needs to be more open about what they are doing for the musicians, the engines that will help drive the service.”
Please pass the info on to your musician friends.
Special thanks to Frank Colin and his Waist Deep in the Media Swamp blog!

Thursday, September 2, 2010

More on DAW latency...

Q: I tried using a computer for recording around 10 years ago and didn't like it because of the latency. Now I want to try to get into it again but I hear Cubase still has big latency problems but that Sonar is better... is that true? Is latency still a problem?

A: First, any digital audio system has latency. The A/D and D/A conversions each take around a millisecond, and almost all multichannel devices use a digital cue mixer (analog mixers would be prohibitively expensive for most interfaces). So even onboard cue mixing has latency -- though it's often inappropriately labeled zero latency monitoring when it should be called near-zero latency.

For instance, when MOTU introduced their popular 828mkII, they properly labeled the onboard cue mixing as near-zero latency. But when many of their competitors labeled their own near-zero digital cue mixing as zero latency, MOTU did what all the other kids were doing and started lying about latency. This marketing lie is nearly universal among the makers, and they all point to each other's prevarications as the justification. It's a sham and a disgrace.

But, hey, it's only a couple milliseconds, right?

Except that by 5-10 ms (and certainly by 15 ms or so), most musically critical listeners will begin to detect serious problems with timing mismatches in the monitoring. And there are reports of discomfort from some over as little as 2 ms of latency. These are very small amounts of time -- but they are right at the threshold of perception, and it's not just absurd for the makers to call that zero latency -- it's an outright lie.

But -- it might not be enough to bug you -- particularly if your POD's own latency doesn't bug you (for clean tones, it's probably around that latency; as more FX modules get brought into a patch preset, the latency goes up, not unexpectedly). So if your POD's delayed output doesn't bug you, the near-zero latency from an outboard device's onboard cue probably won't.

(I realize you're not in the market for such a unit right now. But since you're a little hazy on latency issues, I thought it was important to get some context.)

So using either near-zero onboard monitoring -- or outboard analog monitoring through a mixer if that's a problem -- should be fine for you as far as that goes.

But... what if you want to hear what you're tracking into your computer with?

That is a whole 'nother can of bees.

You will start appreciating the timing efficiencies that the makers of a dedicated all-in-one device like your POD can build in -- since they know all the performance variables up front, they can build a sort of just-in-time signal delivery efficiency into the device -- while a desktop PC, with its numerous subsystems from multiple parties and unknown performance characteristics, must typically use a large amount of buffering, both for data transmission to and from the hardware interface (hardware buffering) and for the (typically user-set) variable monitoring buffers.

Monitoring buffers are typically the big hold-up. If you're not making your DAW do heavy lifting, such as convolution reverbs and fancy FX or having heavy live virtual synth playback while you are tracking, you can probably get away with minimal monitoring buffer size, possibly little more than that of the hardware buffer size. But if you've got a bunch of heavy duty plugs and v-synths going while you're tracking -- you're going to find that you may have to increase that monitoring buffer so far that the delay between the live sound and the realtime-but-latent cue monitoring from the computer is noticeable, even so much as to give an 'echo effect.' (And when it gets that noticeable, at least you know you have a problem. It's the not-quite noticeable stuff that gets into your tracks, subtly mucking up your player's timing when they overdub.)
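To put rough numbers on the buffering discussion above, here's a minimal sketch of how buffer sizes translate into milliseconds. The buffer sizes and the ~1 ms converter figure are illustrative assumptions, not measurements from any particular interface:

```python
# Rough latency arithmetic for a DAW monitoring chain.
# One buffer's worth of latency = samples / sample_rate.

def buffer_latency_ms(samples: int, sample_rate: int = 44100) -> float:
    """Milliseconds of delay contributed by one buffer of `samples`."""
    return samples / sample_rate * 1000.0

converter_ms = 1.0    # assumed ~1 ms each for A/D and D/A conversion
hw_buffer = 64        # hardware/interface buffer, in samples (assumed)
monitor_buffer = 256  # user-set monitoring buffer, in samples (assumed)

# A full software-monitored round trip crosses both converters,
# the hardware buffer in each direction, and the monitoring buffer.
round_trip = (2 * converter_ms
              + 2 * buffer_latency_ms(hw_buffer)
              + buffer_latency_ms(monitor_buffer))

print(f"monitoring buffer alone: {buffer_latency_ms(monitor_buffer):.1f} ms")
print(f"estimated round trip:    {round_trip:.1f} ms")
```

Raise that monitoring buffer to 1024 samples and that one term alone jumps past 23 ms -- well into clearly audible 'echo effect' territory.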


Now, with regard to Cubase, Sonar, et al.

Who told you that? (Rhetorical. Don't answer. )

'Cause, as much as I love Sonar (and I do -- been using it since '97), it is not as efficient as Cubase, as measured by the ability to carry a heavy load of plug-ins at low latencies.

And Cubase is not as efficient as Reaper.

(I'm speaking of Windows machines here. I don't have any performance figures for Reaper on the Mac [Sonar is Win-only], but Cubase is considerably less efficient on the Mac, presumably because of problems with the foundational architecture of OS X with regard to multithreaded communication among that OS's various cobbled-together parts. The third-party Mach kernel at the core of the OS was designed for modern, multiple, parallel messaging, whereas the open source Darwin layer 'above' it is an older paradigm, a monolithic system oriented to serial messaging. The result appears to be ongoing problems for Macs when it comes to scaling processes across multiple cores -- not important for workstations when OS X was developed, since most were still single core -- but one of the main reasons that OS X Server has been one of the worst performing network OS's in the modern arena, with Linux and Windows blowing past it in terms of supporting network efficiencies for multiple users. That said, Apple has apparently been working to shore up performance in this area, and the latest DAW benchmarks show a considerable improvement in Cubase's performance on the latest version of OS X.)

You can see shootouts between the big 3 on the PC, Sonar, Cubase, and Reaper here: http://www.dawbench.com/dawbenchdsp-x-scaling.htm

And you can check out the Cubase Win vs Cubase OS X benchmarks here: http://www.dawbench.com/win7-v-osx-1.htm


Now, on to getting latency down... there are a lot of things you can do to make your host PC or Mac more efficient. (I'm a PC guy, of course, so I'll leave the light side to the experts in that quite different platform.)

In fact, it can be a detailed subject, but here's the broad outline: you want your hardware buffers (audio interface buffers) set as low as they will cleanly function. (If -- with your monitoring latency set way up to take that out of the picture for now -- you get glitches, stutters, etc. from your incoming or playback audio, the HW buffering may be set too low. Check your maker's specs -- but they often list the very lowest setting that could possibly work [for the types of reasons cited above], so you may well have to set it above their spec'd minimum.)

You'll be able to keep both HW and monitoring latencies down if your host machine is running as lean and mean as possible.

That typically means making sure no other applications are running -- but you're probably also going to need to look at background operations/services/applets... the kinds of things that show up on the Task Manager's Services tab.

That's usually the province of background programs like system utilities and always-on anti-malware software -- but also, annoyingly, many things that just should not be running in the background, often vanity/promotional programs designed just to put 'quick launch' icons in the systray -- yet they can take up many megabytes of memory. Other prime offenders are 'updater' programs. Apple is notorious for having buckets of background programs installed with their apps. (Got iTunes? You got background services up the backside. And try to get rid of them. Ha!) Quicktime itself has components it tries to install as always running. The Sun Java Updater (now owned by evil empire Oracle) can take as much as 12-15 MB of your precious RAM just sitting around waiting to phone home every once in a while to check for an update.

Many -- most on many consumer machines -- of these background processes are absolutely unnecessary, except for possible tiny gains in load time for the individual apps or for just getting a corporate icon into your systray.

Responsible software makers give you a way to tell these services and applets not to load at startup -- but, unfortunately, many more do not. And many will load those services when the app runs and then not remove them when the main app closes. It's just irresponsible software design. You're paying for their incompetence, vanity and arrogance.

A lean, mean, XP machine (for instance) might have as few as 10-20 background processes at startup and load in a RAM footprint as small as 100-120 MB -- but many consumer machines will have 70 or 80 background processes and their RAM footprint will be 300 or more MB just sitting there.

And more if you run background anti-virus -- which often does little to protect careful users who avoid the stupid stuff: opening unexpected email attachments, visiting (and even worse, downloading from -- though sometimes just visiting is all it takes) porn and warez sites, or using uncontrolled p2p software (torrentz, etc).

In fact, a study a couple years ago found that major anti-malware from companies like Norton, McAfee, and even the respected Trend Micro (maker of the normally very good, free online scanning service, HouseCall) typically misses as much as 80% of current threats -- which are typically all you really need protection from if you keep your browser, OS, and net-using programs properly patched.

For that reason, many wised up power user types eschew background A-V software.

But they do follow the best practices that everyone should follow: keep your browsers, OS, and net-using programs (which, these days, may be most of them) patched and up to date, and avoid known transmission vectors like porn and warez sites and torrentz, or installing untested/unknown software, making sure your system has thumb drive auto-run turned off,* and so on.
__________________
* The early editions of XP had thumb drive auto-run default to on -- that's how a third of the US private and military PCs in Afghanistan got infected back in the mid decade: GIs would buy cheap thumb drives in the marketplaces -- most of them with factory preinstalled spyware from our friends at China, Inc., who like to make sure they know what other military powers are doing near their borders and have the technology to put it over on the US Pentagon -- which is long on particle beam weapons mounted on satellites but apparently quite short on everyday technology common sense.
__________________


And finally, the other place where you can cadge a little extra efficiency, if you're a bit tech savvy, is in disabling unnecessary Windows system background processes (or setting them to start manually). That's well beyond the scope of a BB post but you can find various optimization strategies for the Windows flavors around the web.

Now on to a trickier, thornier issue:
A somewhat separate issue that arises out of various hardware and software system latencies is the problem of timeline (sometimes called tracking) misalignment.

This is a much more common problem than many people realize or than some commercial interests would like to admit.

And it was a problem where the industry was slow coming to the table, with DAW makers pointing at hardware makers (and the HW maker's driver programmers) and the HW makers sometimes pointing back at the DAW makers.

The problem was that many DAWs and driver systems either did not accommodate HW latencies at all, or the drivers misreported actual latencies. And that resulted in overdubs that were placed incorrectly on the timeline. (Usually the error was a stable amount, but in the case of some devices -- I had a USB mic with the issue -- the latency of incoming signals varied from 25 ms late to 40 ms late. If it had only been constant from session to session, one could have simply slid all new tracks x ms along the timeline. A pain, but doable. But this device had a different effective latency every session. Fortunately, I was able to use Mackie's old (abandoned) Tracktion DAW to record with it, and Tracktion had a quasi-automated ping-loopback timeline/tracking calibration tool that you could use to set a new compensation every time.)

But not every DAW had that, and some came to the table late. My own beloved Sonar only added a way to compensate for such tracking misalignment around 2006, IIRC -- and a pretty minimal one at that, one that forces the user to do his own measurement and calibration to adjust for unfixed latencies under either the advanced WDM-KS kernel streaming drivers or under ASIO drivers, which have some compensation but are usually not accurate in my experience.

A lot of folks will say things like, Oh, what's 5 or 10 ms? -- the time it takes sound to travel about 6-11 feet?

It might not seem like a lot, but at 10 ms, most folks can start to tell things are off -- they may not know how or why, but it's there.
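To make that equivalence concrete, here's a trivial sketch converting monitoring latency into the distance sound would cover in the same time (assuming roughly 1125 ft/s for the speed of sound in room-temperature air):

```python
# Latency expressed as an equivalent acoustic distance.
SPEED_OF_SOUND_FT_PER_MS = 1125.0 / 1000.0  # ~343 m/s at room temperature

def latency_as_feet(ms: float) -> float:
    """Distance sound travels in `ms` milliseconds, in feet."""
    return ms * SPEED_OF_SOUND_FT_PER_MS

for ms in (2, 5, 10, 15):
    print(f"{ms:>2} ms latency ~= playing {latency_as_feet(ms):.1f} ft from your monitor")
```

A 10 ms monitoring delay, in other words, is like standing a bit over 11 feet from your own amp.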

And even less timing misalignment can become multiplied when a number of overdubs with such misalignment are made and summed. It's not too bad if all your overdubs pay strict timing reference to a central reference (like a drum or click track), but if the overdubbers start taking their timing cues from other overdubs, you're wading into a sea of potential rhythmic confusion and imprecision, as what was once an unambiguous rhythmic center becomes 'smeared' by numerous, seemingly arbitrarily off-time overdubs.

Sunday, July 4, 2010

Tech junk: On radio interference...

I know there's a lot of text below, but the upshot is that I'm trying to save folks money -- or at least keep people from spending money on 'solutions' that probably won't be effective, money that can be put to more effective use in other ways -- so it may be worth at least glancing over...


Quote:
Originally Posted by --------
Yes, we  have Monster cables going from mics to our interface. I'll try Mogami cables.  
I heard from a friend that buying a power conditioner can also help reduce interference. However I'm reluctant to buy a $100 power strip...
First, regarding radio interference specifically: if the problem is related to cabling, it's not so much that you don't have high enough quality cabling -- it's that you have a defective cable.

Radio interference typically comes from a connection that should be solid but which, for some reason, has been reduced to something like a strand or two of wire (which may even be intermittent). That can form something that acts like the cat's whisker detectors that were at the heart of early, very simple radio receivers.

That usually means a bad connection at a solder or other connection point or, as sometimes happens, a broken solid wire lead that is being held together in intermittent or minimal contact by insulation. You found that last in older gear with point to point wiring.

In modern gear it's usually found where wire leads or other components contact the circuit board. Stress points around connectors are another place to look (particularly where external connectors mounted in a case are also mounted on the circuit board -- as in many laptops and other pieces of modern, low production cost gear). In such cases, a circuit board can crack yet be held together -- such as it is -- by the conductor substrate, tearing and stressing micro-thin printed circuit traces, which then may begin acting like cat's whisker detectors.

So, your problem could be cable-related (a defective cable -- likely a bad connection at the connector), but it could also be from a large piece of gear.

On radio interference in general: I think you said you were a mile and a half from a transmitter. If it's the source of your problem, my guess is it's a 50 KW AM station. At that distance, interference is likely only if there is a problem in your gear -- ie, a defective piece of gear/connection.

But, if such a transmitter is in your backyard, it may so thoroughly saturate the area with transmitted power that signal is jumping everywhere, and interference is not limited to gear made vulnerable by defect or poor design. Nearby illegally high powered CB or ham radio transmitters may also cause problems.

When dealing with radio interference issues, remember this: radio power from a non-directional antenna, like sound from a single point sound source, diminishes with the square of the distance from the source -- in other words, a radio signal will be 9 times weaker at 3 feet from the source than it is at 1 foot; 16 times weaker at 4 feet compared to 1 foot, etc.
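That inverse-square relationship is simple to sketch (a toy model that assumes an ideal, non-directional point source):

```python
# Inverse-square falloff from an ideal non-directional point source.

def times_weaker(distance: float, reference: float = 1.0) -> float:
    """How many times weaker radiated power is at `distance`
    compared to the `reference` distance (same units)."""
    return (distance / reference) ** 2

print(f"at 3 ft vs 1 ft: {times_weaker(3):.0f}x weaker")
print(f"at 4 ft vs 1 ft: {times_weaker(4):.0f}x weaker")
```

Which is why getting even a little more distance between your gear and a suspect interference source can pay off disproportionately.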

Because FM radio is not amplitude modulated, it does not typically present the same sorts of contamination issues.



With regard to premium cable, quality and expectations...

Mogami makes good cables but they're very expensive.

You can also buy good cables very inexpensively. It's not that Monster is too cheap. Many would say they tend to be overpriced, by many measures, even when they are of adequate quality.

If there's a respected pro shop oriented to commercial recording and sound reinforcement in your community and they make their own cables, as many do, I would suggest going that route: they will very likely be competitively priced and will likely be a solid bargain, since the shop will likely stand behind them.

Paying premium prices for good cable (like Mogami) is OK if you're willing to shell out as much as 3 or 4 times the commodity rate for equal quality cables for the 'assurance' that the reputation provides.

But the job of wire is very simple. As long as the cable is made of standard materials in an appropriate configuration for your purpose, has insulation of an appropriate material (which is not necessarily expensive, and which will not build up the static charge that produces the crackling microphonics associated with improper insulation choice), and has well made connectors attached appropriately, it should perform fine and last a long time.

And with regard to the outlandish claims of the "magic cable" people -- fuggedaboutit. Paying hundreds or even thousands of dollars for short runs of cable, whether it's speaker leads, power cables, digital signal cables, whatever -- is hooey. Nonsense. Nonsense bordering on fraud.



With regard to power conditioning and people's expectations...

Power conditioners are largely not effective for most of the purposes people want to put them to with regard to improved sound quality. There are a lot of reasons for that, which others can explain better.*

Uninterruptible power supplies can provide protection in a power failure, allowing you to shut down your gear within a safety window. But they are often not effective at the other uses people want to put them to (like 'conditioning' the power for better sound). And their ability to protect against power surges/spikes may not be nearly as effective as one might like; I suggest further reading on that subject.

Some people do buy into the notion that power conditioning is likely to improve the sound of their gear's operation. Suffice it to say that there's little chance of that with modern gear, which tends to have its own internal regulators, for the most part -- unless one is spending serious money (not talking mere hundreds here) for a system which essentially provides a regulated supply from a bank of continually recharging batteries, completely isolating the power supplied from that coming into the building from the power company.

*For further reading, try Googling something like power conditioning myths.


PS... you can probably expect people who have a large investment in supposedly high end cables or in power conditioning (and likely both) to pop in here with some strident defenses of their belief system. Since the claims they often make are extraordinary, I would suggest asking them for empirical evidence to support those extraordinary claims. (Expect to get the ol' Well, if you can't hear the difference, you must be deaf! routine. )

Friday, June 25, 2010

Tech Time: Signal Phase vs. Signal Polarity

Someone elsewhere asked about double miking snare drums, and it raised but one of my pet issues -- proper terminology when talking about issues of signal phase and signal polarity...


You reverse polarity in order to get the two signals from mics on opposite sides of a drum head to reinforce each other instead of (potentially) partially canceling each other out.

By contrast, when someone suggests moving the mic to change the relative distances from the drum head of the mics, he is talking about changing the relative phase relationships of the two signals, vis a vis the sound emanating from the head.

But phase is a term strictly relative to the frequency of a wave. Since the cycle period of every frequency is different, those phase relationships are strictly related to the frequency of any given wave component of the aggregate sound. And that is something a lot of folks who spend a lot of time talking about recording either don't get -- or simply are too lazy to address correctly.

With a single skin drum, by moving one mic far enough away from the struck skin so that the signal reaching the mic is now delayed by precisely 1/2 the wavelength of the fundamental, we achieve a change in that phase relationship equivalent -- in a sense -- to reversing the polarity -- but only at that fundamental. Other frequencies will have varying amounts of cancellation or reinforcement when the two signals are summed, often leading to the familiar 'comb filter' effect.
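That comb filter behavior is easy to sketch numerically: summing a signal with a copy of itself delayed by tau gives a magnitude response of |1 + e^(-j*2*pi*f*tau)|. Here the delay is chosen to cancel an 800 Hz component (the frequencies are purely illustrative):

```python
import cmath
import math

def comb_gain(freq_hz: float, delay_s: float) -> float:
    """Gain at freq_hz when a signal is summed with a copy of itself
    delayed by delay_s seconds (0 = full cancellation, 2 = full boost)."""
    return abs(1 + cmath.exp(-2j * math.pi * freq_hz * delay_s))

tau = 1 / (2 * 800)  # half the period of 800 Hz: cancels 800 Hz exactly
for f in (200, 800, 1600, 2400, 3200):
    print(f"{f:>4} Hz: gain {comb_gain(f, tau):.2f}")
```

Nulls land at 800, 2400, 4000 Hz and so on, with reinforcement peaks at 1600, 3200 Hz -- the evenly spaced 'teeth' that give the effect its name.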


So, under that latter scenario (moving one mic), let's consider the fundamental of the drum to be our primary concern with regard to phase (assumptions are often dangerous in audio) and take the fundamental to be 800 Hz -- to pick a number I've heard a few times, though, of course, the fundamental tone of a drum, and the concentration of energy at a specific frequency, depend in large part on how -- and how well -- it's tuned. (Info on snare drum physics: The Snare Drum)

Here's a (simple) wavelength calculator (it assumes 'standard' values for temperature and altitude/air pressure): Wavelength

From that we get a ballpark figure of ~17 inches for the wavelength of our 800 Hz fundamental. So, to change the phase of that signal in such a way as to invert an 800 Hz tone 180 degrees, you would move that mic ~8.5 inches farther from (or nearer to) the drumhead, vis a vis the other mic.
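If you'd rather skip the web calculator, the same ballpark arithmetic is a couple of lines (assuming ~1125 ft/s for the speed of sound at 'standard' temperature and pressure):

```python
# Ballpark wavelength calculator.
SPEED_OF_SOUND_FT_S = 1125.0  # assumed 'standard' temperature/pressure

def wavelength_inches(freq_hz: float) -> float:
    """Wavelength of freq_hz in inches, in room-temperature air."""
    return SPEED_OF_SOUND_FT_S * 12 / freq_hz

wl = wavelength_inches(800)
print(f"800 Hz: wavelength ~{wl:.1f} in, half-wavelength ~{wl / 2:.1f} in")
```

The exact figures shift a bit with temperature and altitude, but for mic placement purposes the ballpark is what matters.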


But it's important to remember that the snare sound is not composed solely of its fundamental pitch -- drums -- and particularly snares -- have a tendency to produce extremely complex waveforms with a lot of different frequency components. There are myriad issues revolving around the complex character of the snare drum, particularly the fact that (while many drummers remove the bottom skins from most of the drums in their recording kits) the drum will generally have two skins. At lower frequencies, the skins will tend to move in the same direction. But at the higher mode formed by the enclosed space, the skins will actually be moving apart, making the sound quite complex. And then there is the snare 'spring' itself.

So, in all likelihood, moving one of the snare mics may produce pleasing results in the sum of the signals but it will not be that much like what would be accomplished by having both mics equidistant from the drum head and reversing signal polarity of one of them.



The 3:1 relative distance rule of thumb will help save your sanity. Keep in mind it's a relative guideline, ballparking the relative levels of a given signal reaching each mic. (You're basically trying to get the level of a given drum loud enough in its own mic that it will dominate when that mic is summed with the other mics, so that the effects of cancellation are minimized.)

When considering phase relationships in complex drum miking scenarios, another sanity-saver is to focus on one drum at a time, considering its relationship with its own mic vis a vis the other mics around the kit.

Since sound radiating in free space diminishes basically in inverse proportion to the square of the distance, we know that if mic X is one unit of measure away from a single point sound source and mic Y is three units away, the signal reaching mic Y will be nine times (3 squared) weaker than that reaching mic X. (There's a lot more chaos in practice, though, since a drum head is certainly not a single point source.)
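In decibel terms, the same idealized point-source model puts that 3:1 distance ratio at roughly a 9.5 dB drop -- a quick way to sanity-check mic placements:

```python
import math

def level_drop_db(distance_ratio: float) -> float:
    """Level drop, in dB, for an ideal point source heard at
    distance_ratio times the reference distance."""
    return 20 * math.log10(distance_ratio)

print(f"3:1 distance ratio: {level_drop_db(3):.1f} dB down")
print(f"2:1 distance ratio: {level_drop_db(2):.1f} dB down")
```

Real drums and real rooms will not behave this neatly, of course, but ~9.5 dB of separation is usually enough for a mic's intended source to dominate the sum.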

Tuesday, June 1, 2010

What mastering is for...

Talk about a bad penny topic...

Someone was asking recently about a problem he was having with his mixes sounding good in his studio -- but falling apart in the car or on other playback systems. He asked if that was a problem or if that could be fixed in mastering -- and someone helpfully answered, Yes, that's what mastering is for.
In today's paradigm, I'd make that a highly qualified maybe.

Even the best ME can't turn a sow's ear into the proverbial silk purse. If a track is mixed badly or has fundamental flaws, you can try to put sonic band-aids on it, but it's always going to be fundamentally compromised -- and it will be likely to cost you more money as the ME struggles to overcome problems with tracking or mix.

In the old days of then-high-tech, computer controlled cutting lathes, creating a master for pressing disks required very expensive gear and a lot of skill and knowledge to use it -- making the process quite expensive -- so changes at the cutting lab were reserved for last minute and emergency fixes.

Also, in those days, mastering labs were seldom outfitted as high quality mixing rooms. Not only was it wildly expensive to do a fix there, but it was entirely likely that if you tried to make decisions there, they would be compromised by far less than ideal monitoring.

That's why mastering jobs were, by accepted practice, submitted with an edit list of any EQ or other fixes one wanted imposed. The ME might change the sound (say, rolling off bass) in order to optimize the signal for the rigid requirements of vinyl (narrow dynamic range, limits on bass levels), but there were other things, like phase content, that he typically had little control over (unless it was simply a polarity error).

In those days, the ME was expected to not make aesthetic decisions but only apply the requested changes or those absolutely necessary for the format.

Mastering changed with digital -- though the cost of entry was even higher at first. But as CD-R masters became acceptable at replication houses, it became entirely possible for the average home recordist to prepare his own replication masters.

And at that point, mastering houses -- and those who had simply noticed that mastering houses had traditionally commanded hourly rates sometimes 5-10 times higher than studios' -- realized there was a challenge but also an opportunity: to expand the traditional last-minute-fix aspect into something oriented to corrections few would have made -- or wanted made -- in the old days, and to plant the idea that this was 'normal practice' and that a mix was not 'finished' unless it had been 'fixed again' by an ME.

And so recording history is rewritten to help shore up demand for what is often an unnecessary -- and sometimes aesthetically disastrous -- step.

Now there is another aspect to mastering for those putting together album packages. It's also one where the traditional role of the ME has been expanded. That is in last minute fixes to help try to give some consistency in timbre as well as level to album tracks that may have been recorded in different times and places by different production staff, as so often is the case these days.

When one is putting together a package for replication and commercial release, it may well make perfect sense to use an ME in order to assist in producing a package that fits together well.

And in an era when many of those recording do not have long experience, or necessarily the greatest gear (and likely not the knowledge of how to get the most out of it), that court of last resort at the ME's may well make some kind of sense. Do make sure that you have chosen an ME who is not just experienced but thinks like you do. There are a lot of different approaches, and the validity of an approach is contingent on the nature of the project/genre.

But when there is no budget, when you are releasing the music for free or as one-off sales through online stores, I recommend at least trying to do it oneself.

Really, the right place to get things right is in tracking and mixdown. Learn to get that right and, even if you always have everything externally mastered, you'll still be helping to make sure that things end up sounding as close as possible to what you want and that the ME has to make as few of his own aesthetic/corrective decisions as possible.