
Car Audio - Starting with Sound Advice

Thursday, August 6, 2009

Figure 1-2: A wild system looks great but can leave you with no trunk space.

Upgrade your factory-installed system
If you really want to stay on the mild end of the scale and keep from altering your car too much — as well as protect against theft — you can keep the factory radio and add components such as amplifiers and subwoofers. Conversely, you could change out your factory radio and keep your factory speakers intact.

I did this in one of my own vehicles, a 1997 VW Eurovan Camper that’s a family mobile. After talking it over with my installer, we decided I could get the sort of performance I needed in the vehicle (after all, my wife mostly drives it, and I can’t really crank it up with the kids around) just by swapping out the radio. This also gave me the option to add satellite radio and an auxiliary input that allowed me to easily jack in an iPod. And I could always decide to upgrade the speakers and add an outboard amplifier later.

There are several options for upgrading your factory audio system. You should consider these first if you’re primarily looking for better sound. The easiest and least expensive path to better sound is to swap the factory speakers for higher quality aftermarket ones. Many car audio manufacturers offer drop-in speakers that are specifically designed to fit factory provisions in a vehicle with a minimal amount of hassle and little to no modification. Often it’s just a matter of taking out the factory speakers and dropping in new ones.
This approach generally offers the most bang for your buck because many stock car audio systems use cheap, poor-performing speakers, and even inexpensive aftermarket speakers can offer a dramatic difference in sound quality.

Car Audio - Choosing between Mild and Wild

Wednesday, August 5, 2009

For many people, car audio is as much about show as it is about sound. After all, chrome wheels won’t make your car go any faster, but they look good, they’re fun, and they tell people you care about your car. Nothing wrong with that. After all, people have been pimpin’ their rides for years.

If you want a flashy car audio system, go for it. Just keep in mind that there are trade-offs. If your car is a daily driver and you use it to haul people and other things, then going with a flashy system may be impractical. For instance, I once put a show system in my 1996 Chevy Impala for a cross-country promotional trip I did for a magazine. It was the first time I installed a huge system in my own personal car after nearly 10 years in the car audio business. Although the interior was kept pretty low-key except for custom door and rear-deck panels for the speakers, the car’s trunk was turned into a veritable car audio showcase.

It included five amps in a rack in the floor and three 10-inch subwoofers in a bandpass box with a see-through Plexiglas panel under the rear deck. A massive 100-disc CD changer was installed against the driver’s side trunk wall, with a bank of capacitors and power-supply accessories on the other. It was all trimmed with custom vinyl-covered wood and Plexiglas panels.

It sounded great and looked awesome. The car was a hit at the shows I attended and my neighbors would bring their friends over just to see it and listen to it. It was covered in magazines several times, and it was cool to have a celebrity car.
But the car didn’t handle and accelerate the same due to all that extra weight from the car audio components. About a year or so later, after my first child was born, my wife and I couldn’t even fit a baby stroller in the trunk because of all the car audio gear.

The reason I relate this story is to show you both sides of the coin. You can go with a mild system, like the one shown in Figure 1-1. Or, go nuts with a system like that in Figure 1-2. If you want that showy system, by all means, you should have one. But a great-sounding but more discreet system can usually serve the same purpose. Plus, with a showy system you run the risk of attracting the wrong kind of attention: from thieves.

                   Figure 1-1: A mild system can sound good and leave you with trunk space.

Exploring the World of Car Audio


Taking the Car Audio Plunge
For my money, there's no better place to listen to music than in a car. When you're listening at home, the phone always rings or someone tells you to turn it down. Even with headphones, distractions occur and the music is all in your head, so to speak. But the car is like your own private listening room: a mobile sound cocoon that isolates you from the outside world. You can turn it up as loud as you want (as long as you're not disturbing others) and feel the visceral impact that comes from the music pulsing around you.
I've been fortunate enough to hear some ultra high-end home-audio systems, and I've been in state-of-the-art recording studios and witnessed some amazing live performances. But none of these live up to the feeling I get while listening to a well-designed car audio system in a cool car on a fun road. Music just seems to sound better when asphalt is flying under your feet!

The best time ever for car audio fans
There’s never been a better time to be a mobile music lover. Not only have components such as amplifiers and speakers reached an apex of performance and offer more bang for the buck than ever, but the recent explosion in media options has made the DVD radios that were state-of-the-art a decade ago seem almost antiquated now. The advent of MP3 has freed music from a disc based format so that now you’re able to carry your entire music library on a small portable player such as an iPod. Alternatively, you can load hundreds of songs onto a single disc or even a USB thumb drive. Satellite radio has gained ground against traditional terrestrial radio, while high-definition (HD) radio promises to make AM and FM better and offer more content. Plus, in just a few short years, mobile video has turned “Are we there yet?” to “Are we here already?”

Your roadmap to awesome car tunes
Consider Car Audio For Dummies your roadmap to awesome car tunes. You know that there’s this wonderful world of car audio out there, but you don’t know how to get started planning a sound system, shopping for components, or installing everything, let alone getting the most out of your system, protecting it, and fully enjoying it. In this book, I take you through each step of the process so that you can make informed decisions without wasting time and money and so that you’ll ultimately end up with a car audio system that will give you years of listening pleasure.

You’ve come to the right place
You probably heard someone’s car audio system — a friend’s, your older sibling’s, or maybe one at a car show — and now you want something similar. You used to think your car’s system sounded pretty good, but now that you’ve heard something better, it just doesn’t stack up. I’ve always referred to this as the ice cream theory. After you’ve tasted Ben & Jerry’s, for example, you can’t go back to the grocery-store brand. It’s just not the same.

Unbalanced / Balanced Plugs.

Tuesday, August 4, 2009

The figure shows how an unbalanced mono plug is connected to a one-conductor shielded wire. Or, in other words, to a wire that has one internal conductor surrounded by a shield, which also functions as the 'low' side for the audio signal.
The same figure also shows how an unbalanced stereo plug is connected to a two-conductor shielded wire. That is, to a wire with two internal conductors, both of which are surrounded by a shield. The two signals are discrete, but share a common ground.
The two signals could be the left and right of a stereo signal, or they could be two totally unrelated signals, so the nomenclature of 'stereo guitar plug' is ubiquitous, but not really accurate. Hopefully, with the aid of the previous picture, you now have a clear concept of how unbalanced audio is connected. The same rules apply whether you are using guitar plugs, RCA plugs or whatever the 'plug du jour' happens to be today.
But what about balanced audio? Why is it called 'balanced', and how does it differ from an unbalanced signal? This is where we come to some very clever voodoo.

Balanced audio is created by splitting the audio signal into two separate but equal parts, and then inverting (flipping) the phase of one of the two.
Your instantaneous question may be 'Why bother?' The reason is that when the in-phase and the out-of-phase signals are properly recombined (by un-inverting the flipped side in a particular way), the result is that our desired audio signal is not only amplified, but any stray noise it has picked up is immediately nullified, leaving only the pure signal.
This is such an important concept that I’m going to repeat it in different words, hoping that it will embed itself deeply in your minds.

Balanced audio reduces or eliminates unwanted noise picked up in wires by flipping (inverting) the phase of one of the two conductors that carry the signal. When the signal is properly recombined, its amplitude (volume) is increased and the unwanted noise is nulled out. Yet another way to describe this is that when the plus (+) noise is summed (added) to the minus (-) noise, the result is no noise. Or at least very little noise.

What this means is that balanced audio runs can be hundreds of feet long without degrading the signal by adding noise to it. Pretty cool, huh?
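To make the cancellation concrete, here is a minimal sketch in Python (my own illustration using NumPy, not from the original text) of a signal split into an in-phase leg and an inverted leg, both picking up the same noise, and then recombined by taking the difference:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.01, 480, endpoint=False)      # 10 ms of time
signal = np.sin(2 * np.pi * 1000 * t)               # the desired 1 kHz audio

hot = signal                                         # in-phase leg
cold = -signal                                       # inverted (flipped) leg

noise = 0.2 * rng.standard_normal(t.size)            # the same interference couples into both legs
hot_received, cold_received = hot + noise, cold + noise

# Recombining: flip the cold leg back and sum, i.e. take the difference.
recombined = hot_received - cold_received             # = 2 * signal; the common noise cancels

print(np.allclose(recombined, 2 * signal))            # True: noise nulled, signal doubled
```

The difference doubles the wanted signal while the identical noise on both legs subtracts away to nothing, which is exactly the point of the balanced run.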
Figure 2.3.6 shows a balanced mono guitar plug, and also the noise cancelling concepts we’ve talked about above. Pay particular attention to it, as the subsequent discussion in this section is based on you having a clear understanding of how balanced audio works.
Hopefully, I've now tossed this information at you in enough different ways that you've got a decent grasp of it. Let's put it in still another way.
Balanced audio lines help cancel out interference of many types: not only hum (ground loops), but also buzz (60 Hz harmonics), thermal noise (white noise), digital clock jitter and lots of other bad stuff, too numerous to mention.
Next up is an example of a typical balanced +4 dBu audio connection, the kind of connection you might make from a pro-level recording console to a pro-level audio recorder – analog or digital. This example is shown in Figure 2.3.7 for an XLR type (three-pin) connection.
You don't have to pay too much attention to the voltage values – they represent an ideal you might see on your DVOM, on a clear day with a favoring tailwind.
The only function of the voltages in this diagram is to give you some idea of what you might encounter in the real world, and reinforce the concepts of balanced audio.
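For reference, here is a small sketch (my own, not from the text) of the dBu scale behind those voltages; 0 dBu is defined as 0.7746 V RMS, so a nominal +4 dBu signal works out to roughly 1.23 V RMS:

```python
V_REF = 0.7746   # volts RMS at 0 dBu (the voltage that gives 1 mW into 600 ohms)

def dbu_to_vrms(level_dbu: float) -> float:
    """Convert a level in dBu to volts RMS."""
    return V_REF * 10 ** (level_dbu / 20)

print(round(dbu_to_vrms(0), 3))   # 0.775 V
print(round(dbu_to_vrms(4), 3))   # about 1.228 V, the nominal pro line level
```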
With luck – and attention on your part – you've now seen the advantages of balanced audio. You will restrict your unbalanced connections to short runs and, if given the option, always wire gear in balanced mode, right?

Now we come to the real mind-blowing part. Electrical power is basically an audio-frequency signal! We're all painfully familiar with the sound of 60 Hz hum; it's ubiquitous. No matter where you go, you hear it – anywhere within the AC power grid, and often up to several miles away from it.
But is our regular run-of-the-mill 120 V, 60 Hz electrical power distributed in a manner similar to balanced audio in a studio, to reduce noise pick-up? No! Regrettably, all standard 120 V power distribution systems are wired in an unbalanced mode – this makes them highly susceptible to picking up all kinds of crud!
Every time you hear 60 Hz (or any other noise) in an audio system, it's degrading the sound quality and robbing your amplifiers of power.
This brings us to our next section in this module. The truly observant among you noticed that the last figure included a credit to something called Equi=Tech. In the next part, you'll find out why that mysterious credit is there. Can you wait that long?

Audio - Unbalanced/Balanced Wiring

Monday, August 3, 2009

We'll explore what unbalanced/balanced wiring is after we take a quick peek at a couple of guitar plugs to show you the physical difference between balanced and unbalanced connectors.

What you'll see next are the solder tab ends of a stereo and a mono guitar plug, followed by the 'business ends' of the plugs that actually get inserted into guitars, amplifiers and other gear. And if you guessed that the mono plug is unbalanced, while the stereo plug can be wired balanced, you get a gold star!
There are always caveats, and this example is no exception. The so-called 'stereo' guitar plug can be wired as a single balanced connection, or as two unbalanced mono connections that share a common ground. So don't assume; always check.

A close-up of the two solder tabs on a stereo male guitar plug is shown in Figure 2.3.2. I've drawn two arrows to show exactly what part(s) I'm talking about. The longer part, which extends to the upper left in this picture, is both a strain relief for the wire and the part that the shield/drain gets soldered to.

                                          Figure 2.3.2 Solder tabs of stereo male guitar plug.


Let's call the two tabs I show the 'upper' and 'lower' tabs in this picture. The lower tab goes down to the tip of the plug. It's the high/hot conductor. The upper tab goes to the ring of a stereo plug, but is omitted (not present) in a mono plug. It's the low/cold conductor. As a general rule, tip is high, ring is low, and the long barrel of the plug is used for drain/shield. Since I want everyone to be totally clear on the difference between stereo and mono plugs, I've got a couple of side-by-side comparisons ready.

These pesky plugs are so shiny I had to put some white artist's tape behind the solder tabs, so you could see them against the strain relief behind them (Figure 2.3.3). I hope it's all clear. On the left is a mono plug with one tab. On the right, a splendid example of a stereo plug with two solder tabs. Now that we're straight on the tabs, let's see the whole plug (Figure 2.3.4). Here we can see the business ends of our plugs – mono on the bottom and stereo on the top. Notice the ring on the stereo plug? That's the part the low conductor is connected to – and it is clearly omitted in the mono plug below it. So: one tab and no ring, mono plug. Two tabs and a ring, stereo plug. And remember, a stereo plug can be wired as unbalanced stereo or balanced mono – the wiring will look the same.

                                                 Figure 2.3.3 Mono/stereo comparison.

Balanced and unbalanced audio and AC power

Sunday, August 2, 2009

After much skull scratching and soul searching, I decided to combine several concepts into one section, because they are so intimately interconnected. No, not that intimately, they're just good friends. So in this section I'll talk about unbalanced and balanced audio, unbalanced and balanced AC power, and the best ways to wire and clean up the sound (and picture) of your studio/disco/home theater/whatever.

Unbalanced/balanced audio
Let's start with audio; a nice, simple bit of audio – a sine wave. Some of you may have seen a sine wave on an oscilloscope or in a picture. They all look more or less like the one in Figure 2.3.1.

Since the sine wave is AC (alternating current), it will start at 0 V (zero volts), rise to a positive peak, then reverse itself, cross the 0 V reference line again, and rise (inversely) to its negative peak. Or it will do what I've shown here: start negative and flip positive. And it will keep doing this, over and over, until we get bored and turn it off.

If the sine wave repeated this action 1000 times in a second, we'd say it has a frequency of 1000 Hz (hertz) or, in older terminology, 1000 cps (cycles per second). We audio folks got tired of saying 'see-pee-ess' and renamed the unit of measurement 'hertz' as it's shorter.
Higher-frequency sine waves will appear more squished together horizontally; lower-frequency sines will look more spread out. The reason for this is that the horizontal axis of an oscilloscope is the 'time base' – it shows the progression of the waveform from the past into the future. The more times a signal reverses polarity, the higher its frequency and the more reversals are present in a given period of time.
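As a quick illustration (my own sketch, not from the text) of the frequency/period relationship: a 1000 Hz sine completes one cycle every millisecond, so a 5 ms window on the scope shows five cycles.

```python
import numpy as np

frequency_hz = 1000                    # 1000 cycles per second
sample_rate = 48_000                   # arbitrary sample rate for the illustration
window_s = 0.005                       # a 5 ms 'time base' window

t = np.arange(0, window_s, 1 / sample_rate)
sine = np.sin(2 * np.pi * frequency_hz * t)

period_s = 1 / frequency_hz            # 0.001 s per cycle
print(period_s, window_s / period_s)   # 0.001, 5.0 cycles visible in the window
```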

All sound (almost) is made up of complex combinations of AC (alternating current) waveforms, most of which are not sine waves. The only exception is a DC (direct current) pulse, which will make a one-time 'click' when connected to a speaker or headphone, but not much else.

We use sine waves for measurement because they're easy to quantify.
I hope you are now fine with sine, as it were, and ready to see how this applies to real-world situations. There are only two ways that an analog audio signal can be carried along in a wired connection. For the sake of brevity (and sanity – mine), I'm not going to expound on digital or RF transmission of audio.
The simplest way for an audio signal to be carried on a wire is as an unbalanced signal. This means that there is a center conductor (hot), and (typically) shield and ground are combined in the outer layer of the wire. So half of the signal path is (sort of) shielded by the outer layer, and the outer layer itself is tragically vulnerable to interference from sources in the outside world.
What this means is that unbalanced audio is basically limited to runs of 20 feet or less, and even then it lacks the ability to null out induced noise, hum and the other crud we encounter with great, ah, frequency.

Balanced audio, on the other hand, can survive runs of hundreds of feet, so all pro audio facilities use balanced mic lines, balanced transmission lines, and do most of their internal wiring in a balanced manner. We'll explore what unbalanced/balanced wiring is after we take a quick peek at a couple of guitar plugs to show you the physical difference between balanced/unbalanced connectors.


Wires in a harness

Friday, July 31, 2009


Spiral Shield Wire



Braid Shield Wire

All the concepts in the AWG are explained like this more than once, in fact, so you can follow along easily and understand every point. And the illustrations will show you exactly what I'm talking about.
There are two other common types of wire: these are spiral shield wire and braided shield wire. They do the exact same thing as the mylar foil shielded wire in Figure 1.1 , but the shield construction is different.
In spiral shield wire (Figure 1.3), the shield layer is actual strands of copper, wound in a spiral around the inner conductors. The two inner conductors here are the blue and the translucent-over-copper colored items in the picture. The two thinner pale white strands have no electrical function; they are 'packing strands' that help keep the wire round when it's made.
This type of wire is stronger and more noise-resistant than the mylar shield type in Figure 1.1, but it's also larger and costs more. It's flexible and fast to work with, as opposed to the next type of wire I want to discuss.
Braided shield wire offers top-notch shielding, and it's very durable. But it's a real pain to work with, because you have to carefully unbraid the shield to connectorize it. Not recommended for the impatient.
Still with me? The three types of wire I've shown you all do the same thing, but they look different, require different techniques, and offer different pros and cons in terms of use. I'm showing all of them to you, because you're likely to encounter all of them in your wiring saga.
A lot of wiring work is like the examples above; the diversity of options available makes it seem complicated and confusing. The trick is to see the underlying unity among the options. Three kinds of wire all do the same thing – cool!
If you ever do get confused, just stop, back up a page and read it over, which is a lot easier than hoping for the best, doing it wrong and doing it over. Take your time, and the AWG will soon have you soldering like a pro.
Other terms used in this section are explained the first time they are used in the text. If you skip a section where a definition is given, or if you forget it, you can look it up in the online glossary we've added to the AWG website.
We (Focal Press and myself) chose to keep the glossary on the web in order to update it, and to allow more space in the book itself for vital information.
Some sections of the book (like the soldering instructions) are written with deliberate redundancy. If I tell you how to wire a connector, I have to give all the steps in the proper sequence. If you have to flip back and forth in the book to see how a connector is wired, it will only slow you down. So each connector section is designed to be read and followed as a piece of standalone text.
A caution, however: the illustrations show the ground wire always connected, since this is how an individual cable would be wired. A star-grounded system would have ground connected at only one end, not both (star grounding is thoroughly covered in a later chapter).
However, be sure you understand the concept of star grounding before doing work on previously installed wiring or starting construction of a new system. The difference in a star-grounded system is that the shield (ground) wire is connected at only one end, rather than both. Connecting shield at both ends of a wire can cause 'ground loops', which induce 'hum' and other types of noise in audio systems. Star-ground installations are always custom-wired and therefore costly – but they radically reduce system noise.


Audio Wiring Guide

Thursday, July 30, 2009

Often the people who had done the wiring were highly intelligent, motivated individuals. But craftsmanship is not synonymous with either intelligence or motivation. True craftsmanship also requires a thorough understanding of the materials you're working with, an understanding that can be gained only through experience. In this book I'll be sharing with you the experience I've gained during decades of audio/video wiring.
The Audio Wiring Guide (hereafter AWG) is designed for use by both the amateur and the professional. Whether you're wiring a home studio, a PA (public address system) or a commercial multi-track installation, this book will help you do it better, faster, cheaper, and with fewer mistakes. No matter what the size of your wiring project or installation, the AWG provides you with the essential information you need and the techniques to use it.
One of the biggest differences between the AWG and other books is that the steps you need to do for a particular sequence of work are illustrated with photos that look exactly like the wires in your set-up. The instructions are written so you can understand them the first time you read them, no matter what your experience level.

Let's take a trial run now to see how it works.
Wiring nomenclature is often ambiguous and confusing. For example, the word 'wire' could refer to any of these:
● The individual copper strands inside a conductor.
● The strands and their insulating jacket.
● The cluster of conductors and the shield layer in a microphone or other cable.
All very confusing – and for no good reason! So listen up. Throughout this article, I'll use certain terms in specific ways. Here's an example:

● Strands are the individual copper strands of a wire.
● Conductors are made up of copper strands that are covered with an insulating jacket (different colors of pliable plastic).
● Shield is a metallic, conductive layer wrapped around the inner conductors to reduce noise. It may be a metalized mylar foil, an electrically conductive plastic or actual strands of copper wire that are commonly not insulated.
● Wires are made up of the conductors (strands and insulating jackets) in a shield, and commonly surrounded by an outer plastic or rubber jacket.
● A harness or cable is a collection of wires that are bundled together for a specific purpose.
The copper strands go into an insulating jacket to become conductors. Conductors and their shields in an outer jacket are wires.
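If it helps, here is a toy model of that nomenclature as data structures (purely my own illustration; the strand counts, colors and names are made up):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Conductor:
    strands: int          # individual copper strands
    jacket_color: str     # insulating jacket

@dataclass
class Wire:
    conductors: List[Conductor]
    shield: str           # "foil", "spiral" or "braid"
    outer_jacket: str

@dataclass
class Harness:
    wires: List[Wire]
    purpose: str

# One hypothetical two-conductor shielded wire, bundled eight times into a harness.
mic_cable = Wire(
    conductors=[Conductor(strands=41, jacket_color="blue"),
                Conductor(strands=41, jacket_color="clear")],
    shield="braid",
    outer_jacket="rubber",
)
snake = Harness(wires=[mic_cable] * 8, purpose="8-channel stage snake")
print(snake.purpose, len(snake.wires))
```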

Video - Cross Pulse Display

Wednesday, July 29, 2009

Cross Pulse Display
On a professional video monitor, the image can be shifted horizontally to make the horizontal blanking period visible. The image can also be shifted vertically to make the vertical blanking interval visible. When the image is shifted both horizontally and vertically at the same time, the display is known as a cross pulse or pulse cross display. A cross pulse display is a visual image of what is represented electronically on a waveform monitor. This display shows several of the signals created in the sync generator.

Other Signal Outputs
There are several other outputs from the synchronizing generator that are used for testing or other purposes. These test signal outputs are not so much used for driving the system as they are for checking it, or checking the synchronizing generator itself.
Quite often, test signal outputs and black burst or color subcarrier appear at the front of the sync generator for ease of access, though they are also available at the back of the sync generator. Horizontal and vertical drive pulses may be available at the rear of the sync generator, as they are not used for testing purposes or to drive any other piece of equipment other than a tube camera. Test signals that are available from a sync generator are discussed in Chapter 21.
Vertical Interval Signals
The NTSC analog video image is 525 lines, 480 of which represent picture information, referred to as active video. The remaining lines in the vertical interval are used for synchronizing information. Test signals are inserted in the vertical interval as well. While not part of the active video, they are a valuable part of the composite signal.
These signals are usually created by devices connected to one or more of the outputs of a sync generator. These extra signals can then be inserted in the vertical interval. These signals may include vertical interval test signals, vertical interval reference signals, closed captioning, teletext, commercial insertion data, and satellite data.
In the case of the vertical interval test signals (VITS), a test signal generator can create one-line representations of several test signals. These one-line test signals are inserted in one of the unused video lines in the vertical interval. The VITS can be displayed on an oscilloscope. This test signal provides a constant reference with respect to the active video contained within the frame.
The vertical interval reference signal (VIRS) was developed to maintain color fidelity. Small differences in color synchronization can occur when signals are switched between pieces of equipment. The VIRS provides a constant color reference for the monitor or receiver. Without the VIRS, the color balance of the image may change.
Closed captioning was originally developed so the hearing impaired could watch a program and understand the dialogue. In closed captioning, a special receiver takes the information from the vertical interval and decodes it into subtitles in the active video. Closed captioning may also be used in environments where the audio may not be appropriate or desired. Technically, since closed captioning appears on line 21, which is active video, the data is not truly in the vertical interval.
Teletext can be used for broadcasting completely separate information unrelated to program content. An example of this is seen on many cable news stations. While the camera may be covering a news story or pointing to an anchor, the ticker tape of information below the image is an ongoing feed of text.
Commercial insertion data can be used to automatically initiate the playback of a commercial. This can eliminate the possibility of operator error. The data are designed to trigger the playback of the required material at the appropriate time, as well as for verification that the commercial was broadcast as ordered.
Satellite data contains information about the satellite being used, the specific channel or transponder on the satellite, and the frequencies used for the audio signals.
The blanking portions of the video signal, both horizontal and vertical, carry critical information. In addition to synchronizing, the blanking periods are used to carry other data that enhance the quality and usefulness of the video signal.

Video Equalizing Pulses

Tuesday, July 28, 2009

Equalizing Pulses
During the vertical blanking interval, the sync generator puts out equalizing pulses. Equalizing pulses occur both before and after the vertical sync signal. The equalizing pulses that occur before the vertical sync are called pre-equalizing pulses. Those that occur after vertical sync are called post-equalizing pulses. Equalizing pulses in video assure continued synchronization during vertical retrace as well as proper interlace of the odd and even fields.
Lines 1 through 9 in each field actually consist of pre-equalizing pulses, vertical sync pulses, and post-equalizing pulses. The 6 pre-equalizing pulses break up the first 3 lines of a field into 6 half-lines. The next 3 lines consist of 6 vertical sync pulses. Lines 7, 8, and 9 are separated by post-equalizing pulses.

Depending on whether it is the odd or even field, there will be 6 post-equalizing pulses, but either 5 or 6 half-lines. In the even field, there are only 5 half-lines. The first half-line of inactive video is called line 9. In the odd field, there are 6 post-equalizing pulses and 6 half-lines, so that the first full line of inactive video is called line 10.

There are other ways of defining fields. Each field consists of 262½ lines. The odd field begins with a whole line of active video on line 21 and ends with a half-line of video. The even field is defined as starting active video with a half-line on line 20 and ending with a whole line of video. In either case, each field is handled individually, and line counting is done within each field.
It is the equalizing pulses that allow the system to distinguish the odd from the even fields and therefore interlace the two proper fields together to create one frame. If the fields were not properly interlaced, it would be possible to be off by one field in the interlace process.

Color Subcarrier
With the advent of color television, a new signal was introduced to carry the color information. This signal, known as the color subcarrier, became the most important signal of the sync generator. Most sync generators combine color subcarrier with horizontal sync, vertical sync, blanking, and a black video signal to produce a composite signal called black burst or color black. The color subcarrier signal, or any of the synchronizing or blanking pulses, can be taken as a separate output from a sync generator. However, the combination of sync pulses in a black burst signal is much more useful.

The frequency of the color subcarrier is 3,579,545 cycles per second. This frequency must be maintained within plus or minus 10 cycles per second. If this frequency changes, the rate of change cannot be greater than one cycle per second every second. The exactness of this specification has to do with the sensitivity of the human eye to changes in color. As this color subcarrier signal is the reference for color information, any change in the frequency would cause a shift in the color balance. The color subcarrier is also used as the main reference signal for the entire video signal. If the color subcarrier is incorrect, then all the signals in the television system will be affected.
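To put that tolerance in perspective, a quick back-of-the-envelope calculation (my own sketch, not from the text) shows just how tight plus or minus 10 cycles is relative to 3,579,545 cycles per second:

```python
f_sc = 3_579_545          # color subcarrier frequency, Hz
tolerance_hz = 10         # allowed deviation, plus or minus

tolerance_ppm = tolerance_hz / f_sc * 1e6
print(round(tolerance_ppm, 2), "parts per million")   # about 2.79 ppm
```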

Video - Horizontal Blanking

Monday, July 27, 2009

Horizontal retrace occurs during the horizontal blanking period.
The horizontal blanking period can be viewed on a waveform monitor, which displays an electronic representation of the visual image (Figure 4.3A). (The waveform monitor is discussed in detail in Chapter 8.) Several critical synchronizing signals occur during this horizontal blanking period. These signals appear in the following order: the front porch, the horizontal synchronizing pulse, the breezeway, and the color burst reference (see definitions of these terms below). The breezeway and color burst reference occur during the period of time referred to as the back porch.
The front porch is the period of time that begins at the end of active video. It initiates the retrace and is the beginning of the synchronizing period of time. A single scan line is defined as starting at the front porch and ending with active video before the next front porch begins.
Following the front porch is the horizontal synchronizing pulse. This pulse synchronizes the receiver with the originating source that created the image. Following the horizontal synchronizing pulse is the area known as the back porch. With the advent of color, the color burst signal was inserted in the back porch. The area between horizontal sync and color burst on the back porch is called the breezeway. Following the end of the back porch, the active video scanning portion of the line begins.

Vertical Blanking
Vertical blanking is somewhat more complex. During the vertical blanking period, there are pre- and post-equalizing pulses and vertical sync pulses, as well as several lines of blanked video. These are full lines of video on which there is no active picture. The vertical blanking period can also be seen on a waveform monitor.

Vertical Synchronizing Pulses
Vertical synchronizing pulses, which are part of the broadcast signal, are used to drive the electron beam back to the beginning of the next field so that the horizontal trace can be initiated. There are six vertical synchronizing pulses that occur between fields to initiate this process. Vertical synchronizing pulses only occur between fields.

Video Drive Pulses


Drive Pulses
Horizontal and vertical drive pulses are used for driving the camera and are never broadcast. These pulses trigger circuits in the camera called sawtooth waveform generators. The name “sawtooth waveform” refers to the shape of its signal, which looks like the serrations on the edge of a wood saw (Figure 4.2). Both the horizontal and vertical circuits are driven by the same sawtooth waveform.

In horizontal deflection circuits, the long slope on the sawtooth waveform drives the scanning electron beam horizontally across the target face of the pickup tube. In vertical deflection circuits, the long slope moves the beam vertically from one scanning line to the next. In both horizontal and vertical deflection circuits, the shorter and steeper slope of the sawtooth waveform causes the beam to retrace. In horizontal deflection circuits, the beam moves back to start scanning another line. In vertical deflection circuits, the beam moves back to begin scanning another field.

Blanking Pulses
Horizontal and vertical blanking pulses cause the electron beam in a video camera to go into blanking. In other words, they cause the electron beam to shut off during the retrace period at the end of each line and the retrace period at the end of each field. Blanking pulses, like horizontal and vertical drive pulses, are fed to cameras.
However, unlike drive pulses, the blanking pulses are broadcast as part of the overall video signal.


Synchronizing the Analog Signal

Sunday, July 26, 2009

Synchronizing the Analog Signal
Video images are generated from a source, such as a camera or computer, and viewed on a display, such as a monitor. In order for the viewed image to be seen in exactly the same way and in the same time frame as the generated or original image, there has to be a method for synchronizing the elements of the image. Synchronizing an image is a critical part of the analog video process.
Synchronizing Generators
A synchronizing generator, or sync generator as it is called, is the heart of the analog video system. The sync generator creates a series of pulses that drive all the different equipment in the entire video facility, from cameras to monitors. When viewing analog signals, the synchronizing pulses also drive the monitors.
The heart of the sync generator is an oscillator that puts out a signal called the color subcarrier, which is the reference signal that carries the color information portion of the signal (discussed in more detail later in this chapter). The frequency of the color subcarrier is 3,579,545 cycles per second, rounded off and more commonly referred to as simply 3.58 MHz. Starting with this basic signal, the sync generator, through a process of electronic multiplication and division, outputs other frequencies in order to create the other pulses that are necessary for driving video equipment. These pulses include horizontal and vertical synchronizing pulses, horizontal and vertical drive pulses, horizontal and vertical blanking pulses, and equalizing pulses.
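As a rough sketch of that multiplication and division (my own illustration of the standard NTSC color relationships, not taken from this text), the horizontal and vertical rates can be derived from the subcarrier like this:

```python
f_sc = 3_579_545            # color subcarrier, Hz (about 3.58 MHz)

f_h = f_sc * 2 / 455        # horizontal line frequency: about 15,734.26 Hz
f_field = f_h / 262.5       # field rate: about 59.94 Hz
f_frame = f_field / 2       # frame rate: about 29.97 Hz

print(round(f_h, 2), round(f_field, 2), round(f_frame, 2))
```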

These pulses are often combined so that one signal will contain multiple synchronizing components. Combination signals are referred to as composite signals. Terms such as composite blanking and composite video refer to such signals.
Synchronizing Pulses
The sync generator puts out both horizontal and vertical synchronizing pulses. These synchronizing pulses ensure that all of the equipment within the system is in time, or synchronized. Horizontal and vertical synchronizing pulses are part of the composite signal, so they can be easily fed to any piece of equipment that requires a sync reference signal.
Horizontal synchronizing pulses appear at the beginning of each line of video. They assure that monitors and receivers are in synchronization on a line-by-line basis with the information that the camera is creating. Vertical synchronizing pulses appear during the vertical interval, which will be discussed later in this chapter. These pulses assure that the retrace is taking place properly, so that the gun is in its proper position for painting the beginning of the next field.
The composite sync signal ensures that each piece of equipment is operating within the system on a line-by-line, field-by-field basis.
If equipment is not synchronized, switching between images can cause the image in the monitor to lose stability. Dissolves and special effects can change color or shift position. Character generators or computer-generated images might appear in a different position in the image from where they were originally placed.

Video - Interlace Scanning

Thursday, July 23, 2009

Interlace Scanning
The process of this field-by-field scanning is known as interlace scanning because the lines in each field interlace with the alternate lines of the other field. There are two fields for each frame.
Because the fields are appearing at the rate of one every 1/60 of a second, the eye does not see the interval between the two fields. Therefore, the eye perceives continuous motion.

An interesting experiment that illustrates the concept of interlace scanning is to follow a scanning pattern similar to the one the electron beam traces on a frame of video. Look at the paragraph below and first read just the boldfaced, odd lines. Then go back to the top of the paragraph and read the non-boldfaced, even lines. Notice the way the eyes retrace from the end of a line back to the left margin to begin scanning the next odd line. At the end of the paragraph, the eyes retrace from the last line back to the top again to read or scan the even lines. This is what the electron beam does during its blanking periods.

A television image is created through interlace scanning.
Interlace scanning is the process of scanning every other line from top to bottom. The beam first scans the odd lines top to bottom, and then it scans the even lines top to bottom. Each scan from top to bottom is a field. It is the combination of the two successive fields that makes up an entire frame of a video image.

It is not until both sets (or fields) of odd lines and even lines are interlaced together that the full meaning of the paragraph (or full-frame image) becomes clear. This holds true especially with electronic graphics. When viewing only one field, the letters look ragged and uneven. Only when viewing a complete interlaced frame do the letters look smooth and even.
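Here is a tiny sketch (my own, not from the text) of that interleaving: the odd-line field and the even-line field are captured separately and only form a complete frame when merged line by line.

```python
# A small toy frame; NTSC active video would use 480 lines instead of 10.
frame_lines = 10

odd_field  = {n: f"odd line {n}"  for n in range(1, frame_lines + 1, 2)}   # lines 1, 3, 5, ...
even_field = {n: f"even line {n}" for n in range(2, frame_lines + 1, 2)}   # lines 2, 4, 6, ...

# Interlacing: the frame is the two fields merged, line by line.
frame = {**odd_field, **even_field}
for n in sorted(frame):
    print(n, frame[n])
```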

Black and White Specifications
Dividing the video image into two fields, each with 262½ lines, provides an advantage when broadcasting a video signal. Since there is much less information in 262½ lines than there is in 525 lines, the video signal does not require as much bandwidth or spectrum space for transmission. For black and white video images, the original NTSC standards were as follows:
• 525 lines per frame
• 480 lines per frame of active video
• 30 frames per second
• 15,750 lines per second (line frequency)
• 262½ lines per field
• 2 fields per frame
• 60 fields per second
• Horizontal blanking before each line
• Vertical blanking between successive fields

These specifications are different when color is added to the video signal.
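The numbers in that list all follow from one another; here is a quick sketch (my own illustration) of the arithmetic:

```python
lines_per_frame = 525
frames_per_second = 30          # original black-and-white NTSC rate
fields_per_frame = 2

line_frequency = lines_per_frame * frames_per_second      # 15,750 lines per second
lines_per_field = lines_per_frame / fields_per_frame      # 262.5 lines per field
fields_per_second = frames_per_second * fields_per_frame  # 60 fields per second

print(line_frequency, lines_per_field, fields_per_second)
```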


Video Scanning


Scanning
When looking at a picture, such as a photograph or a drawing, the human eye takes the scene in all at once. The eye can move from spot to spot to examine details, but in essence, the entire picture is seen at one time. Likewise, when watching a film, the eye sees moving images go by on the screen. The illusion of motion is created by projecting many pictures or frames of film each second. The eye perceives motion, even though the film is made up of thousands of individual still pictures. Video is different from film in that a complete frame of video is broken up into component parts when it is created.

Video Lines
The electron beam inside a video camera transforms a light image into an electronic signal. Then, an electron beam within a video receiver or monitor causes chemicals called phosphors to glow so they transform the electrical signal back into light.
The specifications for this process were standardized by the NTSC (National Television System Committee) when the television system was originally conceived in the late 1930s. The NTSC standard is used in North America and parts of Asia and Latin America. As other countries developed their own television systems, other video standards were created. Eastern and Western Europe use a system called PAL (Phase Alternate Line). France and the countries of the former Soviet Union use a system known as SECAM (Séquentiel Couleur à Mémoire, or Sequential Color with Memory).
For each NTSC video frame, the electron beam scans a total of 525 lines. There are 30 frames scanned each second, which means that a total of 15,750 lines (black and white video) are scanned each second (30 frames × 525 lines per frame). This rate is called the line frequency. The NTSC line frequency and frame rate changed with the addition of color. Both PAL and SECAM use 625 lines per frame at 25 frames per second. These two systems were developed after the introduction of color television and consequently did not require any additional changes. There are variations and combinations that attempt to combine the best elements of all of these standards.
Scanning 15,750 lines per second is so fast that the eye never notices the traveling beam. The video image is constantly refreshed as the electron beam scans the 525 lines in each frame. As soon as one frame is completely displayed, scanning begins on the next frame, so the whole process appears seamless to the viewer.
The electron beam in a video camera is made to scan by electronic signals called drive pulses. Horizontal drive pulses move the beam back and forth; and vertical drive pulses move the horizontally scanning beam up the face of the pickup tube. These drive pulses are generated inside the camera.
Blanking
An electron beam scanning a picture tube is like an old typewriter.
It works in only one direction. When it reaches the end of a line of video, it must retrace or go back to the other side of the screen to start the next line. Likewise, when it reaches the bottom of the image, it must retrace or go back to the top of the image to begin scanning the next frame (Figure 3.1).
The period of time during which the electron beam retraces to begin scanning or tracing the next line is part of a larger time interval called horizontal blanking. The period of time that the electron gun is retracing to the top of the image to begin scanning another frame is called vertical blanking. During horizontal or vertical blanking, the beam of electrons is blanked out or turned off, so as not to cause any voltage to flow. This way the retrace is not visible.
The horizontal blanking interval is the separation between consecutive lines. The vertical blanking interval is the separation between consecutive frames. As the video image is integrated with other images, using equipment such as video editing systems or video switchers, the change from source to source occurs during the vertical blanking interval after a complete image has been drawn.
This can be compared to splicing on the frame line of a film frame. Horizontal blanking actually occurs slightly before the beginning of each line of video information. Vertical blanking occurs after each frame. The video picture itself is referred to as active video. In the NTSC system, active video uses 480 out of the 525 lines contained in one frame. PAL and SECAM use 580 active lines out of the 625 total lines. Blanking functions as the picture frame around the active video. It is a necessary component of the TV signal, even though the electron beam is shut off. Blanking specifications are an important part of the picture specifications.
Persistence of Vision
Film is shot at 24 frames per second. However, if it were projected at that rate, a flickering quality to the moving image would be noticeable. The flickering is a result of the phenomenon that lets us perceive motion in a movie in the first place. That phenomenon is called persistence of vision.
Persistence of vision means that the retina, the light-sensitive portion of the human eye, retains the image exposed to it for a certain period of time. This image then fades as the eye waits to receive the next image. The threshold of retention is 1/30 to 1/32 of a second. If the images change on the retina at a rate slower than that, the eye sees the light and then the dark that follows. If the images change at a faster rate, the eye sees the images as continuous motion and not as individual images. This concept was the basis of a device developed in the 19th century called the Zoetrope (Figure 3.3). By viewing a series of still images through a small slit in a spinning wheel, the images appeared to move.
In film, this concept is exploited by simply showing each frame twice. The picture in the gate of the film projector is held, and the shutter opens twice. Then the film moves to the next frame and the shutter again reveals the picture twice. In this way, 48 frames per second are shown while the projector runs at 24 frames per second, and the eye perceives smooth, continuous motion.
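A quick check of that arithmetic (my own sketch, not from the text):

```python
film_frame_rate = 24              # frames per second through the projector gate
shutter_openings_per_frame = 2    # each frame is shown twice

images_per_second = film_frame_rate * shutter_openings_per_frame   # 48
retention_threshold = 30          # roughly 1/30 s, the slower end of the stated range

print(images_per_second, images_per_second > retention_threshold)  # 48 True: no visible flicker
```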

Camera Tube


In the camera pickup tube, there are horizontal deflection coils and vertical deflection coils. They move the electron beam across the target as well as up and down (Figure 2.4). A series of grids inside the neck of the pickup tube focuses the electron beam and keeps the beam perpendicular to the target. This keeps the aperture as small as possible and, therefore, the image as sharp as possible.
In the television system that is used in the United States, the electron beam will scan back and forth across the target 525 times in each television frame. Thus each frame in the television signal is composed of 525 scan lines. It does not matter what size the camera is or what size the pickup tube or monitor is. The total number of lines scanned from the top of the frame to the bottom of the frame will always be 525.
The image created in the video camera has now been turned into an electronic signal of varying voltages. As an electronic signal, the television image can be carried by cables, recorded on videotape machines, or even transmitted through the air.

Displaying the Image
There is a peculiar problem that is caused by lenses. A right-side-up image coming through the face of a lens will be inverted, or turned upside down, as it comes out of that lens. In film, this is not a serious problem. Although the image is recorded upside down on the film, when it goes back through a lens during projection, it is once again inverted, and the image on the movie screen is displayed right side up.

In video, the camera lens causes the image to be focused upside down on the face of the target (see Figure 2.1). There is no lens in front of a television monitor or receiver to flip the upside-down image right side up again. The television image is inverted by scanning the image in the camera from the bottom to the top, instead of from the top down. On the receiver, or monitor, the scan is from top to bottom. This way the image appears right side up on the monitor.

The varying voltages generated by the camera can be converted back into light. This electrical energy powers an electron gun in the television receiver or monitor. That gun sends a stream of electrons to the face of the picture tube in the receiver. Changing voltages in the video signal cause chemical phosphors on the inside face of the receiver tube to glow with intensity in direct proportion to the amount of voltage. The image that originated in the tube camera is thus recreated, line by line. Motion and detail are all reproduced.

CCD Cameras
The pickup tubes and the scanning yokes needed to drive the tube cameras have been eliminated and replaced by a light-sensitive chip (Figure 2.5). The chip is a charge-coupled device, or CCD, from which this type of camera gets its name. CCD cameras are also referred to as chip cameras.
A CCD is a chip that contains an area, or site, covered with thousands, and in some instances millions, of tiny capacitors or condensers (devices for storing electrical energy). Consumer digital still cameras have chips that can contain as many as five million sites, or five megapixels. This chip came out of the technology that was developed for EPROM (Erasable Programmable Read-Only Memory) chips. They are used for computer software where updates or changes can occur. When the information is burned onto an EPROM, it is meant to be semi-permanent. It is erasable only under high-intensity ultraviolet light.
In a CCD camera, the light information that is converted to electrical energy is deposited on sites on the chip. Unlike an EPROM, however, it is easily removed or changed. The sites are tiny condensers that hold an electrical charge and are separated from each other by insulating material. This prevents the charge from leaking off. The chip is very efficient and can hold the information for extended periods of time. The charge can be released and then replaced by the next set of charges.

Camera Chips
Inside the chip camera, light coming through the lens is focused on a chip (Figure 2.5). In the case of cameras that use multiple chips, light entering the camera goes through a beam splitter and is then focused onto the chips, rather than passing through a pickup tube or tubes. A beam splitter is an optical device that takes the light coming in through the lens and divides or splits it. It directs the light through filters that filter out all but one color for each of the chips. One chip sees only red light, one only blue, and one only green. The filters are called dichroic because they filter out two of the three colors. These chips are photosensitive integrated circuits. When light strikes the chip, it charges the chip's sites with electrical energy in proportion to the amount of light that strikes the chip.
In other words, the image that is focused on the chip is captured by the photosensitive surface as an electrical charge. This electrical charge is then read off the chip, site by site. The technology behind these chips allows them to shoot bright light without overloading.
However, if the light is bright enough, the charge can spill over from one site to the next. This can cause the edges of an object within an image to smear or lag.
To prevent this, an optical grid or black screen is laid over the face of the chip so that between the light-sensitive sites there is both insulation and light-absorbing material. The same process is used in a video monitor, where a shadow mask is used to prevent excess light from spilling over between adjacent phosphor groups on the screen, which would cause a blurring of the image.
To capture the information stored on the chip, the chip is scanned from site to site, and the energy is discharged as this happens. A numerical value is assigned as each site is scanned, according to the amount of electrical energy present. This numerical information is converted to electrical energy at the output of the camera. This is part of the digitizing process, as the numerical value is converted to computer data for storage and transmission.
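As a simplified sketch (my own illustration; the tiny chip size and 8-bit quantization are assumptions for the example), here is the idea of scanning the sites and assigning each charge a numerical value:

```python
import numpy as np

rng = np.random.default_rng(1)
charge = rng.uniform(0.0, 1.0, size=(4, 6))     # a tiny 4x6 "chip": charge per site, 0..1

LEVELS = 256                                    # assume 8-bit quantization for the example
digital = np.round(charge * (LEVELS - 1)).astype(np.uint8)

# Sites are scanned in order, row by row, and each value is read out in turn.
readout = digital.flatten()
print(readout[:6])
```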
Lower-end consumer cameras typically have one CCD chip, while most professional or prosumer cameras have three. In consumer cameras, chips resemble the construction of TV receiver tubes. All three colors (red, green, and blue) are present on the one chip. There is no need for three chips and a beam splitter. Typically, the larger the size of the CCD or CCDs in the camera, the better the image quality. For example, a camera with a 2/3-inch chip will capture a better quality image than a camera with a 1/2-inch chip.
On professional cameras, there is one chip for each color: red, green, and blue (Figure 2.6). The resolution in these cameras is much greater; that is, the chips are better able to reproduce details in an image (resolution), which is determined by the number of sites on the chip. The more sites a chip has, the more detailed the stored video information will be. The chip will also be more expensive. Also, through the camera's electronic processing ability, the video image can be altered in several ways. For example, the resolution of an image can be increased without actually having more sites on the chip. An image can be enlarged digitally within the camera, beyond the optical ability of the lens. This same processing can also eliminate noise, or spurious information, and enhance the image. During the digitizing process, certain artifacts can occur in the video that can be a problem. Through image processing in the camera, these artifacts can be blended to make them less noticeable.

Sometimes these problems can also be overcome by changing a camera angle or altering the lighting. Because of their small size and minimal weight, chip cameras have become very useful in field production, news work, documentaries, and even low-budget films. With their resistance to smearing and lagging, and their ability to work in low-light situations, they have also found a use in studios.

Electronic Photography


Video starts with a camera, as does all picture taking. In still and motion-picture film photography, there is a mechanical system that controls the amount of light falling on a strip of film. Light is then converted into a pattern of varying chemical densities on the film.
As a physical medium, film can be cut, spliced, edited, and manipulated in other ways as well. In electronic photography, the light from an object goes through a lens, as it does in film photography. On the other side of the video camera lens, however, light is converted to an image by an electronic process as opposed to a mechanical or chemical process. The medium for this conversion has changed over the years. It began with tube cameras and has progressed to completely electronic components. The tube cameras will be discussed first, followed by a discussion of the same process as it occurs in digital cameras.

Tube Cameras
In a video tube camera, the lens focuses the image on the face of a pickup tube inside the camera. The face of the pickup tube is known as the target (Figure 2.1). The target is light-sensitive, like a piece of film. When light shines on the face of the target, it conducts electricity in proportion to the amount of light that is striking its surface. Without light on the face of the target, the target resists the flow of electricity.

A stream of electrons, called the beam, comes from the back end of the tube and scans back and forth across the face of the target on the inside of the pickup tube. The electrical current generated is either allowed to pass from the target to the camera output or not, depending on the amount of resistance at the face of the target. The amount of resistance varies depending on how much light is shining on the target. In every video camera, there are adjustments for the beam intensity and the sensitivity of the face of the target. The target acts as either an insulator, when it's not exposed to light, or as a conductor when light shines on its face. The electrical signal that flows from the target is, in effect, the electronic recreation of the light coming from the scene at which the camera is aimed.

Scanning the Image
Scanning the image begins with the beam of electrons sweeping back and forth across the inside face of the target. Where the electron beam strikes the face of the target, it illuminates an area the same size as the electron beam. This “dot” of electron illumination is called the aperture (Figure 2.2).


The dot or aperture is the smallest size that an element of picture information can be. The larger the aperture, the less detail in the picture; the smaller the aperture, the more detail. Dot size, or beam aperture, is comparable to drawing with large, blunt crayons or fine-tipped pens. Crayons can outline shapes or color them in; a fine-tipped pen can add texture and small highlights to a drawing. In a digital video signal, these elements are called pixels, short for "picture elements" (Figure 2.3).
The electron beam must always be kept perpendicular to the face of the target. If it were not perpendicular as it scanned back and forth, then only one line in the center of the television image would be in focus. The lines closest to the top and bottom of the picture would be badly distorted, because at these angles the aperture dot would be shaped like an ellipse.
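
A loose software analogy of the scanning idea (not a model of a real pickup tube): treat the target as a two-dimensional grid of light values and read it out line by line into a single stream of signal values, the way the beam turns a two-dimensional image into a one-dimensional electrical signal. The grid and its values are invented for illustration.

    import numpy as np

    def raster_scan(target: np.ndarray) -> np.ndarray:
        """Read a 2-D grid of light values line by line into one 1-D signal.

        The grid spacing stands in for the beam aperture: a coarser grid
        (a bigger "dot") means less picture detail in the resulting signal.
        """
        lines = [target[row, :] for row in range(target.shape[0])]
        return np.concatenate(lines)

    # Invented 3x4 pattern of light values (0 = dark, 255 = bright).
    light = np.array([[0, 64, 128, 255],
                      [0, 64, 128, 255],
                      [255, 128, 64, 0]])
    print(raster_scan(light))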

How Video Works

| 0 comments |
|

Introduction
Since the development of broadcast cameras and television sets in the early 1940s, video has slowly become more and more a part of everyday life. In the early 50s, it was a treat simply to have a television set in one’s own home. In the 60s, television brought the world live coverage of an astronaut walking on the moon. With the 70s, the immediacy of television brought the events of the Vietnam War into living rooms. In the 21st century, with additional modes of delivery such as satellite, cable and the Internet, video has developed into the primary source of world communication.

Video Evolution
Just as the use of this medium has changed over the years, so has its physical nature evolved. The video signal started as analog and has developed into digital with different types of digital formats, including some for the digital enthusiast at home. When television was first created, cameras and television sets required a great deal of room to house the original tube technology of the analog world. In today’s digital society, camera size and media continue to get smaller as the quality continues to improve.
Today, a video image is conveyed using digital components and chips rather than tubes. Although the equipment has changed, some of the processes involved in the origination of the video signal have remained the same. This makes the progression of video from analog to digital not only interesting to study, but crucial in providing a foundation of knowledge upon which the current digital video world operates. So much of today’s digital technology is the way it is because it evolved from analog.

Analog and Digital
No matter how digital the equipment used to capture an image, the eyes and ears perceive the final result as analog. All information from the physical world is analog. A cloud floating by, an ocean wave, and the sounds of a marching band all exist within a spectrum of frequencies that comprise human experience. This spectrum of frequencies can be converted to digital data, or zeros and ones. Human beings, however, do not process digital information, and eventually what a human being sees or hears must be converted back from digital data to an analog form. Even with a digital home receiver, the zeros and ones of the digital signal must be reproduced as an analog experience (Figure 1.1).
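
As a rough illustration of that conversion, the sketch below samples a continuous tone at discrete points in time, quantizes each sample to an 8-bit number, and then converts the numbers back toward an analog form. The tone frequency and sample rate are arbitrary choices for the example.

    import numpy as np

    # Sample a 440 Hz tone (a stand-in for any analog signal) at 8 kHz and
    # quantize it to 8-bit values: the "zeros and ones" of a digital signal.
    sample_rate = 8_000          # samples per second (chosen for the example)
    duration = 0.01              # seconds
    t = np.arange(0, duration, 1 / sample_rate)
    analog = np.sin(2 * np.pi * 440 * t)

    digital = np.round((analog + 1) / 2 * 255).astype(np.uint8)   # 8-bit samples

    # Before a person can hear or see it, it must come back to analog form.
    reconstructed = digital.astype(float) / 255 * 2 - 1
    print(digital[:10], reconstructed[:3])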
In the early days of television, video was captured, recorded, and reproduced as an analog signal. The primary medium for storage was videotape, which is a magnetic medium. The primary system for reproduction was mechanical, using a videotape machine.
 


Videotape, which was developed based on mechanical concepts, is a linear medium. This means that information can only be recorded or reproduced in the order in which it was created. With the advent of digital, the primary system for signal reproduction has become solid-state electronics, incorporating servers and computers. This change has created a file-based system, rather than the tape-based system of the analog era. File-based systems allow random, or nonlinear, access to information without respect to the order in which it was produced or its placement within the storage medium.
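
A toy sketch of the difference, with a Python list standing in for the storage medium: tape-style access has to pass over everything recorded earlier, while file-based access can jump straight to the wanted frame. The frame data are invented for illustration.

    # Invented "recording" of 1000 frames.
    frames = [f"frame-{i}".encode() for i in range(1000)]

    def read_frame_linear(wanted: int) -> bytes:
        """Tape-like access: every frame before the wanted one must be passed over."""
        for position, frame in enumerate(frames):
            if position == wanted:
                return frame
        raise IndexError(wanted)

    def read_frame_random(wanted: int) -> bytes:
        """File-like access: jump straight to the wanted frame."""
        return frames[wanted]

    print(read_frame_linear(750) == read_frame_random(750))  # True, but one took the long way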

Video Applications
Facilities such as cable or broadcast stations, as well as production or post-production companies, are constantly transmitting and receiving video signals. They generally have a number of devices that can be used to capture and reproduce a video signal, such as cameras, videotape recorders (VTRs), videocassette recorders (VCRs), computer hard drives, FireWire drives, and multiple hard drives called RAID arrays (short for Redundant Array of Independent, or Inexpensive, Disks), which are controlled by computer servers. VTRs and computers can be used in many different ways to capture, transmit, or reproduce a video signal.

70V audio speaker systems

| 1 comments | Thursday, July 16, 2009
|

70 volt systems are generally used in commercial applications where many speakers need to be run from one amplifier. This is also called a high-impedance speaker system. The advantage is that, being high impedance, long cable runs of relatively small gauge (usually 20-24 gauge) do not significantly affect the output as they would in a common low impedance speaker system.
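
To see why the thin, long cable matters so much less on a high impedance line, the sketch below compares the power wasted in the same cable resistance when the same power is delivered at 70.7 V versus directly into an 8 ohm speaker. The 50 W figure and 2 ohm round-trip cable resistance are invented for illustration, and the calculation is first-order only.

    def cable_loss_watts(load_power: float, line_voltage: float, cable_ohms: float) -> float:
        """First-order cable loss: I = P / V, loss = I^2 * R.

        Ignores the way cable resistance changes the delivered power; it is
        only meant to show the trend, not to size a real installation.
        """
        current = load_power / line_voltage
        return current ** 2 * cable_ohms

    CABLE_OHMS = 2.0      # invented round-trip resistance of a long, thin run
    POWER = 50.0          # watts wanted at the far end

    # 70 V line: low current, little loss.
    print(cable_loss_watts(POWER, 70.7, CABLE_OHMS))                 # ~1 W

    # Same power into an 8 ohm speaker (about 20 V): much more current, much more loss.
    print(cable_loss_watts(POWER, (POWER * 8) ** 0.5, CABLE_OHMS))   # ~12.5 W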

The speakers themselves are commonly 4 or 8 ohms, but there's a transformer at each speaker that matches that low impedance to the high impedance of the line side. Typically those transformers have multiple output taps so the sensitivity of the speaker (its output volume) can be adjusted as needed.

Amplifiers designed for 70 volt operation often have an output transformer as well, for matching purposes. Typically a 70V line can be driven with a normal audio amplifier if that matching transformer is added. If you have a powerful enough amplifier (high output voltage), you might be able to run the line directly (for example, some powerful PA amplifiers can drive a 70V line directly in bridged mode).
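
A quick way to check whether an ordinary amplifier could drive a 70 V line directly is to ask whether it can swing roughly 70.7 V RMS; bridging roughly doubles the available voltage. The amplifier rating below is a made-up example, and the check ignores whether the amp can also supply the current the loaded line will draw.

    import math

    def amp_voltage_rms(rated_watts: float, rated_load_ohms: float) -> float:
        """Approximate RMS output voltage from a power rating: V = sqrt(P * Z)."""
        return math.sqrt(rated_watts * rated_load_ohms)

    # A hypothetical PA amp rated 160 W per channel into 8 ohms:
    single = amp_voltage_rms(160, 8)   # ~35.8 V RMS: not enough for a 70 V line
    bridged = 2 * single               # bridging roughly doubles the voltage swing
    print(round(single, 1), round(bridged, 1))   # ~35.8 V and ~71.6 V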

Where is a 70 volt system used?
A 70 volt system is used in restaurants, small bars, department stores, and so on. You would want to use this type of system if you plan on powering, say, 10 speakers with one amp. If the amp produces 100 watts, then each speaker would get 10 watts. The transformer at each speaker limits how much power that speaker draws, so no single speaker gets overdriven. You couldn't really connect 10 ordinary speakers to an average amp, because the combined impedance would be too low, and wiring them in a series-parallel arrangement is awkward and hard to balance.
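
Continuing the 100 watt, 10 speaker example, here is a minimal sketch of the arithmetic: each transformer tap presents a load of V^2 / P to the line, and the taps in parallel must not add up to more power than the amplifier can deliver. The tap settings are the ones from the paragraph above.

    LINE_VOLTAGE = 70.7   # nominal 70 V line (70.7 V RMS)

    def tap_impedance(tap_watts: float) -> float:
        """Impedance a transformer tap presents to the 70 V line: Z = V^2 / P."""
        return LINE_VOLTAGE ** 2 / tap_watts

    taps = [10.0] * 10                       # ten speakers, each tapped at 10 W
    total_power = sum(taps)                  # must not exceed the amp rating
    line_load = 1 / sum(1 / tap_impedance(w) for w in taps)   # parallel combination

    print(round(tap_impedance(10.0)))   # ~500 ohms per speaker
    print(round(line_load))             # ~50 ohms total seen by the amplifier
    print(total_power)                  # 100 W drawn from a 100 W amplifier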

Advantages
To an extent, speakers can be added to or removed from a 70 volt system without regard for impedance matching, something you simply can't do with a low impedance system.

Disadvantages
There are two primary disadvantages to a 70 volt system.
1. You are limited to a max amplifier power of 250 watts. Beyond that the transformers saturate and your signal goes to hell.
2. Frequency response is limited, unless serious money is spent on transformers (and it usually isn't). I don't have hard numbers off the top of my head, but a range of 250 Hz to 10 kHz is in the ballpark. This means it works well for voice applications (hence where it's usually seen), but poorly for music.

Direct Injection Box

| 0 comments | Friday, March 6, 2009
|

DI Boxes and Their Use in PA Systems

The DI (Direct Input or Direct Injection) Box is one of the less visible but nevertheless valuable parts of a system.
What it is
Usually, it is a small box with one or more inputs (usually 1/4" jack sockets, although some also have phono and/or XLR sockets) and one or more outputs (always one balanced XLR socket, sometimes one or more jack sockets as well).
What it does
A DI box converts an unbalanced, high impedance signal (the kind of signal generated by most pickups and contact mics) into a balanced, low impedance signal (the kind of signal required by most desks). It also isolates the output signal from the input signal (and most also incorporate an earth lift facility), so it can be used - as a temporary measure in an emergency - to cure ground loop problems in other parts of the signal chain (e.g. between desk and power amplifier).
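
One way to see why this conversion matters: a high impedance source driving a long unbalanced cable forms a low-pass filter with the cable's capacitance, so treble disappears before the signal reaches the desk. The sketch below estimates the corner frequency under assumed figures (a roughly 1 megohm pickup treated as a simple resistive source, and about 100 pF of cable capacitance per metre).

    import math

    def lowpass_corner_hz(source_ohms: float, cable_pf: float) -> float:
        """-3 dB corner of the RC low-pass formed by source impedance and cable capacitance."""
        return 1 / (2 * math.pi * source_ohms * cable_pf * 1e-12)

    # A hypothetical piezo pickup (~1 Mohm) into 10 m of cable at ~100 pF/m:
    print(round(lowpass_corner_hz(1_000_000, 10 * 100)))   # ~159 Hz: almost all treble gone

    # The same cable fed from a DI box output (~600 ohms):
    print(round(lowpass_corner_hz(600, 10 * 100)))          # ~265 kHz: far above audio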
How it works
DI boxes come in two main flavours: active and passive.

Active DI boxes use electronic circuits to convert and isolate the output signal from the input signal. For this reason, they always require power. Most active DI boxes use batteries - usually 9V PP3 - or phantom power (our own can use either source). However, some DI boxes cannot run on phantom power (i.e. they need batteries), and some cannot use batteries (i.e. they won't work unless the desk can supply phantom power). A few varieties will run from separate power supply units.

Passive DI boxes use transformers to convert and isolate the output signal from the input signal. They are usually cheaper than active DI boxes, and do not require any power. However, the reactance of a transformer increases as frequency rises, so most passive DI boxes will exhibit some high-frequency signal loss. In all but the cheapest passive DI boxes this will not greatly affect the signal within its useful range, and the degree of difference is comparable with the difference between dynamic and condenser microphones.
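
As a rough sketch of the passive approach, impedance is transformed by the square of the transformer's turns ratio. The 12:1 ratio and the desk input impedance below are assumed, typical-looking figures for illustration, not specifications for any particular DI box.

    TURNS_RATIO = 12          # assumed step-down ratio for a passive DI
    DESK_INPUT_OHMS = 1_500   # assumed mic input impedance of the desk

    # Impedance transforms by the square of the turns ratio.
    impedance_ratio = TURNS_RATIO ** 2                      # 144:1
    seen_by_instrument = DESK_INPUT_OHMS * impedance_ratio  # load the pickup actually sees

    # Voltage steps DOWN by the turns ratio, which is why DI outputs sit at mic level.
    print(impedance_ratio)        # 144
    print(seen_by_instrument)     # 216000 ohms: a comfortable load for a pickup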
How do you use it?

If all else fails, read the manual!
You plug a lead carrying the source signal into the input socket, and connect a balanced mic lead to the output socket. You connect the other end of the mic lead to the desk or multicore, switch the phantom power on, and forget about it, unless:

• It distorts (in which case either reduce the input signal at source, or reconnect/switch to a less sensitive input). Exhausted batteries can also cause distortion, so replace the batteries if the first two suggestions have no effect;

• The output is too small, and needs too much gain at the desk (in which case either increase the input signal at source, or reconnect/switch to a more sensitive input);

• It doesn't seem to be working (in which case check on/off and mute switches if it has them, as well as the connections and the phantom power or battery).
Do you need one?

You will benefit from using a DI box if:

• The output signal from an instrument (e.g. keyboard, acoustic guitar pickup, violin contact mic) is unbalanced and/or high impedance, and the desk inputs are more than a few meters away;

• Connecting the instrument directly to the desk's input creates a ground loop (characterized by a hum or buzz at 50 Hz and/or whole multiples of that frequency). This situation can arise where on-stage effects-units or amplifiers form part of the signal path, and have earthed power supplies.
What sort do you need?
It is good practice to install new batteries in all battery-driven devices before the start of any show (so you can be confident they won't die during the encore). This involves the bother and expense of changing batteries frequently, as well as remembering to change them. A passive DI box does not need batteries. If phantom power is not usually available, a high-quality passive DI box will probably be your best option.

In all other cases, active DI boxes are generally a better choice. However, you may want to look for some or all of the following facilities, whichever type you use:

• Extra outputs (either paralleled with the input, or separate buffered sends), so you can use an on-stage combo or monitor amplifier as well as providing a balanced signal for the desk;

• Inputs that cater for both small (pickup level) and large (line or even loudspeaker level) signals. Some DI boxes have a switch for this, and others have an extra input socket, but beware if a DI box has neither;

• A ground-lift switch, or built-in ground-compensation circuit;

• Something to indicate that it is switched on and receiving power.

Mixers in PA Systems Part 1

| 0 comments | Wednesday, February 18, 2009
|

A mixer is an essential piece of hardware for capturing interviews that incorporate more than one talking head. Into the mixer you plug all the microphones and other sound sources you are harvesting during a single recording session. The mixer provides needed amplification, VU (volume) readouts that allow you to monitor input levels for each microphone, and controls for adjusting each input level until a perfectly balanced mix is achieved.

Mixers are rated by the number of channels (inputs) they support; a mixer that provides ports for plugging in four microphones, plus two stereo inputs and two line inputs, will be labeled as an 8-channel mixer. The more expensive the mixer, the larger the number of input channels it supports and the greater control it provides for filtering and tweaking each incoming sound.

Most mixers used in podcasting productions are set up to output a single stereo track or two mono tracks to the sound card in your PC through a line-in or USB port. Yes, you can buy mixers that output multiple simultaneous channels. These are great for recording professional music sessions, but this level of control is usually overkill for most podcasting sessions.

Even the cheapest of mixers will, however, allow you to split your output into two separate mono tracks. Doing so is a great idea for recording sessions using two microphones to record a two-person interview. You use the mixer to bring each voice in separately and adjust them independently later on using a multi-track sound editor — a very useful trick in cases where one voice gets louder and softer and the other stays constant. Truth be told, 90 percent of the podcast developers I know routinely mix all the inputs together in the mixer and send two identical mono tracks out to their PC.
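
Here is a minimal sketch of what that multi-track adjustment looks like in software, using NumPy arrays as stand-ins for the two recorded voices; the signals and gain values are invented for illustration.

    import numpy as np

    def mix_interview(voice_a: np.ndarray, voice_b: np.ndarray,
                      gain_a: float, gain_b: float) -> np.ndarray:
        """Adjust two mono tracks independently, then sum them to one mono mix.

        This is the software side of the trick above: record the two voices
        on separate tracks so each can get its own gain afterwards.
        """
        mixed = gain_a * voice_a + gain_b * voice_b
        return np.clip(mixed, -1.0, 1.0)     # keep the result in normal audio range

    # Invented one-second stand-ins for two recorded voices (44.1 kHz, mono).
    rng = np.random.default_rng(0)
    quiet_guest = 0.1 * rng.standard_normal(44_100)
    loud_host = 0.8 * rng.standard_normal(44_100)

    balanced = mix_interview(quiet_guest, loud_host, gain_a=4.0, gain_b=0.5)
    print(balanced.shape)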

Most mixers provide both XLR and 1/4 inch inputs for plugging in traditional broadcast microphones. Now that some podcasters are using USB mics, mixers that accept USB input are also becoming common. USB input is also useful for bringing in sound from USB-based recorders or MP3 players.

You need to make sure the mixer you select supports enough channels to connect to all the other devices you will need to plug into the mixer (CD players, turntable, midi out from your synthesizer, etc.). Give yourself room to grow — if you plan to use two microphones for a typical podcast, get a mixer that accepts input from at least four microphones. I can’t tell you how many podcasters I have talked to who underestimated the amount of equipment they would be using three months later.

Also make sure your mixer is powered (amplified). You plug the un-amplified microphones into the mixer; it boosts the incoming signal of each mic, mixes them together, and sends out a powered output that is required for the line-in port of your computer. Some mixers also provide phantom power — a necessary feature for those of you using professional condenser mics that need to get their power from an external source.
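
For a sense of how much boost is involved, here is a small worked example. The 2 mV mic level and 1 V line level are assumed nominal figures; real microphones and sound cards vary.

    import math

    def gain_db(v_out: float, v_in: float) -> float:
        """Voltage gain expressed in decibels: 20 * log10(Vout / Vin)."""
        return 20 * math.log10(v_out / v_in)

    # Assumed nominal levels: ~2 mV from a dynamic mic, ~1 V for a line-in port.
    print(round(gain_db(1.0, 0.002)))   # ~54 dB of gain needed from the mixer's preamp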

Acoustics

| 0 comments | Tuesday, February 17, 2009
|

Acoustics is the interdisciplinary science that deals with the study of sound, ultrasound and infrasound (all mechanical waves in gases, liquids, and solids). A scientist who works in the field of acoustics is an acoustician. The application of acoustics in technology is called acoustical engineering. There is often much overlap and interaction between the interests of acousticians and acoustical engineers.

Fundamental concepts of acoustics
The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations.
These steps (cause, transduction, propagation, reception, and effect) can be found in any acoustical event or process. There are many kinds of cause, both natural and volitional. There are many kinds of transduction process that convert energy from some other form into acoustical energy, producing the acoustic wave. There is one fundamental equation that describes acoustic wave propagation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical, or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing at a rock concert.

The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many forms including longitudinal waves, transverse waves and surface waves.

Acoustics looks first at the pressure levels and frequencies in the sound wave. Transduction processes are also of special importance.
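
Two of the basic calculations behind pressure levels and frequencies are sketched below: wavelength from the speed of sound divided by frequency, and sound pressure level in decibels relative to 20 micropascals. The example frequencies and pressure are arbitrary.

    import math

    SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 C
    P_REF = 20e-6            # Pa, reference pressure for SPL in air

    def wavelength_m(frequency_hz: float) -> float:
        """Wavelength = speed of sound / frequency."""
        return SPEED_OF_SOUND / frequency_hz

    def spl_db(pressure_pa: float) -> float:
        """Sound pressure level in dB re 20 uPa: 20 * log10(p / p_ref)."""
        return 20 * math.log10(pressure_pa / P_REF)

    print(round(wavelength_m(100), 2))     # ~3.43 m: a low bass note
    print(round(wavelength_m(10_000), 3))  # ~0.034 m: a high treble tone
    print(round(spl_db(1.0)))              # ~94 dB SPL for 1 Pa, a common calibration level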


 
