Preamplifiers: Internal and External, Smart and Not

  Boost those electrons at your first opportunity

Preamplifiers are a time-tested way to improve measurement sensitivity and accuracy for small signals, especially those near noise. Some new external preamps, used alone or along with those internal to signal analyzers, may give your tiny signals the right boost in the right place to make better measurements. In the bargain, they’ll simplify small-signal and noise-figure measurements.

Once you’ve switched attenuation to 0 dB in a signal analyzer, the next step toward better sensitivity is some sort of amplifier. Many signal analyzers offer internal preamplifiers as an option, and using one is generally easier than employing an external preamp. The manufacturer can characterize the internal preamp in terms of gain and frequency response, and this can be reflected in the analyzer’s accuracy specifications.

However, internal preamplifiers may not have quite the gain you want over the frequency range you need, and an internal unit can’t be placed as close as possible to the signal under test (SUT). This is important for microwave and millimeter signals because they can’t travel far without significant attenuation, and because they tend to gather unknown and unwanted signals along the way. This is especially troublesome when you’re measuring small signals close to noise.

External preamplifiers are available in a wide variety of frequency ranges, gains and noise figures, in both custom and off-the-shelf configurations, and can provide excellent performance. Unfortunately, it can be a challenge to integrate them into an end-to-end measurement. Accurate measurements require correcting for gain versus frequency and, if possible, noise figure, impedance match and temperature coefficients.

That’s where Keysight comes in. It recently introduced several external “smart” preamplifiers that automatically integrate with the measurement system and are compatible with all of the X-Series signal analyzers. They connect directly to the RF input of the signal analyzers, as shown below, and can function as a remote test head, providing amplification closest to the SUT.

An external USB smart preamplifier connected to an X-Series signal analyzer. The preamp can serve as a high-performance remote test head for spectrum and noise-figure measurements. The USB cable connecting the analyzer and preamplifier is not shown.

The U7227A/C/F preamplifiers use a single USB connection to identify themselves to the analyzer and download essential information such as gain versus frequency, noise figure and S-parameters.

As described in a previous post about smart external mixers, the combination of downloaded data and analyzer firmware fully integrates the amplifier into the measurement setup and effectively extends the measurement plane to its input. This allows Keysight to provide a complete measurement solution with very high performance and allows you to focus on critical measurements instead of system integration.

The USB preamplifiers have high gain and very low noise figure, and can be used in combination with the optional internal preamplifiers of the X-Series signal analyzers. The result is a very impressive system noise figure, as shown in the example below.

The displayed average noise level of the Keysight PXA signal analyzer is shown without a preamp (top), with the internal preamp (middle) and with the addition of the external USB preamp (bottom). Note the measured 13 GHz noise density at the bottom of the marker table of -171 dBm/Hz.
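System noise figures like the one in this example follow from the Friis cascade formula, in which the gain of the first stage suppresses the noise contribution of everything after it. Here is a minimal sketch; the gains and noise figures below are hypothetical stand-ins, not the specifications of any particular preamp or analyzer.

```python
import math

def cascaded_noise_figure(stages):
    """Friis formula: stages is an ordered list of (gain_db, nf_db) tuples."""
    total_f = 0.0
    cum_gain = 1.0
    for i, (gain_db, nf_db) in enumerate(stages):
        f = 10 ** (nf_db / 10)    # noise factor (linear)
        g = 10 ** (gain_db / 10)  # gain (linear)
        if i == 0:
            total_f = f
        else:
            total_f += (f - 1) / cum_gain  # later stages divided by gain ahead of them
        cum_gain *= g
    return 10 * math.log10(total_f)       # system noise figure in dB

# Hypothetical chain: external preamp (20 dB gain, 5 dB NF),
# internal preamp (15 dB gain, 9 dB NF), analyzer front end (22 dB NF).
stages = [(20, 5), (15, 9), (0, 22)]
print(round(cascaded_noise_figure(stages), 2))  # about 5.16 dB
```

Note how the 22 dB front-end noise figure contributes only a fraction of a dB once 35 dB of low-noise gain sits ahead of it, which is why placing the first preamp close to the SUT pays off.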

The performance and USB connectivity of the external preamps improve analyzer sensitivity and simplify noise-figure measurements, giving those few critical electrons a boost just when they need it most.

For more detail please see the USB preamplifier technical overview.

Posted in Aero/Def, EMI, Measurement techniques, Microwave, Millimeter, Signal analysis, Wireless

EVM: Its Uses and Meanings as a Residual Measurement

  Is technology repeating itself or just rhyming?

“History does not repeat itself, but it rhymes” is one of the most popular quotes attributed to Mark Twain. Though there is no clear evidence that he ever said this, it certainly feels like one of his. It says so much in a few words, and reflects his fascination with history and the behavior of people and institutions.

Recently, history rhymed for me while making some OFDM demodulation measurements and looking at the spectrum of the error vector signal. It brought to mind the first time I looked beyond simple error vector magnitude (EVM) measurements to the full error vector signal and understood the extra insight it could provide in both spectrum and time-domain forms.

The rhyme in the error vector measurements—as residual error or distortion measurements—took me all the way back to the first distortion measurements I made with a simple analog distortion analyzer. Variations on that method are still used today, and the approach is summarized below.

A simple distortion analyzer uses a notch filter to remove the fundamental of a signal and a power meter to measure the rest. This is a measurement of the signal’s residual components, which can also be analyzed in other ways to better understand the distortion.

The basic distortion analyzer approach uses a power meter and a switchable band-reject or notch filter. First, the full signal is measured to provide a power reference, and then the filter is switched in to remove the fundamental.

The signal that remains is a residual, containing distortion and noise, and can be measured with great sensitivity because it’s so much smaller than the full signal. That’s a big benefit of this technique, and why filters—including lowpass and highpass—are still used to improve the sensitivity and accuracy of signal measurements. Those basic distortion analyzers usually had a post-filter output that could be connected to an oscilloscope to see if the distortion could be further characterized.
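The arithmetic behind the technique is simple: the residual power, taken as a fraction of the full-signal reference power, gives the total harmonic distortion plus noise (THD+N). A small sketch, using hypothetical power-meter readings:

```python
import math

# Hypothetical readings from the two steps of the measurement:
total_power_mw = 1.0        # full signal, notch filter bypassed
residual_power_mw = 1e-4    # fundamental notched out: distortion + noise

# THD+N expressed as a voltage ratio (percent) and a power ratio (dB)
thd_n_ratio = math.sqrt(residual_power_mw / total_power_mw)
thd_n_db = 10 * math.log10(residual_power_mw / total_power_mw)
print(f"THD+N = {100 * thd_n_ratio:.2f} %  ({thd_n_db:.0f} dB)")  # 1.00 %  (-40 dB)
```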

To complete the rhyme, today’s digital demodulation measurements and quality metrics such as EVM or modulation error ratio (MER) are also residual measurements. Signal analyzers and VSAs first demodulate the incoming signal to recover the physical-layer data. They then use this data and fast math to generate a perfect version of the input signal. The perfect or reference signal is subtracted from the input signal to yield a residual, also called the error vector. This subtraction does the job that the notch filter did previously.

The residual can be summarized in simple terms such as EVM or MER. But if you want to understand the nature or cause of a problem and not just its magnitude, you can look at error vector time, spectrum, phase, etc. Here’s an example of measurements on a simple QPSK signal containing a spurious signal with power 36 dB lower.

A QPSK signal in blue contains a spurious signal 36 dB lower. The green trace is error vector spectrum, revealing the spur. A close look at a constellation point (upper left) shows repeating equal-amplitude errors that indicate that the spur is harmonically related to the modulation frequency.
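The subtraction at the heart of these measurements is easy to sketch numerically. For a spur 36 dB below a unit-power signal, the residual predicts an RMS EVM of about 1.6%. The symbol values and spur frequency below are illustrative, not taken from the measurement above.

```python
import cmath
import math

# Ideal (reference) QPSK symbols, unit magnitude -- a stand-in for the
# analyzer's internally reconstructed "perfect" signal.
ref = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(1024)]

# "Measured" signal: reference plus a spurious tone 36 dB below signal power.
spur_amp = 10 ** (-36 / 20)
meas = [r + spur_amp * cmath.exp(2j * math.pi * 0.07 * n)
        for n, r in enumerate(ref)]

# The residual (error vector) is what remains after subtracting the reference.
error = [m - r for m, r in zip(meas, ref)]
evm_rms = math.sqrt(sum(abs(e) ** 2 for e in error) / len(error))
print(f"EVM = {100 * evm_rms:.2f} %")  # 1.58 %, i.e. 10**(-36/20)
```

In a real analyzer the reference comes from demodulating the input and regenerating the signal from the recovered data, but the subtraction step is exactly this.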

Demodulation and subtraction remove the desirable part of the signal, providing more sensitivity and a tighter focus on distortion or interference. Because all these operations and displays are performed within the signal analyzer application or VSA, you need just one tool to help you understand both the magnitude and cause of problems.

At this point you may also be thinking that demodulation and subtraction could be a way to recover one signal deliberately hidden inside another. They can! I’ve experimented with that very interesting technique, and will explain more in a future post.

To make these explanations clearer, I’ve focused here on single-carrier modulation. These approaches to residual analysis work well for OFDM signals too, and you can see examples in my previous posts The Right View Makes an Obscure Problem Obvious and A Different View Makes a Different Problem Obvious.

Posted in History, Measurement techniques, Measurement theory, Signal analysis, Wireless

Frequency vs. Recency in Advanced Signal Analyzer Displays

  Is “recency” really a word?

My spell checker nags me with a jagged red underline, but yes, “recency” is a legitimate word. And it isn’t one of those words newly invented for a marketing campaign: Merriam-Webster traces it back to 1612, and other dictionaries go back even further.

It’s a good word that means exactly what it sounds like: the quality of being recent. In our world of highly dynamic signals and spectral bands, this quality is becoming ever more useful.

Of course, recency-coded displays have been around for a long time, though more commonly in oscilloscopes than spectrum analyzers. Traditional analog variable-persistence displays naturally highlighted recent phenomena, as the glow from excited phosphors decayed over time. Extending this decay time by an adjustable amount made the displays even more useful.

A current term for this sort of display is “digital persistence,” and in the 89600 VSA software it produces this display of oscillator frequency and amplitude settling:

Digital-persistence spectrum of an oscillator as it settles to new frequency and amplitude values. The brighter traces are more recent, though recency could also be indicated by color mapping.

A good complement to recency is “frequency,” which—in this context—is defined as how often specific frequency and amplitude values occur in a spectrum measurement.

Common terms for this sort of display are frequency of occurrence or density or DPX or cumulative history. It’s a kind of historical measure of probability, and for the balance of this post I’ll just use the term density.

Thus, recency is a measure of when something happened, while density is a measure of how often.
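In code terms, the two accumulation rules might look like this toy sketch, with a hypothetical 10-row amplitude grid per frequency bin: density counts every hit ever seen, while persistence fades old hits each sweep and fully relights the newest ones.

```python
# Toy sketch of the two accumulation rules. The 10 dB display cells
# and the decay factor are hypothetical.

def cell(amp_db):
    """Map an amplitude in dB to a display row (0 = -100 dB, 9 = -10 dB)."""
    return max(0, min(9, int((amp_db + 100) // 10)))

def update_density(counts, trace):
    """Frequency of occurrence: count every (bin, amplitude) hit ever seen."""
    for b, amp in enumerate(trace):
        counts[b][cell(amp)] += 1

def update_persistence(intensity, trace, decay=0.5):
    """Recency: fade every cell each sweep, then fully light the cells just hit."""
    for row in intensity:
        for lv in range(len(row)):
            row[lv] *= decay
    for b, amp in enumerate(trace):
        intensity[b][cell(amp)] = 1.0

# Two frequency bins; a brief "hop" appears once among ten sweeps.
traces = [[-90, -40]] * 9 + [[-30, -40]]
counts = [[0] * 10 for _ in range(2)]
intensity = [[0.0] * 10 for _ in range(2)]
for t in traces:
    update_density(counts, t)
    update_persistence(intensity, t)

print(counts[0])     # density: the hop is a rare count of 1 vs 9
print(intensity[0])  # persistence: the hop's cell is the brightest (1.0)
```

The one-time hop registers faintly in the density map but dominates the persistence map, which is exactly the difference between how often and how recently.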

In real-time analyzers—and analog persistence displays—the two phenomena are generally combined in some way. However, although related, they indicate different things about the signals we measure.

Because the 89600 VSA provides them independently, as separate traces with separate controls, I’ll use it for another example and discuss combined real-time analyzer displays in a future post. Here’s a frequency or density display of the infamous 2.4 GHz ISM band:

Off-air spectrum density measurement of 2.4 GHz ISM band, including brief Bluetooth hops and longer WLAN transmissions. Signal values of amplitude and frequency that occur more often are represented by red and yellow, while less-frequent values such as those from the Bluetooth hops are shown in blue.

This pure density display represents a great deal of information about the time occupancy of the ISM band, showing the relatively long duration of the WLAN frames and the brevity of the Bluetooth hops. However, it offers nothing about signal timing: how many bursts, whether they overlap or not, or even whether the Bluetooth hops are sequential.

That leads, perhaps, to a suggestion. While both display types present a lot of information at once, and can show very infrequent signals or behavior, they are optimal for different measurement purposes: if you want to know when something happened, with an emphasis on the recent, use persistence; if you want to distinguish signals by how often they appeared, use density.

It’s an over-simplification to say that persistence is best for viewing signals and density is best for viewing spectral bands, but that’s not a bad place to start.

If you’ve used a real-time analyzer you probably noticed that the density displays are usually a kind of hybrid, with an added element of persistence. And you’ve probably heard at least a little about spectrogram displays, which add the time element in a different and very useful way. They’re all excellent tools, and will be good subjects for future posts.

Posted in Aero/Def, EMI, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

Are Your Measurements Corrected? Calibrated? Aligned? Or What?

  Getting the accuracy you’ve paid for

You’ve probably had this experience while using one of our signal analyzers: The instrument pauses what it’s doing and, for some time—a few seconds, maybe a minute, maybe longer—it seems lost in its own world. Relays click while messages flash on the screen, telling you it’s aligning parts you didn’t know it had. What’s going on? Is it important? Can you somehow avoid this inconvenience?

There’s a short answer: The analyzer decided it was time to measure, adjust and check itself to ensure that you’re getting the promised accuracy.

That seems mostly reasonable. After all, you bought a piece of precision test equipment (thanks!) to get reliable answers, so you can do your real job: using RF/microwave technology to make things happen—important things. The last thing you want is a misleading measurement.

That’s not the whole story. Your time is valuable and it’s useful to understand the importance of these operations and whether you can stop them from interrupting your work.

The second short answer: the automatic operations are sometimes important but not crucial (usually). You can do several things to avoid the inconvenience, but it helps to first understand a few terms:

  • Calibrations are the tests, adjustments and verifications performed on an instrument every one to three years. The box is usually sent to a separate facility where calibrations are performed with the assistance of other test equipment.
  • Alignments are the periodic checks and adjustments that an in situ analyzer performs on itself without other equipment or user intervention. The combination of calibration and alignment ensures that the analyzer meets its warranted specifications.
  • Corrections are mathematical operations the analyzer performs internally on measurement results to compensate for known imperfections. These are quantified by calibration and alignment operations.

Alas, this terminology isn’t universal. For example, if you execute the query “*CAL?” the analyzer will tell you whether it is properly (and recently) aligned, but will say nothing about periodic calibration. Still, the terms are useful guides to getting reliable measurements while avoiding inconvenience.

As a starting point, you can use the default automatic mode. The designers have decided which circuits need alignment, how often and over what temperature ranges. Unfortunately, this may result in interruptions, and these can be a problem when you’re prevented from observing a signal or behavior you’re trying to understand. It’s especially frustrating when you’re ready to make a measurement and find that the analyzer has shifted into navel-gazing mode.

Switching off the automatic alignments will ensure that the instrument is always ready to measure—and it will notify you when it decides that alignments are needed. You can decide for yourself when it’s convenient to perform them, though this creates a risk that alignments won’t be current when you’re ready to make a critical measurement.

You can schedule alignments on your own, and tell the instrument to remind you once a day or once a week. This is a relatively low-risk approach if the instrument resides in a temperature-stable environment. However, with your best interests in mind, the analyzer will display this stern warning:

Switching off automatic alignment creates a small risk of compromised performance, and produces this popup.

The default setting is governed by time and temperature, and in my experience it’s temperature that makes the biggest difference. I once retrieved an analyzer that had been left in a car overnight in freezing weather and, upon power up, found that for the first half hour it was slewing temperature so fast that alignments occurred almost constantly.
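A time-and-temperature trigger of this sort can be sketched in a few lines. The thresholds here are hypothetical stand-ins for the limits the instrument firmware actually characterizes.

```python
def alignment_due(hours_since_align, temp_drift_c,
                  max_hours=24.0, max_drift_c=3.0):
    """Flag an alignment when too much time has passed or the internal
    temperature has drifted too far since the last alignment."""
    return hours_since_align >= max_hours or abs(temp_drift_c) >= max_drift_c

print(alignment_due(2.0, 0.5))   # recently aligned, thermally stable: False
print(alignment_due(2.0, 8.0))   # cold instrument warming quickly: True
```

An analyzer slewing temperature after a night in a freezing car trips the drift term over and over, which is why the alignments in that story ran almost constantly.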

If you want to optimize alignments for your own situation, just check out the built-in help in the X-Series signal analyzers. You can even go online and download the spectrum analyzer mode help file to your PC and run it from there.

Posted in Aero/Def, EMI, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

Cat’s Whiskers, Spark Gaps, Coherers and Signal Analyzer Detectors

  A rich, entertaining, and enlightening history

As RF engineers we spend most of our time working on the technologies of the present and the future. It can be a constant footrace, so sometimes it’s refreshing to take a look back and see how far we and our predecessors have come. With a history dating back to the very beginnings of RF, detectors or demodulators are an excellent historical example of innovations and progress.

In my post on signal analyzer detectors I mentioned cat’s whiskers and coherers. Though I had never used one, I was familiar with the cat’s whisker detector as a central element in a crystal radio set. The cat’s whisker is a simple rectifier, made by touching a thin metal wire (the “whisker”) to a semiconductor, typically a raw chunk of natural galena.

A tuned circuit is used in the crystal receiver to select a radio station, and the rectifying function of the cat’s whisker serves to extract the audio signal from the RF carrier. In other words, the cat’s whisker is a detector, acting as a demodulator. It’s essentially the same as the IF or video detector found in a signal analyzer, as shown at left in this block diagram.

This partial, simplified block diagram of a spectrum analyzer shows the two detectors and their locations in the signal path. The first detector demodulates the IF signal, converting it into a magnitude value in the same way a cat’s whisker would perform demodulation in a crystal radio.

Note that in this diagram the envelope detector is symbolized by a diode. Detection is accomplished with the assistance of other components such as a capacitor and resistor, but the critical and most challenging component is the rectifier.

From the standpoint of technical progress it’s interesting to note that a solid-state component—the cat’s whisker—came before vacuum tube solutions. The tube solutions were then superseded by newer solid-state diodes.

History has another curve to throw at us as we move further back in time: the coherer. When I first heard the term I had already been working on HF and RF applications for more than a dozen years and was surprised I hadn’t encountered it before. The coherer is another—even older—solid-state device that performed a kind of RF detection.

Though complicated in theory, the coherer is simple in practice: it’s just a pair of electrodes in an insulating capsule, separated by metal filings. RF energy causes the metal particles to cohere and the resistance between the electrodes drops dramatically. The coherer detects RF energy in a way that’s useful for electrical circuits such as an RF telegraph.

A coherer, an early RF detector. Two metal electrodes are separated by metal particles and the resistance between them drops in the presence of RF energy. (Image from Wikimedia Commons)

The earliest radios were RF telegraphs, using spark-gap transmitters in an on/off configuration. They were entirely broadband, in a way that is shocking to modern RF sensibilities. When I screened video of a large spark gap transmitter to groups of RF application engineers in the mid-1990s it always produced an audible gasp as they intuitively grasped the nature of the emissions.

If you’re interested in the topic, this bit of RF history gives me a chance to recommend one of my favorite TV series on technology, the low-budget, refreshingly British, amazingly enlightening Secret Life of Machines by Tim Hunkin.  These programs from the late 1980s do an incredible job of explaining commonplace technology of the home and office such as refrigerators, vacuum cleaners, TVs, VCRs, word processors, fax machines and radios. I think the episodes on radios and fax machines are some of the very best, and are worth watching for their quirky perspective and humor, in addition to their brilliant explanations.

This all reminds me of my adventures many years ago with spark gaps driving large Tesla coils. Perhaps that’s a topic for a future post—but perhaps some RF interference sins should go unconfessed!

Posted in History, Measurement theory, Signal analysis, Wireless

Your Spectrum Measurements May Be More Average Than You Know

  I’m not talking about the quality, just the variance

The basic amplitude accuracy of today’s signal analyzers is amazingly good, sometimes significantly better than ±0.2 dB. Combining this accuracy with precise frequency selectivity over a range of bandwidths—from very narrow to very wide—yields good power measurements of simple or complex signals. It’s great for all of us who seek better measurements!

However, if you’re working with time-varying or noisy signals—including almost all measurements made near noise—you’ve probably needed to do some averaging to reduce measurement variance as a way to improve amplitude accuracy.

As a matter of fact, you may already be doing two or more types of averaging at once. Here’s a summary of the four main averaging processes in spectrum/signal analyzers:

  • Video bandwidth filtering is the traditional averaging technique of swept spectrum analyzers. The signal representing the detected magnitude and driving the display’s Y-axis is lowpass filtered.
  • Trace averaging is a newer technique in which the value of each trace point (bin or bucket) is averaged each time a new sweep is made.
  • The average detector is a type of display detector that combines all the individual measurements making up each trace point into an average for that point.
  • Band power averaging combines a specified range of trace points to calculate a single value for a frequency band.

Depending on how you set up a measurement, some or all of these averaging processes may be operating together to produce the results you see.

The use of multiple averaging processes may be desirable and effective, but as I mentioned in The Average of the Log is Not the Log of the Average, different types of averages—different averaging scales—can produce different average values for the same signal.
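A quick numeric check makes the point. For noise-like signals with exponentially distributed power samples, averaging the dB (log) values reads about 2.51 dB lower than taking the dB value of the averaged power:

```python
import math
import random

random.seed(1)
# Envelope-detected noise: power samples are exponentially distributed.
power = [random.expovariate(1.0) for _ in range(200_000)]

log_of_avg = 10 * math.log10(sum(power) / len(power))             # power average
avg_of_log = sum(10 * math.log10(p) for p in power) / len(power)  # log average

print(f"difference = {log_of_avg - avg_of_log:.2f} dB")  # close to 2.51 dB
```

The 2.51 dB offset is the classic log-versus-power averaging error for noise, which is why the analyzer locking all its averaging processes to one scale matters.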

How do you make the best choice for your measurement, and make sure the averaging scales used are consistent? The good news is that in most cases there is nothing you need to do. Agilent signal analyzers will ensure that consistent averaging scales are used, locking the scale to one of three: power, voltage or log-power (dB).

In addition, Agilent analyzers choose the appropriate scale depending on the type of measurement you’re making. Selecting marker types and measurement applications—such as adjacent channel power, spurious or phase noise—gives the analyzer all the information it needs to make an accurate choice.

If you’re making a more general measurement in which the analyzer does not know the characteristics of the signal, there are a couple of choices you can make to ensure accurate results and optimize speed.

When you want to quickly reduce variance and get accurate results—regardless of signal characteristics—use the average detector.

The function of the average detector is enlarged and shown over an interval of slightly more than one display point. The average detector collects many measurements of IF magnitude to calculate one value that will be displayed at each bucket boundary.

Beyond the accuracy it provides for all signal types, the average detector is extremely efficient at quickly reducing variance and is very easy to optimize: If you want more averaging, just select a slower sweep speed. The analyzer will have more time to make individual measurements for each display point and will automatically add them to the average. Simply keep on reducing sweep speed until you get the amount of averaging you want.
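Conceptually, the average detector is just a power average over each display bucket, as in this toy sketch (linear power units, made-up sample values):

```python
# The average detector condenses many IF magnitude samples into one trace
# point per display bucket. A slower sweep means more samples per bucket.
def average_detector(power_samples, num_points):
    """Power-average the samples that fall into each display bucket."""
    per_bucket = len(power_samples) // num_points
    trace = []
    for i in range(num_points):
        bucket = power_samples[i * per_bucket:(i + 1) * per_bucket]
        trace.append(sum(bucket) / len(bucket))
    return trace

samples = [1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 0.0, 2.0]  # hypothetical samples
print(average_detector(samples, 2))  # two display points: [2.0, 2.0]
```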

The exception to this approach is when you’re measuring small CW spurs near noise, and in that case you may want to use a narrower video bandwidth filter for averaging.

With these two approaches you’ll improve the quality of your signal measurements and reduce their variance, with a minimum of effort and no accidental inconsistencies. Once again, a combination of techniques provides the desired results. For more detail, see Chapter 2 of the updated Application Note 150 Spectrum Analysis Basics.

Posted in Aero/Def, EMI, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

YIG Spheres: The Gems in your Signal Analyzer

  Like an Italian sports car, they combine impressive performance and design challenges

In a 1947 speech, Winston Churchill remarked “…it has been said that democracy is the worst form of government except all those other forms…” Today, I suspect that some microwave engineers feel the same way about YIG spheres as microwave resonator elements. They’re an incredibly useful building block for high-frequency oscillators and filters, but it takes creativity and careful design to tame their sensitive and challenging nature.

The “G” in “YIG” stands for garnet, a material better known in gemstone form. A YIG or yttrium-iron-garnet resonator element is a pinhead-sized single-crystal sphere of iron, oxygen and yttrium. These spheres resonate over a wide range of microwave frequencies, with very high Q, and the resonant frequency is tunable by a magnetic field.
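The tuning relationship is remarkably linear: resonant frequency is proportional to applied field, with a gyromagnetic constant of roughly 2.8 MHz per oersted. A minimal sketch (real preselectors layer characterized corrections on top of this idealized law):

```python
GAMMA_MHZ_PER_OE = 2.8  # approximate gyromagnetic constant for YIG

def yig_field_for_freq(freq_ghz):
    """Applied magnetic field (oersteds) to resonate the sphere at freq_ghz."""
    return freq_ghz * 1000.0 / GAMMA_MHZ_PER_OE

print(round(yig_field_for_freq(10.0)))  # roughly 3571 Oe to tune to 10 GHz
```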

That makes them perfect as tunable elements for microwave oscillators and filters, and in this post I’ll focus on their role in the YIG-tuned filters (YTFs) used as preselectors in microwave and millimeter signal analyzers.

These analyzers typically use an internal version of the external harmonic mixing technique described in the previous post. It’s an efficient way to cover a very wide range of input frequencies using different harmonics of a microwave local oscillator—itself often YIG-tuned!

However, mixers produce a number of different outputs from the same input frequencies, including high-side and low-side products plus many others, typically smaller in magnitude. These undesired mixer products will cause erroneous responses or false signals in the spectrum analyzer display, making wide-span signal analysis very confusing.
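A short sketch shows the scale of the problem, listing the input frequencies that would reach the IF for a single LO setting (frequencies in GHz; the 322.5 MHz IF and 5 GHz LO are plausible example values, not quoted specs):

```python
# For one LO setting, each harmonic n converts inputs at n*f_lo - f_if and
# n*f_lo + f_if to the IF. Without a preselector, a signal at any of these
# frequencies appears on screen.
def response_frequencies(f_lo, f_if, max_harmonic=4):
    hits = []
    for n in range(1, max_harmonic + 1):
        hits.append((n, n * f_lo - f_if))  # low-side product
        hits.append((n, n * f_lo + f_if))  # high-side product
    return hits

for n, f in response_frequencies(f_lo=5.0, f_if=0.3225):
    print(f"harmonic {n}: input at {f:.4f} GHz reaches the IF")
```

Eight candidate input frequencies for a single LO setting, and only one of them is the signal you meant to tune.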

One straightforward solution to this problem is a bandpass filter in the signal analyzer that tracks the input frequency. Here’s an example:

The yellow trace is the frequency response of a YIG preselector bandpass filter as it appears at the signal analyzer IF section. The blue trace shows the raw frequency response, with the preselector bypassed.

YIG technology enables the construction of a tunable preselector filter, wider than the widest analyzer RBW, whose center frequency can be synchronously swept with the analyzer’s center frequency. This bandpass filter rejects any other signals that would cause undesirable responses in the analyzer display.

Problem solved! So why the Churchillian perspective on YIGs? It’s a matter of the costs that come with the compelling YIG benefits:

  • Sensitivity is reduced: The preselector’s insertion loss has a direct impact on analyzer sensitivity.
  • Stability and tuning are challenging: The preselector’s wide, magnetic tuning range comes with temperature sensitivity and a degree of hysteresis. It is a challenge to consistently tune it precisely to the desired frequency, requiring careful characterization and compensation.
  • Bandwidth is limited: The preselector passband is wider than the analyzer’s widest RBW filter, but narrower than some wideband signals that would normally be measured using a digitized IF and fixed LO.

Fortunately signal analyzer designers have implemented a number of techniques to optimize preselector performance and mitigate problems, as described in Agilent Application Note 1586 Preselector Tuning for Amplitude Accuracy in Microwave Spectrum Analysis.

An alternative approach is simply to bypass the preselector for wideband measurements and whenever conditions allow. Many measured spans are not wide enough to show the undesirable mixing products, or the unwanted signal responses can be noted and ignored.

So, just as with democracy and its alternatives, YIG preselectors offer compelling benefits that far outweigh their disadvantages.

If you’d like to know more about harmonic mixing and preselection, see Chapter 7 of the new version of Application Note 150 Spectrum Analysis Basics.

Posted in Aero/Def, Microwave, Millimeter, Signal analysis

External Mixing and Signal Analysis: You may be Doing it Already

  Where should your first mixer be when you’re making high-frequency measurements?

In Torque for Microwave & Millimeter Connections, I complained that engineering was inherently more challenging at microwave and millimeter frequencies. One reason: many factors that can be ignored at lower frequencies really begin to matter. Therefore, it’s important to consider all the tools and approaches that can help you optimize measurements at these frequencies, and this includes external mixing.

In my years of working at lower frequencies I knew about external mixing, but I always thought of it as a rather exotic and probably difficult technique. In reality, it’s a straightforward approach that has significant benefits, and modern hardware is making it both better and easier.

I also realized that I had been using external mixing for years, but at home: the low noise block (LNB) downconverter in my satellite dish. Satellite receivers use external mixing for many of the same reasons engineers do.

For satellite receivers and signal analyzers alike, it's a matter of where you place the first mixer. In analyzing microwave and millimeter signals, the first signal-processing element—other than a preamplifier or attenuator—is generally a mixer that downconverts the signal to a much lower frequency.

There’s no requirement that this mixer be inside the analyzer itself. In some cases there are benefits to moving the mixer outside the analyzer and closer to the signal under test, as shown below.

In external mixing, the analyzer supplies an LO signal output and its harmonics are used by the mixer to downconvert high frequencies from a waveguide input. The result is sent to the analyzer as an IF signal that’s processed by the analyzer’s normal IF section.
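The tuning arithmetic behind this is simple. Here is a minimal sketch of the relationship; the function name and example values are my own illustrations, not taken from any instrument manual.

```python
# Hedged sketch of the external-mixing tuning relationship: the mixer
# responds where f_signal = N * f_LO +/- f_IF, so to place a signal at the
# IF the analyzer tunes its LO to f_LO = (f_signal - f_IF) / N. Names and
# the example values are illustrative, not from a specific instrument.

def lo_frequency_ghz(f_signal_ghz: float, f_if_ghz: float, harmonic_n: int) -> float:
    """LO frequency that converts f_signal to the IF using LO harmonic N."""
    return (f_signal_ghz - f_if_ghz) / harmonic_n

# Example: downconverting a 62 GHz signal with the 6th LO harmonic and a
# 322.5 MHz IF calls for an LO near 10.28 GHz.
f_lo = lo_frequency_ghz(62.0, 0.3225, 6)
```

The same relation, solved the other way, is what maps each IF sample back to a displayed input frequency.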

External mixing has a number of benefits:

  • Flexible, low-loss connection between signal and analyzer. The vital first downconverting element can be placed at the closest and best location to analyze the signal, typically with a waveguide connection. The analyzer can be located for convenience without a loss penalty from sending high frequencies over a distance.
  • Frequency coverage. External mixers are available for frequencies from 10 GHz to the terahertz range, in passive and active configurations.
  • Cost. Signal analysis may be needed over only a limited set of frequencies in the microwave or millimeter range, and a banded external mixer can extend the coverage of an RF signal analyzer to these frequencies.
  • Performance. Measurement sensitivity and phase noise performance can be excellent due to reduced connection loss and the use of high-frequency and high-stability LO outputs from the signal analyzer.

Recent innovations have made external mixers easier to use and have improved their performance. These “smart” mixers add a USB connection to the signal analyzer to enable automatic configuration and power calibration. The only other connection needed is a combined LO output/IF input connection, as shown below.

Agilent’s M1970 waveguide harmonic mixers are self-configuring and calibrating, requiring only USB and SMA connections to PXA and MXA signal analyzers.

The new mixers enhance ease of use, including automatic download of conversion loss for amplitude correction. Nonetheless, they can’t match the convenience and wide frequency coverage of a one-box internal solution that has direct microwave and millimeter coverage. And because external mixing doesn’t include a preselector filter, some sort of signal-identification function will be necessary to highlight and remove signals generated by a mode—LO harmonic or mixing—other than the one for which the display is calibrated (more on this in a future post).
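To see why signal identification is needed, note that without a preselector every mixing mode that satisfies |f_in − N·f_LO| = f_IF reaches the IF, so several input frequencies alias onto one display point. This hedged sketch (my own function name, illustrative numbers) enumerates those responses:

```python
# Without preselection, any input satisfying |f_in - N * f_LO| = f_IF
# converts to the IF. This lists the input frequencies that would all appear
# at the same displayed frequency for a set of LO harmonics; only one of
# them corresponds to the calibrated mode. Values are illustrative.

def responding_inputs_ghz(f_lo_ghz, f_if_ghz, harmonics):
    """Input frequencies that reach the IF via each LO harmonic and sideband."""
    responses = []
    for n in harmonics:
        for sign in (+1, -1):
            f_in = n * f_lo_ghz + sign * f_if_ghz
            if f_in > 0:
                responses.append((n, sign, f_in))
    return responses

# With a 10 GHz LO and a 0.3 GHz IF, harmonics 4 and 6 respond to inputs at
# roughly 39.7, 40.3, 59.7 and 60.3 GHz.
resp = responding_inputs_ghz(10.0, 0.3, [4, 6])
```

Signal-identification routines exploit this: a real signal stays put when the LO is shifted, while responses from other modes move.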

External mixing is now a supported option in Agilent’s PXA and MXA signal analyzers. This is described in the new version of Application Note 150 Spectrum Analysis Basics and in the application note Microwave and Millimeter Signal Measurements: Tools and Best Practices.

 

 

Posted in Aero/Def, Measurement techniques, Microwave, Millimeter, Signal analysis, Wireless

Comparing Coax and Waveguide

  Making the choice for microwave and millimeter connections

I haven’t used waveguide very much, but it’s been an interesting technology to me for many years. I always enjoyed mechanical engineering and I’ve done my share of plumbing—everything from water to oil to milk—so waveguide engages my curiosity in multiple domains.

Coaxial cables and connectors are now readily available at frequencies to 110 GHz and at first glance they seem so much easier and simpler than waveguide. I wondered why waveguide is still in use at these frequencies, so a couple of years ago, while writing an application note, I spoke to electrical and mechanical engineers to understand the choices and tradeoffs.

It’s perhaps no surprise that there are both electrical and mechanical factors involved in the connection decision. At microwave frequencies, and especially in the millimeter range and above, electrical and mechanical characteristics do an intricate dance. Understanding how they intertwine is essential to making better measurements.

Coaxial connections: flexible and convenient. Direct wiring, in its coaxial incarnation, is the obvious choice wherever it can do the job acceptably well. The advances of the past several decades in connectors, cables and manufacturing techniques have provided a wide range of choices at reasonable cost. Coax is available at different price/performance points from metrology-grade to production-quality, and flexibility varies from extreme to semi-rigid. While the cost is significant, especially for precision coaxial hardware, it is generally less expensive than waveguide.

Coax can be an elegant and efficient solution when device connections require some kind of power or bias, such as probing and component test. A single cable can perform multiple functions, and the technique of frequency multiplexing can allow coax to carry multiple signals, including signals moving in different directions. For example, Agilent’s M1970 waveguide harmonic mixers use a single coaxial connection to carry an LO signal from a signal analyzer to an external mixer and to carry the IF output of the mixer back to the analyzer.

All is not lost for waveguide. Indeed, loss is an important reason waveguide may be chosen over coax.

Waveguide: power and performance. Power considerations, both low and high, are often the reasons engineers trade away the flexibility and convenience of coax. In most cases, the loss in waveguide at microwave and millimeter frequencies is significantly less than that for coax, and the difference increases at higher frequencies.

For signal analysis, this lower loss translates to increased sensitivity and potentially better accuracy. Because analyzer sensitivity generally declines with increasing frequency and increasing band or harmonic numbers, the lower loss of waveguide can make a critical difference in some measurements. Also, because available power is increasingly precious and expensive at higher frequencies, the typical cost increment of waveguide may be lessened.
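The loss-versus-frequency trend and its dB-for-dB effect on sensitivity can be sketched with a toy model. The square-root scaling reflects skin-effect-dominated coax loss; the coefficient and the numbers below are illustrative assumptions, not from any cable datasheet.

```python
# Hedged sketch: coax loss dominated by skin effect grows roughly with
# sqrt(frequency), and every dB of loss ahead of the analyzer raises the
# effective noise floor by a dB. The coefficient k_db_per_m is illustrative.

import math

def coax_loss_db(freq_ghz, length_m, k_db_per_m=1.0):
    """Approximate coax loss: k * sqrt(f_GHz) dB per meter."""
    return k_db_per_m * math.sqrt(freq_ghz) * length_m

def effective_noise_floor_dbm(analyzer_danl_dbm, connection_loss_db):
    """Connection loss ahead of the analyzer degrades sensitivity dB-for-dB."""
    return analyzer_danl_dbm + connection_loss_db

# The same 2 m cable run costs about 2.4x the dB loss at 60 GHz vs 10 GHz,
# and each dB comes straight off the measurement's sensitivity.
loss_10 = coax_loss_db(10.0, 2.0)   # ~6.3 dB
loss_60 = coax_loss_db(60.0, 2.0)   # ~15.5 dB
```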

On the subject of power, the lower loss in waveguide comes with higher power-handling capability. As with small signals, the benefit increases with increasing frequency.

As you can see from the summary below, other coax/waveguide tradeoffs may factor into your decision.

Comparing the benefits of coaxial and waveguide connections for microwave and millimeter frequency applications.

Mainstream technologies are extending to significantly higher frequencies and I have already wondered if you can push SMA cables and connectors to millimeter frequencies. In some cases, however, the question may be whether cables of any kind are the best solution, and whether it’s time to switch from wiring to plumbing.

Several application notes are available with information on measurements at high frequencies, including Microwave and Millimeter Signal Measurements: Tools and Best Practices.

 

Posted in Aero/Def, Microwave, Millimeter, Signal analysis, Signal generation

Signal Analysis: What Would You Do With an Extra 9 dB?

  Mapping the benefits of noise subtraction to your own priorities

Otto von Bismarck said that “politics is the art of the possible” and he might as well have been speaking about RF engineering, where the art is to get the most possible from our circuits and our measurements.

The previous post on noise subtraction described a couple of ways that RF measurements could be improved by subtracting most of the noise power in a measuring instrument such as a spectrum or signal analyzer. In some instruments this process is now automated and it’s worth exploring the benefits and tradeoffs as a way to understand the limits of what’s possible.

In the last post I briefly mentioned sensitivity and potential speed improvements; in this post I'd like to show, through one example, what a potent technique noise subtraction can be. One diagram can summarize the benefits and tradeoffs for this example, but it's an unusual format and a little complex, so it deserves some explanation.

Accuracy vs. SNR for noise-like signals and a 95% coverage interval. The blue curves show the error bounds for measurements with noise subtraction and the red curves show the bounds without noise subtraction. Using subtraction provides a 9.1 dB improvement in the required SNR for a measurement with 1 dB error.

I didn’t produce this diagram and confess that I didn’t understand it very well at first glance. The 9.1 dB figure annotating the difference between two curves sounds impressive, but just what does it mean for real measurements?

Let me explain: This is a plot of accuracy (y-axis) vs. signal/noise ratio (SNR, x-axis) for a 95% error coverage interval and for noise-like signals. Many digitally modulated signals are good examples of noise-like signals.

The red curves and the yellow fill indicate the error bounds for measurements made without noise subtraction. Achieving 95% confidence that the measurement error will be less than 1 dB requires an SNR of 7.5 dB or better, keeping error below 2 dB requires an SNR of 3.5 dB, and so on. Note that the mean error is always positive and increases rapidly as SNR is degraded.

Now look at the blue curves and green fill to see the benefit of noise subtraction. In this example the noise subtraction is effective enough to reduce the noise level by about 8 dB, a conservative estimate of the performance of this technique, whether manual or automatic.

First, you can see that the mean error is now zero, removing a bias from the measurement error. Second, the required SNR for 1 dB error has been reduced to -1.6 dB, a 9.1 dB improvement from the measurement made without noise subtraction.
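A quick simulation makes the bias, and its removal, concrete. This is a hedged sketch of the underlying effect only, not the exact 95%-coverage model behind the figure: it measures the average power of a noise-like signal embedded in analyzer noise, with and without subtracting the separately characterized noise power.

```python
# Monte Carlo sketch of the bias analyzer noise adds to a power measurement
# of a noise-like signal, and how subtracting the known noise power removes
# it. All values are illustrative.

import math
import random

def measure_error_db(snr_db, subtract=False, n_samples=100_000, seed=1):
    """Mean error (dB) when measuring a noise-like signal's power in noise."""
    rng = random.Random(seed)
    s_pow = 10 ** (snr_db / 10)          # true signal power (noise power = 1)
    total = 0.0
    for _ in range(n_samples):
        s = complex(rng.gauss(0, math.sqrt(s_pow / 2)), rng.gauss(0, math.sqrt(s_pow / 2)))
        n = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
        total += abs(s + n) ** 2         # the analyzer sees signal-plus-noise power
    p_meas = total / n_samples
    if subtract:
        p_meas -= 1.0                    # subtract the characterized noise power
    return 10 * math.log10(p_meas / s_pow)

# At 0 dB SNR the raw reading is biased roughly +3 dB high; subtracting the
# noise power removes the bias, which is why the blue curves center on zero.
```

The residual scatter after subtraction is what sets the blue error bounds in the figure: the bias is gone, but the measurement variance remains.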

I have complained in the past about the effects of noise on RF measurements and it’s a frustration that many share. However, this example demonstrates the other side of the situation: Subtracting analyzer noise power, either manually or automatically, with technologies such as noise floor extension (NFE) provides big performance benefits.

What would you do with an extra 9 dB? You might use it to improve accuracy. You could trade some of it away for faster test time, improved manufacturing yields, or a little added attenuation to improve SWR, or you might eliminate the cost of a preamplifier altogether. Use it well and pursue your own version of “the art of the possible.”

 

Posted in Aero/Def, EMI, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless
About

Agilent Technologies Electronic Measurement Group is now Keysight Technologies http://www.keysight.com.

My name is Ben Zarlingo and I’m an applications specialist for Keysight Technologies. I’ve been an electrical engineer working in test & measurement for several decades now, mostly in signal analysis. For the past 20 years I’ve been involved primarily in wireless and other RF testing.

RF engineers know that making good measurements is a challenge, and I hope this blog will contribute something to our common efforts to find the best solutions. I work at the interface between Keysight's R&D engineers and those who make real-world measurements, so I encounter many of the issues that RF engineers face. Fortunately I also encounter lots of information, equipment, and measurement techniques that improve accuracy, measurement speed, dynamic range, sensitivity, repeatability, etc.

In this blog I’ll share what I know and learn, and I invite you to do the same in the comments. Together we’ll find ways to make better RF measurements no matter what “better” means to you.
