More Tactics to Civilize OFDM

   “Tone reservation” is not an agency to book a band for your wedding

I recently explained some of the reasons why OFDM has become ubiquitous in wireless applications, but didn’t say much about the drawbacks or tradeoffs. As an RF engineer, you know there will be many—and they’ll create challenges in measurement and implementation. It’s time to look at one or two.

Closely spaced subcarriers demand good phase noise in frequency conversion, and the wide bandwidth of many signals means that system response and channel frequency response both matter. Fortunately, it’s quite practical to trade away a few of the data subcarriers as reference signals or pilots, and to use them to continuously correct problems including phase noise, flatness and timing errors.

However, pilot tracking cannot improve amplifier linearity, which is another characteristic that’s at a premium in OFDM systems. A consequence of the central limit theorem is that the large number of independently modulated OFDM subcarriers will produce a total signal very similar to additive white Gaussian noise (AWGN), a rather unruly beast in the RF world.

The standard measure for the unruliness of RF signals is peak-to-average power ratio (PAPR or PAR). The PAPR of OFDM signals approaches that of white noise, about 10 to 12 dB. This far exceeds the PAPR of most single-carrier signals, and the cost and reduced power efficiency of the ultra-linear amplifiers needed to cope with it can counter the benefits of OFDM.

A variety of tactics have been used to reduce PAPR and civilize OFDM, and they’re generally called crest factor reduction (CFR). These range from simple peak clipping to selective compression and rescaling, to more computationally intensive approaches such as active constellation extension and tone reservation. The effectiveness of these techniques on PAPR is best seen in a complementary cumulative distribution function (CCDF) display:

The CCDF of an LTE-Advanced signal with 20 MHz bandwidth is shown before and after the operation of a CFR algorithm. Shifting the curve to the left reduces the amount of linearity demanded of the LTE power amplifier.

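If you'd like to see the AWGN-like behavior for yourself, here's a quick numerical sketch in Python (illustrative only) that builds one OFDM symbol from independently modulated subcarriers and computes its PAPR. Repeating this over many symbols and tabulating how often the instantaneous power exceeds each threshold is exactly how a CCDF curve is built.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc = 1024  # number of OFDM subcarriers

# Independently modulate each subcarrier with a random QPSK symbol
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
symbols = rng.choice(qpsk, size=n_sc)

# The time-domain OFDM symbol is the IFFT of the subcarrier values;
# by the central limit theorem its samples are nearly complex Gaussian
x = np.fft.ifft(symbols) * np.sqrt(n_sc)  # scaled to unit average power

papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"PAPR of this symbol: {papr_db:.1f} dB")
```

A single Nyquist-rate symbol typically lands around 8 to 10 dB; oversampling the waveform reveals the still-higher analog peaks between samples.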

Peak clipping and compression have not been especially successful because they are nonlinear transformations. Their inherent nonlinearity can cause the same problems we’re trying to fix.

As you’d expect, it’s the more DSP-heavy techniques that provide better PAPR reduction without undue damage to modulation quality or adjacent spectrum users. This is yet another example of using today’s rapid increases in processing power to improve the effective performance of analog circuits that otherwise improve much more slowly on their own.

In the tone reservation technique, a subset of the OFDM data subcarriers is sacrificed or reserved for CFR. The tones are individually modulated, but not with data. Instead, appropriate I/Q values are calculated on the fly to counter the highest I/Q excursions (total RF power peaks) caused by the addition of the other subcarriers.

Since all subcarriers are by definition orthogonal, the reserved ones can be freely manipulated without affecting the pilots or those carrying data. Thus, in theory the cost of CFR is primarily computational power along with the payload capacity lost from the sacrificed subcarriers.
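To make the idea concrete, here's a toy tone-reservation loop in Python. It uses a simple clip-and-project iteration (clip the peaks, then keep only the part of the clipping noise that lands on the reserved tones), which is cruder than published active-set optimizers but shows the essential property: the data tones come through untouched. The tone choices and clip level here are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sc = 256
reserved = rng.choice(n_sc, size=11, replace=False)    # tones sacrificed for CFR
data_tones = np.setdiff1d(np.arange(n_sc), reserved)   # everything else carries data

X = np.zeros(n_sc, dtype=complex)
X[data_tones] = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]),
                           size=len(data_tones))

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

x = np.fft.ifft(X)
before = papr_db(x)

y = x.copy()
for _ in range(20):
    rms = np.sqrt(np.mean(np.abs(y) ** 2))
    limit = 1.6 * rms                                   # clip ~4 dB above average
    clipped = np.where(np.abs(y) > limit, limit * y / np.abs(y), y)
    noise = np.fft.fft(clipped - y)
    noise[data_tones] = 0.0     # project the correction onto reserved tones only
    y += np.fft.ifft(noise)

print(f"PAPR before: {before:.1f} dB  after: {papr_db(y):.1f} dB")
```

With only 11 of 256 tones and this naive iteration you'll see a modest reduction; the optimized active-set approach discussed below does considerably better. The key check is that the FFT of the corrected signal still matches the original data exactly on the data tones.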

The full nature of the cost/benefit tradeoff is more complicated in practice but one example is discussed in an IEEE paper: “By sacrificing 11 out of 256 OFDM tones (4.3%) for tone reservation, over 3 dB of analog PAR reduction can be obtained for a wireless system.” [1]

That’s a pretty good deal, but not the only one available to RF engineers. I mentioned the active constellation extension technique above, and other approaches include selective mapping and digital predistortion. They all have their pros and cons, and I’ll look at those in future posts.

[1] B. S. Krongold and D. L. Jones, “An Active-Set Approach for OFDM PAR Reduction via Tone Reservation,” IEEE Transactions on Signal Processing, 2004. Available at IEEE.org.

Posted in Aero/Def, Measurement techniques, Signal analysis, Signal generation, Wireless

Condition Numbers and MIMO Insights

  What condition is your condition in?*

As we know all too well, the RF spectrum is a limited resource in an environment of ever-increasing demand. Engineers have been working hard to increase channel capacity, and one of the most effective techniques is spatial multiplexing via multiple-input, multiple-output (MIMO) transmission.

MIMO allows multiple data streams to be sent over a single frequency band, dramatically increasing channel capacity. For an intuitive approach to understanding the technique, see previous posts here: MIMO: Distinguishing an Advanced Technology from Magic and Hand-Waving Illustrates MIMO Signal Transmission. And if MIMO sounds a little like CDMA, the difference is explained intuitively here as well.

Intuitive explanations are fine things, but engineering also requires quantitative analysis. Designs must be validated and optimized. Problems and impairments must be isolated, understood and solved. Tradeoffs must be made, costs reduced and yields increased.

Quantitative analysis is a special challenge for MIMO applications. For example, a 4×4 MIMO system using OFDM has 16 transmit paths to measure, with vector results for each subcarrier and each path.

The challenge is multiplied by the fact that successful MIMO operation requires more than an adequate signal/noise ratio (SNR). Specifically, the capacity gain depends on how well receivers can separate the simultaneous transmissions from each other at each antenna. This separation requires that the paths be different from each other, and that SNR be sufficient to allow the receiver to detect the differences. Consider the artificially-generated example channel frequency responses shown below.

A 2 MHz bandpass filter has been inserted into one channel frequency response of a MIMO WiMAX signal with 840 subcarriers. The stopband attenuation of the filter will reduce SNR at the receiver and impair its ability to separate the MIMO signals from each other.


The bandpass filter applied to one channel will impair MIMO operation for the affected OFDM subcarriers in some proportion to the amount of attenuation. The measurement challenge is then to quantify the effect on MIMO operation.

The answer is a measurement called the MIMO condition number. It’s a ratio of the maximum to minimum singular values of the matrix derived from the channel frequency responses and used to separate the transmitted signals. You can find a more thorough explanation in the application note MIMO Performance and Condition Number in LTE Test, but from an RF engineering point of view it’s simply a quantitative measure of how good the MIMO operation is.

Condition number quantifies the two most important problems in MIMO transmission: undesirable signal correlation and noise. I’ll discuss signal correlation in a future post; here I’ll focus on the effects of SNR by showing the condition number resulting from the filtering in the example above.

Condition number is a ratio of singular values and is always a real number greater than or equal to one. It is often expressed in dB and plotted for each OFDM subcarrier. The ideal value for MIMO is 1:1 or 0 dB, and values below 10 dB are desirable. In this measurement example the only signal impairment is degraded SNR, caused by the bandpass filter.

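For the matrix-inclined, here's what the measurement boils down to, sketched in Python with made-up 2×2 channel matrices. The 20 dB attenuation mimics the stopband of the filter in the example; the numbers are illustrative, not measured.

```python
import numpy as np

def condition_number_db(H):
    s = np.linalg.svd(H, compute_uv=False)   # singular values, largest first
    return 20 * np.log10(s[0] / s[-1])       # max/min ratio expressed in dB

# One subcarrier's 2x2 channel matrix: direct paths near unity, some crosstalk
H_flat = np.array([[1.0, 0.3],
                   [0.2, 1.0]], dtype=complex)

# Same channel with one receive path attenuated 20 dB (x0.1 in amplitude),
# as it would be inside the stopband of the bandpass filter
H_filtered = H_flat.copy()
H_filtered[1, :] *= 0.1

print(f"{condition_number_db(H_flat):.1f} dB")      # well-conditioned
print(f"{condition_number_db(H_filtered):.1f} dB")  # separation now demands far more SNR
```

The filtered case pushes the condition number well above 10 dB, which is exactly the warning sign described above.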

Condition-number measurements are an excellent engineering tool for several reasons:

  • They effectively measure MIMO operation by combining the effects of noise and undesirable channel correlation.
  • They are measured directly from channel frequency response, without the need for demodulation or a matrix decoder.
  • They are a frequency- or subcarrier-specific measurement, useful for uncovering frequency response effects.
  • They relate a somewhat abstract matrix characteristic to practical RF signal characteristics such as SNR.

The last point above is especially significant for understanding MIMO operation: With condition number expressed in dB, if the condition number is larger than the SNR of the signal, it’s likely that MIMO separation of the multiple data streams will not work correctly.

*I don’t know about you, but I can’t hear the phrase “condition number” without thinking of the Mickey Newbury song Just Dropped In (To See What Condition My Condition Was In), made famous by Kenny Rogers & the First Edition way back in 1968. Did Newbury anticipate MIMO?
Posted in Measurement techniques, Measurement theory, Signal analysis, Signal generation, Wireless

Direct Digital Synthesis Pays Dividends in Signal Analyzer Performance

  An alternative to PLLs changes the phase noise landscape

Phase-locked loops (PLLs) in radio receivers date back to the first half of the 20th Century, and made their way into test equipment in the second half. In the 1970s, PLLs supplanted direct analog synthesizers in many signal generators and were used as the local oscillator (LO) sections of some spectrum analyzers.

In another example of Moore’s Law, the late 1970s and 1980s saw rapid improvements in PLL technology, driven by the evolution of powerful digital ICs. These controlled increasingly sophisticated PLLs and were the key enabling technology for complex techniques such as fractional-N synthesis.

PLLs are still key to the performance and wide frequency range of all kinds of signal generators and signal analyzers. However, as I mentioned in a recent post, direct digital synthesis (DDS) is coming of age in RF and microwave applications, and signal analyzers are the newest beneficiary.

A good example is the recently introduced Keysight UXA signal analyzer. DDS is used in the LO of this signal analyzer to improve performance in several areas, particularly close-in phase noise. The figure below compares the phase noise of three high-performance signal analyzers at 1 GHz.

The phase noise of the UXA signal analyzer is compared with the performance of the PXA and PSA high-performance signal analyzers. Note the UXA’s lack of a phase noise pedestal and significant improvement at narrow frequency offsets.


Phase noise is a critical specification for signal analyzers, determining the phase noise limits of the signals and devices they can test, and the accuracy of measurements. For example, radar systems need oscillators with very low phase noise to ensure that the returns from small, slow-moving targets are not lost in the phase noise sidebands of those oscillators.

A spectrum/signal analyzer’s close-in phase noise reflects the phase noise of its frequency-conversion circuitry, particularly the local oscillator and frequency reference. The phase noise of PLL-based LOs typically includes a frequency region in which the phase noise is approximately flat with frequency offset. This is called a phase noise pedestal, and its shape and corner frequency are determined in part by the frequency response of the filters in the PLL’s feedback loop(s). The loop-filter characteristics are adjusted automatically, and are sometimes selectable by the analyzer user, as a way to optimize phase noise performance in the offset region most important for a measurement.

With the DDS technology in the UXA, the absence of a pedestal means that improved performance is available over a wide range of offsets up to about 1 MHz. For very wide offsets, a PLL is used along with the DDS to get a lower phase noise floor from its YIG-tuned oscillator.

Despite its obvious advantages, DDS will not fully replace PLLs any time soon. DDS technology is generally more expensive than PLLs, requiring very high-speed digital-to-analog converters with extremely good spurious performance, and high-speed DSP to drive the DACs. In addition, PLLs still offer the widest frequency range, and therefore most DDS solutions will continue to include PLLs.

Posted in Aero/Def, History, Measurement theory, Microwave, Millimeter, Signal analysis, Signal generation, Wireless

Mating Habits of Microwave and Millimeter Connectors

  The tiny ones and the giants

Even if you have especially good eyes—and I do not—it can be difficult to identify the various connector types found on the bench of the typical microwave/millimeter engineer. This is because the frequencies are very high and the dimensions are very small! Nonetheless, accuracy and repeatability are expensive and hard-won at these extreme frequencies, so it’s worth taking the time to get interconnections right.

Getting things right also helps avoid the cost and inconvenience of connector damage. Connectors are designed with mechanical characteristics to avoid mating operations that would cause gross connector damage, but these measures sometimes fail, subjecting you to hazards such as loose nut danger.

The vast majority of intermating possibilities can be summarized in two sentences:

  • SMA, 3.5 mm and 2.92 mm (“K”) connectors are mechanically compatible for connections.
  • 2.4 mm and 1.85 mm connectors are compatible with each other, but not with the SMA/3.5 mm/2.92 mm.
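The two-sentence rule is simple enough to encode. Here's a tiny Python helper (my own convenience sketch, not from any standard) you could keep with your bench notes:

```python
# The two mechanical compatibility families for common microwave/millimeter
# connectors: members of the same family can be intermated without damage
FAMILY = {
    "SMA": "A", "3.5 mm": "A", "2.92 mm (K)": "A",
    "2.4 mm": "B", "1.85 mm": "B",
}

def can_mate(a: str, b: str) -> bool:
    """True if the two connector types share a compatibility family."""
    return FAMILY[a] == FAMILY[b]

print(can_mate("SMA", "2.92 mm (K)"))   # True
print(can_mate("3.5 mm", "2.4 mm"))     # False
```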

A good single-page visual summary is available from Keysight. Here’s a portion of it.

This summary of microwave and millimeter connector types uses color to indicate which types can be intermated without physical damage.


Avoiding outright damage is important; however, performance-wise, it’s a pretty low bar for the RF engineer. Our goal is to optimize performance where it counts, and microwave and millimeter frequencies demand particular care.

For example, intermating different connector types, even when they’re physically compatible, has a real cost in impedance match (return loss) and impedance consistency. This has implications for amplitude accuracy and repeatability, with examples described in the March 2007 Microwave Journal article Intermateability of SMA, 3.5 mm and 2.92 mm connectors.

And it isn’t just mating different connector types that will give you fits. Like teenagers, it seems you can’t send millimeter signals anywhere without them getting into some sort of trouble. All kinds of connectors, adapters and even continuous cabling will affect signals to some degree, and suboptimal connection performance can be a hard problem to isolate.

Even connector savers, a good practice recommended here, add the effects of one more electrical and mechanical interface. As always, it’s a matter of optimizing tradeoffs, though of course that’s job security for RF engineers.

One approach to mastering the connection tradeoffs is to eliminate some adapters by using cables different from the usual male-at-each-end custom. Cables can take the place of connector savers and streamline test connections, especially when you’ll be removing them infrequently.

While you’re at it, consider cable length and quality carefully. Good cables can be expensive but may be the most cost effective way to improve accuracy and repeatability.

Finally, what about those huge connectors you see on some network analyzers and oscilloscopes? These are the ones that require a 20 mm wrench or a special spanner or both. The threads on some connector parts appear to be missing, though there’s a heck of a lot of metal otherwise. Here are two examples:

Male and female examples of NMD or ruggedized millimeter connectors. The larger outer dimensions provide increased robustness and stability.


The large connectors are actually NMD or ruggedized versions of 2.4 mm and 1.85 mm connectors, providing increased mechanical robustness and stability. They’re designed to mate with regular connectors of the same type, or as a mount for connector savers, typically female-to-female. Test port extension and other cables are also available with these connectors.

I’ve previously discussed the role of torque in these connections. If you’d like something to post near your test equipment, a good summary of the torque values and wrench sizes is available from Keysight.

Posted in Aero/Def, EMI, Hazards, Microwave, Millimeter, Signal generation, Uncategorized, Wireless

Direct Digital Synthesis and Our Share of Moore’s Law

  RF and microwave applications get their own benefits from semiconductor advances

Gordon Moore is well known for his 1965 prediction that the number of transistors in high-density digital ICs would double every two years, give or take. While the implications for processors and memory are well understood, perhaps only RF and microwave engineers recall Moore’s other prediction in that same paper: “Integration will not change linear systems as radically as digital systems.”

Though it’s hard to quantify, it seems that the pace of advances in combined digital/analog circuits is somewhere in the middle: slower than that of processors and memory, but faster than purely analog circuits. To many of us, the actual rate means a lot because analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) bring the power, speed and flexibility of digital circuits to real-world challenges in electronic warfare (EW), wireless, radar, and beyond.

Direct digital synthesis (DDS) technologies are an excellent example, and they’re becoming more important and more prominent in demanding RF and microwave applications. The essentials of DDS are straightforward, as shown in the diagram of a signal source below.

In DDS, memory and DSP drive an RF-capable DAC, and its output is filtered to remove harmonics and out-of-band spurs. Other spurs must be kept inherently low by the design of the DAC itself.

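The diagram's memory-and-DSP-driving-a-DAC structure is easy to sketch in code. Here's a minimal numerically controlled oscillator in Python, a phase accumulator indexing a sine lookup table; the bit widths are arbitrary illustrations, not any instrument's design:

```python
import numpy as np

ACC_BITS = 32   # phase accumulator width: frequency resolution = f_clk / 2**32
LUT_BITS = 12   # sine lookup table address width
LUT = np.sin(2 * np.pi * np.arange(2**LUT_BITS) / 2**LUT_BITS)

def dds_samples(f_out, f_clk, n):
    ftw = round(f_out / f_clk * 2**ACC_BITS)              # frequency tuning word
    acc = (ftw * np.arange(n, dtype=np.int64)) % 2**ACC_BITS   # accumulate per clock
    return LUT[acc >> (ACC_BITS - LUT_BITS)]              # truncated phase indexes LUT

# A 10 MHz tone from a 100 MS/s "DAC": retuning means loading a new tuning
# word, effective at the very next clock cycle, hence the agility
x = dds_samples(10e6, 100e6, 1000)
```

Real DDS hardware adds tricks such as phase dithering to keep the spurs from phase truncation low, and the DAC itself must then preserve that purity.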

Deep memory and high-speed digital processing—using ASICs and FPGAs—have long been used to drive DACs. Unfortunately, most DACs have been too narrowband for frequency synthesis, and wideband units lacked the necessary signal purity. The Holy Grail has been a DAC that can deliver instrument-quality performance over wide RF bandwidths.

Engineers at Keysight Technologies (formerly Agilent) realized that new semiconductor technology was intersecting with this need. They used an advanced silicon-germanium BiCMOS process to fabricate a DAC that is the perfect core for a DDS-based RF/microwave signal generator. Signal purity is excellent even at microwave frequencies, as shown in the spectrum measurement below.


A 10 GHz CW signal measured over a 20 GHz span, showing the output purity of DDS technology in the Keysight UXG agile signal generator.


Compared to traditional phase-locked loops (PLLs) and direct analog synthesizers, DDS promises a number of advantages:

  • Frequency and amplitude agility. With no loop-filter settling, new output values can be implemented at the next DAC clock cycle. As a result, the UXG can switch as fast as 250 ns.
  • Multiple, coherent signals can be generated from one source. DDS can generate both CW and complex or composite signals, continuously or in a changing sequence. This enables generation of scenarios instead of just signals, making the technology well-suited to EW or signal-environment simulation.
  • No phase noise pedestal from a PLL, and no need to trade phase noise performance for agility. PLLs provide a wide frequency range, high resolution and good signal quality, but often require tradeoffs between frequency agility and phase noise performance.
  • Signal coherence, phase continuity and signal generator coordination. Multiple signals from a single generator can be aligned in any way desired, and switching can be phase continuous. Triggers and a shared master clock allow multiple DDS generators to produce coordinated outputs easily and with great precision.

With sufficient DAC performance, DDS is clearly a good fit for signal generation in radar and EW, which need agility and wide bandwidth. DDS also can be valuable in signal analyzers and receivers because fast sweeping/tuning and the lack of a phase noise pedestal enables LO designs with better performance and fewer tradeoffs.

DDS implementations are generally more expensive than PLL technologies. However, as Moore predicted, technological evolution creates a dynamic environment in which the optimal solutions change over time. It seems clear that DDS will have an expanding role in better RF measurements, even if it doesn’t happen at the pace of Moore’s law.

For more about the use of DDS in a specific implementation, go to www.keysight.com/find/UXG and download a relevant app note.

Posted in Aero/Def, Microwave, Millimeter, Signal generation, Wireless

OFDM is Ubiquitous. Why?

  One transport scheme to rule them all. And I get to use the word ubiquitous!

In the early 1990s, working with the first vector signal analyzers, I had a front row seat as digital modulation schemes came to the fore. Digital modulation wasn’t new, but the advent of second-generation cellular standards such as GSM, NADC, CDMA/IS-95 and PDC put digital modulation in the hands of the masses.

The pace of innovation seemed never to slacken during the decade: broadcast television began to go digital, and third-generation cellular consumed vast amounts of money and brainpower.

Over a period of years, I was amazed at the proliferation of modulation types and transport schemes, and the apparently endless combinations and refinements. These required an equally constant flow of innovations to enable understanding, analysis, optimization and troubleshooting.

With mild exasperation, I asked my expert colleagues: “Are we going to continue to see this constant rollout of different modulation types and transport schemes?” The nearly universal answer was, “Yes, for quite a while.”

They were correct, but an important trend emerged late in the decade. One transport scheme grew from niche to dominance in the following decade and beyond: orthogonal frequency division multiplexing or OFDM.

I’ve mentioned various aspects of OFDM and its analysis in past posts, but haven’t explained the fundamentals and why it has become so widely used. I can only scratch the surface in this blog format, but can summarize the technological and environmental drivers.

The first word in the acronym is key: Orthogonality of a large number of RF subcarriers is the central feature of this transport scheme. As a transport scheme, rather than modulation type, it can employ multiple different modulations, typically simultaneously. The figure below illustrates this RF subcarrier orthogonality.

The spectrum of three overlapping OFDM subcarriers, in which the center of each subcarrier corresponds with spectral nulls for all of the other subcarriers. This non-interfering overlap provides the orthogonality necessary to allow independent modulation of each subcarrier.


In OFDM, orthogonality and carrier independence do not mean that the subcarriers are non-overlapping. Indeed, they are heavily overlapped and the center frequencies are arranged with a specific close spacing that places the main spectral peak of every subcarrier on frequencies where all other subcarriers have nulls.
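You can verify this directly: over one symbol period, any two subcarriers at this spacing have exactly zero correlation despite the heavy spectral overlap. A few lines of Python make the check (the 64-sample symbol length is just an example):

```python
import numpy as np

n = 64                      # samples per OFDM symbol (the FFT length)
t = np.arange(n)

def subcarrier(k):
    """One symbol period of subcarrier k: a complex exponential."""
    return np.exp(2j * np.pi * k * t / n)

# Correlation over one symbol period: full for a subcarrier with itself,
# exactly zero between any two different subcarriers
print(abs(np.vdot(subcarrier(3), subcarrier(3))))   # 64: full correlation
print(abs(np.vdot(subcarrier(3), subcarrier(4))))   # ~0: orthogonal neighbors
```

This zero inner product is what lets a receiver's FFT pull each subcarrier out cleanly, and it's also why the reserved tones in CFR schemes can be manipulated without disturbing the data.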

With the independence of its subcarriers, OFDM can be seen as a multiplexing or multiple-access technique, somewhat similar to CDMA. It doesn’t increase theoretical channel capacity, but it has benefits that allow systems to operate closer to their theoretical capacity in real-world environments:

  • A high degree of operational flexibility by allocating subcarriers and symbols as needed, along with signal coding schemes, to accommodate different users with different needs for data rates, latency, priority, and more.
  • Multiple access (OFDMA) to support multiple users (radios) simultaneously using flexible and efficient subcarrier allocations.
  • High symbol and data integrity by transmitting at a relatively slow symbol rate to mitigate multipath effects and reduce the impact of impulsive noise, and by spreading data streams over multiple subcarriers with symbol coding and forward error correction.
  • High data throughput by transmitting on hundreds or thousands of carriers simultaneously and using appropriate signal coding.
  • Robust operation in interference-prone environments due to its spread spectrum structure and tolerance for the loss of a subset of subcarriers.
  • High spectral efficiency by spacing many subcarriers very closely and arranging them to be independent, allowing each subcarrier to be separately modulated.
  • High spatial efficiency through compatibility with spatial multiplexing techniques such as multiple-input/multiple-output (MIMO) transmission.

Potential benefits of OFDM were anticipated for years, but the technique only became practical for wide use as signal processing power became available in high quantity at low cost. As that performance/cost ratio improved, OFDM increased its dominance, and that is a major RF wireless story of the past 15 years or so.

You can read more about the technique in a recent OFDM introduction application note, and I’ll discuss some of the implementation and test implications in future posts.

Posted in Aero/Def, History, Wireless

Preamplifiers: Internal and External, Smart and Not

  Boost those electrons at your first opportunity

Preamplifiers are a time-tested way to improve measurement sensitivity and accuracy for small signals, especially those near noise. Some new external preamps, used alone or along with those internal to signal analyzers, may give your tiny signals the right boost in the right place to make better measurements. In the bargain, they’ll simplify small-signal and noise-figure measurements.

Once you’ve switched attenuation to 0 dB in a signal analyzer, the next step toward better sensitivity is some sort of amplifier. Many signal analyzers offer internal preamplifiers as an option, and it’s generally easier than employing an external preamp. The manufacturer can characterize the internal preamp in terms of gain and frequency response, and this can be reflected in the analyzer’s accuracy specifications.

However, internal preamplifiers may not have quite the gain you want over the frequency range you need, and an internal unit can’t be placed as close as possible to the signal-under-test (SUT). This is important for microwave and millimeter signals because they can’t travel far without significant attenuation, and because they tend to gather unknown and unwanted signals along the way. This is especially troublesome when you’re measuring small signals close to noise.

External preamplifiers are available in a wide range of frequency ranges, gains and noise figures, in both custom and off-the-shelf configurations, and can provide excellent performance. Unfortunately, it can be a challenge to integrate them into an end-to-end measurement. Accurate measurements require correcting for gain versus frequency and, if possible, noise figure, impedance match and temperature coefficients.

That’s where Keysight comes in. It recently introduced several external “smart” preamplifiers that automatically integrate with the measurement system and are compatible with all of the X-Series signal analyzers. They connect directly to the RF input of the signal analyzers, as shown below, and can function as a remote test head, providing amplification closest to the SUT.

An external USB smart preamplifier connected to an X-Series signal analyzer. The preamp can serve as a high-performance remote test head for spectrum and noise-figure measurements. The USB cable connecting the analyzer and preamplifier is not shown.


The U7227A/C/F preamplifiers use a single USB connection to identify themselves to the analyzer and download essential information such as gain versus frequency, noise figure and S-parameters.

As described in a previous post about smart external mixers, the combination of downloaded data and analyzer firmware fully integrates the amplifier into the measurement setup and effectively extends the measurement plane to its input. This allows Keysight to provide a complete measurement solution with very high performance and allows you to focus on critical measurements instead of system integration.

The USB preamplifiers have high gain and very low noise figure, and can be used in combination with the optional internal preamplifiers of the X-Series signal analyzers. The result is a very impressive system noise figure, as shown in the example below.
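The improvement follows directly from the Friis cascade formula: a low-noise, high-gain stage in front divides the noise contribution of everything behind it by its own gain. A quick Python check with made-up but plausible numbers (not specifications for any particular preamp or analyzer):

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10)

# Hypothetical cascade: external preamp ahead of the analyzer's front end
preamp_gain_db = 25.0
preamp_nf_db = 4.0
analyzer_nf_db = 22.0     # analyzer alone, no preamp

# Friis formula for cascaded stages: F_total = F1 + (F2 - 1) / G1
f_total = (db_to_lin(preamp_nf_db)
           + (db_to_lin(analyzer_nf_db) - 1) / db_to_lin(preamp_gain_db))
nf_total_db = 10 * math.log10(f_total)

print(f"System noise figure: {nf_total_db:.1f} dB")   # ~4.8 dB vs 22 dB alone
print(f"DANL: {-174 + nf_total_db:.1f} dBm/Hz")       # vs the -174 dBm/Hz kTB floor
```

With these illustrative numbers the preamp improves the system noise figure by about 17 dB, and the displayed average noise level improves nearly dB-for-dB with it.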

The displayed average noise level of the Keysight PXA signal analyzer is shown without a preamp (top), with the internal preamp (middle) and with the addition of the external USB preamp (bottom). Note the measured 13 GHz noise density at the bottom of the marker table of -171 dBm/Hz.


The performance and USB connectivity of the external preamps improves and simplifies noise-figure measurements and analyzer sensitivity, giving those few critical electrons a boost just when they need it most.

For more detail please see the USB preamplifier technical overview.

Posted in Aero/Def, EMI, Measurement techniques, Microwave, Millimeter, Signal analysis, Wireless

EVM: Its Uses and Meanings as a Residual Measurement

  Is technology repeating itself or just rhyming?

“History does not repeat itself, but it rhymes” is one of the most popular quotes attributed to Mark Twain. Though there is no clear evidence that he ever said this, it certainly feels like one of his. It says so much in a few words, and reflects his fascination with history and the behavior of people and institutions.

Recently, history rhymed for me while making some OFDM demodulation measurements and looking at the spectrum of the error vector signal. It brought to mind the first time I looked beyond simple error vector magnitude (EVM) measurements to the full error vector signal and understood the extra insight it could provide in both spectrum and time-domain forms.

The rhyme in the error vector measurements—as residual error or distortion measurements—took me all the way back to the first distortion measurements I made with a simple analog distortion analyzer. Variations on that method are still used today, and the approach is summarized below.

A simple distortion analyzer uses a notch filter to remove the fundamental of a signal and a power meter to measure the rest. This is a measurement of the signal’s residual components, which can also be analyzed in other ways to better understand the distortion.

The basic distortion analyzer approach uses a power meter and a switchable band-reject or notch filter. First, the full signal is measured to provide a power reference, and then the filter is switched in to remove the fundamental.

The signal that remains is a residual, containing distortion and noise, and can be measured with great sensitivity because it’s so much smaller than the full signal. That’s a big benefit of this technique, and why filters—including lowpass and highpass—are still used to improve the sensitivity and accuracy of signal measurements. Those basic distortion analyzers usually had a post-filter output that could be connected to an oscilloscope to see if the distortion could be further characterized.
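
That post-filter idea is easy to mimic numerically. Here's a toy sketch of my own (the function name, notch width, and test signal are illustrative, not taken from any instrument) that references the full signal power, notches out the fundamental in the frequency domain, and measures the residual:

```python
import numpy as np

def thd_plus_noise_db(signal, fs, f0, notch_halfwidth_hz=50.0):
    """Estimate a THD+N-style residual the way a distortion analyzer
    does: reference the full signal, then notch out the fundamental."""
    n = len(signal)
    spectrum = np.fft.rfft(signal * np.hanning(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    total_power = np.sum(np.abs(spectrum) ** 2)  # full-signal reference

    # "Switch in the notch filter": zero the bins near the fundamental
    residual = spectrum.copy()
    residual[np.abs(freqs - f0) < notch_halfwidth_hz] = 0.0

    return 10.0 * np.log10(np.sum(np.abs(residual) ** 2) / total_power)

# 1 kHz tone with a third harmonic 60 dB down, sampled at 48 kHz
fs = 48_000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 1000 * t) + 1e-3 * np.sin(2 * np.pi * 3000 * t)
print(f"residual = {thd_plus_noise_db(sig, fs, 1000):.1f} dB")  # ≈ -60 dB
```

Just as with the analog instrument, the residual can be measured accurately even when it is tiny, because the huge fundamental is out of the way before anything else is quantified.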

To complete the rhyme, today’s digital demodulation measurements and quality metrics such as EVM or modulation error ratio (MER) are also residual measurements. Signal analyzers and VSAs first demodulate the incoming signal to recover the physical-layer data. They then use this data and fast math to generate a perfect version of the input signal. The perfect or reference signal is subtracted from the input signal to yield a residual, also called the error vector. This subtraction does the job that the notch filter did previously.

The residual can be summarized in simple terms such as EVM or MER. But if you want to understand the nature or cause of a problem and not just its magnitude, you can look at error vector time, spectrum, phase, etc. Here’s an example of measurements on a simple QPSK signal containing a spurious signal with power 36 dB lower.

A QPSK signal in blue contains a spurious signal 36 dB lower. The green trace is error vector spectrum, revealing the spur. A close look at a constellation point (upper left) shows repeating equal-amplitude errors that indicate that the spur is harmonically related to the modulation frequency.

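
To make the idea concrete, here's a simplified sketch of my own (an idealized symbol slicer, not the VSA's actual demodulation algorithm) that recovers QPSK decisions, rebuilds the reference signal, subtracts it, and examines the residual:

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK symbols plus an additive spur whose power is 36 dB lower
n_sym = 4096
bits = rng.integers(0, 4, n_sym)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))       # ideal points
spur = 10 ** (-36 / 20) * np.exp(2j * np.pi * 0.05 * np.arange(n_sym))
received = qpsk + spur

# "Demodulate": slice each sample to the nearest QPSK point, then
# rebuild the perfect reference signal from the recovered decisions
angles = (np.angle(received) // (np.pi / 2)) * (np.pi / 2) + np.pi / 4
reference = np.exp(1j * angles)

# Subtracting the reference does the job the notch filter did
error = received - reference
evm_pct = 100 * np.sqrt(np.mean(np.abs(error) ** 2)
                        / np.mean(np.abs(reference) ** 2))
print(f"EVM = {evm_pct:.2f}%")   # a -36 dB spur gives EVM near 1.6%

# The error vector spectrum makes the spur obvious once QPSK is gone
spectrum_db = 20 * np.log10(np.abs(np.fft.fft(error)) / n_sym + 1e-12)
```

With the modulation removed, the spur dominates the error vector, and its line stands out clearly in the error vector spectrum even though it's all but invisible in the spectrum of the full signal.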
Demodulation and subtraction remove the desirable part of the signal, providing more sensitivity and a tighter focus on distortion or interference. Because all these operations and displays are performed within the signal analyzer application or VSA, you need just one tool to help you understand both the magnitude and cause of problems.

At this point you may also be thinking that demodulation and subtraction could be a way to recover one signal deliberately hidden inside another. They can! I’ve experimented with that very interesting technique, and will explain more in a future post.

To make these explanations clearer, I’ve focused here on single-carrier modulation. These approaches to residual analysis work well for OFDM signals too, and you can see examples in my previous posts The Right View Makes an Obscure Problem Obvious and A Different View Makes a Different Problem Obvious.

Posted in History, Measurement techniques, Measurement theory, Signal analysis, Wireless

Frequency vs. Recency in Advanced Signal Analyzer Displays

  Is “recency” really a word?

My spell checker nags me with a jagged red underline, but yes, “recency” is a legitimate word. And it isn’t one of those words newly invented for a marketing campaign: Merriam-Webster traces it back to 1612, and other dictionaries go back even further.

It’s a good word that means exactly what it sounds like: the quality of being recent. In our world of highly dynamic signals and spectral bands, that quality is becoming ever more useful.

Of course, recency-coded displays have been around for a long time, though more commonly in oscilloscopes than spectrum analyzers. Traditional analog variable-persistence displays naturally highlighted recent phenomena, as the glow from excited phosphors decayed over time. Extending this decay time by an adjustable amount made the displays even more useful.

A current term for this sort of display is “digital persistence,” and in the 89600 VSA software it produces this display of oscillator frequency and amplitude settling:

Digital-persistence spectrum of an oscillator as it settles to new frequency and amplitude values. The brighter traces are more recent, though recency could also be indicated by color mapping.

A good complement to recency is “frequency,” which—in this context—is defined as how often specific frequency and amplitude values occur in a spectrum measurement.

Common terms for this sort of display include frequency of occurrence, density, DPX, and cumulative history. It’s a kind of historical measure of probability, and for the balance of this post I’ll just use the term density.

Thus, recency is a measure of when something happened, while density is a measure of how often.
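
The distinction is easy to model. Here's an illustrative sketch of my own (a toy bitmap model, not any analyzer's implementation): a persistence display that fades old hits, next to a density display that simply counts them:

```python
import numpy as np

def update_displays(persistence, density, spectrum_db, levels, decay=0.9):
    """One display update. 'persistence' fades old hits (recency);
    'density' accumulates every hit equally (frequency of occurrence)."""
    nrows = len(levels)
    # Map each frequency bin's amplitude to a display row
    rows = np.clip(np.digitize(spectrum_db, levels), 0, nrows - 1)
    hits = np.zeros_like(density)
    hits[rows, np.arange(len(spectrum_db))] = 1.0

    persistence = decay * persistence + hits   # recency: old traces dim
    density = density + hits                   # density: pure accumulation
    return persistence, density

# Feed 100 noisy spectrum "frames" into both displays
rng = np.random.default_rng(1)
levels = np.linspace(-100.0, 0.0, 50)          # 50 amplitude buckets
nbins = 256
persistence = np.zeros((50, nbins))
density = np.zeros((50, nbins))
for _ in range(100):
    frame = -70.0 + 5.0 * rng.standard_normal(nbins)
    persistence, density = update_displays(persistence, density, frame, levels)
```

After many frames, the density bitmap holds a full count of every frame, while the persistence bitmap retains mostly the recent ones; color-mapping either array reproduces the kinds of displays discussed here.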

In real-time analyzers—and analog persistence displays—the two phenomena are generally combined in some way. However, although related, they indicate different things about the signals we measure.

Because the 89600 VSA provides them independently, as separate traces with separate controls, I’ll use it for another example and discuss combined real-time analyzer displays in a future post. Here’s a frequency or density display of the infamous 2.4 GHz ISM band:

Off-air spectrum density measurement of 2.4 GHz ISM band, including brief Bluetooth hops and longer WLAN transmissions. Signal values of amplitude and frequency that occur more often are represented by red and yellow, while less-frequent values such as those from the Bluetooth hops are shown in blue.

This pure density display represents a great deal of information about the time occupancy of the ISM band, showing the relatively long duration of the WLAN frames and the brevity of the Bluetooth hops. However, it offers nothing about signal timing: how many bursts, whether they overlap or not, or even whether the Bluetooth hops are sequential.

That leads, perhaps, to a suggestion. While both display types present a lot of information at once—and can show very infrequent signals or behavior—they are optimal for different measurement purposes: if you want to know when something happened, with an emphasis on the recent, use persistence; if you want to distinguish signals by how often they appear, use density.

It’s an over-simplification to say that persistence is best for viewing signals and density is best for viewing spectral bands, but that’s not a bad place to start.

If you’ve used a real-time analyzer you probably noticed that the density displays are usually a kind of hybrid, with an added element of persistence. And you’ve probably heard at least a little about spectrogram displays, which add the time element in a different and very useful way. They’re all excellent tools, and will be good subjects for future posts.

Posted in Aero/Def, EMI, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

Are Your Measurements Corrected? Calibrated? Aligned? Or What?

  Getting the accuracy you’ve paid for

You’ve probably had this experience while using one of our signal analyzers: The instrument pauses what it’s doing and, for some time—a few seconds, maybe a minute, maybe longer—it seems lost in its own world. Relays click while messages flash on the screen, telling you it’s aligning parts you didn’t know it had. What’s going on? Is it important? Can you somehow avoid this inconvenience?

There’s a short answer: The analyzer decided it was time to measure, adjust and check itself to ensure that you’re getting the promised accuracy.

That seems mostly reasonable. After all, you bought a piece of precision test equipment (thanks!) to get reliable answers, so you can do your real job: using RF/microwave technology to make things happen—important things. The last thing you want is a misleading measurement.

That’s not the whole story. Your time is valuable and it’s useful to understand the importance of these operations and whether you can stop them from interrupting your work.

The second short answer: the automatic operations are sometimes important but not crucial (usually). You can do several things to avoid the inconvenience, but it helps to first understand a few terms:

  • Calibrations are the tests, adjustments and verifications performed on an instrument every one to three years. The box is usually sent to a separate facility where calibrations are performed with the assistance of other test equipment.
  • Alignments are the periodic checks and adjustments that an in situ analyzer performs on itself without other equipment or user intervention. The combination of calibration and alignment ensures that the analyzer meets its warranted specifications.
  • Corrections are mathematical operations the analyzer performs internally on measurement results to compensate for known imperfections. These are quantified by calibration and alignment operations.

Alas, this terminology isn’t universal. For example, if you send the query “*CAL?” the analyzer will tell you whether it is properly (and recently) aligned, but will say nothing about periodic calibration. Still, the terms are useful guides to getting reliable measurements while avoiding inconvenience.

As a starting point, you can use the default automatic mode. The designers have decided which circuits need alignment, how often and over what temperature ranges. Unfortunately, this may result in interruptions, and these can be a problem when you’re prevented from observing a signal or behavior you’re trying to understand. It’s especially frustrating when you’re ready to make a measurement and find that the analyzer has shifted into navel-gazing mode.

Switching off the automatic alignments will ensure that the instrument is always ready to measure—and it will notify you when it decides that alignments are needed. You can decide for yourself when it’s convenient to perform them, though this creates a risk that alignments won’t be current when you’re ready to make a critical measurement.

You can schedule alignments on your own, and tell the instrument to remind you once a day or once a week. This is a relatively low-risk approach if the instrument resides in a temperature-stable environment. However, with your best interests in mind, the analyzer will display this stern warning:

Switching off automatic alignment creates a small risk of compromised performance, and produces this popup.

The default setting is governed by time and temperature, and in my experience it’s temperature that makes the biggest difference. I once retrieved an analyzer that had been left in a car overnight in freezing weather and, upon power-up, found that for the first half hour its temperature was slewing so fast that alignments occurred almost constantly.

If you want to optimize alignments for your own situation, just check out the built-in help in the X-Series signal analyzers. You can even go online and download the spectrum analyzer mode help file to your PC and run it from there.

Posted in Aero/Def, EMI, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless
About

Agilent Technologies Electronic Measurement Group is now Keysight Technologies http://www.keysight.com.

My name is Ben Zarlingo and I’m an applications specialist for Keysight Technologies. I’ve been an electrical engineer working in test & measurement for several decades now, mostly in signal analysis. For the past 20 years I’ve been involved primarily in wireless and other RF testing.

RF engineers know that making good measurements is a challenge, and I hope this blog will contribute something to our common efforts to find the best solutions. I work at the interface between Agilent’s R&D engineers and those who make real-world measurements, so I encounter lots of the issues that RF engineers face. Fortunately I also encounter lots of information, equipment, and measurement techniques that improve accuracy, measurement speed, dynamic range, sensitivity, repeatability, etc.

In this blog I’ll share what I know and learn, and I invite you to do the same in the comments. Together we’ll find ways to make better RF measurements no matter what “better” means to you.
