Measurement Statistics: Comparing Standard Deviation and Mean Deviation

  A nagging little question finally gets my attention

In a recent post on measurement accuracy and the use of supplemental measurement data, the measured accuracy in the figure was given in terms of the mean and standard deviations. Error bounds or statistics are often provided in terms of standard deviation, but why that measure? Why not the mean or average deviation, something that is conceptually similar and measures approximately the same thing?

I’ve wondered about standard and average deviation since my college days, but my curiosity was never quite strong enough to compel me to find the differences, and I don’t recall my books or my teachers ever explaining the practicalities of the choice. Because I’m working on a post on variance reduction in measurements, this blog is the spur I need to learn a little more about how statistics meets the needs of real-world measurements.

First, a quick summary: Standard deviation and mean absolute deviation—also called mean or average deviation—are both ways to express the spread of sampled data. If you average the absolute values of the sample deviations from the mean, you get the mean or average deviation. If you instead square the deviations, the average of the squares is the variance, and the square root of the variance is the standard deviation.
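
As a quick numerical illustration, here is a minimal sketch in Python using NumPy, with made-up sample values, that computes both spread measures directly from their definitions:

    import numpy as np

    # Hypothetical measurement samples (arbitrary values, for illustration only)
    samples = np.array([10.1, 9.8, 10.3, 9.9, 10.0, 10.6, 9.5, 10.2])

    mean = samples.mean()
    deviations = samples - mean

    # Mean (average) deviation: average the absolute deviations from the mean
    mean_abs_dev = np.abs(deviations).mean()

    # Variance: average the squared deviations; standard deviation is its square root
    variance = (deviations ** 2).mean()
    std_dev = np.sqrt(variance)          # same result as samples.std()

    print(f"mean deviation     = {mean_abs_dev:.4f}")
    print(f"standard deviation = {std_dev:.4f}")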

For the normal or Gaussian distributions that we see so often, expressing sample spread in terms of standard deviations neatly represents how often certain deviations from the mean can be expected to occur.

This plot of a normal or Gaussian distribution is labeled with bands that are one standard deviation in width. The percentage of samples expected to fall within that band is shown numerically. (Image from Wikimedia Commons)


Totaling up the percentages in each standard deviation band provides some convenient rules of thumb for expected sample spread (verified numerically in the short sketch after the list):

  • About one in three samples will fall outside one standard deviation
  • About one in twenty samples will fall outside two standard deviations
  • About one in 300 samples will fall outside three standard deviations
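
These rules of thumb are easy to verify numerically from the Gaussian error function; here is a short sketch using only the Python standard library:

    from math import erf, sqrt

    def fraction_outside(k):
        """Fraction of a normal distribution falling outside +/- k standard deviations."""
        inside = erf(k / sqrt(2))   # probability of falling within k standard deviations
        return 1 - inside

    for k in (1, 2, 3):
        frac = fraction_outside(k)
        print(f"outside {k} sigma: {frac:.4f}  (about 1 in {1 / frac:.0f})")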

Compared to mean deviation, the squaring operation makes standard deviation more sensitive to samples with larger deviations. This sensitivity to outliers is often appropriate in engineering, where rare but large deviations can have outsized effects.

Standard deviation is also friendlier to mathematical operations because squares and roots are generally easier to handle than absolute values in operations such as differentiation and integration.

Engineering use of standard deviation and Gaussian distribution is not limited to one dimension. For example, in new calculations of mismatch error the complementary elements of the reflection coefficient both have Gaussian distributions. Standard deviation measures—such as the 95% or two standard deviation limit—provide a practical representation of the expected error distribution.

I’ve written previously about how different views of data can each be useful, depending on your focus. Standard and mean deviation measures are no exception, and it turns out there’s a pretty lively debate in some quarters. Some contend, for example, that mean deviation is a better basis on which to make conclusions if the samples include any significant amount of error.

I have no particular affection for statistics, but I have lots of respect for the insight it can provide and its power in making better and more efficient measurements in our noisy world.

Posted in Measurement theory

A Signal Analyzer Connector Puzzler

  Is something wrong with this picture?

Many of the things that intrigue me do not have the same effect on an average person. However, you are also not an average person—or you wouldn’t be reading this blog. Thus, I hope you’ll find the following image and explanation as interesting and useful as I did. Take a close look at this Keysight X-Series signal analyzer and the bits I’ve highlighted:

The frequency range of this MXA signal analyzer extends to 26.5 GHz but it is equipped with a Type N input connector. Because N connectors are normally rated to 11 or 18 GHz, do we have a problem?


One up-front confession: I looked at this combination of frequency range and input connector for years before it struck me as strange. I vaguely remembered that N connectors were meant for lower frequencies and finally took the time to look it up.

The explanation is only a little complicated, including some clever engineering to optimize tradeoffs, and it’s worth understanding. As always with microwaves and connections, it’s a matter of materials, precision and geometry.

First, the short summary: The N connectors used in Keysight’s 26 GHz instruments are specially designed and constructed, and their characteristics are accounted for in the instrument specifications. If you’re working above 18 GHz and using appropriate adapters such as those in the 11878 Adapter Kit, you can measure with confidence. Just connect the N-to-3.5mm adapter at the instrument front panel and use 3.5 mm or SMA hardware from there.

Why use the N connector on a 26 GHz instrument in the first place? Why not an instrument-grade 3.5 mm connector that will readily connect to common SMA connectors as well? The main reason is the strength and durability of the N connector when dealing with the bumps, twists and frequent reconnections that test equipment must endure—and still ensure excellent performance. Precision N connectors offer a combination of robustness and consistent performance that is unique in the RF/microwave world. They’re also easy to align and are generally tightened by hand.

However, there is that small matter of limited frequency range. Standard N connectors are rated to 11 GHz and precision ones to 18 GHz. Above 18 GHz, conductor size and geometry can allow amplitude and phase errors due to the moding phenomenon I described in a previous post.

Moding is a resonance phenomenon that arises from the larger dimensions of the N connector, and the solution involves a change in the construction of the instrument’s precision N connector. This special connector combines a slotless inner shield, a support bead of a special material, and higher-precision construction. As a result, resonances can be eliminated or reduced to such a small magnitude that the N connector is the overall best choice for test equipment over this frequency range.

There you have it, the practical advantages of N connectors over the full 26.5 GHz frequency range, without a performance penalty.

Posted in Aero/Def, EMI, Microwave, Signal analysis, Signal generation, Wireless

Signal Generators and Confidence in the Very Small

   Precisely small is more of a challenge than precisely big

Recently, I’ve been looking at sensitivity measurements and getting acquainted with the difficulty of doing things correctly at very low signal levels. It’s an interesting challenge and I thought it would be useful to share a couple of surprising lessons about specifications and real-world performance.

From the outset, I’ll concede that data sheets and detailed specifications can be boring. Wading through all that information is a tedious task, but it’s the key to performance you can count on, and specs are a reason to buy test equipment in the first place. Also, extensive specifications are better than the alternative.

Sensitivity measurements show the role and benefits of a good data sheet in helping you perform challenging tests. Say, for example, you’ve got a sensitivity target of 1 µV and need a test signal of just that size, with a desired tolerance of ±1 dB. In a 50Ω system, that single microvolt is −107 dBm, and a 1 dB difference amounts to only about 100 nV.
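
To make those numbers concrete, here is a minimal sketch of the conversions, assuming the usual 50 Ω reference impedance:

    import math

    R = 50.0          # system impedance in ohms
    v_target = 1e-6   # 1 microvolt sensitivity target

    # Power in dBm for an RMS voltage across R ohms: P = V^2 / R, referenced to 1 mW
    def volts_to_dbm(v, r=R):
        p_watts = v ** 2 / r
        return 10 * math.log10(p_watts / 1e-3)

    print(f"1 uV in 50 ohms = {volts_to_dbm(v_target):.1f} dBm")   # about -107 dBm

    # Voltages corresponding to +1 dB and -1 dB around the 1 uV target
    for db in (+1, -1):
        v = v_target * 10 ** (db / 20)
        print(f"{db:+d} dB -> {v * 1e9:.0f} nV ({(v - v_target) * 1e9:+.0f} nV from target)")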

The hard specs for a Keysight MXG X-Series microwave signal generator are ±1.6 dB and extend only to −90 dBm, so they fall short of what this situation requires. However, it’s worth keeping in mind that the specs cover a wide range of operating conditions, well beyond what you’ll encounter in this case.

Once again this is a good time to consider adding information to the measurement process as a way to get more from it without changing the test equipment. A relevant item from the signal generator data sheet illustrates my point.

The actual performance of a set of MXG microwave signal generators is shown over 20 GHz, and the statistical distribution is provided as well. Though the measurement conditions are not as wide as for hard specs, these figures are a better indication of performance in most situations.


The performance suggested by this graph is very impressive—much better than the hard specs over a very wide frequency range—and it applies to the kind of low output level we need for our sensitivity measurement. Accuracy is almost always better than ±0.1 dB, dramatically better than the hard spec.

The graph also includes statistical information that relates to the task at hand. Performance bounds are given for ±one standard deviation, and this provides a 68% confidence level if the distribution is normal (Gaussian). If I understand the math, a tolerance of ±0.2 dB would then correspond to two standard deviations and better than 95% confidence.
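
If that reading of the math is right, the confidence for a given tolerance follows from the normal distribution. Here is a small sketch, assuming the graph’s ±0.1 dB figure corresponds roughly to one standard deviation:

    from math import erf, sqrt

    sigma = 0.1   # assumed one-standard-deviation level accuracy, in dB

    def confidence(tolerance_db, sigma_db=sigma):
        """Probability that a normally distributed error stays within +/- tolerance."""
        return erf(tolerance_db / (sigma_db * sqrt(2)))

    for tol in (0.1, 0.2):
        print(f"+/-{tol} dB -> {confidence(tol) * 100:.1f}% confidence")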

The time spent wading through a data sheet is amply rewarded, and the right confidence can then be attached to the performance of a tricky measurement. The confidence you need in your own measurements may be different, but the principle is the same and the process of adding information will improve your results.

So far, we’ve taken advantage of information that is generic to the instrument model involved. Even more specific information may be available to you, and I’ll discuss that in a future post.

Posted in Aero/Def, EMI, Measurement techniques, Microwave, Millimeter, Signal generation, Wireless

RF Intuition: A Strength or a Vulnerability?

  Intuition is powerful, but if you don’t frame questions well it can mislead you

Contrary to the popular stereotype, good engineers are creative and intuitive. Indeed, these characteristics are essential tools for successful engineering.

I have great respect for the power of intuitive approaches to problems, and I see at least two big benefits. First, intuition can gather diffuse or apparently unrelated facts that enable exceptionally powerful analysis. Second, it often provides an effective shortcut for answers to complex questions, saving time and adding efficiency.

Of course, intuition is not infallible, and I’m always intrigued by its failure. It makes sense to pay attention to these situations because they provide lessons about using intuitive thinking without being misled by it. Two of my favorite examples are the Monty Hall Problem and why mirrors appear to reverse left and right but not up and down.

I’d argue that a common factor in most intuition failures is not so much the reasoning process itself but the initial framing of the question. If you start with a misapprehension of some part of the problem or question, even a perfect chain of reasoning will fail you.

As a useful RF example, let’s look at an intuition failure in “sub-kTB” signal measurements. Among RF engineers, kTB is shorthand for -174 dBm/Hz*, the thermal noise power density delivered by a 50Ω source into a matched 50Ω load at room temperature. It should therefore be the best possible noise level—or, more accurately, noise density or PSD—you could obtain in a signal analyzer that has a perfect 0 dB noise figure.
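
The −174 dBm/Hz figure follows directly from Boltzmann’s constant and a room temperature of 290 K; here is a quick sketch of the arithmetic:

    import math

    k = 1.380649e-23   # Boltzmann constant, J/K
    T = 290.0          # the "room temperature" conventionally used for noise work, K
    B = 1.0            # 1 Hz bandwidth

    noise_power_watts = k * T * B
    noise_power_dbm = 10 * math.log10(noise_power_watts / 1e-3)
    print(f"kTB at {T:.0f} K in a 1 Hz bandwidth: {noise_power_dbm:.1f} dBm/Hz")   # about -174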

Not surprisingly, many engineers also see this as the lowest possible signal level one could measure, a kind of noise floor or barrier that one could not see beyond or measure beneath. As a matter of fact, even this level should not be achievable because signal analyzers contribute some noise of their own.

This intuitive expectation of an impenetrable noise floor is logical but flawed, as demonstrated by the measurement example below that uses Keysight’s Noise Floor Extension (NFE) feature in a signal analyzer. Here, a multi-tone signal with very low amplitude is measured near the signal analyzer’s noise floor.

The noise marker shows that the effective noise floor of the measurement (blue) is actually below kTB after NFE removes most of the analyzer’s noise. The inset figure shows how a signal produces a detectable bump in the analyzer’s pre-NFE noise floor (yellow), even though it’s about 5 dB below that noise floor.


I’ve previously described NFE, and for this discussion I’ll summarize by saying that it allows some analyzers to accurately estimate their own noise contribution and then automatically subtract most of it from the measurement. The result is a substantial improvement in effective noise floor and the ability to separate very small signals from noise.

While it is indeed correct that kTB is a noise floor that an analyzer cannot improve upon, or even match, the error in intuition is in equating it, one-for-one, with an ultimate measurement limit. As discussed previously, signal and noise power levels—even very small ones—can be reliably added or subtracted to refine raw measurement results.

kTB and related noise in analyzers are phenomena whose values, when averaged, are predictable when the measurement conditions and configuration are known. Consequently, subtracting analyzer noise power can be seen as adding information to the measurement process, in turn allowing more information to be taken from the measurement result.
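
The subtraction happens in linear power terms, not in dB. Here is a minimal sketch of the underlying arithmetic; the levels are hypothetical, and this is only the power math, not Keysight’s NFE algorithm itself:

    import math

    def dbm_to_mw(dbm):
        return 10 ** (dbm / 10)

    def mw_to_dbm(mw):
        return 10 * math.log10(mw)

    measured_dbm = -170.0        # raw measured level: signal plus analyzer noise (hypothetical)
    analyzer_noise_dbm = -171.0  # analyzer's own average noise contribution (hypothetical)

    # Subtract the analyzer's average noise power in the linear (mW) domain
    corrected_mw = dbm_to_mw(measured_dbm) - dbm_to_mw(analyzer_noise_dbm)
    print(f"corrected level: {mw_to_dbm(corrected_mw):.1f} dBm")   # below the raw noise floor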

OK, so measuring below kTB is perhaps more of a parlor trick than a practical need. However, an intuitive understanding of its possibility illuminates some important aspects of making better RF measurements of those tiny signals that so frequently challenge us.

* You may instead see the figure -177 dBm/Hz for kTB. This refers to a slightly different noise level measurement than that of a spectrum or signal analyzer, as explained at the link.

Posted in Aero/Def, EMI, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

Envelope Tracking and Riding the Gain

  You can “turn it up to eleven” as long as you don’t leave it there

When I first heard the term “envelope tracking” I thought of the classic investigative/surveillance technique called “mail cover” in which law enforcement gets the postal service to compile information from the outside of envelopes. The practice was in the news a while back due to its use with digital communications.

Learning a little more, I quickly realized that it has nothing to do with the mail but, like the mail, has precedent that reaches back many years. “Riding the gain” or “gain riding” is a manual process that has been used for decades in audio recording and other applications where excessive dynamic range is a problem. Its use predates vinyl records, though I first encountered it in my previous life as a radio announcer, broadcasting live events.

When I was riding the gain, it was a manual process of twisting a knob, trying to reduce input dynamic range to something a small-town AM transmitter could handle. I was part of a crude feedback system, prone to delay and overshoot, as I’m sure my listeners would attest.

These days, envelope tracking is another example of how digital processing is used to solve analog problems. In this case it’s the conflict between amplifier efficiency and the wide variations in the RF envelope of digital modulation. If the power supply of an RF amplifier can be dynamically adjusted according to the power needed by modulation, it can—at every instant—be operating at its most efficient point.

In envelope tracking an RF power amplifier is constantly adjusted to track the envelope of the modulated input signal. The amplifier operates at higher efficiency and lower temperature, using less battery power and potentially creating less adjacent-channel interference.


Power efficiency has always been a major driver in mobile communications and its importance continues to grow. Batteries are limited by the size and weight of the handsets users are willing to carry and, yet again, Moore’s Law points the way to improvement. Available DSP now has the high speed and low power consumption to calculate RF envelope power on the fly. The envelope value is fed to a power supply with sufficient bandwidth or response time to adjust its drive of the RF power amplifier accordingly.

An envelope tracking power amplifier (ETPA) is dynamically controlled for optimum efficiency by tracking the required RF envelope power. The tracking is based on envelope calculations from the I/Q signal, modified by a shaping table.


This all seems fairly straightforward but, of course, is anything but. The calculation and response times are very short, and a high degree of time alignment is required. Power supplies must be extremely responsive and still very efficient. All of the DSP must itself be power efficient, to avoid compromising the fundamental power benefit.
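
For a feel of the signal path described above, here is a highly simplified sketch; the I/Q waveform, shaping table and supply limits are all made up for illustration, and a real envelope tracking system involves far more than this:

    import numpy as np

    # Hypothetical baseband I/Q waveform (a simple two-tone signal, for illustration only)
    t = np.arange(0, 1e-3, 1e-7)
    i = np.cos(2 * np.pi * 10e3 * t) + 0.5 * np.cos(2 * np.pi * 25e3 * t)
    q = np.sin(2 * np.pi * 10e3 * t) + 0.5 * np.sin(2 * np.pi * 25e3 * t)

    # Instantaneous envelope computed from I/Q
    envelope = np.sqrt(i ** 2 + q ** 2)

    # Hypothetical shaping table: map the normalized envelope to a supply voltage,
    # keeping a minimum supply so the amplifier is never starved at low envelope levels
    def shaping_table(env_norm, v_min=0.5, v_max=5.0):
        return np.clip(v_min + (v_max - v_min) * env_norm, v_min, v_max)

    supply_voltage = shaping_table(envelope / envelope.max())
    print(f"envelope peak/min ratio: {envelope.max() / envelope.min():.1f}")
    print(f"supply range: {supply_voltage.min():.2f} V to {supply_voltage.max():.2f} V")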

Envelope tracking is a downstream solution to power amplifier efficiency, joining previous upstream techniques such as crest factor reduction and macro-scale approaches such as digital predistortion. To a great extent, all rely on sophisticated algorithms implemented in fast DSP.

That’s where Keysight’s design and test tools come in. You can find a collection of application notes and other information at www.keysight.com/find/ET.

With envelope tracking you can now turn your amplifiers up to eleven when you need to, and still have a battery that lasts all day.

Posted in Signal analysis, Signal generation, Wireless

Spurious Measurements: Making the Best of a Tedious Situation

   Sometimes I need to be reminded to take my own advice

Recently, I’ve been looking into measuring spurious signals and the possibility of using periodic calibration results to improve productivity. I’ll share more about that in a future post, but for now it seemed useful to summarize what I’ve learned—or re-learned—about new and traditional ways to measure spurs.

Spur measurements can be especially time-consuming because they’re usually made over wide frequency ranges and require high sensitivity. Unlike harmonics, spur locations are typically not known beforehand so the only choice is to sweep across wide spans using narrow resolution bandwidths (RBWs) to reduce the analysis noise floor. With spurs near that noise floor, getting the required accuracy and repeatability can be a slow, tedious job.
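
The time cost of narrow RBWs is easy to see from the classic swept-analysis rule of thumb, in which sweep time grows with span divided by the square of the RBW. Here is a rough sketch; the proportionality constant and the numbers are illustrative only, and modern digitally implemented RBW filters improve on this considerably:

    def swept_time_estimate(span_hz, rbw_hz, k=2.5):
        """Rough classic estimate: sweep time ~ k * span / RBW^2 (analog-style sweeping)."""
        return k * span_hz / rbw_hz ** 2

    span = 26.5e9   # sweeping the full range of a microwave analyzer (illustrative)
    for rbw in (1e6, 100e3, 10e3, 1e3):
        print(f"RBW {rbw / 1e3:>6.0f} kHz -> roughly {swept_time_estimate(span, rbw):,.1f} s")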

An engineer experienced in optimizing these measurements reminded me of advice I’ve heard—and shared—before: Don’t measure where you don’t need to, and don’t make measurements that don’t matter.

The first “don’t” is self-explanatory. The frequency spectrum is wide, but the important region is mercifully much narrower—and we should enjoy every possible respite from tedium.

The second “don’t” is less obvious. It’s a reminder to begin with a careful look at the DUT and which measurements are required, and how good those measurements need to be. For example:

  • Do you need to measure specific spur frequencies and amplitudes, or is a limit test sufficient?
  • How much accuracy and variance are acceptable? What noise floor or signal/noise and averaging are needed to achieve this?
  • Are the potential spurs CW? Modulated? Impulsive?

The answers will help you define an efficient test plan and select the features in a signal analyzer that dramatically improve spur measurements.

One especially useful feature is the spurious measurement application. It allows you to build a custom set of frequency ranges, each with optimized settings for RBW, filtering, detectors, etc. You measure only where needed, as shown below.

With the measurement application, you can set up multiple analysis ranges and optimize the settings for each. Measurements are made automatically, with pass/fail limit testing.


This application is helpful in ATE environments, offloading tasks from the system CPU. It’s also worth considering in R&D because, unfortunately, spur measurements usually have to be repeated… many, many times.

Some recent innovations in digital filters and DSP have dramatically improved sweep rates for narrow RBWs in signal analyzers such as Keysight’s PXA. With sufficient processing, RBW filters can now be swept up to 50 times faster without compromising amplitude or frequency accuracy. The benefit is greatest for RBWs of several to several hundred kilohertz, as is typical for spur measurements (see this recent app note).

One factor that can muck up the works is the presence of non-CW spurs. For example, TDMA schemes often produce time-varying spurs. This violates key assumptions underlying traditional search techniques and makes it much tougher to detect and measure spurs.

Fortunately, signal analyzers have evolved to handle these challenges. In TDMA systems, sync or trigger signals are often available to align gated sweeps that analyze signals only during the desired part of the TDMA frame.

Perhaps the most powerful tool for finding impulsive or transient spurs is the real-time analyzer, which can process all the information in a frequency span without gaps and produce a spectrogram or density display that reveals even the most intermittent signals.

The best tool for precisely measuring time-varying spurs is vector signal analyzer (VSA) software. The software uses RF/microwave signal analyzers, oscilloscopes, etc., to completely capture signals for any type of frequency-, time- and modulation-domain analysis. Signals can be recorded for flexible post-processing as a way to accurately measure all their characteristics from a single acquisition, solving the problem of aligning measurement to the time-varying nature of the signal.

It’s no secret that spur detection and measurement are both difficult and essential, but with the right advice and the right equipment you can minimize the tedium.

Posted in Aero/Def, EMI, Measurement techniques, Microwave, Millimeter, Signal analysis, Wireless

Does RF Noise have Mass?

   I’m not usually a fan of noise, but there are exceptions

My provisional assumption is that noise does indeed have mass. I support that notion with the following hare-brained chain of reasoning: The subject of noise has a gravity-like pull that compels me to write about it more than anything else. Because gravity comes from mass, noise therefore must have mass. Voila!

My previous posts dealing with noise have all been about minimizing it: averaging away its effects, estimating the errors it causes, predicting and then subtracting noise power, and so on. Sometimes I just complain about it or wax philosophical.

I even created a webcast titled “conquering noise” but, of course, that was a bit of a conceit. Noise is a fundamental natural phenomenon and it is never vanquished. Instead, I have mentioned that noise can be beneficial in some circumstances—and now it’s time to describe one.

A few years ago, a colleague was using Keysight’s Advanced Design System (ADS) software to create 10 MHz WiMAX MIMO signals that included impairments. He started by adding attenuation to one transmitter, but after finding little or no effect on modulation quality, he added a 2 MHz bandpass filter to one channel, as shown below.

EEsof ADS simulation of a 10 MHz two-channel MIMO signal with an extra 2 MHz bandpass filter inserted in one channel.


Surely a filter that removed most of one channel would confound the demodulator. Comparing the spectra of the two channels, the effect is dramatic.

Spectrum of two simulated WiMAX signals with 10 MHz bandwidth. The signal in the bottom trace has been modified by a 2 MHz bandpass filter.


All that filtering in one channel had no significant effect on modulation quality! The VSA software he was using—as an embedded element in the simulation—showed the filter in the spectrum and the channel frequency response, but in demodulation it caused no problem.

He emailed a recording of the signal and I duplicated his results using the VSA software on my PC. I then told him he could “fix” the problem by simply adding some noise to the signals.

This may seem like an odd way to solve the problem, but in this case the simulation didn’t match reality in the way it responded to drastic channel filtering. The mismatch was due to the fact that the simulated signals were noise-free, and the channel equalization in demodulation operations could therefore perfectly correct for filter impairments, no matter how large they were.
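
For anyone who wants to try the same “fix”, here is a minimal sketch of adding complex white Gaussian noise to an I/Q record at a chosen SNR; the signal and the 30 dB figure are arbitrary placeholders:

    import numpy as np

    rng = np.random.default_rng(0)

    def add_awgn(iq, snr_db):
        """Add complex white Gaussian noise to an I/Q record at the requested SNR."""
        signal_power = np.mean(np.abs(iq) ** 2)
        noise_power = signal_power / 10 ** (snr_db / 10)
        noise = np.sqrt(noise_power / 2) * (rng.standard_normal(iq.shape)
                                            + 1j * rng.standard_normal(iq.shape))
        return iq + noise

    # Hypothetical noise-free simulated signal (a bare complex tone stands in for the real thing)
    iq_clean = np.exp(1j * 2 * np.pi * 0.01 * np.arange(10000))
    iq_noisy = add_awgn(iq_clean, snr_db=30)   # give the simulation a realistic, finite SNR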

In many ways it’s the opposite of the adaptive equalization used in real-world situations with high noise levels, and I have previously cautioned you to be careful what you ask for. When there is no noise, you can correct signals as much as you want, without ill effects.

Of course, “no noise” is not the world we live in or design for, and as much as I hate to admit it, there are times when it’s beneficial to add some.

There are certainly other uses for noise. Those also have that peculiar massive attraction, and I know I’ll write about them again soon.

Posted in Aero/Def, Low frequency/baseband, Measurement techniques, Microwave, Millimeter, Signal analysis, Signal generation, Wireless

RF, Protocol and the Law

   RF engineering and a layer hierarchy that extends all the way to the spectral authorities

In our day jobs we focus mainly on the physical layer of RF communications, and there is certainly enough challenge there for a lifetime of productive work. The analog and digitally modulated signals we wrestle with are the foundation of an astonishing worldwide expansion of communications.

Of course, the physical layer is just the first of many in modern systems. Engineering success often involves interaction with higher layers that are commonly described in diagrams such as the OSI model shown below.

The Open Systems Interconnection (OSI) model uses abstraction layers to build a conceptual model of the functions of a communications system. The physical layer is the essential foundation, but many other layers are needed to make communication efficient and practical. (Image from Wikimedia Commons)


The OSI model is a good way to build an understanding of systems and to figure out how to make them work, but sometimes we need to add even more layers to see the whole picture. A good example comes from a recent event that caught my eye.

Many news outlets reported that some hotels in one chain in the US were “jamming” private Wi-Fi hotspots to force convention-goers to use the hotel’s for-fee Wi-Fi service. The term jamming grabbed my attention because it sounded like a very aggressive thing to do to the 2.4 GHz ISM band, which functions as a sort of worldwide public square in the spectral world. I figured regulatory authorities such as our FCC would take a pretty dim view of this sort of thing.

As is so often the case, many general news organizations were being less than precise. The hotel chain was actually blocking Wi-Fi rather than jamming it. This is something that happens not at the physical layer—RF jamming—but a few layers higher.

According to the FCC, hotel employees “had used containment features of a Wi-Fi monitoring system” to prevent people from connecting to their own personal Wi-Fi networks. Speculation from network experts is that the Wi-Fi monitoring system could be programmed to flood the area with de-authentication or disassociation packets that would affect access points and clients other than those of the hotel.

It may not surprise you that the FCC also objected to this use of the ISM band, and the result was a $600,000 settlement with the hotel to resolve the issue. The whole RF story thus extends the OSI model to at least a few more levels, including the vendor of the monitoring system, the hotel management and—at least one layer above them!—the FCC itself.

I suppose you can insert some legislative and political layers in there somewhere if you want, but I’m happy to focus my effort on wrangling the physical layer and those near it. Keysight signal generators and signal analyzers are becoming more capable above the physical layer, with features such as Wireless Link Analysis to perform layer 2 and layer 3 analysis of LTE-FDD UL and DL signals.

In the end, I hope there are ways to resolve these issues and give everyone fair access to the unlicensed portions of our shared spectrum. I dread a situation in which a market emerges for access points or hotspots with counter-blocking technology and a resulting arms race that could leave us all without access.

Posted in Measurement techniques, Signal analysis, Signal generation, Wireless

MIMO Streams, Channels and Condition Number Reveal a Defect

   Streams multiply complexity but they can also add insight

Multiple-input multiple-output (MIMO) techniques are powerful ways to make efficient use of scarce RF spectrum. In a bit of engineering good fortune, MIMO methods are also generally most effective where they’re most needed: crowded, reflective environments.

However, MIMO systems and signals—and the RF environments they occupy—can be difficult to troubleshoot and optimize. The number of signal paths goes up with the square of the number of transmitters, so even “simple” 2×2 MIMO provides the engineer with four paths to examine. 4×4 systems yield 16 paths, and in some systems 8×8 is very much on the table!

All these channels and streams, each with several associated measurements, can provide good hiding places for defects and impairments. One approach for tracking down problems in the thicket of results is to use a large display and view many traces at once, the subject of my Big Data post a while back. Engineers have powerful pattern recognition and this is a good way to use it.

Another way to boil down lots of measurements and produce some insight—measuring condition number—is specific to MIMO. This trace is a single value for every subcarrier, no matter how many channels are used, and it quantifies how well MIMO is working overall. Sometimes not too well, as in this measurement:

This condition number trace is flat over the channel, at a value of about 25 dB. The ideal value is 0 dB, and for reliable signal separation the SNR needs to comfortably exceed the condition number, so demodulation is likely to be very poor here unless SNR is very good.


The signal for the measurement above was produced with four linked signal generators, so SNR should not be a problem. However, the fact that the condition number is far above 0 dB certainly indicates that there is a problem somewhere.
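
For reference, the condition number is simply the ratio of the largest to the smallest singular value of the channel matrix, usually expressed in dB. Here is a minimal sketch with a made-up 4×4 matrix whose fourth channel is about 25 dB low:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 4x4 MIMO channel matrix; attenuate one channel by about 25 dB
    H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    H[:, 3] *= 10 ** (-25 / 20)

    singular_values = np.linalg.svd(H, compute_uv=False)
    condition_number_db = 20 * np.log10(singular_values[0] / singular_values[-1])
    print(f"condition number: {condition_number_db:.1f} dB")   # well above the ideal 0 dB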

Analysis software such as the 89600 VSA provides several other tools to peer into the thicket from a different angle. As mentioned previously, this 4×4 MIMO system has 16 possible signal paths, and they can be overlaid on a single grid. In this instance a dozen of the paths looked good, while four showed a flat loss about 25 dB greater than the others. That is suspiciously close to the 25 dB condition number.

Of course, when engineers see two sets of related parameters they tend to think about using a matrix to get a holistic view of the situation. That’s just what’s provided by MIMO demodulation in the 89600 VSA software as the MIMO channel matrix trace, and in this case it reveals the nature of the problem.


The MIMO channel matrix shows the complex magnitude of the 16 possible channel and stream combinations in a 4×4 MIMO system with spatial expansion. Note that the value of channel 4 is low for all four streams.

This MIMO signal was using spatial expansion or spatial encoding, as I described recently. Four streams are combined in different ways to spread across four RF channels. The complex magnitudes are all different—to facilitate MIMO signal separation—and very much non-zero.

All except for channel 4, where the problem is revealed. The matrix shows that the spatial encoding is working for all four streams, but one channel is weak for every stream. In this case the signal generator producing channel four had a malfunctioning RF attenuator, reducing output by about 25 dB.

As is so often the case, the solution comes down to engineers using pattern recognition, deduction and intuition in combination with the right tools. For Keysight, Job 1 is bringing the tools that help you unlock the necessary insights.

Posted in Measurement techniques, Signal analysis, Wireless

Spectrum and Network Measurements: A non-eBook

   What is the role of a physical book in an electronic world?

I recently got a copy of the newest edition of Spectrum and Network Measurements by Bob Witte. This is the second edition, and it was a good time for an update. It’s been more than a dozen years since the previous one, and I think an earlier, similar work by Bob first appeared in the early 1990s. Bob has a deep background in measurement technology and was, among other things, a project manager on the first swept analyzer with an all-digital IF section. That was back in the early 1980s!

One of the reasons for the update is apparent from the snapshot I took of the cover.

The latest edition of Bob Witte’s book on RF measurements, with a real-time spectrum analysis display featured on the cover. Pardon the clutter but the book just didn’t look right without a few essential items.


The cover highlights a relatively recent display type, variously referred to as density, cumulative history, digital phosphor, persistence, etc. These displays are a characteristic of real-time spectrum analyzers, and both the analyzers and displays were not in mainstream RF use when the previous edition of the book appeared.

An update to a useful book is great, of course, but why paper? What about a website or a wiki or an eBook of some kind? Digital media types can be easily updated to match the rate of change of new signals, analyzers and displays.

In looking through Bob’s book I’ve been trying to understand and to put into words how useful it feels, in just the form it’s in. It’s different from an app note online, or article, or Wikipedia entry. Not universally better or worse, but different.

Perhaps it’s because while some things have changed in spectrum and network measurements, so many things are timeless and universal. The book is particularly good at providing a full view of the measurement techniques and challenges that have been a part of RF engineering for decades. It’s a reminder that making valid, reliable, repeatable measurements is mostly a matter of understanding the essentials and getting them right every time.

Resources online are an excellent way to focus on a specific signal or measurement, especially new ones. Sometimes that’s just what you need if you’re confident you have the rest of your measurements well in hand.

I guess that’s the rub, and why a comprehensive book like this is both enlightening and reassuring. RF engineering is a challenging discipline and there are many ways, large and small, to get it wrong. This book collects the essentials in one place, with the techniques, equations, explanations and examples that you’ll need to do the whole measurement job.

Of course there are other good books with a role to play in RF measurements. While Bob’s book is comprehensive in terms of spectrum and network measurements, one with a complementary focus on wireless measurements is RF Measurements for Cellular Phones and Wireless Data Systems by Rex Frobenius and Allen Scott. And when you need to focus even tighter on a specific wireless scheme you may need something like LTE and the Evolution to 4G Wireless: Design and Measurement Challenges*, edited by Moray Rumney.

All of these are non-eBooks, with broad coverage including many examples, block diagrams and equations. Together with the resources you’ll find using a good search engine, you’ll have what you need to make better measurements of everything you find in the RF spectrum.

 

*Full disclosure: I had a small role in writing the signal analysis section of the first edition of the LTE book. But it turned out well nonetheless!

Posted in Aero/Def, EMI, Hazards, Low frequency/baseband, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Signal generation, Wireless
About

Agilent Technologies Electronic Measurement Group is now Keysight Technologies http://www.keysight.com.

My name is Ben Zarlingo and I’m an applications specialist for Keysight Technologies. I’ve been an electrical engineer working in test & measurement for several decades now, mostly in signal analysis. For the past 20 years I’ve been involved primarily in wireless and other RF testing.

RF engineers know that making good measurements is a challenge, and I hope this blog will contribute something to our common efforts to find the best solutions. I work at the interface between Agilent’s R&D engineers and those who make real-world measurements, so I encounter lots of the issues that RF engineers face. Fortunately I also encounter lots of information, equipment, and measurement techniques that improve accuracy, measurement speed, dynamic range, sensitivity, repeatability, etc.

In this blog I’ll share what I know and learn, and I invite you to do the same in the comments. Together we’ll find ways to make better RF measurements no matter what “better” means to you.
