Dynamic Range and a Different Kind of Analyzer

  Signals and noise in the optical realm

It seems I’m not the only one wrestling with noise quite a bit, and recent developments in digital photography have spurred me to depart briefly from my usual focus (pun intended) on the RF world.

I’m not departing very much, though, because digital photography can be seen as a two-dimensional type of signal analysis. Not surprisingly, many of the electrical engineers I know have at least a hobbyist interest in photography, and for quite a few it’s more than that. Our engineering knowledge helps a lot in understanding the technical aspects of making a good photograph, and I’d like to explain one recent development here.

The megapixel race in digital imaging is abating, perhaps because sensor resolution now exceeds the performance of some lenses and autofocus systems. I see this as a positive development, shifting attention to other important factors such as sensitivity or low-light performance. Sensitivity is as critical as resolution in those all-too-common situations when light is scarce and camera shake or subject movement render long exposures impractical.

The concept of camera sensitivity goes back to the days of film, and the parameter called ISO quantifies it. In film, sensitivity is related to grain size, but in digital imaging it’s more closely related to the gain applied to the signal coming from the sensor. In an interesting correspondence, high ISO settings in a digital camera produce noisier images that echo the coarser grain of high-ISO film.

This dance of gain and noise is awfully familiar to all of us, and I wonder if we should be suggesting to the digital imaging folks some sort of measure based on noise figure.

Today’s best digital cameras offer impressive sensitivity, driving new emphasis on a parameter near and dear to all of us: dynamic range. In the last several years, dramatic improvements in dynamic range have produced cameras that are almost ISO-invariant, and this provides a big benefit for photographers.

Here’s my crude attempt at a graphical representation of the situation.

This digital image “tone flow” diagram shows how a scene with wide dynamic range may be clipped and compressed in the process of capture and conversion to JPEG format. If you rotate this diagram 90 degrees to the left, it corresponds well with the amplitude levels of an RF signal measurement.

For RF engineers, this is familiar territory. Wider dynamic range in a measurement tool is always a good thing, and sometimes there is no substitute.

Taking advantage of this ISO-invariance is simple, though perhaps not intuitive. Instead of exposing normally for a challenging scene, the exposure is set to preserve the desired highlights, and the image is captured in the raw sensor format rather than JPEG. This may leave parts of the scene apparently underexposed, but the raw format preserves the full dynamic range of the sensor, allowing all the tones to be brought into the desired relationship for the end result. In an ISO-invariant camera, deep shadows may be brought up several stops or more without significant noise problems.

The result is more easily demonstrated than described, and an article at dpreview.com discusses the theory with examples. The folks at DPReview even consulted with Professor Eric Fossum, the inventor of the modern CMOS camera sensors that make this possible.

In a related article they also discuss the sources of noise in digital imaging, and once again there are parallels to our common vexations. I’m sure Boltzmann is in there somewhere.

Posted in Off-topic (almost), Signal analysis

Faster-Sweeping Signal Analyzers: An Invisible Technology that Just Works

  With a benefit or two that should not remain invisible

Though we don’t always think of them in quite this way, signal measurements such as low-level spurious searches involve the collection of a great deal of information, and thus can be frustratingly slow. I’ve described how the laws of physics sometimes help us, but this bit of good fortune confers only a modest benefit.

Some years ago, the advent of digital RBW filters in signal analyzers brought gains in speed and performance. The improved shape factor and consistent bandwidth yielded better accuracy, and the predictable dynamic response allowed sweep speeds to be increased by a factor of two to four. The effects of a faster sweep were correctable in real time as long as the speed wasn’t increased too much.

The idea of correcting for even faster sweep speeds was promising, and the benefits have gotten more attractive as spurious, harmonics and other performance specifications get ever tighter. To meet these requirements, the principal technique for reducing noise level in a spectrum or signal analyzer is to reduce RBW, with noise floor dropping 10 dB for each 10x reduction in RBW.

Unfortunately, sweep time lengthens with the square of the RBW reduction. A 100x increase in measurement time for a 10 dB improvement in signal-to-noise is a painful tradeoff.
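
To put numbers on that tradeoff, here’s a rough sketch of the two scaling rules (the 10·log10 noise-floor relationship and the classic swept-analysis rule that sweep time grows as span/RBW²; the proportionality constant k below is illustrative, not a specification):

    import math

    def noise_floor_change_db(rbw_ref_hz, rbw_hz):
        # Displayed noise level change when RBW changes, relative to a reference RBW
        return 10 * math.log10(rbw_hz / rbw_ref_hz)

    def swept_time_s(span_hz, rbw_hz, k=2.5):
        # Classic swept-analyzer estimate: sweep time ~ k * span / RBW^2
        # (k depends on filter type; 2.5 is only an illustrative value)
        return k * span_hz / rbw_hz ** 2

    span = 1e9  # 1 GHz span
    for rbw in (10e3, 1e3):
        print(f"RBW {rbw/1e3:.0f} kHz: noise floor {noise_floor_change_db(10e3, rbw):+.0f} dB, "
              f"sweep time ~{swept_time_s(span, rbw):.0f} s")
    # 10x narrower RBW: noise floor down 10 dB, but sweep time up 100x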

As has occurred in the past, clever algorithms and faster DSP have combined to improve measurements and relieve the tedium for the RF engineer:

These two measurements cover the same frequency span with the same resolution bandwidth. Option FS1 in the Keysight X-Series signal analyzers (bottom) improves measurement speed by about 50 times.

Fast ASIC processing in the signal analyzer corrects for the frequency, amplitude and bandwidth effects of sweeping the RBW filters at speeds up to about 50 times faster than the traditional minimal-error speed. This improvement applies to swept—not FFT—measurements and is most beneficial when RBW is approximately 10 kHz or greater.

While the speed benefits are obvious, another may be nearly invisible: narrower RBWs also [update: see note below] improve repeatability.

This graph compares the repeatability (vertical axis) of fast sweep and traditional sweep. The lower level and shallower slope of the blue line show both improved repeatability and less dependence on sweep time.

The magnitude of the speed improvement depends on measurement specifics and analyzer configuration, but it’s achieved automatically and with no tradeoff in specifications. If slow measurements are increasing your ambient level of tedium, you can find more information about this technique in our fast sweep application note.

Note: Improved measurement speed and repeatability are alternative benefits in this case, contrary to the implication of my original wording. You can use the same measurement time and get improved repeatability, or you can improve measurement time without improving repeatability. I apologize for the confusion.
Posted in Aero/Def, EMI, History, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

Measurement Statistics: Comparing Standard Deviation and Mean Deviation

  A nagging little question finally gets my attention

In a recent post on measurement accuracy and the use of supplemental measurement data, the measured accuracy in the figure was given in terms of the mean and standard deviations. Error bounds or statistics are often provided in terms of standard deviation, but why that measure? Why not the mean or average deviation, something that is conceptually similar and measures approximately the same thing?

I’ve wondered about standard and average deviation since my college days, but my curiosity was never quite strong enough to compel me to find the differences, and I don’t recall my books or my teachers ever explaining the practicalities of the choice. Because I’m working on a post on variance reduction in measurements, this blog is the spur I need to learn a little more about how statistics meets the needs of real-world measurements.

First, a quick summary: Standard deviation and mean absolute deviation (also called average deviation) are both ways to express the spread of sampled data. If you average the absolute values of the sample deviations from the mean, you get the mean or average deviation. If you instead square the deviations, the average of the squares is the variance, and the square root of the variance is the standard deviation.
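
Here’s a minimal numeric illustration of the two measures (a sketch using NumPy; the sample values are arbitrary):

    import numpy as np

    samples = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 11.5])  # one mild outlier
    deviations = samples - samples.mean()

    mean_abs_dev = np.mean(np.abs(deviations))   # average of the absolute deviations
    variance = np.mean(deviations ** 2)          # average of the squared deviations
    std_dev = np.sqrt(variance)                  # standard deviation

    print(f"mean deviation     = {mean_abs_dev:.3f}")
    print(f"standard deviation = {std_dev:.3f}")
    # The squaring step weights the 11.5 outlier more heavily,
    # so the standard deviation comes out larger than the mean deviation.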

For the normal or Gaussian distributions that we see so often, expressing sample spread in terms of standard deviations neatly represents how often certain deviations from the mean can be expected to occur.

This plot of a normal or Gaussian distribution is labeled with bands that are one standard deviation in width. The percentage of samples expected to fall within that band is shown numerically. (Image from Wikimedia Commons)

Totaling up the percentages in each standard deviation band provides some convenient rules of thumb for expected sample spread:

  • About one in three samples will fall outside one standard deviation
  • About one in twenty samples will fall outside two standard deviations
  • About one in 300 samples will fall outside three standard deviations
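
A quick numerical check of these rules of thumb against the Gaussian distribution itself (a short sketch using the standard error function):

    import math

    for n_sigma in (1, 2, 3):
        # Probability that a Gaussian sample falls outside +/- n_sigma
        p_outside = math.erfc(n_sigma / math.sqrt(2))
        print(f"outside {n_sigma} sigma: {p_outside:.4f}  (about 1 in {1 / p_outside:.0f})")
    # outside 1 sigma: 0.3173  (about 1 in 3)
    # outside 2 sigma: 0.0455  (about 1 in 22)
    # outside 3 sigma: 0.0027  (about 1 in 370)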

Compared to mean deviation, the squaring operation makes standard deviation more sensitive to samples with larger deviations. This sensitivity to outliers is often appropriate in engineering, because rare, large deviations can have disproportionately large effects.

Standard deviation is also friendlier to mathematical operations because squares and roots are generally easier to handle than absolute values in operations such as differentiation and integration.

Engineering use of standard deviation and the Gaussian distribution is not limited to one dimension. For example, in new calculations of mismatch error the complex components of the reflection coefficient each have Gaussian distributions. Standard deviation measures—such as the 95% or two standard deviation limit—provide a practical representation of the expected error distribution.

I’ve written previously about how different views of data can each be useful, depending on your focus. Standard and mean deviation measures are no exception, and it turns out there’s a pretty lively debate in some quarters. Some contend, for example, that mean deviation is a better basis on which to make conclusions if the samples include any significant amount of error.

I have no particular affection for statistics, but I have lots of respect for the insight it can provide and its power in making better and more efficient measurements in our noisy world.

Posted in Measurement theory

A Signal Analyzer Connector Puzzler

  Is something wrong with this picture?

Many of the things that intrigue me do not have the same effect on an average person. However, you are also not an average person—or you wouldn’t be reading this blog. Thus, I hope you’ll find the following image and explanation as interesting and useful as I did. Take a close look at this Keysight X-Series signal analyzer and the bits I’ve highlighted:

The frequency range of this MXA signal analyzer extends to 26.5 GHz but it is equipped with a Type N input connector. Because N connectors are normally rated to 11 or 18 GHz, do we have a problem?

One up-front confession: I looked at this combination of frequency range and input connector for years before it struck me as strange. I vaguely remembered that N connectors were meant for lower frequencies and finally took the time to look it up.

The explanation is only a little complicated, including some clever engineering to optimize tradeoffs, and it’s worth understanding. As always with microwaves and connections, it’s a matter of materials, precision and geometry.

First, the short summary: The N connectors used in Keysight’s 26 GHz instruments are specially designed and constructed, and their characteristics are accounted for in the instrument specifications. If you’re working above 18 GHz and using appropriate adapters such as those in the 11878 Adapter Kit, you can measure with confidence. Just connect the N-to-3.5mm adapter at the instrument front panel and use 3.5 mm or SMA hardware from there.

Why use the N connector on a 26 GHz instrument in the first place? Why not an instrument-grade 3.5 mm connector that will readily connect to common SMA connectors as well? The main reason is the strength and durability of the N connector when dealing with the bumps, twists and frequent reconnections that test equipment must endure—and still ensure excellent performance. Precision N connectors offer a combination of robustness and consistent performance that is unique in the RF/microwave world. They’re also easy to align and are generally tightened by hand.

However, there is that small matter of limited frequency range. Standard N connectors are rated to 11 GHz and precision ones to 18 GHz. Above 18 GHz, conductor size and geometry can allow amplitude and phase errors due to the moding phenomenon I described in a previous post.

Moding is a resonance phenomenon arising from the larger dimensions of the N connector, and the solution involves a change in the construction of the instrument’s precision N connector. This special connector combines a slotless inner conductor, a support bead made of a special material, and higher-precision construction. As a result, resonances can be eliminated or reduced to such a small magnitude that the N connector is the overall best choice for test equipment over this frequency range.

There you have it, the practical advantages of N connectors over the full 26.5 GHz frequency range, without a performance penalty.

Posted in Aero/Def, EMI, Microwave, Signal analysis, Signal generation, Wireless

Signal Generators and Confidence in the Very Small

   Precisely small is more of a challenge than precisely big

Recently, I’ve been looking at sensitivity measurements and getting acquainted with the difficulty of doing things correctly at very low signal levels. It’s an interesting challenge and I thought it would be useful to share a couple of surprising lessons about specifications and real-world performance.

From the outset, I’ll concede that data sheets and detailed specifications can be boring. Wading through all that information is a tedious task, but it’s the key to performance you can count on, and specs are a reason to buy test equipment in the first place. Also, extensive specifications are better than the alternative.

Sensitivity measurements show the role and benefits of a good data sheet in helping you perform challenging tests. Say, for example, you’ve got a sensitivity target of 1 µV and you need a signal just that size because the desired tolerance is ±1 dB. In a 50Ω system, that single microvolt is −107 dBm, and 1 dB differences amount to only about 0.1 µV.
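
The arithmetic behind those numbers is worth a quick check; here’s a small sketch (50 Ω assumed, as above):

    import math

    R = 50.0  # system impedance, ohms

    def uv_to_dbm(microvolts, r=R):
        # RMS voltage in microvolts across r ohms -> power in dBm
        watts = (microvolts * 1e-6) ** 2 / r
        return 10 * math.log10(watts / 1e-3)

    def dbm_to_uv(dbm, r=R):
        watts = 1e-3 * 10 ** (dbm / 10)
        return math.sqrt(watts * r) * 1e6

    print(f"1 uV  -> {uv_to_dbm(1.0):.1f} dBm")                    # about -107 dBm
    print(f"+1 dB -> {dbm_to_uv(-106) - dbm_to_uv(-107):.3f} uV")  # roughly 0.1 uV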

The hard specs for a Keysight MXG X-Series microwave signal generator are ±1.6 dB and extend to −90 dBm, so there are issues with the performance required in this situation. However, it’s worth keeping in mind that the specs cover a wide range of operating conditions, well beyond what you’ll encounter in this case.

Once again this is a good time to consider adding information to the measurement process as a way to get more from it without changing the test equipment. A relevant item from the signal generator data sheet illustrates my point.

The actual performance of a set of MXG microwave signal generators is shown over 20 GHz, and the statistical distribution is provided as well. Though the measurement conditions are not as wide as for hard specs, these figures are a better indication of performance in most situations.

The performance suggested by this graph is very impressive—much better than the hard specs over a very wide frequency range—and it applies to the kind of low output level we need for our sensitivity measurement. Accuracy is almost always better than ±0.1 dB, dramatically better than the hard spec.

The graph also includes statistical information that relates to the task at hand. Performance bounds are given for ±one standard deviation, and this provides a 68% confidence level if the distribution is normal (Gaussian). If I understand the math, a tolerance of ±0.2 dB would then correspond to two standard deviations and better than 95% confidence.

The time spent wading through a data sheet is amply rewarded, and the right confidence can then be attached to the performance of a tricky measurement. The confidence you need in your own measurements may be different, but the principle is the same and the process of adding information will improve your results.

So far, we’ve taken advantage of information that is generic to the instrument model involved. Even more specific information may be available to you, and I’ll discuss that in a future post.

Posted in Aero/Def, EMI, Measurement techniques, Microwave, Millimeter, Signal generation, Wireless

RF Intuition: A Strength or a Vulnerability?

  Intuition is powerful, but if you don’t frame questions well it can mislead you

Contrary to the popular stereotype, good engineers are creative and intuitive. Indeed, these characteristics are essential tools for successful engineering.

I have great respect for the power of intuitive approaches to problems, and I see at least two big benefits. First, intuition can gather diffuse or apparently unrelated facts that enable exceptionally powerful analysis. Second, it often provides an effective shortcut for answers to complex questions, saving time and adding efficiency.

Of course, intuition is not infallible, and I’m always intrigued by its failure. It makes sense to pay attention to these situations because they provide lessons about using intuitive thinking without being misled by it. Two of my favorite examples are the Monty Hall Problem and why mirrors appear to reverse left and right but not up and down.

I’d argue that a common factor in most intuition failures is not so much the reasoning process itself but the initial framing of the question. If you start with a misapprehension of some part of the problem or question, even a perfect chain of reasoning will fail you.

As a useful RF example, let’s look at an intuition failure in “sub-kTB” signal measurements. Among RF engineers, kTB is shorthand for -174 dBm/Hz*, which is the power delivered by a 50Ω thermal source into a 50Ω load at room temperature. It should therefore be the best possible noise level—or, more accurately, noise density or PSD—you could obtain in a signal analyzer that has a perfect 0 dB noise figure.
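
The number itself follows from a one-line calculation using Boltzmann’s constant and the usual RF convention of 290 K for room temperature; a quick sketch:

    import math

    k = 1.380649e-23   # Boltzmann constant, J/K
    T = 290.0          # "room temperature" by RF convention, kelvin
    B = 1.0            # 1 Hz bandwidth

    kTB_dbm = 10 * math.log10(k * T * B / 1e-3)
    print(f"kTB = {kTB_dbm:.1f} dBm/Hz")   # about -174.0 dBm/Hz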

Not surprisingly, many engineers also see this as the lowest possible signal level one could measure, a kind of noise floor or barrier that one could not see beyond or measure beneath. As a matter of fact, even this level should not be achievable because signal analyzers contribute some noise of their own.

This intuitive expectation of an impenetrable noise floor is logical but flawed, as demonstrated by the measurement example below that uses Keysight’s Noise Floor Extension (NFE) feature in a signal analyzer. Here, a multi-tone signal with very low amplitude is measured near the signal analyzer’s noise floor.

The noise marker shows that the effective noise floor of the measurement (blue) is actually below kTB after NFE removes most of the analyzer’s noise. The inset figure shows how a signal produces a detectable bump in the analyzer’s pre-NFE noise floor (yellow), even though it’s about 5 dB below that noise floor.

I’ve previously described NFE, and for this discussion I’ll summarize by saying that it allows some analyzers to accurately estimate their own noise contribution and then automatically subtract most of it from the measurement. The result is a substantial improvement in effective noise floor and the ability to separate very small signals from noise.

While it is indeed correct that kTB is a noise floor that cannot be improved, or even matched in an analyzer, the error in intuition is in associating this in a 1:1 fashion with an ultimate measurement limit. As discussed previously, signal and noise power levels—even very small ones—can be reliably added or subtracted to refine raw measurement results.

kTB and related noise in analyzers are phenomena whose values, when averaged, are predictable when the measurement conditions and configuration are known. Consequently, subtracting analyzer noise power can be seen as adding information to the measurement process, in turn allowing more information to be taken from the measurement result.
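
As a rough illustration of the arithmetic involved, noise powers add and subtract in linear (milliwatt) terms, not in dB; here is a sketch with arbitrary levels (this is not the actual NFE algorithm, which models the analyzer’s noise contribution in far more detail):

    import math

    def dbm_to_mw(dbm):
        return 10 ** (dbm / 10)

    def mw_to_dbm(mw):
        return 10 * math.log10(mw)

    measured_dbm = -95.0        # raw marker reading: signal plus analyzer noise
    analyzer_noise_dbm = -97.0  # analyzer's own (known/modeled) noise contribution

    # Subtract the known noise power in linear terms, then convert back to dBm
    corrected_mw = dbm_to_mw(measured_dbm) - dbm_to_mw(analyzer_noise_dbm)
    print(f"corrected level: {mw_to_dbm(corrected_mw):.1f} dBm")   # about -99.3 dBm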

OK, so measuring below kTB is perhaps more of a parlor trick than a practical need. However, an intuitive understanding of its possibility illuminates some important aspects of making better RF measurements of those tiny signals that so frequently challenge us.

* You may instead see the figure -177 dBm/Hz for kTB. This refers to a slightly different noise level measurement than that of a spectrum or signal analyzer, as explained at the link.

Posted in Aero/Def, EMI, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

Envelope Tracking and Riding the Gain

  You can “turn it up to eleven” as long as you don’t leave it there

When I first heard the term “envelope tracking” I thought of the classic investigative/surveillance technique called “mail cover” in which law enforcement gets the postal service to compile information from the outside of envelopes. The practice was in the news a while back due to its use with digital communications.

Learning a little more, I quickly realized that it has nothing to do with the mail but, like the mail, has precedent that reaches back many years. “Riding the gain” or “gain riding” is a manual process that has been used for decades in audio recording and other applications where excessive dynamic range is a problem. Its use predates vinyl records, though I first encountered it in my previous life as a radio announcer, broadcasting live events.

When I was riding the gain, it was a manual process of twisting a knob, trying to reduce input dynamic range to something a small-town AM transmitter could handle. I was part of a crude feedback system, prone to delay and overshoot, as I’m sure my listeners would attest.

These days, envelope tracking is another example of how digital processing is used to solve analog problems. In this case it’s the conflict between amplifier efficiency and the wide variations in the RF envelope of digital modulation. If the power supply of an RF amplifier can be dynamically adjusted according to the power needed by modulation, it can—at every instant—be operating at its most efficient point.

In envelope tracking an RF power amplifier is constantly adjusted to track the envelope of the modulated input signal. The amplifier operates at higher efficiency and lower temperature, using less battery power and potentially creating less adjacent-channel interference.

Power efficiency has always been a major driver in mobile communications and its importance continues to grow. Batteries are limited by the size and weight of the handsets users are willing to carry and, yet again, Moore’s Law points the way to improvement. Available DSP now has the high speed and low power consumption to calculate RF envelope power on the fly. The envelope value is fed to a power supply with sufficient bandwidth or response time to adjust its drive of the RF power amplifier accordingly.

An envelope tracking power amplifier (ETPA) is dynamically controlled for optimum efficiency by tracking the required RF envelope power. The tracking is based on envelope calculations from the I/Q signal, modified by a shaping table.

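Conceptually, the envelope path does something like the following (a much-simplified sketch; a real ETPA adds careful delay alignment, bandwidth limiting and a calibrated, amplifier-specific shaping table):

    import numpy as np

    def envelope_from_iq(i, q):
        # Instantaneous RF envelope magnitude from baseband I/Q samples
        return np.sqrt(i ** 2 + q ** 2)

    def shape_supply_voltage(envelope, v_min=0.5, v_max=3.4):
        # Map the normalized envelope to a supply voltage via a simple
        # (hypothetical) shaping table
        env = np.clip(envelope / envelope.max(), 0.0, 1.0)
        return v_min + (v_max - v_min) * env

    # Arbitrary two-tone baseband test signal
    t = np.arange(0, 1e-4, 1e-7)
    i = np.cos(2 * np.pi * 1e5 * t) + 0.5 * np.cos(2 * np.pi * 3e5 * t)
    q = np.sin(2 * np.pi * 1e5 * t) + 0.5 * np.sin(2 * np.pi * 3e5 * t)

    v_supply = shape_supply_voltage(envelope_from_iq(i, q))
    print(v_supply.min(), v_supply.max())   # supply voltage tracks the RF envelope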

This all seems fairly straightforward but, of course, is anything but. The calculation and response times are very short, and a high degree of time alignment is required. Power supplies must be extremely responsive and still very efficient. All of the DSP must itself be power efficient, to avoid compromising the fundamental power benefit.

Envelope tracking is a downstream approach to improving power amplifier efficiency, joining earlier upstream techniques such as crest factor reduction and macro-scale approaches such as digital predistortion. To a great extent, all rely on sophisticated algorithms implemented in fast DSP.

That’s where Keysight’s design and test tools come in. You can find a collection of application notes and other information at www.keysight.com/find/ET.

With envelope tracking you can now turn your amplifiers up to eleven when you need to, and still have a battery that lasts all day.

Posted in Signal analysis, Signal generation, Wireless

Spurious Measurements: Making the Best of a Tedious Situation

   Sometimes I need to be reminded to take my own advice

Recently, I’ve been looking into measuring spurious signals and the possibility of using periodic calibration results to improve productivity. I’ll share more about that in a future post, but for now it seemed useful to summarize what I’ve learned—or re-learned—about new and traditional ways to measure spurs.

Spur measurements can be especially time-consuming because they’re usually made over wide frequency ranges and require high sensitivity. Unlike harmonics, spur locations are typically not known beforehand so the only choice is to sweep across wide spans using narrow resolution bandwidths (RBWs) to reduce the analysis noise floor. With spurs near that noise floor, getting the required accuracy and repeatability can be a slow, tedious job.

An engineer experienced in optimizing these measurements reminded me of advice I’ve heard—and shared—before: Don’t measure where you don’t need to, and don’t make measurements that don’t matter.

The first “don’t” is self-explanatory. The frequency spectrum is wide, but the important region is mercifully much narrower—and we should enjoy every possible respite from tedium.

The second “don’t” is less obvious. It’s a reminder to begin with a careful look at the DUT to determine which measurements are required and how good those measurements need to be. For example:

  • Do you need to measure specific spur frequencies and amplitudes, or is a limit test sufficient?
  • How much accuracy and variance are acceptable? What noise floor or signal/noise and averaging are needed to achieve this?
  • Are the potential spurs CW? Modulated? Impulsive?

The answers will help you define an efficient test plan and select the features in a signal analyzer that dramatically improve spur measurements.

One especially useful feature is the spurious measurement application. It allows you to build a custom set of frequency ranges, each with optimized settings for RBW, filtering, detectors, etc. You measure only where needed, as shown below.

With the measurement application, you can set up multiple analysis ranges and optimize the settings for each. Measurements are made automatically, with pass/fail limit testing.

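In practice such a range table amounts to a small, structured set of per-range settings and limits; here’s a rough sketch of the idea in Python (the field names and values are hypothetical, not the analyzer’s actual remote-control interface):

    # Hypothetical spur-search range table: measure only where needed,
    # with per-range RBW, detector and pass/fail limit
    spur_ranges = [
        {"start_hz": 10e6,  "stop_hz": 1e9,   "rbw_hz": 100e3, "detector": "peak",    "limit_dbm": -60},
        {"start_hz": 1e9,   "stop_hz": 3e9,   "rbw_hz": 30e3,  "detector": "peak",    "limit_dbm": -70},
        {"start_hz": 5.1e9, "stop_hz": 5.9e9, "rbw_hz": 10e3,  "detector": "average", "limit_dbm": -80},
    ]

    def check_spurs(found_spurs, ranges):
        # found_spurs: list of (frequency_hz, level_dbm) tuples reported by the analyzer
        for freq, level in found_spurs:
            for r in ranges:
                if r["start_hz"] <= freq <= r["stop_hz"] and level > r["limit_dbm"]:
                    print(f"FAIL: {freq / 1e9:.3f} GHz at {level:.1f} dBm (limit {r['limit_dbm']} dBm)")

    check_spurs([(2.4e9, -65.0), (5.5e9, -85.0)], spur_ranges)   # flags the 2.4 GHz spur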

This application is helpful in ATE environments, offloading tasks from the system CPU. It’s also worth considering in R&D because, unfortunately, spur measurements usually have to be repeated… many, many times.

Some recent innovations in digital filters and DSP have dramatically improved sweep rates for narrow RBWs in signal analyzers such as Keysight’s PXA. With sufficient processing, RBW filters can now be swept up to 50 times faster without compromising amplitude or frequency accuracy. The benefit is greatest for RBWs of several to several hundred kilohertz, as is typical for spur measurements (see this recent app note).

One factor that can muck up the works is the presence of non-CW spurs. For example, TDMA schemes often produce time-varying spurs. This violates key assumptions underlying traditional search techniques and makes it much tougher to detect and measure spurs.

Fortunately, signal analyzers have evolved to handle these challenges. In TDMA systems, sync or trigger signals are often available to align gated sweeps that analyze signals only during the desired part of the TDMA frame.

Perhaps the most powerful tool for finding impulsive or transient spurs is the real-time analyzer, which can process all the information in a frequency span without gaps and produce a spectrogram or density display that reveals even the most intermittent signals.

The best tool for precisely measuring time-varying spurs is vector signal analyzer (VSA) software. The software uses RF/microwave signal analyzers, oscilloscopes, etc., to completely capture signals for any type of frequency-, time- and modulation-domain analysis. Signals can be recorded for flexible post-processing as a way to accurately measure all their characteristics from a single acquisition, solving the problem of aligning measurement to the time-varying nature of the signal.

It’s no secret that spur detection and measurement are both difficult and essential, but with the right advice and the right equipment you can minimize the tedium.

Posted in Aero/Def, EMI, Measurement techniques, Microwave, Millimeter, Signal analysis, Wireless

Does RF Noise have Mass?

   I’m not usually a fan of noise, but there are exceptions

My provisional assumption is that noise does indeed have mass. I support that notion with the following hare-brained chain of reasoning: The subject of noise has a gravity-like pull that compels me to write about it more than anything else. Because gravity comes from mass, noise therefore must have mass. Voila!

My previous posts dealing with noise have all been about minimizing it: averaging away its effects, estimating the errors it causes, predicting and then subtracting noise power, and so on. Sometimes I just complain about it or wax philosophical.

I even created a webcast titled “conquering noise” but, of course, that was a bit of a conceit. Noise is a fundamental natural phenomenon and it is never vanquished. Instead, I have mentioned that noise can be beneficial in some circumstances—and now it’s time to describe one.

A few years ago, a colleague was using Keysight’s Advanced Design System (ADS) software to create 10 MHz WiMAX MIMO signals that included impairments. He started by adding attenuation to one transmitter, but after finding little or no effect on modulation quality, he added a 2 MHz bandpass filter to one channel, as shown below.

EEsof ADS simulation of a 10 MHz two-channel MIMO signal with an extra 2 MHz bandpass filter inserted in one channel.

Surely a filter that removed most of one channel would confound the demodulator. Comparing the spectra of the two channels, the effect is dramatic.

Spectrum of two simulated WiMAX signals with 10 MHz bandwidth. The signal in the bottom trace has been modified by a 2 MHz bandpass filter.

All that filtering in one channel had no significant effect on modulation quality! The VSA software he was using—as an embedded element in the simulation—showed the filter in the spectrum and the channel frequency response, but in demodulation it caused no problem.

He emailed a recording of the signal and I duplicated his results using the VSA software on my PC. I then told him he could “fix” the problem by simply adding some noise to the signals.

This may seem like an odd way to solve the problem, but in this case the simulation didn’t match reality in the way it responded to drastic channel filtering. The mismatch was due to the fact that the simulated signals were noise-free, and the channel equalization in demodulation operations could therefore perfectly correct for filter impairments, no matter how large they were.
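
Adding that noise is easy in any simulation or post-processing environment; here’s a sketch of the usual complex-AWGN recipe (the target SNR is arbitrary):

    import numpy as np

    def add_awgn(iq, snr_db):
        # Add complex white Gaussian noise to an I/Q record at a chosen SNR
        sig_power = np.mean(np.abs(iq) ** 2)
        noise_power = sig_power / 10 ** (snr_db / 10)
        noise = np.sqrt(noise_power / 2) * (
            np.random.randn(len(iq)) + 1j * np.random.randn(len(iq)))
        return iq + noise

    iq = np.exp(2j * np.pi * 0.01 * np.arange(1000))   # placeholder I/Q record
    noisy_iq = add_awgn(iq, snr_db=30)                 # now the equalizer can't "cheat"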

In many ways it’s the opposite of the adaptive equalization used in real-world situations with high noise levels, and I have previously cautioned you to be careful what you ask for. When there is no noise, you can correct signals as much as you want, without ill effects.

Of course, “no noise” is not the world we live in or design for, and as much as I hate to admit it, there are times when it’s beneficial to add some.

There are certainly other uses for noise. Those also have that peculiar massive attraction, and I know I’ll write about them again soon.

Posted in Aero/Def, Low frequency/baseband, Measurement techniques, Microwave, Millimeter, Signal analysis, Signal generation, Wireless

RF, Protocol and the Law

   RF engineering and a layer hierarchy that extends all the way to the spectral authorities

In our day jobs we focus mainly on the physical layer of RF communications, and there is certainly enough challenge there for a lifetime of productive work. The analog and digitally modulated signals we wrestle with are the foundation of an astonishing worldwide expansion of communications.

Of course, the physical layer is just the first of many in modern systems. Engineering success often involves interaction with higher layers that are commonly described in diagrams such as the OSI model shown below.

The Open Systems Interconnection (OSI) model uses abstraction layers to build a conceptual model of the functions of a communications system. The physical layer is the essential foundation, but many other layers are needed to make communication efficient and practical. (Image from Wikimedia Commons)

The OSI model is a good way to build an understanding of systems and to figure out how to make them work, but sometimes we need to add even more layers to see the whole picture. A good example comes from a recent event that caught my eye.

Many news outlets reported that some hotels in one chain in the US were “jamming” private Wi-Fi hotspots to force convention-goers to use the hotel’s for-fee Wi-Fi service. The term jamming grabbed my attention because it sounded like a very aggressive thing to do to the 2.4 GHz ISM band, which functions as a sort of worldwide public square in the spectral world. I figured regulatory authorities such as our FCC would take a pretty dim view of this sort of thing.

As is so often the case, many general news organizations were being less than precise. The hotel chain was actually blocking Wi-Fi rather than jamming it. This is something that happens not at the physical layer—RF jamming—but a few layers higher.

According to the FCC, hotel employees “had used containment features of a Wi-Fi monitoring system” to prevent people from connecting to their own personal Wi-Fi networks. Speculation from network experts is that the Wi-Fi monitoring system could be programmed to flood the area with de-authentication or disassociation packets that would affect access points and clients other than those of the hotel.

It may not surprise you that the FCC also objected to this use of the ISM band, and the result was a $600,000 settlement with the hotel to resolve the issue. The whole RF story thus extends the OSI model to at least a few more levels, including the vendor of the monitoring system, the hotel management and—at least one layer above them!—the FCC itself.

I suppose you can insert some legislative and political layers in there somewhere if you want, but I’m happy to focus my effort on wrangling the physical layer and those near it. Keysight signal generators and signal analyzers are becoming more capable above the physical layer, with features such as Wireless Link Analysis to perform layer 2 and layer 3 analysis of LTE-FDD UL and DL signals.

In the end, I hope there are ways to resolve these issues and give everyone fair access to the unlicensed portions of our shared spectrum. I dread a situation in which a market emerges for access points or hotspots with counter-blocking technology and a resulting arms race that could leave us all without access.

Posted in Measurement techniques, Signal analysis, Signal generation, Wireless
About

Agilent Technologies Electronic Measurement Group is now Keysight Technologies http://www.keysight.com.

My name is Ben Zarlingo and I’m an applications specialist for Keysight Technologies. I’ve been an electrical engineer working in test & measurement for several decades now, mostly in signal analysis. For the past 20 years I’ve been involved primarily in wireless and other RF testing.

RF engineers know that making good measurements is a challenge, and I hope this blog will contribute something to our common efforts to find the best solutions. I work at the interface between Keysight’s R&D engineers and those who make real-world measurements, so I encounter lots of the issues that RF engineers face. Fortunately, I also encounter lots of information, equipment, and measurement techniques that improve accuracy, measurement speed, dynamic range, sensitivity, repeatability, etc.

In this blog I’ll share what I know and learn, and I invite you to do the same in the comments. Together we’ll find ways to make better RF measurements no matter what “better” means to you.
