Exploring “Near-Far” Problems—and Opportunities

  Sometimes a common understanding is not common

It’s always an interesting experience when I find that one of my assumptions is wrong, or at least not very right. Yes, it’s a chance to learn and grow and all that, but it sometimes provokes puzzlement or a little disappointment. This is one of those puzzling and slightly disappointing cases.

I first encountered the “near-far” term and associated concepts in an internal training talk by an R&D project manager with deep experience in RF circuits and measurements. He was explaining different dynamic range specifications in terms of distortion and interference. His context was the type of real-world problems that affect spec and design tradeoffs.

In his talk, the project manager explained some of the ways design considerations and performance requirements depend on distance. After a couple of examples his meaning was clear in terms of the wide range of distances involved and the multitude of implications for RF engineering. Whether over-the-air or within a device, physical distance really matters.

At the time, I thought it was a neat umbrella concept, linking everything from intentional wireless communications and associated unintentional interference to the undesirable coupling that is an ever-present challenge in today’s multi-transceiver wireless devices. For example, even small spurious or harmonic products can cause problems with unrelated radios in a compact device where 5 cm is near and 5 km—the base station you want to reach—is far.
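
To put rough numbers on that contrast, here’s a short Python sketch using the standard free-space path-loss formula. The 2 GHz carrier is just an illustrative assumption, and a 5 cm path is really near-field coupling rather than free-space propagation, so treat the result as an order-of-magnitude comparison.

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c), in dB."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

f = 2.0e9                                    # assumed 2 GHz carrier
near = free_space_path_loss_db(0.05, f)      # 5 cm: the unrelated radio in the same device
far = free_space_path_loss_db(5000.0, f)     # 5 km: the base station you want to reach
print(f"Path loss at 5 cm: {near:6.1f} dB")
print(f"Path loss at 5 km: {far:6.1f} dB")
print(f"Difference:        {far - near:6.1f} dB")   # ~100 dB for the 100,000:1 distance ratio
```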

That talk was a formative experience for me, way back in the 1980s. I kept near-far considerations and configurations in my mind as I learned about wireless technologies, equipment design and tradeoffs, and avoidance of interference problems. The near-far concept illuminated the issues behind a wide range of schemes and implementations.

Though I didn’t hear the near-far concept too often, I assumed most RF engineers thought along those lines and presumably used those terms. A recent Web search for “near-far problem” let me know my assumption was faulty. The search results are relatively modest and mostly focus on the “hearability problem” in CDMA wireless schemes. This is an excellent example of a near-far situation, where transmitters at shorter distances are received at higher power, making it difficult for correlators—which see every signal but the target as noisy interference—to extract smaller signals.

Measuring power in the code domain separates received power according to individual transmitters and their codes. Demodulation is most effective when code powers are equal. This example is from the N9073C W-CDMA/HSPA+ X-Series Measurement App.

However, it’s disappointing to see a powerful term and concept narrowed so much. The CDMA hearability problem in the code domain has analogs in the frequency domain with OFDMA, and in both cases various power-control schemes are used to minimize the problems.

I don’t think the narrowing of meaning is a matter of the era when the term came into use, since I heard the term years before CDMA, and some people were using it years before that. Perhaps it reflects the fact that CDMA was a very interesting and high-profile example, and a single association with the term was thus established.

In any case, I think this shrunken use of the term is unfortunate. Careful consideration of potential near-far issues can help engineers avoid serious problems, or at least address them early on, before solutions are foreclosed or too much money is spent.

One cautionary example is the c.2010 effort by LightSquared (now Ligado Networks) to expand mobile 4G-LTE coverage using terrestrial base stations in a band originally intended for satellites. The band was adjacent to some GPS frequencies, and the switch from satellite distances (far) to terrestrial ones (near) dramatically increased the likelihood and severity of interference problems. The large reduction in distance upset earlier assumptions about relative signal strength—assumptions that drove the design, performance, and cost of many GPS receivers.

The potential interference problems prevented approval of the original LightSquared plan, and the fate of its portion of the L-Band is not yet determined. Whatever it is, I expect it will more fully account for the near-far issues, along with the cost and performance requirements related to both new and existing equipment.

The near-far concept also has a probability dimension. As you’d expect, some sins of RF interference are more likely to be a critical issue as the density of radios in our environment continues its dramatic increase. Some problems that were once far away are getting nearer all the time.

To satisfy my own curiosity, I’ll leave you with two questions: Have you encountered the near-far concept? Or do you rely on a touchstone idea, learned from an experienced hand, that isn’t as widely known as you once thought?

Posted in Aero/Def, EMI, Microwave, Millimeter, Signal analysis, Wireless

Baseband and IF Sampling

  Different ways to get your signal bits

There’s a long history of synergy and a kind of mutual bootstrapping in the technology of test equipment and the devices it’s used to develop and manufacture. Constant technology improvements lead to welcome—and sometimes crucial—improvements in RF performance. It’s a virtuous cycle that powers our field, but it also presents us with some challenging choices as the landscape evolves.

Signal analyzers and digital oscilloscopes have exemplified these improvements and illustrate the complex choices facing RF engineers. The latest signal analyzer, for example, covers bandwidths as wide as 1 GHz at frequencies up to 50 GHz. New oscilloscopes offer bandwidths as wide as 63 GHz, solidly in the millimeter range. Other oscilloscopes and digitizers, at more modest prices, cover the cellular and WLAN RF bands.

The established solution for spectrum analysis and demodulation of RF/microwave signals is the signal analyzer, and it’s logical to wonder if the technology advances in digital oscilloscopes and signal analysis software have changed your choices. If both hardware platforms can sample the bandwidths and operating frequencies used, how do you get your bits and, ultimately, the results you need?

The answer begins with an understanding of the two different approaches to sampling signals, summarized in these dramatically simplified block diagrams. First, a look at IF sampling:

In this architecture, the signal is downconverted and band-limited before being digitized. Sampling is performed on the intermediate frequency (IF) stage output.

In signal analyzers, the sampling frequency is related to the maximum bandwidth required to represent the signal under test. That frequency is usually low compared to the center frequency of the signal under test, and there is no need to change it with changes in signal center frequency.

The alternative, called baseband sampling, involves direct sampling of the entire signal under test, from DC to at least its highest occupied frequency: CF + ½ OccBW.

Here, the signal undergoes minimal processing before being digitized. The lowpass filter ensures that frequencies above the ADC’s Nyquist sampling criterion do not produce false or alias products in the processed results.

The signal under test is completely represented by baseband sampling and any type of analysis can be performed. Narrowband analysis as performed with a spectrum/signal analyzer—in the time, frequency, and modulation domains—is achieved by implementing filters, mixers, resamplers, and demodulators in DSP. Keysight’s 89600 VSA software is the primary tool for these tasks and many others, and it runs on a variety of sampling platforms.
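
For a rough sense of what each approach asks of the ADC, here’s a minimal sketch comparing the two sampling requirements. The 28 GHz carrier, 1 GHz bandwidth, and the 1.25x and 2x sampling factors are illustrative assumptions, not the specifications of any particular instrument.

```python
def if_sampling_rate(analysis_bw_hz, overhead=1.25):
    """Complex (I/Q) sampling of a downconverted, band-limited IF."""
    return overhead * analysis_bw_hz

def baseband_sampling_rate(center_freq_hz, occupied_bw_hz):
    """Real sampling of everything from DC to CF + OccBW/2, at the Nyquist rate."""
    return 2 * (center_freq_hz + occupied_bw_hz / 2)

cf, bw = 28e9, 1e9   # hypothetical 28 GHz signal with 1 GHz occupied bandwidth
print(f"IF sampling:       ~{if_sampling_rate(bw) / 1e9:6.2f} GSa/s")
print(f"Baseband sampling: ~{baseband_sampling_rate(cf, bw) / 1e9:6.2f} GSa/s")
```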

We thus have two paths to the signal analysis we need, and we’re back to the earlier question about the best sampling choice among evolving technologies. The answer is primarily driven by performance requirements, the operating frequencies and bandwidths involved, and the resulting demands on sample rate.

The architecture of IF sampling allows for analog downconversion and filtering to dramatically reduce the required sample rate. This process has been thoroughly optimized in performance and cost, and focuses ADC performance on the essential signal. Other frequencies are excluded, and the limited bandwidth allows for ADCs with the best resolution, accuracy, and dynamic range.

With baseband sampling, frequency conversion and filtering are done in DSP, requiring a vast amount of digital data reduction to focus analysis on the band in question. This must precede processing for signal-analysis results such as spectrum or demodulation.

The tradeoffs explain why spectrum analysis and demodulation are generally performed using IF sampling. However, the technological evolution mentioned above explains the increasing use of baseband sampling for RF and microwave signal analysis. ADCs and DSPs are improving in cost and quality, and are frequently available on the RF engineer’s bench in the form of high-resolution oscilloscopes. RF and modulation quality performance may be adequate for many measurements, and the extremely wide analysis bandwidths available may be an excellent solution to the demands of radar, EW, and the latest wideband or aggregated-carrier wireless schemes.

Ultimately, personal preference is a factor that can’t be ignored. Do you look for your first insights in the time or frequency domain before delving into measurements such as demodulation? The software and hardware available these days may give you just the choice you want.

Posted in Aero/Def, EMI, Low frequency/baseband, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

An Intuitive Look at Noise Figure

  An overview to complement your equations

After some recent conversations about noise figure measurements, I’ve been working to refresh my knowledge of what they mean and how they’re made. My goal was to get the essential concepts intuitively clear in my mind, in a way that would persist and therefore guide me as I looked at measurement issues that I’ll be writing about soon.

Maybe my summary will be helpful to you, too. As always, feel free to comment with any suggestions or corrections.

  • Noise figure is a two-port measurement, defined as an input/output ratio of signal-to-noise measurements—so it’s a ratio of ratios. The input ratio may be explicitly measured or may be implied, such as assuming that it’s simply relative to the thermal noise of a perfect passive 50Ω termination.
  • The input/output ratio is called noise factor, and when expressed in dB it’s called noise figure. Noise figure is easier to understand in the context of typical RF measurements, and therefore more common.
  • It’s a measure of the extra noise contributed by a circuit, such as an amplifier, beyond that of an ideal element that would provide gain with no added noise. For example, an ideal amplifier with 10 dB of gain would have 10 dB more noise power at its output than its input, but would still have a perfect noise figure of 0 dB (see the short sketch after this list).
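
Here’s a minimal numeric sketch of that ratio-of-ratios definition; the SNR values are made up for illustration.

```python
def noise_figure_db(snr_in_db, snr_out_db):
    """Noise figure: input SNR minus output SNR, both in dB (a ratio of ratios)."""
    return snr_in_db - snr_out_db

# Hypothetical 10 dB amplifier: an ideal one raises signal and noise equally,
# leaving SNR unchanged (NF = 0 dB); a real one degrades the output SNR.
print(noise_figure_db(40.0, 40.0))       # ideal amplifier: 0.0 dB
print(noise_figure_db(40.0, 36.5))       # real amplifier:  3.5 dB

# Noise factor is the same quantity expressed as a linear ratio
print(10 ** (noise_figure_db(40.0, 36.5) / 10))   # F ~ 2.24
```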

It’s important to understand that noise figure measurements must accurately account for circuit gain because it directly affects measured output noise and therefore noise figure. Gain errors translate directly to noise figure errors.

The Y factor method is the most common way to make these measurements. A switchable, calibrated noise source is connected to the DUT input and a noise figure analyzer or signal analyzer is connected to the output. An external preamp may be added to optimize analyzer signal/noise and improve the measurement.

The central element of the noise source is a diode, driven to an avalanche condition to produce a known quantity of noise power. The diode is not a very good 50Ω impedance, so it is often followed by an attenuator to improve impedance match with the presumed 50Ω DUT.

The noise figure meter or signal analyzer switches the noise source on and off and compares the results, deriving both DUT gain and noise figure versus frequency. It’s a convenient way to make the measurements needed for noise figure, and specifications are readily available for both the noise source and the analyzer.
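
A bare-bones version of the Y-factor arithmetic looks something like this. The ENR and noise readings are hypothetical, and the sketch omits the second-stage (analyzer) noise correction that a full measurement includes.

```python
import math

def y_factor_noise_figure_db(enr_db, n_on_dbm, n_off_dbm):
    """Basic Y-factor relation: NF = ENR - 10*log10(Y - 1), with Y = N_on / N_off (linear).
    The second-stage (analyzer) noise correction of a real measurement is omitted here."""
    y = 10 ** ((n_on_dbm - n_off_dbm) / 10)
    return enr_db - 10 * math.log10(y - 1)

# Hypothetical readings: a 15 dB ENR source and the measured output noise levels
print(f"DUT noise figure: {y_factor_noise_figure_db(15.0, -82.0, -89.5):.2f} dB")
```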

However, the impedance match between the noise source and the DUT affects the power that is actually delivered to the DUT and therefore the gain calculated by measuring its output. The impedance match is generally very good at low frequencies and with an attenuator in the noise source output. This enables accurate estimates of measurement uncertainty.

Unfortunately, as you approach millimeter frequencies, impedances are less ideal, gains are lower, and noise source output declines. Noise figure measurements are more challenging, and uncertainty is harder to estimate. In at least one upcoming post, I’ll discuss these problems and some practical solutions and measurement choices.

Why go to all the trouble? Whether or not it has mass, noise is a critical factor in many applications. By making individual or incremental noise figure measurements, you can identify and quantify noise contributors in your designs. This is the knowledge that will help you minimize noise, and optimize the cost and performance tradeoffs that are an important part of the value you add as an RF engineer.

Posted in Aero/Def, EMI, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

Predicting the Technological Future vs. Adapting to It

  GPS and the skill, creativity and imagination of engineers

When it comes to predicting the future, I’m not sure if RF engineers are any better or worse than others—say economists or the general public. If you limit predictions to the field of electronics and communications, engineers have special insight, but they will still be subject to typical human biases and foibles.

However, when it comes to adapting to the future as it becomes their present, I’d argue that engineers show amazing skill. The problem solving and optimizing that comes instinctively to engineers give them tremendous ability to take advantage of opportunities, both technical and otherwise. Some say skillful problem solving is the defining characteristic of an engineer.

GPS is a good example of both adaptation and problem-solving, and it’s on my mind because of historical and recent developments.

It was originally envisioned as primarily a navigation system, and the scientists and engineers involved did an impressive job of predicting a technological future that could be implemented on a practical basis. Development began in 1973, with the first satellite launch in 1978, so the current system that includes highly accurate but very inexpensive receivers demonstrates impressive foresight. Indeed, the achievable accuracy is so high in some implementations that it is much better than even the dimensions of receive antennas, and special choke ring antennas are used to take advantage of it.

In some systems, GPS accuracy is better than the dimensions of the receive antenna, and in surveying you’ve probably seen precision radially symmetric antennas such as this ring type. Diagram from the US Patent and Trademark Office, patent #6040805

Over the years, GPS has increasingly been used to provide another essential parameter: time. As a matter of fact, the timing information from GPS may now be a more important element of our daily lives than navigation or location information. It’s especially important in keeping cellular systems synchronized, and it’s also used with some wireline networks, the electrical power grid, and even banking and financial trading operations.

As is so often the case, the dependencies and associated risks are exposed when something goes wrong. In January of this year, in the process of decommissioning one GPS satellite, the U.S. Air Force set the clocks wrong on about 15 others. The error was only 13 microseconds, but it caused about 12 hours of system problems and alarms for telecommunications companies. Local oscillators can provide a “holdover time” of about a day in these systems, so a 12-hour disturbance got everyone’s attention.

Outages such as this are a predictable part of our technological future, whether from human error, jamming, hardware failure, or a natural disaster such as the Carrington Event. The fundamental challenge is to find ways to adapt or, better yet, to do the engineering in advance to be able to respond without undue hardship or delay.

RF engineering obviously has a major role to play here, and at least two technologies are currently practical as alternates or supplements to GPS:

  • The proposed eLORAN system would replace several earlier LORAN systems that have been shut down in recent years. The required engineering is no barrier, but legislative support is another matter. In addition to serving as a GPS backup, eLORAN offers better signal penetration into buildings, land and water.
  • Compact, low-power atomic frequency references can offer independence from GPS, or may provide greatly extended holdover times. Their modest cost should allow wide adoption in communications systems.

As legendary computer scientist Alan Kay once said, “The best way to predict the future is to invent it.” If past is prologue, and I believe it is, I’m confident RF engineers will continue to be among the best at designing for the future, adapting to technology opportunities, and solving the problems that arise along the way.

Posted in Aero/Def, History, Low frequency/baseband, Microwave, Signal analysis, Wireless

Insidious Measurement Errors

  How to avoid fooling yourself

Some years ago I bought an old building lot for a house, and hired a surveyor because the original lot markers were all gone. It was a tough measurement task because nearby reference monuments had also gone missing since the lot was originally platted. Working from the markers in several adjacent plats, the surveyor placed new ones on my lot—but when he delayed the official recording of the survey I asked him why. His reply: he didn’t want to “drag other plat errors into the new survey.” Ultimately, it took three attempts before he was satisfied with the placement.

Land surveys are different from RF measurements, but some important principles apply to both. Errors sometimes stack up in unfortunate ways, and an understanding of insidious error mechanisms is essential if you want to avoid fooling yourself. This is especially true when you’re gathering more information to better understand measurement uncertainty.

Keysight engineers have the advantage of working in an environment that is rich in measurement hardware and expertise. They have access to multiple measurement tools for comparing different approaches, along with calibration and metrology resources. I thought I’d take a minute to discuss a few things they’ve learned and approaches they’ve taken that may help you avoid sneaky errors.

Make multiple measurements and compare. I’m sure you’re already doing this in some ways—it’s an instinctive practice for test engineers, and can give you an intuitive sense of consistency and measurement variability. Here’s an example of three VSWR measurements.

VSWR of three different signal analyzers in harmonic bands 1-4. With no input attenuation, mismatch is larger than it would otherwise be. The 95% band for VSWR is about 1.6 dB.

It’s always a good idea to keep connections short and simple, but it’s worth trying different DUT connections to ensure that a cable or connector—or even a specific bit of contamination—isn’t impairing many measurements in a consistent way that’s otherwise hard to spot. The same thing applies to calibration standards and adapters.

The multiple-measurements approach also applies when using different types of analyzer. Signal analyzers can approach the accuracy of RF/microwave power meters, and each can provide a check on an error by the other.

Adjust with one set of equipment and verify with another. DUTs may be switched from one station to another, or elements such as power sensors may be exchanged periodically to spot problems. This can be done on a sample or audit basis to minimize cost impacts.

In estimating uncertainty, understand the difference between worst case and best estimate. As Joe Gorin noted in a comment on an earlier post: “The GUM, in an appendix, explains that the measurement uncertainty should be the best possible estimate, not a conservative estimate. When we know the standard deviation, we can make better estimates of the uncertainty than we can when we have only warranted specifications.” A more thorough understanding of the performance of the tools you have may be an inexpensive way to make measurements better.

Make sure the uncertainties you estimate are applicable to the measurements you make. Room temperature specifications generally apply from 20 to 30 °C, but the “chimney effect” within system racks and equipment stacks can make instruments much warmer than the ambient temperature.

Take extra care as frequencies increase. Mismatch can be the largest source of uncertainty in RF/microwave measurements, and it generally gets worse as frequencies increase. Minimizing it can be worth an investment in better cables, attenuators, adapters, and torque wrenches.
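
For a feel of how much mismatch can contribute, here’s a small sketch of the classic worst-case mismatch-uncertainty limits; the VSWR values are hypothetical.

```python
import math

def gamma_from_vswr(vswr):
    """Reflection coefficient magnitude from VSWR."""
    return (vswr - 1) / (vswr + 1)

def mismatch_uncertainty_db(vswr_source, vswr_load):
    """Worst-case mismatch uncertainty limits in dB: 20*log10(1 +/- Gamma_s * Gamma_l)."""
    g = gamma_from_vswr(vswr_source) * gamma_from_vswr(vswr_load)
    return 20 * math.log10(1 + g), 20 * math.log10(1 - g)

# Hypothetical pairing: a 1.6:1 source VSWR against a 1.9:1 analyzer-input VSWR
plus, minus = mismatch_uncertainty_db(1.6, 1.9)
print(f"Mismatch uncertainty: +{plus:.2f} / {minus:.2f} dB")
```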

This isn’t meant to suggest that you adopt an excessively paranoid outlook—but it’s safe to assume the subtle errors really are doing their best to hide from you while they subvert your efforts. Said another way, it’s always best to be alert and diverse in your approaches.

Posted in Aero/Def, EMI, Low frequency/baseband, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Signal generation, Wireless

Wireless Data Rates and a Failure of Imagination

  Is there an unlimited need for speed, or are we perfecting buggy whips?

Contrary to some stereotypes, engineers—especially the most effective ones—are both intuitive and creative. The benefits and tradeoffs of intuition are significant, and that’s why I’ve written here about its limits and ways it might be enhanced. As for creativity, I don’t think I understand it very well at all, and would welcome any perspective that will improve my own.

Recently, on the same day, I ran across two articles that, together, made me think about the role of imagination in engineering. It is clearly another vital talent to develop, and maybe I can learn a little about this element of creativity in the bargain.

The first was Lou Frenzel’s piece in Electronic Design, wondering about the practical limits of data bandwidth, along with its potential uses. The other was the announcement by Facebook and Microsoft of an upcoming 160 Tbit/s transatlantic cable link called MAREA that will reach from Spain to Virginia. I had to blink and read the figure again: the units really are terabits per second.

That figure made me dredge up past skepticism about an announcement, some years back, of 40 Gbit optical links. I remember wondering what applications—even in aggregate—could possibly consume such vast capacity, especially because many such links were in the works. I also wondered just how much higher things could go, scratching my head in the same way Lou did. Now I find myself reading about a cable that will carry 4,000 times more information than the 40 Gbit one, and concluding that I was suffering from a failure of imagination.

Imagination is anything but a mystical concept, and has real technical and business consequences. One of the most famous examples is the 1967 fire in the Apollo 1 spacecraft, where the unforeseen effects of a high-pressure oxygen environment during a ground test turned a small fire into a catastrophe. In congressional hearings about the accident, astronaut Frank Borman—by many accounts the most blunt and plain-spoken engineer around—spoke of the fatal fire’s ultimate cause as a failure of imagination.

Frank Borman, right, the astronaut representative on the Apollo 1 review board, reframed the cause of the Apollo 1 tragedy as essentially a failure of imagination. One clear risk was eliminated because the rocket wasn’t fueled, and perhaps that blinded NASA to the certainly fatal consequences of fire plus high-pressure oxygen in an enclosed space. (Images from Wikimedia Commons)

In RF engineering, the hazards we face are much more benign, but are still significant for our work and our careers. We may miss a better way to solve a problem, fail to anticipate a competitor’s move, or underestimate the opportunity for—or the demands on—a new application.

That’s certainly what happened when I tried to imagine how large numbers of 40 Gbit links could ever be occupied. I thought about exchanging big data files, transmitting lots of live video—even the HD that was then on its way—and a considerable expansion of mobile services. However, I completely failed to imagine things like video streaming from smartphones en masse, ubiquitous cloud computing, an Internet of umpteen things, and virtual reality (VR).

VR stands out as an example of multi-faceted innovation that was hardly a blip on the horizon a few years ago—but it now promises to be a huge consumer of both downlink and uplink bandwidth. It demands fast, high-resolution video and typically low latency, and the benefits are compelling. As one example, it converts 3D video from a passively consumed product to an immersive experience with lots of instructional and entertainment applications. Just one among many future bandwidth drivers, I’m sure.

It’s been observed that data bandwidth for wireless links is often about one generation behind that of common wired ones. Both have been growing rapidly, and though I can’t imagine exactly what the demand drivers will be, I agree with Lou that we won’t reach limits soon, and there will continue to be plenty of interesting challenges.

For RF engineers it will mean solving the countless problems that stand between lofty wireless (and wired) standards and the practical, affordable products that will make them reality.

Note: This post has been edited to correct a bits/bytes error.

Posted in Aero/Def, History, Off-topic (almost), Wireless

RTSA: How “Real” is Real Enough?

  Improving your chances of finding the signals you want

Real-time spectrum analyzers (RTSAs) are very useful tools when you’re working with time-varying or agile signals and dynamic signal environments. That’s true of lots of RF engineering tasks these days.

A good way to define real time is that every digital sample of the signal is used in calculating spectrum results. The practical implication is that you don’t miss any signals or behavior, no matter how brief or infrequent. In other words, the probability of intercept (POI) is 100 percent or nearly so.

Discussions of real-time analysis and the tracking of elusive signals are often all-or-nothing, implying that RTSAs are the only effective way to find and measure elusive signals. In many cases, however, the problems we face aren’t so clear-cut. Duty cycles may be low, or the signal behavior in question very infrequent and inconsistent, but the phenomenon to be measured still occurs perhaps once per second or more often. You need a POI much greater than the fraction of a percent that’s typical of wideband swept spectrum measurements, but you may not need the full 100 percent. In this post I’d like to talk about a couple of alternatives that will make use of tools that may already be on your bench.

A good example is the infamous 2.4 GHz ISM band, home to WLANs, Bluetooth, cordless phones, barbecue thermometers, microwave ovens, and any odd thing you IoT engineers may dream up. Using the 89600 VSA software, I made two measurements of this 100 MHz band, changing only the number of frequency points calculated. That setting affected the RBW and time-record length, as you can see here.

Two spectrum measurements of the 2.4 GHz ISM band, made with the 89600 VSA software. The upper trace is the default 800-point result, while the lower trace represents 102,400 points. This represents a 128x longer time record, long enough to include a Bluetooth hop in addition to the wider WLAN burst.

The 102,400-point measurement has several advantages for a measurement such as this. First, it truly is a gap-free measurement: For the duration of the longer time record, it is a real-time measurement. Next, it contains more information and is much more likely to catch signals with a low duty cycle. It has a narrower RBW, making it easier to separate signals in the band, and revealing more of the structure of each signal. When viewed in the time domain, it can show much more of the pulse and burst signal behaviors in the band.
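
To make the scaling concrete, here’s a rough sketch of how time-record length and RBW follow from the number of frequency points for a fixed span. The exact sample-rate and window factors in the 89600 VSA differ, so the constants below are assumptions; the 128x ratio between the two setups is the point.

```python
# Rough FFT-measurement relationships for a fixed span: the time record grows in
# proportion to the number of frequency points, and the RBW shrinks by the same factor.

def time_record_s(n_freq_points, span_hz):
    """Approximate record length, assuming alias-protected I/Q sampling at ~the span."""
    return n_freq_points / span_hz

def rbw_hz(record_s, window_enbw=1.5):
    """Approximate RBW: a window noise-bandwidth factor divided by the record length."""
    return window_enbw / record_s

span = 100e6  # the 2.4 GHz ISM band measurement above spans 100 MHz
for n in (800, 102_400):
    t = time_record_s(n, span)
    print(f"{n:7d} points: record ~{t*1e3:7.3f} ms, RBW ~{rbw_hz(t)/1e3:8.2f} kHz")
# 102,400 points -> a 128x longer gap-free record and a 128x narrower RBW
```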

Another advantage of the larger/longer 100K-point measurement is not as obvious. The total calculation and display time does not increase nearly as rapidly as the number of points, making the larger FFT more efficient and increasing the POI. In my specific example, the overall compute and display speed is almost 20 times faster per point, with a corresponding increase in POI. It’s that much more likely that elusive signals will be found—or noticed—even without an RTSA.

For the RF engineer, however, this flood of results can be hard to use effectively. It’s difficult to relate the many successive traces to signal behavior or band activity as they fly by. The key to a solution is to add another dimension to the display, typically representing when or how often amplitude and frequency values occurred. Here are two displays of a 40 MHz portion of that ISM band.

Many measurement results can be combined in a single trace to help understand the behavior of a signal or the activity in a frequency band. The top trace shows how often specific amplitude and frequency values occurred over many measurements. The bottom trace uses color to show how recently the values occurred, producing a persistence display.

These traces make it easier to intuitively interpret dynamic behavior over time and understand the frequency vs. recency of that behavior. Thus, the combination of large FFT size and cumulative color displays may provide the dramatic improvement in POI that you need to find a problem. For precise measurements of elusive signals and dynamic behavior, the 89600 VSA offers other features, including time capture/playback (another variation on real-time measurements over a finite period) and spectrograms created from captured signals.

As professional problem solvers, we can figure out when a finite-duration, gap-free measurement is sufficient and when the continuous capability of an RTSA is the turbo-charged tool we need. In either case, it’s all about harnessing the right amounts of processing power and display capability for the task at hand.

Posted in Aero/Def, EMI, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

Phase Noise and Distortion Measurements

  Understanding how it limits dynamic range and what to do about it

A talented RF engineer and friend of mine is known for saying “Life is a microcosm of phase noise.” He’s an expert in designing low-noise oscillators and measuring phase noise, so I suppose this conceptual inversion is a natural way for him to look at life. He taught me a lot about phase noise and, although I never matched his near-mystical perspective on it, I have been led to wonder if noise has mass.

A distinctly non-mystical aspect of phase noise is its effect on optimizing distortion measurements, and I recently ran across an explanation worth sharing.

A critical element of engineering is developing an understanding of how phenomena interact in the real world, and making the best of them. For example, to analyze signal distortion with the best dynamic range you need to understand the relationships between second- and third-order dynamic range and noise in a spectrum analyzer. These curves illustrate how the phenomena relate:

The interaction of mixer level with noise and second- and third-order distortion determines the best dynamic range in a signal analyzer. The mixer level is set by changing the analyzer’s input attenuator.

From classics such as Application Note 150, you probably already know the drill here: In relative terms, analyzer noise floor and second-order distortion change 1 dB—albeit in opposite directions—for every 1 dB change in attenuation or mixer level, and third-order distortion increases 2 dB for each 1 dB increase in mixer level. Therefore, the best attenuation setting for distortion depends on how these phenomena interact, especially where the curves intersect.
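
Here’s a quick sketch of where those curves cross, using the slopes just described; the noise floor and intercept values are hypothetical example numbers, not any particular analyzer’s specs.

```python
# Where the classic dynamic-range curves intersect (in the spirit of Application Note 150).
# The noise floor (DANL in the chosen RBW), second-harmonic intercept (SHI), and
# third-order intercept (TOI) below are hypothetical example values, in dBm.

danl, shi, toi = -110.0, 45.0, 20.0

# Second order: distortion (dBc) rises 1 dB per dB of mixer level, noise (dBc) falls 1 dB/dB
ml2 = (danl + shi) / 2          # mixer level where the two curves cross
dr2 = (shi - danl) / 2          # dynamic range at that crossing
# Third order: distortion rises 2 dB per dB of mixer level
ml3 = (danl + 2 * toi) / 3
dr3 = 2 * (toi - danl) / 3

print(f"2nd-order optimum: mixer level {ml2:6.1f} dBm, ~{dr2:5.1f} dB of dynamic range")
print(f"3rd-order optimum: mixer level {ml3:6.1f} dBm, ~{dr3:5.1f} dB of dynamic range")
# At the crossing itself, noise and distortion add, costing roughly 3 dB as noted below
```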

The optimum attenuator setting does not precisely match the intersections, though it is very close. The actual dynamic range at that setting is also very close to optimum, though it is about 3 dB worse than the intersection minimum suggests, due to the addition of the noise and distortion.

That’s where your own knowledge and insight come in. The attenuation for the best second-order dynamic range is different from that for the best third-order dynamic range, and the choice depends on your signals and the frequency range you want to measure. Will analyzer-generated second-order or third-order distortion be the limiting factor?

Of course, you can shift the intersections to better locations if you reduce RBW to lower the analyzer noise floor, but that can make sweeps painfully slow.

Fortunately, because you’re the kind of clever engineer who reads this blog, you know about technologies such as noise power subtraction and fast sweep that reduce noise or increase sweep speed without the need to make other tradeoffs.

Another factor, one that is often overlooked, may need to be considered when measuring third-order products: analyzer phase noise.

In this two-tone intermod example with a 10 kHz tone spacing, the analyzer’s phase noise at that same 10 kHz offset limits distortion measurement performance to -80 dBc. Without this phase noise the dynamic range would be about 88 dB.

I suppose it’s easiest to think of the analyzer’s phase noise as contributing to its noise floor in an amount corresponding to its phase noise at the tone offset you’re using. Narrower offsets will be more challenging and, as usual, better phase noise will yield better measurements.
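
One way to make that concrete is to integrate the phase noise at the tone-spacing offset over the resolution bandwidth. The values below are assumptions chosen to land near the -80 dBc of the example above, not measured data.

```python
import math

# Phase noise at the tone-spacing offset, integrated over the RBW's noise bandwidth,
# acts like an added noise floor for the distortion measurement.
def phase_noise_floor_dbc(l_dbc_per_hz, rbw_hz, enbw_factor=1.06):
    return l_dbc_per_hz + 10 * math.log10(enbw_factor * rbw_hz)

# Hypothetical: -110 dBc/Hz at a 10 kHz offset, measured in a 1 kHz RBW
print(f"Phase-noise-limited floor: {phase_noise_floor_dbc(-110.0, 1e3):.1f} dBc")
```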

That’s where clever engineering comes in again. Analyzer designers are always working to improve phase noise, and the latest approach is a major change to the architecture of the analyzer’s local oscillator (LO): the direct digital synthesizer LO. This technology is now available in two of Keysight’s high-performance signal analyzers and will improve a variety of measurements.

The focus of this post has been on two-tone measurements but, of course, many digitally modulated signals can be modeled as large numbers of closely spaced tones. Phase noise continues to matter, even if the equivalent distortion measurements are ACP/ACPR instead of IMD.

Once again, noise is intruding on our measurement plans—or maybe it’s just lurking nearby.

Perhaps this post only proves that my perceptions of phase noise still don’t reach into the mystical realm. Here’s hoping your adventures in phase noise will help you achieve second- and third-order insights.

Posted in Aero/Def, EMI, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

Competition and the Semiconductor Triple-Play

  Get all the signal analyzer performance you pay for

After perusing a pair of new application briefs, I was impressed by the improvements in signal analyzers driven by the combination of evolving semiconductor technology and plain old competition. I’ll write a little about the history here, but will also highlight a few of the benefits and encourage you to take advantage of them as much as possible. They’ll be welcome help with the complex signals and stringent standards you deal with every day.

Intel’s Gordon Moore and David House are credited with a 1960s prediction that has come to be known as Moore’s law. With uncanny accuracy, it continues to forecast the accelerating performance of computers, and this means a lot to RF engineers, too. Dr. Moore explicitly considered the prospects for analog semiconductors in his 1965 paper, writing that “Integration will not change linear systems as radically as digital systems.” *

Note the “as radically” qualifier. Here’s my mental model for the relative change rates.

Comparing the rate of change in performance over time of the semiconductor-based elements of signal analyzers. Processors have improved the fastest, though analog circuit performance has improved dramatically as well.

In analyzer architecture and performance, it seems sensible to separate components and systems into those that are mainly digital, mainly analog, or depend on both for improved performance.

It’s no surprise that improvements in each of these areas reinforce each other. Indeed, the performance of today’s signal analyzers is possible only because of substantial, coordinated improvement throughout the block diagram.

Digital technology has worked its way through the signal analyzer processing chain, beginning with the analog signal representing the detected power in the IF. Instead of being fed to the Y-axis of the display to drive a storage CRT, the signal was digitized at a low rate to be sent to memory and then on to a screen.

The next step—probably the most consequential for RF engineers—was to sample the downconverted IF signal directly. With enough sampling speed and fidelity, complex I/Q (vector) sampling could represent the complete IF signal, opening the door to vector signal analysis.

Sampling technology has worked its way through the processing chain in signal analyzers. It’s now possible to make measurements at millimeter frequencies with direct (baseband) sampling, though limited performance and high cost mean that most RF/microwave measurements will continue to be made by IF sampling and processing.

This is where competition comes in, as the semiconductor triple-play produced an alternative to traditional RF spectrum analyzers: the vector signal analyzer (VSA). Keysight—then part of HP—introduced these analyzers as a way to handle the demands of the time-varying and digitally modulated signals that were critical to rapid wireless growth in the 1990s.

A dozen years later, competitive forces and incredibly fast processing produced RF real-time spectrum analyzers (RTSAs) that calculated the scalar spectrum as fast as IF samples came in. Even the most elusive signals had no place to hide.

VSAs and RTSAs were originally separate types of analyzers, but continuing progress in semiconductors has allowed both to become options for signal-analyzer platforms such as Keysight’s X-Series.

This takes us back to my opening admonition that you should get all you’ve paid for in signal analyzer performance and functionality. Fast processing and digital IF technologies improve effective RF performance through features such as fast sweep, fast ACPR measurements, and noise power subtraction. These capabilities may already be in your signal analyzers if they’re part of the X-Series. If these features are absent, you can add them through license key upgrades.

The upgrade situation is the same with frequency ranges, VSA, RTSA, and related features such as signal capture/playback and advanced triggering (e.g., frequency-mask and time-qualified). The compounding benefits of semiconductor advances yield enhanced performance and functionality to meet wireless challenges, and those two new application briefs may give you some useful suggestions.

* “Cramming more components onto integrated circuits,” Electronics, Volume 38, Number 8, April 19, 1965

Posted in Aero/Def, EMI, History, Low frequency/baseband, Measurement techniques, Measurement theory, Microwave, Millimeter, Signal analysis, Wireless

The Power and Peril of Intuition

  This one really is rocket science

For engineers, intuition is uniquely powerful, but not infallible. It can come from physical analogies, mathematics, similar previous experiences, pattern recognition, the implications of fundamental natural laws, and many other sources.

I have a lot of respect for intuitive analysis and conclusions, but I’m especially interested in situations where intuition fails us, and in understanding why it failed. Understanding the specific error usually points to a better intuitive approach, and that helps me avoid erroneous conclusions in the future.

Previously, I wrote about one intuitive assumption and its flaws, and also about a case in which a simple intuitive approach was perfectly accurate. This time, I’d like to discuss another example related to power and energy, albeit one that has very little to do with RF measurements.

Let me set the stage with this diagram of an interplanetary spacecraft, making a close pass to a planet as a way to steal kinetic energy and use that boost to get where it’s going faster.

An interplanetary spacecraft is directed to pass near a planet, using the planet’s gravity field to change the spacecraft trajectory and accelerate it in the desired direction. Equivalent rocket burns at time intervals a and b produce different changes in the spacecraft’s speed and kinetic energy. (Mars photo courtesy of NASA)

One of the most powerful tools in the engineer’s kit is the law of conservation of energy. RF engineers may not use it as often as mechanical engineers or rocket scientists, but it’s an inherent part of many of our calculations. For reasons that escape me now, I was thinking about how we get spacecraft to other planets using a combination of rockets and gravity assist maneuvers, and I encountered a statement that initially played havoc with my concept of the conservation of energy.

Because most rocket engines burn with a constant thrust and have a fixed amount of fuel, I intuitively assumed that it wouldn’t matter when, in the gravity-assist sequence, the spacecraft burned its fuel. Thrust over time produces a fixed delta-V and that should be it… right?

Nobody has repealed the law of conservation of energy, but I was misapplying it. One clue is the simple equation for work or energy, which is force multiplied by distance. When a spacecraft is traveling faster—say finishing its descent into a planet’s gravity well before climbing back out—it will travel farther during the fixed-duration burn. Force multiplied by an increased distance produces an increase in kinetic energy and a higher spacecraft speed.*

My intuition protested: “How could this be? The math is unassailable, but the consequences don’t make sense. Where did the extra energy come from?”

One answer that satisfied, at least partially, is that burning at time b rather than time a in the diagram above gives the planet the chance to accelerate the spacecraft’s fuel before it’s burned off. The spacecraft has more kinetic energy at the start of the burn than it would have otherwise.

Another answer is that the law of conservation of energy applies to systems, and I had defined the system too narrowly. The planet, its gravity field, and its own kinetic energy must all be considered.
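
To make the arithmetic concrete, here’s a toy calculation showing how the same delta-V buys more kinetic energy at higher speed; the mass and speeds are arbitrary.

```python
# A toy illustration of the Oberth effect: an identical delta-V adds more kinetic
# energy when applied at higher speed, because KE grows with the square of velocity.
# No orbital mechanics here, just 0.5*m*v^2 with arbitrary numbers.

def kinetic_energy_j(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

mass, delta_v = 1000.0, 1000.0        # 1000 kg spacecraft, 1 km/s burn
for v in (5_000.0, 15_000.0):         # burn at cruise speed vs. deep in the gravity well
    gain = kinetic_energy_j(mass, v + delta_v) - kinetic_energy_j(mass, v)
    print(f"Burn starting at {v/1000:4.0f} km/s: kinetic-energy gain ~{gain/1e9:.1f} GJ")
# ~5.5 GJ vs. ~15.5 GJ from the very same burn
```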

Fortunately, intuition is as much about opportunity for extra insight as it is about the perils of misunderstanding. Lots of RF innovations have come directly from better, deeper intuitive approaches. In the wireless world, CDMA and this discussion of MIMO illustrate intuition-driven opportunities pretty well. Refining and validating your own intuition can’t help but make you a better engineer.

 

* This effect is named for Hermann Oberth, an early rocket pioneer with astonishing foresight.

Posted in Aero/Def, Off-topic (almost)

About

My name is Ben Zarlingo and I'm an applications specialist for Keysight Technologies.  I've been an electrical engineer working in test & measurement for several decades now, mostly in signal analysis.  For the past 20 years I've been involved primarily in wireless and other RF testing.

RF engineers know that making good measurements is a challenge, and I hope this blog will contribute something to our common efforts to find the best solutions.  I work at the interface between Keysight’s R&D engineers and those who make real-world measurements, so I encounter lots of the issues that RF engineers face. Fortunately I also encounter lots of information, equipment, and measurement techniques that improve accuracy, measurement speed, dynamic range, sensitivity, repeatability, etc.

In this blog I’ll share what I know and learn, and I invite you to do the same in the comments.  Together we’ll find ways to make better RF measurements no matter what “better” means to you.
