An Intuitive Look at Noise Figure

  An overview to complement your equations

After some recent conversations about noise figure measurements, I’ve been working to refresh my knowledge of what they mean and how they’re made. My goal was to get the essential concepts intuitively clear in my mind, in a way that would persist and therefore guide me as I looked at measurement issues that I’ll be writing about soon.

Maybe my summary will be helpful to you, too. As always, feel free to comment with any suggestions or corrections.

  • Noise figure is a two-port measurement, defined as the ratio of input signal-to-noise to output signal-to-noise—so it’s a ratio of ratios (written out just after this list). The input ratio may be explicitly measured or may be implied, such as assuming the signal is measured relative to the thermal noise of a perfect passive 50Ω termination.
  • The input/output ratio is called noise factor, and when expressed in dB it’s called noise figure. Noise figure is easier to understand in the context of typical RF measurements, and therefore more common.
  • It’s a measure of the extra noise contributed by a circuit, such as an amplifier, beyond that of an ideal element that would provide gain with no added noise. For example, an ideal amplifier with 10 dB of gain would have 10 dB more noise power at its output than its input, but would still have a perfect noise figure of 0 dB.
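
To keep the definitions handy, the ratio of ratios and its dB form can be written as

$$F = \frac{S_{in}/N_{in}}{S_{out}/N_{out}}, \qquad \mathrm{NF} = 10\log_{10}F$$

For the ideal 10 dB amplifier above, signal and noise both rise by 10 dB, the output signal/noise equals the input signal/noise, F = 1, and NF = 0 dB. A real amplifier whose output signal/noise is, say, 3 dB worse than at its input (an arbitrary figure, just to show the arithmetic) has a noise figure of 3 dB.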

It’s important to understand that noise figure measurements must accurately account for circuit gain because it directly affects measured output noise and therefore noise figure. Gain errors translate directly to noise figure errors.

The Y factor method is the most common way to make these measurements. A switchable, calibrated noise source is connected to the DUT input and a noise figure analyzer or signal analyzer is connected to the output. An external preamp may be added to optimize analyzer signal/noise and improve the measurement.

The central element of the noise source is a diode, driven to an avalanche condition to produce a known quantity of noise power. The diode is not a very good 50Ω impedance, so it is often followed by an attenuator to improve impedance match with the presumed 50Ω DUT.

The noise figure meter or signal analyzer switches the noise source on and off and compares the results, deriving both DUT gain and noise figure versus frequency. It’s a convenient way to make the measurements needed for noise figure, and specifications are readily available for both the noise source and the analyzer.
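
To make that derivation concrete, here is a minimal sketch of the Y-factor arithmetic with a second-stage (Friis) correction for the analyzer's own noise. The ENR and the four noise-power readings are made-up placeholder values, and a real measurement repeats this at every frequency point and adds temperature and mismatch corrections.

```python
import math

# Assumed inputs (hypothetical values, for illustration only)
enr_db = 15.0                              # calibrated excess noise ratio of the source, dB
n_on_dut, n_off_dut = 2.40e-6, 0.30e-6     # noise power through the DUT, source on/off (linear units)
n_on_cal, n_off_cal = 1.10e-7, 0.45e-7     # noise power into the analyzer alone (calibration step)

enr = 10 ** (enr_db / 10)

# Y factor: ratio of "hot" (source on) to "cold" (source off) noise power
y_dut = n_on_dut / n_off_dut
y_cal = n_on_cal / n_off_cal

# Noise factors of the DUT + analyzer cascade and of the analyzer by itself
f_cascade = enr / (y_dut - 1)
f_analyzer = enr / (y_cal - 1)

# DUT gain from the change in measured noise power between the two steps
gain = (n_on_dut - n_off_dut) / (n_on_cal - n_off_cal)

# Friis second-stage correction removes the analyzer's contribution
f_dut = f_cascade - (f_analyzer - 1) / gain

print(f"DUT gain:         {10 * math.log10(gain):.2f} dB")
print(f"DUT noise figure: {10 * math.log10(f_dut):.2f} dB")
```

Note how the gain term appears in the correction: an error in measured gain flows straight into the computed noise figure, which is the point made above.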

However, the impedance match between the noise source and the DUT affects the power that is actually delivered to the DUT and therefore the gain calculated by measuring its output. The impedance match is generally very good at low frequencies and with an attenuator in the noise source output. This enables accurate estimates of measurement uncertainty.

Unfortunately, as you approach millimeter frequencies, impedances are less ideal, gains are lower, and noise source output declines. Noise figure measurements are more challenging, and uncertainty is harder to estimate. In at least one upcoming post, I’ll discuss these problems and some practical solutions and measurement choices.

Why go to all the trouble? Whether or not it has mass, noise is a critical factor in many applications. By making individual or incremental noise figure measurements, you can identify and quantify noise contributors in your designs. This is the knowledge that will help you minimize noise, and optimize the cost and performance tradeoffs that are an important part of the value you add as an RF engineer.


Predicting the Technological Future vs. Adapting to It

  GPS and the skill, creativity and imagination of engineers

When it comes to predicting the future, I’m not sure if RF engineers are any better or worse than others—say economists or the general public. If you limit predictions to the field of electronics and communications, engineers have special insight, but they will still be subject to typical human biases and foibles.

However, when it comes to adapting to the future as it becomes their present, I’d argue that engineers show amazing skill. The problem solving and optimizing that come instinctively to engineers give them tremendous ability to take advantage of opportunities, both technical and otherwise. Some say skillful problem solving is the defining characteristic of an engineer.

GPS is a good example of both adaptation and problem-solving, and it’s on my mind because of historical and recent developments.

It was originally envisioned primarily as a navigation system, and the scientists and engineers involved did an impressive job of predicting a technological future that could be implemented on a practical basis. Development began in 1973, with the first satellite launch in 1978, so the current system that includes highly accurate but very inexpensive receivers demonstrates impressive foresight. Indeed, the achievable accuracy is so high in some implementations that it is much better than even the dimensions of the receive antennas, and special choke ring antennas are used to take advantage of it.

In some systems, GPS accuracy is better than the dimensions of the receive antenna, and in surveying you’ve probably seen precision radially symmetric antennas such as this ring type. Diagram from the US Patent and Trademark Office, patent #6040805

Over the years, GPS has increasingly been used to provide another essential parameter: time. As a matter of fact, the timing information from GPS may now be a more important element of our daily lives than navigation or location information. It’s especially important in keeping cellular systems synchronized, and it’s also used with some wireline networks, the electrical power grid, and even banking and financial trading operations.

As is so often the case, the dependencies and associated risks are exposed when something goes wrong. In January of this year, in the process of decommissioning one GPS satellite, the U.S. Air Force set the clocks wrong on about 15 others. The error was only 13 microseconds, but it caused about 12 hours of system problems and alarms for telecommunications companies. Local oscillators can provide a “holdover time” of about a day in these systems, so a 12-hour disturbance got everyone’s attention.

Outages such as this are a predictable part of our technological future, whether from human error, jamming, hardware failure, or a natural disaster such as the Carrington Event. The fundamental challenge is to find ways to adapt or, better yet, to do the engineering in advance to be able to respond without undue hardship or delay.

RF engineering obviously has a major role to play here, and at least two technologies are currently practical as alternates or supplements to GPS:

  • The proposed eLORAN system would replace several earlier LORAN systems that have been shut down in recent years. The required engineering is no barrier, but legislative support is another matter. In addition to serving as a GPS backup, eLORAN offers better signal penetration into buildings, land and water.
  • Compact, low-power atomic frequency references can offer independence from GPS, or may provide greatly extended holdover times. Their modest cost should allow wide adoption in communications systems.

As legendary computer scientist Alan Kay once said, “The best way to predict the future is to invent it.” If past is prologue, and I believe it is, I’m confident RF engineers will continue to be among the best at designing for the future, adapting to technology opportunities, and solving the problems that arise along the way.


Insidious Measurement Errors

  How to avoid fooling yourself

Some years ago I bought an old building lot for a house, and hired a surveyor because the original lot markers were all gone. It was a tough measurement task because nearby reference monuments had also gone missing since the lot was originally platted. Working from the markers in several adjacent plats, the surveyor placed new ones on my lot—but when he delayed the official recording of the survey I asked him why. His reply: he didn’t want to “drag other plat errors into the new survey.” Ultimately, it took three attempts before he was satisfied with the placement.

Land surveys are different from RF measurements, but some important principles apply to both. Errors sometimes stack up in unfortunate ways, and an understanding of insidious error mechanisms is essential if you want to avoid fooling yourself. This is especially true when you’re gathering more information to better understand measurement uncertainty.

Keysight engineers have the advantage of working in an environment that is rich in measurement hardware and expertise. They have access to multiple measurement tools for comparing different approaches, along with calibration and metrology resources. I thought I’d take a minute to discuss a few things they’ve learned and approaches they’ve taken that may help you avoid sneaky errors.

Make multiple measurements and compare. I’m sure you’re already doing this in some ways—it’s an instinctive practice for test engineers, and can give you an intuitive sense of consistency and measurement variability. Here’s an example of three VSWR measurements.

VSWR of three different signal analyzers in harmonic bands 1-4. With no input attenuation, mismatch is larger than it would otherwise be. The 95% band for VSWR is about 1.6 dB.

It’s always a good idea to keep connections short and simple, but it’s worth trying different DUT connections to ensure that a cable or connector—or even a specific bit of contamination—isn’t impairing many measurements in a consistent way that’s otherwise hard to spot. The same thing applies to calibration standards and adapters.

The multiple-measurements approach also applies when using different types of analyzer. Signal analyzers can approach the accuracy of RF/microwave power meters, and each can serve as a check on errors in the other.

Adjust with one set of equipment and verify with another. DUTs may be switched from one station to another, or elements such as power sensors may be exchanged periodically to spot problems. This can be done on a sample or audit basis to minimize cost impacts.

In estimating uncertainty, understand the difference between worst case and best estimate. As Joe Gorin noted in a comment on an earlier post: “The GUM, in an appendix, explains that the measurement uncertainty should be the best possible estimate, not a conservative estimate. When we know the standard deviation, we can make better estimates of the uncertainty than we can when we have only warranted specifications.” A more thorough understanding of the performance of the tools you have may be an inexpensive way to make measurements better.

Make sure the uncertainties you estimate are applicable to the measurements you make. Room temperature specifications generally apply from 20 to 30 °C, but the “chimney effect” within system racks and equipment stacks can make instruments much warmer than the ambient temperature.

Take extra care as frequencies increase. Mismatch can be the largest source of uncertainty in RF/microwave measurements, and it generally gets worse as frequencies increase. Minimizing it can be worth an investment in better cables, attenuators, adapters, and torque wrenches.
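
As a rough sense of scale, here is a minimal sketch of the classic worst-case mismatch error limits computed from two VSWR values; the 1.3:1 and 1.6:1 figures are hypothetical, not taken from the plot above.

```python
import math

def mismatch_limits_db(vswr_a: float, vswr_b: float) -> tuple[float, float]:
    """Worst-case mismatch error limits (dB) for two interfaces given their VSWRs."""
    gamma_a = (vswr_a - 1) / (vswr_a + 1)   # reflection coefficient magnitudes
    gamma_b = (vswr_b - 1) / (vswr_b + 1)
    return (20 * math.log10(1 - gamma_a * gamma_b),
            20 * math.log10(1 + gamma_a * gamma_b))

# Hypothetical example: a 1.3:1 source working into a 1.6:1 analyzer input
low, high = mismatch_limits_db(1.3, 1.6)
print(f"Mismatch error limits: {low:+.2f} dB / {high:+.2f} dB")
```

Improving either VSWR with a good attenuator, cable, or adapter shrinks the product of the reflection coefficients, which is where that investment pays off.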

This isn’t meant to suggest that you adopt an excessively paranoid outlook—but it’s safe to assume the subtle errors really are doing their best to hide from you while they subvert your efforts. Said another way, it’s always best to be alert and diverse in your approaches.


Wireless Data Rates and a Failure of Imagination

  Is there an unlimited need for speed, or are we perfecting buggy whips?

Contrary to some stereotypes, engineers—especially the most effective ones—are both intuitive and creative. The benefits and tradeoffs of intuition are significant, and that’s why I’ve written here about its limits and ways it might be enhanced. As for creativity, I don’t think I understand it very well at all, and would welcome any perspective that will improve my own.

Recently, on the same day, I ran across two articles that, together, made me think about the role of imagination in engineering. It is clearly another vital talent to develop, and maybe I can learn a little about this element of creativity in the bargain.

The first was Lou Frenzel’s piece in Electronic Design, wondering about the practical limits of data bandwidth, along with its potential uses. The other was the announcement by Facebook and Microsoft of an upcoming 160 Tbit/s transatlantic cable link called MAREA that will reach from Spain to Virginia. I had to blink and read the figure again: the units really are terabits per second.

That figure made me dredge up past skepticism about an announcement, some years back, of 40 Gbit optical links. I remember wondering what applications—even in aggregate—could possibly consume such vast capacity, especially because many such links were in the works. I also wondered just how much higher things could go, scratching my head in the same way Lou did. Now I find myself reading about a cable that will carry 4,000 times more information than the 40 Gbit one, and concluding that I was suffering from a failure of imagination.

Imagination is anything but a mystical concept, and has real technical and business consequences. One of the most famous examples is the 1967 fire in the Apollo 1 spacecraft, where the unforeseen effects of a high-pressure oxygen environment during a ground test turned a small fire into a catastrophe. In congressional hearings about the accident, astronaut Frank Borman—by many accounts the most blunt and plain-spoken engineer around—spoke of the fatal fire’s ultimate cause as a failure of imagination.

Frank Borman, right, the astronaut representative on the Apollo 1 review board, reframed the cause of the Apollo 1 tragedy as essentially a failure of imagination. One clear risk was eliminated because the rocket wasn’t fueled, and perhaps that blinded NASA to the certainly fatal consequences of fire plus high-pressure oxygen in an enclosed space. (Images from Wikimedia Commons)

In RF engineering, the hazards we face are much more benign, but are still significant for our work and our careers. We may miss a better way to solve a problem, fail to anticipate a competitor’s move, or underestimate the opportunity for—or the demands on—a new application.

That’s certainly what happened when I tried to imagine how large numbers of 40 Gbit links could ever be occupied. I thought about exchanging big data files, transmitting lots of live video—even the HD that was then on its way—and a considerable expansion of mobile services. However, I completely failed to imagine things like video streaming from smartphones en masse, ubiquitous cloud computing, an Internet of umpteen things, and virtual reality (VR).

VR stands out as an example of multi-faceted innovation that was hardly a blip on the horizon a few years ago—but it now promises to be a huge consumer of both downlink and uplink bandwidth. It demands fast, high-resolution video and typically low latency, and the benefits are compelling. As one example, it converts 3D video from a passively consumed product to an immersive experience with lots of instructional and entertainment applications. Just one among many future bandwidth drivers, I’m sure.

It’s been observed that data bandwidth for wireless links is often about one generation behind that of common wired ones. Both have been growing rapidly, and though I can’t imagine exactly what the demand drivers will be, I agree with Lou that we won’t reach limits soon, and there will continue to be plenty of interesting challenges.

For RF engineers it will mean solving the countless problems that stand between lofty wireless (and wired) standards and the practical, affordable products that will make them reality.

Note: This post has been edited to correct a bits/bytes error.


RTSA: How “Real” is Real Enough?

  Improving your chances of finding the signals you want

Real-time spectrum analyzers (RTSAs) are very useful tools when you’re working with time-varying or agile signals and dynamic signal environments. That’s true of lots of RF engineering tasks these days.

A good way to define real time is that every digital sample of the signal is used in calculating spectrum results. The practical implication is that you don’t miss any signals or behavior, no matter how brief or infrequent. In other words, the probability of intercept (POI) is 100 percent or nearly so.

Discussions of real-time analysis and the tracking of elusive signals are often all-or-nothing, implying that RTSAs are the only effective way to find and measure elusive signals. In many cases, however, the problems we face aren’t so clear-cut. Duty cycles may be low, or the signal behavior in question very infrequent and inconsistent, but the phenomenon to be measured still occurs perhaps once per second or more often. You need a POI much greater than the fraction of a percent that’s typical of wideband swept spectrum measurements, but you may not need the full 100 percent. In this post I’d like to talk about a couple of alternatives that will make use of tools that may already be on your bench.

A good example is the infamous 2.4 GHz ISM band, home to WLANs, Bluetooth, cordless phones, barbecue thermometers, microwave ovens, and any odd thing you IoT engineers may dream up. Using the 89600 VSA software, I made two measurements of this 100 MHz band, changing only the number of frequency points calculated. That setting affected the RBW and time-record length, as you can see here.

Two spectrum measurements of the 2.4 GHz ISM band, made with the 89600 VSA software. The upper trace is the default 800-point result, while the lower trace represents 102,400 points. This represents a 128x longer time record, long enough to include a Bluetooth hop in addition to the wider WLAN burst.

The 102,400-point measurement has several advantages for a measurement such as this. First, it truly is a gap-free measurement: For the duration of the longer time record, it is a real-time measurement. Next, it contains more information and is much more likely to catch signals with a low duty cycle. It has a narrower RBW, making it easier to separate signals in the band, and revealing more of the structure of each signal. When viewed in the time domain, it can show much more of the pulse and burst signal behaviors in the band.
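
Here is a minimal back-of-the-envelope sketch of how the point count sets the time-record length and RBW for a given span. The window factor is an assumption (it depends on the FFT window the analyzer applies), so treat the RBW numbers as approximate.

```python
# Assumed values: the 100 MHz span and the two point counts mentioned above.
span_hz = 100e6
window_rbw_factor = 1.5            # assumed window-dependent factor; varies with window choice

for n_freq_points in (800, 102_400):
    delta_f = span_hz / n_freq_points        # FFT bin spacing
    time_record_s = 1.0 / delta_f            # length of one gap-free time record
    rbw_hz = window_rbw_factor * delta_f     # approximate resolution bandwidth
    print(f"{n_freq_points:>7} points: time record ≈ {time_record_s * 1e6:7.1f} µs, "
          f"RBW ≈ {rbw_hz / 1e3:6.2f} kHz")
```

The 102,400-point case works out to a record of roughly a millisecond, which is why it is long enough to span a Bluetooth hop in the capture shown above.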

Another advantage of the larger/longer 100K-point measurement is not as obvious. The total calculation and display time does not increase nearly as rapidly as the number of points, making the larger FFT more efficient and increasing the POI. In my specific example, the overall compute and display speed is almost 20 times faster per point, with a corresponding increase in POI. It’s that much more likely that elusive signals will be found—or noticed—even without an RTSA.

For the RF engineer, however, this flood of results can be hard to use effectively. It’s difficult to relate the many successive traces to signal behavior or band activity as they fly by. The key to a solution is to add another dimension to the display, typically representing when or how often amplitude and frequency values occurred. Here are two displays of a 40 MHz portion of that ISM band.

Many measurement results can be combined in a single trace to help understand the behavior of a signal or the activity in a frequency band. The top trace shows how often specific amplitude and frequency values occurred over many measurements. The bottom trace uses color to show how recently the values occurred, producing a persistence display.

These traces make it easier to intuitively interpret dynamic behavior over time and understand the frequency vs. recency of that behavior. Thus, the combination of large FFT size and cumulative color displays may provide the dramatic improvement in POI that you need to find a problem. For precise measurements of elusive signals and dynamic behavior, the 89600 VSA offers other features, including time capture/playback (another variation on real-time measurements over a finite period) and spectrograms created from captured signals.

As professional problem solvers, we can figure out when a finite-duration, gap-free measurement is sufficient and when the continuous capability of an RTSA is the turbo-charged tool we need. In either case, it’s all about harnessing the right amounts of processing power and display capability for the task at hand.


Phase Noise and Distortion Measurements

  Understanding how phase noise limits dynamic range and what to do about it

A talented RF engineer and friend of mine is known for saying “Life is a microcosm of phase noise.” He’s an expert in designing low-noise oscillators and measuring phase noise, so I suppose this conceptual inversion is a natural way for him to look at life. He taught me a lot about phase noise and, although I never matched his near-mystical perspective on it, I have been led to wonder if noise has mass.

A distinctly non-mystical aspect of phase noise is its effect on optimizing distortion measurements, and I recently ran across an explanation worth sharing.

A critical element of engineering is developing an understanding of how phenomena interact in the real world, and making the best of them. For example, to analyze signal distortion with the best dynamic range you need to understand the relationships between second- and third-order dynamic range and noise in a spectrum analyzer. These curves illustrate how the phenomena relate:

The interaction of mixer level with noise and second- and third-order distortion determine the best dynamic range in a signal analyzer. The mixer level is set by changing the analyzer’s input attenuator.

From classics such as Application Note 150, you probably already know the drill here: In relative terms, analyzer noise floor and second-order distortion change 1 dB—albeit in opposite directions—for every 1 dB change in attenuation or mixer level, and third-order distortion increases 2 dB for each 1 dB increase in mixer level. Therefore, the best attenuation setting for distortion depends on how these phenomena interact, especially where the curves intersect.
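
A minimal sketch of that bookkeeping, using placeholder DANL, TOI, and SHI numbers rather than the specifications of any particular analyzer:

```python
# Placeholder analyzer figures (not the specifications of any particular model)
danl_dbm = -150.0     # displayed average noise level in the chosen RBW, dBm
toi_dbm = 20.0        # third-order intercept at the mixer, dBm
shi_dbm = 45.0        # second-harmonic intercept, dBm

# Per the classic curves, relative to the signal at the mixer: noise improves 1 dB
# per dB of mixer level, second-order distortion worsens 1 dB per dB, and
# third-order distortion worsens 2 dB per dB. Setting noise equal to distortion
# gives the intersection (optimum) mixer level and the best-case dynamic range.
mixer_opt_3rd = (2 * toi_dbm + danl_dbm) / 3          # where DANL - M = 2*(M - TOI)
range_3rd = mixer_opt_3rd - danl_dbm                  # = (2/3)*(TOI - DANL)

mixer_opt_2nd = (shi_dbm + danl_dbm) / 2              # where DANL - M = M - SHI
range_2nd = mixer_opt_2nd - danl_dbm                  # = (SHI - DANL)/2

print(f"3rd-order: optimum mixer level {mixer_opt_3rd:.1f} dBm, range ≈ {range_3rd:.1f} dB")
print(f"2nd-order: optimum mixer level {mixer_opt_2nd:.1f} dBm, range ≈ {range_2nd:.1f} dB")
```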

The optimum attenuator setting does not precisely match the intersections, though it is very close. The actual dynamic range at that setting is also very close to optimum, though it is about 3 dB worse than the intersection minimum suggests, due to the addition of the noise and distortion.

That’s where your own knowledge and insight come in. The attenuation for the best second-order dynamic range is different from that for the best third-order dynamic range, and the choice depends on your signals and the frequency range you want to measure. Will analyzer-generated second-order or third-order distortion be the limiting factor?

Of course, you can shift the intersections to better locations if you reduce RBW to lower the analyzer noise floor, but that can make sweeps painfully slow.

Fortunately, because you’re the kind of clever engineer who reads this blog, you know about technologies such as noise power subtraction and fast sweep that reduce noise or increase sweep speed without the need to make other tradeoffs.

Another factor may need to be considered if measuring third-order products, one that is often overlooked: analyzer phase noise.

In this two-tone intermod example with a 10 kHz tone spacing, the analyzer’s phase noise at that same 10 kHz offset limits distortion measurement performance to -80 dBc. Without this phase noise the dynamic range would be about 88 dB.

I suppose it’s easiest to think of the analyzer’s phase noise as contributing to its noise floor in an amount corresponding to its phase noise at the tone offset you’re using. Narrower offsets will be more challenging and, as usual, better phase noise will yield better measurements.
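
A minimal sketch of that way of thinking, with assumed numbers chosen only to be consistent with the -80 dBc example above (they are not the measured values behind that plot):

```python
import math

phase_noise_dbc_per_hz = -110.0   # assumed SSB phase noise at the 10 kHz tone offset, dBc/Hz
rbw_hz = 1_000.0                  # assumed resolution bandwidth, Hz

# Treating the phase noise as roughly flat across the RBW, the pedestal it adds
# under the distortion products is approximately:
pedestal_dbc = phase_noise_dbc_per_hz + 10 * math.log10(rbw_hz)
print(f"Phase-noise floor ≈ {pedestal_dbc:.0f} dBc")   # about -80 dBc with these assumptions
```

Narrower tone spacing moves you up the phase noise curve and a wider RBW integrates more of it, so both show up as knobs in this tradeoff.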

That’s where clever engineering comes in again. Analyzer designers are always working to improve phase noise, and the latest approach is a major change to the architecture of the analyzer’s local oscillator (LO): the direct digital synthesizer LO. This technology is now available in two of Keysight’s high-performance signal analyzers and will improve a variety of measurements.

The focus of this post has been on two-tone measurements but, of course, many digitally modulated signals can be modeled as large numbers of closely spaced tones. Phase noise continues to matter, even if the equivalent distortion measurements are ACP/ACPR instead of IMD.

Once again, noise is intruding on our measurement plans—or maybe it’s just lurking nearby.

Perhaps this post only proves that my perceptions of phase noise still don’t reach into the mystical realm. Here’s hoping your adventures in phase noise will help you achieve second- and third-order insights.


Competition and the Semiconductor Triple-Play

  Get all the signal analyzer performance you pay for

After perusing a pair of new application briefs, I was impressed by the improvements in signal analyzers driven by the combination of evolving semiconductor technology and plain old competition. I’ll write a little about the history here, but will also highlight a few of the benefits and encourage you to take advantage of them as much as possible. They’ll be welcome help with the complex signals and stringent standards you deal with every day.

Intel’s Gordon Moore and David House are credited with a 1960s prediction that has come to be known as Moore’s law. With uncanny accuracy, it continues to forecast the accelerating performance of computers, and this means a lot to RF engineers, too. Dr. Moore explicitly considered the prospects for analog semiconductors in his 1965 paper, writing that “Integration will not change linear systems as radically as digital systems.” *

Note the “as radically” qualifier. Here’s my mental model for the relative change rates.

Comparing the rate of change in performance over time of the semiconductor-based elements of signal analyzers. Processors have improved the fastest, though analog circuit performance has improved dramatically as well.

In analyzer architecture and performance, it seems sensible to separate components and systems into those that are mainly digital, mainly analog, or depend on both for improved performance.

It’s no surprise that improvements in each of these areas reinforce each other. Indeed, the performance of today’s signal analyzers is possible only because of substantial, coordinated improvement throughout the block diagram.

Digital technology has worked its way through the signal analyzer processing chain, beginning with the analog signal representing the detected power in the IF. Instead of being fed to the Y-axis of the display to drive a storage CRT, the signal was digitized at a low rate to be sent to memory and then on to a screen.

The next step—probably the most consequential for RF engineers—was to sample the downconverted IF signal directly. With enough sampling speed and fidelity, complex I/Q (vector) sampling could represent the complete IF signal, opening the door to vector signal analysis.
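
As a toy illustration of what complex (I/Q) sampling buys you, here is a minimal sketch that mixes a sampled IF tone against a complex LO and low-pass filters the result; the sample rate, IF, and filter are arbitrary placeholders, not a description of any analyzer's actual digital IF.

```python
import numpy as np

fs = 100e6                    # assumed sample rate, Hz
f_if = 21.4e6                 # assumed IF center frequency, Hz
t = np.arange(4096) / fs

if_signal = np.cos(2 * np.pi * (f_if + 1e6) * t)   # a tone 1 MHz above the IF center

lo = np.exp(-2j * np.pi * f_if * t)                # complex LO at the IF center
mixed = if_signal * lo                             # product centered near 0 Hz, image near 2*f_if

# Crude moving-average low-pass filter to suppress the image; a real digital IF
# uses properly designed decimating filters.
kernel = np.ones(16) / 16
iq = (np.convolve(mixed.real, kernel, mode="same")
      + 1j * np.convolve(mixed.imag, kernel, mode="same"))
# iq now holds the complex envelope: both magnitude and phase of the IF signal,
# which is what vector signal analysis operates on.
```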

Sampling technology has worked its way through the processing chain in signal analyzers. It’s now possible to make measurements at millimeter frequencies with direct (baseband) sampling, though limited performance and high cost mean that most RF/microwave measurements will continue to be made by IF sampling and processing.

This is where competition comes in, as the semiconductor triple-play produced an alternative to traditional RF spectrum analyzers: the vector signal analyzer (VSA). Keysight—then part of HP—introduced these analyzers as a way to handle the demands of the time-varying and digitally modulated signals that were critical to rapid wireless growth in the 1990s.

A dozen years later, competitive forces and incredibly fast processing produced RF real-time spectrum analyzers (RTSAs) that calculated the scalar spectrum as fast as IF samples came in. Even the most elusive signals had no place to hide.

VSAs and RTSAs were originally separate types of analyzers, but continuing progress in semiconductors has allowed both to become options for signal-analyzer platforms such as Keysight’s X-Series.

This takes us back to my opening admonition that you should get all you’ve paid for in signal analyzer performance and functionality. Fast processing and digital IF technologies improve effective RF performance through features such as fast sweep, fast ACPR measurements, and noise power subtraction. These capabilities may already be in your signal analyzers if they’re part of the X-Series. If these features are absent, you can add them through license key upgrades.

The upgrade situation is the same with frequency ranges, VSA, RTSA, and related features such as signal capture/playback and advanced triggering (e.g., frequency-mask and time-qualified). The compounding benefits of semiconductor advances yield enhanced performance and functionality to meet wireless challenges, and those two new application briefs may give you some useful suggestions.

* “Cramming more components onto integrated circuits,” Electronics, Volume 38, Number 8, April 19, 1965


The Power and Peril of Intuition

  This one really is rocket science

For engineers, intuition is uniquely powerful, but not infallible. It can come from physical analogies, mathematics, similar previous experiences, pattern recognition, the implications of fundamental natural laws, and many other sources.

I have a lot of respect for intuitive analysis and conclusions, but I’m especially interested in situations where it fails us, and especially in understanding why it failed. Because I like to avoid erroneous conclusions, I often find that understanding the specific error points to a better intuitive approach.

Previously, I wrote about one intuitive assumption and its flaws, and also about a case in which a simple intuitive approach was perfectly accurate. This time, I’d like to discuss another example related to power and energy, albeit one that has very little to do with RF measurements.

Let me set the stage with this diagram of an interplanetary spacecraft, making a close pass to a planet as a way to steal kinetic energy and use that boost to get where it’s going faster.

An interplanetary spacecraft is directed to pass near a planet, using the planet’s gravity field to change the spacecraft trajectory and accelerate it in the desired direction. Equivalent rocket burns at time intervals a and b produce different changes in the spacecraft’s speed and kinetic energy. (Mars photo courtesy of NASA)

One of the most powerful tools in the engineer’s kit is the law of conservation of energy. RF engineers may not use it as often as mechanical engineers or rocket scientists, but it’s an inherent part of many of our calculations. For reasons that escape me now, I was thinking about how we get spacecraft to other planets using a combination of rockets and gravity assist maneuvers, and I encountered a statement that initially played havoc with my concept of the conservation of energy.

Because most rocket engines burn with a constant thrust and have a fixed amount of fuel, I intuitively assumed that it wouldn’t matter when, in the gravity-assist sequence, the spacecraft burned its fuel. Thrust over time produces a fixed delta-V and that should be it… right?

Nobody has repealed the law of conservation of energy, but I was misapplying it. One clue is the simple equation for work or energy, which is force multiplied by distance. When a spacecraft is traveling faster—say finishing its descent into a planet’s gravity well before climbing back out—it will travel farther during the fixed-duration burn. Force multiplied by an increased distance produces an increase in kinetic energy and a higher spacecraft speed.*
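
To see the effect in numbers, here is a minimal sketch with made-up values (and, for simplicity, ignoring the fuel mass expelled during the burn): the same delta-V adds far more kinetic energy when it is applied at higher speed.

```python
m = 1000.0      # spacecraft mass, kg (placeholder)
dv = 1000.0     # delta-V delivered by the burn, m/s (placeholder)

for v in (3_000.0, 30_000.0):                      # burn while coasting vs. deep in the gravity well
    delta_ke = 0.5 * m * ((v + dv) ** 2 - v ** 2)  # change in kinetic energy, joules
    print(f"Burn at {v / 1000:4.0f} km/s: delta-KE = {delta_ke / 1e9:5.1f} GJ")
```

Same thrust, same burn time, same delta-V, but nearly ten times the energy gain in the second case—the force-times-distance argument in compact form.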

My intuition protested: “How could this be? The math is unassailable, but the consequences don’t make sense. Where did the extra energy come from?”

One answer that satisfied, at least partially, is that burning at time b rather than time a in the diagram above gives the planet the chance to accelerate the spacecraft’s fuel before it’s burned off. The spacecraft has more kinetic energy at the start of the burn than it would have otherwise.

Another answer is that the law of conservation of energy applies to systems, and I had defined the system too narrowly. The planet, its gravity field, and its own kinetic energy must all be considered.

Fortunately, intuition is as much about opportunity for extra insight as it is about the perils of misunderstanding. Lots of RF innovations have come directly from better, deeper intuitive approaches. In the wireless world, CDMA and this discussion of MIMO illustrate intuition-driven opportunities pretty well. Refining and validating your own intuition can’t help but make you a better engineer.

 

* This effect is named for Hermann Oberth, an early rocket pioneer with astonishing foresight.


Signal Capture and Playback: A DVR for RF/Microwave Engineers

  “OK, Jamie, let’s go to the high-speed”

There are times when understanding an event or phenomenon—or simply finding a problem—demands a view using a different time scale. I’m a fan of the Mythbusters TV series, and I can’t count the number of times when the critical element in understanding the myth was a review of high-speed camera footage. I’m sure the priority was mostly on getting exciting images for good TV, but high-speed footage was often the factor that really explained what was going on.

Another common element of those Mythbusters experiments was their frequent one-shot nature, and high-speed cameras were critical for this as well. Single events were trapped or captured so that they could be examined over and over, from different angles and at different speeds.

Time capture, also called signal capture, came to the general RF analyzer world in the early 1990s with the introduction of vector signal analyzers (VSAs), whose block diagram was a natural fit for the capability. While it was primarily a matter of adding fast memory and a user interface for playback or post-processing, significant innovation went into implementing a practical magnitude trigger and achieving trigger-timing alignment.
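
Conceptually, a magnitude trigger just watches the envelope of the sampled I/Q stream. Here is a minimal sketch of the idea; the function name and parameters are hypothetical, and real implementations add hysteresis, interpolation, and the careful trigger-timing alignment mentioned above.

```python
import numpy as np

def magnitude_trigger(iq: np.ndarray, threshold: float, pre_samples: int, length: int):
    """Return the first capture segment whose envelope crosses the threshold, or None."""
    envelope = np.abs(iq)                             # magnitude of the complex samples
    crossings = np.flatnonzero(envelope >= threshold)
    if crossings.size == 0:
        return None
    start = max(int(crossings[0]) - pre_samples, 0)   # keep some pre-trigger context
    return iq[start:start + length]
```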

The block diagram of a VSA or a signal analyzer with a digital IF section is a good foundation for time capture, and it required the expansion of just two blocks. Capture/playback is especially useful for the time-varying signals that VSAs were designed to handle.

Over the years, I’ve used time capture for many different measurements and think it’s really under-used as a tool for RF/microwave applications in wireless, aerospace/defense, and EMI. It’s an excellent way to leverage the knowledge of the RF engineer, and it’s easy to use: first select the desired frequency and span and then press the record button.

The insight-creating and problem-solving magic comes during playback or post-processing. Captures are gap-free, and playback speed in VSA software can be adjusted over a huge range. Just press the play button and explore wherever your insight leads. You can see the time, frequency, and modulation domains at the same time, with any number of different measurements and trace types. You can easily navigate large capture buffers with numeric and graphical controls, and even mark a specific section to replay or loop continuously so you can see everything that happened.

A simple capture/playback of a transmitter switching on shows a transient amplitude event in the bottom trace. The top two traces use variable persistence to show the signal spectrum and RF envelope as playback proceeds in small steps.

Today’s signals are highly dynamic, the RF spectrum is crowded, and design requirements are stringent. You often need to optimize and troubleshoot in all three domains—time, frequency, and modulation—at once. You have the skill and the knowledge, but you need a total view of the signal or system behavior. In my experience, there’s nothing to match the confidence and insight that follow from seeing everything that happened during a particular time and frequency interval.

I’ll write about some specific measurement examples and techniques in posts to come. In the meantime, feel free to try out time capture on your own signals. The 89600 VSA software includes a free trial mode that works with all of the Keysight X-Series signal analyzers and many other Keysight instruments, too. Just press that red record button and then press play. It’ll make you feel like an RF Mythbuster, too.


RF Engineers to the Rescue—in Space

  Here we come to save the day!

“Space” said author Douglas Adams, “is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist, but that’s just peanuts to space.”*

Low Earth orbit (LEO) is a significant chunk of space, but its bigness wasn’t enough to save the working satellite Iridium 33 from a 2009 collision with Russia’s defunct Kosmos 2251 communications satellite. The impact at an altitude of about 500 miles was the first time two satellites had collided at hypervelocity, and the results were not pretty.

Yellow dots represent estimated debris from the 2009 collision of two LEO satellites. This graphic from Wikimedia Commons shows the debris location about 50 minutes after the collision. The structure of the debris field is an excellent example of conservation of momentum.

The danger of collisions such as this is highest in LEO, which spans altitudes of 100 – 1200 miles. The danger is a function of the volume of that space, the large number of objects there, and their widely varying orbits. The intersections of these orbital paths provide countless opportunities for destructive high-velocity collisions.

It’s estimated that the 2009 collision alone produced 1000 pieces of debris four inches or larger in size, and countless smaller fragments. Because objects as small as a pea can disable a satellite, and because larger ones can turn a satellite into another cloud of impactors, the danger to vital resources in LEO is clear.

This chain reaction hazard or “debris cascade” was detailed as far back as 1978 by NASA’s Donald Kessler in a paper that led to the scary term Kessler syndrome.

The concept is scary because there’s no simple way to avoid the problem. What’s worse, our existing tools aren’t fully up to the task of identifying objects and accurately predicting collisions. The earlier 1961-vintage ground-based VHF radar system could track only those objects bigger than a large beach ball, and accuracy was not sufficient to allow the Iridium satellite to move out of danger.

Cue the RF/microwave engineering cavalry: With their skill and the aid of signal analyzers, signal generators, network analyzers, and the rest of the gear we’re so fond of, they have created a new space fence. Operating in the S-band, this large-scale phased-array radar will have a wide field of view and the ability to track hundreds of thousands of objects as small as a marble with the accuracy required to predict collisions.

Alas, predicting collisions is most of what we can do to avoid a Kessler catastrophe. Though the company designing and building the fence mentions “cleaning up the stratosphere,” it’s Mother Nature and the very faint traces of atmosphere in LEO that will do most of the job. Depending on altitude, mass, and cross-section, the cleaning process can take decades or longer.

In the meantime, we’ll have to make the most of our new tools, avoid creating new debris, and perhaps de-orbit a few big potential offenders such as Envisat.

There may be another opportunity for the engineering cavalry to save the day. There are proposals for powerful lasers, aimed with unbelievable precision, to blast one side of orbiting debris, creating a pulse of vapor that will steer objects toward atmospheric destruction and render them mostly harmless.*  I’m looking forward to the RF/microwave designs for tracking and aiming that will make that possible.

 

* From the book The Hitchhiker’s Guide to the Galaxy


About

My name is Ben Zarlingo and I'm an applications specialist for Keysight Technologies.  I've been an electrical engineer working in test & measurement for several decades now, mostly in signal analysis.  For the past 20 years I've been involved primarily in wireless and other RF testing.

RF engineers know that making good measurements is a challenge, and I hope this blog will contribute something to our common efforts to find the best solutions.  I work at the interface between Keysight’s R&D engineers and those who make real-world measurements, so I encounter lots of the issues that RF engineers face. Fortunately I also encounter lots of information, equipment, and measurement techniques that improve accuracy, measurement speed, dynamic range, sensitivity, repeatability, etc.

In this blog I’ll share what I know and learn, and I invite you to do the same in the comments.  Together we’ll find ways to make better RF measurements no matter what “better” means to you.
