Vexing Connections

   RF sins that may not be your fault

Many years ago, I knew an engineer who used to chuckle when he used the term “compromising emanations” to describe unintentional RF output. Today, most of us use the less colorful term “interference” to refer to signals that appear where they should not, or are more powerful than regulations allow. We’re likely more concerned about coexisting in the RF spectrum or violating a standard than we are about revealing something.

Wireless systems seem infinitely capable of generating unintended signals, and one of the more interesting is the rusty bolt effect.

A rusty bolt can form a metal-to-metal oxide connection that is rectifying rather than simply resistive. (Image from Wikimedia Commons)


I recently ran across a discussion of this when looking into the causes and consequences of imperfect connections in RF systems. Though I’ve previously written about connections of various kinds, including coaxial connectors, cables, adapters and waveguide, I’ve focused more on geometry and impedance than metal-to-metal contact.

Dealing with the wrong impedance is one thing, but for some time I’ve wanted to better understand why so many bad electrical contacts tend to be rectifying rather than Ohmic. Not surprisingly, it involves semiconductors. Accidental semiconductors, but semiconductors nonetheless.

Some oxides are conductive and some are insulating, but a number of common metal oxides are semiconductors. Oxidation or other corrosion—say from skin oils—makes it easy to produce a metal-to-semiconductor contact and a resulting nonlinearity.

Voltage/current curves for Ohmic and rectifying contacts. The nonlinear curve of a rectifying contact is essentially that of a diode.


Nonlinear connections are problematic in wireless, primarily because of the RF distortion products they produce. In the simple sinewave case, they create energy at harmonic frequencies, and when multiple signals are present they produce intermodulation distortion. The intermodulation distortion is particularly troublesome because it can appear in-band or in nearby bands, and at many frequencies at once.
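To make that mechanism concrete, here’s a small Python sketch of my own (an illustration, not a measurement): two closely spaced tones pass through a weakly nonlinear “contact” modeled as a low-order polynomial, and third-order products appear at 2f1−f2 and 2f2−f1, right beside the original carriers. The frequencies and coefficients are arbitrary assumptions.

```python
# Illustrative sketch: two tones through a diode-like (weakly nonlinear) contact.
# The polynomial model and all values are assumptions for demonstration only.
import numpy as np

fs = 1e6                                  # sample rate, Hz
t = np.arange(0, 0.02, 1/fs)              # 20 ms record
f1, f2 = 100e3, 110e3                     # two closely spaced carriers
x = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)

# Linear term plus small square-law and cubic terms (the "rusty bolt")
y = x + 0.05*x**2 + 0.01*x**3

freqs = np.fft.rfftfreq(len(y), 1/fs)
spectrum = 20*np.log10(np.abs(np.fft.rfft(y*np.hanning(len(y)))) + 1e-12)

# Third-order intermod products land right next to the carriers
for f in (2*f1 - f2, 2*f2 - f1):          # 90 kHz and 120 kHz
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f/1e3:.0f} kHz: {spectrum[idx]:.1f} dB (relative)")
```

The square-law term is what produces harmonics and second-order mixing; the cubic term is what puts energy at 2f1−f2 and 2f2−f1, which is why third-order intermodulation is the classic in-band troublemaker.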

Modern multi-channel systems, including base stations and carrier-aggregation schemes, produce many simultaneous signals that “exercise” these nonlinearities and generate distortion products. The distortion may be described as passive intermodulation (PIM) because it’s generated without any powered elements. The rusty bolt example involves high RF currents flowing through imperfect connections in antennas or nearby metal structures, though wireless systems offer many other opportunities for nonlinear mischief.

One of the most maddening characteristics of this phenomenon is its elusive nature. Outdoor antennas are subject to strain from wind and temperature changes as well as weathering from salt air or acid rain. Nonlinearities can appear and disappear, seemingly at random. Even indoor wireless transmitters have to contend with mechanical stresses, changing humidity and temperature, and contamination of all kinds.

In many cases, astute mechanical design and mitigation of oxidation or contamination will help eliminate nonlinear connections. Because Ohmic metal-to-semiconductor connections are essential to their products, semiconductor manufacturers are a good source of information and techniques.

At some point, of course, you need to make spectrum measurements to find intermodulation problems or verify that emissions are within limits. Signal analyzers do the job well, and many measurement applications are available for popular standards to automate setup, perform measurements, and provide pass/fail results. They’re the most efficient way to ensure you avoid sins that you’d rather not be blamed for.

 


All Our RF Sins Exposed

  Trespassing is harder to miss in a densely occupied country

The 802.11ah wireless standard mentioned in my last post is promising, but it highlights a challenge that’s facing many engineers in the wireless space: out-of-band or out-of-channel emissions.

In an article from Electronic Products via Digi-Key’s article library, Jack Shandle writes: “Two significant design issues in the 915-MHz band are: 1) The third, fourth, and fifth harmonics all fall in restricted bands, which imposes some design constraints on output filtering. 2) Although it is unlicensed in North America, Australia and South Korea, the band is more strictly regulated in other parts of the world.”

Of course, the higher allowed transmit power and improved propagation of the 915 MHz band—compared to the 2.4 GHz band—add to the potential for interference. But these days, your harmonic and spurious emissions don’t have to fall in restricted bands to be a concern. Compared to previous eras, the modern wireless spectrum is so crowded that excess emissions are far more likely to cause someone a problem and be noticed. Wireless standards are correspondingly stringent.
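As a quick sanity check on Shandle’s point (my own arithmetic, not from his article), here’s where the low-order harmonics of a 902–928 MHz transmitter land; whether a given range falls in a restricted band depends on the regulatory table that applies, which I won’t reproduce here.

```python
# Simple arithmetic: harmonic ranges for the 902-928 MHz ISM band.
band = (902.0, 928.0)                     # MHz
labels = {2: "2nd", 3: "3rd", 4: "4th", 5: "5th"}
for n in range(2, 6):
    print(f"{labels[n]} harmonic: {n*band[0]:.0f}-{n*band[1]:.0f} MHz")
```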

For RF engineers, the interference challenges exist in both the frequency and time domains, and this can make problems harder to find and diagnose. The time-domain concerns are not new, affecting any TDMA scheme—including the one used in my 20-plus-year-old marine VHF handheld. Using Keysight vector signal analyzers, I long ago discovered that the little radio walked all over a dozen channels in the first 250 ms after each press of the transmit key. A newer handheld was actually worse in terms of spectrum behavior, but settled down more quickly.

Back then, that behavior was neither noticed nor troublesome, and I don’t suppose anyone would complain even today. However, that quaint FM radio is nothing like the vast number of sophisticated wireless devices that crowd the bands today. Even a single smartphone uses multiple radios and multiple bands, and interference is something that must be discovered and fixed at the earliest stages to reduce cost and risk.

Given the dynamic nature of the signals and their interactions, gaining confidence that you’ve found all the undesirable signals is tough. Using the processing power of today’s signal analyzers is a good first step.

This composite real-time spectrum analysis (RTSA) display shows both calculated density and absolute spectrum peaks. Real-time spans of up to 500 MHz are available, letting you know you’ve seen everything that happened over that span and in that measurement interval.


Though RTSA makes the task easier and the results more certain, RF engineers have been finding small and elusive signals for many years. Peak-hold functions and peak detectors have been available in spectrum analyzers since the early days and they’re effective, if sometimes time-consuming.

Minimizing noise in the measurement is essential for finding small signals, but the traditional approach of reducing RBW can make sweep times unreasonably long. Fast-sweep features and noise subtraction are available in some signal analyzers, leveraging signal processing to expand the speed/performance envelope. Keysight’s noise floor extension is particularly effective with noise-like signals such as digital modulation.
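Two rough relationships explain why this tradeoff bites (my own rule-of-thumb figures for a traditional swept RBW filter, not specifications for any particular analyzer): the displayed noise floor tracks 10·log10(RBW), while sweep time grows roughly as 1/RBW².

```python
# Rule-of-thumb tradeoff for swept measurements; k is an assumed filter constant.
import math

def noise_floor_change_db(rbw_new_hz, rbw_old_hz):
    return 10 * math.log10(rbw_new_hz / rbw_old_hz)

def sweep_time_s(span_hz, rbw_hz, k=2.5):
    return k * span_hz / rbw_hz**2

print(noise_floor_change_db(1e3, 10e3))                 # -10 dB noise floor...
print(sweep_time_s(1e9, 10e3), sweep_time_s(1e9, 1e3))  # ...but 25 s becomes 2500 s
```

The factor k depends on the filter implementation, so treat these as order-of-magnitude guides rather than predictions for a specific instrument.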

Of course, finding harmonic and spurious emissions is only half the battle. A frequency reading may be all you need to deduce their origin, but in many cases you need more information to decisively place blame.

In addition to frequency, the most useful things to know about undesirable signals are their spectral shape and timing. That means isolating the suspects and relating them to the timing of other signals. One traditional approach is a zero-span measurement, centered on the signal of interest. It’s a narrow view of the problem but it may be enough.

Far more powerful tools are available using the memory and processing power of today’s signal analyzers. Frequency-mask triggering is derived from RTSA and can isolate the signal for display or generate a trigger for complete capture and playback of signals. Signal recording is usually done with the 89600 VSA software and can include capture of events that occurred before the trigger.

For even more complex time relationships, the VSA software borrows from oscilloscopes to provide a time-qualified trigger for RF engineers. Command of both the time and frequency domains is the most effective path to interference solutions.

If you don’t have these advanced tools, you can add them to existing signal analyzers with a minimum of fuss. With your RF intuition and good tools, interference has no place to hide.


Turning Back RF Technology to Take a Step Forward

  New activity on familiar terrain

High-profile developments in wireless networking usually involve ever-wider bandwidths and ever-higher operating frequencies. These support the insatiable need for increased wireless capacity, and they parallel mobile developments such as 5G cellular. And if the prognosticators are correct, more users generating more traffic will compete with increasing traffic from non-human users such as the Internet of things (IoT) and machine-to-machine (M2M) communications.

The increasing need for data capacity is undeniable, but the focus on throughput seems a bit narrow to me. I’m probably a typical wireless user—if there is such a thing—and find myself more often dissatisfied with data link availability or reliability than with capacity.

For example, Wi-Fi in my family room is mostly useless when the microwave in the kitchen is on. Sure, I could switch to a 5.8 GHz wireless router, but those signals don’t travel as far, and I would probably relocate the access point if I made the change. Another example: The 1.9 GHz DECT cordless phone in the family room will cover the front yard and the mailbox, but the one in my downstairs office won’t. A phone doesn’t demand much data throughput for voice, but it must provide a reliable connection. Yes, I can carry my mobile phone and forward to it, but I sometimes appreciate the lack of a tether.

I often think about the digital cordless phone I had a dozen years ago, operating on the 900 MHz ISM band with a simple 12-bit PN code for spread spectrum. Its range was hundreds of yards with obstructions and over half a mile in the open.

I’ve been reading a little about the proposed new 802.11ah wireless networking standard in that same 900 MHz band, and thinking about the implications. Two important technical factors are the limited width of the band—902 to 928 MHz—and improved signal propagation compared to the 2.4 and 5.8 GHz bands. In the technical press you’ll frequently see a diagram similar to this one:

Lower frequencies generally propagate better, and the difference can be significant in terms of network coverage in a house or office space. Of course, practical range depends on many other factors as well.


The diagram is certainly oversimplified, in particular neglecting any band-crowding, interference or obstruction issues. Nonetheless, the potential range benefits are obvious. Some claim that real-world distances of more than a kilometer are feasible, and the 900 MHz band may allow higher effective transmit power than 2.4 or 5.8 GHz.
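For a rough feel of the propagation advantage (my own free-space calculation, which ignores walls, multipath and antenna differences), compare the Friis path loss of the three bands over the same distance:

```python
# Free-space path loss comparison; real indoor coverage adds obstructions on top.
import math

def fspl_db(freq_mhz, dist_km):
    return 20*math.log10(dist_km) + 20*math.log10(freq_mhz) + 32.44

d_km = 0.05                                # 50 m
for f in (915, 2437, 5800):                # MHz
    print(f"{f} MHz: {fspl_db(f, d_km):.1f} dB")
# 915 MHz comes out ~8.5 dB better than 2.4 GHz and ~16 dB better than 5.8 GHz.
```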

Throughput, however, is modest compared to other WLAN standards. Data can be sent using a down-scaled version of the 802.11a/g physical layer for data rates ranging from 100 kb/s to more than 25 Mb/s. Significantly, the standard supports power-saving techniques including predefined active/quiescent periods.

As usual, the standard has many elements, supporting a variety of potential uses, and a few are likely to dominate. Those mentioned most often relate to IoT and M2M. Compared to existing Wi-Fi, 802.11ah should be better optimized for the required combination of data rate, range and power consumption.

Although that presumption seems reasonable, recent history tells us that attractive combinations of spectrum and PHY layer will be bent to unanticipated purposes. I think there are many situations in which users would be happy to trade transfer speed for a high-reliability link with longer range.

From an RF engineering standpoint, improved propagation is a double-edged sword. Current WLAN range relates well to the scale of homes and small businesses, naturally providing a degree of geographic multiplexing and frequency reuse due to lower interference potential. The combination of propagation and transmit power in the narrower 900 MHz band will change tradeoffs and challenge radio designers.

The 802.11ah standard is expected to be ratified sometime in 2016, and RF tools are already available. Keysight’s WLAN measurement applications for RF signal generators and signal analyzers already support the standard, and vector signal analysis is supported with pre-stored settings in the custom OFDM demodulation of the 89600 VSA software Option BHF.

With established alternatives ranging from ZigBee to 802.11ac, some are skeptical about the success of this effort in the relatively neglected 900 MHz ISM band. It’s a fool’s errand to try to predict the future, but it seems to me this band has too much going for it to remain under-occupied.


Today’s R&D Comparison: RF Engineering and Rocket Fuel

  An engineering field where the products you’re designing are constantly trying to kill you

The holiday season is here and it’s been a while since I have wandered off topic. I hope you’ll indulge me a little, and trust my promise that this post will bring some insight about the business we’re in, at least by happy contrast.

It’s a good time of the year to reflect on things we’re thankful for, and in this post I’d like to introduce you to a book about a fascinating field of R&D: developing rocket fuel. Compared to work on rocket fuel, our focus on RF technology has at least two major advantages, including longevity and personal safety.

Let’s talk first about safety and the excitement engendered by the lack of it. Robert Goddard is generally credited with the first development and successful launch of a liquid-fueled rocket. Here he is with that rocket, just before its launch in March of 1926.

Robert Goddard stands next to his first liquid-fueled rocket before the successful test flight of March 16, 1926. The combustion chamber is at the top and the fuel tanks are below. The pyramidal structure is the fixed launch frame. (photo by Esther Goddard, from the Great Images in NASA collection)


In my mind, the popular image of Goddard has been primarily that of an experimenter, skewed by the footage we’ve all seen of his launches. In reality, he was also a remarkable theoretician, very early on deriving the fundamental parameters and requirements of atmospheric, lunar, and interplanetary flight.

He also showed good sense in choosing gasoline as his primary rocket fuel, generally with liquid oxygen as the oxidizer. This may seem like a dangerous combination, but it was tame compared to what came just a few years later.

That brings me to the fascinating book about the development of liquid rocket fuels. The author is John D. Clark, a scientist, chemist, science/science-fiction writer, and developer of fuels much more exotic than those Goddard used. The introduction to the book was written by author Isaac Asimov and it describes the danger of these fuels very well:

There are, after all, some chemicals that explode shatteringly, some that flame ravenously, some that corrode hellishly, some that poison sneakily, and some that stink stenchily. As far as I know, though, only liquid rocket fuels have all these delightful properties combined into one delectable whole.

Delectable indeed! And if they don’t get you right away, they’re patient: it’s no surprise that many of these fuels are highly carcinogenic.

The book is titled Ignition! An Informal History of Liquid Rocket Propellants. It was published in 1972 and is long out of print, but a scan is available at the link. Fittingly, the book opens with two pictures of a rocket engine test cell, before and after an event called a “hard start.” Perhaps rocket engineers think the term “massive explosion” is too prejudicial.

For many spacecraft and missiles, the most practical fuels are hypergolic, those that burn instantly on contact, requiring no separate ignition source. Clark describes their benefits and extravagant hazards in the chapter “The Hunting of the Hypergol.” The suit on the technician in this picture and the cautions printed on the tank give some idea of the potential for excitement with these chemicals.

Hydrazine, one part of a hypergolic rocket-fuel pair, is loaded on the Messenger spacecraft. The warnings on the tank note that the fuel is corrosive, flammable, and poisonous. The protective gear on the technician gives some idea of the dangers of this fuel. (NASA image via Wikimedia commons)


Clark is a skilled writer with a delightful sense of humor, and the book is a great read for holiday downtime at home or on the road. However, it is also a little sad to hear that most of the development adventure in this area came to an end many years ago. Clark writes:

This is, in many ways, an auspicious moment for such a book. Liquid propellant research, active during the late ’40s, the ’50s, and the first half of the ’60s, has tapered off to a trickle, and the time seems ripe for a summing up, while the people who did the work are still around to answer questions.

So in addition to being thankful that we’re doing research on things that aren’t constantly trying to kill us, we can also be grateful for a degree of career longevity. RF/microwave engineering has been a highly active field for decades and promises to continue for decades more.

Plus, while we give up a certain degree of excitement, we don’t need to wear a moon suit to prepare for tests, and we don’t need to run them from a concrete bunker.


Analyzer Upgrades: Going Back in Time and Changing Your Mind

  A practical way to revisit decisions about bandwidth and frequency range

No matter how carefully you consider your test equipment choices, your needs will sometimes evolve beyond the capabilities you’ve purchased. You may face a change in standards or technology, or the need to improve manufacturing productivity or margins. Business decisions may take you in a different direction, with some opportunities evaporating and new ones cropping up.

The one thing you can predict with confidence is that your technological future won’t turn out quite the way you expect. Since test equipment is probably a significant part of your capital-asset base, and your crystal ball will always have hazy inclusions, you have to find the best ways to adapt after the fact.

Analyzer software and firmware can certainly help. New and updated measurement applications are often available, tracking standards as they evolve. Firmware and operating-system updates can be performed as needed, though they’re likely more difficult and sometimes more disruptive than just installing an app.

In some cases, however, the new demands may be more fundamental. The most common examples are increased measurement bandwidth and extended frequency range, both recurring themes in wireless applications.

Of course, the obvious solution is a new analyzer. You get a chance to polish your crystal ball, make new choices, and hope for the best. Unfortunately, there is not always capital budget for new equipment, and the purchase-and-redeployment process burns time and energy better spent on engineering.

If the analyzer is part of a modular system, it may be practical to change individual modules to get the capability you need, without the expense of complete replacement. Of course, there are still details like capital budget, management of asset numbers and instrument re-calibration.

One approach to upgrading instrument fundamentals is sometimes called a “forklift upgrade,” a term borrowed from major IT upgrades requiring actual forklifts. In the case of test equipment, it’s a tongue-in-cheek reference to the process of lifting up the instrument serial-number plate and sliding a new instrument underneath. For instruments not designed to be upgradable, this term applies pretty well.

Fortunately, the forklift upgrade reflects a prejudice that is out of date for analyzers such as Keysight’s X-Series. Almost any available option can be retrofitted after purchase, even for analyzers purchased years ago.

Fundamental characteristics such as analysis bandwidth, frequency range, and real-time spectrum analysis (RTSA) can be added to Keysight X-Series signal analyzers at any time after purchase. This example shows a 160 MHz real-time display and a frequency-mask trigger (inset) on an instrument upgraded from the base 3.6 GHz frequency range.


Upgradability is part of the analyzer design, implemented in several ways. The internal architecture is highly modular, including the RF/microwave front end, IF digitizing, and DSP. The main CPU, disk/SSD memory, and external digital interfaces are directly upgradable by the user.

For RF engineers, this is the best substitute for time travel. Hardware upgrades include installation, calibration, and a new warranty, with performance specifications identical to those of a new instrument.

There are organizational and process benefits as well, avoiding the need for new instrument purchase approvals and changes in tracking for asset and serial numbers.

If the decisions of the past have left you in a box, check out the new application brief on analyzer upgrades for a way out. If the box you’re in looks more like a signal generator, Keysight has solutions to that problem too.


Using Windows for a Better View of RF/Microwave Signals

  How to discard information and improve your measurements

If your primary focus is RF/microwave analysis, you may not be familiar with “windows” or “window functions,” and they may not be a factor in your explicit measurement choices. However, it’s worth knowing a little about them for at least two reasons: you may already be using them, and they can help you make better measurements in demanding situations.

Windows have been an essential feature of the fast Fourier transform (FFT) architecture of low-frequency analyzers for many years. FFT processing has also been added to many high-frequency analyzers as a way to implement narrow resolution bandwidths (RBWs) while optimizing measurement speed. Finally, FFT processing is central to vector signal analyzers (VSAs) and OFDM demodulation in particular.

FFTs calculate a spectrum from a block of samples called a time record, and the FFT algorithm assumes that the time record repeats endlessly. That assumption is valid for signals that are periodic over the selected time record, but it causes discontinuity errors for signals that are not. In the FFT spectrum results, the errors create a spreading of the spectral energy called leakage.

The solution is to somehow force the signal to be periodic within the time record, and the most common approach is to multiply the time record by a weighting function that reduces amplitude to zero at both ends of the time record, as shown below.
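Here’s a minimal numerical sketch of my own (the figure below makes the same point graphically), comparing the FFT of a sine wave that isn’t periodic in the time record with and without a Hann window applied:

```python
# Leakage demo: non-periodic sine, uniform (no) window vs. Hann window.
import numpy as np

fs, n = 1000.0, 1024
t = np.arange(n) / fs
x = np.sin(2*np.pi*52.3*t)                 # 52.3 Hz: non-integer cycles per record

def spectrum_db(sig):
    return 20*np.log10(np.abs(np.fft.rfft(sig)) / (n/2) + 1e-12)

rect = spectrum_db(x)                       # uniform window: heavy leakage
hann = spectrum_db(x * np.hanning(n))       # Hann window: discontinuity removed

freqs = np.fft.rfftfreq(n, 1/fs)
far = freqs > 150                           # look well away from the tone
print("worst leakage, uniform: %.0f dB" % rect[far].max())
print("worst leakage, Hann:    %.0f dB" % hann[far].max())
```

The Hann window is just one common example; the same comparison works with any of the window shapes discussed below.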

In this example of a non-repeating sine wave, the FFT algorithm’s assumption that signals repeat for each time record produces the erroneous signal in the second waveform. The window or weighting function removes the discontinuities before the FFT calculates the spectrum.


As an engineer, you’d expect tradeoffs from this weighting process, which effectively discards some samples and down-weights others. That is indeed the case and, among other things, the windowing widens the RBW. It also creates sidelobes of varying amplitude and frequency spacing, depending on the window shape.

The good news is that window shapes can be chosen to optimize the tradeoffs for specific measurements, such as prioritizing frequency resolution, amplitude accuracy or sidelobe level.

I’ll discuss examples of those tradeoffs in a future post, but first I’d like to show what’s possible in the best case, where the signal is periodic in the time record and the uniform window—equivalent to no windowing—can be used.

Two gated spectrum measurements are made of an off-air capture of an 802.11n signal in the 5 GHz band. The gate is set to match a portion of the training sequence, which is periodic or self-windowing. The uniform window of the 89600 VSA software can be used in this case, providing enough frequency resolution in the bottom trace to measure individual OFDM subcarriers.


In this measurement, gated sweep in the 89600 VSA software has been configured to align with a portion of the training sequence which is self-windowing. The selected uniform window is actually no window at all, and no signal samples are discarded or down-weighted. In this special case no tradeoffs are needed between accuracy, frequency resolution and dynamic range.

As an aside, this training sequence includes every second subcarrier, transmitted at equal amplitude. The peaks describe the ragged frequency response that receivers have to deal with in the real world.

Vector signal analyzers use FFTs for spectrum analysis all the time, but modern signal analyzers such as Keysight’s X-Series automatically choose between FFTs and swept digital filters as needed. In a future post or two I’ll discuss how to optimize FFT analysis and select windows to extract maximum information and improve measurement speed in swept measurements.


Measure This, Not That: Signal Separation in Optical and Electrical Measurements

  I wonder if these MIT researchers can quantify their directivity?

I’m fascinated by correspondences between phenomena in RF engineering and other fields, and it isn’t just a matter of curiosity. These correspondences are also enlightening, and sometimes guide genuine technological advances.

An interesting cross-domain example is the recent MIT announcement of a technique for removing unwanted reflections from photos taken through windows. We’ve all experienced this problem, feeling the surprised disappointment when the photo includes obvious reflections we didn’t notice when composing the picture. At least with digital cameras, we can usually spot the problem while there’s still a chance to take another photo and fix or reduce it.

That surprised disappointment is actually a pointer to the kind of solution the MIT folks have produced. If you haven’t seen it already, take a look at the before/after in the MIT press release.

The uncorrected image is likely to be familiar, and the strength of the reflections is often much greater in the resulting image than the photographer perceived when composing the shot. The perceptual shift is likely caused by our visual system’s ability to automatically do a version of what the MIT technique attempts to do: separate the reflection from the desired image and subtract or ignore it.

The MIT technique doesn’t identify the reflection directly, but it can recognize pairs of them. That’s useful because the unwanted reflections often come from both the front and rear surfaces of the intervening glass, with an apparent offset.

Unwanted reflections from photography through a window—such as the photographer’s hand or body—usually appear in offset pairs, originating from both the front and rear surfaces of the glass. Blame me, not MIT, for any errors or oversimplification in this diagram.


When reading about the technique, my first thought was the similarity to network analysis and its powerful tools for separating and quantifying incident and reflected energy. The analogy breaks down when considering the separation methods, however. The gang at MIT look for the reflection pairs, perhaps with something similar to two-dimensional autocorrelation. RF/microwave engineers usually make use of a directional coupler or bridge.

Directional couplers separate incident and reflected energy, and a critical performance parameter is directivity or how well the coupler can separate the energy moving in each direction.


Of course, I now find myself wondering about the effective directivity of the MIT separation-and-removal scheme, and if they think of it in those terms. Probably not, though that would be a ready-made way to quantify how well they’re doing and it might help in optimizing the technique.

Recently, I’ve written about improving measurement accuracy. However, in thinking about these tools and techniques, I realized that separating signals to measure the right one is fundamental to making better RF measurements of all kinds. Indeed, the separation process is often more difficult than the core measurement itself.

Spectrum analyzers naturally use their RBW filters to separate signals into their different frequency elements, but it may also be critical to separate them by their behavior or their time duration and timing, or to separate them from the analyzer’s own noise.

I could go on and on, and branch off into optical separation techniques such as steganography. Now that I’m looking for such methods, I see them everywhere and resolve to consider signal separation explicitly as an essential step to accurate measurements.


Efficient Averaging: All Samples are not Created Equal

  Is noise driving me mad, or just driving my interest in statistics?

I suspect that many of you, like me, have no special interest in statistics per se. However, we’ve also learned how powerful statistics can be in revealing the characteristics of our world when noise obscures them. This is especially true with our circuits and signals.

Two commonly used statistical tools are averaging and analysis of sample distributions. A while back I finally got around to looking at a distribution question that had been bugging me, and now it’s time to understand one aspect of averaging a little better.

Averaging is probably the most common statistical tool in our world, and we are often using one or more forms at once, even if we’re not explicitly aware of doing so.

Averaging is used a lot because it’s powerful and easy to implement. Even the early spectrum analyzers had analog video bandwidth filters, typically averaging the log-scaled video signal. These days many signal analyzers perform averaging using fast DSP. The speed is a real benefit because noise may be noisier than we expect and we need all the variance reduction we can get.

Years ago, I learned a rule of thumb for averaging that was useful, even though it was wrong: The variance of a measurement decreases inversely with the square root of the number of independent samples averaged. It’s a way to quantify the amount of averaging required to achieve the measurement consistency you need.

It’s a handy guide, but I remembered it incorrectly. It is the standard deviation that goes down with the square root of the number of independent samples averaged; variance is the square of standard deviation.

An essential condition is sometimes overlooked in applying this rule: the samples must be independent, not correlated with each other by processes such as filtering or smoothing. For example, a narrow video bandwidth (VBW) limits the rate at which independent samples reach the averaging process, no matter how fast the detector produces them. The same goes for the RBW filter, though as another rule of thumb, the averaging effect of the VBW can be ignored if the VBW is at least three times wider than the RBW.
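Here’s a small simulation of my own that shows what the independence condition is worth: a 50-“trace” average where each trace is the mean of a 100-sample noise record, computed once with non-overlapping records and once with heavily overlapped (correlated) records. All counts and the overlap factor are arbitrary choices for illustration.

```python
# Independent vs. correlated samples in a 50-trace average (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
trials, n_avg, rec = 2000, 50, 100

def std_of_trace_average(hop):
    """Std deviation of a 50-trace average; hop=rec means independent records,
    hop<rec means overlapped (correlated) records drawn from the same data."""
    results = []
    for _ in range(trials):
        raw = rng.normal(size=rec + (n_avg - 1)*hop)
        traces = [raw[i*hop : i*hop + rec].mean() for i in range(n_avg)]
        results.append(np.mean(traces))
    return np.std(results)

print("independent records:", std_of_trace_average(hop=rec))      # ~1/sqrt(5000) = 0.014
print("80% overlap        :", std_of_trace_average(hop=rec//5))   # roughly twice as large
```

Even though 50 traces go into each average, the overlapped data contains far fewer independent samples, so the variance reduction is much smaller than the trace count suggests.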

What does the effect of correlated samples look like in real spectrum measurements? Performing a fixed number of averages with independent and correlated samples makes it easy to see.

A 50-trace average is performed on spectrum measurements in a vector signal analyzer. In the top trace the samples are independent or uncorrelated. In the bottom trace the samples are correlated by overlap processing of the data, resulting in a smaller averaging effect.


For convenience in generating this example I used the 89600 VSA software and trace averaging with overlap processing of recorded data. In overlap processing, successive spectrum calculations include a mix of new and previously processed samples. This is similar in concept to a situation in which an averaging display detector samples much faster than the VBW filter. The average is valid, but variance and standard deviation do not decrease as much as the number of samples in the average would suggest.

You probably won’t face this too often, though if you find yourself frustrated with averaging that doesn’t smooth your data as much as expected, you might question the independent-samples condition. Fortunately, measurement applications are written with this in mind, and some allow you to increase average counts if you need even more stable results.

If issues such as this are important to you, or if you frequently contend with noise and noise-like signals, I suggest the current version of the classic application note Spectrum and Signal Analyzer Measurements and Noise. The explanations and measurement techniques in the note are some of the most practical and effective you’ll find anywhere.

Finally, it’s time for your daily new three-letter acronym: NID. It stands for normally and independently distributed data. It applies here and it’s a familiar concept in statistics, but apparently an uncommon term for RF engineers.


“I Have Two of Those and They’re Both Broken”

  K connectors, microwave measurements and careful plumbing

Over the years, I’ve heard several engineers speculate on alternative lives as plumbers. It’s a career that requires some technical knowledge and pays well, but can be shut off entirely—mentally and physically—at the end of the day and on weekends. One of the engineers lived next door to a plumber, so his wistful musings were probably well informed.

As a homeowner, I’ve done my share of amateur plumbing, and there is certainly satisfaction in a job well done—or at least one that doesn’t leak too much.

Of course, the plumbing that pays my bills is a rather different kind, and requires an even greater degree of care and precision. For example, the specifications for microwave and millimeter connector gauges show resolution better than 1/10,000 inch, or about 0.0025 mm.

I’ve been looking into high-frequency connectors to make sense of something a friend said to me while discussing different connector types. When the subject of the 2.92 mm or “K” connector came up he said, “I have two of those and they’re both broken.”

I didn’t ask for details, but had heard elsewhere that 2.92s might not be as robust as their 2.4 or 3.5 mm cousins. One online source mentioned a thinner wall for the outer conductor, while another mentioned potential damage to the center conductor.

On the other hand, the K connector offers some distinct advantages in microwave and millimeter connections. It covers frequencies to 40 GHz or higher and is mode-free to about 45 GHz. It also intermates with 3.5 mm and SMA assemblies.

To help avoid damage, the 2.92 mm male connector is designed with a shorter center pin, ensuring that the outer shell is engaged before the center conductors make contact. The outer shell is thick and should be relatively strong.

The situation became clearer when I got a close look at two damaged 2.92 mm connectors. It helped me understand the dimensional requirements of a 40+ GHz connector that can mate with 3.5 mm and SMA connectors.

Damage to the collet or female center conductors of two 2.92 mm K connectors has rendered them useless. The fingers of the slotted contacts are bent or missing, likely from mating with a bad SMA male connector.


The 2.92 mm connectors should not be prone to damage when used with other 2.92 mm connectors, but intermating with SMA connectors—one of the benefits of this family—is more likely to be destructive.

For a brief explanation, start with the rule of thumb for estimating the maximum frequency of coax: divide 120 GHz by the inner diameter D (in mm) of the outer conductor. The outer diameter d of the inner conductor is then constrained to a specific D/d ratio to obtain the desired impedance. Because the mating pin is sized for compatibility with SMA and 3.5 mm hardware, that pin is comparatively large for the K connector’s smaller center conductor, and the slotted contacts of the female center conductor end up very thin.
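The numbers are easy to check. Here’s a short sketch of my own applying that 120 GHz/D rule of thumb and the standard air-line impedance formula to the precision interfaces discussed here (nominal dimensions, for illustration only):

```python
# Coax rules of thumb: fmax ~ 120 GHz / D(mm); Z0 = (60/sqrt(er)) * ln(D/d).
import math

def fmax_ghz(D_mm):
    return 120.0 / D_mm

def z0_ohms(D_mm, d_mm, er=1.0):
    return 60.0 / math.sqrt(er) * math.log(D_mm / d_mm)

for name, D in (("3.5 mm", 3.5), ("2.92 mm (K)", 2.92), ("2.4 mm", 2.4)):
    d = D / math.exp(50.0 / 60.0)          # choose d for 50 ohms in air
    print(f"{name}: fmax ~ {fmax_ghz(D):.0f} GHz, "
          f"d ~ {d:.2f} mm (Z0 = {z0_ohms(D, d):.1f} ohms)")
```

The 2.92 mm air line works out to a center conductor of roughly 1.27 mm versus about 1.52 mm for 3.5 mm—less room for the slotted fingers of a female contact that must still accept the same mating pin as SMA and 3.5 mm hardware.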

Combine these thin contacts with SMA connectors—which have looser tolerances and are more likely to have misaligned or protruding center pins—and the result is a higher risk of damage to connectors that are otherwise robust and high-performing when mated with their own kind.

It’s logical to assume that a 3.5 mm connector, with larger d and thicker, stronger contacts, would be less likely to suffer damage from mating with an SMA. This appears to be the case, though insertion forces—and the chance of increased wear—may be higher.

It took a while for me to figure this out. One reason: some resources online were simply wrong, claiming, for example, that 2.92 mm connectors have thin outer walls (often true of SMA) and that metrology-grade versions are not available.

I now understand this small-scale plumbing a little better and can appreciate K connectors more fairly. They perform very well, are durable, and offer intermating advantages. Of course, you’ve got to take care when using them around SMA hardware, but that’s a good idea for 3.5 mm connectors as well.

SMA hardware can also be a hazard to 2.4 mm and 1.85 mm connectors, and it’s worth paying close attention to the mating habits of the expensive plumbing on your bench. It’s an essential part of getting the performance you’ve paid for.


Calibration Data: How to Use What You Know

  How well do you know what you know, and how well do you need to know it anyway?

We choose, purchase and maintain test equipment because we want answers: how big, how pure, how fast, how wide, and so on. The answers are essential to our success in design and manufacturing, but they come at a cost. Therefore, we want to make the most of them, and I have written previously about improving measurements by adding information.

There are many ways to add information, including time averaging of repetitive signals and subtracting known noise power from a measurement. I’ve recently discussed using the information from periodic calibration of individual instruments as a way to get insight into the likely—as opposed to the specified—accuracy for actual measurements. If you’re paying for calibration and the information gathered during the process, it’s wise to make the most of it. Here’s an example, from calibration, of the measured frequency response of an individual PXA signal analyzer:

Frequency response of one PXA signal analyzer as measured during periodic calibration. The measured performance and measurement uncertainty are shown in comparison to the warranted specification value.


In the cal lab, this analyzer is performing much better than its hard specs, even after accounting for measurement uncertainty. That’s not surprising, given that the specs must account for environmental conditions, unit-to-unit variation, drift, and our own measurement uncertainty.

Of course, if you’re using this particular instrument for a similar measurement in similar conditions, it’s logical to expect that flatness will be closer to the measured ±0.1 dB than to the specified ±0.35 dB. How can we take advantage of this extra performance?

Not surprisingly, the answer depends on a number of factors, many specific to your situation. I’ll offer a few thoughts and guidelines here, gathered from experts at Keysight.

Begin by understanding your measurement goals and responsibilities. You may be looking for a best estimate rather than a traceable result to use in the design phase, knowing the ultimate performance will be verified later by other equipment or methods. In this situation, the minimum and maximum uncertainty values shown above (dotted red lines) might lead you to comfortably expect ±0.15 dB flatness.

On the other hand, you may be dealing with the requirements and guidelines in standards documents such as ISO/IEC 17025, ANSI Z540.3 and ILAC G8. While calibration results are relevant, relying on them is more complicated than using the warranted specs. The calibration results apply only to a specific instrument and measurement conditions, so equivalent instruments can’t be freely swapped. In addition, you must also explicitly account for measurement conditions rather than relying on the estimates of stability and other factors that are embedded in Keysight’s spec margins.

These factors don’t rule out using calibration results in calculating total measurement uncertainty and, in some cases, it may be the most practical way to achieve the lowest levels of measurement uncertainty—but using them can complicate how you verify and maintain test systems. You’ll want to identify the assumptions inherent in your methods and have a process to verify them, to avoid insidious problems.

Measurement uncertainty is not the only element of test plan design, and calibration results can help in other ways. Consider the measured and specified values for displayed average noise level (DANL) in the following graph.

The actual and specified average noise levels of a PXA signal analyzer are shown over a range of 3.6 to 50 GHz. Where measurement speed is a consideration, the actual DANL may be a better guide than the specifications in optimizing settings such as resolution bandwidth.


In this example the actual DANL is 5 to 10 dB better than specified, and this has implications for the test engineer. When measuring low-level signals or noise, it’s necessary to select an RBW narrow enough to reduce the noise contributed by the signal analyzer. Narrow RBWs can lead to slow measurements, so there’s a real benefit to understanding the actual noise level as a way to use RBWs that are as wide—and therefore as fast—as possible.
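To put a number on the speed benefit (my own worked example, using an 8 dB margin from the middle of that 5-to-10 dB range, and assuming a traditional swept RBW filter):

```python
# How much sweep speed an 8 dB DANL margin can buy when noise sets the RBW choice.
danl_margin_db = 8.0
rbw_ratio = 10 ** (danl_margin_db / 10)        # noise floor tracks 10*log10(RBW)
print(f"RBW can be ~{rbw_ratio:.1f}x wider for the same displayed noise")
print(f"Sweep time improves by roughly {rbw_ratio**2:.0f}x (time ~ 1/RBW^2)")
```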

When your measurements and test plans are especially demanding, it makes sense to use all the information available. Guardbanding is part of a Keysight calibration service that includes the most complete set of calibration results such as those above. For easy access to calibration results without tracking paper through your organization, you can use the free Infoline service that comes with all calibrations.


About

My name is Ben Zarlingo and I'm an applications specialist for Keysight Technologies.  I've been an electrical engineer working in test & measurement for several decades now, mostly in signal analysis.  For the past 20 years I've been involved primarily in wireless and other RF testing.

RF engineers know that making good measurements is a challenge, and I hope this blog will contribute something to our common efforts to find the best solutions.  I work at the interface between Keysight’s R&D engineers and those who make real-world measurements, so I encounter lots of the issues that RF engineers face. Fortunately I also encounter lots of information, equipment, and measurement techniques that improve accuracy, measurement speed, dynamic range, sensitivity, repeatability, etc.

In this blog I’ll share what I know and learn, and I invite you to do the same in the comments.  Together we’ll find ways to make better RF measurements no matter what “better” means to you.
