Does RF Noise have Mass?

   I’m not usually a fan of noise, but there are exceptions

My provisional assumption is that noise does indeed have mass. I support that notion with the following hare-brained chain of reasoning: The subject of noise has a gravity-like pull that compels me to write about it more than anything else. Because gravity comes from mass, noise therefore must have mass. Voila!

My previous posts dealing with noise have all been about minimizing it. Averaging away its effects, estimating the errors it causes, predicting and then subtracting noise power, and so on. Sometimes I just complain about it or wax philosophical.

I even created a webcast titled “conquering noise” but, of course, that was a bit of a conceit. Noise is a fundamental natural phenomenon and it is never vanquished. Instead, I have mentioned that noise can be beneficial in some circumstances—and now it’s time to describe one.

A few years ago, a colleague was using Keysight’s Advanced Design System (ADS) software to create 10 MHz WiMAX MIMO signals that included impairments. He started by adding attenuation to one transmitter, but after finding little or no effect on modulation quality, he added a 2 MHz bandpass filter to one channel, as shown below.

EEsof ADS simulation of a 10 MHz two-channel MIMO signal with an extra 2 MHz bandpass filter inserted in one channel.

Surely a filter that removed most of one channel would confound the demodulator. Comparing the spectra of the two channels, the effect is dramatic.

Spectrum of two simulated WiMAX signals with 10 MHz bandwidth. The signal in the bottom trace has been modified by a 2 MHz bandpass filter.

All that filtering in one channel had no significant effect on modulation quality! The VSA software he was using—as an embedded element in the simulation—showed the filter in the spectrum and the channel frequency response, but in demodulation it caused no problem.

He emailed a recording of the signal and I duplicated his results using the VSA software on my PC. I then told him he could “fix” the problem by simply adding some noise to the signals.

This may seem like an odd way to solve the problem, but in this case the simulation didn’t match reality in the way it responded to drastic channel filtering. The mismatch was due to the fact that the simulated signals were noise-free, and the channel equalization in demodulation operations could therefore perfectly correct for filter impairments, no matter how large they were.

In many ways it’s the opposite of adaptive equalization in real-world situations with high noise levels, where I have previously cautioned you to be careful what you ask for: aggressive correction amplifies noise along with the signal. When there is no noise, you can correct signals as much as you want, without ill effects.
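A toy numpy sketch makes the point (hypothetical signal and filter values, not the original ADS setup): zero-forcing equalization against a known channel is mathematically perfect when the signal is noise-free, and the cost of inverting a deep stopband appears only once noise is present.

```python
# Sketch: equalizing a heavily filtered multicarrier signal, with and
# without noise. Values are illustrative, not from the ADS simulation.
import numpy as np

rng = np.random.default_rng(0)
n_sub = 256

# Random QPSK symbols, one per subcarrier
syms = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)

# Crude "bandpass filter": 40 dB attenuation on the outer 80% of subcarriers
H = np.ones(n_sub, dtype=complex)
edge = int(0.4 * n_sub)
H[:edge] = H[-edge:] = 10 ** (-40 / 20)

def evm_percent(rx, chan, ref):
    eq = rx / chan  # zero-forcing equalizer, channel assumed known
    return 100 * np.sqrt(np.mean(np.abs(eq - ref) ** 2) / np.mean(np.abs(ref) ** 2))

noise = 0.01 * (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))

# Noise-free: equalization undoes the filter exactly (EVM essentially zero)
print(f"EVM, noise-free: {evm_percent(syms * H, H, syms):.2e} %")
# With a little noise: the equalizer amplifies it by 40 dB in the stopband
print(f"EVM, with noise: {evm_percent(syms * H + noise, H, syms):.0f} %")
```

Once even modest noise is added, the “perfect” correction of the drastic filtering falls apart, which is exactly the behavior a real system would show.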

Of course, “no noise” is not the world we live in or design for, and as much as I hate to admit it, there are times when it’s beneficial to add some.

There are certainly other uses for noise. Those also have that peculiar massive attraction and I know I’ll write about it again soon.


RF, Protocol and the Law

   RF engineering and a layer hierarchy that extends all the way to the spectral authorities

In our day jobs we focus mainly on the physical layer of RF communications, and there is certainly enough challenge there for a lifetime of productive work. The analog and digitally modulated signals we wrestle with are the foundation of an astonishing worldwide expansion of communications.

Of course, the physical layer is just the first of many in modern systems. Engineering success often involves interaction with higher layers that are commonly described in diagrams such as the OSI model shown below.

The Open Systems Interconnection (OSI) model uses abstraction layers to build a conceptual model of the functions of a communications system. The physical layer is the essential foundation, but many other layers are needed to make communication efficient and practical. (Image from Wikimedia Commons)

The OSI model is a good way to build an understanding of systems and to figure out how to make them work, but sometimes we need to add even more layers to see the whole picture. A good example comes from a recent event that caught my eye.

Many news outlets reported that some hotels in one chain in the US were “jamming” private Wi-Fi hotspots to force convention-goers to use the hotel’s for-fee Wi-Fi service. The term jamming grabbed my attention because it sounded like a very aggressive thing to do to the 2.4 GHz ISM band, which functions as a sort of worldwide public square in the spectral world. I figured regulatory authorities such as our FCC would take a pretty dim view of this sort of thing.

As is so often the case, many general news organizations were being less than precise. The hotel chain was actually blocking Wi-Fi rather than jamming it. This is something that happens not at the physical layer—RF jamming—but a few layers higher.

According to the FCC, hotel employees “had used containment features of a Wi-Fi monitoring system” to prevent people from connecting to their own personal Wi-Fi networks. Speculation from network experts is that the Wi-Fi monitoring system could be programmed to flood the area with de-authentication or disassociation packets that would affect access points and clients other than those of the hotel.

It may not surprise you that the FCC also objected to this use of the ISM band, and the result was a $600,000 settlement with the hotel to resolve the issue. The whole RF story thus extends the OSI model to at least a few more levels, including the vendor of the monitoring system, the hotel management and—at least one layer above them!—the FCC itself.

I suppose you can insert some legislative and political layers in there somewhere if you want, but I’m happy to focus my effort on wrangling the physical layer and those near it. Keysight signal generators and signal analyzers are becoming more capable above the physical layer, with features such as Wireless Link Analysis to perform layer 2 and layer 3 analysis of LTE-FDD UL and DL signals.

In the end, I hope there are ways to resolve these issues and give everyone fair access to the unlicensed portions of our shared spectrum. I dread a situation in which a market emerges for access points or hotspots with counter-blocking technology and a resulting arms race that could leave us all without access.


MIMO Streams, Channels and Condition Number Reveal a Defect

   Streams multiply complexity but they can also add insight

Multiple-input multiple-output (MIMO) techniques are powerful ways to make efficient use of scarce RF spectrum. In a bit of engineering good fortune, MIMO methods are also generally most effective where they’re most needed: crowded, reflective environments.

However, MIMO systems and signals—and the RF environments they occupy—can be difficult to troubleshoot and optimize. The number of signal paths goes up with the square of the number of transmitters, so even “simple” 2×2 MIMO provides the engineer with four paths to examine. 4×4 systems yield 16 paths, and in some systems 8×8 is very much on the table!

All these channels and streams, each with several associated measurements, can provide good hiding places for defects and impairments. One approach for tracking down problems in the thicket of results is to use a large display and view many traces at once, the subject of my Big Data post a while back. Engineers have powerful pattern recognition and this is a good way to use it.

Another way to boil down lots of measurements and produce some insight—measuring condition number—is specific to MIMO. This trace is a single value for every subcarrier, no matter how many channels are used, and it quantifies how well MIMO is working overall. Sometimes not too well, as in this measurement:

This condition number trace is flat over the channel, at a value of about 25 dB. The ideal value is 0 dB and condition number should be similar to the signal/noise ratio (SNR), so signal separation and demodulation is likely to be very poor unless SNR is very good.

The signal for the measurement above was produced with four linked signal generators, so SNR should not be a problem. However, the fact that the condition number is far above 0 dB certainly indicates that there is a problem somewhere.

Analysis software such as the 89600 VSA provides several other tools to peer into the thicket from a different angle. As mentioned previously, this 4×4 MIMO system has 16 possible signal paths, and they can be overlaid on a single grid. In this instance a dozen of the paths looked good, while four showed a flat loss about 25 dB greater than the others. That is suspiciously close to the 25 dB condition number.

Of course, when engineers see two sets of related parameters they tend to think about using a matrix to get a holistic view of the situation. That’s just what’s provided by MIMO demodulation in the 89600 VSA software as the MIMO channel matrix trace, and in this case it reveals the nature of the problem.

The MIMO channel matrix shows the complex magnitude of the 16 possible channel and stream combinations in a 4×4 MIMO system with spatial expansion. Note that the value of channel 4 is low for all four streams.

This MIMO signal was using spatial expansion or spatial encoding, as I described recently. Four streams are combined in different ways to spread across four RF channels. The complex magnitudes are all different—to facilitate MIMO signal separation—and very much non-zero.

All except for channel 4, where the problem is revealed. The matrix shows that the spatial encoding is working for all four streams, but one channel is weak for every stream. In this case the signal generator producing channel four had a malfunctioning RF attenuator, reducing output by about 25 dB.
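The arithmetic can be checked with a toy channel matrix (hypothetical values, standing in for the measured channel): start from an orthogonal 4×4 spatial-expansion mapping, whose condition number is 0 dB, and attenuate one RF channel by 25 dB.

```python
import numpy as np

# Orthogonal 4x4 spatial-expansion matrix (scaled Hadamard): the healthy
# channel's condition number is exactly 0 dB
Q = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2.0

H = Q.copy()
H[3, :] *= 10 ** (-25 / 20)   # faulty attenuator: channel 4 weak for every stream

s = np.linalg.svd(H, compute_uv=False)
print(f"condition number: {20 * np.log10(s.max() / s.min()):.1f} dB")  # 25.0 dB
```

The singular values of the impaired matrix are {1, 1, 1, 10^(−25/20)}, so the max/min ratio lands exactly at the 25 dB seen in the condition number trace.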

As is so often the case, the solution comes down to engineers using pattern recognition, deduction and intuition in combination with the right tools. For Keysight, Job 1 is bringing the tools that help you unlock the necessary insights.


Spectrum and Network Measurements: A non-eBook

   What is the role of a physical book in an electronic world?

I recently got a copy of the newest edition of Spectrum and Network Measurements by Bob Witte. This is the second edition, and it was a good time for an update. It’s been more than a dozen years since the previous one, and I think an earlier, similar work by Bob first appeared in the early 1990s. Bob has a deep background in measurement technology and was, among other things, a project manager on the first swept analyzer with an all-digital IF section. That was back in the early 1980s!

One of the reasons for the update is apparent from the snapshot I took of the cover.

The latest edition of Bob Witte’s book on RF measurements, with a real-time spectrum analysis display featured on the cover. Pardon the clutter but the book just didn’t look right without a few essential items.

The cover highlights a relatively recent display type, variously referred to as density, cumulative history, digital phosphor, persistence, etc. These displays are characteristic of real-time spectrum analyzers, and neither the analyzers nor the displays were in mainstream RF use when the previous edition of the book appeared.

An update to a useful book is great, of course, but why paper? What about a website or a wiki or an eBook of some kind? Digital media types can be easily updated to match the rate of change of new signals, analyzers and displays.

In looking through Bob’s book I’ve been trying to understand, and to put into words, how useful it feels in just the form it’s in. It’s different from an online app note, article or Wikipedia entry. Not universally better or worse, but different.

Perhaps it’s because while some things have changed in spectrum and network measurements, so many things are timeless and universal. The book is particularly good at providing a full view of the measurement techniques and challenges that have been a part of RF engineering for decades. It’s a reminder that making valid, reliable, repeatable measurements is mostly a matter of understanding the essentials and getting them right every time.

Resources online are an excellent way to focus on a specific signal or measurement, especially new ones. Sometimes that’s just what you need if you’re confident you have the rest of your measurements well in hand.

I guess that’s the rub, and why a comprehensive book like this is both enlightening and reassuring. RF engineering is a challenging discipline and there are many ways, large and small, to get it wrong. This book collects the essentials in one place, with the techniques, equations, explanations and examples that you’ll need to do the whole measurement job.

Of course there are other good books with a role to play in RF measurements. While Bob’s book is comprehensive in terms of spectrum and network measurements, one with a complementary focus on wireless measurements is RF Measurements for Cellular Phones and Wireless Data Systems by Rex Frobenius and Allen Scott. And when you need to focus even tighter on a specific wireless scheme you may need something like LTE and the Evolution to 4G Wireless: Design and Measurement Challenges*, edited by Moray Rumney.

All of these are non-eBooks, with broad coverage including many examples, block diagrams and equations. Together with the resources you’ll find using a good search engine, you’ll have what you need to make better measurements of everything you find in the RF spectrum.

 

*Full disclosure: I had a small role in writing the signal analysis section of the first edition of the LTE book. But it turned out well nonetheless!


MIMO Channels and Streams: What’s the Difference?

   Maybe this time I’ll remember my own explanation

At times, my brain seems to have a non-stick coating when it comes to certain technical details. I usually feel confident in my grasp of things while getting an explanation from an expert, watching a video or looking at a diagram. But a month or two later I’ll struggle to remember some critical element or distinction, and heaven help me if I’m called on to explain it to someone else!

One distinction that has vexed me in this way is the difference between MIMO channels and streams. The literature on MIMO is packed with mentions of both terms, but it’s rare to see them both explained in context. This blog has also been guilty of casual treatment—see the first post on MIMO—and it’s time to explain a little more. A clear, intuitive understanding is worthwhile, since both channels and streams are important, and understanding them together can lead to better RF measurements.

As always, different explanations will gain traction with different readers. Some will gain optimal insight from a mathematical discourse. Others, like me, do better with a visual approach and a diagram of a MIMO transmitter chain is the best place to start:

Transmit chain example for 2×2 MIMO in an IEEE 802.11n system. The spatial encoding or mapping block determines how streams become RF channel outputs.

Streams are the easiest element to understand from a digital point of view. In the 2×2 MIMO case, a single payload data stream is scrambled and interleaved, and error correction is added. The stream is then split in two and multiplexed onto the 48 OFDM data subcarriers. Each of the two streams independently carries half of the data payload.

The streams then become I/Q values, and if the streams are separately sent to RF transmit chains—bypassing any spatial encoding or mapping—the distinction between streams and RF channels is trivial. This configuration is called direct mapping.

However, there are several reasons why direct mapping is not the best approach for some real-world conditions. I’ll explain more in a future post, but for now imagine a situation in which one RF channel is nearly perfect and the other is badly impaired. The error-correction overhead required to keep the bad channel functioning would be wasted on the good one, and total throughput would be suboptimal.

An elegant way to solve this problem is to convert streams to RF channels using a scheme that’s more complex than direct mapping. For example, the spatial encoder could add the I/Q values of the two streams and send the sum to one RF channel; it could simultaneously subtract one from the other and send that result to the other RF channel.

In this way—and with appropriate encoding and decoding—the effective impairments are averaged between the two RF channels. An efficient amount of error-correction overhead can be chosen for the channel pair, optimizing overall data transmission. Symmetrical decoding and de-mapping at the receiver recovers the streams from the two incoming RF channels.
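A minimal numpy sketch of this sum/difference mapping (illustrative I/Q sample values, not a standard-defined encoder) shows that an orthogonal encoder lets the receiver recover the streams exactly:

```python
# Sum/difference spatial encoding for 2x2 MIMO (illustrative I/Q values)
import numpy as np

s1 = np.array([1 + 1j, -1 + 1j, 1 - 1j]) / np.sqrt(2)   # stream 1 I/Q samples
s2 = np.array([-1 - 1j, 1 + 1j, -1 + 1j]) / np.sqrt(2)  # stream 2 I/Q samples

# Orthogonal encoder: channel 1 carries the sum, channel 2 the difference,
# scaled to preserve total power
Q = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)
channels = Q @ np.vstack([s1, s2])   # rows are the two RF channel outputs

# Receiver de-maps with the inverse (Q is orthogonal and symmetric: Q^-1 = Q)
streams = Q @ channels
print(np.allclose(streams, np.vstack([s1, s2])))   # True
```

Because the mapping is orthogonal, de-mapping is just another matrix multiply, and any impairment on one RF channel is spread across both recovered streams.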

To cement the distinctions in my mind, I view streams and channels like this: streams are payload data, transformed to I/Q values on the OFDM subcarriers; and channels are the actual transmitted RF signals. For the direct mapped case, the streams become channels using the modulation processing we are familiar with, and that’s a useful mode of operation for RF troubleshooting. However, it’s likely not the common operating mode and it’s important to understand the difference!

For explaining channels and streams, then, this is a start. I haven’t said much about the implications for RF measurements, but will get to that in future posts.


More Tactics to Civilize OFDM

   “Tone reservation” is not an agency to book a band for your wedding

I recently explained some of the reasons why OFDM has become ubiquitous in wireless applications, but didn’t say much about the drawbacks or tradeoffs. As an RF engineer, you know there will be many—and they’ll create challenges in measurement and implementation. It’s time to look at one or two.

Closely spaced subcarriers demand good phase noise in frequency conversion, and the wide bandwidth of many signals means that system response and channel frequency response both matter. Fortunately, it’s quite practical to trade away a few of the data subcarriers as reference signals or pilots, and to use them to continuously correct problems including phase noise, flatness and timing errors.

However, pilot tracking cannot improve amplifier linearity, which is another characteristic that’s at a premium in OFDM systems. A consequence of the central limit theorem is that the large number of independently modulated OFDM subcarriers will produce a total signal very similar to additive white Gaussian noise (AWGN), a rather unruly beast in the RF world.

The standard measure for the unruliness of RF signals is peak/average power ratio (PAPR or PAR). The PAPR of OFDM signals approaches that of white noise, which is about 10-12 dB. This far exceeds most single-carrier signals, and the cost and reduced power efficiency of the ultra-linear amplifiers needed to cope with it can counter the benefits of OFDM.
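A quick simulation bears out the central-limit claim (simplified sketch: random QPSK subcarriers, no oversampling or pulse shaping):

```python
# PAPR of a multicarrier signal built from many independent subcarriers
import numpy as np

rng = np.random.default_rng(1)
n_sym, n_sub = 200, 600
# Random QPSK on each subcarrier, one OFDM symbol per IFFT row
X = rng.choice([-1, 1], (n_sym, n_sub)) + 1j * rng.choice([-1, 1], (n_sym, n_sub))
x = np.fft.ifft(X, axis=1)           # time-domain OFDM symbols

p = np.abs(x) ** 2
papr_db = 10 * np.log10(p.max() / p.mean())
print(f"PAPR over {n_sym} symbols: {papr_db:.1f} dB")   # noise-like, ~10-12 dB
```

Each subcarrier alone has a constant envelope, but the composite is statistically indistinguishable from band-limited Gaussian noise.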

A variety of tactics have been used to reduce PAPR and civilize OFDM, and they’re generally called crest factor reduction (CFR). These range from simple peak clipping to selective compression and rescaling, to more computationally intensive approaches such as active constellation extension and tone reservation. The effectiveness of these techniques on PAPR is best seen in a complementary cumulative distribution function (CCDF) display:

The CCDF of an LTE-Advanced signal with 20 MHz bandwidth is shown before and after the operation of a CFR algorithm. Shifting the curve to the left reduces the amount of linearity demanded of the LTE power amplifier.

Peak clipping and compression have not been especially successful because they are nonlinear transformations. Their inherent nonlinearity can cause the same problems we’re trying to fix.
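To make the CCDF and the clipping tradeoff concrete, here is a small numpy sketch (assumed definition: CCDF(x) is the probability that instantaneous power exceeds the average by more than x dB, with a noise-like signal standing in for OFDM). A hard clip pulls in the tail of the curve, but only by nonlinearly distorting the signal:

```python
# CCDF before and after a simple hard clip at 6 dB above the RMS level
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
# Complex Gaussian noise as a stand-in for a many-subcarrier OFDM signal
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

def ccdf(sig, x_db):
    p = np.abs(sig) ** 2
    return np.mean(p > p.mean() * 10 ** (x_db / 10))

# Hard clip: limit the envelope to 6 dB above the RMS level
limit = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (6 / 20)
clipped = np.where(np.abs(x) > limit, limit * x / np.abs(x), x)

print(f"P(power > avg + 8 dB), original: {ccdf(x, 8):.1e}")
print(f"P(power > avg + 8 dB), clipped:  {ccdf(clipped, 8):.1e}")  # tail removed
```

The clipped curve drops to zero above the clip level, which is the leftward shift seen in the CCDF display; what the CCDF does not show is the spectral regrowth and modulation error the clipping creates.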

As you’d expect, it’s the more DSP-heavy techniques that provide better PAPR reduction without undue damage to modulation quality or adjacent spectrum users. This is yet another example of using today’s rapid increases in processing power to improve the effective performance of analog circuits that otherwise improve much more slowly on their own.

In the tone reservation technique, a subset of the OFDM data subcarriers is sacrificed or reserved for CFR. The tones are individually modulated, but not with data. Instead, appropriate I/Q values are calculated on the fly to counter the highest I/Q excursions (total RF power peaks) caused by the addition of the other subcarriers.

Since all subcarriers are by definition orthogonal, the reserved ones can be freely manipulated without affecting the pilots or those carrying data. Thus, in theory the cost of CFR is primarily computational power along with the payload capacity lost from the sacrificed subcarriers.

The full nature of the cost/benefit tradeoff is more complicated in practice but one example is discussed in an IEEE paper: “By sacrificing 11 out of 256 OFDM tones (4.3%) for tone reservation, over 3 dB of analog PAR reduction can be obtained for a wireless system.” [1]

That’s a pretty good deal, but not the only one available to RF engineers. I mentioned the active constellation extension technique above, and other approaches include selective mapping and digital predistortion. They all have their pros and cons, and I’ll look at those in future posts.

[1] An active-set approach for OFDM PAR reduction via tone reservation, available at IEEE.org.


Condition Numbers and MIMO Insights

  What condition is your condition in?*

As we know all too well, the RF spectrum is a limited resource in an environment of ever-increasing demand. Engineers have been working hard to increase channel capacity, and one of the most effective techniques is spatial multiplexing via multiple-input, multiple-output (MIMO) transmission.

MIMO allows multiple data streams to be sent over a single frequency band, dramatically increasing channel capacity. For an intuitive approach to understanding the technique, see previous posts here: MIMO: Distinguishing an Advanced Technology from Magic and Hand-Waving Illustrates MIMO Signal Transmission. And if MIMO sounds a little like CDMA, the difference is explained intuitively here as well.

Intuitive explanations are fine things, but engineering also requires quantitative analysis. Designs must be validated and optimized. Problems and impairments must be isolated, understood and solved. Tradeoffs must be made, costs reduced and yields increased.

Quantitative analysis is a special challenge for MIMO applications. For example, a 4×4 MIMO system using OFDM has 16 transmit paths to measure, with vector results for each subcarrier and each path.

The challenge is multiplied by the fact that successful MIMO operation requires more than an adequate signal/noise ratio (SNR). Specifically, the capacity gain depends on how well receivers can separate the simultaneous transmissions from each other at each antenna. This separation requires that the paths be different from each other, and that SNR be sufficient to allow the receiver to detect the differences. Consider the artificially-generated example channel frequency responses shown below.

A 2 MHz bandpass filter has been inserted into one channel frequency response of a MIMO WiMAX signal with 840 subcarriers. The stopband attenuation of the filter will reduce SNR at the receiver and impair its ability to separate the MIMO signals from each other.

The bandpass filter applied to one channel will impair MIMO operation for the affected OFDM subcarriers in some proportion to the amount of attenuation. The measurement challenge is then to quantify the effect on MIMO operation.

The answer is a measurement called the MIMO condition number. It’s a ratio of the maximum to minimum singular values of the matrix derived from the channel frequency responses and used to separate the transmitted signals. You can find a more thorough explanation in the application note MIMO Performance and Condition Number in LTE Test, but from an RF engineering point of view it’s simply a quantitative measure of how good the MIMO operation is.

Condition number quantifies the two most important problems in MIMO transmission: undesirable signal correlation and noise. I’ll discuss signal correlation in a future post; here I’ll focus on the effects of SNR by showing the condition number resulting from the filtering in the example above.

Condition number is a ratio of singular values and is always a positive real number greater than or equal to one. It is often expressed in dB form, plotted for each OFDM subcarrier. The ideal value for MIMO is 1:1 or 0 dB, and values below 10 dB are desirable. In this measurement example the only signal impairment is SNR, degraded by a bandpass filter.

Condition-number measurements are an excellent engineering tool for several reasons:

  • They effectively measure MIMO operation by combining the effects of noise and undesirable channel correlation.
  • They are measured directly from channel frequency response, without the need for demodulation or a matrix decoder.
  • They are a frequency- or subcarrier-specific measurement, useful for uncovering frequency response effects.
  • They relate a somewhat abstract matrix characteristic to practical RF signal characteristics such as SNR.

The last point above is especially significant for understanding MIMO operation: With condition number expressed in dB, if the condition number is larger than the SNR of the signal, it’s likely that MIMO separation of the multiple data streams will not work correctly.
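In code, the core of the measurement is a single SVD of the channel matrix (hypothetical 2×2 channel values for illustration):

```python
# Condition number as described: ratio of largest to smallest singular
# values of the channel matrix, expressed in dB
import numpy as np

def cond_db(H):
    s = np.linalg.svd(H, compute_uv=False)
    return 20 * np.log10(s.max() / s.min())

H_good = np.array([[1.0, 0.2],     # paths are distinct: easy to separate
                   [0.3, 0.9]])
H_bad = np.array([[1.0, 0.98],     # paths nearly identical: hard to separate
                  [1.01, 1.0]])

print(f"distinct paths:   {cond_db(H_good):.1f} dB")   # well under 10 dB
print(f"correlated paths: {cond_db(H_bad):.1f} dB")    # far above 10 dB
```

The nearly identical rows of the second matrix make it close to singular, so the smallest singular value collapses and the condition number soars, mirroring what happens when MIMO paths are too similar for the receiver to separate.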

*I don’t know about you, but I can’t hear the phrase “condition number” without thinking of the Mickey Newbury song Just Dropped In (To See What Condition My Condition Was In), made famous by Kenny Rogers & the First Edition way back in 1968. Did Newbury anticipate MIMO?

Direct Digital Synthesis Pays Dividends in Signal Analyzer Performance

  An alternative to PLLs changes the phase noise landscape

Phase-locked loops (PLLs) in radio receivers date back to the first half of the 20th Century, and made their way into test equipment in the second half. In the 1970s, PLLs supplanted direct analog synthesizers in many signal generators and were used as the local oscillator (LO) sections of some spectrum analyzers.

In another example of Moore’s Law, the late 1970s and 1980s saw rapid improvements in PLL technology, driven by the evolution of powerful digital ICs. These controlled increasingly sophisticated PLLs and were the key enabling technology for complex techniques such as fractional-N synthesis.

PLLs are still key to the performance and wide frequency range of all kinds of signal generators and signal analyzers. However, as I mentioned in a recent post, direct digital synthesis (DDS) is coming of age in RF and microwave applications, and signal analyzers are the newest beneficiary.

A good example is the recently introduced Keysight UXA signal analyzer. DDS is used in the LO of this signal analyzer to improve performance in several areas, particularly close-in phase noise. The figure below compares the phase noise of three high-performance signal analyzers at 1 GHz.

The phase noise of the UXA signal analyzer is compared with the performance of the PXA and PSA high-performance signal analyzers. Note the UXA’s lack of a phase noise pedestal and significant improvement at narrow frequency offsets.

Phase noise is a critical specification for signal analyzers, determining the phase noise limits of the signals and devices they can test, and the accuracy of measurements. For example, radar systems need oscillators with very low phase noise to ensure that the returns from small, slow-moving targets are not lost in the phase noise sidebands of those oscillators.

A spectrum/signal analyzer’s close-in phase noise reflects the phase noise of its frequency-conversion circuitry, particularly the local oscillator and frequency reference. The phase noise of PLL-based LOs typically includes a frequency region in which the phase noise is approximately flat with frequency offset. This is called a phase noise pedestal, and its shape and corner frequency are determined in part by the frequency response of the filters in the PLL’s feedback loop(s). The PLL’s loop-filter characteristics are adjusted automatically, and are sometimes selectable by the analyzer user, as a way to optimize phase noise performance in the offset region most important for a measurement.

With the DDS technology in the UXA, the absence of a pedestal means that improved performance is available over a wide range of offsets up to about 1 MHz. For very wide offsets, a PLL is used along with the DDS to get a lower phase noise floor from its YIG-tuned oscillator.

Despite its obvious advantages, DDS will not fully replace PLLs any time soon. DDS technology is generally more expensive than PLLs, requiring very high-speed digital-to-analog converters with extremely good spurious performance, and high-speed DSP to drive the DACs. In addition, PLLs still offer the widest frequency range, and therefore most DDS solutions will continue to include PLLs.


Mating Habits of Microwave and Millimeter Connectors

  The tiny ones and the giants

Even if you have especially good eyes—and I do not—it can be difficult to identify the various connector types found on the bench of the typical microwave/millimeter engineer. This is because the frequencies are very high and the dimensions are very small! Nonetheless, accuracy and repeatability are expensive and hard-won at these extreme frequencies, and so it’s worth it to get interconnections right.

Getting things right also helps avoid the cost and inconvenience of connector damage. Connectors are designed with mechanical characteristics to avoid mating operations that would cause gross connector damage, but these measures sometimes fail, subjecting you to hazards such as loose nut danger.

The vast majority of intermating possibilities can be summarized in two sentences:

  • SMA, 3.5 mm and 2.92 mm (“K”) connectors are mechanically compatible for connections.
  • 2.4 mm and 1.85 mm connectors are compatible with each other, but not with the SMA/3.5 mm/2.92 mm group.
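For quick reference, those two sentences can be captured in a few lines of Python. The groupings come straight from the rule above; the function itself is just an illustrative sketch, not a complete connector database.

```python
# The two mechanically compatible families, per the rule above.
FAMILY_A = {"SMA", "3.5 mm", "2.92 mm"}   # SMA / 3.5 mm / 2.92 mm ("K")
FAMILY_B = {"2.4 mm", "1.85 mm"}          # 2.4 mm / 1.85 mm

def can_intermate(conn1: str, conn2: str) -> bool:
    """Return True if the two connector types mate without physical damage."""
    for family in (FAMILY_A, FAMILY_B):
        if conn1 in family and conn2 in family:
            return True
    return False

print(can_intermate("SMA", "2.92 mm"))    # True: same family
print(can_intermate("3.5 mm", "2.4 mm"))  # False: the families don't intermate
```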

A good single-page visual summary is available from Keysight. Here’s a portion of it.

This summary of microwave and millimeter connector types uses color to indicate which types can be intermated without physical damage.

Avoiding outright damage is important; however, performance-wise, it’s a pretty low bar for the RF engineer. Our goal is to optimize performance where it counts, and microwave and millimeter frequencies demand particular care.

For example, intermating different connector types, even when they’re physically compatible, has a real cost in impedance match (return loss) and impedance consistency. This has implications for amplitude accuracy and repeatability, with examples described in the March 2007 Microwave Journal article Intermateability of SMA, 3.5 mm and 2.92 mm connectors.

And it isn’t just mating different connector types that will give you fits. Like teenagers, it seems you can’t send millimeter signals anywhere without them suffering some sort of trouble. All kinds of connectors, adapters and even continuous cabling will affect signals to some degree, and suboptimal connection performance can be a hard problem to isolate.

Even connector savers, a good practice recommended here, add the effects of one more electrical and mechanical interface. As always, it’s a matter of optimizing tradeoffs, though of course that’s job security for RF engineers.

One approach to mastering the connection tradeoffs is to eliminate some adapters by using cables different from the usual male-at-each-end custom. Cables can take the place of connector savers and streamline test connections, especially when you’ll be removing them infrequently.

While you’re at it, consider cable length and quality carefully. Good cables can be expensive but may be the most cost effective way to improve accuracy and repeatability.

Finally, what about those huge connectors you see on some network analyzers and oscilloscopes? These are the ones that require a 20 mm wrench or a special spanner or both. The threads on some connector parts appear to be missing, though there’s a heck of a lot of metal otherwise. Here are two examples:

Male and female examples of NMD or ruggedized millimeter connectors. The larger outer dimensions provide increased robustness and stability.

The large connectors are actually NMD or ruggedized versions of 2.4 mm and 1.85 mm connectors, providing increased mechanical robustness and stability. They’re designed to mate with regular connectors of the same type, or as a mount for connector savers, typically female-to-female. Test port extension and other cables are also available with these connectors.

I’ve previously discussed the role of torque in these connections. If you’d like something to post near your test equipment, a good summary of the torque values and wrench sizes is available from Keysight.


Direct Digital Synthesis and Our Share of Moore’s Law

  RF and microwave applications get their own benefits from semiconductor advances

Gordon Moore is well known for his 1965 prediction that the number of transistors in high-density digital ICs would double every two years, give or take. While the implications for processors and memory are well understood, perhaps only RF and microwave engineers recall Moore’s other prediction in that same paper: “Integration will not change linear systems as radically as digital systems.”

Though it’s hard to quantify, it seems that the pace of advances in combined digital/analog circuits is somewhere in the middle: slower than that of processors and memory, but faster than purely analog circuits. To many of us, the actual rate means a lot because analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) bring the power, speed and flexibility of digital circuits to real-world challenges in electronic warfare (EW), wireless, radar, and beyond.

Direct digital synthesis (DDS) technologies are an excellent example, and they’re becoming more important and more prominent in demanding RF and microwave applications. The essentials of DDS are straightforward, as shown in the diagram of a signal source below.

In DDS, memory and DSP drive an RF-capable DAC, and its output is filtered to remove harmonics and out-of-band spurs. Other spurs must be kept inherently low by the design of the DAC itself.

Deep memory and high-speed digital processing—using ASICs and FPGAs—have long been used to drive DACs. Unfortunately, most DACs have been too narrowband for frequency synthesis, and wideband units lacked the necessary signal purity. The Holy Grail has been a DAC that can deliver instrument-quality performance over wide RF bandwidths.
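The heart of a DDS is the phase accumulator: a fixed-width register advanced by a frequency tuning word on every clock cycle, with its top bits indexing a sine lookup table that feeds the DAC. Here's a minimal Python sketch of the idea; math.sin stands in for the lookup table, and the clock and frequency values are purely illustrative.

```python
import math

def dds_samples(f_out_hz, f_clk_hz, n_bits=32, n_samples=8):
    """Minimal DDS sketch: an n_bits-wide phase accumulator is advanced
    by a frequency tuning word each clock; in hardware, the top accumulator
    bits would index a sine lookup table driving the DAC."""
    # The tuning word sets the output frequency:
    #   f_out = f_clk * ftw / 2**n_bits
    ftw = round(f_out_hz / f_clk_hz * 2**n_bits)
    acc, out = 0, []
    for _ in range(n_samples):
        out.append(math.sin(2 * math.pi * acc / 2**n_bits))
        acc = (acc + ftw) % 2**n_bits  # wraps naturally, like hardware overflow
    return out

# First samples of a 1 MHz tone generated with an 8 MHz clock.
samples = dds_samples(f_out_hz=1e6, f_clk_hz=8e6)
```

Note that frequency resolution comes from the accumulator width (f_clk / 2**n_bits per tuning-word step), and a new frequency takes effect on the very next clock, which is the basis of the agility discussed below.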

Engineers at Keysight Technologies (formerly Agilent) realized that new semiconductor technology was intersecting with this need. They used an advanced silicon-germanium BiCMOS process to fabricate a DAC that is the perfect core for a DDS-based RF/microwave signal generator. Signal purity is excellent even at microwave frequencies, as shown in the spectrum measurement below.

A 10 GHz CW signal measured over a 20 GHz span, showing the output purity of DDS technology in the Keysight UXG agile signal generator.

Compared to traditional phase-locked loops (PLLs) and direct analog synthesizers, DDS promises a number of advantages:

  • Frequency and amplitude agility. With no loop-filter settling, new output values can be implemented at the next DAC clock cycle. As a result, the UXG can switch as fast as 250 ns.
  • Multiple, coherent signals can be generated from one source. DDS can generate both CW and complex or composite signals, continuously or in a changing sequence. This enables generation of scenarios instead of just signals, making the technology well-suited to EW or signal-environment simulation.
  • No phase noise pedestal from a PLL, and no need to trade phase noise performance for agility. PLLs provide a wide frequency range, high resolution and good signal quality, but often require tradeoffs between frequency agility and phase noise performance.
  • Signal coherence, phase continuity and signal generator coordination. Multiple signals from a single generator can be aligned in any way desired, and switching can be phase continuous. Triggers and a shared master clock allow multiple DDS generators to produce coordinated outputs easily and with great precision.
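The phase continuity mentioned above falls directly out of the architecture: the phase accumulator is never reset, so changing the tuning word changes frequency without a phase jump. Here's a quick Python sketch of that behavior, illustrative only, not how any particular instrument implements it.

```python
import math

def phase_continuous_sweep(freqs_hz, f_clk_hz, samples_per_freq, n_bits=32):
    """Sketch of phase-continuous switching: the accumulator carries its
    phase across each frequency change instead of restarting at zero."""
    acc, out = 0, []
    for f in freqs_hz:
        ftw = round(f / f_clk_hz * 2**n_bits)  # new tuning word per segment
        for _ in range(samples_per_freq):
            out.append(math.sin(2 * math.pi * acc / 2**n_bits))
            acc = (acc + ftw) % 2**n_bits  # never reset across the switch
    return out

# Step from 1 MHz to 2 MHz at an 8 MHz clock: the first sample after the
# switch continues from the accumulated phase rather than restarting.
out = phase_continuous_sweep([1e6, 2e6], f_clk_hz=8e6, samples_per_freq=4)
```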

With sufficient DAC performance, DDS is clearly a good fit for signal generation in radar and EW, which need agility and wide bandwidth. DDS also can be valuable in signal analyzers and receivers because fast sweeping/tuning and the lack of a phase noise pedestal enable LO designs with better performance and fewer tradeoffs.

DDS implementations are generally more expensive than PLL technologies. However, as Moore predicted, technological evolution creates a dynamic environment in which the optimal solutions change over time. It seems clear that DDS will have an expanding role in better RF measurements, even if it doesn’t happen at the pace of Moore’s law.

For more about the use of DDS in a specific implementation, go to www.keysight.com/find/UXG and download a relevant app note.

About

Agilent Technologies Electronic Measurement Group is now Keysight Technologies http://www.keysight.com.

My name is Ben Zarlingo and I’m an applications specialist for Keysight Technologies. I’ve been an electrical engineer working in test & measurement for several decades now, mostly in signal analysis. For the past 20 years I’ve been involved primarily in wireless and other RF testing.

RF engineers know that making good measurements is a challenge, and I hope this blog will contribute something to our common efforts to find the best solutions. I work at the interface between Agilent’s R&D engineers and those who make real-world measurements, so I encounter lots of the issues that RF engineers face. Fortunately I also encounter lots of information, equipment, and measurement techniques that improve accuracy, measurement speed, dynamic range, sensitivity, repeatability, etc.

In this blog I’ll share what I know and learn, and I invite you to do the same in the comments. Together we’ll find ways to make better RF measurements no matter what “better” means to you.
