## Fast-Fourier-Transform-Based Electrical Noise Measurements

Technische Universität München, Fakultät für Physik, Walther-Meißner-Institut für Tieftemperaturforschung

Technische Universität München, 2013

@misc{schade2013fast,
  title={Fast-Fourier-transform-based electrical noise measurements},
  author={Schade, Alexander},
  year={2013},
  publisher={Bachelor thesis, Technische Universit{\"a}t M{\"u}nchen}
}

We have shown how the Fourier spectrum and the power spectral density can be estimated in concrete measurements. Moreover, we have derived spectral leakage, a systematic error in spectrum computation, and discussed the Nyquist-Shannon sampling theorem and aliasing. Furthermore, we have implemented a spectrum analyzer using a combination of LabVIEW, GPU computing and commercial data acquisition hardware. It is designed as a plugin for the DR4Ever measurement system. The feature set of our spectrum analyzer is similar to that of the SR760 from Stanford Research. While the SR760 cannot analyze frequencies above fmax = 100 kHz, our spectrum analyzer has an fmax of 100 MHz.

Our spectrum analyzer was subsequently tested in four experiments. In one experiment we attempted to verify the predicted benefit of a 50% overlap in combination with windowing. In another experiment we managed to record the full spectrum of a signal with a bandwidth of 100 MHz at a very high resolution of 12 Hz. Finally, the spectrum analyzer was benchmarked in terms of memory usage and performance.

Not surprisingly, we found that the handling of large buffers in LabVIEW was very inefficient in terms of memory usage. Memory efficiency essentially translates into a maximum frequency resolution Δf as a function of the sampling rate fs: since Δf = fs/N for an N-point transform, a finer resolution requires proportionally more samples held in memory. For the maximum sampling rate of fs = 200 MS/s the best achievable frequency resolution was Δf = 12 Hz. In the future this could be much improved by switching to the 64-bit version of LabVIEW and increasing the PC's main memory and graphics memory. We found that PCI transfer is a performance bottleneck at high sampling rates, but it becomes negligible below 10 MS/s. At high sampling rates there is presumably a further performance gain to be had by reimplementing the LabVIEW part in C. The GPU part was reasonably fast even at the highest possible sampling rate: compared to a CPU implementation, the GPU fast Fourier transform was 70 times faster.
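Averaging windowed, 50%-overlapping segments as described above is essentially Welch's method of PSD estimation. The thesis's LabVIEW/GPU implementation is not reproduced here; the following is a minimal NumPy sketch, with an illustrative Hann window, test tone, and segment length chosen for demonstration only:

```python
import numpy as np

def welch_psd(x, fs, nperseg, overlap=0.5):
    """One-sided Welch PSD estimate: windowed, overlapping, averaged segments."""
    step = int(nperseg * (1 - overlap))          # 50% overlap -> half-segment step
    win = np.hanning(nperseg)                    # Hann window reduces spectral leakage
    scale = fs * np.sum(win ** 2)                # normalization for a density in V^2/Hz
    psd = np.zeros(nperseg // 2 + 1)
    nseg = 0
    for i in range(0, len(x) - nperseg + 1, step):
        spec = np.fft.rfft(win * x[i:i + nperseg])
        psd += np.abs(spec) ** 2
        nseg += 1
    psd /= nseg * scale
    psd[1:-1] *= 2                               # fold negative frequencies (one-sided)
    f = np.fft.rfftfreq(nperseg, d=1 / fs)
    return f, psd

# illustrative test signal: 100 kHz tone plus white noise, sampled at 1 MS/s
fs = 1_000_000
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 100e3 * t) + 0.1 * rng.standard_normal(t.size)

f, psd = welch_psd(x, fs, nperseg=4096)
# frequency resolution is Delta f = fs / nperseg; the peak lands within one bin of 100 kHz
print(f"Delta f = {fs / 4096:.1f} Hz, peak at {f[np.argmax(psd)] / 1e3:.1f} kHz")
```

The averaging trades frequency resolution for variance: each segment gives Δf = fs/nperseg, while the 50% overlap reuses samples attenuated by the window so that fewer input samples are wasted.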
GPU computing was found to be necessary at sampling rates around 10 MS/s and above; without it, performance degrades severely. The performance benchmarks were compiled into a rating called blind time, given as a function of the sample count per block and the sampling rate. When selecting a configuration, an experimenter should minimize the blind time.
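The thesis's exact definition of blind time is not reproduced here; as a toy model, assume it is the fraction of the incoming signal lost whenever processing a block takes longer than acquiring it. All numbers below are hypothetical:

```python
def blind_time_fraction(n_samples, fs, t_process):
    """Toy model of blind time: fraction of signal missed while a block is processed.

    n_samples -- samples per block
    fs        -- sampling rate in samples per second
    t_process -- processing time per block in seconds (hypothetical benchmark value)
    """
    t_acquire = n_samples / fs                      # time to fill one block
    t_blind = max(0.0, t_process - t_acquire)       # processing overrun, if any
    return t_blind / (t_acquire + t_blind)

# hypothetical example: 1 M samples at 200 MS/s fill a block in 5 ms;
# if processing takes 8 ms, the analyzer is blind for 3 ms of every 8 ms
print(blind_time_fraction(1_000_000, 200e6, 8e-3))   # 0.375

# if processing keeps up with acquisition, no signal is lost
print(blind_time_fraction(1_000_000, 200e6, 1e-3))   # 0.0
```

This also shows why both parameters matter: larger blocks lengthen the acquisition window and hide a fixed processing cost, while higher sampling rates shrink it and expose any processing overrun.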

October 20, 2014 by hgpu