Recovering Lost Information in the Digital World
By Yonina Eldar
SIAM News, 2018

We live in an increasingly digital world where computers and microprocessors perform data processing and storage. Digital devices are programmed to quickly and efficiently process sequences of bits. Signal processing then translates to mathematical algorithms that are programmed in a computer operating on these bits. An analog-to-digital converter converts the continuous-time signal into samples; the transition from the physical world to a sequence of bits causes information loss in both time (the sampling phase) and amplitude (the quantization step). Is it possible to restore information that is lost in the transition to the digital domain?

The answer depends on what we know about the signal. One way to ensure a signal's recovery from its samples is to limit its speed of change. This idea forms the basis of the famous Nyquist theorem, developed in parallel by mathematicians Edmund Taylor Whittaker and Vladimir Kotelnikov [5]. The theorem states that we can recover a signal from its samples as long as the sampling rate (the number of samples per unit time) is at least twice the highest frequency in the signal. This result is the cornerstone of all current digital applications, which sample at the Nyquist rate or higher.

Despite the tremendous influence of the Nyquist theorem on the digital revolution, satisfying the Nyquist requirement in modern applications often necessitates complicated and expensive hardware that consumes considerable power, time, and space. Many applications use signals with sizable bandwidth to deliver a high rate of information and obtain good resolution in various imaging applications, such as radar and medical imaging. Large bandwidth translates into high sampling rates that are challenging to execute in practice. Thus, an important question is as follows: do we really have to sample at the Nyquist rate, or can we restore information when sampling at a lower rate?

A related concern is the problem of super-resolution. Any physical device is limited in bandwidth or resolution, meaning that we cannot obtain infinite precision in time, frequency, and space. For example, the resolution of an optical microscope is limited by the Abbe diffraction limit, which is half the wavelength used for illumination. We can thus see large objects such as bacteria in the optical regime, but proteins and small molecules are not visible with sufficient resolution. Is it possible to use sampling-related ideas to recover information lost due to physical principles?

We consider two methods to recover lost information. The first utilizes structure that often exists in signals, and the second accounts for the ultimate processing task. Together they form the basis for the Xampling framework, which suggests practical, sub-Nyquist sampling and processing techniques that result in faster and more efficient scanning, processing of wideband signals, smaller devices, improved resolution, and lower radiation doses [5].

The union of subspaces model is a popular choice for describing structure in signals [7, 9]. As special cases, it includes sparse vectors — vectors with a small number of nonzero values in an appropriate representation — which is the model underlying the compressed sensing field [6]. It also includes some popular examples of finite-rate-of-innovation signals, such as streams of pulses [11]. An example of such a signal arises naturally in radar. In a radar system, a pulse moves towards the targets, which reflect it back to the receiver. The received signal hence consists of a stream of pulses, where each pulse's time of arrival is proportional to the distance to the target, and the amplitude conveys information about the target's velocity through the Doppler effect. Several samplers based on union of subspaces modeling appear in Figure 1.

Figure 1. Sub-Nyquist prototypes for different applications developed in the Signal Acquisition Modeling and Processing Lab at Technion – Israel Institute of Technology. Image courtesy of the Yonina Eldar Lab.

Researchers have also recently been exploring the actual processing task. We consider three such examples: (i) scenarios in which the relevant information is embedded in the signal's second-order statistics [3], (ii) cases where the signal is quantized to a low number of bits [8], and (iii) settings in which multiple antennas form an image [1, 2].

An interesting sampling question is as follows: at what rate must we sample a stationary ergodic signal to recover its power spectrum? This rate can be arbitrarily low using appropriate nonuniform sampling methods. If we consider practical sampling approaches—such as periodic nonuniform sampling with N samplers, each operating at an Nth of the Nyquist rate—then only on the order of √N samplers are needed to recover the signal's second-order statistics. This leads to a reduction of the sampling rate on the order of √N.

Next, suppose that we quantize our signal after sampling with a finite-resolution quantizer. Traditionally, sampling and quantization are considered separately. However, the quantizer distorts the signal, which begs the following question: must we still sample at the Nyquist rate, which is the rate required for perfect recovery assuming no distortion? It turns out that, without assuming any particular structure of the input analog signal, we can achieve the minimal possible distortion by sampling below the signal's Nyquist rate. We attain this result by extending Claude Shannon's rate-distortion function to describe digital encoding of continuous-time signals with a constraint on both the sampling rate and the system's bit rate [8].

As a final example of task-based sampling, consider a radar or ultrasound image that is created by beamforming. An antenna array receives multiple signals reflected off the target; these signals are delayed and summed to form a beamformed output that can often be modeled as a stream of pulses. However, the individual signals do not typically contain significant structure and are often buried in noise. Nonetheless, by exploiting the beamforming process, we can form the final beamformed output from samples of the individual signals at very low rates, despite the signals' lack of structure. In addition, we can preserve the beampattern of a uniform linear array while using far fewer elements (a sparse array) by modifying the beamforming process. By applying convolutional beamforming, we can achieve the beampattern associated with a uniform linear array of N elements using only on the order of √N elements (see Figure 2).

Figure 2. The same cardiac image obtained with delay-and-sum beamforming using a uniform linear array of 64 elements (left) and convolutional beamforming using a sparse array with 16 elements (right). Image courtesy of [4].

Combining the aforementioned ideas allows us to create images in a variety of contexts at higher resolution using far fewer samples. For example, we can recover an ultrasound image from only three percent of the Nyquist rate without degrading image quality (see Figure 3). This ability allows for multiple technology developments with broad clinical significance, such as fast cardiac and three-dimensional imaging, which is currently limited by the high data rate. Moreover, the low sampling rate enables the replacement of large standard ultrasound devices and their cumbersome cables with wireless transducers and simple processing devices, such as tablets or phones. The sampled data's low rate facilitates its transmission over a standard WiFi channel, allowing a physician to recover the image with a handheld device. In parallel, the data may be transmitted to the cloud for remote health and further, more elaborate processing.

Figure 3. Ultrasound imaging at three percent of the Nyquist rate (right), as compared to a standard image (left). Image courtesy of [1].

Our approaches can also help increase resolution in fluorescence microscopy [10]. In 2014, William Moerner, Eric Betzig, and Stefan Hell received the Nobel Prize in Chemistry for breaking the diffraction limit with fluorescence imaging. They sought to obtain a high-resolution image by using thousands of images, each containing only a small number of fluorescing molecules. This method—referred to as photo-activated localization microscopy (PALM)—allows researchers to localize and average the molecules in each frame to obtain one high-resolution image. This leads to high spatial resolution but low temporal resolution. Since estimating each pixel's variance can form a brightness image, we can exploit our ability to perform power spectrum recovery from fewer samples to dramatically reduce the number of samples needed to form a super-resolved image. This approach is called sparsity-based super-resolution correlation microscopy (SPARCOM). Due to the small number of required frames, SPARCOM paves the way for live-cell imaging. Figure 4 compares SPARCOM with 60 images and PALM with 12,000 images. Both approaches generate similar spatial resolution, but SPARCOM requires two orders of magnitude fewer samples.

Figure 4. Super-resolution in optical microscopy. 4a. The image obtained in a standard microscope. 4b. The original image at high resolution. 4c. The image obtained using 12,000 frames via photo-activated localization microscopy. 4d. The image obtained using only 60 frames via sparsity-based super-resolution correlation microscopy. Image courtesy of [10].

The same idea is applicable to contrast-enhanced ultrasound imaging. We may treat the contrast agents flowing through the blood similarly to the blinking of the fluorescent molecules; in this way, we perform ultrasound imaging with high spatial and temporal resolution. This distinguishes between close blood vessels and facilitates the observation of capillary blood flow.

In summary, to recover information with higher precision and minimum data we …

[6] Eldar, Y.C., & Kutyniok, G. (2012). Compressed Sensing: Theory and Applications. Cambridge, UK: Cambridge University Press.
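The Nyquist condition discussed above can be seen in a minimal numerical sketch. The tone frequencies and sampling rate below are arbitrary illustrative choices, not values from the article: a 3 Hz cosine sampled at only 4 Hz (below its 6 Hz Nyquist rate) produces exactly the same samples as a 1 Hz cosine, so the two signals become indistinguishable after sampling.

```python
import numpy as np

fs = 4.0                    # sampling rate [Hz]; below Nyquist for a 3 Hz tone
n = np.arange(32)           # sample indices
t = n / fs                  # sampling instants

x_fast = np.cos(2 * np.pi * 3.0 * t)   # 3 Hz tone: its Nyquist rate is 6 Hz
x_alias = np.cos(2 * np.pi * 1.0 * t)  # 1 Hz alias, since |3 - fs| = 1 Hz

# Sub-Nyquist sampling makes the two tones produce identical samples.
print(np.allclose(x_fast, x_alias))    # True
```

Sampling the same 3 Hz tone at any rate above 6 Hz removes this ambiguity, which is exactly the content of the theorem.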
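The sparse-vector model underlying compressed sensing can be illustrated with a toy recovery experiment. The dimensions, the Gaussian measurement matrix, and the use of orthogonal matching pursuit (a standard greedy solver, chosen here for brevity and not necessarily the algorithm used in the referenced work) are all illustrative assumptions: a length-128 vector with only 3 nonzeros is recovered from 40 linear measurements, far fewer than the 128 samples a structure-free approach would require.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3                       # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                  # m << n compressed measurements

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.max(np.abs(x_hat - x)))  # tiny: exact recovery with high probability
```

With Gaussian measurements and such low sparsity, recovery is exact up to numerical precision with overwhelming probability, which is the basic phenomenon the compressed sensing references [6] make rigorous.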
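The delay-and-sum beamforming step described above can be sketched in a few lines. The element count, delays, and pulse location are hypothetical, and noise is omitted to keep the check deterministic: each of 8 array elements records the same unit pulse with a known per-element delay; undoing the delays and summing yields a coherent gain of 8 at the pulse location, whereas uncorrelated noise would grow only like √8.

```python
import numpy as np

n_elem, n_samp = 8, 256
t0 = 100                            # pulse arrival at the reference element [samples]
delays = np.arange(n_elem) * 3      # hypothetical per-element delays [samples]

# Each element records the same unit pulse, shifted by its own delay.
channels = np.zeros((n_elem, n_samp))
for i, d in enumerate(delays):
    channels[i, t0 + d] = 1.0

# Delay-and-sum: undo each element's delay, then sum coherently.
beamformed = np.zeros(n_samp)
for i, d in enumerate(delays):
    beamformed += np.roll(channels[i], -d)

print(beamformed[t0])  # 8.0: coherent gain equal to the number of elements
```

In a real system the delays are computed per pixel from the array geometry; the point here is only the coherent-summation mechanism that the article's low-rate, task-based samplers exploit.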
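The observation that a per-pixel variance (a second-order statistic) across many frames forms a brightness image, as exploited by SPARCOM, can be mimicked with synthetic blinking emitters. The emitter positions, blink probability, and noise level below are made up for the illustration, and SPARCOM itself additionally exploits sparsity on a fine grid, which this sketch omits: pixels containing a blinking fluorophore have far higher temporal variance than background pixels, so the variance image localizes them.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, size = 500, 16
frames = 0.05 * rng.standard_normal((n_frames, size, size))  # background noise

# Two hypothetical emitters that blink on and off independently per frame.
for (r, c) in [(4, 4), (10, 11)]:
    on = rng.random(n_frames) < 0.3      # fluorophore is "on" ~30% of the time
    frames[on, r, c] += 1.0

# The per-pixel variance image highlights the blinking emitters.
var_img = frames.var(axis=0)
peaks = np.argsort(var_img.ravel())[-2:]
print(sorted(divmod(int(p), size) for p in peaks))  # → [(4, 4), (10, 11)]
```

An "on" pixel has variance roughly p(1 − p) ≈ 0.21 from the blinking alone, versus 0.0025 for the background noise, which is why so few frames already suffice to separate emitters from background.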