Signal Processing

Participants:
Vijayakumar Bhagavatula, Carnegie Mellon University
Dave Blankenbeckler, DataPlay
Chong Tow Chong, Data Storage Institute
Dennis Howe, University of Arizona
Seiji Kobayashi, Sony
Hiroshi Koide, Ricoh
Jay Livingston, Cirrus Logic
Steve McLaughlin, Georgia Institute of Technology (Co-leader)
Kees Schep, Philips
LuPing Shi, Data Storage Institute
Terry Wong, Calimetrics (Co-leader)
Fumihiko Yokogawa, Pioneer

Introduction

In the last NSIC optical data storage roadmap [12], signal processing and multilevel recording were broken out as separate sections. However, there is a great deal of overlap between these two subject areas. In particular, multilevel recording is enabled by improvements in the signal processing technology used for writing and reading in an optical data storage system, so it seemed logical to combine the subgroups that cover these areas. At the same time, the technology of multilevel recording has grown substantially in its own right since the last roadmap was published in 2000.

In this section, we first introduce the current state of signal processing technology, including a separate subsection that presents the basics of multilevel recording and reviews three examples of multilevel systems. We then focus on signal processing areas that are likely to become important for optical data storage in the future. Finally, since signal processing in general is mostly an enabling technology and would be difficult to “roadmap” on its own, we present a roadmap for products using multilevel recording.

© 2003 Information Storage Industry Consortium – All Rights Reserved
INSIC International Optical Data Storage Roadmap
Reproduction Without Permission is Prohibited
August 2003

Signal processing and coding have become increasingly important and powerful parts of optical data storage systems.
As densities increase, more powerful methods are required to deal with decreasing signal-to-noise ratio (SNR), intersymbol interference (ISI) from adjacent symbols in the same track, intertrack interference (ITI, or crosstalk) from neighboring tracks, nonlinearities in the channel, defects, off-track performance, defocus, etc. Signal processing and coding algorithms and architectures must address these issues, be cost effective, and operate at increasing data rates. In this section, we briefly review the major elements of signal processing and coding for optical data storage and summarize, at a high level, the current state of the art.

Figure 1 gives a block diagram of the signal processing and coding elements in a typical data storage system. Error correction coding (ECC) is a major component in virtually all systems – it corrects random and burst errors caused by the many impairments that affect the channel. Modulation is used to represent the logical data to be stored by a physical entity (e.g., a sequence of electrical pulses) that is appropriate for transmission through the channel. It can incorporate many functions, including control of the low-frequency spectral content of the signal to be recorded/recovered (also known as DC control), embedding of timing information in the recorded signal, etc. The sequence of physical symbols, or channel symbols, output from the modulation process is input to a write circuit, which controls the formation of marks or other information-bearing features on the optical media. The servo systems ensure that the disk is spinning at the correct speed, the laser spot is in focus and on track, and the laser power is correct. During the read process, a physical signal is produced that represents the information recorded on the storage medium. The read side of Figure 1 uses a variety of signal processing methods to extract the information as reliably as possible from this signal.
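The write/read chain just described can be illustrated with a toy round trip in Python. The stage functions below are simplified stand-ins of my own devising (a single XOR parity byte instead of Reed-Solomon, plain binary packing instead of RLL modulation), not the algorithms used in real drives:

```python
# Toy round trip through the write/read chain: ECC encode -> modulate ->
# (channel) -> demodulate -> ECC decode. All stages are illustrative stand-ins.

def ecc_encode(data):
    """Append a single XOR parity byte (stand-in for RS parity symbols)."""
    parity = 0
    for b in data:
        parity ^= b
    return data + [parity]

def ecc_decode(codeword):
    """Check the parity byte; return the payload and an error flag."""
    parity = 0
    for b in codeword:
        parity ^= b
    return codeword[:-1], parity != 0

def modulate(symbols):
    """Map each byte to channel bits, MSB first (stand-in for RLL mapping)."""
    return [(b >> i) & 1 for b in symbols for i in range(7, -1, -1)]

def demodulate(bits):
    """Inverse of modulate: pack channel bits back into bytes."""
    return [int("".join(map(str, bits[i:i + 8])), 2)
            for i in range(0, len(bits), 8)]

user_data = [0x4F, 0x44, 0x53]                     # bytes to store
channel = modulate(ecc_encode(user_data))          # write side
payload, error = ecc_decode(demodulate(channel))   # read side
assert payload == user_data and not error
```

With an undamaged channel the payload is recovered exactly and the parity check passes; a real system replaces each stage with the far more capable RS ECC and RLL modulation discussed below.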
Note that in some systems, particularly the multilevel systems, a new block has been introduced into the chain – precoding (see the block diagram in Figure 7). This block adds an additional layer of coding to improve system performance, in terms of density or margins, without any physical changes to the optical system or the media. In what follows, we highlight the state of the art in signal processing, modulation, and coding in optical data storage systems.

Figure 1. Block diagram of the signal processing and coding elements of a typical optical data storage system.

3.5.1 Signal Processing and Coding Technology Today

3.5.1.1 Error Correction Coding

Typically, error correction codes (ECC) are designed to convert raw bit error rates (BER) on the order of 10^-3 (at the input of the ECC decoder) to a substantially lower corrected BER on the order of 10^-14 or better. Reed-Solomon (RS) error correction codes have been the ‘gold standard’ ECC for more than 25 years; they are used universally in systems such as CD and DVD, and will find application in next-generation systems such as Blu-ray Disc and beyond. The RS ECC implementations employed in such systems use interleaving to scatter the symbols that comprise any single RS codeword widely over the storage medium’s recording surface. (Note: the symbols that are encoded by, or protected by, an RS ECC are multi-user-bit words; e.g., in the CD and DVD systems, such symbols are eight-bit bytes.)
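The scattering effect of interleaving can be sketched in a few lines of Python. The codeword contents, sizes, and interleave depth below are illustrative only, not those of any actual CD or DVD format:

```python
# Sketch of block interleaving: symbols from several ECC codewords are
# written to the medium column by column, so a contiguous burst error
# touches each codeword only briefly.

def interleave(codewords):
    """Emit row i of every codeword before row i+1 (column-wise writing)."""
    return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

def deinterleave(stream, depth):
    """Recover the original codewords from the interleaved symbol stream."""
    return [stream[k::depth] for k in range(depth)]

# Four 6-symbol "codewords" (letters stand in for RS symbols).
cws = [list("AAAAAA"), list("BBBBBB"), list("CCCCCC"), list("DDDDDD")]
stream = interleave(cws)

# A 4-symbol burst error on the medium...
for i in range(8, 12):
    stream[i] = "?"

# ...lands as a single erroneous symbol in each recovered codeword, which
# an RS decoder could then correct independently.
recovered = deinterleave(stream, depth=4)
assert all(cw.count("?") == 1 for cw in recovered)
```

Without interleaving, the same 4-symbol burst would fall entirely within one codeword; with it, each codeword sees only one bad symbol, well within the correction power of a typical RS code.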
Impairments such as disc surface scratches, dust, system track-following perturbations, etc. will cause a contiguous block of recovered channel symbols to be contaminated (i.e., to produce erroneous data upon demodulation); such long impairment events are called burst errors. Interleaving therefore causes the ECC symbols obtained from a contiguous sequence of recovered channel symbols, corrupted by a single impairment event, to be spread over several ECC codewords. RS ECCs also have the ability to correct short (one or two ECC symbols in duration), randomly occurring errors. The trend has been toward longer RS ECC block sizes (codewords); this is facilitated by the steadily increasing processing power available in low-cost integrated circuits.

The general trend of using RS ECCs is likely to continue, although some research in the area of turbo-like codes is beginning to show promise. Turbo-like codes include the classes of turbo codes (serial and parallel concatenation), turbo-product codes, and low-density parity-check codes. While turbo-like codes are error-correcting codes in their own right, they are generally being proposed for use in configurations (described in more detail later) where they augment the primary Reed-Solomon code. For example, such codes are used in conjunction with partial-response maximum-likelihood (PRML) detection architectures to reduce the absolute number of erroneously recovered ECC symbols that are sent from the demodulator to the RS ECC decoder.

3.5.1.2 Modulation

Modulation is the translation of logical digital information to a sequence of physical symbols (called channel symbols) that are appropriate for the channel. In particular, the physical representation is designed to aid the reliable transmission of the channel symbols through the channel.
For example, the spectrum of the resulting signals (those output by the modulator) will ‘match’ the bandwidth (DC, mid-range, and high-frequency transfer characteristics) of the channel.

Run-Length Limited Modulation

Run-length limited (RLL) modulation has been a mainstay in optical recording systems like CD and DVD and will continue to appear in future systems like Blu-ray Disc. The basic idea is (i) to increase the channel storage density and transmission (write/read) bit rate by representing several user data bits with each channel symbol, and (ii) to introduce a constraint, namely limiting the range of lengths of the channel symbols, so that intersymbol interference can be controlled and minimized (i.e., so that the sequence of recorded channel symbols produces a write/read signal whose bandwidth fits nicely within the bandpass of the recorder channel).

A particular RLL modulation process is defined by two parameters, d and k, which specify the minimum and maximum pulse lengths in the sequence of electrical pulses comprising the pulse-length-modulated electrical waveform used to physically represent the stream of user data to be stored. The individual pulses of this waveform consist of an integer number of channel bits (also known as channel bit clock periods or timing intervals); the shortest RLL pulse contains d+1 channel bits, while the longest pulse contains k+1 channel bits. More specifically, each distinct pulse in this channel symbol waveform corresponds to a distinct run in a binary sequence called the RLL channel sequence (a channel sequence run of m channel bits contains a ‘one’ followed by exactly m-1 contiguous ‘zeros’; see Figure 2). CD and DVD both use d = 2, k = 10 RLL modulation which maps user data to a pulse length modulated
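The (d, k) run structure defined above can be checked mechanically: every run is a ‘one’ followed by its zeros, and the constraint requires each run length m to satisfy d+1 ≤ m ≤ k+1. The following Python sketch is illustrative; the example bit sequence is not a real EFM codeword:

```python
# Sketch of verifying a (d, k) run-length constraint on an RLL channel
# sequence: each run starts at a '1' and extends through the zeros that
# follow it, so a run of m channel bits has m-1 trailing zeros.

def run_lengths(bits):
    """Split a channel sequence into run lengths (in channel bits)."""
    runs, length = [], 0
    for b in bits:
        if b == 1 and length:
            runs.append(length)
            length = 0
        length += 1
    runs.append(length)
    return runs

def satisfies_rll(bits, d, k):
    """True if every run contains between d+1 and k+1 channel bits."""
    return all(d + 1 <= m <= k + 1 for m in run_lengths(bits))

seq = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0]    # runs of 3, 4, and 5 bits
assert satisfies_rll(seq, d=2, k=10)           # legal under CD/DVD's (2,10)
assert not satisfies_rll([1, 0, 1], d=2, k=10) # a 2-bit run violates d=2
```

The first sequence is legal because every run falls in the allowed range of 3 to 11 channel bits; the second fails because two ‘ones’ are separated by only a single ‘zero’.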