
PAPER The Art of Underwater Imaging – With a Glimpse of the Past and Vision of the Future

AUTHORS
Donna M. Kocak
Green Sky Imaging, LLC
Chair, MTS Underwater Imaging Committee

Frank M. Caimi
Green Sky Imaging, LLC

ABSTRACT
This State of Technology Report on Underwater Imaging provides a historical synopsis of underwater imaging, discusses the current state of the art, and suggests future possibilities for continued advancement of the field. The history presented herein provides information assembled in a manner not found in previous reviews. Present work is grouped according to imaging methodology, wherein foremost research and technical innovations of the field are highlighted, with a focus on the past five years. Trends in research and development are also discussed as they relate to emerging underwater imaging techniques and technologies.

INTRODUCTION
Observation and exploration of the underwater realm has been of interest since ancient times and has forged the development of many technological advances. Advancements in diving apparatus, submersibles, underwater photography and video, underwater lighting, sonar, remotely operated vehicles (ROV), and autonomous underwater vehicles (AUV) are the most notable. Although the quest to travel to ocean depths has been pursued for centuries, emphasis on recording and obtaining or interpreting information beyond that available from the human visual system is an important recent endeavor. Various hardware, software, and algorithmic advancements, as well as a better understanding of ocean physical parameters, make this possible. For example:
■ Conventional imaging methods that utilize standard cameras;
■ Extended range imaging techniques that utilize time-gating or angular field restriction to reduce unwanted image noise resulting from optical scatter in the transmissive medium;
■ Laser line scan methods that produce 2-D or 3-D imagery;
■ Holographic or other techniques that exploit spatial coherency;
■ Optical methods using frequency conversion (fluorescence, Raman, etc.) to develop higher-dimensional image data;
■ Multiple-perspective methods that achieve range depth or 3-dimensional (3-D) image formation and display (including mosaics);
■ Image intensity algorithms based on image processing techniques that exploit effects such as shading, filtering, or motion;
■ Electromagnetic (EM) methods that utilize transparent regions of the spectrum (near infrared, ultraviolet, visible, and very low frequency [VLF]); 
■ Acoustic imaging techniques that include 3-D reconstruction and mosaicking; and
■ Temporally discriminant or signal-coded methods that support multidimensional representations.

Detailed descriptions of many of these methods have been published elsewhere (Potter, 1999; Kocak and Caimi, 2001), and in Jaffe et al., 2001, where an emphasis is placed on extended range imaging.

A Glimpse of the Past
"We can chart our future clearly and wisely only when we know the path which has led to the present."
Adlai E. Stevenson (1900-1965)

"The farther back you can look, the farther forward you are likely to see."
Winston Churchill (1874-1965)

Many advancements have enabled the evolution of underwater imaging, including: direct methods for observation such as scuba, underwater housing technology and vessels; development of the electronic camera, photography, video recording and light sources; invention of the laser and fluorometric analysis; as well as the fields of information processing, artificial intelligence and computer vision. Some key milestones in these topics are illustrated in the timeline shown in Figure 1 and are summarized on the following pages.

Historically, imaging developments have been slow to evolve and have taken decades to reach the present state of the art. This has been due, in part, to limitations stemming from incomplete understanding of radiative transfer in optical media such as air and seawater, limitations in electronic and optical hardware, and restrictions in data transfer, recording, and signal processing throughput. Continued advancements in these areas have effectively countered the contrast and range reduction resulting from the severe physical effects of light propagation in seawater. Similar observations hold for sound propagation and acoustic imaging. In 1999, a review of the field revealed dramatic advances in video technology, electronic ballasting of HMI lighting, laser technology, mechanically and electronically scanned sonar, and sonar processing algorithms (Olsson, 1999). Although these advances are often associated with imaging hardware and methodology, credit must also be given to developments in computer vision and pattern recognition that allow extraction or interpretation of useful information from raw imagery or data. This focus has been noted in a recent publication, written for the computer vision community, that underscores the difficulties and challenges of underwater imaging (Murino and Trucco, 2000a).
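Several of the methods listed in the introduction, extended range imaging and temporally discriminant coding in particular, rest on a simple relationship between water clarity, imaging range, and photon travel time. The following sketch illustrates that relationship; the attenuation coefficient and target range used here are assumed example values chosen for illustration, not figures taken from this report:

```python
# Illustrative sketch: expressing imaging range in "attenuation lengths"
# and computing the receiver gate delay a time-gated (range-gated)
# imager would use to isolate a target return.
# All numeric values below are assumed examples, not measured data.

C_VACUUM = 2.998e8   # speed of light in vacuum, m/s
N_SEAWATER = 1.34    # approximate refractive index of seawater

def attenuation_lengths(range_m, c_atten_per_m):
    """Express a physical range as a number of attenuation lengths.

    c_atten_per_m is the beam attenuation coefficient c (1/m); one
    attenuation length is the distance 1/c over which unscattered
    light falls to 1/e of its initial value.
    """
    return range_m * c_atten_per_m

def gate_delay_s(range_m, n=N_SEAWATER):
    """Round-trip travel time to a target at range_m in water.

    A range-gated receiver opens its intensifier only around this
    delay, rejecting photons scattered over other path lengths.
    """
    return 2.0 * range_m * n / C_VACUUM

# Example: moderately clear coastal water (c = 0.25 1/m), target at 20 m.
print(attenuation_lengths(20.0, 0.25))   # -> 5.0 attenuation lengths
print(gate_delay_s(20.0))                # roughly 1.8e-7 s (~179 ns)
```

Framed this way, "imaging at 3-7 attenuation lengths" is a statement about the product of range and water clarity, and the nanosecond-scale gate delay shows why pulsed lasers and fast intensifier gating are the enabling hardware for such systems.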

Marine Technology Society Journal, Fall 2005, Volume 39, Number 3

FIGURE 1
Historical timeline of events leading to present day underwater imaging.

Direct Observation Techniques
As early as 4500 B.C., diving provided a means for food gathering, commerce or warfare for coastal cultures such as those found in Greece, Mesopotamia, and China. In 360 B.C., Aristotle wrote of sponge fishers using an air supply; and in 332 B.C. Alexander the Great was reported to have made several dives in a crude glass diving bell for observation (Bachrach, 1998). In the 1500's, Leonardo da Vinci envisioned the first known self-contained underwater breathing apparatus (SCUBA) that combined an air supply and buoyancy control in a single system.

In the 1850's, mariners recorded depth measurements, called soundings, by lowering a hemp rope marked in equal distances with a lead ball on the end until the tension in the rope changed to indicate that the ball had reached bottom. By covering the ball with grease, they could pick up bottom sediment to determine the topography (Hohler, 2002).

Increasingly, direct observations were sought by traveling into the depths of the sea. The first successful submersible vessel was built around 1620 by Cornelius Van Drebbel. The watertight vessel was wood-framed and sheathed in leather, capable of carrying a total of 20 people to a depth of 20 m. In 1667, Robert Boyle observed a gaseous bubble in the eye of a snake during his experiments with compression and decompression; however, many years passed before Boyle's discovery was put to practical use in humans (Borrillo, 2001). John Scott Haldane created a procedure for staged decompression to avoid "the bends" in 1908, which culminated in the publication of the first U.S. Navy dive tables in 1912.

Developments enabling man to remain submerged for longer periods led to early underwater visualization techniques. Dive apparatus inventions included the first practical SCUBA in 1825 by William James; a diving helmet, converted from a patented "smoke helmet" for firefighters, in 1828 by Charles Anthony Deane; and, that same year, the first known buoyancy compensator by Lemaire d'Augerville. However, it was not until 1869, when Jules Verne's novel 20,000 Leagues Under the Sea was published, that the inventions of SCUBA were introduced to the world.

Camera, Photography and Video
Along with advancements allowing travel into the ocean depths, other inventions provided a means of documenting what was being seen both above and below the ocean's surface. The Chinese philosopher Mo Ti provides perhaps the earliest written mention of imaging in his description, dating from the 5th century B.C., of images formed by a small opening. In 4 B.C., Aristotle made similar observations. Ibn al-Haitham (Alhazen) reported this concept in 10 A.D. and was credited with inventing the pinhole camera. In the 1500's, many artists used a dark box or room with a hole in one end, known as a camera obscura (Latin for dark room), to project inverted images onto the opposite wall where they could be traced. Giovanni Battista della Porta's writings in 1591 document the first use of a camera obscura with a lens, to increase brightness and clarity (Hammond, 1981). In the early 1800's, William Hyde Wollaston introduced a camera lucida (Latin for light room), which was merely a prism mounted above a drawing board that allowed an artist to trace the superimposed image in ambient light.

All of these discoveries led up to the invention of photography, a term coined by Sir John F.W. Herschel in 1839, meaning writing with light. To accomplish this, a special ink would be required that responded to light. More than 2000 years before the invention of the camera obscura, it was known that a particular snail, the purpura, would leave a yellow substance in its wake that turned purple in sunlight (Grundberg, 2005). Experiments in photographic chemistry were recorded in 1727, when Johann Heinrich Schulze observed that silver salts darkened when exposed to light. By 1800, chemist Thomas Wedgwood succeeded in producing images using silver salts, but the images eventually faded. In 1826, Joseph Nicephore Niepce used a bitumen-coated plate in a camera obscura to capture the earliest still photograph in existence. Following this, in 1837, Louis-Jacques-Mande Daguerre solved Wedgwood's problem of stopping the exposure process by washing away the remaining silver iodide using a solution of warm water and table salt. Just after the announcement of Daguerre's process, called the daguerreotype, in 1839 William Henry Fox Talbot revealed a similar invention called photogenic drawing. Talbot's process, later refined and renamed the calotype, formed the basis for most modern film technology, which relies on negatives to produce multiple positive prints. By 1880, George Eastman had invented a dry plate formula for film and had patented a machine for preparing a large number of plates. Three years later Eastman announced the invention of photographic film in rolls and, in 1888, provided the first camera designed to use roll film. With these inventions, Eastman established the Eastman Kodak Company (Bellis, 2005a).

The world's first underwater photograph was recorded at a depth of 6 m in 1856, when William Thompson published his experiment in the Journal of the Society of Arts (Baker, 1997). Thompson took the photograph from the surface using a camera attached to a pole (a precursor to the ROV) with an exposure time of 10 minutes. Not surprisingly, the camera flooded, but the image survived. In 1872, Ernest Bazin allegedly took photos at an underwater observatory using an electric lighting system; however, none of the photographs have been found (Gilbert and Alary, 1996). In 1893, Louis Boutan was credited with taking the first photographs while wearing diving gear, equipped with a (not so portable) purpose-built camera and housing weighing 400 pounds.

In the 1900's many underwater inventions followed: in 1915 John Ernest Williamson produced the first underwater movie; in 1923 William H. Longly and Charles Martin took the first underwater color photos using magnesium lighting (Gilbert and Alary, 1996); in 1939 Hans Hass, considered the father of modern underwater photography, produced the first underwater film entitled Stalking Under Water; in 1957 Jacques Yves Cousteau and Jean De Wouters invented the first waterproof 35-mm camera, named Calypso Phot and later known as the Nikonos; and in 1960, EG&G developed an extreme depth underwater camera for the U.S. Navy (Bellis, 2005b).

The decade of the 1950's brought about color television's large-scale commercial introduction, with several notable events including development of the first two-inch format video tape in 1956 by Ampex Corporation. Mass-market Betamax and VHS videocassette recorders became available much later, in 1975 and 1976 respectively (Fenton, 2004).

The origin of the charge-coupled device (CCD) stemmed from the need to store data in a computer memory. In 1969, Bell Labs scientists Willard Boyle and George Smith proposed the idea of the CCD (Lucent Technologies, 2005). In 1979, National Geographic was the first to use a CCD television (TV) camera in the deep ocean. The story behind this expedition is provided here, paraphrased from Emory Kristof (2005):

In early 1977, the hot water vents are discovered about 100 miles north of the Galapagos Islands at 8,500 feet (2,590.8 m). National Geographic publishes the first story on the vents in 1977. A decision is made by the National Science Foundation (NSF) to fund a second Galapagos Rift Expedition with biologists. National Geographic decides to produce a second story and a one-hour PBS TV show. My investigations into TV camera technology reveal that 3-tube cameras are too large to package for exterior use on ALVIN, and that the quality of 1-tube cameras is insufficient. All known TV cameras use tubes. Shortly thereafter, a group from National Geographic meets with diver/film maker Smokey Roberts from Lancaster, Pennsylvania, in a bar in Woods Hole, where Smokey tells us about a diving student of his, Harold Krall. Harold is Vice President of RCA in Lancaster and is in charge of building a revolutionary new TV camera based on a radical new solid-state device called a CCD. The potential for reducing the size of a TV camera is amazing. Four days later, I head a group from National Geographic to meet with Harold. At the time, RCA is leading the world in this technology (about 18 months ahead of Sony) and has experimental cameras in its lab. This is all new to National Geographic. RCA agrees to build a camera for the expedition and Benthos agrees to build the deep-sea housing. The camera is used on 7 of the dives in early 1979 and the video results are wonderful. The RCA engineer who built the camera, Fred Engle, makes the trip and a dive to fine-tune the camera. Al Giddings and I operate the camera, with Al shooting most of the footage as chief TV cameraman. The National Geographic Magazine article is published in 1979 alongside the PBS show "Dive to the Edge of Creation." This Emmy-winning production on the vents is the

first show ever filmed with a solid-state device (in this case a CCD). It is also the first show ever filmed in the deep ocean with an externally mounted color TV camera. The National Geographic magazine ran one color still photo, making it the first use of CCD imagery in the magazine.

Soon after, camera technology was launched into the digital age with the first commercial solid-state camera (Sony's Mavica), introduced in 1982, and the first digital camera system (DCS) targeted at professionals and journalists, introduced in 1991 by Kodak (Bellis, 2005c). In 1994, Apple's digital camera launched home use of the technology.

For nearly 30 years, the CCD was the prevailing imaging sensor. However, at the beginning of the 1990s, small packaging of the complementary metal oxide semiconductor (CMOS) transistor allowed an alternative imaging technology to develop. Throughout the 1990s, CMOS imager circuitry evolved from passive to active systems to provide performance nearly matching that of the CCD process. The active pixel system (APS) provides random pixel access, on-board amplification and gating, as well as low power consumption. Unlike CCDs, CMOS sensors suffer reduced sensitivity; however, CMOS technology continues to be used predominantly for high-speed imaging and offers the following advantages: freedom from blooming when over-exposed, less power resulting in smaller and less complex drive electronics, flexible read-out allowing frame rates to be increased, spectral sensitivity matching that of the human eye, and low cost due to high-volume manufactured semiconductors.

Ongoing developments in CMOS sensor technology are expected to support on-board pixel-specific processing, such as sensitivity normalization, anti-blooming, thresholding, filtering, image compression and other functions that now require separate processing in software or digital signal processing (DSP) hardware.

Lighting and Laser (Light Amplification by the Stimulated Emission of Radiation)
Louis Boutan is one of the first photographers (often credited as the first) to use artificial lighting underwater. In 1899, Boutan used a magnesium-filled bulb, designed by Chauffour, at a depth of 50 m (Marshall, 2005a). Dr. Paul Vierkotter (1925) and Johannes Ostermeier (1930) made improvements to this bulb design, which resulted in the first commercially available flash bulbs. In 1931, Harold Edgerton developed the first high-speed electronic flash, called the stroboscope (Marshall, 2005b).

In recent history, lasers play an important role in underwater imaging. In 1917, Albert Einstein first theorized about "stimulated emission," the process that makes lasers possible. Forty-one years later, Arthur L. Schawlow and Charles H. Townes received a patent for theory extending their "maser" (microwave) technology to infrared and visible spectrum light (Bertolotti, 1983 and Bellis, 2005d).

Fluorometry
Although earlier observations of fluorescent organisms were reported, C.E.S. Phillips published the first observation of fluorescence in a marine organism in 1927 (NightSea, 2005a). In 1964, the same year WHOI's Alvin submersible recorded its first dive, Rene Catala was the first researcher to systematically examine and photograph fluorescence in corals. More recently, diver-held fluorescence imaging cameras have been developed with built-in ultraviolet (UV) lighting such as light emitting diodes (LED), along with a variety of laser-excited fluorescence imagers for discrimination of manmade, versus naturally occurring, objects for biological investigation. Digital extraction of fluorescence from regular photographic or videographic images is also possible. Typically, an excitation filter for stimulating a fluorescence response is placed over an intense, focused light source such as a high intensity discharge (HID) bulb, and a barrier filter is placed over the still or video camera lens to block unwanted reflections (NightSea, 2005b).

Sonar (SOund Navigation And Ranging)
Although early sounding methods advanced to a depth of 1 mile, it was not until 1906 that an electric echo sounder was developed, when Lewis Nixon invented the first sonar listening device to detect icebergs (Bellis, 2005e). Soon after, in 1915, Paul Langevin invented the first sonar device for listening to submarines. A more comprehensive history of sonar can be found in Theberge (1989).

Artificial Intelligence (AI) and Computer Vision
Computer vision developed from signal processing research, whereas image understanding is the branch of artificial intelligence (AI) that seeks to provide computers an ability to derive semantic formalisms that accurately describe the meaning of a scene or features of an image. Goals include what are thought to be high-level visual functions in animals and humans, such as registration, segmentation, localization, recognition, classification, tracking, motion estimation, shape estimation, and depth estimation, as well as methods for enhancement of images not possible by natural vision systems, such as mosaicking, 3-D reconstruction, and filtering (e.g., deblurring and noise reduction) (Picardi and Jan, 2003). These functions are achievable using numerous imaging modalities (e.g., conventional 2-D, 3-D methods, sonar, EM, fluorescent, Raman, etc.).

The birth of "modern" AI may have been motivated by Alan Turing's publication (in 1950) of Computing Machinery and Intelligence, wherein he posed the question "Can machines think?" Others assert that AI began in 1955, when Allen Newell and Herb Simon developed a computer program called The Logic Theorist. Subsequent noteworthy events include a 1956 Dartmouth computer conference that described the future of AI research and the development in 1958 of the LISP (LISt Processing) language for AI, both led by John McCarthy. The first attempt at solving problems in computer vision may have occurred in 1967, when Marvin Minsky assigned the problem to an MIT graduate student, believing that it could be largely solved within one summer (Olson, 2003). In 1982, David Marr's book Vision: A Computational Investigation into the Human Representation and Processing of Visual Information further motivated the study of

human and machine vision, as well as the application of machine vision to quality control on manufacturing assembly lines.

A Look at the Present
"Yesterday's the past and tomorrow's the future. Today is a gift – which is why they call it the present."
Bil Keane (1922-present)

"Seize the day." (Carpe Diem)
Horace (65-8 B.C.)

Today's technology supports a wide assortment of fast, inexpensive specialized image processing software and hardware, and high-quality affordable consumer cameras. Lasers continue to be used for 2-D and 3-D quantitative measurements, along with sonar processing utilizing fast microprocessors, field-programmable gate arrays (FPGAs) and DSPs. Algorithms once too computationally expensive are now feasible and, as hardware continues to advance, new applications are becoming possible.

Conventional Imaging Methods
Twenty-first century developments include advanced cameras with resolution exceeding 20 million pixels. High-speed CMOS imaging systems are now available offering frame rates of up to 3,000 frames per second (fps) at 1,024 x 1,024-pixel image resolution, and frame rates in excess of 150,000 fps at reduced image resolution. The replacement of film media with digital CCD or CMOS sensors is providing numerous options for underwater operation. These cameras can prove useful for high-speed inspection applications where image blur would normally be a problem, and they promise high-quality images even under adverse illumination (e.g., < 1 lumen per square meter [lux]).

In addition, video sequences can be encoded with discrete cosine transform (DCT) based schemes (H.261, H.263, MPEG1 and MPEG2), wavelet and sub-band filtering technology, fractal-based algorithms, or image segmentation/region based compression schemes (MPEG4) to minimize the data transmission rate or on-board storage requirements (Hunter et al., 1997). More recently, streaming video compression standards having better compression ratios are being developed for next generation commercial video applications. The Advanced Video Standard (AVS) codec (compression-decompression hardware/software) being developed in China is a subset of the popular H.264 standard that offers a bit rate 61% of that required for MPEG-4. AVS uses an 8x8 pixel macro block, while H.264 uses a 4x4 block, thereby taking advantage of granularity in the processing scheme. Standard definition decoders are currently available for the H.264 standard, and high definition decoders are expected later this year (Mu, 2005; HD Issues, 2005). Development of the JPEG2000 real-time video codec utilizing wavelet compression enables rich content, high resolution, and scalability features at 30 fps, and is rapidly making its way into the consumer, medical, defense and aerospace markets (Johnson, 2005; Pearson and Gill, 2005). A feature of JPEG2000 over other streaming compression schemes is complete independence of encoding/decoding for the subject frame from other frames in the sequence. Video compression advancement will no doubt continue to provide benefits for real-time undersea video transmission over bandwidth limited channels, as well as archival data storage. Video clip recording capability for devices utilizing these new techniques will progress from minutes to hours over the next few years.

Output interfaces from digital still cameras have evolved from serial RS-232, PCI or enhanced PCI bus, to universal serial bus (USB), to IEEE 1394 (FireWire), while video uses NTSC or PAL formats with Ethernet interfaces. Soon, wireless interfaces will be available using ultra wideband (UWB), 802.15.3, 802.11n and other protocols.

Additional trends in the development of high-density CMOS image sensor arrays include enhancing the image format to allow compatibility with standard 35mm format lenses. Wide angle imaging from CCD arrays is a more difficult problem, since the sensitivity drops rapidly away from normal light incidence on the array. This effect mandates the use of telecentric lens designs that map an angular input field into a proportional displacement at the lens focal plane with a near perpendicular strike angle (Cardinal, 2001; Taylor, 1980).

Changes in broadcast formats planned for the past decade, such as the high definition television (HDTV) 1080i standard, are finally reaching the consumer market, which has motivated development of HDTV cameras and display devices. The packaging of these devices for underwater operation has been a recent design activity: several underwater HDTV cameras are now available. Transmission of HDTV signals over long distances can be accomplished with fiber optic converters and single mode fiber optic cable, although multiplexing of multiple HDTV signals has not been well addressed by the industry. Specialty cameras, such as HDTV, IMAX 3-D, Reality Camera System (Anthony, 2001), low light silicon intensifier target (SIT), intensified SIT (ISIT), and intensified CCD (ICCD), have contributed to deep-sea applications and cinematography.

Low light HDTV cameras such as the HARP-type employ image pickup tubes that use the avalanche multiplication effect within photoconductive film. Sensitivity and lag characteristics are remarkably improved in the Super-HARP tube as a result of using a thick 25 µm photoconductive film, providing ultra-high sensitivity (11 lux at F8, which is 640 times that of a conventional tube camera), negligible lag and a high resolution of over 700 TV lines (NHK, 1998).

Several new processing techniques being developed for conventional imaging systems will be directly applicable to underwater cameras. For image enhancement, noise reduction software will be embedded in cameras to produce apparent improvement in sensitivity (DxO Labs, 2005). Lens distortion, vignetting, and lateral chromatic aberration will be corrected automatically, even if the amount or type of correction differs significantly from one image to the next. Software will take into account the sensor characteristics and will reduce noise to produce images characteristic of several f-stops less. Motion stabilization software is being integrated into cameras to reduce image blur at longer exposure times, thereby increasing sensitivity while reducing blur. These techniques are particularly appropriate for on-the-fly inspection of pipelines or any application requiring rapid survey and high resolution. Spectral filtering algorithms designed specifically for underwater cameras have been added to digital cameras and camcorders to correct for the increased red attenuation of seawater. Range dependent correction is also possible using algorithmic approaches.

Other methods of enhancement, such as brightening and sharpening, will be integrated directly into image sensor chips. Minolta's Vectis Weathermatic Zoom Camera is the first camera to combine zoom, autofocus (AF) and Advanced Photo System features. With a depth rating of 33 feet (10 m), the camera's passive AF system determines its focus based on the contrast of subjects, as opposed to less sophisticated active range finding systems that often use a light beam that scatters or attenuates in the water. This makes clear, sharp pictures possible below water without the complexity of added hardware. Range measurement systems, however, will be incorporated into cameras at the pixel level to allow entire 3-D range maps to be produced by a single CMOS sensor (Bamji and Charbon, 2003). The idea of recording the zoom setting and focus adjustment to obtain a better idea of image size is a possibility that would obviate the need for auxiliary range finding hardware.

Innovative commercial off-the-shelf (COTS) underwater cameras include diver controllable models that accommodate both wide-angle lenses and macro lenses, and weigh less than three pounds in air. SeaView Video Technology offers a "Twin-Cam" system with simultaneous full-color video for bright sunlight and clear water, and infrared (IR) illumination used with video for deep water, murky bottoms and night viewing, an excellent fit for underwater inspection applications. A Bowtech/ROS camera features a solid-state CCD that has low light level performance nearly matching that of a conventionally tubed SIT camera without the concerns of mechanical impact survivability.

Undersea cameras are also becoming miniaturized. Inuktun Services provides a small amphibian camera equipped with a pan assembly. At only four inches in diameter, the camera can fit through small openings for inspection inside of tanks, pipes and other confined spaces. A significant advancement in CMOS chip technology allows construction of underwater micro-video cameras. Carrillo Underwater Systems has reduced camera circuit boards to single chips. VideoRay has developed a miniROV camera system, VideoRay Pro, having a minimum sensitivity of four lux. Optionally, an infrared camera having 0.03 lux sensitivity is available and has been used at NASA/Kennedy Space Center to inspect tanks, culverts, and hazardous submerged areas, and by Discovery Channel to capture footage of sharks for the documentary Sharks of the Great White North.

In addition to camera advancements, new glass housings have been created to withstand hadal pressures below 10 km, allowing advancements in deep-sea photography. One system named GlassCAM, developed at Scripps Institution of Oceanography, currently utilizes a CoolPix digital camera (Hardy et al., 2002). The system has been successfully deployed 18 times in locations from the Sea of Cortez to the Aleutian Trench, reaching a maximum depth of 4.2 km.

Illumination Methods
Specialized illumination systems have been developed utilizing forward scattered light to view small aquatic organisms and bubbles. The most recent illumination schemes provide information regarding trajectories of zooplankton and utilize Schlieren optics to make nearly transparent particles visible to a camera (Strickler and Hwang, 2000). Backlighting and lighting of known water volumes to quantify the particles and animals (Caimi and Tusting, 1988; Carder and Costello, 1994) have recently manifested in the Video Plankton Recorder (VPR) (Wiebe et al., 2002). This system utilizes a CCD video camera with a high magnification lens and a short-pulse-duration, collimated 80-W xenon strobe with a parabolic reflector to image fixed volumes of water at a rate of 60 Hz.

What wavelengths of illumination do ocean dwellers see? Recent scientific research is devoted to learning how animals perceive their environment (Marshall et al., 2003; Milius, 2004; Hart et al., 2004). Studies suggest that many of the fishes living near the surface or in coral habitats are able to see ultraviolet-A (UVA) wavelengths (Losey et al., 2003). One advantage of this may be that many planktonic species transparent at visible wavelengths appear opaque at UVA wavelengths due to increased light scattering and the presence of UV-protective pigments (Johnsen and Widder, 2001). Improvements in cameras (e.g., UV-sensitivity) and equipment for analyzing light and color are making this research possible.

FIGURE 2
Atolla jellyfish: (a) full view, (b) bioluminescent pattern and (c) LED simulated bioluminescent pattern. (Photos by Edie Widder.)

A recent development, referred to as "Eye in the Sea," uses red light outside of the animals' normal vision range to illuminate a video camera's field of view unobtrusively (HBOI, 2005). The system is designed to operate on the seafloor

automatically and, most importantly, unnoticed by animals. The video camera is triggered to record footage at scheduled intervals or when bioluminescent light is detected nearby. In a recent deployment in the Gulf of Mexico (just prior to and during Hurricane Katrina’s arrival) the system was equipped with a “jellyfish lure” consisting of blue LEDs programmed to simulate bioluminescent patterns of the Atolla jellyfish (Figure 2).

Marine Technology Society Journal, Fall 2005, Volume 39, Number 3

Extended Range Techniques

LIDAR (LIght Detection And Ranging)
Oceanographic LIDAR systems have primarily been developed for aerial hydrographic mapping and surveillance. These systems have been in use for several decades and have evolved as the result of improvements in high power laser technology. High repetition rates, high energy, and multispectral capability are important factors in modern LIDAR systems capable of 3-D rapid environmental assessment (REA).

The state of the art in commercially available hydrographic LIDAR allows depth measurements from just under the surface to 50 m with an accuracy of 0.25 m over a swath width of about 250 m, or 0.6 times the flight altitude, at a 1000 pulse-per-second data rate (Optech, 2003). Typical system power is 2 kilowatts supplied by the 28-volt aircraft power system. Imaging of objects suspended within this depth range is a byproduct of the hydrographic mapping capability and the horizontal accuracy of 2.5 m or 1 part in 100. The speed of objects moving in the water column can also be estimated from LIDAR measurements (Wilkerson et al., 2003). Software for geo-referencing, as well as data management and analysis, has also reached maturity.

Range binning of LIDAR return imagery has the effect of removing scattered light effects from water volumes outside the range bin and is therefore useful for imaging through highly turbid media. Using amplitude returns from a single range bin (range gating) removes scattered light from much of the illuminated volume within the camera’s field-of-view (FOV), thereby improving contrast at a fixed range distance and, more importantly, increasing the distance over which images can be formed at a given contrast. Typically, images can be formed with conventional camera lighting at 1-2 attenuation lengths, and at 2-3 attenuation lengths with some control of the camera and light separation. Extended range image techniques provide the possibility of imaging at 3-7 attenuation lengths.

A variety of range-gated imaging systems have been developed in the past two decades. The first systems used lower powered pulsed lasers and time gated image microchannel plate (MCP) intensifiers to enable the image capture as the laser photon packet arrived after a round trip from source to target, thereby screening photons traveling longer or shorter round-trip distances due to scatter in the near field. Over the years, progress in MCP design has reduced noise contributions from random noise (speckle)—newer third generation devices exhibit much improved signal-to-noise ratios (SNRs) compared to first or second generation intensifiers. These improvements, in addition to advancements in pulsed laser technology, make possible today’s range-gated imaging systems. One particular system, the Laser Underwater Camera Image Enhancer (LUCIE), is able to use high repetition rates of 2000 pulses per second with subsequent averaging to reduce image noise (Jaffe et al., 2001). Various forms of this system have been developed—both diver held and ROV/AUV deployable. The most recent systems exhibit reduced size (25 cm diameter and 70 cm length), operating with a 22 kHz repetition rate and 5 ns pulse length (Weidemann et al., 2002). Image performance is 3-5 times that of conventional image systems under artificial lighting and approximately twice that of divers’ visual discrimination under natural illumination. Future improvements to the system will include automated control, improved image filtering, and sonar assisted range estimation via automated gate setting.

More recent developments in LIDAR stem from the multispectral capabilities of pulsed lasers, allowing multispectral and hyperspectral data sets to be obtained. Water quality, object recognition, and biological and geological assessment are potential byproducts of analysis of multispectral LIDAR images. Systems typically consist of a laser transmitter light source, a laser detection assembly, optics that couple the outgoing and incoming laser signals, an optional digital camera assembly for collecting passive light, a position and orientation system, and processing hardware. The systems produce real-time geo-rectified 3-D images and topography from an airborne platform and collect time-synchronous LIDAR range and image data from an optical receiver. The resulting image frames can be mosaicked and orthorectified in real-time. The LIDAR range data and passive image data may be referenced to a position and orientation system to transform the 3-D range images to a single geographically referenced multispectral 3-D image (Pack and Frederick, 2003). Deployment of an airborne system consisting of bathymetric LIDAR, topographic LIDAR and a digital camera is scheduled to include a passive spectral imager (Francis and Tuell, 2005). This upgraded system will be able to measure LIDAR, camera and hyperspectral data simultaneously to extend missions beyond conventional hydrographic surveying—targeted for REA in the littoral zone.

Time Resolved Imaging LIDAR
One approach to obtaining a simultaneous image and temporal history of the returned photon packet is to use a streak tube to simultaneously record one-dimensional (1-D) image data from a pulsed laser fan beam. The Streak Tube Imaging Lidar, or STIL (Bowker and Lubard, 1995), is a classic example illustrating performance achievable with this method. The system comprises a scannerless design that uses only a cylindrical lens to form a fan beam from the pulsed laser source. Since the entire time history is available for each channel across the fan beam at the detector, the system can generate a full 3-D range map by successively combining azimuth-range images along the track. The STIL has evolved into a tow body deployed surveillance imaging system, and has been tested extensively in turbid waters (Jaffe et al., 2001).

Laser Line Scan (LLS) Systems

Synchronous Scanning
Synchronous Scan imaging systems tend to outperform other system types due to reduction of shared volume between the illumination and the detection systems, but at the expense of complexity. By limiting the FOV, both forward scatter and backscatter effects are minimized. These systems were originally developed by Raytheon (Leathem and Coles, 1993 and Coles, 1997). Current systems employ multispectral line rather than single line laser illumination, and use simultaneously scanned multiple photomultiplier tube (PMT) receivers with separate color filtering to render output imagery in 12-bit color (Nevis and Strand, 2000). The 12-bit dynamic range produces highly detailed images at optical ranges of up to 7 attenuation lengths.

3-D Optical Image Formation
Methods based on triangulation have been demonstrated for obtaining 3-D range and intensity maps (Caimi et al., 1993; Caimi et al., 1996; Kocak and Caimi, 1999). Those systems relied upon a 2-D laser scan of a volume of water (nominally 1 cubic meter) and a high sensitivity, image intensified 2-axis photodetector that is position sensitive. Images from such systems can be formed over larger volumes but tend to suffer from backscatter effects that adversely impact accuracy in turbid media. Other developments include a system originally patented by Moran that uses phase encoding of a waveform and simultaneous modulation of the gain of an image intensified array to achieve an intensity-encoded 2-D range image (Moran et al., 1993). Range can easily be displayed in pseudocolor encoding of the observed scene. More complex schemes are possible, which perform better in turbid conditions (Mullen et al., 1999; Mullen et al., 2004).

A multi-dimensional laser line scanning imaging system has been developed to serve as a platform for new concepts in underwater imaging (Fuchs, 2004). The system has expanded on its patented 3-D range data predecessor (Caimi and Smith, 1995), adding relative reflectance of each pixel and five channels of fluorescence emission in the scene. Applications for such data range from Homeland Security issues such as mine detection and ship husbandry, to environmental studies offering an understanding of the nature and significance of fluorescence and reflectance characteristics of benthic marine organisms, particularly in […] areas.

Northrop-Grumman’s first-generation laser line scan (LLS) imaging system, SM2000, was used on several occasions for benthic habitat mapping assessments (Carey et al., 2003; Amend et al., 2004). Although storage and bandwidth limitations were revealed, the surveys demonstrated feasibility of the technology for conducting benthic assessments. Second generation LLS have since improved data capture and storage methods, and currently offer three-color imaging (Mazel et al., 2003) and 3-D imaging (Moore et al., 2000). For example, 3D Sea Scan uses a triangulation method implemented in terms of a 1-D point-by-point laser scan and 1-D CCD detector. The system produces a full 3-D range map by using vehicle motion (y-axis) perpendicular to the scan (x-axis) to accumulate sequential azimuth-range (x-z plane) data slices. This system has been used to produce bathymetric data sets as well as surface maps of mine-like objects (Moore, 2001).

Temporally Coded/Coherent Methods

Holographic Imaging
Holographic imaging of particulates was demonstrated decades ago (Thompson, 1990; Carder, 1979). Recent developments include Katz et al., 1999, where an in-line hologram is formed, recorded, and subsequently converted into a 3-D image capable of 10 μm particle resolution. Further refinement of holographic imaging systems (Watson et al., 2001) supports recording of both in-line and off-axis reference beam holograms. Off-axis holograms require a higher spatial frequency carrier and high resolution recording media, but have the advantage of being able to record higher particle densities compared to in-line methods.

Optical Methods using Frequency Conversion

Fluorescence Imaging
Fluorescence imaging, as well as other imaging methods that exploit inelastic optical processes, is useful for detection and identification when other methods fail or give ambiguous results. The Fluorescence Imaging Laser Line Scanner (FILLS) is a variant on the RGB color scanner that uses a synchronously scanned laser with a reduced FOV and multiple, optically-filtered PMT receivers to reduce forward and backscatter (Strand et al., 1996). This method is able to produce images at up to 6 optical attenuation lengths. The scanner uses mechanically coupled mirrors to scan an Argon laser at 488 nm, as well as four narrow FOV PMT receivers fitted with interference filters—typically at 488 nm, 520 nm, 580 nm and 685 nm. The imaging system therefore produces outputs for red, green and yellow fluorescence, as well as at the excitation wavelength. The cross track resolution is typically 10 milliradians or less, while the in track resolution is determined by the sensing configuration speed, beam divergence and scan mirror rotational speeds. The system produces images useful for discrimination of different substrates and biota (Strand, 2002). More recent studies have improved the FILLS images by Discrete Fourier Transform (DFT) filtering and logarithmic scaling. Identification studies have been performed using rule based processing with ENVI commercial software (Mazel, 2002).

Systems have also been developed commercially. NightSea offers two products for “seeing, videotaping, and photographing” fluorescence underwater, which enable a diver to visualize fluorescence, while accessory filters for photo and video equipment make it possible to record the effect. One model uses high-intensity UV for stimulating fluorescence of hydrocarbons, tracer dyes for leak detection, and other fluorescent materials used in non-destructive testing, while the other system uses lower intensity blue illumination for observing biological fluorescence and some fluorescent dyes. For still photography with film or digital cameras, custom filters are available that fit over existing electronic flash units to stimulate fluorescence, with a complementary filter in front of the camera. Either ultraviolet or blue light can be used for photography or videography. NightSea has also developed a compact unit that incorporates a Nikonos camera and two small strobes for taking pictures in the daytime. One of the camera models was used to locate a leak in an intake pipe in a reservoir using drinking-water-safe fluorescent dye (Jones, 2001).

Raman Imaging
Raman imaging allows probing of molecular vibrational states via inelastic scattering
(Hu and Voss, 1997) and has been used for remote sea surface measurement, which is useful for detection of various objects and substances. Recent developments of Raman imaging systems at close range include the Deep Ocean Raman in situ Spectrometer, or DORISS—a Raman probe designed for geochemical analysis. It requires precise focusing and location-finding ability that is obtained from Monterey Bay Aquarium Research Institute’s (MBARI) precise underwater positioner (PUP) work platform (Kirkwood et al., 2003). PUP uses laser photogrammetric methods (Caimi and Tusting, 1988) to position the probe at or near the required focal distance. The probe uses a cooled CCD array and holographic spectral grating for detection and a Nd:YAG laser for excitation of mineral or other species. Gas and mineral analysis have been achieved in geological samples (White et al., 2004).

Multiple Perspective Methods
A number of recent activities combine video imagery taken from multiple views of a scene to derive size and depth measurements (Liu, 2001; Baglio et al., 2001; Boynton and Voss, 2004; Takahashi et al., 2004), photo-mosaics (Fleischer et al., 1996; Xu and Negahdaripour, 1997; Negahdaripour et al., 1998a; Negahdaripour and Xu, 2002; Garcia et al., 2001; Roman and Singh, 2001; Espiau and Rives, 2001; Rzhanov et al., 2001; Gracias and Santos-Victor, 2001; Gracias et al., 2002; Gracias et al., 2003; Gracias and Negahdaripour, 2005) and 3-D reconstructions (Plakas et al., 1998; Khamene and Negadaripour, 2003; Negahdaripour and Madjidi, 2003; Pizarro et al., 2004). These activities support (semi-) autonomous or operator-supervised missions pertaining to automatic vision-guided station keeping, location-finding and navigation, survey and mapping, trajectory following and online construction of a composite image, as well as search and inspection of seabed structures and ship hulls (Leabourne et al., 1997; Negahdaripour et al., 1999; Lots et al., 2001; Negahdaripour and Firoozfam, 2005). These tasks require an accurate estimation of camera position, together with fast, accurate correspondence determination, particularly for real-time registration. Common sources of error include non-planar seafloor, moving objects, significant illumination changes, transect superposition, and positioning drift.

FIGURE 3. Complete video mosaic taken in Key Biscayne, Florida covering an area of approximately 400 m².

Figure 3 demonstrates Gracias and Negahdaripour’s construction of mosaics using data captured at multiple altitudes, where the two color shades are used to distinguish images coming from each of two altitudes (2005). In this mosaic, 5,000 image frames were rendered and 496 key frames were used in the global registration to cover an approximate area of 400 m². The resulting mosaic image provides a resolution of 2.5 mm/pixel useful for ongoing monitoring and health assessment of coral reef mapping funded by DoD’s Strategic Environmental Research and Development Program (SERDP).

Recent stereovision techniques (Lacey et al., 2000; Adams et al., 2002; Pizarro et al., 2003; Pessel et al., 2003; Zhang and Negahdaripour, 2004; Zhang and Negahdaripour, 2005) focus on methods designed to decrease time, improve accuracy and enhance robustness of locating point correspondences in image pairs using RANSAC-based algorithms (Fischler and Bolles, 1981). In one application, for example, quadranocular (2-pair) stereo video images are used for 3-D reconstruction to automate the trajectory of the camera’s position onboard an ROV (Zhang and Negahdaripour, 2003). Unlike most reconstruction techniques that rely on knowing the camera’s geometric position relative to the target, this method accounts for the 6 DOF platform motion and recovers position and pose automatically from the video data.

Another stereovision application uses a simpler feature-matching scheme (not using RANSAC) to perform closed-loop tracking of gelatinous animals in open water for visual
servoing for ROV navigation control (Rife and Rock, 2001). In instances when these nearly transparent subjects become invisible in one camera image, a feature-based scaling method, rather than stereo triangulation, is used to compute range. More recently, Rife and Rock (2003) introduce a predictive assessment method for analyzing segmentation methods required for tracking an object of interest—in this case jellyfish.

Single camera stereo systems with mirror reflectors, called planar catadioptric stereo systems, have been used in terrestrial applications but have not yet migrated to subsea applications (Gluckman and Nayar, 1999; Tan et al., 2004). Instead, specialized, multi-camera configurations have been used to effectively expand the FOV in ocean applications. One system, comprised of 6 standard CCD cameras having overlapping fields of view, combines images from each camera to form a single, high-resolution image for high-volume data capture (Negahdaripour et al., 2001; Firoozfam and Negahdaripour, 2002). Results show that the extended field of view enhanced robustness and accuracy for selected computer vision algorithms, including 3-D motion estimation. A conical panoramic stereo imaging system has also been tested, comprised of 12 standard CCD cameras (Firoozfam et al., 2003). This system targets 3-D visualization and improved scene reconstruction by utilizing correspondences in more than 2 views of the scene at modest distances.

Other hybrid systems combine multiple views with secondary sensor data. For example, a laser line imaged by a stereovision system is used to measure bedform geometry such as wave ripples (Baglio et al., 2001). Another system uses a Kalman filter-based framework to integrate the shading flow with image motion and stereo disparity to recover 3-D motion and structure of a seafloor environment (Khamene and Negahdaripour, 2003). Lastly, J-QUEST (Japan QUantitative Echo-sounder and Stereo Tv-camera system) combines an echo sounder for estimating fish density per unit volume and a stereo, low light TV camera system to identify fish species and estimate length and angle of the observed fish for fish stock assessments (Takahashi et al., 2004).

Image Intensity Algorithms Based on Image Processing Techniques
Optical imaging provides high-resolution imagery at fast sampling rates, permitting operations requiring high-precision and online performance. Nearly all subsea vehicles are equipped with at least one standard video camera. Cameras are passive, providing a natural choice for covert and reconnaissance operations, and optical imagery contains rich visual cues (e.g., color, shading, texture, and surface markings) for numerous recognition applications. New techniques are being adapted from traditional image processing and computer vision research that allow improved image formation, data analysis and visualization. These often rely on complex mathematical operations to obtain apparently hidden or underlying information or to reduce noise or extraneous information. Examples are image stabilization algorithms (Barufaldi et al., 2003; Plakas and Trucco, 2000), reduction of redundant image information to reduce bandwidth requirements for transmission (Negahdaripour et al., 1998b; Yu et al., 2003; Schmalz, 2005), image enhancement and object recognition using multidimensional spectral analysis (Soriano et al., 2001; Madjidi and Negahdaripour, 2004), polarization (Schechner and Karpel, 2004), pose (Crovato et al., 2000), or other methods and data (Bailey et al., 2003). Some recent developments are described below.

FIGURE 4. Image enhancement by removing degradations caused by polarization of light: the left side shows the best raw image; the right side shows the recovered image.

Polarization-based Image Enhancement and Extended Range Viewing
A new physics-based approach to computer vision extends visibility in naturally illuminated underwater images by removing degradations caused by polarization of light (Schechner and Karpel, 2004). The algorithm makes use of at least two images taken through a polarizer at different orientations (e.g., horizontal and vertical). Significant improvements in scene contrast and color correction can be obtained, as illustrated in Figure 4, which features white balancing based
on a nearby white sand patch. In terms of the raw image, this process quickly loses its effectiveness as objects become more distant, but with appropriate processing, [objects] are recovered over longer distances. In addition to enhancing visibility, an estimate of the scene’s relative range map can be obtained. Thus, if the distance to a single point is known, then the absolute distance to any point in the scene can be derived.

Interestingly, as noted by Johnsen (2005), “the […] of this [visibility enhancement] algorithm suggests that it may be used by polarization-sensitive species for the same purpose.” For instance, one species of squid is known to use polarization sensitivity to enhance contrast of possible prey (Shashar et al., 1998). Other sea creatures, such as cuttlefishes and mantis shrimps, possess polarization vision and use polarization patterns for signaling (Cronin et al., 2003).

Photogrammetric Methods
Ongoing research uses undersea video and still photogrammetric systems to extract quantitative measurements such as object size, area, volume, shape, density, and movement; distance between objects; as well as range to a point, object plane or surface. Automated methods require recognition and segmentation of multiple laser spots and laser stripes, or alternately might use camera and lens information in conjunction with algorithms that discard out of focus (or fuzzy) regions. Since these systems offer a low-cost, low-impact addition to standard video, a growing application for this technology can be found in marine resource management—particularly the documentation, long-term monitoring and estimation of the geographic extent of critical habitats. Many of these applications require unique customization (i.e., automated sequential image processing based on distance traveled, image correction for vehicle roll and pitch motions, inclusion of metadata from system sensors, etc.) not found in commercially available systems such as C-Map Systems Video Ruler, Tritech ISS-Soft, and Eos Systems PhotoModeler Pro (Bythell et al., 2001).

For example, researchers at CSIRO Marine Research (CMR) in Australia use a dual-video camera system with 4 reference lasers each, equipped to a small ROV to study benthic habitats and community dynamics. In this application, an Optimas™ image processing add-on tool developed at MBARI, called Laser Measure©, is used to extract quantitative measurements from the laser-referenced imagery (Barker et al., 2001). New algorithms are currently being added to the toolbox, such as determining the focal length of a [camera] using the position of a single laser (Davis, 2005).

Similar to this system is a three-reference laser system, “3-Beam”, developed for Washington State Department of Fish and Wildlife researchers to assess population density of ground fish (Kocak et al., 2004). This system offers automated processing of the sequential images based on time, distance traveled or random sampling. Images are collected at a measured slant angle taking into account the vehicle roll and pitch motions, and vehicle position data is stored for each image for geo-referencing. In a May 2005 cruise with Canadian Department of Fisheries (DOF) researchers, video and observed data were collected onboard Nuytco’s Aquarius submersible to compare quillback rockfish density estimates over a survey area using strip vs. line transect methods. In addition to the video, forward-looking sonar data was collected to monitor observed fish for attraction or repulsion behavior. A fourth laser, parallel but off-axis, was added to the 3-Beam system to derive local rugosity (roughness) measures of the benthic surface to assist in habitat classification. Other techniques to obtain roughness or relief measurements have recently been published using laser stripe patterns (Chotiros et al., 2001; Wang et al., 2001).

Two systems utilizing 5 reference lasers mounted to DOE Phantom XTL ROVs that are also used for seabed surveys are University of Plymouth’s Autonomous Benthic Image Scaling System (ABISS) (Pilgrim et al., 2001; Parry et al., 2003) and NOAA Southwest Fisheries Science Center’s “5-Beam” system (Pinkard et al., 2005).

Electromagnetic Methods
Electromagnetic signatures can be obtained for ships and other seagoing vessels. Many different sensor types are used for detection and rely generally on low frequency or static field measurement. Mapping of the earth’s magnetic field anomalies can provide a background against which other anomalies can be detected. Visualization of the sensor data over 2-D and 3-D regions is therefore of interest for salvage, geological monitoring, and for military applications. Sensor types include high and low temperature superconducting magnetic gradiometers useful for seafloor mapping. The Proton precession magnetometer and the more sensitive alkali vapor magnetometer are commonly used for detection of magnetic field anomalies at a close distance to the seafloor from a towbody. High sensitivity is also possible from optical magnetometers (Deeter et al., 1993; Dandridge, 1994; Zyra et al., 1995). Other magnetic sensing approaches use field generation with subsequent detection of field perturbation from conductive material on the seafloor. Images are produced by platform motion since magnetometers are inherently omnidirectional.

Algorithmic methods for inversion of the sensor data to form 3-D image maps of the subsurface region are an active subject (Lesselier and Habashy, 2000). The future is expected to bring advancements such as those now sought by the Navy’s Distributed Undersea Warfare Pyramid Electromagnetic Sensor development program. Arrays of sensors will cooperate to produce 2- and 3-dimensional image representations that will require data processing to determine variations or shifts in magnetic field intensity in real-time.

Acoustic Imaging Techniques
Advances in DSP technology have paved the way for reduced cost and higher capability acoustic systems. Both long-range and shallow water short-range systems have received recent attention. Since the end of the Cold War, a shift in emphasis for military acoustic detection systems has gone from passive to active types due to changes in the military threat profile.
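The parallel-laser scaling used by photogrammetric systems such as 3-Beam and 5-Beam, described above, reduces to simple proportionality: two lasers a known distance apart project reference spots whose pixel separation fixes the image scale. The following is a minimal sketch of that arithmetic; the function names and numbers are illustrative (not from any cited system), and the slant correction is an assumed flat-surface viewing model.

```python
import math

def image_scale_mm_per_px(laser_sep_mm: float, spot_dist_px: float) -> float:
    """Parallel reference lasers a known distance apart give the image scale."""
    return laser_sep_mm / spot_dist_px

def object_size_mm(length_px: float, laser_sep_mm: float, spot_dist_px: float,
                   slant_deg: float = 0.0) -> float:
    """Convert a pixel measurement to millimetres.

    The cosine term is a simple correction for a camera viewing the surface
    off-normal (an assumed model, not the 3-Beam roll/pitch compensation).
    """
    size = length_px * image_scale_mm_per_px(laser_sep_mm, spot_dist_px)
    return size / math.cos(math.radians(slant_deg))

if __name__ == "__main__":
    # Lasers 100 mm apart appear 200 px apart -> 0.5 mm/px;
    # a 380 px object is then 190 mm long (head-on view).
    print(object_size_mm(380, 100.0, 200))   # 190.0
```

The same proportionality underlies the scaling-laser measurement shown in the MIP example of Figure 5(b).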
Active Acoustic Systems
New developments in active acoustic systems include: long range Low Frequency Active Sonar (LFAS), Parametric Sonar, and very high frequency acoustic (VHFA) active imaging systems. LFAS uses 100-500 Hz for detection at long distances using the deep-water channel. Range and speed estimates can be made for detection and identification of small targets at distances beyond torpedo range. Parametric sonar systems rely on beat frequency interaction from two relatively narrow high frequency beams. The required [apertures] are relatively small so that the side lobe emission can be controlled sufficiently to avoid backscatter from surface and bottom. Recent advances have improved the efficiency for producing the beat frequency above the previous 2-4% levels (Potter, 1999). It is expected that 2-D parametric sonar arrays will evolve with subsequent requirements for multidimensional data display for interpretation. One 2-D sonar system has been successfully employed by the German Navy. Acoustic imaging systems using VHF sparse arrays have recently been produced, and trends will be to improve resolution and image quality. The advantage of these systems over advanced optical imagers is the ability to operate in very turbid waters in shallow water regions.

A VHF acoustic camera that began development in the mid-1990s for Navy divers to use in underwater surveillance has recently defined a new class of identification sonar that produces near video quality images (Belcher et al., 2002). This “Dual-frequency IDentification SONar”, known as DIDSON, has advantages over optical systems that are limited by low light, turbid water and short-range capability. Two models are available: a 1 to 30 m version that operates at 1.8 MHz and 1.1 MHz, and a longer-range, 1 to 80 m version that operates at 0.70 MHz and 1.2 MHz. A recent publication addresses software methods for uniquely calibrating the camera (Negahdaripour et al., 2005).

One application well suited for this identification sonar is security and safety-of-ship systems. Since September 2001, new regulations have been adopted to enhance ship and port security and to avert shipping from becoming a target of international terrorism. Imaging systems designed for these applications require situational awareness for visual search, contact detection, recognition, identification under most weather conditions, navigation support, real-time imagery analysis and imagery enhancement. Often this implies a multi-sensor approach. With funding from ONR and the U.S. Coast Guard, University of South Florida’s (USF) Center for Ocean Technology has recently developed a Mobile Inspection Package (MIP) designed for port security that is easily deployable on numerous platforms. The MIP consists of a navigation module (ring laser gyro and Doppler Velocity Log), complementary sonars (DIDSON and ES MK II), optical systems (USF 3-D laser line scanner and video), a laser/video photogrammetric system (3-Beam or ISS), and a high-speed Ethernet communications link. An example data set is shown in Figure 5 (a-d). Data and images collected by these sensors, including rotation of the 3-D sonar for viewing at varying angles and perspectives, can be closely evaluated at a Command and Control Center in real-time. In low visibility and turbid water conditions, sonar images are used to pilot the vehicle. All data is geo-referenced and stored in a Geographic Information System (GIS). The first real-world application was at the 2005 Super Bowl in Jacksonville, Florida, where the system inspected the piers that housed the cruise liners. During this inspection, one target was found warranting further assessment. The team marked the target location with a buoy and in less than two minutes determined its threat potential.

FIGURE 5. Mine-like object on a seawall recorded from the MIP: (a) target in air; (b) camera image with scaling lasers (191.7 mm measured, 196.0 mm actual); (c) DIDSON image (0.47 m measured, 0.46 m actual); and (d) ES-1600 image (0.5 m measured, 0.46 m actual).

Other VHFA high-resolution imagers include the Norwegian Echoscope operating at 150, 300, and 500 kHz (Anderson and Hansen, 1996). This ROV mountable system produces images at a 5-10 Hz frame rate at a range of up to 100 m with a resolution of 0.6-2.5 degrees. A second system produced by Thomson Marconi operates at 5-10 MHz for very high-resolution imaging (1 mm at close range of 5 m) (Jones, 1999), and another low cost shallow water system by BlueView Technologies operates at 450 kHz with resolution of 2 inches at ranges to 450 ft. and a 10 Hz update rate.
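The parametric-sonar principle above (a low difference, or "beat", frequency radiated by two narrow high-frequency primary beams) and basic echo ranging can be sketched numerically. This is a minimal illustration, not any cited system's design: the sound speed is a nominal 1500 m/s and all frequencies are invented for the example.

```python
SOUND_SPEED_SEAWATER = 1500.0  # m/s, nominal value; varies with T, S, depth

def difference_frequency_hz(f1_hz: float, f2_hz: float) -> float:
    """A parametric array radiates the difference of its two primary beams."""
    return abs(f1_hz - f2_hz)

def range_from_echo_m(two_way_time_s: float,
                      c: float = SOUND_SPEED_SEAWATER) -> float:
    """One-way range implied by a round-trip echo time."""
    return c * two_way_time_s / 2.0

if __name__ == "__main__":
    # Primaries at 200 kHz and 195 kHz yield a 5 kHz low-frequency beam
    # that retains the narrow directivity of the high-frequency apertures.
    print(difference_frequency_hz(200e3, 195e3))   # 5000.0
    print(range_from_echo_m(0.2))                  # 150.0
```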
Multibeam high frequency sonar has also been useful for large volume midwater biological surveys where detection, tracking and characterization are required. Accurate calibration may be accomplished in small volume tanks using coherent subtraction of surface or other reverberation to obtain best results when multiple objects are to be observed (Chu et al., 2001). A least squares method for calibration of multibeam sonar for target azimuth localization to within 0.1 degrees has also been demonstrated where accurate Direction-of-Arrival is required (Chu et al., 2001).

Sidescan sonar configurations have also advanced and are particularly suited to synthetic aperture combining to increase resolution along the track. One recently developed synthetic aperture sonar (SAS), the 4400-SAS side scan sonar, was introduced at the Undersea Defense Technology conference in 2004 (EdgeTech, 2004). This sonar is designed for installation on a towed body, AUV or ROV and obtains high-resolution images at medium ranges (120 kHz) with range independent along track resolution of 10 cm. It is expected to be highly effective in mine countermeasure applications.

A novel combined-mode subsea acoustic system developed at Simon Fraser University is now commercially licensed to Benthos, Inc. (Fluxgold, 2003; Tidd, 2005). The C3D system combines bathymetry and sidescan sonar for towed or fixed-mount operation, producing both depth and imagery information. The C3D system uses Small Aperture Range Angle Computed Angle of Arrival Transient Imaging (SARA CAATI) for estimating the backscatter arrival spectrum. Though similar to interferometry in that an angle-of-arrival estimation is used, this system solves the problem of concurrent arrivals from multiple images.

A 3-D acoustic imaging system has also been developed using a 252-element receiver and omni-directional source transmitting FM pulses at 2-12 kHz for sub-bottom imaging of buried objects (Schock et al., 2001). The Buried Object Scanning Sonar (BOSS) processes each transmission event using coherent nearfield focusing to generate a geo-referenced 3-D map of acoustic intensity. Co-located pixels are added incoherently to generate a multiple aspect image from which target shape can be estimated using tomographic reconstruction, allowing subsequent classification. The receiver array size is 1.5 m but can be reconfigured into a linear array for AUV “wing” mounting. This tomographic reconstruction method has been shown to provide more information than available with Maximum Intensity Projection (MIP) derived imagery.

The GOATS (Generic Oceanographic Array Technology Systems) Joint Research Program has continued the development of environmentally adaptive autonomous underwater vehicle technology directed toward Rapid Environmental Assessment and Mine Counter Measures in coastal environments (Edwards and Schmidt, 2002). By combining theory and modeling of the 3-D environmental acoustics with experiments involving AUVs and new sensor technology, the program has the potential of relaxing the navigation accuracy required by the competing SAS method in complex topological environments and of reducing the need for significant aperture overlap between consecutive receptions. One element of the program involves the development of a multi-static sonar concept, which uses a low frequency source on a single AUV to sub-critically insonify the seabed, and in addition a team of multiple AUVs for mapping the associated 3D scattered acoustic field. The spatial and temporal structure of the acoustic field is subsequently processed using SAS algorithms with dynamic pattern recognition and tracking for concurrent detection and classification. The proposed classification method has been shown to be readily implemented in real-time, as demonstrated both in simulations and in post-processing of the experimental data. More recent improvements include additional processing that supports target reacquisition and acoustic field prediction (Nave and Edwards, 2003; Edwards and Schmidt, 2003).

Passive Acoustic Imaging
In 1994, a prototype system called ADONIS was demonstrated, producing images with a resolution of 1 m at a 40 m range using only ambient noise (Buckingham et al., 1996). A second-generation system, the Remotely Operated Ambient Noise Imaging System (ROMANIS), was subsequently developed by the Acoustic Research Laboratory in Singapore. Signal processing for ROMANIS has been generalized to include several new imaging concepts beyond ADONIS intensity-based imaging, including Kalman filtering (Potter and Chitre, 1999). ROMANIS operates over the 25-85 kHz range with 504 sensors (Figure 6) and an aggregate data rate of 1.6 Gb/s, 10 times that of the Norwegian VHF system, and has produced images at 70 m range with 1 m resolution using only ambient noise (Venugopalan et al., 2003). ROMANIS uses 54 Pentium computers in the wet-end array, linked by 4 fibre-channel communications loops to provide full raw data, recorded on 54 fibre-channel hard drives on the surface. A 36-CPU parallel computing platform is being built to perform the beamforming and imaging in near real-time. ROMANIS won the Singapore Defence Technology Prize (DTP) in 2004.

FIGURE 6. ROMANIS array in transit to deployment site.

Acoustic Algorithms based on Image Processing Techniques
The The spatial and temporal structure of the gregate data rate of 1.6 Gb/s, 10 times that of C3D system uses Small Aperture Range Angle acoustic field is subsequently processed using the Norwegian VHF system, and has pro- Computed Angle of Arrival Transient Imaging SAS algorithms with dynamic pattern recog- duced images at 70 m range with 1 m resolu- (SARA CAATI) for estimating the backscatter nition and tracking for concurrent detection tion using only ambient noise (Venugopalan arrival spectrum. Though similar to interfer- and classification. The proposed classification et al., 2003). ROMANIS uses 54 Pentium ometry in that an angle-of-arrival estimation is method has been shown to be readily imple- computers in the wet-end array, linked by 4 used, this system solves the problem of concur- mented in real-time, as demonstrated both in fibre-channel communications loops to pro- rent arrivals from multiple images. simulations and in post-processing of the ex- vide full raw data, recorded on 54 fibre-chan- A 3-D acoustic imaging system has also perimental data. More recent improvements nel hard drives on the surface. A 36-cpu par- been developed using a 252-element receiver include additional processing that supports allel computing platform is being built to and omni-directional source transmitting FM target reacquisition and acoustic field predic- perform the beamforming and imaging in near pulses at 2-12 kHz for sub-bottom imaging tion (Nave and Edwards, 2003; Edwards and real-time. ROMANIS won the Singapore of buried objects (Schock et al., 2001). The Schmidt, 2003). Defence Technology Prize (DTP) in 2004. Buried Object Scanning Sonar (BOSS) pro- cesses each transmission event using coherent Passive Acoustic Imaging Acoustic Algorithms based on Image nearfield focusing to generate a 3-D map of In 1994, a prototype system called the Processing Techniques acoustic intensity. 
As the sonar passes over the Acoustic Daylight Ocean Noise Imaging Sys- Acoustic imaging systems are associated seabed, the 3-D intensities are combined in a tem (ADONIS) was developed at SIO. ADO- with many of the processing tasks found in matrix representation that is locally geo-refer- NIS was successfully tested on various targets, optical image formation systems. Image pro-

Fall 2005 Volume 39, Number 3 17 cessing algorithms are applicable for automated al., 2000c; Trucco et al., 2002). Specifically, mimic biological vision systems of some ani- object recognition (Foresti et al., 1997; Independent Component Analysis (ICA) has mals and provide more robust detection or navi- Boulinquez and Quinquis, 1999; Boulinquez been used on subsampled data sets to allow gation capability for undersea vehicles. Some of and Quinquis, 2002), segmentation and fea- real-time operation at 5 fps and the Marching these include combined signal processing at the ture tracking (Trucco et al., 2000; Cutter et Intersection Algorithm (Rocchini et al., 2001) local (pixel) level for velocity estimation (speed al., 2002), enhancement and noise reduction was used for online geometric fusion on bor- of motion and direction), improved tracking (Murino and Trucco, 1999; Tu and Jiang, der grid cells. ability, simplified identification, and adaptive 2004), image compression (Hoaq et al., 1997; functionality. Barbosa and Barroso, 2000; Cunha et al., New techniques in computer vision and 2000), motion estimation and discrimination A Vision of the Future pattern recognition algorithms may include (Petillot et al., 2001; Fernandez et al., 2004), “Nothing ages so quickly as yesterday’s vision of automatic target recognition on compressed and others. Processing trends have included the future.” imagery; improved compression or encoding classify-before-detect (CBD) algorithms where Richard Corliss (1944-present) methods for higher bandwidth transmission target classes are distinguished from the target of multi-dimensional data sets to multiple lo- echo using a priori information. 
One reported “Vision is the art of seeing the invisible.” cations including the Internet; fusion of data method uses a supervised statistical principal Jonathan Swift (1667-1745) from multiple platforms and data types re- component classifier to produce a reduced fea- quiring a combination of robust algorithms ture vector, which is then sent to a supervised Regulations and issues relating to Home- (also called algorithm fusion); behavior recog- statistical classifier (Trucco, 2001). Another land Security, security and safety of ships and nition and automated scene understanding recent processing method employs PCA to ports, monitoring and conservation of our processes applied to an image sequence; stor- reduce the dimensionality of the feature vec- environment, surveillance and defense systems age of metadata including geo-coordinates tor, which is filtered using best basis wavelet will drive future research in underwater imag- with individual sensor data; and improved decomposition to classify seabed composition ing. Emphasis will be placed on automating methods of database management and infor- using a linear discriminant analysis approach and optimizing the processing—using both mation retrieval, particularly with LIDAR and (Barat and Rendas, 2002). hardware and software measures—to increase sonar bathymetric data. As data becomes avail- Several developments in acoustic image throughput, robustness, and data assimilation able on the Internet, needs arise for interpreta- mosaicking have also been reported (Murino affording maximum resolution with mini- tion of visual contents, storage and indexing et al., 2000b; Repetto et al., 2002; Kim et al., mum delay. As more and more sensory infor- information for search engines, filtering un- 2004; Castellani et al., 2004; Negahdaripour mation is gathered, emphasis will continue to desired contents and encrypting sensitive in- et al., 2005). 
The problem of matching and be placed on methods to fuse similar and dis- formation, and handling video streams. Fu- positioning successive images has been ap- parate sensor data and to display, archive, trans- ture advancements will include real-time proached using various projection methods mit and share this data to achieve real-time watermarking of such data into the video im- including a precision ray tracing approach and awareness at multiple sites. This observation age to link still image and video data archive, an analytic solution. The former requires a re- holds particularly when an instantaneous de- as well as to allay copyright and security con- duction of the number of rays for optimum cision must be based on up-to-the-minute cerns related to dispersal of such material over results with minimum computational load, information. As we have seen in some present the Internet. while the latter uses an interpolation method research, multiple sensor systems are becom- The future of acoustic imaging should al- based on generation of virtual beam signals to ing increasingly important. Thus, simplified low a high degree of integration and cost re- obtain high quality mosaics (Repetto et al., data usage and automated detection/process- duction to provide a truly covert capability at 2002). Another approach (Negahdaripour et ing based on data fusion is needed. All data, target standoff distances exceeding those of al., 2005) uses a projective transformation for however, should be made available in an un- advanced optical imaging systems which op- 2-D image registration correcting for rotational processed format with an option to invoke erate at 4 to 7 optical attenuation lengths (300- and translational motion of images produced image processing algorithms for improving the 400 m in clear water). The future also prom- by an acoustic camera. Fusion algorithms were representation. 
ises sensor fusion between passive and active demonstrated to reduce errors due to rapid Although conventional methods of image imaging systems using both acoustic and op- image motion thereby increasing resolution processing are well suited for target recognition, tical methodologies. (Refer to Sloan et al. and reducing blur (Kim et al., 2004). identification, location, and tracking, there is a [2003] for an extensive review of acoustic and Techniques for reconstructing 3-D repre- need for future systems to make physical and optical-based imaging techniques that have sentations from range images obtained from biological measurements over large survey areas provided ecological insights and information multiple views have also been proposed for species monitoring, differentiation, and about marine benthic systems. Hewitt et. al (Castellani et al., 2004), with processing meth- health assessment through spectral analysis. Sen- [2004] provide a similar case study.) The abil- ods discussed in detail (Murino and Trucco et sors using advanced concepts are expected to ity to combine data in different formats and

data rates for online decision making will evolve, as well as processing techniques to coordinate, present, and interpret multiplatform and multidimensional sensor data. Acoustic imaging sensors may be used in distributed networks at harbor choke points, similar to deployable underwater surveillance systems (DUSS). In one such system, an active sonar concept based on a distributed network of small multistatic transmitter/receiver nodes has been deployed and tested (Mozzone et al., 1999). Due to the large volume of data in these multi-sensor systems, data archival and access continues to be important.

Geographic information systems (GIS) are becoming widely used as a tool for visualization and analysis (NOAA, 2003; Hamano, 2004; Mitchell, 2004). In benthic surveys, for example, multibeam bathymetry data can be collected over a large survey area and mapped into a GIS Benthic Terrain Modeler (BTM) tool, as described in Lundblad et al. (2005). In this technique, both a fine and a broad scale Bathymetric Position Index (BPI) can be estimated as a second-order derivative, along with surface slope measures and rugosity estimates. The BPI maps are then standardized and a classification dictionary is used to produce the benthic structures and zones. The resulting classification maps provide a better understanding of species–habitat relationships that can assist in establishing and monitoring marine protected areas. The BTM methodology is most useful when combined with in situ data sets.

Advances in image sensor technology will yield smaller size, distributed networking, and smart processors (e.g., autonomous "smart dust" sensors [Kahn et al., 1999]). A distribution of underwater imaging sensors in a self-formed or other network architecture would provide ubiquitous coverage, affording multiple perspective views for robust identification and rapid detection. These advanced systems could also mimic the swarming or schooling behaviors of biological organisms for cooperative problem solving given the mission at hand.

We are already seeing a wide range of small, affordable camera systems with greater resolution and capability due to CMOS sensors and onboard processors. In the future, imagers will maintain compatibility across a wide range of software, hardware, and image types. Built-in features will preserve rich color and tones and offer automatic image correction and enhancement, as is just beginning in the commercial market. These systems will be tailored to the entire range of lighting conditions; however, there will be a trend toward self-contained, high efficiency LED illumination. Signal processing integrated onboard imaging semiconductor chips, and local processing at the pixel level for sensitivity normalization, contrast enhancement, and filtering, will be just a few of the future enhancements. More advanced features, such as the discrimination of temporally variant signatures, may also evolve to provide enhanced detection and monitoring systems.

As for applications, the need for land-based imaging systems for close observation of vessel traffic, ports and harbors, offshore platforms, oil and gas fields, fisheries, and coastal waterways is already becoming increasingly important. In addition, unattended offshore platforms and buoys face dangers from pirates or terrorists due to their remoteness. Since operation of these platforms is expensive and technologically demanding, protecting them is a top priority. Planned monitoring systems range from simple, single radar installations to fully networked multiple radar or multiple monitoring systems with integrated video/thermal imaging cameras, radio and direction finding (DF), GPS and other transponders, and meteorological, hydrological, and geographical information databases. One developing system from Sidus integrates radar information with video or still camera position control to provide disparate views of a given location, then transmits the images to a remote command center via satellite communications for evaluation (Flakus, 2005). Enhanced perception at night or in poor visibility situations is achieved using ISIT, ICCD, or thermal cameras.

Another prototype development underway, the Department of Homeland Security's (DHS) Project Hawkeye Intelligent Video Analysis, is aimed at assisting the U.S. Coast Guard in South Florida with maritime surveillance by integrating a host of different imaging technologies into an intelligent system capable of automating the detection, tracking, and evaluation of vessel traffic around ports, between ports, and over the horizon. The goal is to create reliable technology that can monitor activities, connect diverse events that suggest potential threats, and alert security personnel when necessary (BAE, 2005; Quintas, 2005).

Long-range imaging systems have considerable value for inspection, search and rescue, and species assessment. In the past, these systems were large, expensive, and power consumptive, requiring dedicated support programs. The future promises reduction below the current 150-200 watt power levels and further reduction in packaging size—thereby allowing deployment on smaller ROVs or AUVs. With current trends toward higher data storage density, the potential exists for replacing current real-time imaging capabilities with onboard image storage for later offloading and processing. These advancements are expected to allow increased usage for biological assessment of population densities, e.g., patterns in distribution and abundance, especially for low population density species. In addition, archival recording of data can allow image post-processing for a variety of applications and uses beyond current interest. The further embellishment of these systems with multi-wavelength lasers could allow spectral differentiation, as well as fluorescence measurements and other data to be acquired for more accurate detection, differentiation, species assessment, and health monitoring.

Summary
It is clear that imaging and image analysis will play an increasing role in remote sensing, with applications in Homeland Security, environmental monitoring and assessment, and basic biological and geological research, with perhaps the most emphasis being applied to collection, transmission, storage, cataloging, and retrieval of information collected over vast distances and areas. The adage "a picture is worth a thousand words" is today perhaps more appropriately rephrased as "information is worth a thousand images," and tasks such as data/image collection, management, and analysis will certainly lead our future endeavors.

Acknowledgements
The authors thank the following colleagues for providing figures and references: … (Woods Hole Oceanographic Institution), Kevin Hardy (Scripps Institution of Oceanography), Dr. Sönke … (University of South Florida), Emory Kristof (National Geographic), Dr. Shahriar Negahdaripour and Dr. Nuno Gracias (University of Miami), Dr. John Potter (National …), … (Israel Institute of Technology), Dr. Andrea Trucco (University of Genoa), and Dr. Edith Widder (Ocean Recon).

References
Adams, H., Singh, S. and Strelow, D. 2002. An empirical comparison of methods for image-based motion estimation. Proc. IEEE/RSJ Int. Conf. IROS, 1, pp. 123-128.
Amend, M., Yoklavich, M., Grimes, C., Wakefield, W. and Rzhanov, Y. 2004. It's in the details: Benthic habitat assessment using laser line scan technology. Proc. 13th Western Groundfish Conf., p. 11.
Anderson, P.A. and Hansen, R.K. 1996. A 3D underwater acoustic camera – properties and application. In: Acoustical Imaging, Masotti and Tortoli, eds., pp. 607-611. New York: Plenum Press.
Anthony, R. 2001. Titanic Revisited: Ghosts of the Abyss. RossAnthony.com. Updated 14 July 2004. Retrieved 25 April 2005.
Bachrach, A.J. 1998. The history of the diving bell. Historical Diving Times, Issue 21. Retrieved 9 September 2005.
BAE. 2005. BAE Systems wins Coast Guard …
Baglio, S., Faraci, C., Foti, E. and Musumeci, R. 2001. Analysis of small scale bedforms with 2D and 3D image acquisition techniques. Proc. …
Bailey, B.C., Blatt, J.H. and Caimi, F.M. 2003. Radiative transfer modeling and analysis of … undersea object detection. IEEE J Oceanic Eng. 28(4):570-582.
Baker, N. 1997. William Thompson – the … Historical Diving Times, 2(19).
Bamji, C. and Charbon, E. 2003. U.S. Patent No. 6,580,496.
Barat, C. and Rendas, M.J. 2002. Classification of sonar measures using optimized wavelets. Proc. MTS/IEEE OCEANS ’02, 4, pp. 2225-2233.
Barbosa, J. and Barroso, V. 2000. Very low bit rate video compression for acoustic transmission of subsea images of hydrothermal vents. Proc. MTS/IEEE OCEANS ’00, 1, pp. 589-594.
Barker, B.A.J., Davis, D.L. and Smith, G.P. 2001. The calibration of laser-referenced underwater cameras for quantitative assessment of marine resources. Proc. MTS/IEEE OCEANS ’01, 3, pp. 1854-1859.
Barufaldi, C., Firoozfam, P. and Negahdaripour, S. 2003. An integrated vision-based positioning system for video stabilization and accurate local navigation and terrain mapping. Proc. MTS/IEEE OCEANS ’03, 5, pp. 2567-2573.
Belcher, E.O., Fox, W.L.J. and Hanot, W.H. 2002. Dual-frequency acoustic camera: a candidate for an obstacle avoidance, gap-filler, and identification sensor for untethered underwater vehicles. Proc. MTS/IEEE OCEANS ’02.
Bellis, M. 2005a. Inventors: George Eastman. About, Inc. Retrieved 25 April 2005.
Bellis, M. 2005c. Inventors: History of the Digital Camera. About, Inc. Retrieved 25 April 2005.
Bellis, M. 2005d. Inventors: Laser history. About, Inc. Retrieved 18 April 2005.
Bellis, M. 2005e. Inventors: The History of Sonar. About, Inc. Retrieved 25 April 2005.
Bertolotti, M. 1983. Masers and lasers: an historical approach. Bristol: Adam Hilger. 268 pp.
Borrillo, D.J. 2001. … review for the aviation medical examiner. Fed. Air Surgeon's Medical Bul., Winter.
Boulinquez, D. and Quinquis, A. 1999. Underwater buried object recognition using wavelet packets and Fourier descriptors. Proc. IEEE Int. Conf. Image Analysis and Processing, pp. 478-483.
Boulinquez, D. and Quinquis, A. 2002. 3-D underwater object recognition. IEEE J Oceanic Eng. 27(4):814-829.
Bowker, K. and Lubard, S.C. 1995. U.S. Patent No. 5,467,122.
Boynton, G.C. and Voss, K. 2004. An in-water digital stereo video system for fish population assessment. Invited talk, Nat. Mar. Fisheries Service Workshop on Analysis of Underwater Video Observations, August 4-6, Seattle, WA.
Buckingham, M.J., Potter, J.R. and Epifanio, C.L. 1996. Seeing underwater with background noise. Scientific American, 274(2):40-44.
Bythell, J.C., Lee, J. and Pan, P. 2001. Three-dimensional morphometric measurements of reef corals using underwater photogrammetry techniques. Coral Reefs, 20(3):193-199.
… optical methods and apparatus. U.S. Patent No. 4,777,501.
…, Gonzalez, F. 1993. Advanced underwater laser systems for ranging, size estimation, and profiling. MTS J. 27(1):31-41.

Caimi, F.M. and Smith, D.C. 1995. U.S. Patent No. 5,418,608.
Caimi, F.M., Kocak, D.M. and Asper, V.L. 1996. Developments in laser-line scanned undersea surface mapping and image analysis systems for scientific applications. Proc. MTS/IEEE OCEANS ’96, Sup., pp. 75-81.
Carder, K.L. 1979. Holographic microvelocimeter for use in studying ocean particle dynamics. Opt Eng. 18:524-525.
Carder, K.L. and Costello, D.K. 1994. Optical effects of large particles. In: Ocean Optics, Spinrad, R.W., Carder, K.L. and Perry, M.J., eds., pp. 243-257. New York: Oxford University Press.
Carey, D.A., Rhoads, D.C. and Hecker, B. 2003. Use of laser line scan for assessment of response of benthic habitats and demersal fish to seafloor disturbance. J Exp Mar Bio and Ecol. 285-286:435-452.
Castellani, U., Fusiello, A., Murino, V., Papaleo, L., Puppo, E., Repetto, S. and Pittore, M. 2004. Efficient on-line mosaicking from 3D acoustical images. Proc. MTS/IEEE OCEANS ’04, 2, pp. 670-677.
Choon, K.L. and Potter, J.R. 2002. Ambient noise imaging: enhanced spatial correlation algorithms and a way to combine independent images for improved stability and false-alarm rejection. Proc. IEEE Int. Symp. on Underwater Tech., pp. 196-201.
Chotiros, N.P., Piper, J.N., Moore, J. and Weigl, D.F. 2001. Small scale seafloor roughness measurements using a ROV. Proc. MTS/IEEE OCEANS ’01, 4, pp. 2525-2529.
Chu, D., Baldwin, K.C., Foote, K.G., Yanchao, L., Mayer, L.A. and Melvin, G.D. 2001. Multibeam sonar calibration: target localization in azimuth. Proc. MTS/IEEE OCEANS ’01, 4, pp. 2506-2510.
Chu, D., Baldwin, K.C., Foote, K.G., Yanchao, L., Mayer, L.A. and Melvin, G.D. 2001. Multibeam sonar calibration: removal of static surface reverberation by coherent echo subtraction. Proc. MTS/IEEE OCEANS ’01.
Coles, B. 1997. Laser line scan systems as environmental survey tools. Ocean News and Tech. 3(4):22-27.
Cronin, T.W., Shashar, N., Caldwell, R.L., Marshall, J., Cheroske, A.G. and Chiou, T.H. 2003. Polarization vision and its role in biological signaling. Integrative and Comparative Biology, 43(4):549-558.
Crovato, D., Ros, B., Filippini, M., Zampato, M. and Frezza, R. 2000. Segmentation of underwater images for AUV navigation. IEEE …
… NikonDigital.org. Updated 7 August 2001. Retrieved 4 September 2005.
Cunha, …, C.J. 2000. Simultaneous compression and denoising of side scan sonar images using the discrete wavelet transform. Proc. MTS/IEEE OCEANS ’04, 1, pp. 195-199.
Cutter, G.R., Rzhanov, Y. and Mayer, L.A. 2002. Automated segmentation of seafloor bathymetry from multibeam echosounder data using local Fourier histogram texture features. J Exp Mar Bio and Ecol. 285-286:355-370.
Dandridge, A. 1994. The development of fiber optic sensor systems. Proc. SPIE 10th Optical Fibre Sensors Conf., 2360, p. 154.
Davis, D.L. 2005. Personal communication at MBARI, Moss Landing, CA, April 8.
Deeter, M., Day, G., Beahn, T. and Manheimer, M. 1993. Magneto-optic magnetic field sensor with 1.4 pT/√Hz noise equivalent field at 1 kHz. Electron Lett. 29:993-994.
DxO Labs. 2005. DxO Labs Home Page. Retrieved 1 May 2005.
EdgeTech. 2004. EdgeTech to exhibit new system at Undersea Defense Technology (UDT) 2004. 4 June 2004. Retrieved 7 June 2005.
Edwards, J.R. and Schmidt, H. 2002. Real-time classification of buried targets with teams of unmanned vehicles. Proc. MTS/IEEE OCEANS ’02, 1, pp. 29-31.
Edwards, J.R. and Schmidt, H. 2003. Target re-acquisition using acoustic features with an autonomous underwater vehicle-borne sonar. Proc. 146th Meeting, Acoustical Soc. of America.
Espiau, F.-X. and Rives, P. 2001. Extracting robust features and 3D reconstruction in underwater images. Proc. MTS/IEEE OCEANS ’01, 4, pp. 2564-2569.
Fenton, M. 2004. From Baird to MPEG, a history of video. Transdiffusion Broadcasting System. Updated 24 May 2004. Retrieved 20 April 2005.
Fernandez, …, and Stroud, J.S. 2004. Synthetic aperture sonar development for autonomous underwater vehicles. Proc. MTS/IEEE OCEANS ’04, 4, pp. 1927-1933.
Firoozfam, P. and Negahdaripour, S. 2002. Multi-camera conical imaging: calibration and robust 3-D motion estimation for ROV-based mapping and positioning. Proc. MTS/IEEE OCEANS ’02, 3, pp. 1595-1602.
Firoozfam, P., Negahdaripour, S. and Barufaldi, C. 2003. A conical panoramic stereo imaging system for 3-D scene reconstruction. Proc. MTS/IEEE OCEANS ’03, 4, pp. 2303-2308.
Fischler, M.A. and Bolles, R.C. 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Comm. of the ACM, 24, pp. 381-395.
Flakus, G. 2005. The offshore technology conference is a sea of resources, an ocean of knowledge. VOA. Updated 19 May 2005. Retrieved 21 May 2005 <http://www.voanews.com/english/2005-05-19-voa45.cfm>.
Fleischer, S.D., Wang, H.W., Rock, S.M. and Lee, M.J. 1996. Video mosaicking along … AUV Technology, pp. 293-299.

Fluxgold, H. 2003. A clearer view of bottom. SFU News, 28(6).
Foresti, G.L., Murino, V., Regazzoni, C.S. and Trucco, A. 1997. A voting-based approach for fast object recognition in underwater acoustic images. IEEE J Oceanic Eng. 22(1):57-65.
Francis, K. and Tuell, G. 2005. Rapid environmental assessment: The next advancement in airborne bathymetric LIDAR. Ocean News & Tech. 11(3):2-4.
Fuchs, E. 2004. Multidimensional laser scanning system to test new concepts in underwater imaging. Proc. MTS/IEEE OCEANS ’04, pp. 1224-1228.
… system to evaluate the accuracy of a visual mosaicking methodology. Proc. MTS/IEEE OCEANS ’01, 4, pp. 2570-2576.
Gluckman, J. and Nayar, S.K. 1999. Planar catadioptric stereo: geometry and calibration. IEEE CVPR 1999, 1:23-25.
Gracias, N. and Santos-Victor, J. 2001. … reconstruction using global alignment. Proc. MTS/IEEE OCEANS ’01, 4, pp. 2557-2563.
Gracias, N., van der Zwaan, S., Bernardino, A. and Santos-Victor, J. 2002. Results on underwater mosaic-based navigation. Proc. MTS/IEEE OCEANS ’02, 3, pp. 1588-1594.
Gracias, N.R., van der Zwaan, S., Bernardino, A. and Santos-Victor, J. 2003. Mosaic-based navigation for autonomous underwater vehicles. IEEE J Oceanic Eng. 28(4):609-624.
Gracias, N. and Negahdaripour, S. 2005. Underwater mosaic creation using video sequences from different altitudes. Proc. MTS/IEEE OCEANS ’05, 1, pp. 1234-1240.
Hamano, A., Umetani, Y., Tanoue, H. and Nakamura, T. 2004. Application of marine GIS using information from sonar for coastal fisheries. Proc. MTS/IEEE OCEANS ’04, 1, pp. 428-432.
Hammond, J.H. 1981. The camera obscura, a chronicle. Bristol: Adam Hilger Ltd. 182 pp.
Hardy, K., Olsson, M., Yayanos, A.A., Prsha, J. and Hagey, W. 2002. Deep Ocean Visualization Experimenter (DOVE): low-cost 10 km camera and instrument platform. Proc. MTS/IEEE OCEANS ’05, 4, pp. 2390-2394.
Harris, B. 2002. World submarine history timeline 1580-2000. Revised 25 October 2002. Retrieved 23 April 2005.
Hart, N.S., Lisney, T.J., Marshall, N.J. and Collin, S.P. 2004. Multiple cone visual pigments … J Exp Biol. 207:4587-4594.
Hewitt, J.E., Thrush, S.F., Legendre, P., … 2004. Mapping of marine soft-sediment communities: integrated sampling for ecological interpretation. Eco. Applications, 14(4):1203-1216.
Hoaq, D.F., Ingle, V.K. and Gaudette, R.J. 1997. Low-bit-rate coding of underwater video using wavelet-based compression algorithms. IEEE J Oceanic Eng. 22(2):393-400.
Hohler, S. 2002. Depth records and ocean volumes: ocean profiling by sounding technology, 1850-1930. History and Technology, 2(18):119-154.
Hu, C. and Voss, K.J. 1997. Solar stimulated inelastic light scattering in clear sea water. Proc. SPIE, 2963, pp. 266-271.
Hunter, J., Witana, V. and Antoniades, M. 1997. A review of video streaming over the Internet. DSTC Technical Report TR97-10. Revised 19 April 2000. Retrieved 4 May 2005.
Jaffe, J.S., McLean, J., Strand, M.P. and Moore, K.D. 2001. Underwater optical imaging: status and prospects. Oceanography, 14(3):66-76.
Johnsen, S. 2005. Visual ecology on the high seas. Mar Ecol Prog Ser. 287:281-285.
Johnsen, S. and Widder, E.A. 2001. Ultraviolet absorption in transparent zooplankton and its implications for depth distribution and …
Johnson, D. 2005. JPEG2000 coming to a screen near you. Embedded Technology, August 2005. Retrieved 9 Sept. 2005.
Jones, D. 2001. Marketplace: underwater photographic equipment. UnderWater Magazine, May/June. Retrieved 19 April 2004.
Jones, I.S.F. 1999. High resolution underwater … ’99, 3, pp. 1093-1097.
Kahn, J.M., Katz, R.H. and Pister, K.S.J. 1999. Mobile networking for Smart Dust. ACM/IEEE Int. Conf. on Mobile Computing and Networking (MobiCom 99), Seattle, WA, August 17-19.
Katz, J., Zhang, P.L., King, S. and Russell, K. 1999. Submersible holocamera for detection of particle characteristics and motions in the ocean. Deep Sea Res. Part 1, 46(8):1455-1481.
Khamene, A. and Negahdaripour, S. 2003. Motion and structure from multiple cues: image motion, shading flow, and stereo disparity. Computer Vision and Image Understanding, 90(1):99-127.
… prerequisites. Diver Magazine, December.
… shows AVClairity encoder solution. 9 Sept. 2005. Retrieved 11 Sept. 2005 <http://www.hdissues.com/articles/viewarticle.jsp?id=34543&afterinter=true>.

Grundberg, A. 2005. Photography, History of. Microsoft® Encarta® Online Encyclopedia. Retrieved 19 April 2005.
Kim, K., Neretti, N. and Intrator, N. 2004. Acoustic camera image mosaicking and super-resolution. …

Kirkwood, W., White, S.N., Brown, M., Henthorn, R., Jensen, S., Salamy, K.A., Peltzer, E.T. and Brewer, P. 2003. Precision underwater positioning for in situ laser Raman spectrographic applications. Proc. MTS/IEEE OCEANS ’03, 2, pp. 838-843.
Kocak, D.M. and Caimi, F.M. 1999. Surface metrology & 3-D imaging using laser line scanners. Ocean Sys Design, 3(4):4-6.
Kocak, D.M. and Caimi, F.M. 2001. […]. In: Ocean Engineering Handbook, F. El-Hawary, ed., pp. 20-43. Boca Raton: CRC Press LLC.
Kocak, D.M., Jagielo, T.H., Wallace, F. and Kloske, J. 2004. Remote sensing using laser projection photogrammetrically aided underwater video system for quantification and mensuration in underwater surveys. Proc. IEEE IGARSS ’04, 2, pp. 1451-1454.
Kristof, E. 2005. MTS/IEEE OCEANS ’05 presentation and personal communications on September 24, 2005.
Lacey, A.J., Pinitkarn, N. and Thacker, N.A. 2000. An evaluation of the performance of RANSAC algorithms for stereo camera calibration. Eleventh British Machine Vision Conference (BMVC2000), Sept.
Leabourne, K.N., Rock, S.M., Fleischer, S.D. and Burton, R.L. 1997. Station keeping of an ROV using vision technology. Proc. MTS/IEEE OCEANS ’97, 1, pp. 634-640.
Leathem, J. and Coles, B.W. 1993. Use of laser sources for search and survey. Proc. UI 1993.
[…] section on electromagnetic imaging and inversion of the earth’s subsurface. Proc. IOP, 16(5).
Liu, H.T. 2001. A video-based stereoscopic imaging and measurement system (SIMS) for undersea applications. Proc. MTS/IEEE OCEANS ’01, 1, pp. 275-286.
Losey, G.S., McFarland, W.N., Loew, E.R., Zanzow, J.P., Nelson, P.A. and Marshall, N.J. 2003. Visual biology of Hawaiian coral reef fishes. I: Ocular transmission and visual pigments. Copeia 2003, pp. 433-454.
Lots, J.F., Lane, D.M., Trucco, E. and Chaumette, F. 2001. A 2-D visual servoing for underwater vehicle station keeping. Proc. IEEE Conf. Robotics and Automation, 3, pp. 2767-2772.
Lucent Technologies, 2005. Inventors: CCD – The History of CCDs or Charge Coupled Devices. About, Inc. Retrieved 4 September 2005.
[…], E.M., Rinehart, R., Anderson, S.M., Battista, T., Naar, D.F. and Donahue, B.T. 2005. Classifying benthic terrains with multibeam bathymetry, bathymetric position and rugosity: Tutuila, American Samoa. Marine Geodesy, in review as of September 2005.
Madjidi, H. and Negahdaripour, S. 2004. On robustness and localization accuracy of optical flow computation from color imagery. Proc. IEEE 2nd Int. Symp. 3DPVT 2004, pp. 317-324.
Marshall, N.J., Cronin, T.W. and Frank, T.M. 2003. Visual adaptations in crustaceans: chromatic, developmental, and temporal aspects. In: Sensory Proc. in Aquatic Environments, Collin, S.P. and Marshall, N.J., eds., pp. 343-372. New York: Springer.
Marshall, P. 2005a. Photography: A Flash of Light. About, Inc. Retrieved 25 April 2005.
Marshall, P. 2005b. Photography: The Birth of Electronic Flash. About, Inc. Retrieved 25 April 2005.
Mazel, C.H. 2002. Coastal Benthic Optical Properties (CoBOP): Optical properties of benthic marine organisms and substrates. Final Report, No. A435604, 10 pp.
Mazel, C.H., Strand, M.P., Lesser, M.P. and Crosby, M.P. 2003. High resolution determination of coral reef bottom cover from multispectral fluorescence laser line scan imagery. Limnol Oceanogr., 48(1, Part 2):522-534.
Milius, S. 2004. Hide and see: Conflicting views of reef-fish colors. Science News, 166(19):296-302.
Mitchell, A. 2004. The application of Monterey Bay Aquarium Research Institute (MBARI) remotely operated vehicle (ROV) video footage to benthic habitat mapping: a pilot study of Carmel Canyon head. MBARI 2004. Retrieved 19 May 2005.
Moore, K.D., Jaffe, J.S. and Ochoa, B.L. 2000. Development of a new underwater bathymetric laser imaging system: L-Bath. J Atmos and Ocean Tech., 17(8):1106-1117.
Moore, K.D. 2001. Intercalibration method for underwater three-dimensional mapping laser line scan systems. Applied Optics, 40(33):5991-6004.
Moran, S.E., Ginaven, R.O. and Odeen, E.P. 1993. Dual detector lidar system and method. U.S. Patent No. 5,270,780.
Mozzone, L., Bongi, S. and Filocca, F. 1999. Diversity in multistatic active sonar. Proc. MTS/IEEE OCEANS ’99, 2, pp. 1058-1063.
Mu, J. 2005. China’s compression choice. EDN, pp. 56-57, June 23, 2005.
Mullen, L.J., Contarino, V.M., Strand, M.P. and Coles, B. 1999. Modulated laser line scanner for enhanced underwater imaging. Airborne and In-Water Underwater Imaging.
Mullen, L., Laux, A., Concannon, B., Zega, E.P., Katsev, I.L. and Prikhach, A.S. 2004. Amplitude modulated laser imager. Applied Optics, 43(19):3874-3892.
Murino, V. and Trucco, A. 1999. A confidence-based approach to enhancing underwater acoustic image formation. IEEE Trans. on Image Processing, 8(2):270-285.
Murino, V. and Trucco, A. 2000a. Underwater computer vision and pattern recognition. Computer Vision and Image Understanding, 79(1):1-3.

Murino, V., Fusiello, A., Iuretigh, N. and Puppo, E. 2000b. 3D mosaicking for environment reconstruction. Proc. IEEE Conf. Pattern Recognition, 3, pp. 358-360.
Murino, V. and Trucco, A. 2000c. Three-dimensional image generation and processing in underwater acoustic vision. Proc. of the IEEE, 88(12), pp. 1903-1946.
Nave, J.A. and Edwards, J.R. 2003. Fast field prediction via neural networks. Proc. 6th Int. Conf. on Theoretical & Comp. Acoustics.
Negahdaripour, S., Xu, X., Khamene, A. and Awan, Z. 1998a. 3-D motion and depth estimation from sea-floor images for mosaic-based […] ROVs/AUVs and high-resolution sea-floor mapping. Proc. AUV ’98, pp. 191-200.
Negahdaripour, S., Zhang, S., Xun, X. and […] recovery from underwater imagery for 3D mapping and motion-based video compression. Proc. MTS/IEEE OCEANS ’98, 1, pp. 277-281.
[…] Direct estimation of motion from sea floor images for automatic station-keeping of submersible platforms. IEEE J Oceanic Eng. 24(3):370-382.
Negahdaripour, S., Zhang, H., Firoozfam, P. […] applications. Proc. MTS/IEEE OCEANS ’01, 4, pp. 2593-2600.
Negahdaripour, S. and Xu, X. 2002. Mosaic-based positioning and improved motion-estimation methods for automatic navigation of submersible vehicles. IEEE J Oceanic Eng. 27(1):79-99.
Negahdaripour, S. and Madjidi, H. 2003. Stereovision imaging on submersible platforms for 3-D mapping of benthic habitats and seafloor structures. IEEE J Oceanic Eng. 28(4):625-650.
Negahdaripour, S. and Firoozfam, P. 2005. An ROV stereovision system for ship hull inspection. To appear: IEEE J Oceanic Eng.
Negahdaripour, S., Firoozfam, P. and Sabzmeydani, P. 2005. On processing and registration of forward-scan acoustic video imagery. Proc. Canadian Conf. on Comp. and Robot Vision ’05, pp. 452-459.
Nevis, A.J. and Strand, M.P. 2000. Image processing and qualitative interpretation of fluorescence imaging laser line scan data. In: Ocean Optics XV, Monaco.
NHK, 1998. Technologies for advanced image presentation. NHK (Japan Broadcasting Corporation). Retrieved 4 Sept. 2005.
NightSea, 2005a. History of underwater fluorescence: observation and photography. NightSea. Retrieved 18 April 2005.
NightSea, 2005b. Fluorescence Images by Steve Ackleson. NightSea. Retrieved 9 September 2005.
NOAA Biogeography Program (NWHI), 2003. Benthic Habitat Mapping: Northwest Hawaiian Islands Classification Manual. National Ocean Service. Updated 3 June 2002. Retrieved 3 May 2005.
[…] quantify transect area for remotely operated vehicle (ROV) rockfish and abalone population surveys. Proc. MTS/IEEE OCEANS ’05, 5 pp.
Olson, S. 2003. Computer vision makes rapid progress. Geek.com. Updated 24 March 2003. Retrieved 18 April 2005.
Olsson, M. 1999. Undersea Imaging. MTS J., State of the technology report – advanced marine technology division, J. Jaeger, ed., 33(3):103-104.
Optech, 2003. SHOALS-1000T: The Next Generation of Airborne Laser Bathymetry. Optech, Inc. Retrieved 20 May 2005.
[…] calibrated imaging platforms. Proc. 7th Digital Image Computing: Techniques and App., Sun, C., Talbot, H., Ourselin, S. and Adriaansen, T. (eds.), pp. 601-612.
Pack, R.T. and Frederick, B. 2003. U.S. Patent No. 6,664,529.
Parry, D.M., Kendall, M.A., Pilgrim, D.A. and Jones, M.B. 2003. Identification of patch structures within marine benthic landscapes using a remotely operated vehicle. J Exp Mar Bio and Eco. 285-286:497-511.
Pearson, G. and Gill, M. 2005. An evaluation of motion JPEG 2000 for video archiving. Proc. Archiving 2005, IS&T (www.imaging.org), pp. 237-243.
Pessel, N., Opderbecke, J. and Aldon, M.J. 2003. Camera self-calibration in underwater environment. Proc. 11th Int. Conf. WSCG.
Petillot, Y., Tena Ruiz, I. and Lane, D.M. 2001. Underwater vehicle obstacle avoidance and path planning using a multi-beam forward looking sonar. IEEE J Oceanic Eng. 26(2):240-251.
Picardi, M. and Jan, T. 2003. Recent advances in computer vision. The Industrial Physicist.
Pilgrim, D.A., Parry, D.M. and Rimmer, S. 2001. The underwater optics of Abiss (Autonomous Benthic Image Scaling System). Ocean Optics VI, Institute of Physics, London.
Pizarro, O., Eustice, R. and Singh, H. 2004. Large area 3D reconstructions from underwater surveys. Proc. MTS/IEEE OCEANS ’04, 2, pp. 678-687.
[…] Uncalibrated vision for 3-D underwater applications. Proc. MTS/IEEE OCEANS ’98, 1, pp. 272-276.

24 Marine Technology Society Journal Plakas, K. and Trucco, E. 2000. Developing a Schechner, Y.Y. and Karpel, N. 2004. Clear Strickrott, J.A. and Negaharipour, S. 1997. On real-time, robust, video tracker. Proc. MTS/ . IEEE CVPR 2004, 1, pp. the development of an active vision system for IEEE OCEANS ’00, 2, pp. 1345-1352. 536-543. 3-D scene reconstruction and analysis from underwater images. Proc. MTS/IEEE Potter, J.R. 1999. Challenges of seeing Schmalz, M.S. 2005. Recent advances in OCEANS ’97, 1, pp. 626-633. underwater – a vision for tomorrow. Naval object-based image compression. Proc. IEEE Platform Technologies Seminar, Singapore. Data Compression Conf., pp. 478. Takahashi, H., Sawada, K., Watanabe, K., Horne, J.K., McClatchie, S., Takao, Y. and Potter, J.R., and Chitre, M.A. 1999. Ambient Schock, S.G., Tellier, A., Wulf, J., Sara, J. and Abe, K. 2004. Development of a stereo tv noise imaging in warm shallow seas; second- Ericksen, M. 2001. Buried object scanning camera system to complement fish school order moment and model-based imaging sonar. J IEEE Oceanic Eng. 26(4):677-689. measurements by a quantitative echo sounder. algorithms. J Acoust Soc Am. 106(6):3201-3210. HBOI, 2005. Deep-sea exploration beneath Proc. MTS/IEEE OCEANS ’04, 1, pp. 409-414. Katrina’s wake. Harbor Branch Oceano- Quintas, P. 2005. DHS Project Hawkeye graphic Institution. Updated 1 Sept. 2005. Tan, K.-H., Hua, H. and Ahuja, N. 2004. Intelligent Video Analysis. Public Eye, High-Tech Retrieved 9 Sept. 2005 < http:// Multiview panoramic cameras using mirror Vision reports, 19 May 2005. Retrieved 24 May www.hboi.edu/news/press/sept0105.html>. pyramids. Proc. IEEE PAMI, 26(7), pp. 941-946. 2005 . 1998. Polarization vision helps detect 4,189,211. transparent prey. Nature. 393:222-223. Repetto, S., Palmeses, M. and Trucco, A. Theberge, A.E. 1989. Sounding pole to sea 2002. Projection and mosaicking of real data Sloan, M., Germano, J.D., Rhoads, D.C., beam. 
ASPRS/ACSM Surveying and gathered with a front-scan sonar system. Proc. Smith, C., Michaud, E., Parry, D., Wenzhofer, Cartography, 5: 334-346. MTS/IEEE OCEANS ’02, 4, pp. 2466-2471. F., Kennedy, B., Henriques, C., Battle, E., Thompson, B.J. 1990. Selected papers on Carey, D., Iocco, L., Valente, R., Watson, J. Rife, J. and Rock, S.M. 2001. A pilot-aid for holographic particle diagnostics, Vikram, C.S. and Rosenberg, R. 2003. Towards a greater ROV based tracking of gelatinous animals in (ed.), Milestone Series, 624 pp. understanding of pattern, scale and process in the midwater. Proc. MTS/IEEE OCEANS marine benthic systems: a picture is worth a Tidd, R. 2005. C3D: A novel acoustic ’01, 2, pp. 1137-1144. thousand worms. J Exp Mar Bio. and Eco. technology for port and harbor surveillance. Rife, J. and Rock, S.M. 2003. Segmentation 285-286:313-338. 2005 IEEE Conf. on Tech. for Homeland methods for visual tracking of deep-ocean Security, poster session. Boston: IEEE. Soriano, M., Marcos, S., Saloma, C., jellyfish using a conventional camera. IEEE J Quibilan, M. and Alino, P. 2001. Image Trucco, A. 2001. Detection of objects buried Oceanic Eng. 28(4):595-608. classification of coral reef components from in the seafloor by a pattern recognition Rocchini, C., Cignoni, P., Ganovelli, F., underwater color video. Proc. MTS/IEEE approach. IEEE J Oceanic Eng. 26(4):769-782. Montani, C., Pingi, P. and Scopigno, R. 2001. OCEANS ’01, 2, pp. 1008-1013. Trucco, A., Palmese, M., Fusiello, A. and Marching intersections: an efficient resampling Strand, M.P., Coles, B.W., Nevis, A.J. and Murino, V. 2002. Three-dimensional algorithm for surface management. IEEE Int. Regan, R. 1996. Laser line scan fluorescence underwater acoustical imaging and processing. Conf. on Shape Modeling and App., pp. 296-305. and multi-spectral imaging of coral reef In: Underwater Acou Dig Sig Proc and Comm Roman, C. and Singh, H. 2001. Estimation of environments. Proc. SPIE Ocean Optics Sys, Istepanian, R.S.H. 
and Stojanovic, M., eds., error in large area underwater photomosaics XIIII, 2963, pp. 790-795. New York: Kluwer Academic Publishers, pp. 28. using vehicle navigation data. Proc. MTS/ Strand, M. 2002. Mine hunting using Trucco, E., Petillot, Y.R., Ruiz, I.T., Plakas, K. IEEE OCEANS ’01, 3, pp. 1849-1853. fluorescence imaging laser line scan (FILLS) and Lane, D.M. 2000. Feature tracking in Rzhanov, Y., Cutter, G.R. and Huff, L. 2001. imagery. Proc. Ocean Optic XVI Conf. video and sonar subsea sequences with Sensor-assisted video mosaicing for seafloor applications. Comp. Vision and Image Strickler, J.R. and Hwang, J.S. 2000. Matched mapping. IEEE Proc. Int. Conf. on Image Understanding, 79:92-122. spatial filters in long working distance Proc., 2, pp. 411-414. microscopy of phase objects. In: Focus on Tu , C.-K. and Jiang, Y.-Y. 2004. Development Modern Microscopy, Cheng, P.C., Hwang, of noise reduction algorithm for underwater P.P., Wu.J.L., Wang, G. and Kim, H., eds. signals. Int. Symp. on UT ’04, pp. 175-179. New Jersey: World Scientific Publishing.

Fall 2005 Volume 39, Number 3 25 Venugopalan, P., Chitre, M.A., Tan, E.T., Zhang, H. and Negahdaripour, S. 2003. On Potter, J., Beng, K.T., Ruiz, S.B. and Tan, S.P. reconstruction of 3-D volumetric models of 2003. Ambient noise imaging – first deploy- coral reef and benthic structures from image ments of ROMANIS and preliminary data sequences of a stereo rig. Proc. MTS/IEEE analysis. Proc. MTS/IEEE OCEANS ’03, 2, OCEANS ’03, 5, pp. 2553-2559. pp. 882-888. Zhang, H. and Negahdaripour, S. 2004. Wang, C.C., Shyue, S.W., Hsu, H.C., Sue, Improved temporal correspondences in stereo- J.S. and Huang, T.C. 2001. CCD camera vision by RANSAC. ICPR04, 4, pp. 52-55. calibration for underwater laser scanning Zhang, H. and Negahdaripour, S. 2005. system. Proc. OCEANS ’01, 4, pp. 2511-2517. Epiflow quadruplet matching: enforcing Watson, J., Alexander, S., Craig, G., Hendry, epipolar geometry for spatio-temporal stereo D.C., Hobson, P.R., Lampitt, R.S., Marteau, correspondences. Proc. IEEE WACV 2005, J.M., Naried, H., Player, M.A., Saw, K. and pp. 481-486. Tipping, K. 2001. Simultaneous in-line and Zyra, S., Kullander, F. and Laurent, C. 1995. off-axis subsea holographic recording of Fibre Optic Magnetic and Electric Field and other marine particles. Meas Sci Sensors and Fibre Optic Sensor Multiplexing. Tech. 12, L9-L15. Proc. Hudroakustik 1995. Weidemann, A.D., Fournier, G.R., Forand, L.L., Mathiew, P., and Mclean, S. 2002. Using a laser underwater camera image enhancer for mine warfare applications: What is gained? NRL, Stennis Space Center, MS, Technical Report, No. A731714, 9 pp., April 2002.

White, S.N., Kirkwood, W.J., Sherman, A., Brown, M., Henthorn, R., Salamy, K.A., Peltzer, E.T., Walz, P. and Brewer, P.G. 2004. Laser Raman spectroscopic instrumentation for in situ geochemical analyses in the deep ocean. Proc. MTS/IEEE OCEANS ’04, 1, pp. 95-100.

Wiebe, P.H., Stanton, T.K., Greene, C.H., Benfield, M.C., Sosik, H.M., Austin, T.C., Warren, J.D. and Hammar, T., 2002. IEEE J Oceanic Eng. 27(3):700-716.

Wilkerson, T.D., Sanders, J.A. and Andrus, I.Q. 2003. U.S. Patent, No. 6,535,158. Xu, X. and Negahdaripour, S. 1997. Vision- based motion sensing for underwater navigation and mosaicing of ocean floor images. Proc. MTS/IEEE OCEANS ’97, 2, pp. 1412-1417.

Yu, Y., Lu, X. and Wang, X. 2003. Image processing in seawater based on measured modulation transfer function. IEEE PACRIM Comm. Comp. and Sig. Processing, 2, pp. 712-715.

26 Marine Technology Society Journal