Visualizing Multi-Hectare Seafloor Habitats with BioCam

By Blair Thornton, Adrian Bodenmann, Takaki Yamada, David Stanley, Miquel Massot-Campos, Veerle Huvenne, Jennifer Durden, Brian Bett, Henry Ruhl, and Darryl Newborough
Full Text

Range or resolution? We often get asked this question when mapping the seafloor. And it is important because the type of data we choose to collect fundamentally changes the science that can follow. Photos taken by camera-equipped autonomous underwater vehicles (AUVs) represent one extreme of the range/resolution trade-off, where sub-centimeter resolutions can be achieved, but typically only from close ranges of 2 m to 3 m. Taking images from higher altitudes increases the area mapped during visual surveys in two ways. First, a larger footprint can be observed in each image, and second, the lower risk of collision with rugged terrains when operating at higher altitudes allows use of flight-style AUVs (e.g., Autosub6000 shown in Figure 1), which are faster and more energy efficient than the hover-capable vehicles typically used for visual surveys. Combined, these factors permit several tens to more than a hundred hectares of the seafloor to be mapped in a single AUV deployment.
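To make the trade-off concrete, the sketch below works through the scaling for an idealized downward-looking camera: for a fixed field of view, the across-track swath and the per-pixel footprint both grow linearly with altitude, so flying higher trades resolution for coverage. The 70° field of view, 1 m/s speed, and 20-hour deployment are illustrative assumptions; only the 2,560-pixel sensor width comes from the system description below.

```python
# A minimal sketch of the range/resolution trade-off for an idealized
# downward-looking survey camera. Field of view, speed, and dive duration
# are assumed values for illustration, not BioCam specifications.
import math

def swath_and_pixel(altitude_m, fov_deg=70.0, pixels_across=2560):
    """Across-track swath (m) and per-pixel footprint (mm) at a given altitude."""
    swath_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    pixel_mm = swath_m / pixels_across * 1000.0
    return swath_m, pixel_mm

for altitude_m in (2.0, 6.0, 10.0):
    swath_m, pixel_mm = swath_and_pixel(altitude_m)
    # Area covered in a 20 h deployment at 1 m/s (both assumed values).
    hectares = swath_m * 1.0 * 3600 * 20 / 10_000
    print(f"{altitude_m:4.0f} m altitude: swath {swath_m:4.1f} m, "
          f"pixel {pixel_mm:4.1f} mm, ~{hectares:3.0f} ha per dive")
```

Under these assumptions, a 2 m altitude yields roughly millimeter-scale pixels over ~20 ha, while 10 m yields ~5 mm pixels over ~100 ha, consistent with the tens-to-more-than-a-hundred-hectare figure above.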

 

FIGURE 1. BioCam consists of a central unit with a stereo pair of cameras and control electronics, fore and aft dual LED strobes, and line lasers that are used to generate 3D color reconstructions of the seafloor.

 

BioCam is a high-altitude three-dimensional (3D) imaging system that uses a stereo pair of high-dynamic-range scientific complementary metal-oxide-semiconductor (sCMOS) cameras, each with 2,560 × 2,160 pixel resolution, mounted in a 4,000 m rated titanium housing. The housing has domed windows to minimize image distortion and also contains low-power electronics for communication, data storage, and control of the dual LED strobes and dual line lasers BioCam uses to acquire 3D imagery. The LED strobes each emit 200,000 lumens of warm-hue white light for 4 milliseconds. The lasers each project a green line (525 nm, 1 W, Class 4) onto the seafloor at right angles to the AUV’s direction of travel to measure the shape of the terrain. The optical components are arranged along the bottom of the AUV, with an LED and a laser each mounted fore and aft of the cameras (Figure 1). The large separation between these illumination sources and the cameras ensures that high-quality images and high-resolution bathymetry data can be gathered from target altitudes of 6 m to 10 m.
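Terrain measurement from a projected laser line rests on triangulation against the known geometry between laser and camera. The following is a simplified sketch of that principle over flat terrain; the baseline and angles are illustrative, not BioCam's calibrated values, and k-means-style simplifications like this are exactly what the calibration and uncertainty analysis of Leat et al. (2018) refines.

```python
# A simplified sketch of laser-line triangulation for a downward-looking
# camera with a line laser mounted a known baseline away. All numbers are
# illustrative assumptions, not BioCam's actual geometry or calibration.
import math

def laser_range(baseline_m, laser_tilt_deg, pixel_angle_deg):
    """Range where the camera ray and the laser sheet intersect.

    baseline_m      -- fore/aft offset between laser and camera
    laser_tilt_deg  -- tilt of the laser sheet from vertical, toward the camera
    pixel_angle_deg -- viewing angle of the imaged laser line, from nadir
    """
    return baseline_m / (math.tan(math.radians(laser_tilt_deg)) +
                         math.tan(math.radians(pixel_angle_deg)))

# Small shifts in where the green line appears in the image map directly to
# range, yielding one across-track bathymetry profile per image.
for pixel_angle_deg in (2.0, 4.0, 6.0):
    r = laser_range(baseline_m=1.0, laser_tilt_deg=5.0,
                    pixel_angle_deg=pixel_angle_deg)
    print(f"line at {pixel_angle_deg:.0f} deg from nadir -> range {r:.2f} m")
```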

The large dynamic range of the sCMOS cameras is necessary for high-altitude imaging because red light attenuates much more strongly than green and blue light in water (Figure 2). A large dynamic range allows detection of low-intensity red light with sufficient bit resolution to restore color information, while simultaneously detecting the more intense light of the other color channels without saturation. Range information from the dual lasers allows the distance light travels from the strobes to each imaged point to be calculated for accurate color rectification (see Figure 2). Rectified color is projected onto the laser point cloud and fused with AUV navigation data to generate texture-mapped 3D visual reconstructions (Bodenmann et al., 2017). The BioCam processing pipeline calibrates the dual laser setup so that quantitative length, area, and volumetric measurements can be made together with estimates of dimensional uncertainty, without the need for artificial field calibration targets (Leat et al., 2018).
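As a rough illustration of the rectification step, the sketch below applies a simple Beer-Lambert attenuation model: each color channel is boosted by the exponential loss accumulated along its per-pixel light path. The attenuation coefficients are assumed round numbers for clear ocean water, and scattering is ignored; the physics-based pipeline actually used is described in Bodenmann et al. (2017).

```python
# A minimal sketch of range-compensated color rectification under a
# Beer-Lambert attenuation model (scattering ignored). The per-channel
# coefficients are assumed values, not those used by the BioCam pipeline.
import numpy as np

# Red attenuates far more strongly than green and blue in ocean water,
# which is why raw high-altitude images have a blue-green hue.
ATTENUATION = np.array([0.40, 0.07, 0.04])  # R, G, B in 1/m (assumed)

def rectify(raw_rgb, path_length_m):
    """Undo exponential attenuation along the strobe-to-seafloor-to-camera path.

    raw_rgb       -- (H, W, 3) linear image with values in [0, 1]
    path_length_m -- (H, W) per-pixel light path length from the laser range map
    """
    gain = np.exp(ATTENUATION[None, None, :] * path_length_m[:, :, None])
    return np.clip(raw_rgb * gain, 0.0, 1.0)

# A mid-gray seafloor patch observed through a 14 m light path looks strongly
# blue-green in the raw image; rectification restores the channel balance.
path = np.full((1, 1), 14.0)
raw = 0.5 * np.exp(-ATTENUATION[None, None, :] * path[:, :, None])
print("raw:", raw[0, 0].round(3), "-> rectified:", rectify(raw, path)[0, 0].round(3))
```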

 

FIGURE 2. Laser-derived 3D range information is used to rectify the color information in the images. Physics-based image formation models use the range maps to compensate for the wavelength-dependent attenuation of light in water. This allows the darkening effects and the blue-green hue seen in the raw images to be rectified even over rugged terrains.

 

Although 3D reconstructions are useful for studying detailed seafloor information, exploring them is both time-consuming and subjective. To help plan more effective data acquisition during research expeditions, it is valuable to be able to understand large georeferenced image data sets in expedition-relevant time frames. For this, we have developed location-guided unsupervised learning methods (Yamada et al., 2021) that automatically learn the features that best describe images in a georeferenced data set, without needing any human input for interpretation. These features are used to cluster images into groups with similar appearances and to identify the most representative images in each cluster. They also allow scientists to flexibly query data sets by ranking all images in order of their similarity to any input image, where the ranked outputs for different query images can be generated in milliseconds. Both the clustering and query returns can be visualized using georeference information to identify spatial patterns in the data sets.
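A minimal sketch of the clustering and query mechanics follows, with random unit vectors standing in for the learned features; in the actual method, features come from the location-guided autoencoders of Yamada et al. (2021), and k-means is used here only as one simple choice of clustering algorithm. With unit-norm features, a full similarity ranking reduces to a single matrix-vector product, which is what makes millisecond query returns plausible.

```python
# A minimal sketch of clustering, representative-image selection, and
# content-based query over learned image features. Random vectors stand in
# for features from location-guided autoencoders (Yamada et al., 2021).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 64))   # one 64-d feature per image (assumed size)
features /= np.linalg.norm(features, axis=1, keepdims=True)

# Group images with similar appearance; the image closest to each cluster
# centroid serves as that cluster's representative.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)
representatives = []
for k, centroid in enumerate(km.cluster_centers_):
    members = np.flatnonzero(km.labels_ == k)
    representatives.append(members[np.argmin(
        np.linalg.norm(features[members] - centroid, axis=1))])

# Content-based query: rank all images by cosine similarity to any input
# image -- a single matrix-vector product over unit-norm features.
query = features[42]
ranking = np.argsort(features @ query)[::-1]
print("cluster representatives:", representatives)
print("top 5 matches for image 42:", ranking[:5])
```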

Figure 3 shows an example of a 3D visual reconstruction collected during a survey of the Darwin Mounds marine protected area, 160 km northwest of Cape Wrath, Scotland, at ~1,000 m depth. BioCam was mounted on the flight-style Autosub6000 AUV, which operated at 6 m altitude and 1 m/s forward speed to cover 30,000 m²/h. The setup achieved a resolution of 3.3 mm across track and 2 mm in depth. The closeup in Figure 3 shows individual colonies of cold-water corals forming a ring around the base of a micro-mound. Figure 4 shows the results of clustering, representative image identification, and content-based query. Cold-water coral colonies were most densely distributed around the bases of mounds, forming ring patterns throughout the 30 ha region mapped during the dive; several of these mounds are significantly larger (up to 75 m wide and 5 m high) than the micro-mound in Figure 3. The clustering results also show that xenophyophores, large single-celled organisms recognized as indicators of vulnerable marine ecosystems, are most densely distributed in the tails of the mounds. The ability to recognize biological zonation associated with mounds, in particular micro-mounds that are difficult to observe in lower-resolution acoustic data, illustrates how subcentimeter-resolution 3D visual mapping can be combined with methods that summarize observations and flexibly answer queries to generate rapid human insight, helping to focus observation efforts and downstream analysis.
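As a quick self-consistency check on these figures (assuming a single camera's 2,560 pixels span the full across-track swath):

```python
# Sanity check: the reported coverage rate, speed, sensor width, and
# across-track resolution should be mutually consistent. Assumes one
# camera's 2,560 pixels span the full across-track swath.
coverage_m2_per_h = 30_000   # reported coverage rate
speed_m_per_s = 1.0          # reported forward speed
pixels_across = 2560         # sCMOS sensor width from the system description

swath_m = coverage_m2_per_h / (speed_m_per_s * 3600)  # ~8.3 m at 6 m altitude
pixel_mm = swath_m / pixels_across * 1000             # ~3.3 mm, as reported
hours_for_30_ha = 30 * 10_000 / coverage_m2_per_h     # ~10 h of mapping

print(f"swath ~{swath_m:.1f} m, across-track pixel ~{pixel_mm:.1f} mm, "
      f"30 ha in ~{hours_for_30_ha:.0f} h")
```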

 

FIGURE 3. Three-dimensional visual mapping data gathered at the Darwin Mounds marine protected area (59°54'N, 7°39'W). The top left panel shows a 30 ha area mapped using BioCam mounted on the autonomous underwater vehicle, overlaid on side-scan sonar data. The expanded detail (indicated by red lines) shows a micro-mound (~5 m diameter, 20 cm high). The smaller, individual protrusions that form a ring around the mound are cold-water-coral colonies consisting mainly of Desmophyllum pertusum and Madrepora oculata. These can be seen more clearly in the expanded isometric view indicated by blue lines. The individual colonies have diameters of between 20 cm and 50 cm and are between 10 cm and 30 cm high in the area shown.

 

FIGURE 4. Unsupervised clustering outputs (left) and examples of automatically identified cluster-representative images (right) show the distribution of cold-water coral and coral rubble (red), xenophyophores (green), and rippled sand (blue). The light green, green, and dark green clusters show dense distributions of xenophyophores in the tails of the mounds, where the mounds themselves are characterized by the presence of coral and coral rubble (red). The content-based query ranks images in order of their similarity to an input image (orange).

 

Acknowledgment

This research is funded by the UK Natural Environment Research Council’s Oceanids program, grant NE/P020887/1. The data used in this article are available on the benthic imaging repository Squidle+ (www.soi.squidle.org).
Citation

Thornton, B., A. Bodenmann, T. Yamada, D. Stanley, M. Massot-Campos, V. Huvenne, J. Durden, B. Bett, H. Ruhl, and D. Newborough. 2021. Visualizing multi-hectare seafloor habitats with BioCam. Pp. 92–93 in Frontiers in Ocean Observing: Documenting Ecosystems, Understanding Environmental Changes, Forecasting Hazards. E.S. Kappel, S.K. Juniper, S. Seeyave, E. Smith, and M. Visbeck, eds., A Supplement to Oceanography 34(4), https://doi.org/10.5670/oceanog.2021.supplement.02-34.

References

Bodenmann, A., B. Thornton, and T. Ura. 2017. Generation of high-​resolution three-dimensional reconstructions of the seafloor in color using a single camera and structured light. Journal of Field Robotics 34(5):833–851, https://doi.org/10.1002/rob.21682.

Leat, M., A. Bodenmann, M. Massot-Campos, and B. Thornton. 2018. Analysis of uncertainty in laser-scanned bathymetric maps. In 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, November 6–9, 2018, Porto, Portugal, https://doi.org/10.1109/AUV.2018.8729747.

Yamada, T., A. Prügel-Bennett, and B. Thornton. 2021. Learning features from georeferenced seafloor imagery with location guided autoencoders. Journal of Field Robotics 38:52–67, https://doi.org/10.1002/rob.21961.

Copyright & Usage

This is an open access article made available under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format as long as users cite the materials appropriately, provide a link to the Creative Commons license, and indicate the changes that were made to the original content. Images, animations, videos, or other third-party material used in articles are included in the Creative Commons license unless indicated otherwise in a credit line to the material. If the material is not included in the article’s Creative Commons license, users will need to obtain permission directly from the license holder to reproduce the material.