HARDWARE SYNCHRONIZATION OF MULTIPLE KINECTS AND MICROPHONES FOR 3D AUDIOVISUAL SPATIOTEMPORAL DATA CAPTURE

1Yijun Jiang, 1David Russell, 2Timothy Godisart, 1Natasha Kholgade Banerjee, 1Sean Banerjee
1Clarkson University, Potsdam, NY, 2Oculus Pittsburgh, Pittsburgh, PA
{jiangy, russeldj, nbanerje, [email protected], [email protected]

ABSTRACT

We present an approach that uses linear timecode to temporally synchronize visual data captured at 30 fps from multiple Kinect v2 sensors and audio data captured from external microphones. Existing approaches suffer from reduced frame rate due to latencies in relaying synchronization signals over a network, have inaccuracies due to ambient sound contamination when using audio for synchronization, or show multi-frame delays when using networking protocols to synchronize capture computer clocks. We align audio and color frame times to a timecode signal injected into the audio processing board of each Kinect using a custom-designed hardware board. Our approach provides synchronized capture unaffected by ambient noise or network latencies, with maximum synchronization offsets within a single frame for any number of devices. We show 3D point cloud reconstruction results with audio for a variety of single- and multi-person interactions captured using four Kinect v2 sensors and two external microphones synchronized by our approach.

Fig. 1. Given a set of unsynchronized frames from multiple Kinect v2 sensors, as shown by the different internal Kinect color frame times in milliseconds on the left (1500235.592, 1472028.611, 1446982.876, and 1426493.289), we provide an approach that uses linear timecode to synchronize the frames (here, frames synchronized to, and a 3D point cloud reconstructed at, timecode 01:07:33:13). Our approach enables merged 3D point cloud reconstructions from multiple sensors with sub-frame synchronization.

Index Terms— synchronization, timecode, spatiotemporal, 3D reconstruction, audiovisual, Kinect, depth
1. INTRODUCTION

The pervasion of consumer RGB-D capture devices such as the Microsoft Kinect v1 and v2 sensors has provided significant momentum to the reconstruction of 3D spatiotemporal data from multiple sensors [1, 2, 3, 4, 5, 6, 7, 8, 9]. A principal challenge in generating spatiotemporal reconstructions from multiple Kinect v2 sensors is ensuring that the data captured by the sensors is temporally synchronized. Since a single computer supports capture from only one Kinect v2, traditional approaches perform synchronization over the network connecting the capture computers. As discussed in [6], Kinect v2 synchronization approaches such as LiveScan3D [5], which send capture requests over the network, fail to achieve full frame rate capture of 30 frames per second due to network latencies. The authors of [6] provide an approach to synchronize the clocks of the capture computers for two Kinect v2 sensors using Network Time Protocol (NTP). However, as mentioned by the authors, their approach introduces a delay of up to 30 milliseconds (ms), or nearly a single frame, which can degrade as the number of capture computers increases. Approaches that automatically search for cues such as a flashing light or a canonical audio signal [7] suffer from mismatches in synchronization when the cues are altered by body and object motions, spoken content, or ambient noise typical of everyday interactions.

We present an approach that uses linear timecode (LTC) to address the challenge of temporally synchronizing multiple Kinect v2 sensors for 3D point cloud reconstruction, as shown in Figure 1. The novelty of our approach is to leverage the audio input that is ubiquitous on most consumer devices to transfer linear timecode as a global synchronization signal in the absence of generator locking ports. We insert a low-cost custom-designed hardware synchronization board between the Kinect microphones and the audio processing board that transmits an input timecode signal to the audio processing board. Our approach automatically decodes the output signal from the audio processing board as timecode values. We synchronize multiple Kinect v2 sensors by searching for correspondences between the times at which color frames are captured by the sensors and the times corresponding to the audio frames captured at each timecode value.

Our approach allows the Kinects to capture 3D motions such as those shown in Figure 2 at their full frame rate of 30 frames per second. Our approach is unaffected by background noise or network latencies. As shown by our results, the maximum offsets in synchronization are no more than a single frame, irrespective of the number of devices used in capture. By using linear timecode, we can synchronize any number of Kinects to each other, and to other devices that accept timecode through a dedicated port or through audio input. We demonstrate multi-device synchronization using four Kinect v2 sensors and two external microphones.

The audio output from the Kinect v2 sensor shows features characteristic of off-the-shelf devices, e.g., active cancellation of repetitive patterns such as timecode signals treated as background noise, and occasional signal corruption due to imperfect processing by the Kinect hardware. Our approach provides two contributions to extract a clean audio signal from the noisy output. To suppress active noise cancellation, we install a set of relays on the hardware synchronization board that swap the timecode data at regular intervals for random audio signals from the microphones over short time periods. To repair the corrupted timecode signal, we automatically estimate corrected timecode values by examining consecutive pairs of decoded timecode and audio frame times.

978-1-5386-1737-3/18/$31.00 © 2018 IEEE

Fig. 2. Our approach enables synchronized capture of 3D motions, such as the subject folding the paper, from multiple Kinect sensors at 30 frames per second. Timecode in hour:minute:second:frame format (01:06:44:02 through 01:06:44:09) is shown above each frame.

2. RELATED WORK

With the pervasion of low-cost RGB-D sensors such as the Microsoft Kinect, many approaches have used multiple Kinect v1 sensors to provide merged 3D reconstructions of human activities [1, 2, 3, 4]. Since up to four Kinect v1 sensors can be connected to one computer, approaches that use the Kinect v1 leverage low-latency inter-process communication to perform software synchronization of the Kinect v1 sensors. Unlike the Kinect v1, a single computer supports one Kinect v2 sensor. Approaches that synchronize multiple Kinect v2 sensors traditionally use a networked architecture, since the devices lack generator locking or timecode ports. The approach of LiveScan3D [5] synchronizes Kinect v2 sensors by sending a capture request over the network and waiting for a response prior to relaying the next request. However, the approach is unable to handle full frame rate capture due to sensor start time delays and network latencies. In [6], the authors synchronize a pair of Kinect v2 sensors by using Network Time Protocol (NTP) and recording Kinect frame and thread time stamps. However, their approach introduces a maximum lag of 30 ms, or 1 frame, which can degrade when the number of devices is increased. The authors of [7] use an audio signal to perform post-capture synchronization of multi-Kinect data. However, ambient noise typical of everyday environments can cause temporal misalignments.

Due to the move toward using larger densities of Kinect sensors and similar off-the-shelf capture devices [10, 7, 5, 11, 12, 4, 13, 14, 15], accurately synchronizing dense arrays of sensors becomes vital to create temporally aligned 3D reconstructions. Our hardware synchronization approach handles any number of Kinect v2 sensors, and works with any device with a timecode port or microphone jack. In this paper we demonstrate the versatility of our system by synchronizing four Kinect v2 sensors and two microphones.

3. MULTI-KINECT V2 HARDWARE SYNCHRONIZATION

Our hardware synchronization approach uses linear timecode (LTC) to synchronize the Kinects and microphones. We choose LTC over repetitive signals such as sine or square waves since LTC acts as a natural 30 fps counter, and provides hour, minute, second, and frame values for each video or audio frame. While Gray code is a possible choice of counting signal, LTC has the advantage of having 16 sync bits of the form 0011 1111 1111 1101 that are used to prevent decoding errors from propagating in time. We generate LTC using a Denecke SB-3 generator, and amplify it using a Pelco CM9760-MDA amplifier to handle an increasing number of capture devices. We capture environment audio using the Behringer XM8500 microphones. To synchronize the microphone audio, we transmit the amplified timecode signal to two Alto ZMX52 mixers. Each mixer outputs two-channel audio as shown in Figure 3, where the right and left channels consist respectively of the timecode signal and the audio from the corresponding microphone.

Fig. 3. Two-channel mixer output: timecode data from the Denecke SB-3 on the right channel, and voice data from the Behringer XM8500 microphone on the left channel.

To synchronize the Kinects using LTC, we re-purpose the audio channel on the Kinect sensors as a carrier for the timecode signal. We transmit the amplified timecode signal to a custom-designed hardware synchronization board installed on the Kinect as shown in Figure 4. The timecode signal is applied across the pins LTC+ and LTC-. We remove the wires to the microphone and audio processing board connections on
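As an illustration of the timecode decoding described above, the following sketch (not the authors' implementation) extracts the hour, minute, second, and frame fields from an 80-bit LTC frame, assuming the biphase-mark audio signal has already been demodulated into a list of bits. Bit positions follow the SMPTE 12M layout, in which each BCD digit is transmitted least-significant bit first and the fixed pattern 0011 1111 1111 1101 in bits 64-79 marks each frame boundary.

```python
# Sketch: decode hour/minute/second/frame fields from an 80-bit LTC frame.
# Assumes the biphase-mark signal is already demodulated into a bit list.

SYNC_WORD = [0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1]

def bcd(bits):
    """Interpret a list of bits (least-significant bit first) as an integer."""
    return sum(b << i for i, b in enumerate(bits))

def decode_ltc_frame(frame_bits):
    """Decode one 80-bit LTC frame into (hours, minutes, seconds, frames)."""
    assert len(frame_bits) == 80 and frame_bits[64:80] == SYNC_WORD
    ff = bcd(frame_bits[0:4]) + 10 * bcd(frame_bits[8:10])    # frame number
    ss = bcd(frame_bits[16:20]) + 10 * bcd(frame_bits[24:27])  # seconds
    mm = bcd(frame_bits[32:36]) + 10 * bcd(frame_bits[40:43])  # minutes
    hh = bcd(frame_bits[48:52]) + 10 * bcd(frame_bits[56:58])  # hours
    return hh, mm, ss, ff

def find_frames(bitstream):
    """Yield decoded timecodes by scanning the stream for the sync word."""
    for i in range(len(bitstream) - 80 + 1):
        if bitstream[i + 64:i + 80] == SYNC_WORD:
            yield decode_ltc_frame(bitstream[i:i + 80])
```

Because the sync word can only appear at a frame boundary in a valid stream, anchoring the decode on it keeps a single-bit decoding error from shifting all subsequent timecode values, which is the error-containment property noted in Section 3.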
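The timecode repair step, which estimates corrected values by examining consecutive pairs of decoded timecode and audio frame times, might be sketched as follows. The plausibility test against elapsed audio time at 30 fps and the `tolerance` parameter are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: repair corrupted timecode decodes using consecutive pairs of
# (audio frame time in ms, decoded timecode as a total frame count).
# A decode is kept only if its advance matches the elapsed audio time at
# 30 fps; otherwise it is replaced by extrapolating from the last trusted pair.

FRAME_MS = 1000.0 / 30.0  # duration of one timecode frame at 30 fps

def repair_timecodes(samples, tolerance=0.5):
    """samples: list of (audio_time_ms, timecode_in_frames) pairs in time
    order. Returns the list with implausible decodes replaced by estimates."""
    repaired = [samples[0]]
    for t_ms, tc in samples[1:]:
        t_prev, tc_prev = repaired[-1]
        expected = tc_prev + (t_ms - t_prev) / FRAME_MS
        if abs(tc - expected) <= tolerance:
            repaired.append((t_ms, tc))               # decode agrees with elapsed time
        else:
            repaired.append((t_ms, round(expected)))  # corrupted: use estimate
    return repaired
```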
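The correspondence search between color frame times and the audio frame times recorded for each timecode value can be sketched as a nearest-neighbor lookup per sensor; the function names and data layouts here are hypothetical, chosen only to make the matching step concrete.

```python
# Sketch: for each sensor, map a target timecode to its audio frame time,
# then select the color frame whose internal capture time is nearest to it.
import bisect

def nearest(sorted_times, t):
    """Return the value in the sorted list closest to t."""
    i = bisect.bisect_left(sorted_times, t)
    candidates = sorted_times[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda x: abs(x - t))

def synchronize(color_times_per_sensor, audio_time_of_timecode, timecode):
    """color_times_per_sensor: {sensor: sorted color frame times in ms}.
    audio_time_of_timecode: {sensor: {timecode: audio frame time in ms}}.
    Returns {sensor: color frame time} selected for the given timecode."""
    return {
        sensor: nearest(times, audio_time_of_timecode[sensor][timecode])
        for sensor, times in color_times_per_sensor.items()
    }
```

Since every sensor's color frame is matched to the same timecode value, the worst-case offset between any two selected frames is bounded by half a frame period per sensor, consistent with the sub-frame synchronization reported in the paper.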
