WO2017003958A1 - Automatic microphone selection in a sports camera - Google Patents

Automatic microphone selection in a sports camera

Info

Publication number
WO2017003958A1
Authority
WO
WIPO (PCT)
Prior art keywords
microphone
audio signal
correlation
metric
responsive
Prior art date
Application number
PCT/US2016/039679
Other languages
French (fr)
Inventor
Zhinian Jing
Erich Tisch
Ke Li
Paul Beckmann
Joyce ROSENBAUM
Magnus Hansson
Evan L. COONS
Alexander Wroblewski
Original Assignee
Gopro, Inc.
Priority date
Filing date
Publication date
Priority claimed from US15/083,266 external-priority patent/US9769364B2/en
Application filed by Gopro, Inc. filed Critical Gopro, Inc.
Publication of WO2017003958A1 publication Critical patent/WO2017003958A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/027Spatial or constructional arrangements of microphones, e.g. in dummy heads
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00Details of cameras or camera bodies; Accessories therefor
    • G03B17/02Bodies
    • G03B17/08Waterproof bodies or housings
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B31/00Associated working of cameras or projectors with sound-recording or sound-reproducing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1656Details related to functional adaptations of the enclosure, e.g. to provide protection against EMI, shock, water, or to host detachable peripherals like a mouse or removable expansions units like PCMCIA cards, or to provide access to internal components for maintenance or to removable storage supports like CDs or DVDs, or to mechanically mount accessories
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R29/00Monitoring arrangements; Testing arrangements
    • H04R29/004Monitoring arrangements; Testing arrangements for microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/11Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's

Definitions

  • This disclosure relates to audio capture, and more specifically, to selecting between multiple available microphones in an audio capture system.
  • FIG. 1 is a block diagram illustrating an example embodiment of an audio capture system.
  • FIG. 2 is a flowchart illustrating a first embodiment of a process for selecting between audio signals from different microphones in an audio capture system with multiple microphones.
  • FIG. 3 is a flowchart illustrating a second embodiment of a process for selecting between audio signals from different microphones in an audio capture system with multiple microphones.
  • FIG. 4 is a flowchart illustrating an embodiment of a process for detecting a wet microphone condition.
  • FIG. 5 is a flowchart illustrating an embodiment of a process for selecting a subset of microphones out of a group of microphones.
  • FIG. 6A is a first perspective view of an example camera system.
  • FIG. 6B is a second perspective view of an example camera system.
  • FIG. 7 illustrates an example of a drainage enhancement feature for an enhanced microphone in a camera system.
  • an output audio signal is generated in an audio capture system having multiple microphones including at least a first microphone and a second microphone.
  • the first microphone includes a drainage enhancement feature structured to drain liquid more quickly than the second microphone lacking the drainage enhancement feature.
  • a first audio signal is received from the first microphone representing ambient audio captured by the first microphone during a time interval.
  • a second audio signal is received from the second microphone representing ambient audio captured by the second microphone during the time interval.
  • a correlation metric is determined between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal. Responsive to the correlation metric exceeding a predefined threshold, the first audio signal is outputted for the time interval.
  • a first noise metric is determined for the first audio signal and a second noise metric is determined for the second audio signal. Responsive to the sum of the first noise metric and a bias value being less than the second noise metric, the first audio signal is output for the time interval. Responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, the second audio signal is output for the time interval.
  • an output audio signal is generated in an audio capture system having multiple microphones including at least a first microphone and a second microphone.
  • the first microphone includes a drainage enhancement feature structured to drain liquid more quickly than the second microphone lacking the drainage enhancement feature.
  • a first audio signal is received from the first microphone representing ambient audio captured by the first microphone during a time interval.
  • a second audio signal is received from the second microphone representing ambient audio captured by the second microphone during the time interval.
  • a correlation metric is determined between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal. Responsive to the correlation metric exceeding a first predefined threshold, the first audio signal is output for the time interval.
  • Responsive to the correlation metric not exceeding the first predefined threshold, it is determined whether the microphones are submerged in liquid. If the microphones are not submerged, it is determined whether the first microphone is wet. If the first microphone is wet, the second microphone signal is output for the time interval. Responsive to determining that the first microphone is not wet or that the microphones are submerged, a first noise metric is determined for the first audio signal and a second noise metric is determined for the second audio signal. Responsive to the sum of the first noise metric and a bias value being less than the second noise metric, the first audio signal is output for the time interval. Responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, the second audio signal is output for the time interval.
  • a method determines if a first microphone is wet in a camera system having a first microphone and a second microphone, where the first microphone is positioned in a recess of an inner side of a face of the camera, where the recess is coupled to a channel coupled to a lower drain below the channel to drain water from the recess away from the microphone via the channel, and where the second microphone is positioned away from the channel and the drain.
  • a first average signal level of the first audio signal and a second average signal level of the second audio signal are determined over a predefined time interval.
  • a ratio of the first average signal level to the second average signal level is determined.
  • a camera comprises a lens assembly, a substantially cubic camera housing, a first microphone, a lower drain, an upper drain, a channel, and a second microphone.
  • the lens assembly directs light received through a lens window to an image sensor.
  • the substantially cubic camera housing encloses the lens assembly and comprises a bottom face, left face, right face, back face, top face, and front face.
  • the first microphone is integrated with the front face of the camera and positioned within a recess on an interior facing portion of the front face.
  • the lower drain is below the first microphone and comprises an opening in the substantially cubic camera housing near the front face. The lower drain allows water that collects in the recess housing the first microphone to drain.
  • the upper drain is above the first microphone and comprises an opening in the substantially cubic housing near the front face.
  • the upper drain allows air to enter the recess as the water drains.
  • the channel through the interior facing portion of the front face couples the recess to the lower drain.
  • the second microphone is integrated with a rear portion of the substantially cubic camera housing.
  • an audio capture system comprises a substantially cubic housing including a bottom face, left face, right face, back face, top face, and front face.
  • a first microphone is integrated with the front face of the audio capture system and positioned within a recess on an interior facing portion of the front face.
  • a lower drain below the first microphone comprises an opening in the substantially cubic housing near the front face to allow water that collects in the recess housing the first microphone to drain.
  • An upper drain above the first microphone comprises an opening in the substantially cubic housing near the front face to allow air to enter the recess as the water drains.
  • a channel through the interior facing portion of the front face couples the recess to the lower drain.
  • a second microphone is integrated with a rear portion of the substantially cubic housing.
  • FIG. 1 illustrates an example of an audio capture system 100 including multiple microphones.
  • the audio capture system 100 includes at least one "enhanced" microphone 110, at least one "reference" microphone 120, a microphone selection controller 130, and an audio encoder 140.
  • the enhanced microphone 110 includes a drainage enhancement feature to enable water to drain from the microphone more quickly than the reference microphone 120.
  • the drainage enhancement feature may be accomplished utilizing gravity and/or surface tension forces.
  • the drainage enhancement feature may be implemented using an inner surface energy coating or particular hole dimensions, shapes, density, patterns, or interior curvature, or a combination of features that affect the drainage profile of the enhanced microphone 110.
  • the reference microphone 120 includes a physical barrier between the splashing water and a waterproof membrane over the microphone to mitigate the impulses from splashing water.
  • the barrier comprises a plastic barrier that absorbs some of the water impact impulse.
  • an air buffer may exist between the barrier and the waterproof membrane over the microphone.
  • a porting structure traps a buffer layer of water on the outside of a waterproof membrane over the microphone, thus creating a protective layer that blocks splashing water from directly impacting the waterproof membrane.
  • both the enhanced microphone 110 and reference microphone 120 capture ambient audio 105 and pass the captured audio to the microphone selection controller 130.
  • the audio captured by the enhanced microphone 110 and the reference microphone 120 may have varying audio characteristics due to the different structural features of the microphones 110, 120.
  • the enhanced microphone 110 will have more spectral artifacts both in open air and when operating under water due to the drainage enhancement feature.
  • the enhanced microphone 110 may have degraded signal-to-noise in windy conditions due to the drainage enhancement feature.
  • the enhanced microphone 110 will generally have better signal-to-noise ratio performance out of water in non-windy conditions relative to the reference microphone 120. Therefore, a different selection between the enhanced microphone 110 and the reference microphone 120 may be desirable under different audio capture conditions.
  • the microphone selection controller 130 processes the audio captured from the enhanced microphone 110 and the reference microphone 120 and selects, based on the audio characteristics, which of the audio signals to pass to the audio encoder 140. In one embodiment, the microphone selection controller 130 operates on a block-by-block basis. In this embodiment, for each time interval, the microphone selection controller 130 receives a first block of audio data from the enhanced microphone 110 and a second block of audio data from the reference microphone 120, each corresponding to ambient audio 105 captured by the respective microphones 110, 120 during the same time interval. The microphone selection controller 130 processes the pair of blocks to determine which block to pass to the audio encoder 140.
  • the microphone selection controller 130 generally operates to select the enhanced microphone 110 directly after transitioning out of water since the enhanced microphone 110 tends to drain the water faster and has better out-of-water audio quality. Furthermore, the microphone selection controller 130 generally operates to select the reference microphone 120 when in the water and when transitioning between air and water because it better mitigates the unnatural impulses caused by splashing water.
  • the audio encoder 140 encodes the blocks of audio received from the microphone selection controller 130 to generate an encoded audio signal 145.
  • the microphone selection controller 130 and/or the audio encoder 140 are implemented as a processor and a non-transitory computer-readable storage medium storing instructions that when executed by the processor carry out the functions attributed to the microphone selection controller 130 and/or audio encoder 140 described herein.
  • the microphone selection controller 130 and audio encoder 140 may be implemented using a common processor or separate processors.
  • the microphone selection controller 130 and/or audio encoder 140 may be implemented in hardware (e.g., with an FPGA or ASIC), firmware, or a combination of hardware, firmware, and software.
  • the audio capture system 100 is implemented within a camera system such as the camera 600 described below with respect to FIGS. 6A-6B. Such a camera may use the encoded audio 145 captured by the audio capture system 100 as an audio channel for video captured by the camera. Thus, the audio capture system 100 may capture audio in a manner that is concurrent and synchronized with corresponding frames of video.
  • FIG. 2 is a flowchart illustrating an embodiment of a process for selecting between an enhanced microphone 110 and a reference microphone 120.
  • a correlation metric is determined 202 between signal levels of audio blocks captured by the enhanced microphone 110 and reference microphone 120 respectively.
  • the correlation metric represents a similarity between a first audio signal captured from the enhanced microphone 110 during a time interval and a second audio signal captured from the reference microphone 120 during the same time interval.
  • the signals will be well-correlated in the absence of wind noise, but will be poorly correlated when wind noise is present.
  • the correlation metric may operate as a wind detector.
  • the correlation metric comprises a value from 0 to 1 where a correlation metric of 1 represents a situation where there is no wind, and a correlation metric of 0 means that the captured audio is entirely wind noise.
  • the correlation metric is determined using a correlation function that includes a regularization term γ to handle low level signals.
  • the correlation function is given by:
  • X = max(0, Σ_{n=1}^{N} (L[n] + γ) * (R[n] + γ) / ( √(Σ_{n=1}^{N} (L[n] + γ)²) * √(Σ_{n=1}^{N} (R[n] + γ)²) ))    (1)
  • (*) represents a scalar multiplication
  • N is the block size
  • L[n] and R[n] are the samples from the enhanced microphone and reference microphone respectively.
  • the max operator constrains the correlation metric X to be in the range 0 to +1. In one embodiment, the correlation metric is calculated over a predefined spectral range (e.g., 600-1200 Hz).
  • the correlation metric is updated at a frequency based on the audio sample rate and sample block size. For example, if a 32kHz sampling rate is used with a block size of 1024 samples, the correlation metric may be updated approximately every 32 milliseconds. In one embodiment, the correlation metric is smoothed over time.
  • the correlation metric is compared 204 to a predefined threshold.
  • the predefined threshold may change between two or more predefined thresholds depending on the previous state (e.g., whether the reference microphone or enhanced microphone was selected) to include a hysteresis effect. For example, if for the previously processed block, the correlation metric exceeded the predefined threshold (e.g., a predefined threshold of 0.8), indicating that low wind noise was detected, then the predefined threshold is set lower for the current block (e.g., 0.7). If for the previously processed block, the correlation metric did not exceed the predefined threshold (e.g., a predefined threshold of 0.8), indicating that high wind noise was detected, then the predefined threshold for the current block is set higher (e.g., to 0.8).
  • if the correlation metric exceeds 204 the predefined threshold, the enhanced microphone 110 is selected because it typically has a better signal-to-noise ratio. If the correlation metric does not exceed 204 the predefined threshold, noise metrics are determined for the audio signals captured by the enhanced microphone 110 and the reference microphone 120. Under some conditions, it may be reasonably presumed that both microphones 110, 120 pick up the desired (noiseless) signal at approximately the same level, and that if one of the microphones is slightly blocked, the correlation metric will still be relatively high, indicating that there is low wind. Furthermore, it may be reasonably presumed that noise from the effects of wind or water is local to each microphone and that the noise will not destructively cancel out the signal.
  • the microphone that is louder during a low correlation condition is determined to be the microphone that has the noise.
  • the noise metrics simply comprise root-mean-squared amplitude levels of the enhanced and reference microphones over a predefined time period.
  • the predefined time period may include a sliding time window that includes the currently processed block and a fixed number of blocks prior to the current block (e.g., an approximately 4 second window).
  • a recursive-based RMS value is used (e.g., with a time constant of approximately 4 seconds).
  • the noise metric is based on equalized amplitude levels of the microphones.
  • the equalization levels are set so that the microphones have similar amplitude characteristics under normal conditions (e.g., non-windy and non-watery conditions).
  • the noise metric is measured across substantially the entire audible band (e.g., between 20 Hz and 16kHz).
  • if the sum of the noise metric for the enhanced microphone 110 and a bias value is less than the noise metric for the reference microphone 120, then the microphone selection controller 130 selects 212 the enhanced microphone 110. On the other hand, if the sum of the noise metric for the enhanced microphone 110 and the bias value is not less than (e.g., greater than) the noise metric for the reference microphone 120, then the microphone selection controller 130 selects the reference microphone 120.
  • the bias value may comprise either a positive or negative offset that is dynamically adjusted based on the correlation metric. For example, if the correlation metric is below a lower threshold (e.g., 0.4), then a first bias value is used which may be a positive bias value (e.g., 10 dB). If the correlation metric is above an upper threshold (e.g., 0.8), then a second bias value is used which may be a negative bias value (e.g., -6 dB). If the correlation metric is between the lower threshold (e.g., 0.4) and the upper threshold (e.g., 0.8), the bias value is a linear function of the correlation metric X. For example, in one embodiment, the bias value interpolates linearly between the two bias values over this range, where:
  • bias1 is the first bias value used when the correlation metric X is below the lower threshold ThL, and bias2 is the second bias value used when the correlation metric X is above the upper threshold ThU.
  • a hysteresis component is additionally included in the bias value.
  • the bias value is adjusted up or down depending on whether the reference microphone 120 or the enhanced microphone 110 was selected for the previous block, so as to avoid switching between the microphones 110, 120 too frequently. For example, in one embodiment, if the enhanced microphone 110 was selected for the previous block, an additional hysteresis bias (e.g., 5 dB) is subtracted from the bias value to make it more likely that the enhanced microphone 110 will be selected again, where:
  • biasH is the hysteresis bias
  • if the reference microphone 120 was selected for the previous block, the additional hysteresis bias (e.g., 5 dB) is added to the bias value to make it more likely that the reference microphone 120 is selected again.
  • the bias value takes into account that not all wind is created equal: it is possible for softer wind to generate more perceptible noise than a louder wind. With high amounts of wind (a low correlation metric), the enhanced microphone 110 tends to generate more perceptible noise than the reference microphone 120 due to the drainage enhancement feature. Thus, the bias value is used to penalize the enhanced microphone 110 for low correlation metrics (a sketch of this bias computation appears at the end of this section).
  • FIG. 3 is a flowchart illustrating another embodiment of a process for selecting between an enhanced microphone 110 and a reference microphone 120.
  • a correlation metric is determined 302 between signal levels of audio blocks captured by the enhanced microphone 110 and reference microphone 120 respectively. If the correlation metric exceeds 304 a predefined threshold, then the enhanced microphone 110 is selected because it typically has a better signal-to-noise ratio. If the correlation metric does not exceed 304 the threshold, it is determined 306 if the microphones are submerged in liquid (e.g., water).
  • the predefined threshold may be determined in the same manner described above.
  • a water submersion sensor may be used to determine if the microphones are submerged.
  • an image analysis may be performed to detect image features indicative of the camera being submerged.
  • detecting color loss may be indicative of the camera being submerged because it causes exponential loss of light intensity depending on wavelength.
  • crinkle patterns may be present in the image when the camera is submerged because the water surface can form small concave and convex lenses that create patches of light and dark. Additionally, light reflecting off particles in the water creates scatter and diffusion that can be detected to determine if the camera is submerged.
  • water pressure on the microphone's waterproof membrane may be detected because the waterproof membrane will deflect under external water pressure. This causes increased tension which shifts the waterproof membrane's resonance higher from its nominal value and can be detected in the microphone signal.
  • the deflection of the waterproof membrane will result in a positive pressure on and deflection of the microphone membrane, which could manifest itself as a shift in microphone bias.
  • a sensor could be placed near the waterproof membrane to detect an increase in shear force caused by deflection of the waterproof membrane that is indicative of the microphone being submerged.
  • if the microphones are not submerged, then it is determined 316 whether the enhanced microphone 110 is wet (e.g., not sufficiently drained after being removed from water).
  • the wet microphone condition can be detected by observing spectral response changes over a predefined frequency range (e.g., 2kHz - 4kHz) or by detecting the sound pattern known to be associated with a wet microphone as compared to a drained microphone.
  • the spectral features associated with a wet (undrained) microphone can be found through empirical means. In general, when a microphone membrane is wet, higher frequency sounds are attenuated because the extra weight of the water on the membrane reduces the vibration of the membrane.
  • the water generally acts as a low pass filter.
  • An example of a process for detecting wet microphones is described in FIG. 4 below.
  • spectral changes can be monitored based on the measured known drain time constant differences between the microphone geometries. If the enhanced microphone 110 is wet (e.g., not sufficiently drained), then the reference microphone 120 is selected 320. Otherwise, if the microphones are submerged or if the enhanced microphone 110 is not wet, then noise metrics are determined 310 for the audio blocks captured by the enhanced microphone 110 and the reference microphone 120. The noise metrics may be determined in the same manner as described above in FIG. 2.
  • if the sum of the noise metric for the enhanced microphone 110 and the bias value is less than the noise metric for the reference microphone 120, then the microphone selection controller 130 selects 314 the enhanced microphone 110. If the sum of the noise metric for the enhanced microphone 110 and the bias value is not less than the noise metric for the reference microphone 120, then the microphone selection controller 130 selects 320 the reference microphone 120.
  • the bias value may be determined based on equations (2) - (4) described above.
  • FIG. 4 is a flowchart illustrating an embodiment of a process for detecting a wet microphone.
  • water on a microphone has a transfer function approximating a low pass filter.
  • the amount of attenuation and the cutoff frequency of the wet microphone transfer function are dependent on how much water is on the microphone. Particularly, the more water on the microphone membrane, the greater the attenuation and the lower the cutoff frequency. This phenomenon is due to the added mass of the water on the microphone membrane dampening the movement of the membrane.
  • root-mean-squared (RMS) signal levels of the audio blocks captured by the enhanced microphone 110 and reference microphone 120 are calculated 402 across a predefined frequency range (e.g., 2kHz - 4kHz).
  • a smoothing filter may be applied 404 to smooth the ratio of the enhanced microphone RMS signal level to the reference microphone RMS signal level over time. If it is determined 406 that the ratio of the enhanced microphone RMS signal level to the reference microphone RMS signal level is above a predefined threshold, then the wet microphone condition is not detected 412. Otherwise, if it is determined 406 that the ratio of the RMS signal levels is not above the predefined threshold, it is determined 408 if wind is present since the presence of wind can result in similar RMS ratios.
  • the presence of wind can be determined based on, for example, a detection signal from a wind detector that determines the presence of wind based on a correlation metric as described above.
  • if the wind noise threshold is met (i.e., the correlation metric is less than a predefined threshold), then the wet microphone condition is not detected 412. Otherwise, if the wind noise threshold is not met (i.e., the correlation metric is greater than the predefined threshold), then the wet microphone condition is detected 410.
  • the selection algorithm described above may be applied to a group of enhanced microphones 110 and a group of reference microphones 120 instead of a single enhanced microphone 110 and a single reference microphone 120.
  • the enhanced microphone signal and reference microphone signal input to the processes above may comprise, for example, an average of all of the enhanced microphones and the reference microphones respectively. The processes described above then select either the enhanced microphone group or the reference microphone group.
  • a separate selection algorithm may be applied to select an audio block from one of the microphones in the selected group to provide to the audio encoder 140 (e.g., the signal with the lowest noise).
  • a process selects a subset of microphones out of a group of microphones that may include reference microphones or enhanced microphones.
  • FIG. 5 illustrates an embodiment of a process performed by the microphone selection controller 130 for choosing N microphones out of a group of M microphones. Audio signals are received 502 from each of the microphones in the group. Adverse conditions such as wind (e.g., low correlation value) or wet microphone (e.g., using the process of FIG. 4) are detected 504 if present. If no adverse conditions (e.g., wind, water, etc.) are detected, the microphone selection controller 130 selects 506 N microphones in the group of M microphones that are pre-identified as being preferred microphones.
  • the RMS levels of each of the M microphones are measured 508 and a bias value is added to each microphone.
  • the bias value is determined based on the bias equations (2) - (4) described above.
  • the bias value for each microphone may be different depending on the configuration of each microphone.
  • the bias function can be a function of the correlation metric, the RMS values of all other microphones, and the determination of whether or not the microphone is under water. Then, the N microphones having the lowest sums of their respective bias values and RMS levels are selected 510 (see the sketch at the end of this section).
  • in one embodiment, the microphone selection controller 130 picks the N microphones having the smallest cost values Ji, where Ji is the cost value associated with the i-th microphone, X is the correlation metric, Ri is the RMS value of the i-th microphone, and fi is a predefined cost function.
  • g(X) is the piecewise linear function described in the bias equations above
  • f1 is the cost function for the enhanced microphone 110
  • f2 is the cost function for the reference microphone 120.
  • a hysteresis bias may also be included as described above, except with potentially different thresholds, depending on the configuration.
  • FIGs. 6A-6B illustrate perspective views of an example camera 600 in which the audio capture system 100 may be integrated.
  • the camera 600 comprises at least one cross-section having four approximately equal length sides in a two-dimensional plane. Although the cross-section is substantially square, the corners of the cross-section may be rounded in some embodiments (e.g., a rounded square or squircle).
  • the exterior of the square camera 600 includes 6 surfaces (i.e. a front face, a left face, a right face, a back face, a top face, and a bottom face). In the illustrated embodiment, the exterior surfaces substantially conform to a rectangular cuboid, which may have rounded or unrounded corners.
  • all camera surfaces may also have a substantially square (or rounded square) profile, making the square camera 600 substantially cubic.
  • in some embodiments, only two of the six faces (e.g., the front face 610 and back face 640) are substantially square, and the other faces may be other shapes, such as rectangles.
  • the camera 600 can have a small form factor (e.g. a height of 2 cm to 9 cm, a width of 2 cm to 9 cm, and a depth of 2 cm to 9 cm) and is made of a rigid material such as plastic, rubber, aluminum, steel, fiberglass, or a combination of materials.
  • the camera 600 may have a different form factor.
  • the camera 600 includes a camera lens window 602 surrounded by a front face perimeter portion 608 on a front face 610, an interface button 604 and a display 614 on a top face 620, an I/O door 606 on a side face 630, and a back door 612 on a back face 640.
  • the camera lens window 602 comprises a transparent or substantially transparent material (e.g., glass or plastic) that enables light to pass through to an internal lens assembly.
  • the camera lens window 602 is substantially flat (as opposed to a convex lens window found in many conventional cameras).
  • the front face 610 of the camera 600 furthermore comprises a front face perimeter portion 608 that surrounds the lens window 602.
  • the front face perimeter portion 608 comprises a set of screws to secure the front face perimeter portion 608 to the remainder of the housing of the camera 600 and to hold the lens window 602 in place.
  • the interface button 604 provides a user interface that when activated enables a user to control various functions of the camera 600. For example, pressing the button 604 may control the camera to power on or power off, take pictures or record video, save a photo, adjust camera settings, or perform any other action relevant to recording or storing digital media.
  • the interface button 604 may perform different functions depending on the type of interaction (e.g., short press, long press, single tap, double tap, triple tap, etc.). In alternative embodiments, these functions may also be controlled by other types of interfaces such as a knob, a switch, a dial, a touchscreen, voice control, etc.
  • the camera 600 may have more than one interface button 604 or other controls.
  • the display 614 comprises, for example, a light emitting diode (LED) display, a liquid crystal display (LCD) or other type of display for displaying various types of information such as camera status and menus.
  • the interface button 604, display 614, and/or other interface features may be located elsewhere on the camera 600.
  • the I/O door 606 provides a protective cover for various input/output ports of the camera 600.
  • the camera 600 includes a Universal Serial Bus (USB) port and/or a High-Definition Media Interface (HDMI) port, and a memory card slot accessible behind the I/O door 606.
  • additional or different input/output ports may be available behind the I/O door 606 or elsewhere on the camera 600.
  • the back door 612 provides a protective cover that when removed enables access to internal components of the camera 600.
  • a removable battery is accessible via the back door 612.
  • the camera 600 described herein includes features other than those described below.
  • the square camera 600 can include additional buttons or different interface features such as speakers and/or various input/output ports.
  • the reference microphone 120 is integrated with or near the back door 612 of the camera 600 such that it is positioned near the rear of the camera 600, and the enhanced microphone 110 is integrated with the front face 610 of the camera 600 such that it is positioned near the front of the camera 600.
  • FIG. 7 illustrates an example of a front face perimeter portion 608 of a camera 600 with an integrated drain enhancement feature in the form of a channel 702 between a recess 704 where the enhanced microphone 110 (not shown) is positioned, and one or more drains (e.g., an upper drain structure 708 and a lower drain structure 706, each of which may comprise a single drain or multiple drains) to enable liquid to drain.
  • Microphone ports 710 provide openings to let sound reach the microphone(s) housed in recess 704.
  • the upper drain structure 708 is positioned above the channel 702 and the lower drain structure 706 is positioned below the channel 702.
  • the lower drain structure 706 is generally much larger than the upper drain structure 708.
  • the entire channel 702 generally fills with water.
  • the large mass of water in the channel 702 flows out through the lower drain structure 706 through the force of gravity. This pulls air in through upper drain structure 708 and clears water from the recess 704, the upper drain structure 708, and/or the microphone ports 710, thus allowing the microphone to resume normal acoustic performance.
  • as used herein, the term "coupled" along with its derivatives is not necessarily limited to two or more elements being in direct physical or electrical contact. Rather, the term "coupled" may also encompass two or more elements that are not in direct contact with each other but still co-operate or interact with each other, or that are structured to provide a drainage path between the elements.
  • any reference to "one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
  • the appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
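The bias equations (2)-(4) and the group-selection cost are only summarized in the points above, so the following Python sketch is a reconstruction under stated assumptions: the bias is taken to interpolate linearly between bias1 and bias2 across the correlation thresholds, the hysteresis term simply favors whichever microphone was selected for the previous block, and each microphone's cost is its RMS level plus its bias. All function and parameter names are illustrative, not taken from the patent.

```python
from typing import List, Optional


def bias_value(X: float,
               bias1_db: float = 10.0, bias2_db: float = -6.0,
               th_low: float = 0.4, th_high: float = 0.8,
               prev_enhanced_selected: Optional[bool] = None,
               hysteresis_db: float = 5.0) -> float:
    """Bias (in dB) added to the enhanced microphone's noise metric.

    Assumed form of equations (2)-(4): bias1 below the lower correlation
    threshold, bias2 above the upper threshold, a linear interpolation in
    between, and a hysteresis term that favors whichever microphone was
    selected for the previous block.
    """
    if X <= th_low:
        bias = bias1_db
    elif X >= th_high:
        bias = bias2_db
    else:
        frac = (X - th_low) / (th_high - th_low)
        bias = bias1_db + frac * (bias2_db - bias1_db)
    if prev_enhanced_selected is True:
        bias -= hysteresis_db   # make re-selecting the enhanced microphone easier
    elif prev_enhanced_selected is False:
        bias += hysteresis_db   # make re-selecting the reference microphone easier
    return bias


def select_n_of_m(rms_levels: List[float], biases: List[float], n: int) -> List[int]:
    """Indices of the N microphones with the smallest cost Ji = Ri + biasi,
    per the group-selection process of FIG. 5 (when no adverse condition is
    detected, a pre-identified preferred subset would be returned instead)."""
    costs = [r + b for r, b in zip(rms_levels, biases)]
    return sorted(range(len(costs)), key=lambda i: costs[i])[:n]
```

In this sketch the per-microphone bias plays the role of the predefined cost function fi, penalizing microphones that are expected to perform poorly under the detected conditions.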

Abstract

An audio capture system for a sports camera includes at least one "enhanced" microphone and at least one "reference" microphone. The enhanced microphone includes a drainage enhancement feature to enable water to drain from the microphone more quickly than the reference microphone. A microphone selection controller selects between the microphones based on a microphone selection algorithm to enable high quality audio capture in conditions where the sports camera transitions in and out of water, such as during surfing, water skiing, swimming, or other activities in wet environments.

Description

AUTOMATIC MICROPHONE SELECTION IN A SPORTS CAMERA
INVENTORS:
ZHINIAN JING ERICH TISCH KE LI PAUL BECKMANN JOYCE ROSENBAUM MAGNUS HANSSON EVAN L. COONS
ALEXANDER WROBLEWSKI
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 62/188,450 entitled "Automatic Microphone Selection in a Sports Camera" to Zhinian Jing, et al., filed on July 2, 2015, U.S. Application No. 15/083,262 entitled "Automatic Microphone Selection in a Sports Camera" to Zhinian Jing, et al, filed on March 28, 2016, U.S. Application No. 15/083,264 entitled "Automatic Microphone Selection in a Sports Camera Based on Wet Microphone Determination" to Zhinian Jing, et al., filed on March 28, 2016, U.S. Application No. 15/083,266 entitled "Automatically Determining a Wet Microphone Condition in a Sports Camera" to Zhinian Jing, et al., filed on March 28, 2016, and U.S. Application No. 15/083,267 entitled "Drainage Channel for Sports Camera" to Erich Tisch, et al., filed on March 28, 2016, the contents of which are incorporated by reference herein.
BACKGROUND
TECHNICAL FIELD
[0002] This disclosure relates to audio capture, and more specifically, to selecting between multiple available microphones in an audio capture system.
DESCRIPTION OF THE RELATED ART
[0003] In a camera designed to operate both in and out of water, the audio subsystem can be stressed to the point where the resulting signal captured by the microphone is distorted and unnatural. The transition between the two environments can be particularly challenging due to the impulse of splashing water. During certain activities such as surfing, swimming, or other water sports, transition in and out of water may occur frequently over an extended period of time.
BRIEF DESCRIPTIONS OF THE DRAWINGS
[0004] The disclosed embodiments have other advantages and features which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
[0005] Figure (FIG.) 1 is a block diagram illustrating an example embodiment of an audio capture system.
[0006] FIG. 2 is a flowchart illustrating a first embodiment of a process for selecting between audio signals from different microphones in an audio capture system with multiple microphones.
[0007] FIG. 3 is a flowchart illustrating a second embodiment of a process for selecting between audio signals from different microphones in an audio capture system with multiple microphones.
[0008] FIG. 4 is a flowchart illustrating an embodiment of a process for detecting a wet microphone condition.
[0009] FIG. 5 is a flowchart illustrating an embodiment of a process for selecting a subset of microphones out of a group of microphones.
[0010] FIG. 6A is a first perspective view of an example camera system.
[0011] FIG. 6B is a second perspective view of an example camera system.
[0012] FIG. 7 illustrates an example of a drainage enhancement feature for an enhanced microphone in a camera system.
DETAILED DESCRIPTION
[0013] The figures and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed.
[0014] Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
CONFIGURATION OVERVIEW
[0015] In a first embodiment, an output audio signal is generated in an audio capture system having multiple microphones including at least a first microphone and a second microphone. The first microphone includes a drainage enhancement feature structured to drain liquid more quickly than the second microphone lacking the drainage enhancement feature. A first audio signal is received from the first microphone representing ambient audio captured by the first microphone during a time interval. A second audio signal is received from the second microphone representing ambient audio captured by the second microphone during the time interval. A correlation metric is determined between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal. Responsive to the correlation metric exceeding a predefined threshold, the first audio signal is outputted for the time interval. Responsive to the correlation metric not exceeding the first predefined threshold, a first noise metric is determined for the first audio signal and a second noise metric is determined for the second audio signal. Responsive to the sum of the first noise metric and a bias value being less than the second noise metric, the first audio signal is output for the time interval. Responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, the second audio signal is output for the time interval.
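As a rough illustration of this first embodiment's per-interval decision flow (a sketch, not the patented implementation), the selection could be expressed as follows; correlation and noise_metric stand in for the correlation and noise metrics detailed later, and the names are assumptions.

```python
def select_block(first_block, second_block, threshold, bias_db,
                 correlation, noise_metric):
    """Choose which microphone's audio block to output for one time interval.

    correlation(a, b) is assumed to return the similarity metric in [0, 1];
    noise_metric(block) is assumed to return a level estimate in dB.
    """
    if correlation(first_block, second_block) > threshold:
        return first_block      # signals agree: little wind, prefer the first (enhanced) microphone
    if noise_metric(first_block) + bias_db < noise_metric(second_block):
        return first_block      # first microphone appears quieter (less noisy) even after the bias
    return second_block         # otherwise fall back to the second (reference) microphone
```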
[0016] In a second embodiment, an output audio signal is generated in an audio capture system having multiple microphones including at least a first microphone and a second microphone. The first microphone includes a drainage enhancement feature structured to drain liquid more quickly than the second microphone lacking the drainage enhancement feature. A first audio signal is received from the first microphone representing ambient audio captured by the first microphone during a time interval. A second audio signal is received from the second microphone representing ambient audio captured by the second microphone during the time interval. A correlation metric is determined between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal. Responsive to the correlation metric exceeding a first predefined threshold, the first audio signal is output for the time interval. Responsive to the correlation metric not exceeding the first predefined threshold, it is determined whether the microphones are submerged in liquid. If the microphones are not submerged, it is determined whether the first microphone is wet. If the first microphone is wet, the second microphone signal is output for the time interval. Responsive to determining that the first microphone is not wet or that the microphones are submerged, a first noise metric is determined for the first audio signal and a second noise metric is determined for the second audio signal. Responsive to the sum of the first noise metric and a bias value being less than the second noise metric, the first audio signal is output for the time interval. Responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, the second audio signal is output for the time interval.
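Extending the earlier sketch, the second embodiment inserts the submersion and wet-microphone checks between the wind check and the biased noise comparison. The predicates is_submerged and is_wet are assumed helpers standing in for the detectors described below.

```python
def select_block_v2(first_block, second_block, threshold, bias_db,
                    correlation, noise_metric, is_submerged, is_wet):
    """Second-embodiment ordering: wind check, then submersion/wet checks,
    then the biased noise comparison.  All callables are assumed helpers."""
    if correlation(first_block, second_block) > threshold:
        return first_block      # well correlated: output the first (enhanced) microphone
    if not is_submerged() and is_wet():
        return second_block     # first microphone still draining: use the second (reference) microphone
    if noise_metric(first_block) + bias_db < noise_metric(second_block):
        return first_block
    return second_block
```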
[0017] In another embodiment, a method determines if a first microphone is wet in a camera system having a first microphone and a second microphone, where the first microphone is positioned in a recess of an inner side of a face of the camera, where the recess is coupled to a channel coupled to a lower drain below the channel to drain water from the recess away from the microphone via the channel, and where the second microphone is positioned away from the channel and the drain. A first average signal level of the first audio signal and a second average signal level of the second audio signal are determined over a predefined time interval. A ratio of the first average signal level to the second average signal level is determined. Responsive to the ratio of the first average signal level to the second average signal level exceeding a first threshold or detecting a wind condition, it is determined that a wet microphone condition is not detected. Responsive to the ratio of the first average signal level to the second average signal level not exceeding the first threshold and not detecting the wind condition, it is determined that the wet microphone condition is detected.
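A minimal sketch of this wet-microphone test, assuming the average signal levels are band-limited RMS values (e.g., over 2-4 kHz as described later) and that the wind condition comes from the correlation-based detector; the function and parameter names are illustrative.

```python
def wet_mic_detected(first_level: float, second_level: float,
                     ratio_threshold: float, wind_detected: bool) -> bool:
    """Return True when a wet (undrained) first microphone is indicated.

    A low first/second level ratio suggests water is low-pass filtering
    the first microphone's signal, but it only counts as a wet condition
    when wind has been ruled out, since wind produces similar ratios.
    """
    if second_level <= 0.0:
        return False            # no usable reference level; make no wet determination
    ratio = first_level / second_level
    if ratio > ratio_threshold or wind_detected:
        return False
    return True
```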
[0018] In another embodiment, a camera comprises a lens assembly, a substantially cubic camera housing, a first microphone, a lower drain, an upper drain, a channel, and a second microphone. The lens assembly directs light received through a lens window to an image sensor. The substantially cubic camera housing encloses the lens assembly and comprises a bottom face, left face, right face, back face, top face, and front face. The first microphone is integrated with the front face of the camera and positioned within a recess on an interior facing portion of the front face. The lower drain is below the first microphone and comprises an opening in the substantially cubic camera housing near the front face. The lower drain allows water that collects in the recess housing the first microphone to drain. The upper drain is above the first microphone and comprises an opening in the substantially cubic housing near the front face. The upper drain allows air to enter the recess as the water drains. The channel through the interior facing portion of the front face couples the recess to the lower drain. The second microphone is integrated with a rear portion of the substantially cubic camera housing.
[0019] In yet another embodiment, an audio capture system comprises a substantially cubic housing including a bottom face, left face, right face, back face, top face, and front face. A first microphone is integrated with the front face of the audio capture system and positioned within a recess on an interior facing portion of the front face. A lower drain below the first microphone comprises an opening in the substantially cubic housing near the front face to allow water that collects in the recess housing the first microphone to drain. An upper drain above the first microphone comprises an opening in the substantially cubic housing near the front face to allow air to enter the recess as the water drains. A channel through the interior facing portion of the front face couples the recess to the lower drain. A second microphone is integrated with a rear portion of the substantially cubic housing.
EXAMPLE AUDIO CAPTURE SYSTEM
[0020] FIG. 1 illustrates an example of an audio capture system 100 including multiple microphones. The audio capture system 100 includes at least one "enhanced" microphone 110, at least one "reference" microphone 120, a microphone selection controller 130, and an audio encoder 140. The enhanced microphone 110 includes a drainage enhancement feature to enable water to drain from the microphone more quickly than the reference microphone 120. The drainage enhancement feature may be accomplished utilizing gravity and/or surface tension forces. In various embodiments, the drainage enhancement feature may be implemented using an inner surface energy coating or particular hole dimensions, shapes, density, patterns, or interior curvature, or a combination of features that affect the drainage profile of the enhanced microphone 110. The enhanced microphone 110 can therefore recover relatively quickly when moved from in water to out of water, mitigating the frequency response distortion that leads to muffled, unnatural sound when water is trapped on the membrane over the microphone or obscures the acoustic pathways to the microphone. In contrast, the reference microphone 120 includes a physical barrier between the splashing water and a waterproof membrane over the microphone to mitigate the impulses from splashing water. For example, in one embodiment, the barrier comprises a plastic barrier that absorbs some of the water impact impulse. In another embodiment, an air buffer may exist between the barrier and the waterproof membrane over the microphone. In another embodiment, a porting structure traps a buffer layer of water on the outside of a waterproof membrane over the microphone, thus creating a protective layer that blocks splashing water from directly impacting the waterproof membrane. Additionally, the muffling quality of water pooled on the waterproof membrane reduces some high frequency content of the splashing water.
[0021] In operation, both the enhanced microphone 110 and reference microphone 120 capture ambient audio 105 and pass the captured audio to the microphone selection controller 130. The audio captured by the enhanced microphone 110 and the reference microphone 120 may have varying audio characteristics due to the different structural features of the microphones 110, 120. Typically, the enhanced microphone 110 will have more spectral artifacts both in open air and when operating under water due to the drainage enhancement feature. Furthermore, the enhanced microphone 110 may have a degraded signal-to-noise ratio in windy conditions due to the drainage enhancement feature. However, the enhanced microphone 110 will generally have better signal-to-noise ratio performance out of water in non-windy conditions relative to the reference microphone 120. Therefore, a different selection between the enhanced microphone 110 and the reference microphone 120 may be desirable under different audio capture conditions.
[0022] The microphone selection controller 130 processes the audio captured from the enhanced microphone 110 and the reference microphone 120 and selects, based on the audio characteristics, which of the audio signals to pass to the audio encoder 140. In one embodiment, the microphone selection controller 130 operates on a block-by-block basis. In this embodiment, for each time interval, the microphone selection controller 130 receives a first block of audio data from the enhanced microphone 110 and a second block of audio data from the reference microphone 120, each corresponding to ambient audio 105 captured by the respective microphones 110, 120 during the same time interval. The microphone selection controller 130 processes the pair of blocks to determine which block to pass to the audio encoder 140.
[0023] In one embodiment, the microphone selection controller 130 generally operates to select the enhanced microphone 110 directly after transitioning out of water since the enhanced microphone 110 tends to drain the water faster and has better out-of-water audio quality. Furthermore, the microphone selection controller 130 generally operates to select the reference microphone 120 when in the water and when transitioning between air and water because it better mitigates the unnatural impulses caused by splashing water.
[0024] The audio encoder 140 encodes the blocks of audio received from the microphone selection controller 130 to generate an encoded audio signal 145.
[0025] In an embodiment, the microphone selection controller 130 and/or the audio encoder 140 are implemented as a processor and a non-transitory computer-readable storage medium storing instructions that when executed by the processor carry out the functions attributed to the microphone selection controller 130 and/or audio encoder 140 described herein. The microphone selection controller 130 and audio encoder 140 may be implemented using a common processor or separate processors. In other embodiments, the microphone selection controller 130 and/or audio encoder 140 may be implemented in hardware (e.g., with an FPGA or ASIC), firmware, or a combination of hardware, firmware, and software.
[0026] In an embodiment, the audio capture system 100 is implemented within a camera system such as the camera 600 described below with respect to FIGS. 6A-6B. Such a camera may use the encoded audio 145 captured by the audio capture system 100 as an audio channel for video captured by the camera. Thus, the audio capture system 100 may capture audio in a manner that is concurrent and synchronized with corresponding frames of video.
[0027] FIG. 2 is a flowchart illustrating an embodiment of a process for selecting between an enhanced microphone 110 and a reference microphone 120. A correlation metric is determined 202 between signal levels of audio blocks captured by the enhanced microphone 110 and reference microphone 120 respectively. The correlation metric represents a similarity between a first audio signal captured from the enhanced microphone 110 during a time interval and a second audio signal captured from the reference microphone 120 during the same time interval. Generally, the signals will be well-correlated in the absence of wind noise, but will be poorly correlated when wind noise is present. Thus, the correlation metric may operate as a wind detector. In one embodiment, the correlation metric comprises a value from 0 to 1 where a correlation metric of 1 represents a situation where there is no wind, and a correlation metric of 0 means that the captured audio is entirely wind noise. In one embodiment, the correlation metric is determined using a correlation function that includes a regularization term γ to handle low level signals. For example, in one embodiment, the correlation function is given by:
X = max(0, (∑_{n=1}^{N} (L[n] + γ) * (R[n] + γ)) / sqrt((∑_{n=1}^{N} (L[n] + γ)²) * (∑_{n=1}^{N} (R[n] + γ)²)))    (1)

where (*) represents a scalar multiplication, N is the block size, γ is the regularization term (e.g., γ = 0.001), and L[n] and R[n] are the samples from the enhanced microphone and reference microphone respectively. The normalization and the max operator constrain the correlation metric X to be in the range 0 to +1. In one embodiment, the correlation metric is calculated over a predefined spectral range (e.g., 600-1200 Hz). Using a restricted range beneficially eliminates or reduces artifacts caused by vibration (which typically occur at low frequencies) and reduces the amount of processing relative to calculating the metric over the full frequency spectrum. In one embodiment, the correlation metric is updated at a frequency based on the audio sample rate and sample block size. For example, if a 32 kHz sampling rate is used with a block size of 1024 samples, the correlation metric may be updated approximately every 32 milliseconds. In one embodiment, the correlation metric is smoothed over time.
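As an illustration only, a band-limited, regularized correlation of this kind might be computed per block as in the following Python/NumPy sketch; the function and parameter names are hypothetical, the band edges, block size, sample rate, and γ value follow the examples above, and the normalization shown is one plausible way of keeping the metric within the 0 to +1 range described.

    import numpy as np

    def correlation_metric(left, right, fs=32000, band=(600.0, 1200.0), gamma=1e-3):
        """Regularized correlation between two microphone blocks over a spectral band.

        Sketch only: both blocks are band-limited with an FFT mask, then a
        normalized cross-correlation is computed with a small offset gamma so
        that very quiet blocks (little signal, no wind) still report a high
        correlation rather than dividing small numbers by small numbers.
        """
        n = len(left)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])

        # Restrict both blocks to the analysis band (e.g., 600-1200 Hz).
        l_band = np.fft.irfft(np.fft.rfft(left) * mask, n)
        r_band = np.fft.irfft(np.fft.rfft(right) * mask, n)

        num = np.sum((l_band + gamma) * (r_band + gamma))
        den = np.sqrt(np.sum((l_band + gamma) ** 2) * np.sum((r_band + gamma) ** 2))
        return max(0.0, num / den)  # constrained to the range 0 to +1

With a 32 kHz sample rate and 1024-sample blocks, calling this once per block yields an update roughly every 32 ms; the resulting sequence of values can then be smoothed over time as noted above.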
[0028] The correlation metric is compared 204 to a predefined threshold. In one embodiment, the predefined threshold may change between two or more predefined values depending on the previous state (e.g., whether the reference microphone or enhanced microphone was selected) to provide a hysteresis effect. For example, if for the previously processed block the correlation metric exceeded the predefined threshold (e.g., a predefined threshold of 0.8), indicating that low wind noise was detected, then the predefined threshold is set lower for the current block (e.g., to 0.7). If for the previously processed block the correlation metric did not exceed the predefined threshold (e.g., a predefined threshold of 0.8), indicating that high wind noise was detected, then the predefined threshold for the current block is set higher (e.g., to 0.8).
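For illustration, the two-threshold hysteresis described in this paragraph could be expressed as a small helper such as the sketch below; the 0.8 and 0.7 values are the examples given above, and the state flag name is hypothetical.

    def correlation_threshold(prev_low_wind, high_thresh=0.8, low_thresh=0.7):
        """Return the correlation threshold to use for the current block.

        If the previous block indicated low wind (its correlation metric
        exceeded the threshold then in force), a lower threshold is used so the
        decision does not flip on small changes; otherwise the higher threshold
        applies.
        """
        return low_thresh if prev_low_wind else high_thresh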
[0029] If the correlation metric exceeds 204 a predefined threshold, then the enhanced microphone 110 is selected because it typically has a better signal-to-noise ratio. If the correlation metric does not exceed 204 the predefined threshold, noise metrics are determined for the audio signals captured by the enhanced microphone 110 and the reference microphone 120. Under some conditions, it may be reasonably presumed that both microphones 110, 120 pick up the desired (noiseless) signal at approximately the same level, and that if one of the microphones is slightly blocked, the correlation metric will still be relatively high, indicating that there is low wind. Furthermore, it may be reasonably presumed that noise from the effects of wind or water is local to each microphone and that the noise will not destructively cancel out the signal. Based on these assumptions, the microphone that is louder during a low-correlation condition is determined to be the microphone that has the noise. Thus, in one embodiment, the noise metrics simply comprise root-mean-squared amplitude levels of the enhanced and reference microphones over a predefined time period. For example, the predefined time period may include a sliding time window that includes the currently processed block and a fixed number of blocks prior to the current block (e.g., an approximately 4 second window). In another embodiment, a recursive RMS value is used (e.g., with a time constant of approximately 4 seconds). In one embodiment, the noise metric is based on equalized amplitude levels of the microphones. The equalization levels are set so that the microphones have similar amplitude characteristics under normal conditions (e.g., non-windy and non-watery conditions). In one embodiment, the noise metric is measured across substantially the entire audible band (e.g., between 20 Hz and 16 kHz).
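A recursive RMS estimate of the kind mentioned above might be maintained per microphone as in the following sketch; the smoothing coefficient derived from the 4-second time constant and the state-carrying interface are assumptions for illustration.

    import numpy as np

    def recursive_rms(block, prev_ms, fs=32000, tau=4.0):
        """Update a recursive mean-square estimate with one block of samples.

        prev_ms is the mean-square value carried over from the previous block;
        alpha is chosen so the estimate decays with a time constant of roughly
        tau seconds at the given sample rate and block length.
        """
        alpha = np.exp(-len(block) / (tau * fs))
        block_ms = np.mean(np.asarray(block, dtype=np.float64) ** 2)
        new_ms = alpha * prev_ms + (1.0 - alpha) * block_ms
        return new_ms, np.sqrt(new_ms)  # carried-over state and current RMS level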
[0030] If the sum of the noise metric for the enhanced microphone 110 and a bias value is less than the noise metric for the reference microphone 120, then the microphone selection controller 130 selects 212 the enhanced microphone 110. On the other hand, if the sum of the noise metric for the enhanced microphone 110 and the bias value is not less than (i.e., is greater than or equal to) the noise metric for the reference microphone 120, then the microphone selection controller 130 selects 212 the reference microphone 120.
[0031] In one embodiment, the bias value may comprise either a positive or negative offset that is dynamically adjusted based on the correlation metric. For example, if the correlation metric is below a lower threshold (e.g., 0.4), then a first bias value is used, which may be a positive bias value (e.g., 10 dB). If the correlation metric is above an upper threshold (e.g., 0.8), then a second bias value is used, which may be a negative bias value (e.g., -6 dB). If the correlation metric is between the lower threshold (e.g., 0.4) and the upper threshold (e.g., 0.8), the bias value is a linear function of the correlation metric X. For example, in one embodiment, the bias value is given by:
bias(X) = bias1,                                                   if X < Th_L
bias(X) = bias1 + (bias2 - bias1) * (X - Th_L) / (Th_U - Th_L),    if Th_L ≤ X ≤ Th_U    (2)
bias(X) = bias2,                                                   if X > Th_U

where bias1 is the first bias value used when the correlation metric X is below the lower threshold Th_L and bias2 is the second bias value used when the correlation metric X is above the upper threshold Th_U.
[0032] In one embodiment, a hysteresis component is additionally included in the bias value. In this embodiment, the bias value is adjusted up or down depending on whether the reference microphone 120 or the enhanced microphone 110 was selected for the previous block, so as to avoid switching between the microphones 110, 120 too frequently. For example, in one embodiment, if the enhanced microphone 110 was selected for the previous block, an additional hysteresis bias (e.g., 5 dB) is subtracted from the bias value to make it more likely that the enhanced microphone 110 will be selected again, as shown in the equation below:
bias'(X) = bias(X) - bias_H    (3)

where bias_H is the hysteresis bias.
[0033] On the other hand, if the reference microphone 120 was selected for the previous block, the additional hysteresis bias (e.g., 5 dB) is added to the bias value to make it more likely that the reference microphone 120 is selected again, as shown in the equation below:
bias'(X) = bias(X) + bias_H    (4)
[0034] The bias value takes into account that not all wind is created equal. It is possible to have wind that is softer but generates more perceptible noise than a louder wind. With high amounts of wind (a low correlation metric), the enhanced microphone 110 tends to generate more perceptible noise than the reference microphone 120 due to the drainage enhancement feature. Thus, the bias value is used to penalize the enhanced microphone 110 at low correlation metrics.
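One way the piecewise-linear bias of equations (2)-(4) and the selection comparison could fit together is sketched below; working in dB for both the noise metrics and the bias, and the specific threshold and bias values, are assumptions taken from the examples above.

    def bias_value(x, prev_enhanced, th_lo=0.4, th_hi=0.8,
                   bias1=10.0, bias2=-6.0, bias_h=5.0):
        """Piecewise-linear bias (in dB) from the correlation metric x.

        Below th_lo the enhanced microphone is penalized by bias1; above th_hi
        it is favored by bias2; in between the bias is interpolated linearly,
        per equation (2). The hysteresis term bias_h is subtracted if the
        enhanced microphone was selected for the previous block (equation (3))
        or added if the reference microphone was (equation (4)).
        """
        if x < th_lo:
            bias = bias1
        elif x > th_hi:
            bias = bias2
        else:
            bias = bias1 + (bias2 - bias1) * (x - th_lo) / (th_hi - th_lo)
        return bias - bias_h if prev_enhanced else bias + bias_h

    def select_microphone(corr, enh_noise_db, ref_noise_db, prev_enhanced, threshold):
        """Return True to select the enhanced microphone, False for the reference.

        Follows the flow of FIG. 2: a high correlation selects the enhanced
        microphone outright; otherwise the biased noise levels are compared.
        """
        if corr > threshold:
            return True
        return enh_noise_db + bias_value(corr, prev_enhanced) < ref_noise_db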
[0035] FIG. 3 is a flowchart illustrating another embodiment of a process for selecting between an enhanced microphone 110 and a reference microphone 120. A correlation metric is determined 302 between signal levels of audio blocks captured by the enhanced microphone 110 and reference microphone 120 respectively. If the correlation metric exceeds 304 a predefined threshold, then the enhanced microphone 110 is selected because it typically has better signal-to-noise ratio. If the correlation metric does not exceed 304 the threshold, it is determined 306 if the microphones are submerged in liquid (e.g., water). The predefined threshold may be determined in the same manner described above.
[0036] In one embodiment, a water submersion sensor may be used to determine if the microphones are submerged. In another embodiment (in which the audio capture system is integrated with a camera), an image analysis may be performed to detect features representative of the camera being submerged in water. For example, detecting color loss may be indicative of the camera being submerged, because water causes an exponential loss of light intensity that depends on wavelength. Furthermore, crinkle patterns may be present in the image when the camera is submerged because the water surface can form small concave and convex lenses that create patches of light and dark. Additionally, light reflecting off particles in the water creates scatter and diffusion that can be detected to determine if the camera is submerged. In yet another embodiment, water pressure on the microphone's waterproof membrane may be detected because the waterproof membrane will deflect under external water pressure. This deflection causes increased tension, which shifts the waterproof membrane's resonance higher from its nominal value and can be detected in the microphone signal. Furthermore, the deflection of the waterproof membrane will result in a positive pressure on, and deflection of, the microphone membrane, which could manifest itself as a shift in microphone bias. Additionally, a sensor could be placed near the waterproof membrane to detect an increase in shear force caused by deflection of the waterproof membrane that is indicative of the microphone being submerged.
[0037] If the microphones are not submerged, then it is determined 316 whether the enhanced microphone 110 is wet (e.g., not sufficiently drained after being removed from water). In one embodiment, the wet microphone condition can be detected by observing spectral response changes over a predefined frequency range (e.g., 2 kHz to 4 kHz) or by detecting the sound pattern known to be associated with a wet microphone as compared to a drained microphone. For example, in one embodiment the spectral features associated with a wet (undrained) microphone can be found through empirical means. In general, when a microphone membrane is wet, higher-frequency sounds are attenuated because the extra weight of the water on the membrane reduces the vibration of the membrane. Thus, the water generally acts as a low-pass filter. An example of a process for detecting wet microphones is described in FIG. 4 below. In one embodiment, spectral changes can be monitored based on known, measured drain time constant differences between the microphone geometries. If the enhanced microphone 110 is wet (e.g., not sufficiently drained), then the reference microphone 120 is selected 320. Otherwise, if the microphones are submerged or if the enhanced microphone 110 is not wet, then noise metrics are determined 310 for the audio blocks captured by the enhanced microphone 110 and the reference microphone 120. The noise metrics may be determined in the same manner as described above in FIG. 2. If the sum of the noise metric for the enhanced microphone 110 and a bias value is less than the noise metric for the reference microphone 120, then the microphone selection controller 130 selects 314 the enhanced microphone 110. On the other hand, if the sum of the noise metric for the enhanced microphone 110 and the bias value is not less than the noise metric for the reference microphone 120, then the microphone selection controller 130 selects 320 the reference microphone 120. The bias value may be determined based on equations (2)-(4) described above.
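For illustration, the FIG. 3 decision flow might be arranged as in the sketch below; the submersion and wet-microphone inputs stand in for whichever of the detectors described above is actually available, and the noise levels and bias are assumed to be expressed in dB.

    def select_microphone_fig3(corr, threshold, is_submerged, enhanced_is_wet,
                               enh_noise_db, ref_noise_db, bias_db):
        """Return 'enhanced' or 'reference' following the FIG. 3 flow."""
        if corr > threshold:
            return "enhanced"        # low wind: better signal-to-noise ratio
        if not is_submerged and enhanced_is_wet:
            return "reference"       # enhanced microphone not yet drained
        # Submerged, or enhanced microphone is dry: biased noise comparison.
        if enh_noise_db + bias_db < ref_noise_db:
            return "enhanced"
        return "reference"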
[0038] FIG. 4 is a flowchart illustrating an embodiment of a process for detecting a wet microphone. Generally, water on a microphone has a transfer function approximating a low-pass filter. The amount of attenuation and the cutoff frequency of the wet microphone transfer function depend on how much water is on the microphone. Particularly, the more water on the microphone membrane, the greater the attenuation and the lower the cutoff frequency. This phenomenon is due to the added mass of the water on the microphone membrane dampening the movement of the membrane. In one embodiment, root-mean-squared (RMS) signal levels of the audio blocks captured by the enhanced microphone 110 and reference microphone 120 are calculated 402 across a predefined frequency range (e.g., 2 kHz to 4 kHz). A smoothing filter may be applied 404 to smooth the ratio of the enhanced microphone RMS signal level to the reference microphone RMS signal level over time. If it is determined 406 that the ratio of the enhanced microphone RMS signal level to the reference microphone RMS signal level is above a predefined threshold, then the wet microphone condition is not detected 412. Otherwise, if it is determined 406 that the ratio of the RMS signal levels is not above the predefined threshold, it is determined 408 whether wind is present, since the presence of wind can result in similar RMS ratios. The presence of wind can be determined based on, for example, a detection signal from a wind detector that determines the presence of wind based on a correlation metric as described above. If it is determined 408 that the wind noise threshold is met (i.e., the correlation metric is less than a predefined threshold), then the wet microphone condition is not detected 412. Otherwise, if the wind noise threshold is not met (i.e., the correlation metric is greater than the predefined threshold), then the wet microphone condition is detected 410.
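A sketch of the FIG. 4 test is shown below; the 2-4 kHz band follows the example above, while the ratio threshold, smoothing coefficient, and wind threshold are illustrative assumptions.

    import numpy as np

    def band_rms(block, fs=32000, band=(2000.0, 4000.0)):
        """RMS level of one block restricted to a frequency band."""
        spectrum = np.fft.rfft(block)
        freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.sqrt(np.mean(np.abs(spectrum[mask]) ** 2))

    def wet_microphone(enh_block, ref_block, prev_ratio, corr,
                       ratio_thresh=0.5, wind_thresh=0.8, alpha=0.9):
        """Return (wet_detected, smoothed_ratio) for the current block.

        The enhanced/reference band-RMS ratio is smoothed over time; a low
        ratio combined with no wind (a high correlation metric) indicates a
        wet enhanced microphone.
        """
        ratio = band_rms(enh_block) / max(band_rms(ref_block), 1e-12)
        smoothed = alpha * prev_ratio + (1.0 - alpha) * ratio
        if smoothed > ratio_thresh:
            return False, smoothed   # high-frequency content intact: not wet
        if corr < wind_thresh:
            return False, smoothed   # wind can produce a similar ratio: not wet
        return True, smoothed        # attenuated highs with no wind: wet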
[0039] In embodiments where there are two or more enhanced microphones 110 and two or more reference microphones 120, the selection algorithm described above may be applied to a group of enhanced microphones 110 and a group of reference microphones 120 instead of a single enhanced microphone 110 and a single reference microphone 120. In this embodiment, the enhanced microphone signal and reference microphone signal input to the processes above may comprise, for example, an average of all of the enhanced microphones and an average of all of the reference microphones, respectively. The processes described above then select either the enhanced microphone group or the reference microphone group. Furthermore, in one embodiment, once either the enhanced microphones 110 or the reference microphones 120 are selected, a separate selection algorithm may be applied to select an audio block from one of the microphones in the selected group to provide to the audio encoder 140 (e.g., the signal with the lowest noise).
[0040] In another embodiment, a process selects a subset of microphones out of a group of microphones that may include reference microphones and/or enhanced microphones. FIG. 5 illustrates an embodiment of a process performed by the microphone selection controller 130 for choosing N microphones out of a group of M microphones. Audio signals are received 502 from each of the microphones in the group. Adverse conditions such as wind (e.g., a low correlation value) or a wet microphone (e.g., detected using the process of FIG. 4) are detected 504 if present. If no adverse conditions (e.g., wind, water, etc.) are detected, the microphone selection controller 130 selects 506 N microphones in the group of M microphones that are pre-identified as being preferred microphones. If adverse conditions are detected (e.g., wind or water), the RMS levels of each of the M microphones are measured 508 and a bias value is added to each microphone's RMS level. In one embodiment, the bias value is determined based on the bias equations (2)-(4) described above. In alternative embodiments, the bias value for each microphone may be different depending on the configuration of each microphone. For example, in one embodiment, the bias function can be a function of the correlation metric, the RMS values of all other microphones, and the determination of whether or not the microphone is under water. Then, the N microphones having the lowest sums of their respective bias values and RMS levels are selected 510. Mathematically, the process described above can be represented by the following equation:
J_i = f_i(X, R_1, R_2, ..., R_M), for i = 1, ..., M    (5)

where the microphone selection controller 130 picks the N microphones having the smallest cost values J_i, J_i is the cost value associated with the i-th microphone, X is the correlation metric, R_i is the RMS value of the i-th microphone, and f_i is a predefined cost function.
[0041] In the case of only a single reference microphone 120 and a single enhanced microphone 110, f_1(X, R_1, R_2) = R_1 + g(X) and f_2(X, R_1, R_2) = R_2, where g(X) is the piecewise linear function described in the bias equations above, f_1 is the cost function for the enhanced microphone 110, and f_2 is the cost function for the reference microphone 120. In one embodiment, a hysteresis bias may also be included as described above, except with potentially different thresholds, depending on the configuration.
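The N-of-M selection of equation (5) amounts to evaluating one cost per microphone and keeping the N cheapest; a sketch under the two-microphone cost functions just described, with hypothetical names, follows.

    def select_n_microphones(corr, rms_levels, cost_fns, n):
        """Return the indices of the n microphones with the smallest cost J_i.

        cost_fns[i] maps (corr, rms_levels) to the cost J_i of microphone i,
        as in equation (5); rms_levels holds the RMS value of every microphone.
        """
        costs = [f(corr, rms_levels) for f in cost_fns]
        return sorted(range(len(costs)), key=lambda i: costs[i])[:n]

    def g(x, th_lo=0.4, th_hi=0.8, bias1=10.0, bias2=-6.0):
        """Piecewise-linear penalty from the bias equations above."""
        if x < th_lo:
            return bias1
        if x > th_hi:
            return bias2
        return bias1 + (bias2 - bias1) * (x - th_lo) / (th_hi - th_lo)

    # Two-microphone case from paragraph [0041]: the enhanced microphone pays
    # the correlation-dependent penalty g(X); the reference microphone does not.
    cost_fns = [lambda x, r: r[0] + g(x),   # f1: enhanced microphone
                lambda x, r: r[1]]          # f2: reference microphone

    # Example: pick the single best microphone for this block.
    # best = select_n_microphones(corr, [enh_rms, ref_rms], cost_fns, n=1)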
EXAMPLE CAMERA SYSTEM CONFIGURATION
[0042] FIGs. 6A-6B illustrate perspective views of an example camera 600 in which the audio capture system 100 may be integrated. The camera 600 comprises at least one cross-section having four approximately equal length sides in a two-dimensional plane. Although the cross-section is substantially square, the corners of the cross-section may be rounded in some embodiments (e.g., a rounded square or squircle). The exterior of the square camera 600 includes six surfaces (i.e., a front face, a left face, a right face, a back face, a top face, and a bottom face). In the illustrated embodiment, the exterior surfaces substantially conform to a rectangular cuboid, which may have rounded or unrounded corners. In one example embodiment, all camera surfaces may also have a substantially square (or rounded square) profile, making the square camera 600 substantially cubic. In alternate embodiments, only two of the six faces (e.g., the front face 610 and back face 640) have equal length sides and the other faces may be other shapes, such as rectangles. The camera 600 can have a small form factor (e.g., a height of 2 cm to 9 cm, a width of 2 cm to 9 cm, and a depth of 2 cm to 9 cm) and is made of a rigid material such as plastic, rubber, aluminum, steel, fiberglass, or a combination of materials. In other embodiments, the camera 600 may have a different form factor.
[0043] In an embodiment, the camera 600 includes a camera lens window 602 surrounded by a front face perimeter portion 608 on a front face 610, an interface button 604 and a display 614 on a top face 620, an I/O door 606 on a side face 630, and a back door 612 on a back face 640. The camera lens window 602 comprises a transparent or substantially transparent material (e.g., glass or plastic) that enables light to pass through to an internal lens assembly. In one embodiment, the camera lens window 602 is substantially flat (as opposed to a convex lens window found in many conventional cameras). The front face 610 of the camera 600 furthermore comprises a front face perimeter portion 608 that surrounds the lens window 602. In one embodiment, the front face perimeter portion 608 comprises a set of screws to secure the front face perimeter portion 608 to the remainder of the housing of the camera 600 and to hold the lens window 602 in place.
[0044] The interface button 604 provides a user interface that, when activated, enables a user to control various functions of the camera 600. For example, pressing the button 604 may control the camera to power on or power off, take pictures or record video, save a photo, adjust camera settings, or perform any other action relevant to recording or storing digital media. In one embodiment, the interface button 604 may perform different functions depending on the type of interaction (e.g., short press, long press, single tap, double tap, triple tap, etc.). In alternative embodiments, these functions may also be controlled by other types of interfaces such as a knob, a switch, a dial, a touchscreen, voice control, etc. Furthermore, the camera 600 may have more than one interface button 604 or other controls. The display 614 comprises, for example, a light emitting diode (LED) display, a liquid crystal display (LCD), or another type of display for displaying various types of information such as camera status and menus. In alternative embodiments, the interface button 604, display 614, and/or other interface features may be located elsewhere on the camera 600.
[0045] The I/O door 606 provides a protective cover for various input/output ports of the camera 600. For example, in one embodiment, the camera 600 includes a Universal Serial Bus (USB) port and/or a High-Definition Multimedia Interface (HDMI) port, and a memory card slot accessible behind the I/O door 606. In other embodiments, additional or different input/output ports may be available behind the I/O door 606 or elsewhere on the camera 600.
[0046] The back door 612 provides a protective cover that when removed enables access to internal components of the camera 600. For example, in one embodiment, a removable battery is accessible via the back door 612.
[0047] In some embodiments, the camera 600 includes features other than those described herein. For example, instead of a single interface button 604, the square camera 600 can include additional buttons or different interface features such as speakers and/or various input/output ports.
[0048] In one embodiment, the reference microphone 120 is integrated with or near the back door 612 of the camera 600 such that it is positioned near the rear of the camera 600, and the enhanced microphone 110 is integrated with the front face 610 of the camera 600 such that it is positioned near the front of the camera 600.
[0049] FIG. 7 illustrates an example of a front face perimeter portion 608 of a camera 600 with an integrated drain enhancement feature in the form of a channel 702 between a recess 704, where the enhanced microphone 110 (not shown) is positioned, and one or more drains (e.g., an upper drain structure 708 and a lower drain structure 706, each of which may comprise a single drain or multiple drains) to enable liquid to drain. Microphone ports 710 provide openings to let sound reach the microphone(s) housed in recess 704. In one embodiment, the upper drain structure 708 is positioned above the channel 702 and the lower drain structure 706 is positioned below the channel 702. The lower drain structure 706 is generally much larger than the upper drain structure 708.
[0050] When the camera 600 is submerged, the entire channel 702 generally fills with water. When the camera 600 emerges from the water, the large mass of water in the channel 702 flows out through the lower drain structure 706 through the force of gravity. This pulls air in through the upper drain structure 708 and clears water from the recess 704, the upper drain structure 708, and/or the microphone ports 710, thus allowing the microphone to resume normal acoustic performance.

ADDITIONAL CONFIGURATION CONSIDERATIONS
[0051] Throughout this specification, some embodiments have used the expression "coupled" along with its derivatives. The term "coupled" as used herein is not necessarily limited to two or more elements being in direct physical or electrical contact. Rather, the term "coupled" may also encompass two or more elements that are not in direct contact with each other but that still co-operate or interact with each other, or that are structured to provide a drainage path between the elements.
[0052] Likewise, as used herein, the terms "comprises," "comprising," "includes,"
"including," "has," "having" or any other variation thereof, are intended to cover a nonexclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
[0053] In addition, use of the "a" or "an" is employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.
[0054] Finally, as used herein any reference to "one embodiment" or "an embodiment" means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
[0055] Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs based on the principles disclosed herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes, and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation, and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.


CLAIMS
1. A method for generating an output audio signal in an audio capture system having
multiple microphones including at least a first microphone and a second microphone, the first microphone including a drainage enhancement feature structured to drain liquid more quickly than the second microphone lacking the drainage enhancement feature, the method comprising:
receiving a first audio signal from the first microphone representing ambient audio captured by the first microphone during a time interval;
receiving a second audio signal from the second microphone representing ambient audio captured by the second microphone during the time interval; determining, by a processor, a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold, outputting the first audio signal for the time interval;
responsive to the correlation metric not exceeding the predefined threshold,
determining a first noise metric for the first audio signal and a second noise metric for the second audio signal;
responsive to a sum of the first noise metric and a bias value being less than the
second noise metric, outputting the first audio signal for the time interval; and responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, outputting the second audio signal for the time interval.
2. The method of claim 1, wherein determining the correlation metric comprises correlating the first audio signal and the second audio signal over a predefined spectral range of approximately 600 Hz to approximately 1200 Hz.
3. The method of claim 1, wherein determining the first noise metric and the second noise metric comprises determining the first and second noise metrics over a predefined spectral range of approximately 20 Hz to approximately 16 kHz.

4. The method of claim 1, further comprising:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
5. The method of claim 1, wherein determining the first noise metric and the second noise metric comprises:
setting the first noise metric to a first value based on a root-mean-square level of the first audio signal over a predefined time period; and
setting the second noise metric to a second value based on a root-mean-square level of the second audio signal over the predefined time period.
6. The method of claim 1, further comprising:
dynamically setting the bias value for each time interval based on whether the
correlation metric is above an upper correlation threshold, below a lower correlation threshold, or in between the lower and upper correlation thresholds.
7. The method of claim 6, wherein dynamically setting the bias value comprises:
setting the bias value to a positive predefined value responsive to the correlation
metric being below the lower correlation threshold;
setting the bias value to a negative predefined value responsive to the correlation metric being above the upper correlation threshold; and
setting the bias value as a linear function of the correlation metric responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold.
8. The method of claim 6, wherein dynamically setting the bias value comprises:
setting the bias value to the difference of a positive predefined value and a hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the first microphone being selected for a prior time interval; setting the bias value to a difference of a negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the first microphone being selected for a prior time interval; setting the bias value as a difference between a linear function of the correlation
metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the first microphone being selected in the prior time interval; setting the bias value to a sum of a positive predefined value and the hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the second microphone being selected for the prior time interval; setting the bias value to a sum of the negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the second microphone being selected for the prior time interval; setting the bias value as a sum of a linear function of the correlation metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the second microphone being selected in the prior time interval.
9. A non-transitory computer-readable medium storing instructions for generating an output audio signal in an audio capture system having multiple microphones including at least a first microphone and a second microphone, the first microphone including a drainage enhancement feature structured to drain liquid more quickly than the second microphone lacking the drainage enhancement feature, the instructions when executed by a processor causing the processor to perform steps including:
receiving a first audio signal from the first microphone representing ambient audio captured by the first microphone during a time interval;
receiving a second audio signal from the second microphone representing ambient audio captured by the second microphone during the time interval;
determining a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold, outputting the first audio signal for the time interval; responsive to the correlation metric not exceeding the predefined threshold, determining a first noise metric for the first audio signal and a second noise metric for the second audio signal;
responsive to a sum of the first noise metric and a bias value being less than the
second noise metric, outputting the first audio signal for the time interval; and responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, outputting the second audio signal for the time interval.
10. The non-transitory computer-readable medium of claim 9, wherein determining the
correlation metric comprises correlating the first audio signal and the second audio signal over a predefined spectral range of approximately 600 Hz to approximately 1200 Hz.
11. The non-transitory computer-readable medium of claim 9, wherein determining the first noise metric and the second noise metric comprises determining the first and second noise metrics over a predefined spectral range of approximately 20 Hz to
approximately 16 kHz.
12. The non-transitory computer-readable medium of claim 9, the instructions when
executed further causing the processor to perform steps including:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
13. The non-transitory computer-readable medium of claim 9, wherein determining the first noise metric and the second noise metric comprises:
setting the first noise metric to a first value based on a root-mean-square level of the first audio signal over a predefined time period; and
setting the second noise metric to a second value based on a root-mean-square level of the second audio signal over the predefined time period.
14. The non-transitory computer-readable medium of claim 9, the instructions when executed further causing the processor to perform steps including:
dynamically setting the bias value for each time interval based on whether the
correlation metric is above an upper correlation threshold, below a lower correlation threshold, or in between the lower and upper correlation
thresholds.
15. The non-transitory computer-readable medium of claim 14, wherein dynamically setting the bias value comprises:
setting the bias value to a positive predefined value responsive to the correlation
metric being below the lower correlation threshold;
setting the bias value to a negative predefined value responsive to the correlation
metric being above the upper correlation threshold; and
setting the bias value as a linear function of the correlation metric responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold.
16. The non-transitory computer-readable medium of claim 14, wherein dynamically setting the bias value comprises:
setting the bias value to the difference of a positive predefined value and a hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the first microphone being selected for a prior time interval; setting the bias value to a difference of a negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the first microphone being selected for a prior time interval; setting the bias value as a difference between a linear function of the correlation
metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the first microphone being selected in the prior time interval; setting the bias value to a sum of a positive predefined value and the hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the second microphone being selected for the prior time interval; setting the bias value to a sum of the negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the second microphone being selected for the prior time interval; setting the bias value as a sum of a linear function of the correlation metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the second microphone being selected in the prior time interval.
17. An audio capture system comprising:
a first microphone including a drainage enhancement feature structured to drain
liquid;
a second microphone lacking the drainage enhancement feature;
a processor; and
a non-transitory computer-readable medium storing instructions for generating an output audio signal, the instructions when executed by the processor causing the processor to perform steps including:
receiving a first audio signal from the first microphone representing ambient audio captured by the first microphone during a time interval;
receiving a second audio signal from the second microphone representing ambient audio captured by the second microphone during the time interval;
determining a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold,
outputting the first audio signal for the time interval;
responsive to the correlation metric not exceeding the predefined
threshold, determining a first noise metric for the first audio signal and a second noise metric for the second audio signal;
responsive to a sum of the first noise metric and a bias value being less than the second noise metric, outputting the first audio signal for the time interval; and
responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, outputting the second audio signal for the time interval.
18. The audio capture system of claim 17, further comprising:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
19. The audio capture system of claim 17, wherein determining the first noise metric and the second noise metric comprises:
setting the first noise metric to a first value based on a root-mean-square level of the first audio signal over a predefined time period; and
setting the second noise metric to a second value based on a root-mean-square level of the second audio signal over the predefined time period.
20. The audio capture system of claim 17, further comprising:
dynamically setting the bias value for each time interval based on whether the
correlation metric is above an upper correlation threshold, below a lower correlation threshold, or in between the lower and upper correlation thresholds.
21. A method for generating an output audio signal in an audio capture system having
multiple microphones including at least a first microphone and a second microphone, the first microphone including a drainage enhancement feature structured to drain liquid more quickly than the second microphone lacking the drainage enhancement feature, the method comprising:
receiving a first audio signal from the first microphone representing ambient audio captured by the first microphone during a time interval;
receiving a second audio signal from the second microphone representing ambient audio captured by the second microphone during the time interval;
determining, by a processor, a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal; responsive to the correlation metric exceeding a predefined threshold, outputting the first audio signal for the time interval;
responsive to the correlation metric not exceeding the predefined threshold,
determining if the first and second microphones are submerged in liquid; responsive to determining that the first and second microphones are not submerged, determining whether the first microphone is wet;
responsive to determining that the first microphone is wet, outputting the second audio signal for the time interval;
responsive to determining that the first microphone is not wet or that the first and second microphones are submerged, determining a first noise metric for the first audio signal and a second noise metric for the second audio signal;
responsive to a sum of the first noise metric and a bias value being less than the
second noise metric, outputting the first audio signal for the time interval; and responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, outputting the second audio signal for the time interval.
22. The method of claim 21, wherein determining if the wet microphone condition is met comprises:
determining an average signal level of the first audio signal and an average signal level for the second audio signal;
responsive to a ratio of the first average signal level to the second average signal level exceeding a second threshold or detecting a wind condition, determining that the wet microphone condition is not detected;
responsive to the ratio of the first average signal level to the second average signal level not exceeding the second threshold and not detecting the wind condition, determining that the wet microphone condition is detected.
23. The method of claim 22, further comprising:
applying a smoothing filter to the ratio, the smoothing filter to smooth the ratio over a sequence of time intervals.
24. The method of claim 21, wherein determining the correlation metric comprises
correlating the first audio signal and the second audio signal over a predefined spectral range of approximately 600 Hz to approximately 1200 Hz.
25. The method of claim 21, wherein determining the first noise metric and the second noise metric comprises determining the first and second noise metrics over a predefined spectral range of approximately 20 Hz to approximately 16 kHz.
26. The method of claim 21, further comprising:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
27. The method of claim 21, wherein determining the first noise metric and the second noise metric comprises:
setting the first noise metric to a first value based on a root-mean-square level of the first audio signal over a predefined time period; and
setting the second noise metric to a second value based on a root-mean-square level of the second audio signal over the predefined time period.
28. The method of claim 21, further comprising:
dynamically setting the bias value for each time interval based on whether the
correlation metric is above an upper correlation threshold, below a lower correlation threshold, or in between the lower and upper correlation thresholds.
29. The method of claim 28, wherein dynamically setting the bias value comprises:
setting the bias value to a positive predefined value responsive to the correlation
metric being below the lower correlation threshold;
setting the bias value to a negative predefined value responsive to the correlation metric being above the upper correlation threshold; and
setting the bias value as a linear function of the correlation metric responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold.
30. The method of claim 28, wherein dynamically setting the bias value comprises:
setting the bias value to the difference of a positive predefined value and a hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the first microphone being selected for a prior time interval; setting the bias value to a difference of a negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the first microphone being selected for a prior time interval; setting the bias value as a difference between a linear function of the correlation
metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the first microphone being selected in the prior time interval; setting the bias value to a sum of a positive predefined value and the hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the second microphone being selected for the prior time interval; setting the bias value to a sum of the negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the second microphone being selected for the prior time interval; setting the bias value as a sum of a linear function of the correlation metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the second microphone being selected in the prior time interval.
31. A non-transitory computer-readable medium storing instructions for generating an output audio signal in an audio capture system having multiple microphones including at least a first microphone and a second microphone, the first microphone including a drainage enhancement feature structured to drain liquid more quickly than the second microphone lacking the drainage enhancement feature, the instructions when executed by a processor causing the processor to perform steps including:
receiving a first audio signal from the first microphone representing ambient audio captured by the first microphone during a time interval;
receiving a second audio signal from the second microphone representing ambient audio captured by the second microphone during the time interval; determining a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold, outputting the first audio signal for the time interval;
responsive to the correlation metric not exceeding the predefined threshold,
determining if the first and second microphones are submerged in liquid; responsive to determining that the first and second microphones are not submerged, determining whether the first microphone is wet;
responsive to determining that the first microphone is wet, outputting the second audio signal for the time interval;
responsive to determining that the first microphone is not wet or that the first and second microphones are submerged, determining a first noise metric for the first audio signal and a second noise metric for the second audio signal;
responsive to a sum of the first noise metric and a bias value being less than the
second noise metric, outputting the first audio signal for the time interval; and responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, outputting the second audio signal for the time interval.
32. The non-transitory computer-readable medium of claim 31, wherein determining if the wet microphone condition is met comprises:
determining an average signal level of the first audio signal and an average signal level for the second audio signal;
responsive to a ratio of the first average signal level to the second average signal level exceeding a second threshold or detecting a wind condition, determining that the wet microphone condition is not detected;
responsive to the ratio of the first average signal level to the second average signal level not exceeding the second threshold and not detecting the wind condition, determining that the wet microphone condition is detected.
33. The non-transitory computer-readable medium of claim 32, wherein the instructions when executed further cause the processor to perform steps including:
applying a smoothing filter to the ratio, the smoothing filter to smooth the ratio over a sequence of time intervals.
34. The non-transitory computer-readable medium of claim 31, wherein determining the correlation metric comprises correlating the first audio signal and the second audio signal over a predefined spectral range of approximately 600 Hz to approximately 1200 Hz.
35. The non-transitory computer-readable medium of claim 31, wherein determining the first noise metric and the second noise metric comprises determining the first and second noise metrics over a predefined spectral range of approximately 20 Hz to
approximately 16 kHz.
36. The non-transitory computer-readable medium of claim 31, wherein the instructions when executed further cause the processor to perform steps including:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
37. The non-transitory computer-readable medium of claim 31, wherein determining the first noise metric and the second noise metric comprises:
setting the first noise metric to a first value based on a root-mean-square level of the first audio signal over a predefined time period; and
setting the second noise metric to a second value based on a root-mean-square level of the second audio signal over the predefined time period.
38. The non-transitory computer-readable medium of claim 31, wherein the instructions when executed further cause the processor to perform steps including:
dynamically setting the bias value for each time interval based on whether the
correlation metric is above an upper correlation threshold, below a lower correlation threshold, or in between the lower and upper correlation thresholds.
39. The non-transitory computer-readable medium of claim 38, wherein dynamically setting the bias value comprises: setting the bias value to a positive predefined value responsive to the correlation metric being below the lower correlation threshold;
setting the bias value to a negative predefined value responsive to the correlation
metric being above the upper correlation threshold; and
setting the bias value as a linear function of the correlation metric responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold.
40. The non-transitory computer-readable medium of claim 38, wherein dynamically setting the bias value comprises:
setting the bias value to the difference of a positive predefined value and a hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the first microphone being selected for a prior time interval; setting the bias value to a difference of a negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the first microphone being selected for a prior time interval; setting the bias value as a difference between a linear function of the correlation
metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the first microphone being selected in the prior time interval; setting the bias value to a sum of a positive predefined value and the hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the second microphone being selected for the prior time interval; setting the bias value to a sum of the negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the second microphone being selected for the prior time interval; setting the bias value as a sum of a linear function of the correlation metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the second microphone being selected in the prior time interval.
41. An audio capture system comprising:
a first microphone including a drainage enhancement feature structured to drain
liquid;
a second microphone lacking the drainage enhancement feature; a processor; and
a non-transitory computer-readable medium storing instructions for generating an output audio signal, the instructions when executed by the processor causing the processor to perform steps including:
receiving a first audio signal from the first microphone representing ambient audio captured by the first microphone during a time interval;
receiving a second audio signal from the second microphone representing ambient audio captured by the second microphone during the time interval;
determining a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold, outputting the first audio signal for the time interval;
responsive to the correlation metric not exceeding the predefined threshold, determining if the first and second microphones are submerged in liquid;
responsive to determining that the first and second microphones are not
submerged, determining whether the first microphone is wet;
responsive to determining that the first microphone is wet, outputting the second audio signal for the time interval;
responsive to determining that the first microphone is not wet or that the first and second microphones are submerged, determining a first noise metric for the first audio signal and a second noise metric for the second audio signal;
responsive to a sum of the first noise metric and a bias value being less than the second noise metric, outputting the first audio signal for the time interval; and
responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, outputting the second audio signal for the time interval.
42. The audio capture system of claim 41, wherein determining if the wet microphone condition is met comprises:
determining an average signal level of the first audio signal and an average signal level for the second audio signal;
responsive to a ratio of the first average signal level to the second average signal level exceeding a second threshold or detecting a wind condition, determining that the wet microphone condition is not detected;
responsive to the ratio of the first average signal level to the second average signal level not exceeding the second threshold and not detecting the wind condition, determining that the wet microphone condition is detected.
43. The audio capture system of claim 41, further comprising:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
44. The audio capture system of claim 41, wherein the instructions when executed further cause the processor to perform steps including:
dynamically setting the bias value for each time interval based on whether the
correlation metric is above an upper correlation threshold, below a lower correlation threshold, or in between the lower and upper correlation thresholds.
45. A method for determining if a first microphone is wet in a camera system having a first microphone and a second microphone, wherein the first microphone is positioned in a recess of an inner side of a face of the camera, the recess coupled to a channel coupled to a lower drain below the channel to drain water from the recess away from the microphone via the channel, and wherein the second microphone is positioned away from the channel and the drain, the method comprising:
determining a first average signal level of the first audio signal and a second average signal level of the second audio signal over a predefined time interval;
determining a ratio of the first average signal level to the second average signal level; responsive to the ratio of the first average signal level to the second average signal level exceeding a first threshold or detecting a wind condition, determining, by a processor, that a wet microphone condition is not detected;
responsive to the ratio of the first average signal level to the second average signal level not exceeding the first threshold and not detecting the wind condition, determining, by the processor, that the wet microphone condition is detected.
46. The method of claim 45, further comprising:
applying a smoothing filter to the ratio, the smoothing filter to smooth the ratio over a sequence of time intervals.
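The claim does not specify the form of the smoothing filter. As one hedged example, a first-order exponential smoother over the interval-by-interval ratio could look like the sketch below; the coefficient value is an arbitrary placeholder.

```python
class RatioSmoother:
    """First-order (exponential) smoothing of the level ratio across intervals."""

    def __init__(self, alpha=0.8):
        self.alpha = alpha        # closer to 1.0 means heavier smoothing
        self.smoothed = None

    def update(self, ratio):
        # Seed with the first observation, then blend new ratios in gradually.
        if self.smoothed is None:
            self.smoothed = ratio
        else:
            self.smoothed = self.alpha * self.smoothed + (1.0 - self.alpha) * ratio
        return self.smoothed
```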
47. The method of claim 45, further comprising detecting whether or not the wind condition is present, wherein detecting whether or not the wind condition is present comprises:
determining a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold, determining that the wind condition is not present; and
responsive to the correlation metric not exceeding the predefined threshold, determining that the wind condition is present.
48. The method of claim 47, wherein determining the correlation metric comprises:
correlating the first audio signal and the second audio signal over a predefined
spectral range of approximately 600 Hz to approximately 1200 Hz.
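One way to realize a correlation metric restricted to roughly 600 Hz to 1200 Hz is to band-pass both signals and take a normalized zero-lag cross-correlation, as in the sketch below; the filter order, the sample rate, and the zero-lag choice are assumptions, not specified by the claims. Comparing the returned value against the predefined threshold then yields the wind decision of the preceding claims.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_limited_correlation(sig_a, sig_b, fs=48000, band=(600.0, 1200.0)):
    """Normalized zero-lag correlation of two signals over a limited band."""
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    a = sosfilt(sos, sig_a)
    b = sosfilt(sos, sig_b)
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    if denom == 0.0:
        return 0.0               # silent interval: treat as uncorrelated
    return float(np.sum(a * b) / denom)
```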
49. The method of claim 47, further comprising:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
50. The method of claim 45, wherein the camera system further comprises an upper drain that operates to let air enter the recess and push the water downward through the channel and out the lower drain.
51. A non-transitory computer-readable storage medium storing instructions for determining if a first microphone is wet in a camera system having a first microphone and a second microphone, wherein the first microphone is positioned in a recess of an inner side of a face of the camera, the recess coupled to a channel coupled to a lower drain below the channel to drain water from the recess away from the first microphone via the channel, and wherein the second microphone is positioned away from the channel and the drain, the instructions when executed by a processor causing the processor to perform steps including:
determining a first average signal level of the first audio signal and a second average signal level of the second audio signal over a predefined time interval;
determining a ratio of the first average signal level to the second average signal level;
responsive to the ratio of the first average signal level to the second average signal level exceeding a first threshold or detecting a wind condition, determining that a wet microphone condition is not detected;
responsive to the ratio of the first average signal level to the second average signal level not exceeding the first threshold and not detecting the wind condition, determining that the wet microphone condition is detected.
52. The non-transitory computer-readable storage medium of claim 51, further comprising: applying a smoothing filter to the ratio, the smoothing filter to smooth the ratio over a sequence of time intervals.
53. The non-transitory computer-readable storage medium of claim 51, further comprising detecting whether or not the wind condition is present, wherein detecting whether or not the wind condition is present comprises:
determining a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold, determining that the wind condition is not present; and
responsive to the correlation metric not exceeding the predefined threshold, determining that the wind condition is present.
54. The non-transitory computer-readable storage medium of claim 53, wherein determining the correlation metric comprises:
correlating the first audio signal and the second audio signal over a predefined
spectral range of approximately 600 Hz to approximately 1200 Hz.
55. The non-transitory computer-readable storage medium of claim 53, wherein the instructions when executed further cause the processor to perform steps including:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
56. The non-transitory computer-readable storage medium of claim 51, wherein the camera system further comprises an upper drain that operates to let air enter the recess and push the water downward through the channel and out the lower drain.
57. An audio capture system comprising:
a housing having a recess on an interior face, the recess coupled to a channel coupled to a lower drain below the channel to drain water away from the recess via the channel;
a first microphone positioned in the recess;
a second microphone positioned away from the channel and the drain;
a processor; and
a non-transitory computer-readable medium storing instructions for generating an output audio signal, the instructions when executed by the processor causing the processor to perform steps including:
determining a first average signal level of the first audio signal and a second average signal level of the second audio signal over a predefined time interval;
determining a ratio of the first average signal level to the second average signal level;
responsive to the ratio of the first average signal level to the second average signal level exceeding a first threshold or detecting a wind condition, determining that a wet microphone condition is not detected;
responsive to the ratio of the first average signal level to the second average signal level not exceeding the first threshold and not detecting the wind condition, determining that the wet microphone condition is detected.
58. The audio capture system of claim 57, further comprising:
applying a smoothing filter to the ratio, the smoothing filter to smooth the ratio over a sequence of time intervals.
59. The audio capture system of claim 57, further comprising detecting whether or not the wind condition is present, wherein detecting whether or not the wind condition is present comprises:
determining a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold, determining that the wind condition is not present;
responsive to the correlation metric not exceeding the predefined threshold, determining that the wind condition is present.
60. The audio capture system of claim 59, wherein determining the correlation metric
comprises:
correlating the first audio signal and the second audio signal over a predefined
spectral range of approximately 600 Hz to approximately 1200 Hz.
61. The audio capture system of claim 59, wherein the instructions when executed further cause the processor to perform steps including:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
62. The audio capture system of claim 57, wherein the audio capture system further
comprises an upper drain that operates to let air enter the recess and push the water downward through the channel and out the lower drain.
63. A camera, comprising:
a lens assembly for directing light received through a lens window to an image sensor;
a substantially cubic camera housing enclosing the lens assembly, the substantially cubic camera housing comprising a bottom face, left face, right face, back face, top face, and front face;
a first microphone integrated with the front face of the camera and positioned within a recess on an interior facing portion of the front face;
a lower drain below the first microphone comprising an opening in the substantially cubic camera housing near the front face, the lower drain to allow water that collects in the recess housing the first microphone to drain;
an upper drain above the first microphone comprising an opening in the substantially cubic camera housing near the front face, the upper drain to allow air to enter the recess as the water drains;
a channel through the interior facing portion of the front face that couples the recess to the lower drain; and
a second microphone integrated with a rear portion of the substantially cubic camera housing.
64. The camera of claim 63, further comprising:
a plurality of microphone ports comprising openings between the recess and an
exterior facing portion of the front face.
65. The camera of claim 63, further comprising:
a water submersion sensor to sense when the camera is submerged in water.
66. The camera of claim 63, further comprising:
a microphone selection controller to select between the first microphone and the
second microphone based on characteristics of a first audio signal captured by the first microphone and a second audio signal captured by the second microphone.
67. The camera of claim 66, wherein the microphone selection controller comprises:
a processor; and
a non-transitory computer-readable storage medium storing instructions executable by the processor for selecting between the first microphone and the second microphone, the instructions when executed causing the processor to perform steps including:
receiving the first audio signal from the first microphone representing ambient audio captured by the first microphone during a time interval;
receiving the second audio signal from the second microphone representing ambient audio captured by the second microphone during the time interval;
determining a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold,
outputting the first audio signal for the time interval;
responsive to the correlation metric not exceeding the predefined
threshold, determining a first noise metric for the first audio signal and a second noise metric for the second audio signal;
responsive to a sum of the first noise metric and a bias value being less than the second noise metric, outputting the first audio signal for the time interval; and
responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, outputting the second audio signal for the time interval.
68. The camera of claim 67, wherein determining the correlation metric comprises:
correlating the first audio signal and the second audio signal over a predefined spectral range of approximately 600 Hz to approximately 1200 Hz.
69. The camera of claim 67, wherein determining the first noise metric and the second noise metric comprises determining the first and second noise metrics over a predefined spectral range of approximately 20 Hz to approximately 16 kHz.
70. The camera of claim 67, further comprising:
setting the predefined threshold to a first predefined value responsive to the
correlation metric exceeding the predefined threshold in a prior time interval; and
setting the predefined threshold to a second predefined value responsive to the
correlation metric not exceeding the predefined threshold in the prior time interval, wherein the first predefined value is higher than the second predefined value.
71. The camera of claim 67, wherein determining the first noise metric and the second noise metric comprises:
setting the first noise metric to a first value based on a root-mean-square level of the first audio signal over a predefined time period; and
setting the second noise metric to a second value based on a root-mean-square level of the second audio signal over the predefined time period.
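A hedged sketch of such a noise metric is shown below: a root-mean-square level restricted to roughly the 20 Hz to 16 kHz range mentioned in claim 69. The filter order and the sample rate are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def rms_noise_metric(sig, fs=48000, band=(20.0, 16000.0)):
    """Root-mean-square level of a signal over an approximate 20 Hz - 16 kHz band."""
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    filtered = sosfilt(sos, sig)
    return float(np.sqrt(np.mean(np.square(filtered))))
```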
72. The camera of claim 67, further comprising:
dynamically setting the bias value for each time interval based on whether the
correlation metric is above an upper correlation threshold, below a lower correlation threshold, or in between the lower and upper correlation thresholds.
73. The camera of claim 72, wherein dynamically setting the bias value comprises:
setting the bias value to a positive predefined value responsive to the correlation
metric being below the lower correlation threshold;
setting the bias value to a negative predefined value responsive to the correlation metric being above the upper correlation threshold; and
setting the bias value as a linear function of the correlation metric responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold.
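A minimal sketch of this piecewise bias rule follows; all numeric values are placeholders, not taken from the application.

```python
def dynamic_bias(corr, lower=0.3, upper=0.7, pos_bias=6.0, neg_bias=-6.0):
    """Bias as a piecewise function of the correlation metric.

    Constant and positive below the lower threshold, constant and negative
    above the upper threshold, and linearly interpolated in between.
    """
    if corr < lower:
        return pos_bias
    if corr > upper:
        return neg_bias
    t = (corr - lower) / (upper - lower)     # 0.0 at lower, 1.0 at upper
    return pos_bias + t * (neg_bias - pos_bias)
```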
74. The camera of claim 72, wherein dynamically setting the bias value comprises:
setting the bias value to the difference of a positive predefined value and a hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the first microphone being selected for a prior time interval;
setting the bias value to a difference of a negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the first microphone being selected for a prior time interval;
setting the bias value as a difference between a linear function of the correlation metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the first microphone being selected in the prior time interval;
setting the bias value to a sum of a positive predefined value and the hysteresis bias responsive to the correlation metric being below the lower correlation threshold and the second microphone being selected for the prior time interval;
setting the bias value to a sum of the negative predefined value and the hysteresis bias responsive to the correlation metric being above the upper correlation threshold and the second microphone being selected for the prior time interval;
setting the bias value as a sum of a linear function of the correlation metric and the hysteresis bias responsive to the correlation metric being in between the lower correlation threshold and the upper correlation threshold and the second microphone being selected in the prior time interval.
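Building on the previous sketch, the hysteresis variant recited above could be expressed as follows; again the numeric values are placeholders.

```python
def dynamic_bias_with_hysteresis(corr, first_selected_last, hysteresis=1.5,
                                 lower=0.3, upper=0.7,
                                 pos_bias=6.0, neg_bias=-6.0):
    """Hysteresis-adjusted bias: the prior microphone choice is made slightly 'sticky'."""
    # Base piecewise bias (same rule as in the sketch after claim 73).
    if corr < lower:
        base = pos_bias
    elif corr > upper:
        base = neg_bias
    else:
        t = (corr - lower) / (upper - lower)
        base = pos_bias + t * (neg_bias - pos_bias)
    # The bias is added to the first microphone's noise metric before the
    # comparison, so subtracting the hysteresis term favors keeping the first
    # microphone, and adding it favors keeping the second.
    return base - hysteresis if first_selected_last else base + hysteresis
```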
75. The camera of claim 66, wherein the microphone selection controller comprises:
a processor; and
a non-transitory computer-readable storage medium storing instructions executable by the processor for selecting between the first microphone and the second microphone, the instructions when executed causing the processor to perform steps including:
receiving the first audio signal from the first microphone representing ambient audio captured by the first microphone during a time interval;
receiving the second audio signal from the second microphone representing ambient audio captured by the second microphone during the time interval;
determining a correlation metric between the first audio signal and the second audio signal representing a similarity between the first audio signal and the second audio signal;
responsive to the correlation metric exceeding a predefined threshold,
outputting the first audio signal for the time interval;
responsive to the correlation metric not exceeding the predefined threshold, determining if the first and second microphones are submerged in liquid;
responsive to determining that the first and second microphones are not
submerged, determining whether the first microphone is wet;
responsive to determining that the first microphone is wet, outputting the second microphone signal for the time interval;
responsive to determining that the first microphone is not wet or that the first and second microphones are submerged, determining a first noise metric for the first audio signal and a second noise metric for the second audio signal;
responsive to a sum of the first noise metric and a bias value being less than the second noise metric, outputting the first audio signal for the time interval; and
responsive to the sum of the first noise metric and the bias value being greater than the second noise metric, outputting the second audio signal for the time interval.
76. The camera of claim 75, wherein determining if the wet microphone condition is met comprises:
determining a first average signal level of the first audio signal and a second average signal level of the second audio signal;
responsive to a ratio of the first average signal level to the second average signal level exceeding a second threshold or detecting a wind condition, determining that the wet microphone condition is not detected;
responsive to the ratio of the first average signal level to the second average signal level not exceeding the second threshold and not detecting the wind condition, determining that the wet microphone condition is detected.
77. An audio capture system, comprising:
a substantially cubic housing comprising a bottom face, left face, right face, back face, top face, and front face;
a first microphone integrated with the front face of the audio capture system and
positioned within a recess on an interior facing portion of the front face;
a lower drain below the first microphone comprising an opening in the substantially cubic housing near the front face, the lower drain to allow water that collects in the recess housing the first microphone to drain;
an upper drain above the first microphone comprising an opening in the substantially cubic housing near the front face, the upper drain to allow air to enter the recess as the water drains;
a channel through the interior facing portion of the front face that couples the recess to the lower drain; and
a second microphone integrated with a rear portion of the substantially cubic housing.
78. The audio capture system of claim 77, further comprising:
a plurality of microphone ports comprising openings between the recess and an
exterior facing portion of the front face.
79. The audio capture system of claim 77, further comprising:
a water submersion sensor to sense when the audio capture system is submerged in water.
80. The audio capture system of claim 77, further comprising:
a microphone selection controller to select between the first microphone and the
second microphone based on characteristics of a first audio signal captured by the first microphone and a second audio signal captured by the second microphone.
PCT/US2016/039679 2015-07-02 2016-06-27 Automatic microphone selection in a sports camera WO2017003958A1 (en)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US201562188450P 2015-07-02 2015-07-02
US62/188,450 2015-07-02
US15/083,267 2016-03-28
US15/083,264 2016-03-28
US15/083,266 2016-03-28
US15/083,266 US9769364B2 (en) 2015-07-02 2016-03-28 Automatically determining a wet microphone condition in a sports camera
US15/083,262 2016-03-28
US15/083,267 US9787884B2 (en) 2015-07-02 2016-03-28 Drainage channel for sports camera
US15/083,264 US9661195B2 (en) 2015-07-02 2016-03-28 Automatic microphone selection in a sports camera based on wet microphone determination
US15/083,262 US9706088B2 (en) 2015-07-02 2016-03-28 Automatic microphone selection in a sports camera

Publications (1)

Publication Number Publication Date
WO2017003958A1 true WO2017003958A1 (en) 2017-01-05

Family

ID=56411910

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/039679 WO2017003958A1 (en) 2015-07-02 2016-06-27 Automatic microphone selection in a sports camera

Country Status (1)

Country Link
WO (1) WO2017003958A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6292213B1 (en) * 1997-03-30 2001-09-18 Michael J. Jones Micro video camera usage and usage monitoring
US20130282369A1 (en) * 2012-04-23 2013-10-24 Qualcomm Incorporated Systems and methods for audio signal processing
US20140185853A1 (en) * 2012-12-27 2014-07-03 Panasonic Corporation Waterproof microphone device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021257620A1 (en) * 2020-06-15 2021-12-23 Axon Enterprise, Inc. Adaptive directional audio for wearable audio devices

Similar Documents

Publication Publication Date Title
US10771660B2 (en) Automatically determining a wet microphone condition in a camera
US11589178B2 (en) Generating an audio signal from multiple microphones based on uncorrelated noise detection
US20220247894A1 (en) Drainage channels for use in a camera
KR102313894B1 (en) Method and apparatus for wind noise detection
JP4934968B2 (en) Camera device, camera control program, and recorded voice control method
KR20110038313A (en) Image photographing apparatus and control method thereof
JP2005110127A (en) Wind noise detecting device and video camera with wind noise detecting device
US20160286116A1 (en) Imaging apparatus
WO2017003958A1 (en) Automatic microphone selection in a sports camera
US9872006B2 (en) Audio signal level estimation in cameras
JPH05119794A (en) Sound collection device
Terano et al. Sound capture from rolling-shuttered visual camera based on edge detection
JP2010081395A (en) Electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16738940

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16738940

Country of ref document: EP

Kind code of ref document: A1