WO2005101346A1 - Recording/analyzing system for accidental event - Google Patents

Recording/analyzing system for accidental event

Info

Publication number
WO2005101346A1
WO2005101346A1
Authority
WO
WIPO (PCT)
Prior art keywords
accident
data
recording
sound
incident
Prior art date
Application number
PCT/JP2004/004739
Other languages
French (fr)
Japanese (ja)
Inventor
Tatsuya Katada
Takaharu Kitamura
Original Assignee
Hitachi Zosen Corporation
Priority date
Filing date
Publication date
Application filed by Hitachi Zosen Corporation
Priority to JP2006512179A (JP4242422B2)
Priority to PCT/JP2004/004739
Publication of WO2005101346A1


Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems

Definitions

  • The present invention relates to an incident recording/analysis system, installed at an intersection for example, that records incidents such as traffic accidents and analyzes their content.
  • This type of automatic traffic-accident recording device captures the intersection with a camera device and, when it detects a collision sound, sudden braking sound, or the like caused by a traffic accident, automatically saves the video before and after the detection.
  • To detect accident sounds more accurately, one known approach identifies the type of sound source using a neural network method (see, for example, Japanese Patent Application Laid-Open No. 2003-202260).
  • The object of the present invention is to provide an incident recording/analysis system that eliminates the need to collect the recorded data of incidents occurring, for example, at each intersection, and that makes the incidents easy to analyze.
  • The incident recording/analysis system of the present invention comprises:
  • an incident recording means having a sound source judgment means that receives acoustic data from a sound source and judges, based on judgment data, whether an incident has occurred, and a data recording means that temporarily records the acoustic data judged to relate to an incident together with video data from an incident photographing means;
  • an incident classification means that receives the acoustic data and video data recorded by the incident recording means and classifies the incident; and
  • an incident analysis means that analyzes the content of the incident based on the classification data produced by the incident classification means.
  • The incident recording means is placed at the location where incidents occur, while the incident classification means and incident analysis means are installed at a management center,
  • and the acoustic data and video data recorded by the incident recording means are sent to the management center via a communication line.
  • In another incident recording/analysis system of the present invention, when the incident is a traffic accident,
  • the classification data comprise at least the type of vehicle involved in the accident and the accident sound.
  • In another incident recording/analysis system, when the incident is a traffic accident,
  • the incident analysis means analyzes, using the video data and acoustic data, at least the positional relationship of the vehicles involved in the accident and whether a collision occurred.
  • In a further incident recording/analysis system, the incident is a traffic accident,
  • a microphone is used as the sound detection means of the sound source judgment means, and microphones are installed on both sides of the center of the intersection.
  • According to the above system, the video data and acoustic data of an incident are judged on-site by the sound source judgment means, and are recorded in the data recording means when the event is judged to be an incident.
  • The data recorded in the data recording means are then transmitted to the management center via a communication line, where the incident is classified by the incident classification means and its content is analyzed by the incident analysis means.
  • FIG. 1 is a block diagram showing the schematic configuration of a recording/analysis system for a traffic accident, which is an incident, according to an embodiment of the present invention.
  • Figure 2 is a plan view showing the layout of the accident recording device at the intersection in the system.
  • Figure 3 is a block diagram showing the schematic configuration of the accident recording device in the system.
  • Fig. 4 is a graph showing detection signals from the sound source judgment device in the accident recording device.
  • FIG. 5 is a graph showing a spectrum calculation result of an acoustic signal related to the first classification work in the sound source identification unit in the sound source determination device.
  • FIG. 6 is a graph showing a spectrum distribution as a result of the second classification work in the sound source identification unit in the sound source determination device.
  • Fig. 7 is a graph showing the spectrum distribution resulting from the third classification work in the sound source identification unit of the sound source determination device,
  • Fig. 8 is a graph showing the spectrum distribution resulting from the fourth classification work in the sound source identification unit of the sound source determination device,
  • Figure 9 is a conceptual diagram of the classification work by the neural network in the sound source identification unit in the sound source determination device.
  • Fig. 10 is a block diagram showing the schematic configuration of the accident content classification device in the system.
  • Fig. 11 is a block diagram showing the schematic configuration of the accident situation analysis device in this system.
  • Fig. 12 shows the accident situation data obtained by the accident situation analysis device, in which
  • (a) is a vehicle trace diagram,
  • (b) is a graph showing the acoustic analysis results, and
  • (c) is a graph showing the change in vehicle speed.
  • Figure 13 shows the contents of the analysis report obtained by the accident situation analyzer.
  • Fig. 14 is a flowchart showing the evaluation method of the sound source judgment evaluation device in the system.
  • FIG. 15 is a plan view showing an arrangement state at an intersection according to a modification of the accident recording device in the system.
  • In the following, the traffic accident recording/analysis system is described. The clues used to record and analyze traffic accidents are sound sources and video.
  • Besides collision sounds, the sound sources to be identified include brakes, horns, sirens, runaway noises, and the like, which are important clues for detecting traffic accidents; in the following description these are collectively called accident sounds.
  • The term "accidents, etc." is used because events other than traffic accidents are included.
  • This traffic accident recording/analysis system is installed at a location where traffic accidents are to be monitored, for example at an intersection K where traffic accidents occur frequently (a location where incidents occur). It comprises:
  • a camera device 1 (an example of photographing means; for example, a CCD camera) for photographing traffic accidents (accidents, etc.);
  • a microphone 2 (an example of sound detection means) for detecting sound sources (hereinafter also called sounds) generated near the intersection K;
  • a sound source determination device 3 (an example of sound source judgment means) that judges, using judgment data, whether a traffic accident has occurred based on the acoustic signal (hereinafter, acoustic data) detected by the microphone 2; and
  • a data recording device 4 (an example of data recording means; for example, a hard disk drive) that, when the sound source determination device 3 judges that a traffic accident has occurred, records the video signal (hereinafter, video data) captured by the camera device 1 together with the acoustic data from the microphone 2. These components constitute an accident recording device 5 (an example of incident recording means).
  • The system further comprises: a data storage unit 6 (a hard disk or the like; it may also be called a data recording unit) that receives and temporarily stores the acoustic data and video data recorded by the accident recording device 5;
  • an accident content classification device 7 (an example of incident classification means) that reads the data stored in the data storage unit 6 and classifies the data according to the content of the accident;
  • an accident situation analysis device 8 (an example of incident analysis means) that analyzes the accident situation based on the classification data produced by the accident content classification device 7;
  • a sound source judgment evaluation device 9 (an example of sound source judgment evaluation means) that compares the result analyzed by the accident situation analysis device 8 with the result of visually checking the video and, when the event is not a traffic accident (a predetermined incident), updates the judgment data in the sound source determination device 3 of the accident recording device 5;
  • a database 10 (a hard disk or the like) serving as a data storage unit that accumulates the classification data produced by the accident content classification device 7; and a data browsing device 11 that can search the data accumulated in the database 10 and view the search results.
  • The data browsing device 11 of course also has the function of simply browsing the data stored in the database 10.
  • The accident recording device 5 is installed at the intersection K, while the other devices 6 to 11 are installed at a management center 12. The accident recording device 5 and the management center 12 are connected via a communication line 13 (the Internet or an intranet over a public line, broadband line, or the like).
  • On the management center 12 side, a server 14 (a computer device) for managing and operating the data storage unit 6 and the database 10 is provided. The server 14 and the devices 6 to 11 can exchange data with one another via a LAN 15 or the like, and the server 14 is connected to a traffic control center 16 via a communication line 17.
  • The accident recording device 5 installed at each intersection and the server 14 at the management center 12 are provided with communication devices 5a and 14a, respectively.
  • As shown in Fig. 3, the sound source determination device 3 comprises a signal extraction unit 21 that receives the acoustic signal (acoustic data) picked up by the microphone 2 and extracts the signal in a predetermined frequency range;
  • a level detection unit 22 that receives the extracted acoustic signal from the signal extraction unit 21, integrates it over a predetermined first integration time to obtain the acoustic energy (the integral value; the same applies below), judges whether this acoustic energy exceeds a predetermined first set level value, and outputs a predetermined detection signal when it does;
  • a peak detection unit 23 that receives the extracted acoustic signal, integrates it over a predetermined second integration time shorter than the first integration time to obtain the acoustic energy, judges whether this acoustic energy exceeds a predetermined set peak value, and outputs a predetermined detection signal when it does;
  • a level continuation detection unit 24 that receives the extracted acoustic signal, integrates it over a predetermined third integration time to obtain the acoustic energy and, when this acoustic energy exceeds a predetermined set level value, judges again after a predetermined time whether the set level value is still exceeded, outputting a predetermined detection signal when it is;
  • a spectrum calculation unit 25 that, when a detection signal is received from at least one of the level detection unit 22 and the peak detection unit 23, divides the predetermined frequency region into a predetermined number of parts and calculates the frequency spectrum (hereinafter simply "spectrum") of the acoustic signal in each divided region;
  • a sound source identification unit 26 that receives the spectra of the divided frequency regions obtained by the spectrum calculation unit 25 and identifies the type of sound source using a neural network method (hereinafter simply "neural network"); and
  • an accident determination unit 27 that receives the sound source identification signal from the sound source identification unit 26 together with the detection signals from the peak detection unit 23 and the level continuation detection unit 24 and judges whether an accident or the like has occurred.
  • The accident recording device 5 is also provided with a data storage instruction unit 28 that, when the accident determination unit 27 judges that an accident or the like has occurred,
  • outputs a save instruction that causes the video being captured by the camera device 1 to be recorded on the data recording device 4.
  • The signal extraction unit 21 takes out the signal with frequencies of, for example, 0 to 2.5 kHz and then removes the 0 to 500 Hz portion. This narrows the range down to the accident sounds generated by traffic accidents and vehicle travel, that is, by accidents and the like, and removes unneeded engine sound (0 to 500 Hz).
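As a rough illustration of this band-limiting step, the following Python sketch applies a Butterworth band-pass filter that keeps roughly the 500 Hz to 2.5 kHz band. The sample rate, filter order, and exact cutoffs are assumptions for illustration; the patent only specifies which band is kept.

```python
import numpy as np
from scipy.signal import butter, lfilter

def extract_band(signal, fs=8000, low=500.0, high=2500.0, order=4):
    """Keep only the band in which accident sounds are expected,
    removing low-frequency engine noise (below ~500 Hz)."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return lfilter(b, a, signal)
```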
  • The level detection unit 22 comprises a first integrator 31 that receives the extracted acoustic signal from the signal extraction unit 21 and integrates it over the predetermined first integration time (for example, about 500 msec) to obtain the acoustic energy, and
  • a first comparator 32 that compares the acoustic energy obtained by the first integrator 31 with the predetermined first set level value and outputs,
  • as a detection signal (a trigger signal), for example
  • a "1" when the acoustic energy exceeds the first set level value (and "0" when it is at or below the set level value). That is, by integrating the acoustic signal over a certain time interval, the level detection unit 22 judges whether the magnitude of the acoustic signal exceeds a predetermined level.
  • The peak detection unit 23 comprises a second integrator 33 that receives the extracted acoustic signal from the signal extraction unit 21 and integrates it over the second integration time shorter than the first integration time (for example, about 100 msec) to obtain the acoustic energy, and
  • a second comparator 34 that compares the acoustic energy obtained by the second integrator 33 with the predetermined second set level value and outputs, as a detection signal (a trigger signal), for example a "1" when the peak value of the acoustic energy exceeds the second set level value (which is also the set peak value) (and "0" when it is at or below the set level value). That is, by integrating the acoustic signal over a short time, the peak detection unit 23 judges whether the peak value of the acoustic signal exceeds a predetermined level (peak value).
  • The level continuation detection unit 24 comprises a third integrator 35 that receives the extracted acoustic signal from the signal extraction unit 21 and integrates it over the predetermined third integration time (for example, the same as the first integration time in the level detection means) to obtain the acoustic energy, and a third comparator 36 that compares the acoustic energy obtained by the third integrator 35 with a predetermined third set level value (for example, the same as the set level value in the level detection means) and, when the acoustic energy exceeds the third set level value, compares it again after a predetermined time (for example, 300 msec) to see whether the set level value is still exceeded.
  • If it is still exceeded, the set level value is judged to have continued (been maintained), and
  • a "1" is output as the detection signal (a trigger signal)
  • ("0" is output when the set level value has not continued).
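The three detectors can be pictured as in the sketch below: each integrates the squared signal (the "acoustic energy") over a window and compares it with a threshold. Treating the integral as a sum of squared samples, and the particular threshold values, are assumptions; the 500 msec, 100 msec, and 300 msec figures follow the examples in the text.

```python
import numpy as np

def energy(x, fs, t_start, duration):
    """Acoustic energy: integral of the squared signal over a window."""
    i0 = int(t_start * fs)
    i1 = i0 + int(duration * fs)
    return np.sum(x[i0:i1] ** 2) / fs

def level_detect(x, fs, t, level1):          # first integrator 31 + comparator 32
    return 1 if energy(x, fs, t, 0.5) > level1 else 0       # ~500 ms window

def peak_detect(x, fs, t, peak_level):       # second integrator 33 + comparator 34
    return 1 if energy(x, fs, t, 0.1) > peak_level else 0   # ~100 ms window

def level_continuation_detect(x, fs, t, level3):  # third integrator 35 + comparator 36
    # Fire only if the level is exceeded now AND again ~300 ms later.
    first = energy(x, fs, t, 0.5) > level3
    again = energy(x, fs, t + 0.3, 0.5) > level3
    return 1 if (first and again) else 0
```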
  • Fig. 4 shows the output states of the comparators in the detection units 22 to 24:
  • (a) shows the output of the first comparator 32 in the level detection unit 22,
  • (b) shows the output of the second comparator 34 in the peak detection unit 23,
  • (c) shows the output of the third comparator 36 in the level continuation detection unit 24, and (d) shows the reset signal.
  • When the detection signal ("1") from the level detection unit 22 or the detection signal ("1") from the peak detection unit 23 is input, the spectrum calculation unit 25 operates as follows:
  • the extracted acoustic signal is first converted by an A/D converter,
  • the predetermined frequency region (450 to 2,500 Hz) is divided into a predetermined number of parts, for example 105 (each divided region being called a bank), and
  • the frequency spectrum of the acoustic signal in each bank
  • is calculated by FFT (fast Fourier transform).
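A minimal sketch of this bank-wise spectrum computation might look as follows; summing FFT magnitudes per bank and the exact 450 to 2,500 Hz band edges are assumptions made for illustration.

```python
import numpy as np

def bank_spectrum(x, fs=8000, f_lo=450.0, f_hi=2500.0, n_banks=105):
    """Divide the analysis band into n_banks regions and return one
    spectrum value per bank (here: summed FFT magnitude per region)."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    edges = np.linspace(f_lo, f_hi, n_banks + 1)
    return np.array([
        spec[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
```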
  • In the sound source identification unit 26, the type of the sound source is identified using the neural network.
  • Classification is performed in four stages (the first to fourth classifications); recognition and classification at each stage use the neural network, and the classification numbers obtained at these stages are compared with a classification table (the judgment data) determined in advance by experiments and the like.
  • The detected sound is thereby identified as one of many sound types, including collision noise, tire/road friction noise, crushing noise, runaway noise, and sirens.
  • In the classification table, a five-digit number is assigned to each type of sound (two digits for the first classification and one digit for each of the second to fourth classifications), and the table also has a record-flag column indicating whether data of that sound type should be recorded.
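A toy rendering of such a classification table is sketched below; the five-digit keys, sound types, and flag values shown are invented placeholders, not values from the patent.

```python
# Hypothetical classification-table entries: the 5-digit key is
# "<first(2 digits)><second><third><fourth>"; values are (sound type, record flag).
CLASSIFICATION_TABLE = {
    "39012": ("collision noise", 1),           # record
    "39140": ("tire/road friction noise", 1),  # record
    "05233": ("siren", 0),                     # do not record
}

def make_key(c1, c2, c3, c4):
    return f"{c1:02d}{c2}{c3}{c4}"

def should_record(c1, c2, c3, c4):
    sound, flag = CLASSIFICATION_TABLE.get(make_key(c1, c2, c3, c4), ("unknown", 0))
    return sound, flag == 1
```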
  • The first classification (the first stage) is performed as follows.
  • In the first classification, a classification number is obtained using two division patterns with different classification criteria.
  • First, based on the total area of the spectrum, the divided frequency regions of the 105 banks are divided into, for example, five classes according to a predetermined division pattern, and classification numbers #0 to #4 are assigned.
  • The division used here is based on sample data of actual traffic sounds (500 samples): the total area values of the 500 samples are arranged in a frequency distribution (the horizontal axis being the total area value and the vertical axis the number of cases), and the class boundaries are chosen so that the classes are equally populated.
  • Where the frequency distribution of total area values is dense, the division width is narrow;
  • where it is sparse, the division width is wide (the divisions need not be exactly equal, and the number of divisions need not be five).
  • In the example, the total area value of the spectrum is 2401, and the classification number of the class to which this value belongs is assigned, for example #3.
  • Second, the 105 banks are divided into, for example, ten classes, and classification numbers #0 to #9 are assigned.
  • Each frequency spectrum of the acoustic signal in the 105 divided frequency regions (banks) is normalized by its maximum value, and the bank having the largest peak in the normalized spectrum over the 105 banks is found.
  • The divided frequency region of 105 banks is divided into, for example, ten parts according to a division pattern based on the bank position where the maximum peak lies, and classification numbers #0 to #9 are assigned;
  • that is, it is determined to which of the classes #0 to #9 the highest-level spectrum in the spectrum series of the extracted acoustic signal (shown by the bar graph in Fig. 5) belongs.
  • The way the 105 banks are divided is predetermined according to the bank position of the maximum level:
  • based on the sample data, the maximum-level bank positions of the 500 cases are arranged in a frequency distribution (the horizontal axis being the bank position and the vertical axis the number of cases), and the class boundaries are chosen so that the classes are equally populated.
  • Where the frequency distribution of maximum-level bank positions is dense, the division width is narrow; where it is sparse, the division width is wide (again, the divisions need not be exactly equal, and the number of divisions need not be ten).
  • Classification numbers #0 to #9 are assigned from the smallest bank number to the largest. In Fig. 5, the level is highest near bank 88, and the class to which bank 88 belongs is assigned, for example, #8.
  • The classification number of the first classification is determined by combining these two numbers. For example, if the sound is assigned #3 in the five-way division by total area value and #9 in the ten-way division by maximum-level bank position, the classification number of the first classification is #39.
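Putting the two division patterns together, the first classification could be sketched as follows; the bin edges are invented placeholders standing in for the boundaries the patent derives from the frequency distributions of the 500 traffic-sound samples.

```python
import numpy as np

# Placeholder bin edges; the patent derives these from the frequency
# distributions of ~500 real traffic-sound samples so each class is
# roughly equally populated.
AREA_EDGES = [100.0, 500.0, 1500.0, 3000.0]            # -> classes #0..#4
PEAK_BANK_EDGES = [8, 18, 30, 43, 55, 68, 80, 90, 99]  # -> classes #0..#9

def first_classification(banks):
    total_area = float(np.sum(banks))  # e.g. 2401 in the Fig. 5 example
    area_class = int(np.searchsorted(AREA_EDGES, total_area))                  # 0..4
    peak_class = int(np.searchsorted(PEAK_BANK_EDGES, int(np.argmax(banks))))  # 0..9
    # e.g. area class #3 and peak class #9 combine into classification number #39
    return area_class * 10 + peak_class
```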
  • In the second and subsequent stages, the characteristic part of the acoustic signal is extracted based on the spectrum, and pattern matching (pattern recognition) between the extracted spectrum series and spectrum series obtained in advance by experiments for identifying sound sources is performed using the neural network.
  • Based on the classification numbers obtained in each stage, the detected sound is identified, as described above, as one of many types including collision noise, tire/road friction noise, crushing noise, runaway noise, and sirens.
  • For the pattern recognition, a group of a predetermined number of signal identification patterns prepared in the database, for example five, is selected and used.
  • Next, the second to fourth classification stages will be described.
  • In the second classification, five patterns are first extracted from the database based on the classification number obtained in the first classification (for example, #39), and a classification number is assigned by pattern matching against them.
  • In the third classification, five patterns are extracted from the database based on the classification number obtained in the second classification. At the same time, in the 105-bank spectrum series of the acoustic signal concerned,
  • the maximum spectrum and the two banks on each side of it (five banks in total) are set to zero (zero reset), creating a new 105-bank spectrum series.
  • In this series, values below a certain threshold, namely less than 25% of the maximum spectrum, are also set to zero, and the series is normalized.
  • For the normalized spectrum series (shown in Fig. 7), the neural network performs pattern matching against a total of seven patterns: the five extracted patterns, a pattern representing anything other than these five, and a pattern for series below the threshold (thus even sub-threshold series are treated as one pattern), and a classification number is assigned. In other words, in this stage, classification is performed on the remaining spectrum series from which the strongest spectral part has been removed.
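The preprocessing for the third classification might be sketched as follows; whether the 25% threshold is taken against the reduced maximum, as done here, is an assumption.

```python
import numpy as np

def third_classification_input(banks):
    """Remove the strongest spectral part and renormalize, preparing the
    105-bank series for the third pattern-matching stage."""
    s = banks.astype(float).copy()
    m = int(np.argmax(s))
    s[max(0, m - 2): m + 3] = 0.0        # zero max bank and 2 banks on each side
    peak = s.max()
    if peak > 0:
        s[s < 0.25 * peak] = 0.0         # drop parts below 25% of the maximum
        s /= peak                        # normalize the remaining series
    return s
```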
  • In the fourth classification, classification is performed in the same way: based on the classification number obtained in the third classification, five patterns to be used for matching are extracted from the database.
  • The spectrum series is again normalized (shown in Fig. 8), and
  • a classification number is assigned by the neural network performing pattern matching against the five extracted patterns plus a pattern representing anything other than those five and a pattern for series below the threshold. In this stage, classification is performed on the spectrum series from which the spectral part with the second-highest strength has been removed.
  • Fig. 9 shows a conceptual diagram of the above classification work using the neural network.
  • The accident determination unit 27 receives the sound source identification signal (denoted NT) from the sound source identification unit 26, the detection signal (denoted PD) from the level continuation detection unit 24, and the detection signal (denoted PT) from the peak detection unit 23, and judges whether an accident or the like has occurred by performing the logical operation ((NT and PD) or PT).
  • The identification signal (NT) is "1" if the sound is judged to have been caused by an accident or the like,
  • the detection signal (PD) is "1" if the sound is continuous, and
  • the detection signal (PT) is "1" if the peak value is at or above a predetermined intensity.
  • The logical product (and) in the above expression reflects that the sound of an accident or the like is not instantaneous but is considered to continue for a short period, hence NT is combined with the continuation signal (PD). On the other hand, if a sound is caused by an accident or the like, its peak value is considered to have considerable strength, so when the peak value exceeds the set level value (a value determined, of course, by experiments), the event can also be judged to be due to an accident or the like; for this reason, the logical sum of the product (NT and PD) and the detection signal (PT) is computed.
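The final judgment is then a one-line combination of the three signals:

```python
def is_accident(nt, pd, pt):
    """Accident judgment: (NT and PD) or PT.
    NT: sound source identified as accident-related,
    PD: level continued for the set duration,
    PT: peak value exceeded the set level."""
    return (nt and pd) or pt
```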
  • When the accident determination unit 27 judges that an accident has occurred, an instruction to that effect is output to the data storage instruction unit 28, and the video before and after the event is recorded and stored in the data recording device 4 together with the sound.
  • At this time, the accident content identified by the accident determination unit 27 (for example, a coded
  • sound source type) is recorded together with the video as an index of the video data.
  • As the accident content, collision noise, collision noise + tire/road friction noise, collision noise + crushing, tire/road friction noise, crushing, runaway noise, siren, and other sounds are identified.
  • Each of the above units, integrators, comparators, and the like is constituted by an electric signal circuit,
  • while the sound source identification unit 26, which performs computation by the neural network, is provided with, for example, a CPU as an arithmetic processing unit.
  • The operation is as follows. The acoustic signal detected by the microphone 2 is band-limited by the signal extraction unit 21, and the extracted acoustic signal is input to the level detection unit 22, the peak detection unit 23, and the level continuation detection unit 24, where a preliminary judgment is made as to whether an accident may have occurred.
  • When a detection signal is output from at least one of the level detection unit 22 and the peak detection unit 23, the spectrum calculation unit 25 calculates the spectrum.
  • The spectrum series obtained by this calculation is input to the sound source identification unit 26.
  • There, the sound source is identified by the classification method described above using the neural network, and if the identified sound is one likely to relate to an accident or the like (collision sound, collision sound + tire/road friction sound, collision sound + crushing, tire/road friction sound, crushing, runaway noise, siren, etc.), a detection signal (NT) indicating an accident or the like is output.
  • The detection signal (NT) indicating an accident or the like, the detection signal (PD) indicating continuation from the level continuation detection unit 24, and the peak detection signal (PT) from the peak detection unit 23
  • are input to the accident determination unit 27, where the logical operation is performed to judge whether the sound was caused by an accident or the like.
  • When the accident determination unit 27 judges that an accident or the like has occurred, an instruction signal to that effect is output to the data storage instruction unit 28, and the video captured before and after the sound occurred
  • is recorded and stored in the data recording device 4.
  • At this time, the code data of the sound source type identified by the sound source identification unit 26 is recorded as an index to the video data, which facilitates later retrieval of the video data.
  • The time for one identification in the sound source identification unit 26 is, for example, 3 seconds.
  • Once a detection signal (trigger signal) is obtained in any of the detection units 22 to 24, its output is maintained until 3 seconds have elapsed, after which a reset signal is output.
  • As described above, for the extracted acoustic signal from which the low frequencies that vehicles normally emit (such as engine sound) and the high frequencies that are difficult for humans to hear have been removed, the level detection unit 22 detects whether the level value of the acoustic signal exceeds the set level value, and the peak detection unit 23 detects whether the peak value of the acoustic signal exceeds the set peak value. If at least one set value is exceeded, the frequency spectrum of the acoustic signal is obtained and the type of sound source is identified using the neural network, so the sound source can be identified accurately.
  • In addition, the level continuation detection unit 24 adds to the sound source identified by the neural network a judgment of whether the level continuation time of the acoustic signal exceeds the set duration, so whether an accident or the like has occurred can be determined accurately.
  • As shown in Fig. 10, the accident content classification device 7 comprises an image processing unit 41 that receives video data from the database 10 and performs image processing, for example extracting the contours of objects in the intersection for each image frame (for example, at predetermined time intervals);
  • an object recognition unit 42 that receives the image-processing data obtained by the image processing unit 41, recognizes whether an object (target object) judged to be involved in a traffic accident is a car, a motorcycle, a pedestrian, or the like, and also determines the vehicle type (for a car, for example, a large vehicle, a passenger car, an emergency vehicle, etc.,
  • as well as motorcycles, bicycles, etc.); and a sound source identification unit 43 that receives the acoustic data from the database 10 and identifies the accident sound based on the recognition of the image data.
  • The object recognition unit 42 detects an object moving in the video, for example by the difference method, as a rectangular parallelepiped (cuboid).
  • The judgment of the detected object, that is, the vehicle type, is performed by pattern matching on the cuboid.
  • The evaluation parameters used for this pattern matching are the width, depth, height, volume, and so on of the cuboid. Specifically, looking at the magnitude relationship between width, depth, and height, the relationship is generally depth > height > width for motorcycles, and height > width, depth for pedestrians. For four-wheeled vehicles, width, depth, and height vary widely, but the volume makes it possible to distinguish large vehicles from passenger cars.
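A sketch of this cuboid-based judgment is given below; the volume threshold separating large vehicles from passenger cars is an invented placeholder.

```python
def classify_object(width, depth, height, large_vehicle_volume=20.0):
    """Rough object-type judgment from the detected cuboid (units: metres).
    The volume threshold is a placeholder, not a value from the patent."""
    if height > width and height > depth:
        return "pedestrian"              # height > width, depth
    if depth > height > width:
        return "motorcycle/bicycle"      # depth > height > width
    volume = width * depth * height
    return "large vehicle" if volume > large_vehicle_volume else "passenger car"
```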
  • As shown in Fig. 11, the accident situation analysis device 8 comprises an image processing unit 51 that receives video data from the database 10 and performs image processing, for example extracting the contours of objects in the intersection for each image frame (for example, at predetermined time intervals);
  • a trajectory calculation unit 52 that receives the image-processing data obtained by the image processing unit 51 and
  • obtains the trajectory of an object judged to be involved in a traffic accident; an accident situation detection unit 53 that detects, from the trajectory calculated by the trajectory calculation unit 52, the accident situation such as the object entry position, object speed, accident position, brake estimation, and whether a signal was ignored;
  • and an accident situation output unit 54 that outputs the accident situation obtained by the accident situation detection unit 53 in report format.
  • The specific analysis contents obtained by this accident situation analysis device 8 are the items listed above (object entry position, object speed, accident position, brake estimation, and whether a signal was ignored).
  • Figure 13 shows a specific example of an accident analysis report summarizing these results.
  • The data browsing device 11 is a computer terminal equipped with search and browsing software (hereinafter, browsing software;
  • for example, a web browser is used) that can search and browse the video data and the various data related to operation logs stored in the database 10, and it is of course connected to the server 14 via the LAN 15.
  • Using the browsing software, the video data can be searched or narrowed down by "intersection name, date, recording factor (the type of sound judged to trigger recording; specifically, the sound source identification result), accident type, accident vehicle type", and so on, and a list of search results (e.g., intersection name, date, recording factor, accident type, accident vehicle type, video data file name) can be displayed.
  • The video data can also be played back.
  • The operation log can likewise be searched or narrowed down by "intersection name, date", and so on, and a list of
  • search results (e.g., intersection name, date, log file name) can be displayed.
  • By selecting a log file name in the displayed list and pressing the display button, the operation log can be displayed.
  • The sound source judgment evaluation device 9 is likewise a computer terminal equipped with
  • search and browsing software
  • (for example, a web browser is used) capable of searching and browsing the video data and the various data related to operation logs stored in the database 10, and it is connected to the database 10 via the LAN 15.
  • On this terminal, too, the video data can be played back.
  • When visual inspection of a recorded video shows that the event was not a traffic accident, that is, that the neural network misrecognized it, the device has a re-learning function that makes the neural network learn again.
  • Specifically, for the accident recording device 5 concerned, the classification number used to identify (judge) the acoustic data belonging to the misrecognized video data is determined, and
  • the record flag of the misrecognized classification number in the classification table is changed from "1 (record)" to "0 (zero: do not record)"; the revised classification table is then transmitted to the accident recording device 5 concerned, and its classification table is updated.
  • Fig. 14 is a flowchart explaining the re-learning function.
  • This data transfer is performed by virtually establishing a one-to-one communication path over the communication line 13, for example an Internet connection network (intranet).
  • As the transfer protocol, for example,
  • the FTP protocol of TCP/IP is used. Accordingly, the server 14 of the management center 12 and each accident recording device 5 are each provided with an FTP program.
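A daily upload over FTP could be sketched with the standard Python ftplib as below; the host, credentials, and file list are placeholders, and the patent does not prescribe this particular program structure.

```python
from ftplib import FTP
from pathlib import Path

def send_daily_data(host, user, password, files):
    """Upload the day's video, sound, and operation-log files to the
    management-center server over FTP (placeholder credentials/paths)."""
    ftp = FTP(host)
    ftp.login(user, password)
    for path in map(Path, files):
        with open(path, "rb") as fp:
            ftp.storbinary(f"STOR {path.name}", fp)
    ftp.quit()
```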
  • The video data recorded by the accident recording device 5 and the acoustic data used for sound source identification are sent to the management center 12 at a predetermined time, for example about once a day
  • (for example, when the date changes). Along with these data, the operation log of the accident recording device 5 (for example, power-on time, power-failure time, unit abnormalities,
  • causes of abnormality occurrence/recovery) and the recording factors (the sound source identification result and the spectrum calculation result, or sound pressure data) are also sent.
  • Conversely, the operation setting data for the accident recording device 5 (for example, the number of video channels, video size, image quality, recording time, pre-trigger recording time, video frame interval, intersection name, signal path)
  • are sent to the accident recording device 5 as a file describing these data, as necessary.
  • The classification table corrected by the sound source judgment evaluation device 9 is also sent to the accident recording device 5 side.
  • Furthermore, the necessary traffic accident data (e.g., intersection name, time of occurrence, accident situation) are sent to the traffic control center 16.
  • As described above, at the location where a traffic accident occurs, the sound source determination device 3 judges whether a traffic accident has occurred; when an accident is judged to have occurred, the data are recorded on the data recording device 4, and the data recorded on the data recording device 4 are sent via the communication line 13 to the management center 12, where the traffic accidents are classified by the accident content classification device 7 and their details are analyzed by the accident situation analysis device 8. Compared with the conventional approach of recording traffic accident data in a data recorder installed at the intersection, there is therefore no need to go and collect the recorded data.
  • Moreover, the main data analysis (for example, contact-sound detection, vehicle tracing, etc.) is performed automatically by the accident situation analysis device 8, which makes it easy for the observer to analyze the details of an accident and allows the analysis work to be done quickly.
  • In the embodiment described above, the classification was described as being performed in four stages, the first to fourth classifications.
  • However, only the first to third classifications may be used (in this case,
  • the classification number may be four digits) to identify the sound source.
  • In this case, too, the sound source can be identified accurately, as in the embodiment described above.
  • In the embodiment described above, recording of traffic accidents at an intersection was described, but
  • the present invention may be applied to places other than intersections; for example, it can also be applied to monitoring work carried out at a site and to monitoring convenience stores.

Abstract

A recording/analyzing system in which the details of an accident can be analyzed easily while eliminating the need for collecting the recorded data of a traffic accident that occurred at an intersection. The recording/analyzing system comprises an accident recorder (5) including a sound source judging unit (3) for judging whether inputted audio data is related to a traffic accident or not, and a data recorder (4) for recording the audio data and video data temporarily in the case of a traffic accident; a classifying unit (7) for receiving data recorded in the accident recorder and classifying the traffic accidents; and an accident situation analyzing unit (8) for analyzing the details of accidents based on the data classified by the classifying unit. The accident recorder is disposed at an intersection, whereas the classifying unit and the analyzing unit are installed in a management center (12), and the data recorded in the accident recorder is transmitted to the management center (12) over a communication line (13).

Description

Recording/Analysis System for Incidents

Technical Field
The present invention relates to an incident recording/analysis system, installed at an intersection for example, that records incidents such as traffic accidents and analyzes their content.

Background Art
In recent years, traffic accidents have continued to increase, and reducing the number of accidents has become an urgent task. For this reason, automatic traffic-accident recording devices are installed at intersections and similar locations, and accident analysis (also called accident investigation) is performed by recording video of the situation before and after an accident.

This type of automatic traffic-accident recording device captures the intersection with a camera device and, when it detects a collision sound, sudden braking sound, or the like caused by a traffic accident, automatically saves the video before and after the detection.

At intersections, however, many sounds other than accident sounds occur, and to save video of a traffic accident it must be judged whether a detected sound was caused by a traffic accident. Conventionally, an impact sound from a vehicle collision was judged to have occurred when the time-series sound pressure distribution exceeded a certain threshold, but there was a problem that such sounds were difficult to distinguish from, for example, road works or animal cries.

To solve this problem, an approach is known that identifies the type of sound source using a neural network method in order to detect accident sounds more accurately (see, for example, Japanese Patent Application Laid-Open No. 2003-202260).

However, with the configuration of the conventional automatic incident recording device described above, the devices installed at each intersection had to be collected at regular intervals, which was a very troublesome task.

To solve this problem, the object of the present invention is to provide an incident recording/analysis system that eliminates the need to collect the recorded data of incidents occurring, for example, at each intersection, and that makes the incidents easy to analyze.

Disclosure of the Invention
The incident recording/analysis system of the present invention comprises:

an incident recording means having a sound source judgment means that receives acoustic data from a sound source and judges, based on judgment data, whether an incident has occurred, and a data recording means that temporarily records the acoustic data judged by the sound source judgment means to relate to an incident together with video data from an incident photographing means;

an incident classification means that receives the acoustic data and video data recorded by the incident recording means and classifies the incident; and

an incident analysis means that analyzes the content of the incident based on the classification data classified by the incident classification means.

Further, the incident recording means is placed at the location where incidents occur, the incident classification means and incident analysis means are installed at a management center, and the acoustic data and video data recorded by the incident recording means are sent to the management center via a communication line.

Another incident recording/analysis system of the present invention is the above system further provided with a sound source judgment evaluation means that compares the result analyzed by the incident analysis means with the result of visually checking the video and, when the event is not a predetermined incident, changes the judgment data in the sound source judgment means.

In another incident recording/analysis system of the present invention, in each of the above systems, when the incident is a traffic accident, the classification data comprise at least the type of vehicle involved in the accident and the accident sound.

In another incident recording/analysis system of the present invention, in each of the above systems, when the incident is a traffic accident, the incident analysis means analyzes, using the video data and acoustic data, at least the positional relationship of the vehicles involved and whether a collision occurred.

Furthermore, in another incident recording/analysis system of the present invention, in each of the above systems, the incident is a traffic accident, a microphone is used as the sound detection means of the sound source judgment means, and microphones are installed on both sides of the center of the intersection.

According to the incident recording/analysis system described above, the video data and acoustic data of an incident occurring at a given location are judged on-site by the sound source judgment means as to whether an incident has occurred, and are recorded in the data recording means when an incident is judged to have occurred. The data recorded in the data recording means are then sent via a communication line to the management center, where the incident is classified by the incident classification means and its content is analyzed by the incident analysis means. Compared with the conventional approach of recording incident data in a data recorder installed at the location where incidents occur, there is no need to go and collect the recorded data, and since the main content of the incident is analyzed using the incident analysis means, the work of analyzing the incident becomes easy.

Brief Description of the Drawings
Fig. 1 is a block diagram showing the schematic configuration of a recording/analysis system for a traffic accident, which is an incident, according to an embodiment of the present invention;

Fig. 2 is a plan view showing the arrangement of the accident recording device of the system at an intersection;

Fig. 3 is a block diagram showing the schematic configuration of the accident recording device of the system;

Fig. 4 is a graph showing the detection signals in the sound source determination device of the accident recording device;

Fig. 5 is a graph showing the spectrum calculation result of an acoustic signal in the first classification work in the sound source identification unit of the sound source determination device;

Fig. 6 is a graph showing the spectrum distribution resulting from the second classification work in the sound source identification unit of the sound source determination device;

Fig. 7 is a graph showing the spectrum distribution resulting from the third classification work in the sound source identification unit of the sound source determination device;

Fig. 8 is a graph showing the spectrum distribution resulting from the fourth classification work in the sound source identification unit of the sound source determination device;

Fig. 9 is a conceptual diagram of the classification work by the neural network in the sound source identification unit of the sound source determination device;

Fig. 10 is a block diagram showing the schematic configuration of the accident content classification device of the system;

Fig. 11 is a block diagram showing the schematic configuration of the accident situation analysis device of the system;

Fig. 12 shows the accident situation data obtained by the accident situation analysis device, in which (a) is a vehicle trace diagram, (b) is a graph showing the acoustic analysis results, and (c) is a graph showing the change in vehicle speed;

Fig. 13 shows the contents of an analysis report obtained by the accident situation analysis device;

Fig. 14 is a flowchart showing the evaluation method of the sound source judgment evaluation device of the system; and

Fig. 15 is a plan view showing the arrangement at an intersection of a modification of the accident recording device of the system.

Best Mode for Carrying Out the Invention
Hereinafter, an incident recording/analysis system according to an embodiment of the present invention will be described with reference to the accompanying drawings.

In the present embodiment, the incident is a traffic accident; that is, a traffic accident recording/analysis system is described. The clues used to record and analyze traffic accidents are sound sources and video. Besides collision sounds, the sound sources to be identified include brakes, horns, sirens, runaway noises, and the like, which are important clues for detecting traffic accidents. In the following description these are collectively called accident sounds, and since events other than traffic accidents are included, the term "accidents, etc." is also used.

As shown in Fig. 1 and Fig. 2, this traffic accident recording/analysis system is installed at a location where traffic accidents are to be monitored, for example at an intersection K where traffic accidents occur frequently (a location where incidents occur), and comprises: a camera device 1 (an example of photographing means; for example, a CCD camera) for photographing traffic accidents (accidents, etc.); a microphone 2 (an example of sound detection means) for detecting sound sources (hereinafter also called sounds) generated near the intersection K; a sound source determination device 3 (an example of sound source judgment means) that judges, using judgment data, whether a traffic accident has occurred based on the acoustic signal (hereinafter, acoustic data) detected by the microphone 2; and a data recording device 4 (an example of data recording means; for example, a hard disk drive) that, when the sound source determination device 3 judges that a traffic accident has occurred, records the video signal (hereinafter, video data) captured by the camera device 1 together with the acoustic data from the microphone 2. These constitute an accident recording device 5 (an example of incident recording means).

The system further comprises: a data storage unit 6 (a hard disk or the like; it may also be called a data recording unit) that receives and temporarily stores the acoustic data and video data recorded by the accident recording device 5; an accident content classification device 7 (an example of incident classification means) that reads the data stored in the data storage unit 6 and classifies the data according to the content of the accident; an accident situation analysis device 8 (an example of incident analysis means) that analyzes the accident situation based on the classification data produced by the accident content classification device 7; a sound source judgment evaluation device 9 (an example of sound source judgment evaluation means) that compares the result analyzed by the accident situation analysis device 8 with the result of visually checking the video and, when the event is not a traffic accident (a predetermined incident), updates the judgment data in the sound source determination device 3 of the accident recording device 5; a database 10 (a hard disk or the like) serving as a data storage unit that accumulates the classification data produced by the accident content classification device 7; and a data browsing device 11 that can search the data accumulated in the database 10 and view the search results. The data browsing device 11 of course also has the function of simply browsing the data stored in the database 10.

The accident recording device 5 is installed at the intersection K, while the other devices 6 to 11 are installed at a management center 12. The accident recording device 5 and the management center 12 are connected via a communication line 13 (the Internet or an intranet over a public line, broadband line, or the like).

On the management center 12 side, a server 14 (a computer device) for managing and operating the data storage unit 6 and the database 10 is provided. The server 14 and the devices 6 to 11 can exchange data with one another via a LAN 15 or the like, and the server 14 is connected to a traffic control center 16 via a communication line 17. The accident recording device 5 installed at each intersection and the server 14 at the management center 12 are provided with communication devices 5a and 14a, respectively.
次に、 上記事故記録装置 5における音源判定装置 3を、 図 3〜図 9 に基づき説明する。 .  Next, the sound source determination device 3 in the accident recording device 5 will be described with reference to FIGS. .
As shown in FIG. 3, the sound source determination device 3 comprises: a signal extraction unit 21 that receives the acoustic signal (acoustic data) picked up by the microphone 2 and extracts the signal in a predetermined frequency band; a level detection unit 22 that receives the extracted acoustic signal from the signal extraction unit 21, integrates it over a predetermined first integration time to obtain the acoustic energy (the integral value; the same applies below), judges whether this energy exceeds a predetermined first set level value, and outputs a predetermined detection signal when it does; a peak detection unit 23 that receives the extracted acoustic signal, integrates it over a predetermined second integration time shorter than the first to obtain the acoustic energy, judges whether this energy exceeds a predetermined set peak value, and outputs a predetermined detection signal when it does; a level continuation detection unit 24 that receives the extracted acoustic signal, integrates it over a predetermined third integration time to obtain the acoustic energy and, when this exceeds a predetermined set level value, judges again after a predetermined time whether that value is still exceeded, outputting a predetermined detection signal when it is; a spectrum calculation unit 25 that, when a detection signal arrives from at least one of the level detection unit 22 and the peak detection unit 23, divides the predetermined frequency band into a predetermined number of segments and calculates the frequency spectrum (hereinafter simply "spectrum") of the acoustic signal in each divided frequency band; a sound source identification unit 26 that receives the spectra of the divided frequency bands obtained by the spectrum calculation unit 25 and identifies the sound source using a neural network method (hereinafter simply "neural network"); and an accident determination unit 27 that receives the sound source identification signal from the sound source identification unit 26 together with the detection signals from the peak detection unit 23 and the level continuation detection unit 24 and judges whether an accident or the like has occurred.
The accident recording device 5 is further provided with a data storage instruction unit 28 that, when the accident determination unit 27 judges that an accident or the like has occurred, outputs a storage instruction causing the video being captured by the camera device 1 to be recorded in the data recording device 4.
Next, the configuration and processing of each of these units will be described in detail.
The signal extraction unit 21 first takes out the signal in the 0 to 2.5 kHz range, for example, and then removes the 0 to 500 Hz portion. This narrows the signal down to the range of accident sounds arising from traffic accidents and vehicle traffic, and removes extraneous engine noise (0 to 500 Hz).
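The patent specifies only the pass band, not a filter design, so the extraction step can be read as a simple band-pass filter. A minimal sketch in Python (the Butterworth filter and its order are assumptions, not taken from the source):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def extract_band(signal: np.ndarray, fs: float) -> np.ndarray:
    """Keep roughly 500-2500 Hz, mirroring the extraction unit 21."""
    sos = butter(4, [500.0, 2500.0], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, signal)
```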
The level detection unit 22 comprises a first integrator 31 that receives the extracted acoustic signal from the signal extraction unit 21 and integrates it over the predetermined first integration time (for example, about 500 msec) to obtain the acoustic energy, and a first comparator 32 that compares this energy with the predetermined first set level value and outputs, for example, a "1" as the detection signal (a trigger signal) when the energy exceeds that value ("0" is output when it does not). In other words, the level detection unit 22 integrates the acoustic signal over a certain time interval to judge whether the magnitude of the signal exceeds a predetermined level.
The peak detection unit 23 comprises a second integrator 33 that receives the extracted acoustic signal and integrates it over the second integration time (for example, about 100 msec), shorter than the first, to obtain the acoustic energy, and a second comparator 34 that compares this energy with a predetermined second set level value (which is also the set peak value) and outputs, for example, a "1" as the detection signal (a trigger signal) when the peak of the energy exceeds that value ("0" when it does not). In other words, by integrating over a short time, the peak detection unit 23 judges whether the peak value of the acoustic signal exceeds a predetermined level (peak value).
The level continuation detection unit 24 comprises a third integrator 35 that receives the extracted acoustic signal and integrates it over a predetermined third integration time (for example, the same as the first integration time of the level detection means) to obtain the acoustic energy, and a third comparator 36 that compares this energy with a predetermined third set level value (for example, the set level value of the level detection means). When the energy exceeds this value, the comparison is made again after a predetermined time (for example, 300 msec); if the value is still exceeded, the level is judged to be continuing (maintained) and, for example, a "1" is output as the detection signal (a trigger signal) ("0" is output when the level is not sustained).
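The three detectors differ only in window length and in whether a second look is taken. A rough sketch, assuming a digitized signal and treating the integration as a sum of squared samples over fixed windows (the window lengths, and the one-window re-check standing in for the ~300 msec delay, are approximations):

```python
import numpy as np

def window_energy(x: np.ndarray, fs: float, window_s: float) -> np.ndarray:
    """Sum of squared samples over consecutive windows (the 'acoustic energy')."""
    n = int(window_s * fs)
    usable = len(x) - len(x) % n
    return (x[:usable].reshape(-1, n) ** 2).sum(axis=1)

def level_trigger(x, fs, level):
    # Unit 22: ~500 ms integration windows against the first set level value.
    return window_energy(x, fs, 0.5) > level

def peak_trigger(x, fs, peak):
    # Unit 23: ~100 ms windows against the set peak value.
    return window_energy(x, fs, 0.1) > peak

def continuation_trigger(x, fs, level):
    # Unit 24: above the level now and still above it one window later
    # (the patent re-checks after ~300 ms; one 500 ms window approximates this).
    e = window_energy(x, fs, 0.5)
    out = np.zeros(len(e), dtype=bool)
    out[1:] = (e[:-1] > level) & (e[1:] > level)
    return out
```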
FIG. 4 shows the waveforms of the input, output and detection signals at the comparators 32, 34 and 36 of the detection units 22 to 24: (a) shows the first comparator 32 of the level detection unit 22, (b) the second comparator 34 of the peak detection unit 23, (c) the third comparator 36 of the level continuation detection unit 24, and (d) the reset signal.
When a detection signal ("1") arrives from either the level detection unit 22 or the peak detection unit 23, the spectrum calculation unit 25 first has the extracted acoustic signal digitized by an A/D converter (not shown); then, as shown in FIG. 5, the predetermined frequency band (450 to 2500 Hz) is divided into a predetermined number of divided frequency bands (also called banks), for example 105, and the frequency spectrum of the acoustic signal in each bank is obtained by fast Fourier transform (FFT).
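The patent gives the band (450 to 2500 Hz) and the bank count (105) but not the frame length or the mapping of FFT bins to banks. A sketch under the assumption of equal-width banks pooled from the FFT magnitude:

```python
import numpy as np

def bank_spectrum(frame: np.ndarray, fs: float, n_banks: int = 105,
                  band: tuple = (450.0, 2500.0)) -> np.ndarray:
    """FFT magnitude of one frame, pooled into `n_banks` banks over `band`."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    edges = np.linspace(band[0], band[1], n_banks + 1)
    return np.array([mag[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```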
The sound source identification unit 26 then identifies the type of sound source using the neural network.
The processing performed with this neural network is described in detail below.
In the sound source identification unit 26, the frequency spectrum is recognized and classified in four stages, each with its own classification method (first to fourth classifications), using the neural network. The classification numbers obtained in these stages are checked against a classification table, judgment data determined in advance by experiment, and the detected sound is thereby identified as one of many types including a collision sound, tire-road friction sound, horn, reckless-driving sound and siren. In the classification table, for example, a five-digit number is assigned to each type of sound (two digits for the first classification and one digit each for the second to fourth classifications), together with a recording-flag column indicating whether that sound is to be recorded.
The content of these classification operations is explained below.
In the first classification (first stage), the classification number is obtained by combining two division patterns with different classification criteria.
For the first type, each frequency spectrum of the acoustic signal in the 105 divided frequency bands (hereinafter also called banks) is normalized by its maximum value, and the total area of the normalized spectra over the 105 banks is then obtained.
The 105-bank divided frequency region is then divided into, for example, five classes numbered #0 to #4 according to a division pattern predetermined from the total area. The division is derived from sample data of actual traffic sounds (5,000 samples): the total area values of the 5,000 samples are formed into a frequency distribution, and the range is divided so that the classes are equally populated. That is, where the distribution of total area values is dense the division width is made narrow, and where it is sparse the width is made wide (the classes need not be exactly equal, and the number of divisions need not be five). For example, if the total area of the 105-bank spectrum in FIG. 5 is 2410, the classification number to which this total area belongs (total area values and classification numbers are associated in advance) is assigned as, for example, #3.
For the second type, the 105 banks are divided into, for example, ten classes numbered #0 to #9.
In this second type, each frequency spectrum of the 105 banks is again normalized by its maximum value, and the bank holding the maximum peak of the normalized 105-bank spectrum is found.
That is, the 105-bank region is divided into, for example, ten classes numbered #0 to #9 according to a division pattern based on the bank position of the maximum peak, and it is then determined to which of #0 to #9 the strongest spectrum in the spectral series of the extracted acoustic signal (shown as the bar graph in FIG. 5) belongs. The division of the 105 banks is predetermined from the maximum-level bank position: based on the 5,000 samples of actual traffic sound, the maximum-level bank positions of the samples are formed into a frequency distribution (bank position on the horizontal axis, number of samples on the vertical axis) and the range is divided so that the classes are equally populated. Where the distribution of maximum-level bank positions is dense the division width is narrow, and where it is sparse the width is wide (again, the classes need not be exactly equal, and the number of divisions need not be ten). Classification numbers #0 to #9 are then assigned from the smallest bank numbers to the largest. In FIG. 5, the maximum level lies near bank 88, and the class to which bank 88 belongs is assigned as, for example, #8.
The classification number of the first classification is then determined from these two numbers. For example, if #3 is assigned in the five-way division by total area and #9 in the ten-way division by maximum-level bank position, the first-classification number becomes #39.
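"Divided so that the classes are equally populated" reads as quantile binning over the training samples. A sketch of the whole first stage on that assumption (NumPy only; the edge arrays would be built once from the 5,000 training sounds):

```python
import numpy as np

def quantile_edges(train_values: np.ndarray, n_bins: int) -> np.ndarray:
    """Interior bin edges that split the training data into equal-count bins."""
    return np.quantile(train_values, np.linspace(0, 1, n_bins + 1)[1:-1])

def first_classification(spectrum: np.ndarray,
                         area_edges: np.ndarray,
                         peak_edges: np.ndarray) -> str:
    """Combine the total-area class (0-4) and the max-peak-bank class (0-9)."""
    s = spectrum / spectrum.max()                             # normalize by max
    area_class = int(np.digitize(s.sum(), area_edges))        # five-way split
    peak_class = int(np.digitize(np.argmax(s), peak_edges))   # ten-way split
    return f"#{area_class}{peak_class}"                       # e.g. "#39"
```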
In the second to fourth classifications, the characteristic part of the acoustic signal is extracted on the basis of its spectrum, and pattern matching (pattern recognition) between the extracted spectral series and spectral series determined in advance by experiment for identifying the sound source is performed with the neural network. From the classification numbers obtained in these stages, the sound is finally checked against the experimentally determined classification table, as described above, and identified as one of many types including a collision sound, tire-road friction sound, horn, reckless-driving sound and siren. In each of these stages, a group of a predetermined number of signal identification patterns (for example five at a time, as described later) is selected from the many prepared in the database on the basis of the classification result of the preceding stage and used for the pattern recognition.
The second to fourth classification operations are described below. In the second classification, five patterns are first retrieved from the database on the basis of the classification number obtained in the first classification (for example, #39).
Then, in the 105-bank spectral series of the acoustic signal, all data below 50% of the maximum spectrum are set to zero (also called a zero reset), and the normalized spectral series (shown in FIG. 6) is pattern-matched by the neural network against a total of six patterns: the five retrieved patterns plus one pattern representing sounds other than those five (i.e., other than the prepared patterns). A classification number is assigned accordingly.
In the third classification, five patterns are again retrieved from the database on the basis of the number obtained in the second classification. In the 105-bank spectral series of the acoustic signal, the maximum spectrum and the two banks on each side of it, five banks in all, are set to zero (zero reset) to create a new 105-bank spectral series. If the maximum of the newly created series is at or above a certain threshold, everything below 25% of that maximum is zeroed, the series is normalized (shown in FIG. 7), and the neural network matches it against a total of seven patterns: the five patterns, the pattern for sounds other than those five, and a below-threshold pattern (a series below the threshold also counts as one pattern). A classification number is assigned accordingly. In effect, this stage classifies the spectral series that remains after the strongest spectral part has been removed.
The fourth classification covers the following two cases; here too, five patterns to be used for the matching are retrieved from the database on the basis of the number obtained in the third classification.

(1) When, in the third classification, the maximum spectrum is below the threshold: in the 105-bank series created in the third classification, everything below 25% of the maximum spectrum is set to zero (zero reset), the series is normalized, and the neural network matches it against a total of six patterns (the five patterns plus the pattern for sounds other than those five), assigning a classification number.

(2) When, in the third classification, the maximum spectrum is at or above the predetermined threshold: in the 105-bank series created in the third classification, the maximum spectrum and the two banks on each side of it, five banks in all, are set to zero (zero reset) to create a new 105-bank series. If the maximum of this series is at or above a certain threshold, everything below 12.5% of that maximum is zeroed, the series is normalized (shown in FIG. 8), and the neural network matches it against seven patterns (again, the five patterns retrieved from the database, the pattern for sounds other than those five, and the below-threshold pattern), assigning a classification number. This stage thus classifies the series from which the second-strongest spectral part has been removed. A conceptual diagram of this neural-network classification is shown in FIG. 9.
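The operations recurring in stages two to four are the zero reset, the removal of the strongest peak and its neighbours, and a match against a handful of prepared patterns. A sketch of those operations (the nearest-pattern matcher is only a stand-in for the patent's neural network, not a reproduction of it):

```python
import numpy as np

def zero_reset(spec: np.ndarray, frac: float) -> np.ndarray:
    """Zero everything below `frac` of the maximum, then renormalize."""
    out = np.where(spec >= frac * spec.max(), spec, 0.0)
    return out / out.max()

def remove_strongest(spec: np.ndarray, half_width: int = 2) -> np.ndarray:
    """Zero the maximum bank plus `half_width` banks on each side (5 in all)."""
    out = spec.copy()
    k = int(np.argmax(out))
    out[max(0, k - half_width): k + half_width + 1] = 0.0
    return out

def nearest_pattern(series: np.ndarray, patterns: np.ndarray) -> int:
    """Stand-in matcher: index of the closest of the retrieved patterns."""
    return int(np.argmin(((patterns - series) ** 2).sum(axis=1)))
```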
The accident determination unit 27 receives the classification result of the sound source identification unit 26, i.e., the identification signal (denoted NT), the detection signal from the level continuation detection unit 24 (denoted PD) and the detection signal from the peak detection unit 23 (denoted PT), performs the logical operation {(NT and PD) or PT}, and judges whether the sound source is attributable to an accident or the like (an accident sound). NT is set to "1" when the sound is of a kind that arises from an accident or the like; PD is set to "1" when the sound is sustained; and PT is set to "1" when the peak value is at or above a predetermined strength.
The logical product (and) in this expression reflects the fact that an accident sound is not instantaneous but persists, if only briefly, which is why NT is ANDed with the detection signal PD. On the other hand, a sound caused by an accident can be expected to have a peak of considerable strength, so when the peak exceeds the set level value (a value set by experiment, of course), the sound should be judged attributable to an accident; hence the detection signal PT is ORed with the product (NT and PD). By this expression, therefore, a sound is judged to arise from an accident or the like either when the neural network relates it to an accident and it persists for a short time, or when its peak is as strong (high) as one generated by an accident.
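The decision rule itself is stated explicitly in the text and transcribes directly:

```python
def is_accident(nt: bool, pd: bool, pt: bool) -> bool:
    """Accident judgment of unit 27: {(NT and PD) or PT}."""
    return (nt and pd) or pt
```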
When the accident determination unit 27 judges that an accident or the like has occurred, an instruction to that effect is output to the data storage instruction unit 28, and the data recording device 4 records and stores the video from before and after the occurrence together with the sound.
When the data storage instruction unit 28 issues the storage instruction, the accident content judged by the accident determination unit 27 (for example, in coded form) is recorded together with the video as an index of the video data. This index distinguishes, for example, a collision sound; collision sound plus tire-road friction sound; collision sound plus horn; tire-road friction sound; horn; reckless-driving sound; siren; and other sounds.
The units above, the integrators, the comparators and so on are each implemented as electric signal circuits; in particular, the sound source identification unit 26, which performs the neural-network computation, is provided with an arithmetic processing unit such as a CPU.
The processing procedure by which the accident recording device 5 automatically records a traffic accident or the like will now be described briefly.
With the camera device 1 and microphone 2 arranged at the intersection in operation, the acoustic signal detected by the microphone 2 is extracted in the predetermined frequency band by the signal extraction unit 21, and the extracted acoustic signal is input to the level detection unit 22, the peak detection unit 23 and the level continuation detection unit 24, where a preliminary judgment is made as to whether an accident or the like has occurred. When a detection signal is obtained from at least one of the level detection unit 22 and the peak detection unit 23, the extracted acoustic signal is A/D-converted and the spectrum calculation unit 25 calculates its spectrum.
The spectral series obtained by this calculation is input to the sound source identification unit 26, where the sound source is identified by the neural-network classification method described above. When the identified sound is one highly likely to be connected with an accident or the like (for example, a collision sound, collision sound plus tire-road friction sound, collision sound plus horn, tire-road friction sound, horn, reckless-driving sound or siren), a detection signal (NT) indicating an accident or the like is output.
Next, the accident detection signal (NT), the continuation detection signal (PD) from the level continuation detection unit 24 and the peak detection signal (PT) from the peak detection unit 23 are input to the accident determination unit 27, where the logical operation is performed and it is judged whether the sound arises from an accident or the like.
When the accident determination unit 27 judges that an accident or the like has occurred, an instruction signal to that effect is output to the data storage instruction unit 28, and the video captured before and after the sound occurred is recorded and stored in the data recording device 4. Naturally, when this video data is recorded, the code data of the sound source type identified by the sound source identification unit 26 is recorded with it as an index to the video data, making later retrieval of the video data easier.
One identification by the sound source identification unit 26 takes, for example, 3 seconds; when a detection signal (trigger signal) is obtained in any of the detection units 22 to 24, the output of the detection signal is maintained until the 3 seconds have elapsed, after which a reset signal is output.
Thus, by the configuration of the accident recording device 5, and in particular of its sound source identification unit 26, the extracted acoustic signal, from which the low frequencies normally emitted by vehicles (such as engine noise) and the high frequencies difficult for humans to hear have been removed, is checked by the level detection unit 22 as to whether its level value exceeds the set level value and by the peak detection unit 23 as to whether its peak value exceeds the set peak value; when at least one of these set values is exceeded, the frequency spectrum of the signal is calculated and the type of sound source is identified with the neural network. The sound source can therefore be identified more accurately.
Further, by the configuration of the sound source determination device 3 using this sound source identification unit 26, the judgment by the level continuation detection unit 24 as to whether the level of the acoustic signal is sustained beyond the set duration is applied on top of the sound source identified by the neural network, so the judgment as to whether an accident or the like has occurred can be made more accurately still.
Next, the accident content classification device 7 will be described with reference to FIG. 10. The accident content classification device 7 comprises: an image processing unit 41 that receives video data from the database 10 and performs image processing, for example extracting the contours of objects within the intersection for each image frame [this may be done for frames at predetermined intervals, more specifically about every 0.1 second]; an object recognition unit 42 that receives the image-processed data from the image processing unit 41, recognizes what kind of object the object judged to have caused the traffic accident (the target object) is, for example an automobile, a two-wheeled vehicle or a pedestrian, and also identifies the vehicle type, for example a large vehicle, passenger car or emergency vehicle for automobiles, or a motorcycle or bicycle for two-wheeled vehicles; and a sound source specification unit 43 that receives the acoustic data from the database 10 and, on the basis of the recognition of the image data, specifies the acoustic data, i.e., the accident sound.
The object recognition method of the object recognition unit 42 will be described briefly here.
The object recognition unit 42 detects a moving object in the video by a difference method, for example as a rectangular cuboid. The detected object, i.e., the vehicle type, is judged by pattern matching on the cuboid, using its width, depth, height, volume and so on as evaluation parameters. Looking at the relations among width, depth and height: in general, depth > height > width holds for two-wheeled vehicles, while height > width, depth holds for pedestrians. For four-wheeled vehicles the relations among width, depth and height vary, but the volume makes it possible to distinguish large vehicles from passenger cars (a sketch of this heuristic follows below).
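Read literally, these relations give a small decision rule. A sketch (the volume threshold separating large vehicles from passenger cars is invented for illustration; the patent gives no figure):

```python
def classify_cuboid(width: float, depth: float, height: float) -> str:
    """One reading of the cuboid heuristic; dimensions in metres."""
    if depth > height > width:
        return "two-wheeler"                    # depth > height > width
    if height > width and height > depth:
        return "pedestrian"                     # height dominates
    # Four-wheelers: separate by volume (threshold assumed, in m^3).
    return "large vehicle" if width * depth * height > 30.0 else "passenger car"
```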
More specifically, reference parameters for each vehicle type are compiled into a database from video data recorded in advance; the evaluation parameters computed for an object are finally input to a neural network trained on these reference parameters, the probability of each vehicle type (the degree of matching with that type's reference parameters) is calculated, and the vehicle type with the largest value is taken as the vehicle type of the object.

Next, the accident situation analysis device 8 will be described with reference to FIG. 11. The accident situation analysis device 8 comprises: an image processing unit 51 that receives video data from the database 10 and performs image processing, for example extracting the contours of objects within the intersection for each image frame [this may be done for frames at predetermined intervals, more specifically about every second]; a trajectory calculation unit 52 that receives the image-processed data from the image processing unit 51 and obtains the trajectory of the object judged to have caused the traffic accident; an accident situation detection unit 53 that detects, from the trajectory obtained by the trajectory calculation unit 52, accident circumstances such as the entry position of the object, the object's speed, the accident position, an estimate of braking, and whether a traffic signal was ignored; and an accident situation output unit 54 that outputs the accident situation obtained by the accident situation detection unit 53 in report form.
The specific analysis results obtained by this accident situation analysis device 8 are as follows.
(1) The individual objects classified as involved in the accident, i.e., the vehicles and so on, are trace-displayed on the same screen (for example, vehicle A and vehicle B in FIG. 12(a)).
(2) From the traces of the accident vehicles, the portion where the vehicles came into contact is detected and collated with the acoustic data at that moment; if, for example, a collision sound is detected there (see FIG. 12(b)), the final accident position can be specified.
(3) The speed immediately before the accident (on entry into the intersection) is determined automatically from the distance the object moves per frame (see the graph in FIG. 12(c), and the sketch after this list).
(4) From the signals for each direction recorded in the video data, it can also be detected whether a vehicle ignored a traffic signal.
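Item (3) amounts to differencing a tracked position sequence. A sketch, assuming the trajectory has already been converted to ground coordinates in metres (averaging over steps is an assumption; the patent speaks only of distance per frame):

```python
import numpy as np

def entry_speed_kmh(track_m: np.ndarray, frame_dt_s: float) -> float:
    """Speed from per-frame displacement; `track_m` is (N, 2) positions in metres."""
    step = np.linalg.norm(np.diff(track_m, axis=0), axis=1)   # metres per frame
    return float(step.mean() / frame_dt_s * 3.6)              # m/s -> km/h
```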
A concrete example of an accident analysis report summarizing these results is shown in FIG. 13.
Next, the data browsing device 11 will be described.
The data browsing device 11 is a computer terminal equipped with search and browsing software (hereinafter "browsing software"; for example, a web browser is used) capable of searching and browsing the video data and the various data related to operation logs stored in the database 10; it is, of course, connected to the database 10 via the LAN 15.
From the search screen, the browsing software can search the video data, or narrow a search, by "intersection name, date, recording factor [the type of sound judged to require recording (specifically, the sound source identification result)], accident type, accident vehicle type" and so on, and can display the search results as a list (for example, intersection name, date, recording factor, accident type, accident vehicle type and video data file name).
Furthermore, by selecting the file name of a video data item in the search results and pressing the play button, that video data can be played back.
The operation logs can likewise be searched, or a search narrowed, from the search screen by "intersection name, date" and so on, and the results (for example, intersection name, date and log file name) displayed as a list.
For the operation logs too, selecting a log file name from the list and pressing the display button displays the contents of that operation log.
Next, the sound source judgment evaluation device 9 will be described.
Like the data browsing device 11, the sound source judgment evaluation device 9 is a computer terminal equipped with search and browsing software (hereinafter "browsing software"; for example, a web browser is used) for searching and browsing the video data and the various operation-log data stored in the database 10, and is connected to the database 10 via the LAN 15.
Here too, selecting the file name of a video data item in the search results and pressing the play button plays back that video data.
The device further has a re-learning function: when a monitoring operator plays back the selected image data and checks the accident content on screen, any case misrecognized as a traffic accident when it was not one is learned again using the neural network.
Briefly, when the classification number used in an accident recording device 5 to identify (judge) the acoustic data of the misrecognized video differs each time for the misrecognized data, the recording flag of the misrecognized classification number in the classification table is corrected from "1 (record)" to "0 (zero: do not record)", and the corrected classification table is transmitted to the accident recording device 5 concerned, where the table is updated.
When, on the other hand, the classification number identifying the sound source is the same every time, the classification weighting coefficients used in the neural network are re-trained on that acoustic data; with this re-learning, the classification numbers, types and recording flags in the classification table are reviewed and corrected where necessary. The corrected weighting coefficients and classification table are then transmitted to the accident recording device 5 concerned, where the table is updated. A flowchart explaining this re-learning function is shown in FIG. 14.
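The branch between the two correction paths can be written down compactly. A sketch under an assumed table layout (the retraining itself is outside the scope of a few lines and is only marked):

```python
def handle_misrecognition(class_numbers: list, table: dict) -> dict:
    """Branch of the re-learning rule; the table layout here is assumed.

    `table` maps a classification number to {"type": str, "record": 0 or 1}.
    """
    if len(set(class_numbers)) > 1:
        # A different class number each time: flip those entries to
        # "0 (do not record)" and push the corrected table back to the device.
        for c in set(class_numbers):
            table[c]["record"] = 0
    else:
        # Always the same number: the patent retrains the network weights on
        # this audio, then reviews number/type/flag; only flagged here.
        table[class_numbers[0]]["needs_review"] = True
    return table
```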
Next, the data exchange between the accident recording device 5 and the management center 12 in the system above will be described briefly.
As described above, this data exchange is carried out by virtually establishing a one-to-one communication path over the communication line 13, for example the Internet (or an intranet), and the transmission protocol used is, for example, the FTP protocol over TCP/IP. Accordingly, the server 14 of the management center 12 and the accident recording device 5 are each provided with an FTP program file.
The video data recorded by the accident recording device 5 and the acoustic data used to identify the sound source are sent to the management center 12 at a predetermined time, for example about once a day (for example, at the change of date). Together with these data, the operation history log of the accident recording device 5 (for example, power-on times, power-failure times, unit fault occurrence and recovery, and the factor data at the time of recording [the sound source identification result and the spectrum calculation result (or sound pressure data)]) is also sent.
From the management center 12 side, a file describing the operational setting data of the accident recording device 5 (for example, the number of video channels, video size, image quality, recording time, pre-trigger recording time, video frame interval, intersection name and signal pattern data) is sent to the accident recording device 5 as necessary. The classification table corrected by the sound source judgment evaluation device 9 is likewise sent to the accident recording device 5.
When the management center 12 determines that a traffic accident has occurred, the necessary traffic accident data (for example, the intersection name, time of occurrence and accident situation) is sent to the traffic control center 16 via the communication line 17.
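Since the text names FTP over TCP/IP explicitly, the daily push can be illustrated with the standard-library client. A sketch (host, credentials and file list are placeholders, not values from the patent):

```python
from ftplib import FTP
from pathlib import Path

def daily_upload(files, host, user, password):
    """Push the day's video, audio and log files to the center over FTP."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        for f in map(Path, files):
            with open(f, "rb") as fh:
                ftp.storbinary(f"STOR {f.name}", fh)
```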
According to the recording and analysis system described above, the video data and acoustic data of a traffic accident occurring at an intersection are judged on the spot by the sound source determination device 3 as to whether a traffic accident has occurred and, when one is judged to have occurred, are recorded in the data recording device 4; moreover, the data recorded in the data recording device 4 are sent via the communication line 13 to the management center 12, where the accident content classification device 7 classifies the traffic accident and the accident situation analysis device 8 analyzes its content. Compared with the conventional arrangement, in which traffic accident data are recorded in a data recording device provided at the intersection and the recordings are collected later, no collection effort is required; and because the main data analysis of the accident (for example, detection of the contact sound and tracing of the vehicles) is performed automatically by the accident situation analysis device 8, the monitoring staff's analysis of the accident content becomes easier and can be carried out quickly.
In the embodiment above, only one microphone is arranged at the intersection; however, as shown for example in FIG. 15, by arranging one microphone 2 on each side of the center line of the intersection K and comparing the acoustic levels of their acoustic data, it can be determined on which side of the intersection's center line the sound occurred (see the sketch below). This makes it possible to identify the location of a traffic accident more reliably.
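The comparison itself is simple. A sketch, assuming synchronized recordings from the two microphones and taking mean energy as the "acoustic level" (the patent does not define the level measure):

```python
import numpy as np

def louder_side(mic_a: np.ndarray, mic_b: np.ndarray) -> str:
    """Pick the side of the intersection's center line with the stronger sound."""
    return "A" if (mic_a ** 2).mean() > (mic_b ** 2).mean() else "B"
```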
In the embodiment above, the classification is performed broadly in four stages, the first to fourth classifications; however, the sound source may instead be identified using, for example, only the first to third classifications (in which case the classification number has four digits). In this case too, of course, the sound source can be identified accurately, as in the embodiment above. Further, while the embodiment describes the recording of traffic accidents at intersections, locations other than intersections may be used; moreover, the system can be applied to sudden events other than traffic accidents, for example the monitoring of work carried out at construction sites or the monitoring of convenience stores.

Claims

1. A recording and analysis system for sudden events, comprising:
sudden-event recording means having sound source determination means for receiving acoustic data from a sound source and judging, on the basis of judgment data, whether a sudden event has occurred, and data recording means for temporarily recording the acoustic data judged by the sound source determination means to represent a sudden event together with video data from means for photographing the sudden event;
sudden-event classification means for receiving the acoustic data and video data recorded by the sudden-event recording means and classifying the sudden event; and
sudden-event analysis means for analyzing the content of the sudden event on the basis of the classification data produced by the sudden-event classification means,
wherein the sudden-event recording means is arranged at the location where sudden events occur, the sudden-event classification means and the sudden-event analysis means are installed in a management center, and the acoustic data and video data recorded by the sudden-event recording means are sent to the management center via a communication line.
2. The recording and analysis system for sudden events according to claim 1, further comprising sound source judgment evaluation means for comparing the result analyzed by the sudden-event analysis means with the result of visually checking the video and, when the event is not a predetermined sudden event, changing the judgment data in the sound source determination means.
3. The recording and analysis system for sudden events according to claim 1 or 2, wherein, when the sudden event is a traffic accident, the classification data comprise at least the type of the vehicle involved in the accident and the accident sound.
4. The recording and analysis system for sudden events according to any one of claims 1 to 3, wherein, when the sudden event is a traffic accident, the sudden-event analysis means analyzes, using the video data and the acoustic data, at least the positional relationship of the vehicles involved in the accident and the presence or absence of a collision.
5. The recording and analysis system for sudden events according to any one of claims 1 to 4, wherein the sudden event is a traffic accident, a microphone is used as the acoustic detection means provided in the sound source determination means, and such microphones are installed on both sides of the center of the intersection.
PCT/JP2004/004739 2004-03-31 2004-03-31 Recording/analyzing system for accidental event WO2005101346A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2006512179A JP4242422B2 (en) 2004-03-31 2004-03-31 Sudden event recording and analysis system
PCT/JP2004/004739 WO2005101346A1 (en) 2004-03-31 2004-03-31 Recording/analyzing system for accidental event

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2004/004739 WO2005101346A1 (en) 2004-03-31 2004-03-31 Recording/analyzing system for accidental event

Publications (1)

Publication Number Publication Date
WO2005101346A1 true WO2005101346A1 (en) 2005-10-27

Family

ID=35150207

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2004/004739 WO2005101346A1 (en) 2004-03-31 2004-03-31 Recording/analyzing system for accidental event

Country Status (2)

Country Link
JP (1) JP4242422B2 (en)
WO (1) WO2005101346A1 (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007286857A (en) * 2006-04-17 2007-11-01 Sekisui Jushi Co Ltd Accident monitoring system
JP2012504832A (en) * 2008-10-02 2012-02-23 ボールズ、マーク Secondary market and vending systems for devices
ITBZ20130054A1 (en) * 2013-11-04 2015-05-05 Tarasconi Traffic Tecnologies Srl ROAD TRAFFIC VIDEO SURVEILLANCE SYSTEM WITH REPORTING DANGER SITUATIONS
JP2017010290A (en) * 2015-06-23 2017-01-12 株式会社東芝 Information processing apparatus and event detection method
US9818160B2 (en) 2008-10-02 2017-11-14 ecoATM, Inc. Kiosk for recycling electronic devices
US9881284B2 (en) 2008-10-02 2018-01-30 ecoATM, Inc. Mini-kiosk for recycling electronic devices
US9885672B2 (en) 2016-06-08 2018-02-06 ecoATM, Inc. Methods and systems for detecting screen covers on electronic devices
US9904911B2 (en) 2008-10-02 2018-02-27 ecoATM, Inc. Secondary market and vending system for devices
US9911102B2 (en) 2014-10-02 2018-03-06 ecoATM, Inc. Application for device evaluation and other processes associated with device recycling
US10127647B2 (en) 2016-04-15 2018-11-13 Ecoatm, Llc Methods and systems for detecting cracks in electronic devices
US10269110B2 (en) 2016-06-28 2019-04-23 Ecoatm, Llc Methods and systems for detecting cracks in illuminated electronic device screens
US10401411B2 (en) 2014-09-29 2019-09-03 Ecoatm, Llc Maintaining sets of cable components used for wired analysis, charging, or other interaction with portable electronic devices
US10417615B2 (en) 2014-10-31 2019-09-17 Ecoatm, Llc Systems and methods for recycling consumer electronic devices
US10445708B2 (en) 2014-10-03 2019-10-15 Ecoatm, Llc System for electrically testing mobile devices at a consumer-operated kiosk, and associated devices and methods
US10475002B2 (en) 2014-10-02 2019-11-12 Ecoatm, Llc Wireless-enabled kiosk for recycling consumer devices
US10572946B2 (en) 2014-10-31 2020-02-25 Ecoatm, Llc Methods and systems for facilitating processes associated with insurance services and/or other services for electronic devices
US10825082B2 (en) 2008-10-02 2020-11-03 Ecoatm, Llc Apparatus and method for recycling mobile phones
US10860990B2 (en) 2014-11-06 2020-12-08 Ecoatm, Llc Methods and systems for evaluating and recycling electronic devices
US11010841B2 (en) 2008-10-02 2021-05-18 Ecoatm, Llc Kiosk for recycling electronic devices
US11080672B2 (en) 2014-12-12 2021-08-03 Ecoatm, Llc Systems and methods for recycling consumer electronic devices
WO2022050086A1 (en) * 2020-09-03 2022-03-10 ソニーグループ株式会社 Information processing device, information processing method, detection device, and information processing system
US11462868B2 (en) 2019-02-12 2022-10-04 Ecoatm, Llc Connector carrier for electronic device kiosk
WO2022208586A1 (en) 2021-03-29 2022-10-06 日本電気株式会社 Notification device, notification system, notification method, and non-transitory computer-readable medium
US11482067B2 (en) 2019-02-12 2022-10-25 Ecoatm, Llc Kiosk for evaluating and purchasing used electronic devices
US11798250B2 (en) 2019-02-18 2023-10-24 Ecoatm, Llc Neural network based physical condition evaluation of electronic devices, and associated systems and methods
US11922467B2 (en) 2020-08-17 2024-03-05 ecoATM, Inc. Evaluating an electronic device using optical character recognition

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017200153A1 (en) 2017-01-09 2018-07-12 Ford Global Technologies, Llc Method for detecting traffic situations
CN107945512B (en) * 2017-11-27 2020-11-10 海尔优家智能科技(北京)有限公司 Traffic accident handling method and system
CN108764042B (en) * 2018-04-25 2021-05-28 深圳市科思创动科技有限公司 Abnormal road condition information identification method and device and terminal equipment
CN110942629A (en) * 2019-11-29 2020-03-31 中核第四研究设计工程有限公司 Road traffic accident management method and device and terminal equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1011694A (en) * 1996-06-24 1998-01-16 Mitsubishi Heavy Ind Ltd Automobile accident monitoring device
JP2000207676A (en) * 1999-01-08 2000-07-28 Nec Corp Traffic accident detector
JP2002230679A (en) * 2001-01-30 2002-08-16 Natl Inst For Land & Infrastructure Management Mlit Road monitoring system and road monitoring method
JP2002342882A (en) * 2001-05-11 2002-11-29 Fujitsu Ltd Device for identifying moving body, and method and device for automatically warning moving body
JP2003061074A (en) * 2001-08-09 2003-02-28 Mitsubishi Electric Corp Image recognition processing system
JP2003202260A (en) * 2001-10-25 2003-07-18 Hitachi Zosen Corp Sound source identifying device, sudden event detecting device, and device for automatically recording sudden event
JP2003157487A (en) * 2001-11-22 2003-05-30 Mitsubishi Electric Corp Traffic state monitoring device

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007286857A (en) * 2006-04-17 2007-11-01 Sekisui Jushi Co Ltd Accident monitoring system
US11443289B2 (en) 2008-10-02 2022-09-13 Ecoatm, Llc Secondary market and vending system for devices
US9818160B2 (en) 2008-10-02 2017-11-14 ecoATM, Inc. Kiosk for recycling electronic devices
US10825082B2 (en) 2008-10-02 2020-11-03 Ecoatm, Llc Apparatus and method for recycling mobile phones
US10853873B2 (en) 2008-10-02 2020-12-01 Ecoatm, Llc Kiosks for evaluating and purchasing used electronic devices and related technology
US9881284B2 (en) 2008-10-02 2018-01-30 ecoATM, Inc. Mini-kiosk for recycling electronic devices
US11790328B2 (en) 2008-10-02 2023-10-17 Ecoatm, Llc Secondary market and vending system for devices
US9904911B2 (en) 2008-10-02 2018-02-27 ecoATM, Inc. Secondary market and vending system for devices
US11526932B2 (en) 2008-10-02 2022-12-13 Ecoatm, Llc Kiosks for evaluating and purchasing used electronic devices and related technology
US10032140B2 (en) 2008-10-02 2018-07-24 ecoATM, LLC. Systems for recycling consumer electronic devices
US10055798B2 (en) 2008-10-02 2018-08-21 Ecoatm, Llc Kiosk for recycling electronic devices
JP2012504832A (en) * 2008-10-02 Bowles, Mark Secondary market and vending systems for devices
US10157427B2 (en) 2008-10-02 2018-12-18 Ecoatm, Llc Kiosk for recycling electronic devices
US11935138B2 (en) 2008-10-02 2024-03-19 ecoATM, Inc. Kiosk for recycling electronic devices
US11080662B2 (en) 2008-10-02 2021-08-03 Ecoatm, Llc Secondary market and vending system for devices
US11907915B2 (en) 2008-10-02 2024-02-20 Ecoatm, Llc Secondary market and vending system for devices
US11010841B2 (en) 2008-10-02 2021-05-18 Ecoatm, Llc Kiosk for recycling electronic devices
ITBZ20130054A1 (en) * 2013-11-04 2015-05-05 Tarasconi Traffic Tecnologies Srl ROAD TRAFFIC VIDEO SURVEILLANCE SYSTEM WITH REPORTING DANGER SITUATIONS
US10401411B2 (en) 2014-09-29 2019-09-03 Ecoatm, Llc Maintaining sets of cable components used for wired analysis, charging, or other interaction with portable electronic devices
US10475002B2 (en) 2014-10-02 2019-11-12 Ecoatm, Llc Wireless-enabled kiosk for recycling consumer devices
US10496963B2 (en) 2014-10-02 2019-12-03 Ecoatm, Llc Wireless-enabled kiosk for recycling consumer devices
US11734654B2 (en) 2014-10-02 2023-08-22 Ecoatm, Llc Wireless-enabled kiosk for recycling consumer devices
US11126973B2 (en) 2014-10-02 2021-09-21 Ecoatm, Llc Wireless-enabled kiosk for recycling consumer devices
US9911102B2 (en) 2014-10-02 2018-03-06 ecoATM, Inc. Application for device evaluation and other processes associated with device recycling
US10438174B2 (en) 2014-10-02 2019-10-08 Ecoatm, Llc Application for device evaluation and other processes associated with device recycling
US11790327B2 (en) 2014-10-02 2023-10-17 Ecoatm, Llc Application for device evaluation and other processes associated with device recycling
US11232412B2 (en) 2014-10-03 2022-01-25 Ecoatm, Llc System for electrically testing mobile devices at a consumer-operated kiosk, and associated devices and methods
US10445708B2 (en) 2014-10-03 2019-10-15 Ecoatm, Llc System for electrically testing mobile devices at a consumer-operated kiosk, and associated devices and methods
US10417615B2 (en) 2014-10-31 2019-09-17 Ecoatm, Llc Systems and methods for recycling consumer electronic devices
US11436570B2 (en) 2014-10-31 2022-09-06 Ecoatm, Llc Systems and methods for recycling consumer electronic devices
US10572946B2 (en) 2014-10-31 2020-02-25 Ecoatm, Llc Methods and systems for facilitating processes associated with insurance services and/or other services for electronic devices
US10860990B2 (en) 2014-11-06 2020-12-08 Ecoatm, Llc Methods and systems for evaluating and recycling electronic devices
US11315093B2 (en) 2014-12-12 2022-04-26 Ecoatm, Llc Systems and methods for recycling consumer electronic devices
US11080672B2 (en) 2014-12-12 2021-08-03 Ecoatm, Llc Systems and methods for recycling consumer electronic devices
JP2017010290A (en) * 2015-06-23 2017-01-12 株式会社東芝 Information processing apparatus and event detection method
US10127647B2 (en) 2016-04-15 2018-11-13 Ecoatm, Llc Methods and systems for detecting cracks in electronic devices
US9885672B2 (en) 2016-06-08 2018-02-06 ecoATM, Inc. Methods and systems for detecting screen covers on electronic devices
US10909673B2 (en) 2016-06-28 2021-02-02 Ecoatm, Llc Methods and systems for detecting cracks in illuminated electronic device screens
US10269110B2 (en) 2016-06-28 2019-04-23 Ecoatm, Llc Methods and systems for detecting cracks in illuminated electronic device screens
US11803954B2 (en) 2016-06-28 2023-10-31 Ecoatm, Llc Methods and systems for detecting cracks in illuminated electronic device screens
US11482067B2 (en) 2019-02-12 2022-10-25 Ecoatm, Llc Kiosk for evaluating and purchasing used electronic devices
US11462868B2 (en) 2019-02-12 2022-10-04 Ecoatm, Llc Connector carrier for electronic device kiosk
US11843206B2 (en) 2019-02-12 2023-12-12 Ecoatm, Llc Connector carrier for electronic device kiosk
US11798250B2 (en) 2019-02-18 2023-10-24 Ecoatm, Llc Neural network based physical condition evaluation of electronic devices, and associated systems and methods
US11922467B2 (en) 2020-08-17 2024-03-05 ecoATM, Inc. Evaluating an electronic device using optical character recognition
WO2022050086A1 (en) * 2020-09-03 2022-03-10 Sony Group Corporation Information processing device, information processing method, detection device, and information processing system
WO2022208586A1 (en) 2021-03-29 2022-10-06 NEC Corporation Notification device, notification system, notification method, and non-transitory computer-readable medium

Also Published As

Publication number Publication date
JPWO2005101346A1 (en) 2008-03-06
JP4242422B2 (en) 2009-03-25

Similar Documents

Publication Title
JP4242422B2 (en) Sudden event recording and analysis system
CN109616140B (en) Abnormal sound analysis system
US6442474B1 (en) Vision-based method and apparatus for monitoring vehicular traffic events
KR101969504B1 (en) Sound event detection method using deep neural network and device using the method
US8885929B2 (en) Abnormal behavior detection system and method using automatic classification of multiple features
CN109816987B (en) Electronic police law enforcement snapshot system for automobile whistling and snapshot method thereof
CN111986228B (en) Pedestrian tracking method, device and medium based on LSTM model escalator scene
Conte et al. An ensemble of rejecting classifiers for anomaly detection of audio events
CN110895662A (en) Vehicle overload alarm method and device, electronic equipment and storage medium
TW201513055A (en) Traffic accident monitoring and tracking system
Zinemanas et al. MAVD: A dataset for sound event detection in urban environments.
CN110620760A (en) FlexRay bus fusion intrusion detection method and detection device for SVM (support vector machine) and Bayesian network
KR102518615B1 (en) Apparatus and Method for complex monitoring to judge abnormal sound source
Rovetta et al. Detection of hazardous road events from audio streams: An ensemble outlier detection approach
KR102066718B1 (en) Acoustic Tunnel Accident Detection System
CN114371353A (en) Power equipment abnormity monitoring method and system based on voiceprint recognition
CN111444843A (en) Multimode driver and vehicle illegal behavior monitoring method and system
CN114926824A (en) Method for judging bad driving behavior
Dong et al. At the speed of sound: Efficient audio scene classification
JP4046592B2 (en) Sound source identification device, sudden event detection device, and sudden event automatic recording device
WO2008055306A1 (en) Machine learning system for graffiti deterrence
CN111476102A (en) Safety protection method, central control equipment and computer storage medium
CN116302809A (en) Edge end data analysis and calculation device
JP3164100B2 (en) Traffic sound source type identification device
JP3248522B2 (en) Sound source type identification device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: The EPO has been informed by WIPO that EP was designated in this application
WWE WIPO information: entry into national phase

Ref document number: 2006512179

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWW WIPO information: withdrawn in national office

Country of ref document: DE

122 Ep: PCT application non-entry in European phase