US20060105299A1 - Method and program for scenario provision in a simulation system - Google Patents
- Publication number
- US20060105299A1
- Authority
- US
- United States
- Prior art keywords
- scenario
- actor
- background image
- video
- behavior
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
Definitions
- The present invention is a continuation-in-part (CIP) of “Multiple Screen Simulation System and Method for Situational Response Training,” U.S. patent application Ser. No. 10/800,942, filed 15 Mar. 2004, which is incorporated by reference herein.
- The present invention relates to the field of simulation systems for weapons training. More specifically, the present invention relates to scenario authoring and provision in a simulation system.
- Training involves practicing marksmanship skills with lethal and/or non-lethal weapons. Additionally, training involves the development of decision-making skills in situations that are stressful and potentially dangerous. Indeed, perhaps the greatest challenges a trainee faces are deciding when to use force and how much force to use. If an officer is unprepared to make rapid decisions under the various threats he or she faces, injury to the officer or citizens may result.
- Although scenario training is essential for preparing a trainee to react safely with appropriate force and judgment, such training under various real-life situations is a difficult and costly endeavor.
- Live-fire weapons training may be utilized in firing ranges, but it is inherently dangerous, subject to tight safety regulation, costly in terms of training ammunition, and firing ranges may not be readily available in all regions.
- Moreover, live-fire weapons cannot be safely utilized in many real-life situational training exercises.
- Simulation provides a cost effective means of teaching initial weapon handling skills and some decision-making skills, and provides training in real-life situations in which live-fire may be undesirable due to safety or other restrictions.
- A conventional simulation system includes a single screen projection system to simulate reality.
- A trainee views the single screen with video projected thereon, and must decide whether to shoot or not to shoot at the subject.
- The weapon utilized in a simulation system typically employs a laser beam or light energy to simulate firearm operation and to indicate simulated projectile impact locations on a target.
- Single screen simulators utilize technology which restricts realism in tactical training situations and restricts the ability for thorough performance measurements. For example, in reality, lethal threats can come from any direction or from multiple directions. Unfortunately, a conventional single screen simulator does not expand or stimulate a trainee's awareness to these multi-directional threats because the trainee is compelled to focus on a situation directly in front of the trainee, as presented on the single screen. Accordingly, many instructors feel that the industry is encouraging “tunnel vision” by having the trainees focus on an 8-10 foot screen directly in front of them.
- One simulation system proposes the use of one screen directly in front of the trainee and a second screen directly behind the trainee.
- This dual screen simulation system simulates the “feel” of multi-directional threats.
- However, the trainee is not provided with peripheral stimulation in such a dual screen simulation system.
- Peripheral vision is used for detecting objects and movement outside of the direct line of vision. Accordingly, peripheral vision is highly useful for avoiding threats or situations from the side.
- Thus, the front screen/rear screen simulation system also suffers from the “tunnel vision” problem mentioned above. That is, a trainee does not employ his or her peripheral vision when assessing and reacting to a simulated real-life situation.
- Prior art simulation systems utilize projection systems for presenting prerecorded video, and detection cameras for tracking shots fired, that operate at standard video rates and resolutions based on the National Television System Committee (NTSC) analog television standard.
- Training scenarios based on the NTSC analog television standard suffer from poor realism due to low resolution images that are expanded to fit the large screen of the simulator system.
- Similarly, detection cameras based on the NTSC standard suffer from poor tracking accuracy, again due to low resolution.
- Thus, what is needed is a simulation system that provides realistic, multi-directional threats for situational response training.
- What is further needed is a simulation system that includes the ability for high accuracy trainee performance measurements.
- In addition, the simulation system should support a number of configurations and should be cost effective.
- It is an advantage of the present invention that a simulation system is provided in which a trainee can face multiple risks from different directions, thus encouraging teamwork and reinforcing the use of appropriate tactics.
- Another advantage of the present invention is that a simulation system is provided having realistic scenarios in which a trainee may practice observation techniques, practice time-critical judgment and target identification, and improve decision-making skills.
- Yet another advantage of the present invention is that a cost-effective simulation system is provided that can be configured to enable situational response training, marksmanship training, and/or can be utilized for weapons qualification testing.
- The above and other advantages of the present invention are carried out in one form by a simulation system.
- The simulation system includes a first screen for displaying a first view of a scenario, and a second screen for displaying a second view of the scenario.
- The first and second views of the scenario occur at a same instant, and the scenario is a visually presented situation.
- The simulation system further includes a device for selective actuation toward a target within the scenario displayed on the first and second screens, a detection subsystem for detecting an actuation of the device toward the first and second screens, and a processor in communication with the detection subsystem for receiving information associated with the actuation of the device and processing the received information to evaluate user response to the situation.
- The above and other advantages of the present invention are carried out in another form by a method of training a participant utilizing a simulation system, the participant being enabled to selectively actuate a device toward a target.
- The method calls for displaying a first view of a scenario on a first screen of the simulation system and displaying a second view of the scenario on a second screen of the simulation system.
- In the method, the first and second views of the scenario occur at a same instant, the scenario is prerecorded video of a situation, and the first and second views are adjacent portions of the prerecorded video.
- The method further calls for detecting an actuation of the device toward a target within the scenario displayed on the first and second screens, and evaluating user response to the situation in response to the actuation of the device.
- FIG. 1 shows a block diagram of a full surround simulation system in accordance with a preferred embodiment of the present invention;
- FIG. 2 shows a block diagram of components that form the simulation system of FIG. 1 ;
- FIG. 3 shows a side view of a rear projection system of the simulation system;
- FIG. 4 shows a block diagram of a portion of the simulation system of FIG. 1 arranged in a firing range configuration;
- FIG. 5 shows a table of a highly simplified exemplary scenario pointer database;
- FIG. 6 shows a flowchart of an exemplary video playback process for a scenario that includes video branching to subscenarios;
- FIG. 7 shows an illustrative representation of adjacent views of a prerecorded scenario;
- FIG. 8 shows a block diagram of a half surround simulation system in accordance with another preferred embodiment of the present invention;
- FIG. 9 shows a block diagram of a three hundred degree surround simulation system in accordance with yet another preferred embodiment of the present invention;
- FIG. 10 shows a flowchart of a training process of the present invention;
- FIG. 11 shows a diagram of an exemplary calibration pattern;
- FIG. 12 shows a diagram of a detector of the simulation system zoomed in to a small viewing area for qualification testing;
- FIG. 13 shows a block diagram of a simulation system in accordance with an alternative embodiment of the present invention;
- FIG. 14 shows a simplified block diagram of a computing system for executing a scenario provision process to generate a scenario for playback in a simulation system;
- FIG. 15 shows a flow chart of a scenario provision process;
- FIG. 16 shows a screen shot image of a main window presented in response to execution of the scenario provision process;
- FIG. 17 shows a screen shot image of a library window from the main window exposing a list of background images for the scenario;
- FIG. 18 shows a screen shot image of the main window following selection of one of the background images of FIG. 17 ;
- FIG. 19 shows a screen shot image of the library window from the main window exposing a list of actors for the scenario;
- FIG. 20 shows a screen shot image of the library window from the main window exposing a list of behaviors for assignment to an actor from the list of actors;
- FIG. 21 shows a screen shot image of an exemplary drop-down menu of behaviors supported by a selected one of the actors from the list of actors;
- FIG. 22 shows a screen shot image of the main window following selection of actors and behaviors for the scenario;
- FIG. 23 shows a screen shot image of a scenario logic window from the main window for configuring the scenario logic of the scenario;
- FIG. 24 shows a table of a key of exemplary symbols utilized within the scenario logic window of FIG. 23 ;
- FIG. 25 shows a screen shot image of an exemplary drop down menu of events associated with the scenario logic window of FIG. 23 ;
- FIG. 26 shows a screen shot image of an exemplary drop down menu of triggers associated with the scenario logic window of FIG. 23 ;
- FIG. 27 shows a screen shot image of a background editor window of the scenario provision process with a pan tool enabling a pan capability;
- FIG. 28 shows a screen shot image of the background editor window with a foreground marking tool enabling a layer capability;
- FIG. 29 shows a screen shot image of the background editor window with a background image selected for saving into a database;
- FIG. 30 shows an exemplary table of animation sequences associated with actors for use within the scenario provision process;
- FIGS. 31 a - d show an illustration of a single frame of an exemplary video clip undergoing video filming and editing;
- FIG. 32 shows a screen shot image of a behavior editor window showing a behavior logic flow for a first behavior;
- FIG. 33 shows a table of a key of exemplary symbols utilized within the behavior editor window; and
- FIG. 34 shows a partial screen shot image of the behavior editor window showing a behavior logic flow for a second behavior.
- FIG. 1 shows a diagram of a full surround simulation system 20 in accordance with a preferred embodiment of the present invention.
- Full surround simulation system 20 includes multiple screens 22 that fully surround a participation location 24 in which one or more participants, i.e., trainees 26 , may be positioned. Since multiple screens 22 fully surround participation location 24 , at least one of screens 22 is configured to swing open to enable ingress and egress. For example, screens 22 may be hingedly coupled to one another, and one of screens 22 may be mounted on casters that enable it to roll outwardly enough to allow passage of trainees 26 and/or trainers (not shown).
- Each of multiple screens 22 has a rear projection system 28 associated therewith.
- Rear projection system 28 is operable from, and the actions of trainees 26 may be monitored from, a workstation 30 located remote from participation location 24 .
- Workstation 30 is illustrated as being positioned proximate screens 22 . However, it should be understood that workstation 30 need not be proximate screens 22 , but may instead be located more distantly, for example, in another room.
- Bi-directional audio may be provided for communication between trainees 26 and trainers located at workstation 30 .
- In addition, video monitoring of participation location 24 may be provided to the trainer located at workstation 30 .
- Full surround simulation system 20 includes a total of six screens 22 arranged such that an angle 27 formed between corresponding faces 29 of screens 22 is approximately one hundred and twenty degrees. As such, the six screens 22 are arranged in a hexagonal pattern. In addition, each of screens 22 may be approximately ten feet wide by seven and a half feet high. Of course, those skilled in the art will recognize that other sizes of screens 22 may be provided. For example, a twelve foot wide by six foot nine inch high screen may be utilized for high definition formatted video. Thus, the configuration of simulation system 20 provides a multi-directional simulated environment in which a situation, or event, is unfolding. Although screens 22 are shown as being generally flat, the present invention may be adapted to include screens 22 that are curved. In such a configuration, screens 22 would form a generally circular pattern rather than the illustrated hexagonal pattern.
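The hexagonal arrangement described above can be checked with a short calculation. The sketch below is illustrative only (the function name and the alternate screen counts are assumptions, not from the patent): it computes the interior angle between adjacent faces for a regular arrangement of N flat screens, which is simply the interior angle of a regular N-sided polygon.

```python
def screen_face_angle(num_screens: int) -> float:
    """Interior angle, in degrees, formed between adjacent faces of a
    regular arrangement of `num_screens` flat screens (the interior
    angle of a regular polygon).  Six screens reproduce the
    ~120-degree angle 27 noted above."""
    return (num_screens - 2) * 180.0 / num_screens

print(screen_face_angle(6))  # hexagonal full-surround arrangement: 120.0
```

The same formula shows how other configurations would look: eight screens would meet at 135 degrees, approaching the curved-screen, circular arrangement mentioned above.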
- Full surround simulation system 20 provides a visually presented situation onto each of screens 22 so that trainees 26 in participation location 24 are fully immersed in the situation.
- Accordingly, trainees 26 can train to respond to peripheral visual cues, multi-directional auditory cues, and the like.
- The visually presented situation is full motion, pre-recorded video.
- However, other techniques may be employed, such as video overlay, computer generated imagery, and the like.
- The situation presented by simulation system 20 is pertinent to the type of training and the trainees 26 participating in the training experience.
- Trainees 26 may be law enforcement, security, military personnel, and the like.
- Accordingly, training scenarios projected via rear projection system 28 onto associated screens 22 correspond to real life situations in which trainees 26 might find themselves.
- For example, law enforcement scenarios could include response to shots fired at a facility, domestic disputes, hostage situations, and so forth.
- Security scenarios might include action in a crowded airport departure/arrival terminal, the jetway, or in an aircraft.
- Military scenarios could include training for a pending mission, a combat situation, an ambush, and so forth.
- Trainees 26 are provided with a weapon 31 .
- Weapon 31 may be implemented by any firearm (i.e., hand-gun, rifle, shotgun, etc.) and/or a non-lethal weapon (i.e., pepper spray, tear gas, stun gun, etc.) that may be utilized by trainees 26 in the course of duty.
- However, weapon 31 is equipped with a laser insert instead of actual ammunition.
- Trainees 26 actuate weapon 31 to selectively project a laser beam, represented by an arrow 33 , toward any of screens 22 in response to the situation presented by simulation system 20 .
- Desirably, weapon 31 is a laser device that projects infrared (IR) light, although a visible red laser device may also be used.
- Alternatively, other non-live-fire weaponry and/or live-fire weaponry may be employed.
- FIG. 2 shows a block diagram of components that form simulation system 20 .
- Workstation 30 generally includes a simulation controller 32 , and a tracking processor 34 in communication with simulation controller 32 .
- Simulation controller 32 is in communication with each of multiple projection controllers 36 .
- Each of projection controllers 36 is in communication with a corresponding one of rear projection systems 28 .
- Although separate controller/processor elements are utilized herein for different functions, those skilled in the art will readily appreciate that many of the computing functions performed by simulation controller 32 , tracking processor 34 , and projection controllers 36 may alternatively be combined into a comprehensive computing platform.
- Each rear projection system 28 includes a projector 38 having a video input 40 in communication with a video output 42 of its respective projection controller 36 , and a sound device, i.e., a speaker 44 , having an audio input 46 in communication with an audio output 48 of its respective projection controller 36 .
- Each rear projection system 28 further includes a detector 50 , in communication with tracking processor 34 via a high speed serial bus 51 . Thus, the collection of detectors 50 defines a detection subsystem of simulation system 20 . Projector 38 and detector 50 face a mirror 52 of rear projection system 28 .
- Simulation controller 32 may include a scenario pointer database 54 that is an index to a number of scenarios (discussed below) that are prerecorded full motion video of various situations that are to be presented to trainees 26 .
- In addition, each of projection controllers 36 may include a scenario library 56 pertinent to its location within simulation system 20 .
- Each scenario library 56 includes a portion of the video and audio to be presented via the associated one of projectors 38 and speakers 44 .
- Simulation controller 32 accesses scenario pointer database 54 to index to the appropriate video identifiers (discussed below) that correspond to the scenario to be presented.
- Simulation controller 32 then commands each of projection controllers 36 to concurrently present corresponding video, represented by an arrow 58 , and any associated audio, represented by arced lines 60 .
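The lookup-then-command flow just described can be sketched as follows. The in-memory "database", the controller identifiers, and the command strings are all hypothetical illustrations; they simply mirror the index structure of scenario pointer database 54, in which a scenario identifier maps each projection controller to the video index identifier it should play.

```python
# Hypothetical mirror of scenario pointer database 54:
# scenario identifier -> {projection controller id: video index identifier}
SCENARIO_POINTER_DB = {
    1: {"PC1": "1-1", "PC2": "1-2", "PC3": "1-3"},
    2: {"PC1": "2-1", "PC2": "2-2", "PC3": "2-3"},
}

def start_scenario(scenario_id):
    """Look up a scenario and return the playback command each
    projection controller would be issued, so that all adjacent
    views are started concurrently."""
    views = SCENARIO_POINTER_DB[scenario_id]
    return [(controller, f"PLAY {video_id}")
            for controller, video_id in sorted(views.items())]

print(start_scenario(1))
```

In a real system each command would carry a shared time code so the adjacent views remain locked to the same instant, as described below in connection with FIG. 5.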
- Video 58 is projected toward a reflective surface 62 of mirror 52 where video 58 is thus reflected onto screen 22 in accordance with conventional rear projection methodology.
- During playback, trainee 26 may elect to shoot his or her weapon 31 , i.e., project laser beam 33 , toward an intended target within the scenario.
- An impact location (discussed below) of laser beam 33 is detected by detector 50 via reflective surface 62 of mirror 52 when laser beam 33 is projected onto screen 22 .
- Information regarding the impact location is subsequently communicated to tracking processor 34 to evaluate trainee response to the presented scenario (discussed below).
- Trainee response may then be compiled into a report 64 .
- Simulation controller 32 is a conventional computing system that includes, for example, input devices (keyboard, mouse, etc.), output devices (monitor, printers, etc.), a data reader, memory, programs stored in memory, and so forth. Simulation controller 32 and projection controllers 36 operate under a primary/secondary computer networking communication protocol in which simulation controller 32 (the primary device) controls projection controllers (the secondary devices).
- Simulation system 20 illustrated in FIG. 1 , includes a quantity of six screens 22 and six rear projection systems 28 arranged in a hexagonal configuration to form the full surround configuration of system 20 .
- However, simulation system 20 need not be limited to only six screens 22 , but may include more or fewer than six screens 22 . Accordingly, the block diagram representation of simulation system 20 is shown having a quantity of “N” projection controllers 36 and their associated “N” rear projection systems 28 to illustrate this point.
- Desirably, each of projectors 38 is capable of playing high definition video.
- High definition refers to a television system that has twice as many scan lines per frame as a conventional system, a proportionally sharper image, and a wide-screen format.
- The high-definition format uses a 16:9 aspect ratio (an image's width divided by its height), although the 4:3 aspect ratio of conventional television may also be used.
- The high resolution images (1024×768 or 1280×720) allow much more detail to be shown.
- Simulator system 20 places trainees 26 close to screens 22 , so that trainees 26 can see more detail. Consequently, the high resolution video images are advantageously utilized to provide more realistic imagery to trainees 26 .
- Although the present invention is described in terms of its use with known high definition video formats, it may further be adapted for future higher resolution video formats.
- Each of detectors 50 is an Institute of Electrical and Electronics Engineers (IEEE) 1394-compliant digital video camera in communication with tracking processor 34 via high speed serial bus 51 .
- IEEE 1394 is a digital video serial bus interface standard that offers high-speed communications and isochronous real-time data services.
- An IEEE 1394 system is advantageously used in place of the more common universal serial bus (USB) due to its faster speed.
- However, those skilled in the art will recognize that existing and upcoming standards that offer high-speed communications, such as USB 2.0, may alternatively be employed.
- Each of detectors 50 further includes an infrared (IR) filter 66 removably covering a lens 68 of detector 50 .
- IR filter 66 may be hingedly affixed to detector 50 or may be pivotally affixed to detector 50 .
- IR filter 66 covers lens 68 when simulator system 20 is functioning so as to accurately detect the impact location of laser beam 33 ( FIG. 1 ) on screen 22 by filtering all light except IR.
- IR filter 66 is removed from lens 68 so that visible light can be let in during the calibration process.
- IR filter 66 may be manually or automatically moved from in front of lens 68 as represented by detector 50 , labeled “DETECTOR N.” An exemplary calibration process will be described in connection with the training process of FIG. 10 .
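With IR filter 66 in place, a detector's frame is essentially dark except for any projected laser spot, so locating an impact reduces to finding the brightest above-threshold pixel. The sketch below is a simplifying assumption of how such detection might work (frame format, threshold value, and function name are all illustrative, not the patent's algorithm):

```python
def find_impact(frame, threshold=200):
    """Locate a laser 'shot' in an IR-filtered frame.

    frame: 2-D list of pixel intensities (0-255), dark except for the spot.
    Returns (row, col) of the brightest pixel above `threshold`,
    or None if no shot is present in this frame."""
    best, location = threshold, None
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if value > best:
                best, location = value, (r, c)
    return location
```

In practice the detected pixel coordinates would then be mapped to screen coordinates using the calibration pattern of FIG. 11 before being reported to the tracking processor.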
- FIG. 3 shows a side view of one of rear projection systems 28 , i.e., a first projection system 28 ′, of simulation system 20 ( FIG. 1 ). Only one of rear projection systems 28 is described in detail herein. However, the following description applies equally to each of rear projection systems 28 depicted in FIG. 1 .
- First rear projection system 28 ′ includes a frame structure 70 for placement behind a first screen 22 ′.
- A first mirror 52 ′ is coupled to a first end 72 of frame structure 70 with a first reflective surface 62 ′ facing a rear face 74 of first screen 22 ′.
- Frame structure 70 retains first mirror 52 ′ in a fixed orientation that is substantially parallel to first screen 22 ′.
- A first projector 38 ′ is situated at a second end 76 of frame structure 70 at a distance, d, from first reflective surface 62 ′ of first mirror 52 ′.
- First projector 38 ′ is preferably equipped with an adjustment mechanism which can be employed to adjust first projector 38 ′ so that a center of a first view 78 of the projected video 58 ( FIG. 3 ) is approximately centered on first screen 22 ′.
- First projector 38 ′ projects first view 78 of video 58 toward first mirror 52 ′, and first view 78 reflects from first mirror 52 ′ onto first screen 22 ′.
- A first detector 50 ′ is also positioned on frame structure 70 .
- First detector 50 ′ may also be equipped with an adjustment mechanism which may be employed to adjust first detector 50 ′ so that first detector 50 ′ has an appropriate view of first screen 22 ′ via first mirror 52 ′.
- The use of first rear projection system 28 ′ in simulation system 20 advantageously saves space by shortening the distance between first projector 38 ′ and first screen 22 ′.
- Desirably, the distance, d, between first mirror 52 ′ and first projector 38 ′ is approximately one half the throw distance of first projector 38 ′ to maximize space savings.
- In addition, the use of a rear projection technique effectively frees participation location 24 ( FIG. 1 ) of the clutter and distraction of components that would be found in a front projection configuration, and avoids the problem of casting shadows that can occur in a front projection configuration.
- Moreover, frame structure 70 simplifies system configuration and calibration, and makes adjusting of first projector 38 ′ simpler.
- Frame structure 70 further includes casters 82 mounted to a bottom thereof. Through the use of casters 82 , simulation system 20 ( FIG. 1 ) can be readily repositioned into different arrangements of screens 22 and rear projection systems 28 .
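The space saving from the folded optical path can be illustrated with a rough calculation (the numbers and function name are illustrative assumptions): because light travels from the projector to the mirror and back toward the screen, the physical depth required behind the screen is roughly half the projector's unfolded throw distance.

```python
def cabinet_depth(throw_distance: float, folded: bool = True) -> float:
    """Approximate depth required behind the screen, in the same units
    as `throw_distance`.  Folding the optical path once with a mirror
    (projector -> mirror -> screen) roughly halves the depth relative
    to an unfolded rear-projection layout."""
    return throw_distance / 2.0 if folded else throw_distance

print(cabinet_depth(12.0))         # folded path: 6.0
print(cabinet_depth(12.0, False))  # unfolded path: 12.0
```

This is why placing the mirror at d of approximately half the throw distance maximizes the saving: the mirror-to-screen leg and the projector-to-mirror leg then share the depth equally.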
- FIG. 4 shows a block diagram of a portion of the simulation system 20 arranged in a firing range configuration 84 .
- As discussed above, the configuration of simulation system 20 shown in FIG. 1 advantageously surrounds and immerses a participant in a realistic, multi-directional environment for situational response training.
- However, a comprehensive training program may also involve practicing marksmanship skills with lethal and/or non-lethal weapons and weapons qualification testing.
- In firing range configuration 84 , screens 22 are arranged such that corresponding viewing faces 86 of screens 22 are aligned to be substantially coplanar. Additionally, rear projection systems 28 are readily repositioned behind the aligned screens 22 via casters 82 ( FIG. 3 ). Trainees 26 may then face screens 22 , and project laser beam 33 of their respective weapons 31 , toward targets presented on screens 22 via rear projection systems 28 .
- Although firing range configuration 84 shows one of trainees 26 at each of screens 22 , it is equally likely that each of screens 22 can accommodate more than one trainee 26 for marksmanship training and/or weapons qualification testing. Further discussion regarding the use of full surround simulation system 20 for marksmanship training and/or qualification testing is presented below in connection with FIG. 12 .
- FIG. 5 shows a table of a highly simplified exemplary scenario pointer database 54 .
- Scenario pointer database 54 provides an index to a number of scenarios of prerecorded full motion video of various situations that are to be presented to trainees 26 .
- Simulation controller 32 accesses scenario pointer database 54 to index to the appropriate video identifiers that correspond to the scenario to be presented.
- Simulation controller 32 then commands projection controllers 36 to concurrently present corresponding video 58 ( FIG. 2 ) and any associated audio 60 ( FIG. 2 ) stored within their respective scenario libraries 56 ( FIG. 2 ).
- Exemplary scenario pointer database 54 includes four exemplary scenarios 86 , labeled “ 1 ”, “ 2 ”, “ 3 ”, and “ 4 ”, and referenced in a scenario identifier field 87 .
- Each of scenarios 86 is pre-recorded video 58 corresponding to a real life situation in which trainees 26 might find themselves, as discussed above.
- Each of scenarios 86 is split into adjacent portions, i.e., adjacent views 88 , referenced in a video index identifier field 90 , and assigned to particular projection controllers 36 , referenced in a projection controller identifier field 92 .
- For example, a first projection controller 36 ′ is assigned a first view 88 ′, identified in video index identifier field 90 by the label 1 - 1 .
- Similarly, a second projection controller 36 ′′ is assigned a second view 88 ′′, identified in video index identifier field 90 by the label 1 - 2 .
- Pre-recorded video 58 may be readily filmed utilizing multiple high-definition format cameras with lenses outwardly directed from the same location, or a compound motion picture camera, in order to achieve a 360-degree field-of-view.
- Post-production processing entails stitching, or seaming, the individual views to form a panoramic view.
- The panoramic view is subsequently split into adjacent views 88 that are presented, via rear projection systems 28 ( FIG. 2 ), onto adjacent screens 22 .
- Adjacent views 88 can be time-locked, for example, through the assignment of appropriate time codes so that adjacent views 88 of scenario 86 are played back at the same instant.
- The video is desirably split so that the primary subject or subjects of interest in the video are not split over adjacent screens 22 .
- The splitting of video into adjacent views 88 for presentation on adjacent screens 22 need not be a one to one correlation. For example, during post-production processing a stitched panoramic video having a 270-degree field-of-view may be projected onto five screens to yield a 300-degree field-of-view.
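The splitting step above might be modeled as below, with a frame represented as a 2-D list of pixels (a simplifying assumption; real footage would be sliced in the same way frame by frame, with shared time codes keeping the resulting views locked together):

```python
def split_panorama(frame, num_views):
    """Split each row of a stitched panoramic `frame` (2-D list of
    pixels) into `num_views` equal-width horizontal slices, one per
    adjacent screen.  Assumes the width divides evenly."""
    width = len(frame[0])
    slice_width = width // num_views
    return [
        [row[i * slice_width:(i + 1) * slice_width] for row in frame]
        for i in range(num_views)
    ]
```

A scenario developer would additionally choose the split boundaries so that, as noted above, the primary subject of interest does not straddle two screens.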
- Audio 60 may simply be recorded at the time of video production. During post-production processing, particular portions of the audio are assigned to particular slices of the video so that audio relevant to the view is provided. For example, audio 60 ( FIG. 2 ) of a door opening should come from speaker 44 ( FIG. 2 ) associated with one of screens 22 ( FIG. 2 ) at which the door is shown, while audio 60 of a person's voice should come from speaker 44 associated with another of screens 22 at which the person is presented. Thus, audio 60 is cost effectively produced using an emulation of three-dimensional audio to match the video. Such an approach is much less expensive, often more realistic, and scales better with system configurations than more complex surround sound techniques.
- Alternatively, the pre-recorded video may be filmed utilizing a digital camera system having a lens system that can record 360-degree video.
- Post-production processing then merely entails splitting the 360-degree video into adjacent views to be presented on adjacent screens.
- Likewise, audio may be produced utilizing one of several surround sound techniques known to those skilled in the art.
- Simulation system 20 may employ a branching video technology.
- Branching video technology enables control of multiple playback paths through a video database.
- That is, scenarios 86 may optionally branch to a different outcome, i.e., a subscenario 94 , based on the action or inaction of trainee 26 ( FIG. 1 ).
- FIG. 6 shows a flowchart of an exemplary video playback process 93 for a second scenario 86 ′′, labeled “ 2 ”, that includes video branching to subscenarios 94 .
- To begin, an operator initiates second scenario 86 ′′, labeled “ 2 ”.
- At some point during playback, a branching decision 96 may be required. If no branch is to occur at branching decision 96 , second scenario 86 ′′ continues. However, if the video is to branch at branching decision 96 , a first subscenario 94 ′, labeled 2 A, may be presented to trainee 26 ( FIG. 1 ).
- Second scenario 86 ′′ shows that following initiation of first subscenario 94 ′, another branching decision 98 may be required.
- When no branching is to occur at branching decision 98 , first subscenario 94 ′ continues.
- However, when branching is to occur at branching decision 98 , a second subscenario 94 ′′, labeled 2 C, is presented.
- Thereafter, video playback process 93 for second scenario 86 ′′ is finished.
- An exemplary scenario 86 in which video branching might occur is as follows: detectors 50 ( FIG. 1 ) are surveying their respective screens 22 ( FIG. 1 ) for an infrared (IR) spot, indicating that at least one of weapons 31 ( FIG. 1 ) has been “fired” to project laser beam 33 ( FIG. 1 ) onto one of screens 22 .
- Information regarding the detected IR spot is communicated to tracking processor 34 ( FIG. 2 ).
- Simulation controller 32 time-links the impact location of the “shot” to video 58 ( FIG. 1 ) and controls branching of the video accordingly. For example, if a person within second scenario 86 ′′ is “shot”, the scenario may branch to a subscenario 94 showing the person falling.
- scenario 86 can be tailored to the type and complexity of the desired training.
- scenario 86 labeled “ 1 ” may optionally take a single branch.
- second scenario 86 ′′ may optionally branch to first subscenario 94 ′, and then optionally branch from first subscenario 94 ′ to second subscenario 94 ′′.
- Another scenario 86 labeled “ 3 ” need not branch at all, and yet another scenario 86 , labeled “ 4 ”, may optionally branch to one of two subscenarios 94 .
- scenario creation software permits a scenario developer to construct situations that can be displayed on screens 22 from “stock” footage without the demands of performing extensive camera work.
- scenario creation software employs a technique known as compositing. Compositing is the post-production combination of two or more video/film/digital clips into a single image.
- In compositing, two images (or clips) are combined in one of several ways using a mask.
- The most common way is to place one image (the foreground) over another (the background). Where the mask indicates transparency, the background image will show through the foreground.
- Blue/green screening, also known as chroma keying, is a type of compositing where the mask is calculated from the foreground image. Where the image is blue (or green for green screen), the mask is considered to be transparent. This technique is useful when shooting film and video, as a blue or green screen can be placed behind the object being shot and some other image then inserted in that space later.
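Chroma keying can be sketched in a few lines: the mask is computed from each foreground pixel's distance to the key color, and masked pixels are replaced by the background. This minimal NumPy illustration is an assumption for exposition; the function name, key color, and tolerance are not from the patent.

```python
import numpy as np

def chroma_key(foreground: np.ndarray, background: np.ndarray,
               key=(0, 255, 0), tolerance=60) -> np.ndarray:
    """Composite a green-screen foreground over a background.
    Pixels within `tolerance` of the key color become transparent."""
    fg = foreground.astype(np.int16)
    distance = np.abs(fg - np.array(key, dtype=np.int16)).sum(axis=2)
    mask = distance < tolerance          # True where the green screen shows
    out = foreground.copy()
    out[mask] = background[mask]         # background shows through the mask
    return out
```

In practice, commercial compositing tools also soften mask edges and suppress green spill, which this sketch omits.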
- the scenario creation software provides the scenario developer with a library of background still and/or motion images. These background images are desirably panoramic images, so that one large picture is continued from one view on one of screens 22 ( FIG. 1 ) to the adjacent one of screens 22 , and so forth. Green (or blue) screen video clips may be captured by the user, or may be provided within scenario creation software. These video clips may include threatening or non-threatening individuals opening doors, coming around corners, appearing from behind objects, and so forth.
- the scenario creation software then enables the scenario developer to display the background image with various foreground clips to form the scenario.
- the scenario developer may optionally determine the “logic” behind when and where the clips may appear. For example, the scenario developer could determine that foreground image “A” is to appear at a predetermined and/or random time.
- the scenario developer may add “hit zones” to the clips. These “hit zones” are areas where the clip would branch due to interaction by the user. The scenario developer can instruct the scenario to branch to clip “C” if a “hit zone” was activated on clip “B”.
- Within the scenario creation software, the scenario developer is enabled to add, modify, and remove video clips, still images, and/or audio clips to or from the scenario that they are creating. The scenario developer may then be able to preview and test their scenario during the scenario creation process. Once the scenario developer is satisfied with the content, the scenario creation software can create the files needed by simulation system 20 ( FIG. 1 ), and automatically set up the scenario to be presented on screens 22 .
- FIG. 7 shows an illustrative representation of adjacent views 88 of prerecorded video 58 of one of scenarios 86 .
- Adjacent views 88 are presented on adjacent screens 22 .
- first screen 22 ′ shows first view 88 ′
- second screen 22 ′′ shows second view 88 ′′, and so forth.
- screens 22 are arranged in a hexagonal configuration. Accordingly, adjacent views 88 surround and immerse trainee 26 ( FIG. 1 ) into the situation presented in scenario 86 .
- FIG. 7 further illustrates an exemplary impact location 100 of laser beam 33 ( FIG. 1 ) projected onto first screen 22 ′.
- trainee 26 has determined that a subject 102 was an imminent threat to trainee 26 and/or to a second subject 104 .
- subject 102 is a target 105 within scenario 86 displayed on the multiple screens 22 .
- Trainee 26 responded to perceived aggressive behavior exhibited by subject 102 with the force that he or she deemed to be reasonably necessary during the course of the situation unfolding within scenario 86 .
- Tracking processor 34 receives information from detector 50 ( FIG. 1 ) associated with impact location 100 indicating that weapon 31 was actuated by trainee 26 .
- the received information may entail receipt of the raw digital video, which tracking processor 34 then converts to processed information, for example, X and Y coordinates of impact location 100 .
- the X and Y coordinates can then be presented to trainee 26 in the form of report 64 ( FIG. 2 ), and/or can be communicated to simulation controller 32 ( FIG. 2 ) for subsequent video branching, as discussed above.
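Deriving X and Y coordinates from the raw detector video can be sketched as thresholding the frame and taking the intensity-weighted centroid of the bright laser spot. The following NumPy illustration is a sketch under those assumptions; the threshold value and names are hypothetical.

```python
import numpy as np

def impact_coordinates(ir_frame: np.ndarray, threshold: int = 200):
    """Locate the laser impact as the intensity-weighted centroid of
    pixels above `threshold` in a grayscale detector frame.
    Returns (x, y) coordinates, or None when no spot is detected."""
    ys, xs = np.nonzero(ir_frame > threshold)
    if xs.size == 0:
        return None                      # no laser beam seen this frame
    weights = ir_frame[ys, xs].astype(float)
    return (float(np.average(xs, weights=weights)),
            float(np.average(ys, weights=weights)))
```

The centroid gives sub-pixel resolution when the spot spans several pixels, which helps the later mapping from detector to projector coordinates.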
- FIG. 8 shows a diagram of a half surround simulation system 106 in accordance with another preferred embodiment of the present invention.
- the components presented in full surround simulation system 20 are modular and can be readily incorporated into other simulation systems dependent upon training requirements. In this situation, a 180-degree field of view is accomplished.
- half surround simulation system 106 includes three screens 22 , each of which has associated therewith one of projectors 38 , one of detectors 50 , and one of speakers 44 .
- half surround simulation system 106 utilizes a conventional front projection technique. In this case, projectors 38 and detectors 50 are desirably mounted on a ceiling and out of the way of trainee 26 .
- the 180-degree field of view enables trainee 26 to utilize peripheral visual and auditory cues.
- space and cost savings are realized relative to full surround simulation system 20 .
- Space savings is realized because the overall footprint of half surround simulation system 106 is approximately half that of full surround simulation system 20 , and cost savings is realized by utilizing a smaller number of components.
- FIG. 9 shows a diagram of a three hundred degree surround simulation system 108 in accordance with yet another preferred embodiment of the present invention.
- 300-degree surround simulation system 108 includes a total of five screens 22 and five rear projection systems 28 .
- Three hundred degree surround simulation system 108 enables nearly full surround and effective immersion for trainees 26 .
- an opening 110 is formed between screens 22 for easy ingress, egress, and trainee observation purposes.
- System 108 is further shown as including a remote debrief station 111 .
- Remote debrief station 111 may be located in a different room, as represented by dashed lines 113 .
- Station 111 is in communication with workstation 30 , and more particularly with tracking processor 34 ( FIG. 2 ) and/or simulation controller 32 ( FIG. 2 ), via a wireline or wireless link 115 .
- software resident at workstation 30 compiles and transfers pertinent files for off-line review of trainee 26 response following a simulation experience. Off-line review could entail review and/or playback of the scenario, video/audio files of trainee 26 , results, and so forth.
- Although FIGS. 1, 4 , 8 , and 9 show the use of either front projection systems or rear projection systems, it should be understood that a single simulation system may include a combination of front and rear projection systems in order to better accommodate size limitations of the room in which the simulation system is to be housed.
- FIG. 10 shows a flowchart of a training process 112 of the present invention.
- Training process 112 is performed utilizing, for example, full surround simulation system 20 .
- Training process 112 will be described herein in connection with a single one of trainees 26 utilizing full surround simulation system 20 for simplicity of illustration. However, as discussed above, more than one trainee 26 may participate in training process 112 at a given session.
- training process 112 applies equivalently when utilizing half surround simulation system 106 ( FIG. 8 ) or three hundred degree surround simulation system 108 ( FIG. 9 ).
- Training process 112 presents one of scenarios 86 ( FIG. 5 ), in the form of full motion, realistic video. Trainee 26 , with weapon 31 , is immersed into scenario 86 and is enabled to react to a threatening situation. The object of such training is to learn to react safely, and with appropriate use of force and judgment.
- Training process 112 begins with a task 114 .
- an operator calibrates simulation system 20 .
- calibration task 114 is a preliminary activity that can occur prior to positioning trainee 26 within participation location 24 of simulation system 20 .
- Calibration task 114 is employed to calibrate each of detectors 50 with their associated projectors 38 .
- calibration task 114 may be employed to calibrate, i.e., zero, weapon 31 relative to projectors 38 .
- FIG. 11 shows a diagram of an exemplary calibration pattern 116 of squares that may be utilized to calibrate each of detectors 50 with their associated projectors 38 .
- Calibration ensures that when scenario 86 ( FIG. 5 ) or subscenario 94 ( FIG. 5 ) is subsequently presented, the detection accuracy of detectors 50 corresponds with a known standard, i.e., calibration pattern 116 presented via one of projectors 38 .
- tracking software resident in tracking processor 34 ( FIG. 2 ) must determine the appropriate mathematical adjustments to ensure that detector 50 is coordinated with projector 38 .
- IR filter 66 ( FIG. 2 ) is removed from lens 68 ( FIG. 2 ) of detector 50 and visible light is allowed in so that detector 50 can detect calibration pattern 116 .
- IR filter 66 removal may be accomplished by manual removal by the operator, or by automatic means.
- projector 38 projects calibration pattern 116 for detection by detector 50
- tracking processor 34 ( FIG. 2 ) correlates detected coordinates with projected coordinates.
- Weapon zeroing may entail projecting laser beam 33 ( FIG. 1 ) from weapon 31 toward a predetermined position, i.e., a “zero” position, on calibration pattern 116 . Interpolation can subsequently be employed to correlate projected coordinates for impact location 100 ( FIG. 7 ) with detected coordinates for impact location 100 .
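The correlation of detected coordinates with projected coordinates can be sketched as a least-squares affine fit over matched calibration-pattern points, with new impact locations then interpolated through the fitted map. This NumPy illustration is hypothetical; the tracking software may well use a richer (e.g., per-region or nonlinear) correction.

```python
import numpy as np

def fit_calibration(detected: np.ndarray, projected: np.ndarray) -> np.ndarray:
    """Least-squares affine map from detector coordinates to projector
    coordinates, fitted from matched calibration-pattern points.
    `detected` and `projected` are N x 2 arrays of (x, y) pairs."""
    ones = np.ones((detected.shape[0], 1))
    A = np.hstack([detected, ones])          # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, projected, rcond=None)
    return coeffs                            # 3 x 2 coefficient matrix

def map_point(coeffs: np.ndarray, x: float, y: float):
    """Interpolate a detected point into projector coordinates."""
    return tuple(np.array([x, y, 1.0]) @ coeffs)
```

Once fitted, the same map converts any detected impact location 100 into the coordinate frame of the projected video.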
- Calibration task 114 is performed for each projector 38 and detector 50 pair, either sequentially or concurrently.
- trainee 26 involvement can begin at a task 118 .
- trainee 26 moves into participation location 24 , and the operator at workstation 30 displays a selected one of scenarios 86 ( FIG. 5 ).
- Simulation controller 32 ( FIG. 2 ) commands projection controllers 36 ( FIG. 2 ) to access their respective scenario libraries 56 ( FIG. 2 ) to obtain video 58 of the selected scenario 86 .
- Adjacent views 88 of scenario 86 are subsequently displayed on adjacent screens 22 , as described in connection with FIG. 7 .
- a query task 120 determines whether laser beam 33 is detected on one of screens 22 . That is, at query task 120 , each of detectors 50 monitors for laser beam 33 projected on one of screens 22 in response to actuation of weapon 31 . When one of detectors 50 detects laser beam 33 , this information is communicated to tracking processor 34 ( FIG. 2 ), in the form of, for example, a digital video signal.
- process flow proceeds to a task 122 .
- tracking processor 34 determines coordinates describing impact location 100 ( FIG. 7 ).
- Query task 124 determines whether to branch to one of subscenarios 94 ( FIG. 5 ).
- Simulation controller 32 ( FIG. 2 ) determines from received information associated with impact location 100 , i.e., X-Y coordinates, or from the absence of X-Y coordinates, whether to command projection controllers 36 ( FIG. 2 ) to branch to one of subscenarios 94 .
- this branching query task 124 may be due to action (i.e., detection of laser beam 33 ) or inaction (i.e., no laser beam 33 detected) of trainee 26 .
- Process 112 proceeds to a task 126 when a determination is made at query task 124 to branch to one of subscenarios 94 .
- simulation controller 32 commands projection controllers 36 ( FIG. 2 ) to access their respective scenario libraries 56 ( FIG. 2 ) to obtain a portion of video 58 ( FIG. 2 ) associated with the desired subscenario 94 ( FIG. 5 ).
- Adjacent views 88 of subscenario 94 are subsequently displayed on adjacent screens 22 .
- process 112 continues with a query task 128 .
- Query task 128 determines whether playback of scenario 86 is complete. When playback of scenario 86 is not complete, program control loops back to query task 120 to continue monitoring for laser beam 33 . Thus, training process 112 allows for the capability of detecting multiple shots fired from weapon 31 . Alternatively, when playback of scenario 86 is complete, process control proceeds to a query task 130 (discussed below).
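The monitor-and-branch loop of tasks 120 through 128 can be sketched abstractly. In this hypothetical Python illustration a stub detector stands in for detectors 50, and playback is simplified so that a clip ends when no further branch occurs; the names are illustrative, not the patented control flow.

```python
class StubDetector:
    """Hypothetical stand-in for detectors 50: reports queued shots."""
    def __init__(self, shots):
        self.shots = list(shots)

    def poll(self):
        # Returns an (x, y) impact location, or None when no laser is seen
        return self.shots.pop(0) if self.shots else None

def run_scenario(start_clip: str, branches: dict, detector) -> list:
    """Present clips in sequence, branching to a subscenario whenever a
    detected shot has a branch defined for the current clip."""
    log, clip = [], start_clip
    while clip is not None:
        log.append(clip)                 # present this clip to the trainee
        shot = detector.poll()           # query task 120 / 132
        clip = branches.get(clip) if shot is not None else None
    return log
```

A real controller would also time-link the shot within the clip and continue playback when no branch is warranted.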
- a query task 132 is performed in conjunction with task 126 .
- Query task 132 determines whether laser beam 33 is detected on one of screens 22 in response to the presentation of subscenario 94 .
- this information is communicated to tracking processor 34 ( FIG. 2 ), in the form of, for example, a digital video signal.
- process flow proceeds to a task 134 .
- tracking processor 34 determines coordinates describing impact location 100 ( FIG. 7 ).
- Query task 136 determines whether to branch to another one of subscenarios 94 ( FIG. 5 ).
- Simulation controller 32 ( FIG. 2 ) determines from received information associated with impact location 100 , i.e., X-Y coordinates, or from the absence of X-Y coordinates, whether to command projection controllers 36 ( FIG. 2 ) to branch to another one of subscenarios 94 .
- Process 112 loops back to task 126 when a determination is made at query task 136 to branch to another one of subscenarios 94 .
- the next one of subscenarios 94 is subsequently displayed, and detectors 50 continue to monitor for laser beam 33 .
- process 112 continues with a query task 138 .
- Query task 138 determines whether playback of subscenario 94 is complete. When playback of subscenario 94 is incomplete, program control loops back to query task 132 to continue monitoring for laser beam 33 . Alternatively, when playback of subscenario 94 is complete, process control proceeds to query task 130 .
- query task 130 determines whether report 64 ( FIG. 2 ) is to be generated. A determination can be made when one of tracking processor 34 ( FIG. 2 ) or simulation controller 32 detects an affirmative or negative response to a request for report 64 presented to the operator. When no report 64 is desired, process 112 exits. However, when report 64 is desired, process 112 proceeds to a task 140 .
- report 64 is provided.
- tracking processor 34 may process the received information regarding impact location 100 , associate the received information with the displayed scenario 86 and any displayed subscenarios 94 , and combine the information into a format, i.e., report 64 , that can be used for review and de-briefing.
- Report 64 may be formatted for display and provided via a monitor at, for example, remote debrief station 111 ( FIG. 9 ) in communication with tracking processor 34 .
- report 64 may be printed out.
- Report 64 may include various information pertaining to trainee 26 performance including, for example, the locations of first and second subjects 102 and 104 , respectively ( FIG. 7 ), and a state of wellbeing of trainee 26 .
- the state of wellbeing might indicate whether the trainee's response to scenario 86 could have caused trainee 26 to be injured or killed in a real life situation simulated by scenario 86 .
- training process 112 exits.
- training process 112 can be optionally repeated utilizing the same one of scenarios 86 or another one of scenarios 86 .
- Training process 112 describes methodology associated with situational response training for honing a trainee's decision-making skills in situations that are stressful and potentially dangerous.
- a comprehensive training program may also encompass marksmanship training and/or weapons qualification testing.
- Full surround simulation system 20 may be configured for marksmanship training and weapons qualification testing, as discussed in connection with FIG. 4 . That is, screens 22 may be arranged coplanar with one another to form firing range configuration 84 ( FIG. 4 ).
- FIG. 12 shows a diagram of detector 50 of simulation system 20 ( FIG. 1 ) zoomed in to a small viewing area 142 for weapons qualification testing.
- at least one of detectors 50 is outfitted with a zoom lens 144 .
- Zoom lens 144 is adjustable to decrease an area of one of screens 22 , for example, first screen 22 ′, that is viewed by detector 50 .
- By either automatically or manually zooming and focusing in to small viewing area 142 , higher-resolution tracking of laser beam 33 ( FIG. 1 ) can be achieved.
- Targets 146 presented on first screen 22 ′ via one of projectors 38 are proportionately correct and sized to fit within small viewing area 142 . Thus, the size of targets 146 may be reduced by fifty percent relative to their appearance when zoomed out. As shown, there may be multiple targets 146 presented on first screen 22 ′. Additional information pertinent to qualification testing may also be provided on first screen 22 ′. This additional information may include, for example, distance to the target (for example, 75 meters), wind speed (for example, 5 mph), and so forth.
- an operator may optionally enter, via workstation 30 , information for use by a software ballistic calculator to compute, for example, the effects of wind, barometric pressure, altitude, bullet characteristics, and so forth, on the location of a “shot” fired toward targets 146 .
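The patent does not specify the ballistic calculator's method, but one common approximation for wind effects is the lag-time rule: drift equals crosswind speed times the difference between the actual time of flight and the vacuum time of flight. The following Python sketch is purely illustrative of that rule.

```python
def wind_drift(range_m: float, muzzle_velocity: float,
               time_of_flight: float, crosswind_mps: float) -> float:
    """Lag-time approximation of horizontal wind drift, in metres:
    drift = crosswind speed * (actual flight time - vacuum flight time).
    All inputs are illustrative; a full calculator would also account
    for barometric pressure, altitude, and bullet characteristics."""
    vacuum_time = range_m / muzzle_velocity
    return crosswind_mps * (time_of_flight - vacuum_time)
```

For a 75-meter shot the drift is small, but at longer simulated ranges the correction becomes significant to scoring.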
- Report 64 may be generated in response to qualification testing that includes data pertinent to shooting accuracy, such as average impact location for laser beam 33 , offset of laser beam 33 from center, a score, and so forth.
- FIG. 13 shows a block diagram of a simulation system 150 in accordance with an alternative embodiment of the present invention.
- Simulation system 150 includes many of the components of the previously described simulation systems. That is, simulation system 150 includes multiple screens 22 surrounding participation location 24 , a rear projection system 28 associated with each screen 22 , a workstation 30 , and so forth. Accordingly, a description of these components need not be repeated.
- simulation system 150 utilizes a non-laser-based weapon 152 .
- weapon 152 may be implemented by any firearm (i.e., hand-gun, rifle, shotgun, etc.) and/or a non-lethal weapon (i.e., pepper spray, tear gas, stun gun, etc.) that may be utilized by trainees 26 in the course of duty.
- weapon 152 is outfitted with at least two tracking markers 154 .
- Simulation system 150 further includes a detection subsystem formed from multiple tracking cameras 156 encircling, and desirably positioned above, participation location 24 .
- tracking markers 154 are reflective markers coupled to weapon 152 that are detectable by tracking cameras 156 .
- tracking cameras 156 can continuously track the movement of weapon 152 .
- Continuous tracking of weapon 152 provides ready “aim trace” where the position of weapon 152 (or even trainee 26 ) can be monitored and then replayed during a debrief.
- Reflective tracking markers 154 require no power, and tracking cameras 156 can track movement of weapon 152 in three dimensions, as opposed to two dimensions for projected laser beam tracking.
- reflective tracking is not affected by metal objects in close proximity, and reflective tracking operates at a very high update rate.
- Accurate reflective tracking calls for a minimum of two reflective markers 154 per weapon 152 and at least three tracking cameras 156 , although four to six tracking cameras 156 are preferred.
- Each of tracking cameras 156 emits light (often infrared light) directly next to the lens of tracking camera 156 .
- Reflective tracking markers 154 then reflect the light back to tracking cameras 156 .
- a tracking processor (not shown) at workstation 30 then performs various calculations and combines each view from tracking cameras 156 to create a highly accurate three-dimensional position for weapon 152 .
- a calibration process is required for both tracking cameras 156 and weapon 152 , and if any of tracking cameras 156 are moved or bumped, simulation system 150 should be recalibrated.
- Weapon 152 may be a pistol, for example, loaded with blank rounds. Actuation of weapon 152 is thus detectable by tracking cameras 156 as a sudden movement of tracking markers 154 caused by the recoil of weapon 152 in a direction opposite from the direction of the “shot” fired, as signified by a bi-directional arrow 158 . By using such a technique, multiple weapons 152 can be tracked in participation location 24 , and the position of weapons 152 , as well as the projection of where a “shot” fired would go, can be calculated with high accuracy. Additional markers 154 may optionally be coupled to trainee 26 , for example, on the head region to track trainee 26 movement and to correlate the movement of trainee 26 with the presented scenario.
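Recoil-based actuation detection can be sketched as watching for a sudden spike in marker speed between successive tracked frames. This NumPy illustration is hypothetical; the speed threshold and frame rate are assumed values, not parameters from the patent.

```python
import numpy as np

def detect_actuation(marker_positions: np.ndarray, frame_rate: float,
                     recoil_threshold: float = 2.0):
    """Flag a weapon actuation as a sudden jump in tracked-marker speed
    (m/s) between successive frames of 3-D positions.
    Returns the frame index of the spike, or None if no recoil is seen."""
    deltas = np.diff(marker_positions, axis=0)     # per-frame displacement
    speeds = np.linalg.norm(deltas, axis=1) * frame_rate
    spikes = np.nonzero(speeds > recoil_threshold)[0]
    return int(spikes[0]) + 1 if spikes.size else None
```

The recoil direction (opposite the shot, per arrow 158) could further be used to project where the “shot” fired would go.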
- weapon 152 could further be configured to transmit a signal, via a wired or wireless link, indicating actuation of weapon 152 .
- a weapon may be adapted to include both a laser insert and tracking markers, both of which may be employed to detect actuation of the weapon.
- FIG. 14 shows a simplified block diagram of a computing system 200 for executing a scenario provision process 202 to generate a scenario for playback in a simulation system, such as those described above.
- the present invention contemplates the provision of custom authoring capability of scenarios to the training organization.
- the present invention entails scenario creation code executable on computing system 200 and methodology for providing a scenario for use in the simulation systems described above.
- Traditional training authoring software for instructional use-of-force training and military simulation can provide three-dimensional components. That is, conventional authoring software enables the manipulation of three-dimensional geometry that represents, for example, human beings.
- computer-generated human characters lack realism in both look and movement, especially in real-time applications. If a trainee believes they are shooting a non-person, rather than an actual person, they may be more likely to use deadly force, even when deadly force is unwarranted. Consequently, a trainee having trained with video game-like “cartoon” characters may overreact when faced with minimal or non-threats. Similarly, the trainee may be less effective against real threats.
- the scenario creation code permits a scenario developer to construct situations that can be displayed on screens 22 ( FIG. 1 ) from stock footage without the demands of performing extensive camera work.
- the present invention may be utilized to create scenarios for the simulation systems described above, as well as other use-of-force training and military simulation systems.
- Use-of-force training can include firearms as well as less lethal options, such as chemical spray, TASER, baton, and so forth.
- the present invention may be utilized to create scenarios for playback in other playback systems that are not related to use-of-force or military training, such as teaching or behavioral therapy environments, sales training, and the like.
- the present invention may be adapted for scenario creation for use within video games.
- Computing system 200 includes a processor 204 on which the methods according to the invention can be practiced.
- Processor 204 is in communication with a data input 206 , a display 208 , and a memory 210 for storing at least one scenario 211 (discussed below) generated in response to the execution of scenario provision process 202 .
- These elements are interconnected by a bus structure 212 .
- Data input 206 can encompass a keyboard, mouse, pointing device, and the like for user-provided input to processor 204 .
- Display 208 provides output from processor 204 in response to execution of scenario provision process 202 .
- Computing system 200 can also include network connections, modems, or other devices used for communications with other computer systems or devices.
- Computing system 200 further includes a computer-readable storage medium 214 .
- Computer-readable storage medium 214 may be a magnetic disc, optical disc, or any other volatile or non-volatile mass storage system readable by processor 204 .
- Scenario provision process 202 is executable code recorded on computer-readable storage medium 214 for instructing processor 204 to create scenario 211 for interactive use in a scenario playback system for visualization and interactive use by trainees 26 ( FIG. 1 ).
- a database 203 may be provided in combination with scenario provision process 202 .
- Database 203 includes actor video clips, objects, sounds, background images, and the like that can be utilized to create scenario 211 .
- FIG. 15 shows a flow chart of scenario provision process 202 .
- Process 202 is executed to create scenario 211 for playback in a simulation system, such as those described above.
- process 202 is executed to create scenario 211 for playback in three hundred degree surround simulation system 108 ( FIG. 9 ).
- process 202 may alternatively be executed to create scenario 211 for full surround simulation system 20 ( FIG. 1 ), half surround simulation system 106 ( FIG. 8 ), and other single screen or multi-screen simulation systems.
- process 202 allows a scenario author to customize scenario 211 by choosing and combining elements, such as actor video clips, objects, sounds, and background images.
- Process 202 further allows the scenario author to define the logic (i.e., a relationship between the elements) within scenario 211 .
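The elements and logic of scenario 211 can be sketched as a small data model combining a background, actors with assigned behaviors, and an event-to-branch mapping. The field names and the event-key convention in this Python illustration are assumptions, not the patent's file format.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """An actor video clip with an assigned behavior from the library."""
    name: str
    behavior: str          # e.g. "Hostile A": stand, shoot, fall if shot

@dataclass
class Scenario:
    """Hypothetical container for the elements of one scenario 211."""
    background: str                                 # panoramic background image
    actors: list = field(default_factory=list)
    logic: dict = field(default_factory=dict)       # event -> branch target

# Assembling a scenario: choose a background, add an actor, define logic
scenario = Scenario(background="urban_street.pano")
scenario.actors.append(Actor("Offender 1", behavior="Hostile A"))
scenario.logic["Offender 1:hit"] = "fall_clip"      # branch when actor is hit
```

Decoupling behaviors from actors, as the patent describes, lets the same actor clip be reused with different assigned behaviors across scenarios.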
- Scenario provision process 202 begins with a task 216 .
- process 202 is initiated. Initiation of process 202 occurs by conventional program start-up techniques and yields the presentation of a main window on display 208 ( FIG. 14 ).
- FIG. 16 shows a screen shot image 218 of a main window 220 presented in response to execution of scenario provision process 202 .
- Main window 220 is the primary opening view of process 202 , and includes a number of sub-windows such as a scenario layout window 222 , a library window 224 , a scenario logic window 226 , and a properties window 228 .
- Main window 220 further includes a number of user fields, referred to as buttons, for determining the behavior of process 202 and controlling its execution. The functions of the sub-windows and buttons within main window 220 will be revealed below in connection with the execution of scenario provision process 202 .
- scenario provision process 202 awaits receipt of commands from a scenario author (not shown) in order to generate scenario 211 ( FIG. 14 ).
- a task 230 is performed in response to the receipt of a first input, via data input 206 ( FIG. 14 ) from the scenario author.
- the first input indicates choice of a background image for scenario 211 .
- FIG. 17 shows a screen shot image 232 of library window 224 from main window 220 ( FIG. 16 ) exposing a list 233 of background images 234 for scenario 211 ( FIG. 14 ).
- Interactive buttons within library window 224 can include a “background images” button 236 , an “actors” button 238 , and a “behaviors” button 240 . Additional buttons include a “new folder” button 242 and a “create new” button 244 .
- List 233 is revealed when the scenario author clicks on background images button 236 . As shown, list 233 may be organized in folders representing image categories 246 , such as rural, urban, interior, and the like. However, it should be understood that list 233 may be organized in various ways pertinent to the particular organization executing scenario provision process 202 with the creation of new or different folders and image categories 246 .
- Background images 234 may be chosen from those provided within list 233 stored in database 203 ( FIG. 14 ). Alternatively, new background images 234 may be imported utilizing “create new” button 244 . In a preferred embodiment, background images 234 can be obtained utilizing a camera and creating still images within an actual, or real environment. Background images 234 may be in a panoramic format utilizing conventional panoramic photographic techniques and processing for use within the large field-of-view of three hundred degree surround simulation system 108 ( FIG. 9 ). The creation, editing, and storage of background images 234 will be described in greater detail in connection with a background editor illustrated in FIGS. 27-29 .
- the scenario author may utilize a conventional pointer 248 to point to one of background images 234 .
- a short description 250 in the form of text and/or a thumbnail image, may optionally be presented at the bottom of library window 224 to assist the scenario author in his or her choice of one of background images 234 .
- the scenario author can utilize a conventional drag-and-drop technique by clicking on one of background images 234 and dragging it into scenario layout window 222 ( FIG. 16 ).
- Those skilled in the art will recognize that other conventional techniques, rather than drag-and-drop, may be employed for choosing one of background images 234 and placing it within scenario layout window 222 .
- FIG. 18 shows a screen shot image 252 of main window 220 following selection of one background images 234 provided in list 233 ( FIG. 17 ).
- a first background image 234 ′ is shown in scenario layout window 222 .
- First background image 234 ′ is presented in five adjacent panels 254 within scenario layout window 222 . These five adjacent panels 254 correspond to the five adjacent screens 22 ( FIG. 9 ) of three hundred degree surround simulation system 108 ( FIG. 9 ).
- first background image 234 ′ can be seamlessly presented across panels 254 , hence the five screens 22 of system 108 .
- a sixth panel 256 in scenario layout window 222 may include a portion of one of background images 234 when creating scenario 211 ( FIG. 14 ) for utilization within full surround simulation system 20 ( FIG. 1 ).
- Scenario provision process 202 continues with a video clip selection segment 258 , which includes a task 260 .
- Task 260 is performed in response to the receipt of a second input, via data input 206 ( FIG. 14 ) from the scenario author. The second input indicates selection of an actor that may be utilized within scenario 211 .
- FIG. 19 shows a screen shot image 262 of library window 224 from main window 220 ( FIG. 16 ) exposing a list 264 of actors 266 for scenario 211 .
- List 264 is revealed when the scenario author clicks on actors button 238 .
- List 264 may be organized in folders representing actor categories 268 , such as friendlies, hostiles, targets, and so forth. However, it should be understood that list 264 may be organized in various ways pertinent to the particular organization executing scenario provision process 202 with the creation of new or different folders and actor categories 268 .
- Actors 266 may be chosen from those provided within list 264 stored in database 203 ( FIG. 14 ). Alternatively, new actors 266 may be imported utilizing “create new” button 244 , and importing one or more video clips of an actor or actors performing activities, or animation sequences. In a preferred embodiment, video clips of actors 266 can be obtained by filming an actor against a blue or green screen, and performing post-production processing to create a “mask” or “matte”, of the area that the actor occupies against the blue or green screen. The creation, editing, and storage of video clips of actors 266 will be described in greater detail in connection with FIGS. 30-34 .
- the scenario author may utilize pointer 248 to point to one of actors 266 , for example a first actor 266 ′, labeled “Offender 1 ”.
- a short description 270 in the form of text and/or a thumbnail image, may optionally be presented at the bottom of library window 224 to assist the scenario author in his or her selection of one of actors 266 .
- Task 272 is performed in response to the receipt of a third input, via data input 206 ( FIG. 14 ) from the scenario author.
- the third input indicates assignment of a behavior to the selected one of actors 266 .
- FIG. 20 shows a screen shot image 274 of library window 224 from main window 220 ( FIG. 16 ) exposing a list 276 of behaviors 278 for assignment to an actor 266 ( FIG. 19 ) from list 264 ( FIG. 19 ).
- list 276 may be revealed when the scenario author clicks on behaviors button 240 .
- List 276 may be organized in folders representing behavior categories 280 , such as aggressive, alert, civil, and such. However, it should be understood that list 276 may be organized in various ways pertinent to the particular organization executing scenario provision process 202 with the creation of new or different folders and behavior categories 280 .
- each of behaviors 278 within list 276 is the aggregate of actions and/or movements made by an object irrespective of the situation.
- Behaviors 278 within list 276 are not linked with particular actors 266 ( FIG. 19 ). Rather, they are the aggregate of possible behaviors provided within database 203 ( FIG. 14 ) that may be assigned to particular actors.
- one of behaviors 278 , i.e., a first behavior 278 ′ labeled “Hostile A”, may be a hostile behavior that includes stand, shoot, and fall if shot, as indicated by its description 282 .
- a second behavior 278 ′′ labeled “Civil A” may be a civil, or non-hostile, behavior that includes stand, turn, and flee.
- behaviors 278 are not linked with particular actors, but rather are defined by the provider of scenario provision process 202 as possible actions and/or movements that may be undertaken within scenario 211 .
- List 264 ( FIG. 19 ) of actors 266 ( FIG. 19 ) is illustrated herein to show the presentation of an aggregate of actors 266 ( FIG. 19 ) that may be selected when creating scenario 211 .
- list 276 of behaviors 278 is illustrated herein to show the presentation of an aggregate of behaviors 278 ( FIG. 20 ) that may be assigned to actors 266 when creating scenario 211 .
- certain behaviors 278 can only be assigned to actors 266 if the actors 266 were initially filmed against a green or blue screen performing those behaviors 278 . That is, each of behaviors 278 represents a script that may be performed by any of a number of actors 266 and filmed to create video clips for use within scenario provision process 202 ( FIG. 15 ).
- a particular one of actors 266 may support a subset of behaviors 278 within list 276 , rather than the totality of behaviors 278 in list 276 .
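- One way this filtering might be realized, sketched below under assumed names: the drop-down offers only those behaviors for which filmed clips of the selected actor exist in the database.

```python
# Illustrative sketch (names are assumptions, not from the disclosure):
# each actor supports only the behaviors for which green- or blue-screen
# video clips were filmed, so the behavior menu is filtered per actor.

ALL_BEHAVIORS = ["Hostile A", "Hostile B", "Civil A", "Civil B"]

# (actor, behavior) pairs for which filmed video clips exist.
CLIP_LIBRARY = {
    ("Offender 1", "Hostile A"),
    ("Offender 1", "Civil A"),
    ("Guard 1", "Hostile A"),
}

def supported_behaviors(actor):
    """Behaviors offered in the drop-down menu for a given actor."""
    return [b for b in ALL_BEHAVIORS if (actor, b) in CLIP_LIBRARY]

menu = supported_behaviors("Offender 1")
# menu -> ["Hostile A", "Civil A"]
```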
- FIG. 21 shows a screen shot image 284 of an exemplary drop-down menu 286 of behaviors 278 supported by a selected one of the actors 266 .
- Drop-down menu 286 represents a subset of behaviors 278 in which the selected one of actors 266 was filmed and for which video clips of those behaviors 278 exist in database 203 ( FIG. 14 ).
- drop-down menu 286 may appear to facilitate assignment of one of behaviors 278 .
- By utilizing pointer 248 to point to and select one of behaviors 278 , the scenario author may assign one of behaviors 278 , for example, first behavior 278 ′, to first actor 266 ′.
- the scenario author may select one of behaviors 278 from list 276 .
- a drop-down menu may appear that includes a subset of actors 266 from list 264 ( FIG. 19 ), each of which supports the selected one of behaviors 278 .
- the scenario author may subsequently select one of actors 266 that supports the selected one of behaviors 278 .
- process flow proceeds to a task 288 .
- the selected actor and the assigned behavior, i.e., first actor 266 ′ ( FIG. 19 ) performing first behavior 278 ′, are combined with the selected background image, i.e., first background image 234 ′ ( FIG. 18 ), in scenario layout window 222 ( FIG. 18 ).
- task 288 causes the combination of first background image 234 with video clips corresponding to first actor 266 ′ performing first behavior 278 ′.
- first actor 266 ′ is a mask portion that forms a foreground image over first background image 234 ′, with first background image 234 ′ being visible in the transparent portion (the blue or green screen background) of the video clips of first actor 266 ′.
- the scenario author can utilize a conventional drag-and-drop technique by clicking on first actor 266 ′ and dragging it into scenario layout window 222 .
- the scenario author can determine a location within first background image 234 ′ in which the author wishes first actor 266 ′ to appear.
- drag-and-drop may be employed for choosing one of actors 266 and placing it within scenario layout window 222 .
- scenario author can resize first actor 266 ′ relative to first background image 234 ′ to characterize a distance of first actor 266 ′ from trainee 26 ( FIG. 9 ) utilizing simulation system 108 ( FIG. 9 ).
- the scenario author may alter the pixel dimension of the digital image of first actor 266 ′ by using up/down keys on the keyboard of data input 206 ( FIG. 14 ).
- the scenario author may select first actor 266 ′ within first background image 234 ′, position pointer 248 ( FIG. 17 ) over a conventional selection handle displayed around first actor 266 ′, and resize first actor 266 ′ by clicking on the handle and dragging.
- scenario provision process 202 may enable the entry of a desired distance of first actor 266 ′ from trainee 26 . Process 202 may then automatically calculate a height of first actor 266 ′ within first background image 234 ′ relative to the desired distance.
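- The automatic sizing described above amounts to scaling the actor's apparent height inversely with distance. A minimal sketch, assuming a pinhole-style model and made-up calibration values:

```python
# Illustrative sketch: given a desired distance of the actor from the
# trainee, compute the actor's pixel height within the background
# image. The reference values are assumptions for the example.

REF_DISTANCE_M = 5.0      # distance at which the clip was calibrated
REF_HEIGHT_PX = 400       # actor's pixel height at that distance

def actor_height_px(distance_m):
    """Apparent height scales inversely with distance."""
    return round(REF_HEIGHT_PX * REF_DISTANCE_M / distance_m)

# actor_height_px(5.0)  -> 400 (at the reference distance)
# actor_height_px(10.0) -> 200 (twice as far, half as tall)
```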
- scenario provision process 202 proceeds to a query task 290 .
- the scenario author determines whether scenario 211 is to include another one of actors 266 ( FIG. 19 ).
- process 202 loops back to task 260 so that another one of actors 266 , for example, a second actor 266 ′′ ( FIG. 19 ) is selected, assignment of one of behaviors 278 ( FIG. 20 ) is made at task 272 , for example, second behavior 278 ′′ ( FIG. 20 ), and video clips of second actor 266 ′′ performing second behavior 278 ′′ are combined with first background image 234 ′ ( FIG. 18 ). Consequently, repetition of tasks 260 , 272 , and 288 enables the scenario author to determine a quantity of actors 266 that would be appropriate for scenario 211 .
- FIG. 22 shows a screen shot image 294 of a portion of main window 220 following selection of first and second actors 266 ′ and 266 ′′, respectively, and their associated first and second behaviors 278 ′ and 278 ′′ ( FIG. 20 ), respectively, for scenario 211 . Since each of first and second actors 266 ′ and 266 ′′ is defined as a mask portion, or matte, during post-production processing, each of first and second actors 266 ′ and 266 ′′ overlays first background image 234 ′.
- first and second actors 266 ′ and 266 ′′ appear to be behind portions of first background image 234 ′. For example, first actor 266 ′ appears to be partially hidden by a rock 296 , and second actor 266 ′′ appears to be partially hidden by shrubbery 298 .
- portions of first background image 234 ′ can be specified as foreground layers.
- rock 296 and shrubbery 298 are each defined as a foreground layer within first background image 234 ′.
- regions within a background image are defined as foreground layers, these foreground layers will overlay the mask portion of the video clips corresponding to first and second actors 266 ′ and 266 ′′. This layering feature is described in greater detail in connection with background editing of FIGS. 27-29 .
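- The layering described above can be summarized as a per-pixel drawing order: background first, the actor's mask portion over it, and any marked foreground layer over the actor. A minimal sketch, with placeholder pixel values:

```python
# Illustrative sketch of the layering order: a foreground layer (e.g.
# the rock) hides the actor's mask portion, which in turn hides the
# background image. `None` marks a transparent position in a layer.

def composite(background_px, actor_px, foreground_px):
    """Return the pixel visible at one position after layering."""
    if foreground_px is not None:   # foreground layer hides the actor
        return foreground_px
    if actor_px is not None:        # actor's mask hides the background
        return actor_px
    return background_px

assert composite("sky", None, None) == "sky"        # plain background
assert composite("sky", "actor", None) == "actor"   # actor over background
assert composite("sky", "actor", "rock") == "rock"  # rock hides actor
```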
- program flow proceeds to a task 300 .
- the scenario author has the opportunity to build the scenario logic flow for scenario 211 ( FIG. 14 ). That is, although actors and behaviors have been selected, there is as yet no definition of when the actors may appear, nor of the interaction, or lack thereof, between the actors and behaviors. That capability is provided to the scenario author to further customize scenario 211 in accordance with his or her particular training agenda.
- FIG. 23 shows a screen shot image 302 of scenario logic window 226 from main window 220 ( FIG. 16 ) for configuring the scenario logic of scenario 211 ( FIG. 14 ), and FIG. 24 shows a table 304 of a key of exemplary symbols 306 utilized within scenario logic window 226 .
- Symbols 306 represent actions, events, and activities within a logic flow for scenario 211 .
- the “logic”, or relationship between the elements, can be readily constructed.
- Table 304 includes a “start point” symbol 308 , an “external command” symbol 310 , a “trigger” symbol 312 , an “event” symbol 314 , an “actor/behavior” symbol 316 , an “ambient sound” symbol 318 , and a “delay” symbol 320 .
- Symbols 306 are provided herein for illustrative purposes. Those skilled in the art will recognize that symbols 306 could take on a great variety of shapes. Alternatively, color coding could be utilized to differentiate the various symbols.
- a scenario logic flow 322 for scenario 211 includes a number of interconnected symbols 306 .
- Start point symbol 308 is automatically presented within scenario logic window 226 , and provides command and control to the scenario playback system, in this case three hundred degree surround simulation system 108 ( FIG. 9 ), to load and initialize scenario 211 .
- Actor/behavior symbol(s) 316 may appear in scenario logic window 226 when actors 266 ( FIG. 19 ) performing behaviors 278 ( FIG. 20 ) are combined with one of background images 234 . However, actor/behavior symbol(s) 316 are “floating” or unconnected with regard to any other symbols appearing in scenario logic window 226 until the scenario author creates those connections.
- Interactive buttons within scenario logic window 226 can include an “external command” button 324 , a “timer” button 326 , and a “sound” button 328 .
- External command symbol 310 is created in scenario logic window 226 when the scenario author clicks on external command button 324 .
- External commands are interactions that may be created within scenario logic flow 322 that occur from outside of simulation system 108 ( FIG. 9 ). These external commands may be stored within database 203 ( FIG. 14 ), and may be listed, for example, in properties window 228 ( FIG. 16 ) of main window 220 ( FIG. 16 ) when external command symbol 310 is created in scenario logic window 226 . The scenario author can then select one of the external commands listed in properties window 228 .
- Exemplary external commands can include an instructor start command, which starts the motion video of scenario 211 , and a shoot command, which causes an actor to be shot, although not by trainee 26 ( FIG. 1 ).
- Other exemplary external commands could include initiating a shoot back device toward trainee 26 , initiating random appearance of another actor, initiating a specialized sound, and so forth. In operation of scenario 211 , these external commands can be displayed for ease of use by the instructor.
- Delay symbol 320 is created in scenario logic window 226 when the scenario author clicks on timer button 326 .
- timer button 326 allows the scenario author to input a time delay into scenario logic flow 322 .
- Appropriate text may appear in, for example, properties window 228 of main window 220 when delay symbol 320 is created in scenario logic window 226 . This text can allow the author to enter a duration of the delay, or can allow the author to select from a number of pre-determined durations of the delay.
- Ambient sound symbol 318 is created in scenario logic window 226 when the scenario author clicks on sound button 328 .
- the use of sound button 328 allows the scenario author to input ambient sound into scenario logic flow 322 .
- Text may appear in, for example, properties window 228 of main window 220 when ambient sound symbol 318 is created in scenario logic window 226 .
- This text may be a list of sound files that are stored within database 203 ( FIG. 14 ).
- the scenario author can then select one of the sound files listed in properties window 228 .
- Exemplary sound files include wilderness sounds, warfare sounds, street noise, traffic, and so forth.
- properties window 228 may present a browse capability when the scenario author clicks on sound button 328 so that the author is enabled to browse within computing system 200 ( FIG. 14 ) or over a network connection for a particular sound file.
- Trigger symbol 312 within scenario logic flow 322 represents notification to actor/behavior symbol 316 that something has occurred.
- event symbol 314 within scenario logic flow 322 represents an occurrence of something within an actor's behavior that will cause a reaction within scenario logic flow 322 .
- trigger symbol 312 and event symbol 314 can be generated when the scenario author “right clicks” on actor/behavior symbol 316 .
- FIG. 25 shows a screen shot image 330 of an exemplary drop-down menu 332 of events 334 associated with scenario logic window 226 ( FIG. 23 ).
- scenario author “right clicks” on actor/behavior symbol 316 representing first actor 266 ′
- drop-down menu 332 is revealed and one of events 334 can be selected.
- Drop-down menu 332 reveals a set of events 334 that can occur within scenario logic flow 322 in response to an actor's behavior.
- By utilizing pointer 248 to point to and select one of events 334 , the scenario author may assign one of events 334 , for example, a “Fall” event 334 ′, to first actor 266 ′ within scenario logic flow 322 .
- FIG. 26 shows a screen shot image 336 of exemplary drop-down menu 332 of triggers 338 associated with scenario logic window 226 ( FIG. 23 ).
- scenario author “right clicks” on actor/behavior symbol 316 representing second actor 266 ′′
- drop-down menu 332 is again revealed and one of the listed triggers 338 can be selected.
- Drop-down menu 332 reveals a set of triggers 338 that can provide notification to an associated actor/behavior symbol 316 .
- By utilizing pointer 248 to point to and select one of triggers 338 , the scenario author may assign one of triggers 338 , for example, a “Shot” trigger 338 ′, to second actor 266 ′′ within scenario logic flow 322 .
- Solid arrows 340 represent the interconnections made by the scenario author, whereas dashed arrows 342 are automatically generated when events 334 and/or triggers 338 are assigned to various actor/behavior symbols 316 within scenario logic flow 322 .
- Scenario logic flow 322 describes a “script” for scenario 211 ( FIG. 14 ).
- the “script” is as follows: scenario 211 starts (Start point 308 ), the instructor initiates events (Instructor start 310 ), ambient sound immediately begins (Ambient Sound 318 ), and first actor 266 ′ immediately begins performing his behavior (Offender 1 316 ). If first actor 266 ′ falls (Fall 314 ), a delay is imposed (Delay 320 ). Second actor 266 ′′ begins performing his behavior (Guard 1 316 ) following expiration of the delay. The instructor shoots second actor 266 ′′ (Shoot Guard 1 310 ) which causes a trigger (Shot 312 ) notifying second actor 266 ′′ to react. The reaction of second actor 266 ′′ is logged as an event (Fall 314 ).
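- The script above can be modeled as a reaction table, with events and external commands mapped to the notifications they produce. The sketch below is an illustration of this structure under assumed names, not the disclosed engine:

```python
# Illustrative sketch: scenario logic flow 322 as a reaction table.
# Keys are external commands or (actor, event) pairs; values are the
# resulting actions or (actor, trigger) notifications.

SCENARIO_LOGIC = {
    "Instructor start":     ["play ambient sound", "start Offender 1"],
    ("Offender 1", "Fall"): ["delay", "start Guard 1"],
    "Shoot Guard 1":        [("Guard 1", "Shot")],  # command -> trigger
}

def react(event):
    """Return the reactions, if any, for an event or external command."""
    return SCENARIO_LOGIC.get(event, [])

log = []
log += react("Instructor start")        # scenario begins
log += react(("Offender 1", "Fall"))    # offender falls: delay, then guard
log += react("Shoot Guard 1")           # instructor command: Shot trigger
```

The solid arrows of the flow correspond to table entries authored by hand; the dashed arrows correspond to the (actor, event) keys generated automatically when events and triggers are assigned.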
- Scenario logic flow 322 is highly simplified for clarity of understanding. However, in general it should be understood that scenario logic can be generated such that the behavior of a first actor can affect the behavior of a second actor and/or that an external command can affect the behavior of either of the actors.
- the behaviors of the actors can also be affected by interaction of trainee 26 within scenario 211 . This interaction can occur at the behavior level of the actors, and is described in greater detail in connection with FIGS. 33-34 .
- scenario provision process 202 proceeds to a task 344 .
- scenario 211 ( FIG. 14 ) is saved into memory 210 ( FIG. 14 ).
- a task 346 is performed.
- scenario 211 is displayed on the scenario playback system, for example, three hundred degree surround simulation system 108 ( FIG. 9 ), for interaction with trainee 26 ( FIG. 1 ).
- Scenario provision process 202 includes ellipses 348 separating scenario save task 344 and scenario display task 346 .
- Ellipses 348 indicate an omission of standard processing tasks for simplicity of illustration. These processing tasks may include saving scenario 211 in a format compatible for playback at simulation system 108 , writing scenario 211 to a storage medium that is readable by simulation system 108 , conveying scenario 211 to simulation system 108 , and so forth. Following task 346 , scenario provision process 202 exits.
- FIG. 27 shows a screen shot image 350 of a background editor window 352 with a pan tool 354 enabling a pan capability.
- FIG. 28 shows a screen shot image 356 of background editor window 352 with a foreground marking tool 358 enabling a layer capability
- FIG. 29 shows a screen shot image 360 of background editor window 352 with first background image 234 ′ selected for saving into database 203 ( FIG. 14 ).
- background images 234 can be obtained utilizing a camera and creating still images within an actual, or real environment. These still images are desirably in a panoramic format.
- a still image may be manipulated in a digital environment through background editor window 352 to achieve a desired one of background images 234 .
- Interactive buttons within background editor window 352 include a “load panoramic” button 362 , a “pan” button 364 , and a “layer” button 366 .
- Load panoramic button 362 allows a user to browse within computing system 200 ( FIG. 14 ) or over a network connection, or to load from a digital camera, a particular still image 368 . Once selected, still image 368 will be presented on adjacent panels 370 within background editor window 352 , which represent panels 254 ( FIG. 18 ) within scenario layout window 222 ( FIG. 18 ).
- pan button 364 allows the user to manipulate still image 368 horizontally and vertically for optimal placement of adjacent views within panels 370 .
- a horizontal lock 372 and a vertical lock 373 can be selected after still image 368 has been manipulated to a desired position.
- a zoom adjustment element 374 may also be provided to enable the user to move still image 368 inward and outward at an appropriate depth.
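- Presenting one panoramic still across adjacent panels reduces to slicing the image into contiguous column ranges, shifted by a common pan offset. A minimal sketch with assumed dimensions:

```python
# Illustrative sketch: split a panoramic still into adjacent panel
# views, with a horizontal pan offset that wraps around the panorama.
# Width and panel count are assumptions for the example.

PANORAMA_WIDTH = 3600   # pixels
NUM_PANELS = 5

def panel_slices(pan_offset):
    """Return the (start, end) pixel columns shown by each panel."""
    width = PANORAMA_WIDTH // NUM_PANELS
    return [((pan_offset + i * width) % PANORAMA_WIDTH,
             (pan_offset + (i + 1) * width) % PANORAMA_WIDTH)
            for i in range(NUM_PANELS)]

slices = panel_slices(pan_offset=100)
# first panel shows columns 100..820, the next 820..1540, and so on;
# the last panel wraps past the right edge back to column 100
```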
- Foreground marking tool 358 allows the user to cover or “paint” over areas within still image 368 that he or she wishes to be specified as a foreground layer.
- Foreground marking tool 358 may take on a variety of forms for encircling a region, creating a “feathered” edge, subtracting a region, and so forth known to those skilled in the art.
- the foreground layer is designated by a shaded region 376 created by movement of foreground marking tool 358 . Shaded region 376 will be saved as a data file in association with still image 368 to define a foreground layer 378 ( FIG. 29 ).
- the user can save still image 368 as first background image 234 ′ by conventional procedures using a “save” button 380 .
- foreground layers 378 will not appear as shaded region 376 ( FIG. 28 ), but instead foreground layers 378 within first background image 234 ′ will appear as the image of the portion of still image 368 that was marked in FIG. 28 .
- shaded region 376 may be optionally toggled visible, invisible, or partially transparent.
- FIG. 30 shows an exemplary table 382 of animation sequences 384 associated with actors 266 for use within scenario provision process 202 ( FIG. 15 ).
- Table 382 relates to information stored within database 203 ( FIG. 14 ) of scenario provision process 202 .
- animation sequences 384 are the scripted actions that any of actors 266 may perform.
- Video clips 386 may be recorded of actors 266 performing animation sequences 384 against a blue or green screen. Information regarding video clips 386 is subsequently recorded in association with one of actors 266 .
- video clips 386 are distinguished by identifiers 388 , such as a frame number sequence, in table 382 characterizing one of animation sequences 384 .
- a logical grouping of animation sequences 384 defines one of behaviors 278 ( FIG. 20 ), as shown and discussed in connection with FIGS. 32-34 .
- video clips 386 of the animation sequences 384 that make up the desired one of behaviors 278 must first be recorded in database 203 ( FIG. 14 ).
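- The relationship in table 382 might be modeled as a lookup keyed by actor and animation sequence, with the frame-number identifier as the stored value. The names and frame ranges below are assumptions for the example:

```python
# Illustrative sketch of table 382: video clips indexed by (actor,
# animation sequence), each identified by a frame-number range.

CLIP_TABLE = {
    ("Offender 1", "Stand"): (0, 120),     # identifier: frames 0-120
    ("Offender 1", "Shoot"): (121, 300),
    ("Offender 1", "Fall"):  (301, 420),
}

def clip_identifier(actor, sequence):
    """Frame-number identifier of a clip, or None if never filmed."""
    return CLIP_TABLE.get((actor, sequence))

def frame_count(actor, sequence):
    """Number of frames in the recorded clip."""
    first, last = CLIP_TABLE[(actor, sequence)]
    return last - first + 1
```

A behavior is then a logical grouping of such sequences, and is usable by an actor only when every sequence in the grouping has an entry in the table.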
- FIGS. 31 a - d show an illustration of a single frame 390 of an exemplary one of video clips 386 undergoing video filming and editing.
- Motion picture video filming may be performed utilizing a standard or high definition video camera.
- Video editing may be performed utilizing video editing software for generating digital “masks” of the actor's performance.
- video clips 386 contain many more than a single frame. However, only a single frame 390 is shown to illustrate post production processing that may occur to generate video clips 386 for use with scenario provision process 202 ( FIG. 15 ).
- first actor 266 ′ is filmed against a backdrop 392 having a single color, such as a green or blue screen.
- a matte 393 , sometimes referred to as an alpha channel, is created that defines a mask portion 394 (i.e., the area that first actor 266 ′ occupies) and a transparent portion 396 (i.e., the remainder of frame 390 in which backdrop 392 is visible).
- zones, illustrated as shaded circular and oval regions 398 are defined on mask portion 394 . In an exemplary embodiment, these zones 398 are hit zones that provide information so scenario 211 ( FIG. 14 ) can detect discharge of a weapon into one of zones 398 . That is, scenario 211 can determine whether trainee 26 ( FIG. 1 ) hits or misses a target, such as first actor 266 ′.
- Zones 398 can be computed using matte 393 , i.e., the alpha channel, as a starting point. For example, in the area of frame 390 where the opacity exceeds approximately ninety-five percent, i.e., mask portion 394 , it can be assumed that the image asset, i.e., first actor 266 ′, is “solid” and therefore can be hit by a bullet. Any lesser opacity will cause the bullet to “miss” and hit the next object in the path of the bullet.
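- The opacity test described above can be sketched directly: a shot strikes the actor only where the alpha channel exceeds the threshold. The frame data below is an assumption for the example:

```python
# Illustrative sketch: hit detection against the matte's alpha channel.
# A shot lands on the actor only where opacity exceeds roughly 95%;
# anywhere less opaque, the bullet "misses" and passes to the next
# object in its path.

OPACITY_THRESHOLD = 0.95

def is_hit(alpha_frame, x, y):
    """True if a shot at (x, y) strikes the actor's mask portion."""
    return alpha_frame[y][x] > OPACITY_THRESHOLD

# 2x3 alpha channel: opaque actor pixels in the middle column.
alpha = [
    [0.0, 1.0, 0.1],
    [0.0, 0.98, 0.0],
]
assert is_hit(alpha, 1, 0)        # solid pixel -> hit
assert not is_hit(alpha, 2, 0)    # mostly transparent -> miss
```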
- This hit zone information can be enhanced by adding different types of zones 398 to different areas of first actor 266 ′. For example, FIG. 31 c shows circular hit zones 400 and oval hit zones 402 .
- By using differing ones of zones 398 , behavior 278 for first actor 266 ′ can generate an event related to a strike in one of circular and oval hit zones 400 and 402 that would affect the behavior's branching.
- the information regarding zones 398 is stored in a file of hit zone information for each frame 390 in a given one of video clips 386 ( FIG. 30 ).
- single frame 390 is shown with foreground layer 378 overlaying mask portion 394 representing first actor 266 ′.
- FIG. 31 d is provided herein to demonstrate a situation in which foreground layer 378 overlies mask portion 394 . In such a circumstance, only those hit zones that remain uncovered by foreground layer 378 , in this case two circular hit zones 400 and a single oval hit zone 402 , can be hit.
- FIG. 32 shows a screen shot image 404 of a behavior editor window 406 showing behavior logic flow 408 for first behavior 278 ′
- FIG. 33 shows a table 410 of a key of exemplary symbols 412 utilized within behavior editor window 406 .
- Symbols 412 represent actions, events, activities, and video clips within behavior logic flow 408 .
- the “logic”, or relationship between the elements, can be readily constructed for one of behaviors 278 .
- table 410 includes “start point” symbol 308 , “trigger” symbol 312 , and “event” symbol 314 .
- table 410 includes an “animation sequence” symbol 414 , a “random” symbol 416 and an “option” symbol 418 .
- Symbols 412 are provided herein for illustrative purposes. Those skilled in the art will recognize that symbols 412 could take on a great variety of shapes. Alternatively, color coding could be utilized to differentiate the various symbols.
- behavior logic flow 408 for first behavior 278 ′ includes a number of interconnected symbols 412 .
- Start point symbol 308 is automatically presented within behavior logic flow 408 , and provides command and control to load and initialize first behavior 278 ′.
- Branching options window 420 facilitates generation of behavior logic flow 408 .
- Branching options window 420 includes a number of user interactive buttons.
- window 420 includes a “branch” button 422 , an “event” button 424 , a “trigger” button 426 , a “random” button 428 , and an “option” button 430 .
- selection of branch button 422 allows for a branch to occur within behavior logic flow 408 .
- Selection of event button 424 results in the generation of event symbol 314
- selection of trigger button 426 results in the generation of trigger symbol 312 in behavior logic flow 408 .
- trigger symbol 312 generated within behavior logic flow 408 is a notification that something has occurred within that behavior logic flow 408 .
- a trigger within behavior logic flow 408 becomes an event within scenario logic flow 322 .
- event symbol 314 generated within behavior logic flow 408 is an occurrence of something that results in a reaction of the actor in accordance with behavior logic flow 408 .
- An event within behavior logic flow 408 becomes a trigger within scenario logic flow 322 .
- Random button 428 results in the generation of random symbol 416 in behavior logic flow 408 .
- selection of option button 430 results in the generation of option symbol 418 in behavior logic flow 408 .
- introducing random and/or option symbols 416 and 418 , respectively, into behavior logic flow 408 adds random or unexpected properties to the flow. These random or unexpected properties will be discussed in connection with FIG. 34 .
- a properties window 432 allows the selection of animation sequences 384 .
- properties window 432 allows the behavior author to assign various properties to the selected one of animation sequences 384 . These various properties can include, for example, selection of a particular sound associated with a gunshot.
- animation sequence symbol 414 will appear in behavior editor window 406 .
- the various symbols 412 will be presented in behavior editor window 406 as “floating” or unconnected with regard to any other symbols 412 appearing in window 406 until the behavior creation author creates those connections.
- Symbols 412 within behavior logic flow 408 are interconnected by arrows 432 to define the various relationships and interactions.
- Behavior logic flow 408 describes a “script” for one of behaviors 278 ( FIG. 20 ), in this case first behavior 278 ′.
- the “script” is as follows: behavior flow 408 starts (Start point 308 ) and animation sequence 384 is presented (Stand 414 ). If an event occurs (Shot 314 ), a trigger is generated (Fall 312 ), and another animation sequence 384 is presented (Fall 414 ). The trigger (Fall 312 ) is communicated as needed within scenario logic flow 322 ( FIG. 23 ) as an event, Fall 314 ( FIG. 23 ).
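- This script has the shape of a small state machine: the current animation sequence plays until an event arrives, whereupon a trigger is emitted and the next sequence begins. A minimal sketch, not the disclosed engine:

```python
# Illustrative sketch: behavior logic flow 408 as a state machine.
# Play "Stand" until a Shot event occurs, then emit a Fall trigger
# (reported upward to the scenario logic flow) and play "Fall".

TRANSITIONS = {
    # (current sequence, event) -> (trigger emitted, next sequence)
    ("Stand", "Shot"): ("Fall", "Fall"),
}

def step(sequence, event):
    """Advance the behavior; unhandled events leave it unchanged."""
    return TRANSITIONS.get((sequence, event), (None, sequence))

trigger, sequence = step("Stand", "Shot")
assert (trigger, sequence) == ("Fall", "Fall")
assert step("Stand", "Reload") == (None, "Stand")  # unhandled event
```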
- FIG. 34 shows a partial screen shot image 434 of behavior editor window 406 showing a behavior logic flow 436 for another one of behaviors 278 .
- Behavior logic flow 436 is significantly more complex than behavior logic flow 408 ( FIG. 32 ).
- flow 436 is readily constructed utilizing symbols 412 ( FIG. 33 ), and introduces various random properties.
- behavior logic flow 436 starts (Start point 308 ) and animation sequence 384 is presented (Duck 414 ). Next, a random property is introduced (Random 416 ). The random property (Random 416 ) allows behavior logic flow 436 to branch to either an optional side logic flow (Side 418 ) or an optional stand logic flow (Stand 418 ). Option symbols 418 indicate that logic flow can include either side logic flow, stand logic flow, or both side and stand logic flows when implementing the random property (Random 416 ).
- Animation sequence 384 is presented (From Duck: Side & Shoot 414 ). This translates to “from the duck position, move sideways and shoot.”
- animation sequence 384 is presented (From Side: Shoot 414 ), meaning “from the sideways position, shoot the weapon.”
- a random property (Random 416 ) is introduced. The random property allows behavior logic flow 436 to branch and present either animation sequence 384 (From Side: Shoot 414 ) or animation sequence 384 (From Side: Shoot & Duck 414 ).
- During any of the three animation sequences (From Duck: Side & Shoot 414 ), (From Side: Shoot 414 ), and (From Side: Shoot & Duck 414 ), an event can occur (Shot 314 ). If an event occurs (Shot 314 ), a trigger is generated (Fall 312 ), and another animation sequence 384 is presented (From Side: Shoot & Fall 414 ). If another event occurs (Shot 314 ), another trigger is generated (Fall 312 ), and yet another animation sequence 384 (Twitch 414 ) is presented.
- If animation sequence 384 (From Side: Shoot & Duck 414 ) is presented for a period of time and no event occurs, i.e., Shot 314 does not occur, behavior logic flow 436 loops back to animation sequence 384 (Duck 414 ).
- animation sequence 384 is presented (From Duck: Stand & Shoot 414 ). This translates to “from the duck position, stand up and shoot.”
- animation sequence 384 is presented (From Stand: Shoot and Duck 414 ), meaning “from the standing position, shoot the weapon, then duck.” If an event associated with animation sequences 384 (From Duck: Stand & Shoot 414 ) and (From Stand: Shoot and Duck 414 ) does not occur, i.e., Shot 314 does not occur, behavior logic flow 436 loops back to animation sequence 384 (Duck 414 ).
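- The random property at the heart of this flow amounts to the playback engine choosing one of the authored option flows at the branch point, so the same behavior plays out differently across runs. A minimal sketch, with the option names taken from the example above:

```python
# Illustrative sketch of the random property (Random 416): pick one of
# the authored optional logic flows at the branch point.

import random

OPTIONS = ["From Duck: Side & Shoot", "From Duck: Stand & Shoot"]

def random_branch(rng=random):
    """Choose which optional logic flow to follow after 'Duck'."""
    return rng.choice(OPTIONS)

branch = random_branch(random.Random(0))   # seeded for reproducibility
assert branch in OPTIONS
```

Seeding, as in the example, makes a run reproducible for testing; in training use the engine would draw from an unseeded source so trainees cannot anticipate the branch.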
- the present invention teaches of a method for scenario provision in a simulation system that utilizes executable code operable on a computing system.
- the executable code is in the form of a scenario provision process that permits the user to create new scenarios with the importation of sounds and image objects, such as panoramic pictures, still digital pictures, standard and high-definition video files, and green or blue screen video.
- Green or blue screen based filming provides for extensive reusability of content, as individual “actors” can be filmed and then “dropped” into various settings with various other “actors.”
- the program and method permit the user to place the image objects (for example, actor video clips) in a desired location within a background image.
- the program and method further allow a user to manipulate a panoramic image for use as a background image in a single or multi-screen scenario playback system.
- the program and method permit the user to assign sounds and image objects to layers so that the user can define which object is displayed in front of or behind another object.
- the program and method enable the user to readily construct a scenario logic flow defining a scenario through a readily manipulated and understandable flowchart-style user interface.
Abstract
A simulation system (20) facilitates training for trainees (26) subject to multi-directional threats. A method for providing a scenario (211) for use in the simulation system (20) utilizes a scenario provision process (202) executable on a computing system (200). The process (202) calls for choosing a background image (234) for the scenario (211), and selecting a video clip(s) (386) of one or more actors (266) from a database (203) of video clips (386). The video clip(s) (386) are filmed using a green or blue screen technique, and include a mask portion (394) of the actor (266) and a transparent portion (396). The video clip(s) (386) are combined with the background image (234) to create the scenario (211), with the mask portion (394) forming a foreground image over the background image (234). The scenario (211) is displayed on a display of the simulation system (20).
Description
- The present invention is a continuation in part (CIP) of “Multiple Screen Simulation System and Method for Situational Response Training,” U.S. patent application Ser. No. 10/800,942, filed 15 Mar. 2004, which is incorporated by reference herein.
- In addition, the present invention claims priority under 35 U.S.C. §119(e) to: “Video Hybrid Computer-Generated Imaging Software,” U.S. Provisional Patent Application Ser. No. 60/633,087, filed 3 Dec. 2004, which is incorporated by reference herein.
- The present invention relates to the field of simulation systems for weapons training. More specifically, the present invention relates to scenario authoring and provision in a simulation system.
- Due to current world events, there is an urgent need for highly effective law enforcement, security, and military training. Training involves practicing marksmanship skills with lethal and/or non-lethal weapons. Additionally, training involves the development of decision-making skills in situations that are stressful and potentially dangerous. Indeed, perhaps the greatest challenges faced by a trainee are when to use force and how much force to use. If an officer is unprepared to make rapid decisions under the various threats he or she faces, injury to the officer or citizens may result.
- Although scenario training is essential for preparing a trainee to react safely with appropriate force and judgment, such training under various real-life situations is a difficult and costly endeavor. Live-fire weapons training may be conducted at firing ranges, but it is inherently dangerous, tightly regulated for safety, and costly in terms of training ammunition, and firing ranges may not be readily available in all regions. Moreover, live-fire weapons cannot be safely utilized in training for many real-life situations.
- One technique that has been in use for many years is the utilization of simulation systems to conduct training exercises. Simulation provides a cost effective means of teaching initial weapon handling skills and some decision-making skills, and provides training in real-life situations in which live-fire may be undesirable due to safety or other restrictions.
- A conventional simulation system includes a single screen projection system to simulate reality. A trainee views the single screen with video projected thereon, and must decide whether to shoot or not to shoot at the subject. The weapon utilized in a simulation system typically employs a laser beam or light energy to simulate firearm operation and to indicate simulated projectile impact locations on a target.
- Single screen simulators utilize technology which restricts realism in tactical training situations and restricts the ability for thorough performance measurements. For example, in reality, lethal threats can come from any direction or from multiple directions. Unfortunately, a conventional single screen simulator does not expand or stimulate a trainee's awareness to these multi-directional threats because the trainee is compelled to focus on a situation directly in front of the trainee, as presented on the single screen. Accordingly, many instructors feel that the industry is encouraging “tunnel vision” by having the trainees focus on an 8-10 foot screen directly in front of them.
- One simulation system proposes the use of one screen directly in front of the trainee and a second screen directly behind the trainee. This dual screen simulation system simulates the “feel” of multi-directional threats. However, the trainee is not provided with peripheral stimulation in such a dual screen simulation system. Peripheral vision is used for detecting objects and movement outside of the direct line of vision. Accordingly, peripheral vision is highly useful for avoiding threats or situations from the side. The front screen/rear screen simulation system also suffers from the “tunnel vision” problem mentioned above. That is, a trainee does not employ his or her peripheral vision when assessing and reacting to a simulated real-life situation.
- In addition, prior art simulation systems utilize projection systems for presenting prerecorded video, and detection cameras for tracking shots fired, that operate at standard video rates and resolution based on National Television Standards Committee (NTSC) for analog television standard. Training scenarios based on NTSC analog television standards suffer from poor realism due to low resolution images that are expanded to fit the large screen of the simulator system. In addition, detection cameras based on NTSC standards suffer from poor tracking accuracy, again due to low resolution.
- While effective training can increase the potential for officer safety and can teach better decision-making skills for management of use of force against others, law enforcement, security, and military training managers must devote more and more of their limited resources to equipment purchases and costly training programs. Consequently, the need to provide cost effective, yet highly realistic, simulation systems for situational response training in austere budget times has presented additional challenges to the simulation system community.
- Accordingly, what is needed is a simulation system that provides realistic, multi-directional threats for situational response training. In addition, what is needed is a simulation system that includes the ability for high accuracy trainee performance measurements. Moreover, the simulation system should support a number of configurations and should be cost effective.
- It is an advantage of the present invention that a simulation system is provided for situational response training.
- It is another advantage of the present invention that a simulation system is provided in which a trainee can face multiple risks from different directions, thus encouraging teamwork and reinforcing the use of appropriate tactics.
- Another advantage of the present invention is that a simulation system is provided having realistic scenarios in which a trainee may practice observation techniques, practice time-critical judgment and target identification, and improve decision-making skills.
- Yet another advantage of the present invention is that a cost-effective simulation system is provided that can be configured to enable situational response training, marksmanship training, and/or can be utilized for weapons qualification testing.
- The above and other advantages of the present invention are carried out in one form by a simulation system. The simulation system includes a first screen for displaying a first view of a scenario, and a second screen for displaying a second view of the scenario. The first and second views of the scenario occur at a same instant, and the scenario is a visually presented situation. The simulation system further includes a device for selective actuation toward a target within the scenario displayed on the first and second screens, a detection subsystem for detecting an actuation of the device toward the first and second screens, and a processor in communication with the detection subsystem for receiving information associated with the actuation of the device and processing the received information to evaluate user response to the situation.
- The above and other advantages of the present invention are carried out in another form by a method of training a participant utilizing a simulation system, the participant being enabled to selectively actuate a device toward a target. The method calls for displaying a first view of a scenario on a first screen of the simulation system and displaying a second view of the scenario on a second screen of the simulation system. The first and second views of the scenario occur at a same instant, the scenario is prerecorded video of a situation, and the first and second views are adjacent portions of the prerecorded video. The method further calls for detecting an actuation of the device toward a target within the scenario displayed on the first and second screens, and evaluating user response to the situation in response to the actuation of the device.
- A more complete understanding of the present invention may be derived by referring to the detailed description and claims when considered in connection with the Figures, wherein like reference numbers refer to similar items throughout the Figures, and:
- FIG. 1 shows a block diagram of a full surround simulation system in accordance with a preferred embodiment of the present invention;
- FIG. 2 shows a block diagram of components that form the simulation system of FIG. 1;
- FIG. 3 shows a side view of a rear projection system of the simulation system;
- FIG. 4 shows a block diagram of a portion of the simulation system of FIG. 1 arranged in a firing range configuration;
- FIG. 5 shows a table of a highly simplified exemplary scenario pointer database;
- FIG. 6 shows a flowchart of an exemplary video playback process for a scenario that includes video branching to subscenarios;
- FIG. 7 shows an illustrative representation of adjacent views of a prerecorded scenario;
- FIG. 8 shows a block diagram of a half surround simulation system in accordance with another preferred embodiment of the present invention;
- FIG. 9 shows a block diagram of a three hundred degree surround simulation system in accordance with yet another preferred embodiment of the present invention;
- FIG. 10 shows a flowchart of a training process of the present invention;
- FIG. 11 shows a diagram of an exemplary calibration pattern;
- FIG. 12 shows a diagram of a detector of the simulation system zoomed in to a small viewing area for qualification testing;
- FIG. 13 shows a block diagram of a simulation system in accordance with an alternative embodiment of the present invention;
- FIG. 14 shows a simplified block diagram of a computing system for executing a scenario provision process to generate a scenario for playback in a simulation system;
- FIG. 15 shows a flow chart of a scenario provision process;
- FIG. 16 shows a screen shot image of a main window presented in response to execution of the scenario provision process;
- FIG. 17 shows a screen shot image of a library window from the main window exposing a list of background images for the scenario;
- FIG. 18 shows a screen shot image of the main window following selection of one of the background images of FIG. 17;
- FIG. 19 shows a screen shot image of the library window from the main window exposing a list of actors for the scenario;
- FIG. 20 shows a screen shot image of the library window from the main window exposing a list of behaviors for assignment to an actor from the list of actors;
- FIG. 21 shows a screen shot image of an exemplary drop-down menu of behaviors supported by a selected one of the actors from the list of actors;
- FIG. 22 shows a screen shot image of the main window following selection of actors and behaviors for the scenario;
- FIG. 23 shows a screen shot image of a scenario logic window from the main window for configuring the scenario logic of the scenario;
- FIG. 24 shows a table of a key of exemplary symbols utilized within the scenario logic window of FIG. 23;
- FIG. 25 shows a screen shot image of an exemplary drop-down menu of events associated with the scenario logic window of FIG. 23;
- FIG. 26 shows a screen shot image of an exemplary drop-down menu of triggers associated with the scenario logic window of FIG. 23;
- FIG. 27 shows a screen shot image of a background editor window of the scenario provision process with a pan tool enabling a pan capability;
- FIG. 28 shows a screen shot image of the background editor window with a foreground marking tool enabling a layer capability;
- FIG. 29 shows a screen shot image of the background editor window with a background image selected for saving into a database;
- FIG. 30 shows an exemplary table of animation sequences associated with actors for use within the scenario provision process;
- FIGS. 31a-d show an illustration of a single frame of an exemplary video clip undergoing video filming and editing;
- FIG. 32 shows a screen shot image of a behavior editor window showing a behavior logic flow for a first behavior;
- FIG. 33 shows a table of a key of exemplary symbols utilized within the behavior editor window; and
- FIG. 34 shows a partial screen shot image of the behavior editor window showing a behavior logic flow for a second behavior.
FIG. 1 shows a diagram of a full surround simulation system 20 in accordance with a preferred embodiment of the present invention. Full surround simulation system 20 includes multiple screens 22 that fully surround a participation location 24 in which one or more participants, i.e., trainees 26, may be positioned. Since multiple screens 22 fully surround participation location 24, at least one of screens 22 is configured to swing open to enable ingress and egress. For example, screens 22 may be hingedly coupled to one another, and one of screens 22 may be mounted on casters that enable it to roll outwardly enough to allow passage of trainees 26 and/or trainers (not shown).
- Each of multiple screens 22 has a rear projection system 28 associated therewith. Rear projection system 28 is operable from, and the actions of trainees 26 may be monitored from, a workstation 30 located remote from participation location 24. Workstation 30 is illustrated as being positioned proximate screens 22. However, it should be understood that workstation 30 need not be proximate screens 22, but may instead be located more distantly, for example, in another room. When workstation 30 is located in another room, bi-directional audio may be provided for communication between trainees 26 and trainers located at workstation 30. In addition, video monitoring of participation location 24 may be provided to the trainer located at workstation 30.
- Full surround simulation system 20 includes a total of six screens 22 arranged such that an angle 27 formed between corresponding faces 29 of screens 22 is approximately one hundred and twenty degrees. As such, the six screens 22 are arranged in a hexagonal pattern. In addition, each of screens 22 may be approximately ten feet wide by seven and a half feet high. Of course, those skilled in the art will recognize that other sizes of screens 22 may be provided. For example, a twelve foot wide by six foot nine inch high screen may be utilized for high definition formatted video. Thus, the configuration of simulation system 20 provides a multi-directional simulated environment in which a situation, or event, is unfolding. Although screens 22 are shown as being generally flat, the present invention may be adapted to include screens 22 that are curved. In such a configuration, screens 22 would form a generally circular pattern rather than the illustrated hexagonal pattern.
- Full surround simulation system 20 provides a visually presented situation on each of screens 22 so that trainees 26 in participation location 24 are fully immersed in the situation. In such a configuration, trainees 26 can train to respond to peripheral visual cues, multi-directional auditory cues, and the like. In a preferred embodiment, the visually presented situation is full motion, pre-recorded video. However, it should be understood that other techniques may be employed, such as video overlay, computer generated imagery, and the like.
- The situation presented by simulation system 20 is pertinent to the type of training and the trainees 26 participating in the training experience. Trainees 26 may be law enforcement, security, military personnel, and the like. Accordingly, training scenarios projected via rear projection system 28 onto associated screens 22 correspond to real life situations in which trainees 26 might find themselves. For example, law enforcement scenarios could include response to shots fired at a facility, domestic disputes, hostage situations, and so forth. Security scenarios might include action in a crowded airport departure/arrival terminal, the jet way, or in an aircraft. Military scenarios could include training for a pending mission, a combat situation, an ambush, and so forth.
- Trainees 26 are provided with a weapon 31. Weapon 31 may be implemented by any firearm (e.g., hand-gun, rifle, shotgun, etc.) and/or a non-lethal weapon (e.g., pepper spray, tear gas, stun gun, etc.) that may be utilized by trainees 26 in the course of duty. However, for purposes of the simulation, weapon 31 is equipped with a laser insert instead of actual ammunition. Trainees 26 actuate weapon 31 to selectively project a laser beam, represented by an arrow 33, toward any of screens 22 in response to the situation presented by simulation system 20. In a preferred embodiment, weapon 31 is a laser device that projects infrared (IR) light, although a visible red laser device may also be used. Alternatively, other non-live-fire weaponry and/or live-fire weaponry may be employed.
- Referring to
FIG. 2 in connection with FIG. 1, FIG. 2 shows a block diagram of components that form simulation system 20. Workstation 30 generally includes a simulation controller 32, and a tracking processor 34 in communication with simulation controller 32. In addition, simulation controller 32 is in communication with each of multiple projection controllers 36. One each of projection controllers 36 is in communication with one each of rear projection systems 28. Although separate controller/processor elements are utilized herein for different functions, those skilled in the art will readily appreciate that many of the computing functions performed by simulation controller 32, tracking processor 34, and projection controllers 36 may alternatively be combined into a comprehensive computing platform.
- Each rear projection system 28 includes a projector 38 having a video input 40 in communication with a video output 42 of its respective projection controller 36, and a sound device, i.e., a speaker 44, having an audio input 46 in communication with an audio output 48 of its respective projection controller 36. Each rear projection system 28 further includes a detector 50 in communication with tracking processor 34 via a high speed serial bus 51. Thus, the collection of detectors 50 defines a detection subsystem of simulation system 20. Projector 38 and detector 50 face a mirror 52 of rear projection system 28.
- In general, simulation controller 32 may include a scenario pointer database 54 that is an index to a number of scenarios (discussed below) that are prerecorded full motion video of various situations that are to be presented to trainees 26. In addition, each of projection controllers 36 may include a scenario library 56 pertinent to its location within simulation system 20. Each scenario library 56 includes a portion of the video and audio to be presented via the associated one of projectors 38 and speakers 44.
- An operator at workstation 30 selects one of the scenarios to present to trainees 26, and simulation controller 32 accesses scenario pointer database 54 to index to the appropriate video identifiers (discussed below) that correspond to the scenario to be presented. Simulation controller 32 then commands each of projection controllers 36 to concurrently present corresponding video, represented by an arrow 58, and any associated audio, represented by arced lines 60.
- Video 58 is projected toward a reflective surface 62 of mirror 52, where video 58 is thus reflected onto screen 22 in accordance with conventional rear projection methodology. Depending upon the scenario, trainee 26 may elect to shoot his or her weapon 31, i.e., project laser beam 33, toward an intended target within the scenario. An impact location (discussed below) of laser beam 33 is detected by detector 50 via reflective surface 62 of mirror 52 when laser beam 33 is projected onto screen 22. Information regarding the impact location is subsequently communicated to tracking processor 34 to evaluate trainee response to the presented scenario (discussed below). Optionally, trainee responses may then be concatenated into a report 64.
- Simulation controller 32 is a conventional computing system that includes, for example, input devices (keyboard, mouse, etc.), output devices (monitor, printers, etc.), a data reader, memory, programs stored in memory, and so forth. Simulation controller 32 and projection controllers 36 operate under a primary/secondary computer networking communication protocol in which simulation controller 32 (the primary device) controls projection controllers 36 (the secondary devices).
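For purposes of illustration only, the flow of impact-location information from the detection subsystem to tracking processor 34, and the optional concatenation of trainee responses into report 64, might be modeled as sketched below. The class and method names are hypothetical, not part of the disclosure; in the actual system these events would arrive over high speed serial bus 51 and feed a fuller evaluation of trainee response.

```python
import time

class TrackingProcessorModel:
    """Illustrative stand-in for tracking processor 34: accumulates the
    estimated impact locations reported by the detection subsystem and
    concatenates them into a simple per-screen report (cf. report 64)."""

    def __init__(self):
        self.shots = []  # (timestamp, screen_id, x, y) tuples

    def record_shot(self, screen_id, x, y, timestamp=None):
        # A detector 50 reports the best estimated impact location of laser
        # beam 33 on its screen 22; coordinates are screen-relative pixels.
        self.shots.append((time.time() if timestamp is None else timestamp,
                           screen_id, x, y))

    def compile_report(self):
        # Group recorded impact locations by screen for later evaluation.
        report = {}
        for _, screen_id, x, y in self.shots:
            report.setdefault(screen_id, []).append((x, y))
        return report

tp = TrackingProcessorModel()
tp.record_shot(screen_id=3, x=512, y=384, timestamp=0.0)
tp.record_shot(screen_id=3, x=520, y=390, timestamp=1.2)
report = tp.compile_report()
```

The sketch omits the timing correlation against the scenario video that a real evaluation of shoot/no-shoot decisions would require.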
Simulation system 20, illustrated inFIG. 1 , includes a quantity of sixscreens 22 and sixrear projection systems 28 arranged in a hexagonal configuration to form the full surround configuration ofsystem 20. However,simulation system 20 need not be limited to only sixscreens 22, but may include more or less than sixscreens 22. Accordingly, the block diagram representation ofsimulation system 20 is shown having a quantity of “N”projection controllers 36 and their associated “N”rear projection systems 28 to illustrate this point. - In a preferred embodiment, each of
projectors 38 is capable of playing high definition video. The term “high definition” refers to being or relating to a television system that has twice as many scan lines per frame as a conventional system, a proportionally sharper image, and a wide-screen format. The high-definition format uses a 16:9 aspect ratio (an image's width divided by its height), although the 4:3 aspect ratio of conventional television may also be used. The high resolution images (1024×768 or 1280×720) allow much more detail to be shown.Simulator system 20places trainees 26 close toscreens 22, so thattrainees 26 can see more detail. Consequently, the high resolution video images are advantageously utilized to provide more realistic imagery totrainees 26. Although the present invention is described in terms of it's use with known high definition video formats, the present invention may further be adapted for future higher resolution video formats. - In a preferred embodiment, each of
detectors 50 is an Institute of Electrical and Electronics Engineers (IEEE) 1394-compliant digital video camera in communication with trackingprocessor 34 via high speedserial bus 51. IEEE 1394 is a digital video serial bus interface standard that offers high-speed communications and isochronous real-time data services. An IEEE 1394 system is advantageously used in place of the more common universal serial bus (USB) due to its faster speed. However, those skilled in the art will recognize that existing and upcoming standards that offer high-speed communications, such as USB 2.0, may alternatively be employed. - Each of
detectors 50 further includes an infrared (IR) filter 66 removably covering alens 68 ofdetector 50.IR filter 66 may be hingedly affixed todetector 50 or may be pivotally affixed todetector 50.IR filter 66covers lens 68 whensimulator system 20 is functioning so as to accurately detect the impact location of laser beam 33 (FIG. 1 ) onscreen 22 by filtering all light except IR. However, prior to onset of the simulation scenario, it is first necessary to calibrate each ofdetectors 50 relative to their associated one ofprojectors 38.IR filter 66 is removed fromlens 68 so that visible light can be let in during the calibration process.IR filter 66 may be manually or automatically moved from in front oflens 68 as represented bydetector 50, labeled “DETECTOR N.” An exemplary calibration process will be described in connection with the training process ofFIG. 10 . -
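Although the disclosure defers its exemplary calibration process to FIG. 10, one common way (assumed here purely for illustration, not taken from the disclosure) to relate detector pixel coordinates to projected screen coordinates is to project a small set of known calibration points with IR filter 66 removed, locate them in the detector image, and fit a planar homography:

```python
def solve_linear(A, b):
    """Solve A·x = b by Gaussian elimination with partial pivoting
    (adequate for the small 8x8 system built below)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(detected, projected):
    """Fit a 3x3 homography mapping detector pixels to screen coordinates
    from four corresponding point pairs (bottom-right entry fixed at 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(detected, projected):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def map_point(H, x, y):
    """Map one detector pixel through H to screen coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical calibration: the detector saw the four projected corner points
# at `detected`; the projector drew them at `projected` (values illustrative).
detected = [(102.0, 75.0), (823.0, 80.0), (98.0, 590.0), (819.0, 586.0)]
projected = [(0.0, 0.0), (1024.0, 0.0), (0.0, 768.0), (1024.0, 768.0)]
H = fit_homography(detected, projected)
u1, v1 = map_point(H, 102.0, 75.0)
u2, v2 = map_point(H, 823.0, 80.0)
```

Once fitted, the same `map_point` transform converts each IR spot seen by detector 50 during a scenario into screen coordinates for tracking processor 34.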
FIG. 3 shows a side view of one of rear projection systems 28, i.e., a first projection system 28′, of simulation system 20 (FIG. 1). Only one of rear projection systems 28 is described in detail herein. However, the following description applies equally to each of rear projection systems 28 depicted in FIG. 1. First rear projection system 28′ includes a frame structure 70 for placement behind a first screen 22′. A first mirror 52′ is coupled to a first end 72 of frame structure 70 with a first reflective surface 62′ facing a rear face 74 of first screen 22′. Frame structure 70 retains first mirror 52′ in a fixed orientation that is substantially parallel to first screen 22′.
- A first projector 38′ is situated at a second end 76 of frame structure 70 at a distance, d, from first reflective surface 62′ of first mirror 52′. First projector 38′ is preferably equipped with an adjustment mechanism which can be employed to adjust first projector 38′ so that a center of a first view 78 of the projected video 58 (FIG. 3) is approximately centered on first screen 22′. First projector 38′ projects first view 78 of video 58 toward first mirror 52′, and first view 78 reflects from first mirror 52′ onto first screen 22′. A first detector 50′ is also positioned on frame structure 70. First detector 50′ may also be equipped with an adjustment mechanism which may be employed to adjust first detector 50′ so that first detector 50′ has an appropriate view of first screen 22′ via first mirror 52′.
- The utilization of first rear projection system 28′ in simulation system 20 (FIG. 1) advantageously saves space by shortening the distance between first projector 38′ and first screen 22′. The distance, d, between first mirror 52′ and first projector 38′ is approximately one half the throw distance of first projector 38′ to maximize space savings. Furthermore, the use of a rear projection technique effectively frees participation location 24 (FIG. 1) of the clutter and distraction of components that would be found in a front projection configuration, and avoids the problem of casting shadows that can occur in a front projection configuration.
- The relationship of components on frame structure 70 simplifies system configuration and calibration, and makes adjusting of first projector 38′ simpler. As shown, frame structure 70 further includes casters 82 mounted to a bottom thereof. Through the use of casters 82, simulation system 20 (FIG. 1) can be readily repositioned into different arrangements of screens 22 and rear projection systems 28.
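The space saving of the folded optical path can be estimated directly: a projector's required throw distance scales with image width, and placing mirror 52 at roughly half that distance halves the depth of frame structure 70. The throw ratio below is an assumed, illustrative value, not one specified in the disclosure:

```python
def folded_frame_depth(screen_width_ft, throw_ratio):
    """Return (total throw distance, approximate mirror-to-projector
    distance d) when the optical path is folded once by mirror 52.

    throw_ratio = throw distance / image width; projector-specific, and the
    value used below is illustrative rather than taken from the disclosure."""
    throw = throw_ratio * screen_width_ft
    d = throw / 2.0  # mirror at roughly half the throw halves the depth
    return throw, d

# For a ten-foot-wide screen and an assumed 1.5:1 throw ratio:
throw, d = folded_frame_depth(10.0, 1.5)
```

Under these assumptions the unfolded path would be 15 feet, while the folded frame needs a depth on the order of 7.5 feet.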
FIG. 4 shows a block diagram of a portion of the simulation system 20 arranged in a firing range configuration 84. The configuration of simulation system 20 shown in FIG. 1 advantageously surrounds and immerses a participant in a realistic, multi-directional environment for situational response training. In addition to the development of decision-making skills in situations that are stressful and potentially dangerous, a comprehensive training program may also involve practicing marksmanship skills with lethal and/or non-lethal weapons and weapons qualification testing.
- In firing range configuration 84, screens 22 are arranged such that corresponding viewing faces 86 of screens 22 are aligned to be substantially coplanar. Additionally, rear projection systems 28 are readily repositioned behind the aligned screens 22 via casters 82 (FIG. 3). Trainees 26 may then face screens 22, and project laser beam 33 of their respective weapons 31, toward targets presented on screens 22 via rear projection systems 28. Although firing range configuration 84 shows one of trainees 26 at each of screens 22, it is equally likely that each of screens 22 can accommodate more than one trainee 26 for marksmanship training and/or weapons qualification testing. Further discussion regarding the use of full surround simulation system 20 for marksmanship training and/or qualification testing is presented below in connection with FIG. 12.
FIG. 5 shows a table of a highly simplified exemplary scenario pointer database 54. As discussed briefly in connection with FIG. 2, scenario pointer database 54 provides an index to a number of scenarios of prerecorded full motion video of various situations that are to be presented to trainees 26. Simulation controller 32 (FIG. 2) accesses scenario pointer database 54 to index to the appropriate video identifiers that correspond to the scenario to be presented. Simulation controller 32 then commands projection controllers 36 to concurrently present corresponding video 58 (FIG. 2) and any associated audio 60 (FIG. 2) stored within their respective scenario libraries 56 (FIG. 2).
- Exemplary scenario pointer database 54 includes four exemplary scenarios 86, labeled "1", "2", "3", and "4", and referenced in a scenario identifier field 87. Each of scenarios 86 is pre-recorded video 58 corresponding to a real life situation in which trainees 26 might find themselves, as discussed above. In addition, each of scenarios 86 is split into adjacent portions, i.e., adjacent views 88, referenced in a video index identifier field 90, and assigned to particular projection controllers 36, referenced in a projection controller identifier field 92. For example, a first projection controller 36′ is assigned a first view 88′, identified in video index identifier field 90 by the label 1-1. Similarly, a second projection controller 36″ is assigned a second view 88″, identified in video index identifier field 90 by the label 1-2.
- In a preferred embodiment, pre-recorded video 58 may be readily filmed utilizing multiple high-definition format cameras with lenses outwardly directed from the same location, or a compound motion picture camera, in order to achieve a 360-degree field-of-view. Post-production processing entails stitching, or seaming, the individual views to form a panoramic view. The panoramic view is subsequently split into adjacent views 88 that are presented, via rear projection systems 28 (FIG. 2), onto adjacent screens 22. Through the use of digital video editing software, adjacent views 88 can be time locked, for example, through the assignment of appropriate time codes, so that adjacent views 88 of scenario 86 are played back at the same instant.
- The video is desirably split so that the primary subject or subjects of interest in the video are not split over adjacent screens 22. The splitting of video into adjacent views 88 for presentation on adjacent screens 22 need not be a one-to-one correlation. For example, during post-production processing a stitched panoramic video having a 270-degree field-of-view may be projected onto five screens to yield a 300-degree field-of-view.
Audio 60 may simply be recorded at the time of video production. During post-production processing, particular portions of the audio are assigned to particular slices of the video so that audio relevant to the view is provided. For example, audio 60 (FIG. 2) of a door opening should come from speaker 44 (FIG. 2) associated with one of screens 22 (FIG. 2) at which the door is shown, while audio 60 of a person's voice should come from speaker 44 associated with another of screens 22 at which the person is presented. Thus, audio 60 is cost-effectively produced using an emulation of three-dimensional audio to match the video. Such an approach is much less expensive, often more realistic, and scales better with system configurations than more complex surround sound techniques.
- Simulation system 20 (
FIG. 1 ) may employ a branching video technology. Branching video technology enables control of multiple playback paths through a video database. As such,scenarios 86 may optionally branch to a different outcome, i.e., asubscenario 94 based on the action or inaction of trainee 26 (FIG. 1 ). - Referring to
FIG. 6 in connection withFIG. 5 ,FIG. 6 shows a flowchart of an exemplaryvideo playback process 93 for asecond scenario 86″, labeled “2”, that includes video branching tosubscenarios 94. At an onset of the simulation training, an operator initiatessecond scenario 86″, labeled “2”. At a particular junction within the playback ofsecond scenario 86″, a branchingdecision 96 may be required. If no branch is to occur at branchingdecision 96,second scenario 86″ continues. However, if the video is to branch at branchingdecision 96, afirst subscenario 94′, labeled 2A, may be presented to trainee 26 (FIG. 1 ). - In addition,
second scenario 86″ shows that, following initiation of first subscenario 94′, another branching decision 98 may be required. When no branching is to occur at branching decision 98, first subscenario 94′ continues. Alternatively, when branching is to occur at branching decision 98, a second subscenario 94″, labeled 2C, is presented. Following the completion of second scenario 86″, first subscenario 94′, or second subscenario 94″, video playback process 93 for second scenario 86″ is finished. - An
exemplary scenario 86 in which video branching might occur is as follows: detectors 50 (FIG. 1) are surveying their respective screens 22 (FIG. 1) for an infrared (IR) spot, indicating that at least one of weapons 31 (FIG. 1) has been "fired" to project laser beam 33 (FIG. 1) onto one of screens 22. Tracking processor 34 (FIG. 2) may determine coordinates for a best estimated location of the "shot." These coordinates are communicated from tracking processor 34 to simulation controller 32 (FIG. 2). Simulation controller 32 time links the impact location of the "shot" to video 58 (FIG. 1) and controls branching of the video accordingly. For example, if a person within second scenario 86″ is "shot", scenario 86 may branch to a subscenario 94 showing the person falling. - Referring back to
FIG. 5, the presentation of scenarios 86 can be tailored to the type and complexity of the desired training. For example, scenario 86, labeled "1", may optionally take a single branch. As described above, second scenario 86″ may optionally branch to first subscenario 94′, and then optionally branch from first subscenario 94′ to second subscenario 94″. Another scenario 86, labeled "3", need not branch at all, and yet another scenario 86, labeled "4", may optionally branch to one of two subscenarios 94. - The present invention contemplates the provision of custom authoring capability of
scenarios 86 to the training organization. To that end, scenario creation software permits a scenario developer to construct situations that can be displayed on screens 22 from "stock" footage without the demands of performing extensive camera work. In a preferred embodiment, the scenario creation software employs a technique known as compositing. Compositing is the post-production combination of two or more video/film/digital clips into a single image. - In compositing, two images (or clips) are combined in one of several ways using a mask. The most common way is to place one image (the foreground) over another (the background). Where the mask indicates transparency, the background image will show through the foreground. Blue/green screening, also known as chroma keying, is a type of compositing where the mask is calculated from the foreground image. Where the image is blue (or green for green screen), the mask is considered to be transparent. This technique is useful when shooting film and video, as a blue or green screen can be placed behind the object being shot and some other image can then be inserted in that space later.
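- The masking just described can be sketched in a few lines of NumPy. This is a minimal illustration only: the greenness threshold is an assumption, and production keyers work in other color spaces and soften the matte edge.

```python
# Minimal chroma-key compositing sketch: compute the mask from the
# foreground's green excess, then let the background show through wherever
# the mask is transparent. The threshold value is an illustrative assumption.
import numpy as np

def chroma_key_composite(foreground, background, threshold=60):
    """Composite foreground over background of the same (H, W, 3) shape.

    A pixel is treated as green-screen (transparent) when its green channel
    exceeds both red and blue by more than `threshold`.
    """
    fg = foreground.astype(np.int16)
    green_excess = fg[..., 1] - np.maximum(fg[..., 0], fg[..., 2])
    mask = green_excess > threshold          # True where transparent
    out = foreground.copy()
    out[mask] = background[mask]             # background shows through
    return out
```

A blue-screen variant would simply compute the excess of the blue channel instead.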
- The scenario creation software provides the scenario developer with a library of background still and/or motion images. These background images are desirably panoramic images, so that one large picture is continued from one view on one of screens 22 (
FIG. 1) to the adjacent one of screens 22, and so forth. Green (or blue) screen video clips may be captured by the user, or may be provided within the scenario creation software. These video clips may include threatening or non-threatening individuals opening doors, coming around corners, appearing from behind objects, and so forth. - The scenario creation software then enables the scenario developer to display the background image with various foreground clips to form the scenario. In addition, the scenario developer may optionally determine the "logic" behind when and where the clips may appear. For example, the scenario developer could determine that foreground image "A" is to appear at a predetermined and/or random time. In addition, the scenario developer may add "hit zones" to the clips. These "hit zones" are areas where the clip would branch due to interaction by the user. The scenario developer can instruct the scenario to branch to clip "C" if a "hit zone" was activated on clip "B".
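- The clip "logic" above can be sketched as a small data structure. All names and fields here are hypothetical illustrations, not part of the actual scenario creation software.

```python
# Hypothetical sketch of authoring logic: a foreground clip appears at a
# predetermined (or random) time, and a hit zone on it can redirect
# playback to another clip.
import random
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ClipLogic:
    name: str
    hit_zone: Tuple[int, int, int, int]      # (x0, y0, x1, y1) in pixels
    branch_on_hit: Optional[str] = None      # clip to branch to when hit
    appear_at: Optional[float] = None        # seconds; None = random time

    def appearance_time(self, scenario_length: float) -> float:
        if self.appear_at is not None:
            return self.appear_at
        return random.uniform(0.0, scenario_length)

    def branch_for_shot(self, x: int, y: int) -> Optional[str]:
        """Return the branch target if (x, y) falls inside the hit zone."""
        x0, y0, x1, y1 = self.hit_zone
        if x0 <= x <= x1 and y0 <= y <= y1:
            return self.branch_on_hit
        return None

# Clip "B" appears 4 seconds in; a hit inside its zone branches to clip "C".
clip_b = ClipLogic("B", hit_zone=(100, 50, 180, 220),
                   branch_on_hit="C", appear_at=4.0)
```

At playback time the simulation controller would consult such records to decide when each composited clip appears and where it branches.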
- Through the use of the scenario creation software, the scenario developer is enabled to add, modify, and subtract video clips, still images, and/or audio clips to or from the scenario that they are creating. The scenario developer may then preview and test the scenario during the scenario creation process. Once the scenario developer is satisfied with the content, the scenario creation software can create the files needed by simulation system 20 (
FIG. 1), and automatically set up the scenario to be presented on screens 22. -
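The final packaging step can be pictured as serializing the authored scenario into a definition the playback system loads. The JSON layout below is purely illustrative; the actual file formats used by the simulation system are not specified here.

```python
# Hypothetical sketch of exporting a scenario definition for the playback
# system. Field names and the JSON format are illustrative assumptions.
import json

def export_scenario(name, background, clips, path=None):
    """Serialize a scenario definition; returns the JSON text."""
    definition = {
        "name": name,
        "background": background,            # e.g. a panoramic image file
        "clips": clips,                      # foreground clips plus logic
    }
    text = json.dumps(definition, indent=2)
    if path is not None:
        with open(path, "w") as fh:
            fh.write(text)
    return text

doc = export_scenario("warehouse-1", "warehouse.png",
                      [{"name": "B", "appear_at": 4.0, "branch_on_hit": "C"}])
```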
FIG. 7 shows an illustrative representation of adjacent views 88 of prerecorded video 58 of one of scenarios 86. Adjacent views 88 are presented on adjacent screens 22. For example, first screen 22′ shows first view 88′, second screen 22″ shows second view 88″, and so forth. Of course, as described above, screens 22 are arranged in a hexagonal configuration. Accordingly, adjacent views 88 surround and immerse trainee 26 (FIG. 1) into the situation presented in scenario 86. Upon being presented with such a situation in scenario 86, it is incumbent upon trainee 26 to determine what course of action he or she might take in response to the situation. -
FIG. 7 further illustrates an exemplary impact location 100 of laser beam 33 (FIG. 1) projected onto first screen 22′. In this instance, trainee 26 has determined that a subject 102 was an imminent threat to trainee 26 and/or to a second subject 104. Hence, for purposes of demonstration, subject 102 is a target 105 within scenario 86 displayed on the multiple screens 22. -
Trainee 26 responded to perceived aggressive behavior exhibited by subject 102 with the force that he or she deemed to be reasonably necessary during the course of the situation unfolding within scenario 86. As discussed previously, detector 50 (FIG. 1) associated with first screen 22′ detects impact location 100. Tracking processor 34 (FIG. 1) receives information from detector 50 associated with impact location 100 indicating that weapon 31 was actuated by trainee 26. The received information may entail receipt of the raw digital video, which tracking processor 34 then converts to processed information, for example, X and Y coordinates of impact location 100. The X and Y coordinates can then be presented to trainee 26 in the form of report 64 (FIG. 2), and/or can be communicated to simulation controller 32 (FIG. 2) for subsequent video branching, as discussed above. -
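The raw-video-to-coordinates conversion can be sketched as thresholding the detector frame and taking the centroid of the bright laser spot. The threshold value and frame format are assumptions for illustration only.

```python
# Sketch of reducing a raw detector frame to X-Y impact coordinates:
# threshold the frame, then take the centroid of the bright spot.
import numpy as np

def impact_coordinates(frame, threshold=200):
    """frame: 2-D array of pixel intensities. Returns (x, y) or None."""
    ys, xs = np.nonzero(frame >= threshold)
    if xs.size == 0:
        return None                       # no laser spot in this frame
    return float(xs.mean()), float(ys.mean())

frame = np.zeros((8, 8), dtype=np.uint8)
frame[3, 5] = 255                         # one saturated pixel at x=5, y=3
```

Averaging over all above-threshold pixels yields sub-pixel coordinates when the spot covers several pixels.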
FIG. 8 shows a diagram of a half surround simulation system 106 in accordance with another preferred embodiment of the present invention. The components presented in full surround simulation system 20 are modular and can be readily incorporated into other simulation systems dependent upon training requirements. In this situation, a 180-degree field of view is accomplished. As such, half surround simulation system 106 includes three screens 22, each of which has associated therewith one of projectors 38, one of detectors 50, and one of speakers 44. However, unlike full surround simulation system 20, half surround simulation system 106 utilizes a conventional front projection technique. In this case, projectors 38 and detectors 50 are desirably mounted on a ceiling and out of the way of trainee 26. - The 180-degree field of view enables
trainee 26 to utilize peripheral visual and auditory cues. However, space and cost savings are realized relative to full surround simulation system 20. Space savings are realized because the overall footprint of half surround simulation system 106 is approximately half that of full surround simulation system 20, and cost savings are realized by utilizing a smaller number of components. -
FIG. 9 shows a diagram of a three hundred degree surround simulation system 108 in accordance with yet another preferred embodiment of the present invention. As shown, 300-degree surround simulation system 108 includes a total of five screens 22 and five rear projection systems 28. Three hundred degree surround simulation system 108 enables nearly full surround and effective immersion for trainees 26. However, by using one less screen 22, an opening 110 is formed between screens 22 for easy ingress, egress, and trainee observation purposes. -
System 108 is further shown as including a remote debrief station 111. Remote debrief station 111 may be located in a different room, as represented by dashed lines 113. Station 111 is in communication with workstation 30, and more particularly with tracking processor 34 (FIG. 2) and/or simulation controller 32 (FIG. 2), via a wireline or wireless link 115. In an exemplary situation, software resident at workstation 30 compiles and transfers pertinent files for off-line review of trainee 26 response following a simulation experience. Off-line review could entail review and/or playback of the scenario, video/audio files of trainee 26, results, and so forth. - Although each of the simulation systems of
FIGS. 1, 4, 8, and 9 show the use of either front projection systems or rear projection systems, it should be understood that a single simulation system may include a combination of front and rear projection systems in order to better accommodate size limitations of the room in which the simulation system is to be housed. - Referring to
FIGS. 1 and 10, FIG. 10 shows a flowchart of a training process 112 of the present invention. Training process 112 is performed utilizing, for example, full surround simulation system 20. Training process 112 will be described herein in connection with a single one of trainees 26 utilizing full surround simulation system 20 for simplicity of illustration. However, as discussed above, more than one trainee 26 may participate in training process 112 at a given session. In addition, training process 112 applies equivalently when utilizing half surround simulation system 106 (FIG. 8) or three hundred degree surround simulation system 108 (FIG. 9). -
Training process 112 presents one of scenarios 86 (FIG. 5), in the form of full motion, realistic video. Trainee 26, with weapon 31, is immersed into scenario 86 and is enabled to react to a threatening situation. The object of such training is to learn to react safely, and with appropriate use of force and judgment. -
Training process 112 begins with a task 114. At task 114, an operator calibrates simulation system 20. As such, calibration task 114 is a preliminary activity that can occur prior to positioning trainee 26 within participation location 24 of simulation system 20. Calibration task 114 is employed to calibrate each of detectors 50 with their associated projectors 38. In addition, calibration task 114 may be employed to calibrate, i.e., zero, weapon 31 relative to projectors 38. - Referring to
FIG. 11 in connection with task 114, FIG. 11 shows a diagram of an exemplary calibration pattern 116 of squares that may be utilized to calibrate each of detectors 50 with their associated projectors 38. In order to branch from scenario 86 (FIG. 5) to an appropriate subscenario 94 (FIG. 5), and/or to obtain high accuracy trainee performance measurements, it is essential that the detection accuracy of detectors 50 corresponds with a known standard, i.e., calibration pattern 116 presented via one of projectors 38. For example, if projector 38 illuminates one pixel at, for example, X-Y coordinates of 4-4, and the associated detector 50 detects the illuminated pixel (dot) at, for example, X-Y coordinates of 5-5, then tracking software resident in tracking processor 34 (FIG. 2) must determine the appropriate mathematical adjustments to ensure that detector 50 is coordinated with projector 38. - To that end, at
calibration task 114, IR filter 66 (FIG. 2) is removed from lens 68 (FIG. 2) of detector 50 and visible light is allowed in so that detector 50 can detect calibration pattern 116. As mentioned before, IR filter 66 removal may be accomplished by manual removal by the operator, or by automatic means. Following IR filter 66 removal, projector 38 projects calibration pattern 116 for detection by detector 50, and tracking processor 34 (FIG. 2) correlates detected coordinates with projected coordinates. Weapon zeroing may entail projecting laser beam 33 (FIG. 1) from weapon 31 toward a predetermined position, i.e., a "zero" position, on calibration pattern 116. Interpolation can subsequently be employed to correlate projected coordinates for impact location 100 (FIG. 7) with detected coordinates for impact location 100. Calibration task 114 is performed for each projector 38 and detector 50 pair, either sequentially or concurrently. - With reference back to
FIGS. 1 and 11, following task 114, trainee 26 involvement can begin at a task 118. At task 118, trainee 26 moves into participation location 24, and the operator at workstation 30 displays a selected one of scenarios 86 (FIG. 5). In particular, simulation controller 32 (FIG. 2) commands projection controllers 36 (FIG. 2) to access their respective scenario libraries 56 (FIG. 2) to obtain a portion of video 58 (FIG. 2) associated with the desired scenario 86. Adjacent views 88 of scenario 86 are subsequently displayed on adjacent screens 22, as described in connection with FIG. 7. - In conjunction with task 118, a
query task 120 determines whether laser beam 33 is detected on one of screens 22. That is, at query task 120, each of detectors 50 monitors for laser beam 33 projected on one of screens 22 in response to actuation of weapon 31. When one of detectors 50 detects laser beam 33, this information is communicated to tracking processor 34 (FIG. 2), in the form of, for example, a digital video signal. - When
laser beam 33 is detected at query task 120, process flow proceeds to a task 122. At task 122, tracking processor 34 determines coordinates describing impact location 100 (FIG. 7). - Following
task 122, or alternatively, when laser beam 33 is not detected at query task 120, process flow proceeds to query task 124. Query task 124 determines whether to branch to one of subscenarios 94 (FIG. 5). In particular, simulation controller 32 (FIG. 2) determines from received information associated with impact location 100, i.e., X-Y coordinates, or from the absence of X-Y coordinates, whether to command projection controllers 36 (FIG. 2) to branch to one of subscenarios 94. As such, depending upon the desired training approach, this branching query task 124 may be due to action (i.e., detection of laser beam 33) or inaction (i.e., no laser beam 33 detected) of trainee 26. -
Process 112 proceeds to a task 126 when a determination is made at query task 124 to branch to one of subscenarios 94. At task 126, simulation controller 32 commands projection controllers 36 (FIG. 2) to access their respective scenario libraries 56 (FIG. 2) to obtain a portion of video 58 (FIG. 2) associated with the desired subscenario 94 (FIG. 5). Adjacent views 88 of subscenario 94 are subsequently displayed on adjacent screens 22. - When
query task 124 determines not to branch to one of subscenarios 94, process 112 continues with a query task 128. Query task 128 determines whether playback of scenario 86 is complete. When playback of scenario 86 is not complete, program control loops back to query task 120 to continue monitoring for laser beam 33. Thus, training process 112 allows for the capability of detecting multiple shots fired from weapon 31. Alternatively, when playback of scenario 86 is complete, process control proceeds to a query task 130 (discussed below). - Referring back to
task 126, a query task 132 is performed in conjunction with task 126. Query task 132 determines whether laser beam 33 is detected on one of screens 22 in response to the presentation of subscenario 94. When one of detectors 50 detects laser beam 33, this information is communicated to tracking processor 34 (FIG. 2), in the form of, for example, a digital video signal. - When
laser beam 33 is detected at query task 132, process flow proceeds to a task 134. At task 134, tracking processor 34 determines coordinates describing impact location 100 (FIG. 7). - Following task 134, or alternatively, when
laser beam 33 is not detected at query task 132, process flow proceeds to query task 136. Query task 136 determines whether to branch to another one of subscenarios 94 (FIG. 5). In particular, simulation controller 32 (FIG. 2) determines from received information associated with impact location 100, i.e., X-Y coordinates, or from the absence of X-Y coordinates, whether to command projection controllers 36 (FIG. 2) to branch to another one of subscenarios 94. - Process 112 loops back to
task 126 when a determination is made at query task 136 to branch to another one of subscenarios 94. The next one of subscenarios 94 is subsequently displayed, and detectors 50 continue to monitor for laser beam 33. However, when query task 136 determines not to branch to another one of subscenarios 94, process 112 continues with a query task 138. -
Query task 138 determines whether playback of subscenario 94 is complete. When playback of subscenario 94 is incomplete, program control loops back to query task 132 to continue monitoring for laser beam 33. Alternatively, when playback of subscenario 94 is complete, process control proceeds to query task 130. - Following completion of playback of either of
scenario 86, determined at query task 128, or completion of playback of subscenario 94, determined at query task 138, query task 130 determines whether report 64 (FIG. 2) is to be generated. A determination can be made when one of tracking processor 34 (FIG. 2) or simulation controller 32 detects an affirmative or negative response to a request for report 64 presented to the operator. When no report 64 is desired, process 112 exits. However, when report 64 is desired, process 112 proceeds to a task 140. - At task 140, report 64 is provided. In an exemplary embodiment, tracking processor 34 (
FIG. 2) may process the received information regarding impact location 100, associate the received information with the displayed scenario 86 and any displayed subscenarios 94, and combine the information into a format, i.e., report 64, that can be used for review and de-briefing. Report 64 may be formatted for display and provided via a monitor at, for example, remote debrief station 111 (FIG. 9) in communication with tracking processor 34. Alternatively, or in addition, report 64 may be printed out. Report 64 may include various information pertaining to trainee 26 performance including, for example, location of first and second subjects 102 and 104 (FIG. 7) versus impact location 100, the desired response to scenario 86 versus the actual response of trainee 26 to scenario 86, and/or the state of the wellbeing of trainee 26 in response to scenario 86. The state of wellbeing might indicate whether the trainee's response to scenario 86 could have caused trainee 26 to be injured or killed in a real life situation simulated by scenario 86. - Following task 140,
training process 112 exits. Of course, it should be apparent that training process 112 can be optionally repeated utilizing the same one of scenarios 86 or another one of scenarios 86. -
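The detect-and-branch loop of training process 112 can be condensed into a short sketch. Real detection and branching are hardware- and scenario-driven; here the detector input is modeled as a list of per-frame impact locations (None when no shot is seen) and branching as a simple lookup table, both purely illustrative.

```python
# Condensed sketch of the playback loop of training process 112.
def playback_loop(clip, frames, branch_for):
    """Play `clip` over `frames`; returns (final clip, recorded impacts)."""
    impacts = []
    for impact in frames:
        if impact is not None:            # query task 120/132: shot seen
            impacts.append(impact)        # task 122/134: record location
            nxt = branch_for(clip, impact)
            if nxt is not None:           # query task 124/136: branch?
                clip = nxt                # task 126: show the subscenario
    return clip, impacts

# A shot at (120, 80) during scenario "2" branches to subscenario "2A".
branch_table = {("2", (120, 80)): "2A"}
final_clip, impacts = playback_loop(
    "2", [None, (120, 80), None], lambda c, i: branch_table.get((c, i)))
```

Because the loop keeps running after a branch, multiple shots per scenario are naturally supported, as the process describes.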
Training process 112 describes methodology associated with situational response training for honing a trainee's decision-making skills in situations that are stressful and potentially dangerous. Of course, as discussed above, a comprehensive training program may also encompass marksmanship training and/or weapons qualification testing. Full surround simulation system 20 may be configured for marksmanship training and weapons qualification testing, as discussed in connection with FIG. 4. That is, screens 22 may be arranged coplanar with one another to form firing range configuration 84 (FIG. 4). -
FIG. 12 shows a diagram of detector 50 of simulation system 20 (FIG. 1) zoomed in to a small viewing area 142 for weapons qualification testing. In a preferred embodiment, at least one of detectors 50 is outfitted with a zoom lens 144. Zoom lens 144 is adjustable to decrease an area of one of screens 22, for example, first screen 22′, that is viewed by detector 50. By either automatically or manually zooming and focusing in to small viewing area 142, higher-resolution tracking of laser beam 33 (FIG. 1) can be achieved. Although only one is shown, there may additionally be multiple detectors 50, each configured to detect shots fired in an associated viewing area 142 on first screen 22′. -
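The resolution gain from zooming can be illustrated by mapping a shot detected inside the zoomed viewing area back to full-screen coordinates: the same detector pixel count now covers a smaller physical area. The viewing-area rectangle and detector resolution below are illustrative values, not from the specification.

```python
# Sketch of mapping a detector pixel inside the zoomed-in viewing area back
# to screen coordinates. Zooming shrinks the physical area per detector
# pixel, which is what raises the effective tracking resolution.
def to_screen_coords(px, py, view_rect, detector_size=(640, 480)):
    """Map detector pixel (px, py) into screen coordinates.

    view_rect: (x0, y0, width, height) of viewing area 142 on the screen.
    """
    x0, y0, w, h = view_rect
    dw, dh = detector_size
    return x0 + px * w / dw, y0 + py * h / dh

# Zoomed to a 400x300 area starting at (200, 100): detector center
# (320, 240) lands at the center of that area, (400, 250).
```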
Targets 146 presented on first screen 22′ via one of projectors 38 (not shown) are proportionately correct and sized to fit within small viewing area 142. Thus, the size of targets 146 may be reduced by fifty percent relative to their appearance when zoomed out. As shown, there may be multiple targets 146 presented on first screen 22′. Additional information pertinent to qualification testing may also be provided on first screen 22′. This additional information may include, for example, distance to the target (for example, 75 meters), wind speed (for example, 5 mph), and so forth. In addition, an operator may optionally enter, via workstation 30, information for use by a software ballistic calculator to compute, for example, the effects of wind, barometric pressure, altitude, bullet characteristics, and so forth, on the location of a "shot" fired toward targets 146. - Report 64 (
FIG. 2) may be generated in response to qualification testing that includes data pertinent to shooting accuracy, such as average impact location for laser beam 33, offset of laser beam 33 from center, a score, and so forth. -
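A ballistic calculator of the kind mentioned above can be sketched with a deliberately crude wind model. The constant-velocity, linear-drift formula and the lag factor are illustrative assumptions only; a real calculator also models drag, barometric pressure, altitude, and bullet characteristics.

```python
# Hypothetical sketch of a software ballistic calculator's wind correction.
def wind_drift_m(range_m, crosswind_mps, muzzle_mps, lag_factor=0.1):
    """Approximate lateral drift (meters) for a full-value crosswind."""
    time_of_flight = range_m / muzzle_mps      # ignores deceleration
    return crosswind_mps * time_of_flight * lag_factor

# At 75 m with a ~5 mph (2.2 m/s) crosswind and 880 m/s muzzle velocity,
# the computed drift is small, on the order of a couple of centimeters.
drift = wind_drift_m(75, 2.2, 880)
```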
FIG. 13 shows a block diagram of a simulation system 150 in accordance with an alternative embodiment of the present invention. Simulation system 150 includes many of the components of the previously described simulation systems. That is, simulation system 150 includes multiple screens 22 surrounding participation location 24, a rear projection system 28 associated with each screen 22, a workstation 30, and so forth. Accordingly, a description of these components need not be repeated. - In contrast to the aforementioned simulation systems,
simulation system 150 utilizes a non-laser-based weapon 152. Like weapon 31 (FIG. 1), weapon 152 may be implemented by any firearm (i.e., hand-gun, rifle, shotgun, etc.) and/or a non-lethal weapon (i.e., pepper spray, tear gas, stun gun, etc.) that may be utilized by trainees 26 in the course of duty. However, rather than a laser insert, weapon 152 is outfitted with at least two tracking markers 154. Simulation system 150 further includes a detection subsystem formed from multiple tracking cameras 156 encircling, and desirably positioned above, participation location 24. - In a preferred embodiment, tracking
markers 154 are reflective markers coupled to weapon 152 that are detectable by tracking cameras 156. Thus, tracking cameras 156 can continuously track the movement of weapon 152. Continuous tracking of weapon 152 provides a ready "aim trace," whereby the position of weapon 152 (or even trainee 26) can be monitored and then replayed during a debrief. Reflective tracking markers 154 require no power, and tracking cameras 156 can track movement of weapon 152 in three dimensions, as opposed to two dimensions for projected laser beam tracking. In addition, reflective tracking is not affected by metal objects in close proximity, and reflective tracking operates at a very high update rate. - Accurate reflective tracking calls for a minimum of two
reflective markers 154 per weapon 152 and at least three tracking cameras 156, although four to six tracking cameras 156 are preferred. Each of tracking cameras 156 emits light (often infrared light) directly next to the lens of tracking camera 156. Reflective tracking markers 154 then reflect the light back to tracking cameras 156. A tracking processor (not shown) at workstation 30 then performs various calculations and combines each view from tracking cameras 156 to create a highly accurate three-dimensional position for weapon 152. Of course, as known to those skilled in the art, a calibration process is required for both tracking cameras 156 and weapon 152, and if any of tracking cameras 156 are moved or bumped, simulation system 150 should be recalibrated. -
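One standard way to combine the camera views is ray intersection: each camera contributes a ray toward the marker, and the point minimizing the distance to all rays has a closed-form least-squares solution. This is a generic sketch of that technique, not the tracking processor's actual algorithm; the camera positions are illustrative.

```python
# Sketch of triangulating a 3-D marker position from several camera rays.
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of rays given as origins + directions."""
    a = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        m = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        a += m
        b += m @ o
    return np.linalg.solve(a, b)

# Three cameras at different positions all sighting the point (1, 2, 3):
origins = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
directions = np.array([1.0, 2.0, 3.0]) - origins
marker = triangulate(origins, directions)
```

With noisy real rays the same solve yields the point closest to all of them, which is why adding cameras (four to six, as preferred above) improves accuracy.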
Weapon 152 may be a pistol, for example, loaded with blank rounds. Actuation of weapon 152 is thus detectable by tracking cameras 156 as a sudden movement of tracking markers 154 caused by the recoil of weapon 152 in a direction opposite from the direction of the "shot" fired, as signified by a bi-directional arrow 158. By using such a technique, multiple weapons 152 can be tracked in participation location 24, and the position of weapons 152, as well as the projection of where a "shot" fired would go, can be calculated with high accuracy. Additional markers 154 may optionally be coupled to trainee 26, for example, on the head region to track trainee 26 movement and to correlate the movement of trainee 26 with the presented scenario. - If
weapon 152 is one that does not typically recoil when actuated, weapon 152 could further be configured to transmit a signal, via a wired or wireless link, indicating actuation of weapon 152. Alternatively, a weapon may be adapted to include both a laser insert and tracking markers, both of which may be employed to detect actuation of the weapon. -
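Recoil detection from the marker track can be sketched as flagging frames where the marker's displacement jumps far above its typical per-frame step. The spike factor is an assumed tuning value; the actual detection criteria are not specified.

```python
# Sketch of inferring weapon actuation from recoil in the marker track.
import numpy as np

def recoil_frames(positions, spike_factor=5.0, floor=1e-6):
    """positions: (N, 3) marker positions per frame; returns frame indices."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    baseline = max(float(np.median(steps)), floor)   # typical hand motion
    return [i + 1 for i, s in enumerate(steps) if s > spike_factor * baseline]

# Slow, steady hand movement, then a sudden rearward jump ("shot") at frame 4:
track = np.array([[0.00, 0, 0], [0.01, 0, 0], [0.02, 0, 0],
                  [0.03, 0, 0], [-0.50, 0, 0]])
```

Using the median step as the baseline keeps ordinary aiming motion from masking the recoil spike.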
FIG. 14 shows a simplified block diagram of a computing system 200 for executing a scenario provision process 202 to generate a scenario for playback in a simulation system, such as those described above. As mentioned in connection with FIG. 6, the present invention contemplates the provision of custom authoring capability of scenarios to the training organization. To that end, the present invention entails scenario creation code executable on computing system 200 and methodology for providing a scenario for use in the simulation systems described above. - Traditional training authoring software for instructional use-of-force training and military simulation can provide three-dimensional components. That is, conventional authoring software enables the manipulation of three-dimensional geometry that represents, for example, human beings. However, due to current technological limitations, computer-generated human characters lack realism in both look and movement, especially in real-time applications. If a trainee believes they are shooting a non-person, rather than an actual person, they may be more likely to use deadly force, even when deadly force is unwarranted. Consequently, a trainee having trained with video game-like "cartoon" characters may overreact when faced with minimal or non-threats. Similarly, the trainee may be less effective against real threats.
- Other current training approaches utilize interactive full-frame video. This type of video can provide very realistic human look and movement, at least on single screen applications. However, simulations based on full-frame video have limitations with respect to branching because the producers of such content must film every possible branch that may be needed during the simulation. In a practical setting, this means that training courseware becomes increasingly difficult to film as additional threats (i.e., characters) are added. The usual practice is to set up a branching point within the video, then further down the timeline, set up another branching point. This effectively limits the number of characters “on-screen” at any one time to usually a maximum of one or two. Moreover, such video has limited ability for reuse since the actions of the actors are not independent from the background. For video-based applications within the multi-screen simulation systems described above, these limitations are unacceptable.
- As discussed in detail below, the scenario creation code permits a scenario developer to construct situations that can be displayed on screens 22 (
FIG. 1) from stock footage without the demands of performing extensive camera work. The present invention may be utilized to create scenarios for the simulation systems described above, as well as other use-of-force training and military simulation systems. Use-of-force training can include firearms as well as less lethal options, such as chemical spray, TASER, baton, and so forth. In addition, the present invention may be utilized to create scenarios for playback in other playback systems that are not related to use-of-force or military training, such as teaching or behavioral therapy environments, sales training, and the like. Moreover, the present invention may be adapted for scenario creation for use within video games. -
Computing system 200 includes a processor 204 on which the methods according to the invention can be practiced. Processor 204 is in communication with a data input 206, a display 208, and a memory 210 for storing at least one scenario 211 (discussed below) generated in response to the execution of scenario provision process 202. These elements are interconnected by a bus structure 212. -
Data input 206 can encompass a keyboard, mouse, pointing device, and the like for user-provided input to processor 204. Display 208 provides output from processor 204 in response to execution of scenario provision process 202. Computing system 200 can also include network connections, modems, or other devices used for communications with other computer systems or devices. -
Computing system 200 further includes a computer-readable storage medium 214. Computer-readable storage medium 214 may be a magnetic disc, optical disc, or any other volatile or non-volatile mass storage system readable by processor 204. Scenario provision process 202 is executable code recorded on computer-readable storage medium 214 for instructing processor 204 to create scenario 211 for interactive use in a scenario playback system for visualization and interactive use by trainees 26 (FIG. 1). A database 203 may be provided in combination with scenario provision process 202. Database 203 includes actor video clips, objects, sounds, background images, and the like that can be utilized to create scenario 211. -
FIG. 15 shows a flow chart of scenario provision process 202. Process 202 is executed to create scenario 211 for playback in a simulation system, such as those described above. For clarity of illustration, process 202 is executed to create scenario 211 for playback in three hundred degree surround simulation system 108 (FIG. 9). However, process 202 may alternatively be executed to create scenario 211 for full surround simulation system 20 (FIG. 1), half surround simulation system 106 (FIG. 8), and other single screen or multi-screen simulation systems. In general, process 202 allows a scenario author to customize scenario 211 by choosing and combining elements, such as actor video clips, objects, sounds, and background images. Process 202 further allows the scenario author to define the logic (i.e., a relationship between the elements) within scenario 211. -
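The element-plus-logic view of a scenario can be pictured as a small data model: a chosen background, a set of elements (actor clips, objects, sounds), and trigger-to-effect relationships among them. The class and field names below are hypothetical illustrations only.

```python
# Hypothetical sketch of a scenario as process 202 describes it.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Scenario:
    background: str                          # panoramic background image
    elements: List[str] = field(default_factory=list)
    logic: Dict[str, str] = field(default_factory=dict)  # trigger -> effect

    def add_element(self, name: str) -> None:
        self.elements.append(name)

    def link(self, trigger: str, effect: str) -> None:
        """Record a relationship, e.g. "actor A hit" -> "play fall clip"."""
        self.logic[trigger] = effect

scenario = Scenario("urban_street.png")
scenario.add_element("actor_A")
scenario.link("actor_A:hit", "actor_A:fall_clip")
```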
Scenario provision process 202 begins with a task 216. At task 216, process 202 is initiated. Initiation of process 202 occurs by conventional program start-up techniques and yields the presentation of a main window on display 208 (FIG. 14). - Referring to
FIG. 16 in connection with task 216, FIG. 16 shows a screen shot image 218 of a main window 220 presented in response to execution of scenario provision process 202. Main window 220 is the primary opening view of process 202, and includes a number of sub-windows such as a scenario layout window 222, a library window 224, a scenario logic window 226, and a properties window 228. Main window 220 further includes a number of user fields, referred to as buttons, for determining the behavior of process 202 and controlling its execution. The functions of the sub-windows and buttons within main window 220 will be revealed below in connection with the execution of scenario provision process 202. In response to task 216, scenario provision process 202 awaits receipt of commands from a scenario author (not shown) in order to generate scenario 211 (FIG. 14). - Referring to
FIG. 15, a task 230 is performed in response to the receipt of a first input, via data input 206 (FIG. 14), from the scenario author. The first input indicates choice of a background image for scenario 211. - Referring to
FIG. 17 in connection with task 230, FIG. 17 shows a screen shot image 232 of library window 224 from main window 220 (FIG. 16) exposing a list 233 of background images 234 for scenario 211 (FIG. 14). Interactive buttons within library window 224 can include a “background images” button 236, an “actors” button 238, and a “behaviors” button 240. Additional buttons include a “new folder” button 242 and a “create new” button 244. List 233 is revealed when the scenario author clicks on background images button 236. As shown, list 233 may be organized in folders representing image categories 246, such as rural, urban, interior, and the like. However, it should be understood that list 233 may be organized in various ways pertinent to the particular organization executing scenario provision process 202 with the creation of new or different folders and image categories 246. -
Background images 234 may be chosen from those provided within list 233 stored in database 203 (FIG. 14). Alternatively, new background images 234 may be imported utilizing “create new” button 244. In a preferred embodiment, background images 234 can be obtained utilizing a camera to create still images within an actual, or real, environment. Background images 234 may be in a panoramic format utilizing conventional panoramic photographic techniques and processing for use within the large field-of-view of three hundred degree surround simulation system 108 (FIG. 9). The creation, editing, and storage of background images 234 will be described in greater detail in connection with a background editor illustrated in FIGS. 27-29. - The scenario author may utilize a
conventional pointer 248 to point to one of background images 234. A short description 250, in the form of text and/or a thumbnail image, may optionally be presented at the bottom of library window 224 to assist the scenario author in his or her choice of one of background images 234. Once the scenario author has chosen one of background images 234, the scenario author can utilize a conventional drag-and-drop technique by clicking on one of background images 234 and dragging it into scenario layout window 222 (FIG. 16). Those skilled in the art will recognize that other conventional techniques, rather than drag-and-drop, may be employed for choosing one of background images 234 and placing it within scenario layout window 222. - Referring to
FIG. 18 in connection with task 230 (FIG. 15) of scenario provision process 202, FIG. 18 shows a screen shot image 252 of main window 220 following selection of one of background images 234 provided in list 233 (FIG. 17). A first background image 234′ is shown in scenario layout window 222. First background image 234′ is presented in five adjacent panels 254 within scenario layout window 222. These five adjacent panels 254 correspond to the five adjacent screens 22 (FIG. 9) of three hundred degree surround simulation system 108 (FIG. 9). As shown, first background image 234′ can be seamlessly presented across panels 254, hence the five screens 22 of system 108. A sixth panel 256 in scenario layout window 222 may include a portion of one of background images 234 when creating scenario 211 (FIG. 14) for utilization within full surround simulation system 20 (FIG. 1). - Referring back to scenario provision process 202 (
FIG. 15), following task 230 in which first background image 234′ (FIG. 18) is chosen and displayed in scenario layout window 222 (FIG. 18), process 202 proceeds to a video clip selection segment 258. Video clip selection segment 258 includes a task 260. Task 260 is performed in response to the receipt of a second input, via data input 206 (FIG. 14), from the scenario author. The second input indicates selection of an actor that may be utilized within scenario 211. - Referring to
FIG. 19 in connection with task 260, FIG. 19 shows a screen shot image 262 of library window 224 from main window 220 (FIG. 16) exposing a list 264 of actors 266 for scenario 211. List 264 is revealed when the scenario author clicks on actors button 238. List 264 may be organized in folders representing actor categories 268, such as friendlies, hostiles, targets, and so forth. However, it should be understood that list 264 may be organized in various ways pertinent to the particular organization executing scenario provision process 202 with the creation of new or different folders and actor categories 268. -
Actors 266 may be chosen from those provided within list 264 stored in database 203 (FIG. 14). Alternatively, new actors 266 may be imported utilizing “create new” button 244 and importing one or more video clips of an actor or actors performing activities, or animation sequences. In a preferred embodiment, video clips of actors 266 can be obtained by filming an actor against a blue or green screen, and performing post-production processing to create a “mask”, or “matte”, of the area that the actor occupies against the blue or green screen. The creation, editing, and storage of video clips of actors 266 will be described in greater detail in connection with FIGS. 30-34. - At
task 260 of process 202, the scenario author may utilize pointer 248 to point to one of actors 266, for example a first actor 266′, labeled “Offender 1”. A short description 270, in the form of text and/or a thumbnail image, may optionally be presented at the bottom of library window 224 to assist the scenario author in his or her selection of one of actors 266. - With reference back to process 202 (
FIG. 15), once the scenario author has selected one of actors 266 at task 260, video clip selection segment 258 proceeds to a task 272. Task 272 is performed in response to the receipt of a third input, via data input 206 (FIG. 14), from the scenario author. The third input indicates assignment of a behavior to the selected one of actors 266. - Referring to
FIG. 20 in connection with task 272, FIG. 20 shows a screen shot image 274 of library window 224 from main window 220 (FIG. 16) exposing a list 276 of behaviors 278 for assignment to an actor 266 (FIG. 19) from list 264 (FIG. 19). In this exemplary illustration, list 276 may be revealed when the scenario author clicks on behaviors button 240. List 276 may be organized in folders representing behavior categories 280, such as aggressive, alert, civil, and such. However, it should be understood that list 276 may be organized in various ways pertinent to the particular organization executing scenario provision process 202 with the creation of new or different folders and behavior categories 280. - In accordance with a preferred embodiment of the present invention, each of
behaviors 278 within list 276 is the aggregate of actions and/or movements made by an object irrespective of the situation. Behaviors 278 within list 276 are not linked with particular actors 266 (FIG. 19). Rather, they are the aggregate of possible behaviors provided within database 203 (FIG. 14) that may be assigned to particular actors. For example, one of behaviors 278, i.e., a first behavior 278′ labeled “Hostile A”, may be a hostile behavior that includes stand, shoot, and fall if shot, as indicated by its description 282. By way of another example, a second behavior 278″, labeled “Civil A”, may be a civil, or non-hostile, behavior that includes stand, turn, and flee. Again, it is important to note that behaviors 278 are not linked with particular actors, but rather are defined by the provider of scenario provision process 202 as possible actions and/or movements that may be undertaken within scenario 211. - List 264 (
FIG. 19) of actors 266 (FIG. 19) is illustrated herein to show the presentation of an aggregate of actors 266 that may be selected when creating scenario 211. Similarly, list 276 of behaviors 278 is illustrated herein to show the presentation of an aggregate of behaviors 278 (FIG. 20) that may be assigned to actors 266 when creating scenario 211. In actuality, certain behaviors 278 can only be assigned to actors 266 if the actors 266 were initially filmed against a green or blue screen performing those behaviors 278. That is, each of behaviors 278 represents a script that may be performed by any of a number of actors 266 and filmed to create video clips for use within scenario provision process 202 (FIG. 15). Thus, a particular one of actors 266 may support a subset of behaviors 278 within list 276, rather than the totality of behaviors 278 in list 276. - Referring now to
FIG. 21 in connection with task 272 (FIG. 15) of scenario provision process 202 (FIG. 15), FIG. 21 shows a screen shot image 284 of an exemplary drop-down menu 286 of behaviors 278 supported by a selected one of the actors 266. Drop-down menu 286 represents a subset of behaviors 278 in which the selected one of actors 266 was filmed and for which video clips of those behaviors 278 exist in database 203 (FIG. 14). When the scenario author selects, for example, first actor 266′ (FIG. 19) at task 260, drop-down menu 286 may appear to facilitate assignment of one of behaviors 278. By utilizing pointer 248 to point to and select one of behaviors 278, the scenario author may assign one of behaviors 278, for example, first behavior 278′, to first actor 266′. - Although the above description indicates the selection of one of
actors 266 and the subsequent assignment of one of behaviors 278 to the selected actor 266, it should be understood that the present invention enables the opposite occurrence. For example, the scenario author may select one of behaviors 278 from list 276. In response, a drop-down menu may appear that includes a subset of actors 266 from list 264 (FIG. 19), each of which supports the selected one of behaviors 278. The scenario author may subsequently select one of actors 266 that supports the selected one of behaviors 278. - With reference back to scenario provision process 202 (
FIG. 15), following behavior assignment task 272, process flow proceeds to a task 288. At task 288, the selected actor and the assigned behavior, i.e., first actor 266′ (FIG. 19) performing first behavior 278′, are combined with the selected background image, i.e., first background image 234′ (FIG. 18), in scenario layout window 222 (FIG. 18). It should be understood that task 288 causes the combination of first background image 234′ with video clips corresponding to first actor 266′ performing first behavior 278′. Due to the blue or green screen filming technique of first actor 266′, first actor 266′ is a mask portion that forms a foreground image over first background image 234′, with first background image 234′ being visible in the transparent portion (the blue or green screen background) of the video clips of first actor 266′. - In a preferred embodiment, the scenario author can utilize a conventional drag-and-drop technique by clicking on
first actor 266′ and dragging it into scenario layout window 222. By utilizing the drag-and-drop technique, the scenario author can determine a location within first background image 234′ in which the author wishes first actor 266′ to appear. Those skilled in the art will recognize that other conventional techniques, rather than drag-and-drop, may be employed for choosing one of actors 266 and placing it within scenario layout window 222. - In addition, the scenario author can resize
first actor 266′ relative to first background image 234′ to characterize a distance of first actor 266′ from trainee 26 (FIG. 9) utilizing simulation system 108 (FIG. 9). For example, the scenario author may alter the pixel dimension of the digital image of first actor 266′ by using up/down keys on the keyboard of data input 206 (FIG. 14). Alternatively, the scenario author may select first actor 266′ within first background image 234′, position pointer 248 (FIG. 17) over a conventional selection handle displayed around first actor 266′, and resize first actor 266′ by clicking on the handle and dragging. In yet another alternative embodiment, scenario provision process 202 may enable the entry of a desired distance of first actor 266′ from trainee 26. Process 202 may then automatically calculate a height of first actor 266′ within first background image 234′ relative to the desired distance. - Following combining
task 288, scenario provision process 202 proceeds to a query task 290. At query task 290, the scenario author determines whether scenario 211 is to include another one of actors 266 (FIG. 19). When another one of actors 266 is to be utilized within scenario 211 (FIG. 14), process 202 loops back to task 260 so that another one of actors 266, for example, a second actor 266″ (FIG. 19), is selected, assignment of one of behaviors 278 (FIG. 20) is made at task 272, for example, second behavior 278″ (FIG. 20), and video clips of second actor 266″ performing second behavior 278″ are combined with first background image 234′ (FIG. 18). Consequently, repetition of tasks 260, 272, and 288 can continue for as many of actors 266 as would be appropriate for scenario 211. -
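The automatic height calculation mentioned above — entering a desired distance and computing the actor's on-screen size — can be sketched with a simple pinhole-projection model in which pixel height falls off inversely with distance. The model, the reference calibration values, and the function name are all assumptions, not taken from the patent:

```python
# Hypothetical calibration: an actor at the reference distance spans
# REF_HEIGHT_PX pixels on screen. Both constants are illustrative.
REF_HEIGHT_PX = 600
REF_DISTANCE_M = 5.0

def actor_height_px(distance_m):
    """Pixel height of an actor placed at the given distance, assuming
    on-screen size scales inversely with distance from the trainee."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return round(REF_HEIGHT_PX * REF_DISTANCE_M / distance_m)
```

Under this model, an actor placed at twice the reference distance appears at half the reference height.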
FIG. 22 shows a screen shot image 294 of a portion of main window 220 following selection of first and second actors 266′ and 266″, respectively, and their associated first and second behaviors 278′ and 278″ (FIG. 20), respectively, for scenario 211. Since each of first and second actors 266′ and 266″ is defined as a mask portion, or matte, during post-production processing, each of first and second actors 266′ and 266″ overlays first background image 234′. - It should be noted that both first and
second actors 266′ and 266″ appear to be behind portions of first background image 234′. For example, first actor 266′ appears to be partially hidden by a rock 296, and second actor 266″ appears to be partially hidden by shrubbery 298. During a background editing process, portions of first background image 234′ can be specified as foreground layers. Thus, rock 296 and shrubbery 298 are each defined as a foreground layer within first background image 234′. When regions within a background image are defined as foreground layers, these foreground layers will overlay the mask portion of the video clips corresponding to first and second actors 266′ and 266″. This layering feature is described in greater detail in connection with background editing of FIGS. 27-29. - With reference back to
FIG. 15, when a determination is made at query task 290 that no further actors 266 are to be selected, program flow proceeds to a task 300. At task 300, the scenario author has the opportunity to build the scenario logic flow for scenario 211 (FIG. 14). That is, although actors and behaviors have been selected, as of yet there is no definition of when the actors may appear, nor of the interaction, or lack thereof, between the actors and behaviors. That capability is provided to the scenario author to further customize scenario 211 in accordance with his or her particular training agenda. - Referring to
FIGS. 23-24 in connection with task 300, FIG. 23 shows a screen shot image 302 of scenario logic window 226 from main window 220 (FIG. 16) for configuring the scenario logic of scenario 211 (FIG. 14), and FIG. 24 shows a table 304 of a key of exemplary symbols 306 utilized within scenario logic window 226. Symbols 306 represent actions, events, and activities within a logic flow for scenario 211. By interconnecting symbols 306 within scenario logic window 226, the “logic”, or relationship between the elements, can be readily constructed. - Table 304 includes a “start point”
symbol 308, an “external command” symbol 310, a “trigger” symbol 312, an “event” symbol 314, an “actor/behavior” symbol 316, an “ambient sound” symbol 318, and a “delay” symbol 320. Symbols 306 are provided herein for illustrative purposes. Those skilled in the art will recognize that symbols 306 could take on a great variety of shapes. Alternatively, color coding could be utilized to differentiate the various symbols. - As shown in
FIG. 23, a scenario logic flow 322 for scenario 211 (FIG. 14) includes a number of interconnected symbols 306. Start point symbol 308 is automatically presented within scenario logic window 226, and provides command and control to the scenario playback system, in this case three hundred degree surround simulation system 108 (FIG. 9), to load and initialize scenario 211. Actor/behavior symbol(s) 316 may appear in scenario logic window 226 when actors 266 (FIG. 19) performing behaviors 278 (FIG. 20) are combined with one of background images 234. However, actor/behavior symbol(s) 316 are “floating”, or unconnected with regard to any other symbols appearing in scenario logic window 226, until the scenario author creates those connections. - Interactive buttons within
scenario logic window 226 can include an “external command” button 324, a “timer” button 326, and a “sound” button 328. External command symbol 310 is created in scenario logic window 226 when the scenario author clicks on external command button 324. External commands are interactions that may be created within scenario logic flow 322 that occur from outside of simulation system 108 (FIG. 9). These external commands may be stored within database 203 (FIG. 14), and may be listed, for example, in properties window 228 (FIG. 16) of main window 220 (FIG. 16) when external command symbol 310 is created in scenario logic window 226. The scenario author can then select one of the external commands listed in properties window 228. Exemplary external commands can include an instructor start command, which starts the motion video of scenario 211, and a shoot command, which causes an actor to be shot, although not by trainee 26 (FIG. 1). Other exemplary external commands could include initiating a shoot back device toward trainee 26, initiating random appearance of another actor, initiating a specialized sound, and so forth. In operation of scenario 211, these external commands can be displayed for ease of use by the instructor. -
Delay symbol 320 is created in scenario logic window 226 when the scenario author clicks on timer button 326. The use of timer button 326 allows the scenario author to input a time delay into scenario logic flow 322. Appropriate text may appear in, for example, properties window 228 of main window 220 when delay symbol 320 is created in scenario logic window 226. This text can allow the author to enter a duration of the delay, or can allow the author to select from a number of pre-determined durations of the delay. -
Ambient sound symbol 318 is created in scenario logic window 226 when the scenario author clicks on sound button 328. The use of sound button 328 allows the scenario author to input ambient sound into scenario logic flow 322. Text may appear in, for example, properties window 228 of main window 220 when ambient sound symbol 318 is created in scenario logic window 226. This text may be a list of sound files that are stored within database 203 (FIG. 14). The scenario author can then select one of the sound files listed in properties window 228. Exemplary sound files include wilderness sounds, warfare sounds, street noise, traffic, and so forth. Alternatively, properties window 228 may present a browse capability when the scenario author clicks on sound button 328 so that the author is enabled to browse within computing system 200 (FIG. 14) or over a network connection for a particular sound file. -
Trigger symbol 312 within scenario logic flow 322 represents notification to actor/behavior symbol 316 that something has occurred. Event symbol 314 within scenario logic flow 322, in contrast, represents an occurrence of something within an actor's behavior that will cause a reaction within scenario logic flow 322. In this exemplary embodiment, trigger symbol 312 and event symbol 314 can be generated when the scenario author “right clicks” on actor/behavior symbol 316. -
FIG. 25 shows a screen shot image 330 of an exemplary drop-down menu 332 of events 334 associated with scenario logic window 226 (FIG. 23). When the scenario author “right clicks” on actor/behavior symbol 316 representing first actor 266′, drop-down menu 332 is revealed and one of events 334 can be selected. Drop-down menu 332 reveals a set of events 334 that can occur within scenario logic flow 322 in response to an actor's behavior. By utilizing pointer 248 to point to and select one of events 334, the scenario author may assign one of events 334, for example, a “Fall” event 334′, to first actor 266′ within scenario logic flow 322. -
FIG. 26 shows a screen shot image 336 of exemplary drop-down menu 332 of triggers 338 associated with scenario logic window 226 (FIG. 23). When the scenario author “right clicks” on actor/behavior symbol 316 representing second actor 266″, drop-down menu 332 is again revealed and one of the listed triggers 338 can be selected. Drop-down menu 332 reveals a set of triggers 338 that can provide notification to an associated actor/behavior symbol 316. By utilizing pointer 248 to point to and select one of triggers 338, the scenario author may assign one of triggers 338, for example, a “Shot” trigger 338′, to second actor 266″ within scenario logic flow 322. - Referring back to
FIG. 23, the various symbols 306 within scenario logic flow 322 are interconnected by arrows to define the various relationships and interactions. Solid arrows 340 represent the interconnections made by the scenario author. Dashed arrows 342, in contrast, are automatically generated when events 334 and/or triggers 338 are assigned to various actor/behavior symbols 316 within scenario logic flow 322. -
Scenario logic flow 322 describes a “script” for scenario 211 (FIG. 14). The “script” is as follows: scenario 211 starts (Start point 308), the instructor initiates events (Instructor start 310), ambient sound immediately begins (Ambient Sound 318), and first actor 266′ immediately begins performing his behavior (Offender 1 316). If first actor 266′ falls (Fall 314), a delay is imposed (Delay 320). Second actor 266″ begins performing his behavior (Guard 1 316) following expiration of the delay. The instructor shoots second actor 266″ (Shoot Guard 1 310), which causes a trigger (Shot 312) notifying second actor 266″ to react. The reaction of second actor 266″ is logged as an event (Fall 314). -
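The "script" above can be replayed as a tiny event-driven trace. This is an illustrative sketch only — the function, event names, and log strings are invented to mirror the symbols in the logic flow, not taken from the patent's implementation:

```python
def run_script(events):
    """Replay the simplified scenario logic and return the log of actions.

    The initial entries correspond to Start point 308, Ambient Sound 318,
    and Offender 1's actor/behavior symbol 316; the handled events mirror
    Fall 314, Delay 320, and the instructor's Shoot Guard 1 command 310."""
    log = ["start", "ambient sound", "Offender 1 begins behavior"]
    for ev in events:
        if ev == "offender falls":
            # Fall event imposes the delay, after which Guard 1 starts.
            log += ["delay", "Guard 1 begins behavior"]
        elif ev == "shoot Guard 1":
            # External command fires the Shot trigger; the reaction is
            # logged as an event.
            log += ["trigger: Guard 1 shot", "event: Guard 1 falls"]
    return log
```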
Scenario logic 322 is highly simplified for clarity of understanding. However, in general it should be understood that scenario logic can be generated such that the behavior of a first actor can affect the behavior of a second actor and/or that an external command can affect the behavior of either of the actors. The behaviors of the actors can also be affected by interaction of trainee 26 within scenario 211. This interaction can occur at the behavior level of the actors, and is described in greater detail in connection with FIGS. 33-34. - Returning to
FIG. 15, after scenario logic flow 322 (FIG. 23) is built at task 300, scenario provision process 202 proceeds to a task 344. At task 344, scenario 211 (FIG. 14) is saved into memory 210 (FIG. 14). Following task 344, a task 346 is performed. At task 346, scenario 211 is displayed on the scenario playback system, for example, three hundred degree surround simulation system 108 (FIG. 9), for interaction with trainee 26 (FIG. 1). -
Scenario provision process 202 includes ellipses 348 separating scenario save task 344 and scenario display task 346. Ellipses 348 indicate an omission of standard processing tasks for simplicity of illustration. These processing tasks may include saving scenario 211 in a format compatible for playback at simulation system 108, writing scenario 211 to a storage medium that is readable by simulation system 108, conveying scenario 211 to simulation system 108, and so forth. Following task 346, scenario provision process 202 exits. - Referring to
FIGS. 27-29, FIG. 27 shows a screen shot image 350 of a background editor window 352 with a pan tool 354 enabling a pan capability. FIG. 28 shows a screen shot image 356 of background editor window 352 with a foreground marking tool 358 enabling a layer capability, and FIG. 29 shows a screen shot image 360 of background editor window 352 with first background image 234′ selected for saving into database 203 (FIG. 14). - As mentioned briefly above, background images 234 (
FIG. 17) can be obtained utilizing a camera and creating still images within an actual, or real, environment. These still images are desirably in a panoramic format. In accordance with the present invention, a still image may be manipulated in a digital environment through background editor window 352 to achieve a desired one of background images 234. - Interactive buttons within
background editor window 352 include a “load panoramic” button 362, a “pan” button 364, and a “layer” button 366. Load panoramic button 362 allows a user to browse within computing system 200 (FIG. 14), over a network connection, or to load from a digital camera, a particular still image 368. Once selected, still image 368 will be presented on adjacent panels 370 within background editor window 352, which represent panels 254 (FIG. 18) within scenario layout window 222 (FIG. 18). - As illustrated in
FIG. 27, the user can click on pan button 364 to reveal pan tool 354. Pan tool 354 allows the user to manipulate still image 368 horizontally and vertically for optimal placement of adjacent views within panels 370. A horizontal lock 372 and a vertical lock 373 can be selected after still image 368 has been manipulated to a desired position. A zoom adjustment element 374 may also be provided to enable the user to move still image 368 inward and outward at an appropriate depth. - As illustrated in
FIG. 28, the user can click on layer button 366 to reveal foreground marking tool 358. Foreground marking tool 358 allows the user to cover or “paint” over areas within still image 368 that he or she wishes to be specified as a foreground layer. Foreground marking tool 358 may take on a variety of forms for encircling a region, creating a “feathered” edge, subtracting a region, and so forth, known to those skilled in the art. In this image, the foreground layer is designated by a shaded region 376 created by movement of foreground marking tool 358. Shaded region 376 will be saved as a data file in association with still image 368 to define a foreground layer 378 (FIG. 29). - As illustrated in
FIG. 29, after still image 368 has been manipulated into a desired position, as needed, and foreground layer(s) 378 have been specified, the user can save still image 368 as first background image 234′ by conventional procedures using a “save” button 380. It should be noted that foreground layers 378 will not appear as shaded region 376 (FIG. 28); instead, foreground layers 378 within first background image 234′ will appear as the image of the portion of still image 368 that was marked in FIG. 28. Alternatively, shaded region 376 may be optionally toggled visible, invisible, or partially transparent. -
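The layering rule that results — marked foreground layers overlay the actor's mask portion, which in turn overlays the rest of the background — can be sketched per pixel with boolean flags standing in for real image data. The function and its inputs are illustrative assumptions:

```python
def composite_pixel(background, actor, actor_opaque, is_foreground_layer):
    """Return which element is visible at one pixel.

    is_foreground_layer: pixel was marked as a foreground layer (e.g. the
    rock or shrubbery regions) in the background editor.
    actor_opaque: pixel lies within the actor's mask portion (the matte).
    """
    if is_foreground_layer:
        return background   # foreground layer hides the actor
    if actor_opaque:
        return actor        # actor's mask portion hides the background
    return background       # transparent matte: background shows through
```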
FIG. 30 shows an exemplary table 382 of animation sequences 384 associated with actors 266 for use within scenario provision process 202 (FIG. 15). Table 382 relates to information stored within database 203 (FIG. 14) of scenario provision process 202. - In the context of the following description,
animation sequences 384 are the scripted actions that any of actors 266 may perform. Video clips 386 may be recorded of actors 266 performing animation sequences 384 against a blue or green screen. Information regarding video clips 386 is subsequently recorded in association with one of actors 266. In addition, video clips 386 are distinguished by identifiers 388, such as a frame number sequence, in table 382 characterizing one of animation sequences 384. Thus, video clips 386 portray actors 266 performing particular animation sequences 384. - A logical grouping of
animation sequences 384 defines one of behaviors 278 (FIG. 20), as shown and discussed in connection with FIGS. 32-34. When a user wishes to assign one of behaviors 278 to one of actors 266 at task 272 (FIG. 15) of scenario provision process 202 (FIG. 15), video clips 386 of the animation sequences 384 that make up the desired one of behaviors 278 must first be recorded in database 203 (FIG. 14). -
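This support rule — a behavior is a grouping of animation sequences, and it can be assigned to an actor only if a clip of every sequence in the grouping was recorded for that actor — can be sketched as a lookup table. It also covers both selection directions described earlier (behaviors per actor, actors per behavior). All actor names, sequence names, and frame ranges below are hypothetical:

```python
# Hypothetical grouping: one behavior built from three animation sequences.
behaviors = {"Hostile A": ["stand", "shoot", "fall"]}

# Hypothetical clip table keyed by (actor, animation sequence), with a
# frame-range identifier standing in for identifiers 388.
video_clips = {
    ("Offender 1", "stand"): "frames 0000-0120",
    ("Offender 1", "shoot"): "frames 0121-0240",
    ("Offender 1", "fall"):  "frames 0241-0300",
    ("Guard 1",    "stand"): "frames 0000-0110",
}

def actor_supports(actor, behavior):
    """True if every animation sequence in the behavior has a recorded clip."""
    return all((actor, seq) in video_clips for seq in behaviors[behavior])

def supporting_actors(behavior):
    """Actors eligible for a chosen behavior (the reverse drop-down)."""
    actors = {a for (a, _seq) in video_clips}
    return sorted(a for a in actors if actor_supports(a, behavior))
```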
FIGS. 31a-d show an illustration of a single frame 390 of an exemplary one of video clips 386 undergoing video filming and editing. Motion picture video filming may be performed utilizing a standard or high definition video camera. Video editing may be performed utilizing video editing software for generating digital “masks” of the actor's performance. Those skilled in the art will recognize that video clips 386 contain many more than a single frame. However, only a single frame 390 is shown to illustrate post-production processing that may occur to generate video clips 386 for use with scenario provision process 202 (FIG. 15). - At
FIG. 31a, first actor 266′ is filmed against a backdrop 392 having a single color, such as a green or blue screen. At FIG. 31b, a matte 393, sometimes referred to as an alpha channel, is created that defines a mask portion 394 (i.e., the area that first actor 266′ occupies) and a transparent portion 396 (i.e., the remainder of frame 390 in which backdrop 392 is visible). At FIG. 31c, zones, illustrated as shaded circular and oval regions 398, are defined on mask portion 394. In an exemplary embodiment, these zones 398 are hit zones that provide information so scenario 211 (FIG. 14) can detect discharge of a weapon into one of zones 398. That is, scenario 211 can determine whether trainee 26 (FIG. 1) hits or misses a target, such as first actor 266′. -
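A minimal sketch of this matte-based hit test follows, treating the alpha channel as a 2D array of opacity values in [0, 1]: where opacity exceeds roughly ninety-five percent the actor is "solid" and can be hit, and anywhere less opaque the shot passes through to the next object. The array layout, threshold constant, and function name are assumptions for illustration:

```python
SOLID_THRESHOLD = 0.95  # assumed "approximately ninety-five percent" cutoff

def shot_hits_actor(alpha, x, y):
    """True if a shot at pixel (x, y) strikes the actor's mask portion.

    alpha is a row-major 2D list of per-pixel opacity values; shots
    outside the frame, or into the transparent portion, miss the actor.
    """
    if not (0 <= y < len(alpha) and 0 <= x < len(alpha[0])):
        return False
    return alpha[y][x] > SOLID_THRESHOLD

# Tiny illustrative matte: only the center and lower-middle pixels are solid.
alpha = [
    [0.0, 0.0, 0.0],
    [0.0, 1.0, 0.2],
    [0.0, 0.98, 0.0],
]
```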
Zones 398 can be computed using matte 393, i.e., the alpha channel, as a starting point. For example, in the area of frame 390 where the opacity exceeds approximately ninety-five percent, i.e., mask portion 394, it can be assumed that the image asset, i.e., first actor 266′, is “solid” and therefore can be hit by a bullet. Any less opacity will cause the bullet to “miss” and hit the next object in the path of the bullet. This hit zone information can be enhanced by adding different types of zones 398 to different areas of first actor 266′. For example, FIG. 31c shows circular hit zones 400 and oval hit zones 402. By using differing ones of zones 398, behavior 278 for first actor 266′ can generate an event related to a strike in one of circular and oval hit zones 400 and 402. Information regarding zones 398 is stored in a file of hit zone information for each frame 390 in a given one of video clips 386 (FIG. 30). - At
FIG. 31d, single frame 390 is shown with foreground layer 378 overlaying mask portion 394 representing first actor 266′. FIG. 31d is provided herein to demonstrate a situation in which foreground layer 378 overlies mask portion 394. In such a circumstance, only those hit zones left uncovered, in this case two circular hit zones 400 and a single oval hit zone 402, can be hit. - Referring to
FIGS. 32-33, FIG. 32 shows a screen shot image 404 of a behavior editor window 406 showing behavior logic flow 408 for first behavior 278′, and FIG. 33 shows a table 410 of a key of exemplary symbols 412 utilized within behavior editor window 406. Symbols 412 represent actions, events, activities, and video clips within behavior logic flow 408. By interconnecting symbols 412 within behavior editor window 406, the “logic”, or relationship between the elements, can be readily constructed for one of behaviors 278. - Like table 304 (
FIG. 24), table 410 includes “start point” symbol 308, “trigger” symbol 312, and “event” symbol 314. In addition, table 410 includes an “animation sequence” symbol 414, a “random” symbol 416, and an “option” symbol 418. Symbols 412 are provided herein for illustrative purposes. Those skilled in the art will recognize that symbols 412 could take on a great variety of shapes. Alternatively, color coding could be utilized to differentiate the various symbols. - As shown in
FIG. 32, behavior logic flow 408 for first behavior 278′ includes a number of interconnected symbols 412. Start point symbol 308 is automatically presented within behavior logic flow 408, and provides command and control to load and initialize first behavior 278′. - A branching options window 420 facilitates generation of
behavior logic flow 408. Branching options window 420 includes a number of user interactive buttons. For example, window 420 includes a “branch” button 422, an “event” button 424, a “trigger” button 426, a “random” button 428, and an “option” button 430. In general, selection of branch button 422 allows for a branch to occur within behavior logic flow 408. Selection of event button 424 results in the generation of event symbol 314, and selection of trigger button 426 results in the generation of trigger symbol 312 in behavior logic flow 408. - It is interesting to note that the definitions of trigger and
event symbols within behavior logic flow 408 differ slightly from their definitions set forth in connection with scenario logic flow 322 (FIG. 23). That is, trigger symbol 312 generated within behavior logic flow 408 is a notification that something has occurred within that behavior logic flow 408. A trigger within behavior logic flow 408 becomes an event within scenario logic flow 322. Similarly, event symbol 314 generated within behavior logic flow 408 is an occurrence of something that results in a reaction of the actor in accordance with behavior logic flow 408. An event within behavior logic flow 408 becomes a trigger within scenario logic flow 322. - Selection of
random button 428 results in the generation of random symbol 416 in behavior logic flow 408. Similarly, selection of option button 430 results in the generation of option symbol 418 in behavior logic flow 408. The introduction of random and/or option symbols 416 and 418 into behavior logic flow 408 introduces random or unexpected properties to a behavior logic flow. These random or unexpected properties will be discussed in connection with FIG. 34. - A
properties window 432 allows the selection of animation sequences 384. In addition, properties window 432 allows the behavior author to assign various properties to the selected one of animation sequences 384. These properties can include, for example, selection of a particular sound associated with a gunshot. When one of animation sequences 384 is generated, animation sequence symbol 414 will appear in behavior editor window 406. The various symbols 412 will be presented in behavior editor window 406 as “floating”, or unconnected with regard to any other symbols 412 appearing in window 406, until the behavior author creates those connections. Symbols 412 within behavior logic flow 408 are interconnected by arrows 432 to define the various relationships and interactions.
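The trigger/event duality described above, in which a trigger raised within a behavior logic flow arrives at the scenario logic flow as an event, and a scenario trigger arrives at each behavior as an event, can be sketched as follows. This is an illustrative sketch only; the Scenario and Behavior classes and their method names are assumptions, not the patent's implementation:

```python
class Scenario:
    """Minimal stand-in for a scenario logic flow (cf. scenario logic flow 322)."""

    def __init__(self):
        self.behaviors = []
        self.events = []  # behavior triggers, recorded here as scenario events

    def on_event(self, name):
        # A trigger raised inside a behavior logic flow is received
        # by the scenario logic flow as an event.
        self.events.append(name)

    def raise_trigger(self, name):
        # A trigger raised inside the scenario logic flow is delivered
        # to each behavior logic flow as an event.
        for behavior in self.behaviors:
            behavior.on_event(name)


class Behavior:
    """Minimal stand-in for a behavior logic flow (cf. behavior logic flow 408)."""

    def __init__(self, scenario):
        self.scenario = scenario
        scenario.behaviors.append(self)
        self.events = []  # scenario triggers, recorded here as behavior events

    def on_event(self, name):
        self.events.append(name)

    def raise_trigger(self, name):
        self.scenario.on_event(name)
```

With these stand-ins, a behavior's `raise_trigger("Fall")` shows up in the scenario's event list, mirroring how a Fall trigger is communicated to scenario logic flow 322 as an event.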
Behavior logic flow 408 describes a “script” for one of behaviors 278 (FIG. 20), in this case first behavior 278′. The “script” is as follows: behavior flow 408 starts (Start point 308) and animation sequence 384 is presented (Stand 414). If an event occurs (Shot 314), a trigger is generated (Fall 312), and another animation sequence 384 is presented (Fall 414). The trigger (Fall 312) is communicated as needed within scenario logic flow 322 (FIG. 23) as an event, Fall 314 (FIG. 23).
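The “script” above can be read as a small state machine: present Stand until a Shot event arrives, then generate a Fall trigger and present Fall. A minimal sketch, assuming names taken from FIG. 32 (the class and its attributes are illustrative, not the patent's code):

```python
class FirstBehavior:
    """Sketch of first behavior 278': present Stand until Shot, then Fall."""

    def __init__(self):
        self.sequence = "Stand"  # animation sequence currently presented
        self.triggers = []       # triggers reported to the scenario logic flow

    def on_event(self, event):
        # A Shot event while standing generates the Fall trigger and
        # branches to the Fall animation sequence.
        if event == "Shot" and self.sequence == "Stand":
            self.triggers.append("Fall")
            self.sequence = "Fall"
```

In this sketch the reported trigger list is what the scenario logic flow would consume as events.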
FIG. 34 shows a partial screen shot image 434 of behavior editor window 406 showing a behavior logic flow 436 for another one of behaviors 278. Behavior logic flow 436 is significantly more complex than behavior logic flow 408 (FIG. 32). However, flow 436 is readily constructed utilizing symbols 412 (FIG. 33), and introduces various random properties. - The “script” for
behavior logic flow 436 is as follows: behavior flow 436 starts (Start point 308) and animation sequence 384 is presented (Duck 414). Next, a random property is introduced (Random 416). The random property (Random 416) allows behavior logic flow 436 to branch to either an optional side logic flow (Side 418) or an optional stand logic flow (Stand 418). Option symbols 418 indicate that logic flow can include either side logic flow, stand logic flow, or both side and stand logic flows when implementing the random property (Random 416). - First reviewing side logic flow (Side 418),
animation sequence 384 is presented (From Duck: Side & Shoot 414). This translates to “from the duck position, move sideways and shoot.” Next, animation sequence 384 is presented (From Side: Shoot 414), meaning from the sideways position, shoot weapon. Next, a random property (Random 416) is introduced. The random property allows behavior logic flow 436 to branch and present either animation sequence 384 (From Side: Shoot 414) or animation sequence 384 (From Side: Shoot & Duck 414). - During any of the three animation sequences, (From Duck: Side & Shoot 414), (From Side: Shoot 414), and (From Side: Shoot & Duck 414), an event can occur (Shot 314). If an event occurs (Shot 314), a trigger is generated (Fall 312), and another
animation sequence 384 is presented (From Side: Shoot & Fall 414). If another event occurs (Shot 314), another trigger is generated (Fall 312), and yet another animation sequence 384 (Twitch 414) is presented. If animation sequence 384 (From Side: Shoot & Duck 414) is presented for a period of time and no event occurs, i.e., Shot 314 does not occur, behavior logic flow 436 loops back to animation sequence 384 (Duck 414). - Next reviewing stand logic flow (Stand 418),
animation sequence 384 is presented (From Duck: Stand & Shoot 414). This translates to “from the duck position, stand up and shoot.” Next, animation sequence 384 is presented (From Stand: Shoot and Duck 414), meaning from the standing position, shoot weapon, then duck. If an event associated with animation sequences 384 (From Duck: Stand & Shoot 414) and (From Stand: Shoot and Duck 414) does not occur, i.e., Shot 314 does not occur, behavior logic flow 436 loops back to animation sequence 384 (Duck 414). - However, during either of the two animation sequences 384 (From Duck: Stand & Shoot 414) and (From Stand: Shoot and Duck 414), an event can occur (Shot 314). If an event occurs (Shot 314), a trigger is generated (Fall 312), and another
animation sequence 384 is presented (From Stand: Shoot & Fall 414). If another event occurs (Shot 314), another trigger is generated (Fall 312), and yet another animation sequence 384 (Twitch 414) is presented. - Although only two behavior logic flows for behaviors 278 (
FIG. 20) are described herein, it should be apparent that a variety of customized behavior logic flows can be developed and stored within database 203 (FIG. 14). When actors 266 (FIG. 19) are filmed performing particular animation sequences 384 (FIG. 30), video clips 386 (FIG. 30) associated with these animation sequences 384 can be assembled in accordance with behaviors 278 to form an actor/behavior definition for scenario 211 (FIG. 14). - In summary, the present invention teaches a method for scenario provision in a simulation system that utilizes executable code operable on a computing system. The executable code is in the form of a scenario provision process that permits the user to create new scenarios with the importation of sounds and image objects, such as panoramic pictures, still digital pictures, standard and high-definition video files, and green or blue screen video. Green or blue screen based filming provides for extensive reusability of content, as individual “actors” can be filmed and then “dropped” into various settings with various other “actors.” In addition, the program and method permit the user to place the image objects (for example, actor video clips) in a desired location within a background image. The program and method further allow a user to manipulate a panoramic image for use as a background image in a single- or multi-screen scenario playback system. The program and method permit the user to assign sounds and image objects to layers so that the user can define which object is displayed in front of or behind another object. In addition, the program and method enable the user to readily construct scenario logic flow defining a scenario through a readily manipulated and understandable flowchart-style user interface.
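As a concrete illustration of the matte-based hit detection described in connection with FIGS. 31 a-31 d, the following sketch tests a shot position against a frame's alpha channel and its hit zones. The ninety-five percent opacity threshold comes from the description above; the function name, data layout, and zone representation are assumptions for illustration only:

```python
SOLID_OPACITY = 0.95  # opacity above which the actor is treated as "solid"

def hit_test(alpha, zones, x, y):
    """Return the name of the zone struck at (x, y), or None for a miss.

    alpha -- 2D list of per-pixel opacity values (0.0-1.0) for one frame,
             indexed as alpha[y][x] (the frame's matte / alpha channel)
    zones -- list of (name, contains) pairs, where contains(x, y) tests
             membership in a circular or oval hit zone
    """
    if alpha[y][x] <= SOLID_OPACITY:
        return None  # insufficient opacity: the bullet "misses" and passes on
    for name, contains in zones:
        if contains(x, y):
            return name
    return "body"  # solid, but outside any specially defined zone
```

A foreground layer overlying the mask, as in FIG. 31 d, could be modeled by testing the foreground layer's own alpha first and returning a miss for covered pixels.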
- Although the preferred embodiments of the invention have been illustrated and described in detail, it will be readily apparent to those skilled in the art that various modifications may be made therein without departing from the spirit of the invention or from the scope of the appended claims. For example, the process steps discussed and the images provided herein can take on a great number of variations and can be performed and shown in a differing order than that which was presented.
Claims (32)
1. A method for providing a scenario for use in a scenario playback system, said method utilizing a computing system executing scenario creation code, and said method comprising:
choosing, at said computing system, a background image for said scenario;
selecting a video clip from a database of video clips stored in said computing system, said video clip having a mask portion and a transparent portion;
combining said video clip with said background image to create said scenario, said mask portion forming a foreground image over said background image; and
displaying said scenario on a display of said scenario playback system for interaction with a user.
2. A method as claimed in claim 1 wherein said background image is chosen from a plurality of background images stored in said computing system, each of which portrays an environment.
3. A method as claimed in claim 1 wherein said background image is a panoramic format image, said display includes a first screen and a second screen adjacent said first screen, and said displaying operation comprises:
presenting a first view of said background image on said first screen; and
presenting a second view of said background image on said second screen, said first and second views being adjacent portions of said background image.
4. A method as claimed in claim 3 further comprising manipulating said background image relative to said first and second screens to form said first and second views.
5. A method as claimed in claim 1 further comprising specifying a foreground layer from a portion of said background image, said foreground layer overlaying said mask portion of said video clip in said scenario.
6. A method as claimed in claim 1 further comprising recording, prior to said selecting operation, said video clips in said database, said video clips portraying an actor performing animation sequences.
7. A method as claimed in claim 6 wherein said actor is a first actor, and said method further comprises:
recording, prior to said selecting operation, additional video clips into said database, said additional video clips portraying a second actor performing said animation sequences; and
enabling a selection of said video clips for said first actor and said additional video clips for said second actor for combination with said background image.
8. A method as claimed in claim 6 further comprising distinguishing each of said video clips in said database by an identifier characterizing one of said animation sequences.
9. A method as claimed in claim 6 wherein said recording operation comprises:
filming said actor performing said animation sequences against a backdrop having a single color to obtain said video clips; and
creating a matte defining said transparent portion and said mask portion such that an image of said actor forms said mask portion.
10. A method as claimed in claim 6 wherein for one of said animation sequences, said method further comprises:
defining zones on said mask portion;
storing, in said computing system, zone information corresponding to said zones in association with said one of said animation sequences;
detecting, in response to said displaying operation, an event within one of said zones; and
initiating, within said scenario, a response to said event.
11. A method as claimed in claim 10 wherein:
said defining operation defines hit zones;
said detecting operation detects discharge of a weapon into one of said hit zones; and
said initiating operation includes branching to one of said animation sequences portraying a reaction of said actor to said discharge.
12. A method as claimed in claim 10 wherein:
said defining operation defines a first hit zone and a second hit zone;
said detecting operation detects discharge of a weapon into one of said first and second hit zones; and
said initiating operation includes:
when said discharge is detected in said first hit zone, branching to a first one of said animation sequences portraying a first reaction of said actor to said discharge; and
when said discharge is detected in said second hit zone, branching to a second one of said animation sequences portraying a second reaction of said actor to said discharge.
13. A method as claimed in claim 1 wherein said video clips portray an actor performing animation sequences, said video clip is a first video clip of a first one of said animation sequences, and said method further comprises:
selecting a second video clip from said database, said second video clip being a second one of said animation sequences of said actor; and
linking said first and second video clips in a logic flow of said first and second video clips to form a behavior for said actor, and said combining operation combines said first and second video clips in said logic flow with said background image to create said scenario of said actor exhibiting said behavior.
14. A method as claimed in claim 13 further comprising:
selecting a third video clip from said database, said third video clip being a third one of said animation sequences of said actor;
selectively linking said third video clip with said first and second video clips in said logic flow;
assigning an event to said first video clip;
initiating said third video clip when said event occurs during said presenting operation; and
initiating said second video clip when said event fails to occur.
15. A method as claimed in claim 1 wherein said database of video clips includes video clips of a plurality of actors performing a plurality of animation sequences, said database further includes a plurality of behaviors formed from logic flows of ones of said animation sequences, and:
said selecting operation comprises:
choosing a first actor from said plurality of actors;
assigning a first behavior to said first actor from said plurality of behaviors;
choosing a second actor from said plurality of actors; and
assigning a second behavior to said second actor from said plurality of behaviors; and
said combining operation includes combining said video clips of said first actor exhibiting said first behavior and said video clips of said second actor exhibiting said second behavior with said background image to create said scenario that includes said first and second actors.
16. A method as claimed in claim 15 further comprising:
assigning an event to one of said video clips of said first behavior; and
effecting said second behavior for said second actor in response to initiation of said event within said first behavior for said first actor.
17. A method as claimed in claim 1 wherein said combining operation comprises employing a drag-and-drop function to determine a location of said mask portion of said video clip against said background image.
18. A method as claimed in claim 1 wherein said combining operation comprises resizing said mask portion of said video clip relative to said background image to characterize a distance of said mask portion from said user.
19. A method as claimed in claim 1 wherein said combining operation comprises imposing a time delay on an appearance of said video clip in combination with said background image.
20. A method as claimed in claim 1 wherein said method further comprises:
assigning an event to said video clip;
linking a trigger with said video clip, said trigger being associated with said event; and
said displaying operation includes displaying a second video clip when said trigger is activated indicating an occurrence of said event.
21. A method as claimed in claim 1 wherein said video clip includes an audio signal, and said displaying operation comprises broadcasting said audio signal.
22. A computer-readable storage medium containing executable code for instructing a processor to create a scenario for interactive use in a scenario playback system, said executable code instructing said processor to perform operations comprising:
receiving a first input indicating choice of a background image for said scenario, said background image being one of a plurality of background images stored in a memory associated with said processor, each of said background images portraying an environment;
receiving a second input indicating selection of an actor from a plurality of actors stored in said memory;
receiving a third input indicating assignment of a behavior from a plurality of behaviors stored in said memory;
accessing video clips of said actor from a database of said video clips stored in said memory, said video clips portraying said actor performing animation sequences in accordance with said behavior, each of said video clips having a mask portion and a transparent portion;
combining said video clips with said background image to create said scenario, said mask portion forming a foreground image over said background image; and
saving said scenario for presentation on a display of said scenario playback system for interaction with a user.
23. A computer-readable storage medium as claimed in claim 22 wherein said actor is a first actor, said behavior is a first behavior, and said executable code instructs said processor to perform further operations comprising:
receiving a fourth input indicating selection of a second actor from said plurality of actors;
receiving a fifth input indicating assignment of a second behavior from said plurality of behaviors;
accessing additional video clips of said second actor from said database, said additional video clips portraying said second actor performing said animation sequences in accordance with said second behavior, each of said additional video clips having a mask portion and a transparent portion; and
said combining operation includes combining said video clips of said first actor and said additional video clips of said second actor with said background image to create said scenario that includes said first and second actors.
24. A computer-readable storage medium as claimed in claim 23 wherein said executable code instructs said processor to perform further operations comprising:
assigning an event to one of said video clips of said first behavior; and
effecting said second behavior for said second actor in response to initiation of said event within said first behavior for said first actor.
25. A method for providing a scenario for use in a scenario playback system, said method utilizing a computing system executing scenario creation code, and said method comprising:
choosing, at said computing system, a background image for said scenario;
selecting a video clip from a database of video clips stored in said computing system, said video clip having a mask portion and a transparent portion;
combining said video clip with said background image to create said scenario, said mask portion forming a foreground image over said background image, said combining operation including:
employing a drag-and-drop function to determine a location of said mask portion of said video clip against said background image; and
specifying a foreground layer from a portion of said background image, said foreground layer overlaying said mask portion of said video clip at said location; and
displaying said scenario on a display of said scenario playback system for interaction with a user.
26. A method as claimed in claim 25 wherein said background image is a panoramic format image, said display includes a first screen and a second screen adjacent said first screen, and said displaying operation comprises:
presenting a first view of said background image on said first screen; and
presenting a second view of said background image on said second screen, said first and second views being adjacent portions of said background image.
27. A method as claimed in claim 25 wherein said combining operation comprises resizing said mask portion of said video clip relative to said background image to characterize a distance of said mask portion from said user.
28. A method as claimed in claim 25 wherein said combining operation comprises imposing a time delay on an appearance of said video clip in combination with said background image.
29. A method as claimed in claim 25 wherein said method further comprises:
assigning an event to said video clip;
linking a trigger with said video clip, said trigger being associated with said event; and
initiating said event when said trigger is activated through interaction by said user during said displaying operation.
30. A method as claimed in claim 25 wherein said video clip includes an audio signal, and said displaying operation comprises broadcasting said audio signal.
31. A method for providing a scenario for use in a scenario playback system, said method utilizing a computing system executing scenario creation code, and said method comprising:
filming an actor performing animation sequences against a backdrop having a single color to obtain video clips;
creating a matte defining a transparent portion and a mask portion of said video clips such that an image of said actor forms said mask portion;
differentiating said video clips by identifiers characterizing said animation sequences;
storing, at said computing system, said video clips in connection with said identifiers in a database; and
selecting one of said video clips from said database for combination with a background image to create said scenario; and displaying said scenario on a display of a scenario playback system, said mask portion forming a foreground image over said background image.
32. A method as claimed in claim 31 wherein said database of video clips includes video clips of a plurality of actors performing a plurality of animation sequences, said database further includes a plurality of behaviors formed from logic flows of ones of said animation sequences, and:
said selecting operation comprises selecting said actor from said plurality of actors and assigning a behavior from said plurality of behaviors; and
said combining operation comprises combining said video clips of said actor exhibiting said behavior with said background image to create said scenario.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/286,124 US20060105299A1 (en) | 2004-03-15 | 2005-11-22 | Method and program for scenario provision in a simulation system |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US80094204A | 2004-03-15 | 2004-03-15 | |
US63308704P | 2004-12-03 | 2004-12-03 | |
US11/286,124 US20060105299A1 (en) | 2004-03-15 | 2005-11-22 | Method and program for scenario provision in a simulation system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US80094204A Continuation-In-Part | 2004-03-15 | 2004-03-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060105299A1 true US20060105299A1 (en) | 2006-05-18 |
Family
ID=36386776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/286,124 Abandoned US20060105299A1 (en) | 2004-03-15 | 2005-11-22 | Method and program for scenario provision in a simulation system |
Country Status (1)
Country | Link |
---|---|
US (1) | US20060105299A1 (en) |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070196809A1 (en) * | 2006-02-21 | 2007-08-23 | Mr. Prabir Sen | Digital Reality Sports, Games Events and Activities in three dimensional and interactive space display environment and information processing medium |
US20080206720A1 (en) * | 2007-02-28 | 2008-08-28 | Nelson Stephen E | Immersive video projection system and associated video image rendering system for a virtual reality simulator |
US20080220397A1 (en) * | 2006-12-07 | 2008-09-11 | Livesight Target Systems Inc. | Method of Firearms and/or Use of Force Training, Target, and Training Simulator |
US20100045774A1 (en) * | 2008-08-22 | 2010-02-25 | Promos Technologies Inc. | Solid-state panoramic image capture apparatus |
US20100091036A1 (en) * | 2008-10-10 | 2010-04-15 | Honeywell International Inc. | Method and System for Integrating Virtual Entities Within Live Video |
US20100112528A1 (en) * | 2008-10-31 | 2010-05-06 | Government Of The United States As Represented By The Secretary Of The Navy | Human behavioral simulator for cognitive decision-making |
US20100203952A1 (en) * | 2009-02-12 | 2010-08-12 | Zalewski Gary M | Object Based Observation |
US20100250478A1 (en) * | 2009-03-30 | 2010-09-30 | International Business Machines Corporation | Demo Verification Provisioning |
US20110053120A1 (en) * | 2006-05-01 | 2011-03-03 | George Galanis | Marksmanship training device |
US20110111374A1 (en) * | 2005-11-22 | 2011-05-12 | Moshe Charles | Training system |
US20120156661A1 (en) * | 2010-12-16 | 2012-06-21 | Lockheed Martin Corporation | Method and apparatus for gross motor virtual feedback |
US20150125828A1 (en) * | 2012-08-10 | 2015-05-07 | Ti Training Corp. | Disruptor device simulation system |
US20160019427A1 (en) * | 2013-03-11 | 2016-01-21 | Michael Scott Martin | Video surveillence system for detecting firearms |
US20160059136A1 (en) * | 2004-12-03 | 2016-03-03 | Bob Ferris | Simulated firearms entertainment system |
US20160106380A1 (en) * | 2013-05-01 | 2016-04-21 | Third Eye Technologies Limited | Apparatus for use in the performance of cognitive behaviour therapy and method of performance |
US9355572B2 (en) | 2007-08-30 | 2016-05-31 | Conflict Kinetics Corporation | System and method for elevated speed firearms training |
WO2016154717A1 (en) * | 2015-03-30 | 2016-10-06 | Cae Inc. | A method and system for generating an interactive training scenario based on a recorded real time simulation |
US9501611B2 (en) | 2015-03-30 | 2016-11-22 | Cae Inc | Method and system for customizing a recorded real time simulation based on simulation metadata |
US9638495B2 (en) | 2007-08-30 | 2017-05-02 | Conflict Kinetics Corporation | System for elevated speed firearms training scenarios |
US9646398B2 (en) * | 2014-07-09 | 2017-05-09 | Splunk Inc. | Minimizing blur operations for creating a blur effect for an image |
US20170176127A1 (en) * | 2004-12-03 | 2017-06-22 | Bob Ferris | Simulated firearms entertainment system |
US20170319951A1 (en) * | 2016-05-03 | 2017-11-09 | Performance Designed Products Llc | Video gaming system and method of operation |
CN108124187A (en) * | 2017-11-24 | 2018-06-05 | 互影科技(北京)有限公司 | The generation method and device of interactive video |
US10076711B2 (en) | 2015-09-15 | 2018-09-18 | Square Enix Holdings Co., Ltd. | Remote rendering server with broadcaster |
US10266263B2 (en) * | 2017-01-23 | 2019-04-23 | Hangzhou Zero Zero Technology Co., Ltd. | System and method for omni-directional obstacle avoidance in aerial systems |
US10691303B2 (en) * | 2017-09-11 | 2020-06-23 | Cubic Corporation | Immersive virtual environment (IVE) tools and architecture |
US10861308B1 (en) * | 2019-05-29 | 2020-12-08 | Siemens Industry, Inc. | System and method to improve emergency response time |
WO2021011679A1 (en) * | 2019-07-15 | 2021-01-21 | Street Smarts VR | Magazine simulator for usage with weapons in a virtual reality system |
CN112595169A (en) * | 2021-01-04 | 2021-04-02 | 北京信安通靶场装备科技有限公司 | Actual combat simulation system and actual combat simulation display control method |
CN113720202A (en) * | 2020-05-12 | 2021-11-30 | 广东仁光科技有限公司 | Immersive 3D image shooting training target range software system and method |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4137651A (en) * | 1976-09-30 | 1979-02-06 | The United States Of America As Represented By The Secretary Of The Army | Moving target practice firing simulator |
US4223454A (en) * | 1978-09-18 | 1980-09-23 | The United States Of America As Represented By The Secretary Of The Navy | Marksmanship training system |
US4359223A (en) * | 1979-11-01 | 1982-11-16 | Sanders Associates, Inc. | Interactive video playback system |
US4657511A (en) * | 1983-12-15 | 1987-04-14 | Giravions Dorand | Indoor training device for weapon firing |
US4680012A (en) * | 1984-07-07 | 1987-07-14 | Ferranti, Plc | Projected imaged weapon training apparatus |
US4948371A (en) * | 1989-04-25 | 1990-08-14 | The United States Of America As Represented By The United States Department Of Energy | System for training and evaluation of security personnel in use of firearms |
US5130794A (en) * | 1990-03-29 | 1992-07-14 | Ritchey Kurtis J | Panoramic display system |
US5213503A (en) * | 1991-11-05 | 1993-05-25 | The United States Of America As Represented By The Secretary Of The Navy | Team trainer |
US5215464A (en) * | 1991-11-05 | 1993-06-01 | Marshall Albert H | Aggressor shoot-back simulation |
US5641288A (en) * | 1996-01-11 | 1997-06-24 | Zaenglein, Jr.; William G. | Shooting simulating process and training device using a virtual reality display screen |
US5689437A (en) * | 1996-05-31 | 1997-11-18 | Nec Corporation | Video display method and apparatus |
US5696892A (en) * | 1992-07-10 | 1997-12-09 | The Walt Disney Company | Method and apparatus for providing animation in a three-dimensional computer generated virtual world using a succession of textures derived from temporally related source images |
US5882204A (en) * | 1995-07-13 | 1999-03-16 | Dennis J. Lannazzo | Football interactive simulation trainer |
US5947738A (en) * | 1996-08-26 | 1999-09-07 | Advanced Interactive Systems, Inc. | Simulated weapon with gas cartridge |
US5980254A (en) * | 1996-05-02 | 1999-11-09 | Advanced Interactive Systems, Inc. | Electronically controlled weapons range with return fire |
US6154723A (en) * | 1996-12-06 | 2000-11-28 | The Board Of Trustees Of The University Of Illinois | Virtual reality 3D interface system for data creation, viewing and editing |
US6167562A (en) * | 1996-05-08 | 2000-12-26 | Kaneko Co., Ltd. | Apparatus for creating an animation program and method for creating the same |
US20020018065A1 (en) * | 2000-07-11 | 2002-02-14 | Hiroaki Tobita | Image editing system and method, image processing system and method, and recording media therefor |
US6604064B1 (en) * | 1999-11-29 | 2003-08-05 | The United States Of America As Represented By The Secretary Of The Navy | Moving weapons platform simulation system and training method |
US6616452B2 (en) * | 2000-06-09 | 2003-09-09 | Beamhit, Llc | Firearm laser training system and method facilitating firearm training with various targets and visual feedback of simulated projectile impact locations |
US20040146840A1 (en) * | 2003-01-27 | 2004-07-29 | Hoover Steven G | Simulator with fore and aft video displays |
US20040146840A1 (en) * | 2003-01-27 | 2004-07-29 | Hoover Steven G | Simulator with fore and aft video displays |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170176127A1 (en) * | 2004-12-03 | 2017-06-22 | Bob Ferris | Simulated firearms entertainment system |
US20160059136A1 (en) * | 2004-12-03 | 2016-03-03 | Bob Ferris | Simulated firearms entertainment system |
US20110111374A1 (en) * | 2005-11-22 | 2011-05-12 | Moshe Charles | Training system |
US20070196809A1 (en) * | 2006-02-21 | 2007-08-23 | Mr. Prabir Sen | Digital Reality Sports, Games Events and Activities in three dimensional and interactive space display environment and information processing medium |
US20110053120A1 (en) * | 2006-05-01 | 2011-03-03 | George Galanis | Marksmanship training device |
US20080220397A1 (en) * | 2006-12-07 | 2008-09-11 | Livesight Target Systems Inc. | Method of Firearms and/or Use of Force Training, Target, and Training Simulator |
US20080206720A1 (en) * | 2007-02-28 | 2008-08-28 | Nelson Stephen E | Immersive video projection system and associated video image rendering system for a virtual reality simulator |
US9638495B2 (en) | 2007-08-30 | 2017-05-02 | Conflict Kinetics Corporation | System for elevated speed firearms training scenarios |
US10969190B2 (en) | 2007-08-30 | 2021-04-06 | Conflict Kinetics Corporation | System for elevated speed firearms training |
US9355572B2 (en) | 2007-08-30 | 2016-05-31 | Conflict Kinetics Corporation | System and method for elevated speed firearms training |
US9138647B2 (en) | 2007-08-31 | 2015-09-22 | Sony Computer Entertainment America Llc | Game play skill training |
US20100045774A1 (en) * | 2008-08-22 | 2010-02-25 | Promos Technologies Inc. | Solid-state panoramic image capture apparatus |
US8305425B2 (en) * | 2008-08-22 | 2012-11-06 | Promos Technologies, Inc. | Solid-state panoramic image capture apparatus |
US20100091036A1 (en) * | 2008-10-10 | 2010-04-15 | Honeywell International Inc. | Method and System for Integrating Virtual Entities Within Live Video |
US20100112528A1 (en) * | 2008-10-31 | 2010-05-06 | Government Of The United States As Represented By The Secretary Of The Navy | Human behavioral simulator for cognitive decision-making |
US8721451B2 (en) | 2009-02-12 | 2014-05-13 | Sony Computer Entertainment America Llc | Game play skill training |
US20100203952A1 (en) * | 2009-02-12 | 2010-08-12 | Zalewski Gary M | Object Based Observation |
US8235817B2 (en) | 2009-02-12 | 2012-08-07 | Sony Computer Entertainment America Llc | Object based observation |
US8306934B2 (en) * | 2009-03-30 | 2012-11-06 | International Business Machines Corporation | Demo verification provisioning |
US20100250478A1 (en) * | 2009-03-30 | 2010-09-30 | International Business Machines Corporation | Demo Verification Provisioning |
US20120156661A1 (en) * | 2010-12-16 | 2012-06-21 | Lockheed Martin Corporation | Method and apparatus for gross motor virtual feedback |
US20150125828A1 (en) * | 2012-08-10 | 2015-05-07 | Ti Training Corp. | Disruptor device simulation system |
US9885545B2 (en) * | 2012-08-10 | 2018-02-06 | Ti Training Corp. | Disruptor device simulation system |
US20160019427A1 (en) * | 2013-03-11 | 2016-01-21 | Michael Scott Martin | Video surveillance system for detecting firearms |
US20160106380A1 (en) * | 2013-05-01 | 2016-04-21 | Third Eye Technologies Limited | Apparatus for use in the performance of cognitive behaviour therapy and method of performance |
US9978127B2 (en) * | 2014-07-09 | 2018-05-22 | Splunk Inc. | Aligning a result image with a source image to create a blur effect for the source image |
US9646398B2 (en) * | 2014-07-09 | 2017-05-09 | Splunk Inc. | Minimizing blur operations for creating a blur effect for an image |
US9754359B2 (en) * | 2014-07-09 | 2017-09-05 | Splunk Inc. | Identifying previously-blurred areas for creating a blur effect for an image |
US10152773B2 (en) * | 2014-07-09 | 2018-12-11 | Splunk Inc. | Creating a blurred area for an image to reuse for minimizing blur operations |
US9501611B2 (en) | 2015-03-30 | 2016-11-22 | Cae Inc | Method and system for customizing a recorded real time simulation based on simulation metadata |
WO2016154717A1 (en) * | 2015-03-30 | 2016-10-06 | Cae Inc. | A method and system for generating an interactive training scenario based on a recorded real time simulation |
US10076711B2 (en) | 2015-09-15 | 2018-09-18 | Square Enix Holdings Co., Ltd. | Remote rendering server with broadcaster |
US20170319951A1 (en) * | 2016-05-03 | 2017-11-09 | Performance Designed Products Llc | Video gaming system and method of operation |
US10245506B2 (en) * | 2016-05-03 | 2019-04-02 | Performance Designed Products Llc | Video gaming system and method of operation |
US10500482B2 (en) | 2016-05-03 | 2019-12-10 | Performance Designed Products Llc | Method of operating a video gaming system |
US10266263B2 (en) * | 2017-01-23 | 2019-04-23 | Hangzhou Zero Zero Technology Co., Ltd. | System and method for omni-directional obstacle avoidance in aerial systems |
US10691303B2 (en) * | 2017-09-11 | 2020-06-23 | Cubic Corporation | Immersive virtual environment (IVE) tools and architecture |
CN108124187A (en) * | 2017-11-24 | 2018-06-05 | 互影科技(北京)有限公司 | The generation method and device of interactive video |
US10861308B1 (en) * | 2019-05-29 | 2020-12-08 | Siemens Industry, Inc. | System and method to improve emergency response time |
WO2021011679A1 (en) * | 2019-07-15 | 2021-01-21 | Street Smarts VR | Magazine simulator for usage with weapons in a virtual reality system |
US11346630B2 (en) | 2019-07-15 | 2022-05-31 | Street Smarts Vr Inc. | Magazine simulator for usage with weapons in a virtual reality system |
US11674772B2 (en) | 2019-07-15 | 2023-06-13 | Street Smarts VR | Virtual reality system for usage with simulation devices |
CN113720202A (en) * | 2020-05-12 | 2021-11-30 | 广东仁光科技有限公司 | Immersive 3D image shooting training target range software system and method |
CN112595169A (en) * | 2021-01-04 | 2021-04-02 | 北京信安通靶场装备科技有限公司 | Actual combat simulation system and actual combat simulation display control method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20060105299A1 (en) | Method and program for scenario provision in a simulation system | |
US8770977B2 (en) | Instructor-lead training environment and interfaces therewith | |
US8123526B2 (en) | Simulator with fore and aft video displays | |
US4223454A (en) | Marksmanship training system | |
US4680012A (en) | Projected imaged weapon training apparatus | |
EP0106051B1 (en) | Gunnery training apparatus | |
Harris et al. | Exploring the role of virtual reality in military decision training | |
Júnior et al. | System model for shooting training based on interactive video, three-dimensional computer graphics and laser ray capture | |
US20090305198A1 (en) | Gunnery training device using a weapon | |
US11719503B2 (en) | Firearm training system and method utilizing distributed stimulus projection | |
US20230224510A1 (en) | Apparatus, Method, and System Utilizing USB or Wireless Cameras and Online Network for Force-on-Force Training Where the Participants Can Be In the Same Room, Different Rooms, or Different Geographic Locations | |
CN112185205A (en) | Immersive parallel training system | |
Saunders et al. | AUGGMED: developing multiplayer serious games technology to enhance first responder training | |
KR20070040494A (en) | 3d shooting simulation system | |
CN215064086U (en) | Shooting range system | |
Bennett et al. | Improving situational awareness training for Patriot radar operators | |
KR102655962B1 (en) | Remote Drone Tactical Training System | |
Rashid | Use of VR technology and passive haptics for MANPADS training system | |
Evensen et al. | Using Virtual Environments to Evaluate the Operational Benefit of Augmented Reality | |
Seymour et al. | Modifying law enforcement training simulators for use in basic research | |
Goldberg et al. | Training dismounted combatants in virtual environments | |
Jarmasz et al. | Blended solutions for counter-IED training | |
KR19990045317A (en) | Video shooting training system | |
CN116798288A (en) | Sentry terminal simulator and military duty training assessment simulation equipment | |
Clark et al. | User manual for the Dismounted Infantry Virtual After Action Review System (DIVAARS) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
20051121 | AS | Assignment | Owner name: VIRTRA SYSTEMS, INC., ARIZONA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: FERRIS, ROBERT D.; HILL, ROBERT L.; Reel/Frame: 017276/0615; Effective date: 20051121 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |