US9733333B2 - Systems and methods for monitoring participant attentiveness within events and group assortments - Google Patents

Systems and methods for monitoring participant attentiveness within events and group assortments

Info

Publication number
US9733333B2
Authority
US
United States
Prior art keywords
user
event
communications
steady state
participation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/272,590
Other versions
US20150326458A1
Inventor
Steven M. Gottlieb
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shindig Inc
Original Assignee
Shindig Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shindig Inc filed Critical Shindig Inc
Priority to US14/272,590
Assigned to SHINDIG, INC. reassignment SHINDIG, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GOTTLIEB, STEVEN M.
Publication of US20150326458A1
Application granted
Publication of US9733333B2
Legal status: Active (adjusted expiration)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/80 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using ultrasonic, sonic or infrasonic waves
    • G01S3/802 Systems for determining direction or deviation from predetermined direction
    • G01S3/808 Systems for determining direction or deviation from predetermined direction using transducers spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems
    • G01S3/8083 Systems for determining direction or deviation from predetermined direction using transducers spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems determining direction of source
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L12/1831 Tracking arrangements for later retrieval, e.g. recording contents, participants activities or behavior, network status
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/04 Processing captured monitoring data, e.g. for logfile generation
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/005 Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones

Definitions

  • Online events, such as online classes, are quickly growing in popularity and abundance. What previously could only occur within a physical location (e.g., a classroom) may now be accessible from the comfort of one's home. This has tremendous benefits for many individuals, as it allows people to avoid missing events due to a variety of conditions (e.g., illness, weather, etc.). Furthermore, the number of individuals capable of accessing events may now grow larger than any physical location could accommodate, with the individuals only needing a network connection to “attend” an event.
  • Such methods may include monitoring communications for online participants of an event, determining steady state levels of the communications, detecting changes within the monitored communications, and storing the changes in an event participant log. To determine the steady state level, the communications may be monitored for a period of time and modeled so that changes corresponding to physical events may be discerned from random fluctuations. Changes exceeding a predefined threshold, such as three standard deviations from the modeled data, may signify a “real” change within the communications, whereas changes smaller than the predefined threshold may be classified as non-relevant anomalies.
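  • The following is a minimal sketch of that monitor-model-detect loop in Python; the function name, the use of a simple mean as the steady state model, and the sample format are assumptions for illustration, not the patent's implementation.

```python
import statistics

def detect_changes(samples, window, threshold_sigma=3.0):
    """Flag samples that deviate from a modeled steady state.

    `samples` is a sequence of monitored magnitudes (e.g., audio levels).
    The first `window` samples model the steady state as a simple mean;
    later samples are flagged when they exceed the mean by more than
    `threshold_sigma` standard deviations.
    """
    baseline = samples[:window]
    mean = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline)
    event_participant_log = []
    for i, value in enumerate(samples[window:], start=window):
        if abs(value - mean) > threshold_sigma * sigma:
            # a "real" change: store it in the event participant log
            event_participant_log.append((i, value))
        # smaller deviations are treated as non-relevant anomalies
    return event_participant_log
```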
  • event statistics may be assigned to each online participant based on the stored changes. For example, a user who has been determined to not be paying attention may receive a low participation score or grade, whereas a user who has actively participated in the event may receive a high participation score.
  • FIG. 1 is a block diagram depicting a system in accordance with various embodiments.
  • FIG. 2 is an illustrative block diagram of a device in accordance with various embodiments.
  • FIG. 3A is an illustrative graph of a steady state participation level in accordance with various embodiments.
  • FIG. 3B is an illustrative graph of a detected change within monitored communications in accordance with various embodiments.
  • FIG. 4 is a schematic illustration of a display screen in accordance with various embodiments.
  • FIG. 5 is a schematic illustration of a system for detecting and monitoring audio communications in accordance with various embodiments.
  • FIG. 6 is a schematic illustration of a display screen in accordance with various embodiments.
  • FIGS. 7A and 7B are schematic illustrations of various audio levels before and after a user joins a group in accordance with various embodiments.
  • FIG. 8 is an illustrative flowchart of a process for storing changes within monitored communications in accordance with various embodiments.
  • FIG. 9 is an illustrative flowchart of a process for determining changes within monitored communications in accordance with various embodiments.
  • FIG. 10 is an illustrative flowchart of a process for transmitting event participation data to users in accordance with various embodiments.
  • FIG. 11 is an illustrative flowchart of a process for transmitting grades to users in accordance with various embodiments.
  • FIG. 12 is an illustrative flowchart of a process for transmitting a level of attentiveness to an event administrator in accordance with various embodiments.
  • FIG. 13 is an illustrative flowchart of a process for modifying communications received by a user within a group in accordance with various embodiments.
  • The present invention may take form in various components and arrangements of components, and in various techniques, methods, or procedures and arrangements of steps.
  • The referenced drawings are only for the purpose of illustrating embodiments, and are not to be construed as limiting the present invention.
  • Various inventive features are described below that can each be used independently of one another or in combination with other features.
  • FIG. 1 is a block diagram depicting a system in accordance with various embodiments.
  • System 100 may include server 102 , user devices 104 , and host device 108 , which may communicate with one another across network 106 . Although only three user devices 104 , one host device 108 , and one server 102 are shown within FIG. 1 , persons of ordinary skill in the art will recognize that any number of user devices, host devices, and servers may be used.
  • Server 102 may be any number of servers capable of facilitating communications and/or servicing requests from user devices 104 and/or host device 108 .
  • User device 104 may send and/or receive data from server 102 and/or host device 108 via network 106 .
  • host device 108 may send and/or receive data from server 102 and/or user devices 104 via network 106 .
  • network 106 may facilitate communications between one or more user devices 104 .
  • Network 106 may correspond to any network, combination of networks, or network devices that may carry data communications.
  • network 106 may be any one or any combination of local area networks (“LAN”), wide area networks (“WAN”), telephone networks, wireless networks, point-to-point networks, star networks, token ring networks, hub networks, or any other type of network, or any combination thereof.
    • Network 106 may support any number of protocols such as Wi-Fi (e.g., 802.11 protocol), Bluetooth®, radio frequency systems (e.g., 900 MHz, 1.4 GHz, and 5.6 GHz communications systems), infrared, GSM, GSM plus EDGE, CDMA, quadband, VOIP, or any other protocol, or any combination thereof.
  • network 106 may provide wired communications paths for user devices 104 and/or host device 108 .
  • User devices 104 may correspond to any electronic device or system capable of communicating over network 106 with server 102 , host device 108 , and/or with one or more additional user devices 104 .
  • user devices 104 may be portable media players, cellular telephones, pocket-sized personal computers, personal digital assistants (“PDAs”), desktop computers, laptop computers, and/or tablet computers.
  • User devices 104 may include one or more processors, storage, memory, communications circuitry, input/output interfaces, as well as any other suitable feature.
  • one or more components of user device 104 may be combined or omitted.
  • Host device 108 may correspond to any electronic device or system capable of communicating over network 106 with server 102 and/or user devices 104 .
  • host device 108 may be a portable media player, cellular telephone, pocket-sized personal computer, personal digital assistant (“PDA”), desktop computer, laptop computer, and/or tablet computer.
  • host device 108 may be substantially similar to user devices 104 , and the previous description may apply.
  • one or more additional host devices may be included within system 100 and/or host device 108 may be omitted from system 100 .
  • a user application executed on user device 104 may handle requests independently and/or in conjunction with server 102 .
  • FIG. 2 is an illustrative block diagram of a device in accordance with various embodiments.
  • Device 200 may, in some embodiments, correspond to one of user devices 104 and/or host device 108 of FIG. 1 .
  • Persons of ordinary skill in the art will recognize that device 200 is merely one example of a device that may be implemented within a server-device system, and it is not limited to being only one part of the system.
  • one or more components included within device 200 may be added and/or omitted.
  • device 200 may include processor 202 , storage 204 , memory 206 , communications circuitry 208 , input interface 210 , and output interface 216 .
  • Input interface 210 may, in some embodiments, include camera 212 and microphones 214 .
  • Output interface 216 may, in some embodiments, include display 218 and speaker 220 .
  • one or more of the previously mentioned components may be combined or omitted, and/or one or more components may be added.
  • storage 204 and memory 206 may be combined into a single element for storing data.
  • device 200 may additionally include a power supply, a bus connector, or any other additional component.
  • device 200 may include multiple instances of one or more of the components included therein. However, for the sake of simplicity only one of each component has been shown in FIG. 2 .
  • Processor 202 may include any processing circuitry, such as one or more processors capable of controlling the operations and functionality of device 200 . In some embodiments, processor 202 may facilitate communications between various components within device 200 . Processor 202 may run the device's operating system, applications resident on the device, firmware applications, media applications, and/or any other type of application, or any combination thereof. In some embodiments, processor 202 may process one or more inputs detected by device 200 and perform one or more actions in response to the detected inputs.
  • Storage 204 may include one or more storage mediums.
  • Various types of storage mediums may include, but are not limited to, hard-drives, solid state drives, flash memory, permanent memory (e.g., ROM), or any other storage type, or any combination thereof. Any form of data or content may be stored within storage 204 , such as photographs, music files, videos, contact information, applications, documents, or any other file, or any combination thereof.
  • Memory 206 may include cache memory, semi-permanent memory (e.g., RAM), or any other memory type, or any combination thereof. In some embodiments, memory 206 may be used in place of and/or in addition to external storage for storing data on device 200 .
  • Communications circuitry 208 may include any circuitry capable of connecting to a communications network (e.g., network 106 ) and/or transmitting communications (voice or data) to one or more devices (e.g., user devices 104 and/or host device 108 ) and/or servers (e.g., server 102 ). Communications circuitry 208 may interface with the communications network using any suitable communications protocol including, but not limited to, Wi-Fi (e.g., 802.11 protocol), Bluetooth®, radio frequency systems (e.g., 900 MHz, 1.4 GHz, and 5.6 GHz communications systems), infrared, GSM, GSM plus EDGE, CDMA, quadband, VOIP, or any other protocol, or any combination thereof.
  • Input interface 210 may include any suitable mechanism or component for receiving inputs from a user operating device 200 .
  • Input interface 210 may also include, but is not limited to, an external keyboard, mouse, joystick, microphone, musical interface (e.g., musical keyboard), or any other suitable input mechanism, or any combination thereof.
  • input interface 210 may include camera 212 .
  • Camera 212 may correspond to any image capturing component capable of capturing images and/or videos.
  • camera 212 may capture photographs, sequences of photographs, rapid shots, videos, or any other type of image, or any combination thereof.
  • device 200 may include one or more instances of camera 212 .
  • device 200 may include a front-facing camera and a rear-facing camera. Although only one camera is shown in FIG. 2 to be within device 200 , persons of ordinary skill in the art will recognize that any number of cameras, and any camera type may be included.
  • device 200 may include microphones 214 .
  • Microphones 214 may be any component capable of detecting audio signals.
  • microphones 214 may include one or more sensors for generating electrical signals and circuitry capable of processing the generated electrical signals.
  • microphones 214 may correspond to multiple microphones, such as a first microphone and a second microphone.
  • device 200 may include multiple microphones capable of detecting various frequency levels (e.g., high-frequency microphone, low-frequency microphone, etc.).
  • device 200 may include one or more external microphones connected thereto and used in conjunction with, or instead of, microphones 214 .
  • Output interface 216 may include any suitable mechanism or component for presenting outputs to a user operating device 200 .
  • output interface 216 may include display 218 .
  • Display 218 may correspond to any type of display capable of presenting content to a user and/or on a device.
  • Display 218 may be any size and may be located on one or more regions/sides of device 200 .
  • display 218 may fully occupy a first side of device 200 , or may occupy a portion of the first side.
  • Various display types may include, but are not limited to, liquid crystal displays (“LCD”), monochrome displays, color graphics adapter (“CGA”) displays, enhanced graphics adapter (“EGA”) displays, variable graphics array (“VGA”) displays, or any other display type, or any combination thereof.
  • display 218 may be a touch screen and/or an interactive display.
  • the touch screen may include a multi-touch panel coupled to processor 202 .
  • display 218 may be a touch screen and may include capacitive sensing panels.
  • display 218 may also correspond to a component of input interface 210 , as it may recognize touch inputs.
  • output interface 216 may include speaker 220 .
  • Speaker 220 may correspond to any suitable mechanism for outputting audio signals.
  • speaker 220 may include one or more speaker units, transducers, or array of speakers and/or transducers capable of broadcasting audio signals and audio content to a room where device 200 may be located.
  • speaker 220 may correspond to headphones or ear buds capable of broadcasting audio directly to a user.
  • FIG. 3A is an illustrative graph of a steady state participation level in accordance with various embodiments.
  • Graph 300 is a two-dimensional graph of data plotted over time.
  • Graph 300 may include first axis 302 and second axis 304 .
  • first axis 302 may be a time axis. As data is obtained, the time associated with the data increases. For example, data captured closer to the beginning of a measurement, run, or data acquisition time period may have a lower or smaller time value along axis 302 than data captured later on.
  • second axis 304 may be a sample index axis. A sample index may correspond to any type of data unit to be captured.
  • graph 300 may correspond to a magnitude of audio detected by one or more microphones.
  • lower values of data along axis 304 may correspond to lower levels of audio detected, whereas higher values may correspond to higher levels of audio detected.
  • Persons of ordinary skill in the art will recognize that any unit may be used for axis 304 including, but not limited to, decibel level, microphone directionality offset, outputted volume, or any other unit, or any combination thereof.
  • Graph 300 may also include data points 306 .
  • Data points 306 may correspond to data obtained or produced by a user device, a host device, and/or a server, over a period of time.
  • data points 306 may correspond to a level of audio produced by a user and captured by a user device.
  • data points 306 may correspond to a level of audio produced by multiple user devices accessing an online event, such as a classroom.
  • data points 306 may be obtained over the course of a period of time to determine a steady state level of sound or noise within an event or area. For example, in order to determine an average amount of noise within a room, a user device may record sound within the room for a period of time. This may allow the device to determine the average sound level within the room. In some embodiments, a model fit of the data may be generated to determine the steady state level over the period of time. For example, fit 308 may be generated to model data points 306 . Fit 308 may represent the steady state level of the data over the period of time graphed within graph 300 .
  • the model fit may allow for extrapolation to other periods of time. This may allow the user device and/or the server to determine whether any new sounds exceed the “average level of noise” based on the model fit of the data.
  • Persons of ordinary skill in the art will recognize that any model fit, and/or any data modeling technique may be used to determine the steady state level of data points 306 , and the aforementioned use of an average is merely exemplary.
  • fit 308 may, in some embodiments, correspond to an exponential fit of data points 306 or a linear fit of data points 306 ; however, higher-order polynomials, moving averages, random samplings, and/or maximum likelihood functions may also be used.
  • fit 308 may indicate an average level of data points 306 over a period of time.
  • the data points may all fall within a certain “error margin” of fit 308 .
  • data points 306 may all fall within one or more standard deviations, σ, of fit 308 .
  • the formula used to determine the standard deviation is:

    $$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2}$$  (Equation 1)

    where N corresponds to the total number of data points, x_i corresponds to each individual data point, and \bar{x} corresponds to the average data point, typically found by summing all data points and dividing by the number of data points.
  • If a captured data point exceeds a predefined number of standard deviations from the mean, then that data point may not fall within expected fluctuations of the data. For this reason, typically the larger the data acquisition, the more accurate the model, and therefore the more confident one may be that a point is an outlier.
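  • As a concrete illustration of the modeling described above, the sketch below fits a linear steady state (in the spirit of fit 308 ) and classifies new points by how many residual standard deviations they fall from the fit; the synthetic data and the one/three-sigma cutoffs are assumptions mirroring lines 312 and 338 .

```python
import numpy as np

t = np.arange(200)                        # time axis (axis 302)
data = np.random.normal(5.0, 0.4, 200)    # stand-in for data points 306

coeffs = np.polyfit(t, data, deg=1)       # linear model fit (like fit 308)
sigma = np.std(data - np.polyval(coeffs, t))  # Equation 1 over the residuals

def classify(time_index, value):
    deviations = abs(value - np.polyval(coeffs, time_index)) / sigma
    if deviations > 3:
        return "detected change"          # beyond threshold line 338
    if deviations > 1:
        return "insignificant excursion"  # between lines 312 and 338
    return "background"                   # within one standard deviation
```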
  • FIG. 3B is an illustrative graph of a detected change within monitored communications in accordance with various embodiments.
  • graph 350 of FIG. 3B may be substantially similar to graph 300 of FIG. 3A , with the exception that graph 350 may correspond to data points exceeding a predefined threshold.
  • Graph 350 may include first axis 302 and second axis 304 , which may, in some embodiments, be substantially similar to first axis 302 and second axis 304 of FIG. 3A , and the previous description of the latter may apply to the former.
  • Graph 350 may include data points 318 .
  • data points 318 may be substantially similar to data points 306 of FIG. 3A , with the exception that data points 318 may include one or more data points indicating a change in the received data above a predefined threshold value.
  • data points 318 may be described in four (4) groupings of data points.
  • Data points 316 may correspond to a first grouping of data points corresponding to an earlier period of time when data was acquired.
  • Data points 316 may correspond to a grouping of data located about model fit 308 obtained from fitting data points 306 of FIG. 3A . In some embodiments, because data points 316 are located about model fit 308 , they may be considered “background” data.
  • data points 316 may all fall within a region defined by model fit 308 and one standard deviation threshold line 312 . This may correspond to data points 316 all having values less than one standard deviation from fit 308 .
  • data points 326 , which may be a second grouping of data points 318 , may be substantially similar to data points 316 , with the exception that the former may be acquired later in time than the latter.
  • First standard deviation line 312 may, in some embodiments, be included within graph 350 .
  • Line 312 may correspond to one unit of standard deviation (see Equation 1), based on data points 306 .
  • Data points 318 that fall within line 312 and fit 308 , such as data points 316 and 326 , may, in some embodiments, be considered statistically within reasonable bounds of the mean. Therefore, data groupings, such as groupings 316 and 326 , may, in some embodiments, be considered data not indicative of a detected change from the modeled data.
  • graph 350 may also include threshold line 338 .
  • Line 338 may correspond to any number of standard deviations from fit 308 .
  • line 338 may correspond to three standard deviations from fit 308 .
  • Data points that fall between line 312 and line 338 , such as data points 348 , may correspond to data points outside of the statistically insignificant region, but not significant enough to be considered a detected change.
  • data points 348 may include some data points that exceed second standard deviation line 314 , but are less than threshold line 338 .
  • line 314 may be used as the threshold value. In this scenario, some data points from data points 348 may be considered significant because they exceed line 314 .
  • data points that exceed one standard deviation, such as data points 348 may be deemed significant or indicative of a change.
  • Data points 310 may, in some embodiments, correspond to a third grouping of data points that exceed the defined threshold indicated by line 338 . Thus, data points 310 may be considered statistically significant because they exceed the threshold value defined by line 338 . Persons of ordinary skill in the art will recognize however that any threshold may be used, and the threshold may be set by the user, the user device, the host device, and/or the server.
  • a user may place a microphone within a room to determine an ambient level of noise.
  • the microphone may detect data points, such as data points 306 of FIG. 3A , and this may be used to determine a model of the ambient noise within the room.
  • the microphone may then be kept in the same room, and used to detect any new sounds within the room. If a sound exceeds a predefined threshold from the determined ambient noise level, the sound may be determined to be significant, whereas sounds that are substantially within one or more standard deviations from the model of the ambient noise level and less than the threshold value may also be considered ambient noise.
  • FIG. 4 is a schematic illustration of a display screen in accordance with various embodiments.
  • Display screen 400 may include user report boxes 402 , 404 , 406 , and 408 .
  • display screen 400 may correspond to a user interface presented on an event administrator's device, such as host device 108 of FIG. 1 .
  • the event administrator may correspond to a teacher of an online class.
  • each user report box may correspond to a report box of a student accessing the online class.
  • Persons of ordinary skill in the art will recognize that any number of user report boxes may be included within display screen 400 , and the use of four (4) user report boxes is merely exemplary.
  • each user report box may correspond to a separate user, or users, accessing the interactive online event monitored by the event administrator.
  • box 402 may correspond to a first user
  • boxes 404 , 406 and 408 may respectively correspond to a second user, a third user, and a fourth user.
  • Each user may access the interactive online event remotely from different locations using separate user devices (e.g., user devices 104 of FIG. 1 ).
  • Each of user boxes 402 , 404 , 406 , and 408 may include a user name and a participation grade for the corresponding user.
  • user box 402 may correspond to the first user and may have user name 402 a “USER 1 ”.
  • user name 404 a may correspond to the second user, “USER 2 ”
  • user name 406 a may correspond to the third user, “USER 3 ”
  • user name 408 a may correspond to the fourth user, “USER 4 ”.
  • Any user name may be used to correspond to a particular user. For example, a user may input a specific user name or handle to be displayed as their user name, or an email address, a social media network name, or any other suitable name, or any combination thereof may be used.
  • additional information may be displayed along with the user name.
  • a user may log into the online event using a social media network account profile.
  • the server (e.g., server 102 ) may pull relevant information from the social media network profile and display some or all of the pulled information within a user's report box.
  • metadata corresponding to the user may be displayed within their user report box.
  • the server may determine a user's log-in location based on their user device's IP address, and may display the location within the user report box.
  • Each user report box may also include a participation grade.
  • the participation grade may be based on a user's level of participation within an event. For example, a student participating heavily within a class may receive a high participation grade.
  • the participation grade may be automated.
  • the server may assign a participation grade to a user based on the user's determined level of attentiveness.
  • participation grade 402 b may correspond to USER 1 's participation grade
  • participation grades 404 b , 406 b , and 408 b may each correspond to USER 2 , USER 3 , and USER 4 's participation grade, respectively.
  • FIG. 5 is a schematic illustration of a system for detecting and monitoring audio communications in accordance with various embodiments.
  • System 500 may include user 502 and user device 504 .
  • user device 504 may be substantially similar to user device 200 of FIG. 2 , and the previous description of the latter may apply.
  • multiple instances of user device 504 may be included within system 500 .
  • a first user device may be used to display images to user 502 whereas a second user device may be used to receive and/or output communications from/to user 502 .
  • User device 504 may, in some embodiments, include communications receiver 506 and/or communications outputs 510 and 512 .
  • communications receiver 506 may correspond to one or more microphones, transducers, and/or transducer arrays.
  • Various types of microphones may include, but are not limited to, omnidirectional microphones, unidirectional microphones, cardioid microphones, bi-directional microphones, shotgun microphones, and/or boundary microphones.
  • multiple instances of receiver 506 may be included within or external to user device 504 .
  • communications receiver 506 may correspond to one or more video capturing devices.
  • receiver 506 may correspond to a camera capable of capturing still images and/or video.
  • multiple instances of receiver 506 may be included, where one or more receivers may be operable to receive audio communications while one or more other receivers may be operable to receive video communications.
  • Communications outputs 510 and 512 may, for example, correspond to any speaker, set of speakers, transducer, transducer array, headset, or any other component capable of outputting communications.
  • communications receiver 506 and communications outputs 510 and 512 may be combined into a single component.
  • system 500 may include additional communications receivers 508 .
  • additional communications receivers 508 may be placed about the user. For example, in addition to a first microphone placed in front of the user, second and third microphones may be placed on either side of the user, orthogonal to the position of the first microphone but on opposing sides.
  • Receivers 506 and 508 may, for example, each be placed at predefined distances and angles from the user.
  • communications receivers 508 may be substantially similar to communications receiver 506 , and the previous description may apply.
  • persons of ordinary skill in the art will recognize that although a three-microphone system is shown within system 500 , any orientation, pattern, or setup of communications receivers may be used within the system, and the use of three microphones placed in a triangle pattern is merely exemplary.
  • one or more cardioid microphones may be placed in any suitable pattern around user 502 to capture a large amount of audio communications that may emanate from user 502 .
  • Transmission line 516 may, for example, correspond to the “line of sight” of the transmission coming from user 502 and directed towards user device 504 .
  • a user may speak into their user device placed in front of them, and transmission line 516 may correspond to the direction of the user's audio signal.
  • Transmission lines 518 may, for example, correspond to the “line of sight” of the audio signal outputting from user 502 in a direction of receivers 508 .
  • user 502 may turn and speak into a cellular telephone located on their right hand side. The communication may be represented by transmission line 518 .
  • FIG. 6 is a schematic illustration of a display screen in accordance with various embodiments.
  • Display screen 600 may include user interface 602 , which may be displayed to an online participant accessing an event.
  • user 502 of FIG. 5 may be attending an online class, and user interface 602 may be displayed to the user on their user device (e.g., user device 504 ).
  • a user may be presented with user interface 602 at the end of an online event.
  • the user attending the online class may be presented with user interface 602 after the class has ended.
  • the interface 602 may present a user's participation report and grade for the event.
  • the participation report may correspond to a determined level of participation and/or attentiveness of the user during the online event. This may allow the server, or systems administrator, to provide participation grades to event attendees without having to physically monitor each one.
  • each participant may have one or more receivers located about them that may capture communications from the user. The communications may be monitored and analyzed against the online participant themselves (e.g., for feedback), as well as any other online participants, to determine if and when a participant has diverted attention away from the online event.
  • the one or more receivers (e.g., receivers 506 and 508 of FIG. 5 ) may determine a steady state level of audio communication from the user and/or the ambient environment of the user.
  • one or more receivers of a receiver array may be located in a direction orthogonal to the direction of the user device.
  • receivers 508 may be located orthogonal to, and a certain distance away from, user device 504 .
  • audio signal detected by receivers 508 may correspond to the user producing communications in the direction of transmission lines 518 directed at receiver 508 .
  • the user may be communicating in a different direction than transmission line 516 directed at user device 504 , and therefore may not be interacting with the online event being broadcast thereon.
  • user interface 602 may correspond to “USER 1 ” 604 .
  • User interface 602 may display USER 1 's participation log based on the determined levels of attentiveness of the user.
  • indicator 606 may indicate a percentage of time that communications received by a first microphone were not at a predetermined steady state level. In this particular scenario, indicator 606 indicates that microphone 1 had not been at steady state levels for only 20% of the time of the online event.
  • microphone 1 may correspond to receiver 506 of FIG. 5 , and therefore receiver 506 may have only received communications exceeding steady state levels for 20% of the time of the online event. Thus, this may correspond to the user communicating with user device 504 for 20% of the time.
  • Indicators 608 A and 608 B may indicate a percentage of time that microphones 2 and 3 may have not been at steady state levels. For example, microphone 2 may have exceeded steady state levels for 50% of the time of the online event, whereas microphone 3 may have exceeded steady state levels for 30% of the time of the online event. In some embodiments, microphones 2 and 3 may correspond to receivers 508 . In this particular scenario, because the steady state levels at each of microphones 2 and 3 are greater than the level at microphone 1 (e.g., receiver 506 ), the system may determine that the user was not paying attention. This may be due, in part, to the fact that the received communications at receivers 508 were higher than those at receiver 506 , which may be included within user device 504 .
  • Based on this determined level of attentiveness, the user may receive participation grade 620 , corresponding to a “C”.
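  • A hedged sketch of how such a report might be reduced to a grade follows; the scoring rule and letter cutoffs are invented for illustration and are not taken from the patent.

```python
def participation_grade(front_pct, side_pcts):
    """Map per-microphone activity percentages to a letter grade.

    `front_pct` is the fraction of event time the device-facing
    microphone (mic 1) exceeded steady state; `side_pcts` are the same
    fractions for the flanking microphones (mics 2 and 3).
    """
    # Activity toward the device counts for the user; activity toward
    # the sides suggests diverted attention and counts against them.
    score = front_pct - max(side_pcts)
    if score >= 0.25:
        return "A"
    if score >= 0.0:
        return "B"
    if score >= -0.35:
        return "C"
    if score >= -0.55:
        return "D"
    return "F"

# The FIG. 6 example: mic 1 at 20%, mics 2 and 3 at 50% and 30%.
print(participation_grade(0.20, [0.50, 0.30]))  # "C" under these cutoffs
```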
  • FIGS. 7A and 7B are schematic illustrations of various audio levels before and after a user joins a group in accordance with various embodiments.
  • System 700 of FIG. 7A may represent a percentage of an online participant's received communications prior to joining a group.
  • an online participant's microphone(s) may be open, and may be capable of receiving audio from all participants and/or any presenters of an online event.
  • bar 702 A may depict an amount of audio communications received from the online event prior to the user joining a group.
  • substantially all of the participant's received communications may be received from the online event (e.g., event participants, presenters, etc.), as indicated by region 704 .
  • Bar 702 B of FIG. 7B may display various amounts of audio communications received by an online participant after joining a group.
  • the online participant may still receive communications from the online event, as indicated by region 704 of bar 702 B, however the online participant may also receive communications from the group, as indicated by region 706 .
  • the communications received therefrom may be greater or more prominent than the communications received from the online event.
  • the user may be taking part in an online conference.
  • Prior to joining a group, the user may be capable of receiving audio communications from the event administrator, presenter(s), and/or other participants in the general event forum.
  • the user may, in some embodiments, join a group of other online participants within the event discussing a more focused topic. For example, the user may be interested in a specific panel occurring within the online conference. In this scenario, the user may still receive communications that were previously received prior to joining the group, however those communications may be lowered in volume.
  • communications may be transmitted to the user device at normal volume, or the volume with which they were received, and a flag or indication may be sent to the user indicating that some of the communications should be adjusted.
  • a user may receive all communications from an online class, but receive an indication that they should raise the volume of the teacher (or lower the volume of everyone but the teacher). In this way, the user may selectively decide which individuals are broadcast at certain volume levels. Additional communications from the group itself may now occupy a larger percentage of the user's received communications because the group may now be the focus of the user's attention. In this scenario, upon entering the group, the group's communications may be raised, or received at a higher volume level by the online participant, than other communications (e.g., event communications).
  • one or more adjusters may be included within the background of the display screen. The adjusters may allow the user to adjust volume levels for specific aspects of the online event. For example, an adjuster may be included that allows the user to raise or lower the volume of the audience and/or the presenter.
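  • One way the mixing behavior of FIGS. 7A and 7B might look in code is sketched below; the gain values, class name, and source labels are illustrative assumptions.

```python
class EventMixer:
    """Tracks per-source volume gains for one online participant."""

    def __init__(self):
        # Before joining a group, all audio comes from the event (FIG. 7A).
        self.gains = {"event": 1.0}

    def join_group(self, group_share=0.7):
        # After joining, group audio dominates (region 706) while event
        # audio remains audible at a lower volume (region 704, FIG. 7B).
        self.gains = {"event": 1.0 - group_share, "group": group_share}

    def set_gain(self, source, gain):
        # User-facing adjusters: e.g., raise the presenter's volume or
        # lower everyone but the presenter.
        self.gains[source] = max(0.0, min(1.0, gain))

mixer = EventMixer()
mixer.join_group()
mixer.set_gain("presenter", 1.0)
```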
  • FIG. 8 is an illustrative flowchart of a process for storing changes within monitored communications in accordance with various embodiments.
  • Process 800 may begin at step 802 .
  • communications within an event may be monitored.
  • the event may be an interactive online event where multiple online participants may communicate with one another. For example, students may log into an online classroom and may be presented with lecture materials from a teacher or from other students.
  • a user's communications with their user device may be monitored.
  • a user may be provided with multiple receivers capable of detecting communications directed substantially at them.
  • receivers 506 and 508 of FIG. 5 may be provided to detect communications from user 502 .
  • the communications may be directed at receiver 506 located on or within user device 504 .
  • receiver 506 may correspond to a microphone located in a user device
  • receivers 508 may correspond to external microphones located proximate to the user.
  • each microphone in the system may be able to detect when audio is produced by the user, and the proportion of audio directed towards that particular microphone.
  • receivers 506 and 508 may be cardioid microphones, and each may be capable of detecting received audio.
  • audio directed substantially at microphone 506 may be detected peripherally by microphones 508 .
  • the system may be capable of detecting how much of the audio directed at microphone 506 is captured by microphones 508 to determine the user's intended direction of communication. For example, if a large percentage of the audio is detected by receivers 508 , then the system may recognize that the user may not be communicating with their user device, and may have their attention diverted elsewhere (e.g., a friend, video game, etc.).
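  • A minimal sketch of that directional inference follows, assuming three cardioid microphones arranged as in FIG. 5 and simple energy proportions; the 50% cutoff is an assumed tuning parameter.

```python
def intended_direction(front_level, left_level, right_level):
    """Estimate where the user's speech is directed from microphone levels.

    `front_level` is the energy at receiver 506 (on the user device);
    `left_level` and `right_level` are the energies at receivers 508.
    """
    total = front_level + left_level + right_level
    if total == 0:
        return "silent"
    if front_level / total >= 0.5:
        return "toward device"   # along transmission line 516
    # Most energy reached the peripheral microphones, suggesting the
    # user's attention is diverted elsewhere.
    return "away from device"    # along transmission lines 518
```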
  • the user device may monitor the communications produced by the event or within a user's system (e.g., user device 504 and receivers 506 and 508 ). For example, microphones for each online participant accessing the event may be monitored individually and/or collectively. As another example, the user device may monitor the outputted communications from the user device's speakers to determine an amount of ambient noise produced by the event. This may help determine any contributing feedback noise that may occur by the receivers detecting audio produced by the event itself.
  • a steady state level may be determined based on the monitored communications.
  • the steady state level may correspond to a determined level of ambient noise within a particular region or area where the communications are being monitored.
  • receivers 506 and 508 may monitor the communications produced within system 500 for a period of time to determine an average amount of noise produced at each microphone or collectively.
  • the average amount of noise may reflect a typical amount of noise expected to be received by a microphone.
  • the audio detected by the receivers may be modeled to determine an average amount of noise, or a steady state level of the noise.
  • data points 306 may be modeled using fit 308 .
  • Fit 308 may model the received communications.
  • the model may allow the user and/or the system to have an estimation of a level of the noise that is detectable by a user device at any point in the future. Thus, an extrapolation to future time periods may be used to determine whether changes in the received communications occur.
  • any changes in the monitored communications may be detected.
  • the detected changes may correspond to communications that exceed expected communication levels based on the determined steady state level.
  • data points 310 of FIG. 3B may correspond to detected communications that exceed a predefined threshold for a steady state level (e.g., threshold line 338 ).
  • data points 310 exceed line 338 , which may correspond to a predefined threshold above the steady state level signified by fit 312 .
  • Fit 312 may, in some embodiments, be substantially similar to fit 308 of FIG. 3A , and line 338 may correspond to a particular number of standard deviations above the model fit.
  • data points 310 may correspond to detected communications that exceed the determined steady state level by more than the predefined number of standard deviations.
  • the detected change in the communications need only exceed the average level of communications based on a model fit of the monitored communications. For example, data points 348 may exceed fit 308 , even though they may not exceed the predefined threshold indicated by line 338 . In this particular scenario, data points 348 may be “detected” because the threshold has been set to fit 308 only. In some embodiments, the threshold may be set to one standard deviation line 312 or two standard deviation line 314 .
  • any detected change may be stored within an event participation log.
  • the event participation log may be capable of storing each detected change, as well as when the change occurred and by how much the detected change exceeded the threshold value. For example, at 5 minutes after a teacher began to present their materials, a spike in audio communications from a certain online participant may have been detected. The timestamp and intensity level of the spike may be stored within the event participation log along with one or more additional pieces of data associated with the data spike.
  • the detected change may be analyzed prior to being stored within the event participation log. For example, the detected change may be analyzed to determine which receiver detected the change. If the change was detected by a receiver located in the direction of expected event communications (e.g., receiver 506 ), this change may be classified as an event signal and therefore the user may be determined to be participating in the event. If, however, the change was detected by a receiver not located in the direction of any expected event communication (e.g., receivers 508 ), the change may be classified as a non-event signal. Non-event signals may signify that the user is not participating in the event, and therefore the system, host, or event administrator may determine that the user is not paying attention to the event and may therefore receive a lower participation score.
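  • One possible shape for such an event participation log is sketched below; the field names and structure are assumptions for illustration.

```python
from dataclasses import dataclass, field
import time

@dataclass
class LogEntry:
    timestamp: float        # when the change occurred
    magnitude: float        # by how much it exceeded the threshold
    receiver_id: int        # which receiver detected the change
    is_event_signal: bool   # True if directed at the event/user device

@dataclass
class ParticipationLog:
    user_id: str
    entries: list = field(default_factory=list)

    def record(self, magnitude, receiver_id, is_event_signal):
        self.entries.append(
            LogEntry(time.time(), magnitude, receiver_id, is_event_signal))
```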
  • FIG. 9 is an illustrative flowchart of a process for determining changes within monitored communications in accordance with various embodiments.
  • Process 900 may begin at step 902 .
  • audio communications may be monitored.
  • a steady state level of audio may be determined.
  • a change may be detected from the steady state level.
  • steps 902 , 904 , and 906 may be substantially similar to steps 802 , 804 , and 806 of FIG. 8 , and the previous description of the latter may apply to the former.
  • a query may be run to determine whether the detected change from the steady state level is greater than a threshold.
  • data points 310 may correspond to a detected change above line 338 .
  • line 338 may correspond to a predefined threshold above a steady state level of the monitored communication (e.g., fit 308 ).
  • the predefined threshold may correspond to any threshold level.
  • line 338 may correspond to three (3) standard deviations from a model fit of the data corresponding to a steady state level. Statistically speaking, three standard deviations from the mean corresponds to a 99.7% confidence interval. Thus, any data exceeding three standard deviations would only have a 0.3% chance of not being a detected change.
  • persons of ordinary skill in the art will recognize that any number of standard deviations may be used to determine the significance of a particular data point or grouping, and the use of three standard deviations is merely exemplary.
  • process 900 may return to step 902 where audio communications may be monitored.
  • the detected change from step 906 may be used to dynamically update the steady state level.
  • process 900 may return to step 906 where the process may wait to detect additional changes from the steady state level.
  • process 900 may move to step 910 .
  • the detected change from step 906 may be recorded in a participation log.
  • the participation log may store all activities related to one or more participants of an event. For example, the participation log may be used to determine participation grades for students accessing the online event. If a change is detected within a student's monitored communications, it may correspond to the user either actively participating in the event or not paying attention to the event (e.g., communicating with friends, playing video games, etc.).
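  • The threshold query and the dynamic steady state update described above might be realized as below; the exponentially weighted update is one assumed way to "dynamically update the steady state level," not the patent's stated method.

```python
class SteadyStateTracker:
    def __init__(self, mean, sigma, alpha=0.05, threshold_sigma=3.0):
        self.mean, self.sigma = mean, sigma
        self.alpha = alpha                  # smoothing factor for updates
        self.threshold_sigma = threshold_sigma

    def observe(self, value, participation_log):
        if abs(value - self.mean) > self.threshold_sigma * self.sigma:
            # step 910: record the detected change in the participation log
            participation_log.append(value)
        else:
            # sub-threshold change: fold it back into the steady state
            self.mean += self.alpha * (value - self.mean)
```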
  • one or more algorithms resident on the user device and/or the server may look to determine the user's activities at the time of the detected change.
  • the detected change may correspond to the user actively communicating with their user device and thus the event, or the detected change may correspond to the user engaging in one or more additional activities (e.g., video game, communicating with a friend), and therefore not participating in the event.
  • the one or more algorithms will look to see if the user provided one or more inputs to the event using their user device. If so, then it may be determined that the user is participating in the event. However, if there are no noticeable event inputs or other indications that the user is interacting with the event, then the detected change may correspond to the user not participating in the event, and the change may be flagged as a non-event signal.
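  • A hedged sketch of that check follows, where `recent_inputs` is a hypothetical list of timestamps of the user's inputs to the event:

```python
def classify_change(change_time, recent_inputs, window_s=5.0):
    """Label a detected change as an event or non-event signal."""
    # If the user interacted with the event near the time of the change,
    # treat the change as participation; otherwise flag it.
    participating = any(abs(change_time - t) <= window_s
                        for t in recent_inputs)
    return "event signal" if participating else "non-event signal"
```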
  • FIG. 10 is an illustrative flowchart of a process for transmitting event participation data to users in accordance with various embodiments.
  • Process 1000 may begin at step 1002 .
  • a first audio signal may be received at a first microphone or audio receiver.
  • A system, such as system 500 of FIG. 5 , may include multiple communications receivers, each capable of receiving/detecting communications from a user, such as user 502 .
  • receiver 506 may receive communications from user 502 via communications line 516 .
  • a determination may be made as to which communications receiver captured the first audio signal.
  • the system may include multiple communications receivers set up to detect audio signals.
  • system 500 of FIG. 5 may include receiver 506 placed in front of user 502 , as well as receivers 508 placed adjacent to user 502 .
  • Persons of ordinary skill in the art will recognize that any number of audio receivers, which may be oriented in any suitable configuration, may be used, and the use of three (3) receivers is merely exemplary.
  • event participation data may be created based on which receiver captured the communications.
  • event participation data corresponding to the user participating in the event may be created.
  • Receiver 506 may, in some embodiments, correspond to a microphone included within, or substantially on, user device 504 .
  • communications received by receiver 506 may correspond to communications that are directed to the user device.
  • event participation data corresponding to the user not participating in the event may be created.
  • Receivers 508 may, in some embodiments, be positioned substantially perpendicular and in-line with user 502 . Thus, any communications detected by receivers 508 would correspond to user 502 being oriented substantially away from user device 504 , and therefore not associated with the event.
  • the event participation data may be transmitted to the event administrator device. For example, data corresponding to a user's participation levels may be sent to the event administrator so that the event administrator may assign a participation grade to the user.
  • the event participation data sent to the event administrator may indicate that the user had been paying attention.
  • if the event participation data indicates that the user was not participating in the event (e.g., an audio signal detected by receivers 508 ), the event administrator may assign a participation grade reflective of the user not paying attention to the event.
  • FIG. 11 is an illustrative flowchart of a process for transmitting grades to users in accordance with various embodiments.
  • Process 1100 may begin at step 1102 .
  • a first audio signal may be received by a communications receiver.
  • for example, an audio signal transmitted along transmission line 516 of FIG. 5 may be received at communications receiver 506 on user device 504 .
  • step 1102 of FIG. 11 may be substantially similar to step 1002 of FIG. 10 , and the previous description of the latter may apply to the former.
  • a query may be run to determine which communications receiver from an array of communications receivers captured the audio signal.
  • the user may be surrounded by one or more communications receivers, such as microphones, which may be positioned in any suitable orientation.
  • user 502 of FIG. 5 may be positioned a first distance away from receiver 506 , as well as being positioned a second distance away from receivers 508 .
  • various microphones may be equally positioned around a user in a circular manner. Persons of ordinary skill in the art will recognize that any microphone array may be used to obtain audio signals, that the array type may also depend on the type of microphone used, and that the aforementioned setups are merely exemplary.
  • certain receivers capturing communications within the array may signify that a user is participating in an event, whereas others may signify that the user is not participating in the event. For example, if the user is asking a question to an event presenter, the user will most likely be directing his/her communications towards their user device. However, if the user is not participating in the event, then the user will most likely not be directing his/her communications towards their user device. For example, if the user is communicating with a friend, or interacting with a video game, then their communications will be directed elsewhere. These communications may be detected by one or more other communications receivers placed proximate to the user, but not in the line of sight of the user device.
  • If, at step 1104 , it is determined that a first communications receiver, such as receiver 506 , captured the audio signal, then process 1100 may proceed to step 1106 .
  • event data corresponding to the user participating in an event may be generated.
  • the communication detected by the device's microphone may correspond to event communications.
  • the user may be answering a teacher's question within an online event and may answer the question using their user device.
  • the communications receiver located on the user device may detect the audio communications from the user, and generate event data indicating the user is participating in the event.
If, however, it is determined at step 1104 that the audio signal was captured by a communications receiver other than the first communications receiver, event data corresponding to the user not participating in the event may be generated. For example, if audio communications are received at microphones adjacent to the user but not located on or proximate to the user device, then the user most likely is not paying attention to, or participating in, the event. In the illustrative example of FIG. 5, communications directed at receivers 508 may correspond to the user not interacting with an event on their user device. Thus, in this scenario, the user is most likely not participating in the event, and therefore event data corresponding to the user not participating may be generated.
The generated event data may correspond to any indicator or flag signifying the user's participation level. For example, the generated event data may be a Boolean value reading "1" or "TRUE" if the generated event data corresponds to the user participating in the event, and "0" or "FALSE" if the generated event data corresponds to the user not participating in the event. Persons of ordinary skill in the art will recognize that any event data may be generated, that the event data may be depicted in any suitable format, and that the aforementioned scenario is merely exemplary; a minimal sketch of this classification logic follows.
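The following is a minimal, hypothetical sketch of steps 1104 through 1106, under the assumption that each receiver reports a signal level (e.g., an RMS amplitude) and that the receiver with the strongest capture indicates the direction of the user's communication. The receiver identifiers and the input format are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch: classify an audio event by the receiver that
# captured it most strongly. Receiver names are assumptions.
DEVICE_RECEIVER = "receiver_506"  # microphone on or within the user device

def generate_event_data(levels: dict[str, float]) -> bool:
    """Return True (participating) if the device microphone captured the
    strongest signal, False (not participating) otherwise."""
    capturing_receiver = max(levels, key=levels.get)
    return capturing_receiver == DEVICE_RECEIVER

# Example: the user speaks toward a phone on their right-hand side, so a
# side receiver captures the strongest signal.
levels = {"receiver_506": 0.12, "receiver_508a": 0.71, "receiver_508b": 0.18}
print(generate_event_data(levels))  # False -> event data "0"/"FALSE"
```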
Finally, a grade may be transmitted to the user. In some embodiments, the grade may be based on the user's level of participation within the event. For example, if event data corresponding to the user participating in the event was created at step 1106, then the user may receive a higher participation grade than if the generated event data corresponded to the user not participating in the event. As an illustrative example, USER 4 of FIG. 4 may receive participation grade 408 b, "A", which may indicate that the user has participated in the event, whereas USER 3 may receive participation grade 406 b, "D", indicating that this user has not participated in the event. In some embodiments, the grade for the event may be transmitted to the user at the end of the event. This may allow the system to determine an overall level of attentiveness for the user throughout the event's duration. In some embodiments, the grade may be transmitted to a user's parent or guardian instead of, or in addition to, the user. This may allow, for example, a teacher to communicate with a student's parent directly, instead of with the student, who may not convey the message to their parent.
FIG. 12 is an illustrative flowchart of a process for transmitting a level of attentiveness to an event administrator in accordance with various embodiments. Process 1200 may begin at step 1202. At step 1202, communications between online participants of an event may be monitored. In some embodiments, an online participant of an event may communicate with one or more additional online participants of the event within a group or subgroup. The participants of the group may communicate with one another during the online event. For example, a first participant may ask a second participant a question about material presented within the event. The group may be formed between these two participants prior to the question being asked or in response to the question being asked. In some embodiments, the communications between the online participants may be monitored by a system facilitating the communications. For example, a server may facilitate communications between user devices accessing an online event hosted on the server. The server may be capable of monitoring each user device to detect communications transmitted and received in order to determine which user devices are in communication with each other.
Next, a level of attentiveness for each online participant may be determined. The level of attentiveness may be based on any number of factors including, but not limited to, the duration of the communications between the online participants, the content involved in the communications, the participants involved in the communications, and/or any other factor, or any combination thereof. In some embodiments, the server may determine the level of attentiveness based on when the communications between the online participants began. For example, if at a certain point within a presentation the presenter began to describe a very complex topic, communications between participants may occur to help clarify the topic amongst the participants. Therefore, the monitored communications may indicate that the online participants are paying attention to the material but may also require additional explanation. This may help highlight to the presenter a need to clarify certain topics. As another example, the presenter may transmit a video to each participant within the online event, and two or more participants may begin communicating within a group. Based on these communications, the system may determine that the two participants are not participating within the event because they are no longer watching the video and instead are communicating with one another.
Subsequently, the determined level of attentiveness may be transmitted to an event administrator. The event administrator may, in some embodiments, correspond to a teacher or presenter of the online event. For example, if the online event corresponds to an online class, then the event administrator may be a teacher. The event administrator may receive the determined level of attentiveness for each online participant and, based thereon, assign a participation grade to the participants. For example, if it is determined that the online participants have not been paying attention in the online class, these participants may receive a low participation grade. In some embodiments, the event administrator may store the level of attentiveness for each user within an event database. This may allow the event administrator to aggregate or analyze the levels of attentiveness for various users to determine the users' grades, and/or ways or areas to augment their presentation to make it more engaging to the participants. A sketch of this server-side bookkeeping appears below.
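As an illustrative, non-authoritative sketch of process 1200, the server-side bookkeeping might be organized as below. The data structures, identifiers, and the simple time-based scoring are assumptions; as noted above, side communications could equally indicate engaged clarification of complex material, so any real scoring would weigh additional factors.

```python
# Illustrative sketch of process 1200 under assumed data structures: the
# server records group communications and scores attentiveness by the
# fraction of the event a user spent in side communications.
from dataclasses import dataclass

@dataclass
class GroupCommunication:
    participants: tuple[str, ...]  # user identifiers (assumed format)
    start: float                   # seconds since the event began
    duration: float                # seconds

def attentiveness(user: str, comms: list[GroupCommunication],
                  event_length: float) -> float:
    """Fraction of the event the user spent outside side communications."""
    side_time = sum(c.duration for c in comms if user in c.participants)
    return max(0.0, 1.0 - side_time / event_length)

comms = [GroupCommunication(("user_1", "user_2"), start=300.0, duration=600.0)]
for user in ("user_1", "user_3"):
    # These levels would then be transmitted to the event administrator.
    print(user, attentiveness(user, comms, event_length=3600.0))
```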
FIG. 13 is an illustrative flowchart of a process for modifying communications received by a user within a group in accordance with various embodiments. Process 1300 may begin at step 1302. At step 1302, a determination may be made that a first user device has entered a group. In some embodiments, a first user accessing an online event may enter into a group of online participants within the online event. For example, a user may want to join a group including one or more participants accessing the online event. The user may select an option to join the group, request an invitation to enter the group, "knock" on the group's room, be brought in by another group member, or join using any other suitable mechanism, or any combination thereof. In some embodiments, the user may be capable of entering the group without accessing the event, and may access the event after joining the group.
Next, communications may be transmitted from the first user device to the group. For example, after a user enters a group within an online event, the user may begin to send communications to additional members of the group. The user may be capable of transmitting audio, video, textual communications, documents, or any other form of communication, or any combination thereof, to the other group members. For example, the user may be able to transmit audio signals to additional group members.
Subsequently, communications received by the user may be modified in response to the user joining, and transmitting communications to, the group. Prior to joining the group, the user may have been capable of receiving communications solely from the event. For example, the mixture of audio received by the user prior to joining the group may have consisted of only the event's audio. After joining, the audio mixture received by the user may be split so that some audio is from the event and some audio is from the group. In this way, the user may perceive themselves to be in a real group within a physical event, where they would hear communications from group members while the event occurs in the background. In some embodiments, the system may automatically mix the group's audio and the event's audio to suitable levels such that the user appropriately hears both. Any mixture of the group's communications and the event's communications may occur. For example, the mixture may be half group/half event, one third event/two thirds group, or any other partitioning, or any combination thereof; a simple mixing sketch is shown below. In some embodiments, the user may be able to "pause" the communications of the event as they enter the group. This may allow the user to receive the group's communications only, but still be able to receive the event's communications at a later point in time by un-pausing the communications from the event.
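A minimal sketch of the audio repartitioning described above, assuming audio frames arrive as lists of float samples; the mixing weights and frame format are illustrative only, and a real implementation could partition the streams in any proportion.

```python
# Minimal sketch: weight the group's audio against the event's audio.
def mix_streams(event_frame: list[float], group_frame: list[float],
                group_fraction: float = 2 / 3) -> list[float]:
    """Mix two audio frames sample-by-sample with a chosen partitioning."""
    event_fraction = 1.0 - group_fraction
    return [event_fraction * e + group_fraction * g
            for e, g in zip(event_frame, group_frame)]

# Before joining a group: all event audio (group_fraction = 0.0).
# After joining: e.g., one third event / two thirds group.
print(mix_streams([0.5, 0.5, 0.5], [0.3, 0.0, -0.3], group_fraction=2 / 3))
```

Pausing the event, as described above, would correspond to buffering the event frames rather than mixing them in.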
The various embodiments described herein may be implemented using a variety of means including, but not limited to, software, hardware, and/or a combination of software and hardware. The embodiments may also be embodied as computer readable code on a computer readable medium, which may be any data storage device capable of storing data that can be read by a computer system. Various types of computer readable media include, but are not limited to, read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, optical data storage devices, or any other type of medium, or any combination thereof. The computer readable medium may be distributed over network-coupled computer systems.

Abstract

Systems, methods, and non-transitory computer readable media are described for monitoring participant attentiveness within events and for group assortments. In some embodiments, communications received from an online participant of an event may be monitored. Based on the monitored communications, a steady state level may be determined. Changes within the monitored communications from the steady state level may be detected and then stored within an event participation log.

Description

FIELD OF THE INVENTION
This generally relates to systems and methods for monitoring participant attentiveness within events and for group assortments.
BACKGROUND OF THE INVENTION
Online events, such as online classes, are quickly growing in popularity and abundance. What previously could only occur within a physical location (e.g., a classroom), may now be accessible from the comforts of one's home. This has tremendous benefits for many individuals, as it allows people to not miss events due to a variety of conditions (e.g., illness, weather, etc.). Furthermore, the number of individuals capable of accessing events may now grow larger than any physical location could accommodate, with the individuals only needing a network connection to “attend” an event.
As an illustrative example, many school systems are implementing online classrooms to help eliminate "snow days" from occurring. Although most children loathe the idea of no more snow days, this comes as a tremendous advantage to the educational system because course materials may now be disseminated regardless of whether or not the school is open. However, as useful as online classes may be, inherent issues may arise from a student working from home on their personal computer. For example, students attending an online class may also be able to surf the web and/or access one or more social media networks. As another example, students may have an online classroom running in the background and may play a video game or converse with one or more family members or friends. This may be a costly problem in that students will not participate fully in the event, and the event administrators (e.g., teachers) have little to no way of detecting such a situation.
Thus, it would be beneficial for there to be systems and methods that allow for participants of online events to be monitored to determine participation and attentiveness levels.
SUMMARY OF THE INVENTION
Systems, methods, and non-transitory computer readable media for monitoring participant attentiveness within events and for group assortments are provided. Such methods may include monitoring communications for online participants of an event, determining steady state levels of the communications, detecting changes within the monitored communications, and storing the changes in an event participation log. To determine the steady state level, the communications may be monitored for a period of time and modeled so that changes corresponding to physical events may be discerned from random fluctuations. Changes exceeding a predefined threshold, such as three standard deviations from the modeled data, may signify a "real" change within the communications, whereas changes smaller than the predefined threshold may be classified as non-relevant anomalies. In some embodiments, event statistics may be assigned to each online participant based on the stored changes. For example, a user who has been determined to not be paying attention may receive a low participation score or grade, whereas a user who has actively participated in the event may receive a high participation score.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other features of the present invention, its nature and various advantages will be more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings in which:
FIG. 1 is a block diagram depicting a system in accordance with various embodiments;
FIG. 2 is an illustrative block diagram of a device in accordance with various embodiments;
FIG. 3A is an illustrative graph of a steady state participation level in accordance with various embodiments;
FIG. 3B is an illustrative graph of a detected change within monitored communications in accordance with various embodiments;
FIG. 4 is a schematic illustration of a display screen in accordance with various embodiments;
FIG. 5 is a schematic illustration of a system for detecting and monitoring audio communications in accordance with various embodiments;
FIG. 6 is a schematic illustration of a display screen in accordance with various embodiments;
FIGS. 7A and 7B are schematic illustrations of various audio levels before and after a user joins a group in accordance with various embodiments;
FIG. 8 is an illustrative flowchart of a process for storing changes within monitored communications in accordance with various embodiments;
FIG. 9 is an illustrative flowchart of a process for determining changes within monitored communications in accordance with various embodiments;
FIG. 10 is an illustrative flowchart of a process for transmitting event participation data to users in accordance with various embodiments;
FIG. 11 is an illustrative flowchart of a process for transmitting grades to users in accordance with various embodiments;
FIG. 12 is an illustrative flowchart of a process for transmitting a level of attentiveness to an event administrator in accordance with various embodiments; and
FIG. 13 is an illustrative flowchart of a process for modifying communications received by a user within a group in accordance with various embodiments.
DETAILED DESCRIPTION OF THE INVENTION
The present invention may take form in various components and arrangements of components, and in various techniques, methods, or procedures and arrangements of steps. The referenced drawings are only for the purpose of illustrating embodiments, and are not to be construed as limiting the present invention. Various inventive features are described below that can each be used independently of one another or in combination with other features.
FIG. 1 is a block diagram depicting a system in accordance with various embodiments. System 100 may include server 102, user devices 104, and host device 108, which may communicate with one another across network 106. Although only three user devices 104, one host device 108, and one server 102 are shown within FIG. 1, persons of ordinary skill in the art will recognize that any number of user devices, host devices, and servers may be used.
Server 102 may be any number of servers capable of facilitating communications and/or servicing requests from user devices 104 and/or host device 108. User device 104 may send and/or receive data from server 102 and/or host device 108 via network 106. Similarly, host device 108 may send and/or receive data from server 102 and/or user devices 104 via network 106. In some embodiments, network 106 may facilitate communications between one or more user devices 104.
Network 106 may correspond to any network, combination of networks, or network devices that may carry data communications. For example, network 106 may be any one or any combination of local area networks ("LAN"), wide area networks ("WAN"), telephone networks, wireless networks, point-to-point networks, star networks, token ring networks, hub networks, or any other type of network, or any combination thereof. Network 106 may support any number of protocols such as Wi-Fi (e.g., 802.11 protocol), Bluetooth®, radio frequency systems (e.g., 900 MHz, 1.4 GHz, and 5.6 GHz communication systems), cellular networks (e.g., GSM, AMPS, GPRS, CDMA, EV-DO, EDGE, 3GSM, DECT, IS-136/TDMA, iDen, LTE, or any other suitable cellular network protocol), infrared, TCP/IP (e.g., any of the protocols used in each of the TCP/IP layers), HTTP, BitTorrent, FTP, RTP, RTSP, SSH, Voice over IP ("VOIP"), or any other communications protocol, or any combination thereof. In some embodiments, network 106 may provide wired communications paths for user devices 104 and/or host device 108.
User devices 104 may correspond to any electronic device or system capable of communicating over network 106 with server 102, host device 108, and/or with one or more additional user devices 104. For example, user devices 104 may be portable media players, cellular telephones, pocket-sized personal computers, personal digital assistants (“PDAs”), desktop computers, laptop computers, and/or tablet computers. User devices 104 may include one or more processors, storage, memory, communications circuitry, input/output interfaces, as well as any other suitable feature. Furthermore, one or more components of user device 104 may be combined or omitted.
Host device 108 may correspond to any electronic device or system capable of communicating over network 106 with server 102 and/or user devices 104. For example, host device 108 may be a portable media player, cellular telephone, pocket-sized personal computer, personal digital assistant (“PDA”), desktop computer, laptop computer, and/or tablet computer. In some embodiments, host device 108 may be substantially similar to user devices 104, and the previous description may apply. In some embodiments, one or more additional host devices may be included within system 100 and/or host device 108 may be omitted from system 100.
Although examples of embodiments may be described for a user-server model with a server servicing requests of one or more user applications, persons of ordinary skill in the art will recognize that any other model (e.g., peer-to-peer), may be available for implementation of the described embodiments. For example, a user application executed on user device 104 may handle requests independently and/or in conjunction with server 102.
FIG. 2 is an illustrative block diagram of a device in accordance with various embodiments. Device 200 may, in some embodiments, correspond to one of user devices 104 and/or host device 108 of FIG. 1. Persons of ordinary skill in the art will recognize that device 200 is merely one example of a device that may be implemented within a server-device system, and it is not limited to being only one part of the system. Furthermore, one or more components included within device 200 may be added and/or omitted.
In some embodiments, device 200 may include processor 202, storage 204, memory 206, communications circuitry 208, input interface 210, and output interface 216. Input interface 210 may, in some embodiments, include camera 212 and microphones 214. Output interface 216 may, in some embodiments, include display 218 and speaker 220. In some embodiments, one or more of the previously mentioned components may be combined or omitted, and/or one or more components may be added. For example, storage 204 and memory 206 may be combined into a single element for storing data. As another example, device 200 may additionally include a power supply, a bus connector, or any other additional component. In some embodiments, device 200 may include multiple instances of one or more of the components included therein. However, for the sake of simplicity only one of each component has been shown in FIG. 2.
Processor 202 may include any processing circuitry, such as one or more processors capable of controlling the operations and functionality of device 200. In some embodiments, processor 202 may facilitate communications between various components within device 200. Processor 202 may run the device's operating system, applications resident on the device, firmware applications, media applications, and/or any other type of application, or any combination thereof. In some embodiments, processor 202 may process one or more inputs detected by device 200 and perform one or more actions in response to the detected inputs.
Storage 204 may include one or more storage mediums. Various types of storage mediums may include, but are not limited to, hard-drives, solid state drives, flash memory, permanent memory (e.g., ROM), or any other storage type, or any combination thereof. Any form of data or content may be stored within storage 204, such as photographs, music files, videos, contact information, applications, documents, or any other file, or any combination thereof. Memory 206 may include cache memory, semi-permanent memory (e.g., RAM), or any other memory type, or any combination thereof. In some embodiments, memory 206 may be used in place of and/or in addition to external storage for storing data on device 200.
Communications circuitry 208 may include any circuitry capable of connecting to a communications network (e.g., network 106) and/or transmitting communications (voice or data) to one or more devices (e.g., user devices 104 and/or host device 108) and/or servers (e.g., server 102). Communications circuitry 208 may interface with the communications network using any suitable communications protocol including, but not limited to, Wi-Fi (e.g., 802.11 protocol), Bluetooth®, radio frequency systems (e.g., 900 MHz, 1.4 GHz, and 5.6 GHz communications systems), infrared, GSM, GSM plus EDGE, CDMA, quadband, VOIP, or any other protocol, or any combination thereof.
Input interface 210 may include any suitable mechanism or component for receiving inputs from a user operating device 200. Input interface 210 may also include, but is not limited to, an external keyboard, mouse, joystick, microphone, musical interface (e.g., musical keyboard), or any other suitable input mechanism, or any combination thereof.
In some embodiments, input interface 210 may include camera 212. Camera 212 may correspond to any image capturing component capable of capturing images and/or videos. For example, camera 212 may capture photographs, sequences of photographs, rapid shots, videos, or any other type of image, or any combination thereof. In some embodiments, device 200 may include one or more instances of camera 212. For example, device 200 may include a front-facing camera and a rear-facing camera. Although only one camera is shown in FIG. 2 to be within device 200, persons of ordinary skill in the art will recognize that any number of cameras, and any camera type, may be included.
In some embodiments, device 200 may include microphones 214. Microphones 214 may be any component capable of detecting audio signals. For example, microphones 214 may include one or more sensors for generating electrical signals and circuitry capable of processing the generated electrical signals. In some embodiments, microphones 214 may correspond to multiple microphones, such as a first microphone and a second microphone. In some embodiments, device 200 may include multiple microphones capable of detecting various frequency levels (e.g., high-frequency microphone, low-frequency microphone, etc.). In some embodiments, device 200 may include one or more external microphones connected thereto and used in conjunction with, or instead of, microphones 214.
Output interface 216 may include any suitable mechanism or component for presenting outputs to a user operating device 200. In some embodiments, output interface 216 may include display 218. Display 218 may correspond to any type of display capable of presenting content to a user and/or on a device. Display 218 may be any size and may be located on one or more regions/sides of device 200. For example, display 218 may fully occupy a first side of device 200, or may occupy a portion of the first side. Various display types may include, but are not limited to, liquid crystal displays ("LCD"), monochrome displays, color graphics adapter ("CGA") displays, enhanced graphics adapter ("EGA") displays, variable graphics array ("VGA") displays, or any other display type, or any combination thereof. In some embodiments, display 218 may be a touch screen and/or an interactive display. In some embodiments, the touch screen may include a multi-touch panel coupled to processor 202. In some embodiments, display 218 may be a touch screen and may include capacitive sensing panels. In some embodiments, display 218 may also correspond to a component of input interface 210, as it may recognize touch inputs.
In some embodiments, output interface 216 may include speaker 220. Speaker 220 may correspond to any suitable mechanism for outputting audio signals. For example, speaker 220 may include one or more speaker units, transducers, or array of speakers and/or transducers capable of broadcasting audio signals and audio content to a room where device 200 may be located. In some embodiments, speaker 220 may correspond to headphones or ear buds capable of broadcasting audio directly to a user.
FIG. 3A is an illustrative graph of a steady state participation level in accordance with various embodiments. Graph 300 is a two-dimensional graph of data plotted over time. Graph 300 may include first axis 302 and second axis 304. In some embodiments, first axis 302 may be a time axis. As data is obtained, the time associated with the data increases. For example, data captured closer to the beginning of a measurement, run, or data acquisition time period may have a lower or smaller time value along axis 302 than data captured later on. In some embodiments, second axis 304 may be a sample index axis. A sample index may correspond to any type of data unit to be captured. For example, graph 300 may correspond to a magnitude of audio detected by one or more microphones. In this scenario, lower values of data along axis 304 may correspond to lower levels of audio detected, whereas higher values may correspond to higher levels of audio detected. Persons of ordinary skill in the art will recognize that any unit may be used for axis 304 including, but not limited to, decibel level, microphone directionality offset, outputted volume, or any other unit, or any combination thereof.
Graph 300 may also include data points 306. Data points 306 may correspond to data obtained or produced by a user device, a host device, and/or a server, over a period of time. In some embodiments, data points 306 may correspond to a level of audio produced by a user and captured by a user device. In some embodiments, data points 306 may correspond to a level of audio produced by multiple user devices accessing an online event, such as a classroom.
In some embodiments, data points 306 may be obtained over the course of a period of time to determine a steady state level of sound or noise within an event or area. For example, in order to determine an average amount of noise within a room, a user device may record sound within the room for a period of time. This may allow the device to determine the average sound level within the room. In some embodiments, a model fit of the data may be generated to determine the steady state level over the period of time. For example, fit 308 may be generated to model data points 306. Fit 308 may represent the steady state level of the data over the period of time graphed within graph 300.
The model fit may allow for extrapolation to other periods of time. This may allow the user device and/or the server to determine whether any new sounds exceed the "average level of noise" based on the model fit of the data. Persons of ordinary skill in the art will recognize that any model fit and/or any data modeling technique may be used to determine the steady state level of data points 306, and the aforementioned use of an average is merely exemplary. In fact, fit 308 may, in some embodiments, correspond to an exponential fit of data 306 or a linear fit of data 306; however, higher order polynomials, moving averages, random samplings, and/or maximum likelihood functions may also be used.
As shown within graph 300, fit 308 may indicate an average level of data points 306 over a period of time. In some embodiments, the data points may all fall within a certain “error margin” of fit 308. For example, data points 306 may all fall within one or more standard deviations, σ, of fit 308. Typically, the formula used to determine standard deviation is:
$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}$,  (Equation 1)
where $N$ corresponds to the total number of data points, $x_i$ corresponds to the $i$-th individual data point, and $\mu$ corresponds to the mean of the data points, typically found by summing all of the data points and dividing by $N$.
In some embodiments, if a captured data point exceeds a predefined number of standard deviations from the mean, then that data point may not fall within the expected fluctuations of the data. For this reason, typically the larger the data acquisition, the more accurate the model, and therefore the more confident one may be that a given point is an outlier.
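As a worked illustration of Equation 1 and the outlier test above, the following sketch assumes the steady state model is a simple mean of the monitored samples; the sample values and the three-sigma default threshold are illustrative only.

```python
# Worked sketch of Equation 1: compute (mu, sigma) for steady state data
# and flag samples that deviate by more than k standard deviations.
import math

def steady_state_stats(samples: list[float]) -> tuple[float, float]:
    """Return (mu, sigma) for the monitored steady state data."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / n)
    return mu, sigma

def is_outlier(x: float, mu: float, sigma: float, k: float = 3.0) -> bool:
    """True if x deviates from the steady state by more than k * sigma."""
    return abs(x - mu) > k * sigma

mu, sigma = steady_state_stats([0.9, 1.1, 1.0, 0.95, 1.05, 1.0])
print(is_outlier(1.02, mu, sigma))  # within expected fluctuation -> False
print(is_outlier(2.50, mu, sigma))  # far beyond three sigma -> True
```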
FIG. 3B is an illustrative graph of a detected change within monitored communications in accordance with various embodiments. In some embodiments, graph 350 of FIG. 3B may be substantially similar to graph 300 of FIG. 3A, with the exception that graph 350 may correspond to data points exceeding a predefined threshold. Graph 350 may include first axis 302 and second axis 304, which may, in some embodiments, be substantially similar to first axis 302 and second axis 304 of FIG. 3A, and the previous description of the latter may apply to the former.
Graph 350 may include data points 318. In some embodiments, data points 318 may be substantially similar to data points 306 of FIG. 3A, with the exception that data points 318 may include one or more data points indicating a change in the received data above a predefined threshold value. In some embodiments, data points 318 may be described in four (4) groupings of data points. Data points 316 may correspond to a first grouping of data points corresponding to an earlier period of time when data was acquired. Data points 316 may correspond to a grouping of data located about model fit 308 obtained from fitting data points 306 of FIG. 3A. In some embodiments, because data points 316 are located about model fit 308, they may be considered "background" data. For example, data points 316 may all fall within a region defined by model fit 308 and one-standard-deviation threshold line 312. This may correspond to data points 316 all having values less than one standard deviation from fit 308. In some embodiments, data points 326, which may be a second grouping of data points 318, may be substantially similar to data points 316, with the exception that data points 326 may be acquired later in time than data points 316.
First standard deviation line 312 may, in some embodiments, be included within graph 350. Line 312 may correspond to one unit of standard deviation (see Equation 1), based on data points 306. Typically, data found to be within one standard deviation of the mean is statistically indiscernible from the mean. This is because, if the data follows a normal distribution, approximately 68% of the data values will fall within one standard deviation of the mean. Data points 318 that fall between fit 308 and line 312, such as data points 316 and 326, may, in some embodiments, be considered statistically within reasonable bounds of the mean. Therefore, data groupings such as groupings 316 and 326 may, in some embodiments, be considered data not indicative of a detected change from the modeled data.
In some embodiments, graph 350 may also include threshold line 338. Line 338 may correspond to any number of standard deviations from fit 308. For example, line 338 may correspond to three standard deviations from fit 308. Data points that fall between line 312 and line 338, such as data points 348, may correspond to data points outside of the statistically insignificant region, but not significant enough to be considered a detected change. For example, data points 348 may include some data points that exceed second standard deviation line 314, but are less than threshold line 338. However, persons of ordinary skill in the art will recognize that although line 338 has been used to indicate the predefined threshold for determining significant data points, any line corresponding to any threshold may be used. For example, line 314 may be used as the threshold value. In this scenario, some data points from data points 348 may be considered significant because they exceed line 314. As another example, in some embodiments, data points that exceed one standard deviation, such as data points 348, may be deemed significant or indicative of a change.
Data points 310 may, in some embodiments, correspond to a third grouping of data points that exceed the defined threshold indicated by line 338. Thus, data points 310 may be considered statistically significant because they exceed the threshold value defined by line 338. Persons of ordinary skill in the art will recognize however that any threshold may be used, and the threshold may be set by the user, the user device, the host device, and/or the server.
As an illustrative example of the data depicted within FIGS. 3A and 3B, a user may place a microphone within a room to determine an ambient level of noise. The microphone may detect data points, such as data points 306 of FIG. 3A, and this may be used to determine a model of the ambient noise within the room. In some embodiments, the microphone may then be kept in the same room, and used to detect any new sounds within the room. If a sound exceeds a predefined threshold from the determined ambient noise level, the sound may be determined to be significant, whereas sounds that are substantially within one or more standard deviations from the model of the ambient noise level and less than the threshold value may also be considered ambient noise.
FIG. 4 is a schematic illustration of a display screen in accordance with various embodiments. Display screen 400 may include user report boxes 402, 404, 406, and 408. In some embodiments, display screen 400 may correspond to a user interface presented on an event administrator's device, such as host device 108 of FIG. 1. For example, the event administrator may correspond to a teacher of an online class. In this scenario, each user report box may correspond to a report box of a student accessing the online class. Persons of ordinary skill in the art will recognize that any number of user report boxes may be included within display screen 400, and the use of four (4) user report boxes is merely exemplary.
In some embodiments, each user report box may correspond to a separate user, or users, accessing the interactive online event monitored by the event administrator. For example, box 402 may correspond to a first user, whereas boxes 404, 406 and 408 may respectively correspond to a second user, a third user, and a fourth user. Each user may access the interactive online event remotely from different locations using separate user devices (e.g., user devices 104 of FIG. 1).
Each of user boxes 402, 404, 406, and 408 may include a user name and a participation grade for the corresponding user. For example, user box 402 may correspond to the first user and may have user name 402 a, "USER 1". Similarly, user name 404 a may correspond to the second user, "USER 2", user name 406 a may correspond to the third user, "USER 3", and user name 408 a may correspond to the fourth user, "USER 4". Any user name may be used to correspond to a particular user. For example, a user may input a specific user name or handle to be displayed as their user name, or an email address, a social media network name, or any other suitable name, or any combination thereof, may be used. In some embodiments, additional information may be displayed along with the user name. For example, a user may log into the online event using a social media network account profile. The server (e.g., server 102) may pull relevant information from the social media network profile and display some or all of the pulled information within a user's report box. In some embodiments, metadata corresponding to the user may be displayed within their user report box. For example, the server may determine a user's log-in location based on their user device's IP address, and may display the location within the user report box.
Each user report box may also include a participation grade. In some embodiments, the participation grade may be based on a user's level of participation within an event. For example, a student participating heavily within a class may receive a high participation grade. In some embodiments, the participation grade may be automated. For example, the server may assign a participation grade to a user based on the user's determined level of attentiveness. In some embodiments, participation grade 402 b may correspond to USER 1's participation grade, whereas participation grades 404 b, 406 b, and 408 b may correspond to the participation grades of USER 2, USER 3, and USER 4, respectively.
FIG. 5 is a schematic illustration of a system for detecting and monitoring audio communications in accordance with various embodiments. System 500 may include user 502 and user device 504. In some embodiments, user device 504 may be substantially similar to user device 200 of FIG. 2, and the previous description of the latter may apply. In some embodiments, multiple instances of user device 504 may be included within system 500. For example, a first user device may be used to display images to user 502 whereas a second user device may be used to receive and/or output communications from/to user 502.
User device 504 may, in some embodiments, include communications receiver 506 and/or communications outputs 510 and 512. For example, communications receiver 506 may correspond to one or more microphones, transducers, and/or transducer arrays. Various types of microphones may include, but are not limited to, omnidirectional microphones, unidirectional microphones, cardioid microphones, bi-directional microphones, shotgun microphones, and/or boundary microphones. In some embodiments, multiple instances of receiver 506 may be included within or external to user device 504. In some embodiments, communications receiver 506 may correspond to one or more video capturing devices. For example, receiver 506 may correspond to a camera capable of capturing still images and/or video. In some embodiments, multiple instances of receiver 506 may be included, where one or more receivers may be operable to receive audio communications while one or more other receivers may be operable to receive video communications.
Communications outputs 510 and 512 may, for example, correspond to any speaker, set of speakers, transducer, transducer array, headset, or any other component capable of outputting communications. In some embodiments, communications receiver 506 and communications outputs 510 and 512 may be combined into a single component.
In some embodiments, system 500 may include additional communications receivers 508. For example, in addition to a first microphone placed in front of a user, a second and third microphone may be placed on either side of the user, orthogonal to the position of the first microphone but on opposing sides. Receivers 506 and 508 may, for example, each be placed at predefined distances and angles from the user. In some embodiments, communications receivers 508 may be substantially similar to communications receiver 506, and the previous description may apply. Furthermore, persons of ordinary skill in the art will recognize that although a three-microphone system is shown within system 500, any orientation, pattern, or setup of communications receivers may be used within the system, and the use of three microphones placed in a triangle pattern is merely exemplary. For example, one or more cardioid microphones may be placed in any suitable pattern around user 502 to capture a large amount of the audio communications that may emanate from user 502.
In some embodiments, user 502 may communicate with user device 504 in the direction of transmission line 516. Transmission line 516 may, for example, correspond to the “line of sight” of the transmission coming from user 502 and directed towards user device 504. For example, a user may speak into their user device placed in front of them, and transmission line 516 may correspond to the direction of the user's audio signal.
In some embodiments, user 502 may communicate with receivers 508 in the direction of transmission lines 518. Transmission lines 518 may, for example, correspond to the "line of sight" of an audio signal output from user 502 in the direction of receivers 508. For example, while watching a video on their user device 504, user 502 may turn and speak into a cellular telephone located on their right-hand side. This communication may be represented by one of transmission lines 518.
FIG. 6 is a schematic illustration of a display screen in accordance with various embodiments. Display screen 600 may include user interface 602, which may be displayed to an online participant accessing an event. For example, user 502 of FIG. 5 may be attending an online class, and user interface 602 may be displayed to the user on their user device (e.g., user device 504). In some embodiments, a user may be presented with user interface 602 at the end of an online event. For example, the user attending the online class may be presented with user interface 602 after the class has ended. In some embodiments, user interface 602 may present a user's participation report and grade for the event.
The participation report may correspond to a determined level of participation and/or attentiveness of the user during the online event. This may allow the server, or system administrator, to provide participation grades to event attendees without having to physically monitor each one. In some embodiments, each participant may have one or more receivers located about them that may capture communications from the user. The communications may be monitored and analyzed against the online participant themselves (e.g., for feedback), as well as any other online participants, to determine if and when a participant has diverted attention away from the online event. For example, one or more receivers (e.g., receivers 506 and 508 of FIG. 5) may monitor an online participant of an online class. The one or more receivers may determine a steady state level of audio communication from the user and/or the ambient environment of the user. In some embodiments, one or more receivers of a receiver array may be located in a direction orthogonal to the direction of the user device. For example, receivers 508 may be located orthogonal to, and a certain distance away from, user device 504. Thus, in some embodiments, audio signals detected by receivers 508 may correspond to the user producing communications in the direction of transmission lines 518 directed at receivers 508. In this scenario, the user may be communicating in a different direction than transmission line 516 directed at user device 504, and therefore may not be interacting with the online event being broadcast thereon.
In some embodiments, user interface 602 may correspond to "USER 1" 604. User interface 602 may display USER 1's participation log based on the determined levels of attentiveness of the user. For example, indicator 606 may indicate a percentage of time that communications received by a first microphone were not at a predetermined steady state level. In this particular scenario, indicator 606 indicates that microphone 1 was out of steady state levels for only 20% of the time of the online event. In some embodiments, microphone 1 may correspond to receiver 506 of FIG. 5, and therefore receiver 506 may have only received communications exceeding steady state levels for 20% of the time of the online event. Thus, this may correspond to the user communicating with user device 504 for 20% of the time.
Indicators 608A and 608B may indicate a percentage of time that microphones 2 and 3 may not have been at steady state levels. For example, microphone 2 may have exceeded steady state levels for 50% of the time of the online event, whereas microphone 3 may have exceeded steady state levels for 30% of the time of the online event. In some embodiments, microphones 2 and 3 may correspond to receivers 508. In this particular scenario, because the amount of time spent out of steady state at each of microphones 2 and 3 is greater than at microphone 1 (e.g., receiver 506), the system may determine that the user was not paying attention. This may be due, in part, to the fact that the received communications at receivers 508 were higher than those at receiver 506, which may be included within user device 504.
Thus, because the amount of time the user spent out of steady state levels at the orthogonal microphones was greater than the amount of time captured at the microphone corresponding to the direction of the user device associated with the online event, the user may receive participation grade 620, corresponding to a "C". Persons of ordinary skill in the art, however, will recognize that any grade may be associated with any amount of time each of the microphones spends out of steady state levels, and the aforementioned is merely exemplary; one possible mapping is sketched below.
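One hypothetical mapping from these out-of-steady-state percentages to a letter grade follows; the thresholds and grade bands are arbitrary assumptions chosen only to reproduce the figure's example, not values taken from the disclosure.

```python
# Hypothetical grading heuristic for FIG. 6: compare how often the
# device-facing microphone versus the side microphones left steady state.
def participation_grade(device_pct: float, side_pcts: list[float]) -> str:
    """Grade from the fraction of the event each microphone spent out of
    steady state (device mic = toward the event; side mics = away)."""
    attention_margin = device_pct - max(side_pcts)
    if attention_margin > 0.2:
        return "A"
    if attention_margin > 0.0:
        return "B"
    if attention_margin > -0.4:
        return "C"
    return "D"

# Microphone 1 (device) out of steady state 20% of the event; microphones
# 2 and 3 (sides) out of steady state 50% and 30%, as in FIG. 6.
print(participation_grade(0.20, [0.50, 0.30]))  # "C"
```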
FIGS. 7A and 7B are schematic illustrations of various audio levels before and after a user joins a group in accordance with various embodiments. System 700 of FIG. 7A may represent a percentage of an online participant's received communications prior to joining a group. For example, prior to joining a group, an online participant's microphone(s) may be open, and may be capable of receiving audio from all participants and/or any presenters of an online event. For example, bar 702A may depict an amount of audio communications received from the online event prior to the user joining a group. As seen in bar 702A, prior to joining a group, substantially all of the participant's received communications may be received from the online event (e.g., event participants, presenters, etc.), as indicated by region 704.
Bar 702B of FIG. 7B, however, may display various amounts of audio communications received by an online participant after joining a group. For example, the online participant may still receive communications from the online event, as indicated by region 704 of bar 702B, however the online participant may also receive communications from the group, as indicated by region 706. In some embodiments, because the participant has joined the group, the communications received therefrom may be greater or more prominent than the communications received from the online event.
As an illustrative example, the user may be taking part in an online conference. Prior to joining a group, the user may be capable of receiving audio communications from the event administrator, presenter(s), and/or other participants in the general event forum. The user may, in some embodiments, join a group of other online participants within the event discussing a more focused topic. For example, the user may be interested in a specific panel occurring within the online conference. In this scenario, the user may still receive the communications that were previously received prior to joining the group; however, those communications may be lowered in volume. In some embodiments, communications may be transmitted to the user device at normal volume, or the volume with which they were received, and a flag or indication may be sent to the user indicating that some of the communications should be adjusted. For example, a user may receive all communications from an online class, but receive an indication that they should raise the volume of the teacher (or lower the volume of everyone but the teacher). In this way, the user may selectively decide which individuals are broadcast at certain volume levels. Additional communications from the group itself may now occupy a larger percentage of the user's received communications because the group may now be the focus of the user's attention. In this scenario, upon entering the group, the group's communications may be raised, or received at a higher volume level by the online participant, relative to other communications (e.g., event communications). In some embodiments, one or more adjusters may be included within the background of the display screen. The adjusters may allow the user to adjust volume levels for specific aspects of the online event. For example, an adjuster may be included that allows the user to raise or lower the volume of the audience and/or the presenter; a sketch of such per-source adjusters follows.
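The per-source volume adjusters described above might, as one non-authoritative sketch, be modeled as a table of user-settable gains applied to each incoming stream; the class name, source labels, and gain range are assumptions.

```python
# Illustrative sketch of per-source volume adjusters: each named audio
# source gets a user-settable gain applied to its frames.
class VolumeAdjusters:
    def __init__(self) -> None:
        self.gains: dict[str, float] = {}

    def set_gain(self, source: str, gain: float) -> None:
        self.gains[source] = max(0.0, min(2.0, gain))  # clamp to [0, 2]

    def apply(self, source: str, frame: list[float]) -> list[float]:
        g = self.gains.get(source, 1.0)  # default: pass through unchanged
        return [g * sample for sample in frame]

mixer = VolumeAdjusters()
mixer.set_gain("teacher", 1.5)   # raise the volume of the presenter
mixer.set_gain("audience", 0.4)  # lower the volume of everyone else
print(mixer.apply("teacher", [0.2, -0.1]))  # scaled samples
```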
FIG. 8 is an illustrative flowchart of a process for storing changes within monitored communications in accordance with various embodiments. Process 800 may begin at step 802. At step 802, communications within an event may be monitored. In some embodiments, the event may be an interactive online event where multiple online participants may communicate with one another. For example, students may log into an online classroom and may be presented with lecture materials from a teacher or from other students.
In some embodiments, a user's communications with their user device may be monitored. A user may be provided with multiple receivers capable of detecting communications directed substantially at them. For example, receivers 506 and 508 of FIG. 5 may be provided to detect communications from user 502. In some embodiments, the communications may be directed at receiver 506 located on or within user device 504. For example, receiver 506 may correspond to a microphone located in a user device, whereas receivers 508 may correspond to external microphones located proximate to the user. In some embodiments, each microphone in the system may be able to detect when audio is produced by the user, and the proportion of audio directed towards that particular microphone. For example, receivers 506 and 508 may be cardioid microphones, and each may be capable of detecting received audio. In some embodiments, audio directed substantially at microphone 506 may be detected peripherally by microphones 508. The system may be capable of detecting how much of the audio directed at microphone 506 is captured by microphones 508 to determine the user's intended direction of communication. For example, if a large percentage of the audio is detected by receivers 508, then the system may recognize that the user may not be communicating with their user device, and may have their attention diverted elsewhere (e.g., a friend, video game, etc.).
In some embodiments, the user device may monitor the communications produced by the event or within a user's system (e.g., user device 504 and receivers 506 and 508). For example, microphones for each online participant accessing the event may be monitored individually and/or collectively. As another example, the user device may monitor the outputted communications from the user device's speakers to determine an amount of ambient noise produced by the event. This may help determine any contributing feedback noise that may occur when the receivers detect audio produced by the event itself.
At step 804, a steady state level may be determined based on the monitored communications. The steady state level may correspond to a determined level of ambient noise within a particular region or area where the communications are being monitored. For example, receivers 506 and 508 may monitor the communications produced within system 500 for a period of time to determine an average amount of noise produced at each microphone or collectively. In some embodiments, the average amount of noise may reflect a typical amount of noise expected to be received by a microphone.
In some embodiments, the audio detected by the receivers may be modeled to determine an average amount of noise, or a steady state level of the noise. For example, data points 306 may be modeled using fit 308. Fit 308 may model the received communications. The model may allow the user and/or the system to have an estimation of a level of the noise that is detectable by a user device at any point in the future. Thus, an extrapolation to future time periods may be used to determine whether changes in the received communications occur.
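As a sketch of this modeling and extrapolation step, the snippet below fits a linear steady state model (one of the fit types mentioned for fit 308) and predicts the expected level at a future time; the sample data and the use of numpy.polyfit are illustrative assumptions, and any other modeling technique could be swapped in.

```python
# Sketch: fit a linear steady state model to monitored levels and
# extrapolate the expected "average level of noise" to future times.
import numpy as np

times = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])          # seconds
levels = np.array([1.00, 1.02, 0.98, 1.01, 0.99, 1.03])   # monitored audio

slope, intercept = np.polyfit(times, levels, deg=1)  # least-squares line

def expected_level(t: float) -> float:
    """Extrapolate the steady state model to a future time t."""
    return slope * t + intercept

# A newly observed sound can now be compared against expected_level(t).
print(expected_level(10.0))
```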
At step 806, any changes in the monitored communications may be detected. The detected changes may correspond to communications that exceed expected communication levels based on the determined steady state level. For example, data points 310 of FIG. 3B may correspond to detected communications that exceed a predefined threshold for a steady state level (e.g., threshold line 338). In this particular scenario, data points 310 exceed line 338, which may correspond to a predefined threshold above the steady state level signified by model fit 308. Fit 308 of FIG. 3B may, in some embodiments, be substantially similar to fit 308 of FIG. 3A, and line 338 may correspond to a particular number of standard deviations above the model fit. Thus, data points 310 may correspond to detected communications that exceed the determined steady state level by more than the predefined number of standard deviations.
In some embodiments, the detected change in the communications need only exceed the average level of communications based on a model fit of the monitored communications. For example, data points 348 may exceed fit 308, even though they may not exceed the predefined threshold indicated by line 338. In this particular scenario, data points 348 may be “detected” because the threshold has been set to fit 308 only. In some embodiments, the threshold may be set to one standard deviation line 312 or two standard deviation line 314.
At step 808, any detected change may be stored within an event participation log. The event participation log may be capable of storing each detected change, as well as when the change occurred and by how much the detected change exceeded the threshold value. For example, at 5 minutes after a teacher began to present their materials, a spike in audio communications from a certain online participant may have been detected. The timestamp and intensity level of the spike may be stored within the event participation log along with one or more additional pieces of data associated with the data spike.
In some embodiments, the detected change may be analyzed prior to being stored within the event participation log. For example, the detected change may be analyzed to determine which receiver detected the change. If the change was detected by a receiver located in the direction of expected event communications (e.g., receiver 506), the change may be classified as an event signal, and the user may therefore be determined to be participating in the event. If, however, the change was detected by a receiver not located in the direction of any expected event communication (e.g., receivers 508), the change may be classified as a non-event signal. Non-event signals may signify that the user is not participating in the event; the system, host, or event administrator may therefore determine that the user is not paying attention to the event, and the user may receive a lower participation score. One possible form of the resulting log is sketched below.
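One possible shape for the event participation log of steps 806 and 808 is sketched below; the field names, the receiver identifier used to classify event versus non-event signals, and the use of wall-clock timestamps are all assumptions for illustration.

```python
# Illustrative sketch of an event participation log: each entry records
# when a change occurred, its magnitude, and which receiver detected it.
import time
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    timestamp: float      # when the change occurred
    excess_sigma: float   # how far beyond the threshold the change was
    receiver: str         # which receiver detected the change
    event_signal: bool    # True if detected by the event-facing receiver

@dataclass
class ParticipationLog:
    entries: list[LogEntry] = field(default_factory=list)

    def record(self, excess_sigma: float, receiver: str) -> None:
        self.entries.append(LogEntry(
            timestamp=time.time(),
            excess_sigma=excess_sigma,
            receiver=receiver,
            event_signal=(receiver == "receiver_506"),  # assumed device mic
        ))

log = ParticipationLog()
log.record(excess_sigma=1.8, receiver="receiver_508a")  # non-event signal
print(log.entries[0].event_signal)  # False -> may lower participation score
```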
FIG. 9 is an illustrative flowchart of a process for determining changes within monitored communications in accordance with various embodiments. Process 900 may begin at step 902. At step 902, audio communications may be monitored. At step 904, a steady state level of audio may be determined. At step 906, a change may be detected from the steady state level. In some embodiments, steps 902, 904, and 906 may be substantially similar to steps 802, 804, and 806 of FIG. 8, and the previous description of the latter may apply to the former.
At step 908, a query may be run to determine whether the detected change from the steady state level is greater than a threshold. For example, data points 310 may correspond to a detected change above line 338. In some embodiments, line 338 may correspond to a predefined threshold above a steady state level of the monitored communication (e.g., fit 308). The predefined threshold may correspond to any threshold level. For example, line 338 may correspond to three (3) standard deviations from a model fit of the data corresponding to a steady state level. Statistically speaking, approximately 99.7% of normally distributed data falls within three standard deviations of the mean. Thus, a data point exceeding three standard deviations would have only an approximately 0.3% chance of arising from the steady state noise alone. However, persons of ordinary skill in the art will recognize that any number of standard deviations may be used to determine the significance of a particular data point or grouping, and the use of three standard deviations is merely exemplary.
If, at step 908, the detected change is determined to be less than the threshold, process 900 may return to step 902 where audio communications may be monitored. In some embodiments, upon return to step 902, the detected change from step 906 may be used to dynamically update the steady state level. However, in some embodiments, after step 908, process 900 may return to step 906 where the process may wait to detect additional changes from the steady state level.
If, however, at step 908, the detected change is determined to be greater than the threshold value, process 900 may move to step 910. At step 910, the detected change from step 906 may be recorded in a participation log. In some embodiments, the participation log may store all activities related to one or more participants of an event. For example, the participation log may be used to determine participation grades for students accessing the online event. If a change is detected within a student's monitored communications, it may correspond to either the user actively participating in the event or the user not paying attention to the event (e.g., communicating with friends, playing video games, etc.).
In some embodiments, if the change is detected, one or more algorithms resident on the user device and/or the server may look to determine the user's activities at the time of the detected change. For example, the detected change may correspond to the user actively communicating with their user device, and thus the event, or the detected change may correspond to the user engaging in one or more additional activities (e.g., playing a video game, communicating with a friend), and therefore not participating in the event. Thus, in some embodiments, the one or more algorithms may look to see whether the user provided one or more inputs to the event using their user device. If so, it may be determined that the user is participating in the event. However, if there are no noticeable event inputs or other indications that the user is interacting with the event, then the detected change may correspond to the user not participating in the event, and the change may be flagged as a non-event signal.
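One way such an input check might be sketched is below: a detected change that falls near a logged event input (a click, an answer, a raised hand) is treated as participation, and otherwise flagged as a non-event signal. The five-second window and all names are assumptions for illustration.

```python
def flag_change(change_time_s, input_times_s, window_s=5.0):
    """Classify a detected change by proximity to logged event inputs."""
    near_input = any(abs(t - change_time_s) <= window_s
                     for t in input_times_s)
    return "event" if near_input else "non-event"

# The user answered a question at t=298 s, so a spike at t=300 s counts.
print(flag_change(300.0, [120.0, 298.0]))  # -> "event"
print(flag_change(300.0, [120.0]))         # -> "non-event"
```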
FIG. 10 is an illustrative flowchart of a process for transmitting event participation data to users in accordance with various embodiments. Process 1000 may begin at step 1002. At step 1002, a first audio signal may be received at a first microphone or audio receiver. In some embodiments, a system, such as system 500 of FIG. 5, may include one or more communications receivers, such as receivers 506 and 508. Each receiver may be capable of receiving/detecting communications from a user, such as user 502. For example, receiver 506 may receive communications from user 502 via communications line 516.
At step 1004, a determination may be made as to which communications receiver captured the first audio signal. In some embodiments, the system may include multiple communications receivers set up to detect audio signals. For example, system 500 of FIG. 5 may include receiver 506 placed in front of user 502, as well as receivers 508 placed adjacent to user 502. Persons of ordinary skill in the art will recognize that any number of audio receivers, which may be oriented in any suitable configuration, may be used, and the use of three (3) receivers is merely exemplary.
At step 1006, event participation data may be created based on which receiver captured the communications. Continuing the aforementioned example, if communications are received by receiver 506, event participation data corresponding to the user participating in the event may be created. Receiver 506 may, in some embodiments, correspond to a microphone included within, or substantially on, user device 504. In this particular scenario, communications received by receiver 506 may correspond to communications that are directed to the user device. Conversely, if communications are received by receivers 508, event participation data corresponding to the user not participating in the event may be created. Receivers 508 may, in some embodiments, be positioned to the sides of, and substantially in line with, user 502. Thus, any communications detected by receivers 508 would correspond to user 502 being oriented substantially away from user device 504, and therefore not associated with the event.
At step 1008, the event participation data may be transmitted to the event administrator device. For example, data corresponding to a user's participation levels may be sent to the event administrator so that the event administrator may assign a participation grade to the user. In some embodiments, if the created event participation data corresponds to the user participating in the event (e.g., audio signal detected by receiver 506), then the event participation data sent to the event administrator may indicate that the user had been paying attention. However, if the event participation data indicates that the user was not participating in the event (e.g., audio signal detected by receivers 508), then the event administrator may assign a participation grade to the user reflective of the user not paying attention to the event.
FIG. 11 is an illustrative flowchart of a process for transmitting grades to users in accordance with various embodiments. Process 1100 may begin at step 1102. At step 1102, a first audio signal may be received by a communications receiver. For example, an audio signal carried on transmission line 516 of FIG. 5 may be received at communications receiver 506 on user device 504. In some embodiments, step 1102 of FIG. 11 may be substantially similar to step 1002 of FIG. 10, and the previous description of the latter may apply to the former.
At step 1104, a query may be run to determine which communications receiver from an array of communications receivers captured the audio signal. In some embodiments, the user may be surrounded by one or more communications receivers, such as microphones, which may be positioned in any suitable orientation. For example, user 502 of FIG. 5 may be positioned a first distance away from receiver 506, as well as a second distance away from receivers 508. As another example, various microphones may be spaced equally around a user in a circle. Persons of ordinary skill in the art will recognize that any microphone array may be used to obtain audio signals, that the array type may also depend on the type of microphone used, and that the aforementioned setups are merely exemplary.
In some embodiments, certain receivers capturing communications within the array may signify that a user is participating in an event, whereas others may signify that the user is not participating in the event. For example, if the user is asking a question to an event presenter, the user will most likely be directing his/her communications towards their user device. However, if the user is not participating in the event, then the user will most likely not be directing his/her communications towards their user device. For example, if the user is communicating with a friend, or interacting with a video game, then their communications will be directed elsewhere. These communications may be detected by one or more other communications receivers placed proximate to the user, but not in the line of sight of the user device.
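As a non-limiting illustration, one plausible way to decide which receiver in the array captured the signal is to compare the energy of the same time window across receivers and take the loudest, in the spirit of the claimed microphone "having a largest contribution to the change". The frame data and receiver identifiers below are hypothetical.

```python
import math

def rms(frame):
    """Root-mean-square energy of a window of audio samples."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def capturing_receiver(frames_by_receiver):
    """frames_by_receiver maps receiver id -> sample window; pick loudest."""
    return max(frames_by_receiver,
               key=lambda rid: rms(frames_by_receiver[rid]))

frames = {
    "506":  [0.40, -0.38, 0.41, -0.39],  # front receiver: strong signal
    "508a": [0.05, -0.04, 0.06, -0.05],  # side receivers: mostly ambience
    "508b": [0.04, -0.05, 0.05, -0.04],
}
print(capturing_receiver(frames))  # -> "506", user facing the device
```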
If, at step 1104, it is determined that a first communications receiver, such as receiver 506, captured the audio signal, then process 1100 may proceed to step 1106. At step 1106, event data corresponding to the user participating in an event may be generated. In some embodiments, if the user communicates with their user device while accessing an online event, the communication detected by the device's microphone may correspond to event communications. For example, the user may be answering a teacher's question within an online event and may answer the question using their user device. In this particular scenario, the communications receiver located on the user device may detect the audio communications from the user and generate event data indicating the user is participating in the event.
If, however, at step 1104, it is determined that a second communications receiver or receivers, such as receivers 508, captured the audio signal, then process 1100 may proceed to step 1108. At step 1108, event data corresponding to the user not participating in the event may be generated. In some embodiments, if audio communications are received at microphones adjacent to the user but not located on or proximate to the user device, then the user most likely is not paying attention to or participating in the event. For example, communications directed at receivers 508 may correspond to the user not interacting with an event on their user device. Thus, in this scenario, the user is most likely not participating in the event, and therefore event data corresponding to the user not participating may be generated.
The created event data may correspond to any indicator or flag signifying the user's participation level. For example, the generated event data may be a Boolean value reading “1” or “TRUE” if the event data corresponds to the user participating in the event. As another example, the generated event data may read “0” or “FALSE” if the event data corresponds to the user not participating in the event. Persons of ordinary skill in the art will recognize that any event data may be generated, that the event data may be depicted in any suitable format, and that the aforementioned scenario is merely exemplary.
After event data has been generated at either step 1106 or step 1108, process 1100 may proceed to step 1110. At step 1110, a grade may be transmitted to the user. In some embodiments, the grade may be based on the user's level of participation within the event. For example, if event data corresponding to the user participating in the event is created at step 1106, then the user may receive a higher participation grade than if the event data generated corresponded to the user not participating in the event. As an illustrative example, USER 4 of FIG. 4 may receive participation grade 408 b, “A”, which may indicate that the user has participated in the event, whereas USER 3 may receive participation grade 406 b, “D”, indicating that this user has not participated in the event.
In some embodiments, the grade for the event may be transmitted to the user at the end of the event. This may allow the system to determine an overall level of attentiveness for the user throughout the event's duration. In some embodiments, the grade may be transmitted to a user's parent or guardian instead of, or in addition to, the user. This may allow, for example, a teacher to communicate with a student's parent directly, instead of with the student, who may not convey the message to their parent.
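One possible end-of-event grading rule is sketched below: the fraction of logged changes classified as event signals is mapped to a letter grade. The cut-offs are assumptions for illustration and are not taken from the disclosure.

```python
def participation_grade(classifications):
    """Map a list of 'event'/'non-event' labels to a letter grade."""
    if not classifications:
        return "N/A"
    ratio = classifications.count("event") / len(classifications)
    for cutoff, grade in [(0.9, "A"), (0.75, "B"), (0.6, "C"), (0.4, "D")]:
        if ratio >= cutoff:
            return grade
    return "F"

print(participation_grade(["event"] * 9 + ["non-event"]))        # -> "A"
print(participation_grade(["event", "non-event", "non-event"]))  # -> "F"
```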
FIG. 12 is an illustrative flowchart of a process for transmitting a level of attentiveness to an event administrator in accordance with various embodiments. Process 1200 may begin at step 1202. At step 1202, communications between online participants of an event may be monitored. For example, an online participant of an event may communicate with one or more additional online participants of the event within a group or subgroup. In some embodiments, the participants of the group may communicate with one another during the online event. For example, a first participant may ask a second participant a question about material presented within the event. The group may be formed between these two participants prior to the question being asked or in response to the question being asked.
In some embodiments, the communications between the online participants may be monitored by a system facilitating the communications. For example, a server may be able to facilitate communications between user devices accessing an online event hosted on the server. The server may be capable of monitoring each user device to detect communications transmitted and received in order to determine which user devices are in communication with each other.
At step 1204, a level of attentiveness for each online participant may be determined. The level of attentiveness may be based on any number of factors including, but not limited to, the duration of the communications between the online participants, the content involved in the communications, the participants involved in the communications, and/or any other factor, or any combination thereof. In some embodiments, the server may determine the level of attentiveness based on when the communications between the online participants began. For example, if at a certain point within a presentation the presenter began to describe a very complex topic, communications between participants may occur to help clarify the topic amongst the participants. Therefore, the monitored communications may indicate that the online participants are paying attention to the material but may also require additional explanation. This may help highlight to the presenter a need to clarify certain topics. In some embodiments, the presenter may transmit a video to each participant within the online event. During the video, two or more participants may begin communicating within a group. Based on these communications, the system may determine that the two participants are not participating within the event because they are no longer watching the video and are instead communicating with one another.
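As one merely exemplary scoring scheme, attentiveness might be taken as the fraction of the event a participant did not spend in side conversations, as sketched below. The linear weighting is an assumption; the disclosure contemplates many other factors and combinations.

```python
def attentiveness(event_length_s, talk_spans):
    """talk_spans: (start_s, end_s) intervals of group communications."""
    talking_s = sum(end - start for start, end in talk_spans)
    return max(0.0, 1.0 - talking_s / event_length_s)  # 1.0 = fully attentive

# A participant who chatted for 6 minutes of a 60-minute event scores 0.9.
print(attentiveness(3600.0, [(600.0, 960.0)]))  # -> 0.9
```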
At step 1206, the determined level of attentiveness may be transmitted to an event administrator. The event administrator may, in some embodiments, correspond to a teacher or presenter of the online event. For example, if the online event corresponds to an online class, then the event administrator may be a teacher. In some embodiments, the event administrator may receive the determined level of attentiveness for each online participant and based thereon, assign a participation grade to the participants. For example, if it is determined that the online participants have not been paying attention in the online class, these participants may receive a low participation grade. In some embodiments, the event administrator may store the level of attentiveness for each user within an event database. This may allow the event administrator to aggregate or analyze the level of attentiveness for various users to determine the users' grades, and/or ways or areas to augment their presentation to make it more engaging to the participants.
FIG. 13 is an illustrative flowchart of a process for modifying communications received by a user within a group in accordance with various embodiments. Process 1300 may begin at step 1302. At step 1302, a determination may be made that a first user device has entered a group. In some embodiments, a first user accessing an online event may enter into a group of online participants within the online event. For example, a user may want to join a group including one or more participants accessing an online event. The user may select an option to join the group, request an invitation to enter the group, knock on the room of the group, or be brought in by another group member, or the user may join using any other suitable mechanism, or any combination thereof. In some embodiments, the user may be capable of entering the group without accessing the event, but may access the event after joining the group.
At step 1304, communications may be transmitted from the first user device to the group. For example, after a user enters a group within an online event, the user may begin to send communications to additional members of the group. In some embodiments, the user may be capable of transmitting audio, video, textual communications, documents, or any other form of communication, or any combination thereof, to the other group members. For example, the user may be able to transmit audio signals to additional group members.
At step 1306, communications received by the user may be modified in response to the user joining and transmitting communications to the group. In some embodiments, prior to joining the group, the user may have been capable of receiving communications solely from the event. For example, the mixture of audio received by the user prior to joining the group may have consisted of only the event's audio. After the user joins the group, the audio mixture received by the user may be split so that some audio is from the event and some audio is from the group. In this way, the user may perceive themselves to be in a real group within a physical event, where they would hear communications from group members while the event occurs in the background. In some embodiments, the system may automatically mix the group's audio and the event's audio to suitable levels such that the user appropriately hears both. Persons of ordinary skill in the art will recognize that any mixture of the group's communications and the event's communications may occur. For example, the mixture may be half group/half event, one third event/two thirds group, or any other partitioning, or any combination thereof. In some embodiments, the user may be able to “pause” the communications of the event as they enter the group. This may allow the user to receive the group's communications only, but still be able to receive the event's communications at a later point in time by un-pausing the communications from the event.
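A minimal sketch of such mixing appears below: a per-sample weighted sum of the event stream and the group stream, with a pause flag that mutes the event feed while the group feed continues. The half/half default weighting is one of the partitionings mentioned above; the function and parameter names are assumptions.

```python
def mix(event_frame, group_frame, event_weight=0.5, event_paused=False):
    """Blend one frame of event audio with one frame of group audio."""
    w_event = 0.0 if event_paused else event_weight
    w_group = 1.0 - w_event
    return [w_event * e + w_group * g
            for e, g in zip(event_frame, group_frame)]

both = mix([0.2, 0.1, -0.1], [0.05, -0.02, 0.0])       # event + group
group_only = mix([0.2, 0.1, -0.1], [0.05, -0.02, 0.0],
                 event_paused=True)                     # event "paused"
```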
The various embodiments described herein may be implemented using a variety of means including, but not limited to, software, hardware, and/or a combination of software and hardware. The embodiments may also be embodied as computer readable code on a computer readable medium. The computer readable medium may be any data storage device that is capable of storing data that can be read by a computer system. Various types of computer readable media include, but are not limited to, read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, or optical data storage devices, or any other type of medium, or any combination thereof. The computer readable medium may be distributed over network-coupled computer systems. Furthermore, the above described embodiments are presented for purposes of illustration and are not to be construed as limitations.

Claims (8)

What is claimed is:
1. A method for assessing participant attentiveness within an interactive online event, the method comprising:
receiving a first plurality of audio signals from a user device, the user device comprising a plurality of microphones arranged in a plurality of directions around the user device, wherein the microphones are located a certain distance away from the user device;
determining, based on the first plurality of audio signals, a steady state sound level of communication, wherein determining a steady state sound level comprises modeling the first plurality of audio signals to determine a maximum likelihood function representative of the steady state sound level;
receiving a second plurality of audio signals from the user device;
determining, based on the second plurality of audio signals, a change from the steady state sound level of the second plurality of audio signals, wherein determining the change from the steady state sound level comprises determining that the change exceeds the steady state sound level by at least two standard deviations;
determining, based on a microphone of the plurality of microphones having a largest contribution to the change, a level of attentiveness associated with the user device;
storing the change in an event participation log; and
providing a participation report corresponding to a determined level of participation and attentiveness of the user during the online event, wherein participation grades may be derived from the report and provided to event attendees.
2. The method of claim 1, wherein the event participation log is accessible by an administrator of the event.
3. The method of claim 1, wherein the event participation log is useable to grade the online participant.
4. The method of claim 1, wherein storing further comprises:
recording a duration of the change from the steady state level.
5. A non-transitory computer readable medium containing instructions that, when executed by at least one processor of a computing device, cause the computing device to:
receive a first plurality of audio signals from a user device, the user device comprising a plurality of microphones arranged in a plurality of directions around the user device, wherein the microphones are located a certain distance away from the user device;
determine, based on the first plurality of audio signals, a steady state sound level of communication, wherein determining a steady state sound level comprises modeling the first plurality of audio signals to determine a maximum likelihood function representative of the steady state sound level;
receive a second plurality of audio signals from the user device;
determine, based on the second plurality of audio signals, a change from the steady state sound level of the second plurality of audio signals, wherein determining the change from the steady state sound level comprises determining that the change exceeds the steady state sound level by at least two standard deviations;
store the change in an event participation log; and
provide a participation report corresponding to a determined level of participation and attentiveness of the user during the online event, wherein participation grades may be derived from the report and provided to event attendees.
6. The non-transitory computer readable medium of claim 5, wherein the event participation log is accessible by an administrator of the event.
7. The non-transitory computer readable medium of claim 5, wherein the event participation log is useable to grade the online participant.
8. The non-transitory computer readable medium of claim 5, wherein storing further comprises:
recording a duration of the change from the steady state level.
US14/272,590 2014-05-08 2014-05-08 Systems and methods for monitoring participant attentiveness within events and group assortments Active 2035-05-23 US9733333B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/272,590 US9733333B2 (en) 2014-05-08 2014-05-08 Systems and methods for monitoring participant attentiveness within events and group assortments

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/272,590 US9733333B2 (en) 2014-05-08 2014-05-08 Systems and methods for monitoring participant attentiveness within events and group assortments

Publications (2)

Publication Number Publication Date
US20150326458A1 US20150326458A1 (en) 2015-11-12
US9733333B2 true US9733333B2 (en) 2017-08-15

Family

ID=54368802

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/272,590 Active 2035-05-23 US9733333B2 (en) 2014-05-08 2014-05-08 Systems and methods for monitoring participant attentiveness within events and group assortments

Country Status (1)

Country Link
US (1) US9733333B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160343268A1 (en) * 2013-09-11 2016-11-24 Lincoln Global, Inc. Learning management system for a real-time simulated virtual reality welding training environment

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9712579B2 (en) 2009-04-01 2017-07-18 Shindig. Inc. Systems and methods for creating and publishing customizable images from within online events
US20180278462A1 (en) * 2016-08-24 2018-09-27 Bernt Erik Bjontegard Multi-level control, variable access, multi-user contextually intelligent communication platform
US9711181B2 (en) 2014-07-25 2017-07-18 Shindig. Inc. Systems and methods for creating, editing and publishing recorded videos
US9734410B2 (en) 2015-01-23 2017-08-15 Shindig, Inc. Systems and methods for analyzing facial expressions within an online classroom to gauge participant attentiveness
US10600420B2 (en) 2017-05-15 2020-03-24 Microsoft Technology Licensing, Llc Associating a speaker with reactions in a conference session
US10949787B2 (en) 2018-07-31 2021-03-16 International Business Machines Corporation Automated participation evaluator

Citations (127)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0721726A1 (en) 1993-10-01 1996-07-17 Vicor, Inc. Multimedia collaboration system
US6044146A (en) 1998-02-17 2000-03-28 Genesys Telecommunications Laboratories, Inc. Method and apparatus for call distribution and override with priority
US6241612B1 (en) 1998-11-09 2001-06-05 Cirrus Logic, Inc. Voice communication during a multi-player game
US6259471B1 (en) 1996-03-14 2001-07-10 Alcatel Apparatus and service for transmitting video data
US20020094831A1 (en) 2000-03-03 2002-07-18 Mark Maggenti Communication device for providing dormant mode for a group communication network
US20020102999A1 (en) 2000-03-03 2002-08-01 Qualcomm, Inc. Method and apparatus for enabling group communication services in an existing communication system
US20020143877A1 (en) 2001-02-06 2002-10-03 Hackbarth Randy L. Apparatus and method for use in a data/conference call system to provide collaboration services
US6515681B1 (en) 1999-05-11 2003-02-04 Prophet Financial Systems, Inc. User interface for interacting with online message board
US6559863B1 (en) 2000-02-11 2003-05-06 International Business Machines Corporation System and methodology for video conferencing and internet chatting in a cocktail party style
US6654346B1 (en) 1999-07-19 2003-11-25 Dunti Corporation Communication network across which packets of data are transmitted according to a priority scheme
US20040022202A1 (en) 2002-08-05 2004-02-05 Chih-Lung Yang Method and apparatus for continuously receiving images from a plurality of video channels and for alternately continuously transmitting to each of a plurality of participants in a video conference individual images containing information concerning each of said video channels
US6697614B2 (en) 2001-02-27 2004-02-24 Motorola, Inc. Method and apparatus for distributed arbitration of a right to speak among a plurality of devices participating in a real-time voice conference
US20050032539A1 (en) 2003-08-06 2005-02-10 Noel Paul A. Priority queuing of callers
US20050132288A1 (en) 2003-12-12 2005-06-16 Kirn Kevin N. System and method for realtime messaging having image sharing feature
US20050262542A1 (en) 1998-08-26 2005-11-24 United Video Properties, Inc. Television chat system
US20060002315A1 (en) 2004-04-15 2006-01-05 Citrix Systems, Inc. Selectively sharing screen data
US7007235B1 (en) 1999-04-02 2006-02-28 Massachusetts Institute Of Technology Collaborative agent interaction control and synchronization system
US20060055771A1 (en) 2004-08-24 2006-03-16 Kies Jonathan K System and method for optimizing audio and video data transmission in a wireless system
US20060063555A1 (en) 2004-09-23 2006-03-23 Matthew Robbins Device, system and method of pre-defined power regulation scheme
US20060114314A1 (en) 2004-11-19 2006-06-01 Sony Ericsson Mobile Communications Ab Picture/video telephony for a push-to-talk wireless communications device
US20060140138A1 (en) 2004-12-28 2006-06-29 Hill Thomas C Method for simlutaneous communications management
US20070186171A1 (en) 2006-02-09 2007-08-09 Microsoft Corporation Virtual shadow awareness for multi-user editors
US20070234220A1 (en) 2006-03-29 2007-10-04 Autodesk Inc. Large display attention focus system
US20070265074A1 (en) 2006-05-09 2007-11-15 Nintendo Co., Ltd., Game program and game apparatus
US20080002668A1 (en) 2006-06-30 2008-01-03 Sony Ericsson Mobile Communications Ab Portable communication device and method for simultaneously
US20080037763A1 (en) 2006-07-26 2008-02-14 Shmuel Shaffer Queuing and routing telephone calls
US20080136898A1 (en) 2006-12-12 2008-06-12 Aviv Eisenberg Method for creating a videoconferencing displayed image
US20080137559A1 (en) 2006-12-07 2008-06-12 Kabushiki Kaisha Toshiba Conference system
US20080146339A1 (en) 2006-12-14 2008-06-19 Arlen Lynn Olsen Massive Multiplayer Online Sports Teams and Events
US20080181260A1 (en) 2007-01-31 2008-07-31 Stanislav Vonog Method and system for precise synchronization of audio and video streams during a distributed communication session with multiple participants
GB2446529A (en) 2004-06-25 2008-08-13 Sony Comp Entertainment Europe Audio communication between networked games terminals
US20080274810A1 (en) 2005-02-25 2008-11-06 Sawako-Eeva Hayashi Controlling Communications Between Players of a Multi-Player Game
US20080320082A1 (en) * 2007-06-19 2008-12-25 Matthew Kuhlke Reporting participant attention level to presenter during a web-based rich-media conference
US7478129B1 (en) 2000-04-18 2009-01-13 Helen Jeanne Chemtob Method and apparatus for providing group interaction via communications networks
US7487211B2 (en) 2002-07-01 2009-02-03 Microsoft Corporation Interactive, computer network-based video conferencing system and process
US20090033737A1 (en) 2007-08-02 2009-02-05 Stuart Goose Method and System for Video Conferencing in a Virtual Environment
US20090040289A1 (en) 2007-08-08 2009-02-12 Qnx Software Systems (Wavemakers), Inc. Video phone system
US7495687B2 (en) 2005-09-07 2009-02-24 F4W, Inc. System and methods for video surveillance in networks
US7515560B2 (en) 2005-09-07 2009-04-07 F4W, Inc. Apparatus and method for dynamically updating and communicating within flexible networks
WO2009077936A2 (en) 2007-12-17 2009-06-25 Koninklijke Philips Electronics N.V. Method of controlling communications between at least two users of a communication system
US20090204906A1 (en) 2008-02-11 2009-08-13 Dialogic Corporation System and method for performing video collaboration
US20090209339A1 (en) 2008-02-14 2009-08-20 Aruze Gaming America, Inc. Gaming Apparatus Capable of Conversation with Player, Control Method Thereof, Gaming System Capable of Conversation with Player, and Control Method Thereof
US20090210789A1 (en) 2008-02-14 2009-08-20 Microsoft Corporation Techniques to generate a visual composition for a multimedia conference event
US7593032B2 (en) 2005-07-20 2009-09-22 Vidyo, Inc. System and method for a conference server architecture for low delay and distributed conferencing applications
US20090249244A1 (en) 2000-10-10 2009-10-01 Addnclick, Inc. Dynamic information management system and method for content delivery and sharing in content-, metadata- & viewer-based, live social networking among users concurrently engaged in the same and/or similar content
US20090254843A1 (en) 2008-04-05 2009-10-08 Social Communications Company Shared virtual area communication environment based apparatus and methods
US20090288007A1 (en) 2008-04-05 2009-11-19 Social Communications Company Spatial interfaces for realtime networked communications
US20100005411A1 (en) 2008-07-02 2010-01-07 Icharts, Inc. Creation, sharing and embedding of interactive charts
US20100026780A1 (en) 2008-07-31 2010-02-04 Nokia Corporation Electronic device directional audio capture
US20100030578A1 (en) 2008-03-21 2010-02-04 Siddique M A Sami System and method for collaborative shopping, business and entertainment
US20100026802A1 (en) 2000-10-24 2010-02-04 Object Video, Inc. Video analytic rule detection system and method
US20100122184A1 (en) 2008-09-19 2010-05-13 Musigy Usa, Inc. Method and System for Distributed Computing Interface
US20100131868A1 (en) 2008-11-26 2010-05-27 Cisco Technology, Inc. Limitedly sharing application windows in application sharing sessions
US20100146085A1 (en) 2008-12-05 2010-06-10 Social Communications Company Realtime kernel
US7787447B1 (en) 2000-12-28 2010-08-31 Nortel Networks Limited Voice optimization in a network having voice over the internet protocol communication devices
US20100316232A1 (en) 2009-06-16 2010-12-16 Microsoft Corporation Spatial Audio for Audio Conferencing
WO2011025989A1 (en) 2009-08-27 2011-03-03 Musigy Usa, Inc. A system and method for pervasive computing
US20110060992A1 (en) 2009-09-07 2011-03-10 Jevons Oliver Matthew Video-collaboration apparatus and method
US20110072366A1 (en) 2009-09-18 2011-03-24 Barry Spencer Systems and methods for multimedia multipoint real-time conferencing
US20110078532A1 (en) 2009-09-29 2011-03-31 Musigy Usa, Inc. Method and system for low-latency transfer protocol
US20110185286A1 (en) 2007-10-24 2011-07-28 Social Communications Company Web browser interface for spatial communication environments
US20110209104A1 (en) 2010-02-25 2011-08-25 Microsoft Corporation Multi-screen synchronous slide gesture
US20110270922A1 (en) 2010-04-30 2011-11-03 American Teleconferencing Services Ltd. Managing participants in a conference via a conference user interface
US20110279634A1 (en) 2010-05-12 2011-11-17 Alagu Periyannan Systems and methods for real-time multimedia communications across multiple standards and proprietary devices
US20120002001A1 (en) 2010-07-01 2012-01-05 Cisco Technology Conference participant visualization
WO2012021174A2 (en) 2010-08-12 2012-02-16 Net Power And Light Inc. EXPERIENCE OR "SENTIO" CODECS, AND METHODS AND SYSTEMS FOR IMPROVING QoE AND ENCODING BASED ON QoE EXPERIENCES
WO2012021901A2 (en) 2010-08-13 2012-02-16 Net Power And Light Inc. Methods and systems for virtual experiences
WO2012021173A2 (en) 2010-08-12 2012-02-16 Net Power And Light Inc. System architecture and methods for experiential computing
US20120038550A1 (en) 2010-08-13 2012-02-16 Net Power And Light, Inc. System architecture and methods for distributed multi-sensor gesture processing
US20120039382A1 (en) 2010-08-12 2012-02-16 Net Power And Light, Inc. Experience or "sentio" codecs, and methods and systems for improving QoE and encoding based on QoE experiences
US20120060101A1 (en) 2010-08-30 2012-03-08 Net Power And Light, Inc. Method and system for an interactive event experience
US8144187B2 (en) 2008-03-14 2012-03-27 Microsoft Corporation Multiple video stream capability negotiation
US20120084682A1 (en) 2010-10-01 2012-04-05 Imerj LLC Maintaining focus upon swapping of images
US20120098919A1 (en) 2010-10-22 2012-04-26 Aaron Tang Video integration
WO2012054895A2 (en) 2010-10-21 2012-04-26 Net Power And Light Inc. System architecture and method for composing and directing participant experiences
WO2012054089A2 (en) 2010-10-21 2012-04-26 Net Power And Light Inc. Distributed processing pipeline and distributed layered application processing
US20120110163A1 (en) 2010-11-02 2012-05-03 Net Power And Light, Inc. Method and system for data packet queue recovery
US20120110162A1 (en) 2010-11-02 2012-05-03 Net Power And Light, Inc. Method and system for resource-aware dynamic bandwidth control
US20120182384A1 (en) 2011-01-17 2012-07-19 Anderson Eric C System and method for interactive video conferencing
US8230355B1 (en) 2006-03-22 2012-07-24 Adobe Systems Incorporated Visual representation of a characteristic of an object in a space
US20120192087A1 (en) 2011-01-26 2012-07-26 Net Power And Light, Inc. Method and system for a virtual playdate
US20120198334A1 (en) 2008-09-19 2012-08-02 Net Power And Light, Inc. Methods and systems for image sharing in a collaborative work space
WO2012135384A2 (en) 2011-03-28 2012-10-04 Net Power And Light, Inc. Information mixer and system control for attention management
US20120280905A1 (en) 2011-05-05 2012-11-08 Net Power And Light, Inc. Identifying gestures using multiple sensors
US20120331089A1 (en) 2011-06-21 2012-12-27 Net Power And Light, Inc. Just-in-time transcoding of application content
WO2012177641A2 (en) 2011-06-21 2012-12-27 Net Power And Light Inc. Method and system for providing gathering experience
US20130014027A1 (en) 2011-07-08 2013-01-10 Net Power And Light, Inc. Method and system for representing audiences in ensemble experiences
US20130014028A1 (en) 2011-07-09 2013-01-10 Net Power And Light, Inc. Method and system for drawing
US20130019184A1 (en) 2011-07-11 2013-01-17 Net Power And Light, Inc. Methods and systems for virtual experiences
US20130018960A1 (en) 2011-07-14 2013-01-17 Surfari Inc. Group Interaction around Common Online Content
US20130024785A1 (en) 2009-01-15 2013-01-24 Social Communications Company Communicating between a virtual area and a physical space
US20130021431A1 (en) 2011-03-28 2013-01-24 Net Power And Light, Inc. Information mixer and system control for attention management
US8390670B1 (en) 2008-11-24 2013-03-05 Shindig, Inc. Multiparty communications systems and methods that optimize communications based on mode and available bandwidth
US20130063542A1 (en) 2011-09-14 2013-03-14 Cisco Technology, Inc. System and method for configuring video data
US20130073978A1 (en) 2011-09-16 2013-03-21 Social Communications Company Capabilities based management of virtual areas
WO2013043207A1 (en) 2010-04-30 2013-03-28 American Teleconferencing Services, Ltd. Event management/production for an online event
US20130088518A1 (en) 2011-10-10 2013-04-11 Net Power And Light, Inc. Methods and systems for providing a graphical user interface
US20130097512A1 (en) 2011-09-08 2013-04-18 Samsung Electronics Co., Ltd. Apparatus and content playback method thereof
US20130123019A1 (en) 2002-12-10 2013-05-16 David R. Sullivan System and method for managing audio and video channels for video game players and spectators
US20130173531A1 (en) 2010-05-24 2013-07-04 Intersect Ptp, Inc. Systems and methods for collaborative storytelling in a virtual space
US20130169742A1 (en) 2011-12-28 2013-07-04 Google Inc. Video conferencing with unlimited dynamic active participants
US20130201279A1 (en) 2005-07-20 2013-08-08 Mehmet Reha Civanlar System and Method for Scalable and Low-Delay Videoconferencing Using Scalable Video Coding
US20130216206A1 (en) 2010-03-08 2013-08-22 Vumanity Media, Inc. Generation of Composited Video Programming
US20130239063A1 (en) 2012-03-06 2013-09-12 Apple Inc. Selection of multiple images
US20130254287A1 (en) 2011-11-05 2013-09-26 Abhishek Biswas Online Social Interaction, Education, and Health Care by Analysing Affect and Cognitive Features
WO2013149079A1 (en) 2012-03-28 2013-10-03 Net Power And Light, Inc. Information mixer and system control for attention management
US20130289983A1 (en) 2012-04-26 2013-10-31 Hyorim Park Electronic device and method of controlling the same
US20130321648A1 (en) 2012-06-01 2013-12-05 Nintendo Co., Ltd. Computer-readable medium, information processing apparatus, information processing system and information processing method
US20140004496A1 (en) 2012-06-29 2014-01-02 Fujitsu Limited Dynamic evolving virtual classroom
US20140019882A1 (en) 2010-12-27 2014-01-16 Google Inc. Social network collaboration space
US8635293B2 (en) 2011-06-13 2014-01-21 Microsoft Corporation Asynchronous video threads
US20140040784A1 (en) 2007-06-20 2014-02-06 Google Inc. Multi-user chat
US8647206B1 (en) 2009-01-15 2014-02-11 Shindig, Inc. Systems and methods for interfacing video games and user communications
US8749610B1 (en) 2011-11-29 2014-06-10 Google Inc. Managing nodes of a synchronous communication conference
US20140176665A1 (en) * 2008-11-24 2014-06-26 Shindig, Inc. Systems and methods for facilitating multi-user events
US20140184723A1 (en) 2012-12-31 2014-07-03 T-Mobile Usa, Inc. Display and Service Adjustments to Enable Multi-Tasking During a Video Call
US8779265B1 (en) 2009-04-24 2014-07-15 Shindig, Inc. Networks of portable electronic devices that collectively generate sound
US20140229888A1 (en) 2013-02-14 2014-08-14 Eulina KO Mobile terminal and method of controlling the mobile terminal
US20140325428A1 (en) 2013-04-29 2014-10-30 Lg Electronics Inc. Mobile terminal and method of controlling the mobile terminal
US8929516B2 (en) 2002-11-22 2015-01-06 Intellisist, Inc. System and method for transmitting voice messages to a discussion group
US20150025888A1 (en) * 2013-07-22 2015-01-22 Nuance Communications, Inc. Speaker recognition and voice tagging for improved service
US20150046800A1 (en) 2009-08-26 2015-02-12 Eustace Prince Isidore Advanced Editing and Interfacing in User Applications
US20150049885A1 (en) * 2013-08-19 2015-02-19 Avaya Inc. Pairwise audio capture device selection
US20150052453A1 (en) 2013-08-13 2015-02-19 Bank Of America Corporation Virtual Position Display and Indicators
US20150106750A1 (en) 2012-07-12 2015-04-16 Sony Corporation Display control apparatus, display control method, program, and communication system
US20150365305A1 (en) * 2009-04-07 2015-12-17 Verisign, Inc. Domain name system traffic analysis
US9241131B2 (en) 2012-06-08 2016-01-19 Samsung Electronics Co., Ltd. Multiple channel communication using multiple cameras

Patent Citations (171)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0721726A1 (en) 1993-10-01 1996-07-17 Vicor, Inc. Multimedia collaboration system
US6259471B1 (en) 1996-03-14 2001-07-10 Alcatel Apparatus and service for transmitting video data
US6044146A (en) 1998-02-17 2000-03-28 Genesys Telecommunications Laboratories, Inc. Method and apparatus for call distribution and override with priority
US20050262542A1 (en) 1998-08-26 2005-11-24 United Video Properties, Inc. Television chat system
US6241612B1 (en) 1998-11-09 2001-06-05 Cirrus Logic, Inc. Voice communication during a multi-player game
US7007235B1 (en) 1999-04-02 2006-02-28 Massachusetts Institute Of Technology Collaborative agent interaction control and synchronization system
US6515681B1 (en) 1999-05-11 2003-02-04 Prophet Financial Systems, Inc. User interface for interacting with online message board
US6654346B1 (en) 1999-07-19 2003-11-25 Dunti Corporation Communication network across which packets of data are transmitted according to a priority scheme
US6559863B1 (en) 2000-02-11 2003-05-06 International Business Machines Corporation System and methodology for video conferencing and internet chatting in a cocktail party style
US20020094831A1 (en) 2000-03-03 2002-07-18 Mark Maggenti Communication device for providing dormant mode for a group communication network
US20020102999A1 (en) 2000-03-03 2002-08-01 Qualcomm, Inc. Method and apparatus for enabling group communication services in an existing communication system
US7478129B1 (en) 2000-04-18 2009-01-13 Helen Jeanne Chemtob Method and apparatus for providing group interaction via communications networks
US20090249244A1 (en) 2000-10-10 2009-10-01 Addnclick, Inc. Dynamic information management system and method for content delivery and sharing in content-, metadata- & viewer-based, live social networking among users concurrently engaged in the same and/or similar content
US20100026802A1 (en) 2000-10-24 2010-02-04 Object Video, Inc. Video analytic rule detection system and method
US7787447B1 (en) 2000-12-28 2010-08-31 Nortel Networks Limited Voice optimization in a network having voice over the internet protocol communication devices
US20020143877A1 (en) 2001-02-06 2002-10-03 Hackbarth Randy L. Apparatus and method for use in a data/conference call system to provide collaboration services
US6697614B2 (en) 2001-02-27 2004-02-24 Motorola, Inc. Method and apparatus for distributed arbitration of a right to speak among a plurality of devices participating in a real-time voice conference
US7487211B2 (en) 2002-07-01 2009-02-03 Microsoft Corporation Interactive, computer network-based video conferencing system and process
US20040022202A1 (en) 2002-08-05 2004-02-05 Chih-Lung Yang Method and apparatus for continuously receiving images from a plurality of video channels and for alternately continuously transmitting to each of a plurality of participants in a video conference individual images containing information concerning each of said video channels
US8929516B2 (en) 2002-11-22 2015-01-06 Intellisist, Inc. System and method for transmitting voice messages to a discussion group
US20130123019A1 (en) 2002-12-10 2013-05-16 David R. Sullivan System and method for managing audio and video channels for video game players and spectators
US20050032539A1 (en) 2003-08-06 2005-02-10 Noel Paul A. Priority queuing of callers
US20050132288A1 (en) 2003-12-12 2005-06-16 Kirn Kevin N. System and method for realtime messaging having image sharing feature
US20060002315A1 (en) 2004-04-15 2006-01-05 Citrix Systems, Inc. Selectively sharing screen data
GB2446529A (en) 2004-06-25 2008-08-13 Sony Comp Entertainment Europe Audio communication between networked games terminals
US20060055771A1 (en) 2004-08-24 2006-03-16 Kies Jonathan K System and method for optimizing audio and video data transmission in a wireless system
US20060063555A1 (en) 2004-09-23 2006-03-23 Matthew Robbins Device, system and method of pre-defined power regulation scheme
US20060114314A1 (en) 2004-11-19 2006-06-01 Sony Ericsson Mobile Communications Ab Picture/video telephony for a push-to-talk wireless communications device
US20060140138A1 (en) 2004-12-28 2006-06-29 Hill Thomas C Method for simlutaneous communications management
US20080274810A1 (en) 2005-02-25 2008-11-06 Sawako-Eeva Hayashi Controlling Communications Between Players of a Multi-Player Game
US7593032B2 (en) 2005-07-20 2009-09-22 Vidyo, Inc. System and method for a conference server architecture for low delay and distributed conferencing applications
US20130201279A1 (en) 2005-07-20 2013-08-08 Mehmet Reha Civanlar System and Method for Scalable and Low-Delay Videoconferencing Using Scalable Video Coding
US7495687B2 (en) 2005-09-07 2009-02-24 F4W, Inc. System and methods for video surveillance in networks
US7515560B2 (en) 2005-09-07 2009-04-07 F4W, Inc. Apparatus and method for dynamically updating and communicating within flexible networks
US20070186171A1 (en) 2006-02-09 2007-08-09 Microsoft Corporation Virtual shadow awareness for multi-user editors
US8230355B1 (en) 2006-03-22 2012-07-24 Adobe Systems Incorporated Visual representation of a characteristic of an object in a space
US20070234220A1 (en) 2006-03-29 2007-10-04 Autodesk Inc. Large display attention focus system
US20070265074A1 (en) 2006-05-09 2007-11-15 Nintendo Co., Ltd., Game program and game apparatus
US20080002668A1 (en) 2006-06-30 2008-01-03 Sony Ericsson Mobile Communications Ab Portable communication device and method for simultaneously
US20080037763A1 (en) 2006-07-26 2008-02-14 Shmuel Shaffer Queuing and routing telephone calls
US20080137559A1 (en) 2006-12-07 2008-06-12 Kabushiki Kaisha Toshiba Conference system
US20080136898A1 (en) 2006-12-12 2008-06-12 Aviv Eisenberg Method for creating a videoconferencing displayed image
US20080146339A1 (en) 2006-12-14 2008-06-19 Arlen Lynn Olsen Massive Multiplayer Online Sports Teams and Events
US20110258474A1 (en) 2007-01-31 2011-10-20 Net Power And Light, Inc. Method and system for precise synchronization of audio and video streams during a distributed communication session with multiple participants
US20080181260A1 (en) 2007-01-31 2008-07-31 Stanislav Vonog Method and system for precise synchronization of audio and video streams during a distributed communication session with multiple participants
US8225127B2 (en) 2007-01-31 2012-07-17 Net Power And Light, Inc. Method and system for precise synchronization of audio and video streams during a distributed communication session with multiple participants
US20120254649A1 (en) 2007-01-31 2012-10-04 Net Power And Light, Inc. Method and system for precise synchronization of audio and video streams during a distributed communication session with multiple participants
US20080320082A1 (en) * 2007-06-19 2008-12-25 Matthew Kuhlke Reporting participant attention level to presenter during a web-based rich-media conference
US20140040784A1 (en) 2007-06-20 2014-02-06 Google Inc. Multi-user chat
US20090033737A1 (en) 2007-08-02 2009-02-05 Stuart Goose Method and System for Video Conferencing in a Virtual Environment
US20090040289A1 (en) 2007-08-08 2009-02-12 Qnx Software Systems (Wavemakers), Inc. Video phone system
US20110185286A1 (en) 2007-10-24 2011-07-28 Social Communications Company Web browser interface for spatial communication environments
WO2009077936A2 (en) 2007-12-17 2009-06-25 Koninklijke Philips Electronics N.V. Method of controlling communications between at least two users of a communication system
US20090204906A1 (en) 2008-02-11 2009-08-13 Dialogic Corporation System and method for performing video collaboration
US20090210789A1 (en) 2008-02-14 2009-08-20 Microsoft Corporation Techniques to generate a visual composition for a multimedia conference event
US20090209339A1 (en) 2008-02-14 2009-08-20 Aruze Gaming America, Inc. Gaming Apparatus Capable of Conversation with Player, Control Method Thereof, Gaming System Capable of Conversation with Player, and Control Method Thereof
US8144187B2 (en) 2008-03-14 2012-03-27 Microsoft Corporation Multiple video stream capability negotiation
US20100030578A1 (en) 2008-03-21 2010-02-04 Siddique M A Sami System and method for collaborative shopping, business and entertainment
US20090288007A1 (en) 2008-04-05 2009-11-19 Social Communications Company Spatial interfaces for realtime networked communications
US20090254843A1 (en) 2008-04-05 2009-10-08 Social Communications Company Shared virtual area communication environment based apparatus and methods
US20100005411A1 (en) 2008-07-02 2010-01-07 Icharts, Inc. Creation, sharing and embedding of interactive charts
US20110164141A1 (en) 2008-07-21 2011-07-07 Marius Tico Electronic Device Directional Audio-Video Capture
US20100026780A1 (en) 2008-07-31 2010-02-04 Nokia Corporation Electronic device directional audio capture
US20120198334A1 (en) 2008-09-19 2012-08-02 Net Power And Light, Inc. Methods and systems for image sharing in a collaborative work space
US20100122184A1 (en) 2008-09-19 2010-05-13 Musigy Usa, Inc. Method and System for Distributed Computing Interface
US20120084672A1 (en) 2008-09-19 2012-04-05 Net Power And Light, Inc. Methods and systems for sharing images synchronized across a distributed computing interface
US8917310B2 (en) 2008-11-24 2014-12-23 Shindig, Inc. Multiparty communications systems and methods that optimize communications based on mode and available bandwidth
US8390670B1 (en) 2008-11-24 2013-03-05 Shindig, Inc. Multiparty communications systems and methods that optimize communications based on mode and available bandwidth
US8405702B1 (en) 2008-11-24 2013-03-26 Shindig, Inc. Multiparty communications systems and methods that utilize multiple modes of communication
US20130191479A1 (en) 2008-11-24 2013-07-25 Shindig, Inc. Multiparty communications systems and methods that optimize communications based on mode and available bandwidth
US9215412B2 (en) 2008-11-24 2015-12-15 Shindig, Inc. Multiparty communications systems and methods that optimize communications based on mode and available bandwidth
US20140176665A1 (en) * 2008-11-24 2014-06-26 Shindig, Inc. Systems and methods for facilitating multi-user events
US8902272B1 (en) 2008-11-24 2014-12-02 Shindig, Inc. Multiparty communications systems and methods that employ composite communications
US20100131868A1 (en) 2008-11-26 2010-05-27 Cisco Technology, Inc. Limitedly sharing application windows in application sharing sessions
US20100146085A1 (en) 2008-12-05 2010-06-10 Social Communications Company Realtime kernel
US8647206B1 (en) 2009-01-15 2014-02-11 Shindig, Inc. Systems and methods for interfacing video games and user communications
US20130024785A1 (en) 2009-01-15 2013-01-24 Social Communications Company Communicating between a virtual area and a physical space
US20150365305A1 (en) * 2009-04-07 2015-12-17 Verisign, Inc. Domain name system traffic analysis
US8779265B1 (en) 2009-04-24 2014-07-15 Shindig, Inc. Networks of portable electronic devices that collectively generate sound
US20100316232A1 (en) 2009-06-16 2010-12-16 Microsoft Corporation Spatial Audio for Audio Conferencing
US20150046800A1 (en) 2009-08-26 2015-02-12 Eustace Prince Isidore Advanced Editing and Interfacing in User Applications
CA2771785A1 (en) 2009-08-27 2011-03-03 Net Power And Light, Inc. A system and method for pervasive computing
US8060560B2 (en) 2009-08-27 2011-11-15 Net Power And Light, Inc. System and method for pervasive computing
EP2471221A1 (en) 2009-08-27 2012-07-04 Net Power And Light, Inc. A system and method for pervasive computing
WO2011025989A1 (en) 2009-08-27 2011-03-03 Musigy Usa, Inc. A system and method for pervasive computing
US20110055317A1 (en) 2009-08-27 2011-03-03 Musigy Usa, Inc. System and Method for Pervasive Computing
US20120124128A1 (en) 2009-08-27 2012-05-17 Net Power And Light, Inc. System for pervasive computing
US20110060992A1 (en) 2009-09-07 2011-03-10 Jevons Oliver Matthew Video-collaboration apparatus and method
US20110072366A1 (en) 2009-09-18 2011-03-24 Barry Spencer Systems and methods for multimedia multipoint real-time conferencing
US20120084456A1 (en) 2009-09-29 2012-04-05 Net Power And Light, Inc. Method and system for low-latency transfer protocol
US20110078532A1 (en) 2009-09-29 2011-03-31 Musigy Usa, Inc. Method and system for low-latency transfer protocol
US8171154B2 (en) 2009-09-29 2012-05-01 Net Power And Light, Inc. Method and system for low-latency transfer protocol
US8527654B2 (en) 2009-09-29 2013-09-03 Net Power And Light, Inc. Method and system for low-latency transfer protocol
CA2774014A1 (en) 2009-09-29 2011-04-07 Net Power And Light, Inc. Method and system for low-latency transfer protocol
US20120246227A1 (en) 2009-09-29 2012-09-27 Net Power And Light, Inc. Method and system for low-latency transfer protocol
WO2011041229A2 (en) 2009-09-29 2011-04-07 Net Power And Light, Inc. Method and system for low-latency transfer protocol
EP2484091A2 (en) 2009-09-29 2012-08-08 Net Power And Light, Inc. Method and system for low-latency transfer protocol
US20110209104A1 (en) 2010-02-25 2011-08-25 Microsoft Corporation Multi-screen synchronous slide gesture
US20130216206A1 (en) 2010-03-08 2013-08-22 Vumanity Media, Inc. Generation of Composited Video Programming
WO2013043207A1 (en) 2010-04-30 2013-03-28 American Teleconferencing Services, Ltd. Event management/production for an online event
US20110270922A1 (en) 2010-04-30 2011-11-03 American Teleconferencing Services Ltd. Managing participants in a conference via a conference user interface
US20110279634A1 (en) 2010-05-12 2011-11-17 Alagu Periyannan Systems and methods for real-time multimedia communications across multiple standards and proprietary devices
US20130173531A1 (en) 2010-05-24 2013-07-04 Intersect Ptp, Inc. Systems and methods for collaborative storytelling in a virtual space
US20120002001A1 (en) 2010-07-01 2012-01-05 Cisco Technology Conference participant visualization
US8558868B2 (en) 2010-07-01 2013-10-15 Cisco Technology, Inc. Conference participant visualization
US20120041859A1 (en) 2010-08-12 2012-02-16 Net Power And Light, Inc. System architecture and methods for experiential computing
US8463677B2 (en) 2010-08-12 2013-06-11 Net Power And Light, Inc. System architecture and methods for experimental computing
WO2012021174A2 (en) 2010-08-12 2012-02-16 Net Power And Light Inc. EXPERIENCE OR "SENTIO" CODECS, AND METHODS AND SYSTEMS FOR IMPROVING QoE AND ENCODING BASED ON QoE EXPERIENCES
WO2012021173A2 (en) 2010-08-12 2012-02-16 Net Power And Light Inc. System architecture and methods for experiential computing
US20120039382A1 (en) 2010-08-12 2012-02-16 Net Power And Light, Inc. Experience or "sentio" codecs, and methods and systems for improving QoE and encoding based on QoE experiences
WO2012021901A2 (en) 2010-08-13 2012-02-16 Net Power And Light Inc. Methods and systems for virtual experiences
US20120272162A1 (en) 2010-08-13 2012-10-25 Net Power And Light, Inc. Methods and systems for virtual experiences
US20120038550A1 (en) 2010-08-13 2012-02-16 Net Power And Light, Inc. System architecture and methods for distributed multi-sensor gesture processing
US20120060101A1 (en) 2010-08-30 2012-03-08 Net Power And Light, Inc. Method and system for an interactive event experience
US20120084682A1 (en) 2010-10-01 2012-04-05 Imerj LLC Maintaining focus upon swapping of images
WO2012054089A2 (en) 2010-10-21 2012-04-26 Net Power And Light Inc. Distributed processing pipeline and distributed layered application processing
US20130156093A1 (en) 2010-10-21 2013-06-20 Net Power And Light, Inc. System architecture and method for composing and directing participant experiences
WO2012054895A2 (en) 2010-10-21 2012-04-26 Net Power And Light Inc. System architecture and method for composing and directing participant experiences
US20120151541A1 (en) 2010-10-21 2012-06-14 Stanislav Vonog System architecture and method for composing and directing participant experiences
US20120127183A1 (en) 2010-10-21 2012-05-24 Net Power And Light, Inc. Distributed Processing Pipeline and Distributed Layered Application Processing
US8429704B2 (en) 2010-10-21 2013-04-23 Net Power And Light, Inc. System architecture and method for composing and directing participant experiences
EP2630630A2 (en) 2010-10-21 2013-08-28 Net Power And Light, Inc. System architecture and method for composing and directing participant experiences
US20120098919A1 (en) 2010-10-22 2012-04-26 Aaron Tang Video integration
WO2012060977A1 (en) 2010-11-02 2012-05-10 Net Power And Light Inc. Method and system for resource-aware dynamic bandwidth control
EP2636194A1 (en) 2010-11-02 2013-09-11 Net Power And Light, Inc. Method and system for resource-aware dynamic bandwidth control
WO2012060978A1 (en) 2010-11-02 2012-05-10 Net Power And Light Inc. Method and system for data packet queue recovery
US20120110162A1 (en) 2010-11-02 2012-05-03 Net Power And Light, Inc. Method and system for resource-aware dynamic bandwidth control
US8458328B2 (en) 2010-11-02 2013-06-04 Net Power And Light, Inc. Method and system for data packet queue recovery
US20120110163A1 (en) 2010-11-02 2012-05-03 Net Power And Light, Inc. Method and system for data packet queue recovery
US20140019882A1 (en) 2010-12-27 2014-01-16 Google Inc. Social network collaboration space
US20120182384A1 (en) 2011-01-17 2012-07-19 Anderson Eric C System and method for interactive video conferencing
WO2012103376A2 (en) 2011-01-26 2012-08-02 Net Power And Light Inc. Method and system for a virtual playdate
US20120192087A1 (en) 2011-01-26 2012-07-26 Net Power And Light, Inc. Method and system for a virtual playdate
WO2012135384A2 (en) 2011-03-28 2012-10-04 Net Power And Light, Inc. Information mixer and system control for attention management
US20130021431A1 (en) 2011-03-28 2013-01-24 Net Power And Light, Inc. Information mixer and system control for attention management
US20120249719A1 (en) 2011-03-28 2012-10-04 Net Power And Light, Inc. Information mixer and system control for attention management
US20120297320A1 (en) 2011-03-28 2012-11-22 Net Power And Light, Inc. Information mixer and system control for attention management
US20120293600A1 (en) 2011-03-28 2012-11-22 Net Power And Light, Inc. Information mixer and system control for attention management
WO2012151471A2 (en) 2011-05-05 2012-11-08 Net Power And Light Inc. Identifying gestures using multiple sensors
US20120280905A1 (en) 2011-05-05 2012-11-08 Net Power And Light, Inc. Identifying gestures using multiple sensors
US8635293B2 (en) 2011-06-13 2014-01-21 Microsoft Corporation Asynchronous video threads
WO2012177779A2 (en) 2011-06-21 2012-12-27 Net Power And Light, Inc. Just-in-time transcoding of application content
US20120331387A1 (en) 2011-06-21 2012-12-27 Net Power And Light, Inc. Method and system for providing gathering experience
WO2012177641A2 (en) 2011-06-21 2012-12-27 Net Power And Light Inc. Method and system for providing gathering experience
US8549167B2 (en) 2011-06-21 2013-10-01 Net Power And Light, Inc. Just-in-time transcoding of application content
US20120326866A1 (en) 2011-06-21 2012-12-27 Net Power And Light, Inc. Method and system for providing gathering experience
US20120331089A1 (en) 2011-06-21 2012-12-27 Net Power And Light, Inc. Just-in-time transcoding of application content
US20130014027A1 (en) 2011-07-08 2013-01-10 Net Power And Light, Inc. Method and system for representing audiences in ensemble experiences
US20130014028A1 (en) 2011-07-09 2013-01-10 Net Power And Light, Inc. Method and system for drawing
US20130019184A1 (en) 2011-07-11 2013-01-17 Net Power And Light, Inc. Methods and systems for virtual experiences
US20130018960A1 (en) 2011-07-14 2013-01-17 Surfari Inc. Group Interaction around Common Online Content
US20130097512A1 (en) 2011-09-08 2013-04-18 Samsung Electronics Co., Ltd. Apparatus and content playback method thereof
US20130063542A1 (en) 2011-09-14 2013-03-14 Cisco Technology, Inc. System and method for configuring video data
US20130073978A1 (en) 2011-09-16 2013-03-21 Social Communications Company Capabilities based management of virtual areas
US20130088518A1 (en) 2011-10-10 2013-04-11 Net Power And Light, Inc. Methods and systems for providing a graphical user interface
US20130254287A1 (en) 2011-11-05 2013-09-26 Abhishek Biswas Online Social Interaction, Education, and Health Care by Analysing Affect and Cognitive Features
US8749610B1 (en) 2011-11-29 2014-06-10 Google Inc. Managing nodes of a synchronous communication conference
US20130169742A1 (en) 2011-12-28 2013-07-04 Google Inc. Video conferencing with unlimited dynamic active participants
US20130239063A1 (en) 2012-03-06 2013-09-12 Apple Inc. Selection of multiple images
WO2013149079A1 (en) 2012-03-28 2013-10-03 Net Power And Light, Inc. Information mixer and system control for attention management
US20130289983A1 (en) 2012-04-26 2013-10-31 Hyorim Park Electronic device and method of controlling the same
US20130321648A1 (en) 2012-06-01 2013-12-05 Nintendo Co., Ltd. Computer-readable medium, information processing apparatus, information processing system and information processing method
US9241131B2 (en) 2012-06-08 2016-01-19 Samsung Electronics Co., Ltd. Multiple channel communication using multiple cameras
US20140004496A1 (en) 2012-06-29 2014-01-02 Fujitsu Limited Dynamic evolving virtual classroom
US20150106750A1 (en) 2012-07-12 2015-04-16 Sony Corporation Display control apparatus, display control method, program, and communication system
US20140184723A1 (en) 2012-12-31 2014-07-03 T-Mobile Usa, Inc. Display and Service Adjustments to Enable Multi-Tasking During a Video Call
US20140229888A1 (en) 2013-02-14 2014-08-14 Eulina KO Mobile terminal and method of controlling the mobile terminal
US20140325428A1 (en) 2013-04-29 2014-10-30 Lg Electronics Inc. Mobile terminal and method of controlling the mobile terminal
US20150025888A1 (en) * 2013-07-22 2015-01-22 Nuance Communications, Inc. Speaker recognition and voice tagging for improved service
US20150052453A1 (en) 2013-08-13 2015-02-19 Bank Of America Corporation Virtual Position Display and Indicators
US20150049885A1 (en) * 2013-08-19 2015-02-19 Avaya Inc. Pairwise audio capture device selection

Non-Patent Citations (14)

* Cited by examiner, † Cited by third party
Title
2007 WebEx Meeting Center User's Guide.
2011 Blackboard Collaborate User's Guide.
About TokBox, Inc., All about TokBox, http://www.tokbox.com/about, retrieved Feb. 4, 2011, p. 1.
Cisco (Cisco WebEx, WebEx Event Center Users Guide, Version 6.5, Last updated: 072310-IC Last updated: 111810, retrieved on Mar. 2, 2016 from https://www.ieee.org/about/volunteers/remote-conferencing/webex-event-center-user-guide.pdf). *
Cisco2 (User guide, Cisco WebEx Audio Controls Guide and Release Notes for FR29, retrieved on Sep. 1, 2016 using the Wayback Machine, dated May 14, 2013, retrieved from https://web.archive.org/web/20130514031844/http://www.meetingconnect.net/files/CiscoWebExAudioControlsandReleaseNotes.pdf). *
CrunchBase Profile, CrunchBase readeo, http://www.crunchbase.com/company/readeo, retrieved Feb. 3, 2011, pp. 1-2.
CrunchBase Profile, CrunchBase Rounds, http://www.crunchbase.com/company/6rounds, retrieved Feb. 4, 2011, pp. 1-2.
CrunchBase Profile, CrunchBase TokBox, http://www.crunchbase.com/company/tokbox, retrieved Feb. 4, 2011, pp. 1-3.
MacDonald, Heidi. Shindig Offers Authors Interactive Video Conferencing. Blog posted Sep. 12, 2012. Publishers Weekly. Retrieved from [http://publishersweekly.com] on [Aug. 15, 2016]. 5 Pages.
Online Collaboration GoToMeeting, http://www.gotomeeting.com/fec/online-collaboration, retrieved Feb. 4, 2011, pp. 1-4.
Readeo Press Release, http://www.mmpublicity.com, Feb. 25, 2010, pp. 1-2.
Rounds.com, Make friends online and enjoy free webcam chats, http://www.rounds.com/about, retrieved Feb. 4, 2011, pp. 1-3.
Shindig, Various Informational Pages Published as of Jul. 21, 2012. Retrieved via Internet Archive from [http://shindigevents.com] on [Aug. 5, 2016].
Slideshare, Shindig Magazine Video Chat Events. Slide presentation published Oct. 9, 2012. Retrieved from [http://slideshare.net] on [Aug. 11, 2016]. 11 Pages.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160343268A1 (en) * 2013-09-11 2016-11-24 Lincoln Global, Inc. Learning management system for a real-time simulated virtual reality welding training environment
US10198962B2 (en) * 2013-09-11 2019-02-05 Lincoln Global, Inc. Learning management system for a real-time simulated virtual reality welding training environment

Also Published As

Publication number Publication date
US20150326458A1 (en) 2015-11-12

Similar Documents

Publication Publication Date Title
US9733333B2 (en) Systems and methods for monitoring participant attentiveness within events and group assortments
US10546235B2 (en) Relativistic sentiment analyzer
US9734410B2 (en) Systems and methods for analyzing facial expressions within an online classroom to gauge participant attentiveness
US20140176665A1 (en) Systems and methods for facilitating multi-user events
US10542237B2 (en) Systems and methods for facilitating communications amongst multiple users
EP3189622B1 (en) System and method for tracking events and providing feedback in a virtual conference
US20140229866A1 (en) Systems and methods for grouping participants of multi-user events
US9426421B2 (en) System and method for determining conference participation
US20180203587A1 (en) Systems and methods for forming group communications within an online event
CN112262367A (en) Audio selection based on user engagement
US20150304376A1 (en) Systems and methods for providing a composite audience view
US20140136626A1 (en) Interactive Presentations
US11528449B2 (en) System and methods to determine readiness in video collaboration
JP6817580B2 (en) Information processing method, information processing system and information processing equipment
US20220191263A1 (en) Systems and methods to automatically perform actions based on media content
US11606465B2 (en) Systems and methods to automatically perform actions based on media content
US11290684B1 (en) Systems and methods to automatically perform actions based on media content
US20150301694A1 (en) Systems and methods for integrating in-person and online aspects into a seamless event experience
US10133916B2 (en) Image and identity validation in video chat events
US11595278B2 (en) Systems and methods to automatically perform actions based on media content
JP6267819B1 (en) Class system, class server, class support method, and class support program
CN104424825A (en) Remote teaching method and system
WO2022161446A1 (en) Control method and apparatus, and electronic device
JP2018165978A (en) Lesson system, lesson server, lesson support method, and lesson support program
US11749079B2 (en) Systems and methods to automatically perform actions based on media content

Legal Events

Date Code Title Description
AS Assignment

Owner name: SHINDIG, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOTTLIEB, STEVEN M.;REEL/FRAME:032846/0894

Effective date: 20140505

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4