US20090062943A1 - Methods and apparatus for automatically controlling the sound level based on the content - Google Patents
- Publication number
- US20090062943A1 (application No. US 11/895,723)
- Authority
- US
- United States
- Prior art keywords
- content
- sound level
- information
- current sound
- detecting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/60—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4394—Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/4508—Management of client data or end-user data
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
Definitions
- the present invention relates generally to controlling the sound level and, more particularly, to automatically controlling the sound level based on the content.
- the audio signals are reproduced at sound levels that are either too low or too high for the user.
- the audio signals associated with a television commercial may be reproduced too loudly at times for the user.
- the audio signals associated with a television program may be reproduced too softly for the user.
- the methods and apparatuses detect content and information related to the content; utilize the content at a current sound level; and modify the current sound level based on the information and the content.
- FIG. 1 is a diagram illustrating an environment within which the methods and apparatuses for automatically controlling the sound level based on the content are implemented;
- FIG. 2 is a simplified block diagram illustrating one embodiment in which the methods and apparatuses for automatically controlling the sound level based on the content are implemented;
- FIG. 3 is a simplified block diagram illustrating a system, consistent with one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content;
- FIG. 4 illustrates an exemplary record consistent with one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content;
- FIG. 5 is a flow diagram consistent with one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content.
- FIG. 6 is a flow diagram consistent with one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content.
- references to “electronic device” include a device such as a personal digital video recorder, digital audio player, gaming console, a set top box, a personal computer, a cellular telephone, a personal digital assistant, a specialized computer such as an electronic interface with an automobile, and the like.
- references to “content” include audio streams, images, video streams, photographs, graphical displays, text files, software applications, electronic messages, and the like.
- the methods and apparatuses for automatically controlling the sound level based on the content are configured to adjust the current sound level while utilizing the content based on preferences of the user.
- the current sound level is adjusted multiple times based on the current location of the content.
- the current sound level may be adjusted based on the content type such as music, television, commercials, and the like.
- use of other devices also adjusts the current sound level of the content. For example, the detection of a telephone ringing or a telephone in use may decrease the current sound level of the content.
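As a rough illustration of the behavior just described, the following sketch picks a sound level from per-content-type preferences and lowers it while a telephone is detected in use. All names and numeric levels are illustrative assumptions; the patent does not prescribe an implementation.

```python
# Hypothetical sketch: choose a sound level by content type, and duck it
# while a telephone is in use. Levels and names are illustrative only.

CONTENT_TYPE_LEVELS = {"music": 60, "television": 50, "commercial": 35}
DEFAULT_LEVEL = 45        # used when the content type is unknown
PHONE_DUCK_LEVEL = 15     # reduced level while a call is active

def current_level(content_type: str, phone_in_use: bool) -> int:
    level = CONTENT_TYPE_LEVELS.get(content_type, DEFAULT_LEVEL)
    return min(level, PHONE_DUCK_LEVEL) if phone_in_use else level
```

Under these assumed presets, a commercial plays at the lower preset level even without a call, and any content drops to the ducked level while the telephone is in use.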
- FIG. 1 is a diagram illustrating an environment within which the methods and apparatuses for automatically controlling the sound level based on the content are implemented.
- the environment includes an electronic device 110 (e.g., a computing platform configured to act as a client device, such as a personal digital video recorder, digital audio player, computer, a personal digital assistant, a cellular telephone, a camera device, a set top box, a gaming console), a user interface 115 , a network 120 (e.g., a local area network, a home network, the Internet), and a server 130 (e.g., a computing platform configured to act as a server).
- the network 120 can be implemented via wireless or wired solutions.
- one or more user interface 115 components are made integral with the electronic device 110 (e.g., keypad and video display screen input and output interfaces in the same housing as personal digital assistant electronics, as in a Clie® manufactured by Sony Corporation).
- one or more user interface 115 components (e.g., a keyboard, a pointing device such as a mouse and trackball, a microphone, a speaker, a display, a camera) are physically separate from, and are conventionally coupled to, the electronic device 110.
- the user utilizes interface 115 to access and control content and applications stored in electronic device 110 , server 130 , or a remote storage device (not shown) coupled via network 120 .
- embodiments for automatically controlling the sound level based on the content as described below are executed by an electronic processor in electronic device 110 , in server 130 , or by processors in electronic device 110 and in server 130 acting together.
- Server 130 is illustrated in FIG. 1 as a single computing platform, but in other instances two or more interconnected computing platforms act as a server.
- FIG. 2 is a simplified diagram illustrating an exemplary architecture in which the methods and apparatuses for automatically controlling the sound level based on the content are implemented.
- the exemplary architecture includes a plurality of electronic devices 110 , a server device 130 , and a network 120 connecting electronic devices 110 to server 130 and each electronic device 110 to each other.
- the plurality of electronic devices 110 are each configured to include a computer-readable medium 209 , such as random access memory, coupled to an electronic processor 208 .
- Processor 208 executes program instructions stored in the computer-readable medium 209 .
- a unique user operates each electronic device 110 via an interface 115 as described with reference to FIG. 1 .
- Server device 130 includes a processor 211 coupled to a computer-readable medium 212 .
- the server device 130 is coupled to one or more additional external or internal devices, such as, without limitation, a secondary data storage element, such as database 240 .
- processors 208 and 211 are manufactured by Intel Corporation, of Santa Clara, Calif. In other instances, other microprocessors are used.
- the plurality of client devices 110 and the server 130 include instructions for a customized application for automatically controlling the sound level based on the content.
- the plurality of computer-readable medium 209 and 212 contain, in part, the customized application.
- the plurality of client devices 110 and the server 130 are configured to receive and transmit electronic messages for use with the customized application.
- the network 120 is configured to transmit electronic messages for use with the customized application.
- One or more user applications are stored in memories 209, in memory 212, or a single user application is stored in part in one memory 209 and in part in memory 212.
- a stored user application, regardless of storage location, is made customizable based on automatically controlling the sound level based on the content as determined using the embodiments described below.
- FIG. 3 illustrates one embodiment of a system 300 for automatically controlling the sound level based on the content.
- the system 300 includes a content detection module 310 , a sound level detection module 320 , a storage module 330 , an interface module 340 , a control module 350 , a profile module 360 , a sound level adjustment module 370 , and a device detection module 380 .
- control module 350 communicates with the content detection module 310, the sound level detection module 320, the storage module 330, the interface module 340, the profile module 360, the sound level adjustment module 370, and the device detection module 380.
- control module 350 coordinates tasks, requests, and communications between the content detection module 310, the sound level detection module 320, the storage module 330, the interface module 340, the profile module 360, the sound level adjustment module 370, and the device detection module 380.
- the content detection module 310 detects content such as images, text, graphics, video, audio, and the like. In one embodiment, the content detection module 310 is configured to uniquely identify the content.
- the content detection module 310 detects information related to the content.
- information related to the content may include the title of the content, the content type, specific sound levels of the content at specific locations, and the like. Further, information related to the content may be stored within profile information as shown in FIG. 4 or within metadata corresponding to the content.
- the sound level detection module 320 detects the sound level associated with the content. In one embodiment, the sound level detection module 320 detects a predetermined sound level for the specific content. In one embodiment, the predetermined sound level can be determined from the profile information associated with the content. In one embodiment, the predetermined sound level varies based on the portion of the content. In another embodiment, the predetermined sound level is constant throughout the content.
- the sound level detection module 320 detects changes to the sound level while the content is being played. For example, a user may manually change the sound level of the content while the content is being played in one embodiment. In some instances, the sound level may be changed multiple times throughout the content based on preferences of the user. In one embodiment, the sound level detection module 320 detects these changes in sound level and the location within the content that these changes occur.
- the storage module 330 stores a plurality of profiles wherein each profile is associated with various content and other data associated with the content. In one embodiment, the profile stores exemplary information as shown in a profile in FIG. 4 . In one embodiment, the storage module 330 is located within the server device 130 . In another embodiment, portions of the storage module 330 are located within the electronic device 110 .
- the interface module 340 detects the electronic device 110 as the electronic device 110 is connected to the network 120 .
- the interface module 340 detects input from the interface device 115 such as a keyboard, a mouse, a microphone, a still camera, a video camera, and the like.
- the interface module 340 provides output to the interface device 115 such as a display, speakers, external storage devices, an external network, and the like.
- the profile module 360 processes profile information related to the specific content.
- exemplary profile information is shown within a record illustrated in FIG. 4 .
- each profile corresponds with a particular content.
- groups of profiles correspond with a particular user.
- the sound level adjustment module 370 adjusts the sound level of the content detected within the content detection module 310 .
- the sound level is adjusted by the sound level adjustment module 370 based on the current sound level detected by the sound level detection module. In another embodiment, the sound level is adjusted by the sound level adjustment module 370 based on the information stored within the profile module 360 . In another embodiment, the sound level is adjusted by the sound level adjustment module 370 based on the devices detected within the device detection module 380 .
- the device detection module 380 detects a presence of devices.
- the devices include stationary devices such as video cassette recorders, DVD players, and televisions.
- the devices also include portable devices such as laptop computers, cellular telephones, personal digital assistants, portable music players, and portable video players.
- the device detection module 380 detects each device for status.
- status of the device includes whether the device is on, off, playing content, and the like.
- the device detection module 380 is configured to detect whether a telephone is being utilized. In other examples, another device may be substituted for the telephone.
- the system 300 in FIG. 3 is shown for exemplary purposes and is merely one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content. Additional modules may be added to the system 300 without departing from the scope of the methods and apparatuses for automatically controlling the sound level based on the content. Similarly, modules may be combined or deleted without departing from the scope of the methods and apparatuses for automatically controlling the sound level based on the content.
- FIG. 4 illustrates a simplified record 400 that corresponds to a profile that describes a specific content.
- the record 400 is stored within the storage module 330 and utilized within the system 300 .
- the record 400 includes a content identification field 405 , a location within content field 410 , a sound level field 415 , a content type field 420 , and a user identification field 425 .
- the content identification field 405 identifies a specific content associated with the record 400 .
- the content's name is utilized as a label for the content identification field 405 .
- the location within content field 410 is associated with a specific location within the content.
- the specific location within the content may be identified by a time stamp.
- the sound level field 415 identifies the sound level that is desired for the content that is associated with the record 400 . In one embodiment, a single sound level is assigned to the content. In another embodiment, different sound levels are assigned to different portions of the content as described by the location within content field 410 .
- the content type field 420 identifies the type of content that is associated with the identified content with the record 400 .
- the types of content include music, television, commercials, talk radio, and the like.
- the types of content may be further distinguished by types of music such as rock, classical, jazz, heavy metal, and the like.
- the user identification field 425 identifies a user associated with the record 400 .
- a user's name is utilized as a label for the user identification field 425 .
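The fields of record 400 can be mirrored as a simple data structure. The sketch below is illustrative only; the field names and types are assumptions, with comments tying each field to its number in FIG. 4.

```python
from dataclasses import dataclass

# Hypothetical sketch of record 400 from FIG. 4.
@dataclass
class Record400:
    content_id: str      # field 405: e.g., the content's name
    location: float      # field 410: time stamp within the content, in seconds
    sound_level: int     # field 415: desired sound level at that location
    content_type: str    # field 420: e.g., "music", "television", "commercial"
    user_id: str         # field 425: e.g., the user's name
```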
- the flow diagrams as depicted in FIGS. 5 and 6 are one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content.
- the blocks within the flow diagrams can be performed in a different sequence without departing from the spirit of the methods and apparatuses for automatically controlling the sound level based on the content. Further, blocks can be deleted, added, or combined without departing from the spirit of the methods and apparatuses for automatically controlling the sound level based on the content.
- the flow diagram in FIG. 5 illustrates changing sound levels for content according to one embodiment of the invention.
- content is identified.
- specific content such as a television show that is being utilized is detected and identified.
- content type associated with the identified content is also identified.
- the types of content include music, television, commercials, talk radio, and the like.
- the types of content may be further distinguished by types of music such as rock, classical, jazz, heavy metal, and the like.
- the detection of the content type is performed through detection of information associated with the identified content such as metadata, profile information, and the like.
- preferences are detected that are associated with the identified content.
- the preferences are stored within a profile as exemplified within record 400 .
- the preferences include sound level preferences for the entire content or portions of the content, association with particular users, and the content type of the content.
- In Block 520, a match is performed between the content identified within Block 505 and the preferences detected within Block 515.
- the classification preference includes sound level preferences for a specific content type.
- the sound level for the content is set at a default sound level. If the content type as detected within the Block 510 matches a sound level preference for the specific content type within the Block 530 , then the content is played at the predetermined sound level preference. In another embodiment, if the content type is not sufficiently identified within the Block 510 , then the identified content is played at a default sound level.
- each portion of the content is played at the predetermined sound level. For instance, if different portions of the content have different sound levels, then each portion of the content is played at the corresponding sound levels.
- each of the content types is associated with a unique sound level. Based on the content type detected within the Block 510 , the identified content is played at the preferred sound level for the detected content type.
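The matching and fallback logic described above can be sketched as follows. The dictionaries and the default value are hypothetical stand-ins for the stored preferences; the Block numbers refer to FIG. 5.

```python
def select_level(content_id, content_type, content_prefs, type_prefs, default=45):
    """Pick a sound level for identified content (names are illustrative)."""
    # A preference for this specific content takes precedence (Block 520).
    if content_id in content_prefs:
        return content_prefs[content_id]
    # Otherwise fall back to a preference for the content type (Block 530).
    if content_type in type_prefs:
        return type_prefs[content_type]
    # If neither matches, play at a default sound level.
    return default
```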
- device(s) are detected.
- one of the devices may include a telephone, a computer, a video device, and an audio device.
- In Block 545, if a signal from the detected device is not detected, then devices continue to be detected within Block 540.
- the sound level of the identified content is changed.
- the signal may indicate an incoming telephone call through a ring indicator, a telephone connection, a telephone disconnection, initiating sound through a video device or audio device, and terminating sound through a video device or audio device.
- changing the sound level may set the new sound level either higher or lower than the prior sound level. For example, if the signal indicates a telephone connection, then the new sound level may be decreased relative to the prior sound level. Similarly, if the signal indicates a telephone disconnection, then the new sound level may be increased relative to the prior sound level.
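The telephone example amounts to saving the prior level on connection and restoring it on disconnection. A minimal sketch follows; the signal names and ducking factor are assumptions, not taken from the patent.

```python
class Ducker:
    """Sketch: decrease the sound level on a telephone connection and
    restore the prior level on disconnection. Signal names and the
    ducking factor are illustrative only."""

    def __init__(self, level: int):
        self.level = level
        self._saved = None  # level to restore after the call ends

    def on_signal(self, signal: str) -> int:
        if signal in ("ring", "phone_connected") and self._saved is None:
            self._saved = self.level
            self.level //= 4          # new level decreased relative to prior
        elif signal == "phone_disconnected" and self._saved is not None:
            self.level = self._saved  # new level increased back to prior
            self._saved = None
        return self.level
```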
- the flow diagram in FIG. 6 illustrates capturing sound levels according to one embodiment of the invention.
- a user is detected.
- the identity of the user is detected through a logon process initiated by the user.
- the user is associated with a profile as illustrated as an exemplary record 400 within FIG. 4 .
- content utilized by the detected user is also detected.
- specific content such as a television show that is being viewed by the user is detected and identified.
- the current location of the content being utilized is also identified. For example, the current location or time of the television show is identified and updated as the user watches the television show. Further, the television device utilized to view the television show is also detected.
- the sound level of the content utilized is captured.
- a change in the sound level is captured.
- the location of the content is noted where the change in the sound level occurs.
- the change in the sound level may be detected through a change in a volume control knob or other input.
- the sound level is stored within profile information that corresponds with the content and the user.
- the location of the content is also stored with the corresponding sound level information.
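Capturing changes together with their locations, as described above, reduces to recording each observation whose level differs from the previous one. A hypothetical sketch:

```python
def capture_sound_levels(samples):
    """Keep only the (location, level) points where the level changed.
    samples: (location, level) observations in playback order.
    Names are illustrative; the patent does not specify a format."""
    changes = []
    last = None
    for location, level in samples:
        if level != last:
            changes.append((location, level))
            last = level
    return changes
```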
- an average sound level is stored for the identified content.
- the average sound level is calculated over the course of playing the content.
- the average sound level is stored for future use for this identified content. Further, the average sound level can also be utilized and averaged for the content type of the identified content.
- a most common sound level is stored for the identified content.
- the most common sound level is the sound level that occurs for the greatest amount of time over the course of playing the content.
- the most common sound level is stored for future use for this identified content. Further, the most common sound level can also be utilized and averaged for the content type of the identified content.
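Both statistics can be computed from the captured (location, level) changes. The sketch below assumes a time-weighted average, which the text does not state explicitly; the most common level follows the text's definition as the level held for the greatest amount of time.

```python
def _segments(changes, total_duration):
    # Pair each change with the start of the next one to get (level, duration).
    ends = [loc for loc, _ in changes[1:]] + [total_duration]
    return [(level, end - loc) for (loc, level), end in zip(changes, ends)]

def average_level(changes, total_duration):
    """Time-weighted average sound level over the content (an assumption)."""
    total = sum(level * dur for level, dur in _segments(changes, total_duration))
    return total / total_duration

def most_common_level(changes, total_duration):
    """Level held for the greatest amount of time over the content."""
    durations = {}
    for level, dur in _segments(changes, total_duration):
        durations[level] = durations.get(level, 0) + dur
    return max(durations, key=durations.get)
```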
Abstract
Description
- The present invention relates generally to controlling the sound level and, more particularly, to automatically controlling the sound level based on the content.
- In conjunction with content, there are many devices that are capable of reproducing audio signals for a user. In some instances, the audio signals are reproduced at sound levels that are either too low or too high for the user. For example, the audio signals associated with a television commercial may be reproduced too loudly at times for the user. Similarly, the audio signals associated with a television program maybe reproduced too softly for the user.
- In one embodiment, the methods and apparatuses detect content and information related to the content; utilize the content at a current sound level; and modify the current sound level based on the information and the content.
- The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate and explain one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content. In the drawings,
-
FIG. 1 is a diagram illustrating an environment within which the methods and apparatuses for automatically controlling the sound level based on the content are implemented; -
FIG. 2 is a simplified block diagram illustrating one embodiment in which the methods and apparatuses for automatically controlling the sound level based on the content are implemented; -
FIG. 3 is a simplified block diagram illustrating a system, consistent with one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content; -
FIG. 4 illustrates an exemplary record consistent with one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content; -
FIG. 5 is a flow diagram consistent with one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content; and -
FIG. 6 is a flow diagram consistent with one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content. - The following detailed description of the methods and apparatuses for automatically controlling the sound level based on the content refers to the accompanying drawings. The detailed description is not intended to limit the methods and apparatuses for automatically controlling the sound level based on the content. Instead, the scope of the methods and apparatuses for automatically selecting a profile is defined by the appended claims and equivalents. Those skilled in the art will recognize that many other implementations are possible, consistent with the methods and apparatuses for automatically controlling the sound level based on the content.
- References to “electronic device” includes a device such as a personal digital video recorder, digital audio player, gaming console, a set top box, a personal computer, a cellular telephone, a personal digital assistant, a specialized computer such as an electronic interface with an automobile, and the like.
- References to “content” includes audio streams, images, video streams, photographs, graphical displays, text files, software applications, electronic messages, and the like.
- In one embodiment, the methods and apparatuses for automatically controlling the sound level based on the content are configured to adjust the current sound level while utilizing the content based on preferences of the user. In one embodiment, the current sound level is adjusted multiple times based on the current location of the content. Further, the current sound level may be adjusted based on the content type such as music, television, commercials, and the like. In one embodiment, use of other devices also adjusts the current sound level of the content. For example, the detection of a telephone ringing or a telephone in use may decrease the current sound level of the content.
-
FIG. 1 is a diagram illustrating an environment within which the methods and apparatuses for automatically controlling the sound level based on the content are implemented. The environment includes an electronic device 110 (e.g., a computing platform configured to act as a client device, such as a personal digital video recorder, digital audio player, computer, a personal digital assistant, a cellular telephone, a camera device, a set top box, a gaming console), auser interface 115, a network 120 (e.g., a local area network, a home network, the Internet), and a server 130 (e.g., a computing platform configured to act as a server). In one embodiment, thenetwork 120 can be implemented via wireless or wired solutions. - In one embodiment, one or
more user interface 115 components are made integral with the electronic device 110 (e.g., keypad and video display screen input and output interfaces in the same housing as personal digital assistant electronics (e.g., as in a Clie® manufactured by Sony Corporation). In other embodiments, one ormore user interface 115 components (e.g., a keyboard, a pointing device such as a mouse and trackball, a microphone, a speaker, a display, a camera) are physically separate from, and are conventionally coupled to,electronic device 110. The user utilizesinterface 115 to access and control content and applications stored inelectronic device 110,server 130, or a remote storage device (not shown) coupled vianetwork 120. - In accordance with the invention, embodiments for automatically controlling the sound level based on the content as described below are executed by an electronic processor in
electronic device 110, inserver 130, or by processors inelectronic device 110 and inserver 130 acting together.Server 130 is illustrated inFIG. 1 as being a single computing platform, but in other instances are two or more interconnected computing platforms that act as a server. -
FIG. 2 is a simplified diagram illustrating an exemplary architecture in which the methods and apparatuses for automatically controlling the sound level based on the content are implemented. The exemplary architecture includes a plurality ofelectronic devices 110, aserver device 130, and anetwork 120 connectingelectronic devices 110 toserver 130 and eachelectronic device 110 to each other. The plurality ofelectronic devices 110 are each configured to include a computer-readable medium 209, such as random access memory, coupled to anelectronic processor 208.Processor 208 executes program instructions stored in the computer-readable medium 209. A unique user operates eachelectronic device 110 via aninterface 115 as described with reference toFIG. 1 . -
Server device 130 includes aprocessor 211 coupled to a computer-readable medium 212. In one embodiment, theserver device 130 is coupled to one or more additional external or internal devices, such as, without limitation, a secondary data storage element, such asdatabase 240. - In one instance,
processors - The plurality of
client devices 110 and theserver 130 include instructions for a customized application for automatically controlling the sound level based on the content. In one embodiment, the plurality of computer-readable medium client devices 110 and theserver 130 are configured to receive and transmit electronic messages for use with the customized application. Similarly, thenetwork 120 is configured to transmit electronic messages for use with the customized application. - One or more user applications are stored in
memories 209, inmemory 211, or a single user application is stored in part in onememory 209 and in part inmemory 211. In one instance, a stored user application, regardless of storage location, is made customizable based on automatically controlling the sound level based on the content as determined using embodiments described below. -
FIG. 3 illustrates one embodiment of asystem 300 for automatically controlling the sound level based on the content. Thesystem 300 includes acontent detection module 310, a soundlevel detection module 320, astorage module 330, aninterface module 340, acontrol module 350, aprofile module 360, a sound level adjustment module 370, and adevice detection module 380. - In one embodiment, the
control module 350 communicates with thecontent detection module 310, the soundlevel detection module 320, thestorage module 330, theinterface module 340, thecontrol module 350, theprofile module 360, the sound level adjustment module 370, and thedevice detection module 380. - In one embodiment, the
control module 350 coordinates tasks, requests, and communications between thecontent detection module 310, the soundlevel detection module 320, thestorage module 330, theinterface module 340, thecontrol module 350, theprofile module 360, the sound level adjustment module 370, and thedevice detection module 380. - In one embodiment, the
content detection module 310 detects content such as images, text, graphics, video, audio, and the like. In one embodiment, the content detection module 310 is configured to uniquely identify the content. - In addition to detecting the content, the
content detection module 310 detects information related to the content. In one embodiment, information related to the content may include the title of the content, the content type, the specific sound level of the content at specific locations, and the like. Further, information related to the content may be stored within profile information as shown in FIG. 4 or within metadata corresponding to the content. - In one embodiment, the sound
level detection module 320 detects the sound level associated with the content. In one embodiment, the sound level detection module 320 detects a predetermined sound level for the specific content. In one embodiment, the predetermined sound level can be determined from the profile information associated with the content. In one embodiment, the predetermined sound level varies based on the portion of the content. In another embodiment, the predetermined sound level is constant throughout the content. - In another embodiment, the sound
level detection module 320 detects changes to the sound level while the content is being played. For example, in one embodiment, a user may manually change the sound level of the content while the content is being played. In some instances, the sound level may be changed multiple times throughout the content based on preferences of the user. In one embodiment, the sound level detection module 320 detects these changes in sound level and the locations within the content at which these changes occur. - In one embodiment, the
storage module 330 stores a plurality of profiles wherein each profile is associated with various content and other data associated with the content. In one embodiment, the profile stores exemplary information as shown in a profile in FIG. 4. In one embodiment, the storage module 330 is located within the server device 130. In another embodiment, portions of the storage module 330 are located within the electronic device 110. - In one embodiment, the
interface module 340 detects the electronic device 110 as the electronic device 110 is connected to the network 120. - In another embodiment, the
interface module 340 detects input from the interface device 115 such as a keyboard, a mouse, a microphone, a still camera, a video camera, and the like. - In yet another embodiment, the
interface module 340 provides output to the interface device 115 such as a display, speakers, external storage devices, an external network, and the like. - In one embodiment, the
profile module 360 processes profile information related to the specific content. In one embodiment, exemplary profile information is shown within a record illustrated in FIG. 4. In one embodiment, each profile corresponds with a particular content. In another embodiment, groups of profiles correspond with a particular user. - In one embodiment, the sound level adjustment module 370 adjusts the sound level of the content detected within the
content detection module 310. - In one embodiment, the sound level is adjusted by the sound level adjustment module 370 based on the current sound level detected by the sound level detection module 320. In another embodiment, the sound level is adjusted by the sound level adjustment module 370 based on the information stored within the
profile module 360. In another embodiment, the sound level is adjusted by the sound level adjustment module 370 based on the devices detected within the device detection module 380. - In one embodiment, the
device detection module 380 detects the presence of devices. In one embodiment, the devices include stationary devices such as video cassette recorders, DVD players, and televisions. In another embodiment, the devices also include portable devices such as laptop computers, cellular telephones, personal digital assistants, portable music players, and portable video players. - In one embodiment, the
device detection module 380 detects the status of each device. In one embodiment, the status of the device includes whether the device is on, off, playing content, and the like. For example, the device detection module 380 is configured to detect whether a telephone is being utilized. In other examples, another device may be substituted for the telephone. - The
system 300 in FIG. 3 is shown for exemplary purposes and is merely one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content. Additional modules may be added to the system 300 without departing from the scope of the methods and apparatuses for automatically controlling the sound level based on the content. Similarly, modules may be combined or deleted without departing from the scope of the methods and apparatuses for automatically controlling the sound level based on the content. -
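As a concrete illustration of how the sound level adjustment module 370 might weigh its inputs (the current sound level, stored profile information, and detected devices), the following is a minimal Python sketch. The function name, the priority order, and the duck level are assumptions for illustration, not requirements stated in this description.

```python
def choose_sound_level(current_level, profile_level=None, device_active=False, duck_level=10):
    """Pick a target sound level from the inputs the adjustment module consults.

    Assumed priority: an active device (e.g. a telephone in use) ducks the
    volume; otherwise a level stored in the profile information wins;
    otherwise the current level is kept unchanged.
    """
    if device_active:
        # Reduce the level while the device is in use, but never raise it.
        return min(current_level, duck_level)
    if profile_level is not None:
        # A predetermined level from the profile information takes effect.
        return profile_level
    return current_level
```

For example, `choose_sound_level(50, profile_level=35)` returns 35, while `choose_sound_level(50, device_active=True)` ducks the output to 10.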
FIG. 4 illustrates a simplified record 400 that corresponds to a profile that describes a specific content. In one embodiment, the record 400 is stored within the storage module 330 and utilized within the system 300. In one embodiment, the record 400 includes a content identification field 405, a location within content field 410, a sound level field 415, a content type field 420, and a user identification field 425. - In one embodiment, the
content identification field 405 identifies a specific content associated with the record 400. In one example, the content's name is utilized as a label for the content identification field 405. - In one embodiment, the location within
content field 410 is associated with a specific location within the content. In one embodiment, the specific location within the content may be identified by a time stamp. - In one embodiment, the
sound level field 415 identifies the sound level that is desired for the content that is associated with the record 400. In one embodiment, a single sound level is assigned to the content. In another embodiment, different sound levels are assigned to different portions of the content as described by the location within content field 410. - In one embodiment, the
content type field 420 identifies the type of content associated with the content identified by the record 400. In one embodiment, the types of content include music, television, commercials, talk radio, and the like. In another embodiment, within the music category, the types of content may be further distinguished by types of music such as rock, classical, jazz, heavy metal, and the like. - In one embodiment, the
user identification field 425 identifies a user associated with the record 400. In one example, a user's name is utilized as a label for the user identification field 425. - The flow diagrams as depicted in
FIGS. 5 and 6 are one embodiment of the methods and apparatuses for automatically controlling the sound level based on the content. The blocks within the flow diagrams can be performed in a different sequence without departing from the spirit of the methods and apparatuses for automatically controlling the sound level based on the content. Further, blocks can be deleted, added, or combined without departing from the spirit of the methods and apparatuses for automatically controlling the sound level based on the content. - The flow diagram in
FIG. 5 illustrates changing sound levels for content according to one embodiment of the invention. - In
Block 505, content is identified. In one embodiment, specific content such as a television show that is being utilized is detected and identified. - In
Block 510, content type associated with the identified content is also identified. In one embodiment, the types of content include music, television, commercials, talk radio, and the like. In another embodiment, within the music category, the types of content may be further distinguished by types of music such as rock, classical, jazz, heavy metal, and the like. In one embodiment, the detection of the content type is performed through detection of information associated with the identified content such as metadata, profile information, and the like. - In
Block 515, preferences are detected that are associated with the identified content. In one embodiment, the preferences are stored within a profile as exemplified within record 400. In one embodiment, the preferences include sound level preferences for the entire content or portions of the content, association with particular users, and the content type of the content. - In
Block 520, a match is performed between the identified content within the Block 505 and the preferences as detected within the Block 515. - If there is no match, then a classification preference is detected within
Block 525. In one embodiment, the classification preference includes sound level preferences for a specific content type. - In
Block 530, the sound level for the content is set at a default sound level. If the content type as detected within the Block 510 matches a sound level preference for the specific content type within the Block 530, then the content is played at the predetermined sound level preference. In another embodiment, if the content type is not sufficiently identified within the Block 510, then the identified content is played at a default sound level. - If there is a match within the
Block 520, then the content is played at a predetermined sound level in Block 535. In one embodiment, each portion of the content is played at the predetermined sound level. For instance, if different portions of the content have different sound levels, then each portion of the content is played at the corresponding sound level. - In another embodiment, each of the content types is associated with a unique sound level. Based on the content type detected within the
Block 510, the identified content is played at the preferred sound level for the detected content type. - In
Block 540, device(s) are detected. In one embodiment, the devices may include a telephone, a computer, a video device, and an audio device. - In
Block 545, if a signal from the detected device is not detected, then devices are continually detected within the Block 540. - In
Block 545, if a signal from the detected device is detected, then the sound level of the identified content is changed. In one embodiment, the signal may indicate an incoming telephone call through a ring indicator, a telephone connection, a telephone disconnection, initiating sound through a video device or audio device, or terminating sound through a video device or audio device. - In one embodiment, changing the sound level may either increase or decrease the sound level relative to the prior sound level. For example, if the signal indicates a telephone connection, then the new sound level may be decreased relative to the prior sound level. Similarly, if the signal indicates a telephone disconnection, then the new sound level may be increased relative to the prior sound level.
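The flow of FIG. 5 can be sketched in Python as follows. The dictionary shapes, the default level, and the halve-on-connect behavior are assumptions for illustration; the description only states that the level decreases on a telephone connection and increases on a disconnection.

```python
DEFAULT_LEVEL = 50  # assumed default sound level (Block 530)

def resolve_playback_level(content_id, content_type, content_prefs, type_prefs):
    """Blocks 505-535: pick a playback level for the identified content.

    content_prefs maps content identifiers to preferred levels (Block 515);
    type_prefs maps content types to classification preferences (Block 525).
    """
    if content_id in content_prefs:        # match found in Block 520
        return content_prefs[content_id]   # play at the predetermined level
    if content_type in type_prefs:         # classification preference applies
        return type_prefs[content_type]
    return DEFAULT_LEVEL                   # content type not identified

def on_device_signal(signal, current_level, prior_level=None):
    """Blocks 540-545: change the level when a device signal is detected.

    Returns (new_level, remembered_prior_level).
    """
    if signal in ("ring", "telephone_connected"):
        return current_level // 2, current_level   # decrease, remember prior
    if signal == "telephone_disconnected" and prior_level is not None:
        return prior_level, None                   # restore the prior level
    return current_level, prior_level
```

With `content_prefs = {"Evening News": 40}`, `resolve_playback_level("Evening News", "television", content_prefs, {})` returns 40, while an unknown show of an unknown type falls back to the assumed default of 50.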
- The flow diagram in
FIG. 6 illustrates capturing sound levels according to one embodiment of the invention. - In
Block 610, a user is detected. In one embodiment, the identity of the user is detected through a logon process initiated by the user. In one embodiment, the user is associated with a profile as illustrated as an exemplary record 400 within FIG. 4. - In
Block 620, content utilized by the detected user is also detected. In one embodiment, specific content such as a television show that is being viewed by the user is detected and identified. In another embodiment, the current location of the content being utilized is also identified. For example, the current location or time of the television show is identified and updated as the user watches the television show. Further, the television device utilized to view the television show is also detected. - In
Block 630, the sound level of the content utilized is captured. In one embodiment, a change in the sound level is captured. Further, the location of the content is noted where the change in the sound level occurs. In one embodiment, the change in the sound level may be detected through a change in a volume control knob or other input. - In
Block 640, the sound level is stored within profile information that corresponds with the content and the user. In one embodiment, the location of the content is also stored with the corresponding sound level information. - In
Block 650, an average sound level is stored for the identified content. In one embodiment, the average sound level is calculated over the course of playing the content. In one embodiment, the average sound level is stored for future use with this identified content. Further, the average sound level can also be utilized and averaged for the content type of the identified content. - In another embodiment, a most common sound level is stored for the identified content. In one embodiment, the most common sound level is the sound level that occurs for the greatest amount of time over the course of playing the content. In one embodiment, the most common sound level is stored for future use with this identified content. Further, the most common sound level can also be utilized and averaged for the content type of the identified content.
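The average and most-common calculations described above could be sketched as follows. The (duration, level) segment representation is an assumption for illustration; the most common level is taken as the one held for the greatest total time, as the text describes.

```python
from collections import defaultdict

def summarize_sound_levels(segments):
    """Summarize one playback session's captured sound levels (Blocks 630-650).

    segments is a list of (duration_seconds, level) pairs built from the
    captured volume changes. Returns (average, most_common).
    """
    total_time = sum(duration for duration, _ in segments)
    # Duration-weighted average level over the course of playing the content.
    average = sum(duration * level for duration, level in segments) / total_time
    # Total time spent at each distinct level.
    time_at_level = defaultdict(float)
    for duration, level in segments:
        time_at_level[level] += duration
    # The level that occurs for the greatest amount of time.
    most_common = max(time_at_level, key=time_at_level.get)
    return average, most_common

# A session played mostly at level 40 with a brief bump to 70:
avg, common = summarize_sound_levels([(600, 40), (120, 70), (300, 40)])
```

Here `common` is 40 and `avg` is roughly 43.5; both values could then be stored in the profile record for this content and averaged into the statistics for its content type.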
- The foregoing descriptions of specific embodiments of the invention have been presented for purposes of illustration and description. For example, the invention is described within the context of automatically controlling the sound level based on the content as merely one embodiment of the invention. The invention may be applied to a variety of other applications.
- They are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed, and naturally many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents.
Claims (27)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/895,723 US20090062943A1 (en) | 2007-08-27 | 2007-08-27 | Methods and apparatus for automatically controlling the sound level based on the content |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090062943A1 true US20090062943A1 (en) | 2009-03-05 |
Family
ID=40408722
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060233389A1 (en) * | 2003-08-27 | 2006-10-19 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection and characterization |
US20060280312A1 (en) * | 2003-08-27 | 2006-12-14 | Mao Xiao D | Methods and apparatus for capturing audio signals based on a visual image |
US20070223732A1 (en) * | 2003-08-27 | 2007-09-27 | Mao Xiao D | Methods and apparatuses for adjusting a visual image based on an audio signal |
US20070260340A1 (en) * | 2006-05-04 | 2007-11-08 | Sony Computer Entertainment Inc. | Ultra small microphone array |
US7783061B2 (en) | 2003-08-27 | 2010-08-24 | Sony Computer Entertainment Inc. | Methods and apparatus for the targeted sound detection |
US7803050B2 (en) | 2002-07-27 | 2010-09-28 | Sony Computer Entertainment Inc. | Tracking device with sound emitter for use in obtaining information for controlling game program execution |
US20110014981A1 (en) * | 2006-05-08 | 2011-01-20 | Sony Computer Entertainment Inc. | Tracking device with sound emitter for use in obtaining information for controlling game program execution |
WO2011078866A1 (en) * | 2009-12-23 | 2011-06-30 | Intel Corporation | Methods and apparatus for automatically obtaining and synchronizing contextual content and applications |
US8160269B2 (en) | 2003-08-27 | 2012-04-17 | Sony Computer Entertainment Inc. | Methods and apparatuses for adjusting a listening area for capturing sounds |
US8233642B2 (en) | 2003-08-27 | 2012-07-31 | Sony Computer Entertainment Inc. | Methods and apparatuses for capturing an audio signal based on a location of the signal |
US20140350705A1 (en) * | 2013-05-24 | 2014-11-27 | Hon Hai Precision Industry Co., Ltd. | Music playing system and method |
US8947347B2 (en) | 2003-08-27 | 2015-02-03 | Sony Computer Entertainment Inc. | Controlling actions in a video game unit |
US20150066175A1 (en) * | 2013-08-29 | 2015-03-05 | Avid Technology, Inc. | Audio processing in multiple latency domains |
US20150073575A1 (en) * | 2013-09-09 | 2015-03-12 | George Sarkis | Combination multimedia, brain wave, and subliminal affirmation media player and recorder |
US9174119B2 (en) | 2002-07-27 | 2015-11-03 | Sony Computer Entertainement America, LLC | Controller for providing inputs to control execution of a program when inputs are combined |
US9380383B2 (en) * | 2013-09-06 | 2016-06-28 | Gracenote, Inc. | Modifying playback of content using pre-processed profile information |
FR3049754A1 (en) * | 2016-03-31 | 2017-10-06 | Orange | METHOD OF ADAPTING SOUND LEVEL OF RESTITUTION OF CONTENT, COMPUTER PROGRAM AND CORRESPONDING RESIDENTIAL GATEWAY. |
US20170302241A1 (en) * | 2012-11-13 | 2017-10-19 | Snell Limited | Management of broadcast audio loudness |
US10171054B1 (en) | 2017-08-24 | 2019-01-01 | International Business Machines Corporation | Audio adjustment based on dynamic and static rules |
Citations (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4624012A (en) * | 1982-05-06 | 1986-11-18 | Texas Instruments Incorporated | Method and apparatus for converting voice characteristics of synthesized speech |
US4963858A (en) * | 1987-09-08 | 1990-10-16 | Chien Fong K | Changeable input ratio mouse |
US5018736A (en) * | 1989-10-27 | 1991-05-28 | Wakeman & Deforrest Corporation | Interactive game system and method |
US5113449A (en) * | 1982-08-16 | 1992-05-12 | Texas Instruments Incorporated | Method and apparatus for altering voice characteristics of synthesized speech |
US5128671A (en) * | 1990-04-12 | 1992-07-07 | Ltv Aerospace And Defense Company | Control device having multiple degrees of freedom |
US5144114A (en) * | 1989-09-15 | 1992-09-01 | Ncr Corporation | Volume control apparatus |
US5181181A (en) * | 1990-09-27 | 1993-01-19 | Triton Technologies, Inc. | Computer apparatus input device for three-dimensional information |
US5214615A (en) * | 1990-02-26 | 1993-05-25 | Will Bauer | Three-dimensional displacement of a body with computer interface |
US5227985A (en) * | 1991-08-19 | 1993-07-13 | University Of Maryland | Computer vision system for position monitoring in three dimensions using non-coplanar light sources attached to a monitored object |
US5262777A (en) * | 1991-11-16 | 1993-11-16 | Sri International | Device for generating multidimensional input signals to a computer |
US5296871A (en) * | 1992-07-27 | 1994-03-22 | Paley W Bradford | Three-dimensional mouse with tactile feedback |
US5327521A (en) * | 1992-03-02 | 1994-07-05 | The Walt Disney Company | Speech transformation system |
US5991693A (en) * | 1996-02-23 | 1999-11-23 | Mindcraft Technologies, Inc. | Wireless I/O apparatus and method of computer-assisted instruction |
US5993314A (en) * | 1997-02-10 | 1999-11-30 | Stadium Games, Ltd. | Method and apparatus for interactive audience participation by audio command |
US6188442B1 (en) * | 1997-08-01 | 2001-02-13 | International Business Machines Corporation | Multiviewer display system for television monitors |
US20020048376A1 (en) * | 2000-08-24 | 2002-04-25 | Masakazu Ukita | Signal processing apparatus and signal processing method |
US20020159608A1 (en) * | 2001-02-27 | 2002-10-31 | International Business Machines Corporation | Audio device characterization for accurate predictable volume control |
US20030158737A1 (en) * | 2002-02-15 | 2003-08-21 | Csicsatka Tibor George | Method and apparatus for incorporating additional audio information into audio data file identifying information |
US20030195009A1 (en) * | 2002-04-12 | 2003-10-16 | Hitoshi Endo | Information delivering method, information delivering device, information delivery program, and computer-readable recording medium containing the information delivery program recorded thereon |
US20030228138A1 (en) * | 1997-11-21 | 2003-12-11 | Jvc Victor Company Of Japan, Ltd. | Encoding apparatus of audio signal, audio disc and disc reproducing apparatus |
US20040037183A1 (en) * | 2002-08-21 | 2004-02-26 | Yamaha Corporation | Sound recording/reproducing method and apparatus |
US20040046736A1 (en) * | 1997-08-22 | 2004-03-11 | Pryor Timothy R. | Novel man machine interfaces and applications |
US20040199420A1 (en) * | 2003-04-03 | 2004-10-07 | International Business Machines Corporation | Apparatus and method for verifying audio output at a client device |
US20040204155A1 (en) * | 2002-05-21 | 2004-10-14 | Shary Nassimi | Non-rechargeable wireless headset |
US20040207597A1 (en) * | 2002-07-27 | 2004-10-21 | Sony Computer Entertainment Inc. | Method and apparatus for light input device |
US20040255321A1 (en) * | 2002-06-20 | 2004-12-16 | Bellsouth Intellectual Property Corporation | Content blocking |
US6833865B1 (en) * | 1998-09-01 | 2004-12-21 | Virage, Inc. | Embedded metadata engines in digital capture devices |
US20050020223A1 (en) * | 2001-02-20 | 2005-01-27 | Ellis Michael D. | Enhanced radio systems and methods |
US20050047611A1 (en) * | 2003-08-27 | 2005-03-03 | Xiadong Mao | Audio input system |
US20050059488A1 (en) * | 2003-09-15 | 2005-03-17 | Sony Computer Entertainment Inc. | Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion |
US20050078840A1 (en) * | 2003-08-25 | 2005-04-14 | Riedl Steven E. | Methods and systems for determining audio loudness levels in programming |
US20050097618A1 (en) * | 2003-11-04 | 2005-05-05 | Universal Electronics Inc. | System and method for saving and recalling state data for media and home appliances |
US20050120034A1 (en) * | 1999-09-16 | 2005-06-02 | Sezan Muhammed I. | Audiovisual information management system with advertising |
US20050126369A1 (en) * | 2003-12-12 | 2005-06-16 | Nokia Corporation | Automatic extraction of musical portions of an audio stream |
US20050226431A1 (en) * | 2004-04-07 | 2005-10-13 | Xiadong Mao | Method and apparatus to detect and remove audio disturbances |
US20050267750A1 (en) * | 2004-05-27 | 2005-12-01 | Anonymous Media, Llc | Media usage monitoring and measurement system and method |
US7038661B2 (en) * | 2003-06-13 | 2006-05-02 | Microsoft Corporation | Pointing device and cursor for use in intelligent computing environments |
US20060139322A1 (en) * | 2002-07-27 | 2006-06-29 | Sony Computer Entertainment America Inc. | Man-machine interface using a deformable device |
US7079807B1 (en) * | 1998-12-11 | 2006-07-18 | Daum Daniel T | Substantially integrated digital network and broadcast radio method and apparatus |
US20060204012A1 (en) * | 2002-07-27 | 2006-09-14 | Sony Computer Entertainment Inc. | Selective sound source listening in conjunction with computer interactive processing |
US20060233389A1 (en) * | 2003-08-27 | 2006-10-19 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection and characterization |
US20060239471A1 (en) * | 2003-08-27 | 2006-10-26 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection and characterization |
US20060252541A1 (en) * | 2002-07-27 | 2006-11-09 | Sony Computer Entertainment Inc. | Method and system for applying gearing effects to visual tracking |
US20060252477A1 (en) * | 2002-07-27 | 2006-11-09 | Sony Computer Entertainment Inc. | Method and system for applying gearing effects to mutlti-channel mixed input |
US20060252475A1 (en) * | 2002-07-27 | 2006-11-09 | Zalewski Gary M | Method and system for applying gearing effects to inertial tracking |
US20060252474A1 (en) * | 2002-07-27 | 2006-11-09 | Zalewski Gary M | Method and system for applying gearing effects to acoustical tracking |
US20060256081A1 (en) * | 2002-07-27 | 2006-11-16 | Sony Computer Entertainment America Inc. | Scheme for detecting and tracking user manipulation of a game controller body |
US20060264260A1 (en) * | 2002-07-27 | 2006-11-23 | Sony Computer Entertainment Inc. | Detectable and trackable hand-held controller |
US20060264258A1 (en) * | 2002-07-27 | 2006-11-23 | Zalewski Gary M | Multi-input game control mixer |
US20060264259A1 (en) * | 2002-07-27 | 2006-11-23 | Zalewski Gary M | System for tracking user manipulations within an environment |
US20060269072A1 (en) * | 2003-08-27 | 2006-11-30 | Mao Xiao D | Methods and apparatuses for adjusting a listening area for capturing sounds |
US20060269073A1 (en) * | 2003-08-27 | 2006-11-30 | Mao Xiao D | Methods and apparatuses for capturing an audio signal based on a location of the signal |
US20060274032A1 (en) * | 2002-07-27 | 2006-12-07 | Xiadong Mao | Tracking device for use in obtaining information for controlling game program execution |
US20060274911A1 (en) * | 2002-07-27 | 2006-12-07 | Xiadong Mao | Tracking device with sound emitter for use in obtaining information for controlling game program execution |
US20060277571A1 (en) * | 2002-07-27 | 2006-12-07 | Sony Computer Entertainment Inc. | Computer image and audio processing of intensity and input devices for interfacing with a computer program |
US20060280312A1 (en) * | 2003-08-27 | 2006-12-14 | Mao Xiao D | Methods and apparatus for capturing audio signals based on a visual image |
US20060282873A1 (en) * | 2002-07-27 | 2006-12-14 | Sony Computer Entertainment Inc. | Hand-held controller having detectable elements for tracking purposes |
US20060287085A1 (en) * | 2002-07-27 | 2006-12-21 | Xiadong Mao | Inertially trackable hand-held controller |
US20060287086A1 (en) * | 2002-07-27 | 2006-12-21 | Sony Computer Entertainment America Inc. | Scheme for translating movements of a hand-held controller into inputs for a system |
US20060287087A1 (en) * | 2002-07-27 | 2006-12-21 | Sony Computer Entertainment America Inc. | Method for mapping movements of a hand-held controller to game commands |
US7181297B1 (en) * | 1999-09-28 | 2007-02-20 | Sound Id | System and method for delivering customized audio data |
US20070070256A1 (en) * | 2005-07-29 | 2007-03-29 | Sony Corporation | Broadcast receiving device and broadcast receiving method |
US20070083380A1 (en) * | 2005-10-10 | 2007-04-12 | Yahoo! Inc. | Data container and set of metadata for association with a media item and composite media items |
US20070143482A1 (en) * | 2005-12-20 | 2007-06-21 | Zancho William F | System and method for handling multiple user preferences in a domain |
US20070177743A1 (en) * | 2004-04-08 | 2007-08-02 | Koninklijke Philips Electronics, N.V. | Audio level control |
US20070199040A1 (en) * | 2006-02-23 | 2007-08-23 | Lawrence Kates | Multi-channel parallel digital video recorder |
US20070223732A1 (en) * | 2003-08-27 | 2007-09-27 | Mao Xiao D | Methods and apparatuses for adjusting a visual image based on an audio signal |
US20070261077A1 (en) * | 2006-05-08 | 2007-11-08 | Gary Zalewski | Using audio/visual environment to select ads on game platform |
US20070260517A1 (en) * | 2006-05-08 | 2007-11-08 | Gary Zalewski | Profile detection |
US20070260340A1 (en) * | 2006-05-04 | 2007-11-08 | Sony Computer Entertainment Inc. | Ultra small microphone array |
US20070265075A1 (en) * | 2006-05-10 | 2007-11-15 | Sony Computer Entertainment America Inc. | Attachable structure for use with hand-held controller having tracking ability |
US20070298882A1 (en) * | 2003-09-15 | 2007-12-27 | Sony Computer Entertainment Inc. | Methods and systems for enabling direction detection when interfacing with a computer program |
US20080001714A1 (en) * | 2004-12-08 | 2008-01-03 | Fujitsu Limited | Tag information selecting method, electronic apparatus and computer-readable storage medium |
US20080013745A1 (en) * | 2006-07-14 | 2008-01-17 | Broadcom Corporation | Automatic volume control for audio signals |
US20080013751A1 (en) * | 2006-07-17 | 2008-01-17 | Per Hiselius | Volume dependent audio frequency gain profile |
US20080045140A1 (en) * | 2006-08-18 | 2008-02-21 | Xerox Corporation | Audio system employing multiple mobile devices in concert |
US20080058973A1 (en) * | 2006-08-29 | 2008-03-06 | Tomohiro Hirata | Music playback system and music playback machine |
US20080077261A1 (en) * | 2006-08-29 | 2008-03-27 | Motorola, Inc. | Method and system for sharing an audio experience |
US20080089525A1 (en) * | 2006-10-11 | 2008-04-17 | Kauko Jarmo | Mobile communication terminal and method therefor |
US20080096654A1 (en) * | 2006-10-20 | 2008-04-24 | Sony Computer Entertainment America Inc. | Game control using three-dimensional motions of controller |
US20080098448A1 (en) * | 2006-10-19 | 2008-04-24 | Sony Computer Entertainment America Inc. | Controller configured to track user's level of anxiety and other mental and physical attributes |
US20080096657A1 (en) * | 2006-10-20 | 2008-04-24 | Sony Computer Entertainment America Inc. | Method for aiming and shooting using motion sensing controller |
US20080101638A1 (en) * | 2006-10-25 | 2008-05-01 | Ziller Carl R | Portable electronic device and personal hands-free accessory with audio disable |
- 2007-08-27 US US11/895,723 patent/US20090062943A1/en not_active Abandoned
Patent Citations (102)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4624012A (en) * | 1982-05-06 | 1986-11-18 | Texas Instruments Incorporated | Method and apparatus for converting voice characteristics of synthesized speech |
US5113449A (en) * | 1982-08-16 | 1992-05-12 | Texas Instruments Incorporated | Method and apparatus for altering voice characteristics of synthesized speech |
US4963858A (en) * | 1987-09-08 | 1990-10-16 | Chien Fong K | Changeable input ratio mouse |
US5144114A (en) * | 1989-09-15 | 1992-09-01 | Ncr Corporation | Volume control apparatus |
US5018736A (en) * | 1989-10-27 | 1991-05-28 | Wakeman & Deforrest Corporation | Interactive game system and method |
US5214615A (en) * | 1990-02-26 | 1993-05-25 | Will Bauer | Three-dimensional displacement of a body with computer interface |
US5128671A (en) * | 1990-04-12 | 1992-07-07 | Ltv Aerospace And Defense Company | Control device having multiple degrees of freedom |
US5181181A (en) * | 1990-09-27 | 1993-01-19 | Triton Technologies, Inc. | Computer apparatus input device for three-dimensional information |
US5227985A (en) * | 1991-08-19 | 1993-07-13 | University Of Maryland | Computer vision system for position monitoring in three dimensions using non-coplanar light sources attached to a monitored object |
US5262777A (en) * | 1991-11-16 | 1993-11-16 | Sri International | Device for generating multidimensional input signals to a computer |
US5327521A (en) * | 1992-03-02 | 1994-07-05 | The Walt Disney Company | Speech transformation system |
US5296871A (en) * | 1992-07-27 | 1994-03-22 | Paley W Bradford | Three-dimensional mouse with tactile feedback |
US5991693A (en) * | 1996-02-23 | 1999-11-23 | Mindcraft Technologies, Inc. | Wireless I/O apparatus and method of computer-assisted instruction |
US5993314A (en) * | 1997-02-10 | 1999-11-30 | Stadium Games, Ltd. | Method and apparatus for interactive audience participation by audio command |
US6188442B1 (en) * | 1997-08-01 | 2001-02-13 | International Business Machines Corporation | Multiviewer display system for television monitors |
US20040046736A1 (en) * | 1997-08-22 | 2004-03-11 | Pryor Timothy R. | Novel man machine interfaces and applications |
US20030228138A1 (en) * | 1997-11-21 | 2003-12-11 | Jvc Victor Company Of Japan, Ltd. | Encoding apparatus of audio signal, audio disc and disc reproducing apparatus |
US6833865B1 (en) * | 1998-09-01 | 2004-12-21 | Virage, Inc. | Embedded metadata engines in digital capture devices |
US7079807B1 (en) * | 1998-12-11 | 2006-07-18 | Daum Daniel T | Substantially integrated digital network and broadcast radio method and apparatus |
US20050120034A1 (en) * | 1999-09-16 | 2005-06-02 | Sezan Muhammed I. | Audiovisual information management system with advertising |
US7181297B1 (en) * | 1999-09-28 | 2007-02-20 | Sound Id | System and method for delivering customized audio data |
US20020048376A1 (en) * | 2000-08-24 | 2002-04-25 | Masakazu Ukita | Signal processing apparatus and signal processing method |
US7343141B2 (en) * | 2001-02-20 | 2008-03-11 | Ellis Michael D | Concurrent content capturing radio systems and methods |
US20050020223A1 (en) * | 2001-02-20 | 2005-01-27 | Ellis Michael D. | Enhanced radio systems and methods |
US20020159608A1 (en) * | 2001-02-27 | 2002-10-31 | International Business Machines Corporation | Audio device characterization for accurate predictable volume control |
US7471988B2 (en) * | 2001-09-11 | 2008-12-30 | Thomson Licensing | Method and apparatus for automatic equalization mode activation |
US20030158737A1 (en) * | 2002-02-15 | 2003-08-21 | Csicsatka Tibor George | Method and apparatus for incorporating additional audio information into audio data file identifying information |
US20030195009A1 (en) * | 2002-04-12 | 2003-10-16 | Hitoshi Endo | Information delivering method, information delivering device, information delivery program, and computer-readable recording medium containing the information delivery program recorded thereon |
US20040204155A1 (en) * | 2002-05-21 | 2004-10-14 | Shary Nassimi | Non-rechargeable wireless headset |
US20040255321A1 (en) * | 2002-06-20 | 2004-12-16 | Bellsouth Intellectual Property Corporation | Content blocking |
US20060264259A1 (en) * | 2002-07-27 | 2006-11-23 | Zalewski Gary M | System for tracking user manipulations within an environment |
US20060252474A1 (en) * | 2002-07-27 | 2006-11-09 | Zalewski Gary M | Method and system for applying gearing effects to acoustical tracking |
US20060274032A1 (en) * | 2002-07-27 | 2006-12-07 | Xiadong Mao | Tracking device for use in obtaining information for controlling game program execution |
US20060277571A1 (en) * | 2002-07-27 | 2006-12-07 | Sony Computer Entertainment Inc. | Computer image and audio processing of intensity and input devices for interfacing with a computer program |
US7918733B2 (en) * | 2002-07-27 | 2011-04-05 | Sony Computer Entertainment America Inc. | Multi-input game control mixer |
US20040207597A1 (en) * | 2002-07-27 | 2004-10-21 | Sony Computer Entertainment Inc. | Method and apparatus for light input device |
US20060282873A1 (en) * | 2002-07-27 | 2006-12-14 | Sony Computer Entertainment Inc. | Hand-held controller having detectable elements for tracking purposes |
US20060139322A1 (en) * | 2002-07-27 | 2006-06-29 | Sony Computer Entertainment America Inc. | Man-machine interface using a deformable device |
US20060287087A1 (en) * | 2002-07-27 | 2006-12-21 | Sony Computer Entertainment America Inc. | Method for mapping movements of a hand-held controller to game commands |
US7102615B2 (en) * | 2002-07-27 | 2006-09-05 | Sony Computer Entertainment Inc. | Man-machine interface using a deformable device |
US20060204012A1 (en) * | 2002-07-27 | 2006-09-14 | Sony Computer Entertainment Inc. | Selective sound source listening in conjunction with computer interactive processing |
US20060287086A1 (en) * | 2002-07-27 | 2006-12-21 | Sony Computer Entertainment America Inc. | Scheme for translating movements of a hand-held controller into inputs for a system |
US20060287085A1 (en) * | 2002-07-27 | 2006-12-21 | Xiadong Mao | Inertially trackable hand-held controller |
US20060252541A1 (en) * | 2002-07-27 | 2006-11-09 | Sony Computer Entertainment Inc. | Method and system for applying gearing effects to visual tracking |
US20060252477A1 (en) * | 2002-07-27 | 2006-11-09 | Sony Computer Entertainment Inc. | Method and system for applying gearing effects to multi-channel mixed input |
US20060252475A1 (en) * | 2002-07-27 | 2006-11-09 | Zalewski Gary M | Method and system for applying gearing effects to inertial tracking |
US20060274911A1 (en) * | 2002-07-27 | 2006-12-07 | Xiadong Mao | Tracking device with sound emitter for use in obtaining information for controlling game program execution |
US20060256081A1 (en) * | 2002-07-27 | 2006-11-16 | Sony Computer Entertainment America Inc. | Scheme for detecting and tracking user manipulation of a game controller body |
US20060264260A1 (en) * | 2002-07-27 | 2006-11-23 | Sony Computer Entertainment Inc. | Detectable and trackable hand-held controller |
US20060264258A1 (en) * | 2002-07-27 | 2006-11-23 | Zalewski Gary M | Multi-input game control mixer |
US20040037183A1 (en) * | 2002-08-21 | 2004-02-26 | Yamaha Corporation | Sound recording/reproducing method and apparatus |
US20040199420A1 (en) * | 2003-04-03 | 2004-10-07 | International Business Machines Corporation | Apparatus and method for verifying audio output at a client device |
US7038661B2 (en) * | 2003-06-13 | 2006-05-02 | Microsoft Corporation | Pointing device and cursor for use in intelligent computing environments |
US20050078840A1 (en) * | 2003-08-25 | 2005-04-14 | Riedl Steven E. | Methods and systems for determining audio loudness levels in programming |
US20060239471A1 (en) * | 2003-08-27 | 2006-10-26 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection and characterization |
US20060280312A1 (en) * | 2003-08-27 | 2006-12-14 | Mao Xiao D | Methods and apparatus for capturing audio signals based on a visual image |
US20060269072A1 (en) * | 2003-08-27 | 2006-11-30 | Mao Xiao D | Methods and apparatuses for adjusting a listening area for capturing sounds |
US20050047611A1 (en) * | 2003-08-27 | 2005-03-03 | Xiadong Mao | Audio input system |
US20060233389A1 (en) * | 2003-08-27 | 2006-10-19 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection and characterization |
US20070223732A1 (en) * | 2003-08-27 | 2007-09-27 | Mao Xiao D | Methods and apparatuses for adjusting a visual image based on an audio signal |
US20060269073A1 (en) * | 2003-08-27 | 2006-11-30 | Mao Xiao D | Methods and apparatuses for capturing an audio signal based on a location of the signal |
US20070298882A1 (en) * | 2003-09-15 | 2007-12-27 | Sony Computer Entertainment Inc. | Methods and systems for enabling direction detection when interfacing with a computer program |
US20050059488A1 (en) * | 2003-09-15 | 2005-03-17 | Sony Computer Entertainment Inc. | Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion |
US7414596B2 (en) * | 2003-09-30 | 2008-08-19 | Canon Kabushiki Kaisha | Data conversion method and apparatus, and orientation measurement apparatus |
US7489299B2 (en) * | 2003-10-23 | 2009-02-10 | Hillcrest Laboratories, Inc. | User interface devices and methods employing accelerometers |
US20050097618A1 (en) * | 2003-11-04 | 2005-05-05 | Universal Electronics Inc. | System and method for saving and recalling state data for media and home appliances |
US20050126369A1 (en) * | 2003-12-12 | 2005-06-16 | Nokia Corporation | Automatic extraction of musical portions of an audio stream |
US20050226431A1 (en) * | 2004-04-07 | 2005-10-13 | Xiadong Mao | Method and apparatus to detect and remove audio disturbances |
US20070177743A1 (en) * | 2004-04-08 | 2007-08-02 | Koninklijke Philips Electronics, N.V. | Audio level control |
US20050267750A1 (en) * | 2004-05-27 | 2005-12-01 | Anonymous Media, Llc | Media usage monitoring and measurement system and method |
US7773755B2 (en) * | 2004-08-27 | 2010-08-10 | Sony Corporation | Reproduction apparatus and reproduction system |
US20080001714A1 (en) * | 2004-12-08 | 2008-01-03 | Fujitsu Limited | Tag information selecting method, electronic apparatus and computer-readable storage medium |
US20080152165A1 (en) * | 2005-07-01 | 2008-06-26 | Luca Zacchi | Ad-hoc proximity multi-speaker entertainment |
US20070070256A1 (en) * | 2005-07-29 | 2007-03-29 | Sony Corporation | Broadcast receiving device and broadcast receiving method |
US20070083380A1 (en) * | 2005-10-10 | 2007-04-12 | Yahoo! Inc. | Data container and set of metadata for association with a media item and composite media items |
US8027487B2 (en) * | 2005-12-02 | 2011-09-27 | Samsung Electronics Co., Ltd. | Method of setting equalizer for audio file and method of reproducing audio file |
US7678983B2 (en) * | 2005-12-09 | 2010-03-16 | Sony Corporation | Music edit device, music edit information creating method, and recording medium where music edit information is recorded |
US20070143482A1 (en) * | 2005-12-20 | 2007-06-21 | Zancho William F | System and method for handling multiple user preferences in a domain |
US20090016540A1 (en) * | 2006-01-25 | 2009-01-15 | Tc Electronics A/S | Auditory perception controlling device and method |
US20070199040A1 (en) * | 2006-02-23 | 2007-08-23 | Lawrence Kates | Multi-channel parallel digital video recorder |
US20070260340A1 (en) * | 2006-05-04 | 2007-11-08 | Sony Computer Entertainment Inc. | Ultra small microphone array |
US20070260517A1 (en) * | 2006-05-08 | 2007-11-08 | Gary Zalewski | Profile detection |
US20070261077A1 (en) * | 2006-05-08 | 2007-11-08 | Gary Zalewski | Using audio/visual environment to select ads on game platform |
US20070265075A1 (en) * | 2006-05-10 | 2007-11-15 | Sony Computer Entertainment America Inc. | Attachable structure for use with hand-held controller having tracking ability |
US20080013745A1 (en) * | 2006-07-14 | 2008-01-17 | Broadcom Corporation | Automatic volume control for audio signals |
US20080013751A1 (en) * | 2006-07-17 | 2008-01-17 | Per Hiselius | Volume dependent audio frequency gain profile |
US20080045140A1 (en) * | 2006-08-18 | 2008-02-21 | Xerox Corporation | Audio system employing multiple mobile devices in concert |
US20080058973A1 (en) * | 2006-08-29 | 2008-03-06 | Tomohiro Hirata | Music playback system and music playback machine |
US20080077261A1 (en) * | 2006-08-29 | 2008-03-27 | Motorola, Inc. | Method and system for sharing an audio experience |
US20080100825A1 (en) * | 2006-09-28 | 2008-05-01 | Sony Computer Entertainment America Inc. | Mapping movements of a hand-held controller to the two-dimensional image plane of a display screen |
US20080089525A1 (en) * | 2006-10-11 | 2008-04-17 | Kauko Jarmo | Mobile communication terminal and method therefor |
US20080098448A1 (en) * | 2006-10-19 | 2008-04-24 | Sony Computer Entertainment America Inc. | Controller configured to track user's level of anxiety and other mental and physical attributes |
US20080096657A1 (en) * | 2006-10-20 | 2008-04-24 | Sony Computer Entertainment America Inc. | Method for aiming and shooting using motion sensing controller |
US20080096654A1 (en) * | 2006-10-20 | 2008-04-24 | Sony Computer Entertainment America Inc. | Game control using three-dimensional motions of controller |
US20080101638A1 (en) * | 2006-10-25 | 2008-05-01 | Ziller Carl R | Portable electronic device and personal hands-free accessory with audio disable |
US8375416B2 (en) * | 2006-10-27 | 2013-02-12 | Starz Entertainment, Llc | Media build for multi-channel distribution |
US20080120115A1 (en) * | 2006-11-16 | 2008-05-22 | Xiao Dong Mao | Methods and apparatuses for dynamically adjusting an audio signal based on a parameter |
US20080154404A1 (en) * | 2006-12-26 | 2008-06-26 | Sandisk Il Ltd. | Disposable media player |
US20080162668A1 (en) * | 2006-12-29 | 2008-07-03 | John David Miller | Method and apparatus for mutually-shared media experiences |
US20080222546A1 (en) * | 2007-03-08 | 2008-09-11 | Mudd Dennis M | System and method for personalizing playback content through interaction with a playback device |
US20080250319A1 (en) * | 2007-04-05 | 2008-10-09 | Research In Motion Limited | System and method for determining media playback behaviour in a media application for a portable media device |
US20090047993A1 (en) * | 2007-08-14 | 2009-02-19 | Vasa Yojak H | Method of using music metadata to save music listening preferences |
Non-Patent Citations (1)
Title |
---|
ID3v2 specification and chapter frame addendum: Copyright 2000 and 2005 respectively * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7803050B2 (en) | 2002-07-27 | 2010-09-28 | Sony Computer Entertainment Inc. | Tracking device with sound emitter for use in obtaining information for controlling game program execution |
US9174119B2 (en) | 2002-07-27 | 2015-11-03 | Sony Computer Entertainment America, LLC | Controller for providing inputs to control execution of a program when inputs are combined |
US8233642B2 (en) | 2003-08-27 | 2012-07-31 | Sony Computer Entertainment Inc. | Methods and apparatuses for capturing an audio signal based on a location of the signal |
US8160269B2 (en) | 2003-08-27 | 2012-04-17 | Sony Computer Entertainment Inc. | Methods and apparatuses for adjusting a listening area for capturing sounds |
US20060233389A1 (en) * | 2003-08-27 | 2006-10-19 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection and characterization |
US20070223732A1 (en) * | 2003-08-27 | 2007-09-27 | Mao Xiao D | Methods and apparatuses for adjusting a visual image based on an audio signal |
US8073157B2 (en) | 2003-08-27 | 2011-12-06 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection and characterization |
US8139793B2 (en) | 2003-08-27 | 2012-03-20 | Sony Computer Entertainment Inc. | Methods and apparatus for capturing audio signals based on a visual image |
US8947347B2 (en) | 2003-08-27 | 2015-02-03 | Sony Computer Entertainment Inc. | Controlling actions in a video game unit |
US7783061B2 (en) | 2003-08-27 | 2010-08-24 | Sony Computer Entertainment Inc. | Methods and apparatus for the targeted sound detection |
US20060280312A1 (en) * | 2003-08-27 | 2006-12-14 | Mao Xiao D | Methods and apparatus for capturing audio signals based on a visual image |
US7809145B2 (en) | 2006-05-04 | 2010-10-05 | Sony Computer Entertainment Inc. | Ultra small microphone array |
US20070260340A1 (en) * | 2006-05-04 | 2007-11-08 | Sony Computer Entertainment Inc. | Ultra small microphone array |
US20110014981A1 (en) * | 2006-05-08 | 2011-01-20 | Sony Computer Entertainment Inc. | Tracking device with sound emitter for use in obtaining information for controlling game program execution |
WO2011078866A1 (en) * | 2009-12-23 | 2011-06-30 | Intel Corporation | Methods and apparatus for automatically obtaining and synchronizing contextual content and applications |
KR101526652B1 (en) * | 2009-12-23 | 2015-06-08 | 인텔 코포레이션 | Methods and apparatus for automatically obtaining and synchronizing contextual content and applications |
US20170302241A1 (en) * | 2012-11-13 | 2017-10-19 | Snell Limited | Management of broadcast audio loudness |
US10027303B2 (en) * | 2012-11-13 | 2018-07-17 | Snell Advanced Media Limited | Management of broadcast audio loudness |
US20140350705A1 (en) * | 2013-05-24 | 2014-11-27 | Hon Hai Precision Industry Co., Ltd. | Music playing system and method |
US20150066175A1 (en) * | 2013-08-29 | 2015-03-05 | Avid Technology, Inc. | Audio processing in multiple latency domains |
US10735119B2 (en) | 2013-09-06 | 2020-08-04 | Gracenote, Inc. | Modifying playback of content using pre-processed profile information |
US9380383B2 (en) * | 2013-09-06 | 2016-06-28 | Gracenote, Inc. | Modifying playback of content using pre-processed profile information |
US20150073575A1 (en) * | 2013-09-09 | 2015-03-12 | George Sarkis | Combination multimedia, brain wave, and subliminal affirmation media player and recorder |
FR3049754A1 (en) * | 2016-03-31 | 2017-10-06 | Orange | Method for adapting the playback sound level of content, and corresponding computer program and residential gateway |
US10171054B1 (en) | 2017-08-24 | 2019-01-01 | International Business Machines Corporation | Audio adjustment based on dynamic and static rules |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20090062943A1 (en) | Methods and apparatus for automatically controlling the sound level based on the content | |
US10652500B2 (en) | Display of video subtitles | |
US8255825B2 (en) | Content aware adaptive display | |
US20150317353A1 (en) | Context and activity-driven playlist modification | |
US20110150427A1 (en) | Content providing server, content reproducing apparatus, content providing method, content reproducing method, program, and content providing system | |
US10313713B2 (en) | Methods, systems, and media for identifying and presenting users with multi-lingual media content items | |
WO2021012900A1 (en) | Vibration control method and apparatus, mobile terminal, and computer-readable storage medium | |
US20080147727A1 (en) | Media context information | |
US10466955B1 (en) | Crowdsourced audio normalization for presenting media content | |
US8265935B2 (en) | Method and system for media processing extensions (MPX) for audio and video setting preferences | |
US9053710B1 (en) | Audio content presentation using a presentation profile in a content header | |
US20230171736A1 (en) | Automatically suspending or reducing portable device notifications when viewing audio/video programs | |
US10656901B2 (en) | Automatic audio level adjustment during media item presentation | |
US20170193552A1 (en) | Method and system for grouping devices in a same space for cross-device marketing | |
US20140121794A1 (en) | Method, Apparatus, And Computer Program Product For Providing A Personalized Audio File | |
CN112000251A (en) | Method, apparatus, electronic device and computer readable medium for playing video | |
US20060210042A1 (en) | Auto switch system and method thereof for IP phone and default audio device | |
US7003285B2 (en) | Communication with multi-sensory devices | |
CN115665472A (en) | Transmission content management and control device and method | |
US11336707B1 (en) | Adaptive content transmission | |
CN108694207B (en) | Method and system for displaying file icons | |
CN113099294A (en) | Play control method and device, electronic equipment and readable storage medium | |
US20120207441A1 (en) | Apparatus, and associated method, for augmenting sensory perception of subject of interest | |
CN1881276A (en) | Data presentation systems and methods | |
US11606606B1 (en) | Systems and methods for detecting and analyzing audio in a media presentation environment to determine whether to replay a portion of the media |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN |
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NASON, BEN;TSAI, IVY;GOODENOUGH, DAVID;REEL/FRAME:019800/0834;SIGNING DATES FROM 20070810 TO 20070821 |
|
AS | Assignment |
Owner name: SONY NETWORK ENTERTAINMENT PLATFORM INC., JAPAN |
Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT INC.;REEL/FRAME:027446/0001 |
Effective date: 20100401 |
|
AS | Assignment |
Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN |
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SONY NETWORK ENTERTAINMENT PLATFORM INC.;REEL/FRAME:027557/0001 |
Effective date: 20100401 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT INC., JAPAN |
Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT INC.;REEL/FRAME:039239/0343 |
Effective date: 20160401 |