US20140219634A1 - Video preview creation based on environment - Google Patents
Video preview creation based on environment
- Publication number
- US20140219634A1 (U.S. application Ser. No. 14/173,732)
- Authority
- US
- United States
- Prior art keywords
- video
- preview
- video preview
- full
- encoding technique
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0485—Scrolling or panning
- G06F3/04855—Interaction with scrollbars
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/47—Detecting features for summarising video content
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/00007—Time or data compression or expansion
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47217—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8455—Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8549—Creating video summaries, e.g. movie trailer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
Definitions
- Users commonly provide video content to websites (e.g., YouTube), which can be referred to as “posting a video.”
- the user can spend a significant amount of time attempting to convey the message of the video before a viewer selects the video (e.g., by clicking the video displayed on a website).
- the user can associate a title, a static thumbnail image, and/or a textual description with the video.
- Users often have a difficult time when the video originates on a different website and the user tries to upload their video to a video server.
- the title may not be descriptive of the contents of the video
- the static thumbnail image may not summarize the essence of the video
- the description of the video may be a poor signal for whether the video will be interesting to a viewer.
- Video browsing is also limited.
- Other users (e.g., viewers) can see a video's title and static thumbnail before deciding whether to play the full video.
- the viewers may find it difficult to select particular videos of interest because the title may not be descriptive of the contents of the video, the static thumbnail image may not summarize the essence of the video, or the textual description with the video may be a poor signal for whether the video will be interesting to the viewer.
- the viewers may spend significant amounts of time searching and watching videos that are not enjoyable to the viewer.
- Embodiments of the present invention can create and display portions of videos as video previews.
- the video previews may be associated with a full video, such that the video preview is generated from a portion of the full video.
- the video previews can be generated in various ways based on an identification of the device, application, or network that will be used to activate or play the video preview.
- the video preview can be configured to play a series of images associated with images from the portion of the full video when the video preview is activated (e.g., to convey the essence of the full video via a video preview).
- embodiments of the present invention provide a method for creating video previews without an identification of the device, application, or network that will be used to activate or play the video preview.
- a computing device can generate multiple video previews in anticipation of a selected medium for activating the video preview.
- the computing device can receive parallel input streams of the full video to speed up generation of the multiple video previews.
- embodiments of the present invention provide a method for creating a compressed video file using a palette-based optimization technique.
- a computing device may create a common color palette among multiple images specified in the full video.
- the common color palette can be used to generate the compressed video file.
- FIG. 1 shows a flowchart illustrating a method of creating a video preview, organizing the video previews, and providing a user interface that includes the video previews according to an embodiment of the present invention.
- FIG. 2 shows block diagrams of various computing devices used to generate or provide a video preview.
- FIG. 3 shows a flowchart illustrating a method of identifying a video preview from a full video according to an embodiment of the present invention.
- FIG. 4 shows illustrations of a video preview displayed with various devices according to an embodiment of the present invention.
- FIG. 5 shows illustrations of a video preview displayed in various applications according to an embodiment of the present invention.
- FIG. 6 shows a flowchart illustrating a method of generating a video preview using a palette-based optimization technique according to an embodiment of the present invention.
- FIG. 7 shows an illustration of a common color palette according to an embodiment of the present invention.
- FIG. 8 shows a block diagram of a computer apparatus according to an embodiment of the present invention.
- a “video preview” or “compressed video file” is a visual representation of a portion of a video (also referred to as a “full video” to contrast a “video preview” of the video).
- the full video may correspond to the entirety of a video file or a portion of the video file, e.g., when only a portion of the video file has been streamed to a user device.
- the preview is shorter than the full video, but the full video can be shorter than the complete video file.
- the preview can convey the essence of the full video.
- the video preview is shorter (e.g., fewer images, less time) than a full (e.g., more images, longer time, substantially complete) video.
- a preview can be a continuous portion of the full video or include successive frames that are not continuous in the full video (e.g., two successive frames of the preview may actually be one or more seconds apart in the full video).
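A preview built from non-continuous frames can be sketched as selecting evenly spaced timestamps across the full video. This is a minimal illustration only; `preview_frame_times` is a hypothetical helper, not something named in the patent:

```python
# Sketch: choosing preview frame times from a full video. Frames need not
# be contiguous: a 4-second preview at 10 fps can sample frames spread
# across a 40-second full video, so successive preview frames are
# seconds apart in the source.

def preview_frame_times(full_duration_s, preview_len_s=4.0, fps=10):
    """Return one timestamp (in seconds) per preview frame, evenly
    spaced across the full video's duration."""
    n_frames = int(preview_len_s * fps)
    step = full_duration_s / n_frames
    return [round(i * step, 3) for i in range(n_frames)]

times = preview_frame_times(40.0)  # 40 preview frames across 40 s
```

Here each pair of successive preview frames is one second apart in the full video, even though they play back-to-back in the preview.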
- Embodiments of the present invention can enhance video viewing by providing short, playable video previews through a graphical user interface (GUI) or directly to the user device (e.g., stored in a clipboard). Viewers can use the GUI of video previews to better decide whether to watch a full video or a channel of videos.
- the user may create a video preview that may later be accessed by a viewer. For example, the user may select the best 1-10 seconds of a video to convey the essence of the full video.
- the video preview can be shorter (e.g., fewer images, less time) than a full (e.g., more images, longer time, substantially complete) video.
- the system associated with the GUI may generate a smaller file to associate with the video portion (e.g., animated GIF, MP4, collection of frames, RIFF).
- the system may provide the GUI on a variety of systems.
- the GUI can be provided via an internet browser or client applications (e.g., software configured to be executed on a device), and configured to run on a variety of devices (e.g., mobile, tablet, set-top, television, game console).
- FIG. 1 shows a flowchart illustrating a method 100 of creating a video preview, organizing the video previews, and providing a user interface that includes the video previews according to an embodiment of the present invention.
- the method 100 may comprise a plurality of steps for implementing an embodiment of creating a video preview based on an environment (e.g., the user device, application, or network that will display a video preview or transfer the video preview to a destination).
- Various computing devices may be used to perform the steps of the method, including video servers, provider servers, user devices, or third party devices.
- a video preview may be generated.
- Embodiments of the invention may provide a graphical user interface for a user that allows the user to request to generate a video preview, the request specifying a portion of a full video to use as the video preview.
- the system may generate the video preview based on the type of device or application that will display the video preview (e.g., using input from the user, using information transmitted from the device, using an identifier specifying the device, application, or network that will display the video preview).
- the input may be active (e.g., the user or device providing an identification of the device or application in response to a request, a third party providing information for a plurality of streaming television programs) or passive (e.g., the device transmitting information as a push notification).
- the computing device can determine an encoding technique based on the identifier to generate the video preview and create the video preview from the full video based on the determined encoding technique.
- one or more video previews may be organized into one or more channels or collections.
- the method 100 can associate the video preview generated in step 110 (e.g., a 4-second animated GIF of a snowboarder jumping off a ledge) with a channel (e.g., a collection of videos about snowboarders).
- the video previews may be organized in a group (e.g., a composite, a playable group, a cluster of video previews) and displayed on a network page. Additional information about the organization and layout of video previews can be found in U.S. patent application Ser. No. ______, entitled “Generation of Layout of Videos” (Attorney Docket 91283-000750US-897295), which is incorporated by reference in its entirety.
- a GUI may be provided with the video previews.
- the GUI may provide one or more channels (e.g., channel relating to snowboarders, channel relating to counter cultures), one or more videos within a channel (e.g., a first snowboarding video, a second snowboarding video, and a first counter culture video), or a network page displaying one or more video previews.
- the video previews may be shared through social networking pages, text messaging, or other means. Additional information about viewing and sharing video previews can be found in U.S. patent application Ser. No. ______, entitled “Activating a Video Based on Location in Screen” (Attorney Docket 91283-000760US-897296), which is incorporated by reference in its entirety.
- Various systems and computing devices can be involved with various workflows used to create a video preview based on the environment that will display the video preview.
- FIG. 2 shows block diagrams of various computing devices used to generate or provide a video preview.
- the computing devices can include a video server 210 , a provider server 220 , a user device 230 , or a third party server 240 according to an embodiment of the present invention.
- any or all of these servers, subsystems, or devices may be considered a computing device.
- the video server 210 can provide, transmit, and store full videos and/or video previews (e.g., Ooyala®, Brightcove®, Vimeo®, YouTube®, CNN®, NFL®, Hulu®, Vevo®).
- the provider server 220 can interact with the video server 210 to provide the video previews.
- the provider server 220 can receive information to generate the video preview (e.g., an identifier specifying a device or application for displaying a video preview, a timestamp of a location in a full video, a request specifying a portion of a full video, a link to the full video, the full video file, a push notification including the link to the full video).
- the user device 230 can receive a video preview and/or full video to view, browse, or store the generated video previews.
- the third party server 240 can also receive a video preview and/or full video to view or browse the generated video previews.
- the user device 230 or third party server 240 can also be used to generate the video preview or create a frame object.
- the video server 210 or third party server 240 may also be a content provider for a full video, including one or more images contained in the full video, information about the full video (e.g., title, television channel information, television programming information for a user's location).
- the third party server 240 can interact with the user device 230 to provide the additional information to the user device 230 or provider server 220 (e.g., related to television programming in the Bay Area of California, related to U.S. versus foreign television programming).
- the third party server 240 can identify a particular show (e.g., full video) that the user is likely watching based on the location of the user and channel that the user device 230 is receiving.
- the video server 210 , provider server 220 , a user device 230 , and third party server 240 can be used to receive portions of a full video in a plurality of video streams (e.g., parallel I/O) at the computing device (e.g., provider server 220 ). With multiple portions in the full video received (e.g., at the provider server 220 ), the computing device can create multiple video previews simultaneously (e.g., using multiple encoding techniques).
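Creating several preview variants at once, as described above, can be sketched with a thread pool. This is a hedged illustration under stated assumptions: `encode_preview` is a hypothetical stand-in for a real encoder (e.g., an ffmpeg invocation), and the technique names are illustrative:

```python
# Sketch: encoding multiple preview variants concurrently once the
# portions of the full video have been received (e.g., via parallel
# input streams at the provider server).
from concurrent.futures import ThreadPoolExecutor

def encode_preview(frames, technique):
    # Placeholder: a real implementation would invoke a codec here.
    return {"technique": technique, "n_frames": len(frames)}

def create_previews(frames, techniques=("gif", "mp4-h264", "webm-vp8")):
    """Encode one preview per technique in parallel and collect results."""
    with ThreadPoolExecutor(max_workers=len(techniques)) as pool:
        futures = [pool.submit(encode_preview, frames, t) for t in techniques]
        return [f.result() for f in futures]

previews = create_previews(frames=list(range(40)))
```

Each worker produces one preview, so a properly encoded variant is ready regardless of which medium the user later selects.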
- the identification of the user device 230 , application, or network that is used to display the video preview can affect the creation of the video preview.
- the device may be the user device 230 or recipient device of the video preview from the user device (e.g., Apple iPhone® sending the video preview to an Android® device).
- the provider server 220 can create (e.g., encode, compress, transcode) the video preview based on a determined encoding technique (e.g., video codecs including H.264 AVC, MPEG-4 SP, or VP8).
- a video preview may be generated by a provider server 220 , user device 230 , or video server 210 .
- a third party server 240 may generate a video preview using a similar process as a user device 230 .
- FIG. 3 shows a flowchart illustrating a method of identifying a video preview from a full video according to an embodiment of the present invention.
- a video may begin as a series of frames or images (e.g., raw format) that are encoded by the video server 210 into a full video.
- the full video may reduce the size of the corresponding file and enable a more efficient transmission of the full video to other devices (e.g., provider server 220 , user device 230 ).
- the provider server 220 can transcode the full video (e.g., change the encoding for full video to a different encoding, encoding the full video to the same encoding or re-encoding) in order to generate and transmit the video preview.
- transcoding may change the start time of a video, duration, or caption information.
- the video server 210 may store and provide a full video.
- the full video can be received from a user or generated by the computing device and offered to users through a network page.
- in other embodiments, another computing device (e.g., a user device 230, a third party server 240) may store and provide the full video.
- a request to generate a video preview of a full video can be received.
- the request to generate a video preview of a full video can specify a portion of the full video (e.g., the first 10 seconds, the last 15 seconds, the portion of the full video identified by a timestamp).
- the user device 230 may identify a video portion of the full video by identifying a start/end time, a timestamp in the full video, or other identification provided by the GUI.
- the information (e.g., start/end time, timestamp) can be transmitted to the provider server 220 .
- a user device 230 can periodically request to generate a video preview (e.g., every 30 seconds, based on a reoccurring or periodic request).
- the request can include an identification of the video portion or a litany of other information, including a start/end time, link to a full video at the video server 210 , timestamp, the user's internet protocol (IP) address, a user-agent string of the browser, cookies, a user's user identifier (ID), and other information.
- a user-agent string may include information about a user device 230 in order for the webserver to choose or limit content based on the known capabilities of a particular version of the user device 230 (e.g., client software).
- the provider server 220 can receive this and other information from the user device 230 .
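The kind of request a user device 230 might send to the provider server 220 can be sketched as a small JSON payload. The field names below are illustrative assumptions, not from the patent; the contents (link to the full video, start/end time, user-agent string, user ID) are drawn from the request information listed above:

```python
# Sketch of a preview-generation request from a user device to the
# provider server (field names are hypothetical).
import json

def build_preview_request(video_url, start_s, end_s, user_agent, user_id=None):
    request = {
        "full_video": video_url,  # link to the full video at the video server
        "start": start_s,         # portion of the full video to use
        "end": end_s,
        "user_agent": user_agent, # lets the server infer device capabilities
    }
    if user_id is not None:
        request["user_id"] = user_id
    return json.dumps(request)

payload = build_preview_request(
    "https://video.example/v/123", 12.0, 16.0,
    "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)")
```

The provider server can parse such a payload to locate the full video, identify the requested portion, and infer the requesting device from the user-agent string.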
- At block 320, at least a portion of the full video can be received.
- at least the portion of the full video can be received at the provider server 220 or other computing device.
- the provider server 220 may request a full video based in part on the information received from the user device 230 .
- the provider server 220 can transmit a request (e.g., email, file, message) to the video server 210 that references the full video (e.g., link, identifier).
- the video server 210 and provider server 220 may be connected through a direct and/or secure connection in order to retrieve the video (e.g., MP4 file, stream, full video portion).
- the video server 210 may transmit the full video (e.g., file, link) to the provider server 220 in response to the request or link to the full video.
- an identifier specifying a device or application can be received.
- the request from the user device 230 can include an identifier, which is in turn used to request a full video from the video server 210 .
- the identifier can specify the device or application that may be used for displaying the video preview.
- the identifier can include a user name, device identifier (e.g., electronic serial number (ESN), international mobile equipment identity (IMEI), mobile equipment identifier (MEID), phone number, subscriber identifier (IMSI), device carrier), identification of an application for displaying the video preview (e.g., network page, browser application, operating system).
- the identifier can be received through a variety of methods.
- the identifier can be received through an application programming interface (API), from a television programming provider, from an information feed specifying the device, application, or network (e.g., from a third party server 240 or user device 230 ), from metadata, from a user (e.g., passive/active input), or other sources.
- the request sent to the video server 210 may vary depending on the type of video needed for the full video or requested video preview.
- the full video may be a raw MP4 format (e.g., compressed using advanced audio coding (AAC) encoding, Apple Lossless format).
- the provider server 220 can determine that the desired format for the user device 230 is a different type of file format (e.g., an animated GIF) and request additional information from the video server 210 in order to transcode the MP4 format to an animated GIF format for the user device 230 (e.g., including the device type, application that will play the video preview, etc.).
- an encoding technique can be determined. For example, an encoding technique can be determined based on the identifier specifying the device or application for displaying the video preview. The encoding technique can be used to generate the video preview (e.g., using a portion of the full video).
- a variety of encoding techniques can be used, including a graphics interchange format (GIF), animated GIF, MP4 container, a H.264 video codec, an advanced audio coding (AAC) audio codec, WebM container, VP8 video codec, an Ogg Vorbis audio codec, or MPEG-4 SP.
- the encoding technique is dependent on the identification of a type of user device that submitted the request to generate the video preview. In some embodiments, the encoding technique is dependent on the type of application that will display the video preview at the user device.
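Determining an encoding technique from the identifier can be sketched as a simple lookup. The mapping below is an illustrative assumption; the patent names GIF, animated GIF, MP4/H.264 with AAC, WebM/VP8 with Ogg Vorbis, and MPEG-4 SP among the possible techniques but does not prescribe which device gets which:

```python
# Sketch: mapping an identifier (device or application type) to an
# encoding technique. The table entries are hypothetical.
ENCODING_BY_IDENTIFIER = {
    "ios-app":       "mp4-h264-aac",
    "android-app":   "webm-vp8-vorbis",
    "web-browser":   "animated-gif",
    "legacy-mobile": "mpeg4-sp",
}

def determine_encoding(identifier, default="animated-gif"):
    """Return the encoding technique for the device or application that
    will display the preview, falling back to a broadly supported one."""
    return ENCODING_BY_IDENTIFIER.get(identifier, default)

chosen = determine_encoding("ios-app")      # "mp4-h264-aac"
fallback = determine_encoding("set-top-box")  # unknown -> "animated-gif"
```

A fallback keeps the pipeline working when the identifier specifies a device or application the server has no entry for.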
- One encoding technique can be used to generate a first video preview and a second encoding technique can be used to generate a second video preview.
- the second video preview can be generated to allow the user device to share the second video preview through a particular medium.
- the first video preview and second video preview can be provided to the user device 230 .
- a palette-based size optimization can be included as an encoding technique.
- the encoding technique can include a palette-based size optimization by generating a common color palette for the video preview and limiting the video preview to the common color palette (e.g., limiting the images in the video preview to particular colors identified by the common color palette).
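The palette-based size optimization can be sketched as two steps: build one common color palette across every image in the preview, then quantize each image to it. This is a minimal pure-Python illustration over `(r, g, b)` tuples; a production encoder would use a proper quantizer (e.g., median cut) rather than nearest-color search:

```python
# Sketch: palette-based size optimization. One palette is shared by all
# preview frames, and each frame is limited to those colors.
from collections import Counter

def common_palette(frames, size=256):
    """frames: list of lists of (r, g, b) pixels. Returns the `size`
    most frequent colors across all frames combined."""
    counts = Counter(px for frame in frames for px in frame)
    return [color for color, _ in counts.most_common(size)]

def quantize(frame, palette):
    """Map each pixel to its nearest palette color (squared distance)."""
    def nearest(px):
        return min(palette,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(px, c)))
    return [nearest(px) for px in frame]

frames = [[(250, 0, 0), (250, 0, 0), (0, 250, 0)],
          [(255, 4, 4), (0, 0, 200)]]
palette = common_palette(frames, size=2)
limited = [quantize(f, palette) for f in frames]
```

Because every frame shares the same palette, formats such as animated GIF can store the palette once instead of per frame, reducing the compressed file size.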
- a video preview can be created.
- the provider server can create the video preview from the full video based on the determined encoding technique.
- the video preview can be created using a plurality of video streams (e.g., parallel I/O).
- the computing device can create multiple video previews (e.g., simultaneously, to expedite video preview creation, etc.).
- the video preview may be provided.
- the video preview can be provided to the user device 230 .
- the video preview may be provided using various methods.
- the video preview can be transmitted via a messaging service to the user device 230 , in an attachment to an email, embedded in a short messaging service (SMS) or text message, provided through a GUI accessible by the user device, or other methods.
- the user may copy/paste the video preview (e.g., a file, an animated GIF, a link to the video preview) to an email client, SMS, or other application in order to use or share the video with other applications and/or devices.
- the video preview can be provided to the user device 230 in a variety of formats.
- the video preview can be provided with a link to a stored file on a webserver and/or the provider server 220 , an animated GIF file, an MP4 file, or other acceptable file format.
- the video preview can be provided in a format based in part on a particular type of user device 230 (e.g., Apple iPhones can receive an MPEG-4 formatted file, Android machines can receive an AVI formatted file).
- the user device 230 may provide information (e.g., identifier specifying a device, application, device type, or operating system) to the provider server 220 prior to receiving the properly formatted video preview and the provided video preview can correspond with that information.
- multiple video previews can be provided.
- the computing device (e.g., provider server 220 ) can send multiple video previews to a device (e.g., to the clipboard or a temporary data buffer).
- the user may choose to paste the video preview into a particular application (e.g., messaging service, email client) and the properly encoded video preview for that application can be provided to the application.
- the user may access a GUI provided by the provider server 220 that includes one or more request tools (e.g., buttons, text boxes) to access particularly encoded video previews.
- the user may provide (e.g., copy/paste) a link to the video preview and the link can direct the user to a properly encoded video preview.
- the video preview can be provided to the user device 230 identified in a request from the user device.
- the user device can specify the ultimate device or application that the user device 230 intends to use to display the video preview (e.g., through the use of multiple request tools or buttons in a GUI provided by the provider server 220 , through user input).
- the user can select a request tool in the GUI (e.g., “I want a video preview for an SMS message”) and the received video preview can be encoded for the identified use.
- the user can select a request tool in the GUI that identifies a social networking platform (e.g., Facebook®, Twitter®, Google+®, Tumblr®), so that the received video preview can be uploaded directly to the social networking website.
- the user device 230 will transmit an identifier specifying a device or an application for displaying the video preview.
- the identifier can be matched with a list of identifiers at the provider server 220 (e.g., in a database) to find a matching identifier. If the received identifier is identified or found at the provider server 220 , the provider server 220 can determine an encoding technique for the user device based on the identifier.
- when an application (or a corresponding device configured to execute the application) is identified, the provider server can determine an encoding technique for the application based on the identifier.
- the encoding technique is determined based on the identifier, other encoding techniques can be determined as well. For example, five encoding techniques can be available and each may correspond with one or more identifiers. The encoding technique associated with the received identifier can be selected and used to start creating the video preview. In some embodiments, one or more of the encoding techniques that do not correspond with the received identifier can also be used to create one or more video previews, including a video preview created from a full video.
- an encoding technique can still be determined.
- a default encoding technique can be selected (e.g., an animated GIF) and provided to the user device 230 or application.
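The identifier matching and default fallback described above can be sketched as a simple lookup table. The identifier keys and encoding profiles here are hypothetical, chosen only to illustrate the selection logic:

```python
# Hypothetical identifier-to-encoding table; the keys and profile
# contents are illustrative, not the patent's actual mapping.
ENCODINGS = {
    "ios":     {"container": "mp4",  "video": "h264", "audio": "aac-lc"},
    "android": {"container": "webm", "video": "vp8",  "audio": "vorbis"},
}
DEFAULT = {"container": "gif", "video": "animated-gif", "audio": None}

def select_encoding(identifier):
    # If the received identifier is found, use its encoding technique;
    # otherwise fall back to a default (e.g., an animated GIF).
    return ENCODINGS.get(identifier, DEFAULT)
```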
- multiple encoding techniques can be used. For example, one or more encoding techniques can be used to create a first video preview, one or more encoding techniques can be used to create a second video preview, and so on.
- the plurality of video previews (including the first and second video previews) can be sent to the same device or application.
- the device may store the plurality of video previews in a temporary storage (e.g., clipboard or cache).
- the appropriate video preview can be selected and used for the appropriate device/application (e.g., Firefox® application receives a video preview using a WebM video container).
- a video preview encoded with a preferred or default encoding technique can be sent to the device or application first, followed by other video previews created using different encoding techniques and/or stored at a provider server 220 .
- FIG. 4 shows illustrations of a video preview displayed with various devices according to an embodiment of the present invention.
- the provided video preview can be displayed on a variety of different user devices 230 in a variety of different formats, including handheld devices 410 , laptops 420 , televisions 430 , game consoles 440 , and the like.
- the video preview 450 can be displayed as a series of images from a full video.
- the video preview 450 can include a caption 460 , link 470 (e.g., to the full video, to the video preview, to additional information associated with the video preview, to a stored video preview on a video server 210 ), or other information.
- the video preview may be displayed in a frame object.
- the identifier may specify the device as an Apple® device (e.g., a handheld device 410 or laptop 420 ) or a device running an iOS operating system (e.g., iPhone®, iPad®).
- the provider server 220 can generate a video preview and transmit the video preview to an encoding service (e.g., Cloud Video Encoding, Cloud Video Transcoding) or third party server 240 .
- the provider server 220 can receive the properly encoded video preview and provide the video preview to a user device 230 (e.g., so that the video preview plays when activated).
- the provider server 220 can generate a video preview (e.g., locally) by using a particular encoding technique.
- the encoding technique can include an H.264 video codec (e.g., used up to 1080p), 30 frames per second (FPS), High Profile level 4.1, with an advanced audio coding low complexity (AAC-LC) audio codec up to 160 Kbps, 48 kHz, stereo audio, in an .m4v, .mp4, or .mov video container.
- the encoding technique can include an MPEG-4 video codec up to 2.5 Mbps, 640 by 480 pixels, 30 FPS, Simple Profile, with an AAC-LC audio codec up to 160 Kbps per channel, 48 kHz, stereo audio, in an .m4v, .mp4, or .mov video container.
- the encoding technique can include Motion JPEG (M-JPEG) up to 35 Mbps, 1280 by 720 pixels, 30 FPS, with u-law or PCM stereo audio, in an .avi video container.
- the identifier may specify the device as a device that runs an Android® operating system (e.g., operating on a Samsung® handheld device 410 ).
- the provider server 220 can generate a video preview using a particular encoding technique for this particular device.
- the encoding technique can include an H.264 video codec with a 3GPP, MPEG-4, or MPEG-TS video container.
- the encoding technique can include a VP8 video codec with a WebM (.webm) or Matroska (.mkv) video container.
- the encoding technique may also include audio codecs, including AAC-LC, HE-AACv1, HE-AACv2, AAC-ELD, AMR-NB, AMR-WB, FLAC, MP3, MIDI, Vorbis, or PCM/WAVE.
- the encoding technique may also include various specifications for video resolution (e.g., 480 by 360 pixels, 320 by 180 pixels), frame rate (e.g., 12 FPS, 30 FPS), video bit rate (e.g., 56 Kbps, 500 Kbps, 2 Mbps), audio channels (e.g., 1 mono or 2 stereo), audio bit rate (e.g., 24 Kbps, 128 Kbps, 192 Kbps), or other specifications.
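The device-specific specifications above can be captured as declarative encoding profiles that a provider server might consult. The field names and exact values here are assumptions for this sketch, loosely mirroring the iOS and Android specifications described above:

```python
# Illustrative encoding profiles; field names are assumptions.
IOS_H264_PROFILE = {
    "video_codec": "h264",
    "max_height": 1080,
    "fps": 30,
    "profile": "High Profile level 4.1",
    "audio_codec": "aac-lc",
    "audio_max_kbps": 160,
    "audio_sample_hz": 48_000,
    "containers": (".m4v", ".mp4", ".mov"),
}
ANDROID_VP8_PROFILE = {
    "video_codec": "vp8",
    "resolution": (480, 360),
    "fps": 30,
    "video_max_kbps": 500,
    "audio_codec": "vorbis",
    "audio_max_kbps": 128,
    "containers": (".webm", ".mkv"),
}
```

Keeping the specifications as data rather than code lets new device profiles be added without changing the selection logic.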
- the identifier may specify the device as a television 430 .
- the provider server 220 can receive a full video from a third party server 240 (e.g., broadcast center, set top box data provider).
- the encoding technique can include a national television system committee (NTSC), phase alternating line (PAL), or sequential color with memory (SECAM) analog encoding.
- the video preview can be provided using a radio frequency (RF) modulation to modulate the signal onto a very high frequency (VHF) or ultra-high frequency (UHF) carrier.
- the encoding techniques can be similar to the technique described above (e.g., .mp4 container, H.264 video codec, AAC audio codec, etc.).
- the encoding technique can include an MPEG video codec to generate the video preview, followed by an MPEG-4 video codec adjusting the size and format of the video preview for the satellite television receiver (e.g., television 430 ).
- the video preview can be encrypted from the provider server 220 and decrypted at the television 430 .
- the identifier may specify the device as a game console 440 .
- the provider server 220 can receive an identifier specifying the device is a game console.
- the provider server can also receive a full video from the game console 440 (e.g., a stream of images showing the user interacting as a digital character in a played game to use as the full video).
- the encoding technique can include PAL, NTSC, animated GIF, an MP4 container, an H.264 video codec, an AAC audio codec, a WebM container, a VP8 video codec, an Ogg Vorbis audio codec, or other encoding techniques supported by the game console.
- the game console 440 can provide a video/audio capture application programming interface (API).
- the provider server 220 can capture the images provided on the game console 440 via the API (e.g., the game play could be the “full video”) and create the video preview at the provider server using the images.
- the determined encoding technique can also be used to encode captions 460 .
- the video preview may be created from the full video based on the determined encoding technique.
- the caption may also use the determined encoding technique.
- the caption may include a dual-layer file (e.g., soft captioning), where each layer is encoded using the encoding technique, so that the caption may be adjusted independently from the video preview (e.g., change language of the text in the caption).
- the video preview and caption can overlap (e.g., where the caption can be displayed on top of the video preview layer without altering the video preview itself).
- the video preview and caption can be transcoded in order to incorporate the caption with the video preview in a single-layered video preview (e.g., caption “burned in” to the video). Additional information about incorporating captions can be found in U.S. patent application Ser. No. ______, entitled “Video Preview Creation with Link” (Attorney Docket 91283-000710US-896497), which is incorporated by reference in its entirety.
- FIG. 5 shows illustrations of a video preview displayed in various applications according to an embodiment of the present invention.
- the environment 500 can comprise a plurality of computing devices, including a provider server 220 , one or more user devices 230 (e.g., 530 , 540 ), and an application 550 .
- the provider server 220 can create and provide a plurality of video previews 520 , 522 , 524 to the other devices and applications in the environment.
- the identifier may specify that the application is a network browser (e.g., Firefox, Internet Explorer, Chrome).
- the provider server 220 can create a video preview using a particular encoding technique for the network browser.
- the encoding technique can include WebM (.webm) video container based on which encoding techniques the application supports.
- the provider server 220 can create multiple video previews using multiple encoding techniques (e.g., including an .mp4 video container).
- the first encoding technique can be provided to the user and other encoding techniques can be used to create video previews for other applications that may also display the video preview.
- Network browsers may use various encoding techniques, including MP4, animated GIF, Ogg Video files (e.g., file extension .ogv, mime type video/ogg), Theora video codec, and Ogg Vorbis audio codec.
- similar encoding techniques may be implemented with any software application (e.g., an "app") or client (e.g., an email client), including GIFs or encoding techniques where video previews will not automatically play.
- audio may be omitted based on constraints of the device or application as well.
- the encoding technique can be determined based on the network. For example, a provider server 220 or user device 230 can identify that a network is relatively slow. An encoding technique can then be selected that generates a smaller video preview (e.g., a tiny GIF) instead of a larger file, so that the user device 230 can receive the video preview significantly more quickly.
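A network-aware selection of this kind reduces to a thresholded rule. The bandwidth threshold and the two output profiles below are assumptions for illustration, not values from the specification:

```python
def choose_by_network(bandwidth_kbps, threshold_kbps=500):
    # Hypothetical rule: below the threshold, generate a small animated
    # GIF; at or above it, a larger MP4/H.264 preview.  The threshold
    # and profile values are assumptions for this sketch.
    if bandwidth_kbps < threshold_kbps:
        return {"format": "gif", "max_width": 160, "fps": 10}
    return {"format": "mp4", "max_width": 640, "fps": 30}
```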
- the identifier may specify that the video preview will be displayed in a messaging service (e.g., SMS, multimedia messaging service (MMS), text message).
- the provider server 220 can determine the encoding technique that can create a smaller video preview, because the video preview will likely be viewed using a slower network connection.
- a video preview or video file may be encoded or compressed using one or more palette-based optimization techniques.
- the video file can be compressed using a common color palette.
- FIG. 6 shows a flowchart illustrating a method of generating a video preview using a palette-based optimization technique according to an embodiment of the present invention.
- a request to generate a compressed video file can be received.
- the request can specify at least a portion of a full video to be used in creating the compressed video file.
- the specified portion of the full video can comprise a plurality of images.
- the computing device can receive information associated with the specified portion of the full video (e.g., a timestamp of a location in a full video, a request specifying a portion of a full video, a link to the full video, the full video file, a push notification including the link to the full video).
- the request can include an identification of the full video or a litany of other information, including a start/end time, link to a full video at the video server 210 , timestamp, the user's internet protocol (IP) address, a user-agent string of the browser, cookies, a user's user identifier (ID), and other information.
- a user-agent string may include information about a user device 230 in order for the webserver to choose or limit content based on the known capabilities of a particular version of the user device 230 (e.g., client software).
- the provider server 220 can receive this and other information from the user device 230 .
- a palette-based optimization technique can be determined.
- the palette-based optimization technique can be used to generate the compressed video file.
- the palette-based optimization technique can limit the number of colors used to create the compressed video file.
- a single color palette can be used for encoding the compressed video file, instead of one color palette for each of the images in the video file.
- the plurality of images can be analyzed using the palette-based optimization technique. For example, the analysis can determine at least one common color palette.
- the plurality of images and common color palette can be used to generate multiple compressed images of the compressed video file.
- a representative image can be chosen (e.g., for a portion of the full video, for a scene, etc.).
- the common color palette can be a single color palette (e.g., a combination of palettes from multiple images) or multiple color palettes (e.g., where one color palette is used for one portion of the images and another color palette is used for another portion of the images).
- one or more images can be analyzed and the union of the colors in the analyzed images can be used to make a single common color palette.
- an image can be chosen as a representative image for each scene.
- one or more images can be analyzed to identify multiple scenes in the images, e.g., where different scenes involve different objects and/or backgrounds.
- a common color palette can be generated for each scene.
- a common color palette can be generated by aggregating the colors in each scene and taking the union of those colors across the scenes, producing one common color palette that covers all of the scenes.
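The per-scene aggregation and union steps can be sketched in Python, using color names instead of RGB values for readability (the data shapes here are assumptions made for the example):

```python
def scene_palette(scene):
    # The set of distinct colors appearing across one scene's frames
    return {color for frame in scene for color in frame}

def union_palette(scenes):
    # Aggregate the per-scene palettes into one common palette
    common = set()
    for scene in scenes:
        common |= scene_palette(scene)
    return common

# Two tiny "scenes", each a list of frames of color names
scene_a = [["sky", "rock"], ["sky", "snow"]]
scene_b = [["grass", "sky"]]
```

The per-scene palettes can also be kept separate when one palette per scene gives better quality, as described below.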
- multiple compressed images can be specified.
- the multiple compressed images can be generated using the one or more common color palettes.
- the images from the mountain scene or the person's face can be identified and encoded using one or more common color palettes.
- the multiple compressed images can include a first image from the top of the mountain, a second image from 10 feet down the mountain, and a third image from 20 feet down the mountain.
- the resultant images may be compressed images that are limited to the defined colors in the palette (e.g., the same color palette can be used for each of the three compressed images, including a color palette that uses four colors out of 256 possible colors).
- a scene may be an image in the full video or video preview that includes a similar background or combination of pixels as one or more other frames in the full video or video preview.
- the scene can include a different rendered view of the image in the full video or video preview.
- a full video may include a first scene showing a President and a second scene showing people walking to meet the President.
- the background or combination of pixels for the first scene may be distinguishable from the second scene.
- the multiple compressed images can include different scenes.
- the scenes may be analyzed based on the compressed images that are used to create the scene.
- the full video can include six compressed images.
- the first three compressed images can include the President speaking to a group of people and the second three compressed images can include the group of people listening to the President.
- the full video can pan between the President and the group of people, or simply capture a plurality of images from the President, pause the camera or edit the frames to remove the panning, and capture a plurality of images from the group listening to the President.
- the specified portion of the full video can be analyzed to determine information about a plurality of scenes in the full video or video preview. For example, before creating the compressed video file, the specified portion of the full video can be analyzed. The analysis can help determine the plurality of scenes in the specified portion of the full video and used to determine a common color palette.
- the common color palette can be an aggregated combination of the scenes, or multiple common color palettes can be determined for each of the plurality of scenes (e.g., if there are two scenes, then two common color palettes can be determined).
- the compressed video file can be created.
- the compressed video file can be created from the plurality of images of the specified portion of the full video.
- the compressed video preview can include the two scenes and the common color palette can be generated from each scene in the video preview.
- a single common color palette may be generated based on a combination of the plurality of scenes.
- the multiple compressed images can be rendered using the common color palette when the compressed video file is viewed.
- the optimization technique can include generating a common color palette.
- a single common color palette can be generated for the entire compressed video file (e.g., one palette shared by each of the images or frames identified in the full video).
- a plurality of frames can be analyzed and used to generate a single image.
- the common color palette can be generated from the single combined image.
- a scene analysis can be one type of optimization technique that is used, without generating a common color palette. For example, when a person is speaking into a camera in the full video, the mouth of the person may change throughout the full video, but the rest of the person's face and background around the person may remain constant.
- the optimization technique can use the same image information for the minimal changing portions of the image instead of storing new image information that is substantially the same as the rest of the image information (e.g., using a cinemagraph generator).
- the scene analysis may determine which portions of the image are static or dynamic based on user input through a graphical user interface. For example, with a "brush"-like tool, the user can click and drag over the areas that are to remain dynamic.
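A minimal sketch of this static/dynamic separation, using flat lists of grayscale values as frames; the pixelwise comparison against the first frame and the delta storage are assumptions chosen to illustrate the idea, not the patent's cinemagraph generator:

```python
def static_mask(frames, tol=0):
    # A pixel is "static" when its value never deviates from the first
    # frame by more than tol across the whole clip
    n = len(frames[0])
    return [all(abs(f[i] - frames[0][i]) <= tol for f in frames)
            for i in range(n)]

def delta_encode(frames):
    # Store the first frame in full; later frames store only pixels
    # that differ from it, so unchanged regions are never re-stored
    base = frames[0]
    deltas = [{i: v for i, v in enumerate(f) if v != base[i]}
              for f in frames[1:]]
    return base, deltas

frames = [
    [10, 10, 200, 10],   # constant background + a moving "mouth" pixel
    [10, 10, 180, 10],
    [10, 10, 220, 10],
]
mask = static_mask(frames)            # only index 2 is dynamic
base, deltas = delta_encode(frames)
```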
- the full video may be encoded to an animated GIF using indexed color.
- the color information for the animated GIF may not be directly stored with image pixel data. Instead, the color information can be stored in an array of color elements, called a palette, that defines the particular colors.
- the palette-based optimization technique can limit the number of palettes used for the frames.
- a common color palette can be generated for a plurality of images in the full video (e.g., the portion of the full video that displays a mountain scene with similar colors, the portion of the full video that displays a person's face in the center of the frame as they walk through a city).
- a common color palette can be generated for a plurality of images using default color specifications (e.g., red-green-blue, black/white, a limited range of red- and blue-tones, etc.).
- multiple common color palettes can be generated for one compressed video file, such that one or more common color palettes are used for one portion of the full video, one or more common color palettes are used for a second portion of the full video, and so on.
- the colors can be selected using various methods. For example, a plurality of images (e.g., four frames) can be selected that have the largest file sizes. The largest frames may, in some embodiments, contain the most colors, so that the common color palette identifies several colors (e.g., a Mardi Gras scene versus a snow storm scene).
- alternatively, a plurality of images can be selected periodically (e.g., at minutes 1, 2, and 3 of a 4-minute full video, in the portion of the full video, in the compressed video file, or in the entire full video). The images that are selected periodically can give a broad representation of the colors in the video (e.g., assuming that the scene will change as the video progresses).
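Both frame-selection strategies above reduce to short helpers. Treating encoded frames as byte strings (so length stands in for file size) is an assumption for this sketch:

```python
def pick_largest(encoded_frames, k):
    # Frames with the largest encoded size tend to contain the most
    # colors (e.g., a Mardi Gras scene versus a snow storm scene)
    return sorted(encoded_frames, key=len, reverse=True)[:k]

def pick_periodic(frames, k):
    # Evenly spaced frames give a broad sample of colors over time
    step = max(1, len(frames) // k)
    return frames[::step][:k]
```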
- multiple common color palettes can be identified. For example, in a full video where the camera cuts back and forth between two people having a conversation, the colors associated with each person may vary.
- a common color palette can be generated for each scene. The common color palettes for each scene can beneficially reduce file size from the original full video and provide better quality compressed video file than a single common color palette with multiple scenes.
- FIG. 7 shows an illustration of a common color palette according to an embodiment of the present invention.
- the illustration shows a 2-bit indexed image 710 where each pixel 720 is represented by a number/index, and an image 730 , where each number/index corresponds with a color 740 .
- Each pixel may correspond with some value in the color palette (e.g., 0 and 1 in the illustration corresponds with black and white, respectively).
- the image can be encoded in a similar method as shown in FIG. 7 .
- the color information may not be directly associated with image pixel data (e.g., image pixel [0,0] is Red-100), but can be stored in a separate piece of data called a color palette.
- the color palette may be an array of color elements, in which each element (e.g., a color) is indexed by its position within the array.
- the image pixels may not contain the full specification of its color, but can potentially contain its index in the palette.
- once the color palette is generated (e.g., the bitmap corresponding with the 2-bit indexed image 710 at the top of FIG. 7 ), the image 730 can be formed using the color palette (e.g., the checkerboard image at the bottom of FIG. 7 ).
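The index-to-color lookup illustrated in FIG. 7 can be sketched directly; the color names and the 2x2 checkerboard are illustrative stand-ins for real palette entries:

```python
palette = ["black", "white"]   # array of color elements (the palette)
indexed = [0, 1,               # 2x2 indexed image: each pixel stores
           1, 0]               # only its index into the palette

def decode(indexed_pixels, palette):
    # Look each index up in the palette to reconstruct the image
    return [palette[i] for i in indexed_pixels]
```

Each pixel stores a small index rather than a full color specification, which is how the indexed image uses less memory than the decoded one.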
- the image 730 can result in a close representation of an original image (e.g., and video preview) that uses less memory or storage.
- a pixel 720 is associated with a corresponding color 740 in a color palette.
- pixel [0,0] can be associated with neon green.
- several pixels are used to create an image or frame, and then several images or frames are used to generate the video preview.
- pixel 720 corresponds with a black color 740 and the pixel next to 720 corresponds with a white color.
- the common color palette can include only black and white because black and white are the only colors in this image or frame.
- the other images or frames of the video preview can be created using only black and white, so that when all the images or frames that use the common color palette are sequentially ordered to form the video preview, the video preview will comprise the colors in the common color palette.
- the reduced number of colors stored in a color palette (e.g., fewer numbers associated with colors, fewer colors associated with images/frames, a reduced number of colors, etc.) can reduce the size of the video preview.
- the common color palette can be generated from one or more images that contain at least a specified file size.
- the specified file size can be above a certain threshold (e.g., an image that is above 1-kilobyte (1 k)) or include the maximum file size when compared with other images in the full video (e.g., the first image is 1 k, the second image is 1.5 k, the third image is 2 k, so the specified file size is 2 k and the common color palette can be generated using the third image).
- the specified file size can be retrieved (e.g., the threshold or maximum file size can be retrieved from a provider server 220 or user device 230 ), dynamically determined (e.g., when the request to generate a compressed video file is received), and/or specified by a user operating a user device 230 .
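The file-size-based selection above (threshold versus maximum) can be sketched as follows; representing frames as (name, size) pairs is an assumption for the example:

```python
def frames_for_palette(frame_sizes, threshold=None):
    # frame_sizes: list of (frame_id, size_in_bytes) pairs.  With a
    # threshold, keep every frame at or above it; without one, keep
    # only the largest frame, as in the 1k/1.5k/2k example above.
    if threshold is not None:
        return [f for f in frame_sizes if f[1] >= threshold]
    return [max(frame_sizes, key=lambda f: f[1])]

sizes = [("first", 1000), ("second", 1500), ("third", 2000)]
```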
- a compression can be combined with an optimization technique to further optimize the video preview (e.g., to take advantage of a region of pixels with the same color).
- the left-half of the image may be black (e.g., a black building, a night image in a video preview, etc.).
- the values associated with the pixels in the image showing the black portion can be compressed by storing one value instead of many.
- the one value may be the same or similar for each of those pixels on the left-half of the image, so the compression can store the single color.
- the pixels in the left-half of the image can reference the single stored color.
- the image may contain a frame around the image. The color of the frame can be stored as one color and each of the pixels or portions of the image that are used to create the frame can reference the one color.
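A minimal illustration of exploiting a same-color region, here with run-length encoding over one row of pixels. The row representation is an assumption; this sketches the idea of storing one value for a region rather than the patent's exact compression:

```python
def run_length_encode(pixels):
    # Collapse each run of identical colors into a (color, count) pair,
    # so a large same-color region is stored once
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)
        else:
            runs.append((p, 1))
    return runs

def run_length_decode(runs):
    # Expand each (color, count) pair back into pixels
    return [color for color, count in runs for _ in range(count)]

row = ["black"] * 4 + ["white"] * 2   # left half black, right side white
```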
- any of the clients or servers may utilize any suitable number of subsystems. Examples of such subsystems or components are shown in FIG. 8 .
- the subsystems shown in FIG. 8 are interconnected via a system bus 875 . Additional subsystems such as a printer 874 , keyboard 878 , fixed disk 879 , monitor 876 , which is coupled to display adapter 882 , and others are shown.
- Peripherals and input/output (I/O) devices which couple to I/O controller 871 , can be connected to the computer system by any number of means known in the art, such as input/output (I/O) port 877 (e.g., USB, FireWire®).
- I/O port 877 or external interface 881 (e.g., Ethernet, Wi-Fi, etc.) can be used to connect the computer apparatus to a wide area network such as the Internet, a mouse input device, or a scanner.
- the interconnection via system bus allows the central processor 873 , which may include one or more processors, to communicate with each subsystem and to control the execution of instructions from system memory 872 or the fixed disk 879 (such as a hard drive or optical disk), as well as the exchange of information between subsystems.
- the system memory 872 and/or the fixed disk 879 may embody a computer readable medium. Any of the data mentioned herein can be output from one component to another component and can be output to the user.
- any of the embodiments of the present invention can be implemented in the form of control logic using hardware (e.g. an application specific integrated circuit or field programmable gate array) and/or using computer software with a generally programmable processor in a modular or integrated manner.
- a processor includes a multi-core processor on a same integrated chip, or multiple processing units on a single circuit board or networked.
- any of the software components or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java®, C++ or Perl using, for example, conventional or object-oriented techniques.
- the software code may be stored as a series of instructions or commands on a computer readable medium for storage and/or transmission; suitable media include random access memory (RAM), read-only memory (ROM), a magnetic medium such as a hard drive or a floppy disk, an optical medium such as a compact disk (CD) or digital versatile disk (DVD), flash memory, and the like.
- the computer readable medium may be any combination of such storage or transmission devices.
- Such programs may also be encoded and transmitted using carrier signals adapted for transmission via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet.
- a computer readable medium may be created using a data signal encoded with such programs.
- Computer readable media encoded with the program code may be packaged with a compatible device or provided separately from other devices (e.g., via Internet download). Any such computer readable medium may reside on or within a single computer program product (e.g. a hard drive, a CD, or an entire computer system), and may be present on or within different computer program products within a system or network.
- a computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.
- any of the methods described herein may be totally or partially performed with a computer system including one or more processors, which can be configured to perform the steps.
- embodiments can be directed to computer systems configured to perform the steps of any of the methods described herein, potentially with different components performing a respective step or a respective group of steps.
- steps of methods herein can be performed at the same time or in a different order. Additionally, portions of these steps may be used with portions of other steps from other methods. Also, all or portions of a step may be optional. Additionally, any of the steps of any of the methods can be performed with modules, circuits, or other means for performing these steps.
Abstract
A method is provided for creating and displaying portions of videos called video previews. The video previews may be created using an encoding technique or palette-based optimization technique for the particular user device, application, or network that will display the video preview generated from a portion of the full video. The video previews are configured to play a series of images associated with images from the portion of the full video when the video preview is activated.
Description
- This application is a non-provisional application of U.S. Patent Application No. 61/761,096, filed on Feb. 5, 2013, U.S. Patent Application No. 61/822,105, filed on May 10, 2013, U.S. Patent Application No. 61/847,996, filed on Jul. 18, 2013, and U.S. Patent Application No. 61/905,772, filed on Nov. 18, 2013, which are herein incorporated by reference in their entirety for all purposes.
- This application is related to commonly owned and concurrently filed U.S. patent application Ser. No. ______, entitled “Video Preview Creation with Link” (Attorney Docket 91283-000710US-896497), U.S. patent application Ser. No. ______, entitled “User Interface for Video Preview Creation” (Attorney Docket 91283-000720US-897301), U.S. patent application Ser. No. ______, entitled “Video Preview Creation with Audio” (Attorney Docket 91283-000740US-897294), U.S. patent application Ser. No. ______, entitled “Generation of Layout of Videos” (Attorney Docket 91283-000750US-897295), U.S. patent application Ser. No. ______, entitled “Activating a Video Based on Location in Screen” (Attorney Docket 91283-000760US-897296), which are herein incorporated by reference in their entirety for all purposes.
- Users commonly provide video content to websites (e.g., YouTube), which can be referred to as “posting a video.” The user can spend a significant amount of time conveying the message of the video before a viewer selects the video (e.g., by clicking the video displayed on a website). For example, the user can associate a title, a static thumbnail image, and/or a textual description with the video. Users often have a difficult time when the video originates on a different website and the user tries to upload the video to a video server. Further, the title may not be descriptive of the contents of the video, the static thumbnail image may not summarize the essence of the video, or the description of the video may be a poor signal for whether the video will be interesting to a viewer.
- Video browsing is also limited. Other users (e.g., viewers) can access and view the video content via the websites. For example, the viewers can see a video's title and static thumbnail before deciding whether to play the full video. However, the viewers may find it difficult to select particular videos of interest because the title may not be descriptive of the contents of the video, the static thumbnail image may not summarize the essence of the video, or the textual description associated with the video may be a poor signal for whether the video will be interesting to the viewer. Thus, the viewers may spend significant amounts of time searching and watching videos that are not enjoyable to them.
- Embodiments of the present invention can create and display portions of videos as video previews. The video previews may be associated with a full video, such that the video preview is generated from a portion of the full video. The video previews can be generated in various ways based on an identification of the device, application, or network that will be used to activate or play the video preview. Once activated, the video preview can play a series of images associated with images from the portion of the full video (e.g., to convey the essence of the full video via a video preview).
- Additionally, embodiments of the present invention provide a method for creating video previews without an identification of the device, application, or network that will be used to activate or play the video preview. For example, a computing device can generate multiple video previews in anticipation of a selected medium for activating the video preview. In another example, the computing device can receive parallel input streams of the full video to speed up generation of the multiple video previews.
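The parallel generation described above can be sketched with a thread pool that runs several encoders over the same portion of the full video at once; the `encode` stub below is an illustrative assumption standing in for a real codec invocation, not part of the specification:

```python
from concurrent.futures import ThreadPoolExecutor

def encode(portion, technique):
    """Placeholder for a real encoder; returns a labeled preview artifact."""
    return f"{portion}.{technique}"

def create_previews(portion, techniques):
    """Generate multiple video previews for one portion concurrently."""
    with ThreadPoolExecutor() as pool:
        futures = {t: pool.submit(encode, portion, t) for t in techniques}
        return {t: f.result() for t, f in futures.items()}

# One preview per anticipated medium, created before the displaying
# device or application is known.
previews = create_previews("clip_0005_0009", ["gif", "mp4", "webm"])
```

Because each encoding is independent, the previews can also be produced from separate input streams of the full video, consistent with the parallel input streams mentioned above.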
- Further, embodiments of the present invention provide a method for creating a compressed video file using a palette-based optimization technique. For example, a computing device may create a common color palette among multiple images specified in the full video. The common color palette can be used to generate the compressed video file.
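A minimal sketch of the palette-based optimization described above, with frames represented as flat lists of RGB tuples (the frame representation and the palette size are illustrative assumptions, not part of the specification):

```python
from collections import Counter

def common_palette(frames, palette_size=16):
    """Build one shared palette from the most frequent colors across all frames."""
    counts = Counter(color for frame in frames for color in frame)
    return [color for color, _ in counts.most_common(palette_size)]

def nearest(color, palette):
    """Map a color to the closest palette entry (squared RGB distance)."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

def quantize(frames, palette):
    """Limit every frame to the colors identified by the common palette."""
    return [[nearest(c, palette) for c in frame] for frame in frames]

# Two tiny "frames" as flat lists of RGB tuples.
frames = [[(255, 0, 0), (254, 0, 0), (0, 0, 255)],
          [(255, 0, 0), (0, 0, 254)]]
palette = common_palette(frames, palette_size=2)
limited = quantize(frames, palette)
```

Sharing one palette across all images lets a compressed file (e.g., an animated GIF) store a single color table instead of one table per frame.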
- Other embodiments are directed to systems and computer readable media associated with methods described herein.
- A better understanding of the nature and advantages of the present invention may be gained with reference to the following detailed description and the accompanying drawings.
-
FIG. 1 shows a flowchart illustrating a method of creating a video preview, organizing the video previews, and providing a user interface that includes the video previews according to an embodiment of the present invention. -
FIG. 2 shows block diagrams of various computing devices used to generate or provide a video preview. -
FIG. 3 shows a flowchart illustrating a method of identifying a video preview from a full video according to an embodiment of the present invention. -
FIG. 4 shows illustrations of a video preview displayed with various devices according to an embodiment of the present invention. -
FIG. 5 shows illustrations of a video preview displayed in various applications according to an embodiment of the present invention. -
FIG. 6 shows a flowchart illustrating a method of generating a video preview using a palette-based optimization technique according to an embodiment of the present invention. -
FIG. 7 shows an illustration of a common color palette according to an embodiment of the present invention. -
FIG. 8 shows a block diagram of a computer apparatus according to an embodiment of the present invention. - A “video preview” or “compressed video file” (used interchangeably) is a visual representation of a portion of a video (also referred to as a “full video” to contrast a “video preview” of the video). The full video may correspond to the entirety of a video file or a portion of the video file, e.g., when only a portion of the video file has been streamed to a user device. The video preview is shorter (e.g., fewer images, less time) than the full video (e.g., more images, longer time, substantially complete), and the full video can itself be shorter than the complete video file. The preview can convey the essence of the full video. In various embodiments, a preview can be a continuous portion of the full video or include successive frames that are not continuous in the full video (e.g., two successive frames of the preview may actually be one or more seconds apart in the full video).
- Embodiments of the present invention can enhance video viewing by providing short, playable video previews through a graphical user interface (GUI) or provided directly to the user device (e.g., stored in a clipboard). Viewers can use the GUI of video previews to better decide whether to watch a full video, or channel of videos.
- In one embodiment, the user may create a video preview that may later be accessed by a viewer. For example, the user may select the best 1-10 seconds of a video to convey the essence of the full video. The video preview can be shorter (e.g., fewer images, less time) than a full (e.g., more images, longer time, substantially complete) video. The system associated with the GUI may generate a smaller file to associate with the video portion (e.g., animated GIF, MP4, collection of frames, RIFF). The system may provide the GUI on a variety of systems. For example, the GUI can be provided via an internet browser or client applications (e.g., software configured to be executed on a device), and configured to run on a variety of devices (e.g., mobile, tablet, set-top, television, game console).
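The selection of a short portion can be sketched as picking frame indices from the full video, either as a continuous run or as evenly spaced samples (the frame rate, helper name, and thinning scheme are illustrative assumptions):

```python
def preview_frame_indices(start_sec, end_sec, fps, max_frames=None):
    """Indices of full-video frames for the selected portion; if max_frames is
    set, sample evenly so successive preview frames need not be continuous."""
    first, last = int(start_sec * fps), int(end_sec * fps)
    indices = list(range(first, last))
    if max_frames is None or len(indices) <= max_frames:
        return indices
    step = len(indices) / max_frames
    return [indices[int(i * step)] for i in range(max_frames)]

# A 4-second portion of a 30 FPS full video, thinned to 24 preview frames.
picked = preview_frame_indices(5.0, 9.0, fps=30, max_frames=24)
```

Thinning the frames in this way is one reason the generated file (e.g., an animated GIF) can be much smaller than the corresponding portion of the full video.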
-
FIG. 1 shows a flowchart illustrating a method 100 of creating a video preview, organizing the video previews, and providing a user interface that includes the video previews according to an embodiment of the present invention. The method 100 may comprise a plurality of steps for implementing an embodiment of creating a video preview based on an environment (e.g., the user device, application, or network that will display a video preview or transfer the video preview to a destination). Various computing devices may be used to perform the steps of the method, including video servers, provider servers, user devices, or third party devices. - At
step 110, a video preview may be generated. Embodiments of the invention may provide a graphical user interface for a user that allows the user to request to generate a video preview, the request specifying a portion of a full video to use as the video preview. The system may generate the video preview based on the type of device or application that will display the video preview (e.g., using input from the user, using information transmitted from the device, using an identifier specifying the device, application, or network that will display the video preview). The input may be active (e.g., the user or device providing an identification the device or application in response to a request, a third party providing information for a plurality of streaming television programs) or passive (e.g., the device transmitting information as a push notification). In response to the input (e.g., identifier), the computing device can determine an encoding technique based on the identifier to generate the video preview and create the video preview from the full video based on the determined encoding technique. - Additional means of generating video previews can be found in U.S. patent application Ser. No. ______, entitled “Video Preview Creation with Link” (Attorney Docket 91283-000710US-896497), U.S. patent application Ser. No. ______, entitled “User Interface for Video Preview Creation” (Attorney Docket 91283-000720US-897301), and U.S. patent application Ser. No. ______, entitled “Video Preview Creation with Audio” (Attorney Docket 91283-000740US-897294), which are incorporated by reference in their entirety.
- At
step 120, one or more video previews may be organized into one or more channels or collections. For example, the method 100 can associate the video preview generated in step 110 (e.g., a 4-second animated GIF of a snowboarder jumping off a ledge) with a channel (e.g., a collection of videos about snowboarders). In some embodiments, the video previews may be organized in a group (e.g., a composite, a playable group, a cluster of video previews) and displayed on a network page. Additional information about the organization and layout of video previews can be found in U.S. patent application Ser. No. ______, entitled “Generation of Layout of Videos” (Attorney Docket 91283-000750US-897295), which is incorporated by reference in its entirety. - At
step 130, a GUI may be provided with the video previews. For example, the GUI may provide one or more channels (e.g., channel relating to snowboarders, channel relating to counter cultures), one or more videos within a channel (e.g., a first snowboarding video, a second snowboarding video, and a first counter culture video), or a network page displaying one or more video previews. The video previews may be shared through social networking pages, text messaging, or other means. Additional information about viewing and sharing video previews can be found in U.S. patent application Ser. No. ______, entitled “Activating a Video Based on Location in Screen” (Attorney Docket 91283-000760US-897296), which is incorporated by reference in its entirety. - Various systems and computing devices can be involved with various workflows used to create a video preview based on the environment that will display the video preview.
-
FIG. 2 shows block diagrams of various computing devices used to generate or provide a video preview. For example, the computing devices can include a video server 210, a provider server 220, a user device 230, or a third party server 240 according to an embodiment of the present invention. In some embodiments, any or all of these servers, subsystems, or devices may be considered a computing device. - The computing devices can be implemented in various ways without departing from the essence of the invention. For example, the
video server 210 can provide, transmit, and store full videos and/or video previews (e.g., Ooyala®, Brightcove®, Vimeo®, YouTube®, CNN®, NFL®, Hulu®, Vevo®). The provider server 220 can interact with the video server 210 to provide the video previews. In some embodiments, the provider server 220 can receive information to generate the video preview (e.g., an identifier specifying a device or application for displaying a video preview, a timestamp of a location in a full video, a request specifying a portion of a full video, a link to the full video, the full video file, a push notification including the link to the full video). The user device 230 can receive a video preview and/or full video to view, browse, or store the generated video previews. The third party server 240 can also receive a video preview and/or full video to view or browse the generated video previews. In some embodiments, the user device 230 or third party server 240 can also be used to generate the video preview or create a frame object. Additional information about the video server 210, provider server 220, user device 230, and third party server 240 can be found in U.S. patent application Ser. No. ______, entitled “Video Preview Creation with Link” (Attorney Docket 91283-000710US-896497) and U.S. patent application Ser. No. ______, entitled “User Interface for Video Preview Creation” (Attorney Docket 91283-000720US-897301), which are incorporated by reference in their entirety. - The
video server 210 or third party server 240 may also be a content provider for a full video, including one or more images contained in the full video, information about the full video (e.g., title, television channel information, television programming information for a user's location). In some embodiments, the third party server 240 can interact with the user device 230 to provide the additional information to the user device 230 or provider server 220 (e.g., related to television programming in the Bay Area of California, related to U.S. versus foreign television programming). The third party server 240 can identify a particular show (e.g., full video) that the user is likely watching based on the location of the user and channel that the user device 230 is receiving. - In some embodiments, the
video server 210, provider server 220, a user device 230, and third party server 240 can be used to receive portions of a full video in a plurality of video streams (e.g., parallel I/O) at the computing device (e.g., provider server 220). With multiple portions of the full video received (e.g., at the provider server 220), the computing device can create multiple video previews simultaneously (e.g., using multiple encoding techniques). - In some embodiments, the identification of the
user device 230, application, or network that is used to display the video preview can affect the creation of the video preview. For example, a computing device (e.g., provider server 220) can receive an identifier specifying a device (e.g., an Android® device) for displaying the video preview. The device may be the user device 230 or a recipient device of the video preview from the user device (e.g., an Apple iPhone® sending the video preview to an Android® device). The provider server 220 can create (e.g., encode, compress, transcode) the video preview based on a determined encoding technique (e.g., video codecs including H.264 AVC, MPEG-4 SP, or VP8). - A video preview may be generated by a
provider server 220, user device 230, or video server 210. In some embodiments, a third party server 240 may generate a video preview using a similar process as a user device 230. - A. Identifying a Video Preview from a Full Video
-
FIG. 3 shows a flowchart illustrating a method of identifying a video preview from a full video according to an embodiment of the present invention. For example, a video may begin as a series of frames or images (e.g., raw format) that are encoded by the video server 210 into a full video. The encoding may reduce the size of the corresponding file and enable a more efficient transmission of the full video to other devices (e.g., provider server 220, user device 230). In some embodiments, the provider server 220 can transcode the full video (e.g., change the encoding of the full video to a different encoding, or re-encode the full video with the same encoding) in order to generate and transmit the video preview. For example, transcoding may change the start time of a video, duration, or caption information. - The
video server 210 may store and provide a full video. The full video can be received from a user or generated by the computing device and offered to users through a network page. In some embodiments, another computing device (e.g., a user device 230, a third party server 240) can upload the full video to the video server 210. - At
block 310, a request to generate a video preview of a full video can be received. For example, the request can specify a portion of the full video (e.g., the first 10 seconds, the last 15 seconds, the portion of the full video identified by a timestamp). The user device 230 may identify a video portion of the full video by identifying a start/end time, a timestamp in the full video, or other identification provided by the GUI. The information (e.g., start/end time, timestamp) can be transmitted to the provider server 220. In some embodiments, a user device 230 can periodically request to generate a video preview (e.g., every 30 seconds, based on a recurring or periodic request). - The request can include an identification of the video portion or a litany of other information, including a start/end time, link to a full video at the
video server 210, timestamp, the user's internet protocol (IP) address, a user-agent string of the browser, cookies, a user identifier (ID), and other information. A user-agent string, for example, may include information about a user device 230 in order for the webserver to choose or limit content based on the known capabilities of a particular version of the user device 230 (e.g., client software). The provider server 220 can receive this and other information from the user device 230. - At
block 320, at least a portion of the full video can be received. For example, at least the portion of the full video can be received at the provider server 220 or other computing device. The provider server 220 may request a full video based in part on the information received from the user device 230. For example, the provider server 220 can transmit a request (e.g., email, file, message) to the video server 210 that references the full video (e.g., link, identifier). In some examples, the video server 210 and provider server 220 may be connected through a direct and/or secure connection in order to retrieve the video (e.g., MP4 file, stream, full video portion). The video server 210 may transmit the full video (e.g., file, link) to the provider server 220 in response to the request or link to the full video. - At
block 330, an identifier specifying a device or application can be received. For example, the request from the user device 230 can include an identifier, which is in turn used to request a full video from the video server 210. The identifier can specify the device or application that may be used for displaying the video preview. The identifier can include a user name, a device identifier (e.g., electronic serial number (ESN), international mobile equipment identity (IMEI), mobile equipment identifier (MEID), phone number, subscriber identifier (IMSI), device carrier), or an identification of an application for displaying the video preview (e.g., network page, browser application, operating system). - The identifier can be received through a variety of methods. For example, the identifier can be received through an application programming interface (API), from a television programming provider, from an information feed specifying the device, application, or network (e.g., from a
third party server 240 or user device 230), from metadata, from a user (e.g., passive/active input), or other sources. - In some embodiments, the request sent to the
video server 210 may vary depending on the type of video needed for the full video or requested video preview. For example, the full video may be in a raw MP4 format (e.g., compressed using advanced audio coding (AAC) encoding, Apple Lossless format). The provider server 220 can determine that the desired format for the user device 230 is a different type of file format (e.g., an animated GIF) and request additional information from the video server 210 in order to transcode the MP4 format to an animated GIF format for the user device 230 (e.g., including the device type, application that will play the video preview, etc.). - At
block 340, an encoding technique can be determined. For example, an encoding technique can be determined based on the identifier specifying the device or application for displaying the video preview. The encoding technique can be used to generate the video preview (e.g., using a portion of the full video). A variety of encoding techniques can be used, including a graphics interchange format (GIF), animated GIF, MP4 container, a H.264 video codec, an advanced audio coding (AAC) audio codec, WebM container, VP8 video codec, an Ogg Vorbis audio codec, or MPEG-4 SP. - In some embodiments, the encoding technique is dependent on the identification of a type of user device that submitted the request to generate the video preview. In some embodiments, the encoding technique is dependent on the type of application that will display the video preview at the user device.
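One way to realize the determination in block 340 is a lookup from the received identifier to a container/codec pair, with a default when no identifier matches; the table below is a hedged sketch (the keys and pairings are assumptions for illustration, echoing the GIF, MP4/H.264, and WebM/VP8 options listed above):

```python
# Hypothetical identifier -> (container, video codec) table; an animated GIF
# serves as the default when the identifier is unknown or absent.
ENCODINGS = {
    "ios":     ("mp4", "h264"),
    "android": ("webm", "vp8"),
    "browser": ("gif", "gif"),
}
DEFAULT_ENCODING = ("gif", "gif")

def choose_encoding(identifier):
    """Determine the encoding technique for a device/application identifier."""
    return ENCODINGS.get(identifier, DEFAULT_ENCODING)

technique = choose_encoding("android")
fallback = choose_encoding(None)  # no identifier provided
```

In practice the table could be a database of identifiers at the provider server, but the selection logic is the same.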
- Multiple encoding techniques can be used. For example, one encoding technique can be used to generate a first video preview and a second encoding technique can be used to generate a second video preview. The second video preview can be generated to allow the user device to share the second video preview through a particular medium. The first video preview and second video preview can be provided to the
user device 230. - In some embodiments, a palette-based size optimization can be included as an encoding technique. For example, the encoding technique can include a palette-based size optimization by generating a common color palette for the video preview and limiting the video preview to the common color palette (e.g., limiting the images in the video preview to particular colors identified by the common color palette).
- At block 350, a video preview can be created. For example, the provider server can create the video preview from the full video based on the determined encoding technique. In some embodiments, the video preview can be created using a plurality of video streams (e.g., parallel I/O). With multiple portions in the full video received (e.g., at the provider server 220), the computing device can create multiple video previews (e.g., simultaneously, to expedite video preview creation, etc.).
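As one concrete (and purely illustrative) way to create the preview at block 350, the determined encoding technique could be handed to a transcoding tool such as ffmpeg; the file names, 4-second duration, and output settings below are assumptions rather than requirements of the method:

```python
def preview_command(src, dst, start_sec, duration_sec, fps=12, width=320):
    """Build an ffmpeg command that encodes one portion of the full video
    into a preview file whose format is implied by the dst extension."""
    return [
        "ffmpeg",
        "-ss", str(start_sec),      # seek to the selected portion
        "-t", str(duration_sec),    # preview length in seconds
        "-i", src,                  # the encoded full video
        "-vf", f"fps={fps},scale={width}:-1",  # thin frames, shrink size
        dst,
    ]

cmd = preview_command("full.mp4", "preview.gif", start_sec=2, duration_sec=4)
# The command could then be executed with subprocess.run(cmd, check=True).
```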
- At
block 360, the video preview may be provided. For example, the video preview can be provided to the user device 230. The video preview may be provided using various methods. For example, the video preview can be transmitted via a messaging service to the user device 230, in an attachment to an email, embedded in a short messaging service (SMS) or text message, provided through a GUI accessible by the user device, or other methods. In some embodiments, the video preview (e.g., file, animated GIF, link to a video preview) may be stored in a temporary location (e.g., clipboard, temporary data buffer) at a user device 230 after the video preview is generated. The user may copy/paste the video preview to an email client, SMS, or other application in order to use or share the video with other applications and/or devices. - The video preview can be provided to the
user device 230 in a variety of formats. For example, the video preview can be provided with a link to a stored file on a webserver and/or theprovider server 220, an animated GIF file, an MP4 file, or other acceptable file format. In some examples, the video preview can be provided in a format based in part on a particular type of user device 230 (e.g. Apple iPhones can receive a MPEG-4 formatted file, Android machines can receive an AVI formatted file). As illustrated, theuser device 230 may provide information (e.g., identifier specifying a device, application, device type, or operating system) to theprovider server 220 prior to receiving the properly formatted video preview and the provided video preview can correspond with that information. - In some embodiments, multiple video previews can be provided. For example, the computing device (e.g., provider server 220) can send multiple video previews to a device (e.g., the clipboard, temporary data buffer). The user may choose to paste the video preview into a particular application (e.g., messaging service, email client) and the properly encoded video preview for that application can be provided to the application. For example, the user may access a GUI provided by the
provider server 220 that includes one or more request tools (e.g., buttons, text boxes) to access particularly encoded video previews. When the user selects one of the request tools, the corresponding video preview can be provided to the user. In some examples, the user may provide (e.g., copy/paste) a link to the video preview and the link can direct the user to a properly encoded video preview. - In some embodiments, the video preview can be provided to the
user device 230 identified in a request from the user device. For example, the user device can specify the ultimate device or application that theuser device 230 intends to use to display the video preview (e.g., through the use of multiple request tools or buttons in a GUI provided by theprovider server 220, through user input). The user can select a request tool in the GUI (e.g., “I want a video preview for an SMS message”) and the received video preview can be encoded for the identified use. In another example, the user can select a request tool in the GUI that identifies a social networking platform (e.g., Facebook®, Twitter®, Google+®, Tumblr®), so that the received video preview can be uploaded directly to the social networking website. - B. Correlating an Identifier with an Encoding Technique
- In some embodiments, the
user device 230 will transmit an identifier specifying a device or an application for displaying the video preview. The identifier can be matched with a list of identifiers at the provider server 220 (e.g., in a database) to find a matching identifier. If the received identifier is identified or found at theprovider server 220, theprovider server 220 can determine an encoding technique for the user device based on the identifier. Similarly, an application (e.g., or a corresponding device configured to execute the application) can provide an identifier to theprovider server 220 that specifies the application for displaying the video preview. If the identifier is found, the provider server can determine an encoding technique for the application based on the identifier. - Once the encoding technique is determined based on the identifier, other encoding techniques can be determined as well. For example, five encoding techniques can be available and each may correspond with one or more identifiers. The encoding technique associated with the received identifier can be selected and used to start creating the video preview. In some embodiments, one or more of the encoding techniques that do not correspond with the received identifier can also be used to create one or more video previews, including a video preview created from a full video.
- When the identifier is not found or the device/application does not provide an identifier, an encoding technique can still be determined. In some embodiments, a default encoding technique can be selected (e.g., an animated GIF) and provided to the
user device 230 or application. - In some embodiments, multiple encoding techniques can be used. For example, one or more encoding techniques can be used to create a first video preview, one or more encoding techniques can be used to create a second video preview, and so on. The plurality of video previews (including the first and second video previews) can be sent to the same device or application. The device may store the plurality of video previews in a temporary storage (e.g., clipboard or cache). When the user would like to display the video preview, the appropriate video preview can be selected and used for the appropriate device/application (e.g., Firefox® application receives a video preview using a WebM video container). In some embodiments, a video preview encoded with a preferred or default encoding technique can be sent to the device or application first, followed by other video previews created using different encoding techniques and/or stored at a
provider server 220. - C. Determining an Encoding Technique Based on the Type of User Device
-
FIG. 4 shows illustrations of a video preview displayed with various devices according to an embodiment of the present invention. For example, the provided video preview can be displayed on a variety ofdifferent user devices 230 in a variety of different formats, includinghandheld devices 410,laptops 420,televisions 430, game consoles 440, and the like. Thevideo preview 450 can be displayed as a series of images from a full video. In some embodiments, thevideo preview 450 can include acaption 460, link 470 (e.g., to the full video, to the video preview, to additional information associated with the video preview, to a stored video preview on a video server 210), or other information. In some examples, the video preview may be displayed in a frame object. - For example, the identifier may specify the device as an Apple® device (e.g., a
handheld device 410 or laptop 420) or a device running an iOS operating system (e.g., iPhone®, iPad®). In some embodiments, theprovider server 220 can generate a video preview and transmit the video preview to an encoding service (e.g., Cloud Video Encoding, Cloud Video Transcoding) orthird party server 240. Theprovider server 220 can receive the properly encoded video preview and provide the video preview to a user device 230 (e.g., so that the video preview plays when activated). In other embodiments, theprovider server 220 can generate a video preview (e.g., locally) by using a particular encoding technique. For example, the encoding technique can include a H.264 video codec (e.g., used up to 1080p), 30 frames per second (FPS), High Profile level 4.1 with advanced audio coding low complexity (AAC-LC) audio codec up to 160 Kbps, 48 kHz, stereo audio with .m4v, .mp4, and .mov video container. In another example, the encoding technique can include a MPEG4 video codec up to 2.5 Mbps, 640 by 480 pixels, 30 FPS, Simple Profile with AAC-LC audio codec up to 160 Kbps per channel, 48 kHz, stereo audio with .m4v, .mp4, and .mov video container. In yet another example, the encoding technique can include a Motion JPEG (M-JPEG) up to 35 Mbps, 1280 by 720 pixels, 30 FPS, audio in ulaw, PCM stereo audio with .avi as the video container. - In another example, the identifier may specify the device as a device that runs an Android® operating system (e.g., operating on a Samsung® handheld device 410). The
provider server 220 can generate a video preview using a particular encoding technique for this particular device. For example, the encoding technique can include an H.264 video codec with a 3GPP, MPEG-4, or MPEG-TS video container. In another example, the encoding technique can include a VP8 video codec with a WebM (.webm) or Matroska (.mkv) video container. The encoding technique may also include audio codecs, including AAC-LC, HE-AACv1, HE-AACv2, AAC-ELD, AMR-NB, AMR-WB, FLAC, MP3, MIDI, Vorbis, or PCM/WAVE. The encoding technique may also include various specifications for video resolution (e.g., 480 by 360 pixels, 320 by 180 pixels), frame rate (e.g., 12 FPS, 30 FPS), video bit rate (e.g., 56 Kbps, 500 Kbps, 2 Mbps), audio channels (e.g., 1 mono or 2 stereo), audio bit rate (e.g., 24 Kbps, 128 Kbps, 192 Kbps), or other specifications. - In another example, the identifier may specify the device as a
television 430. In some embodiments, the provider server 220 can receive a full video from a third party server 240 (e.g., broadcast center, set top box data provider). When an analog television is used (e.g., identified by the identifier that specifies the device), the encoding technique can include a national television system committee (NTSC), phase alternating line (PAL), or sequential color with memory (SECAM) analog encoding. The video preview can be provided using radio frequency (RF) modulation to modulate the signal onto a very high frequency (VHF) or ultra-high frequency (UHF) carrier. When a television that runs an Android® operating system (e.g., a Google TV set top box) or an iOS operating system (e.g., Apple TV) is used, the encoding techniques can be similar to the techniques described above (e.g., .mp4 container, H.264 video codec, AAC audio codec, etc.). When a satellite television is used, the encoding technique can include an MPEG video codec to generate the video preview, followed by an MPEG-4 video codec adjusting the size and format of the video preview for the satellite television receiver (e.g., television 430). In some embodiments, the video preview can be encrypted at the provider server 220 and decrypted at the television 430. - In another example, the identifier may specify the device as a
game console 440. In some embodiments, the provider server 220 can receive an identifier specifying that the device is a game console. The provider server can also receive a full video from the game console 440 (e.g., a stream of images showing the user interacting as a digital character in a played game to use as the full video). The encoding technique can include PAL, NTSC, animated GIF, an MP4 container, an H.264 video codec, an AAC audio codec, a WebM container, a VP8 video codec, an Ogg Vorbis audio codec, or other encoding techniques supported by the game console. In some embodiments, the game console 440 can provide a video/audio capture application programming interface (API). The provider server 220 can capture the images provided on the game console 440 via the API (e.g., the game play could be the "full video") and create the video preview at the provider server using the images. - It should be appreciated that the provided encoding techniques are illustrations. Other encoding techniques are available without departing from the essence of the invention.
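- The device-based selection described in this section can be sketched as a lookup from device identifier to encoding technique. This is a minimal illustrative sketch only: the profile names and parameter values below are stand-ins drawn loosely from the examples above, not an actual implementation of the provider server 220.

```python
# Hypothetical sketch: map a device identifier to an encoding technique.
# The table values loosely follow the examples in this section; a real
# provider server would consult a much richer capability database.

ENCODING_PROFILES = {
    "ios":       {"container": ".mp4",  "video_codec": "H.264", "audio_codec": "AAC-LC"},
    "android":   {"container": ".webm", "video_codec": "VP8",   "audio_codec": "Vorbis"},
    "analog_tv": {"container": None,    "video_codec": "NTSC",  "audio_codec": None},
}

# Widely supported fallback when the identifier is unrecognized.
DEFAULT_PROFILE = {"container": ".gif", "video_codec": "GIF", "audio_codec": None}

def choose_encoding(identifier):
    """Return the encoding profile for the device named by `identifier`."""
    return ENCODING_PROFILES.get(identifier, DEFAULT_PROFILE)
```

For instance, `choose_encoding("ios")` would select the H.264/.mp4 profile, while an unrecognized identifier falls back to an animated GIF, which nearly every device can display.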
- D. Encoding Captions
- The determined encoding technique can also be used to encode
captions 460. For example, the video preview may be created from the full video based on the determined encoding technique. The caption may also use the determined encoding technique. For example, the caption may include a dual-layer file (e.g., soft captioning), where each layer is encoded using the encoding technique, so that the caption may be adjusted independently from the video preview (e.g., change language of the text in the caption). The video preview and caption can overlap (e.g., where the caption can be displayed on top of the video preview layer without altering the video preview itself). In another example, the video preview and caption can be transcoded in order to incorporate the caption with the video preview in a single-layered video preview (e.g., caption “burned in” to the video). Additional information about incorporating captions can be found in U.S. patent application Ser. No. ______, entitled “Video Preview Creation with Link” (Attorney Docket 91283-000710US-896497), which is incorporated by reference in its entirety. - E. Determining an Encoding Technique Based on the Application that Will Display the Video Preview
-
FIG. 5 shows illustrations of a video preview displayed in various applications according to an embodiment of the present invention. For example, the environment 500 can comprise a plurality of computing devices, including a provider server 220, one or more user devices 230 (e.g., 530, 540), and an application 550. The provider server 220 can create and provide a plurality of video previews 520, 522, 524 to the other devices and applications in the environment. - In some embodiments, the identifier may specify that the application is a network browser (e.g., Firefox, Internet Explorer, Chrome). The
provider server 220 can create a video preview using a particular encoding technique for the network browser. For example, the encoding technique can include a WebM (.webm) video container based on which encoding techniques the application supports. In another example, the provider server 220 can create multiple video previews using multiple encoding techniques (e.g., including an .mp4 video container). The first encoding technique can be provided to the user, and the other encoding techniques can be used to create video previews for other applications that may also display the video preview. Network browsers (e.g., other than Firefox) may use various encoding techniques, including MP4, animated GIF, Ogg Video files (e.g., file extension .ogv, MIME type video/ogg), a Theora video codec, and an Ogg Vorbis audio codec. - Other applications may be identified as well. For example, the identifier may specify any software application (e.g., an "app") or client (e.g., an email client) that can be configured to run on a mobile device, smartphone, gaming console, or television. Similar encoding techniques may be implemented with these applications, including GIFs or encoding techniques where video previews will not automatically play. In some examples, audio may be omitted based on constraints of the device or application as well.
- F. Determining an Encoding Technique Based on a Network
- In some embodiments, the encoding technique can be determined based on the network. For example, a
provider server 220 or user device 230 can identify that a network is relatively slow. The encoding technique can be determined to generate a smaller video preview (e.g., a tiny GIF) instead of a larger file, so that the user device 230 can receive the video preview significantly more quickly. In another example, the identifier may specify that the video preview will be displayed in a messaging service (e.g., SMS, multimedia messaging service (MMS), text message). The provider server 220 can determine an encoding technique that can create a smaller video preview, because the video preview will likely be viewed using a slower network connection. - A video preview or video file may be encoded or compressed using one or more palette-based optimization techniques. For example, the video file can be compressed using a common color palette.
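- The network-based selection described above can be sketched as a simple bandwidth lookup. The thresholds and format parameters below are illustrative assumptions for this sketch, not values from the specification.

```python
def preview_format_for_network(kbps):
    """Pick a preview format from an estimated connection speed in Kbps.
    All thresholds and parameters here are hypothetical examples."""
    if kbps < 100:      # e.g., an SMS/MMS-class or heavily congested link
        return {"format": "gif", "width": 160, "fps": 5}
    if kbps < 1000:     # e.g., slow broadband or 3G
        return {"format": "gif", "width": 320, "fps": 10}
    return {"format": "mp4", "width": 640, "fps": 30}
```

A slow connection thus yields a tiny GIF preview, while a fast connection can receive a full-size MP4 preview.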
- A. Use of Common Color Palette
-
FIG. 6 shows a flowchart illustrating a method of generating a video preview using a palette-based optimization technique according to an embodiment of the present invention. - At
block 610, a request to generate a compressed video file can be received. For example, the request can specify at least a portion of a full video to be used in creating the compressed video file. The specified portion of the full video can comprise a plurality of images. In some embodiments, the computing device can receive information associated with the specified portion of the full video (e.g., a timestamp of a location in a full video, a request specifying a portion of a full video, a link to the full video, the full video file, a push notification including the link to the full video). - The request can include an identification of the full video or a litany of other information, including a start/end time, link to a full video at the
video server 210, timestamp, the user's internet protocol (IP) address, a user-agent string of the browser, cookies, a user's user identifier (ID), and other information. A user-agent string, for example, may include information about a user device 230 in order for the web server to choose or limit content based on the known capabilities of a particular version of the user device 230 (e.g., client software). The provider server 220 can receive this and other information from the user device 230. - At
block 620, a palette-based optimization technique can be determined. The palette-based optimization technique can be used to generate the compressed video file. For example, the palette-based optimization technique can limit the number of colors used to create the compressed video file. In another example, a single color palette can be used for encoding the compressed video file, instead of one color palette for each of the images in the video file. - At block 630, the plurality of images can be analyzed using the palette-based optimization technique. For example, the analysis can determine at least one common color palette. The plurality of images and common color palette can be used to generate multiple compressed images of the compressed video file. In some examples, a representative image can be chosen (e.g., for a portion of the full video, for a scene, etc.).
- The common color palette can be a single color palette (e.g., a combination of palettes from multiple images) or multiple color palettes (e.g., where one color palette is used for one portion of the images and another color palette is used for another portion of the images). For example, one or more images can be analyzed and the union of the colors in the analyzed images can be used to make a single common color palette. In another example, an image can be chosen as a representative image for each scene. In yet another example, one or more images can be analyzed to identify multiple scenes in the images, e.g., where different scenes involve different objects and/or backgrounds. A common color palette can be generated for each scene. In another example, a single common color palette can be generated from the union of the colors across the scenes. The colors may be aggregated and/or the union of the colors may be used to generate the common color palette.
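- The union-of-colors approach above can be sketched in a few lines. In this illustrative sketch, an image is modeled as a flat list of (r, g, b) tuples, and the truncation at `max_colors` stands in for the quantization step a real encoder would perform.

```python
def common_palette(images, max_colors=256):
    """Build a single common color palette as the union of the colors
    appearing in `images` (each image is a list of (r, g, b) tuples)."""
    palette, seen = [], set()
    for image in images:
        for pixel in image:
            if pixel not in seen:       # keep first occurrence of each color
                seen.add(pixel)
                palette.append(pixel)
    # Placeholder: a real encoder would quantize down to max_colors
    # rather than simply truncating the union.
    return palette[:max_colors]
```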
- At
block 640, multiple compressed images can be specified. The multiple compressed images can be generated using the one or more common color palettes. For example, the images from the mountain scene or the person's face can be identified and encoded using one or more common color palettes. In the mountain scene, the multiple compressed images can include a first image from the top of the mountain, a second image from 10 feet down the mountain, and a third image from 20 feet down the mountain. The resultant images may be compressed images that are limited to the defined colors in the palette (e.g., the same color palette can be used for each of the three compressed images, including a color palette that uses four colors out of 256 possible colors). - A scene may be an image in the full video or video preview that includes a similar background or combination of pixels as one or more other frames in the full video or video preview. The scene can include a different rendered view of the image in the full video or video preview. For example, a full video may include a first scene showing a President and a second scene showing people walking to meet the President. The background or combination of pixels for the first scene may be distinguishable from the second scene.
- In another example, the multiple compressed images can include different scenes. The scenes may be analyzed based on the compressed images that are used to create the scene. For example, the full video can include six compressed images. The first three compressed images can include the President speaking to a group of people and the second three compressed images can include the group of people listening to the President. The full video can pan between the President and the group of people, or simply capture a plurality of images from the President, pause the camera or edit the frames to remove the panning, and capture a plurality of images from the group listening to the President. There may be two common color palettes, including one common color palette for the President (e.g., navy blues, deep reds) and one common color palette for the group of people (e.g., pastel colors).
- In some examples, the specified portion of the full video can be analyzed to determine information about a plurality of scenes in the full video or video preview. For example, before creating the compressed video file, the specified portion of the full video can be analyzed. The analysis can help determine the plurality of scenes in the specified portion of the full video and can be used to determine a common color palette. The common color palette can be an aggregated combination of the scenes, or multiple common color palettes can be determined for each of the plurality of scenes (e.g., if there are two scenes, then two common color palettes can be determined).
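- A sketch of the per-scene analysis: consecutive frames are grouped into a new scene when their color overlap drops below a threshold, and one union palette is built per scene. The overlap measure and the threshold value are assumptions made for illustration, not the specification's method.

```python
def scene_palettes(frames, cut_threshold=0.5):
    """Group frames (lists of (r, g, b) tuples) into scenes at points where
    the shared fraction of colors drops below `cut_threshold`, then return
    one common color palette (a sorted color union) per scene."""
    scenes, current = [], [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        colors_prev, colors_cur = set(prev), set(cur)
        overlap = len(colors_prev & colors_cur) / max(len(colors_prev | colors_cur), 1)
        if overlap < cut_threshold:     # large color change -> scene cut
            scenes.append(current)
            current = []
        current.append(cur)
    scenes.append(current)
    return [sorted(set().union(*(set(f) for f in scene))) for scene in scenes]
```

For the President example above, the navy-and-red frames and the pastel frames would fall into two scenes, each with its own palette.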
- At
block 650, the compressed video file can be created. For example, the compressed video file can be created from the plurality of images of the specified portion of the full video. As illustrated, the compressed video preview can include the two scenes and the common color palette can be generated from each scene in the video preview. In another example, a single common color palette may be generated based on a combination of the plurality of scenes. The multiple compressed images can be rendered using the common color palette when the compressed video file is viewed. - B. Optimization Techniques
- A variety of optimization techniques are possible, including palette-based optimization. For example, when generating a compressed video file in an animated GIF format, the optimization technique can include generating a common color palette. A single common color palette can be generated for the entire compressed video file (e.g., one palette shared by each of the images or frames identified in the full video). In some examples, a plurality of frames can be analyzed and used to generate a single image. The common color palette can be generated from the single combined image.
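- The combine-then-palettize step above can be sketched by pooling the pixels of sampled frames into one notional combined image and keeping its most frequent colors. The frequency-based selection is an assumption made for illustration.

```python
from collections import Counter

def palette_from_combined(frames, max_colors=256):
    """Pool the pixels of several frames (lists of (r, g, b) tuples) into
    one combined image and keep its most frequent colors as the single
    common color palette shared by every frame."""
    counts = Counter(pixel for frame in frames for pixel in frame)
    return [color for color, _ in counts.most_common(max_colors)]
```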
- In some examples, a scene analysis can be one type of optimization technique that is used, without generating a common color palette. For example, when a person is speaking into a camera in the full video, the mouth of the person may change throughout the full video, but the rest of the person's face and background around the person may remain constant. The optimization technique can use the same image information for the minimal changing portions of the image instead of storing new image information that is substantially the same as the rest of the image information (e.g., using a cinemagraph generator). In some embodiments, the scene analysis may consider which portions of the image are static or dynamic through user input using a graphical user interface. For example, with a “brush”-like tool, the user can click and drag over the areas that are to remain dynamic.
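- The static/dynamic split described above can be sketched with simple frame differencing: pixels whose values change between consecutive frames are marked dynamic, and everything else can be stored once and reused. The per-channel tolerance below is an illustrative assumption.

```python
def dynamic_pixel_indices(frames, tol=8):
    """Return the indices of pixels that change between any two consecutive
    frames (any channel differs by more than `tol`). The remaining, static
    pixels can reuse one stored copy, as in a cinemagraph."""
    dynamic = set()
    for prev, cur in zip(frames, frames[1:]):
        for i, (p, c) in enumerate(zip(prev, cur)):
            if any(abs(a - b) > tol for a, b in zip(p, c)):
                dynamic.add(i)
    return dynamic
```

In the speaking-person example, only the indices covering the mouth region would be returned; the face and background would be stored once.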
- In an embodiment, the full video may be encoded to an animated GIF using indexed color. For example, with indexed color, the color information for the animated GIF may not be directly stored with image pixel data. Instead, the color information can be stored in an array of color elements, called a palette, in which each element defines a particular color.
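- A minimal sketch of indexed color as just described: the encoder stores each unique color once in a palette array, and each pixel stores only its index into that array.

```python
def index_encode(pixels):
    """Encode a list of (r, g, b) pixels as (palette, indices), where the
    palette holds each unique color once and indices point into it."""
    palette, position, indices = [], {}, []
    for pixel in pixels:
        if pixel not in position:
            position[pixel] = len(palette)
            palette.append(pixel)
        indices.append(position[pixel])
    return palette, indices

def index_decode(palette, indices):
    """Rebuild the pixel list from a palette and per-pixel indices."""
    return [palette[i] for i in indices]
```

A black-and-white checkerboard, for example, needs only a two-entry palette and one index per pixel.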
- In a standard animated GIF, up to one palette can be specified for every frame. In some embodiments, the palette-based optimization technique can limit the number of palettes used for the frames. For example, a common color palette can be generated for a plurality of images in the full video (e.g., the portion of the full video that displays a mountain scene with similar colors, the portion of the full video that displays a person's face in the center of the frame as they walk through a city). In another example, a common color palette can be generated for a plurality of images using default color specifications (e.g., red-green-blue, black/white, a limited range of red and blue tones, etc.). In another example, multiple common color palettes can be generated for one compressed video file, such that one or more common color palettes are used for one portion of the full video, one or more common color palettes are used for a second portion of the full video, and so on.
- When a single common color palette is used, the colors can be selected using various methods. For example, a plurality of images (e.g., four frames) can be selected that contain the largest file sizes. The largest frames may, in some embodiments, identify the most colors so that the common color palette identifies several colors (e.g., a Mardi Gras scene versus a snow storm scene). In another example, a plurality of images can be selected periodically (e.g.,
minutes 1, 2, and 3 in a 4-minute full video, in the portion of the full video, in the compressed video file, in the entire full video). The images that are selected periodically can identify a broad representation of the colors in the video (e.g., assuming that the scene will change as the video progresses). - In another embodiment, multiple common color palettes (e.g., palette clusters) can be identified. For example, in a full video where the camera cuts back and forth between two people having a conversation, the colors associated with each person may vary. A common color palette can be generated for each scene. The common color palettes for each scene can beneficially reduce the file size from the original full video and provide a better-quality compressed video file than a single common color palette covering multiple scenes.
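- The two selection strategies above (largest frames and periodically sampled frames) can be sketched as follows. Using the unique-color count as a stand-in for encoded file size is an assumption made for illustration only.

```python
def periodic_frames(frames, every):
    """Select every `every`-th frame, starting with the first."""
    return frames[::every]

def largest_frames(frames, n=4):
    """Select the `n` frames likeliest to carry the most colors.
    The unique-color count stands in for encoded file size here."""
    return sorted(frames, key=lambda f: len(set(f)), reverse=True)[:n]
```

Either selection then feeds the palette-generation step, trading a full scan of the video for a representative sample.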
-
FIG. 7 shows an illustration of a common color palette according to an embodiment of the present invention. For example, the illustration shows a 2-bit indexed image 710, where each pixel 720 is represented by a number/index, and an image 730, where each number/index corresponds with a color 740. Each pixel may correspond with some value in the color palette (e.g., 0 and 1 in the illustration correspond with black and white, respectively). In some optimization techniques, the image can be encoded in a similar method as shown in FIG. 7. The color information may not be directly associated with image pixel data (e.g., image pixel [0,0] is Red-100), but can be stored in a separate piece of data called a color palette. The color palette may be an array of color elements, in which each element (e.g., a color) is indexed by its position within the array. The image pixels may not contain the full specification of their colors, but can instead contain their indices in the palette. Once the color palette is generated (e.g., the bitmap corresponding with the 2-bit indexed image 710 on the top of FIG. 7), the image 730 can be formed using the color palette (e.g., the checkerboard image on the bottom of FIG. 7). The image 730 can result in a close representation of an original image (e.g., and video preview) that uses less memory or storage. - In some embodiments, a
pixel 720 is associated with a corresponding color 740 in a color palette. For example, pixel [0,0] can be associated with neon green. As discussed, several pixels are used to create an image or frame, and then several images or frames are used to generate the video preview. As illustrated, pixel 720 corresponds with a black color 740 and the pixel next to 720 corresponds with a white color. The common color palette can include only black and white because black and white are the only colors in this image or frame. The other images or frames of the video preview (e.g., 100 other frames or images) can be created using only black and white, so that when all the images or frames that use the common color palette are sequentially ordered to form the video preview, the video preview will comprise the colors in the common color palette. The reduced number of colors that are stored in a color palette (e.g., fewer numbers associated with colors, fewer colors associated with images/frames, a reduced number of colors, etc.) can result in a reduced size in memory or storage for storing the color palette. - Depending on the optimization technique used, the common color palette can be generated from one or more images that contain at least a specified file size. The specified file size can be above a certain threshold (e.g., an image that is above 1-kilobyte (1 k)) or include the maximum file size when compared with other images in the full video (e.g., the first image is 1 k, the second image is 1.5 k, the third image is 2 k, so the specified file size is 2 k and the common color palette can be generated using the third image). The specified file size can be retrieved (e.g., the threshold or maximum file size can be retrieved from a
provider server 220 or user device 230), dynamically determined (e.g., when the request to generate a compressed video file is received), and/or specified by a user operating a user device 230. - Compression can be combined with an optimization technique to further optimize the video preview (e.g., to take advantage of a region of pixels with the same color). For example, the left half of the image may be black (e.g., a black building, a night image in a video preview, etc.). The values associated with the pixels in the image showing the black portion can be compressed by storing one value instead of many. The one value may be the same or similar for each of those pixels on the left half of the image, so the compression can store the single color. The pixels in the left half of the image can reference the single stored color. In another example, the image may contain a frame around the image. The color of the frame can be stored as one color, and each of the pixels or portions of the image that are used to create the frame can reference the one color.
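- The store-one-value-for-a-region idea above can be sketched as run-length encoding over palette indices: a uniform region, such as an all-black half of a frame, collapses to a single (index, count) pair. This is a generic illustration, not the GIF format's actual LZW compression scheme.

```python
def run_length_encode(indices):
    """Collapse runs of identical palette indices into (index, count) pairs,
    so a uniform region is stored as one value plus a length."""
    runs = []
    for index in indices:
        if runs and runs[-1][0] == index:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([index, 1])     # start a new run
    return [tuple(run) for run in runs]
```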
- Any of the clients or servers may utilize any suitable number of subsystems. Examples of such subsystems or components are shown in
FIG. 8. The subsystems shown in FIG. 8 are interconnected via a system bus 875. Additional subsystems such as a printer 874, keyboard 878, fixed disk 879, and monitor 876, which is coupled to display adapter 882, and others are shown. Peripherals and input/output (I/O) devices, which couple to I/O controller 871, can be connected to the computer system by any number of means known in the art, such as input/output (I/O) port 877 (e.g., USB, FireWire®). For example, I/O port 877 or external interface 881 (e.g., Ethernet, Wi-Fi, etc.) can be used to connect the computer apparatus to a wide area network such as the Internet, a mouse input device, or a scanner. The interconnection via system bus allows the central processor 873, which may include one or more processors, to communicate with each subsystem and to control the execution of instructions from system memory 872 or the fixed disk 879 (such as a hard drive or optical disk), as well as the exchange of information between subsystems. The system memory 872 and/or the fixed disk 879 may embody a computer readable medium. Any of the data mentioned herein can be output from one component to another component and can be output to the user. - It should be understood that any of the embodiments of the present invention can be implemented in the form of control logic using hardware (e.g., an application specific integrated circuit or field programmable gate array) and/or using computer software with a generally programmable processor in a modular or integrated manner. As used herein, a processor includes a multi-core processor on a same integrated chip, or multiple processing units on a single circuit board or networked. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement embodiments of the present invention using hardware and a combination of hardware and software.
- Any of the software components or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java®, C++, or Perl using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions or commands on a computer readable medium for storage and/or transmission. Suitable media include random access memory (RAM), read only memory (ROM), a magnetic medium such as a hard drive or a floppy disk, an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like. The computer readable medium may be any combination of such storage or transmission devices.
- Such programs may also be encoded and transmitted using carrier signals adapted for transmission via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet. As such, a computer readable medium according to an embodiment of the present invention may be created using a data signal encoded with such programs. Computer readable media encoded with the program code may be packaged with a compatible device or provided separately from other devices (e.g., via Internet download). Any such computer readable medium may reside on or within a single computer program product (e.g. a hard drive, a CD, or an entire computer system), and may be present on or within different computer program products within a system or network. A computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.
- Any of the methods described herein may be totally or partially performed with a computer system including one or more processors, which can be configured to perform the steps. Thus, embodiments can be directed to computer systems configured to perform the steps of any of the methods described herein, potentially with different components performing a respective step or a respective group of steps. Although presented as numbered steps, steps of the methods herein can be performed at the same time or in a different order. Additionally, portions of these steps may be used with portions of other steps from other methods. Also, all or portions of a step may be optional. Additionally, any of the steps of any of the methods can be performed with modules, circuits, or other means for performing these steps.
- The specific details of particular embodiments may be combined in any suitable manner without departing from the spirit and scope of embodiments of the invention. However, other embodiments of the invention may be directed to specific embodiments relating to each individual aspect, or specific combinations of these individual aspects.
- The above description of exemplary embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teaching above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.
- A recitation of “a”, “an” or “the” is intended to mean “one or more” unless specifically indicated to the contrary.
Claims (20)
1. A method of creating a video preview, the method comprising:
receiving, at the provider server, a request to generate a video preview of a full video, the request specifying a portion of the full video;
receiving at least the portion of the full video at the provider server;
receiving an identifier specifying a device or an application for displaying the video preview;
determining an encoding technique based on the identifier to generate the video preview, wherein the video preview is of the portion of the full video;
creating, by the provider server, the video preview from the full video based on the determined encoding technique; and
providing, by the provider server, the video preview to a user device.
2. The method of claim 1 , wherein the encoding technique is dependent on the identification of a type of user device that submitted the request to generate the video preview.
3. The method of claim 1 , wherein the encoding technique is dependent on the type of application that will display the video preview at the user device.
4. The method of claim 1 , wherein the encoding technique includes a graphics interchange format (GIF), MP4 container, a H.264 video codec, an advanced audio coding (AAC) audio codec, WebM container, VP8 video codec, or an Ogg Vorbis audio codec.
5. The method of claim 1 , wherein the video preview is a first video preview generated using a first encoding technique, and the method further comprising:
generating a second video preview using a second encoding technique, wherein the second video preview is generated to allow the user device to share the second video preview through a particular medium; and
providing both the first video preview and second video preview to the user device.
6. The method of claim 1 , further comprising:
downloading the full video through a plurality of video streams, wherein the plurality of video streams include video content from the full video; and
generating one or more video previews from video content received through the plurality of video streams.
7. The method of claim 1 , wherein the video preview is a first video preview, the determined encoding technique is a first encoding technique, and the method further comprises:
determining a second encoding technique, wherein the second encoding technique is different than the first encoding technique, and wherein the second encoding technique is used to share the second video preview with a different device or application than the first encoding technique; and
creating a second video preview using a second encoding technique.
8. The method of claim 1 , wherein the encoding technique includes a palette-based size optimization by generating a common color palette for the video preview and limiting the video preview to the common color palette.
9. A method of compressing a video file, the method comprising:
receiving, at a computer, a request to generate a compressed video file, the request specifying at least a portion of a full video to be used in creating the compressed video file, wherein the specified portion of the full video comprises a plurality of images;
determining, by a computer, a palette-based optimization technique to generate the compressed video file;
analyzing, by a computer, the plurality of images using the palette-based optimization technique to determine at least one common color palette, each to be used to generate multiple compressed images of the compressed video file;
specifying, by a computer, a first multiple compressed images to be generated using a first common color palette; and
creating, by a computer, the compressed video file from the plurality of images of the specified portion of the full video such that the first multiple compressed images are rendered using the first common color palette when the compressed video file is viewed.
10. The method of claim 9, further comprising providing, by the computer, the compressed video file to a user device.
11. The method of claim 9, wherein the common color palette for the compressed video file is limited to a single frame in the compressed video file.
12. The method of claim 9, wherein the at least one common color palette is generated from one or more images that contain at least a specified file size.
13. The method of claim 12, wherein the specified file size is above a threshold.
14. The method of claim 12, wherein the specified file size corresponds to a maximum file size when compared with other images in the portion of the full video.
15. The method of claim 9, wherein the common color palette is generated from one or more images that are selected periodically from the specified portion of the full video.
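Claims 12 through 15 differ only in which images feed the palette analysis. A sketch, where "size" stands in for an image's encoded byte length (larger encoded images typically hold more detail, hence richer palettes); the sizes and names are invented for illustration:

```python
def over_threshold(images, sizes, threshold):
    """Images whose encoded size is above a threshold (claims 12-13)."""
    return [img for img, size in zip(images, sizes) if size > threshold]

def largest(images, sizes):
    """The image with the maximum encoded size (claim 14)."""
    return max(zip(images, sizes), key=lambda pair: pair[1])[0]

def periodic(images, period):
    """Every period-th image of the specified portion (claim 15)."""
    return images[::period]

images = ["img0", "img1", "img2", "img3"]
sizes = [10_000, 55_000, 48_000, 70_000]     # encoded bytes, invented
big = over_threshold(images, sizes, 50_000)  # images over the threshold
best = largest(images, sizes)                # single biggest image
sampled = periodic(images, 2)                # every second image
```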
16. The method of claim 9, further comprising:
before creating the compressed video file, analyzing the specified portion of the full video;
determining a plurality of scenes in the specified portion of the full video based on the analysis; and
generating one or more common color palettes for each of the plurality of scenes.
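One way to realize the scene-based variant above: detect scene cuts, then build a palette per scene. The cut detector here (shared-color overlap between adjacent frames) is a crude stand-in for real histogram or motion analysis, and all thresholds and names are assumptions.

```python
from collections import Counter

def split_scenes(frames, min_shared):
    """Start a new scene wherever adjacent frames share fewer than
    min_shared colors -- a crude stand-in for scene-cut detection."""
    scenes, current = [], [frames[0]]
    for prev, frame in zip(frames, frames[1:]):
        if len(set(prev) & set(frame)) < min_shared:
            scenes.append(current)
            current = []
        current.append(frame)
    scenes.append(current)
    return scenes

def scene_palettes(scenes, palette_size):
    """Generate one common color palette per detected scene."""
    return [
        [c for c, _ in Counter(px for f in scene for px in f).most_common(palette_size)]
        for scene in scenes
    ]

red, blue = (255, 0, 0), (0, 0, 255)
frames = [[red, red], [red, red], [blue, blue], [blue, blue]]
scenes = split_scenes(frames, min_shared=1)   # cut at the red-to-blue jump
palettes = scene_palettes(scenes, palette_size=4)
```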
17. The method of claim 9, further comprising:
before creating the compressed video file, analyzing the specified portion of the full video;
determining a plurality of scenes in the specified portion of the full video;
determining an image that is representative of each scene; and
generating a common color palette that combines colors from the representative images.
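The representative-image variant can be sketched by taking one image per scene (the middle frame, an arbitrary heuristic; the claim does not fix the choice) and merging each representative's dominant colors into a single combined palette. Names and the per-scene color count are illustrative.

```python
from collections import Counter

def representative(scene):
    """Pick the middle frame as the scene's representative image."""
    return scene[len(scene) // 2]

def combined_palette(scenes, colors_per_scene):
    """Combine dominant colors of each scene's representative image
    into one common color palette."""
    palette = []
    for scene in scenes:
        for color, _ in Counter(representative(scene)).most_common(colors_per_scene):
            if color not in palette:   # keep the combined palette duplicate-free
                palette.append(color)
    return palette

red, green = (255, 0, 0), (0, 255, 0)
scenes = [
    [[red, red], [red, red], [red, red]],   # scene 1: all red
    [[green, green], [green, green]],       # scene 2: all green
]
palette = combined_palette(scenes, colors_per_scene=1)
```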
18. The method of claim 9, wherein the common color palette comprises a union of colors identified in the analyzed images.
19. The method of claim 9, wherein the palette-based optimization technique includes an indexed color technique to manage colors in each frame.
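An indexed color technique, in miniature: each frame stores small palette indices rather than full RGB triples, and the shared palette reconstructs the colors on render (the GIF format works this way). Function names are illustrative.

```python
def index_frame(frame, palette):
    """Encode a frame as palette indices instead of raw RGB triples."""
    lookup = {color: i for i, color in enumerate(palette)}
    return [lookup[px] for px in frame]

def render_frame(indices, palette):
    """Recover the RGB frame from its indices and the shared palette."""
    return [palette[i] for i in indices]

palette = [(255, 0, 0), (0, 0, 255)]
frame = [(255, 0, 0), (0, 0, 255), (255, 0, 0)]
indices = index_frame(frame, palette)    # one small integer per pixel
restored = render_frame(indices, palette)
```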
20. A computer product comprising a non-transitory computer readable medium storing a plurality of instructions that when executed control a computer system to create a video preview, the instructions comprising:
receive a request to generate a video preview of a full video, the request specifying a portion of the full video;
receive at least the portion of the full video;
receive an identifier specifying a device or an application for displaying the video preview;
determine an encoding technique based on the identifier to generate the video preview, wherein the video preview is of the portion of the full video;
create the video preview from the full video based on the determined encoding technique; and
provide the video preview to a user device.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/173,732 US20140219634A1 (en) | 2013-02-05 | 2014-02-05 | Video preview creation based on environment |
US14/937,557 US9881646B2 (en) | 2013-02-05 | 2015-11-10 | Video preview creation with audio |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361761096P | 2013-02-05 | 2013-02-05 | |
US201361822105P | 2013-05-10 | 2013-05-10 | |
US201361847996P | 2013-07-18 | 2013-07-18 | |
US201361905772P | 2013-11-18 | 2013-11-18 | |
US14/173,732 US20140219634A1 (en) | 2013-02-05 | 2014-02-05 | Video preview creation based on environment |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140219634A1 (en) | 2014-08-07 |
Family
ID=51259288
Family Applications (11)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/173,715 Active 2034-02-07 US9349413B2 (en) | 2013-02-05 | 2014-02-05 | User interface for video preview creation |
US14/173,697 Active US9530452B2 (en) | 2013-02-05 | 2014-02-05 | Video preview creation with link |
US14/173,745 Active 2034-08-20 US9589594B2 (en) | 2013-02-05 | 2014-02-05 | Generation of layout of videos |
US14/173,732 Abandoned US20140219634A1 (en) | 2013-02-05 | 2014-02-05 | Video preview creation based on environment |
US14/173,753 Active 2034-03-27 US9767845B2 (en) | 2013-02-05 | 2014-02-05 | Activating a video based on location in screen |
US14/173,740 Active 2034-03-26 US9244600B2 (en) | 2013-02-05 | 2014-02-05 | Video preview creation with audio |
US14/937,557 Active US9881646B2 (en) | 2013-02-05 | 2015-11-10 | Video preview creation with audio |
US15/091,358 Active US9852762B2 (en) | 2013-02-05 | 2016-04-05 | User interface for video preview creation |
US15/449,174 Active 2034-12-28 US10373646B2 (en) | 2013-02-05 | 2017-03-03 | Generation of layout of videos |
US15/668,465 Abandoned US20180019002A1 (en) | 2013-02-05 | 2017-08-03 | Activating a video based on location in screen |
US15/882,422 Active US10643660B2 (en) | 2013-02-05 | 2018-01-29 | Video preview creation with audio |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/173,715 Active 2034-02-07 US9349413B2 (en) | 2013-02-05 | 2014-02-05 | User interface for video preview creation |
US14/173,697 Active US9530452B2 (en) | 2013-02-05 | 2014-02-05 | Video preview creation with link |
US14/173,745 Active 2034-08-20 US9589594B2 (en) | 2013-02-05 | 2014-02-05 | Generation of layout of videos |
Family Applications After (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/173,753 Active 2034-03-27 US9767845B2 (en) | 2013-02-05 | 2014-02-05 | Activating a video based on location in screen |
US14/173,740 Active 2034-03-26 US9244600B2 (en) | 2013-02-05 | 2014-02-05 | Video preview creation with audio |
US14/937,557 Active US9881646B2 (en) | 2013-02-05 | 2015-11-10 | Video preview creation with audio |
US15/091,358 Active US9852762B2 (en) | 2013-02-05 | 2016-04-05 | User interface for video preview creation |
US15/449,174 Active 2034-12-28 US10373646B2 (en) | 2013-02-05 | 2017-03-03 | Generation of layout of videos |
US15/668,465 Abandoned US20180019002A1 (en) | 2013-02-05 | 2017-08-03 | Activating a video based on location in screen |
US15/882,422 Active US10643660B2 (en) | 2013-02-05 | 2018-01-29 | Video preview creation with audio |
Country Status (1)
Country | Link |
---|---|
US (11) | US9349413B2 (en) |
Cited By (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160196852A1 (en) * | 2015-01-05 | 2016-07-07 | Gopro, Inc. | Media identifier generation for camera-captured media |
US9530452B2 (en) | 2013-02-05 | 2016-12-27 | Alc Holdings, Inc. | Video preview creation with link |
US20170075526A1 (en) * | 2010-12-02 | 2017-03-16 | Instavid Llc | Lithe clip survey facilitation systems and methods |
US9666232B2 (en) | 2014-08-20 | 2017-05-30 | Gopro, Inc. | Scene and activity identification in video summary generation based on motion detected in a video |
US9679605B2 (en) | 2015-01-29 | 2017-06-13 | Gopro, Inc. | Variable playback speed template for video editing application |
US9721611B2 (en) | 2015-10-20 | 2017-08-01 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US9754159B2 (en) | 2014-03-04 | 2017-09-05 | Gopro, Inc. | Automatic generation of video from spherical content using location-based metadata |
US9761278B1 (en) | 2016-01-04 | 2017-09-12 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content |
US9794632B1 (en) | 2016-04-07 | 2017-10-17 | Gopro, Inc. | Systems and methods for synchronization based on audio track changes in video editing |
US9792502B2 (en) | 2014-07-23 | 2017-10-17 | Gopro, Inc. | Generating video summaries for a video using video summary templates |
US9812175B2 (en) | 2016-02-04 | 2017-11-07 | Gopro, Inc. | Systems and methods for annotating a video |
US9838731B1 (en) | 2016-04-07 | 2017-12-05 | Gopro, Inc. | Systems and methods for audio track selection in video editing with audio mixing option |
US9836853B1 (en) | 2016-09-06 | 2017-12-05 | Gopro, Inc. | Three-dimensional convolutional neural networks for video highlight detection |
US9894393B2 (en) | 2015-08-31 | 2018-02-13 | Gopro, Inc. | Video encoding for reduced streaming latency |
US9911223B2 (en) * | 2016-05-13 | 2018-03-06 | Yahoo Holdings, Inc. | Automatic video segment selection method and apparatus |
US9922682B1 (en) | 2016-06-15 | 2018-03-20 | Gopro, Inc. | Systems and methods for organizing video files |
US9972066B1 (en) | 2016-03-16 | 2018-05-15 | Gopro, Inc. | Systems and methods for providing variable image projection for spherical visual content |
US9998769B1 (en) | 2016-06-15 | 2018-06-12 | Gopro, Inc. | Systems and methods for transcoding media files |
US10002641B1 (en) | 2016-10-17 | 2018-06-19 | Gopro, Inc. | Systems and methods for determining highlight segment sets |
US10045120B2 (en) | 2016-06-20 | 2018-08-07 | Gopro, Inc. | Associating audio with three-dimensional objects in videos |
US10083718B1 (en) | 2017-03-24 | 2018-09-25 | Gopro, Inc. | Systems and methods for editing videos based on motion |
US10109319B2 (en) | 2016-01-08 | 2018-10-23 | Gopro, Inc. | Digital media editing |
US10127943B1 (en) | 2017-03-02 | 2018-11-13 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US10187690B1 (en) | 2017-04-24 | 2019-01-22 | Gopro, Inc. | Systems and methods to detect and correlate user responses to media content |
US10186012B2 (en) | 2015-05-20 | 2019-01-22 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10185895B1 (en) | 2017-03-23 | 2019-01-22 | Gopro, Inc. | Systems and methods for classifying activities captured within images |
US10185891B1 (en) | 2016-07-08 | 2019-01-22 | Gopro, Inc. | Systems and methods for compact convolutional neural networks |
US10204273B2 (en) | 2015-10-20 | 2019-02-12 | Gopro, Inc. | System and method of providing recommendations of moments of interest within video clips post capture |
US10250894B1 (en) | 2016-06-15 | 2019-04-02 | Gopro, Inc. | Systems and methods for providing transcoded portions of a video |
US10262639B1 (en) | 2016-11-08 | 2019-04-16 | Gopro, Inc. | Systems and methods for detecting musical features in audio content |
US10268898B1 (en) | 2016-09-21 | 2019-04-23 | Gopro, Inc. | Systems and methods for determining a sample frame order for analyzing a video via segments |
US10284809B1 (en) | 2016-11-07 | 2019-05-07 | Gopro, Inc. | Systems and methods for intelligently synchronizing events in visual content with musical features in audio content |
US10282632B1 (en) | 2016-09-21 | 2019-05-07 | Gopro, Inc. | Systems and methods for determining a sample frame order for analyzing a video |
US10341712B2 (en) | 2016-04-07 | 2019-07-02 | Gopro, Inc. | Systems and methods for audio track selection in video editing |
US10339443B1 (en) | 2017-02-24 | 2019-07-02 | Gopro, Inc. | Systems and methods for processing convolutional neural network operations using textures |
US10360945B2 (en) | 2011-08-09 | 2019-07-23 | Gopro, Inc. | User interface for editing digital media objects |
US10395119B1 (en) | 2016-08-10 | 2019-08-27 | Gopro, Inc. | Systems and methods for determining activities performed during video capture |
US10395122B1 (en) | 2017-05-12 | 2019-08-27 | Gopro, Inc. | Systems and methods for identifying moments in videos |
US10402656B1 (en) | 2017-07-13 | 2019-09-03 | Gopro, Inc. | Systems and methods for accelerating video analysis |
US10402698B1 (en) | 2017-07-10 | 2019-09-03 | Gopro, Inc. | Systems and methods for identifying interesting moments within videos |
US10402938B1 (en) | 2016-03-31 | 2019-09-03 | Gopro, Inc. | Systems and methods for modifying image distortion (curvature) for viewing distance in post capture |
US10469909B1 (en) | 2016-07-14 | 2019-11-05 | Gopro, Inc. | Systems and methods for providing access to still images derived from a video |
US10474877B2 (en) | 2015-09-22 | 2019-11-12 | Google Llc | Automated effects generation for animated content |
US10534966B1 (en) | 2017-02-02 | 2020-01-14 | Gopro, Inc. | Systems and methods for identifying activities and/or events represented in a video |
US10614114B1 (en) | 2017-07-10 | 2020-04-07 | Gopro, Inc. | Systems and methods for creating compilations based on hierarchical clustering |
CN111435995A (en) * | 2019-01-15 | 2020-07-21 | 北京字节跳动网络技术有限公司 | Method, device and system for generating dynamic picture |
US11138207B2 (en) | 2015-09-22 | 2021-10-05 | Google Llc | Integrated dynamic interface for expression-based retrieval of expressive media content |
CN113728591A (en) * | 2019-04-19 | 2021-11-30 | 微软技术许可有限责任公司 | Previewing video content referenced by hyperlinks entered in comments |
US11678031B2 (en) | 2019-04-19 | 2023-06-13 | Microsoft Technology Licensing, Llc | Authoring comments including typed hyperlinks that reference video content |
US11785194B2 (en) | 2019-04-19 | 2023-10-10 | Microsoft Technology Licensing, Llc | Contextually-aware control of a user interface displaying a video and related user text |
Families Citing this family (109)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9826197B2 (en) | 2007-01-12 | 2017-11-21 | Activevideo Networks, Inc. | Providing television broadcasts over a managed network and interactive content over an unmanaged network to a client device |
US9773059B2 (en) * | 2010-11-09 | 2017-09-26 | Storagedna, Inc. | Tape data management |
EP2815582B1 (en) | 2012-01-09 | 2019-09-04 | ActiveVideo Networks, Inc. | Rendering of an interactive lean-backward user interface on a television |
US9800945B2 (en) | 2012-04-03 | 2017-10-24 | Activevideo Networks, Inc. | Class-based intelligent multiplexing over unmanaged networks |
USD732049S1 (en) | 2012-11-08 | 2015-06-16 | Uber Technologies, Inc. | Computing device display screen with electronic summary or receipt graphical user interface |
US9591339B1 (en) | 2012-11-27 | 2017-03-07 | Apple Inc. | Agnostic media delivery system |
US9774917B1 (en) | 2012-12-10 | 2017-09-26 | Apple Inc. | Channel bar user interface |
US10200761B1 (en) | 2012-12-13 | 2019-02-05 | Apple Inc. | TV side bar user interface |
US9532111B1 (en) | 2012-12-18 | 2016-12-27 | Apple Inc. | Devices and method for providing remote control hints on a display |
US10521188B1 (en) | 2012-12-31 | 2019-12-31 | Apple Inc. | Multi-user TV user interface |
WO2014145921A1 (en) | 2013-03-15 | 2014-09-18 | Activevideo Networks, Inc. | A multiple-mode system and method for providing user selectable video content |
EP3005712A1 (en) | 2013-06-06 | 2016-04-13 | ActiveVideo Networks, Inc. | Overlay rendering of user interface onto source video |
US9270964B1 (en) * | 2013-06-24 | 2016-02-23 | Google Inc. | Extracting audio components of a portion of video to facilitate editing audio of the video |
US9620169B1 (en) * | 2013-07-26 | 2017-04-11 | Dreamtek, Inc. | Systems and methods for creating a processed video output |
USD745550S1 (en) * | 2013-12-02 | 2015-12-15 | Microsoft Corporation | Display screen with animated graphical user interface |
USD745551S1 (en) * | 2014-02-21 | 2015-12-15 | Microsoft Corporation | Display screen with animated graphical user interface |
US20150277686A1 (en) * | 2014-03-25 | 2015-10-01 | ScStan, LLC | Systems and Methods for the Real-Time Modification of Videos and Images Within a Social Network Format |
US9832418B2 (en) * | 2014-04-15 | 2017-11-28 | Google Inc. | Displaying content between loops of a looping media item |
US9788029B2 (en) | 2014-04-25 | 2017-10-10 | Activevideo Networks, Inc. | Intelligent multiplexing using class-based, multi-dimensioned decision logic for managed networks |
US10158847B2 (en) * | 2014-06-19 | 2018-12-18 | Vefxi Corporation | Real—time stereo 3D and autostereoscopic 3D video and image editing |
CN111078110B (en) | 2014-06-24 | 2023-10-24 | 苹果公司 | Input device and user interface interactions |
JP6482578B2 (en) | 2014-06-24 | 2019-03-13 | Apple Inc. | Column interface for navigating in the user interface
KR102252448B1 (en) | 2014-09-12 | 2021-05-14 | 삼성전자주식회사 | Method for controlling and an electronic device thereof |
US9998518B2 (en) * | 2014-09-18 | 2018-06-12 | Multipop Llc | Media platform for adding synchronized content to media with a duration |
US20170034568A1 (en) * | 2014-09-19 | 2017-02-02 | Panasonic Intellectual Property Management Co., Ltd. | Video audio processing device, video audio processing method, and program |
US20160127807A1 (en) * | 2014-10-29 | 2016-05-05 | EchoStar Technologies, L.L.C. | Dynamically determined audiovisual content guidebook |
US10523985B2 (en) | 2014-12-24 | 2019-12-31 | Activevideo Networks, Inc. | Managing deep and shallow buffers in a thin-client device of a digital media distribution network |
US10264293B2 (en) * | 2014-12-24 | 2019-04-16 | Activevideo Networks, Inc. | Systems and methods for interleaving video streams on a client device |
US20160212487A1 (en) * | 2015-01-19 | 2016-07-21 | Srinivas Rao | Method and system for creating seamless narrated videos using real time streaming media |
US10375444B2 (en) * | 2015-02-13 | 2019-08-06 | Performance and Privacy Ireland Limited | Partial video pre-fetch |
US20160275989A1 (en) * | 2015-03-16 | 2016-09-22 | OZ ehf | Multimedia management system for generating a video clip from a video file |
CN104837050B (en) * | 2015-03-23 | 2018-09-04 | 腾讯科技(北京)有限公司 | A kind of information processing method and terminal |
EP3086321B1 (en) * | 2015-04-24 | 2022-07-06 | ARRIS Enterprises LLC | Designating partial recordings as personalized multimedia clips |
WO2016189347A1 (en) * | 2015-05-22 | 2016-12-01 | Playsight Interactive Ltd. | Event based video generation |
US9727749B2 (en) * | 2015-06-08 | 2017-08-08 | Microsoft Technology Licensing, Llc | Limited-access functionality accessible at login screen |
US20170026721A1 (en) * | 2015-06-17 | 2017-01-26 | Ani-View Ltd. | System and Methods Thereof for Auto-Playing Video Content on Mobile Devices |
US20160372155A1 (en) * | 2015-06-19 | 2016-12-22 | Elmer Tolentino, JR. | Video bit processing |
WO2017004195A1 (en) * | 2015-06-29 | 2017-01-05 | Google Inc. | Transmitting application data for on-device demos |
US9715901B1 (en) * | 2015-06-29 | 2017-07-25 | Twitter, Inc. | Video preview generation |
KR101708318B1 (en) * | 2015-07-23 | 2017-02-20 | 엘지전자 주식회사 | Mobile terminal and control method for the mobile terminal |
US10417317B2 (en) | 2015-07-27 | 2019-09-17 | Adp, Llc | Web page profiler |
US10324600B2 (en) | 2015-07-27 | 2019-06-18 | Adp, Llc | Web page generation system |
US10742764B2 (en) * | 2015-07-27 | 2020-08-11 | Adp, Llc | Web page generation system |
US10521100B2 (en) | 2015-08-28 | 2019-12-31 | Facebook, Inc. | Systems and methods for providing interactivity for panoramic media content |
US10521099B2 (en) * | 2015-08-28 | 2019-12-31 | Facebook, Inc. | Systems and methods for providing interactivity for panoramic media content |
US20170060404A1 (en) * | 2015-08-28 | 2017-03-02 | Facebook, Inc. | Systems and methods for providing interactivity for panoramic media content |
US9923941B2 (en) * | 2015-11-05 | 2018-03-20 | International Business Machines Corporation | Method and system for dynamic proximity-based media sharing |
CN105528427B (en) * | 2015-12-08 | 2019-05-10 | 腾讯科技(深圳)有限公司 | Sharing method and device in media file processing method, social application |
CN105635837B (en) * | 2015-12-30 | 2019-04-19 | 努比亚技术有限公司 | A kind of video broadcasting method and device |
US9620140B1 (en) * | 2016-01-12 | 2017-04-11 | Raytheon Company | Voice pitch modification to increase command and control operator situational awareness |
US11012719B2 (en) * | 2016-03-08 | 2021-05-18 | DISH Technologies L.L.C. | Apparatus, systems and methods for control of sporting event presentation based on viewer engagement |
US20170285912A1 (en) * | 2016-03-30 | 2017-10-05 | Google Inc. | Methods, systems, and media for media guidance |
US9762971B1 (en) * | 2016-04-26 | 2017-09-12 | Amazon Technologies, Inc. | Techniques for providing media content browsing |
DK201670582A1 (en) | 2016-06-12 | 2018-01-02 | Apple Inc | Identifying applications on which content is available |
DK201670581A1 (en) | 2016-06-12 | 2018-01-08 | Apple Inc | Device-level authorization for viewing content |
US20170374423A1 (en) * | 2016-06-24 | 2017-12-28 | Glen J. Anderson | Crowd-sourced media playback adjustment |
CN107634974A (en) * | 2016-07-15 | 2018-01-26 | 中兴通讯股份有限公司 | A kind of data transmission method and device |
EP3501170A4 (en) * | 2016-08-19 | 2020-01-01 | Oiid, LLC | Interactive music creation and playback method and system |
US20180102143A1 (en) * | 2016-10-12 | 2018-04-12 | Lr Acquisition, Llc | Modification of media creation techniques and camera behavior based on sensor-driven events |
KR20230111276A (en) | 2016-10-26 | 2023-07-25 | 애플 인크. | User interfaces for browsing content from multiple content applications on an electronic device |
US20180157381A1 (en) * | 2016-12-02 | 2018-06-07 | Facebook, Inc. | Systems and methods for media item selection within a grid-based content feed |
CN106604086B (en) * | 2016-12-08 | 2019-06-04 | 武汉斗鱼网络科技有限公司 | The played in full screen method and system of preview video in Android application |
US11481816B2 (en) * | 2017-02-06 | 2022-10-25 | Meta Platforms, Inc. | Indications for sponsored content items within media items |
US20200137321A1 (en) * | 2017-06-28 | 2020-04-30 | Sourcico Ltd. | Pulsating Image |
CN107277617A (en) * | 2017-07-26 | 2017-10-20 | 深圳Tcl新技术有限公司 | Generation method, television set and the computer-readable recording medium of preview video |
US11323398B1 (en) * | 2017-07-31 | 2022-05-03 | Snap Inc. | Systems, devices, and methods for progressive attachments |
CN107484019A (en) * | 2017-08-03 | 2017-12-15 | 乐蜜有限公司 | The dissemination method and device of a kind of video file |
US20190069006A1 (en) * | 2017-08-29 | 2019-02-28 | Western Digital Technologies, Inc. | Seeking in live-transcoded videos |
CN109660852B (en) * | 2017-10-10 | 2021-06-15 | 武汉斗鱼网络科技有限公司 | Video preview method, storage medium, device and system before release of recorded video |
CN107995535B (en) * | 2017-11-28 | 2019-11-26 | 百度在线网络技术(北京)有限公司 | A kind of method, apparatus, equipment and computer storage medium showing video |
US20190199763A1 (en) * | 2017-12-22 | 2019-06-27 | mindHIVE Inc. | Systems and methods for previewing content |
USD875774S1 (en) * | 2018-01-04 | 2020-02-18 | Panasonic Intellectual Property Management Co., Ltd. | Display screen with graphical user interface |
CN111989645B (en) * | 2018-03-28 | 2022-03-29 | 华为技术有限公司 | Video previewing method and electronic equipment |
CN108401194B (en) * | 2018-04-27 | 2020-06-30 | 广州酷狗计算机科技有限公司 | Time stamp determination method, apparatus and computer-readable storage medium |
CN108769816B (en) * | 2018-04-28 | 2021-08-31 | 腾讯科技(深圳)有限公司 | Video playing method, device and storage medium |
CN108762866B (en) * | 2018-05-09 | 2021-08-13 | 北京酷我科技有限公司 | Short audio rolling display method |
DK201870354A1 (en) | 2018-06-03 | 2019-12-20 | Apple Inc. | Setup procedures for an electronic device |
CN113383292A (en) * | 2018-11-26 | 2021-09-10 | 图片巴特勒股份有限公司 | Demonstration file generation method |
CN109640188B (en) * | 2018-12-28 | 2020-02-07 | 北京微播视界科技有限公司 | Video preview method and device, electronic equipment and computer readable storage medium |
KR20200094525A (en) * | 2019-01-30 | 2020-08-07 | 삼성전자주식회사 | Electronic device for processing a file including a plurality of related data |
US11204959B1 (en) | 2019-02-06 | 2021-12-21 | Snap Inc. | Automated ranking of video clips |
CN109831678A (en) * | 2019-02-26 | 2019-05-31 | 中国联合网络通信集团有限公司 | Short method for processing video frequency and system |
US11683565B2 (en) | 2019-03-24 | 2023-06-20 | Apple Inc. | User interfaces for interacting with channels that provide content that plays in a media browsing application |
US11962836B2 (en) | 2019-03-24 | 2024-04-16 | Apple Inc. | User interfaces for a media browsing application |
WO2020198221A1 (en) | 2019-03-24 | 2020-10-01 | Apple Inc. | User interfaces for viewing and accessing content on an electronic device |
US11057682B2 (en) | 2019-03-24 | 2021-07-06 | Apple Inc. | User interfaces including selectable representations of content items |
JP7172796B2 (en) * | 2019-03-28 | 2022-11-16 | コニカミノルタ株式会社 | Display system, display control device and display control method |
US11863837B2 (en) | 2019-05-31 | 2024-01-02 | Apple Inc. | Notification of augmented reality content on an electronic device |
WO2020243645A1 (en) | 2019-05-31 | 2020-12-03 | Apple Inc. | User interfaces for a podcast browsing and playback application |
US11106748B2 (en) * | 2019-06-28 | 2021-08-31 | Atlassian Pty Ltd. | Systems and methods for generating digital content item previews |
US10990263B1 (en) | 2019-09-03 | 2021-04-27 | Gopro, Inc. | Interface for trimming videos |
US11503264B2 (en) | 2019-09-13 | 2022-11-15 | Netflix, Inc. | Techniques for modifying audiovisual media titles to improve audio transitions |
US11336947B2 (en) | 2019-09-13 | 2022-05-17 | Netflix, Inc. | Audio transitions when streaming audiovisual media titles |
US11567992B2 (en) * | 2019-09-18 | 2023-01-31 | Camilo Lopez | System and method for generating a video |
US11368549B2 (en) * | 2019-12-05 | 2022-06-21 | Microsoft Technology Licensing, Llc | Platform for multi-stream sampling and visualization |
US11843838B2 (en) | 2020-03-24 | 2023-12-12 | Apple Inc. | User interfaces for accessing episodes of a content series |
CN111541945B (en) * | 2020-04-28 | 2022-05-10 | Oppo广东移动通信有限公司 | Video playing progress control method and device, storage medium and electronic equipment |
US11237708B2 (en) | 2020-05-27 | 2022-02-01 | Bank Of America Corporation | Video previews for interactive videos using a markup language |
US11461535B2 (en) | 2020-05-27 | 2022-10-04 | Bank Of America Corporation | Video buffering for interactive videos using a markup language |
US11899895B2 (en) | 2020-06-21 | 2024-02-13 | Apple Inc. | User interfaces for setting up an electronic device |
US11381797B2 (en) | 2020-07-16 | 2022-07-05 | Apple Inc. | Variable audio for audio-visual content |
US11720229B2 (en) | 2020-12-07 | 2023-08-08 | Apple Inc. | User interfaces for browsing and presenting content |
US11550844B2 (en) | 2020-12-07 | 2023-01-10 | Td Ameritrade Ip Company, Inc. | Transformation of database entries for improved association with related content items |
US11934640B2 (en) | 2021-01-29 | 2024-03-19 | Apple Inc. | User interfaces for record labels |
CN113268622A (en) * | 2021-04-21 | 2021-08-17 | 北京达佳互联信息技术有限公司 | Picture browsing method and device, electronic equipment and storage medium |
US11741995B1 (en) * | 2021-09-29 | 2023-08-29 | Gopro, Inc. | Systems and methods for switching between video views |
US11910064B2 (en) * | 2021-11-04 | 2024-02-20 | Rovi Guides, Inc. | Methods and systems for providing preview images for a media asset |
CN113990494B (en) * | 2021-12-24 | 2022-03-25 | 浙江大学 | Tic disorder auxiliary screening system based on video data |
US11729003B1 (en) * | 2022-06-04 | 2023-08-15 | Uab 360 It | Optimized access control for network services |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5630105A (en) * | 1992-09-30 | 1997-05-13 | Hudson Soft Co., Ltd. | Multimedia system for processing a variety of images together with sound |
US5745103A (en) * | 1995-08-02 | 1998-04-28 | Microsoft Corporation | Real-time palette negotiations in multimedia presentations |
US5781183A (en) * | 1992-10-01 | 1998-07-14 | Hudson Soft Co., Ltd. | Image processing apparatus including selecting function for displayed colors |
US20010023200A1 (en) * | 2000-01-12 | 2001-09-20 | Kentaro Horikawa | Data creation device for image display and record medium |
US6335985B1 (en) * | 1998-01-07 | 2002-01-01 | Kabushiki Kaisha Toshiba | Object extraction apparatus |
US20020112244A1 (en) * | 2000-12-19 | 2002-08-15 | Shih-Ping Liou | Collaborative video delivery over heterogeneous networks |
US20040010687A1 (en) * | 2002-06-11 | 2004-01-15 | Yuichi Futa | Content distributing system and data-communication controlling device |
US20050019015A1 (en) * | 2003-06-02 | 2005-01-27 | Jonathan Ackley | System and method of programmatic window control for consumer video players |
US20070074269A1 (en) * | 2002-02-22 | 2007-03-29 | Hai Hua | Video processing device, video recorder/playback module, and methods for use therewith |
US7324119B1 (en) * | 2003-07-14 | 2008-01-29 | Adobe Systems Incorporated | Rendering color images and text |
US20080273804A1 (en) * | 2007-05-02 | 2008-11-06 | Motorola, Inc. | Image Transformation |
US20090249421A1 (en) * | 2008-03-26 | 2009-10-01 | Xiaomei Liu | Distributing digital video content to multiple end-user devices |
US20100260468A1 (en) * | 2009-04-14 | 2010-10-14 | Maher Khatib | Multi-user remote video editing |
US20110007087A1 (en) * | 2009-07-13 | 2011-01-13 | Echostar Technologies L.L.C. | Systems and methods for a common image data array file |
US20120079529A1 (en) * | 2010-09-29 | 2012-03-29 | Verizon Patent And Licensing, Inc. | Multiple device storefront for video provisioning system |
US20120099641A1 (en) * | 2010-10-22 | 2012-04-26 | Motorola, Inc. | Method and apparatus for adjusting video compression parameters for encoding source video based on a viewer's environment |
US20120120095A1 (en) * | 2009-07-31 | 2012-05-17 | Hitoshi Yoshitani | Image processing device, control method for image processing device, control program for image processing device, and recording medium in which control program is recorded |
US20130007198A1 (en) * | 2011-06-30 | 2013-01-03 | Infosys Technologies, Ltd. | Methods for recommending personalized content based on profile and context information and devices thereof |
US20130097238A1 (en) * | 2011-10-18 | 2013-04-18 | Bruce Rogers | Platform-Specific Notification Delivery Channel |
US20130129317A1 (en) * | 2011-06-03 | 2013-05-23 | James A. Moorer | Client Playback of Streaming Video Adapted for Smooth Transitions and Viewing in Advance Display Modes |
US20140359656A1 (en) * | 2013-05-31 | 2014-12-04 | Adobe Systems Incorporated | Placing unobtrusive overlays in video content |
Family Cites Families (96)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5355450A (en) * | 1992-04-10 | 1994-10-11 | Avid Technology, Inc. | Media composer with adjustable source material compression |
US6263507B1 (en) * | 1996-12-05 | 2001-07-17 | Interval Research Corporation | Browser for use in navigating a body of information, with particular application to browsing information represented by audiovisual data |
CA2202106C (en) * | 1997-04-08 | 2002-09-17 | Mgi Software Corp. | A non-timeline, non-linear digital multimedia composition method and system |
US6526577B1 (en) | 1998-12-01 | 2003-02-25 | United Video Properties, Inc. | Enhanced interactive program guide |
US7181691B2 (en) | 1999-09-16 | 2007-02-20 | Sharp Laboratories Of America, Inc. | Audiovisual information management system with presentation service |
JP3617413B2 (en) | 2000-06-02 | 2005-02-02 | 日産自動車株式会社 | Control device for electromagnetically driven valve |
WO2002057898A1 (en) * | 2001-01-16 | 2002-07-25 | Brainshark, Inc. | Method of and system for composing, delivering, viewing and managing audio-visual presentations over a communications network |
US7169996B2 (en) | 2002-11-12 | 2007-01-30 | Medialab Solutions Llc | Systems and methods for generating music using data/music data file transmitted/received via a network |
CA2525587C (en) | 2003-05-15 | 2015-08-11 | Comcast Cable Holdings, Llc | Method and system for playing video |
US20050144016A1 (en) | 2003-12-03 | 2005-06-30 | Christopher Hewitt | Method, software and apparatus for creating audio compositions |
US9715898B2 (en) | 2003-12-16 | 2017-07-25 | Core Wireless Licensing S.A.R.L. | Method and device for compressed-domain video editing |
US20050193341A1 (en) | 2004-02-27 | 2005-09-01 | Hayward Anthony D. | System for aggregating, processing and delivering video footage, documents, audio files and graphics |
JP4385974B2 (en) | 2004-05-13 | 2009-12-16 | ソニー株式会社 | Image display method, image processing apparatus, program, and recording medium |
EP1762095A1 (en) | 2004-06-17 | 2007-03-14 | Koninklijke Philips Electronics N.V. | Personalized summaries using personality attributes |
JP4427733B2 (en) * | 2004-07-16 | 2010-03-10 | ソニー株式会社 | VIDEO / AUDIO PROCESSING SYSTEM, AMPLIFIER DEVICE, AND AUDIO DELAY TREATMENT METHOD |
US20060077817A1 (en) * | 2004-09-13 | 2006-04-13 | Seo Kang S | Method and apparatus for reproducing data from recording medium using local storage |
KR20070049164A (en) * | 2004-09-13 | 2007-05-10 | 엘지전자 주식회사 | Method and apparatus for reproducing data from recording medium using local storage |
US20060059504A1 (en) | 2004-09-14 | 2006-03-16 | Eduardo Gomez | Method for selecting a preview of a media work |
US7558291B2 (en) * | 2005-02-24 | 2009-07-07 | Cisco Technology, Inc. | Device and mechanism to manage consistent delay across multiple participants in a multimedia experience |
US20060204214A1 (en) | 2005-03-14 | 2006-09-14 | Microsoft Corporation | Picture line audio augmentation |
KR100654455B1 (en) | 2005-05-26 | 2006-12-06 | 삼성전자주식회사 | Apparatus and method for providing addition information using extension subtitle file |
KR100710752B1 (en) | 2005-06-03 | 2007-04-24 | 삼성전자주식회사 | System and apparatus and method for generating panorama image |
US20070006262A1 (en) | 2005-06-30 | 2007-01-04 | Microsoft Corporation | Automatic content presentation |
US20070112926A1 (en) * | 2005-11-03 | 2007-05-17 | Hannon Brett | Meeting Management Method and System |
US20070118801A1 (en) | 2005-11-23 | 2007-05-24 | Vizzme, Inc. | Generation and playback of multimedia presentations |
US20070136750A1 (en) | 2005-12-13 | 2007-06-14 | Microsoft Corporation | Active preview for media items |
US8607287B2 (en) | 2005-12-29 | 2013-12-10 | United Video Properties, Inc. | Interactive media guidance system having multiple devices |
US20070157240A1 (en) | 2005-12-29 | 2007-07-05 | United Video Properties, Inc. | Interactive media guidance system having multiple devices |
US8358704B2 (en) * | 2006-04-04 | 2013-01-22 | Qualcomm Incorporated | Frame level multimedia decoding with frame information table |
US20080036917A1 (en) * | 2006-04-07 | 2008-02-14 | Mark Pascarella | Methods and systems for generating and delivering navigatable composite videos |
US8077153B2 (en) * | 2006-04-19 | 2011-12-13 | Microsoft Corporation | Precise selection techniques for multi-touch screens |
US7844354B2 (en) | 2006-07-27 | 2010-11-30 | International Business Machines Corporation | Adjusting the volume of an audio element responsive to a user scrolling through a browser window |
US7623755B2 (en) * | 2006-08-17 | 2009-11-24 | Adobe Systems Incorporated | Techniques for positioning audio and video clips |
US7844352B2 (en) | 2006-10-20 | 2010-11-30 | Lehigh University | Iterative matrix processor based implementation of real-time model predictive control |
US9729829B2 (en) | 2006-12-05 | 2017-08-08 | Crackle, Inc. | Video sharing platform providing for posting content to other websites |
JP2008146453A (en) | 2006-12-12 | 2008-06-26 | Sony Corp | Picture signal output device and operation input processing method |
AU2006252196B2 (en) | 2006-12-21 | 2009-05-14 | Canon Kabushiki Kaisha | Scrolling Interface |
US20080301579A1 (en) | 2007-06-04 | 2008-12-04 | Yahoo! Inc. | Interactive interface for navigating, previewing, and accessing multimedia content |
US20090094159A1 (en) | 2007-10-05 | 2009-04-09 | Yahoo! Inc. | Stock video purchase |
KR101434498B1 (en) | 2007-10-29 | 2014-09-29 | 삼성전자주식회사 | Portable terminal and method for managing dynamic image thereof |
JP2011505596A (en) | 2007-11-30 | 2011-02-24 | スリーエム イノベイティブ プロパティズ カンパニー | Method for fabricating an optical waveguide |
US7840661B2 (en) * | 2007-12-28 | 2010-11-23 | Yahoo! Inc. | Creating and editing media objects using web requests |
US8181197B2 (en) * | 2008-02-06 | 2012-05-15 | Google Inc. | System and method for voting on popular video intervals |
EP3654271A1 (en) | 2008-02-20 | 2020-05-20 | JAMMIT, Inc. | System for learning and mixing music |
US20090241163A1 (en) * | 2008-03-21 | 2009-09-24 | Samsung Electronics Co. Ltd. | Broadcast picture display method and a digital broadcast receiver using the same |
US8139072B2 (en) | 2008-04-14 | 2012-03-20 | Mcgowan Scott James | Network hardware graphics adapter compression |
US8364698B2 (en) | 2008-07-11 | 2013-01-29 | Videosurf, Inc. | Apparatus and software system for and method of performing a visual-relevance-rank subsequent search |
US20100023984A1 (en) | 2008-07-28 | 2010-01-28 | John Christopher Davi | Identifying Events in Addressable Video Stream for Generation of Summary Video Stream |
US11832024B2 (en) * | 2008-11-20 | 2023-11-28 | Comcast Cable Communications, Llc | Method and apparatus for delivering video and video-related content at sub-asset level |
WO2010068175A2 (en) | 2008-12-10 | 2010-06-17 | Muvee Technologies Pte Ltd | Creating a new video production by intercutting between multiple video clips |
US8655466B2 (en) | 2009-02-27 | 2014-02-18 | Apple Inc. | Correlating changes in audio |
US8259816B2 (en) | 2009-03-12 | 2012-09-04 | MIST Innovations, Inc. | System and method for streaming video to a mobile device |
US8527646B2 (en) | 2009-04-14 | 2013-09-03 | Avid Technology Canada Corp. | Rendering in a multi-user video editing system |
US20120039582A1 (en) * | 2009-04-20 | 2012-02-16 | Koninklijke Philips Electronics N.V. | Verification and synchronization of files obtained separately from a video content |
US8392004B2 (en) | 2009-04-30 | 2013-03-05 | Apple Inc. | Automatic audio adjustment |
JP5523752B2 (en) | 2009-07-08 | 2014-06-18 | 京セラ株式会社 | Display control device |
US8438484B2 (en) * | 2009-11-06 | 2013-05-07 | Sony Corporation | Video preview module to enhance online video experience |
US8736561B2 (en) * | 2010-01-06 | 2014-05-27 | Apple Inc. | Device, method, and graphical user interface with content display modes and display rotation heuristics |
BR112012008744A2 (en) | 2010-01-29 | 2016-03-08 | Hewlett Packard Development Co | portable computer, method for reproducing audio on a portable computer and electronic audio system |
CN102196001B (en) | 2010-03-15 | 2014-03-19 | 腾讯科技(深圳)有限公司 | Movie file downloading device and method |
US8291452B1 (en) * | 2011-05-20 | 2012-10-16 | Google Inc. | Interface for watching a stream of videos |
US20120017150A1 (en) * | 2010-07-15 | 2012-01-19 | MySongToYou, Inc. | Creating and disseminating of user generated media over a network |
WO2012075295A2 (en) | 2010-12-02 | 2012-06-07 | Webshoz, Inc. | Systems, devices and methods for streaming multiple different media content in a digital container |
US9160960B2 (en) | 2010-12-02 | 2015-10-13 | Microsoft Technology Licensing, Llc | Video preview based browsing user interface |
US8923607B1 (en) | 2010-12-08 | 2014-12-30 | Google Inc. | Learning sports highlights using event detection |
CA2825927A1 (en) | 2011-01-28 | 2012-08-02 | Eye IO, LLC | Color conversion based on an hvs model |
JP2012165313A (en) | 2011-02-09 | 2012-08-30 | Sony Corp | Editing device, method, and program |
US8244103B1 (en) * | 2011-03-29 | 2012-08-14 | Capshore, Llc | User interface for method for creating a custom track |
US9779097B2 (en) * | 2011-04-28 | 2017-10-03 | Sony Corporation | Platform agnostic UI/UX and human interaction paradigm |
AU2011202182B1 (en) | 2011-05-11 | 2011-10-13 | Frequency Ip Holdings, Llc | Creation and presentation of selective digital content feeds |
US9135371B2 (en) | 2011-05-09 | 2015-09-15 | Google Inc. | Contextual video browsing |
US20120323897A1 (en) | 2011-06-14 | 2012-12-20 | Microsoft Corporation | Query-dependent audio/video clip search result previews |
JP2013009218A (en) * | 2011-06-27 | 2013-01-10 | Sony Corp | Editing device, method, and program |
WO2013010177A2 (en) | 2011-07-14 | 2013-01-17 | Surfari Inc. | Online groups interacting around common content |
US9973800B2 (en) * | 2011-08-08 | 2018-05-15 | Netflix, Inc. | Merchandising streaming video content |
US10706096B2 (en) | 2011-08-18 | 2020-07-07 | Apple Inc. | Management of local and remote media items |
US10684768B2 (en) * | 2011-10-14 | 2020-06-16 | Autodesk, Inc. | Enhanced target selection for a touch-based input enabled user interface |
US9111579B2 (en) * | 2011-11-14 | 2015-08-18 | Apple Inc. | Media editing with multi-camera media clips |
US20130163963A1 (en) | 2011-12-21 | 2013-06-27 | Cory Crosland | System and method for generating music videos from synchronized user-video recorded content |
US20130191776A1 (en) | 2012-01-20 | 2013-07-25 | The Other Media Limited | Method of activating activatable content on an electronic device display |
KR20130099515A (en) * | 2012-02-29 | 2013-09-06 | 삼성전자주식회사 | Apparatas and method of displaying a contents using for key frame in a terminal |
US9378283B2 (en) | 2012-04-23 | 2016-06-28 | Excalibur Ip, Llc | Instant search results with page previews |
US8959453B1 (en) * | 2012-05-10 | 2015-02-17 | Google Inc. | Autohiding video player controls |
US20130317951A1 (en) | 2012-05-25 | 2013-11-28 | Rawllin International Inc. | Auto-annotation of video content for scrolling display |
US9027064B1 (en) | 2012-06-06 | 2015-05-05 | Susie Opare-Abetia | Unified publishing platform that seamlessly delivers content by streaming for on-demand playback and by store-and-forward delivery for delayed playback |
US9158440B1 (en) * | 2012-08-01 | 2015-10-13 | Google Inc. | Display of information areas in a view of a graphical interface |
US9179232B2 (en) | 2012-09-17 | 2015-11-03 | Nokia Technologies Oy | Method and apparatus for associating audio objects with content and geo-location |
US8610730B1 (en) | 2012-09-19 | 2013-12-17 | Google Inc. | Systems and methods for transferring images and information from a mobile computing device to a computer monitor for display |
US8717500B1 (en) | 2012-10-15 | 2014-05-06 | At&T Intellectual Property I, L.P. | Relational display of images |
KR102126292B1 (en) | 2012-11-19 | 2020-06-24 | 삼성전자주식회사 | Method for displaying a screen in mobile terminal and the mobile terminal therefor |
CN103873944B (en) * | 2012-12-18 | 2017-04-12 | 瑞昱半导体股份有限公司 | Method and computer program product for establishing playback timing correlation between different contents to be playbacked |
US9349413B2 (en) | 2013-02-05 | 2016-05-24 | Alc Holdings, Inc. | User interface for video preview creation |
US9077956B1 (en) * | 2013-03-22 | 2015-07-07 | Amazon Technologies, Inc. | Scene identification |
US20140325568A1 (en) | 2013-04-26 | 2014-10-30 | Microsoft Corporation | Dynamic creation of highlight reel tv show |
US9071867B1 (en) | 2013-07-17 | 2015-06-30 | Google Inc. | Delaying automatic playing of a video based on visibility of the video |
GB2520319A (en) | 2013-11-18 | 2015-05-20 | Nokia Corp | Method, apparatus and computer program product for capturing images |
- 2014
  - 2014-02-05 US US14/173,715 patent/US9349413B2/en active Active
  - 2014-02-05 US US14/173,697 patent/US9530452B2/en active Active
  - 2014-02-05 US US14/173,745 patent/US9589594B2/en active Active
  - 2014-02-05 US US14/173,732 patent/US20140219634A1/en not_active Abandoned
  - 2014-02-05 US US14/173,753 patent/US9767845B2/en active Active
  - 2014-02-05 US US14/173,740 patent/US9244600B2/en active Active
- 2015
  - 2015-11-10 US US14/937,557 patent/US9881646B2/en active Active
- 2016
  - 2016-04-05 US US15/091,358 patent/US9852762B2/en active Active
- 2017
  - 2017-03-03 US US15/449,174 patent/US10373646B2/en active Active
  - 2017-08-03 US US15/668,465 patent/US20180019002A1/en not_active Abandoned
- 2018
  - 2018-01-29 US US15/882,422 patent/US10643660B2/en active Active
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5630105A (en) * | 1992-09-30 | 1997-05-13 | Hudson Soft Co., Ltd. | Multimedia system for processing a variety of images together with sound |
US5781183A (en) * | 1992-10-01 | 1998-07-14 | Hudson Soft Co., Ltd. | Image processing apparatus including selecting function for displayed colors |
US5745103A (en) * | 1995-08-02 | 1998-04-28 | Microsoft Corporation | Real-time palette negotiations in multimedia presentations |
US6335985B1 (en) * | 1998-01-07 | 2002-01-01 | Kabushiki Kaisha Toshiba | Object extraction apparatus |
US20010023200A1 (en) * | 2000-01-12 | 2001-09-20 | Kentaro Horikawa | Data creation device for image display and record medium |
US20020112244A1 (en) * | 2000-12-19 | 2002-08-15 | Shih-Ping Liou | Collaborative video delivery over heterogeneous networks |
US20070074269A1 (en) * | 2002-02-22 | 2007-03-29 | Hai Hua | Video processing device, video recorder/playback module, and methods for use therewith |
US20040010687A1 (en) * | 2002-06-11 | 2004-01-15 | Yuichi Futa | Content distributing system and data-communication controlling device |
US20050019015A1 (en) * | 2003-06-02 | 2005-01-27 | Jonathan Ackley | System and method of programmatic window control for consumer video players |
US7324119B1 (en) * | 2003-07-14 | 2008-01-29 | Adobe Systems Incorporated | Rendering color images and text |
US20080273804A1 (en) * | 2007-05-02 | 2008-11-06 | Motorola, Inc. | Image Transformation |
US20090249421A1 (en) * | 2008-03-26 | 2009-10-01 | Xiaomei Liu | Distributing digital video content to multiple end-user devices |
US20100260468A1 (en) * | 2009-04-14 | 2010-10-14 | Maher Khatib | Multi-user remote video editing |
US20110007087A1 (en) * | 2009-07-13 | 2011-01-13 | Echostar Technologies L.L.C. | Systems and methods for a common image data array file |
US20120120095A1 (en) * | 2009-07-31 | 2012-05-17 | Hitoshi Yoshitani | Image processing device, control method for image processing device, control program for image processing device, and recording medium in which control program is recorded |
US20120079529A1 (en) * | 2010-09-29 | 2012-03-29 | Verizon Patent And Licensing, Inc. | Multiple device storefront for video provisioning system |
US20120099641A1 (en) * | 2010-10-22 | 2012-04-26 | Motorola, Inc. | Method and apparatus for adjusting video compression parameters for encoding source video based on a viewer's environment |
US20130129317A1 (en) * | 2011-06-03 | 2013-05-23 | James A. Moorer | Client Playback of Streaming Video Adapted for Smooth Transitions and Viewing in Advance Display Modes |
US20130007198A1 (en) * | 2011-06-30 | 2013-01-03 | Infosys Technologies, Ltd. | Methods for recommending personalized content based on profile and context information and devices thereof |
US20130097238A1 (en) * | 2011-10-18 | 2013-04-18 | Bruce Rogers | Platform-Specific Notification Delivery Channel |
US20140359656A1 (en) * | 2013-05-31 | 2014-12-04 | Adobe Systems Incorporated | Placing unobtrusive overlays in video content |
Cited By (114)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170075526A1 (en) * | 2010-12-02 | 2017-03-16 | Instavid Llc | Lithe clip survey facilitation systems and methods |
US10042516B2 (en) * | 2010-12-02 | 2018-08-07 | Instavid Llc | Lithe clip survey facilitation systems and methods |
US10360945B2 (en) | 2011-08-09 | 2019-07-23 | Gopro, Inc. | User interface for editing digital media objects |
US9589594B2 (en) | 2013-02-05 | 2017-03-07 | Alc Holdings, Inc. | Generation of layout of videos |
US9530452B2 (en) | 2013-02-05 | 2016-12-27 | Alc Holdings, Inc. | Video preview creation with link |
US10373646B2 (en) | 2013-02-05 | 2019-08-06 | Alc Holdings, Inc. | Generation of layout of videos |
US10643660B2 (en) | 2013-02-05 | 2020-05-05 | Alc Holdings, Inc. | Video preview creation with audio |
US9881646B2 (en) | 2013-02-05 | 2018-01-30 | Alc Holdings, Inc. | Video preview creation with audio |
US9852762B2 (en) | 2013-02-05 | 2017-12-26 | Alc Holdings, Inc. | User interface for video preview creation |
US9767845B2 (en) | 2013-02-05 | 2017-09-19 | Alc Holdings, Inc. | Activating a video based on location in screen |
US10084961B2 (en) | 2014-03-04 | 2018-09-25 | Gopro, Inc. | Automatic generation of video from spherical content using audio/visual analysis |
US9754159B2 (en) | 2014-03-04 | 2017-09-05 | Gopro, Inc. | Automatic generation of video from spherical content using location-based metadata |
US9760768B2 (en) | 2014-03-04 | 2017-09-12 | Gopro, Inc. | Generation of video from spherical content using edit maps |
US9792502B2 (en) | 2014-07-23 | 2017-10-17 | Gopro, Inc. | Generating video summaries for a video using video summary templates |
US10776629B2 (en) | 2014-07-23 | 2020-09-15 | Gopro, Inc. | Scene and activity identification in video summary generation |
US11776579B2 (en) | 2014-07-23 | 2023-10-03 | Gopro, Inc. | Scene and activity identification in video summary generation |
US10074013B2 (en) | 2014-07-23 | 2018-09-11 | Gopro, Inc. | Scene and activity identification in video summary generation |
US9685194B2 (en) | 2014-07-23 | 2017-06-20 | Gopro, Inc. | Voice-based video tagging |
US10339975B2 (en) | 2014-07-23 | 2019-07-02 | Gopro, Inc. | Voice-based video tagging |
US11069380B2 (en) | 2014-07-23 | 2021-07-20 | Gopro, Inc. | Scene and activity identification in video summary generation |
US9984293B2 (en) | 2014-07-23 | 2018-05-29 | Gopro, Inc. | Video scene classification by activity |
US10192585B1 (en) | 2014-08-20 | 2019-01-29 | Gopro, Inc. | Scene and activity identification in video summary generation based on motion detected in a video |
US10643663B2 (en) | 2014-08-20 | 2020-05-05 | Gopro, Inc. | Scene and activity identification in video summary generation based on motion detected in a video |
US9666232B2 (en) | 2014-08-20 | 2017-05-30 | Gopro, Inc. | Scene and activity identification in video summary generation based on motion detected in a video |
US10559324B2 (en) | 2015-01-05 | 2020-02-11 | Gopro, Inc. | Media identifier generation for camera-captured media |
US10096341B2 (en) | 2015-01-05 | 2018-10-09 | Gopro, Inc. | Media identifier generation for camera-captured media |
US20160196852A1 (en) * | 2015-01-05 | 2016-07-07 | Gopro, Inc. | Media identifier generation for camera-captured media |
US9734870B2 (en) * | 2015-01-05 | 2017-08-15 | Gopro, Inc. | Media identifier generation for camera-captured media |
US9966108B1 (en) | 2015-01-29 | 2018-05-08 | Gopro, Inc. | Variable playback speed template for video editing application |
US9679605B2 (en) | 2015-01-29 | 2017-06-13 | Gopro, Inc. | Variable playback speed template for video editing application |
US10395338B2 (en) | 2015-05-20 | 2019-08-27 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US11164282B2 (en) | 2015-05-20 | 2021-11-02 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10679323B2 (en) | 2015-05-20 | 2020-06-09 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10535115B2 (en) | 2015-05-20 | 2020-01-14 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10529052B2 (en) | 2015-05-20 | 2020-01-07 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10529051B2 (en) | 2015-05-20 | 2020-01-07 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US11688034B2 (en) | 2015-05-20 | 2023-06-27 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10817977B2 (en) | 2015-05-20 | 2020-10-27 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10186012B2 (en) | 2015-05-20 | 2019-01-22 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US9894393B2 (en) | 2015-08-31 | 2018-02-13 | Gopro, Inc. | Video encoding for reduced streaming latency |
US10474877B2 (en) | 2015-09-22 | 2019-11-12 | Google Llc | Automated effects generation for animated content |
US11138207B2 (en) | 2015-09-22 | 2021-10-05 | Google Llc | Integrated dynamic interface for expression-based retrieval of expressive media content |
US10789478B2 (en) | 2015-10-20 | 2020-09-29 | Gopro, Inc. | System and method of providing recommendations of moments of interest within video clips post capture |
US9721611B2 (en) | 2015-10-20 | 2017-08-01 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US11468914B2 (en) | 2015-10-20 | 2022-10-11 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US10204273B2 (en) | 2015-10-20 | 2019-02-12 | Gopro, Inc. | System and method of providing recommendations of moments of interest within video clips post capture |
US10186298B1 (en) | 2015-10-20 | 2019-01-22 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US10748577B2 (en) | 2015-10-20 | 2020-08-18 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US10095696B1 (en) | 2016-01-04 | 2018-10-09 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content field |
US10423941B1 (en) | 2016-01-04 | 2019-09-24 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content |
US9761278B1 (en) | 2016-01-04 | 2017-09-12 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content |
US11238520B2 (en) | 2016-01-04 | 2022-02-01 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content |
US10607651B2 (en) | 2016-01-08 | 2020-03-31 | Gopro, Inc. | Digital media editing |
US11049522B2 (en) | 2016-01-08 | 2021-06-29 | Gopro, Inc. | Digital media editing |
US10109319B2 (en) | 2016-01-08 | 2018-10-23 | Gopro, Inc. | Digital media editing |
US11238635B2 (en) | 2016-02-04 | 2022-02-01 | Gopro, Inc. | Digital media editing |
US10565769B2 (en) | 2016-02-04 | 2020-02-18 | Gopro, Inc. | Systems and methods for adding visual elements to video content |
US10083537B1 (en) | 2016-02-04 | 2018-09-25 | Gopro, Inc. | Systems and methods for adding a moving visual element to a video |
US9812175B2 (en) | 2016-02-04 | 2017-11-07 | Gopro, Inc. | Systems and methods for annotating a video |
US10769834B2 (en) | 2016-02-04 | 2020-09-08 | Gopro, Inc. | Digital media editing |
US10424102B2 (en) | 2016-02-04 | 2019-09-24 | Gopro, Inc. | Digital media editing |
US10740869B2 (en) | 2016-03-16 | 2020-08-11 | Gopro, Inc. | Systems and methods for providing variable image projection for spherical visual content |
US9972066B1 (en) | 2016-03-16 | 2018-05-15 | Gopro, Inc. | Systems and methods for providing variable image projection for spherical visual content |
US10817976B2 (en) | 2016-03-31 | 2020-10-27 | Gopro, Inc. | Systems and methods for modifying image distortion (curvature) for viewing distance in post capture |
US10402938B1 (en) | 2016-03-31 | 2019-09-03 | Gopro, Inc. | Systems and methods for modifying image distortion (curvature) for viewing distance in post capture |
US11398008B2 (en) | 2016-03-31 | 2022-07-26 | Gopro, Inc. | Systems and methods for modifying image distortion (curvature) for viewing distance in post capture |
US9794632B1 (en) | 2016-04-07 | 2017-10-17 | Gopro, Inc. | Systems and methods for synchronization based on audio track changes in video editing |
US10341712B2 (en) | 2016-04-07 | 2019-07-02 | Gopro, Inc. | Systems and methods for audio track selection in video editing |
US9838731B1 (en) | 2016-04-07 | 2017-12-05 | Gopro, Inc. | Systems and methods for audio track selection in video editing with audio mixing option |
US9911223B2 (en) * | 2016-05-13 | 2018-03-06 | Yahoo Holdings, Inc. | Automatic video segment selection method and apparatus |
US10565771B2 (en) | 2016-05-13 | 2020-02-18 | Oath Inc. | Automatic video segment selection method and apparatus |
US10250894B1 (en) | 2016-06-15 | 2019-04-02 | Gopro, Inc. | Systems and methods for providing transcoded portions of a video |
US10645407B2 (en) | 2016-06-15 | 2020-05-05 | Gopro, Inc. | Systems and methods for providing transcoded portions of a video |
US11470335B2 (en) | 2016-06-15 | 2022-10-11 | Gopro, Inc. | Systems and methods for providing transcoded portions of a video |
US9998769B1 (en) | 2016-06-15 | 2018-06-12 | Gopro, Inc. | Systems and methods for transcoding media files |
US9922682B1 (en) | 2016-06-15 | 2018-03-20 | Gopro, Inc. | Systems and methods for organizing video files |
US10045120B2 (en) | 2016-06-20 | 2018-08-07 | Gopro, Inc. | Associating audio with three-dimensional objects in videos |
US10185891B1 (en) | 2016-07-08 | 2019-01-22 | Gopro, Inc. | Systems and methods for compact convolutional neural networks |
US10469909B1 (en) | 2016-07-14 | 2019-11-05 | Gopro, Inc. | Systems and methods for providing access to still images derived from a video |
US10812861B2 (en) | 2016-07-14 | 2020-10-20 | Gopro, Inc. | Systems and methods for providing access to still images derived from a video |
US11057681B2 (en) | 2016-07-14 | 2021-07-06 | Gopro, Inc. | Systems and methods for providing access to still images derived from a video |
US10395119B1 (en) | 2016-08-10 | 2019-08-27 | Gopro, Inc. | Systems and methods for determining activities performed during video capture |
US9836853B1 (en) | 2016-09-06 | 2017-12-05 | Gopro, Inc. | Three-dimensional convolutional neural networks for video highlight detection |
US10282632B1 (en) | 2016-09-21 | 2019-05-07 | Gopro, Inc. | Systems and methods for determining a sample frame order for analyzing a video |
US10268898B1 (en) | 2016-09-21 | 2019-04-23 | Gopro, Inc. | Systems and methods for determining a sample frame order for analyzing a video via segments |
US10643661B2 (en) | 2016-10-17 | 2020-05-05 | Gopro, Inc. | Systems and methods for determining highlight segment sets |
US10002641B1 (en) | 2016-10-17 | 2018-06-19 | Gopro, Inc. | Systems and methods for determining highlight segment sets |
US10923154B2 (en) | 2016-10-17 | 2021-02-16 | Gopro, Inc. | Systems and methods for determining highlight segment sets |
US10284809B1 (en) | 2016-11-07 | 2019-05-07 | Gopro, Inc. | Systems and methods for intelligently synchronizing events in visual content with musical features in audio content |
US10560657B2 (en) | 2016-11-07 | 2020-02-11 | Gopro, Inc. | Systems and methods for intelligently synchronizing events in visual content with musical features in audio content |
US10546566B2 (en) | 2016-11-08 | 2020-01-28 | Gopro, Inc. | Systems and methods for detecting musical features in audio content |
US10262639B1 (en) | 2016-11-08 | 2019-04-16 | Gopro, Inc. | Systems and methods for detecting musical features in audio content |
US10534966B1 (en) | 2017-02-02 | 2020-01-14 | Gopro, Inc. | Systems and methods for identifying activities and/or events represented in a video |
US10776689B2 (en) | 2017-02-24 | 2020-09-15 | Gopro, Inc. | Systems and methods for processing convolutional neural network operations using textures |
US10339443B1 (en) | 2017-02-24 | 2019-07-02 | Gopro, Inc. | Systems and methods for processing convolutional neural network operations using textures |
US11443771B2 (en) | 2017-03-02 | 2022-09-13 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US10991396B2 (en) | 2017-03-02 | 2021-04-27 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US10127943B1 (en) | 2017-03-02 | 2018-11-13 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US10679670B2 (en) | 2017-03-02 | 2020-06-09 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US10185895B1 (en) | 2017-03-23 | 2019-01-22 | Gopro, Inc. | Systems and methods for classifying activities captured within images |
US10083718B1 (en) | 2017-03-24 | 2018-09-25 | Gopro, Inc. | Systems and methods for editing videos based on motion |
US11282544B2 (en) | 2017-03-24 | 2022-03-22 | Gopro, Inc. | Systems and methods for editing videos based on motion |
US10789985B2 (en) | 2017-03-24 | 2020-09-29 | Gopro, Inc. | Systems and methods for editing videos based on motion |
US10187690B1 (en) | 2017-04-24 | 2019-01-22 | Gopro, Inc. | Systems and methods to detect and correlate user responses to media content |
US10614315B2 (en) | 2017-05-12 | 2020-04-07 | Gopro, Inc. | Systems and methods for identifying moments in videos |
US10395122B1 (en) | 2017-05-12 | 2019-08-27 | Gopro, Inc. | Systems and methods for identifying moments in videos |
US10817726B2 (en) | 2017-05-12 | 2020-10-27 | Gopro, Inc. | Systems and methods for identifying moments in videos |
US10402698B1 (en) | 2017-07-10 | 2019-09-03 | Gopro, Inc. | Systems and methods for identifying interesting moments within videos |
US10614114B1 (en) | 2017-07-10 | 2020-04-07 | Gopro, Inc. | Systems and methods for creating compilations based on hierarchical clustering |
US10402656B1 (en) | 2017-07-13 | 2019-09-03 | Gopro, Inc. | Systems and methods for accelerating video analysis |
CN111435995A (en) * | 2019-01-15 | 2020-07-21 | 北京字节跳动网络技术有限公司 | Method, device and system for generating dynamic picture |
CN113728591A (en) * | 2019-04-19 | 2021-11-30 | 微软技术许可有限责任公司 | Previewing video content referenced by hyperlinks entered in comments |
US11678031B2 (en) | 2019-04-19 | 2023-06-13 | Microsoft Technology Licensing, Llc | Authoring comments including typed hyperlinks that reference video content |
US11785194B2 (en) | 2019-04-19 | 2023-10-10 | Microsoft Technology Licensing, Llc | Contextually-aware control of a user interface displaying a video and related user text |
Also Published As
Publication number | Publication date |
---|---|
US9881646B2 (en) | 2018-01-30 |
US9349413B2 (en) | 2016-05-24 |
US9589594B2 (en) | 2017-03-07 |
US20140219637A1 (en) | 2014-08-07 |
US20160064034A1 (en) | 2016-03-03 |
US20140223482A1 (en) | 2014-08-07 |
US20180019002A1 (en) | 2018-01-18 |
US10373646B2 (en) | 2019-08-06 |
US20160217826A1 (en) | 2016-07-28 |
US20140223307A1 (en) | 2014-08-07 |
US9244600B2 (en) | 2016-01-26 |
US20180218756A1 (en) | 2018-08-02 |
US9852762B2 (en) | 2017-12-26 |
US20170270966A1 (en) | 2017-09-21 |
US10643660B2 (en) | 2020-05-05 |
US20140219629A1 (en) | 2014-08-07 |
US9530452B2 (en) | 2016-12-27 |
US20140223306A1 (en) | 2014-08-07 |
US9767845B2 (en) | 2017-09-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20140219634A1 (en) | Video preview creation based on environment |
US11470405B2 (en) | Network video streaming with trick play based on separate trick play files | |
US9478256B1 (en) | Video editing processor for video cloud server | |
US10812700B2 (en) | Approach to live multi-camera streaming of events with hand-held cameras | |
US9247317B2 (en) | Content streaming with client device trick play index | |
US9154857B2 (en) | ABR live to VOD system and method | |
WO2016138844A1 (en) | Multimedia file live broadcast method, system and server | |
US10698864B2 (en) | Auxiliary manifest file to provide timed metadata | |
US10623816B2 (en) | Method and apparatus for extracting video from high resolution video | |
US11153615B2 (en) | Method and apparatus for streaming panoramic video | |
US20150124048A1 (en) | Switchable multiple video track platform | |
US20140297804A1 (en) | Control of multimedia content streaming through client-server interactions | |
US20140359678A1 (en) | Device video streaming with trick play based on separate trick play files | |
US20150249848A1 (en) | Intelligent Video Quality Adjustment | |
WO2015144735A1 (en) | Methods, devices, and computer programs for improving streaming of partitioned timed media data | |
WO2014193996A2 (en) | Network video streaming with trick play based on separate trick play files | |
TW201720170A (en) | Methods and systems for client interpretation and presentation of zoom-coded content | |
KR101944601B1 (en) | Method for identifying objects across time periods and corresponding device | |
WO2020006632A1 (en) | Tile stream selection for mobile bandwidth optimization | |
US10674111B2 (en) | Systems and methods for profile based media segment rendering | |
US11388455B2 (en) | Method and apparatus for morphing multiple video streams into single video stream | |
KR101603976B1 (en) | Method and apparatus for concatenating video files | |
Kammachi‐Sreedhar et al. | Omnidirectional video delivery with decoder instance reduction | |
US10893331B1 (en) | Subtitle processing for devices with limited memory | |
WO2018222974A1 (en) | Method and apparatus for morphing multiple video streams into single video stream |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: REDUX, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MCINTOSH, DAVID;PENNELLO, CHRIS;REEL/FRAME:032150/0759 Effective date: 20140204 |
|
AS | Assignment |
Owner name: ALC HOLDINGS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:REDUX, INC.;REEL/FRAME:034678/0400 Effective date: 20140624 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |