US20020028026A1 - Extracting photographic images from video - Google Patents
- Publication number
- US20020028026A1 (application Ser. No. 09/933,617)
- Authority
- US
- United States
- Prior art keywords
- video
- frames
- frame
- segments
- generate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/4448—Receiver circuitry for the reception of television signals according to analogue transmission standards for frame-grabbing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
- G06F16/739—Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7847—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
- G06F16/786—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using motion, e.g. object motion or camera motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4053—Super resolution, i.e. output image resolution higher than sensor resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/147—Scene change detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/78—Television signal recording using magnetic recording
- H04N5/782—Television signal recording using magnetic recording on tape
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/78—Television signal recording using magnetic recording
- H04N5/781—Television signal recording using magnetic recording on disks or drums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/84—Television signal recording using optical recording
- H04N5/85—Television signal recording using optical recording on discs or drums
Definitions
- the present invention relates to the field of image processing, and more particularly to automatically extracting photographic images from a video.
- a method and apparatus for generating photographs from a video is disclosed. Segments of the video for which frame-to-frame background motion is less than a threshold are identified and, for each of the segments, the video frames in the segment are combined to generate a photograph representative of the segment.
- FIG. 1 illustrates use of a still image generation system to generate a set of still images from a source video
- FIG. 2 illustrates a business model for providing a still image generation service according to one embodiment
- FIG. 3 illustrates a selection window presented on a display of a still image generation system according to one embodiment
- FIG. 4 illustrates a window of a computer system display in which pages of a video album according to one embodiment are presented
- FIG. 5 illustrates a still image generator according to one embodiment
- FIG. 6 is a flow diagram of still image construction according to one embodiment.
- FIG. 7 is a diagram of a video index displayed on a computer system display according to one embodiment.
- a method and apparatus for generating still images from a video is described.
- the individual frames of the video are analyzed to automatically identify at least three different types of shots: still shots, pan shots and zoom shots.
- when a still shot is identified, multiple video frames from the still shot are combined to create a single high-resolution image.
- For a pan shot, multiple video frames are stitched together to create a high-resolution panoramic image.
- For a zoom shot, multiple video frames are combined to produce a multiple-resolution still image. In shots that include both pan and zoom, a multiple-resolution panoramic image is generated. Because the processing of the input video is automatic, the video can be processed unattended and without the need to learn complicated image editing operations.
- the automatic generation of still images from video may be provided as a service to video camera users.
- a user may deliver a video to a still image generation service which creates a set of high quality still images for the user in return for a fee.
- videos of weddings, parties, vacations, real estate tours, insurance records, etc. may be used to generate a corresponding set of high quality photographic images.
- the video may be physically delivered to the still image generation service in the form of a video recording medium such as a disk or tape, or the video may be uploaded electronically from an end user computer.
- the set of still images generated from the video may likewise be provided to the user either on a physical recording medium (including the medium on which the video was supplied) or by transmission via a communications network.
- the still images may be provided to the end user as a set of printed photographs, or posted on a server computer for viewing or download by the end-user or parties authorized by the end-user.
- the end-user or other authorized party may be allowed to select which of the posted still images to download, paying a fee for each selected still image.
- still images generated from a user-supplied video may be formatted into an electronic album of photographic images referred to herein as a “video album.”
- the video album may be delivered on a recording medium, including the medium on which the source video 10 was recorded, or posted on a user-accessible computer network.
- the album may be prepared automatically, with the individual photographs being arranged based on default criteria such as their order of appearance in the video. Text annotations of the video may be generated automatically based on the corresponding audio track.
- the user may index the individual photographs of the video album according to a number of different types of criteria including, without limitation, order of appearance in the video, nature of the shot (e.g., still image, panoramic image, zoom image), subject matter of the photographs, user preference and so forth.
- the user may also enter text annotations.
- a still image generation service is provided in the form of a video processing kiosk which includes a disk or tape reader into which a user may insert a video recording medium.
- the kiosk includes a video processing engine to identify still, pan and zoom shots as described above and to automatically display a set of high quality still images to the kiosk user.
- the kiosk may then prompt the user to select which of the still images the user wishes to keep.
- the user is given the option of printing the still images using a printing mechanism within the kiosk, to upload the still images to a server computer from which the user may later download the still images, or to have the still images delivered electronically to a destination address supplied by the user (e.g., an email address).
- Full video album services may be provided as discussed above.
- the user may be prompted to pay a fee for initial processing, a fee for each still image selected, or a combination of an initial processing fee and an image selection fee.
- scene cuts in a video are automatically detected to create a set of miniature-view keyframes and corresponding timecodes.
- the miniature-view keyframes referred to herein as thumbnails, may be presented on the display of a computer system to allow a user to select entry points into the video. For example, if the video has been digitized (or recorded in digital form) and is accessible by the computer system, then the user may select a thumbnail of interest to cause the video to begin playing on the display of the computer system starting at the point in the video at which the thumbnail appears. In this way, a navigable index of the video is established, greatly simplifying the activity of searching a video for subject matter of interest.
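The scene-cut indexing described above can be sketched as follows. This is a minimal illustration, not the embodiment's implementation: frames are represented as flat lists of pixel values, and the mean-absolute-difference metric, frame rate, and cut threshold are assumptions for the example.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def build_index(frames, fps=30.0, cut_threshold=50.0):
    """Return [(timecode_seconds, frame_index), ...], one entry per scene.

    The first frame always begins a scene; a new entry is added
    whenever the difference to the previous frame exceeds the cut
    threshold (a detected scene cut)."""
    index = [(0.0, 0)]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > cut_threshold:
            index.append((i / fps, i))
    return index

# Two "scenes": three dark frames followed by three bright frames.
frames = [[10, 10, 10]] * 3 + [[200, 200, 200]] * 3
print(build_index(frames))  # -> [(0.0, 0), (0.1, 3)]
```

Each index entry pairs a keyframe with the timecode at which playback should begin when the corresponding thumbnail is selected.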
- FIG. 1 illustrates use of a still image generation system 12 to generate a set of still images 15 from a source video 10 .
- the source video 10 may be supplied to the still image generation system 12 in a number of forms, including on video recording media such as magnetic tape or disk, optical disk, solid state storage and so forth.
- the source video 10 may be delivered electronically, for example, by uploading the video via a communications network to the still image generation system 12 .
- the source video 10 may be recorded in a number of different formats, including without limitation standard NTSC (National Television System Committee) analog video or any number of digital video formats. In the case of an analog format, the source video 10 is digitized by the still image generation system 12 before further processing is performed.
- the still image generation system 12 is implemented by a programmed general purpose computer system and a set of one or more media readers, such as a cassette or diskette reader.
- the media readers may be installed in the computer system or operated as standalone devices which generate an analog or digital video feed.
- a frame digitizing module (often called a “frame grabber”) may be included in the computer system to receive and digitize an analog video signal supplied from an external analog media reader.
- the external media reader may generate a digital output that can be accepted via a communication port of the computer system.
- the set of still images 15 generated by the still image generation system 12 may be output in a number of forms.
- the still image generation system 12 may include a printing device for generating printed images 19 .
- the set of still images 15 may be recorded on a portable storage medium 21 , including on unused recording space on the medium on which the source video 10 was supplied.
- the set of still images 15 may be output in electronic form appropriate for direct transmission to an end-user viewing system 22 (e.g., via e-mail or electronic courier) or for posting on a server computer 17 that can be accessed via a communications network 20 such as the Internet or other computer network.
- the set of still images 15 may be posted on a server computer accessible via the World Wide Web (the “Web”) so that an end-user may view the posted images using a client computer (e.g., viewing system 22 ) and select which images of the set 15 to download.
- FIG. 2 illustrates a business model for providing a still image generation service according to one embodiment.
- a customer 25 supplies a source video 10 to a still image generation service 26 .
- the source video 10 may be provided, for example, on a portable storage medium or by electronic transmission.
- the still image generation service 26 processes the source video 10 to generate a set of still images 15 .
- the customer provides a fee 27 to the still image generation service in return for the set of still images.
- the fee may be monetary or a supply of information such as profile information that can be resold to advertisers or other parties interested in demographic information.
- the set of still images 15 may be provided as a set of prints, a set of images recorded on a storage medium or by electronic transmission.
- the customer 25 may be permitted to select a subset of the still images, paying a reduced, or per-image fee.
- the fee 27 may be different for the different types of still images depending on a number of factors such as the number of frames that have been combined to produce the still image, the overall size of the still image (e.g., in area or storage requirements), and so forth.
- the still image service 26 may be operated as a drop-off service or as a customer-operated kiosk. In the case of the drop-off service, the customer 25 may drop off (or electronically transmit) the source video 10 and receive the set of still images 15 later (e.g., by pickup or by electronic transmission).
- the customer 25 may insert the source video 10 into a media reader included in the kiosk and stand by while the source video 10 is being processed.
- the customer may interact with a user-interface of the kiosk to specify processing criteria and to select which of the set of still images 15 to keep.
- a still image generation system 26 is implemented by a programmed general purpose computer, such as a desktop or laptop computer of a computer user.
- still image generation software is sold to the user, for example as shrink-wrap or click-wrap software, and installed on the user's computer system. Additional equipment, such as the above-described media reader and playback device may be required.
- the still image generation system 26 may be implemented in the same end-user computer system that is used to provide the viewing system 22 of FIG. 1.
- FIG. 3 illustrates a selection window 30 presented on a display 29 of a still image generation system according to one embodiment.
- the user of the still image generation system (e.g., element 12 of FIG. 1), who may or may not be the person who has requested the set of still images, selects from among thumbnail views of still images (32, 34, 36) presented in the selection window 30, for example, by clicking thumbnails the user wishes to keep.
- each selected thumbnail view of a still image (32, 34, 36) is moved from the selection window 30 to a selections list 39.
- the selected still images 41 may be printed, transmitted or otherwise delivered to the user as they are selected or after all selections have been made.
- the individual still images 32 , 34 , 36 may be obtained from different types of video shots, including pan shots 31 produced by rotation or translation of the video camera, zoom shots 33 produced by zooming the video camera in or out or both, and still shots 35 produced by keeping the video camera stationary or by user-activation of a repetitive capture input which causes a captured frame to be automatically copied to a number of successive frames.
- the still image generation system may make the still images available one-by-one as they are created from the source video 10 , or the entire source video 10 may be processed to generate the complete set of still images before the set of still images is presented to the user of the system.
- FIG. 4 illustrates a window 50 of a computer system display in which pages 51 of a video album 52 according to one embodiment are presented.
- the video album 52 contains separately viewable pages each containing one or more still images ( 53 A, 53 B, 53 C) that have been generated by combining frames of a video.
- Text descriptions 54 A, 54 B, 54 C are associated with each of the images, and may be automatically extracted from the audio track during video processing.
- sound and video icons may be associated with the images in the video album. When a viewer clicks the sound icon 55, a portion of the audio track that corresponds to the video segment used to generate still image 53B is played.
- when a viewer clicks the video icon 56, the video is presented starting at the first frame of the video segment used to generate still image 53A.
- Virtual reality players may also be associated with the still images presented in the video album 52 .
- a panoramic player is invoked to allow the viewer to pan about within panoramic image 53 A when the viewer clicks the PAN button 57 .
- a pan and zoom player is invoked to allow the viewer to pan and zoom within the multiple resolution still image 53 C.
- although the pages 51 of the video album 52 are shown in FIG. 4 as being cascaded over one another, many alternate arrangements of pages may be used.
- the pages 51 may be tiled, or individually selected by any number of scrolling techniques.
- the pages 51 may also be sorted based on a number of different criteria including, but not limited to, order of appearance in the video, nature of the still image (e.g., panorama, multiple resolution still, etc.), and legend text (e.g., grouping pages containing user-specified keywords together).
- the individual still images of the video album 52 may be reorganized within the video album according to such criteria so that, for example, the video album is chronologically ordered or images are grouped according to subject matter.
- FIG. 5 illustrates a still image generator 60 according to one embodiment.
- the still image generator 60 includes a scene change estimator 61 , a still image constructor 67 , and a background motion estimator 65 .
- the scene change estimator 61 compares successive frames of the source video 10 to one another to determine when a transformation of a scene in the video frames exceeds a threshold.
- the effect of the scene change estimator 61 is to segment the sequence of frames in the source video 10 into one or more subsequences of video frames (i.e., video segments or clips), each of which exhibits a scene transformation that is less than a predetermined threshold.
- the background motion estimator 65 and still image constructor 67 process each video segment identified by the scene change estimator 61 to generate a composite still image having pixel values drawn from two or more of the frames in the video segment.
- the predetermined threshold applied by the scene change estimator 61 defines the incremental transformation of a scene which results in construction of a new still image of the still image set 15 .
- the scene change estimator 61 operates by determining a transformation vector for each pair of adjacent video frames in the source video.
- a first frame is considered to be adjacent a second frame if the first frame immediately precedes or succeeds the second frame in a temporal sequence of frames.
- the transformation vector includes a plurality of scalar components that each indicate a measure of change in the scene from one video frame to the next.
- the scalar components of a transformation vector may include measures of the following changes in the scene: translation, scaling, rotation, panning, tilting, skew, color changes and time elapsed.
- the scene change estimator 61 applies a spatial low pass filter to the frames of the source video 10 before computing the transformation deltas between adjacent frames. After being low pass filtered, the individual frames in the source video 10 contain less information than before filtering so that fewer computations are required to determine the transformation deltas.
- transformation deltas are cleared at the beginning of a video segment and then a transformation delta computed for each pair of adjacent frames in the video segment is added to transformation deltas computed for preceding pairs of adjacent frames to accumulate a sum of transformation deltas.
- the sum of transformation deltas represents a transformation between a starting video frame in a video segment and the most recently compared video frame in the video segment.
- the sum of transformation deltas is compared against a predetermined transformation threshold in decision block 63 to determine if the most recently compared video frame has caused the transformation threshold to be exceeded.
- the transformation threshold may be a vector quantity that includes multiple scalar thresholds, including thresholds for color changes, translation, scaling, rotation, panning, tilting, skew of the scene and time elapsed.
- the transformation threshold is dynamically adjusted in order to achieve a desired ratio of video segments to frames in the source video 10 .
- the transformation threshold is dynamically adjusted in order to achieve a desired average video segment size (i.e., a desired number of video frames per video segment).
- a transformation threshold is dynamically adjusted to achieve a desired average elapsed time per video segment.
- any technique for dynamically adjusting the transformation threshold may be used without departing from the spirit and scope of the present invention.
- if the transformation threshold is exceeded, the scene is deemed to have changed at decision block 63 and the video frame that precedes the most recently compared video frame is deemed to be the ending frame of the video segment. Consequently, if a predetermined transformation threshold is used, each video segment of the source video 10 is assured to have an overall transformation that is less than the transformation threshold. If a variable transformation threshold is used, on the other hand, considerable variance in the overall transformation delta of respective video segments may result and it may be necessary to iteratively apply the scene change estimator to reduce the variance in the transformation deltas.
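The accumulate-and-compare loop of the scene change estimator can be sketched as follows. Scalar deltas stand in here for the multi-component transformation vector (translation, scaling, rotation, and so forth), and the threshold value is an illustrative assumption.

```python
def segment_video(pair_deltas, threshold):
    """pair_deltas[i] is the transformation delta between frames i and i+1.

    Returns segments as (start_frame, end_frame) index pairs; each
    segment's accumulated delta stays at or below the threshold."""
    segments = []
    start = 0
    accumulated = 0.0
    for i, delta in enumerate(pair_deltas):
        accumulated += delta
        if accumulated > threshold:
            # Frame i (the one preceding the most recently compared
            # frame i+1) ends the current segment.
            segments.append((start, i))
            start = i + 1
            accumulated = 0.0  # deltas are cleared at a new segment
    segments.append((start, len(pair_deltas)))  # final segment
    return segments

print(segment_video([0.1, 0.1, 0.9, 0.1, 0.1], threshold=0.5))
# -> [(0, 2), (3, 5)]
```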
- FIG. 6 is a flow diagram of still image construction according to one embodiment.
- the scene change estimator effectively resolves the source video 10 into a plurality of video segments each defined by a sequence of frames.
- at block 81, the next video segment is selected for processing. If the video segment is determined to be empty at decision block 83 (i.e., the video segment includes no frames), the end of the video has been reached and still image construction for the source video 10 is completed.
- the number of frames in the video segment is compared against a threshold number in decision block 85 to determine whether the segment has a sufficient number of frames to produce a still image.
- the threshold number of frames may be predetermined or adaptively determined based on the lengths of the segments of the source video 10 .
- the user of the still image generation system may specify the threshold number of frames required to produce a still image or the user may specify a starting value that may be adapted according to the lengths of segments of the source video 10 .
- the user of the still image generation system may control how many still images are generated, setting the threshold value to a high number of frames to reduce the number of video segments from which still images are constructed and setting the threshold value to a lower number to increase the number of video segments from which still images are constructed.
- a target number of still images may be specified so that the threshold number may be automatically increased or decreased during processing to converge on the target number of still images.
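The convergence on a target number of still images might be sketched as a simple feedback adjustment of the frame-count threshold; the starting value, step size, and iteration cap below are assumptions for illustration, not the embodiment's method.

```python
def tune_threshold(segment_lengths, target_count, start=5, step=1,
                   max_iters=100):
    """Return a frame-count threshold under which roughly target_count
    segments are long enough to produce a still image."""
    threshold = start
    for _ in range(max_iters):
        count = sum(1 for n in segment_lengths if n >= threshold)
        if count > target_count:
            threshold += step   # too many stills: be stricter
        elif count < target_count and threshold > 1:
            threshold -= step   # too few stills: be more lenient
        else:
            break
    return threshold

lengths = [3, 8, 15, 40, 60, 6, 22]
t = tune_threshold(lengths, target_count=3)
print(t, sum(1 for n in lengths if n >= t))  # -> 16 3
```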
- if the video segment does not have enough frames to produce a still image, processing of the next video segment begins at block 81. Otherwise, at block 87, the background motion estimator inspects the video segment indicated by the scene change estimator to identify a dominant motion of the scene depicted in those frames. This dominant motion is considered to be a background motion.
- One technique involves identifying features in the video frames (e.g., using edge detection techniques) and tracking the motion of the features from one video frame to the next.
- Features that exhibit statistically aberrant motion relative to other features are considered to be dynamic objects and are temporarily disregarded.
- Motions that are shared by a large number of features (or by large features) are typically caused by changes in the disposition of the camera used to record the video and are considered to be background motions.
- Another technique for identifying background motion in a video segment is to correlate the frames of the video segment to one another based on common regions and then determine the frame to frame offset of those regions. The frame to frame offset can then be used to determine a background motion for the video segment.
- Still other contemplated techniques for identifying background motion in a video segment include, but are not limited to, coarse-to-fine search methods that use spatially hierarchical decompositions of frames in the video segment; measurements of changes in video frame histogram characteristics over time to identify scene changes; filtering to accentuate features in the video segment that can be used for motion identification; optical flow measurement and analysis; pixel format conversion to alternate color representations (including grayscale) to achieve greater processing speed, greater reliability or both; and robust estimation techniques, such as M-estimation, that eliminate elements of the video frames that do not conform to an estimated dominant motion.
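As one hedged illustration of the robust-estimation idea above, the componentwise median of per-feature displacement vectors yields a dominant motion that statistically aberrant (dynamic-object) features cannot pull far. Feature detection and tracking are assumed to have been performed upstream; the displacement values are invented for the example.

```python
def median(values):
    """Middle value of a list (mean of the two middle values if even)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def background_motion(feature_displacements):
    """feature_displacements: one (dx, dy) per tracked feature.

    The componentwise median is robust to a minority of features
    that belong to dynamic objects."""
    dxs = [d[0] for d in feature_displacements]
    dys = [d[1] for d in feature_displacements]
    return (median(dxs), median(dys))

# Most features drift right by ~2 px (a camera pan); one feature is a
# fast-moving dynamic object and is effectively ignored.
motions = [(2, 0), (2, 1), (3, 0), (2, 0), (40, -25)]
print(background_motion(motions))  # -> (2, 0)
```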
- the still image constructor receives the background motion information from the background motion estimator in block 89 and uses the background motion information to register the frames of the video segment to one another.
- Registration refers to spatially correlating video frames in a manner that accounts for changes caused by background motion. By registering the video frames based on background motion information, regions of the frames that exhibit motions that are different from the background motion will appear in a fixed location in only a small number of the registered video frames. That is, the regions move from frame to frame relative to a static background. These regions are considered to be dynamic objects.
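Registration and dynamic-object detection can be sketched with integer pixel shifts: each frame is shifted by its background offset so the static background lands at fixed coordinates, and a variance test across the registered stack flags pixels that still disagree. The wrap-around of np.roll and the variance threshold are simplifications; a real implementation would crop or pad frame borders.

```python
import numpy as np

def register(frames, offsets):
    """Shift each frame by its accumulated background offset (dx, dy)
    so the static background lands at fixed pixel coordinates."""
    return [np.roll(f, shift=(-dy, -dx), axis=(0, 1))
            for f, (dx, dy) in zip(frames, offsets)]

def dynamic_mask(registered, var_threshold=1.0):
    """True where pixel values disagree across the registered frames,
    i.e. where a dynamic object moves relative to the background."""
    stack = np.stack(registered)
    return stack.var(axis=0) > var_threshold

# A static gradient background recorded while panning 1 px per frame.
base = np.tile(np.arange(8), (4, 1)).astype(float)
frames = [np.roll(base, shift=k, axis=1) for k in range(3)]
registered = register(frames, offsets=[(k, 0) for k in range(3)])
print(dynamic_mask(registered).any())  # background aligns -> False
```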
- the still image constructor removes dynamic objects from frames of the video segment to produce a processed sequence of video frames. This technique is described in copending U.S. patent application Ser. No. 09/096,720 filed Jun. 11, 1998, which is hereby incorporated by reference in its entirety.
- the still image constructor generates a still image based on the processed sequence of video frames and the background motion information. Depending on the nature of the background motion, construction of the still image may involve combining two or more processed video frames into a single still image, referred to as a composite image.
- the composite image may be a panoramic image or a high resolution still image.
- a panoramic image is created by stitching two or more processed video frames together and can be used to represent a background scene that has been captured by panning, tilting or translating a camera.
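Stitching might be sketched as pasting registered frames onto a wide canvas at their accumulated horizontal offsets. The integer offsets are assumed inputs here, and a real stitcher would blend rather than overwrite the overlapping seams.

```python
import numpy as np

def stitch_pan(frames, x_offsets):
    """Paste each frame onto a panoramic canvas at its horizontal
    offset; later frames overwrite the overlap region."""
    h, w = frames[0].shape
    panorama = np.zeros((h, max(x_offsets) + w))
    for frame, x in zip(frames, x_offsets):
        panorama[:, x:x + w] = frame
    return panorama

# Three 2x4 frames from a left-to-right pan, 3 px apart.
frames = [np.full((2, 4), v, dtype=float) for v in (1.0, 2.0, 3.0)]
pano = stitch_pan(frames, x_offsets=[0, 3, 6])
print(pano.shape)  # -> (2, 10)
```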
- a high resolution still image is appropriate when the subject of a processed sequence of video frames is a relatively static background scene (i.e., the disposition of the camera used to record the video source is not significantly changed).
- One technique for creating high resolution still images is to analyze the processed sequence of video frames to identify sub-pixel motions between the frames. Sub-pixel motion is caused by slight motions of the camera and can be used to create a composite image that has higher resolution than any of the individual frames captured by the camera. When multiple high resolution still images of the same subject are constructed, the high resolution still images can be composited to form a still image having regions of varying resolution.
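The shift-and-add idea can be sketched as follows; the sub-pixel offsets are assumed known here (in practice they must themselves be estimated from the frames) and the 2x upsampling factor is illustrative.

```python
import numpy as np

def shift_and_add(frames, subpixel_offsets, factor=2):
    """Place low-resolution frames onto a grid upsampled by `factor`
    at their (dx, dy) sub-pixel offsets and average the samples."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(acc)
    for frame, (dx, dy) in zip(frames, subpixel_offsets):
        # A sub-pixel offset maps to a whole-pixel offset on the
        # upsampled grid (0.5 px -> 1 high-res px at factor 2).
        ox, oy = int(round(dx * factor)), int(round(dy * factor))
        acc[oy::factor, ox::factor] += frame
        hits[oy::factor, ox::factor] += 1
    hits[hits == 0] = 1  # unobserved grid cells stay at zero
    return acc / hits

# Four 2x2 frames captured at half-pixel camera offsets.
frames = [np.full((2, 2), float(k)) for k in range(4)]
offsets = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
hi_res = shift_and_add(frames, offsets)
print(hi_res.shape)  # -> (4, 4)
```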
- Such an image is referred to herein as a multiple-resolution still image.
- a multiple-resolution still image is displayed during execution of a video album application program on a computer, a user can zoom in or out on different regions of the image. Similarly, a user can pan about a panoramic image. Combinations of pan and zoom are also possible.
- FIG. 7 is a diagram of a video index 96 displayed on a computer system display 50 according to one embodiment.
- a video presentation 95 is displayed in one window of the display 50 , and the video index 96 is displayed in a separate window.
- the video index 96 may be displayed in a tool bar or other location within the same window as the video presentation.
- the video index 96 contains miniaturized versions (thumbnails 97 A- 97 J) of still images generated from the video presentation.
- the threshold number of frames required to signal a still image may be set to a low value so that at least one still image is constructed per scene change.
- the still image generation system automatically detects scene cuts in the source video and generates a corresponding still image. Consequently, the video index 96 contains a thumbnail for each scene cut in the source video.
- each of the thumbnails 97 A- 97 J is time correlated to the corresponding video segment in the source video by a timecode.
- the timecode associated with the thumbnail is used to identify a frame of the video that has a corresponding time offset from the start of the video, and the video is played starting at that time offset. In this way, a user may navigate the video presentation 95 by selecting thumbnails of interest from the video index 96 .
- the thumbnails 97 A- 97 J may be correlated to the viedo presentation by sequence numbers instead of by time codes.
- each frame of the source video may be numbered, so that the number of a video frame that begins a segment of the video used to generate a still image may be associated with a thumbnail of the still image.
- When a user selects the thumbnail of the still image (e.g., by clicking a mouse button when a cursor controlled by the mouse is positioned over the thumbnail), the source video is played starting at the first frame of the corresponding video segment.
- FIG. 8 is a diagram of an embodiment of a processing system 100 that may be used to perform the above-described processing operations, either as an end-user machine, within a kiosk or as part of a still image generation service.
- the processing system 100 includes a processing unit 110 , memory 120 , display device 130 , user-input device 140 , communications device 150 , media reader 160 , frame grabber 170 and printing device 180 , each coupled to a bus structure 105 .
- the display device 130 and the user-input device 140 may be implemented by a touch-sensitive screen or other simplified user-interface.
- the printing device 180 is preferably a high quality color printer, though a black and white printer may also be used. In the case of a video processing kiosk, the printer 180 is preferably enclosed within the kiosk housing, adjacent an opening through which printed output is made available to the kiosk user.
- the processing unit 110 may include one or more general purpose processors, one or more digital signal processors or any other devices capable of executing a sequence of instructions. When programmed with appropriate instructions, the processing unit may be used to perform the above-described video processing operations.
- the communications device 150 may be a modem, area network card or any other device for coupling the processing system 100 to a computer network.
- the communications device 150 may be used to generate or receive a carrier wave modulated with a data signal, for example, for transmitting or receiving video frames, still images or text from a server computer on the World Wide Web or other network, or for receiving updated program code or function-extending program code that can be executed by the processing unit 110 to implement various embodiments of the present invention.
- the memory 120 may include both system memory (typically, high speed dynamic random-access memory) and various non-volatile storage devices such as magnetic tape, magnetic disk, optical disk, electrically erasable programmable read only memory (EEPROM), or any other computer-readable medium. As shown in FIG. 8, the memory 120 may be used to store program code 122 for performing the above-described processing operations and image data 124 .
- the image data 124 may include, for example, video frames that have been obtained from media reader 160 or from the frame grabber, or still images resulting from combination of video frames.
- When power is applied to the processing system 100, operating system program code is loaded from non-volatile storage into system memory by the processing unit 110 or another device, such as a direct memory access controller (not shown). Sequences of instructions comprised by the operating system are then executed by processing unit 110 to load other sequences of instructions from non-volatile storage into system memory, including sequences of instructions that can be executed to perform the above-described video processing operations.
- program code that can be executed to perform the above-described video processing operations may be obtained from a computer-readable medium, including the above-described carrier wave, and executed in the processing unit 110 .
- the media reader 160 may be a video cassette tape reader, an optical disk reader (e.g., Digital Versatile Disk (DVD) or Compact-Disk (CD)), a magnetic disk reader or any other device capable of reading video data from portable storage media.
- If the video content is stored in a digital format, the content may be processed directly by the processing unit 110 to generate a set of still images. If the video is stored in an analog format (e.g., NTSC video), the signal is sampled and converted to a digital representation. The analog-to-digital conversion may be performed by a separate conversion device (not shown), by the frame grabber 170 or by the processing unit 110 itself.
- the frame grabber 170 is used to convert an analog video signal received from a record/playback device 190 (e.g., a video cassette recorder, DVD player, DIVX player, video camera, etc.) or from the media reader 160 into a digitized set of video frames.
- the frame grabber may obtain an analog video signal from the media reader 160 via bus 105 or via a separate transmission path indicated by dashed arrow 162 .
- the output of the frame grabber 170 may be transferred to the memory 120 for processing by the processing unit 110 or processed in place (i.e., within a buffer of the frame grabber) by the processing unit 110.
Abstract
Generating photographs from a video. Segments of the video for which frame-to-frame background motion is less than a threshold are identified. For each of the segments, video frames in the segment are combined to generate a photograph representative of the segment.
Description
- This is a continuation-in-part of copending U.S. application Ser. No. 09/096,720 filed Jun. 11, 1998.
- The present invention relates to the field of image processing, and more particularly to automatically extracting photographic images from a video.
- Historically, video cameras and still image cameras have been used for different applications and have occupied different consumer markets. Most still image cameras do not possess the image storage and rapid capture capability of video cameras and therefore are unsuitable for capturing and recording video clips. Conversely, the video resolution and quality produced by most video cameras are too low for producing high quality still images. Consequently, people who wish to capture both videos and high quality still images must usually have both a video camera and a still image camera.
- A method and apparatus for generating photographs from a video is disclosed. Segments of the video for which frame-to-frame background motion is less than a threshold are identified and, for each of the segments, the video frames in the segment are combined to generate a photograph representative of the segment.
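- The claimed method can be illustrated with a short sketch in Python. The scalar motion measure, the function name and the simple averaging combiner are illustrative assumptions only; the embodiments described below use a vector of transformation components and richer compositing.

```python
import numpy as np

def photographs_from_video(frames, motion, threshold, min_frames=2):
    """Split the frame sequence wherever accumulated frame-to-frame
    background motion reaches `threshold`, then average the frames of
    each sufficiently long segment into one representative photograph.

    motion[i] is an assumed scalar measure of background motion between
    frames i and i+1 (real embodiments use a transformation vector)."""
    photos, start, total = [], 0, 0.0
    for i, m in enumerate(motion):
        if total + m > threshold:
            # Frame i ends the current segment; frame i+1 starts a new one.
            if i + 1 - start >= min_frames:
                photos.append(np.mean(frames[start:i + 1], axis=0))
            start, total = i + 1, 0.0
        else:
            total += m
    if len(frames) - start >= min_frames:
        photos.append(np.mean(frames[start:], axis=0))
    return photos
```

Segments shorter than `min_frames` are skipped, mirroring the frame-count threshold discussed later in the detailed description.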
- Other features and advantages of the invention will be apparent from the accompanying drawings and from the detailed description that follows below.
- The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements and in which:
- FIG. 1 illustrates use of a still image generation system to generate a set of still images from a source video;
- FIG. 2 illustrates a business model for providing a still image generation service according to one embodiment;
- FIG. 3 illustrates a selection window presented on a display of a still image generation system according to one embodiment;
- FIG. 4 illustrates a window of a computer system display in which pages of a video album according to one embodiment are presented;
- FIG. 5 illustrates a still image generator according to one embodiment;
- FIG. 6 is a flow diagram of still image construction according to one embodiment; and
- FIG. 7 is a diagram of a video index displayed on a computer system display according to one embodiment.
- A method and apparatus for generating still images from a video is described. The individual frames of the video are analyzed to automatically identify at least three different types of shots: still shots, pan shots and zoom shots. When a still shot is identified, multiple video frames from the still shot are combined to create a single high-resolution image. For a pan shot, multiple video frames are stitched together to create a high-resolution panoramic image. For a zoom shot, multiple video frames are combined to produce a multiple-resolution still image. In shots that include both pan and zoom, a multiple-resolution panoramic image is generated. Because the processing of the input video is automatic, the video can be processed unattended and without the need to learn complicated image editing operations.
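- The still/pan/zoom taxonomy above implies a simple dispatch on a shot's accumulated pan and zoom magnitudes. The following sketch is illustrative only; the function name, units and thresholds are assumptions, not part of the disclosure.

```python
def classify_shot(pan_magnitude, zoom_magnitude,
                  pan_thresh=1.0, zoom_thresh=0.05):
    """Classify a shot from its accumulated pan and zoom deltas and name
    the kind of still image that would be generated for it."""
    panning = pan_magnitude > pan_thresh
    zooming = zoom_magnitude > zoom_thresh
    if panning and zooming:
        return "multiple-resolution panoramic image"
    if panning:
        return "panoramic image"
    if zooming:
        return "multiple-resolution still image"
    return "high-resolution still image"
```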
- It is contemplated that the automatic generation of still images from video may be provided as a service to video camera users. A user may deliver a video to a still image generation service which creates a set of high quality still images for the user in return for a fee. In this way, videos of weddings, parties, vacations, real estate tours, insurance records, etc. may be used to generate a corresponding set of high quality photographic images. The video may be physically delivered to the still image generation service in the form of a video recording medium such as a disk or tape, or the video may be uploaded electronically from an end user computer. The set of still images generated from the video may likewise be provided to the user either on a physical recording medium (including the medium on which the video was supplied) or by transmission via a communications network. For example, the still images may be provided to the end user as a set of printed photographs, or posted on a server computer for viewing or download by the end-user or parties authorized by the end-user. In the case of posting the still images on the server computer, the end-user or other authorized party may be allowed to select which of the printed photographs to download, paying a fee for each selected still image.
- In one embodiment, still images generated from a user-supplied video may be formatted into an electronic album of photographic images referred to herein as a “video album.” The video album may be delivered on a recording medium, including the medium on which the
source video 10 was recorded, or posted on a user-accessible computer network. In the case of a still image generation service, the album may be prepared automatically, with the individual photographs being arranged based on default criteria such as their order of appearance in the video. Text annotations of the video may be generated automatically based on the corresponding audio track. In the case of user processing of the source video 10, the user may index the individual photographs of the video album according to a number of different types of criteria including, without limitation, order of appearance in the video, nature of the shot (e.g., still image, panoramic image, zoom image), subject matter of the photographs, user preference and so forth. The user may also enter text annotations.
- In one embodiment, a still image generation service is provided in the form of a video processing kiosk which includes a disk or tape reader into which a user may insert a video recording medium. The kiosk includes a video processing engine to identify still, pan and zoom shots as described above and to automatically display a set of high quality still images to the kiosk user. The kiosk may then prompt the user to select which of the still images the user wishes to keep. In one embodiment, the user is given the option of printing the still images using a printing mechanism within the kiosk, uploading the still images to a server computer from which the user may later download them, or having the still images delivered electronically to a destination address supplied by the user (e.g., an email address). Full video album services may be provided as discussed above. The user may be prompted to pay a fee for initial processing, a fee for each still image selected, or a combination of an initial processing fee and an image selection fee.
- In another embodiment, scene cuts in a video are automatically detected to create a set of miniature-view keyframes and corresponding timecodes. The miniature-view keyframes, referred to herein as thumbnails, may be presented on the display of a computer system to allow a user to select entry points into the video. For example, if the video has been digitized (or recorded in digital form) and is accessible by the computer system, then the user may select a thumbnail of interest to cause the video to begin playing on the display of the computer system starting at the point in the video at which the thumbnail appears. In this way, a navigable index of the video is established, greatly simplifying the activity of searching a video for subject matter of interest.
- FIG. 1 illustrates use of a still
image generation system 12 to generate a set of still images 15 from a source video 10. The source video 10 may be supplied to the still image generation system 12 in a number of forms, including on video recording media such as magnetic tape or disk, optical disk, solid state storage and so forth. Alternatively, the source video 10 may be delivered electronically, for example, by uploading the video via a communications network to the still image generation system 12. The source video 10 may be recorded in a number of different formats, including without limitation standard NTSC (National Television System Committee) analog video or in any number of digital video formats. In the case of an analog format, the source video 10 is digitized by the still image generation system 12 before further processing is performed.
- In a preferred embodiment, the still
image generation system 12 is implemented by a programmed general purpose computer system and a set of one or more media readers, such as a cassette or diskette reader. The media readers may be installed in the computer system or operated as standalone devices which generate an analog or digital video feed. A frame digitizing module (often called a “frame grabber”) may be included in the computer system to receive and digitize an analog video signal supplied from an external analog media reader. Alternatively, the external media reader may generate a digital output that can be accepted via a communication port of the computer system.
- The set of still
images 15 generated by the still image generation system 12 may be output in a number of forms. For example, the still image generation system 12 may include a printing device for generating printed images 19. Alternatively, the set of still images 15 may be recorded on a portable storage medium 21, including on unused recording space on the medium on which the source video 10 was supplied. Further, the set of still images 15 may be output in electronic form appropriate for direct transmission to an end-user viewing system 22 (e.g., via e-mail or electronic courier) or for posting on a server computer 17 that can be accessed via a communications network 20 such as the Internet or other computer network. For example, the set of still images 15 may be posted on a server computer accessible via the World Wide Web (the “Web”) so that an end-user may view the posted images using a client computer (e.g., viewing system 22) and select which images of the set 15 to download.
- FIG. 2 illustrates a business model for providing a still image generation service according to one embodiment. Initially, a
customer 25 supplies a source video 10 to a still image generation service 26. The source video 10 may be provided, for example, on a portable storage medium or by electronic transmission. The still image generation service 26 processes the source video 10 to generate a set of still images 15. Finally, the customer provides a fee 27 to the still image generation service in return for the set of still images. The fee may be monetary or a supply of information such as profile information that can be resold to advertisers or other parties interested in demographic information. As discussed above, the set of still images 15 may be provided as a set of prints, a set of images recorded on a storage medium or by electronic transmission. Also, the customer 25 may be permitted to select a subset of the still images, paying a reduced, or per-image fee. The fee 27 may be different for the different types of still images depending on a number of factors such as the number of frames that have been combined to produce the still image, the overall size of the still image (e.g., in area or storage requirements), and so forth. As discussed above, the still image service 26 may be operated as a drop-off service or as a customer-operated kiosk. In the case of the drop-off service, the customer 25 may drop off (or electronically transmit) the source video 10 and receive the set of still images 15 later (e.g., by pickup or by electronic transmission). In the case of a kiosk, the customer 25 may insert the source video 10 into a media reader included in the kiosk and stand by while the source video 10 is being processed. The customer may interact with a user-interface of the kiosk to specify processing criteria and to select which of the set of still images 15 to keep.
- In an alternative embodiment, a still
image generation system 26 is implemented by a programmed general purpose computer, such as a desktop or laptop computer of a computer user. In that case, still image generation software is sold to the user, for example as shrink-wrap or click-wrap software, and installed on the user's computer system. Additional equipment, such as the above-described media reader and playback device may be required. Thus, the still image generation system 26 may be implemented in the same end-user computer system that is used to provide the viewing system 22 of FIG. 1.
- FIG. 3 illustrates a
selection window 30 presented on a display 29 of a still image generation system according to one embodiment. The user of the still image generation system (e.g., element 12 of FIG. 1), who may or may not be the person who has requested the set of still images, selects from among thumbnail views of still images (32, 34, 36) presented in the selection window 30, for example, by clicking thumbnails the user wishes to keep. In one embodiment, each selected thumbnail view of a still image (32, 34, 36) is removed from the selection window 30 to a selections list 39. The selected still images 41 may be printed, transmitted or otherwise delivered to the user as they are selected or after all selections have been made. As shown, the individual still images may be generated from pan shots 31 produced by rotation or translation of the video camera, zoom shots 33 produced by zooming the video camera in or out or both, and still shots 35 produced by keeping the video camera stationary or by user-activation of a repetitive capture input which causes a captured frame to be automatically copied to a number of successive frames. The still image generation system may make the still images available one-by-one as they are created from the source video 10, or the entire source video 10 may be processed to generate the complete set of still images before the set of still images is presented to the user of the system.
- FIG. 4 illustrates a
window 50 of a computer system display in which pages 51 of a video album 52 according to one embodiment are presented. The video album 52 contains separately viewable pages each containing one or more still images (53A, 53B, 53C) that have been generated by combining frames of a video. Text descriptions may accompany the still images. When a viewer clicks the sound icon 55, a portion of the audio track that corresponds to the video segment used to generate still image 53B is played. Similarly, when a viewer clicks the video icon 56, the video is presented starting at the first frame of the video segment used to generate still image 53A. Virtual reality players may also be associated with the still images presented in the video album 52. For example, a panoramic player is invoked to allow the viewer to pan about within panoramic image 53A when the viewer clicks the PAN button 57. Similarly, a pan and zoom player is invoked to allow the viewer to pan and zoom within the multiple resolution still image 53C.
- Although the
pages 51 of the video album 52 are shown in FIG. 4 as being cascaded over one another, many alternate arrangements of pages may be used. For example, the pages 51 may be tiled, or individually selected by any number of scrolling techniques. The pages 51 may also be sorted based on a number of different criteria including, but not limited to, order of appearance in the video, nature of still image (e.g., panorama, multiple resolution still, etc.), legend text (e.g., grouping pages containing user-specified keywords together). Similarly, the individual still images of the video album 52 may be reorganized within the video album according to such criteria so that, for example, the video album is chronologically ordered or images are grouped according to subject matter.
- FIG. 5 illustrates a
still image generator 60 according to one embodiment. The still image generator 60 includes a scene change estimator 61, a still image constructor 67, and a background motion estimator 65.
- The
scene change estimator 61 compares successive frames of the source video 10 to one another to determine when a transformation of a scene in the video frames exceeds a threshold. When applied to an entire source video 10, the effect of the scene change estimator 61 is to segment the sequence of frames in the source video 10 into one or more subsequences of video frames (i.e., video segments or clips), each of which exhibits a scene transformation that is less than a predetermined threshold. The background motion estimator 65 and still image constructor 67 process each video segment identified by the scene change estimator 61 to generate a composite still image having pixel values drawn from two or more of the frames in the video segment. Thus, the predetermined threshold applied by the scene change estimator 61 defines the incremental transformation of a scene which results in construction of a new still image of the still image set 15.
- According to one embodiment, the
scene change estimator 61 operates by determining a transformation vector for each pair of adjacent video frames in the source video. Herein, a first frame is considered to be adjacent a second frame if the first frame immediately precedes or succeeds the second frame in a temporal sequence of frames. According to one embodiment, the transformation vector includes a plurality of scalar components that each indicate a measure of change in the scene from one video frame to the next. For example, the scalar components of a transformation vector may include measures of the following changes in the scene: translation, scaling, rotation, panning, tilting, skew, color changes and time elapsed.
- In one implementation, the
scene change estimator 61 applies a spatial low pass filter to the frames of the source video 10 before computing the transformation deltas between adjacent frames. After being low pass filtered, the individual frames in the source video 10 contain less information than before filtering so that fewer computations are required to determine the transformation deltas. In one implementation, transformation deltas are cleared at the beginning of a video segment and then a transformation delta computed for each pair of adjacent frames in the video segment is added to transformation deltas computed for preceding pairs of adjacent frames to accumulate a sum of transformation deltas. In effect, the sum of transformation deltas represents a transformation between a starting video frame in a video segment and the most recently compared video frame in the video segment. In one embodiment, the sum of transformation deltas is compared against a predetermined transformation threshold in decision block 63 to determine if the most recently compared video frame has caused the transformation threshold to be exceeded. The transformation threshold may be a vector quantity that includes multiple scalar thresholds, including thresholds for color changes, translation, scaling, rotation, panning, tilting, skew of the scene and time elapsed. In an alternate embodiment, the transformation threshold is dynamically adjusted in order to achieve a desired ratio of video segments to frames in the source video 10. In another alternate embodiment, the transformation threshold is dynamically adjusted in order to achieve a desired average video segment size (i.e., a desired number of video frames per video segment). In yet another alternate embodiment, a transformation threshold is dynamically adjusted to achieve a desired average elapsed time per video segment.
Generally, any technique for dynamically adjusting the transformation threshold may be used without departing from the spirit and scope of the present invention.
- In one embodiment, if the most recently compared video frame causes the transformation threshold to be exceeded, the scene is deemed to have changed at
decision block 63 and the video frame that precedes the most recently compared video frame is deemed to be the ending frame of the video segment. Consequently, if a predetermined transformation threshold is used, each video segment of the source video 10 is assured to have an overall transformation that is less than the transformation threshold. If a variable transformation threshold is used, on the other hand, considerable variance in the overall transformation delta of respective video segments may result and it may be necessary to iteratively apply the scene change estimator to reduce the variance in the transformation deltas.
- FIG. 6 is a flow diagram of still image construction according to one embodiment. As discussed above, the scene change estimator effectively resolves the
source video 10 into a plurality of video segments each defined by a sequence of frames. Thus, at block 81, the next video segment within the source video 10 is identified (or selected). If the video segment is determined to be empty at decision block 83 (i.e., the video segment includes no frames), then the end of the video has been reached and still image construction for the source video 10 is completed. Otherwise, the number of frames in the video segment is compared against a threshold number in decision block 85 to determine whether the segment has a sufficient number of frames to produce a still image. The threshold number of frames may be predetermined or adaptively determined based on the lengths of the segments of the source video 10. Also, the user of the still image generation system may specify the threshold number of frames required to produce a still image or the user may specify a starting value that may be adapted according to the lengths of segments of the source video 10. In this way, the user of the still image generation system may control how many still images are generated, setting the threshold value to a high number of frames to reduce the number of video segments from which still images are constructed and setting the threshold value to a lower number to increase the number of video segments from which still images are constructed. Alternatively, in an adaptive system, a target number of still images may be specified so that the threshold number may be automatically increased or decreased during processing to converge on the target number of still images.
- If the number of frames in the video segment does not exceed the threshold number of frames, then processing of the next video segment begins at
block 81. Otherwise, at block 87, the background motion estimator inspects the video segment indicated by the scene change estimator to identify a dominant motion of the scene depicted in those frames. This dominant motion is considered to be a background motion.
- There are a number of techniques that may be used to identify the background motion in a video segment. One technique, called feature tracking, involves identifying features in the video frames (e.g., using edge detection techniques) and tracking the motion of the features from one video frame to the next. Features that exhibit statistically aberrant motion relative to other features are considered to be dynamic objects and are temporarily disregarded. Motions that are shared by a large number of features (or by large features) are typically caused by changes in the disposition of the camera used to record the video and are considered to be background motions.
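- The outlier-rejection step of the feature-tracking technique can be sketched as follows (Python). Feature detection and tracking themselves are abstracted away here, and the robust deviation test is an assumption; any statistical outlier test could serve.

```python
import numpy as np

def background_motion(feature_motions, dev_thresh=2.0):
    """Given per-feature (dy, dx) motions between two frames, disregard
    features whose motion is statistically aberrant (treated as dynamic
    objects) and return the motion shared by the remaining features."""
    m = np.asarray(feature_motions, dtype=float)
    med = np.median(m, axis=0)             # robust central motion
    dev = np.linalg.norm(m - med, axis=1)  # per-feature deviation
    scale = np.median(dev) + 1e-9          # robust deviation scale
    inliers = m[dev / scale < dev_thresh]  # aberrant features disregarded
    return inliers.mean(axis=0)
```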
- Another technique for identifying background motion in a video segment is to correlate the frames of the video segment to one another based on common regions and then determine the frame to frame offset of those regions. The frame to frame offset can then be used to determine a background motion for the video segment.
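- One concrete way to realize this region correlation is phase correlation, sketched below; the disclosure does not name a specific correlation method, so this choice is an assumption.

```python
import numpy as np

def frame_offset(a, b):
    """Estimate the dominant cyclic shift (dy, dx) such that
    a ~= np.roll(b, (dy, dx), axis=(0, 1)), via phase correlation."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12       # keep only the phase
    corr = np.fft.ifft2(cross).real      # correlation peak marks the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                      # map wrap-around to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```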
- Still other contemplated techniques for identifying background motion in a video segment include, but are not limited to, coarse-to-fine search methods that use spatially hierarchical decompositions of frames in the video segment; measurements of changes in video frame histogram characteristics over time to identify scene changes; filtering to accentuate features in the video segment that can be used for motion identification; optical flow measurement and analysis; pixel format conversion to alternate color representations (including grayscale) to achieve greater processing speed, greater reliability or both; and robust estimation techniques, such as M-estimation, that eliminate elements of the video frames that do not conform to an estimated dominant motion.
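- The histogram-based change measure mentioned above might be realized as follows; the bin count, intensity range and threshold are illustrative assumptions.

```python
import numpy as np

def histogram_cuts(frames, bins=16, threshold=0.5):
    """Return indices of frames whose normalized intensity histogram
    differs from the previous frame's by more than `threshold` (L1
    distance); each flagged frame is treated as starting a new scene.
    Frames are assumed to hold intensities in [0, 1]."""
    hists = [np.histogram(f, bins=bins, range=(0.0, 1.0))[0] / f.size
             for f in frames]
    return [i for i in range(1, len(hists))
            if np.abs(hists[i] - hists[i - 1]).sum() > threshold]
```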
- Still referring to FIG. 6, the still image constructor receives the background motion information from the background motion estimator in
block 89 and uses the background motion information to register the frames of the video segment to one another. Registration refers to spatially correlating video frames in a manner that accounts for changes caused by background motion. By registering the video frames based on background motion information, regions of the frames that exhibit motions that are different from the background motion will appear in a fixed location in only a small number of the registered video frames. That is, the regions move from frame to frame relative to a static background. These regions are considered to be dynamic objects.
- In one embodiment, the still image constructor removes dynamic objects from frames of the video segment to produce a processed sequence of video frames. This technique is described in copending U.S. patent application Ser. No. 09/096,720 filed Jun. 11, 1998, which is hereby incorporated by reference in its entirety. At
block 89, the still image constructor generates a still image based on the processed sequence of video frames and the background motion information. Depending on the nature of the background motion, construction of the still image may involve combining two or more processed video frames into a single still image, referred to as a composite image. In one embodiment, the composite image may be a panoramic image or a high resolution still image. A panoramic image is created by stitching two or more processed video frames together and can be used to represent a background scene that has been captured by panning, tilting or translating a camera. A high resolution still image is appropriate when the subject of a processed sequence of video frames is a relatively static background scene (i.e., the disposition of the camera used to record the video source is not significantly changed). One technique for creating high resolution still images is to analyze the processed sequence of video frames to identify sub-pixel motions between the frames. Sub-pixel motion is caused by slight motions of the camera and can be used to create a composite image that has higher resolution than any of the individual frames captured by the camera. When multiple high resolution still images of the same subject are constructed, the high resolution still images can be composited to form a still image having regions of varying resolution. Such an image is referred to herein as a multiple-resolution still image. As discussed above, when a multiple-resolution still image is displayed during execution of a video album application program on a computer, a user can zoom in or out on different regions of the image. Similarly, a user can pan about a panoramic image. Combinations of pan and zoom are also possible. - FIG. 7 is a diagram of a
video index 96 displayed on a computer system display 50 according to one embodiment. A video presentation 95 is displayed in one window of the display 50, and the video index 96 is displayed in a separate window. In an alternate embodiment, the video index 96 may be displayed in a tool bar or other location within the same window as the video presentation. The video index 96 contains miniaturized versions (thumbnails 97A-97J) of still images generated from the video presentation. For the purpose of the video index, the threshold number of frames required to signal a still image may be set to a low value so that at least one still image is constructed per scene change. By this arrangement, the still image generation system automatically detects scene cuts in the source video and generates a corresponding still image. Consequently, the video index 96 contains a thumbnail for each scene cut in the source video. In a preferred embodiment, each of the thumbnails 97A-97J is time correlated to the corresponding video segment in the source video by a timecode. Thus, if a user selects a thumbnail of interest in the index, the timecode associated with the thumbnail is used to identify a frame of the video that has a corresponding time offset from the start of the video, and the video is played starting at that time offset. In this way, a user may navigate the video presentation 95 by selecting thumbnails of interest from the video index 96. In an alternate embodiment, the thumbnails 97A-97J may be correlated to the video presentation by sequence numbers instead of by timecodes. For example, each frame of the source video may be numbered, so that the number of the video frame that begins a segment of the video used to generate a still image may be associated with a thumbnail of the still image. 
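A minimal sketch of the timecode-keyed index described above (the class and method names are hypothetical, not taken from the patent):

```python
class VideoIndex:
    """Maps each thumbnail to the timecode of the video segment from
    which its still image was generated, so that selecting a thumbnail
    can seek the source video to the matching frame."""

    def __init__(self, frame_rate):
        self.frame_rate = frame_rate     # frames per second
        self.entries = []                # (thumbnail_id, timecode_seconds)

    def add(self, thumbnail_id, timecode_seconds):
        self.entries.append((thumbnail_id, timecode_seconds))

    def start_frame(self, thumbnail_id):
        """Frame number at which playback begins when the given
        thumbnail is selected."""
        for tid, timecode in self.entries:
            if tid == thumbnail_id:
                return int(round(timecode * self.frame_rate))
        raise KeyError(thumbnail_id)
```

Under the sequence-number alternative, `add` would simply record the first frame number of the segment instead of a timecode, and `start_frame` would return it directly.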
When the user selects the thumbnail of the still image (e.g., by clicking a mouse button when a cursor controlled by the mouse is positioned over the thumbnail), the source video is played starting at the first frame of the corresponding video segment. - FIG. 8 is a diagram of an embodiment of a
processing system 100 that may be used to perform the above-described processing operations, either as an end-user machine, within a kiosk or as part of a still image generation service. The processing system 100 includes a processing unit 110, memory 120, display device 130, user-input device 140, communications device 150, media reader 160, frame grabber 170 and printing device 180, each coupled to a bus structure 105. When the processing system forms part of a video processing kiosk, the display device 130 and the user-input device 140 may be implemented by a touch-sensitive screen or other simplified user interface. In alternate embodiments, other devices may be used to manipulate elements displayed on the display device 130 and to allow a user to input information and selections into the processing system 100. The printing device 180 is preferably a high quality color printer, though a black and white printer may also be used. In the case of a video processing kiosk, the printer 180 is preferably enclosed within the kiosk housing, adjacent an opening through which printed output is made available to the kiosk user. - The
processing unit 110 may include one or more general purpose processors, one or more digital signal processors or any other devices capable of executing a sequence of instructions. When programmed with appropriate instructions, the processing unit may be used to perform the above-described video processing operations. - The
communications device 150 may be a modem, area network card or any other device for coupling the processing system 100 to a computer network. The communications device 150 may be used to generate or receive a carrier wave modulated with a data signal, for example, for transmitting or receiving video frames, still images or text from a server computer on the World Wide Web or other network, or for receiving updated program code or function-extending program code that can be executed by the processing unit 110 to implement various embodiments of the present invention. - The
memory 120 may include both system memory (typically, high speed dynamic random-access memory) and various non-volatile storage devices such as magnetic tape, magnetic disk, optical disk, electrically erasable programmable read only memory (EEPROM), or any other computer-readable medium. As shown in FIG. 8, the memory 120 may be used to store program code 122 for performing the above-described processing operations and image data 124. The image data 124 may include, for example, video frames that have been obtained from media reader 160 or from the frame grabber, or still images resulting from combination of video frames. In one embodiment, when power is applied to the processing system 100, operating system program code is loaded from non-volatile storage into system memory by the processing unit 110 or another device, such as a direct memory access controller (not shown). Sequences of instructions comprised by the operating system are then executed by processing unit 110 to load other sequences of instructions from non-volatile storage into system memory, including sequences of instructions that can be executed to perform the above-described video processing operations. Thus, program code that can be executed to perform the above-described video processing operations may be obtained from a computer-readable medium, including the above-described carrier wave, and executed in the processing unit 110. - The
media reader 160 may be a video cassette tape reader, an optical disk reader (e.g., Digital Versatile Disk (DVD) or Compact Disk (CD)), a magnetic disk reader or any other device capable of reading video data from a portable storage medium. If the video stored on the portable storage medium is in a digital format (as in the case of a digital video camera output, for example), the content may be processed directly by the processing unit 110 to generate a set of still images. If the video is stored in an analog format (e.g., NTSC video), the signal is sampled and converted to a digital representation. The analog-to-digital conversion may be performed by a separate conversion device (not shown), by the frame grabber 170 or by the processing unit 110 itself. The frame grabber 170 is used to convert an analog video signal received from a record/playback device 190 (e.g., a video cassette recorder, DVD player, DIVX player, video camera, etc.) or from the media reader 160 into a digitized set of video frames. The frame grabber may obtain an analog video signal from the media reader 160 via bus 105 or via a separate transmission path indicated by dashed arrow 162. The output of the frame grabber 170 may be transferred to the memory 120 for processing by the processing unit 110 or processed in place (i.e., within a buffer of the frame grabber) by the processing unit 110. - It should be noted that the individual video processing operations described above may also be performed by specific hardware components that contain hard-wired logic to carry out the recited operations or by any combination of programmed processing components and hard-wired logic. Nothing disclosed herein should be construed as limiting the processing system or other components of a still image generation system to a single embodiment wherein the recited operations are performed by a specific combination of hardware components.
- In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
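To make the registration-and-stitching construction described in this specification concrete, here is an illustrative sketch, not the patented implementation: frames are assumed to be grayscale lists of pixel rows, and each frame's canvas position is assumed to have been recovered from the background-motion estimate. All names are hypothetical:

```python
def stitch_panorama(frames, offsets):
    """Composite registered frames into one panoramic canvas.

    offsets[i] gives the (dy, dx) canvas position of frames[i], as
    recovered from the background-motion estimate. Where frames
    overlap, later frames overwrite earlier ones.
    """
    h, w = len(frames[0]), len(frames[0][0])
    # Size the canvas to cover every frame at its offset.
    min_y = min(dy for dy, _ in offsets)
    min_x = min(dx for _, dx in offsets)
    height = max(dy for dy, _ in offsets) - min_y + h
    width = max(dx for _, dx in offsets) - min_x + w
    canvas = [[0] * width for _ in range(height)]
    for frame, (dy, dx) in zip(frames, offsets):
        oy, ox = dy - min_y, dx - min_x   # shift so all offsets are non-negative
        for y in range(h):
            for x in range(w):
                canvas[oy + y][ox + x] = frame[y][x]
    return canvas
```

A production stitcher would blend overlapping pixels rather than overwrite them, and would support sub-pixel offsets for the high resolution still images discussed above; this sketch shows only the geometric registration step.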
Claims (21)
1. A method of generating photographs from a video, the method comprising:
identifying segments of the video for which frame-to-frame background motion is less than a threshold; and
for each of the segments, combining video frames in the segment to generate a photograph representative of the segment.
2. The method of claim 1 further comprising:
automatically detecting a scene cut in the video; and
selecting at least one video frame of a segment of the video that follows the scene cut to be a photograph.
3. The method of claim 1 wherein combining the video frames to generate a photograph comprises stitching images in the video frames together to generate a panoramic photograph.
4. The method of claim 1 wherein combining video frames to generate a photograph comprises blending pixels from the video frames to generate a photograph having higher resolution than any one of the video frames.
5. The method of claim 1 wherein combining video frames to generate a photograph comprises blending pixels from the video frames to form a multi-resolution photograph.
6. The method of claim 1 wherein identifying segments of the video for which frame-to-frame background motion is less than a threshold comprises identifying a succession of frames of the video that each include a portion of an image in a preceding frame.
7. The method of claim 6 wherein identifying the succession of frames of the video that each include a portion of an image in a preceding frame comprises removing a dynamic object from at least one frame of the succession of frames before comparing the at least one frame to a preceding frame in the succession of frames.
8. A method comprising:
receiving a video from a customer on a machine-readable medium; and
processing the video to generate a set of photographs in return for a fee.
9. The method of claim 8 further comprising recording the set of photographs on the machine-readable medium and returning the machine-readable medium to the customer.
10. The method of claim 8 wherein receiving a video from a customer on a machine-readable medium comprises receiving the video in a data signal propagated over a communications network.
11. The method of claim 8 wherein receiving a video from a customer on a machine-readable medium comprises receiving the video on a machine-readable diskette.
12. The method of claim 8 wherein processing the video to generate a set of photographic images comprises:
identifying segments of the video that exhibit background motion less than a threshold; and
combining video frames in each of the segments of the video to form the set of photographic images.
13. The method of claim 12 wherein combining video frames in each of the segments of the video to form the set of photographic images comprises stitching together images in the video frames of at least one of the segments of the video to form a panoramic photograph.
14. The method of claim 12 wherein combining video frames in each of the segments of the video to form the set of photographic images comprises stitching together images in the video frames of at least one of the segments of the video to form a photograph having higher pixel resolution than any one of the video frames.
15. The method of claim 8 further comprising posting the set of photographic images on a server that is accessible to the customer via a computer network.
16. The method of claim 8 wherein processing the video to generate a set of photographs comprises printing the set of photographs.
17. An apparatus for generating photographs from a video, the apparatus comprising:
a scene change estimator to identify segments of the video for which frame-to-frame background motion is less than a threshold; and
a still image constructor to combine video frames in the segment to generate a photograph representative of the segment.
18. An apparatus for generating photographs from a video, the apparatus comprising:
means for identifying segments of the video for which frame-to-frame background motion is less than a threshold; and
means for combining video frames in the segment to generate a photograph representative of the segment.
19. An article of manufacture including one or more computer-readable media that embody a program of instructions for generating photographs from a video, wherein the program of instructions, when executed by a processing unit, causes the processing unit to:
identify segments of the video for which frame-to-frame background motion is less than a threshold; and
for each of the segments, combine video frames in the segment to generate a photograph representative of the segment.
20. The article of claim 19 wherein the one or more computer-readable media comprises a portable storage medium in which at least a portion of the program of instructions is embodied.
21. The article of claim 19 wherein the one or more computer-readable media comprises a propagated data signal in which the program of instructions is embodied.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/933,617 US20020028026A1 (en) | 1998-06-11 | 2001-08-20 | Extracting photographic images from video |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/096,720 US6278466B1 (en) | 1998-06-11 | 1998-06-11 | Creating animation from a video |
US09/339,475 US6307550B1 (en) | 1998-06-11 | 1999-06-23 | Extracting photographic images from video |
US09/933,617 US20020028026A1 (en) | 1998-06-11 | 2001-08-20 | Extracting photographic images from video |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/339,475 Division US6307550B1 (en) | 1998-06-11 | 1999-06-23 | Extracting photographic images from video |
Publications (1)
Publication Number | Publication Date |
---|---|
US20020028026A1 (en) | 2002-03-07 |
Family
ID=23329166
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/339,475 Expired - Lifetime US6307550B1 (en) | 1998-06-11 | 1999-06-23 | Extracting photographic images from video |
US09/933,617 Abandoned US20020028026A1 (en) | 1998-06-11 | 2001-08-20 | Extracting photographic images from video |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/339,475 Expired - Lifetime US6307550B1 (en) | 1998-06-11 | 1999-06-23 | Extracting photographic images from video |
Country Status (3)
Country | Link |
---|---|
US (2) | US6307550B1 (en) |
AU (1) | AU4801300A (en) |
WO (1) | WO2000079485A1 (en) |
Cited By (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020163532A1 (en) * | 2001-03-30 | 2002-11-07 | Koninklijke Philips Electronics N.V. | Streaming video bookmarks |
US20030095785A1 (en) * | 2001-11-16 | 2003-05-22 | Keisuke Izumi | Digital image processing apparatus, digital image processing method, digital image processing program product, and digital image printing system |
US20030128880A1 (en) * | 2002-01-09 | 2003-07-10 | Hiroshi Akimoto | Video sequences correlation and static analysis and scene changing forecasting in motion estimation |
US20030202110A1 (en) * | 2002-04-30 | 2003-10-30 | Owens James W. | Arrangement of images |
US20050031318A1 (en) * | 1999-03-19 | 2005-02-10 | Hitachi, Ltd. | Data recording apparatus and system having sustained high transfer rates |
US20060034533A1 (en) * | 2004-08-12 | 2006-02-16 | Microsoft Corporation | System and method for producing a higher resolution still image from video information |
US20060069999A1 (en) * | 2004-09-29 | 2006-03-30 | Nikon Corporation | Image reproduction apparatus and image reproduction program product |
US20060174204A1 (en) * | 2005-01-31 | 2006-08-03 | Jung Edward K | Shared image device resolution transformation |
US20060174205A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Estimating shared image device operational capabilities or resources |
US20060174203A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Viewfinder for shared image device |
US20060170958A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Proximity of shared image devices |
US20060174206A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Shared image device synchronization or designation |
US20060171603A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Resampling of transformed shared image techniques |
US20060171695A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Shared image device designation |
US20060187228A1 (en) * | 2005-01-31 | 2006-08-24 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Sharing including peripheral shared image device |
US20060187227A1 (en) * | 2005-01-31 | 2006-08-24 | Jung Edward K | Storage aspects for imaging device |
US20060190968A1 (en) * | 2005-01-31 | 2006-08-24 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Sharing between shared audio devices |
US20060187230A1 (en) * | 2005-01-31 | 2006-08-24 | Searete Llc | Peripheral shared image device sharing |
US20060274153A1 (en) * | 2005-06-02 | 2006-12-07 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Third party storage of captured data |
US20060274165A1 (en) * | 2005-06-02 | 2006-12-07 | Levien Royce A | Conditional alteration of a saved image |
US20060274163A1 (en) * | 2005-06-02 | 2006-12-07 | Searete Llc. | Saved-image management |
US20060279643A1 (en) * | 2005-06-02 | 2006-12-14 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Storage access technique for captured data |
US20070025639A1 (en) * | 2005-07-28 | 2007-02-01 | Hui Zhou | Method and apparatus for automatically estimating the layout of a sequentially ordered series of frames to be used to form a panorama |
US20070100533A1 (en) * | 2005-10-31 | 2007-05-03 | Searete Llc, A Limited Liability Corporation Of State Of Delaware | Preservation and/or degradation of a video/audio data stream |
US20070098348A1 (en) * | 2005-10-31 | 2007-05-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Degradation/preservation management of captured data |
US20070097215A1 (en) * | 2005-10-31 | 2007-05-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Degradation/preservation management of captured data |
US20070100860A1 (en) * | 2005-10-31 | 2007-05-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Preservation and/or degradation of a video/audio data stream |
US20070109411A1 (en) * | 2005-06-02 | 2007-05-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Composite image selectivity |
US20070120981A1 (en) * | 2005-06-02 | 2007-05-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Storage access technique for captured data |
US20070139529A1 (en) * | 2005-06-02 | 2007-06-21 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Dual mode image capture technique |
US20070162873A1 (en) * | 2006-01-10 | 2007-07-12 | Nokia Corporation | Apparatus, method and computer program product for generating a thumbnail representation of a video sequence |
US20070222865A1 (en) * | 2006-03-15 | 2007-09-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Enhanced video/still image correlation |
US20070253675A1 (en) * | 2006-04-28 | 2007-11-01 | Weaver Timothy H | Methods, systems, and products for recording media |
US20070255915A1 (en) * | 2006-04-28 | 2007-11-01 | Timothy Weaver | Methods, systems, and products for recording media |
US20070274563A1 (en) * | 2005-06-02 | 2007-11-29 | Searete Llc, A Limited Liability Corporation Of State Of Delaware | Capturing selected image objects |
US20070294717A1 (en) * | 2005-07-08 | 2007-12-20 | Hill Peter N | Methods, systems, and products for conserving bandwidth |
US20080043108A1 (en) * | 2006-08-18 | 2008-02-21 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Capturing selected image objects |
US20080094472A1 (en) * | 2005-07-12 | 2008-04-24 | Serge Ayer | Method for analyzing the motion of a person during an activity |
US20080106621A1 (en) * | 2005-01-31 | 2008-05-08 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Shared image device synchronization or designation |
US20080118120A1 (en) * | 2006-11-22 | 2008-05-22 | Rainer Wegenkittl | Study Navigation System and Method |
WO2008019156A3 (en) * | 2006-08-08 | 2008-06-19 | Digital Media Cartridge Ltd | System and method for cartoon compression |
US20080158366A1 (en) * | 2005-01-31 | 2008-07-03 | Searete Llc | Shared image device designation |
US20080189338A1 (en) * | 2007-02-07 | 2008-08-07 | Weaver Timothy H | Methods, systems, and products for restoring media |
US20080189329A1 (en) * | 2007-02-07 | 2008-08-07 | Weaver Timothy H | Methods, systems, and products for targeting media |
US20080219589A1 (en) * | 2005-06-02 | 2008-09-11 | Searete LLC, a liability corporation of the State of Delaware | Estimating shared image device operational capabilities or resources |
US20090144391A1 (en) * | 2007-11-30 | 2009-06-04 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Audio sharing |
US20090172543A1 (en) * | 2007-12-27 | 2009-07-02 | Microsoft Corporation | Thumbnail navigation bar for video |
US20090273809A1 (en) * | 2002-01-11 | 2009-11-05 | Portrait Innovations, Inc. | Systems and methods for producing portraits |
US20100235466A1 (en) * | 2005-01-31 | 2010-09-16 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Audio sharing |
US20110255610A1 (en) * | 2002-08-28 | 2011-10-20 | Fujifilm Corporation | Method and device for video image processing, calculating the similarity between video frames, and acquiring a synthesized frame by synthesizing a plurality of contiguous sampled frames |
US20130097145A1 (en) * | 1998-11-30 | 2013-04-18 | Gemstar Development Corporation | Search engine for video and graphics |
US20130132523A1 (en) * | 2011-05-23 | 2013-05-23 | Thomas Love | Systems for the integrated design, operation and modification of databases and associated web applications |
US20140324705A1 (en) * | 2011-01-05 | 2014-10-30 | Fox Digital Enterprises, Inc. | System and method for exchanging physical media for a secured digital copy |
US9041826B2 (en) | 2005-06-02 | 2015-05-26 | The Invention Science Fund I, Llc | Capturing selected image objects |
US9125169B2 (en) | 2011-12-23 | 2015-09-01 | Rovi Guides, Inc. | Methods and systems for performing actions based on location-based rules |
US9167195B2 (en) | 2005-10-31 | 2015-10-20 | Invention Science Fund I, Llc | Preservation/degradation of video/audio aspects of a data stream |
US9294799B2 (en) | 2000-10-11 | 2016-03-22 | Rovi Guides, Inc. | Systems and methods for providing storage of data on servers in an on-demand media delivery system |
WO2018005701A1 (en) * | 2016-06-29 | 2018-01-04 | Cellular South, Inc. Dba C Spire Wireless | Video to data |
US9942511B2 (en) | 2005-10-31 | 2018-04-10 | Invention Science Fund I, Llc | Preservation/degradation of video/audio aspects of a data stream |
US10003762B2 (en) | 2005-04-26 | 2018-06-19 | Invention Science Fund I, Llc | Shared image devices |
US20190394350A1 (en) * | 2018-06-25 | 2019-12-26 | Adobe Inc. | Video-based document scanning |
US20200272661A1 (en) * | 2014-08-27 | 2020-08-27 | International Business Machines Corporation | Consolidating video search for an event |
Families Citing this family (106)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6567983B1 (en) * | 1998-04-10 | 2003-05-20 | Fuji Photo Film Co., Ltd. | Electronic album producing and viewing system and method |
US7936381B2 (en) * | 1998-10-30 | 2011-05-03 | Canon Kabushiki Kaisha | Management and setting of photographing condition of image sensing apparatus |
JP2000148771A (en) * | 1998-11-06 | 2000-05-30 | Sony Corp | Processor and method for image processing and provision medium |
JP2000152168A (en) * | 1998-11-13 | 2000-05-30 | Olympus Optical Co Ltd | Image reproducing device |
US6912327B1 (en) * | 1999-01-28 | 2005-06-28 | Kabushiki Kaisha Toshiba | Imagine information describing method, video retrieval method, video reproducing method, and video reproducing apparatus |
US6571024B1 (en) * | 1999-06-18 | 2003-05-27 | Sarnoff Corporation | Method and apparatus for multi-view three dimensional estimation |
WO2001003431A1 (en) * | 1999-07-05 | 2001-01-11 | Hitachi, Ltd. | Video recording method and apparatus, video reproducing method and apparatus, and recording medium |
CN1193593C (en) * | 1999-07-06 | 2005-03-16 | 皇家菲利浦电子有限公司 | Automatic extraction method of the structure of a video sequence |
WO2001003430A2 (en) * | 1999-07-06 | 2001-01-11 | Koninklijke Philips Electronics N.V. | Automatic extraction method of the structure of a video sequence |
US7996878B1 (en) * | 1999-08-31 | 2011-08-09 | At&T Intellectual Property Ii, L.P. | System and method for generating coded video sequences from still media |
JP4251757B2 (en) * | 1999-10-28 | 2009-04-08 | 三洋電機株式会社 | Digital camera |
US6829428B1 (en) * | 1999-12-28 | 2004-12-07 | Elias R. Quintos | Method for compact disc presentation of video movies |
DE60044179D1 (en) * | 1999-12-28 | 2010-05-27 | Sony Corp | System and method for the commercial traffic of images |
EP1670235A1 (en) | 1999-12-28 | 2006-06-14 | Sony Corporation | A portable music player |
US6677981B1 (en) * | 1999-12-31 | 2004-01-13 | Stmicroelectronics, Inc. | Motion play-back of still pictures comprising a panoramic view for simulating perspective |
US6980690B1 (en) * | 2000-01-20 | 2005-12-27 | Canon Kabushiki Kaisha | Image processing apparatus |
JP3784603B2 (en) * | 2000-03-02 | 2006-06-14 | 株式会社日立製作所 | Inspection method and apparatus, and inspection condition setting method in inspection apparatus |
JP4963141B2 (en) * | 2000-04-27 | 2012-06-27 | ソニー株式会社 | Information providing apparatus and method, and program storage medium |
JP2001320667A (en) | 2000-05-12 | 2001-11-16 | Sony Corp | Service providing device and method, reception terminal and method, and service providing system |
WO2001089221A1 (en) * | 2000-05-18 | 2001-11-22 | Imove Inc. | Multiple camera video system which displays selected images |
US20020089587A1 (en) * | 2000-05-18 | 2002-07-11 | Imove Inc. | Intelligent buffering and reporting in a multiple camera data streaming video system |
US7281220B1 (en) * | 2000-05-31 | 2007-10-09 | Intel Corporation | Streaming video programming guide system selecting video files from multiple web sites and automatically generating selectable thumbnail frames and selectable keyword icons |
US6882793B1 (en) | 2000-06-16 | 2005-04-19 | Yesvideo, Inc. | Video processing system |
US6813618B1 (en) * | 2000-08-18 | 2004-11-02 | Alexander C. Loui | System and method for acquisition of related graphical material in a digital graphics album |
US20040012900A1 (en) * | 2000-09-29 | 2004-01-22 | Yasuji Obuchi | Information service system and information service method |
US6897880B2 (en) * | 2001-02-22 | 2005-05-24 | Sony Corporation | User interface for generating parameter values in media presentations based on selected presentation instances |
WO2002093916A2 (en) * | 2001-05-14 | 2002-11-21 | Elder James H | Attentive panoramic visual sensor |
JP2002351878A (en) * | 2001-05-18 | 2002-12-06 | Internatl Business Mach Corp <Ibm> | Digital contents reproduction device, data acquisition system, digital contents reproduction method, metadata management method, electronic watermark embedding method, program, and recording medium |
US20050129274A1 (en) * | 2001-05-30 | 2005-06-16 | Farmer Michael E. | Motion-based segmentor detecting vehicle occupants using optical flow method to remove effects of illumination |
US7908629B2 (en) * | 2001-06-28 | 2011-03-15 | Intel Corporation | Location-based image sharing |
US7480864B2 (en) * | 2001-10-12 | 2009-01-20 | Canon Kabushiki Kaisha | Zoom editor |
US7203380B2 (en) * | 2001-11-16 | 2007-04-10 | Fuji Xerox Co., Ltd. | Video production and compaction with collage picture frame user interface |
JP3787529B2 (en) * | 2002-02-12 | 2006-06-21 | 株式会社日立製作所 | Optical disc recording / reproducing apparatus |
US6983020B2 (en) * | 2002-03-25 | 2006-01-03 | Citrix Online Llc | Method and apparatus for fast block motion detection |
US20030184658A1 (en) * | 2002-03-26 | 2003-10-02 | Fredlund John R. | System and method for capturing motion video segments and providing still and motion image files |
DE60331534D1 (en) * | 2002-04-25 | 2010-04-15 | Sony Corp | PICTURE PROCESSING DEVICE, PICTURE PROCESSING METHOD AND PICTURE PROCESSING PROGRAM |
US8737816B2 (en) * | 2002-08-07 | 2014-05-27 | Hollinbeck Mgmt. Gmbh, Llc | System for selecting video tracks during playback of a media production |
US7739584B2 (en) * | 2002-08-08 | 2010-06-15 | Zane Vella | Electronic messaging synchronized to media presentation |
JP4220749B2 (en) * | 2002-09-30 | 2009-02-04 | 富士フイルム株式会社 | Image service providing device |
US7116716B2 (en) * | 2002-11-01 | 2006-10-03 | Microsoft Corporation | Systems and methods for generating a motion attention model |
US20040088723A1 (en) * | 2002-11-01 | 2004-05-06 | Yu-Fei Ma | Systems and methods for generating a video summary |
US20040125114A1 (en) * | 2002-12-31 | 2004-07-01 | Hauke Schmidt | Multiresolution image synthesis for navigation |
US7546544B1 (en) | 2003-01-06 | 2009-06-09 | Apple Inc. | Method and apparatus for creating multimedia presentations |
US7694225B1 (en) * | 2003-01-06 | 2010-04-06 | Apple Inc. | Method and apparatus for producing a packaged presentation |
US7840905B1 (en) | 2003-01-06 | 2010-11-23 | Apple Inc. | Creating a theme used by an authoring application to produce a multimedia presentation |
US8027482B2 (en) * | 2003-02-13 | 2011-09-27 | Hollinbeck Mgmt. Gmbh, Llc | DVD audio encoding using environmental audio tracks |
US7164798B2 (en) | 2003-02-18 | 2007-01-16 | Microsoft Corporation | Learning-based automatic commercial content detection |
US7340765B2 (en) * | 2003-10-02 | 2008-03-04 | Feldmeier Robert H | Archiving and viewing sports events via Internet |
US20050076387A1 (en) * | 2003-10-02 | 2005-04-07 | Feldmeier Robert H. | Archiving and viewing sports events via Internet |
KR100969966B1 (en) * | 2003-10-06 | 2010-07-15 | 디즈니엔터프라이지즈,인크. | System and method of playback and feature control for video players |
JP3976000B2 (en) * | 2003-11-06 | 2007-09-12 | ソニー株式会社 | Information processing apparatus and method, program recording medium, program, and photographing apparatus |
JP3848319B2 (en) * | 2003-11-11 | 2006-11-22 | キヤノン株式会社 | Information processing method and information processing apparatus |
US8622824B2 (en) * | 2003-12-08 | 2014-01-07 | United Tote Company | Method and system for viewing images of a pari-mutuel gaming activity |
US8238721B2 (en) * | 2004-02-27 | 2012-08-07 | Hollinbeck Mgmt. Gmbh, Llc | Scene changing in video playback devices including device-generated transitions |
US8837921B2 (en) * | 2004-02-27 | 2014-09-16 | Hollinbeck Mgmt. Gmbh, Llc | System for fast angle changing in video playback devices |
US8165448B2 (en) * | 2004-03-24 | 2012-04-24 | Hollinbeck Mgmt. Gmbh, Llc | System using multiple display screens for multiple video streams |
US9053754B2 (en) * | 2004-07-28 | 2015-06-09 | Microsoft Technology Licensing, Llc | Thumbnail generation and presentation for recorded TV programs |
US7986372B2 (en) * | 2004-08-02 | 2011-07-26 | Microsoft Corporation | Systems and methods for smart media content thumbnail extraction |
US20070103544A1 (en) * | 2004-08-26 | 2007-05-10 | Naofumi Nakazawa | Panorama image creation device and panorama image imaging device |
JP4221669B2 (en) * | 2004-09-06 | 2009-02-12 | ソニー株式会社 | Recording apparatus and method, recording medium, and program |
US7382353B2 (en) * | 2004-11-18 | 2008-06-03 | International Business Machines Corporation | Changing a function of a device based on tilt of the device for longer than a time period |
US8045845B2 (en) * | 2005-01-03 | 2011-10-25 | Hollinbeck Mgmt. Gmbh, Llc | System for holding a current track during playback of a multi-track media production |
US20060159432A1 (en) | 2005-01-14 | 2006-07-20 | Citrix Systems, Inc. | System and methods for automatic time-warped playback in rendering a recorded computer session |
US8200828B2 (en) | 2005-01-14 | 2012-06-12 | Citrix Systems, Inc. | Systems and methods for single stack shadowing |
US8935316B2 (en) | 2005-01-14 | 2015-01-13 | Citrix Systems, Inc. | Methods and systems for in-session playback on a local machine of remotely-stored and real time presentation layer protocol data |
US8296441B2 (en) | 2005-01-14 | 2012-10-23 | Citrix Systems, Inc. | Methods and systems for joining a real-time session of presentation layer protocol data |
US8230096B2 (en) | 2005-01-14 | 2012-07-24 | Citrix Systems, Inc. | Methods and systems for generating playback instructions for playback of a recorded computer session |
US8194120B2 (en) * | 2005-05-30 | 2012-06-05 | Fujifilm Corporation | Image capturing apparatus, display apparatus, image capturing method, displaying method and program therefor |
US7663691B2 (en) * | 2005-10-11 | 2010-02-16 | Apple Inc. | Image capture using display device as light source |
US20060284895A1 (en) * | 2005-06-15 | 2006-12-21 | Marcu Gabriel G | Dynamic gamma correction |
US8085318B2 (en) | 2005-10-11 | 2011-12-27 | Apple Inc. | Real-time image capture and manipulation based on streaming data |
US20070112811A1 (en) * | 2005-10-20 | 2007-05-17 | Microsoft Corporation | Architecture for scalable video coding applications |
US8180826B2 (en) | 2005-10-31 | 2012-05-15 | Microsoft Corporation | Media sharing and authoring on the web |
US7773813B2 (en) * | 2005-10-31 | 2010-08-10 | Microsoft Corporation | Capture-intention detection for video content analysis |
US8196032B2 (en) * | 2005-11-01 | 2012-06-05 | Microsoft Corporation | Template-based multimedia authoring and sharing |
JP5026692B2 (en) * | 2005-12-01 | 2012-09-12 | 株式会社ソニー・コンピュータエンタテインメント | Image processing apparatus, image processing method, and program |
US7599918B2 (en) * | 2005-12-29 | 2009-10-06 | Microsoft Corporation | Dynamic search with implicit user intention mining |
US7639897B2 (en) * | 2006-01-24 | 2009-12-29 | Hewlett-Packard Development Company, L.P. | Method and apparatus for composing a panoramic photograph |
CN101867679B (en) * | 2006-03-27 | 2013-07-10 | 三洋电机株式会社 | Thumbnail generating apparatus and image shooting apparatus |
US7577314B2 (en) * | 2006-04-06 | 2009-08-18 | Seiko Epson Corporation | Method and apparatus for generating a panorama background from a set of images |
CN101427283A (en) * | 2006-04-24 | 2009-05-06 | Nxp股份有限公司 | Method and device for generating a panoramic image from a video sequence |
US7925978B1 (en) | 2006-07-20 | 2011-04-12 | Adobe Systems Incorporated | Capturing frames from an external source |
US20080131088A1 (en) * | 2006-11-30 | 2008-06-05 | Mitac Technology Corp. | Image capture method and audio-video recording method of multi-media electronic device |
US7853888B1 (en) * | 2007-01-12 | 2010-12-14 | Adobe Systems Incorporated | Methods and apparatus for displaying thumbnails while copying and pasting |
US20080303949A1 (en) * | 2007-06-08 | 2008-12-11 | Apple Inc. | Manipulating video streams |
US8122378B2 (en) * | 2007-06-08 | 2012-02-21 | Apple Inc. | Image capture and manipulation |
EP2071511A1 (en) * | 2007-12-13 | 2009-06-17 | Thomson Licensing | Method and device for generating a sequence of images of reduced size |
US20090160936A1 (en) * | 2007-12-21 | 2009-06-25 | Mccormack Kenneth | Methods and apparatus for operating a video camera assembly |
US20100077289A1 (en) * | 2008-09-08 | 2010-03-25 | Eastman Kodak Company | Method and Interface for Indexing Related Media From Multiple Sources |
JP4623200B2 (en) * | 2008-10-27 | 2011-02-02 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
JP4623199B2 (en) | 2008-10-27 | 2011-02-02 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
KR20100095777A (en) * | 2009-02-23 | 2010-09-01 | 삼성전자주식회사 | Apparatus and method for extracting thumbnail of contents in electronic device |
US20100265313A1 (en) * | 2009-04-17 | 2010-10-21 | Sony Corporation | In-camera generation of high quality composite panoramic images |
US8861935B2 (en) * | 2009-08-26 | 2014-10-14 | Verizon Patent And Licensing Inc. | Systems and methods for enhancing utilization of recorded media content programs |
US8750645B2 (en) * | 2009-12-10 | 2014-06-10 | Microsoft Corporation | Generating a composite image from video frames |
US9269072B2 (en) * | 2010-12-23 | 2016-02-23 | Citrix Systems, Inc. | Systems, methods, and devices for facilitating navigation of previously presented screen data in an ongoing online meeting |
US8885881B2 (en) * | 2011-03-24 | 2014-11-11 | Toyota Jidosha Kabushiki Kaisha | Scene determination and prediction |
US9185152B2 (en) | 2011-08-25 | 2015-11-10 | Ustream, Inc. | Bidirectional communication on live multimedia broadcasts |
US8615159B2 (en) | 2011-09-20 | 2013-12-24 | Citrix Systems, Inc. | Methods and systems for cataloging text in a recorded session |
US10726451B1 (en) | 2012-05-02 | 2020-07-28 | James E Plankey | System and method for creating and managing multimedia sales promotions |
US9798712B2 (en) * | 2012-09-11 | 2017-10-24 | Xerox Corporation | Personalized medical record |
JP6231702B2 (en) * | 2014-04-24 | 2017-11-15 | ノキア テクノロジーズ オサケユイチア | Apparatus, method and computer program product for video enhanced photo browsing |
JP2016066842A (en) * | 2014-09-24 | 2016-04-28 | ソニー株式会社 | Signal processing circuit and imaging apparatus |
KR101650153B1 (en) * | 2015-03-19 | 2016-08-23 | 네이버 주식회사 | Cartoon data modifying method and cartoon data modifying device |
CN110909205B (en) * | 2019-11-22 | 2023-04-07 | 北京金山云网络技术有限公司 | Video cover determination method and device, electronic equipment and readable storage medium |
US11330307B2 (en) * | 2019-12-13 | 2022-05-10 | Rovi Guides, Inc. | Systems and methods for generating new content structures from content segments |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0766446B2 (en) | 1985-11-27 | 1995-07-19 | 株式会社日立製作所 | Method of extracting moving object image |
US4698682A (en) | 1986-03-05 | 1987-10-06 | Rca Corporation | Video apparatus and method for producing the illusion of motion from a sequence of still images |
US5261041A (en) | 1990-12-28 | 1993-11-09 | Apple Computer, Inc. | Computer controlled animation system based on definitional animated objects and methods of manipulating same |
JP2677312B2 (en) | 1991-03-11 | 1997-11-17 | 工業技術院長 | Camera work detection method |
GB2255466B (en) | 1991-04-30 | 1995-01-25 | Sony Broadcast & Communication | Digital video effects system for producing moving effects |
US5592228A (en) | 1993-03-04 | 1997-01-07 | Kabushiki Kaisha Toshiba | Video encoder using global motion estimation and polygonal patch motion estimation |
GB2277847B (en) | 1993-05-03 | 1997-08-20 | Grass Valley Group | Method of creating video effects by use of keyframes |
US5751281A (en) | 1995-12-11 | 1998-05-12 | Apple Computer, Inc. | Apparatus and method for storing a movie within a movie |
1999
- 1999-06-23 US US09/339,475 patent/US6307550B1/en not_active Expired - Lifetime

2000
- 2000-04-20 AU AU48013/00A patent/AU4801300A/en not_active Abandoned
- 2000-04-20 WO PCT/US2000/010973 patent/WO2000079485A1/en active Application Filing

2001
- 2001-08-20 US US09/933,617 patent/US20020028026A1/en not_active Abandoned
Cited By (115)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130097145A1 (en) * | 1998-11-30 | 2013-04-18 | Gemstar Development Corporation | Search engine for video and graphics |
US9311405B2 (en) * | 1998-11-30 | 2016-04-12 | Rovi Guides, Inc. | Search engine for video and graphics |
US7349623B1 (en) * | 1999-03-19 | 2008-03-25 | Hitachi, Ltd. | Data recording apparatus and system having sustained high transfer rates |
US7835617B2 (en) | 1999-03-19 | 2010-11-16 | Hitachi, Ltd. | Data recording apparatus and system having sustained high transfer rates |
US20050031318A1 (en) * | 1999-03-19 | 2005-02-10 | Hitachi, Ltd. | Data recording apparatus and system having sustained high transfer rates |
US20050031319A1 (en) * | 1999-03-19 | 2005-02-10 | Hitachi, Ltd. | Data recording apparatus and system having sustained high transfer rates |
US7903930B2 (en) | 1999-03-19 | 2011-03-08 | Hitachi, Ltd. | Data recording apparatus and system having sustained high transfer rates |
US20110122759A1 (en) * | 1999-03-19 | 2011-05-26 | Hitachi, Ltd. | Data Recording Apparatus and System Having Sustained High Transfer Rates |
US9294799B2 (en) | 2000-10-11 | 2016-03-22 | Rovi Guides, Inc. | Systems and methods for providing storage of data on servers in an on-demand media delivery system |
US9462317B2 (en) | 2000-10-11 | 2016-10-04 | Rovi Guides, Inc. | Systems and methods for providing storage of data on servers in an on-demand media delivery system |
US7143353B2 (en) * | 2001-03-30 | 2006-11-28 | Koninklijke Philips Electronics, N.V. | Streaming video bookmarks |
US20020163532A1 (en) * | 2001-03-30 | 2002-11-07 | Koninklijke Philips Electronics N.V. | Streaming video bookmarks |
US7271932B2 (en) * | 2001-11-16 | 2007-09-18 | Noritsu Koki Co., Ltd. | Digital image processing apparatus, digital image processing method, digital image processing program product, and digital image printing system |
US20030095785A1 (en) * | 2001-11-16 | 2003-05-22 | Keisuke Izumi | Digital image processing apparatus, digital image processing method, digital image processing program product, and digital image printing system |
US7248741B2 (en) * | 2002-01-09 | 2007-07-24 | Hiroshi Akimoto | Video sequences correlation and static analysis and scene changing forecasting in motion estimation |
US20030128880A1 (en) * | 2002-01-09 | 2003-07-10 | Hiroshi Akimoto | Video sequences correlation and static analysis and scene changing forecasting in motion estimation |
US20090273809A1 (en) * | 2002-01-11 | 2009-11-05 | Portrait Innovations, Inc. | Systems and methods for producing portraits |
US20030202110A1 (en) * | 2002-04-30 | 2003-10-30 | Owens James W. | Arrangement of images |
US8805121B2 (en) * | 2002-08-28 | 2014-08-12 | Fujifilm Corporation | Method and device for video image processing, calculating the similarity between video frames, and acquiring a synthesized frame by synthesizing a plurality of contiguous sampled frames |
US20120321220A1 (en) * | 2002-08-28 | 2012-12-20 | Fujifilm Corporation | Method and device for video image processing, calculating the similarity between video frames, and acquiring a synthesized frame by synthesizing a plurality of contiguous sampled frames |
US8275219B2 (en) * | 2002-08-28 | 2012-09-25 | Fujifilm Corporation | Method and device for video image processing, calculating the similarity between video frames, and acquiring a synthesized frame by synthesizing a plurality of contiguous sampled frames |
US20110255610A1 (en) * | 2002-08-28 | 2011-10-20 | Fujifilm Corporation | Method and device for video image processing, calculating the similarity between video frames, and acquiring a synthesized frame by synthesizing a plurality of contiguous sampled frames |
US20060034533A1 (en) * | 2004-08-12 | 2006-02-16 | Microsoft Corporation | System and method for producing a higher resolution still image from video information |
US7548259B2 (en) * | 2004-08-12 | 2009-06-16 | Microsoft Corporation | System and method for producing a higher resolution still image from video information |
US8176426B2 (en) * | 2004-09-29 | 2012-05-08 | Nikon Corporation | Image reproduction apparatus and image reproduction program product |
US20060069999A1 (en) * | 2004-09-29 | 2006-03-30 | Nikon Corporation | Image reproduction apparatus and image reproduction program product |
US9489717B2 (en) | 2005-01-31 | 2016-11-08 | Invention Science Fund I, Llc | Shared image device |
US8606383B2 (en) | 2005-01-31 | 2013-12-10 | The Invention Science Fund I, Llc | Audio sharing |
US20060174204A1 (en) * | 2005-01-31 | 2006-08-03 | Jung Edward K | Shared image device resolution transformation |
US20060174205A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Estimating shared image device operational capabilities or resources |
US20060174203A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Viewfinder for shared image device |
US20060170958A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Proximity of shared image devices |
US9910341B2 (en) | 2005-01-31 | 2018-03-06 | The Invention Science Fund I, Llc | Shared image device designation |
US20060174206A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Shared image device synchronization or designation |
US7920169B2 (en) | 2005-01-31 | 2011-04-05 | Invention Science Fund I, Llc | Proximity of shared image devices |
US20060171603A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Resampling of transformed shared image techniques |
US7876357B2 (en) | 2005-01-31 | 2011-01-25 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US9124729B2 (en) | 2005-01-31 | 2015-09-01 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US20060171695A1 (en) * | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Shared image device designation |
US20100235466A1 (en) * | 2005-01-31 | 2010-09-16 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Audio sharing |
US20060187228A1 (en) * | 2005-01-31 | 2006-08-24 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Sharing including peripheral shared image device |
US8902320B2 (en) | 2005-01-31 | 2014-12-02 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US20060187227A1 (en) * | 2005-01-31 | 2006-08-24 | Jung Edward K | Storage aspects for imaging device |
US20080106621A1 (en) * | 2005-01-31 | 2008-05-08 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Shared image device synchronization or designation |
US9082456B2 (en) | 2005-01-31 | 2015-07-14 | The Invention Science Fund I Llc | Shared image device designation |
US20060190968A1 (en) * | 2005-01-31 | 2006-08-24 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Sharing between shared audio devices |
US20080158366A1 (en) * | 2005-01-31 | 2008-07-03 | Searete Llc | Shared image device designation |
US20060187230A1 (en) * | 2005-01-31 | 2006-08-24 | Searete Llc | Peripheral shared image device sharing |
US10003762B2 (en) | 2005-04-26 | 2018-06-19 | Invention Science Fund I, Llc | Shared image devices |
US20080219589A1 (en) * | 2005-06-02 | 2008-09-11 | Searete LLC, a liability corporation of the State of Delaware | Estimating shared image device operational capabilities or resources |
US8681225B2 (en) | 2005-06-02 | 2014-03-25 | Royce A. Levien | Storage access technique for captured data |
US9191611B2 (en) | 2005-06-02 | 2015-11-17 | Invention Science Fund I, Llc | Conditional alteration of a saved image |
US20060274153A1 (en) * | 2005-06-02 | 2006-12-07 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Third party storage of captured data |
US9967424B2 (en) | 2005-06-02 | 2018-05-08 | Invention Science Fund I, Llc | Data storage usage protocol |
US20070109411A1 (en) * | 2005-06-02 | 2007-05-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Composite image selectivity |
US9041826B2 (en) | 2005-06-02 | 2015-05-26 | The Invention Science Fund I, Llc | Capturing selected image objects |
US9001215B2 (en) | 2005-06-02 | 2015-04-07 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US20060274165A1 (en) * | 2005-06-02 | 2006-12-07 | Levien Royce A | Conditional alteration of a saved image |
US20070120981A1 (en) * | 2005-06-02 | 2007-05-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Storage access technique for captured data |
US9451200B2 (en) | 2005-06-02 | 2016-09-20 | Invention Science Fund I, Llc | Storage access technique for captured data |
US20070139529A1 (en) * | 2005-06-02 | 2007-06-21 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Dual mode image capture technique |
US10097756B2 (en) | 2005-06-02 | 2018-10-09 | Invention Science Fund I, Llc | Enhanced video/still image correlation |
US20070274563A1 (en) * | 2005-06-02 | 2007-11-29 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Capturing selected image objects |
US20060274163A1 (en) * | 2005-06-02 | 2006-12-07 | Searete Llc. | Saved-image management |
US7872675B2 (en) | 2005-06-02 | 2011-01-18 | The Invention Science Fund I, Llc | Saved-image management |
US20060279643A1 (en) * | 2005-06-02 | 2006-12-14 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Storage access technique for captured data |
US9621749B2 (en) | 2005-06-02 | 2017-04-11 | Invention Science Fund I, Llc | Capturing selected image objects |
US20070294717A1 (en) * | 2005-07-08 | 2007-12-20 | Hill Peter N | Methods, systems, and products for conserving bandwidth |
US9432710B2 (en) | 2005-07-08 | 2016-08-30 | At&T Intellectual Property I, L.P. | Methods, systems, and products for conserving bandwidth |
US8848058B2 (en) * | 2005-07-12 | 2014-09-30 | Dartfish Sa | Method for analyzing the motion of a person during an activity |
US20080094472A1 (en) * | 2005-07-12 | 2008-04-24 | Serge Ayer | Method for analyzing the motion of a person during an activity |
US20070025639A1 (en) * | 2005-07-28 | 2007-02-01 | Hui Zhou | Method and apparatus for automatically estimating the layout of a sequentially ordered series of frames to be used to form a panorama |
US7474802B2 (en) | 2005-07-28 | 2009-01-06 | Seiko Epson Corporation | Method and apparatus for automatically estimating the layout of a sequentially ordered series of frames to be used to form a panorama |
US20070097215A1 (en) * | 2005-10-31 | 2007-05-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Degradation/preservation management of captured data |
US20070100533A1 (en) * | 2005-10-31 | 2007-05-03 | Searete Llc, A Limited Liability Corporation Of State Of Delaware | Preservation and/or degradation of a video/audio data stream |
US8253821B2 (en) | 2005-10-31 | 2012-08-28 | The Invention Science Fund I, Llc | Degradation/preservation management of captured data |
US9167195B2 (en) | 2005-10-31 | 2015-10-20 | Invention Science Fund I, Llc | Preservation/degradation of video/audio aspects of a data stream |
US8072501B2 (en) | 2005-10-31 | 2011-12-06 | The Invention Science Fund I, Llc | Preservation and/or degradation of a video/audio data stream |
US8233042B2 (en) | 2005-10-31 | 2012-07-31 | The Invention Science Fund I, Llc | Preservation and/or degradation of a video/audio data stream |
US9942511B2 (en) | 2005-10-31 | 2018-04-10 | Invention Science Fund I, Llc | Preservation/degradation of video/audio aspects of a data stream |
US20070100860A1 (en) * | 2005-10-31 | 2007-05-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Preservation and/or degradation of a video/audio data stream |
US20070098348A1 (en) * | 2005-10-31 | 2007-05-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Degradation/preservation management of captured data |
US8032840B2 (en) | 2006-01-10 | 2011-10-04 | Nokia Corporation | Apparatus, method and computer program product for generating a thumbnail representation of a video sequence |
USRE47421E1 (en) | 2006-01-10 | 2019-06-04 | Conversant Wireless Licensing S.a r.l. | Apparatus, method and computer program product for generating a thumbnail representation of a video sequence |
US20070162873A1 (en) * | 2006-01-10 | 2007-07-12 | Nokia Corporation | Apparatus, method and computer program product for generating a thumbnail representation of a video sequence |
US20070222865A1 (en) * | 2006-03-15 | 2007-09-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Enhanced video/still image correlation |
US20100077166A1 (en) * | 2006-04-28 | 2010-03-25 | At&T Intellectual Property I, L.P. F/K/A Bellsouth Intellectual Property Corporation | Methods, systems, and products for recording media |
US8291182B2 (en) | 2006-04-28 | 2012-10-16 | At&T Intellectual Property I, L.P. | Methods, systems, and products for recording media |
US20070253675A1 (en) * | 2006-04-28 | 2007-11-01 | Weaver Timothy H | Methods, systems, and products for recording media |
US20070255915A1 (en) * | 2006-04-28 | 2007-11-01 | Timothy Weaver | Methods, systems, and products for recording media |
US8682857B2 (en) | 2006-04-28 | 2014-03-25 | At&T Intellectual Property I, L.P. | Methods, systems, and products for recording media |
US7647464B2 (en) | 2006-04-28 | 2010-01-12 | At&T Intellectual Property I, L.P. | Methods, systems, and products for recording media to a restoration server |
US20100303150A1 (en) * | 2006-08-08 | 2010-12-02 | Ping-Kang Hsiung | System and method for cartoon compression |
WO2008019156A3 (en) * | 2006-08-08 | 2008-06-19 | Digital Media Cartridge Ltd | System and method for cartoon compression |
US20080043108A1 (en) * | 2006-08-18 | 2008-02-21 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Capturing selected image objects |
US8964054B2 (en) | 2006-08-18 | 2015-02-24 | The Invention Science Fund I, Llc | Capturing selected image objects |
US7787679B2 (en) * | 2006-11-22 | 2010-08-31 | Agfa Healthcare Inc. | Study navigation system and method |
US20080118120A1 (en) * | 2006-11-22 | 2008-05-22 | Rainer Wegenkittl | Study Navigation System and Method |
US7650368B2 (en) | 2007-02-07 | 2010-01-19 | At&T Intellectual Property I, L.P. | Methods, systems, and products for restoring electronic media |
US7711733B2 (en) | 2007-02-07 | 2010-05-04 | At&T Intellectual Property I, L.P. | Methods, systems, and products for targeting media for storage to communications devices |
US20080189329A1 (en) * | 2007-02-07 | 2008-08-07 | Weaver Timothy H | Methods, systems, and products for targeting media |
US20080189338A1 (en) * | 2007-02-07 | 2008-08-07 | Weaver Timothy H | Methods, systems, and products for restoring media |
US20100185613A1 (en) * | 2007-02-07 | 2010-07-22 | At&T Intellectual Property I, L.P. F/K/A Bellsouth Intellectual Property Corporation | Method, device, and computer program product for targeting media |
US8150845B2 (en) | 2007-02-07 | 2012-04-03 | At&T Intellectual Property I, L.P. | Method, device, and computer program product for targeting media for storage to a communications device |
US20090144391A1 (en) * | 2007-11-30 | 2009-06-04 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Audio sharing |
US20090172543A1 (en) * | 2007-12-27 | 2009-07-02 | Microsoft Corporation | Thumbnail navigation bar for video |
US8875023B2 (en) * | 2007-12-27 | 2014-10-28 | Microsoft Corporation | Thumbnail navigation bar for video |
US20140324705A1 (en) * | 2011-01-05 | 2014-10-30 | Fox Digital Enterprises, Inc. | System and method for exchanging physical media for a secured digital copy |
US20130132523A1 (en) * | 2011-05-23 | 2013-05-23 | Thomas Love | Systems for the integrated design, operation and modification of databases and associated web applications |
US9125169B2 (en) | 2011-12-23 | 2015-09-01 | Rovi Guides, Inc. | Methods and systems for performing actions based on location-based rules |
US20200272661A1 (en) * | 2014-08-27 | 2020-08-27 | International Business Machines Corporation | Consolidating video search for an event |
US11847163B2 (en) * | 2014-08-27 | 2023-12-19 | International Business Machines Corporation | Consolidating video search for an event |
WO2018005701A1 (en) * | 2016-06-29 | 2018-01-04 | Cellular South, Inc. Dba C Spire Wireless | Video to data |
US20190394350A1 (en) * | 2018-06-25 | 2019-12-26 | Adobe Inc. | Video-based document scanning |
US10819876B2 (en) * | 2018-06-25 | 2020-10-27 | Adobe Inc. | Video-based document scanning |
Also Published As
Publication number | Publication date |
---|---|
AU4801300A (en) | 2001-01-09 |
WO2000079485A1 (en) | 2000-12-28 |
US6307550B1 (en) | 2001-10-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6307550B1 (en) | Extracting photographic images from video | |
US6278466B1 (en) | Creating animation from a video | |
US6014183A (en) | Method and apparatus for detecting scene changes in a digital video stream | |
Yeung et al. | Video visualization for compact presentation and fast browsing of pictorial content | |
US6081278A (en) | Animation object having multiple resolution format | |
US7822233B2 (en) | Method and apparatus for organizing digital media based on face recognition | |
US7594177B2 (en) | System and method for video browsing using a cluster index | |
US6268864B1 (en) | Linking a video and an animation | |
US8818038B2 (en) | Method and system for video indexing and video synopsis | |
US9530195B2 (en) | Interactive refocusing of electronic images | |
US7496859B2 (en) | Folder icon display control apparatus | |
US5768447A (en) | Method for indexing image information using a reference model | |
US7383508B2 (en) | Computer user interface for interacting with video cliplets generated from digital video | |
JP4499380B2 (en) | System and method for whiteboard and audio capture | |
JP2994177B2 (en) | System and method for locating boundaries between video segments | |
US8250068B2 (en) | Electronic album editing system, electronic album editing method, and electronic album editing program | |
WO2001028238A2 (en) | Method and apparatus for enhancing and indexing video and audio signals | |
JP2002238027A (en) | Video and audio information processing | |
US20050058431A1 (en) | Generating animated image file from video data file frames | |
US20040181545A1 (en) | Generating and rendering annotated video files | |
WO1999065224A2 (en) | Creating animation from a video | |
US20060088284A1 (en) | Digital photo kiosk and methods for digital image processing | |
US9779306B2 (en) | Content playback system, server, mobile terminal, content playback method, and recording medium | |
Teodosio et al. | Salient stills | |
Teng et al. | Design and evaluation of mProducer: a mobile authoring tool for personal experience computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA | Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CISCO WEBEX LLC;REEL/FRAME:027033/0764 | Effective date: 20111006 |
Owner name: CISCO WEBEX LLC, DELAWARE | Free format text: CHANGE OF NAME;ASSIGNOR:WEBEX COMMUNICATIONS, INC.;REEL/FRAME:027033/0756 | Effective date: 20091005 |