WO2006009663A1 - Digital asset management, targeted searching and desktop searching using digital watermarks - Google Patents

Digital asset management, targeted searching and desktop searching using digital watermarks

Info

Publication number
WO2006009663A1
Authority
WO
WIPO (PCT)
Prior art keywords
metadata
file
watermark
imagery
searching
Prior art date
Application number
PCT/US2005/020790
Other languages
French (fr)
Inventor
Tony F. Rodriguez
Sean Calhoon
Scott J. Carr
Steven Gray
Alastair M. Reed
Original Assignee
Digimarc Corporation
Priority date
Filing date
Publication date
Application filed by Digimarc Corporation filed Critical Digimarc Corporation
Priority to JP2007518107A (JP5372369B2)
Publication of WO2006009663A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques

Definitions

  • the present invention relates generally to digital watermarking. In some implementations the present invention relates to searching networks or desktops.
  • Digital watermarking is a process for modifying media content to embed a machine-readable code into the data content.
  • the data may be modified such that the embedded code is imperceptible or nearly imperceptible to the user, yet may be detected through an automated detection process.
  • digital watermarking is applied to media such as images, audio signals, and video signals.
  • digital watermarking may also be applied to documents (e.g., through line, word or character shifting, background texturing, etc.), software, multi-dimensional graphics models, and surface textures of objects.
  • Digital watermarking systems have two primary components: an embedding component that embeds the watermark in the media content, and a reading component that detects and reads the embedded watermark.
  • the embedding component embeds a watermark by altering data samples of the media content in the spatial, temporal or some other domain (e.g., Fourier, Discrete Cosine or Wavelet transform domains).
  • the reading component analyzes target content to detect whether a watermark is present. In applications where the watermark encodes information (e.g., a message), the reader extracts this information from the detected watermark.
  • watermark decoders are deployed at a variety of locations on a computer network such as the internet, including in internet search engines that screen media objects gathered by each search engine, network firewalls that screen media objects that are encountered at the firewall, in local area networks and databases where spiders do not typically reach, in content filters, in client-based web-browsers, etc.
  • Each of these distributed decoders acts as a spider thread that logs (and perhaps acts upon) watermark information.
  • Examples of the types of watermark information include identifiers decoded from watermarks in watermarked media objects, media object counts, addresses of the location of the media objects (where they were found), and other context information (e.g., how the object was being used, who was using it, etc.).
  • the spider threads send their logs or reports to a central spider program that compiles them and aggregates the information into fields of a searchable database.
  • a method of searching a network for watermarked content includes receiving one or more keywords associated with watermarked content and providing the one or more keywords to a network search engine.
  • a listing of URLs that are associated with the one or more keywords are obtained from the network search engine.
  • the URLs are visited and, while visiting each URL, the content at each URL is analyzed for digital watermarking. At least one watermark identifier and a corresponding URL location are reported when found.
  • a system to direct network searching for watermarked content is provided. The system includes: i) a website interface to receive at least one of keywords and network locations from a customer; ii) a website interface to communicate with a plurality of distributed watermark detectors; iii) a controller to control communication of keywords and network locations to the plurality of distributed watermark detectors; and iv) a database to maintain information associated with digital watermarking and corresponding network locations.
  • a method of searching a network for watermarked content includes receiving a visible pattern and searching the network for content corresponding to the visible pattern. Content identified as corresponding to the visible pattern is analyzed for digital watermarking. At least one watermark identifier and a corresponding URL location are reported when digital watermarking is found.
  • Another challenge is to find and manage content stored locally on a user's computer or on her networked computers.
  • Searching tools have recently emerged to allow a user to search and catalog files on her computer. Examples are Google's Google Desktop Search and Microsoft's MSN Desktop Search.
  • a method including receiving an imagery or audio file; identifying perceptual features in the imagery or audio file; and based on the perceptual features, generating metadata for the imagery or audio file.
  • the method further includes indexing the metadata in a desktop searching index.
  • the identifying includes pattern recognition, color analysis or facial recognition.
  • a desktop searching tool including executable instructions stored in computer memory for execution by electronic processing circuitry.
  • the instructions include instructions to: i. search one or more computer directories for imagery or audio files; ii. upon discovery of an imagery or audio file, analyze the file for a digital watermark embedded therein, and if a digital watermark is embedded therein to recover a plural-bit identifier; iii. obtain metadata from the imagery or audio file; and iv. query a remote database with the plural-bit identifier to determine whether the file metadata is current.
  • the desktop searching tool further includes instructions to refresh the file metadata with metadata from the remote database when the file metadata is not current.
  • a method of controlling a desktop searching tool includes searching one or more computer directories for imagery or audio files; upon discovery of an imagery or audio file, analyzing the file to determine whether a digital watermark is embedded therein; and, if a digital watermark is embedded therein, recovering a plural-bit identifier carried by the digital watermark.
  • the method further includes obtaining metadata from the imagery or audio file, and querying a remote database with the plural-bit identifier to determine whether the file metadata is current.
  • a method to gather metadata associated with imagery or audio includes receiving an imagery or audio file including a content portion and a metadata portion; analyzing the metadata to determine at least one of a time and day when the content portion was created; automatically accessing one or more user software applications to gather information associated with at least one of time and day; and adding the information to the metadata portion.
  • a method of obtaining metadata for a first imagery or audio file includes determining other imagery or audio files that were created within a predetermined window of a creation time for the first imagery or audio file; gathering metadata associated with the other imagery or audio files; and associating at least some of the metadata with the first imagery or audio file.
  • the method includes: providing a graphical user interface through which a user can select a category of metadata from a plurality of categories of metadata; and once selected, applying the selected category of metadata to a file or contents in a directory through a mouse cursor or touch screen, whereby the selected category of metadata is associated with the image or audio file or file directory.
  • FIG. 1 illustrates a system for enhancing network searching.
  • FIG. 2 illustrates a desktop searching tool.
  • FIG. 3 illustrates a metadata repository that communicates with the desktop searching tool of FIG. 2.
  • FIG. 4 illustrates associating a person's metadata profile with images.
  • FIG. 5 illustrates a graphical user interface for selecting automatically gathered or generated metadata.
  • FIG. 6 illustrates a metadata authoring tool on a user's desktop.
  • FIG. 7 is a flow diagram illustrating a desktop indexing method according to yet another aspect of the invention.
  • FIG. 8 illustrates an identification document including electronic memory.
  • FIG. 9A illustrates an identification document issuance process; and
  • FIG. 9B illustrates a related digital watermarking process.
  • Web searching continues to be a boon for the internet. Examples include Google, Yahoo!, and MSNBC, to name a few. Web searching allows a user to find information that is distributed over the internet.
  • current searching systems have two major problems. First, web crawlers that find information for indexing on a search engine only search around 10-20% (a generous estimate) of the internet. Second, a web crawler traditionally only locates surface information, such as HTML (hypertext markup language) web pages, and ignores deep information, including downloadable files, FlashMedia and database information.
  • a first solution uses an army of client-based web-browsers to locate watermarked content.
  • One implementation of this first solution searches content that a user encounters as she routinely surfs the internet. Once identified, watermarked content and a content location can be reported to a central location. The power of this tool emerges as watermark detectors are incorporated into hundreds or thousands (even millions) of browsing tools. Watermarked content - perhaps located behind password protected or restricted websites - is analyzed after a user enters the website, perhaps after entering a user id or password to gain access to a restricted website.
  • a digital watermark reader is incorporated into (or cooperates with) a user's internet browser or file browser, such as Windows Explorer.
  • the web or file browser is equipped with watermark reader software (e.g., a plug-in, integrated via an Application Programming Interface, or as a shell extension to the operating system).
  • the digital watermark reader analyzes content encountered through the browser. For example, say a user visits ESPN.com, CNN.com and then looks at images posted on LotsofImages.com.
  • the watermark reader sniffs through the various web pages and web images as the user browses the content.
  • a watermark reader can also be configured to review web-based audio and video as well.
  • the digital watermark reader is looking for watermarked content. Upon finding and decoding watermarked content, the reader obtains a watermark identifier.
  • the identifier can be a numeric identifier or may include text or other identifying information.
  • the watermark reader stores (or immediately reports) the identifier and a web location at which the watermark identifier was found. The report can also include a day/timestamp.
  • the server can, optionally, verify the existence of the watermarked content by visiting the web location and searching for the watermarked content. Alternatively, the server reports the watermarked content to a registered owner of the content. The owner is identified, e.g., through a database lookup that associates identifiers with their owners. (The owner can then use the report to help enforce copyrights, trademarks or other intellectual property rights.).
  • the central server can also maintain a log - a chain of custody if you will - to evidence that watermarked content (e.g., audio, video, images) was found on a particular day, at a particular web location.
  • a watermark reader can alternatively report the content identifier and location directly to an owner of the watermarked content.
  • a watermark includes or links to information that identifies a content owner.
  • the watermark reader uses this information to properly direct a message (e.g., automated email) to the owner when reporting a watermark identifier and location at which the watermark identifier was found.
  • a related implementation of our first solution is a bit more passive.
  • a watermark reader is incorporated into a browser (or screen saver).
  • the watermark- reader-equipped browser searches the internet for watermarked content when the computer is idle or otherwise inactive. For example, the browser automatically searches (e.g., visits) websites when a screen saver is activated, or after a predetermined period of computer inactivity.
  • a browser communicates with a central server to obtain a list of websites to visit.
  • the browser caches the list, and accesses the websites listed therein when the computer is inactive.
  • the server provides the browser with a list of keywords.
  • the keywords are plugged into a search engine, say Google, and the browser then searches resulting websites during periods of computer inactivity.
  • the browser can be configured to accept keywords and automatically access a search engine, where resulting URLs are collected and searched.
  • the central server can hit the search engine, plug in the keywords, and collect the URLs. (Content owners can communicate with the central server, giving it a listing of websites or keywords that the customers would like to have searched.).
  • a watermark-reader-equipped browser can search as a background process.
  • the browser searches websites while a computer user is concurrently pulling together a PowerPoint presentation or typing email.
  • the background process is optionally interrupted when the user clicks the browser icon for web browsing or when the user needs additional computer resources.
  • a regulator (e.g., a software module) automatically scales back watermark searching activity if processing or computer resources reach a predetermined level. (A pop-up window can also be presented to allow a user to decide whether to continue watermark searching.)
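As a rough illustration of this regulator idea, the Python sketch below pauses watermark scanning whenever a resource probe exceeds a threshold. The `cpu_load` probe, the threshold, and the `scan_one` callback are illustrative assumptions rather than anything specified in the document, and `os.getloadavg()` is Unix-only.

```python
# A minimal sketch of the resource "regulator": pause scanning while the machine is busy.
import os
import time

def cpu_load() -> float:
    """1-minute load average normalized by CPU count (Unix only); hypothetical probe."""
    return os.getloadavg()[0] / (os.cpu_count() or 1)

def regulated_scan(files, scan_one, threshold=0.75, backoff_seconds=5.0):
    for path in files:
        while cpu_load() > threshold:      # scale back: wait for resources to free up
            time.sleep(backoff_seconds)
        scan_one(path)                      # run the watermark reader on one file

regulated_scan([], scan_one=print)          # no files: nothing to do, but runnable
```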
  • if a watermark reader encounters a database or FlashMedia (or other content that is difficult to analyze), the watermark reader can report such findings to a central server.
  • the central server can revisit the websites to handle such layered content.
  • the central server may employ algorithms that allow databases or FlashMedia to be explored for watermarked content.
  • a database is an image database. The database is combed, perhaps with a keyword search for file names or metadata, or via a record-by-record search. Each record (or specific records) is then searched for watermarked content.
  • consider a content owner (e.g., a copyright owner of prize-winning Beagle images). The content owner embeds her images with digital watermarks prior to posting them on her website.
  • the watermarks preferably carry or link to image identifying information, such as the content owner's name, image identifier, copyright information, etc.
  • the content owner further discovers that her pirated images are often associated with a particular brand of knock-off dog food, "Yumpsterlishious.”
  • a targeted search (e.g., via a search engine) for "Yumpsterlishious" and/or "Beagles" generates a listing of, oh say, 1024 URLs.
  • Content from each of the 1024 URLs is then analyzed with a watermark reader to locate unauthorized copies of the content owner's images.
  • the location (e.g., URL) of suspect images can be forwarded to the copyright owner for legal enforcement.
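A minimal sketch of this keyword-directed search-then-decode loop follows, in Python. The `search_engine_urls` and `decode_watermark` functions are placeholders for a real search-engine query and a real watermark reader; neither is specified by the document.

```python
# Sketch: keywords -> candidate URLs -> fetch content -> look for a watermark -> report.
from typing import Iterable, Optional
import urllib.request

def search_engine_urls(keywords: Iterable[str]) -> list:
    """Placeholder: a real implementation would query a search engine for these keywords."""
    return []

def decode_watermark(content: bytes) -> Optional[str]:
    """Placeholder for a digital watermark reader; returns an identifier, or None."""
    return None

def crawl_for_watermarks(keywords, report):
    """Visit each result URL and report any watermark identifier found there."""
    for url in search_engine_urls(keywords):
        try:
            content = urllib.request.urlopen(url, timeout=10).read()
        except OSError:
            continue
        identifier = decode_watermark(content)
        if identifier is not None:
            report({"identifier": identifier, "url": url})

crawl_for_watermarks(["Yumpsterlishious", "Beagles"], report=print)
```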
  • keywords may include author, photographer, artist, subject matter, dates, etc.
  • the above examples leverage keyword searching (or targeted searching) and digital watermark analysis.
  • a targeted search utilizes metadata associated with content.
  • a search engine looks for keywords in content metadata (e.g., headers, XML tags, etc.).
  • Content including certain keywords in associated metadata (e.g., to borrow from the above example, "Beagles") is searched with a watermark reader to determine whether it includes a watermark embedded therein.
  • metadata associated with an audio or video file is searched for keywords, and if the keywords are found, the file is further analyzed with a digital watermark reader.
  • This example uses keywords in metadata to identify likely candidates for watermark detection.
  • Pattern matching algorithms are well known, and we can employ such algorithms while searching the internet for lexical or image based matches.
  • Watermark decoding is performed only on content meeting predetermined pattern criteria. For example, a pattern matching search is initiated for all images or graphics including a stylistic X, a trademark for Xerloppy Corporation. The pattern matching search turns up 72 hits. The images (or graphic files) are then searched to determine whether a digital watermark is embedded therein.
  • Routers and switch nodes are monitored to determine internet traffic trends.
  • a watermark- reading web crawler is directed toward the trends. For example, a particular router is monitored to see where traffic originated or is routed from prior to accessing a website including copyrighted (and watermarked) images. The suspected originating or routing websites are crawled in search of watermarked content.
  • Still another targeted searching method routinely analyzes websites in which unauthorized copyrighted materials have been previously found. For example, a server maintains a listing of websites where watermarked content has been previously found. The websites are routinely crawled in search of any watermarked content.
  • FIG. 1 illustrates a system 101 implementing an integrated searching strategy.
  • the term "integrated" is used to reflect a system operable to employ both manual and automated searching.
  • One object of system 101 is to identify digital watermarked content on a network, like the internet.
  • the system includes a central control panel or interface 102, through which searching criteria is provided. For example, a customer can enter search terms (e.g., "Beagle") or specific web addresses that she would like searched for watermarked content through, e.g., a web-based customer interface 104. The terms and/or URLs are communicated to interface 102.
  • Interface 102 farms out the terms and/or URLs to a watermark detector-enabled web crawler (or searching agent) 120 or to a distributed watermark detector-enabled web crawler (e.g., a soldier from the "army" of web browsers mentioned above).
  • interface 102 provides the terms and/or URLs to directed search module 106, which is a server-based web crawler including a digital watermark detector. The directed search module 106 hits the corresponding URLs in search of watermarked content.
  • the FIG. 1 system further includes a manual searching module 108 in which an operator directs searching.
  • an operator enters a website to be forwarded to web crawler 120, or directs a web browser to a particular website in search of watermarked content.
  • the module 108 is used to interface with a search engine, e.g., Google, where keywords are entered and resulting URLs are provided to watermark detector-enabled web browsers.
  • modules 110 (which may include some human interaction) may also provide the system with additional URLs to visit. These URLs may be directly provided to web crawler 120, but are preferably controlled by control panel 102.
  • Results from web crawler 120 are provided to a database 130 for customer reports or for further analysis.
  • Search engines employ web crawlers to categorize web pages. For example, website text is obtained by a crawler and used to create keyword indexing. Website owners can also register a website by listing the URL and keywords.
  • An improvement is to include a digital watermark analysis in the registration or categorization process.
  • the search engine's web crawler employs a digital watermark reader and scans a target website for digital watermarking.
  • a digital watermark includes a unique identifier, and perhaps text. The identifier and/or text are used as keywords when cataloging the target website.
  • the search engine may associate a web address with a watermark numeric identifier and any text carried by the watermark, and may even indicate that the website includes digital watermarking.
  • the watermark-based keywords are searchable along with any keywords derived from text or HTML found on the website.
  • content can include XML tags.
  • the tags can include a field which indicates that one or more items of content on a website include digital watermarking.
  • the web crawler/search engine need not decode the watermarks; but rather, determines from the XML fields (or header data) that the website includes digital watermarking.
  • the web crawler or associated search engine includes a "watermarking is present" indicator as a keyword associated with the website.
  • a search engine keyword search may include all websites including "watermarking is present," plus any relevant keywords (e.g., "Beagles"). Resulting website listings can be searched for digital watermarking.
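The sketch below shows one way such watermark-derived keywords could be folded into an ordinary keyword index, so that a query combining a "watermarking is present" flag with a term like "Beagles" narrows the result set. The in-memory dict index and the `catalog` function are illustrative assumptions only, not any particular search engine's API.

```python
# Sketch: catalog a page under both its text keywords and watermark-derived keywords.
from collections import defaultdict

index = defaultdict(set)   # keyword -> set of URLs

def catalog(url: str, text_keywords, watermark_id=None, watermark_text=""):
    for kw in text_keywords:
        index[kw.lower()].add(url)
    if watermark_id is not None:
        index["watermarking is present"].add(url)   # flag usable as a search term
        index[str(watermark_id)].add(url)           # numeric identifier as a keyword
        for kw in watermark_text.split():
            index[kw.lower()].add(url)

catalog("http://example.com/beagles", ["beagles", "photos"], watermark_id=12345)
print(index["watermarking is present"] & index["beagles"])   # intersection of both terms
```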
  • Another searching tool facilitates communication with a plurality of mobile devices and leverages search results generated by the various mobile devices.
  • 23-year old Ginger goes clubbing on Saturday night. She finds her way to her favorite hangout and meets up with three of her closest friends. The music is loud and conversation is stifled through the noise and haze. But wireless communication is uninhibited. Ginger - as always - has packed along her wireless device (e.g., Pocket PC, Blackberry, cell phone, etc.).
  • Her device is, e.g., Bluetooth enabled and readily communicates with similar devices carried by Ginger's friends.
  • Ginger's device communicates with the other devices to see whether they have recently performed any searching, and if so, what the nature of the searching was.
  • Ginger can preset search topics (key terms or identifiers) in her wireless device. Instead of presetting search topics, Ginger's wireless device can automatically generate search topics based on Ginger's web browsing history or past internet queries. One setting can be simply to copy any search results carried out by the other devices. Ginger's device uses these preset search topics to sniff other devices and see if they have found anything related to Ginger's search terms.
  • the search results (and maybe even corresponding content like audio files) are stored in a search results or shared directory.
  • the search need not be carried out on Kim's mobile device, but instead, can be carried out on Kim's home computer, with the search results being communicated to Kim's mobile.
  • Ginger likes Aintitnice also, and has entered the group as a search term in her mobile device.
  • Ginger's wireless device negotiates with Kim's device to obtain the search results and/or even the audio files.
  • Ginger's device can negotiate with an online server to obtain the necessary rights to play the music.
  • (the audio file may include a digital watermark that is used to link to the online server.)
  • which devices are queried can be determined by self-selection by Ginger (e.g., being friends with Kim and presetting Aintitnice) and by proximity (e.g., clubbing with certain friends).
  • A1. A method of searching comprising: from a first mobile device, wirelessly querying a second mobile device to determine whether the second mobile device has internet search results relating to predetermined search criteria; and receiving at least a subset of the search results.
  • A2. The method of A1, wherein the first device also queries to determine whether the second mobile device has content related to the predetermined search criteria.
  • a method of searching comprising: receiving search criteria in a first, handheld mobile device; upon sensing of a second, handheld mobile device by the first, handheld mobile device, automatically and wirelessly querying the second, handheld mobile device to determine whether the second, handheld mobile device has any content stored thereon corresponding to the search criteria; and receiving content corresponding to the search criteria from the second, handheld mobile device.
  • a method of searching a network for watermarked content comprising: receiving data representing a visible pattern; searching the network for content corresponding to the visible pattern; analyzing content identified as corresponding to the visible pattern for digital watermarking; obtaining at least one watermark identifier from the digital watermarking; and reporting at least one watermark identifier and a corresponding network location when digital watermarking is found.
  • a method of searching a network for watermarked content comprising accessing a remote server to obtain a list of network locations; searching the network locations for digital watermarking during periods of computer user inactivity; reporting to the remote server at least one watermark identifier and a corresponding network location when digital watermarking is found.
  • a method of searching a network for watermarked content comprising: accessing a remote server to obtain search criteria; searching the internet for digital watermarking as a background process during periods of computer user activity; reporting to the remote server at least one watermark identifier and a corresponding network location when digital watermarking is found.
  • search criteria comprises an instruction to search internet content accessed by the user.
  • a system to direct network searching for watermarked content comprising: a website interface to receive at least one of keywords and network locations from a customer; a website interface to communicate with a plurality of distributed watermark detectors; a controller to control communication of keywords and network locations to the plurality of distributed watermark detectors; and a database to maintain information associated with digital watermarking and corresponding network locations.
  • a system to direct network searching for watermarked content comprising: a website interface to receive at least one of keywords and network locations from a remote customer; a web browser including or cooperating with a digital watermark detector; a controller to communicate keywords and network locations to a web browser, wherein the web browser searches locations associated with the keywords or the network locations; and a database to maintain information associated with digital watermarking and corresponding network locations.
  • a desktop searching tool that provides efficient media (e.g., audio, images and video) searching and cataloging.
  • the tool can also provide metadata refreshing capabilities as well.
  • a searching tool 201 (e.g., a software program or application) resides on a user's computer 200. The searching tool includes two primary software components: an indexing tool 202 and a desktop searching tool 204.
  • tools 202 and 204 need not be separate components or software applications, but are referred to separately here to ease discussion of their individual functions.
  • the software can be written in any language available to software programmers such as C, C++, Visual Basic, Java, Python, Tcl, Perl, Scheme, Smalltalk and Ruby, etc.
  • the indexing tool 202 combs through the user computer (or home network) in search of image, audio or video files.
  • the indexing tool 202 catalogs its findings in one or more indices (e.g., it creates an index).
  • An "index” contains a searchable listing or collection of words, numbers and characters and their associated files and locations.
  • a user searches an index - instead of the entire computer - when she wants to find a file including a keyword.
  • the search is carried out with Desktop Searching Tool 204.
  • We mention here that we sometimes refer to both image and video files as "imagery." Our use of the term "imagery" is also broad enough to cover multimedia files as well.
  • the desktop searching tool 204 provides a user interface (e.g., desktop window or HTML based interface) through which a user queries an index to find specific imagery or audio files or metadata associated therewith.
  • Imagery or audio files are typically defined by a content portion and a metadata portion.
  • a user is preferably able to select storage areas to search and catalog by the searching tool 201, e.g., C drive, certain files or directories, and/or removable media (zip drive, external hard drive, DVD drive, attached MP3 player or jump drive (flash memory, USB drive), etc).
  • the searching tool 201 can be preferably placed in a background searching mode.
  • the indexing tool 202 searches the computer while a user works on other applications (e.g., akin to common anti-virus software that routinely looks at all incoming files).
  • This background mode preferably filters new files as they are created or received by the user's computer or home network.
  • Our indexing tool searches for image files, e.g., as identified by their file extensions *.gif, *.jpg, *.bmp, *.tif, etc. (If searching for audio or video files, we might search for *.au, *.wmv, *.mpg, *.aac, *.mp3, *.swf, etc.)
  • An image is indexed once it is located. To do so the image is opened sufficiently (e.g., perhaps without accessing any compressed image portion) to access a metadata portion, if any.
  • the metadata can be provided for inclusion in a searchable index. For example, consider an image named "Falls.jpg,” with metadata including a descriptive phrase: "Picture of Falls taken near Silver Lake, Montana.” The file name and the descriptive phrase are added to the desktop search index, along with the file location and any other metadata in the descriptive phrase.
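A toy version of this indexing step, assuming the descriptive phrase has already been read out of the file's metadata portion, might look like the following; the dict-based index is of course a stand-in for whatever index format a desktop search tool actually uses.

```python
# Sketch: add the file name, location and descriptive terms to a searchable index.
import os
from collections import defaultdict

desktop_index = defaultdict(set)   # lowercase term -> set of file paths

def index_image(path: str, description: str = ""):
    terms = [os.path.basename(path)] + description.split()
    for term in terms:
        desktop_index[term.strip(".,").lower()].add(path)

index_image("photos/Falls.jpg",
            "Picture of Falls taken near Silver Lake, Montana")
print(desktop_index["montana"])    # -> {'photos/Falls.jpg'}
```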
  • This first implementation works best when the searching tool 201 cooperates with a desktop searching index (e.g., MSN Desktop Search) through an application program interface.
  • when the Desktop Search encounters an image file, it calls searching tool 201, or passes the image file or file location to searching tool 201.
  • image searching software from IFilterShop LLC (available on-line at www.ifiltershop.com) can be used as a component of indexing tool 202.
  • the IFilterShop software would help to search images for metadata associated therewith. Such metadata is added to an index to be searched by a desktop searching tool 204.
  • indexing tool 202 creates an HTML file (or XML, Word, or other text searchable file) for each image file searched.
  • the HTML file is preferably stored in the same directory as the image file, or in a directory that is accessible to a searching tool.
  • the HTML file includes the image file name ("Falls.jpg") and a listing of any terms ("Picture of Falls taken near Silver Lake, Montana") and other metadata (time, date taken, camera parameters, geo-coordinates, etc.).
  • the HTML file preferably includes a similar name, but with a different extension (e.g., "Falls.dwm.html").
  • the HTML file is searchable.
  • in addition to indexing tool 202 (or the Google and MSN desktop searching tools) gathering metadata (e.g., text), an image file is preferably searched for an embedded digital watermark. That is, the indexing tool 202 includes or cooperates with a digital watermark detector. If a watermark is found, the HTML file is provided with a watermarking indicator (e.g., text, number or graphical indicator) to show that the image file is watermarked and what information is carried by the watermark (e.g., a plural-bit identifier or message).
  • a digital watermark - embedded in an image - becomes searchable by a desktop searching tool.
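Putting the last few steps together, a simplified sketch of writing the per-image HTML companion file (here named with the ".dwm.html" suffix used in the example above) follows. The metadata dict and the watermark identifier are assumed to have been extracted already by other code.

```python
# Sketch: write a searchable HTML companion file next to the image.
import html
from pathlib import Path

def write_sidecar(image_path: str, metadata: dict, watermark_id=None) -> Path:
    p = Path(image_path)
    items = "".join(f"<li>{html.escape(k)}: {html.escape(str(v))}</li>"
                    for k, v in metadata.items())
    wm = (f"<p>Watermarked: yes (identifier {watermark_id})</p>"
          if watermark_id is not None else "<p>Watermarked: no</p>")
    body = (f"<html><body><h1>{html.escape(p.name)}</h1>"
            f"<ul>{items}</ul>{wm}</body></html>")
    sidecar = p.parent / (p.stem + ".dwm.html")       # e.g. Falls.dwm.html
    sidecar.write_text(body, encoding="utf-8")
    return sidecar

write_sidecar("Falls.jpg",
              {"description": "Picture of Falls taken near Silver Lake, Montana"},
              watermark_id=998877)
```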
  • If a watermark is not found in an image, one can be embedded therein if desired. A watermark can also be used as "the" identifier to link between an image and an on-line metadata repository, as further explored below under "Watermark-based Refreshing."
  • a digital watermark provides the persistent link between metadata and content.
  • One aspect of our invention is a metadata "refresh” or synchronization.
  • the searching tool 201 checks with a metadata repository to ensure that metadata associated with an image is current or up to date.
  • these refreshing or synchronization techniques can also be extended to internet searching tools, like Google and Yahoo!, as well.
  • a search engine after or part of a search, can ask a searcher whether they would like to populate metadata for a particular image, audio or video found. The methods and systems detailed below can be used for such populating.
  • the desktop searching tool 201 queries a metadata repository 210 (FIG. 3) to see if there is any metadata associated with an encountered image.
  • the repository 210 can be stored locally on the user's computer 200, but more likely the repository 210 is accessed over a network (e.g., internet or cellular network).
  • the watermark identifier is communicated to the metadata repository 210.
  • the identifier is used to index into the repository 210 and locate any information associated therewith.
  • the information is communicated to the searching tool 201 for indexing.
  • the information stored in the repository is checked against the image metadata. If the repository information is the most current or up to date, it is accessed and indexed (and perhaps stored or associated with the image on the user's computer). If, however, the image includes the most up to date metadata, the image metadata is preferably copied to the metadata repository and cataloged according to the watermark identifier.
  • Relative metadata "freshness” can be determined, e.g., by a metadata timestamp or even a "last updated” file indicator. Or if no file metadata is found (another case of unfreshness), metadata from the repository is provided for indexing and associated with the image file.
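A minimal sketch of this synchronization follows, assuming both the file metadata and the repository record carry an "updated" timestamp and the repository is reachable as a simple key-value store keyed by watermark identifier; a real system would of course use a network service.

```python
# Sketch: whichever copy of the metadata is "fresher" wins and is propagated.
def synchronize(watermark_id, local_meta: dict, repository: dict) -> dict:
    """local_meta and repository records both carry an 'updated' timestamp."""
    remote_meta = repository.get(watermark_id)
    if remote_meta is None or local_meta.get("updated", 0) > remote_meta["updated"]:
        repository[watermark_id] = dict(local_meta)   # local copy is fresher: push it
        return local_meta
    return remote_meta                                # repository copy is fresher: use it

repo = {42: {"updated": 200, "caption": "Silver Lake Falls, corrected location"}}
local = {"updated": 100, "caption": "Falls near Silver Lake"}
print(synchronize(42, local, repo))   # the repository copy wins here
```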
  • a hash or other reduced-bit identifier can be used to verify the veracity of content and metadata.
  • a header indicates the underlying content is a song by the Eagles.
  • the header can include a hash of the song to allow verification of the contents and header information.
  • the hash is provided to a trusted third-party repository along with the metadata.
  • the hash is authenticated and the metadata (and song) are then deemed trustworthy.
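For illustration, the hash check could be as simple as the following, with SHA-256 standing in for whatever reduced-bit identifier the repository actually registers.

```python
# Sketch: only trust the metadata if the content hashes to the registered value.
import hashlib

def verify_metadata(content: bytes, claimed_hash: str, metadata: dict) -> dict:
    actual = hashlib.sha256(content).hexdigest()
    if actual != claimed_hash:
        raise ValueError("content does not match registered hash; metadata untrusted")
    return metadata

song = b"...audio bytes..."
registered = hashlib.sha256(song).hexdigest()
print(verify_metadata(song, registered, {"artist": "Eagles"}))
```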
  • the searching tool 201 can periodically check with the metadata repository 210 to ensure that the image metadata (and index of such metadata) is up to date.
  • a graphical user interface may also provide a selectable button, allowing a user to select a feature to continuously (or frequently) query the metadata repository 210 to ensure metadata freshness.
  • the searching tool 201 inquires whether an encountered image itself is stored in repository 210. If not, the searching tool provides a copy of the image to the repository 210. Then, both the metadata and image are stored in the repository 210.
  • a search index can be updated to reflect that the image itself has been stored in the repository 210. (In some cases the image is removed from the user's computer when it is copied to the repository).
  • An image registration can be automatically carried out by the searching tool 201. For example, the registration may include association of the image to the user's account or assignment of a unique identifier (e.g., via a digital watermark, fingerprint or hash).
  • the repository 210 is a public repository.
  • the young photographer selects an identifier that is generally associated with Disneyland. That is, the photographer selects an identifier that people generally use when vacationing at Disneyland.
  • the watermark identifier is obtained through a trusted metadata broker, one who is trusted to provide or obtain metadata associated with key metadata "ground truths" (e.g., like location, events, dates, etc.).
  • the metadata broker then gathers general metadata that is associated with the identifier or with the location/event with which the identifier is associated.
  • a user identifier can be used in connection with the selected identifier to aid in identifying the young photographer.
  • the public or trusted metadata broker populates or obtains data records associated with the identifier (e.g., people post Disneyland favorite memories, directions, Mickey Mouse facts; or the trusted metadata broker obtains metadata itself, etc.).
  • the searching tool 201 once it encounters the watermark identifier in a Disneyland picture, queries the data repository 210 with the identifier in search of additional metadata. The data records are retrieved and indexed for desktop searching.
  • a semi-public identifier can be provided. For example, all members attending a family reunion can use the same identifier.
  • Use of the term "same” includes a situation where a watermark has many payload fields, and the "same" identifier is included in a predetermined field. In this multi-payload field situation, two watermarks may include the same identifier but have different information stored in different fields.
  • Metadata can be gathered using other techniques as well. For example, a location of an image can be inferred from related clues. An image file named "DisneyLand001" was probably taken at Disneyland. The word Disneyland is provided to an internet search engine or data repository to gather metadata. The metadata is provided to a desktop searching tool which updates the image file's metadata portion and indexes the new metadata in a searchable desktop index. A directory structure name and/or date and time information can be used to gather metadata. For example, if searching tool 201 knows (e.g., from a metadata field or watermark date/time stamp) that a picture was taken on February 14, 2005 at 8:30 pm, the searching tool can use this information to gather related metadata.
  • the searching tool queries the photographer's Outlook calendar or other calendaring software to see what was scheduled at that time ("Valentine's Day dinner at Jake's with Jane").
  • This information is provided for indexing by the desktop searching tool 201.
  • the information can be associated as metadata in the image file.
  • a certain date within the journal or diary can be similarly queried. For example, words or terms within a journal entry are extracted, indexed and then stored as metadata.
  • the searching tool can access financial or checkbook software (e.g., Microsoft Money or on-line bank statements) to check receipts or entries around this time.
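The calendar lookup reduces to a time-window query once the entries are available. A minimal sketch, assuming calendar entries have already been exported as (datetime, description) pairs (accessing Outlook or other calendaring software is outside the scope of this sketch):

```python
# Sketch: gather metadata from events scheduled near the photo's capture time.
from datetime import datetime, timedelta

calendar_entries = [
    (datetime(2005, 2, 14, 20, 0), "Valentine's Day dinner at Jake's with Jane"),
]

def metadata_near(capture_time: datetime, window: timedelta = timedelta(hours=2)):
    """Return descriptions of calendar events within the window of the capture time."""
    return [text for when, text in calendar_entries
            if abs(when - capture_time) <= window]

print(metadata_near(datetime(2005, 2, 14, 20, 30)))
```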
  • a desktop searching tool 201 may also use an audit trail to gather metadata.
  • a user receives a picture emailed from her brother Scott.
  • the email trail (from whom and when received) can be used as metadata for the picture.
  • (the indexing tool 202 recognizes that a new image is received in an Outlook Inbox. The email history and image are combed by the indexing tool 202 to gather this information.)
  • An internet history or cache is also looked at. For example, search terms entered into an internet search engine are pulled from the Browser's history or are cached and used as metadata for an image found from the search.
  • GPS data generated by these units can be stored in header or watermark information.
  • Searching tool 201 uses the GPS data to locate related metadata. For example, GPS coordinates are extracted from an image and are provided to a geographical database. The coordinates are used to index the database and find metadata associated with the geolocation.
  • the metadata can include a city name, historical information, current weather, building specification, associated pictures, etc.
  • Metadata associated with these images is used by the searching tool 201 or associated with a target image.
  • GPS data and timestamps can be used to generate even further information. For example, a sports enthusiast snaps a few pictures while attending the NCAA men's basketball semi-finals in Dallas. GPS coordinates and a timestamp are associated with the pictures (e.g., as an embedded watermark or header information). The GPS is used to identify the location (e.g., sports arena) and the timestamp is used to identify an event at the sports arena (basketball game). These terms can be used as search terms to identify additional metadata, stories, scores, etc. associated with the event. This information is provided for association with the images.
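A toy version of the GPS lookup follows; the small in-memory table stands in for a real geographical database, and the coordinates are illustrative only.

```python
# Sketch: map GPS coordinates (already extracted from a header or watermark) to metadata.
import math

geo_db = {
    (48.86, 2.35):   {"city": "Paris",  "note": "historic center"},
    (32.78, -96.80): {"city": "Dallas", "note": "sports arena district"},
}

def km(a, b):
    """Rough great-circle distance in kilometers; adequate for a sketch."""
    dlat = math.radians(a[0] - b[0])
    dlon = math.radians(a[1] - b[1])
    x = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(a[0])) * math.cos(math.radians(b[0]))
         * math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(x))

def metadata_for(lat, lon, max_km=25.0):
    """Return the record of the closest known location, if it is near enough."""
    best = min(geo_db, key=lambda k: km(k, (lat, lon)))
    return geo_db[best] if km(best, (lat, lon)) <= max_km else None

print(metadata_for(32.79, -96.81))   # -> the Dallas record
```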
  • Metadata Generation: We can also automatically generate metadata for an image.
  • consider a cell phone that has a biometric sensor (e.g., a fingerprint scanner).
  • LG Telecom, one of the largest wireless network operators in Korea, recently launched a biometric cell phone - the LP3800. Other manufacturers are providing competing cell phones.
  • a user presents her finger for scanning by the cell phone. The user is identified via the fingerprint.
  • a searching tool 201 uses this fingerprint identifier as photographer metadata.
  • the searching tool 201 can query (e.g. via a wireless or Bluetooth sniff) the cell phone and inquire who the photographer was when the photo was taken.
  • the photo is identified to the cell phone camera by file name or other identifier.
  • the searching tool 201 queries the cell phone to see who the identifier corresponds with. If the biometric identifier has been encountered before, the searching tool can use past cell phone inquiry results instead of talking with the cell phone.
  • a human fingerprint, or a template therefrom, can be used as metadata itself.
  • Search tool 201 may also include or cooperate with a pattern recognition or color analysis module. Metadata is generated through image pattern recognition. For example, the searching tool 201 analyzes an image with a pattern recognition module. The results of which are used as metadata. (For example, the pattern recognition module might return the term "tree" after analyzing a picture of a tree.).
  • We can also perform a color analysis of an image, e.g., calculating a 3-D color space histogram of the image. The histogram identifies predominant colors (e.g., red, pink, yellow, etc.). Predominant colors can be based on an image region or a percentage of an image including the predominant color. Or only the top three or so colors are indexed for a particular image.
  • consider a search request typed or spoken into desktop searching tool 204 requesting a picture of grandma wearing her pink hat.
  • the query may specifically include the terms "grandma” and "pink”.
  • the term "pink" identifies those pictures having pink as a predominant color, as automatically determined from such color analysis. This subset is cross-checked with all pictures including grandma as metadata. The resulting set of pictures is identified for user perusal.
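A simplified predominant-color pass is sketched below: pixels are quantized into coarse 3-D color bins, the most frequent bins are kept as searchable terms, and the result is cross-checked against metadata naming grandma. The pixel list, the 64-level quantization, and the tiny bin-to-name table are all illustrative assumptions.

```python
# Sketch: coarse 3-D color histogram -> top colors as searchable terms -> cross-check.
from collections import Counter

def predominant_colors(pixels, top=3):
    bins = Counter((r // 64, g // 64, b // 64) for r, g, b in pixels)
    names = {(3, 0, 0): "red", (3, 2, 2): "pink", (3, 3, 0): "yellow"}  # toy mapping
    return [names.get(bin_, str(bin_)) for bin_, _ in bins.most_common(top)]

pixels = [(250, 170, 180)] * 900 + [(20, 120, 40)] * 100   # mostly pink pixels
colors = predominant_colors(pixels)

# Cross-check, as in the "grandma wearing her pink hat" query:
photo_metadata = {"people": ["grandma"], "colors": colors}
matches_query = "grandma" in photo_metadata["people"] and "pink" in photo_metadata["colors"]
print(colors, matches_query)
```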
  • searching tool 201 may include or cooperate with a fingerprinting module.
  • we use the term "fingerprinting" to mean a reduced-bit representation of an image, like an image hash.
  • the terms "fingerprint" and "hash" are sometimes used interchangeably.
  • a fingerprint is generated and is then used to query a database where other images have been fingerprinted. For example, different pictures of the Empire State Building yield similar (or related) fingerprints. These pictures and their corresponding fingerprints are indexed in the database. While exact matches might not be frequently found, those fingerprints that are deemed statistically relevant are returned as possible matches. Metadata associated with these fingerprints can be returned as well. (Fingerprinting and watermarking can also be advantageously combined. For example, a digital watermark can be used as a persistent link to metadata, while a fingerprint can be used for identification.)
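The "statistically relevant rather than exact" matching can be illustrated with a Hamming-distance comparison over short integer fingerprints; the values and the distance threshold below are arbitrary stand-ins for a real perceptual fingerprint scheme.

```python
# Sketch: tolerant fingerprint matching via Hamming distance instead of exact equality.
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def best_match(query_fp: int, database: dict, max_distance: int = 8):
    """database maps fingerprint -> metadata; return the closest acceptable record."""
    candidates = [(hamming(query_fp, fp), meta) for fp, meta in database.items()]
    candidates.sort(key=lambda t: t[0])
    return candidates[0][1] if candidates and candidates[0][0] <= max_distance else None

db = {0b1011_0110_1100_0011: {"subject": "Empire State Building"}}
print(best_match(0b1011_0110_1100_0111, db))   # one bit off -> still matches
```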
  • Searching tool 201 may also include or cooperate with a facial recognition module.
  • Facial recognition software is used to identify people depicted in images. Once trained, the facial recognition software analyzes images to see whether it can identify people depicted therein. Names of depicted people can be indexed and associated with the image. Or individual profiles (name, birth date, family relation, etc.) can be established and associated with a person. Then, when the facial recognition software identifies an individual, the individual's profile is associated with the image as metadata. (Fig. 4 shows one example of this method. Facial recognition software 401 analyzes an image 402 and determines that the image depicts Jane. A profile database 403 is interrogated to obtain Jane's profile 404 (e.g., name, current age, birth date, etc.) and the profile 404 is associated with the image as metadata.)
  • Metadata can also be generated by searching devices within a user's home domain.
  • the searching tool 201 initiates communication (e.g., via Bluetooth or wireless connection) with the user's cell phone, which is equipped with a camera and GPS unit.
  • the searching tool 201 queries where the camera has taken pictures.
  • the geolocations and times of image capture can be used as metadata or to find metadata.
  • the searching tool might talk with a user's TiVo device, game console (e.g., Xbox or PlayStation), music player (e.g., an iPod or MP3 player) or PDA.
  • Relevant information (e.g., journals, calendars, other images, music, video games, etc.) gathered from these sources can be used as metadata for a particular file on the user's desktop.
  • the searching tool 201 (FIG. 2) preferably includes one or more user interfaces (e.g., as provided by tool 204) through which a user can interact with the tool 201 and metadata found or indexed by the tool 201.
  • a user is preferably able to select, through desktop searching tool 204, internet-based sites at which searching tool 201 is likely to find additional metadata. (The user can type in URLs or highlight predetermined metadata websites.)
  • the user can also preferably set one or more filters through such interfaces.
  • a "filter” is a software module or process that limits or screens information that should be used as metadata. Filters allow a user to weed out potentially meaningless metadata. For example, one filter option allows for only metadata gathered from the user's desktop to be associated with an image.
  • Metadata gathered from repository 210 might be designated as being trusted, but metadata gathered from an automatic internet search of text found in an image header might not be trusted.
  • a related filter option allows a user to pre-rank metadata based on source of the metadata. If the metadata is not of a sufficient rank, an image file is not augmented to include the new metadata and the low-ranking metadata is not indexed.
  • Yet another filter option allows for only metadata approved by a user to be associated with an image.
  • Gathered or generated metadata is preferably presented through an interface for the user's review and approval.
  • metadata is presented via a graphical window complete with check-boxes (see FIG. 5).
  • a user simply checks the metadata she would like associated with an image and the searching tool 201 updates the metadata portion of an image file to reflect the user's selections. Instead of checkboxes a user can highlight metadata she wants to keep.
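One way to realize these filter options is a source-ranking pass like the sketch below, where each candidate metadata item carries its source, each source has a trust rank, and an optional approval callback models the "user must approve" filter. The source names and ranks are made up for illustration.

```python
# Sketch: drop metadata from sources below a trust threshold, optionally requiring approval.
SOURCE_RANK = {"repository": 3, "desktop": 2, "internet_search": 1}

def filter_metadata(items, min_rank=2, require_approval=None):
    kept = []
    for item in items:                      # item: {"source": ..., "value": ...}
        rank = SOURCE_RANK.get(item["source"], 0)
        if rank < min_rank:
            continue                        # low-ranking metadata is not indexed
        if require_approval and not require_approval(item):
            continue                        # user declined this item
        kept.append(item)
    return kept

candidates = [
    {"source": "repository", "value": "Silver Lake, Montana"},
    {"source": "internet_search", "value": "waterfall trivia"},
]
print(filter_metadata(candidates))          # the internet_search item is weeded out
```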
  • Another feature of the present invention is a directory view. Files are often arranged and graphically displayed by directories and folders. (Just click on "My Documents" in your computer directory and see how the files are arranged therein.)
  • An improvement arranges and graphically displays files according to their metadata. For example, based on information gathered by searching tool 201, images are arranged and graphically displayed on a computer display according to metadata associated therewith.
  • the metadata categories can change based on user preference, but we provide a few examples below. A user selects three broad metadata categories: vacations, professional and family.
  • a program queries an index provided by searching tool 201. All images including metadata identifying them as a "vacation” image are associated with the vacations directory, and all images including metadata identifying them as "family" are associated with the family directory.
  • the user can change the "file directory” view by changing the metadata categories.
  • the user can also establish subdirectories as well (e.g., Disneyland and Niagara Falls metadata displays within the vacation directory). Images are arranged and displayed in a metadata structure and not in a typical directory tree fashion. If a user changes the metadata request, the desktop arrangement is altered as well.
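The metadata-driven directory view boils down to grouping files by a chosen category rather than by their on-disk folders, roughly as sketched here (the file records and tags are illustrative only):

```python
# Sketch: group files for display by metadata category instead of by folder.
from collections import defaultdict

files = [
    {"path": "IMG_001.jpg", "tags": {"vacation", "Disneyland"}},
    {"path": "IMG_002.jpg", "tags": {"family"}},
    {"path": "IMG_003.jpg", "tags": {"vacation", "Niagara Falls"}},
]

def directory_view(files, categories):
    view = defaultdict(list)
    for f in files:
        for category in categories:
            if category in f["tags"]:
                view[category].append(f["path"])
    return dict(view)

print(directory_view(files, ["vacation", "professional", "family"]))
```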
  • Visual presentation of a directory view can also be changed, e.g., according to a style sheet associated with a particular type of metadata or media.
  • Style sheets can vary from family member to family member (or between Windows login profiles).
  • Music can also be represented according to its content. For example, music with a particular rhythm or harmony can be presented uniquely or according to a style sheet, etc.
  • a graphical user interface stored and running on a computer, comprising: a first module to present a graphical representation of files through a computer display; a second module to determine metadata associated with each of the files for display; a third module to graphically organize the files for display according to their metadata.
  • a metadata authoring tool 206 (e.g., a software application) is described with reference to FIG. 6.
  • the authoring tool 206 allows a user to annotate and associate metadata with multimedia content. While most image editing software (e.g., Digital Image Suite from Microsoft) provides metadata authoring capabilities, we provide a few improvements herein.
  • One improvement is the ability to "paint" an image or group of images with predetermined metadata.
  • consider a metadata toolbar that provides different metadata choices, e.g., terms like "vacation," "family," or profiles ("Jane's individual profile"), etc. Selecting (clicking) a metadata category from the metadata toolbar enables us to paint an image or file directory with the metadata. (One can imagine that the metadata selection makes the mouse cursor appear as a paintbrush. We then literally "paint" an image with the selected metadata.)
  • the image or directory icon representation can even turn a color associated with the metadata to provide user feedback that the metadata has been attached to the image.
  • consider an image (and/or audio and video) searching tool, e.g., a computer program written in C++.
  • the image searching tool resides on a user's device (e.g., computer, network server, iPod, cell phone, etc.) and crawls though files and folders in search of images.
  • the searching tool searches for image files, e.g., as identified by their file extensions *.gif, *.jpg, *.bmp, *.tif, etc. (If searching for audio or video files, we might search for *.au, *.wmv, *.mpg, *.aac, *.mp3, *.swf, etc.).
  • a user or operating system identifies image directories and the searching tool combs through each of these identified directories.
  • the searching tool opens an image and searches the image for an embedded digital watermark.
  • the searching tool may include or call a watermark detector. If found, the watermark information (e.g., plural-bit payload) is provided to or is included in a first file, e.g., an XML file.
  • the first file preferably includes the same file name, but has a different file extension.
  • the image is further evaluated to obtain metadata therefrom (e.g., EXIF information, header information or other metadata).
  • the metadata is provided to or is included in the first file.
  • the first file may include the same tags or identifiers as were originally included in the image (or audio or video).
  • the searching tool may query one or more online metadata repositories to determine whether there exists additional metadata associated with the image.
  • Such online metadata may be downloaded to the first file.
  • filters or criteria may be used to restrict which online metadata is accepted. For example, only those packets or groupings of metadata that are signed by a trusted or recognized source may be accepted for inclusion in the first file.
  • different metadata fields or tags can include a last modified or time stamp indicator. That way, if the online-metadata includes a redundant field or tag, the most recent version (either associated with the image or online) of the metadata is used.
  • a user can specify which sources of metadata should be trusted and included.
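A field-by-field merge honoring both the per-tag timestamps and the user's trusted-source list might look like the following sketch; the field layout and the "digimarc_repository" source name are assumptions for illustration.

```python
# Sketch: merge local and online metadata; newest trusted copy of each tag wins.
def merge_metadata(local: dict, online: dict, trusted_sources: set) -> dict:
    """Each field is {"value": ..., "modified": int, "source": str}."""
    merged = dict(local)
    for tag, field in online.items():
        if field.get("source") not in trusted_sources:
            continue                                      # untrusted source: skip
        if tag not in merged or field["modified"] > merged[tag]["modified"]:
            merged[tag] = field                           # more recent version wins
    return merged

local = {"caption": {"value": "Falls", "modified": 100, "source": "camera"}}
online = {"caption": {"value": "Falls near Silver Lake", "modified": 250,
                      "source": "digimarc_repository"}}
print(merge_metadata(local, online, trusted_sources={"digimarc_repository"}))
```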
  • a watermark identifier can also facilitate "bi-directional" metadata population. That is, a watermark identifier can link to an online repository of metadata, and in particular, to a particular image or associated metadata. Metadata can be uploaded to the online repository and associated with the image metadata via the watermark identifier. (Watermark-based network navigation is discussed, e.g., in assignee's U.S. Patent Application No. 09/571,422, mentioned above.)
  • a second file (e.g., HTML) is created.
  • the second file name preferably includes the same file name as the first file and image, but with a different file extension.
  • the second file preferably includes information from the first file. For example, if the first file includes a storage location for the image, the second file may include a hyperlink to the image (based on the storage location). As discussed in some of the implementations above, the second file may also include a representation of the image, or if video or audio, perhaps a sample or snippet of the audio or video.
  • the second file can be configured by a user to include some or all of the information from the first file. This is advantageous, e.g., if the user wants to limit viewing of camera settings.
  • an XML parser cooperating with a style sheet or "skin" can be used to interpret the first file and populate the second file in accordance with the style sheet.
  • underlying content itself is used to determine how to populate a second file. Audio content having a certain rhythm or melody is displayed according to a first predetermined style, while content having other characteristics is displayed according to a second, different style.
  • the creation of the HTML file typically triggers indexing by a desktop searching tool (e.g., Google or Yahoo desktop search, etc.); a sketch of this two-file workflow appears after these enumerated points.
  • the metadata is added to an index, effectively allowing searching of the image.
  • the functionality of the above search tool is integrated with the desktop searching tool. In other implementations, the searching tool plugs in to the desktop searching tool.
  • a searching tool cooperates with (or operates from) a proxy server or network hub. (We note here that some desktop searching tools, such as Google's Desktop Search tool, allow for registering of certain file "types" (e.g., JPEG, etc.). The first file mentioned above can be given a unique file extension (or type). That way, a desktop searching tool can be told to ignore the first file when indexing so as to avoid redundant results.)
  • the image searching tool can compare a "Last modified" date to determine whether to index a particular image. For example, an image's last modified date can be compared to a last modified date of a corresponding first file. If the image's modification date is later than the first file's, the image is again analyzed to obtain the watermark and metadata. The first file is updated, along with the corresponding second file.
  • Watermarks can also be used to facilitate and link to so-called on-line blogs.
  • (a blog is information that is published to a web site; so-called "blog scripting" allows someone to post information to a Web site.)
  • consider a photo (or audio or video file) that includes a digital watermark.
  • a watermark reader extracts the watermark and links to an on-line resource (e.g., a URL).
  • an improvement is that the digital watermark links to a blog or blog thread (or conversation).
  • the blog may be stored, e.g., as a file.
  • the watermark includes or references a URL of an online photo blog site, e.g., akin to Flickr (see, e.g., www.flickr.com).
  • the watermark can link to a specific picture or account at Flickr, and perhaps even a particular blog thread. For example, consider a photo depicting a home office, complete with a computer, monitor and chair.
  • the watermark payload or component information may even include an identifier that will link to a subject matter line - displayable to a user - to allow users to pick which blog thread they would like to consider. If the photo contains multiple such watermarks, each of the corresponding subject matter lines can be displayed for selection.
  • the watermark becomes the root of each blog and blog thread. (Perhaps as a prerequisite to starting a blog thread, the conversation is assigned a watermark identifier or component, and the component is embedded in the image - perhaps region specific - when the blog or response is posted.)
  • each person who comments to a blog is assigned an identifier (or even a particular, unique watermark signature).
  • the person's watermark is embedded in the image when they blog or otherwise comment on the photo.
  • Digital watermarking brings a new twist with improvements. Watermarking makes the photo the centerpiece of a photoblog.
  • a watermarked photo becomes an agent to the blog and a portal that can be revisited repeatedly.
  • the photo could be distributed as a pointer to the blog itself.
  • the photo catches the attention of the recipient, and through the digital watermark links back to a blog server (or network resource at which the blog is hosted).
  • the blog is hosted (e.g., you must go to the website to read) or downloadable (e.g., sort of like the good old newsgroup concept).
  • a watermark detector reads a watermark identifier from the dragged-and-dropped photo.
  • the watermark identifier is used to link to the on-line blog (or conversation).
  • the identifier is used to identify a file storage location of the blog, or a network location hosting the blog (e.g., URL).
  • the blog is obtained or downloaded to the location. In other cases, instead of downloading the entire blog, a link to the blog is stored at the application or client.
  • a user uploads an image to a blogging site to start a blog and writes a first entry.
  • the site automatically watermarks the image with an identifier, linking the photo to the blog (or adding it to an existing blog).
  • the user may (optionally) right-click, e.g., to send the image (and blog) to a friend.
  • the e-mail including the watermarked photo invites friends to respond. They are linked to the original blog through the watermark identifier.
  • This functionality can be incorporated with desktop searching tools.
  • When a watermarked image is noticed by a desktop searching tool, that image is checked to see if there's an associated blog, e.g., by querying an on-line blog site associated with the watermark or evaluating a "blog-bit" carried by a watermark payload.
  • (A watermark payload may include many fields, with one of them identifying or linking to a particular blogging site.)
  • through the desktop searching tool (or photo handling software including Photoshop, a web browser, etc.), the image becomes linked or "bookmarked" to the blogging thread.
  • a watermark reader or desktop searching tool can include a right-click feature that allows addition of a blog entry on bloggable images (a feature determined by the watermark).
  • an image may appear anywhere, on a home computer or cell phone, and act as a gateway to the blog for reading or adding to the blogging thread.
  • the basic association of a blog with an image can happen, e.g., when a photo is registered at a photo-repository or online site.
  • the act of registering a photograph - or watermarking the photograph - can create a blog, and over time, provide a more generalized brokerage to any blog that is registered. Any image can be "bloggable".
  • photographers can create blogs around their collection as a way of marketing or communicating.
  • blogs that are private (e.g., password or biometric protected) as a means of interacting with a friend or client.
  • a watermark preferably survives into print, and thus a relationship is created between printed images and (photo) blogs. (In some implementations a blog is not created until an image is printed. But in any case, watermarking adds power to print that passes through a watermarking step, giving it a unique identity.)
  • a web-based user interface is created. A user presents a watermarked picture (or just a watermark identifier extracted from said picture) to the interface via the web. If receiving the picture, the website extracts a watermark identifier therefrom. The watermark identifier is provided to a database or index to locate information associated therewith. For example, the picture was originally associated with one or more text-based blogs. A current location of the blogs is found and provided to the user through the interface.
  • a method of associating a blog with media comprising: embedding a digital watermark in an image or audio; associating at least a portion of the digital watermark with a network-hosted blog.
  • a method of associating an online blog with media comprising: decoding a digital watermark from the media; accessing an on-line repository associated with the watermark; and accessing the blog associated with the media.
  • the image preferably corresponds to an authorized bearer of the document.
  • the document 400 illustrated in FIG. 8 represents an identification document, such as a passport book, visa, driver's license, etc.
  • Document 400 includes a photographic representation 410 of an authorized bearer (also referred to as "printed image") of the document 400, printing 420 on a surface of the document and integrated circuitry (e.g., a chip) 430.
  • the chip 430 can include both electronic memory and processing circuitry. Chip 430 can be passive (e.g., no internal power supply) or active (e.g., including its own power supply).
  • document 400 can include a contact-type chip as well. Suitable chips are known in the art, e.g., those complying with ISO standards 14443 and 7816-4.
  • the integrated circuitry 430 includes an image stored therein.
  • the image is preferably compressed, e.g., as a JPEG file, to help conserve memory space.
  • the stored image preferably corresponds to printed image 410, or a reduced bit representation of printed image 410.
  • the image includes digital watermarking embedded therein.
  • the digital watermark is preferably cross-correlated with information corresponding to the document, integrated circuitry and/or the authorized document bearer.
  • the chip 430 may include a serial number (e.g., 96 bits) that is stored in static memory on the chip.
  • the serial number, or a hash (e.g., reduced-bit representation) of the serial number, is used as a digital watermark message component.
  • the hash or serial number is embedded in the photographic image stored on the chip 430.
  • the serial number can be combined with a document number as shown in Table 1 : Watermark Message, below:
  • the combined message is steganographically embedded in the stored image.
  • the chip and document are tied together via digital watermarking. If the chip is replaced, moved to another document or simulated, the changes can be identified by validating the serial number or document number that should be embedded in the image stored on chip 430. Similarly, if the printed image 410 is altered or replaced, it may not include the necessary watermark message (e.g., chip serial number) embedded therein.
  • Document verification can be automated. For example, a serial number is read from static memory (e.g., via a smartcard reader) and a watermarked image is similarly retrieved and decoded. The serial number and watermark message are compared to see if they correspond as expected. If the document number is used as a watermark message component, it can be input (e.g., via reading OCR-B text, barcode, magstripe or manual entry) for comparison as well. (A sketch of this check also appears after these enumerated points.)
  • printed image 410 can be steganographically embedded with data as well, e.g., in the form of a digital watermark.
  • the digital watermarking is preferably cross-correlated with information carried by the chip 430.
  • a watermark embedded in printed image 410 may include a chip serial number or hash thereof.
  • the printed image 410 watermark provides a link between the chip and the document.
  • a first watermark in printed image 410 is linked to a second watermark embedded in a stored image on chip 430.
  • the linkage can be accomplished in several different ways.
  • each watermark includes a redundant version of information, e.g., such as a serial number, document number or information printed on or carried by (e.g., a barcode) the document.
  • the first digital watermark includes a key to decode or decrypt the second digital watermark (or vice versa).
  • a first message portion is carried by the first digital watermark, and a second message portion is carried by the second digital watermark. Concatenating the two message portions is required for proper authentication of identification document 400.
  • Another example includes a third digital watermark printed elsewhere on the identification document (e.g., in a background pattern, graphic, ghost image, seal, etc.). All three digital watermarks are linked or cross-correlated for authentication.
  • a different biometric image or template is stored in the chip, instead of a photographic image.
  • the biometric may include a fingerprint image or retinal scan.
  • biometrics can be watermarked and linked to the document as discussed above. An example work flow for document production is shown in FIG. 9A and FIG. 9B.
  • An applicant for an identification document fills out an application and provides a photograph (step 500).
  • the application is submitted to a processing agency (e.g., state department, step 510), which processes the application (step 520).
  • the application can be mailed or electronically submitted.
  • Application processing may include background checks, including a check of relevant databases to ensure that the applicant is not fraudulently trying to obtain the identification document.
  • a document is created for the applicant.
  • a blank "book” is obtained.
  • the blank book includes a book (hereafter "document") number.
  • the document number is matched with the applicant or applicant's file (step 530).
  • the book will include a chip already affixed thereto (or integrated therewith). If not, the chip can be attached or integrated with the document at a later stage.
  • the document is personalized to identify the applicant (step 540). For example, the document is printed with variable information (e.g., name, address, sex, eye color, birth date, etc.).
  • the variable information, or portions thereof, is also stored as a barcode or stored in a magstripe or on chip.
  • a photographic representation is also printed (or attached) on the document and stored in the chip.
  • a digital image representing the applicant is provided to a watermark embedder.
  • messages (e.g., a chip serial number read from static memory or a document number, etc.) are provided to the watermark embedder.
  • the watermark embedder embeds a desired message in a copy of the digital image.
  • the embedded digital image is compressed and then stored on the chip.
  • a second message can be embedded in another copy of the digital image, and then printed on a document surface.
  • the same embedded image, including the same message is both printed on the document and stored on-chip.
  • the document production process optionally includes a quality assurance step 550, where the document is inspected.
  • any machine-readable features (e.g., OCR, barcode, magstripe, digital watermark, optical memory, electronic chip storage) can be read and verified during inspection.
  • any cross-correlation relationships (e.g., between first and second digital watermarks) can be verified as well.
  • a quality assurance operator may also visually inspect the document.
  • An identification document comprising: an electronic memory chip, wherein the electronic memory chip comprises a serial number stored therein, the serial number uniquely identifying the electronic memory chip, wherein the electronic memory chip further comprises a digital image representing an authorized bearer of the identification document, and wherein the digital image comprises first digital watermarking embedded therein, and wherein the first digital watermarking comprises a representation of the serial number; a first surface area including text printed thereon, wherein the text comprises at least one of a name and an identification document number; and a second surface area comprising a photographic image printed thereon, wherein the photographic image comprises a representation of the authorized bearer of the identification document.
  • the identification document of F2, wherein the first digital watermarking further comprises a representation of the identification document number.
  • F8 The identification document of F4 wherein the first digital watermarking and the second digital watermarking comprise information that is redundant with or correlated to each other.
  • F9 The identification document of any one of F1-F8, wherein the identification document comprises at least one of a driver's license and passport.
  • An identification document comprising: an electronic memory chip, wherein the electronic memory chip comprises a serial number stored therein, the serial number uniquely identifying the electronic memory chip, and wherein the electronic memory chip further comprises a digital image stored therein, wherein the digital image comprises first digital watermarking embedded therein; a first surface area including text printed thereon, wherein the text comprises at least one of a name and an identification document number; and a second surface area comprising a printed image or graphic, wherein the printed image or graphic comprises second digital watermarking embedded therein, and wherein the first digital watermarking and the second digital watermarking are cross-correlated for authenticating the identification document.
  • the identification document of G1 wherein the first digital watermarking and the second digital watermarking are cross-correlated by including redundant or correlated information.
  • the identification document of G3 wherein the information further comprises at least a representation of the document number.
  • G5 The identification document of G1 wherein the first digital watermarking and the second digital watermarking are cross-correlated through decoding or decrypting keys.
  • G6 The identification document of G1 wherein the digital image comprises a biometric of an authorized bearer of the identification document.
  • a method of controlling a desktop searching tool comprising: searching one or more computer directories for imagery or audio files; upon discovery of an imagery or audio file, analyzing the file for a digital watermark embedded therein, and if a digital watermark is embedded therein, recovering a plural-bit identifier carried by the digital watermark; obtaining metadata from the imagery or audio file; and querying a remote database with the plural-bit identifier to determine whether the file metadata is current.
  • H2 The method of H1 further comprising refreshing the file metadata with metadata from the remote database when the file metadata is not current.
  • H3 The method of H2 wherein a timestamp or last edited field is used to determine whether the file metadata is current.
  • H4 The method of H1 further comprising uploading the imagery or audio file when the file is not stored in the remote database.
  • a method of searching a network for watermarked content comprising: receiving one or more keywords associated with watermarked content; providing the one or more keywords to a network search engine; obtaining from the network search engine a listing of URLs that are associated with the one or more keywords; analyzing content at websites associated with the URLs for digital watermarking; obtaining at least one watermark identifier from the digital watermarking; and reporting at least one watermark identifier and a corresponding URL location.
  • a method of searching a network for watermarked content comprising:
  • a method of categorizing content by a search engine comprising: examining metadata associated with a website, the metadata reflecting a presence of digital watermarking; and providing a presence of digital watermarking indicator that is associated with the website, wherein the presence of digital watermarking indicator is searchable through the search engine.
  • a device searching method comprising: i. searching one or more device directories for imagery or audio files; ii. upon discovery of an imagery or audio file, analyzing the file for a digital watermark embedded therein, and if a digital watermark is embedded therein, recovering watermark information there from; iii. obtaining metadata from the imagery or audio file; iv. creating a first file including at least some of the watermark information and at least some of the metadata; v. creating a second file from the first file, wherein the second file includes at least some of the watermark information and at least some of the metadata, and wherein creation of the second file triggers indexing of the second file by a device searching tool.
  • L3 The method of L1 wherein the device comprises at least one of a cell phone, portable music player, game console and computer.
  • L4 The method of L1 wherein said creating employs at least a style sheet.
  • watermark data encoding processes may be implemented in a programmable computer or a special purpose digital circuit.
  • watermark data decoding may be implemented in software, firmware, hardware, or combinations of software, firmware and hardware.
  • desktop searching tools and metadata generation and gathering tools may be implemented in software programs (e.g., C, C++, Visual Basic, Java, Python, Tcl, Perl, Scheme, Ruby, executable binary files, etc.) executed from a system's memory (e.g., a computer readable medium, such as an electronic, optical or magnetic storage device).
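The enumerated points above describe the first-file/second-file indexing workflow and the chip/watermark cross-check only at a high level. The Python sketches below are offered purely as illustrations of those two points; the watermark detector, metadata reader, sidecar file names and hash layout are placeholders rather than anything specified in this disclosure.

```python
import json
from pathlib import Path

IMAGE_EXTENSIONS = {".gif", ".jpg", ".bmp", ".tif"}

def read_watermark(image_path):
    """Placeholder for a digital watermark detector; returns a plural-bit
    payload (e.g., a numeric identifier) or None when no watermark is found."""
    return None  # a real detector would analyze the pixel data here

def read_metadata(image_path):
    """Placeholder for EXIF/header extraction; returns tag/value pairs."""
    return {"file": image_path.name}

def crawl(directory):
    for path in Path(directory).rglob("*"):
        if path.suffix.lower() not in IMAGE_EXTENSIONS:
            continue
        first = path.with_name(path.stem + ".dwm.xml")    # assumed extensions
        second = path.with_name(path.stem + ".dwm.html")
        # "Last modified" check: skip images whose first file is still current.
        if first.exists() and first.stat().st_mtime >= path.stat().st_mtime:
            continue
        payload = read_watermark(path)
        metadata = read_metadata(path)
        # First file: watermark information plus gathered metadata (XML in the
        # text; a plain dictionary dump is used here to keep the sketch short).
        first.write_text(json.dumps({"watermark": payload, "metadata": metadata}))
        # Second file: an HTML page that a desktop searching tool will index,
        # including a hyperlink back to the image.
        second.write_text(
            "<html><body><a href='{0}'>{0}</a><pre>{1}</pre></body></html>".format(
                path.name, json.dumps(metadata, indent=2)))
```

A similarly hedged sketch of the automated document verification: the serial number read from the chip's static memory and the printed document number are combined (here by an arbitrary hash-and-concatenate rule) and compared against the message decoded from the watermark in the stored image.

```python
import hashlib

def expected_message(chip_serial, document_number):
    # Reduced-bit representation of the serial number combined with the
    # document number, loosely following the Table 1 layout referenced above.
    digest = hashlib.sha256(str(chip_serial).encode()).hexdigest()[:8]
    return "{}:{}".format(digest, document_number)

def verify_document(chip_serial, document_number, decoded_watermark_message):
    """True when chip, document number and embedded watermark correspond."""
    return decoded_watermark_message == expected_message(chip_serial, document_number)
```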

Abstract

The present invention provides methods and systems to improve network searching for watermarked content. In some implementations we employ keyword searching to narrow the universe of possible URL candidates. A resulting URL list is searched for digital watermarking. A system is provided to allow customer input. For example, a customer enters keywords or network locations. The keywords or network locations are provided to a watermark-enabled web browser which accesses locations associated with the keywords or network locations. Some implementations of the present invention employ a plurality of distributed watermark-enabled web browsers. Other aspects of the invention provide methods and systems to facilitate desktop searching and automated metadata gathering and generating. In one implementation a digital watermark is used to determine whether metadata associated with an image or audio file is current or fresh. The metadata is updated when it is out of date. Watermarks can also be used to link to or facilitate so-called on-line 'blogs' (or online conversations).

Description

Digital Asset Management, Targeted Searching and Desktop Searching Using Digital Watermarks
Related Application Data
This patent application claims the benefit of U.S. Provisional Patent Application Nos.: 60/673,022, filed April 19, 2005; 60/656,642, filed February 25, 2005; 60/582,914, filed June 24, 2004; and 60/582,280, filed June 22, 2004.
This patent application is also related to U.S. Patent Application No. 10/118,468 (published as US 2002-0188841 Al), filed April 5, 2002, which claims the benefit of U.S. Provisional Application No. 60/282,205, filed April 6, 2001; U.S. Patent Application No. 09/482,786, filed January 13, 2000 (allowed); U.S. Patent Application No. 09/612,177 (now U.S. Patent No. 6,681,029), filed July 6, 2000, which is a continuation of U.S. Patent Application No. 08/746,613 (now U.S. Patent No. 6,122,403), filed November 12, 1996, which is a continuation in part of U.S. Patent Application Nos. 08/649,419, filed May 16, 1996 (now U.S. Patent No. 5,862,260) and 08/508,083 filed July 27, 1995 (now U.S. Patent No. 5,841,978).
Technical Field
The present invention relates generally to digital watermarking. In some implementations the present invention relates to searching networks or desktops.
Background and Summary
As digital content continues to proliferate, management of digital assets becomes an increasingly difficult challenge. Enhancements in computer networking and database technology allow companies to manage large collections of images and other media and make the content available to third parties. While network communication provides a powerful tool to enable a database manager to share content with others, it makes it more difficult to control and track how the content is being used, and efficiently share the content. Prior patent documents by the assignee of this patent application describe systems and methods of automated searching and digital watermark screening of media object files on computer networks like the internet. See, e.g., assignee's U.S. Patent No. 5,862,260. Software used to perform automated searching and compiling of internet content or links is sometimes referred to as a web crawler or spider.
Digital watermarking is a process for modifying media content to embed a machine-readable code into the data content. The data may be modified such that the embedded code is imperceptible or nearly imperceptible to the user, yet may be detected through an automated detection process. Most commonly, digital watermarking is applied to media such as images, audio signals, and video signals. However, it may also be applied to other types of data, including documents (e.g., through line, word or character shifting, background texturing, etc.), software, multi-dimensional graphics models, and surface textures of objects.
Digital watermarking systems have two primary components: an embedding component that embeds the watermark in the media content, and a reading component that detects and reads the embedded watermark. The embedding component embeds a watermark by altering data samples of the media content in the spatial, temporal or some other domain (e.g., Fourier, Discrete Cosine or Wavelet transform domains). The reading component analyzes target content to detect whether a watermark is present. In applications where the watermark encodes information (e.g., a message), the reader extracts this information from the detected watermark.
The present assignee's work in steganography, data hiding and digital watermarking is reflected, e.g., in U.S. Patent Nos. 5,862,260, 6,408,082 and 6,614,914; and in published specifications WO 9953428 and WO 0007356 (corresponding to US Patent Nos. 6,449,377 and 6,345,104). A great many other approaches are familiar to those skilled in the art. The artisan is presumed to be familiar with the full range of literature concerning steganography, data hiding and digital watermarking. The subject matter of the present application is related to that disclosed in US Patent 5,862,260, 6,122,403 and in co-pending applications 09/571,422 filed May 15, 2000, 09/620,019 filed July 20, 2000, and 09/636,102 filed August 10, 2000. As an extension of the watermark-based information retrieval described in US Patent 5,862,260 and marketed by Digimarc Corporation (e.g., under the trade name IMAGEBRIDGE), watermark decoders can be employed in a distributed fashion to perform watermark screening and interacting with watermarked media objects on networks, including the internet. For example, watermark decoders are deployed at a variety of locations on a computer network such as the internet, including in internet search engines that screen media objects gathered by each search engine, network firewalls that screen media objects that are encountered at the firewall, in local area networks and databases where spiders do not typically reach, in content filters, in client-based web-browsers, etc. Each of these distributed decoders acts as a spider thread that logs (and perhaps acts upon) watermark information. Examples of the types of watermark information include identifiers decoded from watermarks in watermarked media objects, media object counts, addresses of the location of the media objects (where they were found), and other context information (e.g., how the object was being used, who was using it, etc.). The spider threads, in turn, send their logs or reports to a central spider program that compiles them and aggregates the information into fields of a searchable database.
But the internet is vast. One challenge is to locate watermarked content throughout the web. Thus, additional improvements are provided to even further explore the depths of the internet for watermark data.
According to one aspect of the present invention, a method of searching a network for watermarked content is provided. The method includes receiving one or more keywords associated with watermarked content and providing the one or more keywords to a network search engine. A listing of URLs that are associated with the one or more keywords is obtained from the network search engine. The URLs are visited and, while visiting each URL, the content at each URL is analyzed for digital watermarking. At least one watermark identifier and a corresponding URL location are reported when found. According to another aspect of the present invention, a system to direct network searching for watermarked content is provided. The system includes: i) a website interface to receive at least one of keywords and network locations from a customer; ii)
a website interface for communication with a plurality of distributed watermark detectors; iii) a controller to communicate keywords and network locations to distributed watermark detectors; and iv) a database to maintain information associated with digital watermarking and corresponding network locations. According to still another aspect, a method of searching a network for watermarked content is provided. The method includes receiving a visible pattern and searching the network for content corresponding to the visible pattern. Content identified as corresponding to the visible pattern is analyzed for digital watermarking. At least one watermark identifier and a corresponding URL location are reported when digital watermarking is found.
Another challenge is to find and manage content stored locally on a user's computer or on her networked computers. Searching tools have recently emerged to allow a user to search and catalog files on her computer. Examples are Google's Google Desktop Search and Microsoft's MSN Desktop Search. We provide improvements to ensure that metadata associated with images and audio are current and easily indexable by such desktop searching tools.
Thus, according to yet another aspect of the invention, we provide a method including receiving an imagery or audio file; identifying perceptual features in the imagery or audio file; and based on the perceptual features, generating metadata for the imagery or audio file.
In a related implementation, the method further includes indexing the metadata in a desktop searching index.
In another related implementation, the identifying includes pattern recognition, color analysis or facial recognition. According to still another aspect of the invention, we provide a desktop searching tool including executable instructions stored in computer memory for execution by electronic processing circuitry. The instructions include instructions to: i. search one or more computer directories for imagery or audio files; ii. upon discovery of an imagery or audio file, analyze the file for a digital watermark embedded therein, and if a digital watermark is embedded therein to recover a plural-bit identifier; iii. obtain metadata from the imagery or audio file; and iv. query a remote database with the plural-bit identifier to determine whether the file metadata is current. In a related implementation, the desktop searching tool further includes instructions to refresh the file metadata with metadata from the remote database when the file metadata is not current.
According to another aspect of the invention, a method of controlling a desktop searching tool is provided. The method includes searching one or more computer directories for imagery or audio files; upon discovery of an imagery or audio file, analyzing the file to determine whether a digital watermark is embedded therein. And if a digital watermark is embedded therein, then a plural-bit identifier carried by the digital watermark is recovered there from. The method further includes obtaining metadata from the imagery or audio file, and querying a remote database with the plural-bit identifier to determine whether the file metadata is current.
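By way of illustration only, the currency check just described might be approximated as below; the repository URL, query format and the use of a "last_modified" field are assumptions, since the disclosure does not fix a particular protocol, and the plural-bit identifier is simply passed in.

```python
import json
import urllib.request

REPOSITORY_URL = "https://metadata.example.com/lookup"  # hypothetical endpoint

def fetch_remote_metadata(identifier):
    with urllib.request.urlopen("{}?id={}".format(REPOSITORY_URL, identifier)) as resp:
        return json.load(resp)

def is_current(local_metadata, identifier):
    """Compare the file's last-modified stamp against the repository copy."""
    remote = fetch_remote_metadata(identifier)
    return local_metadata.get("last_modified", "") >= remote.get("last_modified", "")

def refresh_if_stale(local_metadata, identifier):
    if not is_current(local_metadata, identifier):
        local_metadata.update(fetch_remote_metadata(identifier))
    return local_metadata
```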
According to still another aspect of the invention, we provide a method to gather metadata associated with imagery or audio. The method includes receiving an imagery or audio file including a content portion and a metadata portion; analyzing the metadata to determine at least one of a time and day when the content portion was created; automatically accessing one or more user software applications to gather information associated with at least one of time and day; and adding the information to the metadata portion.
According to yet another aspect of the invention, a method of obtaining metadata for a first imagery or audio file is provided. The method includes determining other imagery or audio files that were created within a predetermined window of a creation time for the first imagery or audio file; gathering metadata associated with the other imagery or audio files; and associating at least some of the metadata with the first imagery or audio file. According to another aspect of the invention, we provide a method of authoring metadata for an image or audio file or file directory via a computer. The method includes: providing a graphical user interface through which a user can select a category of metadata from a plurality of categories of metadata; and once selected, applying the selected category of metadata to a file or contents in a directory through a mouse cursor or touch screen, whereby the selected category of metadata is associated with the image or audio file or file directory. Further aspects, features and advantages will become even more apparent with reference to the following detailed description and accompanying drawing.
Brief Description of the Drawings
FIG. 1 illustrates a system for enhancing network searching. FIG. 2 illustrates a desktop searching tool.
FIG. 3 illustrates a metadata repository that communicates with the desktop searching tool of FIG. 2.
FIG. 4 illustrates associating a person's metadata profile with images. FIG. 5 illustrates a graphical user interface for selecting automatically gathered or generated metadata.
FIG. 6 illustrates a metadata authoring tool on a user's desktop. FIG. 7 is a flow diagram illustrating a desktop indexing method according to yet another aspect of the invention.
FIG. 8 illustrates an identification document including electronic memory. FIG. 9A illustrates an identification document issuance process; and FIG. 9B illustrates a related digital watermarking process.
Detailed Description
Introduction
The following sections describe systems and processes for content searching, indexing and desktop searching. Some of these employ imperceptibly embedded digital watermarks in combination with other mechanisms for identifying and indexing media content, including still images, video, audio, graphics, and text. Some of the sections describe methods and systems for automatically generating and gathering information, indexing the information in a searchable index and associating the information with media files. One section even describes an identification document with electronic circuitry that is further secured with digital watermarking.
Searching more of the Internet and Integrated Searching Systems
Web searching continues to be a boon for the internet. Examples include Google, Yahoo!, and MSNBC, to name a few. Web searching allows a user to find information that is distributed over the internet. However, current searching systems have two major problems. First, web crawlers that find information for indexing on a search engine only search around 10-20% (a generous estimate) of the internet. Second, a web crawler traditionally only locates surface information, such as HTML (hypertext markup language) web pages, and ignores deep information, including downloadable files, FlashMedia and database information. We are faced with a problem of how to efficiently search the internet. The more internet we search, the higher the chance we have of locating watermarked content thereon.
A first solution uses an army of client-based web-browsers to locate watermarked content.
One implementation of this first solution searches content that a user encounters as she routinely surfs the internet. Once identified, watermarked content and a content location can be reported to a central location. The power of this tool emerges as watermark detectors are incorporated into hundreds or thousands (even millions) of browsing tools. Watermarked content - perhaps located behind password protected or restricted websites - is analyzed after a user enters the website, perhaps after entering a user id or password to gain access to a restricted website.
Consider a few additional details. A digital watermark reader is incorporated into (or cooperates with) a user's internet browser or file browser, such as Windows Explorer. Using a web file browser equipped with watermark reader software (e.g., a plug-in, integrated via an Application Programming Interface, or as a shell extension to the operating system), a user browses the internet and/or content files. The digital watermark reader analyzes content encountered through the browser. For example, say a user visits ESPN.com, CNN.com and then looks at images posted on LotsofImages.com. The watermark reader sniffs through the various web pages and web images as the user browses the content. (A watermark reader can also be configured to review web-based audio and video as well.) The digital watermark reader is looking for watermarked content. Upon finding and decoding watermarked content, the reader obtains a watermark identifier. The identifier can be a numeric identifier or may include text or other identifying information. The watermark reader stores (or immediately reports) the identifier and a web location at which the watermark identifier was found. The report can also include a day/timestamp.
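A rough sketch of that reporting path follows (not an actual Digimarc reader); the hook name, report fields and central server address are illustrative assumptions.

```python
import datetime
import json
import urllib.request

REPORT_URL = "https://watermark-reports.example.com/log"  # hypothetical central server

def detect_watermark(image_bytes):
    """Placeholder: a real reader analyzes the pixels and returns a numeric or
    text identifier, or None when no watermark is present."""
    return None

def on_resource_loaded(image_bytes, page_url):
    """Hook assumed to be called by the browser for each image encountered."""
    identifier = detect_watermark(image_bytes)
    if identifier is None:
        return
    report = {
        "identifier": identifier,
        "location": page_url,
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
    }
    request = urllib.request.Request(
        REPORT_URL,
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)  # immediate report; could also be queued
```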
When the central server receives a location report, the server can, optionally, verify the existence of the watermarked content by visiting the web location and searching for the watermarked content. Alternatively, the server reports the watermarked content to a registered owner of the content. The owner is identified, e.g., through a database lookup that associates identifiers with their owners. (The owner can then use the report to help enforce copyrights, trademarks or other intellectual property rights.). The central server can also maintain a log - a chain of custody if you will - to evidence that watermarked content (e.g., audio, video, images) was found on a particular day, at a particular web location.
Instead of a watermark reader reporting identified content to a server, the watermark reader can alternatively report the content identifier and location directly to an owner of the watermarked content. In this implementation, a watermark includes or links to information that identifies a content owner. The watermark reader uses this information to properly direct a message (e.g., automated email) to the owner when reporting a watermark identifier and location at which the watermark identifier was found. A related implementation of our first solution is a bit more passive. A watermark reader is incorporated into a browser (or screen saver). The watermark-reader-equipped browser searches the internet for watermarked content when the computer is idle or otherwise inactive. For example, the browser automatically searches (e.g., visits) websites when a screen saver is activated, or after a predetermined period of computer inactivity.
But which websites does the browser visit? There are a number of suitable approaches to direct web browsing. In a first implementation, a browser (or cooperating software) communicates with a central server to obtain a list of websites to visit. The browser caches the list, and accesses the websites listed therein when the computer is inactive. Or, instead of providing a querying browser a list of websites, the server provides the browser with a list of keywords. The keywords are plugged into a search engine, say Google, and the browser then searches resulting websites during periods of computer inactivity. The browser can be configured to accept keywords and automatically access a search engine, where resulting URLs are collected and searched. Or, the central server can hit the search engine, plug in the keywords, and collect the URLs. (Content owners can communicate with the central server, giving it a listing of websites or keywords that the customers would like to have searched.).
Instead of operating during periods of inactivity, a watermark-reader-equipped browser can search as a background process. For example, the browser searches websites while a computer user is frantically pulling together a PowerPoint presentation or typing email. The background process is optionally interrupted when the user clicks the browser icon for web browsing or when the user needs additional computer resources. In fact, one implementation of our invention provides a regulator (e.g., a software module) to monitor activity associated with watermark searching. The regulator automatically scales back watermark searching activity if processing or computer resources reach a predetermined level. (A pop-up window can also be presented to allow the user to determine whether to continue watermark searching.)
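One way such a regulator could be approximated is sketched below, using the third-party psutil package to sample CPU load; the threshold and back-off interval are arbitrary choices rather than values taken from this disclosure.

```python
import time
import psutil  # third-party package, used here only to sample CPU utilization

CPU_THRESHOLD = 60.0   # arbitrary: pause watermark searching above this load
CHECK_INTERVAL = 5.0   # seconds to wait before sampling again

def regulated_search(sites, search_site):
    """Visit each site, backing off whenever overall CPU load is high."""
    for site in sites:
        while psutil.cpu_percent(interval=1) > CPU_THRESHOLD:
            time.sleep(CHECK_INTERVAL)   # scale back until resources free up
        search_site(site)                # caller-supplied watermark search
```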
If a watermark reader encounters a database or flash media (or other content that is difficult to analyze), the watermark reader can report such findings to a central server. The central server can revisit the websites to handle such layered content. For example, the central server may employ algorithms that allow databases or FlashMedia to be explored for watermarked content. One example of a database is an image database. The database is combed, perhaps with a keyword search for file names or metadata, or via a record-by-record search. Each record (or specific records) is then searched for watermarked content.
Targeted Searching
Efficiencies are leveraged when watermark detection is combined with targeted searching.
For example, a content owner (e.g., a copyright owner of prize-winning Beagle images) discovers that her images are being copied from her website and illegally distributed on the internet. Of course, the content owner embeds her images with digital watermarks prior to posting them on her website. The watermarks preferably carry or link to image identifying information, such as the content owner's name, image identifier, data copyright information, etc. The content owner further discovers that her pirated images are often associated with a particular brand of knock-off dog food, "Yumpsterlishious." A targeted search (e.g., via a search engine) for "Yumpsterlishious" and/or "Beagles" generates a listing of, oh say, 1024 URLs. Content from each of the 1024 URLs is then analyzed with a watermark reader to locate unauthorized copies of the content owner's images. The location (e.g., URL) of suspect images can be forwarded to the copyright owner for legal enforcement. Of course, other keywords may include author, photographer, artist, subject matter, dates, etc. The above examples leverage keyword searching (or targeted searching) and digital watermark analysis.
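A compact sketch of that keyword-then-watermark pipeline is given below; the search-engine query, page fetcher and watermark reader are passed in as placeholders, because no particular search API or detector is identified here.

```python
def targeted_search(keywords, search_engine_query, fetch, detect_watermark):
    """Keywords -> candidate URLs -> watermark scan -> list of suspect hits.

    search_engine_query(keywords) returns a list of URLs,
    fetch(url) returns the page or image bytes, and
    detect_watermark(content) returns an identifier or None.
    All three are caller-supplied stand-ins for real components."""
    suspects = []
    for url in search_engine_query(keywords):      # e.g., the 1024 hits above
        identifier = detect_watermark(fetch(url))
        if identifier is not None:
            suspects.append((identifier, url))     # forwarded for enforcement
    return suspects
```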
Another targeted search utilizes metadata associated with content. A search engine (or media handlers, like web browsers, media players, etc.) looks for keywords in content metadata (e.g., headers, XML tags, etc.). Content including certain keywords in associated metadata (e.g., to borrow from the above example, "Beagles") is searched with a watermark reader to determine whether it includes a watermark embedded therein. Or metadata associated with an audio or video file is searched for keywords, and if the keywords are found, the file is further analyzed with a digital watermark reader. This example uses keywords in metadata to identify likely candidates for watermark detection.
We can also combine watermark detection with so-called pattern matching. Pattern matching algorithms are well known, and we can employ such algorithms while searching the internet for lexical or image based matches. Watermark decoding is performed only on content meeting predetermined pattern criteria. For example, a pattern matching search is initiated for all images or graphics including a stylistic X, a trademark for Xerloppy Corporation. The pattern matching search turns up 72 hits. The images (or graphic files) are then searched to determine whether a digital watermark is embedded therein.
Yet another targeted searching tool leverages network traffic patterns. Routers and switch nodes are monitored to determine internet traffic trends. A watermark- reading web crawler is directed toward the trends. For example, a particular router is monitored to see where traffic originated or is routed from prior to accessing a website including copyrighted (and watermarked) images. The suspected originating or routing websites are crawled in search of watermarked content.
Still another targeted searching method routinely analyzes websites in which unauthorized copyrighted materials have been previously found. For example, a server maintains a listing of websites where watermarked content has been previously found. The websites are routinely crawled in search of any watermarked content.
Integrated Searching System
FIG. 1 illustrates a system 101 implementing an integrated searching strategy. The term "integrated" is used to reflect a system operable to employ both manual and automated searching. One object of system 101 is to identify digital watermarked content on a network, like the internet. The system includes a central control panel or interface 102, through which searching criteria is provided. For example, a customer can enter search terms (e.g., "Beagle") or specific web addresses that she would like searched for watermarked content through, e.g., a web-based customer interface 104. The terms and/or URLs are communicated to interface 102. Interface 102 farms the terms and/or URLs to a watermark detector-enabled web crawler (or searching agent) 120 or to a distributed watermark detector-enabled web crawler (e.g., a soldier from the "army" of web browsers mentioned above). As an alternative, interface 102 provides the terms and/or URLs to directed search module 106, which is a server-based web crawler including a digital watermark detector. The directed search module 106 hits the corresponding URLs in search of watermarked content.
The FIG. 1 system further includes a manual searching module 108 in which an operator directs searching. For example, an operator enters a website to be forwarded to web crawler 120, or directs a web browser to a particular website in search of watermarked content. Or, the module 108 is used to interface with a search engine, e.g., Google, where keywords are entered and resulting URLs are provided to watermark detector-enabled web browsers. Of course, interfacing with a search engine can be automated as well. Interface 102 also preferably interfaces with modules 110 (which may include some human interaction) to assist in digging deeper into websites, e.g., websites including databases and FlashMedia. Modules 110 may also provide the system with additional URLs to visit. These URLs may be directly provided to web crawler 120, but are preferably controlled by control panel 102.
Results from web crawler 120 (and reports from distributed web crawlers) are provided to a database 130 for customer reports or for further analysis.
Search Engine Categorization
Search engines employ web crawlers to categorize web pages. For example, website text is obtained by a crawler and used to create keyword indexing. Website owners can also register a website by listing the URL and keywords. An improvement is to include a digital watermark analysis in the registration or categorization process. For example, the search engine's web crawler employs a digital watermark reader and scans a target website for digital watermarking. A digital watermark includes a unique identifier, and perhaps text. The identifier and/or text are used as keywords when cataloging the target website. For example, the search engine may associate a web address with a watermark numeric identifier and any text carried by the watermark, and may even indicate that the website includes digital watermarking. The watermark-based keywords are searchable along with any keywords derived from text or HTML found on the website.
As a variation of the above categorization, content can include XML tags. The tags can include a field which indicates that one or more items of content on a website include digital watermarking. The web crawler/search engine need not decode the watermarks; but rather, determines from the XML fields (or header data) that the website includes digital watermarking. The web crawler or associated search engine includes a "watermarking is present" indicator as a keyword associated with the website. Then, a search engine keyword search may include all websites including "watermarking is present," plus any relevant keywords (e.g., "Beagles"). Resulting website listings can be searched for digital watermarking.
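A sketch of how a crawler might fold both variations into its keyword index follows; the index layout and the XML tag name are assumptions made only for illustration.

```python
def index_site(url, page_text, xml_tags, decode_watermarks, index):
    """Add text keywords plus watermark-derived keywords for one website."""
    keywords = set(page_text.lower().split())
    # Variation 1: decode watermarks and index their identifiers and text.
    for identifier, text in decode_watermarks(url):
        keywords.add(str(identifier))
        keywords.update(text.lower().split())
        keywords.add("watermarking is present")
    # Variation 2: trust an XML/header tag that flags watermarked content
    # without decoding it ("contains_watermarking" is an assumed tag name).
    if xml_tags.get("contains_watermarking") == "true":
        keywords.add("watermarking is present")
    for word in keywords:
        index.setdefault(word, set()).add(url)
```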
Mobile Applications
Another searching tool facilitates communication with a plurality of mobile devices and leverages search results generated by the various mobile devices. Say, for example, that 23-year old Ginger goes clubbing on Saturday night. She finds her way to her favorite hangout and meets up with three of her closest friends. The music is loud and conversation is stifled through the noise and haze. But wireless communication is uninhibited. Ginger - as always - has packed along her wireless device (e.g., Pocket PC, Blackberry, cell phone, etc.). Her device is, e.g., BlueTooth enabled and readily communicates with similar devices carried by Ginger's friends. (Long before, Ginger and her friends established passwords or shared security protocols that enabled secure communication; else anyone standing nearby with a wireless device might be able to sniff contents on their devices.). Ginger's device communicates with the other devices to see whether they have recently performed any searching, and if so, what the nature of the searching was. Ginger can preset search topics (key terms or identifiers) in her wireless device. Instead of presetting search topics, Ginger's wireless device can automatically generate search topics based on Ginger's web browsing history or past internet queries. One setting can be simply to copy any search results carried out by the other devices. Ginger's device uses these preset search topics to sniff other devices and see if they have found anything related to Ginger's search terms. One friend, Kim, performed a targeted search, yesterday, for music penned and performed in the late 1980's by an obscure Australian rock-band, Aintitnice. The search results (and maybe even corresponding content like audio files) are stored in a search results or shared directory. (The search need not be carried out on Kim's mobile device, but instead, can be carried out on Kim's home computer, with the search results being communicated to Kim's mobile.) Ginger likes Aintitnice also, and has entered the group as a search term in her mobile device. Ginger's wireless device negotiates with Kim's device to obtain the search results and/or even the audio files. (If the audio files are rights protected, Ginger's device can negotiate with an online server to obtain the necessary rights to play the music. For example, the audio file may include a digital watermark that is used to link to the online server.).
Self selection by Ginger (e.g., being friends with Kim and presetting Aintitnice) and proximity (e.g., clubbing with certain friends) enable mobile searching.
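Purely as an illustration of the device-to-device exchange in this scenario, the query and response might be modeled as simple JSON messages carried over whatever secure, already-authenticated transport the paired devices share (Bluetooth in the story); the message fields are assumptions.

```python
import json

def build_query(preset_topics):
    """Message the querying device sends to a nearby, trusted peer."""
    return json.dumps({"type": "search_query", "topics": sorted(preset_topics)})

def answer_query(message, shared_directory):
    """Peer-side handler: return stored search results matching any topic."""
    topics = set(json.loads(message)["topics"])
    matches = [entry for entry in shared_directory
               if topics & set(entry.get("keywords", []))]
    return json.dumps({"type": "search_results", "results": matches})
```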
A few possible combinations of this mobile device searching include, e.g.:
Al. A method of searching comprising: from a first mobile device, wirelessly querying a second mobile device to determine whether the second mobile device has internet search results relating to predetermined search criteria; and receiving at least a subset of the search results.
A2. The method of Al, wherein the first device also queries to determine whether the second mobile device has content related to the predetermined search criteria.
Bl. A method of searching comprising: receiving search criteria in a first, handheld mobile device; upon sensing of a second, handheld mobile device by the first, handheld mobile device, automatically and wirelessly querying the second, handheld mobile device to determine whether the second, handheld mobile device has any content stored thereon corresponding to the search criteria; and receiving content corresponding to the search criteria from the second, handheld mobile device.
A few other combinations of the above sections include:
C1. A method of searching a network for watermarked content comprising: receiving data representing a visible pattern; searching the network for content corresponding to the visible pattern; analyzing content identified as corresponding to the visible pattern for digital watermarking; obtaining at least one watermark identifier from the digital watermarking; and reporting at least one watermark identifier and a corresponding network location when digital watermarking is found.
C2. The method of C1, wherein the visible pattern comprises a company logo.
C3. A method of searching a network for watermarked content comprising: accessing a remote server to obtain a list of network locations; searching the network locations for digital watermarking during periods of computer user inactivity; and reporting to the remote server at least one watermark identifier and a corresponding network location when digital watermarking is found.
C4. A method of searching a network for watermarked content comprising: accessing a remote server to obtain search criteria; searching the internet for digital watermarking as a background process during periods of computer user activity; reporting to the remote server at least one watermark identifier and a corresponding network location when digital watermarking is found.
C5. The method of C4, wherein search criteria comprises an instruction to search internet content accessed by the user.
C6. The method of C4, wherein the search criteria comprises keywords.
C7. The method of C6, further comprising automatically accessing a network search engine, providing the keywords to the network search engine, and obtaining therefrom a listing of URLs associated with the keywords, wherein said searching comprises searching the URLs.
C8. A system to direct network searching for watermarked content comprising: a website interface to receive at least one of keywords and network locations from a customer; a website interface to communicate with a plurality of distributed watermark detectors; a controller to control communication of keywords and network locations to the plurality of distributed watermark detectors; and a database to maintain information associated with digital watermarking and corresponding network locations.
C9. A system to direct network searching for watermarked content comprising: a website interface to receive at least one of keywords and network locations from a remote customer; a web browser including or cooperating with a digital watermark detector; a controller to communicate keywords and network locations to a web browser, wherein the web browser searches locations associated with the keywords or the network locations; and a database to maintain information associated with digital watermarking and corresponding network locations.
Desktop Searching
Another aspect of the invention is a desktop searching tool that provides efficient media (e.g., audio, images and video) searching and cataloging. The tool can also provide metadata refreshing capabilities. We start with a searching tool 201 (e.g., a software program or application) that resides on a user's computer 200 (FIG. 2). The searching tool includes two primary software components - an indexing tool 202 and desktop searching tool 204. Of course, tools 202 and 204 need not be separate components or software applications, but are referred to separately here to ease discussion of their individual functions. The software can be written in any language available to software programmers such as C, C++, Visual Basic, Java, Python, Tcl, Perl, Scheme, Smalltalk and Ruby, etc.
The indexing tool 202 combs through the user computer (or home network) in search of image, audio or video files. The indexing tool 202 catalogs its findings in one or more indices (e.g., it creates an index). An "index" contains a searchable listing or collection of words, numbers and characters and their associated files and locations. A user then searches an index - instead of the entire computer - when she wants to find a file including a keyword. The search is carried out with Desktop Searching Tool 204. We mention here that we sometimes refer to both image and video files as "imagery." Our use of the term "imagery" is also broad enough to cover multimedia files as well. The desktop searching tool 204 provides a user interface (e.g., desktop window or HTML based interface) through which a user queries an index to find specific imagery or audio files or metadata associated therewith. Imagery or audio files are typically defined by a content portion and a metadata portion.
A user is preferably able to select the storage areas that searching tool 201 searches and catalogs, e.g., the C drive, certain files or directories, and/or removable media (zip drive, external hard drive, DVD drive, attached MP3 player or jump drive (flash memory, USB drive), etc.). Of course, the user could select her entire computer or home network. The searching tool 201 can preferably be placed in a background searching mode.
When operating in a background searching mode, the indexing tool 202 searches the computer while a user works on other applications (e.g., akin to common anti-virus software that routinely looks at all incoming files). This background mode preferably filters new files as they are created or received by the user's computer or home network.
To simplify the discussion going forward we'll focus on imagery files. But the reader should not presume that our inventive techniques are limited to just image or imagery files. Instead our techniques also apply to audio and rich content (e.g., Macromedia Flash files), etc.
Our indexing tool searches for image files, e.g., as identified by their file extensions *.gif, *.jpg, *.bmp, *.tif, etc. (If searching for audio or video files, we might search for *.au, *.wmv, *.mpg, *.aac, *.mp3, *.swf, etc.)
An image is indexed once it is located. To do so the image is opened sufficiently (e.g., perhaps without accessing any compressed image portion) to access a metadata portion, if any. The metadata can be provided for inclusion in a searchable index. For example, consider an image named "Falls.jpg," with metadata including a descriptive phrase: "Picture of Falls taken near Silver Lake, Montana." The file name and the descriptive phrase are added to the desktop search index, along with the file location and any other metadata in the file.
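By way of illustration only, the following sketch (in Python; the directory names, the sidecar-based read_description helper and all values are our own assumptions, not part of the above disclosure) shows one way such an indexing pass might locate image files by extension, pull readily available descriptive text, and record it in a keyword index:

    # Illustrative indexing pass: walk selected storage areas, find image files
    # by extension, read whatever description is readily available, and build a
    # keyword -> file-path index for later desktop searching.
    import os

    IMAGE_EXTENSIONS = {".gif", ".jpg", ".jpeg", ".bmp", ".tif", ".tiff"}

    def read_description(path):
        """Placeholder metadata reader; a real tool would parse EXIF/IPTC/XMP."""
        # Here we pretend the descriptive phrase lives in a sidecar .txt file.
        sidecar = os.path.splitext(path)[0] + ".txt"
        if os.path.exists(sidecar):
            with open(sidecar, encoding="utf-8") as f:
                return f.read().strip()
        return ""

    def build_index(roots):
        index = {}  # keyword -> list of file paths
        for root in roots:
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if os.path.splitext(name)[1].lower() not in IMAGE_EXTENSIONS:
                        continue
                    path = os.path.join(dirpath, name)
                    words = set(name.lower().replace(".", " ").split())
                    words |= set(read_description(path).lower().split())
                    for word in words:
                        index.setdefault(word, []).append(path)
        return index

    if __name__ == "__main__":
        idx = build_index([os.path.expanduser("~/Pictures")])
        print(idx.get("falls", []))  # e.g., find "Falls.jpg" by keyword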
This first implementation works best when the searching tool 201 cooperates with a desktop searching index (e.g., MSN Desktop Search) through an application program interface. For example, when the Desktop Search encounters an image file it calls searching tool 201, or passes the image file or file location to searching tool 201. In some alternatives, we use image searching software from IFilterShop LLC (available on-line at www.ifiltershop.com) as a component of indexing tool 202. The IFilterShop software would help to search images for metadata associated therewith. Such metadata is added to an index to be searched by a desktop searching tool 204.
In a second implementation, indexing tool 202 creates an HTML file (or XML, Word, or other text searchable file) for each image file searched. The HTML file is preferably stored in the same directory as the image file, or in a directory that is accessible to a searching tool. The HTML file includes the image file name ("Falls.jpg") and a listing of any terms ("Picture of Falls taken near Silver Lake, Montana") and other metadata (time, date taken, camera parameters, geo-coordinates, etc.). The HTML file preferably includes a similar name, but with a different extension (e.g., "Falls.dwm.html"). We can optionally include (or associate) a thumbnail representation of the JPEG image in the HTML file as well.
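A minimal sketch of this second implementation follows (Python; the write_html_sidecar helper and the example metadata values are illustrative assumptions). It writes a "Falls.dwm.html" companion file containing the image file name, descriptive text and other metadata so that ordinary text indexers can find it:

    # Write an HTML companion ("sidecar") file next to the image so that a
    # text-oriented desktop indexer can pick up the image's metadata.
    import html
    import os

    def write_html_sidecar(image_path, metadata):
        base, _ext = os.path.splitext(image_path)
        sidecar_path = base + ".dwm.html"
        rows = "\n".join(
            f"<tr><td>{html.escape(k)}</td><td>{html.escape(str(v))}</td></tr>"
            for k, v in metadata.items()
        )
        doc = (
            "<html><body>\n"
            f"<h1>{html.escape(os.path.basename(image_path))}</h1>\n"
            f"<p><a href='{html.escape(image_path)}'>original image</a></p>\n"
            f"<table>{rows}</table>\n"
            "</body></html>\n"
        )
        with open(sidecar_path, "w", encoding="utf-8") as f:
            f.write(doc)
        return sidecar_path

    write_html_sidecar("Falls.jpg", {
        "description": "Picture of Falls taken near Silver Lake, Montana",
        "date_taken": "2004-08-14 16:02",
        "geo": "47.72N, 113.41W",
    })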
The HTML file is searchable. For example, indexing tool 202 (or the Google and MSN desktop searching tools) can search the HTML file for metadata (e.g., text), and once the metadata is found, it is added to the desktop index.
Digital Watermarks
In both the first and second implementations of the previously discussed desktop searching, an image file is preferably searched for an embedded digital watermark. That is, the indexing tool 202 includes or cooperates with a digital watermark detector. If a watermark is found, the HTML file is provided with a watermarking indicator (e.g., text, number or graphical indicator) to show that the image file is watermarked and what information is carried by the watermark (e.g., a plural-bit identifier or message).
Thus, a digital watermark — embedded in an image - becomes searchable by a desktop searching tool.
If a watermark is not found in an image, one can be embedded therein if desired. A watermark can also be used as "the" identifier to link between an image and an on-line metadata repository, as further explored below.
Watermark-based Refreshing
In U.S. Patent Application No. 09/482,786, filed January 13, 2000, and in its parent applications, we refer to a metadata repository and using a steganographic identifier to access the metadata repository. Related implementations are now provided.
We start with the premise that metadata will - inevitably - become disassociated with its underlying content. Scaling, cropping, editing, transforming and transmitting content all increase the chances of separating metadata from its content.
A digital watermark provides the persistent link between metadata and content. One aspect of our invention is a metadata "refresh" or synchronization.
Desktop searching tool 201 - as part of the indexing process - checks with a metadata repository to ensure that metadata associated with an image is current or up to date. (As will be appreciated, these refreshing or synchronization techniques can also be extended to internet searching tools, like Google and Yahoo!, as well. A search engine, after or as part of a search, can ask a searcher whether they would like to populate metadata for a particular image, audio or video found. The methods and systems detailed below can be used for such populating.)
In particular, the desktop searching tool 201 queries a metadata repository 210 (FIG. 3) to see if there is any metadata associated with an encountered image. The repository 210 can be stored locally on the user's computer 200, but more likely the repository 210 is accessed over a network (e.g., internet or cellular network).
If an encountered image includes a digital watermark identifier embedded therein, the watermark identifier is communicated to the metadata repository 210. The identifier is used to index into the repository 210 and locate any information associated therewith. The information is communicated to the searching tool 201 for indexing. The information stored in the repository is checked against the image metadata. If the repository information is the most current or up to date, it is accessed and indexed (and perhaps stored or associated with the image on the user's computer). If, however, the image includes the most up to date metadata, the image metadata is preferably copied to the metadata repository and cataloged according to the watermark identifier.
Relative metadata "freshness" can be determined, e.g., by a metadata timestamp or even a "last updated" file indicator. Or, if no file metadata is found (another case of staleness), metadata from the repository is provided for indexing and associated with the image file.
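One possible way to arrange the refresh logic is sketched below (Python; the in-memory dictionary merely stands in for the networked repository 210, and the field names are assumptions). The watermark identifier keys into the repository, and timestamps decide whether the repository copy or the file copy is the fresher one:

    # Refresh/synchronization sketch: the watermark identifier indexes into a
    # metadata repository; whichever side carries the later timestamp wins.
    from datetime import datetime

    repository = {
        1234: {"description": "Silver Lake Falls, Montana",
               "last_updated": datetime(2005, 3, 1)},
    }

    def refresh_metadata(watermark_id, local_metadata):
        """Return the freshest metadata and push local updates to the repository."""
        remote = repository.get(watermark_id)
        if remote is None or local_metadata.get("last_updated", datetime.min) > remote["last_updated"]:
            # Local copy is newer (or repository is empty): upload it.
            repository[watermark_id] = dict(local_metadata)
            return local_metadata
        # Repository copy is newer (or no local metadata): use it for indexing.
        return remote

    local = {"description": "Waterfall picture", "last_updated": datetime(2004, 9, 2)}
    print(refresh_metadata(1234, local))  # repository copy wins here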
Since a user may not be so trusting as to simply accept new metadata or fresh content, a hash or other reduced-bit identifier can be used to verify the veracity of content and metadata. For example, say a header indicates the underlying content is a song by the Eagles. The header can include a hash of the song to allow verification of the contents and header information. The hash is provided to a trusted third-party repository along with the metadata. The hash is authenticated and the metadata (and song) are then deemed trustworthy. The searching tool 201 can periodically check with the metadata repository 210 to ensure that the image metadata (and index of such metadata) is up to date. A graphical user interface may also provide a selectable button, allowing a user to select a feature to continuously (or frequently) query the metadata repository 210 to ensure metadata freshness. As an alternative implementation, the searching tool 201 inquires whether an encountered image itself is stored in repository 210. If not, the searching tool provides a copy of the image to the repository 210. Then, both the metadata and image are stored in the repository 210. A search index can be updated to reflect that the image itself has been stored in the repository 210. (In some cases the image is removed from the user's computer when it is copied to the repository). An image registration can be automatically carried out by the searching tool 201. For example, the registration may include association of the image to the user's account or assignment of a unique identifier (e.g., via a digital watermark, fingerprint or hash).
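The hash-based trust check might be arranged roughly as follows (Python; the broker's repository is modeled as a simple dictionary, and the song bytes and metadata are made-up examples). Metadata is accepted only when the trusted third party vouches for the same content hash:

    # Trust-check sketch: a content hash travels with the metadata and is
    # compared against the hash held by a trusted third-party repository.
    import hashlib

    trusted_repository = {}  # content hash -> metadata vouched for by the broker

    def register_with_broker(content_bytes, metadata):
        digest = hashlib.sha256(content_bytes).hexdigest()
        trusted_repository[digest] = metadata
        return digest

    def verify(content_bytes, claimed_metadata):
        """Accept the claimed metadata only if the trusted repository agrees."""
        digest = hashlib.sha256(content_bytes).hexdigest()
        return trusted_repository.get(digest) == claimed_metadata

    song = b"...audio sample bytes..."
    meta = {"artist": "Eagles", "title": "Example Track"}
    register_with_broker(song, meta)
    print(verify(song, meta))                   # True: content and metadata agree
    print(verify(song, {"artist": "Someone"}))  # False: header cannot be trusted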
Consider some additional watermark-based metadata gathering examples. A fledgling photographer takes a memory card full of pictures while vacationing at Disneyland. Prior to taking her trip, the photographer programmed her camera (which may be incorporated into a cell phone) to watermark some or all pictures taken by the camera with the same identifier. The identifier is associated in the data repository 210 with key words or information (e.g., vacation dates, location, family members on the trip, on-line journal, etc.). Our searching tool 201, once it encounters the watermark identifier in a Disneyland picture, queries the data repository 210 with the identifier in search of additional metadata. The key words or information are retrieved from the data repository 210 and indexed for desktop searching. Thus, the identifier is used to generate additional metadata. The metadata can also be indexed in a searchable index.
Now suppose that the repository 210 is a public repository. The young photographer selects an identifier that is generally associated with Disneyland. That is, the photographer selects an identifier that people generally use when vacationing at Disneyland. Perhaps the watermark identifier is obtained through a trusted metadata broker, one who is trusted to provide or obtain metadata associated with key metadata "ground truths" (e.g., like location, events, dates, etc.). The metadata broker then gathers general metadata that is associated with the identifier or with the location or event with which the identifier is associated. A user identifier can be used in connection with the selected identifier to aid in identifying the young photographer. The public or trusted metadata broker populates or obtains data records associated with the identifier (e.g., people post Disneyland favorite memories, directions, Mickey Mouse facts; or the trusted metadata broker obtains metadata itself, etc.). The searching tool 201, once it encounters the watermark identifier in a Disneyland picture, queries the data repository 210 with the identifier in search of additional metadata. The data records are retrieved and indexed for desktop searching. (Of course, instead of a public identifier, a semi-public identifier can be provided. For example, all members attending a family reunion can use the same identifier. Use of the term "same" includes a situation where a watermark has many payload fields, and the "same" identifier is included in a predetermined field. In this multi-payload field situation, two watermarks may include the same identifier but have different information stored in different fields.)
Metadata Gathering
Metadata can be gathered using other techniques as well. For example, a location of an image can be inferred from related clues. An image file named "DisneyLand001" was probably taken at Disneyland. The word Disneyland is provided to an internet search engine or data repository to gather metadata. The metadata is provided to a desktop searching tool which updates the image file's metadata portion and indexes the new metadata in a searchable desktop index. A directory structure name and/or date and time information can be used to gather metadata. For example, if searching tool 201 knows (e.g., from a metadata field or watermark date/time stamp) that a picture was taken on February 14, 2005 at 8:30 pm, the searching tool can use this information to gather related metadata. Perhaps the searching tool queries the photographer's Outlook calendar or other calendaring software to see what was scheduled at that time ("Valentine's Day dinner at Jake's with Jane"). This information is provided for indexing by the desktop searching tool 201. Not only is this information provided for indexing, but it can also be associated as metadata in the image file. Or, if a user keeps an electronic journal or diary, a certain date within the journal or diary can be similarly queried. For example, words or terms within a journal entry are extracted, indexed and then stored as metadata. Still further, the searching tool can access financial or checkbook software (e.g., Microsoft Money or on-line bank statements) to check receipts or entries around this time. (Interfacing with Outlook, MS Money, Word and Excel is straightforward to those skilled in the art given the public information about such programs and their interfaces. For example, Outlook can be accessed using Automation techniques from just about any program written with Microsoft Visual Basic. Other techniques use application program interfaces, etc.) A desktop searching tool 201 may also use an audit trail to gather metadata. Suppose, for example, that a user receives a picture emailed from her brother Scott. The email trail (from whom and when received) can be used as metadata for the picture. (Recall from the discussion above that all files can be searched when received. For example, the indexing tool 202 recognizes that a new image is received in an Outlook Inbox. The email history and image are combed by the indexing tool 202 to gather this information.)
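A sketch of the timestamp-driven gathering is given below (Python; the calendar is represented as a plain list of entries rather than a live Outlook interface, and the event and window size are assumptions):

    # Given a capture time from a metadata field or watermark, look up what the
    # user's calendar says was scheduled around that time and use it as metadata.
    from datetime import datetime, timedelta

    calendar_entries = [
        {"start": datetime(2005, 2, 14, 20, 0), "end": datetime(2005, 2, 14, 22, 0),
         "subject": "Valentine's Day dinner at Jake's with Jane"},
    ]

    def metadata_from_calendar(capture_time, window=timedelta(hours=1)):
        """Return calendar subjects overlapping the capture time (plus a small window)."""
        hits = []
        for entry in calendar_entries:
            if entry["start"] - window <= capture_time <= entry["end"] + window:
                hits.append(entry["subject"])
        return hits

    print(metadata_from_calendar(datetime(2005, 2, 14, 20, 30)))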
An internet history or cache is also looked at. For example, search terms entered into an internet search engine are pulled from the Browser's history or are cached and used as metadata for an image found from the search.
Many of today's cameras are equipped with GPS units. GPS data generated by these units can be stored in header or watermark information. Searching tool 201 uses the GPS data to locate related metadata. For example, GPS coordinates are extracted from an image and are provided to a geographical database. The coordinates are used to index the database and find metadata associated with the geolocation. The metadata can include a city name, historical information, current weather, building specification, associated pictures, etc.
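One illustrative arrangement (Python; the geographical table, the nearest-match rule and the distance threshold are all assumptions made for the example) is as follows:

    # GPS-based lookup: coordinates pulled from a header or watermark are matched
    # against a geographical table to produce place-name metadata.
    import math

    geo_database = [
        {"lat": 33.8121, "lon": -117.9190, "name": "Disneyland, Anaheim, CA"},
        {"lat": 46.8787, "lon": -113.9966, "name": "Missoula, Montana"},
    ]

    def nearest_place(lat, lon, max_km=25.0):
        """Return the closest known place within max_km, else None."""
        best, best_km = None, max_km
        for rec in geo_database:
            # Equirectangular approximation is adequate for short distances.
            dx = math.radians(rec["lon"] - lon) * math.cos(math.radians(lat))
            dy = math.radians(rec["lat"] - lat)
            km = 6371.0 * math.hypot(dx, dy)
            if km < best_km:
                best, best_km = rec["name"], km
        return best

    print(nearest_place(33.8090, -117.9220))  # -> "Disneyland, Anaheim, CA"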
We can also gather metadata from general "inferences" made about an image. For example, we can look at metadata in adjacent pictures. Consider, for example, a directory that includes three pictures: photo 1, photo 2 and photo 3. When gathering metadata for photo 2, searching tool 201 looks at metadata associated with photo 1 and photo 3 to supplement the metadata for photo 2. Chances are that the photographs were taken at or about the same time or at or around the same location. Similarly, timestamps are used to determine images that were taken near one another - like within a 5 or 10 minute window. Chances are that images within such a timeframe are related. This window can be expanded depending on user preference (e.g., expanded to 7 days to cover a Disneyland vacation). Metadata associated with these images is used by the searching tool 201 or associated with a target image. GPS data and timestamps can be used to generate even further information. For example, a sports enthusiast snaps a few pictures while attending the NCAA men's basketball semi-finals in Dallas. GPS coordinates and a timestamp are associated with the pictures (e.g., as an embedded watermark or header information). The GPS data is used to identify the location (e.g., sports arena) and the timestamp is used to identify an event at the sports arena (basketball game). These terms can be used as search terms to identify additional metadata, stories, scores, etc. associated with the event. This information is provided for association with the images.
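The time-window inference might be sketched as follows (Python; timestamps, keywords and the 10-minute default window are illustrative assumptions):

    # "Nearby in time" inference: photos captured within a configurable window
    # of one another borrow each other's keywords as supplemental metadata.
    from datetime import datetime, timedelta

    photos = [
        {"name": "photo1.jpg", "taken": datetime(2005, 6, 3, 14, 2), "keywords": {"Disneyland"}},
        {"name": "photo2.jpg", "taken": datetime(2005, 6, 3, 14, 6), "keywords": set()},
        {"name": "photo3.jpg", "taken": datetime(2005, 6, 3, 14, 9), "keywords": {"Space Mountain"}},
    ]

    def supplement_keywords(target, window=timedelta(minutes=10)):
        """Add keywords from photos taken within the window around the target."""
        extra = set()
        for other in photos:
            if other is target:
                continue
            if abs(other["taken"] - target["taken"]) <= window:
                extra |= other["keywords"]
        target["keywords"] |= extra
        return target

    print(supplement_keywords(photos[1])["keywords"])  # inherits neighbors' keywords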
Metadata Generation
We can also automatically generate metadata for an image.
Consider a cell phone that has a biometric sensor (e.g., a fingerprint scanner). (For example, LG Telecom, one of the largest wireless network operators in Korea, recently launched a biometric cell phone - the LP3800. Other manufacturers are providing competing cell phones.) A user presents her finger for scanning by the cell phone. The user is identified via the fingerprint. A searching tool 201 uses this fingerprint identifier as photographer metadata. (For example, the searching tool 201 can query (e.g., via a wireless or Bluetooth sniff) the cell phone and inquire who the photographer was when the photo was taken. The photo is identified to the cell phone camera by file name or other identifier. Or, if a photographer identifier is included in a photograph's metadata, the searching tool 201 queries the cell phone to see who the identifier corresponds with. If the biometric identifier has been encountered before, the searching tool can use a past cell phone inquiry result instead of talking with the cell phone.) Of course a human fingerprint (or template therefrom) can be used as metadata itself.
Search tool 201 may also include or cooperate with a pattern recognition or color analysis module. Metadata is generated through image pattern recognition. For example, the searching tool 201 analyzes an image with a pattern recognition module, the results of which are used as metadata. (For example, the pattern recognition module might return the term "tree" after analyzing a picture of a tree.) We can also perform a color analysis of an image, e.g., calculating a 3-D color space histogram of the image. The histogram identifies predominant colors (e.g., red, pink, yellow, etc.). Predominant colors can be based on an image region or a percentage of an image including the predominant color. Or only the top three or so colors are indexed for a particular image. One can imagine a search request typed or spoken into desktop searching tool 204 requesting a picture of grandma wearing her pink hat. The query may specifically include the terms "grandma" and "pink". The term "pink" identifies those pictures having pink as a predominant color as automatically determined from such color analysis. This subset is cross-checked with all pictures including grandma as metadata. The resulting set of pictures is identified for user perusal.
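A simplified version of such a color analysis is sketched below (Python; pixel data is synthetic and the coarse 3-bit-per-channel binning and color-name table are assumptions; a real tool would read pixels from the image file with an imaging library):

    # Coarse 3-D color-space histogram: quantize each pixel, keep the top bins,
    # and map them to searchable color names.
    from collections import Counter

    COLOR_NAMES = {
        (7, 5, 5): "pink",
        (7, 0, 0): "red",
        (7, 7, 0): "yellow",
    }

    def predominant_colors(pixels, top_n=3):
        """Return the top_n coarse color bins (3 bits per channel) by pixel count."""
        histogram = Counter((r >> 5, g >> 5, b >> 5) for r, g, b in pixels)
        return [COLOR_NAMES.get(bin_, f"bin{bin_}")
                for bin_, _count in histogram.most_common(top_n)]

    # A mostly-pink synthetic image with a few red pixels.
    pixels = [(250, 180, 190)] * 90 + [(230, 20, 25)] * 10
    print(predominant_colors(pixels))  # e.g., ['pink', 'red']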
Other metadata can be inferred from image characteristics. A "dark" picture (as determined by a color or pixel analysis) might imply that the picture was taken at night or indoors.
Instead of pattern recognition or digital watermarking, searching tool 201 may include or cooperate with a fingerprinting module. We use the term "fingerprint" to mean a reduced-bit representation of an image, like an image hash. The terms "fingerprint" and "hash" are sometimes used interchangeably. A fingerprint is generated and is then used to query a database where other images have been fingerprinted. For example, different pictures of the Empire State Building yield similar (or related) fingerprints. These pictures and their corresponding fingerprints are indexed in the database. While exact matches might not be frequently found, those fingerprints that are deemed statistically relevant are returned as possible matches. Metadata associated with these fingerprints can be returned as well. (Fingerprinting and watermarking can also be advantageously combined. For example, a digital watermark can be used as a persistent link to metadata, while a fingerprint can be used for identification.)
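As a rough stand-in for such fingerprinting, the sketch below (Python) uses an average-hash style reduced-bit representation compared by Hamming distance, so that similar images yield nearby fingerprints; the 8x8 grayscale grids and the distance threshold are made-up inputs, and real fingerprinting algorithms are considerably more robust:

    # Reduced-bit "fingerprint" sketch: threshold an 8x8 grayscale grid against
    # its mean to form 64 bits, then match by Hamming distance.
    def average_hash(gray_grid):
        """gray_grid: 64 grayscale values (8x8). Returns a 64-bit fingerprint."""
        mean = sum(gray_grid) / len(gray_grid)
        bits = 0
        for value in gray_grid:
            bits = (bits << 1) | (1 if value >= mean else 0)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def best_match(query_fp, database, max_distance=10):
        """database: fingerprint -> metadata. Returns the statistically closest entry."""
        candidates = [(hamming(query_fp, fp), meta) for fp, meta in database.items()]
        candidates.sort(key=lambda pair: pair[0])
        return candidates[0][1] if candidates and candidates[0][0] <= max_distance else None

    grid_a = [40] * 32 + [200] * 32   # stylized "Empire State Building" shot
    grid_b = [45] * 32 + [195] * 32   # a similar photo, slightly different exposure
    db = {average_hash(grid_a): {"subject": "Empire State Building"}}
    print(best_match(average_hash(grid_b), db))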
Searching tool 201 may also include or cooperate with a facial recognition module. Facial recognition software is used to identify people depicted in images. Once trained, the facial recognition software analyzes images to see whether it can identify people depicted therein. Names of depicted people can be indexed and associated with the image. Or individual profiles (name, birth date, family relation, etc.) can be established and associated with a person. Then, when the facial recognition software identifies an individual, the individual's profile is associated with the image as metadata. (Fig. 4 shows one example of this method. Facial recognition software 401 analyzes an image 402 and determines that the image depicts Jane. A profile database 403 is interrogated to obtain Jane's profile 404 (e.g., name, current age, birth date, etc.) and the profile 404 is associated with the image as metadata.)
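The FIG. 4 flow might be sketched as follows (Python; the recognizer is a stub standing in for trained facial-recognition software, and the profile contents are illustrative):

    # FIG. 4 sketch: a recognition step names the person depicted, a profile
    # database supplies that person's profile, and the profile is attached to
    # the image as metadata.
    profile_database = {
        "Jane": {"name": "Jane", "birth_date": "1972-04-05", "relation": "spouse"},
    }

    def recognize_face(image_path):
        """Stub for trained facial-recognition software (returns a name or None)."""
        return "Jane" if "jane" in image_path.lower() else None

    def attach_profile(image_record):
        person = recognize_face(image_record["path"])
        if person and person in profile_database:
            image_record.setdefault("metadata", {})["person"] = profile_database[person]
        return image_record

    print(attach_profile({"path": "pictures/jane_at_beach.jpg"}))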
Metadata can also be generated by searching devices within a user's home domain. For example, the searching tool 201 initiates communication (e.g., via Bluetooth or wireless connection) with the user's cell phone, which is equipped with a camera and GPS unit. The searching tool 201 queries where the camera has taken pictures. The geolocations and times of image capture can be used as metadata or to find metadata. Instead of querying the cell phone or other camera, the searching tool might talk with a user's TiVo device, game console (e.g., Xbox or PlayStation), music player (e.g., an iPod or MP3 player) or PDA. Relevant information (e.g., journals, calendars, other images, music, video games, etc.) gathered from these sources can be used as metadata for a particular file on the user's desktop.
User Selection
The searching tool 201 (FIG. 2) preferably includes one or more user interfaces (e.g., as provided by tool 204) through which a user can interact with the tool 201 and metadata found or indexed by the tool 201. For example, a user is preferably able to select, through desktop searching tool 204, internet-based sites at which searching tool 201 is likely to find additional metadata. (The user can type in URLs or highlight predetermined metadata websites.) The user can also preferably set one or more filters through such interfaces. A "filter" is a software module or process that limits or screens information that should be used as metadata. Filters allow a user to weed out potentially meaningless metadata. For example, one filter option allows for only metadata gathered from the user's desktop to be associated with an image. Another option allows a user to set preferred or trusted metadata sources. Metadata gathered from repository 210 might be designated as being trusted, but metadata gathered from an automatic internet search of text found in an image header might not be trusted. A related filter option allows a user to pre-rank metadata based on source of the metadata. If the metadata is not of a sufficient rank, an image file is not augmented to include the new metadata and the low-ranking metadata is not indexed. Yet another filter option allows for only metadata approved by a user to be associated with an image.
Gathered or generated metadata is preferably presented through an interface for the user's review and approval. For example, metadata is presented via a graphical window complete with check-boxes (see FIG. 5). A user simply checks the metadata she would like associated with an image and the searching tool 201 updates the metadata portion of an image file to reflect the user's selections. Instead of checkboxes a user can highlight metadata she wants to keep.
Directory View
Another feature of the present invention is a directory view. Files are often arranged and graphically displayed by directories and folders. (Just click on "My Documents" in your computer directory and see how the files are arranged therein.)
An improvement arranges and graphically displays files according to their metadata. For example, based on information gathered by searching tool 201, images are arranged and graphically displayed on a computer display according to metadata associated therewith. The metadata categories can change based on user preference, but we provide a few examples below. Suppose a user selects three broad metadata categories: vacations, professional and family.
A program (or operating system) queries an index provided by searching tool 201. All images including metadata identifying them as a "vacation" image are associated with the vacations directory, and all images including metadata identifying them as "family" are associated with the family directory.
The user can change the "file directory" view by changing the metadata categories. The user can also establish subdirectories as well (e.g., Disneyland and Niagara Falls metadata displays within the vacation directory). Images are arranged and displayed in a metadata structure and not in a typical directory tree fashion. If a user changes the metadata request, the desktop arrangement is altered as well.
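A minimal sketch of such a metadata-driven view follows (Python; the indexed records and category names are illustrative). The same index yields a different "directory" arrangement whenever the requested categories change:

    # Metadata-driven directory view: group indexed files under whichever
    # metadata categories the user selects, instead of a directory tree.
    indexed_files = [
        {"path": "IMG_001.jpg", "tags": {"vacation", "Disneyland"}},
        {"path": "IMG_002.jpg", "tags": {"family"}},
        {"path": "IMG_003.jpg", "tags": {"vacation", "Niagara Falls"}},
    ]

    def directory_view(categories):
        """Return {category: [file paths]} for display instead of a directory tree."""
        view = {category: [] for category in categories}
        for record in indexed_files:
            for category in categories:
                if category in record["tags"]:
                    view[category].append(record["path"])
        return view

    print(directory_view(["vacation", "family", "professional"]))
    # Change the requested categories and the "directory" arrangement changes too.
    print(directory_view(["Disneyland", "Niagara Falls"]))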
Visual presentation of a directory view can also be changed, e.g., according to a style sheet associated with a particular type of metadata or media. Style sheets can vary from family member to family member (or between Windows login profiles). Music can also be represented according to its content. For example, music with a particular rhythm or harmony can be presented uniquely or according to a style sheet, etc.
One of the many possible combinations based on the above directory view feature includes:
D1. A graphical user interface, stored and running on a computer, comprising: a first module to present a graphical representation of files through a computer display; a second module to determine metadata associated with each of the files for display; a third module to graphically organize the files for display according to their metadata.
Metadata Authoring
A metadata authoring tool 206 (e.g., a software application) is described with reference to FIG. 6. The authoring tool 206 allows a user to annotate and associate metadata with multimedia content. While most image editing software (e.g., Digital Image Suite from Microsoft) provides metadata authoring capabilities, we provide a few improvements herein.
One improvement is the ability to "paint" an image or group of images with predetermined metadata. For example, in a software application setting, we provide a metadata toolbar that provides different metadata choices, e.g., terms like "vacation," "family," or profiles ("Jane's individual profile"), etc. Selecting (clicking) a metadata category from the metadata toolbar enables us to paint an image or file directory with the metadata. (One can imagine that the metadata selection makes the mouse cursor appear as a paintbrush. We then literally "paint" an image with the selected metadata. The image or directory icon representation (or thumbnail) can even turn a color associated with the metadata to provide user feedback that the metadata has been attached to the image.) Behind the scenes, the user selection of metadata and a target image tell the authoring tool 206 which metadata is to be added to a metadata portion of an image file. The metadata portion is rewritten or added to in order to reflect the "painting."
Even More Desktop Searching
Returning to the topic of desktop searching, in another implementation, we provide an image (and/or audio and video) searching tool (e.g., a computer program written in C++). The image searching tool resides on a user's device (e.g., computer, network server, iPod, cell phone, etc.) and crawls through files and folders in search of images. For example, the searching tool searches for image files, e.g., as identified by their file extensions *.gif, *.jpg, *.bmp, *.tif, etc. (If searching for audio or video files, we might search for *.au, *.wmv, *.mpg, *.aac, *.mp3, *.swf, etc.) In another example, a user (or operating system) identifies image directories and the searching tool combs through each of these identified directories.
Once identified, and with reference to FIG. 7, the searching tool opens an image and searches the image for an embedded digital watermark. The searching tool may include or call a watermark detector. If found, the watermark information (e.g., plural-bit payload) is provided to or is included in a first file, e.g., an XML file. The first file preferably includes the same file name, but has a different file extension. The image is further evaluated to obtain metadata therefrom (e.g., EXIF information, header information or other metadata). The metadata is provided to or is included in the first file. The first file may include the same tags or identifiers as were originally included in the image (or audio or video).
Upon encountering a digital watermark, the searching tool may query one or more online metadata repositories to determine whether there exists additional metadata associated with the image. Such online metadata may be downloaded to the first file. Of course, filters or criteria may be used to restrict which online metadata is accepted. For example, only those packets or groupings of metadata that are signed by a trusted or recognized source may be accepted for inclusion in the first file. Or different metadata fields or tags can include a last modified or time stamp indicator. That way, if the online-metadata includes a redundant field or tag, the most recent version (either associated with the image or online) of the metadata is used. Still further, a user can specify which sources of metadata should be trusted and included.
A watermark identifier can also facilitate "bi-directional" metadata population. That is, a watermark identifier can link to an online repository of metadata, and in particular, to a particular image or associated metadata. Metadata can be uploaded to the online repository and associated with the image metadata via the watermark identifier. (Watermark-based network navigation is discussed, e.g., in assignee's U.S. Patent Application No. 09/571,422, mentioned above.)
Returning to FIG. 7, a second file (e.g., HTML) is created. The second file name preferably includes the same file name as the first file and image, but with a different file extension. The second file preferably includes information from the first file. For example, if the first file includes a storage location for the image, the second file may include a hyperlink to the image (based on the storage location). As discussed in some of the implementations above, the second file may also include a representation of the image, or if video or audio, perhaps a sample or snippet of the audio or video. The second file can be configured by a user to include some or all of the information from the first file. This is advantageous, e.g., if the user wants to limit viewing of camera settings. (Behind the scenes, an XML parser cooperating with a style sheet or "skin" can be used to interpret the first file and populate the second file in accordance with the style sheet. In other implementations, the underlying content itself is used to determine how to populate a second file. Audio content having a certain rhythm or melody is displayed according to a first predetermined style, while content having other characteristics is displayed according to a second, different style.)
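One way the two-file flow might look is sketched below (Python; the element names, file-extension conventions and example values are our own assumptions, and the HTML generation stands in for the style-sheet processing described above):

    # Two-file sketch: a first (XML) file records the watermark payload and
    # harvested metadata; a second (HTML) file derived from it carries a
    # hyperlink back to the image so desktop indexers can pick it up.
    import html
    import os
    import xml.etree.ElementTree as ET

    def write_first_file(image_path, watermark_payload, metadata):
        root = ET.Element("asset", location=image_path)
        ET.SubElement(root, "watermark").text = str(watermark_payload)
        for key, value in metadata.items():
            ET.SubElement(root, "meta", name=key).text = str(value)
        first_path = os.path.splitext(image_path)[0] + ".dwm.xml"
        ET.ElementTree(root).write(first_path, encoding="utf-8")
        return first_path

    def write_second_file(first_path):
        root = ET.parse(first_path).getroot()
        items = "".join(f"<li>{html.escape(el.get('name', 'watermark'))}: "
                        f"{html.escape(el.text or '')}</li>"
                        for el in root)
        body = (f"<html><body><a href='{html.escape(root.get('location'))}'>image</a>"
                f"<ul>{items}</ul></body></html>")
        second_path = first_path.replace(".xml", ".html")
        with open(second_path, "w", encoding="utf-8") as f:
            f.write(body)
        return second_path

    first = write_first_file("Falls.jpg", 1234, {"description": "Silver Lake Falls"})
    print(write_second_file(first))  # creation of the HTML file triggers indexing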
The creation of the HTML file typically triggers indexing by a desktop searching tool (e.g., Google or Yahoo, etc.). The metadata is added to an index, effectively allowing searching of the image. In some implementations, of course, the functionality of the above search tool is integrated with the desktop searching tool. In other implementations, the searching tool plugs in to the desktop searching tool. In still other implementations, a searching tool cooperates with (or operates from) a proxy server or network hub. (We note here that some desktop searching tools, such as Google's Desktop Searching tool, allow for registering of certain file "types" (e.g., JPEG, etc.). The first file mentioned above can be given a unique file extension (or type). That way, a desktop searching tool can be told to ignore the first file when indexing so as to avoid redundant results.) The image searching tool can compare a "Last modified" date to determine whether to index a particular image. For example, an image's last modified date can be compared to a last modified date of a corresponding first file. If the image's modification date is later than the first file's, the image is again analyzed to obtain the watermark and metadata. The first file is updated, along with the corresponding second file.
Blogs
Watermarks can also be used to facilitate and link to so-called on-line blogs. (A blog is information that is published to a web site. So-called "blog scripting" allows someone to post information to a Web site.)
Consider a photo (or audio or video) that includes a digital watermark embedded therein. A watermark reader extracts the watermark and links to an on-line resource (e.g., a URL). An improvement is that the digital watermark links to a blog or blog thread (or conversation). The blog may be stored, e.g., as a file. Consider that the watermark includes or references a URL of an online photo blog site, e.g., akin to Flickr (see, e.g., www.flickr.com). The watermark can link to a specific picture or account at Flickr, and perhaps even a particular blog thread. For example, consider a photo depicting a home office, complete with a computer, monitor and chair. There may be several different blog threads (or conversations) being posted about the items depicted. (Maybe someone likes the chair, and wonders whether it will provide sufficient lumbar support. A conversation or thread continues on this topic.) A watermark - perhaps representing an image region that depicts the chair, or that is otherwise linked to the chair or thread - is used to link to the particular thread. A separate watermark (or watermark component) can be embedded in the image to represent a particular thread. The watermark payload or component information may even include an identifier that will link to a subject matter line - displayable to a user - to allow users to pick which blog thread they would like to consider. If the photo contains multiple such watermarks, each of the corresponding subject matter lines can be displayed for selection. Thus, the watermark becomes the root of each blog and blog thread. (Perhaps as a prerequisite to starting a blog thread, the conversation is assigned a watermark identifier or component, and the component is embedded in the image - perhaps region specific - when the blog or response is posted.)
In other implementations, each person who comments to a blog is assigned an identifier (or even a particular, unique watermark signature). The person's watermark is embedded in the image when they blog or otherwise comment on the photo.
More on Blogs
At their roots, "photoblogs" are simply blogs with pictures. In most cases the pictures are the anchors. They grab attention, set tone and act as bookmarks. (See, e.g., www.photoblog.org).
So, on the one hand you can simply post an image as part of a log on the web, providing humor, illustration, documentation or an anchor for a conversation. The conversation could be about a vacation location, person, children, family, places or anything else topical and photogenic.
Digital watermarking brings a new twist with improvements. Watermarking makes the photo the centerpiece of a photoblog. A watermarked photo becomes an agent to the blog and a portal that can be revisited repeatedly. The photo could be distributed as a pointer to the blog itself. The photo catches the attention of the recipient, and through the digital watermark links back to a blog server (or network resource at which the blog is hosted). One can imagine that the blog is hosted (e.g., you must go to the website to read) or downloadable (e.g., sort of like the good old newsgroup concept). By dragging and dropping the photo on a blogging client or other application, one adds the blog to the client or application. (Behind the scenes, a watermark detector reads a watermark identifier from the dragged-and-dropped photo. The watermark identifier is used to link to the on-line blog (or conversation). For example, the identifier is used to identify a file storage location of the blog, or a network location hosting the blog (e.g., URL). The blog is obtained or downloaded to the location. In other cases, instead of downloading the entire blog, a link to the blog is stored at the application or client.)
Consider blog initiation. A user uploads an image to a blogging site to start a blog and writes a first entry. The site automatically watermarks the image with an identifier, linking the photo to the blog (or adding it to an existing blog). With the blog created, the user may (optionally) right-click, e.g., to send the image (and blog) to a friend. The e-mail including the watermarked photo invites friends to respond. They are linked to the original blog through the watermark identifier.
This functionality can be incorporated with desktop searching tools. When a watermarked image is noticed by a desktop searching tool, that image is checked to see if there's an associated blog, e.g., by querying an on-line blog site associated with the watermark or evaluating a "blog-bit" carried by a watermark payload. (A watermark payload may include many fields, with one of them identifying or linking to a particular blogging site.) The desktop searching tool (or photo handling software, including Photoshop, a web browser, etc.) preferably provides a user indication (e.g., a "go to blog" link shows up). Viewers can navigate over to read the blog via the watermark identifier. The image becomes linked or "bookmarked" to the blogging thread.
A watermark reader or desktop searching tool can include a right-click feature that allows addition of a blog entry on bloggable images (a feature determined by the watermark). Thus an image may appear anywhere, on a home computer or cell phone, and act as a gateway to the blog for reading or adding to the blogging thread.
The basic association of a blog with an image can happen, e.g., when a photo is registered at a photo-repository or online site. The act of registering a photograph - or watermarking the photograph - can create a blog, and over time, provide a more generalized brokerage to any blog that is registered. Any image can be "bloggable". Over time, photographers can create blogs around their collection as a way of marketing or communicating. One can even imagine blogs that are private (e.g., password or biometric protected) as a means of interacting with a friend or client.
A watermark preferably survives into print, and thus a relationship is created between printed images and (photo) blogs. (In some implementations a blog is not created until an image is printed. But in any case, watermarking adds power to print that passes through a watermarking step, giving it a unique identity.) As a practical application a web-based user interface is created. A user presents a watermarked picture (or just a watermark identifier extracted from said picture) to the interface via the web. If receiving the picture, the website extracts a watermark identifier therefrom. The watermark identifier is provided to a database or index to locate information associated therewith. For example, the picture was originally associated with one or more text-based blogs. A current location of the blogs is found and provided to the user through the interface.
A few possible combinations of the above blogging implementations include:
E1. A method of associating a blog with media comprising: embedding a digital watermark in an image or audio; associating at least a portion of the digital watermark with a network-hosted blog.
E2. The method of E1, wherein the watermark comprises plural data fields, with at least one of the fields including or pointing to an on-line address at which the blog is hosted.
E3. The method of E1 wherein the blog comprises an on-line conversation.
E4. A method of associating an online blog with media comprising: decoding a digital watermark from the media; accessing an on-line repository associated with the watermark; and accessing the blog associated with the media.
Watermarking Imagery stored in Electronic Memory on an Identification Document
The assignee of this patent document has filed several patent applications directed to securing identification documents with digital watermarks. Some of these disclosures also envision and disclose so-called smartcards, e.g., documents including electronic memory and/or electronic processing circuitry. For example, please see, e.g., U.S. Patent Nos. 5,841,886, 6,389,151, 6,546,112, 6,608,911, Published Patent Application Nos. US 2002-0009208 A1 and US 2003-0178495 A1, and U.S. Patent Application Nos. 10/893,149 (published as US 2005-0063027 A1) and 10/686,495 (published as US 2004-0181671 A1). Related implementations and improvements are discussed below.
With reference to FIG. 8 we embed a digital watermark in an image stored on electronic memory circuitry of an identification document. The image preferably corresponds to an authorized bearer of the document. For example, the document 400 illustrated in FIG. 8 represents an identification document, such as a passport book, visa, driver's license, etc. Document 400 includes a photographic representation 410 of an authorized bearer (also referred to as "printed image") of the document 400, printing 420 on a surface of the document and integrated circuitry (e.g., a chip) 430. The chip 430 can include both electronic memory and processing circuitry. Chip 430 can be passive (e.g., no internal power supply) or active (e.g., including its own power supply). While the chip is preferably contactless, document 400 can include a contact-type chip as well. Suitable chips are known in the art, e.g., those complying with ISO standards 14443 and 7816-4. In one implementation, the integrated circuitry 430 includes an image stored therein. The image is preferably compressed, e.g., as a JPEG file, to help conserve memory space. The stored image preferably corresponds to printed image 410, or a reduced bit representation of printed image 410. The image includes digital watermarking embedded therein.
The digital watermark is preferably cross-correlated with information corresponding to the document, integrated circuitry and/or the authorized document bearer.
For example, the chip 430 may include a serial number (e.g., 96 bits) that is stored in static memory on the chip. The serial number, or a hash (e.g., reduced-bit representation) of the serial number, is used as a digital watermark message component. The hash or serial number is embedded in the photographic image stored on the chip 430.
The serial number can be combined with a document number as shown in Table 1: Watermark Message, below:
Table 1: Watermark Message
Chip Serial Number (or Hash) | Document Number (or Hash)
The combined message is steganographically embedded in the stored image. Thus, the chip and document are tied together via digital watermarking. If the chip is replaced, moved to another document or simulated, the changes can be identified by validating the serial number or document number that should be embedded in the image stored on chip 430. Similarly, if the printed image 410 is altered or replaced, it may not include the necessary watermark message (e.g., chip serial number) embedded therein. Document verification can be automated. For example, a serial number is read from static memory (e.g., via a smartcard reader) and a watermarked image is similarly retrieved and decoded. The serial number and watermark message are compared to see if they correspond as expected. If the document number is used as a watermark message component, it can be input (e.g., via reading OCR-B text, barcode, magstrip or manual entry) for comparison as well.
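An illustrative verification sketch follows (Python; the hash truncation used to fit a small payload, and the serial and document numbers, are assumptions made for the example). The message recovered from the watermark in the stored image is compared against values read from the chip and the printed document:

    # Cross-correlation check: hash the chip serial number and document number,
    # then compare against the message recovered from the stored image's watermark.
    import hashlib

    def reduced_bit_hash(value, bits=32):
        """Reduced-bit representation of a serial/document number for the payload."""
        digest = hashlib.sha256(value.encode("utf-8")).digest()
        return int.from_bytes(digest, "big") >> (256 - bits)

    def build_watermark_message(chip_serial, document_number):
        return {"serial_hash": reduced_bit_hash(chip_serial),
                "document_hash": reduced_bit_hash(document_number)}

    def verify_document(chip_serial, document_number, decoded_message):
        """True only if both hashes recovered from the watermark match the chip/document."""
        expected = build_watermark_message(chip_serial, document_number)
        return expected == decoded_message

    message = build_watermark_message("CHIP-00A1B2C3", "P1234567")
    print(verify_document("CHIP-00A1B2C3", "P1234567", message))   # True
    print(verify_document("CHIP-DIFFERENT", "P1234567", message))  # False: chip swapped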
Instead of a document number or serial number, any other text or message carried by printing, barcode, magstripe, etc. can be used as a watermark message component. Returning to FIG. 8, printed image 410 can be steganographically embedded with data as well, e.g., in the form of a digital watermark. The digital watermarking is preferably cross-correlated with information carried by the chip 430. For example, a watermark embedded in printed image 410 may include a chip serial number or hash thereof. In some cases, where memory capacity of chip 430 is limited and may not include a stored image but still includes a serial number, the printed image 410 watermark provides a link between the chip and the document.
In still other implementations, a first watermark in printed image 410 is linked to a second watermark embedded in a stored image on chip 430. The linkage can be accomplished in several different ways. For example, each watermark includes a redundant version of information, e.g., such as a serial number, document number or information printed on or carried by (e.g., a barcode) the document. In another example, the first digital watermark includes a key to decode or decrypt the second digital watermark (or vice versa). In still another example, a first message portion is carried by the first digital watermark, and a second message portion is carried by the second digital watermark. Concatenating the two message portions is required for proper authentication of identification document 400. Another example includes a third digital watermark printed elsewhere on the identification document (e.g., in a background pattern, graphic, ghost image, seal, etc.). All three digital watermarks are linked or cross-correlated for authentication.
In still further implementations, a different biometric image or template is stored in the chip, instead of a photographic image. For example, the biometric may include a fingerprint image or retinal scan. Such biometrics can be watermarked and linked to the document as discussed above. An example work flow for document production is shown in FIG. 9A and FIG.
9B. An applicant for an identification document (e.g., a passport) fills out an application and provides a photograph (step 500). The application is submitted to a processing agency (e.g., state department, step 510), which processes the application (step 520). Of course the application can be mailed or electronically submitted. Application processing may include background checks, including a check of relevant databases to ensure that the applicant is not fraudulently trying to obtain the identification document. If the application is approved, a document is created for the applicant. In the case of a passport, a blank "book" is obtained. The blank book includes a book (hereafter "document") number. The document number is matched with the applicant or applicant's file (step 530). In most cases, the book will include a chip already affixed (or integrated with) thereto. If not, the chip can be attached or integrated with the document at a later stage. The document is personalized to identify the applicant (step 540). For example, the document is printed with variable information (e.g., name, address, sex, eye color, birth date, etc.). The variable information, or portions thereof, is also stored as a barcode or stored in a magstripe or on chip. A photographic representation is also printed (or attached) on the document and stored in the chip.
With reference to FIG. 9B, a digital image representing the applicant is provided to a watermark embedder. (If the applicant provided a physical picture, the picture is optically scanned and a digital representation is provided to the watermark embedder.). Messages (e.g., a chip serial number read from static memory or document number, etc.) are input to the embedder. The watermark embedder embeds a desired message in a copy of the digital image. The embedded, digital image is compressed and then stored on the chip. If desired, a second message can be embedded in another copy of the digital image, and then printed on a document surface. (Of course, in some implementations, the same embedded image, including the same message, is both printed on the document and stored on-chip.).
Returning to FIG. 9A, the document production process optionally includes a quality assurance step 550, where the document is inspected. For example, any machine-readable features (e.g., OCR, barcode, magstripe, digital watermark, optical memory, electronic chip storage) are read and inspected to see if they match expected information. Any cross-correlation relationships (e.g., between first and second digital watermarks) can be tested as well. A quality assurance operator may also visually inspect the document.
A few possible combinations based on this section include (but are not limited to) the following:
F1. An identification document comprising: an electronic memory chip, wherein the electronic memory chip comprises a serial number stored therein, the serial number uniquely identifying the electronic memory chip, wherein the electronic memory chip further comprises a digital image representing an authorized bearer of the identification document, and wherein the digital image comprises first digital watermarking embedded therein, and wherein the first digital watermarking comprises a representation of the serial number; a first surface area including text printed thereon, wherein the text comprises at least one of a name and an identification document number; and a second surface area comprising a photographic image printed thereon, wherein the photographic image comprises a representation of the authorized bearer of the identification document.
F2. The identification document of F1 wherein the first digital watermarking comprises a reduced-bit representation of the serial number.
F3. The identification document of F2, wherein the first digital watermarking further comprises a representation of the identification document number.
F4. The identification document of F1 wherein the photographic image printed on the second surface area comprises second digital watermarking.
F5. The identification document of F4 wherein the first digital watermarking and the second digital watermarking are interdependent.
F6. The identification document of F5 wherein the second digital watermarking comprises a key to decode or decrypt the first digital watermarking.
F7. The identification document of F5 wherein the first digital watermarking comprises a key to decode or decrypt the second digital watermarking.
F8. The identification document of F4 wherein the first digital watermarking and the second digital watermarking comprise information that is redundant with or correlated to each other.
F9. The identification document of any one of F1-F8, wherein the identification document comprises at least one of a driver's license and passport.
F10. The identification document of F4 wherein the identification document comprises a third surface area including third digital watermarking thereon.
F11. The identification document of any one of F1-F10 wherein the digital image comprises a compressed form in the electronic memory chip.
F12. The identification document of any one of F1-F11 wherein the electronic memory chip comprises electronic processing circuitry.
G1. An identification document comprising: an electronic memory chip, wherein the electronic memory chip comprises a serial number stored therein, the serial number uniquely identifying the electronic memory chip, and wherein the electronic memory chip further comprises a digital image stored therein, wherein the digital image comprises first digital watermarking embedded therein; a first surface area including text printed thereon, wherein the text comprises at least one of a name and an identification document number; and a second surface area comprising a printed image or graphic, wherein the printed image or graphic comprises second digital watermarking embedded therein, and wherein the first digital watermarking and the second digital watermarking are cross-correlated for authenticating the identification document.
G2. The identification document of G1, wherein the first digital watermarking and the second digital watermarking are cross-correlated by including redundant or correlated information.
G3. The identification document of G2 wherein the information comprises at least a representation of the serial number.
G4. The identification document of G3 wherein the information further comprises at least a representation of the document number.
G5. The identification document of G1 wherein the first digital watermarking and the second digital watermarking are cross-correlated through decoding or decrypting keys.
G6. The identification document of G1 wherein the digital image comprises a biometric of an authorized bearer of the identification document.
Even further combinations of the above sections are provided below. Of course these are not the only possible combinations but are provided by way of example only.
H1. A method of controlling a desktop searching tool comprising: searching one or more computer directories for imagery or audio files; upon discovery of an imagery or audio file, analyzing the file for a digital watermark embedded therein, and if a digital watermark is embedded therein, recovering a plural-bit identifier carried by the digital watermark; obtaining metadata from the imagery or audio file; and querying a remote database with the plural-bit identifier to determine whether the file metadata is current.
H2. The method of H1 further comprising refreshing the file metadata with metadata from the remote database when the file metadata is not current.
H3. The method of H2 wherein a timestamp or last edited field is used to determine whether the file metadata is current.
H4. The method of H1 further comprising uploading the imagery or audio file when the file is not stored in the remote database.
I1. A method of searching a network for watermarked content comprising: receiving one or more keywords associated with watermarked content; providing the one or more keywords to a network search engine; obtaining from the network search engine a listing of URLs that are associated with the one or more keywords; analyzing content at websites associated with the URLs for digital watermarking; obtaining at least one watermark identifier from the digital watermarking; and reporting at least one watermark identifier and a corresponding URL location.
I2. The method of I1, wherein the watermark identifier and URL location are reported to a remote server.
I3. The method of I1, wherein the watermark identifier and URL location are reported directly to a copyright owner associated with the watermark identifier.
I4. The method of I1, wherein the one or more keywords comprises a watermark indicator.
J1. A method of searching a network for watermarked content comprising: monitoring network traffic patterns associated with a network resource comprising watermarked content stored thereon; directing a watermark reader in a direction of the traffic patterns in search of watermarked content.
K1. A method of categorizing content by a search engine comprising: examining metadata associated with a website, the metadata reflecting a presence of digital watermarking; and providing a presence of digital watermarking indicator that is associated with the website, wherein the presence of digital watermarking indicator is searchable through the search engine.
L1. A device searching method comprising: i. searching one or more device directories for imagery or audio files; ii. upon discovery of an imagery or audio file, analyzing the file for a digital watermark embedded therein, and if a digital watermark is embedded therein, recovering watermark information therefrom; iii. obtaining metadata from the imagery or audio file; iv. creating a first file including at least some of the watermark information and at least some of the metadata; v. creating a second file from the first file, wherein the second file includes at least some of the watermark information and at least some of the metadata, and wherein creation of the second file triggers indexing of the second file by a device searching tool.
L2. The method of Ll wherein the first file comprises XML and the second file comprises HTML.
L3. The method of Ll wherein the device comprises at least one of a cell phone, portable music player, game console and computer.
L4. The method of Ll wherein said creating employs at least a style sheet.
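An illustrative sketch of steps iv and v of L1, using Python's standard XML tooling. A fixed HTML template stands in for the style sheet mentioned in L4 (a true XSLT transform would require a third-party library); writing the HTML file into a folder watched by the device's searching tool is what triggers indexing.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

HTML_TEMPLATE = """<html><head><title>{title}</title></head>
<body><h1>{title}</h1><p>Watermark: {wm}</p><p>{meta}</p></body></html>"""


def write_first_file(media_path, watermark_info, metadata):
    """Step iv: an XML file combining watermark information and metadata."""
    root = ET.Element("asset")
    ET.SubElement(root, "watermark").text = str(watermark_info)
    for key, value in metadata.items():
        ET.SubElement(root, "meta", name=key).text = str(value)
    xml_path = Path(media_path).with_suffix(".xml")
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)
    return xml_path


def write_second_file(xml_path):
    """Step v: an HTML rendering of the first file. Creating it in a folder the
    device's searching tool watches is what triggers indexing of the content."""
    root = ET.parse(xml_path).getroot()
    wm = root.findtext("watermark", default="")
    meta = ", ".join(f"{m.get('name')}: {m.text}" for m in root.findall("meta"))
    html = HTML_TEMPLATE.format(title=xml_path.stem, wm=wm, meta=meta)
    html_path = xml_path.with_suffix(".html")
    html_path.write_text(html, encoding="utf-8")
    return html_path
```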
Concluding Remarks
Having described and illustrated the principles of the technology with reference to specific implementations, it will be recognized that the technology can be implemented in many other, different forms. The methods, processes, components, modules, filters and systems described above may be implemented in hardware, software or a combination of hardware and software. For example, the watermark data encoding processes may be implemented in a programmable computer or a special purpose digital circuit. Similarly, watermark data decoding may be implemented in software, firmware, hardware, or combinations of software, firmware and hardware.
The methods, components and processes described above (e.g., desktop searching tools and metadata generation and gathering tools) may be implemented in software programs (e.g., C, C++, Visual Basic, Java, Python, Tcl, Perl, Scheme, Ruby, executable binary files, etc.) executed from a system's memory (e.g., a computer readable medium, such as an electronic, optical or magnetic storage device).
The section headings are provided for the reader's convenience. Features found under one heading can be combined with features found under another heading. The various combinations (e.g., C1, D1, etc.) are provided by way of example only. Of course, many other combinations are possible given the above detailed and enabling disclosure.
Our use of the term "desktop" should not be construed as being limiting. Indeed, our "desktop" searching modules and our metadata generation and gathering methods can be employed on laptops, handheld computing devices, personal (or digital) video recorders (e.g., TiVo), cell phones, etc. We can even store our metadata index or searching tools on consumer electronic devices like MP3 players, iPods, TiVo devices, game consoles (e.g., Xbox), etc. Communication between such devices can be wireless or wired.
The particular combinations of elements and features in the above-detailed embodiments are exemplary only; the interchanging and substitution of these teachings with other teachings in this and the above-referenced patent documents are also contemplated.

Claims

What is claimed is:
1. A method comprising: receiving an imagery or audio file; identifying perceptual features in the imagery or audio file; and based on the perceptual features, automatically generating metadata for the imagery or audio file.
2. The method of claim 1 further comprising indexing the metadata in a desktop searchable index.
3. The method of claim 1, wherein said identifying comprises facial recognition.
4. The method of claim 3 wherein said metadata comprises a profile associated with a person depicted in the imagery as identified by the facial recognition.
5. The method of claim 4 wherein the profile comprises an XML or text file.
6. The method of claim 1 wherein said identifying comprises pattern recognition.
7. The method of claim 6 further comprising determining text to represent a pattern identified by the pattern recognition and indexing the text in a desktop searching index.
8. The method of claim 1 wherein the imagery comprises video.
9. The method of claim 1 wherein the perceptual features comprise color and said identifying comprises a color analysis involving a color-space histogram.
10. The method of claim 1 further including attaching the metadata to the imagery or audio file.
11. The method of claim 1 wherein said generating includes interrogating a data repository with the perceptual features or a reduced bit representation of the perceptual features to obtain metadata.
12. A memory device comprising executable instructions stored therein, wherein the executable instructions comprise instructions to carry out the method of claim 1.
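For illustration, a sketch related to claims 9 and 11: a coarse color-space-histogram signature computed with Pillow serves as the reduced-bit perceptual feature, and a small in-memory dictionary stands in for the data repository that is interrogated for metadata. The library choice and the example repository contents are assumptions, not requirements of the claims.

```python
from PIL import Image  # Pillow is assumed here; any imaging library would do

# Hypothetical data repository mapping coarse color signatures to metadata.
REPOSITORY = {
    (3, 2, 0): {"keywords": ["beach", "sky", "outdoor"]},
}


def color_signature(path, buckets=4):
    """A reduced-bit perceptual feature (claim 9): the dominant bucket of each
    RGB channel of the image's color-space histogram."""
    hist = Image.open(path).convert("RGB").histogram()  # 3 x 256 counts
    step = 256 // buckets
    signature = []
    for channel in range(3):
        counts = hist[channel * 256:(channel + 1) * 256]
        totals = [sum(counts[i:i + step]) for i in range(0, 256, step)]
        signature.append(totals.index(max(totals)))
    return tuple(signature)


def generate_metadata(path):
    """Claim 11: interrogate the repository with the reduced-bit representation."""
    return REPOSITORY.get(color_signature(path), {"keywords": []})
```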
13. A method to gather metadata associated with imagery or audio comprising: receiving an imagery or audio file including a content portion and a metadata portion; analyzing the metadata to determine at least one of a time and day when the content portion was created; automatically accessing one or more user software applications to gather information associated with at least one of time and day; and adding the information to the metadata portion.
14. The method of claim 13 wherein the one or more user software applications comprise a calendar or appointment application.
15. The method of claim 14 wherein the calendar or appointment application comprises a financial, on-line banking or checkbook application.
16. The method of claim 13 wherein the one or more user software applications comprise a word processor or spreadsheet application.
17. The method of claim 13 further comprising cataloging the metadata in a searchable index.
18. The method of claim 13 wherein the one or more user software applications are stored on a computing device co-located with the imagery or audio file.
19. The method of claim 13 wherein the one or more user software applications are stored on a computing device that is remotely located from the imagery or audio file.
20. The method of claim 19 wherein the computing device comprises at least one of a cell phone and personal digital assistant (PDA).
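A sketch in the spirit of claims 13-14, assuming the creation time is present in the file's metadata portion as an ISO timestamp and the calendar application is represented by a hypothetical JSON export; a real implementation would call the calendar or appointment application's own API.

```python
import json
from datetime import datetime

CALENDAR_EXPORT = "calendar.json"  # hypothetical export: {"2005-06-13": ["Trade show"]}


def events_for_capture_time(metadata):
    """Claims 13-14: use the creation time in the metadata portion to pull
    matching entries out of a calendar/appointment application."""
    created = datetime.fromisoformat(metadata["created"])  # e.g. "2005-06-13T10:42:00"
    with open(CALENDAR_EXPORT, encoding="utf-8") as fh:
        calendar = json.load(fh)
    return calendar.get(created.date().isoformat(), [])


def enrich_metadata(metadata):
    """Claim 13: add the gathered information back into the metadata portion."""
    metadata.setdefault("events", []).extend(events_for_capture_time(metadata))
    return metadata
```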
21. A method of obtaining metadata for a first imagery or audio file comprising: determining other imagery or audio files that were created within a predetermined window of a creation time for the first imagery or audio file; gathering metadata associated with the other imagery or audio files; and associating at least some of the metadata with the first imagery or audio file.
22. The method of claim 21 wherein the other imagery or audio files comprise content portions and metadata portions, and said gathering comprises copying at least some of the metadata portions.
23. The method of claim 21 further comprising presenting gathered metadata for user selection through a graphical user interface.
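A sketch in the spirit of claims 21-23, using file modification time as a stand-in for creation time and sidecar JSON files as the metadata portions of neighboring files; the two-hour window is an arbitrary example value.

```python
import json
import os
from pathlib import Path

MEDIA_EXTENSIONS = {".jpg", ".jpeg", ".png", ".wav", ".mp3"}
WINDOW_SECONDS = 2 * 60 * 60  # arbitrary two-hour window


def neighbors_by_creation_time(first):
    """Claim 21: other media files created within a predetermined window of the
    first file's creation time (modification time stands in for creation time)."""
    first = Path(first)
    anchor = os.path.getmtime(first)
    for candidate in first.parent.iterdir():
        if candidate == first or candidate.suffix.lower() not in MEDIA_EXTENSIONS:
            continue
        if abs(os.path.getmtime(candidate) - anchor) <= WINDOW_SECONDS:
            yield candidate


def borrow_metadata(first):
    """Claim 22: copy metadata portions (sidecar JSON files here) from neighbors."""
    gathered = {}
    for neighbor in neighbors_by_creation_time(first):
        sidecar = neighbor.with_name(neighbor.name + ".json")
        if sidecar.exists():
            gathered.update(json.loads(sidecar.read_text(encoding="utf-8")))
    return gathered  # claim 23: present these to the user for selection
```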
24. A method of authoring metadata for an image or audio file or file directory via a computer comprising: providing a graphical user interface through which a user can select a category of metadata from a plurality of categories of metadata; and once selected, applying the selected category of metadata to a file or contents in a directory through a mouse cursor or touch screen, whereby the selected category of metadata is associated with the image or audio file or file directory.
25. The method of claim 24 wherein the mouse cursor changes its graphical appearance when a category of metadata is selected.
26. The method of claim 24 wherein a file changes color when the selected category of metadata is applied thereto.
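A minimal Tkinter sketch in the spirit of claims 24-26: the user selects a metadata category, clicks a file, and the category is associated with that file; the clicked entry changes color to confirm the application. The file names, categories, and in-memory association table are all placeholders.

```python
import tkinter as tk

CATEGORIES = ["Vacation", "Family", "Work"]            # example metadata categories
FILES = ["IMG_0001.jpg", "IMG_0002.jpg", "clip.wav"]   # example files in a directory
applied = {}                                           # file name -> chosen category


def main():
    root = tk.Tk()
    root.title("Metadata category brush (sketch)")

    categories = tk.Listbox(root, exportselection=False)
    for name in CATEGORIES:
        categories.insert(tk.END, name)
    categories.pack(side=tk.LEFT, fill=tk.Y)

    files = tk.Listbox(root, exportselection=False)
    for name in FILES:
        files.insert(tk.END, name)
    files.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)

    def apply_category(_event):
        if not categories.curselection() or not files.curselection():
            return
        category = CATEGORIES[categories.curselection()[0]]
        index = files.curselection()[0]
        applied[FILES[index]] = category                  # claim 24: associate metadata
        files.itemconfig(index, background="lightgreen")  # claim 26: file changes color

    files.bind("<<ListboxSelect>>", apply_category)
    root.mainloop()


if __name__ == "__main__":
    main()
```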
27. A desktop searching tool comprising: executable instructions stored in computer memory for execution by electronic processing circuitry, said instructions comprising instructions to: i. search one or more computer directories for imagery or audio files; ii. upon discovery of an imagery or audio file, analyze the file for a digital watermark embedded therein, and if a digital watermark is embedded therein, to recover a plural-bit identifier; iii. obtain metadata from the imagery or audio file; and iv. query a remote database with the plural-bit identifier to determine whether the file metadata is current.
28. The desktop searching tool of claim 27 further comprising instructions to refresh the file metadata with metadata from the remote database when the file metadata is not current.
29. The desktop searching tool of claim 28 wherein a timestamp or last edited field is used to determine whether the file metadata is current.
30. The desktop searching tool of claim 27 further comprising instructions to upload the imagery or audio file when the file is not stored on the remote database.
31. The desktop searching tool of claim 27 further comprising instructions to generate a searchable index reflecting at least the metadata.
32. The method of claim 27 further comprising generating a searchable index reflecting at least the metadata.
PCT/US2005/020790 2004-06-22 2005-06-13 Digital asset management, targeted searching and desktop searching using digital watermarks WO2006009663A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007518107A JP5372369B2 (en) 2004-06-22 2005-06-13 Digital asset management, targeted search, and desktop search using digital watermark

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US58228004P 2004-06-22 2004-06-22
US60/582,280 2004-06-22
US58291404P 2004-06-24 2004-06-24
US60/582,914 2004-06-24
US65664205P 2005-02-25 2005-02-25
US60/656,642 2005-02-25
US67302205P 2005-04-19 2005-04-19
US60/673,022 2005-04-19

Publications (1)

Publication Number Publication Date
WO2006009663A1 true WO2006009663A1 (en) 2006-01-26

Family

ID=35785547

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/020790 WO2006009663A1 (en) 2004-06-22 2005-06-13 Digital asset management, targeted searching and desktop searching using digital watermarks

Country Status (2)

Country Link
JP (2) JP5372369B2 (en)
WO (1) WO2006009663A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5557756B2 (en) * 2011-01-17 2014-07-23 日本放送協会 Digital watermark embedding device, digital watermark embedding program, digital watermark detection device, and digital watermark detection program

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11213014A (en) * 1997-11-19 1999-08-06 Nippon Steel Corp Data base system, data base retrieving method and recording medium
JPH11260045A (en) 1998-03-13 1999-09-24 Sony Corp Reproducing system and reproducing terminal
GB2354104A (en) * 1999-09-08 2001-03-14 Sony Uk Ltd An editing method and system
GB2354105A (en) * 1999-09-08 2001-03-14 Sony Uk Ltd System and method for navigating source content
KR100865247B1 (en) * 2000-01-13 2008-10-27 디지맥 코포레이션 Authenticating metadata and embedding metadata in watermarks of media signals
AUPQ589300A0 (en) * 2000-02-28 2000-03-23 Canon Kabushiki Kaisha Automatically selecting fonts
JP2004505349A (en) 2000-07-20 2004-02-19 ディジマーク コーポレイション Using data embedded in file shares
GB0029880D0 (en) * 2000-12-07 2001-01-24 Sony Uk Ltd Video and audio information processing
US6973574B2 (en) * 2001-04-24 2005-12-06 Microsoft Corp. Recognizer of audio-content in digital signals
JP2002351878A (en) 2001-05-18 2002-12-06 Internatl Business Mach Corp <Ibm> Digital contents reproduction device, data acquisition system, digital contents reproduction method, metadata management method, electronic watermark embedding method, program, and recording medium
JP2003067397A (en) * 2001-06-11 2003-03-07 Matsushita Electric Ind Co Ltd Content control system
JP2003153217A (en) 2001-11-09 2003-05-23 Canon Inc Format for managing metadata and description object data
JP3933452B2 (en) * 2001-11-27 2007-06-20 シャープ株式会社 Support method and support server for supporting acquisition of information
US20030133017A1 (en) * 2002-01-16 2003-07-17 Eastman Kodak Company Method for capturing metadata in a captured image
JP2003303210A (en) 2002-04-11 2003-10-24 Canon Inc Information processing method, information processing device, and recording medium
JP3971642B2 (en) * 2002-04-23 2007-09-05 日本電信電話株式会社 Content download method and apparatus
JP3781715B2 (en) * 2002-11-01 2006-05-31 松下電器産業株式会社 Metadata production device and search device
JP2004046506A (en) * 2002-07-11 2004-02-12 Hitachi Ltd Local tax electronic declaration system, local tax electronic declaration method, retrieval method of local tax declaration state, and local tax electronic declaration program
JP2004120420A (en) * 2002-09-26 2004-04-15 Fuji Photo Film Co Ltd Image adjusting device and program
JP2004133536A (en) * 2002-10-08 2004-04-30 Canon Inc Metadata automatic generation/update device, metadata automatic generation/update method and program for realizing the generation/update method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6389181B2 (en) * 1998-11-25 2002-05-14 Eastman Kodak Company Photocollage generation and modification using image recognition

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7450734B2 (en) 2000-01-13 2008-11-11 Digimarc Corporation Digital asset management, targeted searching and desktop searching using digital watermarks
US8953908B2 (en) 2004-06-22 2015-02-10 Digimarc Corporation Metadata management and generation using perceptual features
US8570586B2 (en) 2005-05-02 2013-10-29 Digimarc Corporation Active images through digital watermarking
US11327936B1 (en) 2006-10-05 2022-05-10 Resource Consortium Limited, Llc Facial based image organization and retrieval method
US11308052B1 (en) 2006-10-05 2022-04-19 Resource Consortium Limited, Llc Facial based image organization and retrieval method
JP2008165424A (en) * 2006-12-27 2008-07-17 Sony Corp Image retrieval device and method, imaging device and program
EP2219107A1 (en) * 2007-12-07 2010-08-18 Hitachi Software Engineering Co., Ltd. Printing management system, printing management method, and program
EP2219107A4 (en) * 2007-12-07 2010-12-01 Hitachi Software Eng Printing management system, printing management method, and program
US8284431B2 (en) 2007-12-07 2012-10-09 Hitachi Solutions, Ltd. Printing management system, printing management method, and program
US9152711B2 (en) 2008-06-27 2015-10-06 Kii Corporation Social mobile search
US8595503B2 (en) 2008-06-30 2013-11-26 Konica Minolta Laboratory U.S.A., Inc. Method of self-authenticating a document while preserving critical content in authentication data
US9074256B2 (en) * 2009-04-06 2015-07-07 Vanda Pharmaceuticals, Inc. Method of predicting a predisposition to QT prolongation
US20120035215A1 (en) * 2009-04-06 2012-02-09 Vanda Pharmaceuticals, Inc. Method of predicting a predisposition to qt prolongation
US20120027871A1 (en) * 2009-04-06 2012-02-02 Vanda Pharmaceuticals, Inc. Method of treatment based on polymorphisms of the kcnq1 gene
US8999638B2 (en) * 2009-04-06 2015-04-07 Vanda Pharmaceuticals, Inc. Method of treatment based on polymorphisms of the KCNQ1 gene
US9953092B2 (en) 2009-08-21 2018-04-24 Mikko Vaananen Method and means for data searching and language translation
WO2011059761A1 (en) 2009-10-28 2011-05-19 Digimarc Corporation Sensor-based mobile search, related methods and systems
EP2494496A4 (en) * 2009-10-28 2015-12-02 Digimarc Corp Sensor-based mobile search, related methods and systems
EP2494496A1 (en) * 2009-10-28 2012-09-05 Digimarc Corporation Sensor-based mobile search, related methods and systems
US8463845B2 (en) 2010-03-30 2013-06-11 Itxc Ip Holdings S.A.R.L. Multimedia editing systems and methods therefor
US9281012B2 (en) 2010-03-30 2016-03-08 Itxc Ip Holdings S.A.R.L. Metadata role-based view generation in multimedia editing systems and methods therefor
US8806346B2 (en) 2010-03-30 2014-08-12 Itxc Ip Holdings S.A.R.L. Configurable workflow editor for multimedia editing systems and methods therefor
US8788941B2 (en) 2010-03-30 2014-07-22 Itxc Ip Holdings S.A.R.L. Navigable content source identification for multimedia editing systems and methods therefor
EP2721536A4 (en) * 2012-05-28 2015-04-08 Tencent Tech Shenzhen Co Ltd Method and system for accessing micro-blog album and micro-blog client
EP2721536A1 (en) * 2012-05-28 2014-04-23 Tencent Technology Shenzhen Company Limited Method and system for accessing micro-blog album and micro-blog client
US11049094B2 (en) 2014-02-11 2021-06-29 Digimarc Corporation Methods and arrangements for device to device communication

Also Published As

Publication number Publication date
JP2010267292A (en) 2010-11-25
JP2008510327A (en) 2008-04-03
JP5372369B2 (en) 2013-12-18
JP5702555B2 (en) 2015-04-15

Similar Documents

Publication Publication Date Title
US10346462B2 (en) Metadata management and generation using perceptual features
US10235465B2 (en) Internet and database searching with handheld devices
US7450734B2 (en) Digital asset management, targeted searching and desktop searching using digital watermarks
JP5372369B2 (en) Digital asset management, targeted search, and desktop search using digital watermark
US10628480B2 (en) Linking tags to user profiles
US9665642B2 (en) Automatic identification of digital content related to a block of text, such as a blog entry
US10482134B2 (en) Document management techniques to account for user-specific patterns in document metadata
US9740373B2 (en) Content sensitive connected content
US8745477B1 (en) Tool for managing online content
TW201142628A (en) Method and system for compiling a unique sample code for specific web content
CN1588879A (en) Internet content filtering system and method
TW201032075A (en) Collaborative bookmarking
US20050108172A1 (en) Detecting and reporting infringement of an intellectual property item
JP5430618B2 (en) Dynamic icon overlay system and method for creating a dynamic overlay

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2007518107

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase