US20040205568A1 - Method and system for document image layout deconstruction and redisplay system - Google Patents

Method and system for document image layout deconstruction and redisplay system

Info

Publication number
US20040205568A1
US20040205568A1 (application US10/064,892)
Authority
US
United States
Prior art keywords
format
text
data structure
document
intermediate data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/064,892
Inventor
Thomas BREUEL
Henry Baird
William Janssen
Ashok Popat
Dan Bloomberg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xerox Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US10/064,892
Assigned to XEROX CORPORATION: assignment of assignors interest (see document for details). Assignors: BLOOMBERG, DAN S.; BAIRD, HENRY S.; BREUEL, THOMAS M.; JANSSEN, WILLIAM C.; POPAT, ASHOK C.
Priority to EP03004558A (published as EP1343095A3)
Priority to JP2003053197A (published as JP2004005453A)
Assigned to JPMORGAN CHASE BANK, AS COLLATERAL AGENT: security agreement. Assignor: XEROX CORPORATION
Publication of US20040205568A1
Priority to US13/152,984 (published as US10606933B2)
Assigned to XEROX CORPORATION: release by secured party (see document for details). Assignor: JPMORGAN CHASE BANK, N.A., AS SUCCESSOR-IN-INTEREST ADMINISTRATIVE AGENT AND COLLATERAL AGENT TO JPMORGAN CHASE BANK
Status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/151Transformation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/103Formatting, i.e. changing of presentation of documents
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/131Fragmentation of text files, e.g. creating reusable text-blocks; Linking to fragments, e.g. using XInclude; Namespaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • G06V30/41Analysis of document content
    • G06V30/414Extracting the geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text

Definitions

  • non-text image areas, compressed non-text image areas, the set of representative compressed image tokens, the segmented image elements and/or the layout characteristics are synthesized in step S 200 into an intermediate data structure.
  • non-text area images may optionally first be compressed in step S 190 , for file compression, before being synthesized in step S 200 for integration into the intermediate data structure.
  • the segmented image elements may be optionally compressed in step S 190 before being synthesized in step S 200 for integration into the intermediate data structure. Determining whether to compress the non-text image areas and the segmented image elements may be dependent on file size or other user specific parameters. If the intermediate data structure does not include compressed data, then the intermediate data structure may be represented as XHTML, for example.
  • the intermediate data structure may also contain a tagged list containing references to every textual and non-textual image element that is proximate to, or referenced by, a textual image element, as well as layout characteristics such as indentation, hyphenation, spacing, and the like.
  • a set of representative compressed image tokens can be written to a separate but intimately associated image element database.
  • the intermediate data structure contains all the information required to support the reflowing and the reconstruction of the image elements.
  • FIG. 4 is a block diagram of one exemplary embodiment of a document deconstruction and redisplay system 400 according to this invention.
  • one or more user input devices 480 are connected over one or more links 482 to an input/output interface 410 .
  • a data source 500 is connected over a link 502 to the input/output interface 410 .
  • a data sink 600 is also connected to the input/output interface 410 through a link 602 .
  • Each of the links 482 , 502 , 602 can be implemented using any known or later developed device or system for connecting the one or more user input devices 480 , the data source 500 and the data sink 600 , respectively, to the document layout deconstruction and redisplay system 400 , including a direct cable connection, a connection over a wide area network or a local area network, a connection over an intranet, a connection over the Internet, or a connection over any other distributed processing network or system.
  • each of the links 482 , 502 , 602 can be any known or later developed connection system or structure usable to connect the one or more user input devices 480 , the data source 500 and the data sink 600 , respectively, to the document layout deconstruction and redisplay system 400 .
  • the input/output interface 410 inputs data from the data source 500 and/or the one or more user input devices 480 and outputs data to the data sink 600 via the link 602 .
  • the input/output interface 410 also provides the received data to one or more of the controller 420 , the memory 430 , a deconstructing circuit, routine or application 440 , a synthesizing circuit, routine or application 450 , a distilling circuit, routine or application 460 , and/or a display 490 .
  • the input/output interface 410 receives data from one or more of the controller 420 , the memory 430 , the deconstructing circuit, routine or application 440 , the synthesizing circuit, routine or application 450 , and/or the distilling circuit, routine or application 460 .
  • the memory 430 stores data received from the deconstructing circuit, routine or application 440 , synthesizing circuit, routine or application 450 , the distilling circuit, routine or application 460 , and/or the input/output interface 410 .
  • the original data, the deconstructed data, the synthesized data, and/or the distilled data may be stored in the memory 430 .
  • the memory can also store one or more control routines used by the controller 420 to operate the document layout deconstruction and redisplay system 400 .
  • the memory 430 can be implemented using any appropriate combination of alterable, volatile or non-volatile memory or non-alterable, or fixed, memory.
  • the alterable memory whether volatile or non-volatile, can be implemented using any one or more of static or dynamic RAM, a floppy disk and disk drive, a writable or re-writeable optical disk and disk drive, a hard drive, flash memory or the like.
  • the non-alterable or fixed memory can be implemented using any one or more of ROM, PROM, EPROM, EEPROM, an optical ROM disk, such as a CD-ROM or DVD-ROM disk, and disk drive or the like.
  • each of the circuits or routines shown in FIG. 4 can be implemented as portions of a suitably programmed general purpose computer.
  • each of the circuits or routines shown in FIG. 4 can be implemented as physically distinct hardware circuits within an ASIC, or using a FPGA, a PDL, a PLA or a PAL, or using discrete logic elements or discrete circuit elements.
  • the particular form each of the circuits or routines shown in FIG. 4 will take is a design choice and will be obvious and predictable to those skilled in the art.
  • the data source 500 outputs a set of original data, i.e., input document, scanned document, or the like, over the link 502 to the input/output interface 410 .
  • the user input device 480 can be used to input one or more of a set of newly created original data, scanned data, or the like, over the link 482 to the input/output interface 410 .
  • the input/output interface 410 directs the received set of data to the memory 430 under the control of the controller 420 .
  • it should be appreciated that either or both of these sets of data could have been previously input into the document layout deconstruction and redisplay system 400 .
  • An input document is input into the deconstructing circuit, routine or application 440 under control of the controller 420 .
  • the deconstructing circuit, routine or application 440 reads image files and locates and isolates text area images and non-text area images.
  • Non-text area images are then sent to the synthesizing circuit, routine or application 450 under control of the controller 420 for synthesizing the data into an intermediate data structure.
  • Non-text images may optionally be compressed prior to being synthesized at the synthesizing circuit, routine or application 450 .
  • the deconstructing circuit, routine or application 440 reads the set of isolated text area images, locates and isolates text line regions, and detects the layout properties of the text line regions.
  • the layout properties are sent to the synthesizing circuit, routine or application 450 under the control of the controller 420 .
  • the text line regions are further processed by the deconstructing circuit, routine or application 440 into a set of segmented image elements with their baseline-relative positions and then sent to the synthesizing circuit or routine 450 under control of the controller 420 for synthesizing into an intermediate data structure.
  • the deconstructing circuit, routine or application 440 may also compress the segmented image elements with their baseline-relative positions into token-based image elements before they are sent to the synthesizing circuit, routine or application 450 under control of the controller 420 for synthesizing into an intermediate data structure.
  • the deconstructing circuit, routine or application 440 and the synthesizing circuit, routine or application 450 can use any known or later-developed encoding scheme, to deconstruct and synthesize the data to be converted into an intermediate data structure that may then be distilled by the distilling circuit, routine or application 460 for display on the display device 490 .
  • the synthesizing circuit, routine or application 450 synthesizes the non-text area images and compressed non-text area image elements, the set of representative compressed image tokens, the segmented image elements, and the layout characteristics, and transcribes the data into an intermediate data structure.
  • the intermediate data structure is sent to the memory 430 under the control of the controller 420 for storage.
  • upon request by a user of the input document, the distilling circuit, routine or application 460 converts the intermediate data structure into a format usable by the display 490.
  • the distilling circuit, routine or application 460, under control of the controller 420 and the input/output interface 410, will output the converted intermediate data structure to the user's device for display.
  • the distilling circuit, routine or application 460 can use any known or later-developed encoding scheme, including, but not limited to, those disclosed in this application, to convert the intermediate data structure into a device-specific format usable for redisplay on an arbitrarily sized display.
  • the systems and methods of this invention also relate to the use of special non-image markers, other than tags attached to particular image elements, to infer the functions and properties of all the image elements from their relative positions with respect to the markers within the intermediate data structure.

Abstract

The invention converts a document originating in a page-image format into a form suitable for display on an arbitrarily sized display device by automatically reformatting, or “reflowing,” the document's contents.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of Invention [0001]
  • The invention relates generally to the problem of making an arbitrary document conveniently readable on an arbitrarily sized display. [0002]
  • 2. Description of Related Art [0003]
  • Existing systems for rendering page-image versions of documents on display screens have required manual activities to improve the rendering, or clumsy panning mechanisms to view page images displayed directly on wrong-sized surfaces. In particular, it has been necessary to either (1) key in the entire text manually, or (2) process the page images through an optical character recognition (OCR) system and then manually tag the resulting text in order to preserve visually important layout features. [0004]
  • Problems with existing systems include: (a) the high expense of manual keying and/or correcting of OCR results and manual tagging; (b) the risk of highly visible and disturbing errors in the text resulting from OCR mistakes; (c) the loss of meaningful or aesthetically pleasing typeface and type size choices, graphics and other non-text elements; and (d) the loss of proper placement of elements on the page. [0005]
  • Such problems are significant, for example, because book publishers are increasingly creating page-image versions of books currently being published, as well as books from their backlists. The page-image versions are being created for print-on-demand usage. While print-on-demand images can be re-targeted to slightly larger or slightly smaller formats by scaling the images, they cannot currently be re-used for most electronic book purposes without either re-keying the book into XML format, or scanning the page images using OCR and manually correcting the resulting text. [0006]
  • SUMMARY OF INVENTION
  • The invention provides methods and systems for converting any document originating in a page-image format, such as a scanned hardcopy document represented as a bitmap, into a form suitable for display on screens of arbitrary size, through automatic reformatting or “reflowing” of document contents. [0007]
  • Reflowing is a process that moves text elements (often words) from one text-line to another so that each line of text can be contained within given margins. Reflowing typically breaks or fills lines of text with words, and may re-justify column margins, so that the full width of a display is used and no manual ‘panning’ across the text is needed. As an example, if a display area within which lines of text appear is narrowed so that the width of the visible text is reduced, it may be necessary for words to be moved from one text-line to another to shorten all of the text-lines so that no text-line is too long to be entirely visible in the display area. Conversely, if the display area is widened, words may be moved from one text-line to another so that the text-lines lengthen, thereby allowing more text-lines to be seen without any word image being obscured. [0008]
  • Image and layout analysis transforms the raw document image into a form that is reflowable and that can be more compactly represented on hand-held devices. In various exemplary embodiments, image analysis begins with adaptive thresholding and binarization. For each pixel, the maximum and minimum values within a region around that pixel are determined using greyscale morphology. If the difference between these two values is smaller than a statistically determined threshold, the region is judged to contain only white pixels. If the difference is above the threshold, the region contains both black and white pixels, and the minimum and maximum values represent the black ink and white paper background values, respectively. In the first case, the pixel value is normalized by bringing the estimated white level to the actual white level of the display. In the second case, the pixel value is normalized by expanding the range between the estimated white and black levels to the full range between the white level and the black level of the display. After this normalization process, a standard thresholding method can be applied. [0009]
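
For illustration only, a minimal Python sketch of this adaptive thresholding and normalization step might look as follows; the window size, the percentile-based contrast threshold, and the final global threshold of 128 are assumptions made for the example rather than values taken from the description.

    import numpy as np
    from scipy import ndimage

    def adaptive_binarize(gray, win=31):
        """gray: 2-D uint8 page image, 0 = black ink, 255 = white paper."""
        gray = gray.astype(np.float64)
        # Local maximum/minimum around each pixel, via greyscale morphology.
        local_max = ndimage.grey_dilation(gray, size=(win, win))
        local_min = ndimage.grey_erosion(gray, size=(win, win))
        contrast = local_max - local_min

        # Contrast threshold chosen from the page's own statistics (assumed form).
        t_contrast = 0.5 * np.percentile(contrast, 90)

        norm = np.empty_like(gray)
        flat = contrast < t_contrast            # region judged to contain only background
        # Background-only regions: raise the estimated white level to display white.
        norm[flat] = gray[flat] + (255.0 - local_max[flat])
        # Mixed regions: stretch [estimated black, estimated white] to [0, 255].
        span = np.maximum(contrast, 1.0)
        norm[~flat] = (gray[~flat] - local_min[~flat]) / span[~flat] * 255.0

        # A standard global threshold can now be applied to the normalized image.
        return (norm < 128).astype(np.uint8)    # 1 = ink, 0 = background
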
  • In the thresholded image, connected components are labeled using a scan algorithm combined with an efficient union-find data structure. Then, a bounding box is determined for each connected component. This results in a collection of usually several thousand connected components per page. Each connected component may represent a single character, a portion of a character, a collection of touching characters, background noise, or parts of a line drawing or image. These bounding boxes for connected components are the basis of the subsequent layout analysis. [0010]
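
A sketch of the labeling and bounding-box step, using scipy's built-in component labeling as a stand-in for the scan algorithm with an efficient union-find structure that the text describes:

    from scipy import ndimage

    def component_boxes(binary):
        """binary: 2-D array, 1 = ink. Returns one (x0, y0, x1, y1) box per connected component."""
        labels, count = ndimage.label(binary)        # label connected ink regions
        boxes = []
        for ys, xs in ndimage.find_objects(labels):  # one slice pair per component
            boxes.append((xs.start, ys.start, xs.stop - 1, ys.stop - 1))
        return boxes                                 # typically several thousand per page
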
  • In various exemplary embodiments, for layout analysis, the bounding boxes corresponding to characters in the running text of the document, as well as in a few other page elements, such as, for example, headers, footers, and/or section headings, are used to provide important information about the layout of the page needed for reflowing. In particular, the bounding boxes and their spatial arrangement identify page rotation and skew, column boundaries, what tokens may be needed for token-based compression, reading order, and/or how the text should flow between different parts of the layout. Bounding boxes that are not found to represent “text” in this filtering operation are not lost, however. Such bounding boxes can later be incorporated into the output from the system as graphical elements. [0011]
  • The dimensions of bounding boxes representing body text are found using a simple statistical procedure. The distribution of bounding-box heights is modeled as a statistical mixture of components; for most pages containing text, the largest mixture component corresponds to lower-case letters at the predominant font size. This component yields the x-height of the predominant font, and that dimension is used to filter out bounding boxes that are either too small or too large to represent body text or standard headings. [0012]
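
A sketch of this size filter; the mixture estimation is reduced here to a simple histogram mode, and the acceptance band around the estimated x-height is an assumed value:

    import numpy as np

    def filter_body_text_boxes(boxes):
        """boxes: list of (x0, y0, x1, y1). Keep boxes plausibly sized for body text or headings."""
        heights = np.array([y1 - y0 + 1 for _, y0, _, y1 in boxes])
        hist, edges = np.histogram(heights, bins=50)
        k = int(np.argmax(hist))                        # dominant height component
        x_height = 0.5 * (edges[k] + edges[k + 1])      # ~ height of lower-case letters
        lo, hi = 0.4 * x_height, 4.0 * x_height         # assumed acceptance band
        return [b for b, h in zip(boxes, heights) if lo <= h <= hi]
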
  • Given a collection of bounding boxes representing text, it is desirable to find text lines and column boundaries. The approach used in various exemplary embodiments to identify text lines and column boundaries relies on a branch-and-bound algorithm that finds maximum likelihood matches against line models under a robust least square error model, i.e., a Gaussian noise model in the presence of spurious background features. Text line models are described by three parameters: the angle and the offset of the line, and the descender height. Bounding boxes whose alignment point, that is, the center of the bottom side of the bounding box, rests either on the line or at a distance given by the descender height below the line, are considered to match the line. Matches are penalized by the square of their distance from the model, up to a threshold value ε, which is usually on the order of five pixels. [0013]
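
The scoring of one candidate text-line model can be sketched as below; a branch-and-bound search over (angle, offset, descender height) would then look for the parameters with the smallest total penalty. The parameterization here is a simplification of, not a quote from, the model described above:

    import numpy as np

    EPS = 5.0   # penalty cut-off, "on the order of five pixels"

    def line_penalty(boxes, angle, offset, descender):
        """Robust squared-error penalty of the alignment points against one line model.
        A box matches if the centre of its bottom edge lies near the baseline, or near
        the descender line 'descender' pixels below it (y grows downward)."""
        n = np.array([np.sin(angle), np.cos(angle)])     # unit normal; n.p == offset is the baseline
        total = 0.0
        for x0, y0, x1, y1 in boxes:
            p = np.array([0.5 * (x0 + x1), y1])          # centre of the bottom side
            d = min(abs(n @ p - offset), abs(n @ p - (offset + descender)))
            total += min(d * d, EPS * EPS)               # squared distance, capped at eps^2
        return total
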
  • After a text line has been found, the bounding box that bounds all of the connected components that participated in the match is determined. All other connected components that fall within that bounding box are assigned to the same text line. This tends to “sweep up” punctuation marks, accents, and “i”-dots that would otherwise be missed. Within each text line, multiple bounding boxes whose projections onto the baseline overlap are merged. This results in bounding boxes that predominantly contain one or more complete characters, as opposed to bounding boxes that contain only or predominantly portions of characters. The resulting bounding boxes are then ordered by the x-coordinate of the lower left corner of the bounding boxes to obtain a sequence of character images in reading order. Multiple text lines are found using a greedy strategy, in which the top match is first identified. Then, the bounding boxes that participated in the match are removed from further consideration. Next, the next best text line is found, and the process repeats until no good text line matches can be identified anymore. [0014]
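
A sketch of the within-line merging and ordering; boxes are assumed to be the connected-component boxes already assigned to one text line. The greedy multi-line strategy would simply call the line finder repeatedly, removing the matched boxes after each pass:

    def merge_and_order_line(boxes):
        """Merge boxes whose projections onto the baseline overlap, then order left to right."""
        merged = []
        for b in sorted(boxes):                     # sort by left edge (x0)
            if merged and b[0] <= merged[-1][2]:    # x-projections overlap: merge into previous box
                x0, y0, x1, y1 = merged[-1]
                merged[-1] = (min(x0, b[0]), min(y0, b[1]),
                              max(x1, b[2]), max(y1, b[3]))
            else:
                merged.append(b)
        return merged                               # complete character images in reading order
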
  • This approach to text line modeling has several advantages over known projection or linking methods. First, different text lines can have different orientations. Second, by taking into account both the baseline and the descender line, the technique can find text lines that are missed by known text line finders. Third, the matches returned by this method follow the individual text lines more accurately than other known methods. [0015]
  • Column boundaries are identified in a similar manner by finding globally optimal maximum likelihood matches of the center of the left side of bounding boxes against a line model. In order to reduce background noise, prior to applying the line finder to column finding, statistics about the distribution of horizontal distances between bounding boxes are used to estimate the inter-character and inter-word spacing, i.e., the two largest components in the statistical distribution of horizontal bounding box distances. The bounding boxes for characters are then merged into words. This reduces severalfold the number of bounding boxes that need to be considered for column matching and tends to improve the reliability of column boundary detection. [0016]
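
A sketch of the character-to-word merging; estimating the two spacing components is approximated here by a single split threshold between the median gap and the largest gap, which is an assumption made for the example:

    import numpy as np

    def merge_chars_into_words(line_boxes):
        """line_boxes: character boxes of one text line, already in reading order."""
        if len(line_boxes) < 2:
            return list(line_boxes)
        gaps = np.array([b[0] - a[2] for a, b in zip(line_boxes, line_boxes[1:])])
        split = 0.5 * (np.median(gaps) + gaps.max())   # inter-character vs. inter-word gaps
        words, current = [], list(line_boxes[0])
        for box, gap in zip(line_boxes[1:], gaps):
            if gap > split:                            # word boundary
                words.append(tuple(current))
                current = list(box)
            else:                                      # same word: grow the current box
                current[1] = min(current[1], box[1])
                current[2] = max(current[2], box[2])
                current[3] = max(current[3], box[3])
        words.append(tuple(current))
        return words
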
  • Any connected components that are not part of a text line are grouped together and treated as images. For a single column document, by enumerating text lines and bounding boxes of images in order of their y-coordinates, a sequence of characters, whitespaces, and images in reading order is obtained. For a double column document, the two columns are treated as if the right column were placed under the left column. [0017]
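
A sketch of the reading-order enumeration; the column boundary itself would come from the column-finding step above and is simply a parameter here:

    def reading_order(text_lines, image_boxes, column_split=None):
        """Each item carries a bounding box (x0, y0, x1, y1); return items in reading order."""
        items = [("line", b) for b in text_lines] + [("image", b) for b in image_boxes]
        if column_split is None:                        # single column: top to bottom
            return sorted(items, key=lambda it: it[1][1])
        left = [it for it in items if it[1][0] < column_split]
        right = [it for it in items if it[1][0] >= column_split]
        # Double column: read as if the right column were placed under the left one.
        return sorted(left, key=lambda it: it[1][1]) + sorted(right, key=lambda it: it[1][1])
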
  • This simple layout analysis technique copes with a large number of commonly occurring layouts in printed documents and transforms such layouts into a sequence of images that can be reflowed and displayed on a smaller-area display device. The simple technique works well in these applications because the requirements of reflowing for a smaller-area display device, such as a document reader, are less stringent than for other layout analysis tasks, like rendering into a word processor. Since the output of the layout analysis will only be used for reflowing and not for editing, no semantic labels need to be attached to text blocks. Because the documents are reflowed on a smaller-area screen, there is also no user expectation that a rendering of the output of the layout analysis precisely match the layout of the input document. Furthermore, if page elements, like headers, footers, and/or page numbers, are incorporated into the output of the layout analysis, users can easily skip such page elements during reading. Such page elements may also serve as convenient navigational signposts on the smaller-area display device. [0018]
  • In various exemplary embodiments, the methods and systems according to this invention more specifically provide a two-stage system which analyzes, or “deconstructs”, page image layouts. Such deconstruction includes both physical, e.g., geometric, and logical, e.g., functional, segmentation of page images. The segmented image elements may include blocks, lines, and/or words of text, and other segmented image elements. The segmented image elements are then synthesized and converted into an intermediate data structure, including images of words in correct reading order and links to non-textual image elements. The intermediate data structure may, for example, be expressed in a variety of formats such as, for example, Open E-book XML, Adobe™ PDF 1.4 or later, HTML and/or XHTML, as well as other useful formats that are now available or may be developed in the future. In various exemplary embodiments, the methods and systems according to this invention then distill, or convert, the intermediate data structure for “redisplay” into any of a number of standard electronic book formats, Internet browsable formats, and/or print formats. [0019]
  • In various exemplary embodiments of the methods and systems according to this invention, the intermediate data structure may contain tags, such as those used in SGML and XML, which state the logical functions or geometric properties of the particular image elements the tags annotate. It is also possible that, in various exemplary embodiments, some image elements may not have tags attached to them. For example, in instances where the functions and properties of image elements may be inferable from their position and the position of other tagged and untagged image elements in the intermediate data structure, such tags may not be necessary. [0020]
  • It is also possible that, in various exemplary embodiments, special image elements that can be used for this purpose are not extracted from the original page image, but are created as tagged or untagged elements. Such special image elements can be inserted into the intermediate data structure in an order that would define the desired functions and properties of other image elements. For example, a special image element may be a blank that represents a space between two words. Further, special non-image markers, other than tags attached to particular image elements, could be inserted so that the functions and properties of at least some of the image elements may be inferred from their relative position with respect to the markers within the intermediate data structure. [0021]
  • To prepare the intermediate data structure for redisplay, the intermediate data structure may be converted, for example, to HTML for use on a standard Internet browser, or to Open E-book XML format for use on an Open E-book reader. Other methods may include, for example, converting the intermediate data structure to Plucker format for use on a Plucker electronic book viewer, to Microsoft Reader format for display using MS Reader, or to a print format for printing to paper or the like. [0022]
  • In any document image, the physical layout geometry is fixed and the logical or functional layout structure is implicit. That is, it is intended to be understood by human readers, who bring, to the task of reading, certain conventional expectations of the meaning and implications of layout, typeface, and type size choices. In various exemplary embodiments, in the intermediate data structure according to the methods and systems of this invention, by contrast, the original fixed positions of words are noted but not strictly adhered to, so that the physical layout becomes fluid. In various exemplary embodiments, aspects of the logical structure of the document are captured explicitly, and automatically, and represented by additional information. In various exemplary embodiments, the intermediate data structure according to this invention is automatically adaptable at the time of display to the constraints of size, resolution, contrast, color, geometry, and/or the like, of any given display device or circumstance of viewing. [0023]
  • The adaptability enabled by the methods and systems according to this invention includes re-pagination of text; reflowing, such as, for example, re-justification, reformatting, and/or the like, of text into text-lines; and logical linking of text to associated text and/or non-text contents, such as illustrations, figures, footnotes, signatures, and/or the like. In various exemplary embodiments, the methods and systems according to this invention take into account typographical conventions used to indicate the logical elements of a document, such as titles, author lists, body text, paragraphs, and/or hyphenation, for example. In various exemplary embodiments, the methods and systems of the invention also allow the reading order to be inferred within blocks of text and/or among blocks of text on the page. [0024]
  • Thus, redisplaying the document is enabled for a wide range of displays whose size, resolution, contrast, available colors, and/or geometries may require the document's contents to be reformatted, reflowed, re-colored, and/or reorganized to achieve a high degree of legibility and a complete understanding of the document's contents, without requiring OCR or re-keying, and without being subject to the respective attendant errors of OCR or re-keying, and without losing the look and feel of the original document as chosen by the author and publisher. [0025]
  • In various exemplary embodiments, the methods and systems according to this invention reduce costs by obviating the need for manual keying, correction of OCR results, and/or tagging. In various exemplary embodiments, the methods and systems according to this invention tend to avoid introducing OCR character recognition errors. In various exemplary embodiments, the methods and systems according to this invention tend to preserve typeface and type size choices made by the original author and publisher, which may be helpful, or even essential, in assisting the reader in understanding the author's intent. In various exemplary embodiments, the methods and systems according to this invention also tend to preserve the association of graphics and non-textual elements with related text. [0026]
  • These and other features and advantages of this invention are described in, or are apparent from, the following detailed description of various exemplary embodiments of the systems and methods according to this invention.[0027]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various exemplary embodiments of the systems and methods according to this invention will be described in detail, with reference to the following figures, wherein: [0028]
  • FIG. 1 illustrates an intermediate representation of an image of a page, using XHTML; [0029]
  • FIG. 2 illustrates the format and content of the intermediate representation without the use of tags or explicit separators; [0030]
  • FIG. 3 is a flowchart outlining one exemplary embodiment of a method for document image layout deconstruction and redisplay; [0031]
  • FIG. 4 is a block diagram of one exemplary embodiment of a document deconstruction and display system according to this invention.[0032]
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • FIG. 1 illustrates a detailed example of an intermediate data structure 260 for a page image 300. In FIG. 1, the intermediate data structure 260 is expressed using XHTML as an example of an intermediate data structure format. The page image 300 is shown schematically having a first text area 310 which functions as a title, a second area 320 which functions as an author list, third text areas 330 which function as paragraphs, and a fourth text area 340 which functions as a page number. The structures represented by these text areas 310-340 are usually significant to both the author and the reader, and so are detected and preserved in the intermediate data structure 260. For example, the intermediate data structure 260 preserves the title text area 310 by noting the position of this title text area 310 at the top of the page image, that the text area 310 is centered, and the large typeface used in this text area 310. The position is preserved in the intermediate data structure 260 by the XHTML tag “<DIV CLASS=title ID=title>”. Also, the intermediate data structure 260 preserves the author-list text area 320 by noting the position of this author-list text area 320 just beneath the title text area 310. The intermediate data structure 260 preserves the centered position of the author-list text area 320, and that the author-list text area 320 is printed in a large typeface that is smaller than the typeface of the title text area 310. In particular, in the specific exemplary embodiment shown in FIG. 1, the author-list text area 320 is preserved in the intermediate data structure 260 by the XHTML tag “<DIV CLASS=authors ID=authors>”. [0033]
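
For illustration, a small Python sketch that writes such an XHTML intermediate structure; the CLASS=title and CLASS=authors tags are quoted from the description, while the paragraph and page-number class names and the word-image file names are assumptions of the example:

    def page_to_xhtml(title_words, author_words, paragraphs, page_number_words):
        """Each argument is a list of word-image file names in reading order."""
        def span(words):
            return " ".join('<img src="%s" class="word"/>' % w for w in words)
        parts = ['<DIV CLASS=title ID=title>%s</DIV>' % span(title_words),
                 '<DIV CLASS=authors ID=authors>%s</DIV>' % span(author_words)]
        parts += ['<DIV CLASS=para>%s</DIV>' % span(p) for p in paragraphs]
        parts.append('<DIV CLASS=pageno>%s</DIV>' % span(page_number_words))
        return "<html><body>\n" + "\n".join(parts) + "\n</body></html>"
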
  • FIG. 2 shows a representation of the page image 300 as a sequence of image elements 190, and the corresponding representative compressed image tokens 200, without using attached tags or explicit separators. For example, in a document where the functions and properties of image elements may be inferable from their position on the page and the position of other tagged and untagged image elements in the intermediate data structure, it is not necessary to tag all of the image elements. [0034]
  • FIG. 3 is a flowchart outlining one exemplary embodiment of a method for document image layout deconstruction and redisplay. As shown in FIG. 3, operation of the method begins in step S100 and continues to step S110, where a document is input by scanning, or by use of another data source that provides a document that is in a page image format. The document may be represented as a set of page images, such as bi-level, gray-scale, or color images, in one of a set of image file formats such as TIFF and JPEG, for example. [0035]
  • Then, in step S120, the image file of the page image is analyzed to identify text image areas and non-text image areas. Text area images may include, for example, blocks (or columns), lines, words, or characters of text. Non-text area images may include, for example, illustrations, figures, graphics, line-art, photographs, handwriting, footnotes, signatures and/or the like. [0036]
  • Next, in step S130, the identified text image areas and non-text image areas are located and isolated. Locating and isolating text image areas may include, for example, locating and isolating the baseline and, possibly, top-line and/or cap-line, of each text line image. The isolated line regions are modeled as line segments that run from one end of the text line image to another. Baselines may be modeled as straight lines which are horizontal or, in the case of Japanese, Chinese, and other scripts, vertical, or oriented at some angle near the horizontal or the vertical. Baselines may also be modeled as curved functions. Operation then continues in step S140. [0037]
  • In step S140, the isolated text image areas are selected for further processing. Next, in step S150, the text line regions of the selected text image areas are located and isolated and the layout properties of the selected text image areas are then determined. Layout properties may include, for example, indentation, left and/or right justification, centering, hyphenation, special spacing (e.g. for tabular data), proximity to figures and other non-textual areas, and the like. Layout properties may also include type size and typeface-family properties (e.g. roman/bold/italic styles) that may indicate the function of the text within the page. Operation then continues in step S160. [0038]
  • [0039] In step S160, the located text line regions are further processed into a set of segmented image elements. Then, in step S170, the segmented image elements are read and basic textual elements are located and isolated. Basic textual elements may include, for example, words, numbers, dates, proper names, bibliographic references, references to figures, and/or other non-textual elements within or outside the document. The textual elements will become the basic image units which will be reflowed and reconstructed in later stages. As part of locating the segmented image elements, each segmented image element is labeled with its position relative to the baseline of its text line. When the text lines are later reflowed, the reconstructed baseline may then be consulted when placing the corresponding segmented image elements, so that the elements appear to share the newly constructed baseline. Operation then continues to step S180.
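Continuing the baseline sketch given earlier, the fragment below shows one possible way the baseline-relative labeling described above might be computed for each segmented image element; the dictionary layout is an assumption of this sketch, not a structure defined in the patent.

    def label_baseline_relative(elements, baseline):
        """Attach to each segmented image element (x0, y0, x1, y1) its vertical
        offset from the fitted baseline y = a*x + b, so the element can later be
        re-seated on a newly constructed baseline after reflow."""
        a, b = baseline
        labeled = []
        for x0, y0, x1, y1 in elements:
            xc = (x0 + x1) / 2.0
            labeled.append({"box": (x0, y0, x1, y1),
                            "baseline_offset": y1 - (a * xc + b)})
        return labeled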
  • [0040] In step S180, the set of segmented image elements is labeled with their baseline-relative positions. Next, in step S190, the segmented image elements and their baseline-relative positions are compressed into token-based image elements. Then, in step S200, the image elements are synthesized into an intermediate data structure. Operation then continues to step S210.
  • [0041] In step S210, the intermediate data structure is stored to retain the data in an intermediate format until distilling and redisplay is desired. Then, in step S220, the stored data is distilled to convert the data into a device-specific display format. The intermediate data structure may be converted, for example, to HTML for use in a standard Internet browser, or to Open E-book XML format for use on an Open E-book reader. Other conversions may include, for example, converting the intermediate data structure to Plucker format for use on a Plucker electronic book viewer, to Microsoft Reader format for display in Microsoft Reader, or to a print format for printing to paper or the like. Next, in step S230, the distilled data is displayed to the user. Operation of the method then continues to step S240, where operation of the method ends.
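As one hedged sketch of the distilling of step S220, the function below reflows baseline-labeled word-image tokens into plain HTML sized for a given display width. The element dictionary keys and the use of CSS vertical-align to restore baseline offsets are assumptions of this sketch rather than details given in the patent.

    def distill_to_html(elements, display_width_px):
        """Illustrative distiller: emit simple HTML that reflows word-image tokens
        for a display of the given width, preserving each element's baseline offset."""
        parts = ['<html><body style="width:%dpx">' % display_width_px]
        for el in elements:
            offset = int(round(-el["baseline_offset"]))
            parts.append('<img src="%s" style="vertical-align:%dpx"/>'
                         % (el["token_file"], offset))
        parts.append('</body></html>')
        return "\n".join(parts)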
  • [0042] In various exemplary embodiments of this invention, the intermediate data structure may also be in a form that can be processed by an E-book distiller for redisplaying the intermediate data structure on an E-book reader. If the intended use is to display an electronic book, an E-book distiller reads the intermediate data structure and prepares it for display on a specific device such as a PDA, a computer graphical interface window, or any other graphical display device. Such processing of the intermediate data structure is not limited to an E-book distiller, but may be accomplished by any method or device for re-converting the intermediate data structure for redisplay on a selected display device.
  • [0043] In various exemplary embodiments of this invention, the intermediate data structure may be expressed in a variety of formats such as, for example, Open E-book XML, Adobe™ PDF 1.4 or later, HTML and/or XHTML, as well as other useful formats that are now available or may be developed in the future. In various exemplary embodiments of this invention, the intermediate data structure may contain tags, such as those used in SGML and XML.
  • [0044] In various exemplary embodiments, in step S190, the segmented image elements are compressed into a smaller number of prototype images, so that each incoming element may be replaced by a prototype that is visually similar to, or perhaps indistinguishable from, the original image element. This is an instance of “token-based” compression, where the tokens are the image elements. Therefore, if the image elements are words, then the tokens are words. Alternatively, it may be advantageous to cut the image elements into smaller images corresponding exactly or approximately to individual characters, since there are fewer distinct characters than words in some languages. Compressing the segmented image elements may further include writing a set, or dictionary, of representative compressed image tokens, and a list of references into the set of representative compressed image tokens. Each reference represents an original image element labeled with its position relative to the baseline.
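The fragment below is a minimal sketch of the token-based compression just described: a dictionary of prototype images is grown greedily, and each incoming element is replaced by a reference to a visually similar prototype. The similarity predicate is left as a parameter because the patent does not fix a particular matching method; all names here are illustrative.

    def tokenize(elements, are_similar):
        """Token-based compression sketch for step S190.
        elements    -- labeled image elements, each with "image" and "baseline_offset"
        are_similar -- predicate deciding whether an element image matches a prototype
        Returns a dictionary (list) of prototype images and a list of references."""
        prototypes = []
        references = []
        for el in elements:
            for idx, proto in enumerate(prototypes):
                if are_similar(el["image"], proto):
                    break
            else:
                prototypes.append(el["image"])
                idx = len(prototypes) - 1
            references.append({"token": idx,
                               "baseline_offset": el["baseline_offset"]})
        return prototypes, references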
  • [0045] In various exemplary embodiments of this invention, the non-text image areas, the compressed non-text image areas, the set of representative compressed image tokens, the segmented image elements and/or the layout characteristics are synthesized in step S200 into an intermediate data structure. However, in various exemplary embodiments of this invention, the non-text image areas may optionally first be compressed in step S190, to reduce file size, before being synthesized in step S200 for integration into the intermediate data structure. Additionally, in various exemplary embodiments of this invention, the segmented image elements may optionally be compressed in step S190 before being synthesized in step S200 for integration into the intermediate data structure. Determining whether to compress the non-text image areas and the segmented image elements may depend on file size or other user-specific parameters. If the intermediate data structure does not include compressed data, then the intermediate data structure may be represented as XHTML, for example.
  • [0046] In various exemplary embodiments of this invention, the intermediate data structure may also contain a tagged list containing references to every textual and non-text image element that is proximate to, or referenced by, a textual image element, as well as layout characteristics such as indentation, hyphenation, spacing, and the like. In addition to this list, a set of representative compressed image tokens can be written to a separate but intimately associated image element database. The intermediate data structure contains all the information required to support the reflowing and the reconstruction of the image elements.
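By way of a hedged illustration of this arrangement, the structures below show one possible shape for the tagged element list and the separate image element database of representative compressed image tokens; the keys, tags, and file names are assumptions of this sketch only.

    # Illustrative tagged element list plus separate token database (names hypothetical).
    intermediate_data_structure = {
        "elements": [
            {"tag": "title",   "token": 0, "baseline_offset": 0.0},
            {"tag": "authors", "token": 1, "baseline_offset": 0.0},
            {"tag": "para",    "token": 2, "baseline_offset": -1.5},
        ],
        "layout": {"centered": [0, 1], "indented": [2]},
    }
    image_element_database = {
        0: "tokens/t0000.png",
        1: "tokens/t0001.png",
        2: "tokens/t0002.png",
    }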
  • [0047] FIG. 4 is a block diagram of one exemplary embodiment of a document deconstruction and redisplay system 400 according to this invention. As shown in FIG. 4, one or more user input devices 480 are connected over one or more links 482 to an input/output interface 410. Additionally, a data source 500 is connected over a link 502 to the input/output interface 410. A data sink 600 is also connected to the input/output interface 410 through a link 602.
  • [0048] Each of the links 482, 502, 602 can be implemented using any known or later developed device or system for connecting the one or more user input devices 480, the data source 500 and the data sink 600, respectively, to the document layout deconstruction and redisplay system 400, including a direct cable connection, a connection over a wide area network or a local area network, a connection over an intranet, a connection over the Internet, or a connection over any other distributed processing network or system. In general, each of the links 482, 502, 602 can be any known or later developed connection system or structure usable to connect the one or more user input devices 480, the data source 500 and the data sink 600, respectively, to the document layout deconstruction and redisplay system 400.
  • [0049] The input/output interface 410 inputs data from the data source 500 and/or the one or more user input devices 480 and outputs data to the data sink 600 via the link 602. The input/output interface 410 also provides the received data to one or more of the controller 420, the memory 430, a deconstructing circuit, routine or application 440, a synthesizing circuit, routine or application 450, a distilling circuit, routine or application 460, and/or a display 490. The input/output interface 410 receives data from one or more of the controller 420, the memory 430, the deconstructing circuit, routine or application 440, the synthesizing circuit, routine or application 450, and/or the distilling circuit, routine or application 460.
  • [0050] The memory 430 stores data received from the deconstructing circuit, routine or application 440, the synthesizing circuit, routine or application 450, the distilling circuit, routine or application 460, and/or the input/output interface 410. For example, the original data, the deconstructed data, the synthesized data, and/or the distilled data may be stored in the memory 430. The memory 430 can also store one or more control routines used by the controller 420 to operate the document layout deconstruction and redisplay system 400.
  • [0051] The memory 430 can be implemented using any appropriate combination of alterable, volatile or non-volatile memory or non-alterable, or fixed, memory. The alterable memory, whether volatile or non-volatile, can be implemented using any one or more of static or dynamic RAM, a floppy disk and disk drive, a writable or re-writeable optical disk and disk drive, a hard drive, flash memory or the like. Similarly, the non-alterable or fixed memory can be implemented using any one or more of ROM, PROM, EPROM, EEPROM, an optical ROM disk, such as a CD-ROM or DVD-ROM disk, and disk drive or the like.
  • [0052] It should be understood that each of the circuits or routines shown in FIG. 4 can be implemented as portions of a suitably programmed general purpose computer. Alternatively, each of the circuits or routines shown in FIG. 4 can be implemented as physically distinct hardware circuits within an ASIC, or using an FPGA, a PDL, a PLA or a PAL, or using discrete logic elements or discrete circuit elements. The particular form that each of the circuits or routines shown in FIG. 4 will take is a design choice and will be obvious and predictable to those skilled in the art.
  • [0053] In operation, the data source 500 outputs a set of original data, i.e., an input document, a scanned document, or the like, over the link 502 to the input/output interface 410. Similarly, the user input device 480 can be used to input one or more of a set of newly created original data, scanned data, or the like, over the link 482 to the input/output interface 410. The input/output interface 410 directs the received set of data to the memory 430 under the control of the controller 420. However, it should be appreciated that either or both of these sets of data could have been previously input into the document layout deconstruction and redisplay system 400.
  • [0054] An input document is input into the deconstructing circuit, routine or application 440 under control of the controller 420. The deconstructing circuit, routine or application 440 reads image files and locates and isolates text area images and non-text area images. Non-text area images are then sent to the synthesizing circuit, routine or application 450 under control of the controller 420 for synthesizing the data into an intermediate data structure. Non-text images may optionally be compressed prior to being synthesized at the synthesizing circuit, routine or application 450.
  • [0055] The deconstructing circuit, routine or application 440 reads the set of isolated text area images, locates and isolates text line regions, and detects the layout properties of the text line regions. The layout properties are sent to the synthesizing circuit, routine or application 450 under the control of the controller 420. The text line regions are further processed by the deconstructing circuit, routine or application 440 into a set of segmented image elements with their baseline-relative positions and are then sent to the synthesizing circuit, routine or application 450 under control of the controller 420 for synthesizing into an intermediate data structure. The deconstructing circuit, routine or application 440 may also compress the segmented image elements with their baseline-relative positions into token-based image elements before they are sent to the synthesizing circuit, routine or application 450 under control of the controller 420 for synthesizing into an intermediate data structure.
  • [0056] It should be appreciated that the deconstructing circuit, routine or application 440 and the synthesizing circuit, routine or application 450 can use any known or later-developed encoding scheme to deconstruct and synthesize the data to be converted into an intermediate data structure that may then be distilled by the distilling circuit, routine or application 460 for display on the display device 490.
  • [0057] The synthesizing circuit, routine or application 450 synthesizes the non-text area images and compressed non-text area image elements, the set of representative compressed image tokens, the segmented image elements and the layout characteristics, and transcribes the data into an intermediate data structure. The intermediate data structure is sent to the memory 430 under the control of the controller 420 for storage.
  • [0058] Upon request by a user of the input document, the distilling circuit, routine or application 460 converts the intermediate data structure into a format usable by the display 490. The distilling circuit, routine or application 460, under control of the controller 420 and the input/output interface 410, outputs the converted intermediate data structure to the user's device for display.
  • [0059] It should be appreciated that the distilling circuit, routine or application 460 can use any known or later-developed encoding scheme, including, but not limited to, those disclosed in this application, to convert the intermediate data structure into a device-specific format usable for redisplay on an arbitrarily sized display.
  • [0060] In various exemplary embodiments, the systems and methods of this invention also relate to the use of special non-image markers, other than tags attached to particular image elements, to infer the functions and properties of all the image elements from their relative positions with respect to the markers within the intermediate data structure.
  • [0061] While this invention has been described in conjunction with the exemplary embodiments outlined above, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the exemplary embodiments of the invention, as set forth above, are intended to be illustrative, not limiting. Various changes may be made to the invention without departing from the spirit and scope thereof.

Claims (28)

What is claimed is:
1. A method of converting a document in a page-image format into a form suitable for an arbitrarily sized display, comprising:
deconstructing a document in a page image format;
synthesizing the deconstructed document into an intermediate data structure; and
distilling the intermediate data structure for redisplay in a format usable for an arbitrarily sized display.
2. The method of claim 1, wherein deconstructing a document in a page image format includes:
identifying text image areas and non-text image areas of the document;
locating and isolating text image areas and non-text image areas;
processing the isolated text image areas and non-text image areas into text line regions and layout properties;
processing located text line regions into segmented image elements; and
locating and labeling segmented image elements.
3. The method of claim 2, wherein deconstructing a document in a page image format into the set of segmented image elements includes at least one of physical segmentation of data and logical segmentation of data.
4. The method of claim 2, wherein the set of segmented image elements comprises at least one of blocks, lines, words, characters of text, groups of characters, and groups of non-text characters.
5. The method of claim 1, wherein synthesizing includes converting non-text image areas, layout properties and segmented image areas into the intermediate data structure.
6. The method of claim 2, wherein synthesizing the set of segmented image elements into an intermediate data structure includes integrating at least one of bitmapped images in an intelligible display layout and links to non-textual elements.
7. The method of claim 6, wherein the bitmapped images are images of words in reading order.
8. The method of claim 1, wherein the intermediate data structure is stored in a storage device.
9. The method of claim 1, wherein distilling the intermediate data structure for redisplay in a format usable for an arbitrarily sized display, includes redisplaying the document in human readable format.
10. The method of claim 1, wherein distilling the intermediate data structure for redisplay in a format usable for an arbitrarily sized display, includes redisplaying the document in at least one of an electronic book format, Internet browsable format and a print format.
11. The method of claim 1, wherein distilling the intermediate data structure includes converting the stored intermediate data structure into a device specific display format for display.
12. The method of claim 1, wherein the intermediate data structure is adaptable to at least one of display screen size, page size, resolution, contrast, color and geometry, at the time of display.
13. The method of claim 1, wherein adaptability of the intermediate data structure is supported by at least one of repagination of text, reflowing of text, logical links of text to associated text and non-textual content.
14. A method of converting a document in a page-image format into a form suitable for an arbitrarily sized display, comprising:
analyzing page layout;
converting a sequence of page images into a sequence of document element images captured in a tagged format; and
re-converting the tagged format into at least one of an electronic book format, an Internet browsable format that can accept images and a print format.
15. The method of claim 14, wherein the tagged format preserves at least one of reading order and logical page layout properties.
16. A system of converting a document in a page-image format into a form suitable for an arbitrarily sized display, comprising:
an input/output device;
a controller;
a deconstructing circuit, routine or application that deconstructs a document;
a synthesizing circuit, routine or application that synthesizes the deconstructed document into an intermediate data structure;
a distilling circuit, routine or application that distills the intermediate data structure for redisplay in a format usable for an arbitrarily sized display;
and a memory.
17. The system of claim 16, wherein:
the deconstructing circuit, routine or application deconstructs the document in a page image format into non-text image areas, layout properties, and a set of segmented image elements;
the synthesizing circuit, routine or application synthesizes the non-text image areas, the layout properties, and the set of segmented image elements into an intermediate data structure; and
the distilling circuit, routine or application distills the intermediate data structure for redisplay in a format usable for an arbitrarily sized display.
18. The system of claim 17, wherein the deconstructing circuit, routine or application deconstructs the document in a page image format into the set of segmented image elements that includes at least one of physical segmentation of data and logical segmentation of data.
19. The system of claim 17, wherein the intermediate data structure includes at least one of bitmapped images in an intelligible display layout and links to non-textual elements.
20. The system of claim 19, wherein the bitmapped images are images of words in reading order.
21. The system of claim 16, wherein the memory stores at least one of the document in page image format, the deconstructed document, the intermediate data structure and the distilled document.
22. The system of claim 16, wherein the distilling circuit, routine or application distills the intermediate data structure for redisplay of the document in a format usable for an arbitrarily sized display, including redisplaying the document in at least one of an electronic book format, an Internet browsable format, and a print format.
23. The system of claim 16, wherein the distilling circuit, routine or application converts the stored intermediate data structure into a device specific display format for display.
24. The system of claim 16, wherein the intermediate data structure is adaptable to at least one of display screen size, paper size, resolution, contrast, color and geometry, at the time of display.
25. The system of claim 16, wherein adaptability of the intermediate data structure is supported by at least one of repagination of text, reflowing of text, logical links of text to associated text and non-textual content.
26. The system of claim 16, wherein the deconstructing circuit, routine or application analyzes page layout and converts a sequence of page images into a sequence of document element images captured in a tagged format; and
the distilling circuit, routine or application converts the tagged format into at least one of an electronic book format, an Internet browsable format that can accept images and a print format.
27. The system of claim 26, wherein the tagged format preserves at least one of reading order and logical page layout properties.
28. The system of claim 26, wherein the deconstructing routine includes a segmentation algorithm and a background structure analyzer.
US10/064,892 2002-03-01 2002-08-27 Method and system for document image layout deconstruction and redisplay system Abandoned US20040205568A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US10/064,892 US20040205568A1 (en) 2002-03-01 2002-08-27 Method and system for document image layout deconstruction and redisplay system
EP03004558A EP1343095A3 (en) 2002-03-01 2003-02-28 Method and system for document image layout deconstruction and redisplay
JP2003053197A JP2004005453A (en) 2002-03-01 2003-02-28 Method and system for breaking up and re-displaying document image layout
US13/152,984 US10606933B2 (en) 2002-03-01 2011-06-03 Method and system for document image layout deconstruction and redisplay

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US36017102P 2002-03-01 2002-03-01
US10/064,892 US20040205568A1 (en) 2002-03-01 2002-08-27 Method and system for document image layout deconstruction and redisplay system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/152,984 Continuation US10606933B2 (en) 2002-03-01 2011-06-03 Method and system for document image layout deconstruction and redisplay

Publications (1)

Publication Number Publication Date
US20040205568A1 true US20040205568A1 (en) 2004-10-14

Family

ID=27759894

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/064,892 Abandoned US20040205568A1 (en) 2002-03-01 2002-08-27 Method and system for document image layout deconstruction and redisplay system
US13/152,984 Active 2026-10-28 US10606933B2 (en) 2002-03-01 2011-06-03 Method and system for document image layout deconstruction and redisplay

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/152,984 Active 2026-10-28 US10606933B2 (en) 2002-03-01 2011-06-03 Method and system for document image layout deconstruction and redisplay

Country Status (3)

Country Link
US (2) US20040205568A1 (en)
EP (1) EP1343095A3 (en)
JP (1) JP2004005453A (en)

Cited By (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030004946A1 (en) * 2001-06-28 2003-01-02 Vandenavond Todd M. Package labeling
US20040049735A1 (en) * 2002-09-05 2004-03-11 Tsykora Anatoliy V. System and method for identifying line breaks
US20040066530A1 (en) * 2002-10-04 2004-04-08 Fuji Xerox Co., Ltd. Image forming device and image formation control method
US20040135813A1 (en) * 2002-09-26 2004-07-15 Sony Corporation Information processing device and method, and recording medium and program used therewith
US20040139384A1 (en) * 2003-01-13 2004-07-15 Hewlett Packard Company Removal of extraneous text from electronic documents
US20040202352A1 (en) * 2003-04-10 2004-10-14 International Business Machines Corporation Enhanced readability with flowed bitmaps
US20050044171A1 (en) * 2003-08-21 2005-02-24 3M Innovative Properties Company Centralized management of packaging data having modular remote device control architecture
US20050050052A1 (en) * 2003-08-20 2005-03-03 3M Innovative Properties Company Centralized management of packaging data with artwork importation module
US20050138551A1 (en) * 2003-10-03 2005-06-23 Gidon Elazar Method for page translation
US20060123266A1 (en) * 2004-12-08 2006-06-08 Ziosoft, Inc. Communication terminal
US20060217954A1 (en) * 2005-03-22 2006-09-28 Fuji Xerox Co., Ltd. Translation device, image processing device, translation method, and recording medium
US20070002054A1 (en) * 2005-07-01 2007-01-04 Serge Bronstein Method of identifying semantic units in an electronic document
US20070047814A1 (en) * 2005-09-01 2007-03-01 Taeko Yamazaki Image processing apparatus and method thereof
US20070055690A1 (en) * 2005-09-08 2007-03-08 Hewlett-Packard Development Company, L.P. Flows for variable-data printing
US20070116362A1 (en) * 2004-06-02 2007-05-24 Ccs Content Conversion Specialists Gmbh Method and device for the structural analysis of a document
US20070206855A1 (en) * 2006-03-02 2007-09-06 Sharp Laboratories Of America, Inc. Methods and systems for detecting regions in digital images
US20070206856A1 (en) * 2006-03-02 2007-09-06 Toyohisa Matsuda Methods and Systems for Detecting Regions in Digital Images
US20070234203A1 (en) * 2006-03-29 2007-10-04 Joshua Shagam Generating image-based reflowable files for rendering on various sized displays
US20070291120A1 (en) * 2006-06-15 2007-12-20 Richard John Campbell Methods and Systems for Identifying Regions of Substantially Uniform Color in a Digital Image
US20070291288A1 (en) * 2006-06-15 2007-12-20 Richard John Campbell Methods and Systems for Segmenting a Digital Image into Regions
US20080056573A1 (en) * 2006-09-06 2008-03-06 Toyohisa Matsuda Methods and Systems for Identifying Text in Digital Images
WO2007129288A3 (en) * 2006-05-05 2008-05-29 Big River Ltd Electronic document reformatting
US20080139191A1 (en) * 2006-12-08 2008-06-12 Miguel Melnyk Content adaptation
US20090110319A1 (en) * 2007-10-30 2009-04-30 Campbell Richard J Methods and Systems for Background Color Extrapolation
US20090279108A1 (en) * 2008-05-12 2009-11-12 Nagayasu Hoshi Image Processing Apparatus
US20090313577A1 (en) * 2005-12-20 2009-12-17 Liang Xu Method for displaying documents
US20110016384A1 (en) * 2006-09-29 2011-01-20 Joshua Shagam Optimizing typographical content for transmission and display
US20110035661A1 (en) * 2009-08-06 2011-02-10 Helen Balinsky Document layout system
US20110087955A1 (en) * 2009-10-14 2011-04-14 Chi Fai Ho Computer-aided methods and systems for e-books
US20110173532A1 (en) * 2010-01-13 2011-07-14 George Forman Generating a layout of text line images in a reflow area
US8023738B1 (en) 2006-03-28 2011-09-20 Amazon Technologies, Inc. Generating reflow files from digital images for rendering on various sized displays
US20120054605A1 (en) * 2010-08-31 2012-03-01 Hillcrest Publishing Group, Inc. Electronic document conversion system
US20120166974A1 (en) * 2010-12-23 2012-06-28 Elford Christopher L Method, apparatus and system for interacting with content on web browsers
US8249352B2 (en) * 2007-08-27 2012-08-21 Fuji Xerox Co., Ltd. Document image processing apparatus, document image processing method and computer readable medium
US8347232B1 (en) 2009-07-10 2013-01-01 Lexcycle, Inc Interactive user interface
US8413048B1 (en) 2006-03-28 2013-04-02 Amazon Technologies, Inc. Processing digital images including headers and footers into reflow content
WO2013062666A1 (en) * 2011-10-24 2013-05-02 Google Inc. Extensible framework for ereader tools
US20130191728A1 (en) * 2012-01-20 2013-07-25 Steven Victor McKinney Systems, methods, and media for generating electronic books
US8499236B1 (en) * 2010-01-21 2013-07-30 Amazon Technologies, Inc. Systems and methods for presenting reflowable content on a display
US8520025B2 (en) 2011-02-24 2013-08-27 Google Inc. Systems and methods for manipulating user annotations in electronic books
US8542926B2 (en) 2010-11-19 2013-09-24 Microsoft Corporation Script-agnostic text reflow for document images
US8572480B1 (en) 2008-05-30 2013-10-29 Amazon Technologies, Inc. Editing the sequential flow of a page
US8630498B2 (en) 2006-03-02 2014-01-14 Sharp Laboratories Of America, Inc. Methods and systems for detecting pictorial regions in digital images
US20140055803A1 (en) * 2005-10-14 2014-02-27 Uhlig Llc Dynamic Variable-Content Publishing
US20140173394A1 (en) * 2012-12-18 2014-06-19 Canon Kabushiki Kaisha Display apparatus, control method therefor, and storage medium
WO2014098528A1 (en) * 2012-12-21 2014-06-26 Samsung Electronics Co., Ltd. Text-enlargement display method
US8782516B1 (en) 2007-12-21 2014-07-15 Amazon Technologies, Inc. Content style detection
US20140208191A1 (en) * 2013-01-18 2014-07-24 Microsoft Corporation Grouping Fixed Format Document Elements to Preserve Graphical Data Semantics After Reflow
US9031493B2 (en) 2011-11-18 2015-05-12 Google Inc. Custom narration of electronic books
US9035887B1 (en) 2009-07-10 2015-05-19 Lexcycle, Inc Interactive user interface
US9069744B2 (en) 2012-05-15 2015-06-30 Google Inc. Extensible framework for ereader tools, including named entity information
US20150261740A1 (en) * 2012-10-16 2015-09-17 Heinz Grether Pc Text reading aid
US20150261761A1 (en) * 2006-12-28 2015-09-17 Ebay Inc. Header-token driven automatic text segmentation
US9229911B1 (en) 2008-09-30 2016-01-05 Amazon Technologies, Inc. Detecting continuation of flow of a page
US9323733B1 (en) 2013-06-05 2016-04-26 Google Inc. Indexed electronic book annotations
US20160140086A1 (en) * 2014-11-19 2016-05-19 Kobo Incorporated System and method for content repagination providing a page continuity indicium while e-reading
US9400549B2 (en) 2013-03-08 2016-07-26 Chi Fai Ho Method and system for a new-era electronic book
US9411790B2 (en) 2013-07-26 2016-08-09 Metrodigi, Inc. Systems, methods, and media for generating structured documents
US9542363B2 (en) 2014-01-31 2017-01-10 Konica Minolta Laboratory U.S.A., Inc. Processing of page-image based document to generate a re-targeted document for different display devices which support different types of user input methods
US9734132B1 (en) * 2011-12-20 2017-08-15 Amazon Technologies, Inc. Alignment and reflow of displayed character images
US9965444B2 (en) 2012-01-23 2018-05-08 Microsoft Technology Licensing, Llc Vector graphics classification engine
US20180150689A1 (en) * 2016-11-29 2018-05-31 Canon Kabushiki Kaisha Information processing apparatus, storage medium, and information processing method
US9990347B2 (en) 2012-01-23 2018-06-05 Microsoft Technology Licensing, Llc Borderless table detection engine
US10049107B2 (en) * 2016-02-25 2018-08-14 Fuji Xerox Co., Ltd. Non-transitory computer readable medium and information processing apparatus and method
US10387541B2 (en) * 2015-01-29 2019-08-20 Hewlett-Packard Development Company, L.P. High quality setting of text for print, with full control over layout, using a web browser
US10452748B2 (en) 2016-06-20 2019-10-22 Microsoft Technology Licensing, Llc Deconstructing and rendering of web page into native application experience
US10606933B2 (en) * 2002-03-01 2020-03-31 Xerox Corporation Method and system for document image layout deconstruction and redisplay
US10616443B1 (en) * 2019-02-11 2020-04-07 Open Text Sa Ulc On-device artificial intelligence systems and methods for document auto-rotation
US20200250613A1 (en) * 2019-01-31 2020-08-06 Walmart Apollo, Llc System and method for dispatching drivers for delivering grocery orders and facilitating digital tipping
US10831982B2 (en) 2009-10-14 2020-11-10 Iplcontent, Llc Hands-free presenting device
US11106858B2 (en) * 2020-01-16 2021-08-31 Adobe Inc. Merging selected digital point text objects while maintaining visual appearance fidelity

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7937653B2 (en) * 2005-01-10 2011-05-03 Xerox Corporation Method and apparatus for detecting pagination constructs including a header and a footer in legacy documents
US7433548B2 (en) * 2006-03-28 2008-10-07 Amazon Technologies, Inc. Efficient processing of non-reflow content in a digital image
US8381101B2 (en) * 2009-11-16 2013-02-19 Apple Inc. Supporting platform-independent typesetting for documents
US9218680B2 (en) * 2010-09-01 2015-12-22 K-Nfb Reading Technology, Inc. Systems and methods for rendering graphical content and glyphs
JP5182902B2 (en) * 2011-03-31 2013-04-17 京セラコミュニケーションシステム株式会社 Document image output device
US8855413B2 (en) * 2011-05-13 2014-10-07 Abbyy Development Llc Image reflow at word boundaries
JP5812702B2 (en) * 2011-06-08 2015-11-17 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Reading order determination apparatus, method and program for determining reading order of characters
FR2977692B1 (en) * 2011-07-07 2015-09-18 Aquafadas Sas ENRICHMENT OF ELECTRONIC DOCUMENT
US9928225B2 (en) * 2012-01-23 2018-03-27 Microsoft Technology Licensing, Llc Formula detection engine
WO2014050481A1 (en) * 2012-09-26 2014-04-03 富士フイルム株式会社 Document image processing device, method for controlling operation thereof, and program for controlling operation thereof
US9330070B2 (en) 2013-03-11 2016-05-03 Microsoft Technology Licensing, Llc Detection and reconstruction of east asian layout features in a fixed format document
US20140258852A1 (en) * 2013-03-11 2014-09-11 Microsoft Corporation Detection and Reconstruction of Right-to-Left Text Direction, Ligatures and Diacritics in a Fixed Format Document
CN104331391B (en) * 2013-07-22 2018-02-02 北大方正集团有限公司 Document format conversion equipment and document format conversion method
US10372789B2 (en) 2014-08-22 2019-08-06 Oracle International Corporation Creating high fidelity page layout documents
US11436286B1 (en) * 2019-04-04 2022-09-06 Otsuka America Pharmaceutical, Inc. System and method for using deconstructed document sections to generate report data structures
US11087448B2 (en) * 2019-05-30 2021-08-10 Kyocera Document Solutions Inc. Apparatus, method, and non-transitory recording medium for a document fold determination based on the change point block detection
US11410446B2 (en) * 2019-11-22 2022-08-09 Nielsen Consumer Llc Methods, systems, apparatus and articles of manufacture for receipt decoding
CN111275139B (en) * 2020-01-21 2024-02-23 杭州大拿科技股份有限公司 Handwritten content removal method, handwritten content removal device, and storage medium
US11810380B2 (en) 2020-06-30 2023-11-07 Nielsen Consumer Llc Methods and apparatus to decode documents based on images using artificial intelligence
US11822216B2 (en) 2021-06-11 2023-11-21 Nielsen Consumer Llc Methods, systems, apparatus, and articles of manufacture for document scanning
US11625930B2 (en) 2021-06-30 2023-04-11 Nielsen Consumer Llc Methods, systems, articles of manufacture and apparatus to decode receipts based on neural graph architecture
US11687700B1 (en) * 2022-02-01 2023-06-27 International Business Machines Corporation Generating a structure of a PDF-document
US11699021B1 (en) 2022-03-14 2023-07-11 Bottomline Technologies Limited System for reading contents from a document

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US582530A (en) * 1897-05-11 Crank-fastening for bicycles
US4251799A (en) * 1979-03-30 1981-02-17 International Business Machines Corporation Optical character recognition using baseline information
EP0385009A1 (en) * 1989-03-03 1990-09-05 Hewlett-Packard Limited Apparatus and method for use in image processing
US5159667A (en) * 1989-05-31 1992-10-27 Borrey Roland G Document identification by characteristics matching
CA2027253C (en) 1989-12-29 1997-12-16 Steven C. Bagley Editing text in an image
US5390354A (en) * 1991-03-15 1995-02-14 Itt Corporation Computerized directory pagination system and method
US5321770A (en) * 1991-11-19 1994-06-14 Xerox Corporation Method for determining boundaries of words in text
JP2579397B2 (en) * 1991-12-18 1997-02-05 インターナショナル・ビジネス・マシーンズ・コーポレイション Method and apparatus for creating layout model of document image
US5983179A (en) * 1992-11-13 1999-11-09 Dragon Systems, Inc. Speech recognition system which turns its voice response on for confirmation when it has been turned off without confirmation
US5825919A (en) * 1992-12-17 1998-10-20 Xerox Corporation Technique for generating bounding boxes for word spotting in bitmap images
JP3272842B2 (en) * 1992-12-17 2002-04-08 ゼロックス・コーポレーション Processor-based decision method
US5848184A (en) * 1993-03-15 1998-12-08 Unisys Corporation Document page analyzer and method
US6587587B2 (en) * 1993-05-20 2003-07-01 Microsoft Corporation System and methods for spacing, storing and recognizing electronic representations of handwriting, printing and drawings
US5734761A (en) * 1994-06-30 1998-03-31 Xerox Corporation Editing scanned document images using simple interpretations
EP0702322B1 (en) 1994-09-12 2002-02-13 Adobe Systems Inc. Method and apparatus for identifying words described in a portable electronic document
US5574802A (en) * 1994-09-30 1996-11-12 Xerox Corporation Method and apparatus for document element classification by analysis of major white region geometry
US5724985A (en) * 1995-08-02 1998-03-10 Pacesetter, Inc. User interface for an implantable medical device using an integrated digitizer display screen
US5911146A (en) * 1996-05-03 1999-06-08 Mitsubishi Electric Information Technology Center America, Inc. (Ita) Apparatus and method for automatic yellow pages pagination and layout
US5784487A (en) * 1996-05-23 1998-07-21 Xerox Corporation System for document layout analysis
US5893127A (en) * 1996-11-18 1999-04-06 Canon Information Systems, Inc. Generator for document with HTML tagged table having data elements which preserve layout relationships of information in bitmap image of original document
JP3634099B2 (en) * 1997-02-17 2005-03-30 株式会社リコー Document information management system, media sheet information creation device, and document information management device
US6336124B1 (en) * 1998-10-01 2002-01-01 Bcl Computers, Inc. Conversion data representing a document to other formats for manipulation and display
JP3879350B2 (en) * 2000-01-25 2007-02-14 富士ゼロックス株式会社 Structured document processing system and structured document processing method
US20020056085A1 (en) 2000-03-21 2002-05-09 Christer Fahraeus Method and system for transferring and displaying graphical objects
SE0000941L (en) * 2000-03-21 2001-09-22 Anoto Ab Procedure and arrangements for transmission of message
US6947162B2 (en) * 2001-08-30 2005-09-20 Hewlett-Packard Development Company, L.P. Systems and methods for converting the format of information
US20040205568A1 (en) * 2002-03-01 2004-10-14 Breuel Thomas M. Method and system for document image layout deconstruction and redisplay system
US8473467B2 (en) * 2009-01-02 2013-06-25 Apple Inc. Content profiling to dynamically configure content processing

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6208426B1 (en) * 1996-04-04 2001-03-27 Matsushita Graphic Communication Systems, Inc. Facsimile communication method and facsimile machine
US6023714A (en) * 1997-04-24 2000-02-08 Microsoft Corporation Method and system for dynamically adapting the layout of a document to an output device
US20020029232A1 (en) * 1997-11-14 2002-03-07 Daniel G. Bobrow System for sorting document images by shape comparisons among corresponding layout components
US6300947B1 (en) * 1998-07-06 2001-10-09 International Business Machines Corporation Display screen and window size related web page adaptation system
US7028258B1 (en) * 1999-10-01 2006-04-11 Microsoft Corporation Dynamic pagination of text and resizing of image to fit in a document
US6633314B1 (en) * 2000-02-02 2003-10-14 Raja Tuli Portable high speed internet device integrating cellular telephone and palm top computer
US6895552B1 (en) * 2000-05-31 2005-05-17 Ricoh Co., Ltd. Method and an apparatus for visual summarization of documents
US20020046245A1 (en) * 2000-09-29 2002-04-18 Hillar Christopher J. System and method for creating customized web pages
US20020143821A1 (en) * 2000-12-15 2002-10-03 Douglas Jakubowski Site mining stylesheet generator
US20030014445A1 (en) * 2001-07-13 2003-01-16 Dave Formanek Document reflowing technique

Cited By (132)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030004946A1 (en) * 2001-06-28 2003-01-02 Vandenavond Todd M. Package labeling
US10606933B2 (en) * 2002-03-01 2020-03-31 Xerox Corporation Method and system for document image layout deconstruction and redisplay
US20040049735A1 (en) * 2002-09-05 2004-03-11 Tsykora Anatoliy V. System and method for identifying line breaks
US7949942B2 (en) 2002-09-05 2011-05-24 Vistaprint Technologies Limited System and method for identifying line breaks
US7020838B2 (en) * 2002-09-05 2006-03-28 Vistaprint Technologies Limited System and method for identifying line breaks
US20040135813A1 (en) * 2002-09-26 2004-07-15 Sony Corporation Information processing device and method, and recording medium and program used therewith
US8484559B2 (en) * 2002-09-26 2013-07-09 Sony Corporation Device and method for the magnification of content having a predetermined layout
US20040066530A1 (en) * 2002-10-04 2004-04-08 Fuji Xerox Co., Ltd. Image forming device and image formation control method
US7808672B2 (en) * 2002-10-04 2010-10-05 Fuji Xerox Co., Ltd. Image forming device and image formation control method
US20040139384A1 (en) * 2003-01-13 2004-07-15 Hewlett Packard Company Removal of extraneous text from electronic documents
US7310773B2 (en) * 2003-01-13 2007-12-18 Hewlett-Packard Development Company, L.P. Removal of extraneous text from electronic documents
US20040202352A1 (en) * 2003-04-10 2004-10-14 International Business Machines Corporation Enhanced readability with flowed bitmaps
US20050050052A1 (en) * 2003-08-20 2005-03-03 3M Innovative Properties Company Centralized management of packaging data with artwork importation module
US20050044171A1 (en) * 2003-08-21 2005-02-24 3M Innovative Properties Company Centralized management of packaging data having modular remote device control architecture
US7350143B2 (en) * 2003-10-03 2008-03-25 Sandisk Corporation Method for page translation
US20050138551A1 (en) * 2003-10-03 2005-06-23 Gidon Elazar Method for page translation
US20070116362A1 (en) * 2004-06-02 2007-05-24 Ccs Content Conversion Specialists Gmbh Method and device for the structural analysis of a document
US7860949B2 (en) * 2004-12-08 2010-12-28 Ziosoft, Inc. Communication terminal
US20060123266A1 (en) * 2004-12-08 2006-06-08 Ziosoft, Inc. Communication terminal
US7865353B2 (en) * 2005-03-22 2011-01-04 Fuji Xerox Co., Ltd. Translation device, image processing device, translation method, and recording medium
US20060217954A1 (en) * 2005-03-22 2006-09-28 Fuji Xerox Co., Ltd. Translation device, image processing device, translation method, and recording medium
US7705848B2 (en) * 2005-07-01 2010-04-27 Pdflib Gmbh Method of identifying semantic units in an electronic document
US20070002054A1 (en) * 2005-07-01 2007-01-04 Serge Bronstein Method of identifying semantic units in an electronic document
US7933447B2 (en) * 2005-09-01 2011-04-26 Canon Kabushiki Kaisha Image processing apparatus and method thereof
US20070047814A1 (en) * 2005-09-01 2007-03-01 Taeko Yamazaki Image processing apparatus and method thereof
US8381099B2 (en) * 2005-09-08 2013-02-19 Hewlett-Packard Development Company, L.P. Flows for variable-data printing
US20070055690A1 (en) * 2005-09-08 2007-03-08 Hewlett-Packard Development Company, L.P. Flows for variable-data printing
US20140055803A1 (en) * 2005-10-14 2014-02-27 Uhlig Llc Dynamic Variable-Content Publishing
US9383957B2 (en) * 2005-10-14 2016-07-05 Uhlig Llc Dynamic variable-content publishing
US20090313577A1 (en) * 2005-12-20 2009-12-17 Liang Xu Method for displaying documents
US8630498B2 (en) 2006-03-02 2014-01-14 Sharp Laboratories Of America, Inc. Methods and systems for detecting pictorial regions in digital images
US20070206855A1 (en) * 2006-03-02 2007-09-06 Sharp Laboratories Of America, Inc. Methods and systems for detecting regions in digital images
US20070206856A1 (en) * 2006-03-02 2007-09-06 Toyohisa Matsuda Methods and Systems for Detecting Regions in Digital Images
US7792359B2 (en) 2006-03-02 2010-09-07 Sharp Laboratories Of America, Inc. Methods and systems for detecting regions in digital images
US7889932B2 (en) 2006-03-02 2011-02-15 Sharp Laboratories Of America, Inc. Methods and systems for detecting regions in digital images
US8413048B1 (en) 2006-03-28 2013-04-02 Amazon Technologies, Inc. Processing digital images including headers and footers into reflow content
US8023738B1 (en) 2006-03-28 2011-09-20 Amazon Technologies, Inc. Generating reflow files from digital images for rendering on various sized displays
US8566707B1 (en) 2006-03-29 2013-10-22 Amazon Technologies, Inc. Generating image-based reflowable files for rendering on various sized displays
US7966557B2 (en) 2006-03-29 2011-06-21 Amazon Technologies, Inc. Generating image-based reflowable files for rendering on various sized displays
US20070234203A1 (en) * 2006-03-29 2007-10-04 Joshua Shagam Generating image-based reflowable files for rendering on various sized displays
WO2007129288A3 (en) * 2006-05-05 2008-05-29 Big River Ltd Electronic document reformatting
US20070291288A1 (en) * 2006-06-15 2007-12-20 Richard John Campbell Methods and Systems for Segmenting a Digital Image into Regions
US20070291120A1 (en) * 2006-06-15 2007-12-20 Richard John Campbell Methods and Systems for Identifying Regions of Substantially Uniform Color in a Digital Image
US8368956B2 (en) 2006-06-15 2013-02-05 Sharp Laboratories Of America, Inc. Methods and systems for segmenting a digital image into regions
US8437054B2 (en) 2006-06-15 2013-05-07 Sharp Laboratories Of America, Inc. Methods and systems for identifying regions of substantially uniform color in a digital image
US7864365B2 (en) 2006-06-15 2011-01-04 Sharp Laboratories Of America, Inc. Methods and systems for segmenting a digital image into regions
US7876959B2 (en) 2006-09-06 2011-01-25 Sharp Laboratories Of America, Inc. Methods and systems for identifying text in digital images
US8150166B2 (en) 2006-09-06 2012-04-03 Sharp Laboratories Of America, Inc. Methods and systems for identifying text in digital images
US20080056573A1 (en) * 2006-09-06 2008-03-06 Toyohisa Matsuda Methods and Systems for Identifying Text in Digital Images
US9208133B2 (en) * 2006-09-29 2015-12-08 Amazon Technologies, Inc. Optimizing typographical content for transmission and display
US20110016384A1 (en) * 2006-09-29 2011-01-20 Joshua Shagam Optimizing typographical content for transmission and display
US9275167B2 (en) 2006-12-08 2016-03-01 Citrix Systems, Inc. Content adaptation
US20080139191A1 (en) * 2006-12-08 2008-06-12 Miguel Melnyk Content adaptation
US8181107B2 (en) * 2006-12-08 2012-05-15 Bytemobile, Inc. Content adaptation
US9292618B2 (en) 2006-12-08 2016-03-22 Citrix Systems, Inc. Content adaptation
US20150261761A1 (en) * 2006-12-28 2015-09-17 Ebay Inc. Header-token driven automatic text segmentation
US9529862B2 (en) * 2006-12-28 2016-12-27 Paypal, Inc. Header-token driven automatic text segmentation
US8249352B2 (en) * 2007-08-27 2012-08-21 Fuji Xerox Co., Ltd. Document image processing apparatus, document image processing method and computer readable medium
US8014596B2 (en) 2007-10-30 2011-09-06 Sharp Laboratories Of America, Inc. Methods and systems for background color extrapolation
US8121403B2 (en) 2007-10-30 2012-02-21 Sharp Laboratories Of America, Inc. Methods and systems for glyph-pixel selection
US20090110319A1 (en) * 2007-10-30 2009-04-30 Campbell Richard J Methods and Systems for Background Color Extrapolation
US8782516B1 (en) 2007-12-21 2014-07-15 Amazon Technologies, Inc. Content style detection
US20090279108A1 (en) * 2008-05-12 2009-11-12 Nagayasu Hoshi Image Processing Apparatus
US8572480B1 (en) 2008-05-30 2013-10-29 Amazon Technologies, Inc. Editing the sequential flow of a page
US9229911B1 (en) 2008-09-30 2016-01-05 Amazon Technologies, Inc. Detecting continuation of flow of a page
US9785327B1 (en) 2009-07-10 2017-10-10 Lexcycle, Inc. Interactive user interface
US9035887B1 (en) 2009-07-10 2015-05-19 Lexcycle, Inc Interactive user interface
US8347232B1 (en) 2009-07-10 2013-01-01 Lexcycle, Inc Interactive user interface
US9400769B2 (en) * 2009-08-06 2016-07-26 Hewlett-Packard Development Company, L.P. Document layout system
US20110035661A1 (en) * 2009-08-06 2011-02-10 Helen Balinsky Document layout system
US11074393B2 (en) 2009-10-14 2021-07-27 Iplcontent, Llc Method and apparatus to layout screens
US9330069B2 (en) * 2009-10-14 2016-05-03 Chi Fai Ho Layout of E-book content in screens of varying sizes
US11416668B2 (en) 2009-10-14 2022-08-16 Iplcontent, Llc Method and apparatus applicable for voice recognition with limited dictionary
US20220261531A1 (en) * 2009-10-14 2022-08-18 Iplcontent, Llc Method and apparatus to layout screens of varying sizes
US10503812B2 (en) 2009-10-14 2019-12-10 Iplcontent, Llc Method and apparatus for materials in different screen sizes using an imaging sensor
US11366955B2 (en) 2009-10-14 2022-06-21 Iplcontent, Llc Method and apparatus to layout screens of varying sizes
US10831982B2 (en) 2009-10-14 2020-11-10 Iplcontent, Llc Hands-free presenting device
US20110087955A1 (en) * 2009-10-14 2011-04-14 Chi Fai Ho Computer-aided methods and systems for e-books
US11630940B2 (en) 2009-10-14 2023-04-18 Iplcontent, Llc Method and apparatus applicable for voice recognition with limited dictionary
US20110173532A1 (en) * 2010-01-13 2011-07-14 George Forman Generating a layout of text line images in a reflow area
US8499236B1 (en) * 2010-01-21 2013-07-30 Amazon Technologies, Inc. Systems and methods for presenting reflowable content on a display
US20120054605A1 (en) * 2010-08-31 2012-03-01 Hillcrest Publishing Group, Inc. Electronic document conversion system
US8542926B2 (en) 2010-11-19 2013-09-24 Microsoft Corporation Script-agnostic text reflow for document images
US11204650B2 (en) 2010-12-23 2021-12-21 Intel Corporation Method, apparatus and system for interacting with content on web browsers
US10802595B2 (en) 2010-12-23 2020-10-13 Intel Corporation Method, apparatus and system for interacting with content on web browsers
US9575561B2 (en) * 2010-12-23 2017-02-21 Intel Corporation Method, apparatus and system for interacting with content on web browsers
US20120166974A1 (en) * 2010-12-23 2012-06-28 Elford Christopher L Method, apparatus and system for interacting with content on web browsers
US8543941B2 (en) 2011-02-24 2013-09-24 Google Inc. Electronic book contextual menu systems and methods
US9063641B2 (en) 2011-02-24 2015-06-23 Google Inc. Systems and methods for remote collaborative studying using electronic books
US10067922B2 (en) 2011-02-24 2018-09-04 Google Llc Automated study guide generation for electronic books
US9501461B2 (en) 2011-02-24 2016-11-22 Google Inc. Systems and methods for manipulating user annotations in electronic books
US8520025B2 (en) 2011-02-24 2013-08-27 Google Inc. Systems and methods for manipulating user annotations in electronic books
US9645986B2 (en) 2011-02-24 2017-05-09 Google Inc. Method, medium, and system for creating an electronic book with an umbrella policy
US9141404B2 (en) 2011-10-24 2015-09-22 Google Inc. Extensible framework for ereader tools
US9678634B2 (en) 2011-10-24 2017-06-13 Google Inc. Extensible framework for ereader tools
WO2013062666A1 (en) * 2011-10-24 2013-05-02 Google Inc. Extensible framework for ereader tools
US9031493B2 (en) 2011-11-18 2015-05-12 Google Inc. Custom narration of electronic books
US9734132B1 (en) * 2011-12-20 2017-08-15 Amazon Technologies, Inc. Alignment and reflow of displayed character images
US20130191728A1 (en) * 2012-01-20 2013-07-25 Steven Victor McKinney Systems, methods, and media for generating electronic books
US9965444B2 (en) 2012-01-23 2018-05-08 Microsoft Technology Licensing, Llc Vector graphics classification engine
US9990347B2 (en) 2012-01-23 2018-06-05 Microsoft Technology Licensing, Llc Borderless table detection engine
US10102187B2 (en) 2012-05-15 2018-10-16 Google Llc Extensible framework for ereader tools, including named entity information
US9069744B2 (en) 2012-05-15 2015-06-30 Google Inc. Extensible framework for ereader tools, including named entity information
US20150261740A1 (en) * 2012-10-16 2015-09-17 Heinz Grether Pc Text reading aid
CN105027142A (en) * 2012-10-16 2015-11-04 海因策格雷特尔Pc公司 A text reading aid
US10296559B2 (en) * 2012-12-18 2019-05-21 Canon Kabushiki Kaisha Display apparatus, control method therefor, and storage medium
US20140173394A1 (en) * 2012-12-18 2014-06-19 Canon Kabushiki Kaisha Display apparatus, control method therefor, and storage medium
WO2014098528A1 (en) * 2012-12-21 2014-06-26 Samsung Electronics Co., Ltd. Text-enlargement display method
US20140208191A1 (en) * 2013-01-18 2014-07-24 Microsoft Corporation Grouping Fixed Format Document Elements to Preserve Graphical Data Semantics After Reflow
US9953008B2 (en) * 2013-01-18 2018-04-24 Microsoft Technology Licensing, Llc Grouping fixed format document elements to preserve graphical data semantics after reflow by manipulating a bounding box vertically and horizontally
US10261575B2 (en) 2013-03-08 2019-04-16 Chi Fai Ho Method and apparatus to tell a story that depends on user attributes
US9400549B2 (en) 2013-03-08 2016-07-26 Chi Fai Ho Method and system for a new-era electronic book
US10606346B2 (en) 2013-03-08 2020-03-31 Iplcontent, Llc Method and apparatus to compose a story for a user depending on an attribute of the user
US11320895B2 (en) 2013-03-08 2022-05-03 Iplcontent, Llc Method and apparatus to compose a story for a user depending on an attribute of the user
US9323733B1 (en) 2013-06-05 2016-04-26 Google Inc. Indexed electronic book annotations
US9411790B2 (en) 2013-07-26 2016-08-09 Metrodigi, Inc. Systems, methods, and media for generating structured documents
US9542363B2 (en) 2014-01-31 2017-01-10 Konica Minolta Laboratory U.S.A., Inc. Processing of page-image based document to generate a re-targeted document for different display devices which support different types of user input methods
US20160140086A1 (en) * 2014-11-19 2016-05-19 Kobo Incorporated System and method for content repagination providing a page continuity indicium while e-reading
US10387541B2 (en) * 2015-01-29 2019-08-20 Hewlett-Packard Development Company, L.P. High quality setting of text for print, with full control over layout, using a web browser
US10049107B2 (en) * 2016-02-25 2018-08-14 Fuji Xerox Co., Ltd. Non-transitory computer readable medium and information processing apparatus and method
US10452748B2 (en) 2016-06-20 2019-10-22 Microsoft Technology Licensing, Llc Deconstructing and rendering of web page into native application experience
US10621427B2 (en) * 2016-11-29 2020-04-14 Canon Kabushiki Kaisha Information processing apparatus, storage medium, and information processing method for character recognition by setting a search area on a target image
US20180150689A1 (en) * 2016-11-29 2018-05-31 Canon Kabushiki Kaisha Information processing apparatus, storage medium, and information processing method
US20200250613A1 (en) * 2019-01-31 2020-08-06 Walmart Apollo, Llc System and method for dispatching drivers for delivering grocery orders and facilitating digital tipping
US10616443B1 (en) * 2019-02-11 2020-04-07 Open Text Sa Ulc On-device artificial intelligence systems and methods for document auto-rotation
US20210306517A1 (en) * 2019-02-11 2021-09-30 Open Text Sa Ulc On-device artificial intelligence systems and methods for document auto-rotation
US11509795B2 (en) * 2019-02-11 2022-11-22 Open Text Sa Ulc On-device artificial intelligence systems and methods for document auto-rotation
US20230049296A1 (en) * 2019-02-11 2023-02-16 Open Text Sa Ulc On-device artificial intelligence systems and methods for document auto-rotation
US11044382B2 (en) * 2019-02-11 2021-06-22 Open Text Sa Ulc On-device artificial intelligence systems and methods for document auto-rotation
US11847563B2 (en) * 2019-02-11 2023-12-19 Open Text Sa Ulc On-device artificial intelligence systems and methods for document auto-rotation
US11106858B2 (en) * 2020-01-16 2021-08-31 Adobe Inc. Merging selected digital point text objects while maintaining visual appearance fidelity
US11893338B2 (en) 2020-01-16 2024-02-06 Adobe Inc. Merging selected digital point text objects

Also Published As

Publication number Publication date
JP2004005453A (en) 2004-01-08
EP1343095A2 (en) 2003-09-10
EP1343095A3 (en) 2005-01-19
US10606933B2 (en) 2020-03-31
US20110289395A1 (en) 2011-11-24

Similar Documents

Publication Publication Date Title
US10606933B2 (en) Method and system for document image layout deconstruction and redisplay
US6694053B1 (en) Method and apparatus for performing document structure analysis
US6377704B1 (en) Method for inset detection in document layout analysis
CA2116600C (en) Methods and apparatus for inferring orientation of lines of text
US8645819B2 (en) Detection and extraction of elements constituting images in unstructured document files
EP0543598B1 (en) Method and apparatus for document image processing
US5491760A (en) Method and apparatus for summarizing a document without document image decoding
EP2545495B1 (en) Paragraph recognition in an optical character recognition (ocr) process
US9898548B1 (en) Image conversion of text-based images
US8520224B2 (en) Method of scanning to a field that covers a delimited area of a document repeatedly
US20070027749A1 (en) Advertisement detection
US8340425B2 (en) Optical character recognition with two-pass zoning
US20040202352A1 (en) Enhanced readability with flowed bitmaps
US11615635B2 (en) Heuristic method for analyzing content of an electronic document
US9008425B2 (en) Detection of numbered captions
US20190005325A1 (en) Identification of emphasized text in electronic documents
US8605297B2 (en) Method of scanning to a field that covers a delimited area of a document repeatedly
Breuel et al. Paper to PDA
Kumar et al. Line based robust script identification for Indian languages
JP3159087B2 (en) Document collation device and method
Breuel et al. Reflowable document images
EP0692768A2 (en) Full text storage and retrieval in image at OCR and code speed
KR102571209B1 (en) Documents comparison method and device
JP2020144754A (en) Information processing device and program
Coy A Look at Optoelectronic Document Processing

Legal Events

Date Code Title Description
AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BREUEL, THOMAS M.;BAIRD, HENRY S.;JANSSEN, WILLIAM C.;AND OTHERS;REEL/FRAME:013025/0904;SIGNING DATES FROM 20020723 TO 20020806

AS Assignment

Owner name: JPMORGAN CHASE BANK, AS COLLATERAL AGENT, TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:015134/0476

Effective date: 20030625

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION

AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. AS SUCCESSOR-IN-INTEREST ADMINISTRATIVE AGENT AND COLLATERAL AGENT TO JPMORGAN CHASE BANK;REEL/FRAME:066728/0193

Effective date: 20220822