US20100303382A1 - Data input system, data input receiving device, data input receiving method and computer readable medium - Google Patents
- Publication number
- US20100303382A1 (Application No. US 12/619,200)
- Authority
- US
- United States
- Prior art keywords
- original image
- image data
- data pieces
- character
- pieces
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00795—Reading arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/12—Detection or correction of errors, e.g. by rescanning the pattern
- G06V30/127—Detection or correction of errors, e.g. by rescanning the pattern with the intervention of an operator
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
- H04N1/00326—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a data reading, recognizing or recording apparatus, e.g. with a bar-code apparatus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
- H04N1/00326—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a data reading, recognizing or recording apparatus, e.g. with a bar-code apparatus
- H04N1/00328—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a data reading, recognizing or recording apparatus, e.g. with a bar-code apparatus with an apparatus processing optically-read information
- H04N1/00331—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a data reading, recognizing or recording apparatus, e.g. with a bar-code apparatus with an apparatus processing optically-read information with an apparatus performing optical character recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
- H04N1/00204—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
- H04N1/00326—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a data reading, recognizing or recording apparatus, e.g. with a bar-code apparatus
- H04N1/00328—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a data reading, recognizing or recording apparatus, e.g. with a bar-code apparatus with an apparatus processing optically-read information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/0008—Connection or combination of a still picture apparatus with another apparatus
- H04N2201/0034—Details of the connection, e.g. connector, interface
- H04N2201/0037—Topological details of the connection
- H04N2201/0039—Connection via a network
Definitions
- the present invention relates to a data input system, a data input receiving device, a data input receiving method, and a computer readable medium.
- a first aspect of the present invention is directed to a data input system that includes: an image reading device provided with: a reading unit for reading, on a form basis, an original image of each form filled with characters; a setting unit for extracting original image data pieces as a result of dividing, on a character basis, data of the original image of the form read by the reading unit, and setting identification information to each of the original image data pieces for defining positions thereof on the form; a generation unit for generating, for each of the original image data pieces extracted by the setting unit, character-associated information associated therewith; and an output control unit for making an output with a correlation among the original image data pieces, the identification information set to each of the original image data pieces, and the character-associated information generated for each of the original image data pieces; and a data input receiving device provided with: a display control unit for displaying any of the original image data pieces selected in accordance with a predetermined procedure from the original image data pieces provided by the output control unit; a receiving unit for receiving an input of text data for use to identify which character is represented
- FIG. 1 is a diagram showing an exemplary configuration of a data input system of the invention
- FIG. 2 is a diagram showing exemplary data in a form in a first exemplary embodiment
- FIG. 3 is a block diagram of an image reading device and that of a data input receiving device in the first exemplary embodiment
- FIG. 4 is a conceptual view of original image information in the first exemplary embodiment
- FIG. 5 shows exemplary data in a form information table in the first exemplary embodiment
- FIG. 6 is a flow diagram of an operation of the image reading device in the first exemplary embodiment
- FIG. 7 is a flow diagram of an operation of the data input receiving device in the first exemplary embodiment
- FIG. 8A is a diagram showing an exemplary screen for input of characters in the first exemplary embodiment
- FIG. 8B is a diagram showing an exemplary screen after the input of characters on the character input screen of FIG. 8A ;
- FIG. 9 is a diagram showing another exemplary data in the form information table in the first exemplary embodiment.
- FIG. 10 is a diagram showing an exemplary form in a second exemplary embodiment
- FIG. 11 shows exemplary data in a form information table in the second exemplary embodiment
- FIG. 12 is a flow diagram of an operation of the image reading device in the second exemplary embodiment
- FIG. 13 is a flow diagram of an operation of the data input receiving device in the second exemplary embodiment
- FIG. 14A is a diagram showing an exemplary screen for input of characters in the second exemplary embodiment
- FIG. 14B is a diagram showing an exemplary screen after the input of characters on the character input screen of FIG. 14A ;
- FIG. 15 is a diagram showing another exemplary data in the form information table in the second exemplary embodiment.
- FIG. 16 shows exemplary data in a word table in a third exemplary embodiment
- FIG. 17 shows exemplary data in a form information table in the third exemplary embodiment
- FIG. 18 is a flow diagram of an operation of the data input receiving device in the third exemplary embodiment.
- FIG. 19A is a diagram showing an exemplary screen for input of characters in the third exemplary embodiment.
- FIG. 19B is a diagram showing an exemplary screen after the input of characters on the character input screen of FIG. 19A ;
- FIG. 20 shows another exemplary data in the form information table in the third exemplary embodiment.
- FIG. 1 shows an exemplary configuration of a data input system in a first exemplary embodiment.
- a data input system 1 is configured to include an image reading device 100 , and a plurality of data input receiving devices 200 a to 200 n (hereinafter, referred to collectively as data input receiving devices 200 when no device discrimination is required).
- the image reading device 100 and the data input receiving devices 200 are connected to one another over a communications unit such as LAN (Local Area Network).
- the image reading device 100 is operated to read hand-written characters filled in a form 30 of a predetermined format as shown in FIG. 2 , i.e., segmented into form items 301 to 303 to be respectively filled with address, name, telephone number, and others.
- Input of text data (e.g., text code) is made using a word processor function, i.e., a kana-kanji conversion function, by a person who is in charge of data input for the form (hereinafter, such a person is referred to as "operator").
- the form in this exemplary embodiment is exemplified as being a paper medium segmented in advance into form items to be filled with details, but may not be restricted by type as long as being a medium filled with hand-written character strings in advance by a person concerned.
- FIG. 3 is a block diagram showing the configuration of the image reading device 100 , and that of the data input receiving device 200 , which are the configuration components of the data input system 1 . In the below, described are the configurations of such devices.
- the image reading device 100 is implemented by a scanner device, and is configured to include a CPU (Central Processing Unit) 110 , a ROM (Read Only Memory) 111 , a RAM (Random Access Memory) 112 , a receiving section 113 , an image reading section 114 , a storage section 115 , and a communications section 116 .
- the CPU 110 is operated to run a control program stored in the ROM 111 using the RAM 112 as a work area, thereby allowing components therein to function, i.e., a setting unit 110 a, a generation unit 110 b, and an output control unit 110 c, and controlling the components connected to the CPU 110 .
- the setting unit 110 a divides image data on the basis of a predetermined number of pixels for extraction.
- the image data here is a read result of each form by the image reading section 114 that will be described later, and such image data is hereinafter referred to as original image data.
- the division results of the original image data extracted as such (hereinafter, referred to as original image data pieces) are each set with identification information for identification use thereof.
- the generation unit 110 b extracts, from the hand-written characters in the original image data pieces extracted by the setting unit 110 a, any characteristics of each of the hand-written characters, and generates characteristics information thereabout for use as characters-associated information thereabout.
- the output control unit 110 c stores, into the storage section 115 , original image information including the original image data pieces extracted by the setting unit 110 a, and the identification information each thereof.
- the output control unit 110 c then correlates the identification information with the characteristics information about each of the original image data pieces generated by the generation unit 110 b, and writes the correlation result into a form information table. Note here that the unit of division and the characteristics of the hand-written characters will be described later.
- the receiving section 113 includes a power switch of the image reading device 100 , an operation switch for operating the image reading section 114 , and others. Such a receiving section 113 forwards information about the details of a user operation to the CPU 110 .
- the image reading section 114 performs scanning by directing a light onto the filling surface of the form 30 placed on the image reading device 100 , and sequentially forwards an electric signal to the CPU 110 .
- the electric signal is a result of photoelectric conversion applied to the lights reflected by the form and received by a CCD (Charge Coupled Device).
- the storage section 115 is configured by a nonvolatile storage medium such as hard disk, and stores the original image information and data such as form information table. Note here that a detailed description will be given later about the original image information and the form information table.
- the communications section 116 forwards/receives data to/from the data input receiving devices 200 under the control of the CPU 110 .
- the form items 301 to 303 are provided with fill-in areas 311 to 313 , respectively.
- the fill-in areas 311 to 313 are each segmented by broken lines into a plurality of areas each for a character.
- the original image data about the form 30 is subjected to raster scanning sequentially on a pixel basis starting from a pixel on the upper left. As a result, the original image data is divided on the basis of such a fill-in area for each of the form items so that original image data pieces each about a character are extracted.
- the original image data pieces to be extracted as such are the image data including the hand-written characters, and the fill-in areas are each subjected to an automatic determination to see whether or not the number of black pixels therein is equal to or larger than a predetermined value. The determination result is then used to check whether or not the fill-in areas include any hand-written characters.
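The automatic determination above can be pictured as a minimal Python sketch. The cell representation, threshold value, and function names are illustrative assumptions, not taken from the patent: a fill-in area is treated as a small binarized grid (0 = black, 1 = white), and a cell counts as filled when its black-pixel count meets the threshold.

```python
# Hypothetical sketch of the black-pixel determination: a fill-in area
# contains handwriting when the number of black (0) pixels meets or
# exceeds a predetermined threshold. Cell size and threshold are
# illustrative assumptions.

def is_filled(cell, threshold):
    """Return True when the cell's black-pixel count reaches the threshold."""
    black = sum(1 for row in cell for px in row if px == 0)
    return black >= threshold

def extract_filled_cells(cells, threshold=20):
    """Keep only the per-character cells that appear to contain handwriting."""
    return [c for c in cells if is_filled(c, threshold)]

# A blank cell (all white) and a "written" cell with scattered black pixels.
blank = [[1] * 10 for _ in range(10)]
written = [[0 if (r + c) % 3 == 0 else 1 for c in range(10)] for r in range(10)]
print(is_filled(blank, 20))    # False
print(is_filled(written, 20))  # True
```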
- the original image data pieces to be extracted are each set with identification information for identification use thereof.
- the identification information includes a form ID, an item ID, and an extraction order, and is set after the extraction of the original image data pieces.
- the form ID is for identifying which form includes which of the original image data pieces
- the item ID is for identifying which form item.
- the extraction order indicates the order of the original image data pieces extracted for each of the form items.
- the extraction results, i.e., the original image data pieces and the identification information, are stored with a correlation established therebetween.
- the identification information in this exemplary embodiment is configured in the extraction order as a result of raster scanning of the form, and includes information about the positions of the original image data pieces in the form.
- the form ID in the identification information may be an ID found in the form through character recognition, or a bar-code printed on the form found by reading, for example.
- the image reading device 100 may set a form ID based on the date or order of the reading process.
- an item ID set in advance may be assigned in the order of raster scanning performed considering the positions of the fill-in areas in the form.
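The identification information described above (form ID, item ID, extraction order) can be sketched as a small data structure. This is a minimal illustration with hypothetical names, assuming the "A001" / "a" values from the example that follows:

```python
from dataclasses import dataclass

# Hypothetical representation of the identification information set to each
# extracted original image data piece.
@dataclass(frozen=True)
class Identification:
    form_id: str           # e.g. "A001" -- identifies the source form
    item_id: str           # e.g. "a" -- identifies the form item
    extraction_order: int  # order of extraction within the fill-in area

def assign_identification(pieces, form_id, item_id):
    """Tag pieces in raster-scan order, mirroring how the setting unit
    assigns an extraction order per form item."""
    return [(piece, Identification(form_id, item_id, i + 1))
            for i, piece in enumerate(pieces)]

tagged = assign_identification(["piece_1", "piece_2"], "A001", "a")
print(tagged[0][1])  # Identification(form_id='A001', item_id='a', extraction_order=1)
```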
- FIG. 4 is a conceptual view of the original image information in which a correlation is established between the original image data pieces extracted from the form 30 of FIG. 2 , and the identification information.
- the fill-in area 311 is written with the character 東, which means "east" and is pronounced "tou", "higashi", or "azuma", in one of the areas segmented by broken lines, i.e., original image data piece 311 a.
- the original image data piece 311 a, being the character 東, is set with identification information 31 a, including a form ID "A001", an item ID "a", and an extraction order "1".
- identification information 31 b therefor includes a form ID “A001”, an item ID “a”, and an extraction order “2”.
- FIG. 5 shows exemplary data in the form information table.
- a form information table 32 includes, with a correlation, the characteristics information and the identification information about the original image data pieces.
- the characteristics information is about characteristics of hand-written characters in each of the original image data pieces.
- When the characteristics information is extracted for each of the original image data pieces, the extracted characteristics information and the identification information are stored as data in the form information table 32.
- the characteristics information about the hand-written characters in this exemplary embodiment is extracted by identifying the number of line segments configuring each of the hand-written characters, i.e., vertical line segments and horizontal line segments.
- the characteristic information extracted for this character includes one vertical line segment and four horizontal line segments.
- the characteristic information of “vertical 1: horizontal 4” is generated and stored.
- “vertical” denotes the number of vertical line segments
- “horizontal” denotes the number of horizontal segments.
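The line-segment counting can be illustrated with a toy model. In this minimal sketch (all names and the simplification are assumptions), a segment is approximated as a full row or column of black pixels in a small binary glyph; real handwriting would need a far more tolerant detector:

```python
# Toy sketch of the characteristics extraction: count vertical and horizontal
# line segments in a binary glyph (0 = black, 1 = white). A "segment" is
# crudely approximated as an entirely black row or column.

def characteristics(glyph):
    rows, cols = len(glyph), len(glyph[0])
    horizontal = sum(1 for r in range(rows)
                     if all(glyph[r][c] == 0 for c in range(cols)))
    vertical = sum(1 for c in range(cols)
                   if all(glyph[r][c] == 0 for r in range(rows)))
    return {"vertical": vertical, "horizontal": horizontal}

# A 5x5 glyph with one full horizontal stroke (row 0) and one full
# vertical stroke (column 2).
glyph = [
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
]
print(characteristics(glyph))  # {'vertical': 1, 'horizontal': 1}
```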
- the data input receiving device 200 is implemented by a personal computer or others, and is configured to include a CPU 210 , a ROM 211 , a RAM 212 , a storage section 213 , a receiving section 214 , a display section 215 , and a communications section 216 .
- the CPU 210 is operated to run a control program stored in the ROM 211 using the RAM 212 as a work area, thereby allowing components therein to function, i.e., an extraction unit 210 a, a display control unit 210 b, and a storage control unit 210 c, and controlling the components connected to the CPU 210 .
- the extraction unit 210 a extracts, from image data read by the image reading device 100 , information about a form being a data input target, i.e., form information, and original image information.
- the display control unit 210 b displays, on the display section 215 , any of the original image data pieces of a character selected at random from the form information extracted by the extraction unit 210 a.
- the storage control unit 210 c generates characteristics information about characters found in text data input by an operator, and verifies the resulting characteristic information against the characteristics information about the original image data piece displayed on the display section 215 .
- the storage control unit 210 c then executes a process in accordance with the verification result.
- the storage section 213 is configured by a nonvolatile storage medium such as hard disk, and stores various types of data such as application program and user data.
- the receiving section 214 is implemented by a ten-digit keypad, a keyboard, a mouse, and others. The receiving section 214 is operated by an operator to make an input, and forwards information about such user's input operation to the CPU 210 .
- the display section 215 is implemented by a display such as liquid crystal display, and displays images of various types of screens under the control of the CPU 210 , e.g., character input screen for an operator's input operation of characters.
- the communications section 216 forwards/receives data to/from the image reading device 100 under the control of the CPU 210 .
- the CPU 110 of the image reading device 100 reads characters filled in the form by the image reading section 114 directing a light thereto (step S 110 ).
- the CPU 110 then generates original image data based on an electric signal of the image being a read result (step S 111 ).
- the CPU 110 sequentially scans the original image data generated in step S 111 on the basis of a fill-in area set in advance to each form item. Such scanning is performed on a pixel basis, and the original image data is accordingly divided into original image data pieces.
- the original image data pieces are then each set with identification information (step S 112 ). That is, for each of the fill-in areas set to each form item in the form, values of the pixels therein are extracted, and these pixel values are checked to see whether the pixels are black pixels or not. For any of the fill-in areas determined that the number of black pixels therein is equal to or larger than a predetermined value, data of the fill-in area(s) is extracted as an original image data piece.
- the CPU 110 then generates identification information, including a form ID, an item ID, and an extraction order, for the original image data piece(s), and stores the identification information in the RAM 112 with a correlation with the original image data piece(s).
- the form ID is for identifying from which form the original image data piece(s) are extracted
- the item ID is for identifying which form item.
- the extraction order indicates the order of the original image data piece(s) extracted for the form item.
- the CPU 110 serves to sequentially read the original image data piece(s) and the identification information stored in the RAM 112 , and extract characteristics information about hand-written characters found in the original image data piece(s) read as such for each original image data piece (step S 113 ). That is, the CPU 110 detects the number of line segments each in vertical and horizontal for the character in each original image data piece, and generates characteristics information about each original image data piece using detection results thereof, i.e., the number of line segments each in vertical and horizontal. The CPU 110 then stores, in the storage section 115 , the original image information in which a correlation is established between the original image data piece(s) and the identification information. The CPU 110 also correlates the identification information set to each original image data piece(s) with the characteristics information extracted for each original image data piece in step S 113 , and stores the correlation result in the form information table 32 (step S 114 ).
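The form information table written in step S 114 can be pictured as a simple in-memory record set. This is an illustrative sketch with hypothetical field names: each record correlates identification information with characteristics information, and the input-data column stays empty until an operator later supplies text.

```python
# Hypothetical in-memory model of the form information table (steps S113-S114):
# one record per original image data piece, correlating identification
# information with characteristics information. "input_data" is filled later.

def build_form_table(pieces):
    """pieces: iterable of (identification_tuple, characteristics_string)."""
    return [{"id": ident, "characteristics": chars, "input_data": None}
            for ident, chars in pieces]

table = build_form_table([
    (("A001", "a", 1), "vertical 1: horizontal 4"),
    (("A001", "a", 2), "vertical 2: horizontal 2"),
])
print(table[0]["input_data"])  # None
```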
- In step S 210, when an operator operates the receiving section 214 to issue a command for data input (step S 210: YES), the CPU 210 accesses the storage section 115 of the image reading device 100 via the communications section 216, thereby reading, in the order of form ID, any form information from the form information table 32 that is not yet provided with data.
- the reading result is stored in the RAM 212 (step S 211).
- the process started by such accessing is executed, for reading data from the storage section 115 of the image reading device 100 , by forwarding a request of data reading to the image reading device 100 after specifying which data is to be read from the data input receiving device 200 , and by the CPU 110 of the image reading device 100 reading the requested data from the storage section 115 for transmission to the data input receiving device 200 via the communications section 116 .
- the process is executed by the data input receiving device 200 forwarding a request of data writing to the image reading device 100 indicating which data is to be written to where, and by the CPU 110 of the image reading device 100 storing the requested data at the specified position.
- accessing means such process details.
- the CPU 210 selects, in step S 211 , at random any of the identification information stored in the RAM 212 about the form information, and then accesses the storage section 115 of the image reading device 100 via the communications section 216 , thereby reading any of the original image data pieces correlated with the selected identification information.
- the CPU 210 then displays a character input screen for an input target being the original image data pieces read as such (step S 212 ).
- FIG. 8A shows an exemplary character input screen 33 to be displayed in step S 212 .
- the character input screen 33 of FIG. 8A shows the character 東 as an input target 33 a.
- the character 東 is of the original image data piece 311 a whose identification information includes the form ID "A001", the item ID "a" corresponding to the form item "address", and the extraction order "1".
- an operator inputs the same character as the input target 33 a, i.e., 東, in an input data field 33 b on the character input screen 33.
- when the operator operates the receiving section 214 to input characters on the character input screen 33 (step S 213: YES), the CPU 210 generates characteristics information by detecting the number of vertical and horizontal line segments for the input character 東 (step S 214).
- the characteristics information about the input character 東 is "vertical 1: horizontal 4" because the number of vertical line segments is 1, and the number of horizontal line segments is 4.
- the CPU 210 then reads the characteristics information stored in the RAM 212 about the form information, and determines whether or not predetermined verification requirements are satisfied by the characteristics information about the character being an input target, and the characteristics information about the character input as above (step S 215 ). Note that, in this exemplary embodiment, such verification requirements are about a matching between the characteristics information about the input character and the characteristics information set to the original image data piece on display.
- When determining in step S 215 that the verification requirements about the characteristics information are satisfied, i.e., determining that there is a matching between the characteristics information (step S 215: YES), the CPU 210 stores, in the form information stored in the RAM 212, the character input on the character input screen 33 as input data for the identification information correlated with the characteristics information determined in step S 215 as being the same. The CPU 210 then accesses the storage section 115 of the image reading device 100 via the communications section 216, and writes the resulting form information to the form information table 32 for update (step S 216).
- FIG. 9 shows an example of the form information table 32 updated as such.
- any input data 321 whose characteristics information is "vertical 1: horizontal 4" is provided with the text data of the character input on the character input screen 33, and the table is updated accordingly.
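The verification-and-update logic of steps S 215 and S 216 can be sketched as follows. This is a minimal illustration with hypothetical names; the table rows mimic the characteristics column of the form information table, and "east" stands in for the typed character:

```python
# Sketch of steps S215-S216: the characteristics of the typed character are
# verified against those recorded for the displayed piece; on a match, every
# not-yet-filled record sharing those characteristics is filled in one pass.

def verify_and_update(table, displayed_chars, typed_chars, text):
    """Return True and fill matching rows when the characteristics agree."""
    if typed_chars != displayed_chars:
        return False  # no matching: ask the operator to retype (S215: NO)
    for row in table:
        if row["characteristics"] == displayed_chars and row["input_data"] is None:
            row["input_data"] = text
    return True

table = [
    {"characteristics": "vertical 1: horizontal 4", "input_data": None},
    {"characteristics": "vertical 1: horizontal 4", "input_data": None},
    {"characteristics": "vertical 2: horizontal 1", "input_data": None},
]
ok = verify_and_update(table, "vertical 1: horizontal 4",
                       "vertical 1: horizontal 4", "east")
print(ok, [r["input_data"] for r in table])  # True ['east', 'east', None]
```

Note how one input operation propagates to every piece with the same characteristics, which is the labor-saving effect the embodiment describes.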
- In step S 217, when the form information stored in the RAM 212 is provided with the input data in its entirety, or when the operator operates the receiving section 214 to end the data input process (step S 217: YES), the CPU 210 ends the data input process for the currently selected form.
- In step S 217, when the form information stored in the RAM 212 is not yet stored with input data, or when the operator does not yet operate the receiving section 214 to end the data input process (step S 217: NO), the procedure returns to step S 212, and the CPU 210 accesses the storage section 115 of the image reading device 100 via the communications section 216. The CPU 210 then reads, at random, any of the original image data pieces correlated to the identification information not yet provided with input data, and displays the characters of the original image data pieces read as such on the character input screen 33. As such, the processes in step S 212 and thereafter are repeatedly executed.
- In step S 210, the CPU 210 waits until the operator issues a command for the data input process through operation of the receiving section 214 (step S 210: NO).
- In step S 213, until the operator inputs any character on the character input screen 33 through operation of the receiving section 214 (step S 213: NO), the CPU 210 waits with the character input screen 33 remaining displayed.
- When the CPU 210 determines in step S 215 that the verification requirements are not satisfied by the characteristics information about the original image data piece being an input target and the characteristics information about the input character, i.e., determines that there is no matching between the characteristics information (step S 215: NO), the procedure repeats the processes in step S 213 and thereafter. Note that, in this case, a message is displayed asking the operator to input the characters again on the character input screen 33, whereby the CPU 210 becomes ready for the operator's input operation.
- original image data pieces of each character string filled in a form are displayed at random on a character basis, and with such a display, advantageously, the details of the addresses and names filled in the form items do not make much sense to an operator. Moreover, once the operator inputs a character, e.g., 東, for an original image data piece, the input is applied also to any other original image data pieces of the character 東 in the same form, so the operator no longer needs to repeatedly input the same character.
- original image data is subjected to character recognition, and the resulting characters completed with recognition (hereinafter, referred to as recognized characters) are arranged in a predetermined order depending on the type of characters, e.g., numerical values, or alphabetic characters.
- the original image data pieces correlated to the recognized characters arranged as such are then displayed for an input operation of characters by an operator.
- any configuration similar to that in the first exemplary embodiment above is provided with the same reference numeral.
- FIG. 10 shows an exemplary form in the second exemplary embodiment.
- a form 40 of FIG. 10 is segmented into form items 401 to 403 to be respectively filled with zip code, address, and name.
- These form items 401 to 403 are respectively provided with fill-in areas 411 to 413 each segmented by broken lines into a plurality of areas each for a character.
- The generation unit 110 b applies pattern matching to any original image data piece whose identification information includes an item ID indicating the form item 401 of "zip code" to be filled with numerical values, thereby identifying which numerical value corresponds to the original image data piece.
- the output control unit 110 c then stores the identified numerical value into a form information table 42 as text information about the original image data piece.
- FIG. 11 shows an exemplary form information table in this exemplary embodiment. As shown in the drawing, similarly to the form information table 32 in the first exemplary embodiment, in the form information table 42 , a correlation is established between the identification information set to each of the original image data pieces, i.e., form ID, item ID, and extraction order, with the recognized characters.
- These recognized characters are those identified as a result of the pattern matching applied to the hand-written characters of the original image data pieces, i.e., the character-associated information correlated to each of the hand-written characters.
- At the time of the character recognition process for each of the original image data pieces, this form information table 42 is stored with the identification information of the original image data piece completed with character recognition and the resulting recognized character.
- The storage section 115 stores original image information in which the original image data pieces are correlated with the identification information, and is also provided therein in advance with data for use with pattern matching. Such data is hereinafter referred to as verification text data.
- By referring to FIGS. 12 and 13, described next is the operation of the data input system 1 in the second exemplary embodiment.
- In FIGS. 12 and 13, any process similar to that in the first exemplary embodiment described above is provided with the same step number, and any operation different from that in the first exemplary embodiment is described by referring to the examples of FIGS. 10 and 11. Described first is the operation of the image reading device 100.
- the CPU 110 of the image reading device 100 executes the processes in steps S 110 to S 112 .
- the CPU 110 then reads the verification text data from the storage section 115 , and using the verification text data, performs character recognition by applying pattern matching to any original image data piece whose identification information includes an item ID “a”, thereby identifying a numerical value completed with the character recognition (step S 123 ).
- the CPU 110 then stores the original image information in which the original image data pieces are correlated with the identification information into the storage section 115 .
- the CPU 110 also stores, in the form information table 42 , the identification information about any original image data piece including an item ID “a”, and the numerical value identified for the original image data piece (step S 124 ).
- the CPU 210 of the data input receiving device 200 executes the processes in steps S 210 and S 211 , and reads any form information of a form ID “B001” from the image reading device 100 for storage into the RAM 212 .
- the CPU 210 sorts, in the ascending order, any recognized characters correlated with an item ID “a” in the form information stored in the RAM 212 , and accesses the storage section 115 of the image reading device 100 via the communications section 216 for reading of any original image data piece whose identification information includes an item ID “a”.
- the CPU 210 displays, on the character input screen, the original image data pieces correlated with the identification information about the recognized characters completed with sorting (step S 221 ).
- FIG. 14A shows an exemplary character input screen 43 to be displayed in step S 221 .
- the recognized characters “5”, “6”, “7”, “1”, “2”, “4”, and “3” correlated with the item ID “a” under the form ID “B001” are sorted in the ascending order, and original image data pieces of the hand-written numbers correlated to the recognized characters, i.e., “1”, “2”, “3”, “4”, “5”, “6”, and “7”, are displayed in an input target 43 a.
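The ordering in step S 221 can be sketched as below, assuming the recognized characters and the references to their original image data pieces are held as simple pairs (the names are illustrative, not the patent's data structures):

```python
# Sketch of step S 221 (data layout assumed): sort the recognized characters
# correlated with item ID "a" in ascending order, then list the original image
# data pieces in the sorted order for display in the input target.

records = [  # (recognized character, reference to its original image data piece)
    ("5", "piece_5"), ("6", "piece_6"), ("7", "piece_7"),
    ("1", "piece_1"), ("2", "piece_2"), ("4", "piece_4"), ("3", "piece_3"),
]
ordered = sorted(records, key=lambda r: r[0])    # ascending by recognized character
display_order = [piece for _, piece in ordered]  # pieces shown in the input target
```

With the example of FIG. 14A, the pieces for the hand-written "1" through "7" come out in ascending order regardless of the order in which they were extracted from the form.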
- As shown in FIG. 14B, on such a character input screen 43, the operator inputs the numerical values displayed in the input target 43 a into an input data field 43 b.
- step S 213 when the operator operates the receiving section 214 to make an input of characters on the character input screen 43 (step S 213 : YES), the CPU 210 stores numerical data in the form information stored in the RAM 212 as input data about the identification information set to each of the original image data pieces displayed on the character input screen 43 . The CPU 210 then accesses the storage section 115 of the image reading device 100 via the communications section 216 for writing of the form information thereabout into the form information table 42 , thereby updating the information (step S 216 ).
- FIG. 15 shows an example of the form information table 42 updated as such.
- an input data field 421 for the item ID “a” under the form ID “B001” is stored with the numerical data input on the character input screen 43 .
- characters such as numerical values found in a form are displayed after sorting in a predetermined order, i.e., ascending or descending order, and an operator makes data input while referring to the characters completed with sorting as such.
- In the second exemplary embodiment described above, the targets for character recognition are numerical values. Alternatively, the targets for character recognition may be alphabetical characters or Japanese characters. When the identified characters are all alphabetical characters, such recognized characters may be sorted in alphabetical order before display. When the identified characters are all Japanese characters, such recognized characters may be sorted in the order of text code, or, when those are Japanese numerical values, these may be sorted in the ascending or descending order.
- In the second exemplary embodiment, the recognized characters are sorted in a predetermined order, i.e., the original image data pieces each being a numerical value are sorted in a predetermined order for display, and the resulting data is input.
- In a third exemplary embodiment, any characters not being numerical values are subjected to character recognition, and from a word table stored in advance in the data input receiving device 200, any word including the recognized characters is extracted for display of the original image data pieces in the order of the extracted word.
- FIG. 16 shows an exemplary word table in such a third exemplary embodiment.
- a word table 50 includes an element of “word ID” for identification use of words, and words of various classes, i.e., noun, adjective, and verb.
- This word table 50 is stored in advance in the storage section 213 in the data input receiving device 200.
- FIG. 17 shows the form information table 42 similarly to that in the second exemplary embodiment.
- This form information table 42 exemplarily includes recognized characters completed with identification as a result of character recognition applied to the original image data pieces whose identification information includes an item ID corresponding to the form items 402 and 403, i.e., address and name.
- the storage section 115 of the image reading device 100 is stored with original image information in which the original image data pieces are correlated with the identification information.
- the storage section 115 is also provided in advance with verification text data for pattern matching use.
- FIG. 18 is a flow diagram of the operation of the data input receiving device 200. In the below, a description is given by referring to the examples of FIGS. 10, 16, and 17.
- the CPU 210 of the data input receiving device 200 executes the processes in steps S 210 and S 211 , and reads form information about the form ID “B001” from the form information table 42 for storage into the RAM 212 .
- From the identification information in the form information, assumed here is that the CPU 210 selects the recognized character 静, which means "calmness" and is pronounced as "shizu", "sei", or "jou", correlated to any identification information not yet provided with input data, i.e., identification information with a form ID "B001", an item ID "c", and an extraction order "3" (step S 231).
- Such a recognized character is hereinafter referred to as a not-yet-input recognized character.
- The CPU 210 then reads the word table 50 from the storage section 213, and extracts therefrom any word including the selected not-yet-input recognized character, i.e., the word 静岡県, which means "Shizuoka Prefecture" and is pronounced as "shizuoka ken" (step S 232).
- In step S 233, using the other not-yet-input recognized characters correlated with the identification information in the form information, the CPU 210 determines whether or not the extracted word 静岡県 can be configured thereby (step S 233).
- In step S 233, when determining that the extracted word 静岡県 can be configured by the not-yet-input recognized character 静 and other not-yet-input recognized characters, i.e., 岡, which means "hill" and is pronounced as "oka" or "kou", and 県, which means "prefecture" and is pronounced as "ken", "gen", or "agata" (step S 233: YES), the CPU 210 accesses the storage section 115 of the image reading device 100 via the communications section 216.
- Herein, the not-yet-input recognized character 岡 is the one correlated with the identification information including a form ID "B001", an item ID "d", and an extraction order "1", and
- the not-yet-input recognized character 県 is the one correlated with the identification information including a form ID "B001", an item ID "c", and an extraction order "4".
- the CPU 210 then reads any of the original image data pieces corresponding to such not-yet-input recognized characters, respectively, and displays on the character input screen the original image data pieces read as such in the order of the word (step S 234 ).
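Steps S 231 to S 234 above can be sketched as below; the dictionary layout, the piece names, and the helper name are assumptions for illustration, not the patent's implementation:

```python
# Sketch of steps S 231 to S 234 (data layout assumed): for a word taken from
# the word table, check whether it can be configured from the not-yet-input
# recognized characters, and if so, emit the correlated original image data
# pieces in the order of the word's characters.

def configure_word(word, not_yet_input):
    """not_yet_input: dict mapping recognized character -> image data piece."""
    if not all(ch in not_yet_input for ch in word):
        return None  # step S 233: NO -- move on to the next word in the table
    return [not_yet_input[ch] for ch in word]  # pieces in the order of the word

not_yet_input = {"静": "piece_c3", "岡": "piece_d1", "県": "piece_c4"}
assert configure_word("静岡県", not_yet_input) == ["piece_c3", "piece_d1", "piece_c4"]
assert configure_word("東京都", not_yet_input) is None  # cannot be configured
```

Note how the pieces come out in the order of the word (item IDs "c", "d", "c"), not in their extraction order, which is the point of the third exemplary embodiment.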
- FIG. 19A shows an exemplary character input screen 51 displayed as such.
- Input targets 511 to 513 display the original image data pieces in the order of the selected word.
- As shown in FIG. 19B, in fields of input data 521 to 523 respectively corresponding to the input targets 511 to 513, the same characters as those in the input targets 511 to 513 are input.
- In step S 213, when characters are input on the character input screen 51 thus displayed (step S 213: YES), the CPU 210 executes the process in step S 216 and thereafter similarly to the second exemplary embodiment.
- FIG. 20 shows an example of the form information table 42 completed with the process in step S 216 in the above example.
- The text data for 静, 岡, and 県 provided by the operator on the character input screen 51 is stored in input data 422 corresponding to the identification information set to the original image data pieces on display.
- In step S 233, when determining that the extracted word cannot be configured (step S 233: NO), the CPU 210 repeats the processes in steps S 232 and S 233, i.e., keeps performing word extraction one by one from the word table 50 (step S 232) until every word in the word table 50 is completed with the determination in step S 233 (step S 235: NO).
- step S 235 when every word in the word table 50 is completed with the determination in step S 233 (step S 235 : YES), the CPU 210 accesses the storage section 115 of the image reading device 100 via the communications section 216 , and displays any original image data pieces on the character input screen in the order of the word (step S 236 ).
- The original image data pieces here are those corresponding to any not-yet-input recognized characters that can be a part of the extracted word, and to any input recognized characters. That is, when the word extracted last cannot be configured only by the not-yet-input recognized characters, the word is configured by combining input recognized characters therewith, thereby displaying the original image data pieces respectively corresponding to those recognized characters.
- any data completed with input is provided into the field of input data corresponding to the original image data pieces also completed with input.
- any original image data pieces partially corresponding to the recognized characters being a part of the word may be displayed.
- With the word 静岡県, for example, if the recognized character 岡 is not found in the form information, only the original image data pieces being the characters 静 and 県 are displayed.
- As such, in the third exemplary embodiment, the characters found in the form are displayed in the order of a word making sense.
- Alternatively, any density difference observed in a fill-in area between the area of the character and the remaining area may be extracted so that the outline of the character is detected, and information about the detected outline may be extracted for use as characteristics information.
- a character is categorized through density detection of black pixels appearing along the four sides around the character in a fill-in area, and information about the resulting category may be generated for use as characteristics information.
- The characteristics information generated as such may be stored with a correlation with the characteristics information in the first exemplary embodiment, and when the verification requirements are satisfied, e.g., when the characteristics information about any input character and the characteristics information about any original image data piece both fall within a predetermined range of similarity, the input character may be stored with a correlation with the identification information set to the original image data piece.
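The verification requirement described above might look like the following sketch, where the "predetermined range of similarity" is an assumed tolerance on the line-segment counts (the function name and representation are illustrative):

```python
# Sketch of the verification requirement (threshold assumed): the characteristics
# information of the input character and of the displayed original image data
# piece are compared, and the input is accepted only when the counts fall
# within a predetermined range of similarity.

def satisfies_verification(piece_info, input_info, tolerance=1):
    """Each info is a (vertical, horizontal) line-segment count."""
    return all(abs(a - b) <= tolerance for a, b in zip(piece_info, input_info))

assert satisfies_verification((1, 4), (1, 4)) is True   # exact match: accepted
assert satisfies_verification((1, 4), (3, 1)) is False  # step S 215: NO, retry
```

When the result is False, the flow would correspond to step S 215: NO, where the operator is asked to input the character again.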
- data input receiving devices 200 a and 200 b may be respectively input with text data in any same form by two different operators. If this is the case, in the first and third exemplary embodiments, after extracting any original image data pieces corresponding to identification information not yet provided with data in the form information table, the data input receiving devices 200 a and 200 b may each set a use flag in the form information table to indicate that the identification information is in use, and may display the extracted original image data pieces.
- the data input receiving devices 200 a and 200 b each receive an input of text data in a form on a form item basis, and set a use flag to any identification information including an item ID being an input target in the form information table.
- the data input receiving devices 200 a and 200 b then each sort the original image data pieces under the item ID being an input target in a predetermined order for display.
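The use-flag handling for two operators could be sketched as below; the table layout, the key tuple, and the device names are assumptions for illustration:

```python
# Sketch of the use flag (table layout assumed): before displaying pieces, a
# data input receiving device marks the identification information it is working
# on, so that the other device skips the same pieces.

form_table = {
    ("B001", "a", 1): {"in_use": False, "input": None},
    ("B001", "a", 2): {"in_use": False, "input": None},
}

def claim_pieces(table, device_name):
    claimed = []
    for key, row in table.items():
        if row["input"] is None and not row["in_use"]:
            row["in_use"] = device_name  # set the use flag before display
            claimed.append(key)
    return claimed

first = claim_pieces(form_table, "200a")   # device 200a claims both rows
second = claim_pieces(form_table, "200b")  # device 200b finds nothing left
```

In a real multi-device setup the flag would have to be set atomically on the shared form information table, which this single-process sketch does not attempt to show.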
- the identification information to be set may indicate the positions of the original image data pieces, e.g., coordinates of the original image data piece in a form.
- the data input receiving device 200 is exemplified as storing text data provided by an operator for the original image data pieces displayed on the display section 215 with a correlation also with identification information set to any other original image data pieces including the character-associated information corresponding to the original image data pieces on display.
- the data input receiving device 200 may store, with a correlation, text data provided only for the identification information set to the original image data pieces on display.
- Alternatively, the input text data may be stored with a correlation with the identification information set to the other original image data pieces.
- the image reading device 100 is exemplified as storing, in the storage section 115 therein, with a correlation, the original image data pieces in each form completed with reading, the identification information set to each of the original image data pieces, and the character-associated information corresponding to each of the original image data pieces, i.e., characteristics information and recognized characters.
- the image reading device 100 may be so configured as to forward such data to any other device connected to the image reading device 100 and the data input receiving device 200 , and store the data in such a device.
- the data input receiving device 200 may access the device storing the character-associated information (characteristics information and recognized characters) with a correlation with the original image data pieces, the identification information set to each of the original image data pieces, and the character-associated information corresponding to each of the original image data pieces, thereby performing reading of such data and writing of input text data.
- the data input system 1 in the first to third exemplary embodiments described above is exemplarily configured by the image reading device 100 and the data input receiving devices 200 .
- the image reading device 100 may be so configured as to serve also as the data input receiving device 200 , and in the resulting image reading device 100 , any input of text data may be provided by an operator for data input.
- the form exemplified in the first to third exemplary embodiments above is segmented in advance into form items including fill-in areas to be filled with details of information.
- original image data pieces may be extracted as below. That is, for example, the image reading device 100 may scan a medium filled with hand-written characters at predetermined intervals to detect a distribution of black pixels, thereby defining the space between characters filled in the medium. Thereafter, based on the character space defined as such, the image reading device 100 may extract original image data pieces on a character basis from original image data of the medium.
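The black-pixel-distribution segmentation described above could be sketched as below, assuming the scan has been reduced to per-column black-pixel counts (the function name and data representation are assumptions):

```python
# Sketch of character segmentation without preprinted fill-in areas (data
# representation assumed): scan the pixel columns of a text line, and treat
# runs of columns containing black pixels as characters, separated by the
# all-white gaps between them.

def split_characters(columns):
    """columns: per-column count of black pixels across the scanned line."""
    pieces, start = [], None
    for x, count in enumerate(columns):
        if count > 0 and start is None:
            start = x                      # a character begins
        elif count == 0 and start is not None:
            pieces.append((start, x))      # a gap ends the character
            start = None
    if start is not None:
        pieces.append((start, len(columns)))
    return pieces  # (start, end) column ranges, one per extracted piece

assert split_characters([0, 3, 5, 0, 0, 2, 4, 1, 0]) == [(1, 3), (5, 8)]
```

Each returned column range would then be cut out of the original image data of the medium as one original image data piece.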
- the data input receiving device 200 in the first to third exemplary embodiments described above is exemplified as including the display section 215 .
- Alternatively, the data input receiving device 200 may be connected with an external display device such as a liquid crystal display. If this is the configuration, the CPU 210 of the data input receiving device 200 may perform control to display the image of a character input screen as in the above exemplary embodiments on the display device.
- The program to be run by the CPUs 110 and 210 of the image reading device 100 and the data input receiving device 200, respectively, in the first to third exemplary embodiments above can be distributed in the form of a computer-readable recording medium such as a magnetic recording medium (e.g., magnetic tape or magnetic disk), an optical recording medium (e.g., optical disk), a magneto-optical recording medium, or a semiconductor memory.
- The program may also be downloaded to the image reading device 100 and the data input receiving device 200 over a communications network such as the Internet.
Abstract
Description
- This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2009-126649 filed May 26, 2009.
- 1. Technical Field
- The present invention relates to a data input system, a data input receiving device, a data input receiving method, and a computer readable medium.
- 2. Related Art
- For data input of forms filled with personal information such as addresses and names, various technologies have been proposed for preventing leakage of such personal information.
- A first aspect of the present invention is directed to a data input system that includes: an image reading device provided with: a reading unit for reading, on a form basis, an original image of each form filled with characters; a setting unit for extracting original image data pieces as a result of dividing, on a character basis, data of the original image of the form read by the reading unit, and setting identification information to each of the original image data pieces for defining positions thereof on the form; a generation unit for generating, for each of the original image data pieces extracted by the setting unit, character-associated information associated therewith; and an output control unit for making an output with a correlation among the original image data pieces, the identification information set to each of the original image data pieces, and the character-associated information generated for each of the original image data pieces; and a data input receiving device provided with: a display control unit for displaying any of the original image data pieces selected in accordance with a predetermined procedure from the original image data pieces provided by the output control unit; a receiving unit for receiving an input of text data for use to identify which character is represented by each of the original image data pieces displayed by the display control unit; and a storage control unit for making storage by correlating the text data received by the receiving unit with the original image data pieces displayed by the display control unit and the identification information set to each of the original image data pieces.
- Exemplary embodiments of the present invention will be described in detail based on the following figures, wherein:
- FIG. 1 is a diagram showing an exemplary configuration of a data input system of the invention;
- FIG. 2 is a diagram showing exemplary data in a form in a first exemplary embodiment;
- FIG. 3 is a block diagram of an image reading device and that of a data input receiving device in the first exemplary embodiment;
- FIG. 4 is a conceptual view of original image information in the first exemplary embodiment;
- FIG. 5 shows exemplary data in a form information table in the first exemplary embodiment;
- FIG. 6 is a flow diagram of an operation of the image reading device in the first exemplary embodiment;
- FIG. 7 is a flow diagram of an operation of the data input receiving device in the first exemplary embodiment;
- FIG. 8A is a diagram showing an exemplary screen for input of characters in the first exemplary embodiment;
- FIG. 8B is a diagram showing an exemplary screen after the input of characters on the character input screen of FIG. 8A;
- FIG. 9 is a diagram showing another exemplary data in the form information table in the first exemplary embodiment;
- FIG. 10 is a diagram showing an exemplary form in a second exemplary embodiment;
- FIG. 11 shows exemplary data in a form information table in the second exemplary embodiment;
- FIG. 12 is a flow diagram of an operation of the image reading device in the second exemplary embodiment;
- FIG. 13 is a flow diagram of an operation of the data input receiving device in the second exemplary embodiment;
- FIG. 14A is a diagram showing an exemplary screen for input of characters in the second exemplary embodiment;
- FIG. 14B is a diagram showing an exemplary screen after the input of characters on the character input screen of FIG. 14A;
- FIG. 15 is a diagram showing another exemplary data in the form information table in the second exemplary embodiment;
- FIG. 16 shows exemplary data in a word table in a third exemplary embodiment;
- FIG. 17 shows exemplary data in a form information table in the third exemplary embodiment;
- FIG. 18 is a flow diagram of an operation of the data input receiving device in the third exemplary embodiment;
- FIG. 19A is a diagram showing an exemplary screen for input of characters in the third exemplary embodiment;
- FIG. 19B is a diagram showing an exemplary screen after the input of characters on the character input screen of FIG. 19A; and
- FIG. 20 shows another exemplary data in the form information table in the third exemplary embodiment.
- In the below, described is a data input system in exemplary embodiments of the invention by referring to the accompanying drawings.
- FIG. 1 shows an exemplary configuration of a data input system in a first exemplary embodiment. As shown in FIG. 1, a data input system 1 is configured to include an image reading device 100, and a plurality of data input receiving devices 200 a to 200 n (hereinafter, referred to collectively as data input receiving devices 200 when no device discrimination is required). The image reading device 100 and the data input receiving devices 200 are connected to one another over a communications unit such as a LAN (Local Area Network).
- In the data input system 1 in this exemplary embodiment, the image reading device 100 is operated to read hand-written characters filled in a form 30 of a predetermined format as shown in FIG. 2, i.e., a form segmented into form items 301 to 303 to be respectively filled with address, name, telephone number, and others. In the data input receiving devices 200, by referring to the hand-written characters read as such, text data, e.g., text code, is input for use to identify the characters. Such data input is made using a word processor function, i.e., a kana-kanji conversion function, by a person who is in charge of data input of the form (hereinafter, such a person is referred to as an "operator").
- Note here that the form in this exemplary embodiment is exemplified as being a paper medium segmented in advance into form items to be filled with details, but is not restricted by type as long as it is a medium filled in advance with hand-written character strings.
- FIG. 3 is a block diagram showing the configuration of the image reading device 100 and that of the data input receiving device 200, which are the configuration components of the data input system 1. In the below, described are the configurations of such devices.
- The image reading device 100 is implemented by a scanner device, and is configured to include a CPU (Central Processing Unit) 110, a ROM (Read Only Memory) 111, a RAM (Random Access Memory) 112, a receiving section 113, an image reading section 114, a storage section 115, and a communications section 116. The CPU 110 is operated to run a control program stored in the ROM 111 using the RAM 112 as a work area, thereby allowing components therein to function, i.e., a setting unit 110 a, a generation unit 110 b, and an output control unit 110 c, and controlling the components connected to the CPU 110.
- That is, the setting unit 110 a divides image data on the basis of a predetermined number of pixels for extraction. The image data here is a read result of each form by the image reading section 114 that will be described later, and such image data is hereinafter referred to as original image data. The division results of the original image data extracted as such (hereinafter referred to as original image data pieces) are each set with identification information for identification use thereof. The generation unit 110 b extracts, from the hand-written characters in the original image data pieces extracted by the setting unit 110 a, any characteristics of each of the hand-written characters, and generates characteristics information thereabout for use as character-associated information. The output control unit 110 c stores, into the storage section 115, original image information including the original image data pieces extracted by the setting unit 110 a and the identification information of each thereof. The output control unit 110 c then correlates the identification information with the characteristics information about each of the original image data pieces generated by the generation unit 110 b, and writes the correlation result into a form information table. Note here that the unit of division and the characteristics about the hand-written characters will be described later.
- The receiving section 113 includes a power switch of the image reading device 100, an operation switch for operating the image reading section 114, and others. Such a receiving section 113 forwards information about the details of a user operation to the CPU 110. The image reading section 114 performs scanning by directing a light onto the filling surface of the form 30 placed on the image reading device 100, and sequentially forwards an electric signal to the CPU 110. The electric signal is a result of photoelectric conversion applied to the lights reflected by the form and received by a CCD (Charge Coupled Device).
- The storage section 115 is configured by a nonvolatile storage medium such as a hard disk, and stores the original image information and data such as the form information table. Note here that a detailed description will be given later about the original image information and the form information table. The communications section 116 forwards/receives data to/from the data input receiving devices 200 under the control of the CPU 110.
- Described next is data for storage into the storage section 115. First of all, described are the original image data pieces in the first exemplary embodiment.
- As shown in FIG. 2, in the form 30, the form items 301 to 303 are provided with fill-in areas 311 to 313, respectively. The fill-in areas 311 to 313 are each segmented by broken lines into a plurality of areas each for a character. The original image data about the form 30 is subjected to raster scanning sequentially on a pixel basis starting from a pixel on the upper left. As a result, the original image data is divided on the basis of such a fill-in area for each of the form items so that original image data pieces each about a character are extracted. Herein, the original image data pieces to be extracted as such are the image data including the hand-written characters, and the fill-in areas are each subjected to an automatic determination to see whether or not the number of black pixels therein is equal to or larger than a predetermined value. The determination result is then used to check whether or not the fill-in areas include any hand-written characters. The original image data pieces to be extracted are each set with identification information for identification use thereof. The identification information includes a form ID, an item ID, and an extraction order, and is set after the extraction of the original image data pieces. The form ID is for identifying which form includes which of the original image data pieces, and the item ID is for identifying which form item. The extraction order indicates the order of the original image data pieces extracted for each of the form items. The extraction results, i.e., the original image data pieces and the identification information, are stored in a predetermined area in the storage section 115 as original image information. As such, the identification information in this exemplary embodiment is configured in the extraction order as a result of raster scanning of the form, and includes information about the positions of the original image data pieces in the form.
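The identification information set at extraction time can be sketched as below; the field names and the dictionary representation are assumptions for illustration, not the patent's data format:

```python
# Sketch (field names assumed): each original image data piece extracted by
# raster scanning receives a form ID, an item ID, and its extraction order
# within the form item, forming the original image information.

def set_identification(form_id, items):
    """items: dict mapping item ID -> list of extracted pieces in raster order."""
    original_image_info = []
    for item_id, pieces in items.items():
        for order, piece in enumerate(pieces, start=1):
            original_image_info.append({
                "form_id": form_id,
                "item_id": item_id,
                "extraction_order": order,  # position of the piece in the item
                "piece": piece,
            })
    return original_image_info

# Two pieces extracted from the "address" item "a", one from item "b".
info = set_identification("A001", {"a": ["east", "city"], "b": ["Y"]})
```

The tuple (form ID, item ID, extraction order) is what later lets the data input receiving device write the operator's text back to the right position in the form.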
- Note here that the form ID in the identification information may be an ID found in the form through character recognition, or a bar-code printed on the form and found by reading, for example. Alternatively, the image reading device 100 may set a form ID based on the date or order of the reading process. Moreover, an item ID set in advance may be assigned in the order of raster scanning performed considering the positions of the fill-in areas in the form.
- FIG. 4 is a conceptual view of the original image information in which a correlation is established between the original image data pieces extracted from the form 30 of FIG. 2 and the identification information. Exemplified here is a case where, in the form item 301 for address under a form "A001" in the form 30 of FIG. 2, the fill-in area 311 is written with the character 東, which means "east" and is pronounced as "tou", "higashi", or "azuma", in one of the areas segmented by broken lines, i.e., an original image data piece 311 a. In such a case, as shown in FIG. 4, the original image data piece 311 a being the character 東 is set with identification information 31 a including a form ID "A001", an item ID "a", and an extraction order "1". Similarly, with an original image data piece 311 b being the character 京, which means "city" and is pronounced as "kyou", "kei", or "miyako", identification information 31 b therefor includes a form ID "A001", an item ID "a", and an extraction order "2".
FIG. 5 shows exemplary data in the form information table. A form information table 32 includes, with a correlation, the characteristics information and the identification information about the original image data pieces. The characteristics information is about the characteristics of the hand-written characters in each of the original image data pieces. When the characteristics information is extracted for each of the original image data pieces, the extracted characteristics information and the identification information are stored in the form information table 32 in the form of data. - Herein, the characteristics information about the hand-written characters in this exemplary embodiment is extracted by identifying the number of line segments configuring each of the hand-written characters, i.e., vertical line segments and horizontal line segments. With the original
image data piece 311a, being the character 東 in FIG. 4, for example, the characteristics information extracted for this character includes one vertical line segment and four horizontal line segments. As shown in FIG. 5, with a correlation with the identification information of this original image data piece, i.e., the form ID “A001”, the item ID “a”, and the extraction order “1”, the characteristics information of “vertical 1: horizontal 4” is generated and stored. In the characteristics information, “vertical” denotes the number of vertical line segments, and “horizontal” denotes the number of horizontal line segments. - Described next is the configuration of the data
input receiving device 200. The data input receiving device 200 is implemented by a personal computer or the like, and is configured to include a CPU 210, a ROM 211, a RAM 212, a storage section 213, a receiving section 214, a display section 215, and a communications section 216. The CPU 210 runs a control program stored in the ROM 211 using the RAM 212 as a work area, thereby allowing the components therein to function, i.e., an extraction unit 210a, a display control unit 210b, and a storage control unit 210c, and controlling the components connected to the CPU 210. That is, the extraction unit 210a extracts, from image data read by the image reading device 100, information about a form being a data input target, i.e., form information, and original image information. The display control unit 210b displays, on the display section 215, any of the original image data pieces of a character selected at random from the form information extracted by the extraction unit 210a. The storage control unit 210c generates characteristics information about characters found in text data input by an operator, and verifies the resulting characteristics information against the characteristics information about the original image data piece displayed on the display section 215. The storage control unit 210c then executes a process in accordance with the verification result. - The
storage section 213 is configured by a nonvolatile storage medium such as a hard disk, and stores various types of data such as application programs and user data. The receiving section 214 is implemented by a ten-digit keypad, a keyboard, a mouse, and the like. The receiving section 214 is operated by an operator to make an input, and forwards information about such input operations to the CPU 210. - The
display section 215 is implemented by a display such as a liquid crystal display, and displays images of various types of screens under the control of the CPU 210, e.g., a character input screen for an operator's input operation of characters. The communications section 216 forwards/receives data to/from the image reading device 100 under the control of the CPU 210. - Described next is the operation of the
data input system 1 of the first exemplary embodiment. First of all, described is the operation of the image reading device 100 by referring to FIG. 6. - When a user places the
form 30 on the image reading device 100, and an image read operation is made on the form 30 via the receiving section 113, the CPU 110 of the image reading device 100 reads the characters filled in the form by the image reading section 114 directing light thereto (step S110). The CPU 110 then generates original image data based on an electric signal of the image being the read result (step S111). - The
CPU 110 sequentially scans the original image data generated in step S111 on the basis of the fill-in areas set in advance for each form item. Such scanning is performed on a pixel basis, and the original image data is accordingly divided into original image data pieces. The original image data pieces are then each set with identification information (step S112). That is, for each of the fill-in areas set for each form item in the form, the values of the pixels therein are extracted, and these pixel values are checked to see whether the pixels are black pixels or not. For any fill-in area determined to have a number of black pixels equal to or larger than a predetermined value, the data of the fill-in area is extracted as an original image data piece. The CPU 110 then generates identification information, including a form ID, an item ID, and an extraction order, for the original image data piece, and stores the identification information in the RAM 112 with a correlation with the original image data piece. Herein, the form ID identifies from which form the original image data piece is extracted, and the item ID identifies which form item. The extraction order indicates the order of the original image data pieces extracted for the form item. - The
CPU 110 sequentially reads the original image data pieces and the identification information stored in the RAM 112, and extracts characteristics information about the hand-written character found in each original image data piece read as such (step S113). That is, the CPU 110 detects the number of vertical and horizontal line segments for the character in each original image data piece, and generates characteristics information about each original image data piece using the detection results, i.e., the number of vertical and horizontal line segments. The CPU 110 then stores, in the storage section 115, the original image information in which a correlation is established between the original image data pieces and the identification information. The CPU 110 also correlates the identification information set to each original image data piece with the characteristics information extracted for each original image data piece in step S113, and stores the correlation result in the form information table 32 (step S114). - Described next is the operation of the data
input receiving device 200 by referring to FIG. 7. - In the data
input receiving device 200, when an operator operates the receiving section 214 to issue a command for data input (step S210: YES), the CPU 210 accesses the storage section 115 of the image reading device 100 via the communications section 216, thereby reading any form information not yet provided with input data from the form information table 32 in the order of form ID. The reading result is stored in the RAM 212 (step S211). - The process started by such accessing is executed, for reading data from the
storage section 115 of the image reading device 100, by forwarding a request for data reading to the image reading device 100 after specifying which data is to be read from the data input receiving device 200, and by the CPU 110 of the image reading device 100 reading the requested data from the storage section 115 for transmission to the data input receiving device 200 via the communications section 116. For writing data to the storage section 115, the process is executed by the data input receiving device 200 forwarding a request for data writing to the image reading device 100 indicating which data is to be written to where, and by the CPU 110 of the image reading device 100 storing the requested data at the specified position. In the below, the expression “accessing” refers to such process details. - The
CPU 210 selects at random any of the identification information about the form information stored in the RAM 212 in step S211, and then accesses the storage section 115 of the image reading device 100 via the communications section 216, thereby reading the original image data piece correlated with the selected identification information. The CPU 210 then displays a character input screen with the original image data piece read as such as an input target (step S212). - By referring to
FIGS. 8A and 8B, the character input screen is described. FIG. 8A shows an exemplary character input screen 33 to be displayed in step S212. The character input screen 33 of FIG. 8A shows the character 東 as an input target 33a. The character 東 is of the original image data piece 311a whose identification information includes the form ID “A001”, the item ID “a” corresponding to the form item “address”, and the extraction order “1”. As shown in FIG. 8B, an operator inputs the same character as the input target 33a, i.e., 東, in an input data field 33b on the character input screen 33. - Referring back to
FIG. 7, when the operator operates the receiving section 214 to input a character on the character input screen 33 (step S213: YES), the CPU 210 generates characteristics information by detecting the number of vertical and horizontal line segments for the input character 東 (step S214). In the FIG. 8B example, the characteristics information about the input character 東 is “vertical 1: horizontal 4” because the number of vertical line segments is 1 and the number of horizontal line segments is 4. The CPU 210 then reads the characteristics information about the form information stored in the RAM 212, and determines whether or not predetermined verification requirements are satisfied by the characteristics information about the character being the input target and the characteristics information about the character input as above (step S215). Note that, in this exemplary embodiment, such verification requirements are about a matching between the characteristics information about the input character and the characteristics information set to the original image data piece on display. - When determining in step S215 that the verification requirements about the characteristics information are satisfied, i.e., determining that there is a matching between the characteristics information (step S215: YES), in the form information stored in the
RAM 212, the CPU 210 stores the character input on the character input screen 33 as input data for the identification information correlated with the characteristics information determined in step S215 as being the same. The CPU 210 then accesses the storage section 115 of the image reading device 100 via the communications section 216, and writes the resulting form information to the form information table 32 for update (step S216). -
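The generation and verification of characteristics information in steps S214 and S215 can be sketched as follows. This is an illustrative sketch only: the run-length heuristic for detecting line segments and the minimum run length are assumptions, not the method of the embodiment.

```python
# Sketch of steps S214-S215: derive characteristics information
# ("vertical N: horizontal M") from a binary character grid by a
# simple run-length heuristic, then verify it against the stored
# characteristics of the displayed piece. The heuristic and the
# minimum run length are illustrative assumptions.

MIN_RUN = 3  # assumed minimum length for a stroke to count as a segment

def _count_segments(lines):
    """Count groups of consecutive lines that each contain a long run."""
    def has_run(line):
        run = best = 0
        for px in line:
            run = run + 1 if px else 0
            best = max(best, run)
        return best >= MIN_RUN

    count, inside = 0, False
    for line in lines:
        if has_run(line):
            if not inside:
                count += 1
            inside = True
        else:
            inside = False
    return count

def characteristics(grid):
    rows = grid
    cols = list(zip(*grid))
    return f"vertical {_count_segments(cols)}: horizontal {_count_segments(rows)}"

def verify(input_grid, stored_characteristics):
    """Step S215: the requirement here is an exact match."""
    return characteristics(input_grid) == stored_characteristics
```

Grouping consecutive qualifying rows or columns makes a thick stroke count as one segment rather than one per pixel line; for a simple plus-shaped glyph, the sketch yields "vertical 1: horizontal 1".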
FIG. 9 shows an example of the form information table 32 updated as such. As shown in the drawing, in the form information of the form ID “A001”, any input data 321 whose characteristics information is “vertical 1: horizontal 4” is provided with the text data about the character input on the character input screen 33, and is thus updated. - Referring back to
FIG. 7, in step S217, when the form information stored in the RAM 212 is provided with the input data in its entirety, or when the operator operates the receiving section 214 to end the data input process (step S217: YES), the CPU 210 ends the data input process for the form currently selected. - Also in step S217, when the form information stored in the
RAM 212 is not yet fully provided with input data, and the operator has not yet operated the receiving section 214 to end the data input process (step S217: NO), the procedure returns to step S212, and the CPU 210 accesses the storage section 115 of the image reading device 100 via the communications section 216. The CPU 210 then reads, at random, any of the original image data pieces correlated with identification information not yet provided with input data, and displays the characters of the original image data pieces read as such on the character input screen 33. As such, the processes in step S212 and thereafter are repeatedly executed. - In step S210, the
CPU 210 waits until the operator issues a command for the data input process through operation of the receiving section 214 (step S210: NO). In step S213, until the operator inputs a character on the character input screen 33 through operation of the receiving section 214 (step S213: NO), the CPU 210 waits with the character input screen 33 remaining displayed. In step S215, when the CPU 210 determines that the verification requirements are not satisfied by the characteristics information about the original image data piece being the input target and the characteristics information about the input character, i.e., determines that there is no matching between the characteristics information (step S215: NO), the procedure repeats the processes in step S213 and thereafter. Note that, in this case, a message is displayed asking the operator to input the character again on the character input screen 33, whereby the CPU 210 becomes ready for the operator's input operation of characters. - In the exemplary embodiment described above, original image data pieces of each character string filled in a form are displayed at random on a character basis, and with such a display, advantageously, the details about addresses and names filled in the form items of the form do not make much sense to an operator. Moreover, once the operator inputs an original image data piece, i.e., a character 東, such an input operation is applied also to any other original image data pieces of the character 東 in the same form, so that the operator no longer needs to repeatedly make data input for any identical characters.
- Described next is a data input system in a second exemplary embodiment of the invention.
- In the second exemplary embodiment, original image data is subjected to character recognition, and the resulting characters completed with recognition (hereinafter referred to as recognized characters) are arranged in a predetermined order depending on the type of characters, e.g., numerical values or alphabetic characters. The original image data pieces correlated with the recognized characters arranged as such are then displayed for an input operation of characters by an operator. In the below, any configuration similar to that in the first exemplary embodiment above is provided with the same reference numeral.
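The character recognition by pattern matching used in this embodiment can be sketched as a nearest-template comparison. The template shapes, the representation of the verification text data, and all names below are assumptions for illustration only.

```python
# Sketch of character recognition by pattern matching: each candidate
# in the verification text data is a binary template, and the template
# with the fewest mismatching pixels wins. Templates and data layout
# are illustrative assumptions, not the embodiment's actual data.

VERIFICATION_TEXT_DATA = {
    "1": [[0, 1, 0],
          [0, 1, 0],
          [0, 1, 0]],
    "7": [[1, 1, 1],
          [0, 0, 1],
          [0, 1, 0]],
}

def recognize(piece):
    """Return the candidate character whose template best matches."""
    def mismatches(template):
        return sum(p != t
                   for prow, trow in zip(piece, template)
                   for p, t in zip(prow, trow))
    return min(VERIFICATION_TEXT_DATA,
               key=lambda ch: mismatches(VERIFICATION_TEXT_DATA[ch]))
```

A pixel-difference count is the simplest matching score; a real implementation would normalize the piece to the template size first, but that step is omitted here.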
-
FIG. 10 shows an exemplary form in the second exemplary embodiment. Similarly to the form 30 in the first exemplary embodiment, a form 40 of FIG. 10 is segmented into form items 401 to 403 to be respectively filled with zip code, address, and name. These form items 401 to 403 are respectively provided with fill-in areas 411 to 413, each segmented by broken lines into a plurality of areas, each for a character. - In the second exemplary embodiment, in the
image reading device 100, the generation unit 110b applies pattern matching to any original image data piece whose identification information includes an item ID indicating the form item 401 of “zip code” to be filled with numerical values, thereby identifying which numerical value corresponds to the original image data piece. The output control unit 110c then stores the identified numerical value into a form information table 42 as text information about the original image data piece. FIG. 11 shows an exemplary form information table in this exemplary embodiment. As shown in the drawing, similarly to the form information table 32 in the first exemplary embodiment, in the form information table 42, a correlation is established between the identification information set to each of the original image data pieces, i.e., form ID, item ID, and extraction order, and the recognized characters. These recognized characters are those identified as a result of the pattern matching applied to the hand-written characters of the original image data pieces, i.e., the character-associated information correlated with each of the hand-written characters. The form information table 42 is stored with, at the time of the character recognition process for each of the original image data pieces, the identification information completed with character recognition and the recognized characters. - Note here that, similarly to the first exemplary embodiment, the
storage section 115 stores original image information in which the original image data pieces are correlated with the identification information, and is also provided in advance with data for use in pattern matching. Such data is hereinafter referred to as verification text data. - In the below, by referring to
FIGS. 12 and 13, described is the operation of the data input system 1 in the second exemplary embodiment. Note here that, in FIGS. 12 and 13, any process similar to that in the first exemplary embodiment described above is provided with the same step number, and any operation different from that in the first exemplary embodiment is described by referring to the FIGS. 10 and 11 examples. Described first is the operation of the image reading device 100. - By referring to
FIG. 12, the CPU 110 of the image reading device 100 executes the processes in steps S110 to S112. The CPU 110 then reads the verification text data from the storage section 115 and, using the verification text data, performs character recognition by applying pattern matching to any original image data piece whose identification information includes the item ID “a”, thereby identifying a numerical value completed with the character recognition (step S123). The CPU 110 then stores the original image information in which the original image data pieces are correlated with the identification information into the storage section 115. The CPU 110 also stores, in the form information table 42, the identification information about any original image data piece including the item ID “a”, and the numerical value identified for the original image data piece (step S124). - In
FIG. 13, similarly to the first exemplary embodiment, the CPU 210 of the data input receiving device 200 executes the processes in steps S210 and S211, and reads any form information of a form ID “B001” from the image reading device 100 for storage into the RAM 212. - The
CPU 210 sorts, in ascending order, any recognized characters correlated with the item ID “a” in the form information stored in the RAM 212, and accesses the storage section 115 of the image reading device 100 via the communications section 216 to read any original image data piece whose identification information includes the item ID “a”. The CPU 210 then displays, on the character input screen, the original image data pieces correlated with the identification information about the recognized characters completed with sorting (step S221). - Described now is the character input screen in the second exemplary embodiment by referring to
FIGS. 14A and 14B. FIG. 14A shows an exemplary character input screen 43 to be displayed in step S221. On the character input screen 43, the recognized characters “5”, “6”, “7”, “1”, “2”, “4”, and “3” correlated with the item ID “a” under the form ID “B001” are sorted in ascending order, and the original image data pieces of the hand-written numbers correlated with the recognized characters, i.e., “1”, “2”, “3”, “4”, “5”, “6”, and “7”, are displayed in an input target 43a. As shown in FIG. 14B, on such a character input screen 43, the operator inputs the numerical values displayed in the input target 43a into an input data field 43b. - Referring back to
FIG. 13, when the operator operates the receiving section 214 to input characters on the character input screen 43 (step S213: YES), the CPU 210 stores the numerical data in the form information stored in the RAM 212 as input data for the identification information set to each of the original image data pieces displayed on the character input screen 43. The CPU 210 then accesses the storage section 115 of the image reading device 100 via the communications section 216 to write the form information into the form information table 42, thereby updating the information (step S216). -
FIG. 15 shows an example of the form information table 42 updated as such. As shown in FIG. 15, in the form information table 42, an input data field 421 for the item ID “a” under the form ID “B001” is stored with the numerical data input on the character input screen 43. - In the second exemplary embodiment described above, characters such as numerical values found in a form are displayed after sorting in a predetermined order, i.e., ascending or descending order, and an operator makes data input while referring to the characters completed with sorting as such.
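The sorted display of step S221 can be sketched in a few lines; the record layout of (recognized character, original image data piece) pairs is an assumption for illustration.

```python
# Sketch of step S221: sort the recognized characters for one form
# item in ascending order and return the correlated original image
# data pieces in that order for display. The record layout is an
# illustrative assumption.

def order_pieces_for_display(records):
    """records: list of (recognized_character, original_image_piece)."""
    ordered = sorted(records, key=lambda rec: rec[0])
    return [piece for _, piece in ordered]
```

With the FIG. 14A example, where the recognized characters arrive as "5", "6", "7", "1", "2", "4", "3", the pieces come out in the order of the hand-written numbers 1 through 7.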
- Note here that, in the above second exemplary embodiment, exemplified is a case where the targets for character recognition are numerical values. This is surely not restrictive, and the targets for character recognition may be alphabetic characters or Japanese characters. For example, after character recognition is performed on any original image data pieces in a specific form item, when the identified characters are all alphabetic characters, such recognized characters may be sorted in alphabetical order before display. Also, when the identified characters are all Japanese characters, such recognized characters may be sorted in the order of text code, or when those are Japanese numerical values, these may be sorted in ascending or descending order. As such, for displaying the original image data pieces, the recognized characters are sorted in a predetermined order.
- In the second exemplary embodiment described above, the original image data pieces, each being a numerical value, are sorted in a predetermined order for display, and the resulting data is input. In a third exemplary embodiment, any characters not being numerical values are subjected to character recognition, and from a word table stored in advance in the data
input receiving device 200, any word including the recognized characters is extracted for display of the original image data pieces in the order of the extracted word. In the below, any configuration similar to that in the first exemplary embodiment described above is provided with the same reference numeral, and any different configuration is mainly described. -
FIG. 16 shows an exemplary word table in such a third exemplary embodiment. As shown in FIG. 16, a word table 50 includes an element of “word ID” for identifying words, and words of various classes, i.e., noun, adjective, and verb. Such a word table 50 is stored in advance in the storage section 213 in the data input receiving device 200. -
FIG. 17 shows the form information table 42, similarly to that in the second exemplary embodiment. This form information table 42 exemplarily includes recognized characters completed with identification as a result of the character recognition applied to the original image data pieces whose identification information includes a form ID corresponding to the form items. The storage section 115 of the image reading device 100 is stored with original image information in which the original image data pieces are correlated with the identification information. The storage section 115 is also provided in advance with verification text data for pattern matching use. - Described next is the operation of the
data input system 1 in the third exemplary embodiment. Because the image reading device 100 in this exemplary embodiment operates similarly to that in the second exemplary embodiment described above, described here is only the operation of the data input receiving device 200. FIG. 18 is a flow diagram of the operation of the data input receiving device 200. In the below, a description is given by referring to the FIGS. 10, 16, and 17 examples. - Similarly to the second exemplary embodiment, the
CPU 210 of the data input receiving device 200 executes the processes in steps S210 and S211, and reads the form information about the form ID “B001” from the form information table 42 for storage into the RAM 212. In the identification information in the form information, it is assumed here that the CPU 210 selects a recognized character 静, which means “calmness” and is pronounced as “sizu”, “sei”, or “jou”, correlated with identification information not yet provided with input data, i.e., the identification information with the form ID “B001”, an item ID “c”, and an extraction order “3” (step S231). Such a recognized character is hereinafter referred to as a not-yet-input recognized character. The CPU 210 then reads the word table 50 from the storage section 213, and extracts therefrom any word including the selected not-yet-input recognized character, i.e., 静岡県, which means “Shizuoka Prefecture” and is pronounced as “shizuoka ken” (step S232). - In the form information stored in the
RAM 212, using any not-yet-input recognized characters correlated with identification information including an item ID different from that of the selected not-yet-input recognized character 静, the CPU 210 determines whether or not the extracted word 静岡県 can be configured thereby (step S233). - In step S233, when determining that the extracted word 静岡県 can be configured by the not-yet-input character 静 and the other not-yet-input recognized characters 岡, which means “hill” and is pronounced as “oka” or “kou”, and 県, which means “Prefecture” and is pronounced as “ken”, “gen”, or “agata” (step S233: YES), the
CPU 210 accesses the storage section 115 of the image reading device 100 via the communications section 216. Herein, one of these not-yet-input recognized characters is correlated with the identification information including the form ID “B001”, an item ID “d”, and an extraction order “1”, and the other with the identification information including the form ID “B001”, the item ID “c”, and an extraction order “4”. The CPU 210 then reads the original image data pieces corresponding to such not-yet-input recognized characters, respectively, and displays on the character input screen the original image data pieces read as such in the order of the word (step S234). -
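The word selection of steps S231 to S233 can be sketched as a multiset check. The word table content and the record layout below are assumptions for illustration, not the embodiment's data.

```python
# Sketch of steps S231-S233: pick a not-yet-input recognized
# character, look for a word in the word table that contains it, and
# check whether the whole word can be configured from the
# not-yet-input recognized characters in the form information.
# The word table entries and the record layout are illustrative.

from collections import Counter

WORD_TABLE = ["静岡県", "東京"]  # assumed entries of word table 50

def find_configurable_word(selected_char, not_yet_input_chars):
    """not_yet_input_chars: recognized characters without input data."""
    available = Counter(not_yet_input_chars)
    for word in WORD_TABLE:
        if selected_char not in word:
            continue
        needed = Counter(word)
        # The word is configurable when every character it needs is
        # available among the not-yet-input recognized characters.
        if all(available[ch] >= n for ch, n in needed.items()):
            return word
    return None
```

Using a Counter rather than a set handles words that repeat a character; returning None corresponds to the step S235 case where every word in the table has been tried without success.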
FIG. 19A shows an exemplary character input screen 51 displayed as such. Input targets 511 to 513 display the original image data pieces in the order of the selected word. As shown in FIG. 19B, in the fields of input data 521 to 523 respectively corresponding to the input targets 511 to 513, the same characters as those in the input targets 511 to 513 are input. - Referring back to
FIG. 18, in step S213, when the character input screen 51 displayed in step S236 is provided with characters (step S213: YES), the CPU 210 executes the processes in step S216 and thereafter, similarly to the second exemplary embodiment. FIG. 20 shows an example of the form information table 42 completed with the process in step S216 in the above example. As shown in the drawing, the text data about 静, 岡, and 県 provided by the operator on the character input screen 51 is stored in the input data 422 corresponding to the identification information set to the original image data pieces on display. - Also in
FIG. 18, in step S233, when determining that the extracted word cannot be configured (step S233: NO), the CPU 210 repeats the process from step S232, i.e., keeps extracting words one by one from the word table 50 (step S232), until every word in the word table 50 has been subjected to the determination in step S233 (step S235: NO). - Also in step S235, when every word in the word table 50 is completed with the determination in step S233 (step S235: YES), the
CPU 210 accesses the storage section 115 of the image reading device 100 via the communications section 216, and displays original image data pieces on the character input screen in the order of the word (step S236). The original image data pieces here are those corresponding to any not-yet-input recognized characters that can be a part of the extracted word, and to any already-input recognized characters. That is, when the word extracted last is 静岡県 and this word cannot be configured by not-yet-input recognized characters alone, the word is configured by also combining already-input recognized characters, thereby displaying the original image data pieces respectively corresponding to the recognized characters. In this case, on the character input screen, any data completed with input is provided in the field of input data corresponding to the original image data pieces also completed with input. When no word can be configured even if the already-input recognized characters are combined, any original image data pieces partially corresponding to the recognized characters being a part of the word may be displayed. In this case, as to the word 静岡県, if the recognized character 県 is not found in the form information, only the original image data pieces being the characters 静 and 岡 are displayed. - As such, in the above exemplary embodiment, the characters found in the form are displayed in the order of a word that makes sense.
- In the below, modified examples of the invention are described.
- 1. In the first exemplary embodiment described above, exemplified is the case of extracting the number of vertical and horizontal line segments for use as characteristics information. This is surely not restrictive, and any other method will also do as long as it extracts characteristics of characters. With an exemplary method, the density difference observed in a fill-in area between the area of the character and the remaining area is extracted so that the outline of the character is detected, and information about the detected outline is extracted for use as characteristics information. With another exemplary method, a character is categorized through density detection of black pixels appearing along the four sides around the character in a fill-in area, and information about the resulting category may be generated for use as characteristics information. Still alternatively, the characteristics information generated as such may be stored with a correlation with the characteristics information in the first exemplary embodiment, and when verification requirements are satisfied, e.g., the characteristics information about any input character and the characteristics information about any original image data piece both fall within a predetermined range of similarity, the input character may be stored with a correlation with the identification information set to the original image data piece.
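The second alternative above, categorization by black-pixel density along the four sides of a fill-in area, can be sketched as follows. The side width, the threshold, and the category rule are assumptions for illustration only.

```python
# Sketch of the alternative in modified example 1: categorize a
# character by the density of black pixels along the four sides of
# its fill-in area. The one-pixel side width, the threshold, and the
# category rule are illustrative assumptions, not the embodiment.

def side_densities(grid):
    """Return black-pixel densities along top, bottom, left, right."""
    h, w = len(grid), len(grid[0])
    top = sum(grid[0]) / w
    bottom = sum(grid[-1]) / w
    left = sum(row[0] for row in grid) / h
    right = sum(row[-1] for row in grid) / h
    return (top, bottom, left, right)

def categorize(grid, threshold=0.5):
    """A coarse category: which sides the character touches."""
    names = ("top", "bottom", "left", "right")
    dens = side_densities(grid)
    return {n for n, d in zip(names, dens) if d >= threshold}
```

Such a coarse category is cheaper than outline detection, and could be combined with the line-segment counts of the first exemplary embodiment, as the passage above suggests.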
- 2. In the above first exemplary embodiment, exemplified is the case of displaying at random the original image data pieces on a character basis. Alternatively, a plurality of original image data pieces varying in item ID may be arranged for display.
- 3. In the second exemplary embodiment described above, exemplified is the case of storing any input numerical value with identification information of a recognized character under the same form ID. Alternatively, the input numerical value may be stored with a correlation with the identification information set to any same recognized character under any different form ID.
- 4. Also in the above second exemplary embodiment, exemplified is the case of sorting any recognized characters under the same item ID in a predetermined order, and displaying original image data pieces in the same order. Alternatively, when recognized characters being numerical values are stored in a plurality of form items, the recognized characters in all of the form items may be sorted in a predetermined order, and a predetermined number of original image data pieces may be displayed in the same order.
- 5. In the third exemplary embodiment described above, exemplified is the case of extracting a word being a combination of recognized characters under different form items. Alternatively, any recognized characters in any same form item may be combined together to configure a word not found in the form item.
- 6. In the first to third exemplary embodiments described above, exemplified is the case of making data input, on a form basis, to every data input receiving device 200. Alternatively, the data input for any one form may be shared among a plurality of data input receiving devices 200.
- 7. In the first to third exemplary embodiments described above, exemplified is the case of, in a form including any original image data pieces, setting identification information to each of the original image data pieces in an extraction order of the original image data pieces. Alternatively, the identification information to be set may indicate the positions of the original image data pieces, e.g., the coordinates of each original image data piece in the form.
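Modification 7 can be illustrated with the sketch below; the dictionary-based store and the `x…y…` ID format are hypothetical choices, shown only to contrast extraction-order IDs with position-indicating IDs.

```python
def assign_identification(pieces, by_position=False):
    """Map identification information to original image data pieces.

    pieces: list of dicts carrying 'x' and 'y' coordinates in the form.
    By default IDs follow extraction order; with by_position=True the ID
    itself encodes where the piece sits in the form."""
    ids = {}
    for order, piece in enumerate(pieces):
        key = f"x{piece['x']}y{piece['y']}" if by_position else str(order)
        ids[key] = piece
    return ids
```

A position-indicating ID lets a reader of the stored data locate the piece in the form without consulting a separate extraction-order table.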
- 8. In the first exemplary embodiment described above, the data input receiving device 200 is exemplified as storing text data provided by an operator for the original image data pieces displayed on the display section 215 with a correlation also with the identification information set to any other original image data pieces including the character-associated information corresponding to the original image data pieces on display. Alternatively, the data input receiving device 200 may store, with a correlation, text data provided only for the identification information set to the original image data pieces on display. If this is the case, still alternatively, when the text data provided by the operator is displayed on the display section 215 together with such other original image data pieces, and when the operator determines to input his or her text data as text data corresponding to the other original image data pieces, the input text data may be stored with a correlation with the identification information set to the other original image data pieces.
- 9. In the first to third exemplary embodiments described above, the image reading device 100 is exemplified as storing, in the storage section 115 therein, with a correlation, the original image data pieces in each form completed with reading, the identification information set to each of the original image data pieces, and the character-associated information corresponding to each of the original image data pieces, i.e., characteristics information and recognized characters. Alternatively, the image reading device 100 may be so configured as to forward such data to any other device connected to the image reading device 100 and the data input receiving device 200, and store the data in that device. If this is the case, the data input receiving device 200 may access the device storing, with a correlation, the original image data pieces, the identification information set to each of the original image data pieces, and the character-associated information (characteristics information and recognized characters) corresponding to each of the original image data pieces, thereby performing reading of such data and writing of input text data.
- 10. The data input system 1 in the first to third exemplary embodiments described above is exemplarily configured by the image reading device 100 and the data input receiving devices 200. Alternatively, the image reading device 100 may be so configured as to serve also as the data input receiving device 200, and in the resulting image reading device 100, any input of text data may be provided by an operator for data input.
- 11. The form exemplified in the first to third exemplary embodiments above is segmented in advance into form items including fill-in areas to be filled with details of information. When the form is any medium not defined by fill-in areas as such, original image data pieces may be extracted as below. That is, for example, the image reading device 100 may scan a medium filled with hand-written characters at predetermined intervals to detect a distribution of black pixels, thereby defining the space between characters filled in the medium. Thereafter, based on the character space defined as such, the image reading device 100 may extract original image data pieces on a character basis from the original image data of the medium.
- 12. The data input receiving device 200 in the first to third exemplary embodiments described above is exemplified as including the display section 215. Alternatively, the data input receiving device 200 may be connected with an external display device such as a liquid crystal display. If this is the configuration, the CPU 210 of the data input receiving device 200 may perform control to display the image of a character input screen as in the above exemplary embodiments on the display device.
- 13. The program to be run by the CPUs of the image reading device 100 and the data input receiving device 200, respectively, in the first to third exemplary embodiments above can be distributed in the form of a computer-readable recording medium such as a magnetic recording medium, e.g., magnetic tape and magnetic disk, an optical recording medium, e.g., optical disk, a magneto-optical recording medium, or a semiconductor memory. The program may also be downloaded to the image reading device 100 and the data input receiving device 200 via a communication network such as the Internet.
- The foregoing description of the exemplary embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
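As a closing illustration, the character-space detection of modification 11 can be sketched as below; the binary bitmap input and the all-white-column heuristic are assumptions, not the embodiments' actual scanning procedure.

```python
def split_characters(bitmap):
    """Split a scanned line of handwriting into per-character column spans
    by treating all-white pixel columns as the space between characters,
    per the black-pixel-distribution idea in modification 11.

    bitmap: list of rows of 0/1 pixels; returns (start, end) column ranges."""
    width = len(bitmap[0])
    # A column belongs to a character if any row has a black pixel there.
    has_ink = [any(row[col] for row in bitmap) for col in range(width)]
    spans, start = [], None
    for col, black in enumerate(has_ink):
        if black and start is None:
            start = col                      # entering a character
        elif not black and start is not None:
            spans.append((start, col))       # leaving a character
            start = None
    if start is not None:
        spans.append((start, width))         # character touches right edge
    return spans
```

Each returned span delimits the columns from which one per-character original image data piece could be cropped.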
Claims (12)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-126649 | 2009-05-26 | ||
JP2009126649A JP2010277168A (en) | 2009-05-26 | 2009-05-26 | Data input system, data input acceptance device and program |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100303382A1 true US20100303382A1 (en) | 2010-12-02 |
US8254721B2 US8254721B2 (en) | 2012-08-28 |
Family
ID=43220308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/619,200 Active 2030-11-24 US8254721B2 (en) | 2009-05-26 | 2009-11-16 | Data input system, data input receiving device, data input receiving method and computer readable medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US8254721B2 (en) |
JP (1) | JP2010277168A (en) |
CN (1) | CN101902541B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5606385B2 (en) * | 2011-04-28 | 2014-10-15 | 楽天株式会社 | Server apparatus, server apparatus control method, and program |
CN103077389B (en) * | 2013-01-07 | 2016-08-03 | 华中科技大学 | A kind of combination character level classification and character string level classification text detection and recognition methods |
JP6025803B2 (en) * | 2014-11-06 | 2016-11-16 | 京セラドキュメントソリューションズ株式会社 | Image processing device |
JP6488729B2 (en) * | 2015-01-29 | 2019-03-27 | 富士ゼロックス株式会社 | Entry form providing apparatus, image forming apparatus, and program |
CN105808742A (en) * | 2016-03-11 | 2016-07-27 | 北京天创征腾信息科技有限公司 | Image pool system and method for using the image pool |
JP6856321B2 (en) * | 2016-03-29 | 2021-04-07 | 株式会社東芝 | Image processing system, image processing device, and image processing program |
JP6370857B2 (en) * | 2016-10-21 | 2018-08-08 | 三菱電機インフォメーションシステムズ株式会社 | Data storage device and data storage program |
JP7095345B2 (en) * | 2018-03-22 | 2022-07-05 | 富士フイルムビジネスイノベーション株式会社 | Information processing equipment, information processing systems and programs |
JP7275617B2 (en) * | 2019-02-06 | 2023-05-18 | 日本電信電話株式会社 | Information processing device, discrimination method and discrimination program |
CN110598186A (en) * | 2019-07-31 | 2019-12-20 | 浙江口碑网络技术有限公司 | Auxiliary processing method, device and system for image recognition |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5821929A (en) * | 1994-11-30 | 1998-10-13 | Canon Kabushiki Kaisha | Image processing method and apparatus |
US5900005A (en) * | 1996-05-17 | 1999-05-04 | Techcraft Co., Ltd. | System for extraction of text strings from on-screen computer window displays using the computer operating system in lieu of a clipboard |
JP2006244315A (en) * | 2005-03-04 | 2006-09-14 | Aidekku:Kk | Data entry system |
US7593961B2 (en) * | 2003-04-30 | 2009-09-22 | Canon Kabushiki Kaisha | Information processing apparatus for retrieving image data similar to an entered image |
US7836010B2 (en) * | 2003-07-30 | 2010-11-16 | Northwestern University | Method and system for assessing relevant properties of work contexts for use by information services |
US8059896B2 (en) * | 2006-05-11 | 2011-11-15 | Fuji Xerox Co., Ltd. | Character recognition processing system and computer readable medium storing program for character recognition processing |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3167578B2 (en) | 1995-03-30 | 2001-05-21 | 沖電気工業株式会社 | Form classification processing method and system |
JP2003006315A (en) * | 2001-06-21 | 2003-01-10 | Ricoh Co Ltd | Information input-supporting service system |
JP2003029910A (en) * | 2001-07-10 | 2003-01-31 | Toshiba Corp | Secret information input method, device and program |
JP4278524B2 (en) * | 2004-01-16 | 2009-06-17 | トランス・コスモス株式会社 | Image processing apparatus, method and program, and image processing system |
- 2009
- 2009-05-26 JP JP2009126649A patent/JP2010277168A/en active Pending
- 2009-11-16 US US12/619,200 patent/US8254721B2/en active Active
- 2009-12-16 CN CN200910225481.5A patent/CN101902541B/en not_active Expired - Fee Related
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140105501A1 (en) * | 2012-10-11 | 2014-04-17 | Fuji Xerox Co., Ltd. | Character recognition apparatus, non-transitory computer readable medium, and character recognition method |
US9342739B2 (en) * | 2012-10-11 | 2016-05-17 | Fuji Xerox Co., Ltd. | Character recognition apparatus, non-transitory computer readable medium, and character recognition method |
US10498910B2 (en) * | 2016-05-06 | 2019-12-03 | Konica Minolta, Inc. | Image forming apparatus for displaying conference information, non-transitory computer-readable recording medium, conference system and method for controlling conference system |
US20180276461A1 (en) * | 2017-03-21 | 2018-09-27 | Casio Computer Co., Ltd. | Ledger document processing device, ledger document processing method and storage medium |
JP2018157472A (en) * | 2017-03-21 | 2018-10-04 | カシオ計算機株式会社 | Account book document processing device, account book document processing method, and program |
CN108629215A (en) * | 2017-03-21 | 2018-10-09 | 卡西欧计算机株式会社 | Account book document handling apparatus, account book document handling method and recording medium |
CN108629215B (en) * | 2017-03-21 | 2020-12-25 | 卡西欧计算机株式会社 | Account book file processing device, account book file processing method, and recording medium |
US11010603B2 (en) * | 2017-03-21 | 2021-05-18 | Casio Computer Co., Ltd. | Ledger document processing device, ledger document processing method and storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP2010277168A (en) | 2010-12-09 |
CN101902541B (en) | 2015-08-19 |
US8254721B2 (en) | 2012-08-28 |
CN101902541A (en) | 2010-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8254721B2 (en) | Data input system, data input receiving device, data input receiving method and computer readable medium | |
US10824801B2 (en) | Interactively predicting fields in a form | |
JP4266695B2 (en) | Image processing apparatus and image processing method | |
JP4533273B2 (en) | Image processing apparatus, image processing method, and program | |
JP4118349B2 (en) | Document selection method and document server | |
US9514103B2 (en) | Effective system and method for visual document comparison using localized two-dimensional visual fingerprints | |
CN101178725B (en) | Device and method for information retrieval | |
JP4920928B2 (en) | Image processing apparatus, control method therefor, and program | |
US8655107B2 (en) | Signal processing apparatus, signal processing method, computer-readable medium and computer data signal | |
KR101552525B1 (en) | A system for recognizing a font and providing its information and the method thereof | |
JP2008022159A (en) | Document processing apparatus and document processing method | |
US9614984B2 (en) | Electronic document generation system and recording medium | |
US10558745B2 (en) | Information processing apparatus and non-transitory computer readable medium | |
CN104809099A (en) | Document file generating device and document file generation method | |
US20120014612A1 (en) | Document processing apparatus and computer readable medium | |
US20210182477A1 (en) | Information processing apparatus and non-transitory computer readable medium storing program | |
JP2006333248A (en) | Image processing apparatus, image processing method, program and storage medium | |
US9152885B2 (en) | Image processing apparatus that groups objects within image | |
JP6201638B2 (en) | Form processing apparatus and program | |
JP2018005801A (en) | Image processing system | |
JP2008004116A (en) | Method and device for retrieving character in video | |
JP2017072941A (en) | Document distribution system, information processing method, and program | |
JP2020047138A (en) | Information processing apparatus | |
US11238305B2 (en) | Information processing apparatus and non-transitory computer readable medium storing program | |
US20150062660A1 (en) | File management apparatus and file management method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: FUJI XEROX CO., LTD., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TAMURA, JUNICHI;REEL/FRAME:023591/0348 Effective date: 20091102 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
AS | Assignment |
Owner name: FUJIFILM BUSINESS INNOVATION CORP., JAPAN Free format text: CHANGE OF NAME;ASSIGNOR:FUJI XEROX CO., LTD.;REEL/FRAME:058287/0056 Effective date: 20210401 |