WO2006061941A1 - Image reading device - Google Patents

Image reading device

Info

Publication number
WO2006061941A1
WO2006061941A1 · PCT/JP2005/018076 · JP2005018076W
Authority
WO
WIPO (PCT)
Prior art keywords
reading
scanning direction
main scanning
image
cis
Prior art date
Application number
PCT/JP2005/018076
Other languages
French (fr)
Japanese (ja)
Inventor
Takehiko Saitoh
Masaaki Yoshida
Original Assignee
Seiko Instruments Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seiko Instruments Inc. filed Critical Seiko Instruments Inc.
Publication of WO2006061941A1 publication Critical patent/WO2006061941A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/19Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays
    • H04N1/191Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays the array comprising a one-dimensional array, or a combination of one-dimensional arrays, or a substantially one-dimensional array, e.g. an array of staggered elements
    • H04N1/1911Simultaneously or substantially simultaneously scanning picture elements on more than one main scanning line, e.g. scanning in swaths
    • H04N1/1916Simultaneously or substantially simultaneously scanning picture elements on more than one main scanning line, e.g. scanning in swaths using an array of elements displaced from one another in the main scan direction, e.g. a diagonally arranged array
    • H04N1/1917Staggered element array, e.g. arrays with elements arranged in a zigzag
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/19Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays
    • H04N1/191Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays the array comprising a one-dimensional array, or a combination of one-dimensional arrays, or a substantially one-dimensional array, e.g. an array of staggered elements

Definitions

  • The present invention relates to an image reading apparatus, and more particularly to an image reading apparatus that reads a document using a plurality of reading units, each having a plurality of light receiving elements arranged in the main scanning direction, and that accurately joins the image data between the reading units to improve image quality.
  • In a conventional apparatus, reading light is irradiated onto an original, and the reflected light from the original is photoelectrically converted by a reading unit consisting of light receiving elements arranged in a line in the main scanning direction, thereby obtaining image data.
  • A CIS (contact image sensor), which is a unity-magnification sensor, is used as the reading unit because of its simple structure and ease of miniaturization.
  • The light receiving elements are usually arranged at a density of 300 dpi to 1200 dpi in order to obtain high-quality images.
  • Each light receiving element combines an optical component such as a SELFOC rod lens with a photoelectric conversion element such as a phototransistor; the light reflected from the document surface passes through the lens and forms an image on the photoelectric conversion element.
  • Up to about A4 and A3 size, a CIS is relatively easy to manufacture and, thanks to mass production, can be supplied at low cost. However, lining up enough light receiving elements to cover an A1 or A0 size document with a single CIS is difficult to manufacture, lowers the yield, and as a result becomes expensive. A large-size reading device for A1 or A0 documents therefore forms a sensor unit by arranging several CISs intended for A4 or A3 size documents, making it possible to read a large document in a single document feed.
  • When a plurality of CISs are simply arranged in one row in the main scanning direction, the distance Ly between the light receiving element at the end of one CIS and the light receiving element at the end of the adjacent CIS becomes larger than the distance Lx between light receiving elements within one CIS, so after reading it is necessary to interpolate pixels between the CISs so that the unnaturalness of the joint is not noticeable. This pixel interpolation, however, is not accurate, because the interpolated pixel is only estimated from the pixel information read in its vicinity.
  • For this reason, the CISs are arranged at different positions in the sub-scanning direction, the ends of adjacent CISs are overlapped, and the image data read on the upstream side is delayed by delay means and then combined with the image data on the downstream side, which eliminates the blank space at the joint.
  • A specific example of such conventional joining is described below with reference to FIG. 16. Note that the sub-scanning direction described in this specification is the document conveyance direction, and the main scanning direction is the direction, perpendicular to the sub-scanning direction, in which the light receiving elements in a reading unit are lined up.
  • CIS(A) 101 and CIS(B) 102 in FIG. 16(a) each consist of X light receiving elements.
  • In CIS(A) 101, the light receiving elements up to the (X-2)th element 114 are valid, and the (X-1)th element 115 and the Xth element 116 are invalid.
  • In CIS(B) 102, the light receiving elements from the 3rd element 123 onward are valid, and the 1st element 121 and the 2nd element 122 are invalid; the data are joined accordingly.
  • Such a joining method eliminates the need for pixel interpolation when the reading units are arranged in one row in the main scanning direction, and makes it possible to reproduce the original more faithfully at the joints.
  • The present invention is intended to solve the problems of the conventional configuration described above; its object is to remove the unnaturalness of the image at the joints and realize an image reading apparatus with high read image quality.
  • At the time of joining, the read pixel data for the portion where the pixels of adjacent CISs overlap in the main scanning direction are obtained by a weighted averaging of the output values of the light receiving elements at the same position in the main scanning direction, and the weighting coefficients are not kept identical across the overlap range.
  • Because the gradation values of the pixels around the joint are read by both reading units and both read gradation values contribute to the output, the discontinuity of gradation at the joint can be made inconspicuous.
  • The present invention is also an image reading apparatus in which a pixel in the portion where the light receiving elements of adjacent reading units overlap is calculated from the gradation value obtained from one light receiving element and the gradation value obtained from the other light receiving element using the following expressions:
  • Nout(x) = Na(x) × (ga(x)/G) + Nb(x) × (gb(x)/G)
  • G = ga(x) + gb(x)
  • Nout(x): gradation value of the pixel at position x in the main scanning direction after the calculation.
  • Na(x): gradation value of the pixel of one reading unit at position x in the main scanning direction.
  • Nb(x): gradation value of the pixel of the other reading unit at position x in the main scanning direction.
  • ga(x): weighting coefficient for the pixel of one reading unit at position x in the main scanning direction.
  • gb(x): weighting coefficient for the pixel of the other reading unit at position x in the main scanning direction.
  • G: an integer greater than or equal to 1.
  • The weighting coefficients used when calculating the pixels in the range where adjacent reading units overlap in the main scanning direction are chosen so that, on the side closer to one reading unit, the weighting coefficient applied to the gradation value of that reading unit is larger, and on the side closer to the other reading unit, the weighting coefficient applied to the gradation value of the other reading unit is larger.
  • Within the overlapping range, in the part closer to a given reading unit the weighting coefficient of that reading unit's read value is made larger than the weighting coefficient of the adjacent reading unit's read value at the same main scanning position; conversely, in the part closer to the adjacent reading unit it is made smaller, so the unnatural gradation in the joined range can be reduced.
  • FIG. 1 is a configuration diagram of a reading apparatus according to the present invention.
  • FIG. 2 is a diagram showing a positional relationship between a staggered arrangement of a reading unit and a document according to the present invention.
  • FIG. 3 is a diagram showing a configuration of a sensor unit according to the present invention.
  • FIG. 4 is a diagram showing a configuration of a reading unit according to the present invention.
  • FIG. 5 is a block diagram showing an electrical configuration of a reading unit according to the present invention.
  • FIG. 6 is a diagram showing a flow of the data processing unit shown in FIG.
  • FIG. 7 is a diagram showing the overlap of light receiving elements of adjacent reading units.
  • FIG. 8 is a diagram showing the specific locations of the symbols indicating the positions of the reading units shown in FIG. 7.
  • FIG. 9 is a diagram showing the correspondence between the position of the light receiving element and the weighting coefficient in the sensor joining process.
  • FIG. 10 is a diagram showing a weighting expression between CIS (A) and CIS (B) in the example.
  • FIG. 11 is a diagram showing a weighting calculation formula between CIS (B) and CIS (C) in the embodiment.
  • FIG. 12 is a diagram showing a weighting expression between CIS (C) and CIS (D) in the example.
  • FIG. 13 is a diagram showing a weighting expression between CIS (D) and CIS (E) in the example.
  • FIG. 14 is a diagram showing the calculation formulas for the read data in the ranges not covered by FIG. 10, FIG. 11, FIG. 12 and FIG. 13.
  • FIG. 15 is a diagram showing the arrangement of a conventional reading unit.
  • FIG. 16 is a diagram for explaining joining of conventional reading units.
  • The reading unit 41, called a CIS, has 5000 light receiving elements 42 arranged in one row in the main scanning direction.
  • Each light receiving element 42 is a combination of an optical component such as a SELFOC rod lens (not shown) and a photoelectric conversion element such as a phototransistor (not shown); the rod lens focuses the light reflected from the original onto the light receiving surface of the photoelectric conversion element.
  • Three light guides are arranged in the CIS 41 parallel to the direction in which the light receiving elements are lined up. Driven by LEDs that emit red, blue and green light, the light guides illuminate the original with red, blue and green light, respectively.
  • The sensor unit has five reading units 21 to 25, each called a CIS, fixed on the plate 31 in a staggered arrangement.
  • Each CIS reads the document 26 as it is transported over the sensor unit in the document transport direction 27.
  • FIG. 2 shows the positional relationship between each CIS and the document as seen from directly above FIG. 3.
  • CIS(A) 21, CIS(C) 23 and CIS(E) 25 are placed at the same position in the sub-scanning direction so that they do not overlap in the main scanning direction.
  • CIS(B) 22 and CIS(D) 24 are placed at the same position in the sub-scanning direction, downstream of those three CISs with respect to document transport, forming a staggered arrangement with CIS(A), (C) and (E).
  • In this embodiment, CIS(A) 21 and CIS(B) 22, CIS(B) 22 and CIS(C) 23, CIS(C) 23 and CIS(D) 24, and CIS(D) 24 and CIS(E) 25 are each arranged so that 264 of their light receiving elements overlap in the main scanning direction.
  • The outputs of CIS(A) 21, CIS(B) 22, CIS(C) 23, CIS(D) 24 and CIS(E) 25 are input to the AD converter 51.
  • the output of the AD converter 51 is input to the data processing unit 52.
  • An image memory 55 and a non-volatile memory 56 are connected to the data processing unit 52, and read / write can be performed by the data processing unit 52.
  • the output of the data processing unit 52 is input to the image processing unit 57.
  • the output of the image processing unit 57 is connected to the outside of the scanner.
  • the CPU 53 is connected to the data processing unit 52, and the CPU 53 controls the data processing unit 52.
  • the CPU 53 is connected to a program memory 54 and stores a program that defines the operation of the CPU 53.
  • For each CIS, starting from a trigger pulse input from the data processing unit 52, the sensor unit outputs the output values of the light receiving elements, in order from the 1st to the 5000th element on each CIS, as an analog signal synchronized with the rising edge of a CIS reference clock (not shown).
  • the AD converter 51 samples in accordance with the rising edge of the CIS reference clock, performs A / D conversion, and outputs it as 8-bit digital data.
  • the data processing unit 52 includes a gate array, a cell-based ASIC, an FPGA, and the like, receives digital data output from the AD converter, and performs shading correction. A specific method of shading correction will be described later.
  • When the upstream CIS outputs the read data of line p, the downstream CIS outputs the read data of line p-q, so the data processing unit temporarily stores the processed data of the upstream CIS in the image memory 55.
  • After the document has been transported by q lines, the downstream CIS reads line p; after AD conversion in the data processing unit, the data of the same line read by the upstream CIS is read out of the memory and the inter-sensor joining process is performed. Through the joining process, the data that had been processed in five separate streams become one line of data and are sent to the image processing unit.
  • The image processing unit is built around a DSP (Digital Signal Processor) or the like and performs, as required, image processing such as contrast adjustment and gamma correction, cutting out of the required range in the main scanning and sub-scanning directions, enlargement/reduction and binarization; the result is then sent out of the reading device and, depending on the application, stored as image data, printed out, or transferred to a remote location via a LAN or WAN.
  • the flow of the data processing unit will be described below with reference to FIG.
  • The data input from the AD converter 51 first enter the FIFO 61 in order to absorb the difference between the transfer rate between the AD converter 51 and the data processing unit 52 and the transfer rate inside the data processing unit.
  • The intra-reading-unit cutout processing unit 62 reads the data from the FIFO 61, discards the data of the 100 pixels at both ends of each CIS, which are not needed as described later, and sends the remainder to the shading correction unit.
  • To widen the dynamic range of the read data, shading correction for 256 gradations is calculated as: corrected data = 255 × (N(x) - Bk(x)) / (Wh(x) - Bk(x)), where N(x) is the AD converter output at position x when the document is read.
  • Bk(x) used in the shading correction is obtained by storing in the nonvolatile memory 56 the AD-converted values of the light receiving element outputs with the CIS LEDs turned off before the document is read, and Wh(x) is likewise obtained by storing in the nonvolatile memory 56 the AD-converted values of the light receiving element outputs when the white reference plate provided in the apparatus is read; at the time of shading correction, Bk(x) and Wh(x) are read from the nonvolatile memory 56 and the correction is performed.
  • the data after the shading correction is written into the image memory 55 for delay processing by the memory controller 64.
  • Taking into account the difference in transport time in the sub-scanning direction of the staggered CISs, the delay processing unit 65 reads the data corresponding to the same line of the document via the memory controller 64 and passes them to the inter-sensor joining processing unit 66. The operation of the inter-sensor joining processing unit 66 is described later.
  • The data after the joining process are sent to the main-scanning-direction data cutout processing unit 67, where only the range in the main scanning direction specified by the CPU is extracted and sent to the output FIFO 68.
  • Data are read out of the output FIFO 68 and transferred out of the data processing unit 52 at a rate matching the transfer rate between the data processing unit 52 and the image processing unit, after which the image processing unit 57 performs the processing described above.
  • In each CIS, the 100 pixels at each end are reserved for adjustment in case the displacement between adjacent CISs in the main scanning direction exceeds the light receiving element pitch.
  • When no such adjustment for manufacturing variation is needed, the outputs of these 100 end pixels are invalid.
  • The 4800 light receiving elements from the 101st to the 4900th are therefore effective. Where there is an adjacent CIS, the light receiving elements for the 64 pixels at the end of these 4800, that is, the 101st to 164th and the 4737th to 4800th elements in the CIS, are the targets of the joining calculation. In the joining calculation, a weighting calculation is performed on the 101st received-light data of one CIS and the 4737th received-light data of the other CIS to produce the data of that pixel; in the same way, the 102nd of one CIS is paired with the 4738th of the other, through the 164th of one and the 4800th of the other.
  • In the weighting calculation, with Na the gradation of one light receiving element and Nb that of the other, the joined value is Nout = ((ga/G) × Na) + (((G - ga)/G) × Nb).
  • G: an integer greater than or equal to 1; ga: a weighting coefficient consisting of an integer between 1 and G.
  • In this embodiment, G is 9 and the 64-pixel joining range is divided into 8 small areas, the same weighting coefficient being used within each small area.
  • G is not limited to 9; if the small areas are subdivided more finely, G becomes at least as large as the number of small areas.
  • For example, if the 64-pixel joining range is divided into 64 small areas, G becomes 64 or more, and if the joining range exceeds 64 pixels, G becomes correspondingly larger.
  • The more finely the small areas are divided and the larger G becomes, the more natural the image joint that can be realized, but the amount of data to be processed increases accordingly.
  • The weighting coefficient ga is an integer from 1 to G, and adding it to the weighting coefficient gb of the other CIS at the same position in the main scanning direction gives G.
  • Within one reading unit, the weighting coefficient ga(x) at main-scanning position x and the weighting coefficient ga(y) at position y, adjacent to x and closer to the other reading unit within the same reading unit, satisfy ga(x) ≥ ga(y).
  • In other words, the closer a light receiving element is to the end of the CIS, the smaller its weighting coefficient, which gives a more natural read image.
  • FIG. 7(a) shows the positional relationship at the joint between CIS(A) and CIS(B).
  • a10 to a18 indicate the positions of the light receiving elements of CIS(A).
  • b1 to b9 indicate the positions of the light receiving elements of CIS(B).
  • Similarly, FIG. 7(b) shows CIS(B) and CIS(C), FIG. 7(c) shows CIS(C) and CIS(D), and FIG. 7(d) shows CIS(D) and CIS(E).
  • FIG. 9 shows the weighting coefficient for each small area that requires the weighting calculation shown in FIGS. 7 and 8.
  • FIGS. 10 to 14 show the calculation formulas for the pixel data of one line after joining, using the weighting coefficients of FIG. 9.
  • Because the pixels around the boundary between reading units contain both sets of read data at the same position in the main scanning direction, the unnaturalness of the read image that arises at the joints of the reading units due to CIS manufacturing variation and variation in the mounting accuracy of the CISs on the sensor unit can be eliminated.
  • FIG. 1 shows a configuration of a reading apparatus according to an embodiment of the present invention.
  • reference numeral 1 denotes a sensor unit that irradiates a document with reading light and reads the reflected light.
  • Reference numeral 3 denotes the document table; the white reference plate 2 is fixed at a position facing the sensor unit 1 across the document table 3.
  • The document transport path 4 runs between the document table 3 and the white reference plate 2.
  • Reference numerals 5a and 5b are transmission sensors that detect the insertion of a document when it is inserted from a document loading port (not shown) outside the apparatus. A loading drive roller 6a and a loading driven roller 6b transport the document to the document table 3 when its insertion is detected.
  • 8a and 8b are sensors for detecting the conveyance of the document to the document table 3
  • 7a is a driving roller for conveying the document
  • 7b is a driven roller for conveying the document.
  • Reference numeral 9a denotes a discharge driving roller for discharging the original after reading out of the apparatus
  • 9b denotes a discharge driven roller.
  • The reading device of the present invention can thus provide an image reading apparatus that eliminates the unnaturalness of the read image occurring at the joints of the reading units.

Abstract

An image reading device reads a document using a plurality of reading units, each having a plurality of light receiving elements arrayed in the main scanning direction. The read pixel data for the portion where the pixels of adjoining CISs overlap in the main scanning direction are subjected to an averaging operation in which the output values of the individual light receiving elements at the same position in the main scanning direction are weighted, and the weighting coefficients are not made identical across the overlap range. The image reading device thus provided suppresses image quality deterioration at the joints of the reading units and achieves high read image quality.

Description

Image reading device

Technical field

[0001] The present invention relates to an image reading apparatus, and more particularly to an image reading apparatus that reads a document using a plurality of reading units, each having a plurality of light receiving elements arranged in the main scanning direction, and that accurately joins the image data between the reading units to improve image quality.

Background art

[0002] In a conventional image reading apparatus, as shown in Japanese Patent Application Laid-Open No. 60-31357, reading light is irradiated onto an original, and the reflected light from the original is photoelectrically converted by a reading unit consisting of light receiving elements arranged in a line in the main scanning direction, thereby obtaining image data. A CIS (contact image sensor), which is a unity-magnification sensor, is used as the reading unit because of its simple structure and ease of miniaturization. In a CIS, the light receiving elements are usually arranged at a density of 300 dpi to 1200 dpi in order to obtain high-quality images. Each light receiving element combines an optical component such as a SELFOC rod lens with a photoelectric conversion element such as a phototransistor; the light reflected from the document surface passes through the lens and forms an image on the photoelectric conversion element. Up to about A4 and A3 size, a CIS is relatively easy to manufacture and, thanks to mass production, can be supplied at low cost. However, lining up enough light receiving elements to cover an A1 or A0 size document with a single CIS is difficult to manufacture, lowers the yield and therefore becomes expensive. A large-size reading device for A1 or A0 documents accordingly forms a sensor unit by arranging several CISs intended for A4 or A3 size documents, making it possible to read a large document in a single document feed. When a plurality of CISs are simply arranged in one row in the main scanning direction, as shown in FIG. 15, the distance Ly between the light receiving element at the end of one CIS and the light receiving element at the end of the adjacent CIS becomes larger than the distance Lx between light receiving elements within one CIS, so after reading it is necessary to interpolate pixels between the CISs so that the unnaturalness of the joint is not noticeable. This pixel interpolation, however, is not accurate, because the interpolated pixel is only estimated from the pixel information read in its vicinity. For this reason, as shown in FIG. 3, adjacent CISs are placed at different positions in the sub-scanning direction, the ends of adjacent CISs are overlapped, and the image data read on the upstream side is delayed by delay means and then combined with the image data on the downstream side, which eliminates the blank space at the joint. A specific example of such conventional joining is described below with reference to FIG. 16(a). Note that the sub-scanning direction described in this specification is the document conveyance direction, and the main scanning direction is the direction, perpendicular to the sub-scanning direction, in which the light receiving elements in a reading unit are lined up.

[0003] CIS(A) 101 and CIS(B) 102 in FIG. 16(a) each consist of X light receiving elements. The (X-3)th 111 to Xth 116 light receiving elements of CIS(A) 101 and the 1st 121 to 4th 124 light receiving elements of CIS(B) 102 overlap in the main scanning direction. In CIS(A) 101, the light receiving elements up to the (X-2)th element 114 are treated as valid, and the (X-1)th element 115 and the Xth element 116 are invalid. Similarly, in CIS(B) 102, the light receiving elements from the 3rd element 123 onward are valid, and the 1st element 121 and the 2nd element 122 are invalid; the data are joined as follows: ..., the (X-4)th light receiving element 112 of CIS(A) 101, the (X-3)th light receiving element 113 of CIS(A) 101, the (X-2)th light receiving element 114 of CIS(A) 101, the 1st light receiving element 121 of CIS(B) 102, the 2nd light receiving element 122 of CIS(B) 102, the 3rd light receiving element 123 of CIS(B) 102, .... Such a joining method eliminates the need for the pixel interpolation that arises when the reading units are arranged in a single row in the main scanning direction, and makes it possible to reproduce the original more faithfully at the joints.

[0004] However, even in such an image reading apparatus with overlapped CIS ends, the joints of the read image may still look unnatural. The higher the reading resolution, the more a slight variation in CIS mounting accuracy or a slight expansion or contraction of the mounting plate due to the ambient temperature causes, as shown in FIG. 16(b), a shift ΔLx between light receiving elements in the overlap region that should be at the same position in the main scanning direction. This ΔLx makes the image joint between the CISs unnatural; when a natural image is read, the joint can look as if a line runs through it, degrading the quality of the read image. A variation in the main scanning direction can be adjusted in units of the light receiving element pitch by changing the positions of the pixels on the CISs at which the data are joined, but it cannot be adjusted below the element pitch. Because of this variation that cannot be absorbed, a shift occurs at the image joint, unnaturalness remains, and the image quality deteriorates.

[0005] The conventional image reading apparatus using a plurality of CISs described above therefore has the problem that the image quality at the joints is poor.

[0006] The present invention is intended to solve this problem of the conventional configuration; its object is to remove the unnaturalness of the image at the joints and realize an image reading apparatus with high read image quality.
Disclosure of the invention

[0007] In order to achieve the above object, in the present invention the read pixel data for the portion where the pixels of adjacent CISs overlap in the main scanning direction are obtained, at the time of joining, by a weighted averaging operation on the output values of the light receiving elements located at the same position in the main scanning direction, and the weighting coefficients are not kept identical across the overlap range. That is, each of the reading units to be joined reads the gradation values of the pixels around the joint, and since both read gradation values contribute to the output, the discontinuity of gradation at the joint can be made inconspicuous.

[0008] The present invention is also an image reading apparatus in which a pixel in the portion where the light receiving elements of adjacent reading units overlap is calculated from the gradation value obtained from one light receiving element and the gradation value obtained from the other light receiving element using the following expressions:

Nout(x) = Na(x) × (ga(x)/G) + Nb(x) × (gb(x)/G)
G = ga(x) + gb(x)

x: pixel position in the main scanning direction of the joined image.
Nout(x): gradation value of the pixel at position x in the main scanning direction after the calculation.
Na(x): gradation value of the pixel of one reading unit at position x in the main scanning direction.
Nb(x): gradation value of the pixel of the other reading unit at position x in the main scanning direction.
ga(x): weighting coefficient for the pixel of one reading unit at position x in the main scanning direction.
gb(x): weighting coefficient for the pixel of the other reading unit at position x in the main scanning direction.
G: an integer greater than or equal to 1.

The present invention is also an image reading apparatus in which the weighting coefficients used when calculating the pixels in the range where adjacent reading units overlap in the main scanning direction are set so that, on the side closer to one reading unit, the weighting coefficient applied to the gradation value of that reading unit is larger, and on the side closer to the other reading unit, the weighting coefficient applied to the gradation value of the other reading unit is larger.

[0009] In the present invention, within the range where a reading unit overlaps in the main scanning direction, in the part of the overlap closer to that reading unit the weighting coefficient of its read value is made larger than the weighting coefficient of the read gradation value of the adjacent reading unit at the same main scanning position, and conversely, in the part of the overlap closer to the adjacent reading unit, the weighting coefficient of the read gradation value of that reading unit is made smaller than the weighting coefficient of the read value of the adjacent reading unit at the same main scanning position; the unnatural gradation in the joined range can therefore be reduced.
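As an informal illustration only, the weighted averaging defined by the expressions above can be sketched in Python roughly as follows; the function name, the list-based data layout and the rounding to 8-bit integers are assumptions made for the sketch, not details taken from the patent.

    def blend_overlap(na, nb, ga, G):
        # na, nb: gradation values read by the two reading units at the same
        #         main-scanning positions inside the overlap range.
        # ga:     weighting coefficients for the first unit (integers 1..G);
        #         the coefficient of the second unit is gb(x) = G - ga(x).
        # G:      integer >= 1.
        out = []
        for x in range(len(na)):
            gb = G - ga[x]
            nout = na[x] * (ga[x] / G) + nb[x] * (gb / G)
            out.append(int(round(nout)))  # keep 8-bit integer gradations
        return out

For example, with G = 9 and ga decreasing from 8 to 1 across the overlap, the blended output shifts smoothly from the first unit's reading toward the second unit's reading.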
Brief description of drawings

[0010]
FIG. 1 is a configuration diagram of a reading apparatus according to the present invention.
FIG. 2 is a diagram showing the positional relationship between the staggered arrangement of the reading units and the document according to the present invention.
FIG. 3 is a diagram showing the configuration of a sensor unit according to the present invention.
FIG. 4 is a diagram showing the configuration of a reading unit according to the present invention.
FIG. 5 is a block diagram showing the electrical configuration of a reading unit according to the present invention.
FIG. 6 is a diagram showing the flow of the data processing unit shown in FIG. 5.
FIG. 7 is a diagram showing the overlap of the light receiving elements of adjacent reading units.
FIG. 8 is a diagram showing the specific locations of the symbols indicating the positions of the reading units shown in FIG. 7.
FIG. 9 is a diagram showing the correspondence between the position of the light receiving element and the weighting coefficient in the inter-sensor joining process.
FIG. 10 is a diagram showing the weighting calculation formulas between CIS(A) and CIS(B) in the embodiment.
FIG. 11 is a diagram showing the weighting calculation formulas between CIS(B) and CIS(C) in the embodiment.
FIG. 12 is a diagram showing the weighting calculation formulas between CIS(C) and CIS(D) in the embodiment.
FIG. 13 is a diagram showing the weighting calculation formulas between CIS(D) and CIS(E) in the embodiment.
FIG. 14 is a diagram showing the calculation formulas for the read data in the ranges not covered by FIGS. 10 to 13.
FIG. 15 is a diagram showing the arrangement of a conventional reading unit.
FIG. 16 is a diagram for explaining the joining of conventional reading units.
Best mode for carrying out the invention

[0011] Embodiments of the present invention are described below with reference to FIGS. 1 to 14.

[0012] As shown in FIG. 4, a reading unit 41, called a CIS, has 5000 light receiving elements 42 arranged in one row in the main scanning direction. Each light receiving element 42 is a combination of an optical component such as a SELFOC rod lens (not shown) and a photoelectric conversion element such as a phototransistor (not shown); the rod lens focuses the light reflected from the original onto the light receiving surface of the photoelectric conversion element. In addition, three light guides are arranged in the CIS 41 parallel to the direction in which the light receiving elements are lined up. Driven by LEDs that emit red, blue and green light, the light guides illuminate the original with red, blue and green light, respectively.

[0013] As shown in FIG. 3, the sensor unit has five reading units 21 to 25, each called a CIS, fixed on a plate 31 in a staggered arrangement. Each CIS reads the document 26 as it is transported over the sensor unit in the document transport direction 27. FIG. 2 shows the positional relationship between each CIS and the document as seen from directly above FIG. 3. As can be seen from FIGS. 2 and 3, CIS(A) 21, CIS(C) 23 and CIS(E) 25 are placed at the same position in the sub-scanning direction so that they do not overlap in the main scanning direction, and CIS(B) 22 and CIS(D) 24 are placed at the same position in the sub-scanning direction, downstream of those three CISs with respect to document transport, forming a staggered arrangement with CIS(A), (C) and (E). In this embodiment, CIS(A) 21 and CIS(B) 22, CIS(B) 22 and CIS(C) 23, CIS(C) 23 and CIS(D) 24, and CIS(D) 24 and CIS(E) 25 are each arranged so that 264 of their light receiving elements overlap in the main scanning direction.

[0014] Next, the flow of the CIS output data on the sensor unit is explained with the block diagram of FIG. 5. The outputs of CIS(A) 21, CIS(B) 22, CIS(C) 23, CIS(D) 24 and CIS(E) 25 are input to an AD converter 51. The output of the AD converter 51 is input to a data processing unit 52. An image memory 55 and a non-volatile memory 56 are connected to the data processing unit 52 and can be read and written by it. The output of the data processing unit 52 is input to an image processing unit 57, and the output of the image processing unit 57 is connected to the outside of the scanner. A CPU 53 is connected to the data processing unit 52 and controls it. A program memory 54, which stores the program defining the operation of the CPU 53, is connected to the CPU 53.

[0015] For each CIS, starting from a trigger pulse input from the data processing unit 52, the sensor unit outputs the output values of the light receiving elements, in order from the 1st to the 5000th element of each CIS, as an analog signal synchronized with the rising edge of a CIS reference clock (not shown) generated by the data processing unit 52. The AD converter 51 samples the signal on the rising edge of the CIS reference clock, performs A/D conversion, and outputs 8-bit digital data. The data processing unit 52 is composed of a gate array, a cell-based ASIC, an FPGA or the like; it receives the digital data output by the AD converter and performs shading correction. A specific method of shading correction is described later. When the upstream CIS is outputting the read data of line p, the downstream CIS is outputting the read data of line p-q. The data processing unit therefore temporarily stores the processed data of the upstream CIS in the image memory 55. After the document has been transported by q lines, the downstream CIS reads line p; after AD conversion in the data processing unit, the data of the same line read by the upstream CIS is read out of the memory and the inter-sensor joining process is performed. Through the joining process, the data that had been processed in five separate streams become one line of data and are sent to the image processing unit.

[0016] The image processing unit is composed of a DSP (Digital Signal Processor) or the like and performs, as required, image processing such as contrast adjustment and gamma correction, cutting out of the required range in the main scanning and sub-scanning directions, enlargement/reduction and binarization; the result is then sent out of the reading device and, depending on the application, stored as image data, printed out, or transferred to a remote location via a LAN or WAN.
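As a loose illustration of the kind of post-processing listed in paragraph [0016] (not taken from the patent itself), a gamma-correction lookup table followed by optional binarization might look like the following Python sketch; the gamma value and the threshold are arbitrary example parameters.

    def make_gamma_lut(gamma=2.2):
        # 256-entry lookup table for gamma-correcting 8-bit data.
        return [min(255, int(round(255 * (v / 255.0) ** (1.0 / gamma)))) for v in range(256)]

    def postprocess_line(line, lut, binarize=False, threshold=128):
        corrected = [lut[v] for v in line]              # gamma correction via the LUT
        if binarize:
            return [255 if v >= threshold else 0 for v in corrected]
        return corrected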
[0017] The flow of the data processing unit is explained below with reference to FIG. 6. The data input from the AD converter 51 first enter a FIFO 61 in order to absorb the difference between the transfer rate between the AD converter 51 and the data processing unit 52 and the transfer rate inside the data processing unit. An intra-reading-unit cutout processing unit 62 reads the data from the FIFO 61, discards the data of the 100 pixels at both ends of each CIS, which are not needed as described later, and sends the remainder to the shading correction unit. To widen the dynamic range of the read data, shading correction for 256 gradations is calculated by the following formula:

Data after shading correction = 255 × (N(x) - Bk(x)) / (Wh(x) - Bk(x))

N(x): read data of the AD converter output at position x when the document is read.
Bk(x): read data of the AD converter output at position x with the CIS LEDs turned off.
Wh(x): read data of the AD converter output at position x when the white reference plate is read.

Bk(x) used in the shading correction is obtained by storing in the non-volatile memory 56 the AD-converted values of the light receiving element outputs with the CIS LEDs turned off before the document is read, and Wh(x) is likewise obtained by storing in the non-volatile memory 56 the AD-converted values of the light receiving element outputs when the white reference plate provided in the apparatus is read before the document is read; at the time of shading correction, Bk(x) and Wh(x) are read from the non-volatile memory 56 and the correction is performed.
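A minimal Python sketch of this shading correction follows for illustration; the clamping to the 0 to 255 range and the guard against a zero denominator are additions for robustness and are not described in the patent text.

    def shading_correct(n, bk, wh):
        # n:  AD-converted document data per pixel position x
        # bk: black reference Bk(x), captured with the CIS LEDs off
        # wh: white reference Wh(x), captured from the white reference plate
        out = []
        for x in range(len(n)):
            denom = wh[x] - bk[x]
            value = 255 * (n[x] - bk[x]) / denom if denom > 0 else 0
            out.append(max(0, min(255, int(round(value)))))  # clamp to 8 bits
        return out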
[0018] The data after shading correction are written into the image memory 55 by a memory controller 64 for delay processing. Taking into account the difference in transport time in the sub-scanning direction of the staggered CISs, a delay processing unit 65 reads the data corresponding to the same line of the document via the memory controller 64 and passes them to an inter-sensor joining processing unit 66. The operation of the inter-sensor joining processing unit 66 is described later.

[0019] The data after the joining process are sent to a main-scanning-direction data cutout processing unit 67, where only the range in the main scanning direction specified by the CPU is extracted and sent to an output FIFO 68. Data are read out of the output FIFO 68 and transferred out of the data processing unit 52 at a rate matching the transfer rate between the data processing unit 52 and the image processing unit, and the image processing unit 57 performs the processing described above.
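To illustrate the delay described in paragraphs [0015] and [0018], the following Python sketch buffers the upstream row's lines until the downstream row reaches the same document line; the class name, the in-memory deque buffer and the fixed line offset q are assumptions of the sketch rather than details given in the patent.

    from collections import deque

    class LineAligner:
        def __init__(self, q):
            self.q = q                 # line offset between the upstream and downstream CIS rows
            self.buffer = deque()      # stands in here for the image memory 55

        def push(self, upstream_line, downstream_line):
            # Called once per read line: the upstream row is outputting line p
            # while the downstream row is outputting line p - q.
            self.buffer.append(upstream_line)
            if len(self.buffer) <= self.q:
                return None                              # same-line downstream data not available yet
            delayed_upstream = self.buffer.popleft()     # upstream data for the same document line
            return delayed_upstream, downstream_line     # pair ready for the inter-sensor joining process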
[0020] A detailed example of the inter-sensor joining process is described below.

[0021] In each CIS, the 100 pixels at each end are reserved for adjustment in case the displacement between adjacent CISs in the main scanning direction exceeds the light receiving element pitch; when no adjustment for manufacturing variation is needed, the outputs of these 100 end pixels are invalid.

[0022] Therefore, the 4800 light receiving elements from the 101st to the 4900th are effective. Where there is an adjacent CIS, the light receiving elements for the 64 pixels at the end of these 4800, that is, the 101st to 164th and the 4737th to 4800th elements in the CIS, are the targets of the joining calculation. In the joining calculation, a weighting calculation is performed on the 101st received-light data of one CIS and the 4737th received-light data of the other CIS, and the data of that pixel are calculated. In the same way, the weighting calculation is performed on the 102nd of one CIS and the 4738th of the other, through the 164th of one and the 4800th of the other.

[0023] Accordingly, in CIS(A) and CIS(E), 4800 light receiving elements are effective, of which 64 are used for joining with the adjacent CIS, so 4736 elements do not pass through the joining process. In CIS(B), CIS(C) and CIS(D), which require joining at both ends, 4800 light receiving elements are effective, of which 128 are used for joining with the adjacent CISs, so 4672 elements do not pass through the joining process. The number of pixels in one line after joining is therefore 4736 + 64 + 4672 + 64 + 4672 + 64 + 4672 + 64 + 4736 = 23744.
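The stated line length can be checked with a short calculation; the following snippet is illustrative only and simply reproduces the count from the per-CIS figures given above.

    # Five CISs with 4800 effective elements each; each of the four joints
    # merges a 64-pixel overlap from both sides into 64 output pixels.
    effective_per_cis = 4800
    joints = 4
    overlap = 64
    line_pixels = 5 * effective_per_cis - joints * overlap
    print(line_pixels)  # 23744 = 4736 + 64 + 4672 + 64 + 4672 + 64 + 4672 + 64 + 4736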
[0024] In the weighting calculation of the joining process, with Na the gradation of one light receiving element and Nb the gradation of the other, the joined value is calculated by the following formula:

Nout = ((ga/G) × Na) + (((G - ga)/G) × Nb)

Nout: gradation after the joining process.
G: an integer greater than or equal to 1.
ga: weighting coefficient, an integer between 1 and G.

In this embodiment, G is set to 9 and the 64-pixel joining range is divided into 8 small areas, the same weighting coefficient being used within each small area. G is not limited to 9; if the small areas are subdivided more finely, G becomes a number at least as large as the number of small areas. For example, if the 64-pixel joining range is divided into 64 small areas, G becomes 64 or more, and if the joining range exceeds 64 pixels, G becomes correspondingly larger. The more finely the small areas are divided and the larger G becomes, the more natural the image joint that can be realized, but the amount of data to be processed increases accordingly.

[0025] The weighting coefficient ga is an integer from 1 to G, and adding it to the weighting coefficient gb of the other CIS at the same position in the main scanning direction gives G. Within one reading unit, the weighting coefficient ga(x) at main-scanning position x and the weighting coefficient ga(y) at position y, adjacent to x and closer to the other reading unit within the same reading unit, satisfy ga(x) ≥ ga(y). In other words, the closer a light receiving element is to the end of the CIS, the smaller its weighting coefficient, which gives a more natural read image.
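One way to generate coefficients that satisfy these conditions is sketched below for illustration; the concrete values (8 down to 1 for G = 9) are an assumption for the sketch, since the coefficients actually used in the embodiment are those of FIG. 9, which is not reproduced here.

    def small_area_coefficients(overlap=64, areas=8, G=9):
        # One coefficient ga per pixel of the overlap, constant within each
        # small area and non-increasing toward the other reading unit, with
        # ga(x) + gb(x) = G at every position.
        per_area = overlap // areas        # 8 pixels per small area here
        ga = []
        for area in range(areas):
            ga.extend([G - 1 - area] * per_area)   # 8, 7, ..., 1 when G = 9
        gb = [G - c for c in ga]
        return ga, gb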
[0026] FIG. 7(a) shows the positional relationship of the joint between CIS(A) and CIS(B). In the figure, a10 to a18 indicate the positions of the light receiving elements of CIS(A), and b1 to b9 indicate the positions of the light receiving elements of CIS(B). Similarly, FIG. 7(b) shows CIS(B) and CIS(C), FIG. 7(c) shows CIS(C) and CIS(D), and FIG. 7(d) shows CIS(D) and CIS(E).

[0027] FIG. 8 shows which light receiving element on each CIS is identified by each of the symbols in FIG. 7 (a10 to a18, b1 to b9, b10 to b18, c1 to c9, c10 to c18, d1 to d9, d10 to d18, e1 to e9).

[0028] FIG. 9 shows the weighting coefficient for each of the small areas that require the weighting calculation shown in FIGS. 7 and 8.

[0029] FIGS. 10 to 14 show the calculation formulas for the pixel data of one line after joining, using the weighting coefficients of FIG. 9.

[0030] Through this joining process, the pixels around the boundary between reading units contain both sets of read data at the same position in the main scanning direction, so the unnaturalness of the read image that arises at the joints of the reading units because of CIS manufacturing variation and variation in the mounting accuracy of the CISs on the sensor unit can be eliminated.
実施例 1  Example 1
[0031] 図 1は本発明の一実施例の読み取り装置の構成を示す。  FIG. 1 shows a configuration of a reading apparatus according to an embodiment of the present invention.
[0032] 図 1において、 1は原稿に読み取り光を照射して、その反射光を読み取るセンサー ユニットで、 3は原稿台、原稿台 3を挟んで、センサーユニット 1と対畤する位置に、白 基準板 2を固定する。原稿台 3と、白基準板 2の間が原稿の搬送経路 4である。 5a、 5 bは、外部力 原稿を図示しない原稿搬入ロカ 挿入した際に、原稿の挿入を感知 する透過センサーである。原稿の挿入感知により原稿を原稿台 3へ搬送する搬入用 駆動ローラ 6a、搬入用従動ローラ 6bがある。 8a、 8bは原稿台 3への原稿搬送を感知 するセンサ、 7aは原稿を搬送する駆動ローラであり、 7bは原稿搬送用従動ローラで ある。また、 9aは読み取り終了後の原稿を装置外へ排出する排出用駆動ローラであ り、 9bは排出用従動ローラである。 In FIG. 1, reference numeral 1 denotes a sensor unit that irradiates a document with reading light and reads the reflected light. Reference numeral 3 denotes a white platen placed at a position facing the sensor unit 1 with the document table and the document table 3 interposed therebetween. Fix reference plate 2. A document transport path 4 is between the document table 3 and the white reference plate 2. Reference numerals 5a and 5b are transmission sensors that detect the insertion of an original when an external force original is inserted into a document carry-in locuser (not shown). For transporting documents to the platen 3 when document insertion is detected There are a driving roller 6a and a driven roller 6b for loading. 8a and 8b are sensors for detecting the conveyance of the document to the document table 3, 7a is a driving roller for conveying the document, and 7b is a driven roller for conveying the document. Reference numeral 9a denotes a discharge driving roller for discharging the original after reading out of the apparatus, and 9b denotes a discharge driven roller.
[0033] Next, the operation of the embodiment shown in FIG. 1 will be described. Before reading a document, the sensor unit 1 reads a black reference value with its light source turned off, then turns on the LEDs on the sensor unit 1 to read the white reference plate 2, and stores the parameters for shading correction. A document inserted from the document inlet (not shown) is detected by the transmission sensors 5a and 5b and is fed toward the document table 3 by the loading drive roller 6a and the loading driven roller 6b until the sensors 8a and 8b detect the leading edge of the document. The document is then read by the sensor unit 1 while being transported by the document feed drive roller 7a and the document feed driven roller 7b. When reading of the predetermined number of lines is completed, or when the transmission sensors 5a and 5b detect the trailing edge of the document, the sensor unit 1 ends reading, and the document is discharged out of the apparatus by the discharge drive roller 9a and the discharge driven roller 9b.
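The shading correction mentioned above is commonly a per-pixel normalization against the stored black and white reference values. The patent does not spell out the exact formula, so the sketch below uses the usual normalization only as an assumed illustration.

```python
def shading_correct(raw, black, white, max_level=255):
    """Per-pixel shading correction using stored black/white reference lines.

    raw, black, white: gradation values for one scan line.  Each pixel is
    rescaled so that its black reference maps to 0 and its white reference
    maps to max_level, compensating for LED and light-receiving-element
    non-uniformity along the main scanning direction.
    """
    out = []
    for r, b, w in zip(raw, black, white):
        span = max(w - b, 1)                       # guard against division by zero
        v = (r - b) * max_level // span
        out.append(min(max(v, 0), max_level))      # clamp to the valid range
    return out
```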
Industrial Applicability
[0034] As described above, the present invention provides an image reading apparatus that can eliminate the unnaturalness in the read image that occurs at the joints between reading units.

Claims

[1] An image reading apparatus comprising at least two reading units, each of which reads an image of a document by irradiating the document with reading light and photoelectrically converting the reflected light with light receiving elements, the reading units being arranged at different positions in the main scanning direction such that part of the light receiving elements of adjacent reading units overlap in the main scanning direction, the document being read by moving either a sensor unit comprising the plurality of reading units or the document in the sub-scanning direction by moving means, and combining means for combining the image data read by the reading unit located on the upstream side and the image data read by the reading unit located on the downstream side to generate one line of image data, wherein the combining means comprises joint image processing means for calculating the gradation values in the range where the light receiving elements of the adjacent reading units overlap in the main scanning direction by a weighted averaging operation on the gradation value obtained from the upstream reading unit and the gradation value obtained from the downstream reading unit, and the weighting coefficients used in the averaging operation differ within the overlapping range depending on the position in the main scanning direction.
[2] The image reading apparatus according to claim 1, wherein the sensor unit is fixed and the document is moved by document moving means.
[3] The image reading apparatus according to claim 1, wherein the document is fixed and the sensor unit is moved by sensor unit moving means.
[4] The image reading apparatus according to any one of claims 1 to 3, wherein the odd-numbered reading units in the main scanning direction are arranged at one common position in the sub-scanning direction and the even-numbered reading units are arranged at another common position in the sub-scanning direction.
[5] The image reading apparatus according to any one of claims 1 to 4, wherein the weighted averaging operation is calculated by the following equations:

Nout(x) = Na(x) × (ga(x) / G) + Nb(x) × (gb(x) / G)
G = ga(x) + gb(x)

where
x: pixel position in the main scanning direction of the stitched image;
Nout(x): calculated gradation value of the pixel at position x in the main scanning direction;
Na(x): gradation value of the pixel of one reading unit at position x in the main scanning direction;
Nb(x): gradation value of the pixel of the other reading unit at position x in the main scanning direction;
ga(x): weighting coefficient of the pixel of the one reading unit at position x in the main scanning direction;
gb(x): weighting coefficient of the pixel of the other reading unit at position x in the main scanning direction;
G: an integer of 1 or more.
[6] The image reading apparatus according to any one of claims 1 to 5, wherein, for adjacent pixels in the main scanning direction within one reading unit, the weighting coefficient ga(x) of the pixel at a position x closer to the center of that reading unit and the weighting coefficient ga(y) of the pixel at the adjacent position y closer to the other reading unit satisfy ga(x) ≥ ga(y).
PCT/JP2005/018076 2004-12-08 2005-09-30 Image reading device WO2006061941A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004355518A JP2006166106A (en) 2004-12-08 2004-12-08 Picture reader
JP2004-355518 2004-12-08

Publications (1)

Publication Number Publication Date
WO2006061941A1 true WO2006061941A1 (en) 2006-06-15

Family

ID=36577771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/018076 WO2006061941A1 (en) 2004-12-08 2005-09-30 Image reading device

Country Status (2)

Country Link
JP (1) JP2006166106A (en)
WO (1) WO2006061941A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008028662A (en) * 2006-07-20 2008-02-07 Ricoh Co Ltd Image reader and image forming apparatus
JP5096291B2 (en) * 2007-11-07 2012-12-12 株式会社リコー Image reading apparatus, image forming apparatus, and image data processing program
JP5897090B2 (en) 2013-10-22 2016-03-30 キヤノン・コンポーネンツ株式会社 Image sensor unit, image reading device, and paper sheet identification device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4692812A (en) * 1985-03-26 1987-09-08 Kabushiki Kaisha Toshiba Picture image reader
JPS6348053A (en) * 1986-08-15 1988-02-29 Canon Inc Picture information inputting device
JP2002057860A (en) * 2000-08-10 2002-02-22 Pfu Ltd Image reader
US20030138167A1 (en) * 2002-01-22 2003-07-24 Joergen Rasmusen Method and a system for stitching images produced by two or more sensors in a graphical scanner

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8264705B2 (en) 2007-11-07 2012-09-11 Ricoh Company, Ltd. Image reading apparatus, image forming apparatus and computer readable information recording medium

Also Published As

Publication number Publication date
JP2006166106A (en) 2006-06-22

Similar Documents

Publication Publication Date Title
JP5609288B2 (en) Image reading device
US7688488B2 (en) Image reading device and image forming apparatus including the same
JP4107029B2 (en) Image reading device
US7989748B2 (en) Image reading apparatus, image forming apparatus, image inspecting apparatus and image forming system
US20050199781A1 (en) Color signal compensation
WO2006061941A1 (en) Image reading device
US20060279748A1 (en) Apparatus and method for compensating for resolution differences of color and monochrome sensors
US5903363A (en) Image processing apparatus and method
US20050270589A1 (en) Image scanner
JP4557474B2 (en) Color signal correction circuit and image reading apparatus
JP4179329B2 (en) Line sensor chip, line sensor, image information reading device, facsimile, scanner and copying machine
JP2006186558A (en) Image reading apparatus
JP2006186902A (en) Image reading apparatus
US8634116B2 (en) Image reading device and image forming apparatus
JP4448427B2 (en) Image processing device
JP2004214834A (en) Image reading apparatus
JP2006311293A (en) Image processor
JP5429035B2 (en) Contact image sensor
JP4052134B2 (en) Image reading apparatus, image forming apparatus, and image processing method
JP4616716B2 (en) Image reading apparatus and image forming apparatus
JP2004214769A (en) Image reading apparatus
JP2012104931A (en) Image reading apparatus and image forming apparatus
JPH08102866A (en) Image reader
JPH1051603A (en) Image reader
JP4414276B2 (en) Image reading device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05787678

Country of ref document: EP

Kind code of ref document: A1