CN102905143B - 2D (two-dimensional)-3D (three-dimensional) image conversion device and method thereof - Google Patents


Info

Publication number
CN102905143B
CN102905143B (application CN201110214453.0A)
Authority
CN
China
Prior art keywords
offset
described current
data
pixel
current pixel
Prior art date
Legal status
Active
Application number
CN201110214453.0A
Other languages
Chinese (zh)
Other versions
CN102905143A (en)
Inventor
谢俊兴
郑皓盈
余家伟
张政信
Current Assignee
Realtek Semiconductor Corp
Original Assignee
Realtek Semiconductor Corp
Priority date
Filing date
Publication date
Application filed by Realtek Semiconductor Corp
Priority to CN201110214453.0A
Publication of CN102905143A
Application granted
Publication of CN102905143B
Legal status: Active
Anticipated expiration


Abstract

The invention relates to a 2D (two-dimensional) to 3D (three-dimensional) image conversion device and a method thereof. The device comprises a data queue, a conversion unit and an offset calculation unit. The data queue receives and temporarily stores an input data value corresponding to a current pixel. The conversion unit outputs a current offset table corresponding to a current depth parameter according to the current depth parameter of the current pixel, the current offset table comprising (m+1) reference offset values corresponding to the current pixel and its m adjacent pixels. The offset calculation unit selects one of the reference offset values corresponding to the current pixel in the current offset table and in a plurality of previous offset tables as the data offset value of the current pixel. The data queue selects and outputs an output data value of the current pixel according to the integer part of the data offset value and the input data value.

Description

2D-to-3D image conversion device and method thereof
Technical field
The present invention relates to a 2D-to-3D image conversion device and method.
Background
With the rapid development of modern technology, people have come to pursue visual experiences more realistic than those offered by 2D image devices, and 3D stereoscopic imaging techniques have therefore matured considerably in recent years. To form a 3D stereoscopic image, today's common 2D image devices must first combine the 2D image with a corresponding depth map and perform image processing to obtain the dual image required by 3D glasses; viewing the result through such glasses then produces the 3D stereoscopic effect. However, when the 2D image is warped according to the corresponding depth map (image warping), the problem of missing data frequently arises.
Please refer to Fig. 1, which is a schematic diagram of a known 2D-to-3D image conversion process. In Fig. 1, each pixel is shifted according to an offset related to its depth. For example, the offset corresponding to pixel P4 is 3, so the input data value d4 is shifted to become the output data value of pixel P7. Likewise, the offset corresponding to pixel P5 is 1, so the input data value d5 is shifted to become the output data value of pixel P6. However, as shown in Fig. 1, the output data values of pixels P1, P5, P8, P9 and P10 are missing. In addition, the output data of both pixels P4 and P6 are shifted onto the same pixel P7, and the output data values of pixels P6 and P7 suffer from data crossing. Therefore, besides an extra hole-filling process on the output data values, further image processing is still required to obtain the desired binocular-disparity dual image. As a result, extra resources are spent on hole filling, and the overall efficiency of the image processing system is reduced.
Summary of the invention
The present invention relates to a 2D-to-3D image conversion device and method that use simple depth-image-based rendering to convert a 2D image into a 3D image without requiring an extra hole-filling process.
One object of the present invention is to provide a 2D-to-3D image conversion device comprising a data queue, a conversion unit and an offset calculation unit. The data queue receives and temporarily stores an input data value corresponding to a current pixel. The conversion unit outputs a current offset table corresponding to a current depth parameter according to the current depth parameter of the current pixel; the current offset table comprises (m+1) reference offset values corresponding to the current pixel and its m adjacent pixels, where m is a positive integer. The offset calculation unit selects one of the reference offset values corresponding to the current pixel in the current offset table and in a plurality of previous offset tables as the data offset value of the current pixel. The data queue then selects and outputs an output data value of the current pixel according to the integer part of the data offset value and the input data value.
Another object of the present invention is to provide a 2D-to-3D image conversion method, comprising: receiving and temporarily storing an input data value corresponding to a current pixel; generating a current offset table corresponding to a current depth parameter according to the current depth parameter of the current pixel, the current offset table comprising (m+1) reference offset values corresponding to the current pixel and its m adjacent pixels, m being a positive integer; selecting one of the reference offset values corresponding to the current pixel in the current offset table and in a plurality of previous offset tables as the data offset value of the current pixel; and selecting and outputting an output data value of the current pixel according to the integer part of the data offset value and the input data value.
For a better understanding of the above and other aspects of the present invention, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a schematic diagram of a known 2D-to-3D image conversion process.
Fig. 2 is a block diagram of a 2D-to-3D image conversion device according to an embodiment.
Fig. 3 is a simplified diagram of a 2D-to-3D image conversion process according to an embodiment.
Fig. 4A to Fig. 4K are detailed diagrams of a 2D-to-3D image conversion process according to an embodiment.
Fig. 5 is a block diagram of a 2D-to-3D image conversion device according to another embodiment.
Fig. 6 is a flow chart of a 2D-to-3D image conversion method according to an embodiment.
Main element symbol description
200, 500: 2D-to-3D image conversion device
210, 510: data queue
220, 520: conversion unit
230, 530: offset calculation unit
540: interpolation unit
Embodiment
The 2D-to-3D image conversion device and method proposed by the present invention use simple depth-image-based rendering to convert a 2D image into a 3D image without an extra hole-filling process.
Please refer to Fig. 2, which illustrates a block diagram of a 2D-to-3D image conversion device according to an embodiment. The 2D-to-3D image conversion device 200 comprises a data queue 210, a conversion unit 220 and an offset calculation unit 230. The data queue 210 receives and temporarily stores an input data value data_in corresponding to a current pixel. The conversion unit 220 outputs a current offset table corresponding to a current depth parameter depth_ref according to the current depth parameter of the current pixel. In practice, the conversion unit 220 may be implemented as an offset lookup table (offset LUT) that returns the current offset table according to the current depth parameter depth_ref, or it may substitute the current depth parameter into a formula to obtain the current offset table; this is not limited and may be decided according to design requirements.
The current offset table comprises (m+1) reference offset values corresponding to the current pixel and its m adjacent pixels, where m is a positive integer equal to the maximum possible offset. For example, if the maximum possible offset is 4, the current offset table comprises 5 reference offset values. The m adjacent pixels may be the m pixels following the current pixel or the m pixels preceding it; this is not limited. The offset calculation unit 230 selects one of the reference offset values corresponding to the current pixel in the current offset table and in a plurality of previous offset tables as the data offset value offset of the current pixel. The data offset value offset may be the maximum or the minimum of these reference offset values; the maximum is used in the following description, but this is not a limitation and may be changed according to design requirements. The data queue 210 selects and outputs an output data value data_out of the current pixel according to the integer part of the data offset value offset and the input data value data_in.
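As an illustration of the offset calculation unit's behaviour, the following is a minimal Python sketch of the per-pixel selection step, assuming the maximum-selection variant and assuming, as illustrated in Fig. 4A to Fig. 4K below, that the previous offset table carries the m reference offset values handed over by the preceding pixel. The function name is hypothetical and not part of the patent.

```python
def merge_offset_tables(current_table, prev_table, pick=max):
    """Merge the (m+1)-entry current offset table with the m-entry previous
    offset table entry by entry (pick = max or min per the chosen variant)."""
    m = len(prev_table)
    assert len(current_table) == m + 1
    merged = [pick(current_table[i], prev_table[i]) for i in range(m)]
    merged.append(current_table[m])   # last entry has no previous counterpart
    data_offset = merged[0]           # data offset value of the current pixel
    next_prev_table = merged[1:]      # previous offset table for the next pixel
    return data_offset, next_prev_table

# The step of Fig. 4E: current table (1,1,0,0,0), previous table (2,3,3,0)
print(merge_offset_tables([1, 1, 0, 0, 0], [2, 3, 3, 0]))
# -> (2, [3, 3, 0, 0])
```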
Referring to Fig. 3 and Fig. 4A to Fig. 4K, Fig. 3 is a simplified diagram of a 2D-to-3D image conversion process according to an embodiment, and Fig. 4A to Fig. 4K are detailed diagrams of the same process. Assume that the data queue 210 sequentially receives the input data values data_in d1, d2, ..., d8, d9, d10, d11, ... of pixels P1, P2, ..., P8, P9, P10, P11, ..., and that the conversion unit 220 sequentially receives the corresponding depth parameters depth_ref 1, 1, ..., 4, 4, 4, 4, .... The depth parameter depth_ref may be produced by a depth sensor, derived from the video signal itself, or estimated by a 2D-to-3D rendering engine; this is not limited.
Moreover, the depth parameter depth_ref may be a depth value or another parameter value obtained through an image algorithm, for example a shift offset; the depth value is used here only as an example and is not a limitation. In addition, assume that, for a current depth parameter y, the formula by which the conversion unit 220 outputs the current offset table is 1, 2, 3, ..., (y-1), y, y, 0, 0, .... In Fig. 4A, the data queue 210 receives and temporarily stores the input data value data_in (d1) of the current pixel (P1). The conversion unit 220 outputs the current offset table LUT output (1, 1, 0, 0, 0) corresponding to the current depth parameter (1) according to the current depth parameter depth_ref (1) of the current pixel (P1). Because the maximum possible depth parameter is set to 4 in this embodiment, the current offset table LUT output comprises (4+1) reference offset values.
The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (0, 0, 0, 0) entry by entry, takes the maximum of each pair, and obtains a new offset table new (1, 1, 0, 0, 0), which comprises the reference offset value (1) of the current pixel P1 and the 4 reference offset values (1, 0, 0, 0) of the 4 following pixels. The reference offset value 1 of the current pixel P1 is output as the data offset value offset (1), and the 4 reference offset values (1, 0, 0, 0) are taken as the previous offset table of the next pixel P2. According to the integer part of the data offset value offset (1), the data queue 210 moves 1 position to the left of the input data value data_in (d1) and outputs the value found there as the output data value data_out of the current pixel P1. Because the current pixel P1 is the 1st pixel, there is no data to its left, and the output data value data_out of the current pixel P1 is therefore (x). In other embodiments, these 4 reference offset values may correspond to the 4 pixels preceding the current pixel P1, or to the 2 pixels before and the 2 pixels after it.
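As a concrete illustration of the formula assumed above (1, 2, ..., (y-1), y, y, padded with zeros), the following short sketch builds the (m+1)-entry current offset table from a depth parameter y; the helper name is illustrative only.

```python
def offset_table_from_depth(y, m=4):
    """Build the (m+1)-entry offset table 1, 2, ..., (y-1), y, y, 0, 0, ...
    for a depth parameter y, with m taken as the maximum possible offset."""
    table = (list(range(1, y)) + [y, y])[:m + 1]   # 1, 2, ..., y-1, y, y
    return table + [0] * (m + 1 - len(table))      # pad with zeros to m+1 entries

# Values used in Fig. 4A to Fig. 4K (m = 4):
print(offset_table_from_depth(1))   # -> [1, 1, 0, 0, 0]
print(offset_table_from_depth(3))   # -> [1, 2, 3, 3, 0]
print(offset_table_from_depth(4))   # -> [1, 2, 3, 4, 4]
```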
In Fig. 4B, the data queue 210 receives and temporarily stores the input data value data_in (d2) of the current pixel (P2). The conversion unit 220 outputs the current offset table LUT output (1, 1, 0, 0, 0) corresponding to the current depth parameter (1) according to the current depth parameter depth_ref (1) of the current pixel (P2). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (1, 0, 0, 0) entry by entry, takes the maximum of each pair, and obtains a new offset table new (1, 1, 0, 0, 0). The reference offset value 1 of the current pixel P2 is output as the data offset value offset (1), and the 4 reference offset values (1, 0, 0, 0) are taken as the previous offset table of the next pixel P3. According to the integer part of the data offset value offset (1), the data queue 210 moves 1 position to the left of the input data value data_in (d2) and outputs the output data value data_out (d1) of the current pixel P2.
In Fig. 4C, the data queue 210 receives and temporarily stores the input data value data_in (d3) of the current pixel (P3). The conversion unit 220 outputs the current offset table LUT output (1, 1, 0, 0, 0) corresponding to the current depth parameter (1) according to the current depth parameter depth_ref (1) of the current pixel (P3). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (1, 0, 0, 0) entry by entry, takes the maximum of each pair, and obtains a new offset table new (1, 1, 0, 0, 0). The reference offset value 1 of the current pixel P3 is output as the data offset value offset (1), and the 4 reference offset values (1, 0, 0, 0) are taken as the previous offset table of the next pixel P4. According to the integer part of the data offset value offset (1), the data queue 210 moves 1 position to the left of the input data value data_in (d3) and outputs the output data value data_out (d2) of the current pixel P3.
In Fig. 4D, the data queue 210 receives and temporarily stores the input data value data_in (d4) of the current pixel (P4). The conversion unit 220 outputs the current offset table LUT output (1, 2, 3, 3, 0) corresponding to the current depth parameter (3) according to the current depth parameter depth_ref (3) of the current pixel (P4). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (1, 0, 0, 0) entry by entry, takes the maximum of each pair, and obtains a new offset table new (1, 2, 3, 3, 0). The reference offset value 1 of the current pixel P4 is output as the data offset value offset (1), and the 4 reference offset values (2, 3, 3, 0) are taken as the previous offset table of the next pixel P5. According to the integer part of the data offset value offset (1), the data queue 210 moves 1 position to the left of the input data value data_in (d4) and outputs the output data value data_out (d3) of the current pixel P4.
In Fig. 4E, the data queue 210 receives and temporarily stores the input data value data_in (d5) of the current pixel (P5). The conversion unit 220 outputs the current offset table LUT output (1, 1, 0, 0, 0) corresponding to the current depth parameter (1) according to the current depth parameter depth_ref (1) of the current pixel (P5). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (2, 3, 3, 0) entry by entry, takes the maximum of each pair, and obtains a new offset table new (2, 3, 3, 0, 0). The reference offset value 2 of the current pixel P5 is output as the data offset value offset (2), and the 4 reference offset values (3, 3, 0, 0) are taken as the previous offset table of the next pixel P6. According to the integer part of the data offset value offset (2), the data queue 210 moves 2 positions to the left of the input data value data_in (d5) and outputs the output data value data_out (d3) of the current pixel P5.
In Fig. 4F, the data queue 210 receives and temporarily stores the input data value data_in (d6) of the current pixel (P6). The conversion unit 220 outputs the current offset table LUT output (1, 1, 0, 0, 0) corresponding to the current depth parameter (1) according to the current depth parameter depth_ref (1) of the current pixel (P6). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (3, 3, 0, 0) entry by entry, takes the maximum of each pair, and obtains a new offset table new (3, 3, 0, 0, 0). The reference offset value 3 of the current pixel P6 is output as the data offset value offset (3), and the 4 reference offset values (3, 0, 0, 0) are taken as the previous offset table of the next pixel P7. According to the integer part of the data offset value offset (3), the data queue 210 moves 3 positions to the left of the input data value data_in (d6) and outputs the output data value data_out (d3) of the current pixel P6.
In Fig. 4G, the data queue 210 receives and temporarily stores the input data value data_in (d7) of the current pixel (P7). The conversion unit 220 outputs the current offset table LUT output (1, 2, 3, 4, 4) corresponding to the current depth parameter (4) according to the current depth parameter depth_ref (4) of the current pixel (P7). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (3, 0, 0, 0) entry by entry, takes the maximum of each pair, and obtains a new offset table new (3, 2, 3, 4, 4). The reference offset value 3 of the current pixel P7 is output as the data offset value offset (3), and the 4 reference offset values (2, 3, 4, 4) are taken as the previous offset table of the next pixel P8. According to the integer part of the data offset value offset (3), the data queue 210 moves 3 positions to the left of the input data value data_in (d7) and outputs the output data value data_out (d4) of the current pixel P7.
In Fig. 4H, the data queue 210 receives and temporarily stores the input data value data_in (d8) of the current pixel (P8). The conversion unit 220 outputs the current offset table LUT output (1, 2, 3, 4, 4) corresponding to the current depth parameter (4) according to the current depth parameter depth_ref (4) of the current pixel (P8). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (2, 3, 4, 4) entry by entry, takes the maximum of each pair, and obtains a new offset table new (2, 3, 4, 4, 4). The reference offset value 2 of the current pixel P8 is output as the data offset value offset (2), and the 4 reference offset values (3, 4, 4, 4) are taken as the previous offset table of the next pixel P9. According to the integer part of the data offset value offset (2), the data queue 210 moves 2 positions to the left of the input data value data_in (d8) and outputs the output data value data_out (d6) of the current pixel P8.
In Fig. 4I, the data queue 210 receives and temporarily stores the input data value data_in (d9) of the current pixel (P9). The conversion unit 220 outputs the current offset table LUT output (1, 2, 3, 4, 4) corresponding to the current depth parameter (4) according to the current depth parameter depth_ref (4) of the current pixel (P9). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (3, 4, 4, 4) entry by entry, takes the maximum of each pair, and obtains a new offset table new (3, 4, 4, 4, 4). The reference offset value 3 of the current pixel P9 is output as the data offset value offset (3), and the 4 reference offset values (4, 4, 4, 4) are taken as the previous offset table of the next pixel P10. According to the integer part of the data offset value offset (3), the data queue 210 moves 3 positions to the left of the input data value data_in (d9) and outputs the output data value data_out (d6) of the current pixel P9.
In Fig. 4J, the data queue 210 receives and temporarily stores the input data value data_in (d10) of the current pixel (P10). The conversion unit 220 outputs the current offset table LUT output (1, 2, 3, 4, 4) corresponding to the current depth parameter (4) according to the current depth parameter depth_ref (4) of the current pixel (P10). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (4, 4, 4, 4) entry by entry, takes the maximum of each pair, and obtains a new offset table new (4, 4, 4, 4, 4). The reference offset value 4 of the current pixel P10 is output as the data offset value offset (4), and the 4 reference offset values (4, 4, 4, 4) are taken as the previous offset table of the next pixel P11. According to the integer part of the data offset value offset (4), the data queue 210 moves 4 positions to the left of the input data value data_in (d10) and outputs the output data value data_out (d6) of the current pixel P10.
In Fig. 4K, the data queue 210 receives and temporarily stores the input data value data_in (d11) of the current pixel (P11). The conversion unit 220 outputs the current offset table LUT output (1, 2, 3, 4, 4) corresponding to the current depth parameter (4) according to the current depth parameter depth_ref (4) of the current pixel (P11). The offset calculation unit 230 compares the current offset table LUT output with the previous offset table prev (4, 4, 4, 4) entry by entry, takes the maximum of each pair, and obtains a new offset table new (4, 4, 4, 4, 4). The reference offset value 4 of the current pixel P11 is output as the data offset value offset (4), and the 4 reference offset values (4, 4, 4, 4) are taken as the previous offset table of the next pixel P12. According to the integer part of the data offset value offset (4), the data queue 210 moves 4 positions to the left of the input data value data_in (d11) and outputs the output data value data_out (d7) of the current pixel P11.
As can be seen from Fig. 3 and Fig. 4A to Fig. 4K, the 2D-to-3D image conversion device of this embodiment does not produce missing output data values, so no subsequent hole-filling process is needed for image correction. At the same time, Fig. 3 and Fig. 4A to Fig. 4K also show that no data crossing occurs. In addition, the conversion unit 220 may output the current offset table according to another formula; for example, when the current depth parameter is y, the formula may be y/(y+1), 2y/(y+1), 3y/(y+1), ..., (y-1)×y/(y+1), y×y/(y+1), 0, 0, ....
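The whole per-pixel flow of Fig. 3 and Fig. 4A to Fig. 4K can be reproduced with the following Python sketch. It is an illustrative model of the embodiment described above (maximum selection, m = 4, and the first formula), not the hardware implementation, and all names in it are hypothetical.

```python
def convert_2d_to_3d_row(data_in, depth_ref, m=4):
    """Per-pixel model of the embodiment of Fig. 2: look up the current offset
    table from the depth parameter, merge it with the previous offset table by
    taking the element-wise maximum, output the first entry as the data offset,
    and fetch the value that many positions to the left in the data queue."""
    def offset_table(y):               # formula 1, 2, ..., (y-1), y, y, 0, 0, ...
        t = (list(range(1, y)) + [y, y])[:m + 1]
        return t + [0] * (m + 1 - len(t))

    prev = [0] * m                     # previous offset table, initially all zeros
    data_out = []
    for i, depth in enumerate(depth_ref):
        cur = offset_table(depth)
        new = [max(cur[j], prev[j]) for j in range(m)] + [cur[m]]
        offset = new[0]                # data offset value of the current pixel
        prev = new[1:]                 # carried over to the next pixel
        src = i - int(offset)          # move 'offset' positions to the left
        data_out.append(data_in[src] if src >= 0 else None)   # None = no data (x)
    return data_out

# Input sequence of Fig. 3: pixels P1..P11 with depth parameters 1,1,1,3,1,1,4,4,4,4,4
data_in = ["d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10", "d11"]
depth_ref = [1, 1, 1, 3, 1, 1, 4, 4, 4, 4, 4]
print(convert_2d_to_3d_row(data_in, depth_ref))
# -> [None, 'd1', 'd2', 'd3', 'd3', 'd3', 'd4', 'd6', 'd6', 'd6', 'd7']
```

The printed result matches the output data values d1, d2, d3, d3, d3, d4, d6, d6, d6, d7 shown for pixels P2 to P11 in Fig. 4B to Fig. 4K, with None standing for the missing value (x) of pixel P1.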
Moreover, the data offset value may be computed to fractional precision to make the 3D stereoscopic image smoother. Please refer to Fig. 5, which illustrates a block diagram of a 2D-to-3D image conversion device according to another embodiment. Like the 2D-to-3D image conversion device 200, the 2D-to-3D image conversion device 500 comprises a data queue 510, a conversion unit 520 and an offset calculation unit 530; in addition, it further comprises an interpolation unit 540. The interpolation unit 540 receives an output data value data_out and a subsequent data value data_outnext from the data queue 510, and interpolates between the output data value data_out and the subsequent data value data_outnext according to the fractional part offset_frac of the data offset value to obtain an interpolated data value data_out'. The interpolation in Fig. 5 may use 2-point linear interpolation or S-curve interpolation; this is not limited.
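A minimal sketch of the fractional-part interpolation performed by the interpolation unit 540 is given below, using the 2-point linear variant; the exact weighting direction is an assumption, since the embodiment only names the interpolation methods that may be used, and the function name is illustrative.

```python
def interpolate_output(data_out, data_out_next, offset_frac):
    """Two-point linear interpolation between the output data value and the
    subsequent data value, weighted by the fractional part of the data offset.
    The weighting shown here is one plausible choice, not mandated by the patent."""
    return (1.0 - offset_frac) * data_out + offset_frac * data_out_next

# A data offset of 2.25: the integer part 2 selects data_out from the queue,
# and the fractional part 0.25 blends it with the subsequent data value.
print(interpolate_output(100.0, 120.0, 0.25))   # -> 105.0
```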
The present invention also provides a 2D-to-3D image conversion method; please refer to the flow chart of Fig. 6. After the method starts, step S600 receives and temporarily stores an input data value corresponding to a current pixel. Step S610 then generates a current offset table corresponding to a current depth parameter according to the current depth parameter of the current pixel, the current offset table comprising (m+1) reference offset values corresponding to the current pixel and its m adjacent pixels, m being a positive integer. Step S620 then selects one of the reference offset values corresponding to the current pixel in the current offset table and in a plurality of previous offset tables as the data offset value of the current pixel. Step S630 selects and outputs an output data value of the current pixel according to the integer part of the data offset value and the input data value, which completes the 2D-to-3D image conversion, and the operation ends.
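For reference, steps S600 to S630 can also be modelled one pixel at a time with a small stateful sketch; the class name is hypothetical, and the maximum-selection variant with the first offset-table formula is assumed.

```python
class ImageRowConverter:
    """Stateful model of steps S600-S630 for one scan line, one pixel per call."""
    def __init__(self, m=4):
        self.m = m
        self.prev = [0] * m            # previous offset table
        self.queue = []                # data queue of received input data values

    def step(self, data_in, depth):
        # S600: receive and temporarily store the input data value
        self.queue.append(data_in)
        # S610: current offset table from the current depth parameter
        t = (list(range(1, depth)) + [depth, depth])[:self.m + 1]
        cur = t + [0] * (self.m + 1 - len(t))
        # S620: select the data offset value (maximum-selection variant)
        new = [max(cur[j], self.prev[j]) for j in range(self.m)] + [cur[self.m]]
        offset, self.prev = new[0], new[1:]
        # S630: output the data value 'offset' positions to the left, if any
        src = len(self.queue) - 1 - int(offset)
        return self.queue[src] if src >= 0 else None
```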
The principles of the above 2D-to-3D image conversion method have been described in detail with reference to Fig. 2 to Fig. 4K and the related paragraphs, and the related operations, such as how the current offset table is generated and how m is chosen, can also be found in the foregoing embodiments; they are therefore not repeated here.
The 2D-to-3D image conversion device and method disclosed in the above embodiments of the present invention use simple depth-image-based rendering and do not produce missing output data values, so a 2D image can be converted into a 3D image without an extra hole-filling process. In addition, data crossing can also be avoided through a suitable conversion design.
In summary, although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Those with ordinary skill in the technical field of the present invention may make various changes and modifications without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention shall be defined by the appended claims.

Claims (20)

1. A 2D-to-3D image conversion device, comprising:
a data queue, for receiving and temporarily storing an input data value corresponding to a current pixel;
a conversion unit, for outputting a current offset table corresponding to a current depth parameter according to the current depth parameter of the current pixel, the current offset table comprising (m+1) reference offset values corresponding to the current pixel and its m adjacent pixels, m being a positive integer; and
an offset calculation unit, for selecting one of the reference offset values corresponding to the current pixel in the current offset table and in a plurality of previous offset tables as a data offset value of the current pixel;
wherein the data queue selects and outputs an output data value of the current pixel according to the integer part of the data offset value and the input data value.
2. The 2D-to-3D image conversion device according to claim 1, wherein the offset calculation unit selects the maximum of the reference offset values as the data offset value of the current pixel.
3. The 2D-to-3D image conversion device according to claim 1, wherein the offset calculation unit selects the minimum of the reference offset values as the data offset value of the current pixel.
4. The 2D-to-3D image conversion device according to claim 1, wherein the m pixels follow the current pixel.
5. The 2D-to-3D image conversion device according to claim 1, wherein the m pixels precede the current pixel.
6. The 2D-to-3D image conversion device according to claim 1, further comprising:
an interpolation unit, for receiving the output data value and a subsequent data value from the data queue, and interpolating between the output data value and the subsequent data value according to the fractional part of the data offset value to obtain an interpolated data value.
7. The 2D-to-3D image conversion device according to claim 1, wherein the conversion unit obtains the current offset table from an offset lookup table according to the current depth parameter.
8. The 2D-to-3D image conversion device according to claim 1, wherein the conversion unit substitutes the current depth parameter into a formula to obtain the current offset table, and when the current depth parameter is y, the formula is 1, 2, 3, ..., (y-1), y, y, 0, 0, ..., wherein the terms after 1, 2, 3, ..., (y-1), y, y are filled with 0 so that the current offset table contains (m+1) reference offset values.
9. The 2D-to-3D image conversion device according to claim 1, wherein the conversion unit substitutes the current depth parameter into a formula to obtain the current offset table, and when the current depth parameter is y, the formula is y/(y+1), 2y/(y+1), 3y/(y+1), ..., (y-1)×y/(y+1), y×y/(y+1), 0, 0, ..., wherein the terms after y/(y+1), 2y/(y+1), 3y/(y+1), ..., (y-1)×y/(y+1), y×y/(y+1) are filled with 0 so that the current offset table contains (m+1) reference offset values.
10. The 2D-to-3D image conversion device according to claim 1, wherein m is the maximum possible offset.
11. A 2D-to-3D image conversion method, comprising:
receiving and temporarily storing an input data value corresponding to a current pixel;
generating a current offset table corresponding to a current depth parameter according to the current depth parameter of the current pixel, the current offset table comprising (m+1) reference offset values corresponding to the current pixel and its m adjacent pixels, m being a positive integer;
selecting one of the reference offset values corresponding to the current pixel in the current offset table and in a plurality of previous offset tables as a data offset value of the current pixel; and
selecting and outputting an output data value of the current pixel according to the integer part of the data offset value and the input data value.
12. The 2D-to-3D image conversion method according to claim 11, wherein the step of selecting one of the reference offset values as the data offset value of the current pixel selects the maximum of the reference offset values as the data offset value of the current pixel.
13. The 2D-to-3D image conversion method according to claim 11, wherein the step of selecting one of the reference offset values as the data offset value of the current pixel selects the minimum of the reference offset values as the data offset value of the current pixel.
14. The 2D-to-3D image conversion method according to claim 11, wherein the m pixels follow the current pixel.
15. The 2D-to-3D image conversion method according to claim 11, wherein the m pixels precede the current pixel.
16. The 2D-to-3D image conversion method according to claim 11, further comprising:
receiving the output data value and a subsequent data value from a data queue, and interpolating between the output data value and the subsequent data value according to the fractional part of the data offset value to obtain an interpolated data value.
17. The 2D-to-3D image conversion method according to claim 11, further comprising:
obtaining the current offset table from an offset lookup table according to the current depth parameter.
18. The 2D-to-3D image conversion method according to claim 11, further comprising:
substituting the current depth parameter into a formula to obtain the current offset table;
wherein, when the current depth parameter is y, the formula is 1, 2, 3, ..., (y-1), y, y, 0, 0, ..., and the terms after 1, 2, 3, ..., (y-1), y, y are filled with 0 so that the current offset table contains (m+1) reference offset values.
19. The 2D-to-3D image conversion method according to claim 11, further comprising:
substituting the current depth parameter into a formula to obtain the current offset table;
wherein, when the current depth parameter is y, the formula is y/(y+1), 2y/(y+1), 3y/(y+1), ..., (y-1)×y/(y+1), y×y/(y+1), 0, 0, ..., and the terms after y/(y+1), 2y/(y+1), 3y/(y+1), ..., (y-1)×y/(y+1), y×y/(y+1) are filled with 0 so that the current offset table contains (m+1) reference offset values.
20. The 2D-to-3D image conversion method according to claim 11, wherein m is the maximum possible offset.
CN201110214453.0A 2011-07-28 2011-07-28 2D (two-dimensional)-3D (three-dimensional) image conversion device and method thereof Active CN102905143B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110214453.0A CN102905143B (en) 2011-07-28 2011-07-28 2D (two-dimensional)-3D (three-dimensional) image conversion device and method thereof


Publications (2)

Publication Number Publication Date
CN102905143A CN102905143A (en) 2013-01-30
CN102905143B true CN102905143B (en) 2015-04-15

Family

ID=47577156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110214453.0A Active CN102905143B (en) 2011-07-28 2011-07-28 2D (two-dimensional)-3D (three-dimensional) image conversion device and method thereof

Country Status (1)

Country Link
CN (1) CN102905143B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809137B (en) * 2014-01-28 2018-07-13 上海尚恩华科网络科技股份有限公司 A kind of the three-dimensional web page production method and system of the two dimension page


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070279415A1 (en) * 2006-06-01 2007-12-06 Steve Sullivan 2D to 3D image conversion
CN102210156A (en) * 2008-11-18 2011-10-05 松下电器产业株式会社 Reproduction device, reproduction method, and program for stereoscopic reproduction
CN101640809A (en) * 2009-08-17 2010-02-03 浙江大学 Depth extraction method of merging motion information and geometric information
US20110074784A1 (en) * 2009-09-30 2011-03-31 Disney Enterprises, Inc Gradient modeling toolkit for sculpting stereoscopic depth models for converting 2-d images into stereoscopic 3-d images
US20110158504A1 (en) * 2009-12-31 2011-06-30 Disney Enterprises, Inc. Apparatus and method for indicating depth of one or more pixels of a stereoscopic 3-d image comprised from a plurality of 2-d layers

Also Published As

Publication number Publication date
CN102905143A (en) 2013-01-30


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant