WO2010070323A1 - Crime scene mark identification system - Google Patents

Crime scene mark identification system

Info

Publication number
WO2010070323A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
shape
feature
images
target image
Prior art date
Application number
PCT/GB2009/051687
Other languages
French (fr)
Inventor
Maria Pavlou
Nigel Allinson
Original Assignee
The University Of Sheffield
Priority date
Filing date
Publication date
Application filed by The University Of Sheffield
Publication of WO2010070323A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5854Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship


Abstract

A system for processing, storing and comparing images of crime scene marks comprises a) a computer; b) a display connected to the computer to display at a first location a target image comprising image-features; c) a database accessible by the computer, comprising recorded images having pre-determined shape-features; and d) the computer being adapted to perform the steps of: i. displaying iconic representations at a second location on the display; ii. selecting an iconic representation and positioning it on an image-feature of the target image; iii. manipulating and altering the iconic representation so as to fit it to the image-feature; iv. recording the manipulated iconic representation as a first shape-feature of the target image; v. searching the database for recorded images comprising said first shape-feature; vi. displaying some or all the recorded images found in the search on the display at a third location of the display; and vii. selecting a said displayed recorded image to enable visual comparison with the target image.

Description

Crime Scene Mark Identification System
This invention relates to a system for identification of marks left at the scene of a crime by the perpetrators or others involved. Such marks come in many forms, such as glove and finger prints, tyre marks, shoe imprints and many others. The present invention is not concerned with material samples left at crime scenes, but only with visual marks capable of being reduced to an image of the mark.
Images may be obtained by multiple known techniques and none of these have any relevance to the present invention. Moreover, the invention is not concerned with the precise form of the image provided that it is capable of electronic storage in association with a database of a computer and that it is capable of manipulation as described further below.
BACKGROUND
It is well known that shoes leave distinctive imprints at crime scenes and that these imprints have substantial evidential value. That value lies both in detecting criminals and in convicting them, although, because shoes can be taken off and worn by other people, their value in providing incontrovertible evidence is generally not the same as fingerprint or DNA evidence. However, in identifying suspects for further investigation, the value of shoeprint evidence should not be underestimated. The same could be said of any marks, such as tyre marks or glove marks, if they could be identified.
As mentioned above, there are numerous known methods and techniques for analysing shoeprint images and providing key data in relation to them that can be stored and subsequently correlated against data taken from other imprints in order to provide a possible match. EP-A-877990, EP-A-1125244, GB-A-2432029, CN-A-1936922 and CN-A-1776717 all disclose methods of analysing and identifying shoe prints. Similar identification problems exist with fingerprint and retina identification, and US-A-20040114785 and GB-A-1593001 deal with them.
Computers are extremely useful in processing complex information, and in processing large quantities of information. However, they are not yet particularly good at recognising vague or incomplete images. Here the human brain still reigns supreme. Moreover, in the field of forensic crime investigation, while the quantity of information (in terms of images collected from crime scenes) is large, it is neither so immense that human intervention is out of the question, nor so financially significant as to justify the computing power and sophistication required to recognise images reliably. In any event, such power and sophistication are not, at present, known or available.
Consequently, it is an object of the present invention to provide crime scene investigators with a tool that enables them to classify marks and subsequently to identify, more easily and conveniently, unknown marks.
BRIEF SUMMARY OF THE DISCLOSURE
In accordance with the present invention there is provided a system comprising: a) a computer; b) a display connected to the computer to display at a first location a target image comprising image-features; c) a database accessible by the computer, comprising recorded images having pre-determined shape-features; and d) the computer being adapted to perform the steps of: i. displaying iconic representations at a second location on the display; ii. selecting an iconic representation and positioning it on an image-feature of the target image; iii. fitting the iconic representation to the image-feature; iv. recording the fitted iconic representation as a first shape-feature of the target image; v. searching the database for recorded images comprising said first shape-feature; vi. displaying some or all the recorded images found in the search on the display at a third location of the display; and vii. selecting a said displayed recorded image to enable visual comparison with the target image.
Preferably, the step of fitting includes manipulating and altering the iconic representation so as to fit the iconic representation to the image-feature.
Preferably, said computer is further adapted to perform the steps of: viii. repeating steps i. to iv. with further iconic representations on further image-features to develop further shape-features of the target image; and ix. searching the database for recorded images comprising the combination of said shape-feature and further shape-feature or shape-features to reduce the number of recorded images to be displayed.
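By way of illustration only, the following minimal Python sketch shows one way such a combined shape-feature search might be realised in software; the data structures, the helper names and the matching tolerance are assumptions of this sketch, not details given in the patent.

```python
# Hypothetical sketch of a combined shape-feature search (step ix. above).
from dataclasses import dataclass, field

@dataclass
class ShapeFeature:
    icon: str                                    # identity of the iconic representation, e.g. "circle"
    params: dict = field(default_factory=dict)   # defining parameters, e.g. {"radius": 12.0}

@dataclass
class RecordedImage:
    image_id: str
    shape_features: list                         # list of ShapeFeature recorded for this image

def matches(recorded: ShapeFeature, target: ShapeFeature, tol: float = 0.1) -> bool:
    """A recorded feature matches a target feature if it uses the same icon and
    every shared parameter agrees to within a relative tolerance."""
    if recorded.icon != target.icon:
        return False
    for name, value in target.params.items():
        if name in recorded.params and abs(recorded.params[name] - value) > tol * max(abs(value), 1e-9):
            return False
    return True

def search(database: list, target_features: list) -> list:
    """Return recorded images containing *all* of the target's shape-features:
    each additional feature narrows the candidate set to be displayed."""
    return [img for img in database
            if all(any(matches(rf, tf) for rf in img.shape_features)
                   for tf in target_features)]
```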
Said iconic representation is preferably a pictorial representation of an image-feature that might be present in an image of the type to which the present invention relates. In terms of shoes, these may take several forms. They may comprise a simple geometric outline, such as a circle, oval, square, triangle etc. They may comprise a more complex geometric shape such as an annulus, or zig-zag line. They may comprise a texture, such as stippled, lumpy, smooth. They may comprise a pattern, such as closely spaced wavy lines, or points etc. In the case of tyre treads, simpler geometric shapes may be employed and more variations of zig-zag lines, as well as repeating patterns. With gloves, there may also be the same icons, but additionally weave patterns are likely to feature.
Each icon is fitted to an image-feature and, associated with the identity of the icon, there are parameters that define more precisely the icon, and hence the image-feature in question. In the case of geometric shapes, these may simply be dimensions. For example, with triangles, the parameters may be the dimensions of each side; whereas with circles, just the radius is required to define it. Similarly with a square, just the length of one side is needed, whereas with a rectangle, two sides are needed. With a diamond, an angle is needed. However, with modern computers it is easy to record the length of each side and the angle between adjacent sides so that any four-sided figure can be defined precisely. Indeed, it is necessary to be able to record the minimum information needed to thoroughly define any image-feature, once its iconic representation has been selected. With zig-zags, for example, the angles between the lines, their lengths and their widths are necessary. With patterns, more information is needed, such as the repeat length, or separation between features. With weaves, the necessary information, apart from matching the kind of weave, is the thread width, as well as the weave pitch. Thus, each shape-feature preferably not only identifies the iconic representation that characterises it, but also the parameters that define it. It is apparent from the foregoing that a shape-feature to be recorded in respect of an image-feature is preferably, firstly the identity of the iconic representation, and then the parameters that define it precisely.
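As an illustrative sketch only (the groupings and names below are assumptions drawn from the examples above, not an exhaustive list from the disclosure), the minimal parameter set recorded for each kind of iconic representation might be tabulated as follows:

```python
# Hypothetical mapping of icon identity to the parameters that define it.
ICON_PARAMETERS = {
    "circle":    ["radius"],
    "square":    ["side"],
    "rectangle": ["length", "breadth"],
    "triangle":  ["side_a", "side_b", "side_c"],
    "diamond":   ["side", "angle"],
    "zigzag":    ["line_length", "line_width", "angle_between_lines"],
    "pattern":   ["repeat_length", "separation"],
    "weave":     ["weave_kind", "thread_width", "weave_pitch"],
}

def validate_shape_feature(icon: str, params: dict) -> bool:
    """True if all parameters needed to define the chosen icon are present."""
    return icon in ICON_PARAMETERS and all(p in params for p in ICON_PARAMETERS[icon])
```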
Preferably, especially when one or more further shape-features are recorded, said first shape-feature of the target image further comprises information detailing the orientation of the first shape-feature of the target image with respect to further shape-features, and/or with respect to a standard position system of images of the same type as the target image, and the shape-features of said recorded images also comprise such information.
Indeed, preferably, the computer is adapted to perform the additional steps of: x. enabling the target image to be characterised by basic type; and xi. enabling the target image to be manipulated to fit a standard template for said basic type so that image-features can be matched to specific areas of the template comprising said standard position system.
A shape-feature of an image may therefore comprise: the nature of the iconic representation; at least one parameter that defines one or more relative dimensions of the iconic representation; and position information.
Preferably, said shape-feature further comprises orientation information.
Said template may be the outline of a left or right shoe. Said standard position system, in respect of shoe imprints, may comprise a grid having three columns and at least six rows, wherein the columns represent left side, right side and middle of the shoe outline, and wherein at least two rows represent a heel end of the shoe outline, at least one row an instep region, and at least three rows a sole portion. Preferably, there are eight rows, with three representing front, middle and rear of the heel region, and four representing the sole region. By means of the grid, and relating shape-features to it, shoes of different sizes will be comparable and a match of at least the type of shoe can possibly be achieved when shape-features are merely scaled (pro-rated in terms of size) between different sizes of shoe. Thus, said position and/or orientation information may be with respect to said standard position system, preferably with respect to said grid associated with said template.
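As a minimal sketch of how position information might be related to such a grid (the normalised coordinate convention and the cell labels below are assumptions of this sketch, loosely following the lettered columns and numbered rows of Figure 4b described later):

```python
# Hypothetical mapping from a normalised template position to a grid cell.
# Assumed convention: (0, 0) is the toe-side left corner of the fitted template
# and (1, 1) the heel-side right corner; this convention is mine, not the patent's.
ROWS = 8          # e.g. rows 0-3 sole, row 4 instep, rows 5-7 heel
COLUMNS = "abc"   # a and c are the side edges, b the centre

def grid_cell(x: float, y: float) -> str:
    """Map a normalised template position to its grid cell, e.g. (0.9, 0.2) -> 'c1'."""
    col = COLUMNS[min(int(x * len(COLUMNS)), len(COLUMNS) - 1)]
    row = min(int(y * ROWS), ROWS - 1)
    return f"{col}{row}"
```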
Preferably, said system comprises a touch screen, and said program permits an operator of the system to touch the screen to effect said selection of an iconic representation and to drag it into registration with the image-feature of the target image. The computer may further permit the iconic representation to be manipulated and/or altered whilst in registration with the image-feature to facilitate fitting of the iconic representation to the image-feature, and thereby defining said parameters.
Preferably, said computer is adapted to enable a displayed recorded image to be dragged and overlaid on the target image to facilitate registration of the two images and permit determination of whether the two images have a common source. Said common source may comprise type-identity, wherein the type of object that created the two images can be determined to be of the same type. For example, in the case of shoe imprints, type-identity might comprise the make and model of the shoe. Alternatively, said common source may comprise unique-identity, whereby the registration between the images is not limited to mere type-identity but includes correspondence of shape-features that are evidently unique to the object that resulted in both the matched recorded image and target image.
Preferably, the computer is adapted to perform the steps of displaying only type-images of said recorded images, type-images being images of known types of product, whereby type-identity only may be established of the target image.
If type-identity is established, preferably the computer is adapted to perform the steps of displaying unique-images of said recorded images, unique-images being images of particular product imprints, wherein said unique-images either share the same type-identity as the target image or have no type-identity but share the same shape-features as the target image. Such particular product imprints may comprise imprints left at crime scenes or imprints of suspects or their property, eg shoes.
If type-identity is not established, preferably the computer is adapted to perform the steps of displaying unique-images of said recorded images that also have no type-identity but share the same shape-features as the target image. The present invention thereby offers the best of all worlds in the sense of maximising the capabilities of the human/computer combination. Thus the human is best at recognising image-features of a target image and of recognising a match between a recorded image and a target image. On the other hand, a computer is best at recording image-features and searching for images that share selected image-features. The present invention permits an interaction between human and computer that first of all employs the human capacity to identify the presence of an imprint in an image of the imprint (for example a photograph of the imprint) and image-features of the imprint including their geometric limits, and human input of those features into a database. The computer stores the recorded images and shape-features and is fast at identifying other images that share shape-features of the target image. Then, the human element is employed again to enable comparison of identified images with the target image to find a match. If there are many recorded images that share the shape-features of the target image, then either the human operator can sieve through them all in an attempt to find an exact match, or he/she can enter further shape-features of the target image to eliminate some of the computer-matched images thereby facilitating the human task of finding a match.
It therefore follows that an element of tolerance is required with respect to shape-features. The operator matches and fits iconic representations to image-features as precisely as possible. Preferably, however, the computer ranks recorded images according to the precision with which the shape-features of the recorded images match the shape-features of the target, whereby approximate matches can also be retrieved in the event that a truly "matching" recorded image has differences in shape-features that a human can appreciate do not, in particular cases, mean a different image. Such situations can occur when an image is distorted for some reason, or the imprint is distorted or possibly smudged.
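One possible way of expressing such tolerant, ranked retrieval in code is sketched below, reusing the hypothetical ShapeFeature structure from the earlier sketch; the distance measure is an assumption made for illustration, not a method prescribed by the patent.

```python
# Hypothetical ranking of recorded images by closeness of shape-feature fit.
def feature_distance(recorded, target) -> float:
    """0.0 for a perfect parameter match, larger for poorer fits; icons must agree."""
    if recorded.icon != target.icon:
        return float("inf")
    diffs = [abs(recorded.params[k] - v) / max(abs(v), 1e-9)
             for k, v in target.params.items() if k in recorded.params]
    return sum(diffs) / len(diffs) if diffs else 0.0

def rank_images(database, target_features):
    """Order recorded images by the average distance of their best-fitting feature
    to each target feature, smallest first, so near-misses caused by distortion or
    smudging are still retrieved rather than discarded."""
    def score(img):
        return sum(min((feature_distance(rf, tf) for rf in img.shape_features),
                       default=float("inf"))
                   for tf in target_features) / max(len(target_features), 1)
    return sorted(database, key=score)
```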
Indeed, image distortion can occur simply because a camera recording an image is not perpendicular to the image. This is often deliberate in order to better detect the imprint using different angles of light reflection illuminating the imprint. However, it might also be impossible to take an image from a perpendicular direction.
In accordance with a preferred embodiment, the system further comprises a ruler having at least two elements whose ends join at a corner of the ruler and define between them an acute or obtuse angle, indicia being provided on a surface of the ruler near the corner and each end of each element remote from the corner; and the computer being adapted to identify the indicia of the ruler when included in the target image and adjust the target image to normalise the aspect of the target image.
"Aspect" of a target image is the apparent (or actual) direction of view of the image by the recording device (camera). A normal aspect produces an image of an imprint that matches the imprint in respect of all its shape-features, other than in respect of dimensions, perhaps. The present invention usefully adjusts images to a normal aspect, but this is not essential. What is essential is that the aspect is known and is the same (ie normalised) when the computer compares images and their shape-features, since these will vary with the aspect of the image. Preferably, the angle between the elements of the ruler is a right-angle. Preferably, the indicia include concentric circles. Preferably, the aspects of the target image and recorded images are normal. This is preferred so that the image is actually the same as the object that left the mark in the first place. Preferably, the ruler is U-shaped having a third element connected to the end of one of the other two elements that is remote from said corner. Said indicia may comprise one or more contrast bands parallel the lengths of said elements and defining precise corners and ends.
Where said image is on a non-planar surface, said ruler may be flexible and be adhered to said surface adjacent the imprint to be imaged, said indicia comprising length markings along the length of the elements whereby fore-shortening at least in one direction caused by said non-planar surface may be approximated and accommodated by the fore-shortening of the visible distance between said length markings in said image.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention are further described hereinafter, by way of example, with reference to the accompanying drawings, in which:
Figure 1 is a schematic illustration of a system in accordance with the invention;
Figure 2a is a perspective view of capture of an imprint image, and Figure 2b is a plan view of a preferred ruler;
Figure 3 is a flow diagram of process steps in implementing the system of the present invention; and Figure 4a is a sample of iconic representations for use with car tyre print images, Figure 4b shows a grid of a shoe imprint, and Figure 4c shows a device for measuring weave of a glove print.
DETAILED DESCRIPTION
In Figure 1 a system 10 is shown in accordance with one embodiment of the present invention. The system 10 comprises a screen display 12 connected to a computer 140 that is itself provided with a database 18. Ideally, the screen 12 is a touch-screen whereby an operator can implement commands to a program running on the computer by touching different areas of the screen according to the displays on it.
The screen 12 may be divided into 3 areas A, B, C. In screen area A is displayed an image 13 of an imprint 14 and ruler 16. The ruler 16 will have been placed to one side of the imprint 14 prior to the image being captured by the device (eg a camera 40 - see Figure 2). The purpose of the ruler is to permit the image 13 to be "normalised". The simplest form of normalisation is to render the image of the imprint 14 such as it appears when viewed from a direction normal to the surface 15 in which the imprint is formed. However, frequently a photograph will be taken from either a slightly different angle (by mistake) or a markedly different angle, so as to take advantage of certain light conditions or simply because a "normal" orientation is not possible. An image might be more visible taken from a particular direction with respect to the plane of the surface and the incident light.
Thus, the ruler 16 has two (or preferably three) elements or limbs 20, 22 connected end to end at a corner 24 and between which there is an angle β. The angle β between the limbs 20,22 is preferably a right angle. At the corner 24 and near each end of the elements 20,22 remote from the corner 24 are formed indicia 26,28,30 that are thereby in a triangular formation. The indicia 26-30 are preferably concentric circles.
Figure 2a illustrates a preferred ruler 16' having three elements 20', 22' and 23 joined end to end at right angles and having high contrast (eg black on white) bands 25 running around the entire periphery of the ruler defining precise location points 27 at corners and ends that enable aspect to be determined. The U-shape facilitates normalisation when the camera 40 is non-perpendicularly inclined with respect to the imprint 14 in two directions (and not just one direction as shown in Figure 2a). The computer 140 is adapted to analyse the image of the ruler 16 and to adjust the entire image 13 to correct distortions in the image caused by the image taking process. Indeed, the corrections possible are not limited to merely the direction in which the camera 40 is pointed with respect to the imprint 14 and the ruler 16. Indeed, as shown in Figure 2a, the camera is pointed at a heel portion 14a of the imprint and is at an angle α with respect thereto. However, the distance to the heel portion 14a is different to other distances to different parts of the image and to the different limbs of the ruler 16. Accordingly, there will be distortions in the image 13 caused by these factors and the focal length of the lens of the camera 40. These anomalies can, to some extent, be corrected by a computer knowing the dimensions of the ruler 16 and the concentricities and positions of the indicia 26-30.
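As a minimal sketch, assuming OpenCV is available and that the four corner points 27 of the U-shaped ruler have already been located in the photograph (by hand or by a detector), this kind of aspect correction could be performed with a perspective transform; the ruler dimensions and pixel-per-millimetre scale below are placeholder assumptions, and a real system would warp the whole imprint area rather than only the ruler region shown here.

```python
# Hypothetical aspect normalisation from the known geometry of the ruler.
import cv2
import numpy as np

def normalise_aspect(photo, ruler_corners_px, ruler_size_mm=(300.0, 200.0), px_per_mm=2.0):
    """Warp the photograph so the ruler (and the surface carrying the imprint)
    appears as if viewed perpendicularly. `ruler_corners_px` are the four detected
    corner points, ordered to correspond with the ruler's known rectangle."""
    w = int(ruler_size_mm[0] * px_per_mm)
    h = int(ruler_size_mm[1] * px_per_mm)
    src = np.float32(ruler_corners_px)                  # corners as seen in the oblique photo
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])  # corners in the normalised, top-down view
    homography = cv2.getPerspectiveTransform(src, dst)
    # Output size here covers only the ruler rectangle; in practice a larger
    # canvas would be used so the imprint itself is retained after warping.
    return cv2.warpPerspective(photo, homography, (w, h))
```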
This processing results in a consistent image of the imprint 14 so that it is most closely comparable to the object that left the imprint in the surface 15, and will be comparable to other imprints of the same or different objects in or on other surfaces when they have likewise been treated.
In any event, the computer 140 stores the image 13 in the database 18. However, the image is first characterised for computer-searching by an operator. This begins by fitting the image to a standard template so that its image-features will have a consistent location with respect to the template compared with recorded images of the same type of imprint.
Figure 4b shows a shoe outline 142 with a standard grid pattern 146. The image of the imprint 14 is resized and oriented to fit in the outline 142 so that each of its image-features 44 is in a specific location of the template.
Returning to Figure 1, the operator identifies an image-feature 44 of the imprint 14 and selects one of several iconic representations 42 displayed in a section B of the display 12; one that corresponds as closely to the image-feature as possible. The icons 42 presented for selection depend on the object being analysed. Basic geometric shapes such as circles, squares, diamonds etc will apply to all objects, but tyres tend to have zig-zag lines, shoes to have trapezia, and knitted gloves to have weave patterns. Shoes also have textures and repeating patterns. The computer presents an appropriate selection of iconic representations according to the basic identity of the imprint 14 (ie tyre, shoe, glove etc).
The selected icon 42 is dragged across onto the image 13 and fitted to the image-feature 44. Preferably, the computer permits the iconic representation to be rotated (i.e. manipulated) and adjusted in size to be in close correspondence with the image-feature 44. When the user is satisfied that the iconic representation best fits the image-feature, the user notifies the computer (for example, by touching region 46 of the screen 12 to instruct the computer to save the image).
When the image is saved, not only is it saved as the image 13, but information concerning the shape-feature that corresponds with the image-feature 44 is also attached to it. Such a shape-feature will comprise the iconic representation and certain parameters that define it, such as its dimensions and its position in the image 13.
Recording of a single image-feature may not be particularly informative, and so the process can be repeated with many image-features 44 of the imprint 14.
In that respect, it is necessary to make clear that there are two processes that are intended to be employed using the present system. One purpose is to enter an image in the database so that subsequent images of different imprints can be compared against it. In that event, any one of its features may be an image-feature selected by a user in due course in relation to attempts to identify that image. Consequently, in this event, all image-features 44 of the imprint 14 need to be categorised, according to the iconic representations available.
However, when the system is employed for its second purpose, to identify an unknown target image with respect to images already stored and recorded on the database 18, the user might then only select one or more distinctive image-features. By itself, a single shape-feature may have only limited value in characterising a particular imprint object and it is likely there will be many images recorded in the database that have the same shape-feature.
At a minimum, a shape-feature is simply the shape of the image-feature. As used herein, "dimension", unless the context requires differently, refers to relative dimension (that defines shape), rather than absolute dimension. The availability of the latter, of course, depends on the size of the image. While absolute dimensions (of the object that formed the imprint and its image 13) are capable of being determined from the image 13 (for example, a ruler 16 may be provided with scales 23), absolute dimensions are not always necessary when it is shape and relative dimensions that are used.
However, in addition to the dimensions (absolute or otherwise) of a particular image-feature is the orientation of the image-feature with respect to other image-features of the image. Accordingly, the system of the present invention permits further iconic representations 42 to be selected and fitted to other image-features of the imprint 14. Thus, the image 13 is not merely characterised by the shape and size of a particular shape-feature but also by reference to its orientation with respect to another shape-feature, or at least with respect to the standard template.
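A small sketch of one way such relative orientation might be captured (the coordinate and angle conventions are assumptions of this sketch, not part of the disclosure):

```python
# Hypothetical relative orientation of one feature with respect to another,
# measured against the template's long (toe-to-heel) axis.
import math

def relative_bearing(feature_a_xy, feature_b_xy) -> float:
    """Angle in degrees of feature B as seen from feature A on the fitted template."""
    dx = feature_b_xy[0] - feature_a_xy[0]
    dy = feature_b_xy[1] - feature_a_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0
```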
Thus, in respect of the first process mentioned above, and in building the database 18, images 13 may be of known shoes, or of known tyres or of known gloves, and the operator analyses a large majority of the image-features of each image so that any one or several of them may be employed to identify a particular image by comparison therewith. However, once a database of identified imprint images has been established, and the second function of the system is being employed to identify an unknown target image, then possibly only one or a few shape-features will need to be identified before relatively few images from the database 18 are isolated from the remainder as having those shape-features.
In a separate area C of the screen 12 is displayed a number of smaller images (thumbnails) 50 from the database 18. These images 50 are selected by the computer on the basis that they share the shape-features currently saved in respect of the target image 13 under investigation. Initially, there may be many hundreds of such images 50 (only 6 of which are shown in Figure 1). However, as more shape-features of the imprint 14 are saved, fewer images 50 from the database 18 will have the same combination of shape-features.
As the number of images 50 reduces, the operator can potentially identify one imprint image 50 that seems to correspond with the target image. The computer 140 enables an image 50 to be dragged from screen area C to screen area A and be overlaid on the target image 13. Means to magnify or compress either image to achieve size equality can ensure that the images match in that respect. However, if the imprint image has already been standardised against the template 142, then a direct comparison can be made since all the recorded images will ideally also be of standardised size.
This process might have one of two results. Firstly, it may simply result in identification of the type of product that left the imprint. By this is meant more than that the product is a tyre mark, or shoe imprint or glove imprint (since these will be self-evident from the wider circumstances of the imprint 14). Instead, it is meant that the manufacturer and model of the object that left the imprint might be identified. This would be the manufacturer of the shoe and the particular model; or the same in respect of a tyre. This might be adequate for the purposes of the investigation. Indeed, the image 50 in this event may not then be taken from some crime scene or from a suspect, but simply be a sample image of a particular shoe or tyre, possibly even supplied by the manufacturer for this very purpose (type-identification).
However, there is also the possibility of more specific matching, where the target image 13 may be matched not just to a particular product manufacturer and model, but also to a unique object that may or may not be already known and identified. For example, an imprint from the identical object may have been left at a previous crime scene that has already been catalogued and entered in the database 18, although its unique provenance might be unknown, even if its generalised source is. Nevertheless, the information that is now obtained that the two crimes in question are linked may be vital information. Alternatively, a suspect may have a particular shoe (or car tyre) that provides an exact match with an imprint left at a crime scene (unique-identification).
Consequently, for a given crime scene mark, there may well be a match with a particular model; the system also enables other existing crime scene marks of the same model to be linked and compared with the target image to see whether more than mere type-identity, perhaps even unique-identity, can be established.
It can be seen that both aspects, identification of an imprint as well as cataloguing and storing an imprint, will apply to any particular imprint image. Thus, even when the purpose is to identify a particular imprint 14, which may initially require only one or two shape-features to be entered, it is nevertheless likely that any such imprint should itself be entered into the database so that it can subsequently be searched. In that event, the process of entering all the shape-features of the object may be undertaken.
Turning to Figure 3, a schematic representation of the process undertaken by the system of Figure 1 is illustrated, commencing with the first step of obtaining an image of the imprint. This is usually and preferably by means of a digital camera, but the invention is not so limited, and other methods of obtaining images are not excluded. Indeed, the images themselves do not even need to be digital. What is stored in the database is the image, which can be in any form, and the associated shape-features, which are in the form of digital data.
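One possible shape of such a stored record, keeping the image as an opaque reference and the shape-features as structured digital data, is sketched below; every field name and value here is an assumption made for illustration only.

```python
import json

record = {
    "image_ref": "imprint_0147.tif",   # the stored image, in whatever form it was captured
    "mark_type": "shoe",
    "shape_features": [
        {"kind": "rectangle", "cell": "b2", "length": 24.0, "breadth": 9.5, "angle_deg": 12.0},
        {"kind": "zigzag",    "cell": "a5", "pitch": 6.0,   "amplitude": 3.2, "angle_deg": 90.0},
    ],
}

print(json.dumps(record, indent=2))
```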
However, one embodiment of the process begins in Figure 3 with step 110, in which the ruler 16 is placed around the imprint of interest. In step 112, the image of the imprint (the "target image") is captured by use of the camera 40 and is displayed on section A of the screen in step 114. Step 115 involves fitting the image to a particular template and its associated grid. Step 116 involves the selection of iconic representations 42, which are then fitted to image-features of the target image in step 118.
In association with the image 13, the shape-features created in step 118 are recorded in the database 18 in step 120. The computer then searches in step 122 for images 50 that have corresponding shape-features, and that are already recorded in the database 18. In step 124, thumbnail images of the identified records are displayed on area C of the screen. If further image-features are present, and there are a large number of potential matches 50, then steps 116 to 124 may be repeated.
However, at some point the user selects images from the thumbnail area C in step 126 and overlays one of those images on the target image 13 to establish whether there is a match. If there is no match, the process may be repeated, with more image-features being added or different images 50 selected for matching. Finally, if a match is found, the process may end. However, it may also change to a cataloguing process, in which the image is analysed for all of its shape-features for subsequent searches against different target images in due course.
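Read as an informal sketch only, the loop of steps 116 to 126 might be organised as follows; the operator's fitting and visual overlay are simulated here by a simple list of features and a shortlist cut-off, both of which are assumptions.

```python
def search_by_features(database, features):
    """Same subset test as sketched earlier: records containing every saved feature."""
    return [r for r in database if features.issubset(r["features"])]

def identify_target(operator_features, database, max_shortlist=2):
    """Simulate steps 116 to 126: add one shape-feature at a time, as the operator
    would fit them, until the shortlist is small enough to inspect by overlay."""
    saved, shortlist = set(), database
    for feature in operator_features:                      # steps 116-120
        saved.add(feature)
        shortlist = search_by_features(database, saved)    # steps 122-124
        if len(shortlist) <= max_shortlist:
            break                                          # step 126: operator overlays these
    return shortlist

database = [
    {"id": "shoe_001", "features": {"rect_toe", "circle_heel"}},
    {"id": "shoe_002", "features": {"rect_toe", "zigzag_sole"}},
    {"id": "shoe_003", "features": {"rect_toe", "zigzag_sole", "circle_heel"}},
]
print([r["id"] for r in identify_target(["rect_toe", "zigzag_sole"], database)])
# ['shoe_002', 'shoe_003']
```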
Turning to Figures 4a to 4c, Figure 4a shows two iconic representations 50a and 50b, together with hand signs 70a-g that effect different functions. Thus, iconic representation 50a is a solid rectangle. In a mouse-controlled computer, clicking the cursor on the image of the iconic representation enables it to be picked up from area B and placed on the image 13. By clicking on hand 70a, the rectangle can be rotated. By clicking on hands 70b or 70c, the length and breadth of the rectangle can be changed. In this way, the rectangle can be fitted to the image-feature.
Iconic representation 50b is a wavy line or zigzag. Again, clicking on the iconic representation enables it to be positioned. However, clicking on hand 70d enables rotation, clicking on 70e enables the angle between waves to be changed, and hands 70f and 70g enable the pitch and amplitude to be adjusted.
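As a hedged illustration of how the two icon types just described might be parameterised, the following sketch records the quantities adjusted by the hand signs; the attribute names are assumptions, not terms used in the specification.

```python
from dataclasses import dataclass

@dataclass
class RectangleIcon:
    """Solid rectangle icon 50a: fitted by position, rotation (hand 70a)
    and length/breadth (hands 70b, 70c)."""
    x: float
    y: float
    angle_deg: float
    length: float
    breadth: float

@dataclass
class ZigzagIcon:
    """Wavy line / zigzag icon 50b: fitted by position, rotation (hand 70d),
    wave angle (70e), and pitch/amplitude (70f, 70g)."""
    x: float
    y: float
    angle_deg: float
    wave_angle_deg: float
    pitch: float
    amplitude: float

tread_bar = RectangleIcon(x=14.0, y=48.0, angle_deg=8.0, length=22.0, breadth=6.5)
sole_wave = ZigzagIcon(x=10.0, y=30.0, angle_deg=90.0, wave_angle_deg=60.0,
                       pitch=5.0, amplitude=2.5)
print(tread_bar, sole_wave, sep="\n")
```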
Keeping the iconic representations simple and simply defined facilitates searching and comparison. All measurements and fitting are, of course, performed with as much precision as possible, but the system should include means to recall stored images 50 whose dimensions do not match precisely yet which may differ for many reasons other than being different products. That is, they may be the same model, for example, but of different sizes. Consequently, some leeway needs to be permitted for best results.
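The leeway mentioned here could, for instance, be handled with a relative tolerance on each dimension rather than an exact-equality test; this is only a sketch, and the 15 % tolerance is an assumed figure, not one stated in the specification.

```python
def dimensions_match(recorded, target, rel_tol=0.15):
    """Compare corresponding dimensions with a relative tolerance, so that,
    for example, different sizes of the same model can still be recalled."""
    return all(abs(recorded[k] - target[k]) <= rel_tol * max(abs(target[k]), 1e-9)
               for k in target)

recorded = {"length": 24.0, "breadth": 9.5}
slightly_larger = {"length": 26.0, "breadth": 10.2}   # e.g. a larger size of the same model
print(dimensions_match(recorded, slightly_larger))    # True within a 15 % tolerance
```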
In Figure 4b, a shoe imprint image 13 is overlaid with a grid that defines broad positions of the shoe. For example, rows 0 to 3 are the sole, row 4 the instep, and rows 5 to 7 the heel. Likewise, columns a and c are the side edges of the imprint, and column b the centre. Thus, a circle in grid cell c1 is near the front, but not at the front, of the shoe, on one side.
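Purely as an illustration, mapping a point on the standardised shoe outline to this coarse grid might look like the sketch below; the assumption that the grid divides the outline into equal thirds across its width and equal eighths along its length is mine, not stated in the text.

```python
def grid_cell(x_frac, y_frac):
    """Map a point on the standardised shoe outline to a grid reference such as 'c1'.
    x_frac runs 0..1 across the width (column a to c); y_frac runs 0..1 from the
    toe (row 0) to the heel (row 7). Equal divisions are assumed for illustration."""
    col = "abc"[min(int(x_frac * 3), 2)]
    row = min(int(y_frac * 8), 7)
    return f"{col}{row}"

# A circle near (but not at) the front of the shoe, on one side:
print(grid_cell(0.9, 0.2))   # 'c1'
```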
Finally, Figure 4c shows the tool 50c for measuring the weave of a glove, where clicking on the hand 70h enables the weave to be expanded or contracted. Numerous other iconic representations and features can be employed to define the image of a crime scene mark so that it is effectively catalogued and can subsequently be searched.
Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of these words, for example "comprising" and "comprises", mean "including but not limited to", and are not intended to (and do not) exclude other moieties, additives, components, integers or steps.
Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
Features, integers, characteristics, compounds, chemical moieties or groups described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith.
The reader's attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.

Claims

1. A system comprising: a) a computer; b) a display connected to the computer to display at a first location a target image comprising image-features; c) a database accessible by the computer, comprising recorded images having pre-determined shape-features; and d) the computer being adapted to perform the steps of: i. displaying iconic representations at a second location on the display; ii. selecting an iconic representation and positioning it on a first image-feature of the target image; iii. fitting the iconic representation to the first image-feature; iv. recording the fitted iconic representation as a first shape-feature of the target image; v. searching the database for recorded images comprising said first shape-feature; vi. displaying some or all the recorded images found in the search on the display at a third location of the display; and vii. selecting a said displayed recorded image to enable visual comparison with the target image.
2. A system as claimed in claim 1, in which said first shape-feature comprises the identity of said iconic representation and parameters that define characteristics of the iconic representation.
3. A system as claimed in claim 2, in which said iconic representation is a geometric shape and said parameters that define characteristics of the iconic representation comprise defining dimensions of the shape.
4. A system as claimed in any preceding claim, in which the step of fitting includes manipulating and altering the iconic representation so as to fit the iconic representation to the image-feature.
5. A system as claimed in any preceding claim, in which said computer is further adapted to perform the steps of: viii. repeating steps i. to iv. with further iconic representations on further image-features to develop further shape-features of the target image; and ix. searching the database for recorded images comprising the combination of said first shape-feature and further shape-feature or shape-features to reduce the number of recorded images to be displayed.
6. A system as claimed in any preceding claim, in which said first shape-feature of the target image further comprises parameters detailing the location and/or orientation of the first image-feature of the target image with respect to a standard position system of images of the same type as the target image, and the shape-features of said recorded images also comprise such information.
7. A system as claimed in claim 6, in which the computer is adapted to perform the additional steps of: x. enabling the target image to be characterised by basic type; and xi. enabling the target image to be manipulated to fit a standard template for said basic type so that image-features can be matched to specific areas of the template comprising said standard position system.
8. A system as claimed in claim 7, in which said template comprises the outline of a left or right shoe.
9. A system as claimed in claim 8, in which said standard position system, in respect of shoe imprints, comprises a grid having three columns and at least six rows, wherein the columns represent left side, right side and middle of the shoe outline, and wherein at least two rows represent a heel end of the shoe outline, at least one row an instep region, and at least three rows a sole portion.
10. A system as claimed in claim 9, in which there are eight rows, with three representing front, middle and rear of the heel region, and four representing the sole region.
11. A system as claimed in any of claims 6 to 10, in which said shape-feature comprises: the nature of the iconic representation; at least one parameter that defines one or more relative dimensions of the iconic representation; and position information.
12. A system as claimed in any of claims 6 to 11, in which said shape-feature comprises: the nature of the iconic representation; at least one parameter that defines one or more relative dimensions of the iconic representation; and orientation information.
13. A system as claimed in any preceding claim, in which said system comprises a touch screen, and said program permits an operator of the system to touch the screen to effect said selection of an iconic representation and to drag it into registration with the image-feature of the target image.
14. A system as claimed in any preceding claim, in which the computer further permits the iconic representation to be manipulated and/or altered whilst in registration with the image-feature to facilitate fitting of the iconic representation to the image-feature.
15. A system as claimed in any preceding claim, in which said computer is adapted to enable a displayed recorded image to be dragged and overlaid on the target image to facilitate registration of the two images and permit determination of whether the two images have a common source.
16. A system as claimed in claim 15, in which said common source comprises type-identity, wherein the type of object that created the two images can be determined to be of the same type.
17. A system as claimed in claim 15 or 16, in which said common source comprises unique-identity, whereby the registration between the images includes correspondence of shape-features that are unique to the object that resulted in both the matched recorded image and target image.
18. A system as claimed in claim 16 or 17, in which the computer is adapted to initially perform the steps of displaying only type-images of said recorded images, type-images being images of known types of product, whereby type-identity only may be established of the target image.
19. A system as claimed in claim 17 or 18, in which, in the event that type-identity is established, the computer is adapted to perform the steps of displaying unique-images of said recorded images, unique-images being images of particular product imprints, wherein said unique-images either share the same type-identity as the target image or have no type-identity but share the same shape-features as the target image.
20. A system as claimed in claim 17 or 18, in which, in the event that type-identity is not established, the computer is adapted to perform the steps of displaying unique-images of said recorded images that also have no type-identity but share the same shape-features as the target image.
21. A system as claimed in any preceding claim, in which the computer ranks recorded images according to the precision with which the shape-features of the recorded images match the shape-features of the target, whereby approximate matches can also be retrieved.
22. A system as claimed in any preceding claim, further comprising a ruler extending in at least two dimensions, indicia being provided on a surface of the ruler in a triangular formation; and the computer being adapted to identify the indicia of the ruler when included in the target-image and adjust the target image to normalise the aspect of the target image.
23. A system as claimed in claim 22, wherein the ruler is U-shaped having a third element connected to the end of one of the other two elements that is remote from said corner.
24. A system as claimed in claim 23, wherein, where said image is on a non-planar surface, said ruler is flexible and is adhered to said surface adjacent the imprint to be imaged, said indicia comprising length markings along the length of the elements whereby fore-shortening at least in one direction caused by said non-planar surface may be approximated and accommodated by the fore-shortening of the visible distance between said length markings in said image.
25. A system as claimed in claim 22, 23 or 24, in which said normalisation is to a normal aspect of the image.
26. A system for processing, storing and comparing images of crime scene marks substantially as hereinbefore described with reference to the accompanying drawings.
27. A method of identifying a crime scene mark comprising the steps of: a) providing a computer; b) connecting a display to the computer to display at a first location a target image comprising image-features; c) accessing a database comprising recorded images having pre-determined shape-features; d) displaying iconic representations at a second location on the display; e) selecting one of said iconic representations and positioning it on an image-feature of the target image; f) fitting the iconic representation to the image-feature; g) recording the fitted iconic representation as a first shape-feature of the target image; h) searching the database for recorded images comprising said first shape-feature; i) displaying some or all the recorded images found in the search on the display at a third location of the display; and j) selecting a said displayed recorded image to enable visual comparison with the target image.
28. A method as claimed in claim 27 including cataloguing the crime scene mark, the method further comprising the steps of: k) saving the target image in the database in association with the first shape-feature.
29. A method as claimed in claim 27 or 28, further comprising the step of manipulating and altering the iconic representation so as to fit the iconic representation to the image-feature.
30. A method as claimed in claim 27, 28 or 29, further comprising the step of manipulating and/or altering the iconic representation whilst it is in registration with the image-feature to facilitate fitting of the iconic representation to the image-feature.
31. A method as claimed in any of claims 27 to 30, further comprising the steps of: l) repeating steps e) to j) with further iconic representations on further image-features to develop further shape-features of the target image; and m) searching the database for recorded images comprising the combination of said shape-feature and further shape-feature or shape-features to reduce the number of recorded images to be displayed.
32. A method as claimed in claim 31 when dependent on claim 28, the method further comprising the steps of: n) saving the target image in the database in association with the first and further shape-features of the target image.
33. A method as claimed in claim 31 or 32, further comprising the step of detailing the orientation of the shape-features of the target image with respect to each other and/or with respect to a standard position system of images of the same type as the target image, the shape-features of said recorded images also comprising such information enabling comparison therebetween.
34. A method as claimed in claim 33, wherein said standard position system in respect of shoe imprints comprises a three column by at least six row grid, wherein the columns represent left side, right side and middle of a shoe imprint, and wherein at least two rows represent a heel end of the shoe imprint, at least one row an instep region and at least three rows a sole portion.
PCT/GB2009/051687 2008-12-15 2009-12-10 Crime scene mark identification system WO2010070323A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0822813.2 2008-12-15
GB0822813A GB2466245A (en) 2008-12-15 2008-12-15 Crime Scene Mark Identification System

Publications (1)

Publication Number Publication Date
WO2010070323A1 true WO2010070323A1 (en) 2010-06-24

Family

ID=40326110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2009/051687 WO2010070323A1 (en) 2008-12-15 2009-12-10 Crime scene mark identification system

Country Status (2)

Country Link
GB (1) GB2466245A (en)
WO (1) WO2010070323A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2322448A1 (en) * 1998-03-04 1999-09-10 The Trustees Of Columbia University In The City Of New York Method and system for generating semantic visual templates for image and video retrieval
US6563959B1 (en) * 1999-07-30 2003-05-13 Pixlogic Llc Perceptual similarity image retrieval method
AU2002953384A0 (en) * 2002-12-16 2003-01-09 Canon Kabushiki Kaisha Method and apparatus for image metadata entry
US20070288453A1 (en) * 2006-06-12 2007-12-13 D&S Consultants, Inc. System and Method for Searching Multimedia using Exemplar Images
US8027549B2 (en) * 2006-06-12 2011-09-27 D&S Consultants, Inc. System and method for searching a multimedia database using a pictorial language

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1593001A (en) 1977-01-17 1981-07-15 Hill R B Apparatus and method for identifying individuals through their retinal vasculature patterns
US5802361A (en) * 1994-09-30 1998-09-01 Apple Computer, Inc. Method and system for searching graphic images and videos
EP0877990A1 (en) 1996-01-31 1998-11-18 Dudley Bryan Crossling Imprint identification system
EP1125244A1 (en) 1998-11-03 2001-08-22 Dudley Bryan Crossling Imprint identification system
US20040114785A1 (en) 2002-12-06 2004-06-17 Cross Match Technologies, Inc. Methods for obtaining print and other hand characteristic information using a non-planar prism
GB2432029A (en) 2005-11-02 2007-05-09 Crime Scene Invest Equipment L Imprint identification system using image scanner calibration
CN1776717A (en) 2005-12-01 2006-05-24 上海交通大学 Method for identifying shoes print at criminal scene
CN1936922A (en) 2006-08-17 2007-03-28 上海交通大学 Shoe-print automatic matching method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PHILLIPS M: "A SHOEPRINT IMAGE CODING AND RETRIEVAL SYSTEM", ECOS. EUROPEAN CONVENTION ON SECURITY AND DETECTION, XX, XX, 16 May 1995 (1995-05-16), pages 267 - 271, XP000672461 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005565A (en) * 2014-04-18 2015-10-28 大连恒锐科技股份有限公司 Onsite sole trace pattern image retrieval method
CN105023027A (en) * 2014-04-18 2015-11-04 大连恒锐科技股份有限公司 Sole trace pattern image retrieval method based on multi-feedback mechanism
CN105023027B (en) * 2014-04-18 2019-03-05 大连恒锐科技股份有限公司 Live soles spoor decorative pattern image search method based on multiple feedback mechanism
CN106940727A (en) * 2017-03-22 2017-07-11 重庆市公安局刑事警察总队 The coding method that shoe sole print various dimensions are classified with identification
CN107423715A (en) * 2017-07-31 2017-12-01 大连海事大学 A kind of footprint automatic identifying method based on multiple features combining decision-making
CN110287370A (en) * 2019-06-26 2019-09-27 中国人民公安大学 Suspect's method for tracing, device and storage medium based on field shoe print
CN110287370B (en) * 2019-06-26 2022-03-04 中国人民公安大学 Crime suspect tracking method and device based on-site shoe printing and storage medium

Also Published As

Publication number Publication date
GB0822813D0 (en) 2009-01-21
GB2466245A (en) 2010-06-23

Similar Documents

Publication Publication Date Title
Hosny et al. Copy-move forgery detection of duplicated objects using accurate PCET moments and morphological operators
Kupfer et al. An efficient SIFT-based mode-seeking algorithm for sub-pixel registration of remotely sensed images
Chetverikov Pattern regularity as a visual key
CN109241374B (en) Book information base updating method and library book positioning method
TW393629B (en) Hand gesture recognition system and method
CN105404657B (en) A kind of image search method based on CEDD features and PHOG features
US20070263912A1 (en) Method Of Identifying An Individual From Image Fragments
WO2010070323A1 (en) Crime scene mark identification system
EP2106599A2 (en) Feature matching method
US20180307708A1 (en) Method and System for Analyzing an Image Generated by at Least One Camera
JP2010027025A (en) Object recognition device, object recognition method, and program for object recognition method
JP2013109773A (en) Feature matching method and article recognition system
Maaten et al. Computer vision and machine learning for archaeology
Rangkuti et al. Batik image retrieval based on similarity of shape and texture characteristics
Janu et al. Query-based image retrieval using SVM
Kucheryavski Extracting useful information from images
JP2000082075A (en) Device and method for retrieving image by straight line and program recording medium thereof
Kumar et al. A comprehensive study on content based trademark retrieval system
Ansari et al. An Image Retrieval Framework: A Review.
CN112800267A (en) Fine-grained shoe print image retrieval method
Ayech et al. Texture description using statistical feature extraction
Wen et al. Image retrieval of digital crime scene images
GHINDAWI et al. Proposed image forgery detection method using important features matching technique
Dardi et al. A combined approach for footwear retrieval of crime scene shoe marks
Johnson et al. Whole-painting canvas analysis using high-and low-level features

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09799698

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09799698

Country of ref document: EP

Kind code of ref document: A1