WO1994008311A1 - Image display system - Google Patents

Image display system

Info

Publication number
WO1994008311A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
data
features
blocks
image
Application number
PCT/GB1993/002052
Other languages
French (fr)
Inventor
John Desmond Platten
Original Assignee
Aspley Limited
Priority claimed from GB929221040A external-priority patent/GB9221040D0/en
Application filed by Aspley Limited filed Critical Aspley Limited
Priority to AU48323/93A priority Critical patent/AU4832393A/en
Publication of WO1994008311A1 publication Critical patent/WO1994008311A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/117 - Identification of persons
    • A61B5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • A61B5/1176 - Recognition of faces
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 - Details of notification to user or communication with user or patient; user input means
    • A61B5/742 - Details of notification to user or communication with user or patient; user input means using visual displays

Definitions

  • The image data stored in the storage device 1 is formed from complete source faces which are then broken down into part images, one image for each facial feature such as head shape, eye shape etc.
  • The image data is stored in the storage device 1 using a static medium such as magnetic tapes, magnetic disks or optical storage units.
  • The image data is then loaded as required into the faster volatile local memory of the CPU 3, as indicated on the right-hand side of Figure 1, for improved handling speed, the data being copied at the time of loading into the local memory, and modified in order to prepare it for the image pasting operation which will follow.
  • Each pixel of a part image is stored in a byte where 0 corresponds to black, 127 to white and 1-126 inclusive are linearly progressive scales of grey as illustrated in Figure 3. It will be seen that, working in binary, the maximum and minimum values that a byte may store are 11111111 (decimal 255) and 00000000 (decimal 0).
  • The byte values 128-255 not used in the grey scale values are reserved for use as "padding" bytes in order to enable data corresponding to the part images to be stored in a byte array as will now be explained.
  • A byte array is a representation of a rectangle in a continuous storage medium, for example a frame store in the storage device 1.
  • The addresses of the eight bytes can be established as follows:
  • Part images are always stored in an enclosing rectangle regardless of the shape of the part image. Parts of the rectangle which are not part of the image are then signified by special padding bytes. Thus, in the particular system being described, any byte values 128-255 are part of the padding and do not represent part of the image itself.
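The storage scheme above can be sketched as follows. This is a minimal illustration, assuming row-major ordering within the enclosing rectangle; the function and variable names are not from the patent:

```python
PADDING = 255  # any byte in 128-255 marks "outside the part image"

def byte_address(row, col, width):
    # Row-major address of a pixel within the enclosing rectangle.
    return row * width + col

def make_part_image(width, height, pixels):
    # pixels: dict mapping (row, col) -> grey level (0 black .. 127 white).
    # Every cell of the rectangle not covered by the part image is
    # filled with padding bytes.
    data = bytearray([PADDING] * (width * height))
    for (row, col), grey in pixels.items():
        assert 0 <= grey <= 127, "grey levels use only byte values 0-127"
        data[byte_address(row, col, width)] = grey
    return data

# A 4x3 rectangle holding an irregular (L-shaped) part image:
img = make_part_image(4, 3, {(0, 0): 10, (1, 0): 60, (2, 0): 60, (2, 1): 120})
assert img[byte_address(0, 0, 4)] == 10
assert img[byte_address(0, 1, 4)] == PADDING  # not part of the image
```

The padding sentinel is what later lets a feature of any silhouette be pasted from a plain rectangular buffer.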
  • The first screen 41 is designed to display the image of the face currently under consideration by the witness, the separate screen 43 containing the menus and other control features for consideration by the operator 9.
  • The particular system being described uses the WINDOWS software produced by Microsoft, which enables the operator to select the appropriate operations from the control screen by means of a mouse.
  • Such a two-screen system suffers the disadvantage, however, that it may be appropriate to display some information, for example lists of options, to the witness.
  • The use of a two-screen system then necessitates the witness's attention being diverted from the display of the image, as well as being more difficult for the operator.
  • Figures 5 and 6 illustrate a combined image and control screen which is designed to give both the witness 13 and the operator 9 a clear view of the particular image being displayed on the screen at any time, and to give the operator 9 a clear view of the controls.
  • The screen display illustrated in Figure 5 is designed for use by a right-handed operator whilst the screen display illustrated in Figure 6 is designed for use by a left-handed operator.
  • The handedness of the display on the screen is controlled by a software instruction entered by the operator.
  • The particular screen configurations illustrated in Figures 5 and 6 have the advantage of avoiding instructions appearing in the region of the image being displayed to the witness, as this is known to cause confusion.
  • The controls are displayed as compact "button palettes" designed for ease of use of the system by a relatively unskilled operator.
  • The system therefore starts from a type likeness, that is an example of the generic face group to which the suspect belongs based on an initial statement from the witness. All the features are thus coded from the verbal descriptions from the witness, with the code for each feature being placed in index files in the storage device 1.
  • The witness 13 is initially provided with a display on the screen 51 of a world map as an aid to choosing a data base corresponding to one of a number of different ethnic groups, for example:
  • The choice of the features displayed on the image screen 5 is determined by so-called "descriptors" taken from the information provided by the witness.
  • The operator enters an initial statement on behalf of the witness 13 and the relevant indexes for each feature are searched for feature images which match the descriptors taken from the statement given by the witness.
  • The face shape displayed on the image screen 5 will thus be chosen to be one of the various possible shapes, for example round, oval, square etc. If the first face shape displayed to the witness is not correct due, for example, to an inexact verbal description by the witness, subsequent face shapes may be chosen and displayed.
  • If n*2+tan exceeds 255 then the byte value of the Bitmap is set at 255; if n*2+tan is less than 0 then the byte value of the Bitmap is set at 0.
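The rule above amounts to clamping each computed byte to the range 0-255, which might be sketched as:

```python
def clamp_byte(value):
    # A computed Bitmap byte cannot exceed white (255) or fall below
    # black (0); out-of-range results are clamped to the nearest limit.
    return max(0, min(255, value))

assert clamp_byte(300) == 255   # whiter than white -> white
assert clamp_byte(-12) == 0     # darker than black -> black
assert clamp_byte(100) == 100   # in-range values pass through
```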
  • An image byte cannot be whiter than white (255) or darker than black (0). It is found that a witness will respond better to an image of a complete face rather than a catalogue of parts of the face. This is because a complete face places each individual feature in a natural context. This can be seen by considering the example of a deep-set eye. If the eye is shown in isolation on a white background the eye is out of context and information is lost about just how deep set the eye will appear when set on a face shape.
  • By contrast, where the eye is shown as part of a complete face the eye will be seen in context. Where the eye is displayed in its natural background the eye is seen to sink into the face shape even if the face shape is not correct. Thus the witness can make a judgement about how deep set the eye will be in the finished likeness.
  • The system is designed to compensate for the witness being slightly inaccurate in his initial statement, or the witness describing the remembered features of the suspect's face in different terms from those understood by the operator.
  • The witness may say:
  • The descriptors for the system may be divided into two types, which may be designated "independent" and "scalar".
  • A scalar descriptor is a descriptor which exhibits a linear range of values from its first possible value to its last possible value. For example, the descriptor "hair length" may be seen to be a scalar descriptor as it may have the following values:
  • The descriptor for face shape is an independent descriptor as it may have values which are unconnected, for example:
  • Face shapes lack an obvious trend or gradation between each value, and the descriptions for face shapes are therefore independent.
  • The descriptor value to either side of that put forward by the witness for a scalar descriptor may be said to be a close match.
  • The descriptor value to either side of that put forward for an independent descriptor can only be seen as a complete mismatch.
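The two descriptor behaviours can be sketched as follows. This is a hypothetical illustration; the value list below is invented, as the patent does not enumerate the actual descriptor values:

```python
# Illustrative scalar value list (not taken from the patent).
HAIR_LENGTH = ["bald", "very short", "short", "medium", "long", "very long"]

def scalar_match(described, stored, values=HAIR_LENGTH):
    # For a scalar descriptor, the value either side of the one the
    # witness gave still counts as a close match.
    distance = abs(values.index(described) - values.index(stored))
    if distance == 0:
        return "exact"
    return "close" if distance == 1 else "mismatch"

def independent_match(described, stored):
    # For an independent descriptor (e.g. face shape) any value other
    # than the described one is a complete mismatch.
    return "exact" if described == stored else "mismatch"

assert scalar_match("short", "medium") == "close"       # neighbouring value
assert scalar_match("short", "very long") == "mismatch"
assert independent_match("round", "square") == "mismatch"
```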
  • The system is designed to treat independent and scalar descriptors in a different manner.
  • The system is also designed to recognise some descriptors as being more important than others.
  • The storage device 1 is loaded with data relating to each of all the possible descriptors.
  • Each descriptor is allocated a designation of scalar or independent, and weak or strong.
  • The system includes a score table for allocating a weighting factor to each descriptor.
  • An example of a possible score table is as follows:
  • When the system searches for data corresponding to a set of features designated by the witness by means of descriptors, more weight will therefore be given to some features than to others. Furthermore, the system does not require the operator 9 to enter every possible descriptor, as a witness may remember some aspects of a suspect's face better than others, for example:
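The weighted scoring might be sketched as follows. The weights below are invented for illustration, since the patent does not reproduce its actual score table; only the scheme (strong descriptors outweigh weak ones, scalar "close" matches still score, omitted descriptors are simply skipped) follows the text:

```python
# Hypothetical score table; the patent's actual values are not given.
SCORE_TABLE = {
    ("strong", "exact"): 4, ("strong", "close"): 2,
    ("weak", "exact"): 2, ("weak", "close"): 1,
}  # any mismatch scores 0

def score_candidate(grades):
    # grades: (strength, match grade) pairs, one per descriptor the
    # witness actually supplied; omitted descriptors contribute nothing.
    return sum(SCORE_TABLE.get(grade, 0) for grade in grades)

# Witness recalled face shape (strong, exact) and hair length (weak,
# close); all other descriptors were left blank.
assert score_candidate([("strong", "exact"), ("weak", "close")]) == 5
```

Candidate feature images can then be ranked by this score, so the closest matches are displayed to the witness first.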
  • The operator may interrogate the system as to the exact descriptor of that feature as stored in the index file. The operator can then feed this new descriptor back into the system, where the intelligent search mechanism of the system will ensure that the images of the features most similar to that identified by the witness are brought forward for consideration.
  • Where a witness has particularly poor recall, the system will undertake a default search to provide an image of a face including unobtrusive features as described hereinbefore.
  • This initial default face can be progressively amended by means of the feedback mechanism described above, thus resulting in similar features being sequentially presented to the witness until the correct likeness is achieved.
  • The witness may become fatigued having seen too many features, the witness then having trouble in remembering the face of the suspect.
  • The system is therefore designed such that, using the controls displayed on the image screen 5, the operator 9 is able to clear the image from the screen. This then allows the operator 9 and witness 13 to discuss the witness's memory of the suspect without influence by the image displayed on the screen. The operator 9 is then able to reveal the latest image on the display screen 5 when the witness is ready to proceed.
  • Some images stored in the store 1 may conflict.
  • For example, some images of beards may include a moustache.
  • The system will detect when such a feature is present and prevent an additional moustache from appearing.
  • Similarly, long hair images may include ears, where the ears are slotted through the hair, or some eyes may include brows, because some pairs of eyes and brows are naturally matched to give a specific "brooding" effect which would be lost if the eyes and brows were separated.
  • The system will prevent an additional pair of ears or brows from appearing. It is, however, possible for the operator to override this facility where part of a compound feature is of particular interest. The operator will then be able to compensate for some degree of image duplication.
  • Figure 7 shows a typical example of the face shapes stored in the databases DATABASE1 ... DATABASEn, from which it can be seen that the central face area is featureless and heavily roughened.
  • The central featureless core of the face shape provides a disruptive background. This is important as it has been proven by psychologists, for example Dr John Shepherd of the University of Aberdeen, that spurious lines interfere with recognition.
  • The central featureless core allows images of the remaining details of the face to be pasted into the face shape without visible edges, thus giving a naturalistic appearance.
  • The roughened area in the central face region will effectively camouflage the edges of the other facial features which are pasted into this area, without the need for complicated smoothing procedures.
  • The features to be placed on the face shape are also designed to help to reduce join lines.
  • The silhouette of each part image is not random, but is designed such that all edges occur along contours of equal brightness. Thus, when placed upon a disruptive background, the join lines will appear as the edge of the natural shadow surrounding the feature.
  • Each selected feature is added to the face shape displayed on the image screen 5 in such a way that the padding in the byte array of data for the feature is not transferred.
  • First the mask for the feature is pasted onto the face shape, combining byte for byte with the face shape using a mathematical operation known as an AND operation.
  • This operation leaves a silhouette of the feature superimposed on the face shape.
  • The Bitmap, i.e. the grey scale distribution for the image, is then applied byte by byte using a mathematical operation known as an OR operation. This places the Bitmap in the silhouette of the feature, leaving the rest of the face untouched and completing the paste of the feature.
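The two-step paste (mask ANDed in, then Bitmap ORed in) can be sketched as follows. How the mask and Bitmap are derived from the padded byte array is an assumption on our part; the patent names only the two operations:

```python
def paste_feature(face, part):
    # face, part: equal-length byte sequences covering the same
    # rectangle; face holds grey levels 0-127, part holds grey levels
    # 0-127 for feature pixels and padding bytes 128-255 elsewhere.
    mask = [0xFF if b >= 128 else 0x00 for b in part]    # padding keeps face
    bitmap = [0 if b >= 128 else b for b in part]        # grey where feature
    cut = [f & m for f, m in zip(face, mask)]            # AND: black silhouette
    return [c | b for c, b in zip(cut, bitmap)]          # OR: fill silhouette

face = [50, 50, 50, 50]        # uniform face-shape background
part = [200, 30, 90, 255]      # feature covers the middle two bytes
assert paste_feature(face, part) == [50, 30, 90, 50]
```

ANDing with 0x00 blacks out the silhouette while ANDing with 0xFF leaves the face untouched; ORing the Bitmap (zero outside the feature) then fills only the silhouette.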
  • The data is input into the display buffer memory 7, which has an image buffer of bytes corresponding exactly to the pixels of the image screen 5, thus enabling the specified image to be displayed on the image screen 5.
  • The system has a facility for normal and boost mode editing.
  • The boost mode allows editing of any particular feature in large increments. This speeds up the procedure of editing in so much as the operator may make coarse modifications to a feature using the boost mode, followed by fine adjustment of the feature using the normal mode.
  • The system allows a multiple selection of features to be edited together, for example: "Move the nose, mouth and both eyes up."
  • The system has the ability to perform the necessary editing of the displayed image on all these features, or any chosen group of features, simultaneously.
  • Similarly, pairs of features may be edited so that they remain a naturally matching pair.
  • Each current value of the data displayed on the screen of the system is stored in the system local memory.
  • These stored values may be protected during the loading into the local memory of a new feature from the storage device 1.
  • Alternatively the stored value may be allowed to be modified to a value more natural to the incoming image, for example:
  • The move edit function is performed on the currently displayed set of displayed data so as to raise the eyebrow position, the value of the data then being locked.
  • The eyebrows displayed on the screen take their natural position on the displayed face together with the locked data increment.
  • The eyebrows thus appear higher than normal.
  • This feature is very important when features from different databases of naturally different tone are mixed; for example, an Afro-Caribbean nose will be darker than a Caucasian nose. If, for example, it is required to produce a simulated image of a half-caste subject or perhaps a genetic throwback, it is distracting for the witness to have to load each darker feature in turn and then lighten the feature using the edit mode.
  • This particular locking technique enables the first feature chosen to be lightened, and then the edit brightness increment data to be locked into the system thus enabling subsequent features to be lightened automatically to the required amount.
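The locked-increment behaviour might be sketched as follows. The class and method names are hypothetical; the patent describes only the behaviour, not an implementation:

```python
def clamp_grey(value):
    # Grey levels occupy byte values 0-127 in this system.
    return max(0, min(127, value))

class FeatureEditor:
    def __init__(self):
        self.locked_brightness = 0   # edit increment carried between loads

    def lighten(self, feature, amount, lock=False):
        # Lighten the currently loaded feature; optionally lock the
        # increment so later features pick it up automatically.
        if lock:
            self.locked_brightness += amount
        return [clamp_grey(b + amount) for b in feature]

    def load(self, feature):
        # Newly loaded features are lightened by the locked increment
        # without further operator action.
        return [clamp_grey(b + self.locked_brightness) for b in feature]

editor = FeatureEditor()
editor.lighten([10, 20], 30, lock=True)    # first feature, increment locked
assert editor.load([40, 50]) == [70, 80]   # later feature lightened automatically
```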
  • The system allows for an interactive art system to perform freehand alteration of the displayed image.
  • The interactive art system may be used at any time during the pasting process.
  • The art system will alter the feature currently held in the local memory of the CPU 3, the original feature stored in the storage device 1 being unaltered.
  • The alteration produced by the art system will only be stored until the local memory is flushed, for example when a further possible alternative feature is loaded.
  • Alternatively, the feature altered by the art work system may be permanently saved in the storage device 1.
  • The wart is painted on the pasted nose.
  • The nose plus wart is input into the local memory and mixed into the stored likeness.
  • The local CPU memory includes an amount of storage for storing blank feature areas to which images known as "Overlays" may be assigned.
  • The images for the overlays may be derived from a predefined library of parts stored in the storage device 1.
  • The library may include such features as hats, glasses, sunglasses, moles, scars, lines and other items which can similarly be added to a face.
  • The contents of the library may be presented as a menu both on the image screen 5 and in an indexed pictorial catalogue which may be presented to the witness.
  • The pasting of the overlay on the face displayed on the image screen 5 will be as for any other feature.
  • The data relating to the overlay may, however, include information relating to the position of the overlay on the composite image, e.g. at the top of the image in the case of a hat.
  • The operator is able to compose the features on the overlay using information from the current composite picture displayed on the image screen 5, using the interactive art system as described above.
  • For example, the witness may realise that the image of a suspect requires a question mark shaped scar.
  • The current face shown on the display screen 5 is transferred to the interactive art system.
  • An appropriate scar is painted onto the composite image.
  • The image of the scar is then cut out of the face and assigned to an overlay.
  • The image of the scar may then be moved, altered in tone or colour, or rescaled just like any other component.
  • The whole face is transferred to the interactive art system as this will then provide an appropriate context for the scar. If the scar were drawn as an isolated feature, or upon a local sample of the face, for example the cheek, the overall effect of the scar on the whole face would be difficult to judge.
  • The operator can also add images to an overlay from other sources.
  • For example, the operator may use a scanner, indicated as 17 in Figure 1, or other appropriate hardware such as a video camera, to turn an example of the hat motif into an image which is then stored in the system, either in the local CPU memory or in the storage device 1.
  • The image may either be displayed separately to the witness or may be assigned to an overlay and thus may be added to the composite image, moved, altered in tone or colour, and/or rescaled.
  • The overlays will be laid over the top of the standard facial components of an image, this procedure fitting in with the general predefined composition order of the system, which ensures for example that eyes always appear beneath brows but underneath the hair.
  • The operator may, however, change the order of construction of the composite image where this appears necessary.
  • An example of this may be where an unidentified murder victim has been found in which the face is largely intact but the eyes have decomposed.
  • The face of the murder victim may be scanned into the system and the image stored.
  • The image may then be assigned to an overlay, thus appearing as the initial composite image on the display screen 5.
  • The eyes of the facial image may then be moved down the order in which the face is normally composed, such that they are drawn after the overlay. New eyes can then be selected from the various stored images of eyes in the store 1, thus creating a composite image which is acceptable for display in newspapers, posters or television broadcasts.
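The predefined composition order and the operator's reordering override can be sketched as follows. The order list and feature names are illustrative, not taken from the patent:

```python
# Illustrative default draw order, earliest drawn first.
DEFAULT_ORDER = ["face shape", "brows", "eyes", "nose", "mouth", "hair", "overlay"]

def move_after(order, feature, anchor):
    # Operator override: redraw `feature` immediately after `anchor`,
    # so it is painted on top of it.
    order = [f for f in order if f != feature]
    order.insert(order.index(anchor) + 1, feature)
    return order

# Murder-victim example: the scanned face becomes an overlay, and the
# eyes are moved down the order so replacement eyes are drawn over it.
order = move_after(DEFAULT_ORDER, "eyes", "overlay")
assert order.index("eyes") > order.index("overlay")
```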
  • The operator causes the system to export a byte image of the displayed face to magnetic storage so that the image can be printed or processed, for example for inclusion in posters displaying the suspect's face to be published at a later date.
  • The image itself contains no information regarding the contents of the image or how the image was formulated.
  • The system does, however, call a procedure for interpreting the image in terms of descriptors. These descriptors are then saved to the storage device 1. This description in terms of descriptors may then be used to search through a further data base containing other identification data.
  • For example, it may be used to search through a storage system including images of previous ... The system can also draw a large circle in the memory of the display device 5. This appears on the screen of the display device 5, the monitor then being adjusted until the image is of a circle. If this procedure is not carried out in order to ensure the correct setting of the monitor, the final print will not match the image on the screen. Thus, for example, where the circle in the memory of the display device appears as an oval on a poorly adjusted display device monitor, the circle will still be printed as a circle by the printer 15, and face images will be printed either thinner or fatter than that determined by the witness on viewing the image on the image screen 5.
  • The system described hereinbefore has particular benefits in the production of a face image using information from a witness based on his memory of a suspect.
  • The system will, however, have other uses.
  • Where an input image is put into the system, for example from a digitization of a photograph of a child who has been missing for some time, the system may be used to produce an ageing of the face of the child.
  • The system may be used not only to produce a likeness of a suspect, but to crystallise the witness's initial loose verbal description into a structured description using a recognized facial coding scheme.

Abstract

An image display system includes an information store (1) which stores blocks of data corresponding to different images of the features of a number of different faces. A computer (3) is arranged to select blocks of data from the store (1) dependent on input commands entered by an operator (9), and display an image of a composite face produced from the selected features on a visual display screen (5). Different input commands are assigned weighting factors by the computer (3) so as to enable the computer to select features dependent on the weighting factors.

Description

IMAGE DISPLAY SYSTEM
This invention relates to an image display system. The invention has particular, although not exclusive, relevance to systems for producing an image of the face of a person. Such systems are used, for example, by the police as an alternative to the use of a police artist for producing an image of the face of a suspect, based on information given by a witness to a crime.
In UK Patent No. 1605135 there is described an image display system including a computer having an information store including a number of sets of stored data corresponding to images of different facial features taken from a number of different faces. An operator, acting on information relating to each feature of the face of a suspect provided by a witness, feeds the information into the computer. The computer performs a search amongst the sets of stored data, and selects data from the information store corresponding to images of the facial features having particular characteristics most closely corresponding to the features described by the witness. The chosen images are combined by the computer, and displayed on a visual display device so as to present a composite image of a face to the witness.
The system disclosed in UK Patent No. 1605135 has been developed over the years to provide a useful police tool. In the system presently being used, images of the composite face on the visual display device may be edited so as, for example, to alter the intensity, size or position of each of the selected facial features and add facial arrangements such as pairs of spectacles. The image displayed on the visual display unit may be further edited by means of a software computer graphics system in which lines may be "drawn" or shapes may be "painted" on the composite image so as, for example, to add facial blemishes. Such a known system has a number of drawbacks. It is not possible to edit the image displayed on the display device until all the different parts of the face have been chosen and displayed on the visual display device as a composite face. This is confusing for the witness, who has to continue to instruct the operator in the selection of further features of the face to be displayed whilst viewing unsatisfactory representations of the features which he has chosen up to that time. Furthermore, the witness will often give inexact descriptions of features which he cannot remember or which he cannot verbalise. The witness may also describe features in different terms of reference to those of the operator. This leads to a large number of images of the wrong facial features being displayed to the witness, thus lengthening the process of producing an acceptable composite image, and being a cause of possible confusion and fatigue to the witness.
It is an object of the present invention to provide an image display system wherein at least some of the disadvantages of systems used hitherto are at least alleviated.
According to the present invention there is provided an image display system including an information store for storing blocks of data corresponding to different images of features of a number of different composite images, a computing means for selecting blocks of data from the store dependent on input commands to the system, and combining means for combining blocks of data so as to produce data corresponding to a chosen composite image formed from the selection of said images of features.
The composite images are preferably images of human faces. Preferably the information store includes a plurality of discrete data bases corresponding to composite images of different types, and the input means is able to select blocks of data from different ones of the data bases during the formation of a single composite image.
Preferably, the computing means assigns weighting factors to the input commands, and performs the selection dependent on the weighting factors. Preferably, the system includes means for modifying the images of features prior to the production of data relating to the composite image.
One image display system in accordance with an embodiment of the invention, will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 is a schematic overview of the components of the system;
Figure 2 illustrates the organisation of data within the storage device shown in Figure 1;
Figure 3 illustrates the correlation between stored data value and the grey scale of the displayed image for each pixel of the image screen shown in Figure 1;
Figures 4, 5 and 6 illustrate schematically three different image screen configurations; and
Figure 7 illustrates a face shape image displayed on the image screen.
1. SYSTEM OVERVIEW
The system to be described is designed to enable an image of the face of a suspected criminal to be built up using a recollection of the face by a witness to the crime. Referring firstly to Figure 1, the image display system to be described includes a storage device 1 for storing digital data relating to sets of images of possible features of different faces, for example face shapes, ear shapes, eye shapes etc. The device 1 is linked to a central processing unit (CPU) 3 which is arranged to combine images corresponding to data from the storage device 1 to form images of a face, and display the images sequentially on the screen 5 of a visual display unit which is linked to the CPU 3 via a display buffer memory 7. A suitable computer for use in the system is, for example, one based on an Intel 80X86 processor.
A human operator, indicated as 9, is able to control the CPU 3 via an operator input device 11 which may take any suitable form, such as a keyboard, a mouse device or a combination thereof. The operator 9 will generally act in response to verbal directions from the witness, indicated as 13.
When a "fit" of the face of the suspect is obtained, or a record of an interim image displayed on the image screen 5 is required, the CPU 3 is arranged to send suitable control signals to a printer 15.
Turning now also to Figure 2, within the storage device 1 there are incorporated a number of discrete data storage areas. A large proportion of storage space is dedicated to a number of databases, DATABASE1, DATABASE2, ... DATABASEn. Each database is dedicated to a different racial group, for example Afro-Caribbean, Caucasian etc., for either male or female subjects, and includes data files relating to different possible versions of facial features, for example different face shapes, different hair styles, different nose shapes and so on. Within the databases, for each set of data relating to a particular part of the face there is included an index to the data files categorising the particular sets of data in each file; for example, in the face shapes file the stored images will be categorised as oval, round, square, triangular etc. The storage device 1 also includes an output store which stores data defining images which have been "pasted" using the system, as will be described in more detail hereafter, together with information relating to a description of the pasted images. The remaining part of the storage device 1 includes the software for a paintbrush program, for example PhotoStyler produced by Aldus, and a publishing program, for example PageMaker produced by Aldus. Finally the storage device 1 includes the software for the image pasting function, that is the "E-FIT" software, this comprising the system menus for display on the image screen 5 and the likeness databasing software whose operation will be described in more detail hereafter.
2. DATA STORAGE AND RETRIEVAL
The image data stored in the storage device 1 is formed from complete source faces which are then broken down into part images, one image for each facial feature such as head shape, eye shape etc. When not in use the image data is stored in the storage device 1 using a static medium such as magnetic tapes, magnetic disks or optical storage units. The image data is then loaded as required into the faster volatile local memory of the CPU 3 as indicated on the right hand side of Figure 1 for improved handling speed, the data being copied at the time of loading into the local memory, and modified in order to prepare it for the image pasting operation which will follow.
Each pixel of a part image is stored in a byte, where 0 corresponds to black, 127 to white and 1-126 inclusive are linearly progressive scales of grey as illustrated in Figure 3. It will be seen that, working in binary, the minimum and maximum values that a byte may store are:
00000000 = 0 in base 10
11111111 = 255 in base 10
The byte values 128-255 not used in the grey scale values are reserved for use as "padding" bytes in order to enable data corresponding to the part images to be stored in a byte array as will now be explained.
A byte array is a representation of a rectangle in a continuous storage medium, for example a frame store in the storage device 1. Thus, taking the example of a 2x4 rectangle stored as a byte array, the addresses of the eight bytes can be established as follows:
Address of byte        Pixel position
first byte             line 1, column 1
first byte + 1         line 1, column 2
first byte + 2         line 1, column 3
first byte + 3         line 1, column 4
first byte + 4         line 2, column 1
first byte + 5         line 2, column 2
first byte + 6         line 2, column 3
first byte + 7         line 2, column 4
Thus, it can be seen that the byte corresponding to any particular pixel of the part image can be addressed as follows:
byte [Line, Column] is stored at (Address of 1st byte) + ((Line - 1) * Columns) + (Column - 1)
where Columns is the number of columns in the enclosing rectangle (four in the above example).
In view of this ease of addressing images stored in byte arrays, part images are always stored in an enclosing rectangle regardless of the shape of the part image. Parts of the rectangle which are not part of the image are then signified by special padding bytes. Thus in the particular system being described, any byte values 128-255 are part of the padding and do not represent part of the image itself.
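The addressing rule above can be sketched as follows. This is an illustrative Python fragment only (the patent does not specify an implementation); the function name and the zero-based offset convention are assumptions.

```python
def byte_offset(line, column, columns):
    """Offset from the first byte for the pixel at (line, column),
    both 1-indexed as in the table above. `columns` is the width of
    the enclosing rectangle of the part image."""
    return (line - 1) * columns + (column - 1)

# For the 2x4 rectangle of the example: line 2, column 1 is the
# fifth byte of the array, i.e. offset 4 from the first byte.
```

This reproduces the address table given above for the 2x4 example rectangle.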
When the data representing the face images are loaded from the storage device 1 into the local memory of the CPU 3, byte values of each part image are doubled so as to create two pictures. The first picture is called a "bitmap" and corresponds to the grey value distribution over the part image. The second picture corresponds to the silhouette of the part image, and is called the "mask". The modification of the data into these two pictures is as follows:
Picture 1 - bitmap
Image bytes (0-127) → Image bytes 0-254 (doubled)
Padding bytes (128-255) → 0

Picture 2 - mask
Image bytes (0-127) → 0
Padding bytes (128-255) → 255
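The loading transformation might be sketched as follows. This is an illustrative Python fragment, not part of the patent; note that doubling the image bytes 0-127 yields bitmap values in the range 0-254.

```python
PAD_THRESHOLD = 128  # byte values 128-255 are padding, not image data

def split_part_image(stored):
    """Split a stored part image into a bitmap and a mask.

    Image bytes (0-127) are doubled into the bitmap and become 0 in
    the mask; padding bytes (128-255) become 0 in the bitmap and 255
    in the mask, as in the transformation table above.
    """
    bitmap = bytes(b * 2 if b < PAD_THRESHOLD else 0 for b in stored)
    mask = bytes(0 if b < PAD_THRESHOLD else 255 for b in stored)
    return bitmap, mask
```

For example, a stored row of two image bytes followed by two padding bytes splits into a bitmap with zeros over the padding and a mask with 255 over the padding.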
3. FORM OF SCREEN DISPLAY
Turning now to Figures 4, 5 and 6, it is necessary that the witness be given a clear view of the image displayed on the image screen 5 at all times. It is, however, also necessary that the operator 9 have some means of control of the system in order to enable the images displayed on the screen to be changed where necessary. These conflicting criteria can be met in a number of ways.
Referring firstly to Figure 4, it is possible for two separate screens 41, 43 to be provided. The first screen 41 is designed to display the image of the face currently under consideration by the witness, the separate screen 43 containing the menus and other control features for consideration by the operator 9. The particular system being described uses the WINDOWS software produced by Microsoft, which enables the operator to select the appropriate operations from the control screen by means of a mouse. Such a two screen system suffers the disadvantage, however, that it may be appropriate to display some information, for example lists of options, to the witness. The use of a two screen system necessitates the witness's attention being diverted from the display of the image, as well as being more difficult for the operator.
Figures 5 and 6 illustrate a combined image and control screen which is designed to give both the witness 13 and the operator 9 a clear view of the particular image being displayed on the screen at any time, and to give the operator 9 a clear view of the controls. The screen display illustrated in Figure 5 is designed for use by a right-handed operator whilst the screen display illustrated in Figure 6 is designed for use by a left-handed operator. The handedness of the display on the screen is controlled by a software instruction entered by the operator. The particular screen configurations illustrated in Figures 5 and 6 have the advantage of avoiding instructions appearing in the region of the image being displayed to the witness, as this is known to cause confusion. The controls are designed to be displayed as compact "button palettes" which are designed for ease of use of the system by a relatively unskilled operator.
4. SELECTION OF FEATURES FOR DISPLAY
It is very important to provide a witness with as good a likeness of the face of a suspect as possible as soon as possible, as a witness will tend to remember less of the memorised face with each additional incorrect feature with which they are presented. The system therefore starts from a type likeness, that is an example of the generic face group to which the suspect belongs based on an initial statement from the witness. All the features are thus coded from the verbal descriptions from the witness, with the code for each feature being placed in index files in the storage device 1.
The witness 13 is initially provided with a display on the screen 5 of a world map as an aid to choosing a data base corresponding to one of a number of different ethnic groups, for example:
Afro-Caribbean
Caucasian
Hispanic
Arabic
Oriental
Asian
Aboriginal
A choice will also be made at this point between male and female, male and female data bases being provided for each of the above ethnic groups. As each feature of the face is described by the witness, all part images in the storage device 1 will be drawn from the nominated database unless otherwise indicated. The division of the stored data into different databases is necessary to increase the speed of addressing the storage areas. It is however a feature of the system that if necessary features from a non-nominated database may be called up, for example a Caucasian subject may have a "dreadlocks" type hairstyle normally found only in Afro-Caribbean racial groups.
The choice of the features displayed on the image screen 5 is determined by so called "descriptors" taken from the information provided by the witness. The operator enters an initial statement on behalf of the witness 13 and the relevant indexes for each feature are searched for feature images which match the descriptors taken from the statement given by the witness. The face shape displayed on the image screen 5 will thus be chosen to be one of the various possible shapes, for example round, oval, square etc. If the first face shape displayed to the witness is not correct due, for example, to an inexact verbal description by the witness, subsequent facial shapes may be chosen and displayed.
It is a feature of the system that individual members of naturally occurring pairs of features, i.e.:
Right Eye & Left Eye
Right Ear & Left Ear
Right Brow & Left Brow
may have different descriptors, i.e. be derived from different source faces. An initial tan value for the face may be selected by the operator acting under the witness's instructions, such that the image initially displayed to the witness has the correct tone. This tan value is stored in a static memory location, and remains constant throughout the construction of the image. Whenever further features are called from the storage device 1 into the local memory in the CPU 3, the tan value is applied so that the loading transformation as described above is amended to:
Feature image byte of value n → Bitmap byte of value n*2 + tan
Checks are made by the software such that:
if n*2+tan exceeds 255 then the byte value of the Bitmap is set at 255;
if n*2+tan is less than 0 then the byte value of the Bitmap is set at 0.
In other words an image byte cannot be whiter than white (255) or darker than black (0).
It is found that a witness will respond better to an image of a complete face rather than a catalogue of parts of the face. This is because a complete face places each individual feature in a natural context. This can be seen by considering the example of a deep set eye. If the eye is shown in isolation on a white background, the eye is out of context and information is lost about just how deep set the eye will appear when set on a face shape. By contrast, where the eye is shown as part of a complete face, the eye will be seen in context. Where the eye is displayed in its natural background, the eye is seen to sink into the face shape even if the face shape is not correct. Thus the witness can make a judgement about how deep set the eye will be in the finished likeness.
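The clamped tan transformation could be sketched as follows. This is an illustrative Python fragment; the treatment of padding bytes here (mapping to 0, as in the plain loading transformation) is an assumption, as the patent describes only the clamping of image bytes.

```python
def load_with_tan(stored, tan):
    """Apply the amended loading transformation n*2 + tan with clamping.

    Image bytes (0-127) are doubled, shifted by the constant tan value,
    and clamped to 0-255: a byte cannot be whiter than white (255) or
    darker than black (0). Padding bytes (128-255) are assumed to map
    to 0 in the bitmap, as in the plain loading transformation.
    """
    out = bytearray()
    for n in stored:
        if n >= 128:                         # padding byte
            out.append(0)
        else:
            v = n * 2 + tan
            out.append(max(0, min(255, v)))  # clamp to the grey range
    return bytes(out)
```

For example, with tan = 10 a stored image byte of 127 would yield 264 unclamped, so it is set to 255.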
Similar considerations apply to other features, and hence it is important that a whole face is always displayed on the image screen to the witness 13 rather than a series of images of disjointed facial features. Thus if the witness 13 fails to make an initial statement, such that the operator is not able to enter descriptors for each facial feature, the system must complete the face to be displayed with default features. Rather than supply a single invariable default feature for each unspecified feature, the system instead makes a default search using descriptors such as average, medium, straight (in the case of a mouth), oval (in the case of a face) and so on. This gives a complete selection of unobtrusive features to enable a facial image to be displayed so as to provide a prompt for further discussion between the witness 13 and the operator 9.
When features which the witness remembers strongly have been located from the storage device and built into the displayed image, an attempt can be made to replace those facial features which were found in a default search with more accurate features. It is found that when strongly remembered features are correctly displayed, this strengthens the ability of the witness to remember the remaining features, and thus increases the likelihood of further correct recollection by the witness.
The system is designed to compensate for the witness being slightly inaccurate in his initial statement, or the witness describing the remembered features of the suspect's face in different terms from those understood by the operator. Thus for example, the witness may say:
"He had short dark hair."
When the image appears on the image screen 5, it will be realised that the correct descriptor for what the witness can remember of the suspect is:
"He had very short dark hair."
In normal use of the system, the operator will assume initially that the witness is correct, and thus the first features to be displayed on the image screen 5 will be those which conform exactly to the description given by the witness. The system is, however, designed to conduct an intelligent search to compensate for the witness being slightly inaccurate, so as to enable the system also, where required, to display those features which are just slightly different to those described by the witness, followed by those features which are slightly similar to those described by the witness and finally those features which are definitely different. This may be achieved as follows: The descriptors for the system may be divided into two types, which may be designated as "independent" and "scalar". A scalar descriptor is a descriptor which exhibits a linear range of values from its first possible value to its last possible value. For example, the descriptor "hair length" may be seen to be a scalar descriptor as it may have the following values:
Very Short
Short
Collar Length
Long
Very Long
By contrast the descriptor for face shape is an independent descriptor as it may have values which are unconnected, for example:
Oval
Round
Square
Triangular
Angular
Thus, face shapes lack an obvious trend or gradation between each value and the descriptions for face shapes are therefore independent.
The descriptor to either side of that put forward by the witness for a scalar descriptor may be said to be a close match. On the other hand, the descriptor to either side of that put forward for an independent descriptor can only be seen as a complete mismatch. Thus, the system is designed to treat independent and scalar descriptors in a different manner. The system is also designed to recognise some descriptors as being more important than others. In order to achieve this, the storage device 1 is loaded with data relating to each of all the possible descriptors. Each descriptor is allocated a designation of scalar or independent, and weak or strong.
The system includes a score table for allocating a weighting factor to each descriptor. An example of a possible score table is as follows:
Strong Scalar Exact       +3 points
Strong Scalar One-out     +2 points
Strong Scalar Miss        -3 points
Weak Scalar Exact         +2 points
Weak Scalar One-out       +1 point
Weak Scalar Miss          -2 points
Strong Indep. Exact       +3 points
Strong Indep. Miss        -3 points
Weak Indep. Exact         +2 points
Weak Indep. Miss          -2 points
Descriptor unused          0 points
Thus, when the system searches for data corresponding to a set of features designated by the witness by means of descriptors, more weight will be put on some features than others. Furthermore, the system does not require the operator 9 to enter every possible descriptor, as a witness may remember some aspects of a suspect's face better than others, for example:
"His hair was brown but I cannot remember its length" In the example of the above score table, such a hair length descriptor will be scored with 0 points .
When the witness views an image of a feature which the witness considers is particularly close to his memory of the suspect, the operator may interrogate the system as to the exact descriptor of that feature as stored in the index file. The operator can then feed this new descriptor back into the system, where the intelligent search mechanism of the system will ensure that the images of the features most similar to that identified by the witness are brought forward for consideration.
Where a witness has a particularly poor recall, it is possible to browse through the stored images at random using the system without providing an initial descriptor. Where no descriptors are provided, the system will undertake a default search to provide an image of a face including unobtrusive features as described hereinbefore. This initial default face can be progressively amended by means of the feedback mechanism as described above, thus resulting in similar features being sequentially presented to the witness until the correct likeness is achieved.
During the image forming process, it is possible that the witness will become fatigued having seen too many features, the witness then having trouble in remembering the face of the suspect. The system is designed such that, using the controls displayed on the image screen 5, the operator 9 is able to clear the image from the screen. This then allows the operator 9 and witness 13 to discuss the witness's memory of the suspect without influence by the image displayed on the screen. The operator 9 is then able to reveal the latest image on the display screen 5 when the witness is ready to proceed.
There will be some instances in which the types of images stored in the store 1 may conflict. For example, some images of beards may include a moustache. The system will detect when such a feature is present and prevent an additional moustache from appearing. Likewise, long hair images may include ears where the ears are slotted through the hair, or some eyes may include brows because some pairs of eyes and brows are naturally matched to give a specific "brooding" effect which would be lost if the eyes and brows were separated. The system will prevent an additional pair of ears or brows from appearing. It is, however, possible for the operator to override this facility where part of a compound feature is of particular interest. The operator will then be able to compensate for some degree of image duplication.
5. PASTING THE IMAGES
The technique by which various features of the face stored in the storage device 1 are combined to provide an image of a face on the image screen 5 is known as pasting.
Turning now to Figure 7, the construction of each composite image is based on a face shape, this face shape providing the setting in which the other features will lie. Figure 7 shows a typical example of the face shapes stored in the databases DATABASE1 ... DATABASEn, from which it can be seen that the central face area is featureless and heavily roughened. The central featureless core of the face shape provides a disruptive background. This is important as it has been proven by psychologists, for example Dr John Shepherd of the University of Aberdeen, that spurious lines interfere with recognition. The central featureless core allows images of the remaining details of the face to be pasted into the face shape without visible edges, thus giving a naturalistic appearance. The roughened area in the central face region will effectively camouflage the edges of the other facial features which are pasted into this area, without the need for complicated smoothing procedures. The features to be placed on the face shape are also designed to help to reduce join lines. The silhouette of each part image is not random, but is designed such that all edges occur along contours of equal brightness. Thus, when placed upon a disruptive background, the join lines will appear as the edge of the natural shadow surrounding the feature. Each selected feature is added to the face shape displayed on the image screen 5 in such a way that the padding in the byte array of data for the feature is not transferred. First the mask for the feature is pasted onto the face shape, combining byte for byte with the face shape using a mathematical operation known as an AND operation. This operation leaves a silhouette of the feature superimposed on the face shape. The Bitmap, i.e. the grey scale distribution for the image, is then applied byte by byte using a mathematical operation known as an OR operation.
This places the Bitmap in the silhouette of the feature leaving the rest of the face untouched and completing the paste of the feature.
The AND and OR operations will now be described by way of particular examples:

AND OPERATION
                 Overlay feature    Padding
Face detail      11101110           11101110
Mask             00000000 AND       11111111 AND
Result           00000000           11101110

OR OPERATION
                 Overlay feature    Padding
Result of AND
operation        00000000           11101110
Bitmap           10011001 OR        00000000 OR
Result           10011001           11101110
It will be seen that the result of the combined AND and OR operations is to enable the image of the selected feature to be pasted over the existing image on the display screen 5, whilst the padding for the feature is submerged by the rest of the existing image.
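The combined paste can be sketched as follows. This is an illustrative Python fragment reproducing the worked example above: a byte-for-byte AND of the mask with the face, followed by a byte-for-byte OR of the bitmap.

```python
def paste_feature(face, mask, bitmap):
    """Paste a part image onto the face byte for byte.

    The mask is first ANDed with the face, clearing a silhouette where
    the feature lies (mask bytes 0) and leaving the padding regions of
    the face untouched (mask bytes 255). The bitmap is then ORed in,
    filling the silhouette with the feature's grey values while the
    rest of the face is unchanged (bitmap bytes 0 over padding).
    """
    after_and = bytes(f & m for f, m in zip(face, mask))
    return bytes(a | b for a, b in zip(after_and, bitmap))
```

Over the feature, the face detail 11101110 of the worked example is replaced by the bitmap value 10011001; over the padding, the face detail 11101110 survives unchanged.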
Features are added successively using the AND and OR operations until the data for a completed face is produced. The data is input into the display buffer memory 7, which has an image buffer of bytes corresponding exactly to the pixels of the image screen 5, thus enabling the specified image to be displayed on the image screen 5.
6. EDIT MODE
It is a feature of the system that at any point within the pasting procedure it is possible to edit the image displayed on the image screen 5. There are three primary edit functions which can be made to any individual feature of the displayed face, these are:
Alter Brightness
Alter Size
Alter Position (including "Together" and "Apart")
Additionally, the system has a facility for normal and boost mode editing. The boost mode allows editing of any particular feature in large increments. This speeds up the procedure of editing inasmuch as the operator may make coarse modifications to a feature using the boost mode, followed by fine adjustment of the feature using the normal mode.
Where required the system allows a multiple selection of features to be edited together, for example: "Move the nose, mouth and both eyes up."
The system has the ability to perform the necessary editing of the displayed image on all these features, or any chosen group of features simultaneously.
It is also a particular feature of the system that it is possible, where required, for the edit function to operate separately on individual members of naturally occurring pairs of features for example, right eye and left eye. In particular where the descriptors for a pair of features have been derived from different source faces, the system will remember this and automatically perform separate edit functions on the members of the pair.
This contrasts with the alternative situation where the pairs of features may be edited so that they remain a naturally matching pair.
When an edit has been made, each current value of the data displayed on the screen of the system is stored in the system local memory. These stored values may be protected during the loading into the local memory of a new feature from the storage device 1. Alternatively, the stored value may be allowed to be modified to a value more natural to the incoming image, for example:
"The suspect has unnaturally high eyebrows."
In this particular example, the move edit function is performed on the currently displayed set of data so as to raise the eyebrow position, the value of the data then being locked. When the next alternative eyebrow shape is called, the eyebrows displayed on the screen take their natural position on the displayed face together with the locked data increment. The eyebrows thus appear higher than normal. This feature is very important when features from different databases of naturally different tone are mixed; for example, an Afro-Caribbean nose will be darker than a Caucasian nose. If, for example, it is required to produce a simulated image of a mixed-race subject, it is distracting for the witness to have to load each darker feature in turn and then lighten the feature using the edit mode. This particular locking technique enables the first feature chosen to be lightened, and then the edit brightness increment data to be locked into the system, thus enabling subsequent features to be lightened automatically by the required amount.
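The locking technique might be sketched as follows. This is an illustrative Python fragment; the class, its names and the clamping behaviour are assumptions based on the description above, taking a locked brightness increment as the example.

```python
class LockedIncrement:
    """Hypothetical sketch of the locking technique: once an edit
    increment (here a brightness change) is locked, it is applied
    automatically to each subsequently loaded feature."""

    def __init__(self):
        self.increment = 0
        self.locked = False

    def lock(self, increment):
        """Lock the current edit increment into the system."""
        self.increment = increment
        self.locked = True

    def apply(self, feature_bytes):
        """Apply the locked increment to an incoming feature's bitmap
        bytes, clamped to the 0-255 grey range; a pass-through if no
        increment has been locked."""
        if not self.locked:
            return bytes(feature_bytes)
        return bytes(max(0, min(255, b + self.increment)) for b in feature_bytes)
```

Once an increment of, say, +40 has been locked, every subsequently loaded feature is lightened by the same amount without further edits by the operator.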
7. INTERACTIVE ART SYSTEM
The system allows for an interactive art system to perform freehand alteration of the displayed image. The interactive art system may be used at any time during the pasting process. The art system will alter the feature currently held in the local memory of the CPU 3, the original feature stored in the storage device 1 being unaltered. The alteration produced by the art system will only be stored until the local memory is flushed, for example when a further possible alternative feature is loaded. However, if required, the feature altered by the art work system may be permanently saved in the storage device 1.
An example of the operation of the freehand art system is as follows:
1. The correct nose shape has been displayed using the selection processes described above.
2. According to the witness the nose should have an additional wart.
3. The nose is displayed and the painting mode is selected.
4. The wart is painted onto the displayed nose.
5. The nose plus wart is input into the local memory and mixed into the stored likeness.
8. OVERLAYS
The local CPU memory includes an amount of storage for storing blank feature areas to which images known as "Overlays" may be assigned. The images for the overlays may be derived from a predefined library of parts stored in the storage device 1. The library may include such features as hats, glasses, sunglasses, moles, scars, lines and other items which can similarly be added to a face. The contents of the library may be displayed as a menu both on the image screen 5 and in an indexed pictorial catalogue which may be presented to the witness. Once an image has been assigned to an overlay, the pasting of the overlay onto the face displayed on the image screen 5 proceeds as for any other feature. The data relating to the overlay may, however, include information relating to the position of the overlay on the composite image, e.g. at the top of the image in the case of a hat.
Alternatively the operator is able to compose the features on the overlay using information from the current composite picture displayed on the image screen 5, using the interactive art system as described above. For example, the witness may realise that the image of a suspect requires a question mark shaped scar. The current face shown on the display screen 5 is transferred to the interactive art system. An appropriate scar is painted onto the composite image. The image of the scar is then cut out of the face and assigned to an overlay. The image of the scar may then be moved, altered in tone or colour, or rescaled just like any other component.
It will be appreciated that it is preferable for the whole face to be transferred to the interactive art system as this will then provide an appropriate context for the scar. If the scar were drawn as an isolated feature, or upon a local sample of the face, for example the cheek, the overall effect of the scar on the whole face would be difficult to judge.
As a further alternative, the operator can add images to an overlay from other sources. In one example where a local street gang design their own hat motif, the operator may use a scanner, indicated as 17 on Figure 1, or other appropriate hardware such as a video camera to turn an example of the hat motif into an image which is then stored in the system, either in the local CPU memory or in the storage device 1. The image may either be displayed separately to the witness or may be assigned to an overlay and thus may be added to the composite image, moved, altered in tone or colour, and/or rescaled.
Generally the overlays will be laid over the top of the standard facial components of an image, this procedure fitting in with the general predefined composition order of the system which ensures, for example, that eyes always appear beneath brows and underneath the hair. The operator may, however, change the order of construction of the composite image where this appears necessary. An example of this may be where an unidentified murder victim has been found in which the face is largely intact but the eyes have decomposed. The face of the murder victim may be scanned into the system and the image stored. The image may then be assigned to an overlay, thus appearing as the initial composite image on the display screen 5. The eyes of the facial image may then be moved down the order in which the face is normally composed, such that they are drawn after the overlay. New eyes can then be selected from the various stored images of eyes in the store 1, thus creating a composite image which is acceptable for display in newspapers, posters or television broadcasts.
It will be appreciated that once data relating to a particular overlay has been introduced into the system, it may be stored and used in subsequent composite images as with any other feature.
9. OUTPUT OF THE SYSTEM
When the witness is satisfied that the image of the face displayed on the display screen bears a close resemblance to the suspect, the operator causes the system to export a byte image of the displayed face to a magnetic storage so that the image can be printed or processed, for example for inclusion in posters displaying the suspect's face to be published at a later date. The image itself contains no information regarding the contents of the image or how the image was formulated. The system does, however, call a procedure for interpreting the image in terms of descriptors. These descriptors are then saved to the storage device 1. This description in terms of descriptors may then be used to search through a further data base containing other identification data, for example a storage system including images of previous offenders.
The system also includes a monitor calibration procedure which causes the system to draw a large circle in the memory of the display device 5. This appears on the screen of the display device 5, the monitor then being adjusted until the image is of a circle. If this procedure is not taken in order to ensure the correct setting of the monitor, the final print will not match the image on the screen. Thus, for example, where the circle in the memory of the display device appears as an oval on a poorly adjusted display device monitor, the circle will still be printed as a circle by the printer 15. Face images would thus be printed either thinner or fatter than that determined by the witness on viewing the image on the image screen 5.
10. ALTERNATIVE USES OF THE SYSTEM
It will be appreciated that while the system described hereinbefore has particular benefits in the production of a face image using information from a witness based on his memory of a suspect, the system will have other uses. Where an input image is put into the system, for example from a digitization of a photograph of a child who has been missing for some time, the system may be used to produce an aged image of the face of the child. Furthermore, the system may be used not only to produce a likeness of a suspect, but to crystallise the witness's initial loose verbal description into a structured description using a recognized facial coding scheme.
It will also be appreciated that whilst a system in accordance with the invention finds particular application in the construction of human faces, it also will find application in the construction of other composite images.

Claims

CLAIMS:
1. An image display system including an information store for storing blocks of data corresponding to different images of features of a number of different composite images, a computing means for selecting blocks of data from the store dependent on input commands to the system, and combining means for combining blocks of data so as to produce data corresponding to a chosen composite image formed from a selection of said images of features, wherein the computing means locates categories of images of features by means of a search in which weighting factors are applied to the input commands used to define the categories of images.
2. A system according to claim 1 in which the computing means selects subsequent blocks of data dependent on earlier input commands so as to select blocks of data corresponding to images of features which have a chosen relationship.
3. A system according to either of the preceding claims including means for sampling data relating to features of the currently displayed composite image, and means for causing the computing means to use the sampled data to influence the subsequent selection of blocks of data.
4. A system according to any one of the preceding claims in which the selected block of data may be substituted by the block of data having the next closest score according to the weighting factors.
5. A system according to any one of the preceding claims including means for forming additional images of features not stored in the information store, means for feeding data representative of the additional images into the display system, and means for producing data corresponding to said additional images so that the additional images may be incorporated in said chosen composite image.
6. An image display system including an information store for storing blocks of data corresponding to different images of features of a number of different composite images, a computing means for selecting blocks of data from the store dependent on input commands to the system, and combining means for combining blocks of data so as to produce data corresponding to a chosen composite image formed from a selection of said images of features wherein the system includes means for forming additional images of features not stored in the information store, means for feeding data representative of the additional images into the display system, and means for producing data corresponding to said additional images so that the additional images may be incorporated in said chosen composite image.
7. A system according to claim 5 or claim 6, including means for storing blocks of data relating to the additional images such that said blocks of data may be selected by the computing means for use in subsequent composite images.
8. A system according to claim 5, 6 or 7 in which said data representative of the additional images may include data representative of the position of the additional image on the chosen composite image.
9. A system according to any one of the preceding claims including means for modifying the images of features prior to the production of data relating to the chosen composite image.
10. A system according to claim 9 in which the modifying means is able to modify the images using either a coarse adjustment mode for major modifications, or a fine adjustment mode for minor modifications.
11. An image display system including an information store for storing blocks of data corresponding to different images of features of a number of different composite images, a computing means for selecting blocks of data from the store dependent on input commands to the system, and combining means for combining blocks of data so as to produce data corresponding to a chosen composite image formed from a selection of said images of features and including means for modifying the images of features prior to the production of data relating to the chosen composite image, wherein the modifying means is able to modify the images using either a coarse adjustment mode for major modifications, or a fine adjustment mode for minor modifications.
12. A system according to claim 9, 10 or 11 including means for storing data relating to the modification of a feature, and means for applying a corresponding modification to corresponding features which are subsequently selected.
13. A system according to claim 9, 10, 11 or 12 in which the modification means may modify more than one feature simultaneously.
14. An image display system including an information store for storing blocks of data corresponding to different images of features of a number of different composite images, a computing means for selecting blocks of data from the store dependent on input commands to the system, and combining means for combining blocks of data so as to produce data corresponding to a chosen composite image formed from a selection of said images of features wherein the system contains means for recognising the omission of one or more input commands necessary to complete a composite image, and includes means for searching through the store for blocks of data using a selection of predetermined input commands so as to provide a selection of composite images.
15. An image display system including an information store for storing blocks of data corresponding to different images of features of a number of different composite images, a computing means for selecting blocks of data from the store dependent on input commands to the system, and combining means for combining blocks of data so as to produce data corresponding to a chosen composite image formed from a selection of said images of features in which the computer includes means for searching through the store for blocks of data so as to produce data corresponding to a composite image which has a predetermined relationship to an earlier composite image.
16. A system according to any one of the preceding claims wherein the information store includes a plurality of discrete data bases corresponding to composite images of different types, and the computer is able to select blocks of data from different ones of the data bases during the formation of a single composite image.
17. A system according to any one of the preceding claims in which the features are features of a human face, and the composite images are images of human faces.
18. A system according to any one of the preceding claims including a visual display unit for displaying both information relating to the input commands, and the composite image on a single screen, wherein the information and the composite image are kept separate so as to allow a clear view of the composite image.
19. A system according to claim 18 including means for changing the relative positions of the information and the composite image on the screen.
20. A system according to claim 18 or 19 in which the display of the composite image can be blanked by means of an input command to the system.
21. A system according to any one of the preceding claims including means for defining a predetermined order for combining the blocks of data corresponding to the selected features, and means for varying the predetermined order when required.
22. A system according to any one of the preceding claims including means for detecting when blocks of data causing duplication of a feature of a composite image have been selected, means for preventing data from one of the blocks of data relating to said duplicated feature from producing an image of the duplicated feature in a chosen composite image, and means for overriding said means for preventing when required.
23. A system according to any one of the preceding claims including camouflaging means for camouflaging the edges of the images of features in the chosen composite image.
24. A system according to claim 23 in which the images of features are pasted onto a featureless core having a roughened area effective to camouflage said edges.
25. A system according to claim 23 or 24 in which the edges of the images of features all occur along contours of equal brightness.
26. An image display system substantially as hereinbefore described with reference to the accompanying drawings.
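The weighted search of claim 1, and the next-closest substitution of claim 4, can be illustrated by the following sketch. All names, weights and feature data here are invented for illustration; the claims do not prescribe any particular scoring formula.

```python
# Hypothetical sketch of claims 1 and 4: weighting factors are applied to
# the input commands that define a feature category, candidate blocks are
# scored, and the block with the next closest score can substitute for
# the selected one.

def weighted_score(commands, weights, candidate):
    """Sum the weights of the input commands the candidate satisfies."""
    return sum(weights[key] for key, value in commands.items()
               if candidate.get(key) == value)

def rank_blocks(commands, weights, blocks):
    """Return block ids ordered by weighted score, best first."""
    return sorted(blocks,
                  key=lambda bid: weighted_score(commands, weights, blocks[bid]),
                  reverse=True)

blocks = {
    "eyes_01": {"shape": "narrow", "colour": "blue"},
    "eyes_02": {"shape": "narrow", "colour": "brown"},
}
commands = {"shape": "narrow", "colour": "brown"}
weights = {"shape": 2.0, "colour": 1.0}

ranked = rank_blocks(commands, weights, blocks)
print(ranked[0])  # selected block (highest weighted score)
print(ranked[1])  # substitute with the next closest score (claim 4)
```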
PCT/GB1993/002052 1992-10-07 1993-10-01 Image display system WO1994008311A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU48323/93A AU4832393A (en) 1992-10-07 1993-10-01 Image display system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB929221040A GB9221040D0 (en) 1992-10-07 1992-10-07 Image display system
GB9221040.0 1992-10-07
GB939316726A GB9316726D0 (en) 1992-10-07 1993-08-12 Image display system
GB9316726.0 1993-08-12

Publications (1)

Publication Number Publication Date
WO1994008311A1 true WO1994008311A1 (en) 1994-04-14

Family

ID=26301742

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1993/002052 WO1994008311A1 (en) 1992-10-07 1993-10-01 Image display system

Country Status (2)

Country Link
AU (1) AU4832393A (en)
WO (1) WO1994008311A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0275124A2 (en) * 1987-01-16 1988-07-20 Sharp Kabushiki Kaisha Database system for image composition
US5057019A (en) * 1988-12-23 1991-10-15 Sirchie Finger Print Laboratories Computerized facial identification system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Caldwell et al.: "Tracking a Criminal Suspect through 'Face-Space' with a Genetic Algorithm", Genetic Algorithms, 13 July 1991, USA, pages 416-421, XP000260130 *
Chang Seok Choi et al.: "A System of Analyzing and Synthesizing Facial Images", 1991 IEEE International Symposium on Circuits and Systems, 11 June 1991, Singapore, pages 2665-2668, XP000298983 *
Gilleson et al.: "A Heuristic Strategy for Developing Human Facial Images on a CRT", Pattern Recognition, vol. 7, 1975, UK, pages 187-196 *
Lim et al.: "A Face Recognition System Using Fuzzy Logic and Artificial Neural Network", IEEE International Conference on Fuzzy Systems, 8 March 1992, USA, pages 1063-1069, XP000342979 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6226013B1 (en) 1993-05-25 2001-05-01 Casio Computer Co., Ltd. Face image data processing devices
EP0626657A2 (en) * 1993-05-25 1994-11-30 Casio Computer Co., Ltd. Face image data processing devices
US5867171A (en) * 1993-05-25 1999-02-02 Casio Computer Co., Ltd. Face image data processing devices
US5818457A (en) * 1993-05-25 1998-10-06 Casio Computer Co., Ltd. Face image data processing devices
EP0626657A3 (en) * 1993-05-25 1995-12-06 Casio Computer Co Ltd Face image data processing devices.
EP1045342A3 (en) * 1993-05-25 2000-11-08 Casio Computer Co., Ltd. Face image data processing devices
EP1045342A2 (en) * 1993-05-25 2000-10-18 Casio Computer Co., Ltd. Face image data processing devices
EP0669600A3 (en) * 1994-02-25 1996-01-31 Casio Computer Co Ltd Devices for creating a target image by combining any part images.
US5600767A (en) * 1994-02-25 1997-02-04 Casio Computer Co., Ltd. Image creation device
EP0669600A2 (en) * 1994-02-25 1995-08-30 Casio Computer Co., Ltd. Devices for creating a target image by combining any part images
EP0814432A1 (en) * 1996-06-20 1997-12-29 Brother Kogyo Kabushiki Kaisha Composite picture editing device
US5831590A (en) * 1996-06-20 1998-11-03 Brother Kogyo Kabushiki Kaisha Composite picture editing device
US9230353B2 (en) 1998-04-29 2016-01-05 Recognicorp, Llc Method and apparatus for encoding/decoding image data
US6731302B1 (en) 1998-04-29 2004-05-04 Iq Biometrix, Inc. Method and apparatus for creating facial images
WO1999056248A1 (en) * 1998-04-29 1999-11-04 Inter Quest Inc. Method and apparatus for creating facial images
US7471835B2 (en) 1999-05-28 2008-12-30 Iq Biometrix Corp. Method and apparatus for encoding/decoding image data
FR2879323A1 (en) * 2004-12-13 2006-06-16 Sagem METHOD FOR SEARCHING INFORMATION IN A DATABASE
WO2006064119A1 (en) 2004-12-13 2006-06-22 Sagem Defense Securite Method for data search in a database
US8095522B2 (en) 2004-12-13 2012-01-10 Morpho Method of searching for information in a database

Also Published As

Publication number Publication date
AU4832393A (en) 1994-04-26


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase