WO1991014180A1 - Evaluating carcasses by image analysis and object definition - Google Patents


Info

Publication number
WO1991014180A1
Authority
WO
WIPO (PCT)
Prior art keywords
carcass
image
images
carcass portion
background
Prior art date
Application number
PCT/AU1991/000091
Other languages
French (fr)
Inventor
Alan Benn
Original Assignee
Meat Research Corporation
Priority date
Filing date
Publication date
Application filed by Meat Research Corporation filed Critical Meat Research Corporation
Publication of WO1991014180A1 publication Critical patent/WO1991014180A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • A HUMAN NECESSITIES
    • A22 BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
    • A22B SLAUGHTERING
    • A22B5/00 Accessories for use during or after slaughtering
    • A22B5/0064 Accessories for use during or after slaughtering for classifying or grading carcasses; for measuring back fat
    • A22B5/007 Non-invasive scanning of carcasses, e.g. using image recognition, tomography, X-rays, ultrasound
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00 Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/02 Food
    • G01N33/12 Meat; fish
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30128 Food products

Definitions

  • the present invention relates to a method of image analysis of objects.
  • it relates to a method of object image definition.
  • the present invention also relates to an apparatus for image analysis and object definition.
  • the method and apparatus can be designed to provide analysis of specific features, various characteristics, and descriptive parameters of objects.
  • the present invention relates to the evaluation of animal carcasses.
  • the specification will primarily refer to beef carcasses in describing the invention. It is appreciated that the invention is applicable to all types of animal carcasses slaughtered in abattoirs, for example sheep, goats and pigs.
  • the evaluation of animal carcasses may involve the determination of various qualitative properties and descriptive parameters of the carcasses, which include:
  • composition being mainly the relative proportions of lean meat, fat and bone within a carcass and also the degree of muscularity
  • yield being typically the proportion of saleable meat which can be extracted from a carcass of a given carcass weight
  • bruise score relating to the amount of meat which has had to be removed by cutting from specific sections of a carcass.
  • Carcasses are also subject to a significant range of variation in yield which for beef carcasses say, can typically vary from 55% to 75% of the dressed carcass weight. With retailers willing to pay a fixed price per kilogram of meat from the carcass, the price per kilogram of carcass weight can vary considerably.
  • Yield and quality variability can be due to different factors, for example, genetic differences, breed of animal, age, sex, nutrition.
  • carcasses may often be further reduced to saleable meat cuts in the abattoir, so the yield or specific amount of saleable meat from a particular carcass could in principle be calculated. However, such a system is complicated by the need to evaluate every meat cut produced from that carcass, and is impractical and inconvenient in several respects: for example, it requires the carcasses to be hung or chilled before further cutting and boning, and it is not suitable for some abattoirs. In assigning value to whole carcasses, it has therefore become necessary to attempt to predict the eventual saleable meat yield and quality of meat from the whole carcass or split half carcasses.
  • Butt Profile - a subjective classification of the hindquarter of the carcass as to its concavity or convexity.
  • the AUS-MEAT system is optional and adoption and use by processors varies widely.
  • research has been under way in several countries to define technologies and techniques for making objective carcass measurements.
  • A. Real-Time-Ultrasound Imaging whereby the relevant equipment can provide an image of the internal structure of the carcass on which the various composition properties, that is, meat, fat and bone, can be discriminated.
  • images are taken on particular anatomical sites (between the 12th and 13th ribs) to delineate known internal features. These features can be objectively measured from the image and used to describe carcass composition.
  • Optical Reflectance Probes whereby the relevant devices are used to pierce the carcass and in doing so are able to measure relative amounts of fat and lean meat. Scientific studies can relate these quantities to total carcass composition and enable prediction of this to within a specified accuracy.
  • Image Analysis which is a technique relying on the use of cameras to record an image of the object which is then converted into a computer readable form by image digitisers.
  • the image is divided into a large number of small rectangular regions referred to as "pixels".
  • typically a sufficient number of pixels is used such that, when the pixellated image is displayed on a normal screen or monitor, the individual pixels are not noticeable to the eye, for example, 512 x 512 pixels in the vertical and horizontal directions, giving 262,144 pixels for a complete image frame.
  • the image brightness or colour intensity in each pixel is uniform and is represented in the computer by an integer number. This is typically an 8-bit binary number and hence can represent 256 different intensity levels.
  • computer software can be used to analyse each pixel's location and intensity level to make objective determinations of features in the image in terms of the number of pixels and the values associated therewith. Calibration of geometrical relationships from the scene to the image enables measurement of the object in absolute terms.
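As an illustrative sketch of such a geometric calibration, a scale factor can be derived from a reference object of known width imaged at the same distance as the carcass; all sizes below are invented for illustration:

```python
# Sketch: converting pixel measurements to absolute units via a reference
# object of known size imaged at the carcass plane.
# All sizes here are hypothetical illustration values.

def pixels_per_cm(known_width_cm: float, measured_width_px: float) -> float:
    """Scale factor derived from a reference object of known width."""
    return measured_width_px / known_width_cm

def px_to_cm(length_px: float, scale_px_per_cm: float) -> float:
    """Convert an image measurement in pixels to centimetres."""
    return length_px / scale_px_per_cm

scale = pixels_per_cm(known_width_cm=30.0, measured_width_px=150.0)  # 5 px/cm
width_cm = px_to_cm(410.0, scale)  # 410 px corresponds to 82 cm
```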
  • Such image analysis systems can be used to make objective measurements of carcasses, and also to derive new methods capable of determining carcass values which may have various advantages over the subjective techniques.
  • a colour video camera, electronic image digitising equipment and computer software as the means embodying image analysis techniques
  • the extracted information from the image of a carcass can be used to determine the parameters of carcasses and carcass values.
  • Image analysis can also be used on sections cut through the carcass to expose internal features which can then be objectively measured.
  • West German Patent 27 28 913 describes the use of video image analysis to objectively measure parameters on the cut surface of a split pig carcass. Differentiation of the various components of the carcass is achieved by enclosing the carcass in a dark coloured chamber such that, when illuminated, the chamber surfaces appear dark, fat materials white and meat appears red. Detection of the variously coloured components enables detection and measurement of features, for example, fat layers, fat depths, meat depths, used for classification.
  • US Patent 4745472 describes an animal measuring system for determining characteristics and traits of live animals.
  • the measuring system comprises a special chute apparatus for holding the animal during profile image recording and measurement.
  • the front and top sides of the chute apparatus comprise respective viewing walls made of a grid-work of steel rods or a translucent material having grid-lines located thereon.
  • the back side of the chute apparatus comprises a solid plate or the like which will enable the side profile of the animal to be readily outlined or defined before a video camera.
  • the measuring process requires the identification and marking of critical points of the animal with locators adhesively attached to the animal.
  • US Patent 4226540 describes a method and apparatus for the contact-free determination of features of meat quality such as freshness, colour and ratio of meat and fat. Radiation emanating from the meat object is detected to create definite radiation values which are then analysed or compared to reference values. It is essential for the attainment of definite radiation values free from difficulty that both the meat object as well as also at least the optical part of the apparatus be arranged in a darkened room with non-deflecting walls or neutral surfaces.
  • US Patent 4413279 describes the use of image analysis for objectively measuring the relative fat and meat in a carcass portion.
  • the method involves editing the real image, which is originally composed of an analog sequence of gray values from bright to dark, into a binary black and white image using a discriminating brightness threshold value.
  • the sample is optically scanned line and image-wise against a dark background with a black-white camera to provide a sharply emerging contour which arises at the brightness transition between background and sample.
  • US Patent 4939574 describes a complete beef carcass classification system developed in Denmark and based on video image analysis of the carcass exterior surface and optical probe readings of meat and fat depths.
  • the system is intended to enable objective evaluation of the current EEC/EUROP carcass grading scores, which are based on subjective descriptions and also an accurate quantitative estimation of the carcass yield.
  • the method involves placing a carcass or half of a split carcass in a light-screening chamber in front of a light-emitting, contrasting surface, so that the carcass stands out as a dark silhouette against the contrasting surface.
  • the first step of any system using image analysis for quantitative measurements on an object is that of defining the object, as distinct from any surroundings or other objects contained in the image.
  • in defining the object, it has been known to place the object in front of a dark background or uniquely coloured screen, possibly within an enclosed box to control illumination conditions, and then to record a grey tone image. It can be seen that controlling the specific requirements of the background screen and the illumination conditions requires expensive equipment and bulky image capture chambers. Further, the conditions under which the equipment is used in the sterilised environment of abattoirs must also be controlled in order to integrate into the processing operations of abattoirs.
  • the present invention provides an improved method and apparatus for evaluating carcasses.
  • the present invention provides an improved process of image analysis and process of object image definition.
  • the present invention as provided can be seen to have advantages over the known techniques and systems.
Summary of the Invention
  • In accordance with one aspect of the present invention, there is provided a method of evaluating a carcass or carcass portion, characterised by a process of object image definition, said object image definition process comprising the steps of: (a) recording a first image of a background taken from a viewing point;
  • the location and/or profile of the carcass or carcass portion positioned in the background can be resolved using such process of object image definition. Further, the qualitative properties of the carcass or carcass portion can be determined using the further step of analysing the defined image.
  • the differentiation may include the subtraction of the first or second image from the other, preferably the first image from the second image, to provide a difference image, for resolving the location and/or profile of the carcass or carcass portion positioned in the background.
  • the method of evaluating a carcass or carcass portion may further comprise the steps of illuminating the background and the carcass or carcass portion when the first and second images are being taken to be recorded.
  • as to the illumination, it is preferable to arrange it so that minimal shadows appear in the recorded images, especially in the carcass or carcass portion image, since shadows may make it difficult to differentiate the carcass or carcass portion image from the background image.
  • an illumination source or sources may be positioned adjacent the viewing point from which the first and second images are taken, that is close to the image or video camera.
  • any shadow cast by the carcass or carcass portion is preferably either not seen by the camera or simply cast behind the carcass or carcass portion as seen from the viewing point.
  • the method of the present invention is specifically suitable for full colour images having all three colour components of red, green and blue and may also be conveniently adapted for monochrome images.
  • the multi-colour images may appropriately be broken down into their respective colour component images representing red, green and/or blue.
  • the first and second images may be recorded repeatedly but in different colour components. Accordingly, the differentiation may be performed on the first and second images of each colour component respectively and the respective difference images of each colour component can advantageously be recombined to provide an absolute difference image, for resolving the location and/or profile of the carcass or carcass portion positioned in the background.
  • the absolute difference image of the multi-colour recording or the different colour components recording is more accurate than that of the monochrome recording because there are two or three component difference images rather than one.
  • the illumination sources or the other lighting arrangements can provide a substantially uniform illumination to at least the carcass or carcass portion. Further, it is also preferable that the carcass or carcass portion can be illuminated to result in substantially the natural colours thereof being recorded in the second image. Tungsten lamps can be used to provide such a result.
  • the method of the present invention is specifically applicable to whole carcasses, split half carcasses or cut portions of carcasses. Further, for the purpose of explanation, the present invention will be described relating to split half carcasses and analysis of external features thereof. Nevertheless, it is appreciated that the present invention may be adapted to cut portions of carcass and analysis of exposed internal features thereof.
  • the method of evaluating a carcass or carcass portion may also comprise the further step of analysing the difference and/or defined image to determine qualitative properties of the carcass or carcass portion.
  • the analysing step may be systematically performed by a programmed computer with appropriate software.
  • the further analysis may include tracing the profile of the carcass or carcass portion from the difference and/or defined image and providing a sequence of values representing the profile.
  • the sequence of values can be the respective locations or co-ordinates of adjoining pixels in the direction of tracing, or the respective directions, conventionally in degrees of rotation clockwise, between adjoining pixels in the direction of tracing.
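The chain-code description above can be sketched as follows; the image convention assumed here (x growing rightward, y growing downward, directions measured clockwise from the upward vertical) is an assumption for illustration:

```python
# Sketch: the direction, in degrees of rotation clockwise, between
# adjoining pixels along a traced profile. On a pixel grid each step
# direction is necessarily an integer multiple of 45 degrees.

import math

def step_direction(p, q):
    """Direction from pixel p to adjoining pixel q, clockwise from vertical."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    # atan2(dx, -dy) measures the clockwise angle from the upward direction
    # (y is assumed to grow downward, as is usual for image co-ordinates)
    return round(math.degrees(math.atan2(dx, -dy)) % 360)

# A short profile fragment: one step right, one down-right, one down
profile = [(10, 10), (11, 10), (12, 11), (12, 12)]
directions = [step_direction(a, b) for a, b in zip(profile, profile[1:])]
# directions -> [90, 135, 180], each a multiple of 45 degrees
```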
  • the further analysis may include locating anatomical points of the carcass or carcass portion from the traced profile or sequence of values. This would subsequently provide for the cutting of the carcass or carcass portion and measuring the thickness of subcutaneous fat at a given anatomical point, either manually, or by mechanically driven equipment or automated probe.
  • the further analysis may include measuring the size of the carcass or carcass portion in terms of dimensional measurements of different sections thereof.
  • the dimensional measurements may preferably be taken between preselected anatomical points.
  • the further analysis may include identifying and delineating selected sections of the carcass or carcass portion. These sections may be selected for their qualities, compositions or saleable values. Preferably, bruise detecting and scoring regions are also delineated.
  • the further analysis may include measuring the intensity of each colour component for each pixel in the defined image of the carcass or carcass portion and performing an algebraic operation on the measured intensity of each colour component to determine the property represented by said each pixel, for determining the composition properties of the carcass or carcass portion.
  • the algebraic operation may preferably involve the relevant proportions of each colour component.
  • the further analysis may include counting the number of pixels representing a selected property and determining the proportion of said selected property, in the carcass or carcass portion or in the selected sections thereof.
  • the selected property may be fat, bruise, lean meat or membrane.
  • the present invention is compatible with other known methods of evaluating carcasses and known processes of image analysis of objects.
  • the present invention can be modified and varied and improved to incorporate partly or wholly such known methods and processes, for example, including the subjective techniques for taking measurements physically and the use of image analysis technologies and equipment, including lighting means, image or video cameras, image digitisers, data storage or recording means, data processors and computer systems.
  • the present invention also provides an apparatus for evaluating a carcass or carcass portion, characterised by comprising:
  • the transport system may form part of the processing line equipment of the abattoir and the image recording system and data processing system are standard image analysis equipment.
  • the data processing system may include computer means programmed to analyse the difference and/or defined image to determine qualitative properties of the carcass or carcass portion.
  • different software can be developed for that purpose.
  • specific lighting means may be used to illuminate the background and the carcass or carcass portion when the first and second images are being taken to be recorded.
  • Figures 2A and 2B show the plan and elevation views of a practical arrangement of relevant parts of an apparatus of the present invention
  • Figure 3 shows a carcass profile with anatomical points and selected dimensional measurements
  • Figure 4 shows a carcass profile with bruise detection and scoring regions.
  • a process of image analysis in particular illustrating a process of object image definition
  • an artificial background and a triangular object are used in place respectively of a background in an abattoir and a carcass.
  • the process of object image definition is to be performed to differentiate an image of the triangular object from an image of the artificial background having the triangular object therein.
  • the object image definition process may be a preliminary step in the image analysis process.
  • Figure 1A shows an image of the artificial background which has been recorded.
  • the background image is conveniently digitised and stored in a computer.
  • a full colour image can be broken down into three images, one for each colour component, red, green and blue, which are stored to represent the full colour image.
  • suitable image recording means can be used to record the background image repeatedly in each colour component so as to provide three separate colour component images.
  • each object is assumed to be of a uniform colour.
  • the intensity in each of the three colour component images is shown on each object.
  • the value of intensity can range from 0 to 255, that is, 256 intensity levels.
  • Figure 1B shows an image of the triangular object positioned in the artificial background which also has been recorded by the full colour camera and conveniently digitised and stored in the computer. The intensity of the triangular object in each of the three colour component images is also shown. Similarly, the triangular object is also assumed to be of a uniform colour.
  • Figure 1C shows the resultant images obtained when the images of Figure 1B showing the triangular object within the artificial background are differentiated from the images of Figure 1A showing only the artificial background.
  • the differentiation is performed by subtracting one image from the other, for example, the artificial background image from the triangular object and artificial background image.
  • the resultant values of intensity difference of each object are shown in the three difference images, one for each colour component.
  • the images of Figures 1A and 1B are recorded from the same viewing point with the same image recording system so that the differentiation or subtraction thereof can be easily carried out.
  • the absolute value of the resultant values of intensity difference is to be taken so that the resultant images show the magnitude of the intensity difference between the two images.
  • a measurement can be made of the noise levels and the value of maximum noise can be used to provide a threshold level of intensity difference.
  • the threshold level of intensity difference is used to discriminate the intensity difference in the resultant images, that is, comparing the intensity difference of each pixel in the difference images with the threshold level. If the intensity difference of a given pixel is above the threshold level, the given pixel is assigned a new binary intensity value of 1. If the intensity difference of the given pixel is below the threshold level, the given pixel is assigned a new binary intensity value of 0.
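A minimal sketch of the differencing and thresholding steps described above, with a hypothetical noise threshold:

```python
# Sketch: subtract the background image from the object-plus-background
# image pixel by pixel, take the absolute intensity difference, and assign
# 1 where it exceeds the noise threshold and 0 otherwise.
# The threshold value is an assumed noise estimate.

THRESHOLD = 10  # assumed maximum noise level

def binary_difference(background, scene, threshold=THRESHOLD):
    """Binary image: 1 where the images differ by more than the threshold."""
    return [[1 if abs(s - b) > threshold else 0
             for s, b in zip(srow, brow)]
            for srow, brow in zip(scene, background)]

background = [[120, 120], [40, 200]]
scene      = [[120, 125], [180, 195]]
binary = binary_difference(background, scene)
# binary -> [[0, 0], [1, 0]]: only the 40 -> 180 change exceeds the threshold
```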
  • binary images for each colour component can be obtained as shown in Figure 1D.
  • the binary image for each colour component has a value of 1 where the intensity difference between the artificial background image and the triangular object and artificial background image in the same pixel location is greater than the threshold noise level.
  • the binary images of Figure 1D correspond to white images of the triangular object against a black artificial background, as long as a colour component intensity of the triangular object image is not close to the same colour component intensity of the artificial background image.
  • the triangular object is not completely defined in any of the three colour component images due to certain parts of the triangular object having almost the same colour and intensity as the artificial background.
  • a combined binary image can be produced from the three binary images of Figure 1D, that is, for a given pixel, if the binary intensity value is 1 in any of the red, green or blue colour binary images, then the combined binary image will have a binary intensity value of 1 at that given pixel. In this way, an absolute difference image can be produced.
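The logical "OR" combination of the three colour component binary images can be sketched as:

```python
# Sketch: a pixel belongs to the combined binary image if it is set in
# any of the red, green or blue binary difference images.

def or_combine(r, g, b):
    """Logical OR of three equal-sized binary (0/1) images."""
    return [[1 if (rv or gv or bv) else 0
             for rv, gv, bv in zip(rrow, grow, brow)]
            for rrow, grow, brow in zip(r, g, b)]

red   = [[1, 0], [0, 0]]
green = [[0, 1], [0, 0]]
blue  = [[0, 0], [1, 0]]
combined = or_combine(red, green, blue)
# combined -> [[1, 1], [1, 0]]
```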
  • the triangular object can be defined for that one colour. Further, it is appreciated that if the images are monochrome, then there is only one difference image for the monochrome images.
  • the triangular object can still be defined as long as the overall intensity of the triangular object differs from the overall intensity of the artificial background by the appropriate noise threshold level.
  • the recorded or stored images may consist of more than one colour component, in which case there is more than one difference image, and these can be combined to form an absolute difference image in order to define the triangular object more accurately.
  • Figure 1E shows the combined binary image produced by the logical "OR" process. It is noted that a circular area within the triangular object is not defined as part of the triangular object because a circular object in the artificial background has intensities close to those of the triangular object in all three colour components, that is, red 125 to 120, green 145 to 150, blue 25 to 20. Therefore, an error in defining the triangular object may result. In this regard, it will be relatively easy to ensure that no object in the artificial background has colours and intensities close to those of the triangular object, thereby overcoming the error in defining the triangular object.
  • FIG. 2A and 2B a practical arrangement of the relevant parts of the apparatus according to the present invention in an abattoir is shown in plan view and elevation view, respectively.
  • An image or video camera 1 and an illumination or light source 2 are positioned at a location in an abattoir where carcasses 5 will be available for positioning in front of the camera via an overhead rail transport system 6 universally used in abattoirs.
  • the background 4 as can be seen by the camera 1 contains no areas which have colours closely matching those appearing on the carcasses 5.
  • the camera 1 and light source 2 are supported by a frame 3 and connected by electrical cables 7 to a computer and power supply.
  • the background 4 can be simply an abattoir wall.
  • Figure 2A shows a particular configuration of camera 1 and light source 2.
  • Primary frontal light source 2 may be used to ensure that the camera 1 does not see any significant shadows cast by the carcasses 5 on the background 4. Close or adjacent positioning of the camera 1 and light source 2 means that the shadows of the carcasses 5 will essentially be completely behind the carcasses 5 when seen from the camera 1. Such relative positioning eliminates the necessity to account for any shadows in the subsequent image analysis process.
  • the carcasses 5 are positioned 0.5 metres in front of the abattoir wall 4, and the camera 1 and the frontal light source 2 are positioned 5 metres in front of the carcasses.
  • Calculations for the particular configuration shown in Figures 2A and 2B show that the maximum width of any shadow established around the carcass edges is 0.5 cm as seen on the carcasses. In this regard, such negligible shadows can be ignored when defining and analysing the profiles of carcasses. Further, if the shadows show strong contrast around the carcass edges, they may advantageously be used to trace and define the profile of the carcasses.
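The shadow-width calculation can be checked by similar triangles. The carcass-to-wall (0.5 m) and camera-to-carcass (5 m) distances come from the text; the lamp's lateral offset from the camera axis (5.5 cm) is an assumed value chosen to reproduce the stated 0.5 cm figure:

```python
# Worked sketch of the shadow geometry: a lamp slightly offset from the
# camera casts a shadow band on the wall just outside the carcass outline.
# The lamp offset below is an assumption, not a value from the patent.

def apparent_shadow_width(lamp_offset, camera_to_carcass, carcass_to_wall):
    """Visible shadow band width, measured at the carcass plane (same units)."""
    camera_to_wall = camera_to_carcass + carcass_to_wall
    # Width of the shadow band on the wall, by similar triangles
    on_wall = lamp_offset * carcass_to_wall / camera_to_carcass
    # Project that width back to the carcass plane as seen from the camera
    return on_wall * camera_to_carcass / camera_to_wall

w = apparent_shadow_width(lamp_offset=0.055, camera_to_carcass=5.0,
                          carcass_to_wall=0.5)
# w -> 0.005 m, i.e. about 0.5 cm, consistent with the text
```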
  • Secondary side light sources can also be provided to illuminate specific sections of the carcasses, for example, to provide better illumination of the butt region. Similarly, the positioning of the secondary side light sources relative to the camera may preferably ensure that no shadows are to be seen from the camera.
  • the camera is activated to record an image of the background from a viewing point.
  • the background image is digitised and stored in the computer.
  • Carcasses are sequentially introduced by the transport rail to the region in front of the camera to enable capture of images of each carcass.
  • Positioning of carcasses and operation of the equipment can be manual.
  • automated systems are used and may involve the detection of the carcass position and activation of motors to move the carcass.
  • the camera is then again activated to record an image of the carcass positioned in the background from the same viewing point.
  • This carcass image is also digitised and stored in the computer.
  • the background image is subtracted from the carcass image to detect regions of difference between the two images and to produce a difference image defining the location and profile of the carcass, as illustrated by the method of Figures 1A to 1E.
  • the difference image can then be used to obtain a defined image of the carcass from the image of the carcass positioned in the background.
  • a data processing system having suitable computer software can be used to trace the profile of the difference image or the defined carcass image.
  • the profile can be represented by a continuous sequence of individual, adjoining pixels in the direction of tracing.
  • the set of pixels describing a profile may also be represented by a range of numbers for each pixel, that is, the pixel number sequentially from a starting pixel, x and y co-ordinate of the pixel location in terms of the x,y co-ordinate grid of the digitised image and/or the direction in degrees of rotation from each pixel to the next pixel, clockwise around the edge.
  • the direction information can only be an integer multiple of 45° due to the grid array of pixels.
  • the complete set of direction values is also numerically filtered to provide a more accurate, average "bearing" value for each pixel which represents the tangential direction of the profile at each pixel.
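One way to sketch this filtering step is a moving average over a window of neighbouring direction values; the window size is an assumption, and the averaging is done on unit vectors so the 360/0 degree wrap-around is handled correctly:

```python
# Sketch: smoothing the 45-degree step directions into an average
# "bearing" per pixel. The half-window of 2 (5-sample window) is assumed.

import math

def bearings(directions, half_window=2):
    """Circularly averaged bearing, in degrees, at each profile pixel."""
    out = []
    n = len(directions)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        # Average on unit vectors to handle the wrap at 360/0 degrees
        sx = sum(math.sin(math.radians(d)) for d in directions[lo:hi])
        cx = sum(math.cos(math.radians(d)) for d in directions[lo:hi])
        out.append(math.degrees(math.atan2(sx, cx)) % 360)
    return out

smoothed = bearings([90, 90, 135, 90, 90, 45, 90])
```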
  • suitable software can be used to analyse this description of the traced carcass profile to identify particular anatomical points, as shown in Figure 3, for example, point A indicates start of rump, point B indicates tail, point C indicates start of butt profile, point D indicates butt profile mid-point, point E indicates end of butt profile, point F indicates end of hind leg, point G indicates start of foreleg, point H indicates end of foreleg, point I indicates start of loin, and line J indicates the dimension reference line.
  • the anatomical points can be identified by algorithms derived from experimental data. The identification may specify a particular anatomical point to fall within a prescribed window of pixel numbers with the x or y co-ordinate having a maximum or minimum value and/or a prescribed tangential direction on the profile. From knowledge of the carcass shape and experimental analysis of the profile descriptions, techniques can be developed to reliably identify and select sections of the carcass which have significance for descriptive purposes, for example, butt start and end.
  • the anatomical points may be identified by profile matching as well as by searching for particular combinations of pixel numbers, x,y co-ordinates, and bearing values.
  • the profile matching involves storing a large number of carcass profiles with defined anatomical points as reference profiles. As a new carcass profile is analysed, it is matched by searching to a reference profile which has the smallest area difference between the two profiles.
  • the defined anatomical points on the reference profile are used to identify anatomical points on the carcass profile being analysed.
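A minimal sketch of the profile-matching search, with profiles represented as small binary masks; the reference data here are invented for illustration:

```python
# Sketch: match a new carcass profile against stored reference profiles
# by choosing the reference with the smallest area of disagreement
# (the symmetric difference between the two binary masks).

def area_difference(a, b):
    """Number of pixels where two equal-sized binary masks disagree."""
    return sum(av != bv for ra, rb in zip(a, b) for av, bv in zip(ra, rb))

def best_match(profile, references):
    """Reference record whose mask has the smallest area difference."""
    return min(references, key=lambda ref: area_difference(profile, ref["mask"]))

refs = [
    {"name": "ref-1", "mask": [[1, 1], [1, 1]]},
    {"name": "ref-2", "mask": [[1, 0], [0, 0]]},
]
new_profile = [[1, 1], [0, 0]]
match = best_match(new_profile, refs)
# ref-1 disagrees in 2 pixels, ref-2 in 1, so ref-2 is chosen
```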
  • dimensional measurements can be taken from the profile of the carcass. Experiments involving imaging numerous carcasses, which are then subsequently dissected to reveal detailed carcass composition, can allow the evaluation of a large number of carcass dimensions to predict composition.
  • the relevant dimensional measurements can be taken from the carcass which are known to indicate parameters and yield of the carcass.
  • dimensional measurements can be taken to determine the shape and size of carcasses or selected sections thereof, for example, the butt region for enabling classification of the AUS-MEAT butt profile. The positions at which the dimensional measurements are taken are related to the anatomical points previously located.
  • the anatomical points detected on the carcass profile can also be used to identify and delineate the standard AUS-MEAT bruise detecting and scoring areas, as shown in Figure 4, for example, section K indicates shoulder, section L indicates loin, section M indicates rump and section N indicates butt. Such areas may represent sections of the carcass which will attract higher values.
  • the three numbers describing the colour and intensity of each pixel contained within the carcass image may be operated on by equations which classify each pixel as to its likely composition property and material.
  • particular algebraic combinations and algebraic operations on the red, green and blue intensity values can provide distinctive values for the various colours.
  • the algebraic combinations can be developed by statistically analysing pixels representing different composition properties to determine the relative distribution of intensity in each colour component. For each pixel therefore, the algebraic combination involving the appropriate relevant proportions of red, green and blue intensities is calculated and the calculated value is checked against predetermined ranges to ascertain the colour category.
  • Evaluation of numerous images can produce the predetermined ranges to enable separation of pixel colours into six general composition property categories: bruises, lean meat, thin fat, yellow fat, creamy fat and white fat or membranes.
  • the carcass image may therefore be transformed into a categorised image which has only six colour classifications corresponding to these composition property categories.
  • a percentage measure of a particular property covering the total surface area of the carcass can be made, for example, the fat cover percentage.
  • the pixel colour categories are counted to detect the presence of colours in the "bruise" category. If the number of pixels corresponding to a particular surface area exceeds an acceptable threshold value for that particular surface area, then a bruise score varying from 1 to 7, is assigned to the carcass and applied as a penalty to the evaluation.
  • the quartering process produces carcass portions exposing a cross-sectional view of the composition properties within the carcass portion, namely the rib-eye or longissimus dorsi area, as well as several other muscles, intramuscular fat and the subcutaneous fat layer. Objective measurement and analysis of such features exposed in the cross-section have been used to provide an estimation or determination of the quality and quantity of meat products within the carcass, thus evaluating the carcass.
  • the specific parameters such as extent of rib-eye area, intramuscular fat content of the rib-eye, known as marbling, fat colour, meat colour and subcutaneous fat depth, can be objectively measured and determined from the exposed internal features of the quartered carcass cross-sections, for use in setting a value to the carcass portions.
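The six-way colour classification described in the points above can be sketched as follows. The particular algebraic combination used here (red intensity minus the mean of green and blue) and the range boundaries are illustrative assumptions only; the specification derives the real combinations and ranges experimentally from dissected reference carcasses:

```python
import numpy as np

# Illustrative category boundaries for an assumed algebraic
# combination of the red, green and blue intensities. The real
# coefficients and ranges would come from statistical analysis of
# pixels with known composition properties.
CATEGORIES = [
    ("bruise",                 -256, -40),
    ("lean meat",               -40,  20),
    ("thin fat",                 20,  60),
    ("yellow fat",               60, 100),
    ("creamy fat",              100, 140),
    ("white fat or membranes",  140, 256),
]

def classify_pixel(r, g, b):
    """Assign one pixel to one of the six composition categories."""
    value = r - (g + b) / 2.0  # assumed algebraic combination
    for name, lo, hi in CATEGORIES:
        if lo <= value < hi:
            return name
    return "white fat or membranes"  # saturated values top out here

def categorise_image(rgb):
    """Map an (H, W, 3) uint8 image to an (H, W) array of category labels."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    value = r - (g + b) / 2.0
    out = np.full(rgb.shape[:2], "white fat or membranes", dtype=object)
    for name, lo, hi in CATEGORIES:
        out[(value >= lo) & (value < hi)] = name
    return out
```

The transformed, six-colour categorised image can then be counted per category, as the specification describes for fat cover and bruise scoring.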

Abstract

Evaluation of carcass, or carcass portions, by object image definition processing comprising the steps of: recording a first image of a background (4) taken from a viewing point; positioning the carcass (5) or carcass portion in the background; recording a second image taken from the viewing point of the background with the carcass or carcass portion positioned therein; and analysing the first and second images to differentiate the carcass or carcass portion image from the background image for providing a defined image of the carcass or carcass portion, thus resolving the location and/or profile of the carcass or carcass portion positioned in the background. The differentiation includes the subtraction of the first or second image from the other, preferably for each colour component respectively, so as to provide a number of component difference images which are then recombined to provide an absolute difference image.

Description

EVALUATING CARCASSES BY IMAGE ANALYSIS AND OBJECT DEFINITION
Field of the Invention
The present invention relates to a method of image analysis of objects. In particular, it relates to a method of object image definition. The present invention also relates to an apparatus for image analysis and object definition. The method and apparatus can be designed to provide analysis of specific features, various characteristics, and descriptive parameters of objects.
More specifically, but not exclusively, the present invention relates to the evaluation of animal carcasses. For the purpose of explanation, the specification will primarily refer to beef carcasses in describing the invention. It is appreciated that the invention is applicable to all types of animal carcasses, for example sheep, goats, pigs, as can be slaughtered in abattoirs.
The evaluation of animal carcasses may involve the determination of various qualitative properties and descriptive parameters of the carcasses, which include:
(a) shape and size of a carcass,
(b) butt profile indicating the degree of concavity or convexity of the hindquarter of a carcass,
(c) composition being mainly the relative proportions of lean meat, fat and bone within a carcass and also the degree of muscularity,
(d) yield being typically the proportion of saleable meat which can be extracted from a carcass of a given carcass weight,
(e) bruise score relating to the amount of meat which has had to be removed by cutting from specific sections of a carcass.
Background of the Invention
Within the meat processing industry, particularly in the operation of abattoirs, importance has been attached to the requirement for accurate descriptions of animal carcasses. These descriptions are useful in evaluating animal carcasses and providing a fair price to be paid by the processor to the animal producer and ensuring that the individual carcasses can be directed to the most appropriate end market.
Various markets, although all seeking to purchase beef carcasses say, attach varying importance to different descriptive parameters of the carcasses and are willing to pay higher prices for particular carcasses which meet their specific requirements, for example, Japanese markets prefer meat with a high marbling fat content, whereas domestic Australian markets prefer fat-free meat.
Carcasses are also subject to a significant range of variation in yield which for beef carcasses say, can typically vary from 55% to 75% of the dressed carcass weight. With retailers willing to pay a fixed price per kilogram of meat from the carcass, the price per kilogram of carcass weight can vary considerably.
Yield and quality variability can be due to different factors, for example, genetic differences, breed of animal, age, sex, nutrition.
Although the descriptive parameters can be predicted to a certain extent from the live animal, assessment is usually restricted to the eviscerated carcass. Carcasses may often be further reduced to saleable meat cuts in the abattoir, so the yield, or specific amount of saleable meat which comes from a particular carcass, could in principle be calculated. However, such a system is complicated by the need to evaluate every meat cut produced from that carcass, is impractical and inconvenient in several respects, for example in requiring the carcasses to be hung or chilled before any further cutting and boning processing, and is not suitable for some abattoirs. In assigning value to whole carcasses, it has therefore become necessary to attempt to predict the eventual saleable meat yield and quality of meat from the whole carcass or split half carcasses.
Extensive scientific research in several countries has resulted in a variety of techniques for estimating carcass yield and these have been put in place as National
Standard Grading systems. Within Australia such a description system is controlled by AUS-MEAT, the Authority for the Uniform Specification of Meat products. This is an industry and Government supported organisation which specifies standard methods for assigning various descriptive parameters to beef carcasses. The relevant components used to describe the beef carcasses and which can have an effect on price levels are:
(i) P8 Fat Depth - the thickness of subcutaneous fat at a particular anatomical site on the carcass, measured by cutting with a knife and using a ruler.
(ii) Butt Profile - a subjective classification of the hindquarter of the carcass as to its concavity or convexity.
Five grades are assigned (A-E) based on published standard profile shapes.
(iii) Bruise Score - a system to describe the presence of bruises which have had to be removed by cutting from sections of the carcass which attract high value.
(iv) Hot Carcass Weight - weight of the whole carcass before chilling and with a standard trim of extraneous material, for example, fat and viscera, applied.
The AUS-MEAT system is optional and adoption and use by processors varies widely. With a continual demand for automated objective methods of describing the carcasses, research has been under way in several countries to define technologies and techniques for making objective carcass measurements.
The major technologies under development are:
A. Real-Time Ultrasound Imaging, whereby the relevant equipment can provide an image of the internal structure of the carcass on which the various composition properties can be discriminated, that is, meat, fat and bone. Typically images are taken at particular anatomical sites (between the 12th and 13th ribs) to delineate known internal features. These features can be objectively measured from the image and used to describe carcass composition.
B. Optical Reflectance Probes, whereby the relevant devices are used to pierce the carcass and in doing so are able to measure relative amounts of fat and lean meat. Scientific studies can relate these quantities to total carcass composition and enable prediction of this to within a specified accuracy.
C. Image Analysis, which is a technique relying on the use of cameras to record an image of the object which is then converted into a computer readable form by image digitisers. The image is divided into a large number of small rectangular regions referred to as "pixels". Typically a sufficient number are used such that when the pixellated image is displayed on a normal screen or monitor, the individual pixels are not noticeable to the eye, for example, 512 x 512 pixels in the vertical and horizontal directions giving 262,144 pixels for a complete image frame. The image brightness or colour intensity in each pixel is uniform and is represented in the computer by an integer number. This is typically an 8-bit binary number and hence can represent 256 different intensity levels.
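As an illustrative sketch of the digitised representation just described (the array and variable names are assumptions, not part of the specification):

```python
import numpy as np

# A digitised full-colour frame: 512 x 512 pixels with three 8-bit
# colour components (red, green, blue) per pixel, each component able
# to represent 256 intensity levels (0-255).
WIDTH, HEIGHT, COMPONENTS = 512, 512, 3
frame = np.zeros((HEIGHT, WIDTH, COMPONENTS), dtype=np.uint8)

pixels_per_frame = HEIGHT * WIDTH  # 262,144 pixels per complete frame
intensity_levels = 2 ** 8          # 256 levels per 8-bit component
```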
Once the image is digitised, computer software can be used to analyse the pixel location, and its intensity level to make objective determinations of features in the image in terms of the number of pixels and the values associated therewith. Calibration of geometrical relationships from the scene to the image enable measurement of the object in absolute terms.
Such image analysis systems can be used to make objective measurements of carcasses, and also to derive new methods capable of determining carcass values which may have various advantages over the subjective techniques. With the use of a colour video camera, electronic image digitising equipment and computer software as the means embodying image analysis techniques, the extracted information from the image of a carcass can be used to determine the parameters of carcasses and carcass values. Image analysis can also be used on sections cut through the carcass to expose internal features which can then be objectively measured.
Prior Art References
West German Patent 27 28 913 describes the use of video image analysis to objectively measure parameters on the cut surface of a split pig carcass. Differentiation of the various components of the carcass is achieved by enclosing the carcass in a dark coloured chamber such that, when illuminated, the chamber surfaces appear dark, fat materials white and meat appears red. Detection of the variously coloured components enables detection and measurement of features, for example, fat layers, fat depths, meat depths, used for classification.
US Patent 4745472 describes an animal measuring system for determining characteristics and traits of live animals. The measuring system comprises a special chute apparatus for holding the animal during profile image recording and measurement. The front and top sides of the chute apparatus comprise respective viewing walls made of a grid-work of steel rods or a translucent material having grid-lines located thereon. The back side of the chute apparatus comprises a solid plate or the like which will enable the side profile of the animal to be readily outlined or defined before a video camera. The measuring process requires the identification and marking of critical points of the animal with locators adhesively attached to the animal.
US Patent 4226540 describes a method and apparatus for the contact-free determination of features of meat quality such as freshness, colour and ratio of meat and fat. Radiation emanating from the meat object is detected to create definite radiation values which are then analysed or compared to reference values. It is essential for the attainment of definite radiation values free from difficulty that both the meat object as well as also at least the optical part of the apparatus be arranged in a darkened room with non-deflecting walls or neutral surfaces.
US Patent 4413279 describes the use of image analysis for objectively measuring the relative fat and meat in a carcass portion. The method involves editing the real image, which is originally composed of an analog sequence of gray values from bright to dark, into a binary black and white image using a discriminating brightness threshold value. The sample is optically scanned line and image-wise against a dark background with a black-white camera to provide a sharply emerging contour which arises at the brightness transition between background and sample.
Research Papers published by Y.R. Chen and T.P. McDonald at the U.S. Department of Agriculture Research Station, Nebraska, U.S.A., detail extensive development of image processing software to enable the automated assessment of sections through beef carcasses.
US Patent 4939574 describes a complete beef carcass classification system developed in Denmark and based on video image analysis of the carcass exterior surface and optical probe readings of meat and fat depths. The system is intended to enable objective evaluation of the current EEC/EUROP carcass grading scores, which are based on subjective descriptions and also an accurate quantitative estimation of the carcass yield. The method involves placing a carcass or half of a split carcass in a light-screening chamber in front of a light-emitting, contrasting surface, so that the carcass stands out as a dark silhouette against the contrasting surface.
The first step of any system using image analysis for quantitative measurements on an object is that of defining the object, as distinct from any surroundings or other objects contained in the image. In defining the object, it has been known to place the object in front of a dark background or uniquely coloured screen and also possibly to place the object in an enclosed box to control illumination conditions, and then to record a grey tone image. It can be seen that in controlling the specific requirements of the background screen and the illumination conditions, expensive equipment and bulky image capture chambers are necessary. Further, the conditions in which the equipment is used in the sterilised environment of abattoirs must also be controlled in order to integrate it into the processing operations of abattoirs.
It is an object of the present invention to provide an improved method and apparatus for evaluating carcasses. In particular, the present invention provides an improved process of image analysis and process of object image definition. The present invention as provided can be seen to have advantages over the known techniques and systems.
Summary of the Invention
In accordance with one aspect of the present invention, there is provided a method of evaluating a carcass or carcass portion, characterised by a process of object image definition, said object image definition process comprising the steps of:
(a) recording a first image of a background taken from a viewing point;
(b) positioning the carcass or carcass portion in said background;
(c) recording a second image taken from said viewing point of said background with the carcass or carcass portion positioned therein; and
(d) analysing said first and second images to differentiate the carcass or carcass portion image from the background image for providing a defined image of said carcass or carcass portion.
It can be appreciated that the location and/or profile of the carcass or carcass portion positioned in the background can be resolved using such process of object image definition. Further, the qualitative properties of the carcass or carcass portion can be determined using the further step of analysing the defined image.
Conveniently, the differentiation may include the subtraction of the first or second image from the other, preferably the first image from the second image, to provide a difference image, for resolving the location and/or profile of the carcass or carcass portion positioned in the background.
In practice, with the background consisting of, or forming part of, the surrounding environment in an abattoir, the method of evaluating a carcass or carcass portion may further comprise the step of illuminating the background and the carcass or carcass portion while the first and second images are recorded. In this regard, it is preferable to arrange the illumination so that minimal shadows appear in the recorded images, especially in the carcass or carcass portion image, since shadows may make it difficult to differentiate the carcass or carcass portion image from the background image.
Conveniently, an illumination source or sources may be positioned adjacent the viewing point from which the first and second images are taken, that is close to the image or video camera. Alternatively, with other lighting arrangements relative to the viewing point, any shadow cast by the carcass or carcass portion is preferably not to be seen by the camera or simply cast behind the carcass or carcass portion as seen from the viewing point.
The method of the present invention is specifically suitable for full colour images having all three colour components of red, green and blue and may also be conveniently adapted for monochrome images. Where multi-colour images are recorded, the multi-colour images may appropriately be broken down into their respective colour component images representing red, green and/or blue. Alternatively, the first and second images may be recorded repeatedly but in different colour components. Accordingly, the differentiation may be performed on the first and second images of each colour component respectively and the respective difference images of each colour component can advantageously be recombined to provide an absolute difference image, for resolving the location and/or profile of the carcass or carcass portion positioned in the background. The absolute difference image of the multi-colour recording or the different colour components recording is more accurate than that of the monochrome recording because there are two or three component difference images rather than one.
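The per-component differencing described above can be sketched minimally as follows, assuming the first and second images are 8-bit RGB arrays of the same shape (function name is illustrative):

```python
import numpy as np

def component_difference_images(background, scene):
    """Absolute per-channel difference between two (H, W, 3) images.

    `background` is the first image (background only) and `scene` the
    second image (background with the carcass positioned therein).
    Returns one difference image per colour component, computed in a
    wider signed type to avoid uint8 wrap-around on subtraction.
    """
    diff = np.abs(background.astype(np.int16) - scene.astype(np.int16))
    return diff[..., 0], diff[..., 1], diff[..., 2]
```

Each component difference image can then be thresholded and the results recombined into the absolute difference image described in the text.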
Thus, it is preferable that the illumination sources or the other lighting arrangements can provide a substantially uniform illumination to at least the carcass or carcass portion. Further, it is also preferable that the carcass or carcass portion can be illuminated to result in substantially the natural colours thereof being recorded in the second image. Tungsten lamps can be used to provide such a result.
The method of the present invention is specifically applicable to whole carcasses, split half carcasses or cut portions of carcasses. Further, for the purpose of explanation, the present invention will be described relating to split half carcasses and analysis of external features thereof. Nevertheless, it is appreciated that the present invention may be adapted to cut portions of carcass and analysis of exposed internal features thereof.
In accordance with another aspect of the present invention, the method of evaluating a carcass or carcass portion may also comprise the further step of analysing the difference and/or defined image to determine qualitative properties of the carcass or carcass portion. The analysing step may be systematically performed by a programmed computer with appropriate software.
The further analysis may include tracing the profile of the carcass or carcass portion from the difference and/or defined image and providing a sequence of values representing the profile. The sequence of values can be the respective locations or co-ordinates of adjoining pixels in the direction of tracing, or the respective directions, conventionally in degrees of rotation clockwise, between adjoining pixels in the direction of tracing.
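The sequence of clockwise direction values described above might be derived as in the following sketch, which assumes conventional screen co-ordinates (y increasing downwards) and measures each bearing clockwise from vertical; the function name is an illustration, not the specification's implementation:

```python
import math

def bearings(profile):
    """Direction in degrees, measured clockwise from 'up' (north),
    between each pair of adjoining pixels along a traced profile.

    `profile` is a list of (x, y) pixel co-ordinates with y increasing
    downwards, as is conventional for image rasters.
    """
    out = []
    for (x0, y0), (x1, y1) in zip(profile, profile[1:]):
        dx, dy = x1 - x0, y1 - y0
        # atan2(dx, -dy) gives the clockwise-from-north angle in
        # screen co-ordinates; the modulo keeps it in [0, 360).
        out.append(math.degrees(math.atan2(dx, -dy)) % 360.0)
    return out
```

Numerically filtering (for example, averaging) a window of these values would give the smoothed "bearing" at each pixel, representing the tangential direction of the profile.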
The further analysis may include locating anatomical points of the carcass or carcass portion from the traced profile or sequence of values. This would subsequently provide for the cutting of the carcass or carcass portion and measuring the thickness of subcutaneous fat at a given anatomical point, either manually, or by mechanically driven equipment or automated probe.
The further analysis may include measuring the size of the carcass or carcass portion in terms of dimensional measurements of different sections thereof. The dimensional measurements may preferably be taken between preselected anatomical points.
The further analysis may include identifying and delineating selected sections of the carcass or carcass portion. These sections may be selected for their qualities, compositions or saleable values. Preferably, bruise detecting and scoring regions are also delineated.
The further analysis may include measuring the intensity of each colour component for each pixel in the defined image of the carcass or carcass portion and performing an algebraic operation on the measured intensity of each colour component to determine the property represented by said each pixel, for determining the composition properties of the carcass or carcass portion. The algebraic operation may preferably involve the relevant proportions of each colour component. The further analysis may include counting the number of pixels representing a selected property and determining the proportion of said selected property, in the carcass or carcass portion or in the selected sections thereof. The selected property may be fat, bruise, lean meat or membrane.
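The pixel counting described above might be sketched as follows, with an already categorised image and a boolean mask delineating the carcass (or a selected scoring section) as assumed inputs:

```python
import numpy as np

def property_percentage(categories, mask, prop):
    """Percentage of pixels in a delineated region carrying a given
    composition property.

    `categories` is an (H, W) array of category labels, `mask` a
    boolean (H, W) array delineating the carcass or a selected
    section, and `prop` the label to count, e.g. a fat category.
    """
    region = categories[mask]
    if region.size == 0:
        return 0.0
    return 100.0 * np.count_nonzero(region == prop) / region.size
```

The same count, compared against a threshold for a bruise scoring area, would support the 1 to 7 bruise score assignment described earlier.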
It is appreciated that the present invention is compatible with other known methods of evaluating carcasses and known processes of image analysis of objects. Thus, the present invention can be modified and varied and improved to incorporate partly or wholly such known methods and processes, for example, including the subjective techniques for taking measurements physically and the use of image analysis technologies and equipment, including lighting means, image or video cameras, image digitisers, data storage or recording means, data processors and computer systems.
Accordingly, the present invention also provides an apparatus for evaluating a carcass or carcass portion, characterised by comprising:
(a) transport system to position the carcass or carcass portion in a background;
(b) image recording system to record a first image of said background taken from a viewing point and a second image taken from said viewing point of said background with the carcass or carcass portion positioned therein; and
(c) data processing system to analyse said first and second images to differentiate the carcass or carcass portion image from the background image for providing a defined image of the carcass or carcass portion, thus resolving the location and/or profile of the carcass or carcass portion positioned in the background.
The transport system may form part of the processing line equipment of the abattoir and the image recording system and data processing system are standard image analysis equipment. Thus, there is no requirement for any specific unusual equipment, for example illumination chambers, and the apparatus of the present invention can easily be integrated with the processing line equipment. Conveniently, the data processing system may include computer means programmed to analyse the difference and/or defined image to determine qualitative properties of the carcass or carcass portion. In this regard, different software can be developed for that purpose. Further, in addition to the lighting already provided in the abattoir, specific lighting means may be used to illuminate the background and the carcass or carcass portion while the first and second images are recorded.
Brief Description of the Drawings
Features of the present invention will be appreciated more fully in the following detailed description of preferred embodiments with reference to the accompanying drawings, in which:
Figures 1A to 1E illustrate a practical method of the present invention;
Figures 2A and 2B show the plan and elevation views of a practical arrangement of relevant parts of an apparatus of the present invention;
Figure 3 shows a carcass profile with anatomical points and selected dimensional measurements;
Figure 4 shows a carcass profile with bruise detection and scoring regions.
Detailed Description of the Invention
Referring to Figures 1A to 1E, which illustrate a practical method of evaluating carcasses according to the present invention, by a process of image analysis, in particular illustrating a process of object image definition, an artificial background and a triangular object are used in place respectively of a background in an abattoir and a carcass. The process of object image definition is to be performed to differentiate an image of the triangular object from an image of the artificial background having the triangular object therein. The object image definition process may be a preliminary step in the image analysis process.
Figure 1A shows an image of the artificial background which has been recorded. The background image is conveniently digitised and stored in a computer. With the use of a full colour camera, a full colour image can be broken down into three images, one for each colour component, red, green and blue, which are stored to represent the full colour image. Alternatively, suitable image recording means can be used to record the background image repeatedly in each colour component so to provide three separate colour component images.
The artificial background consists of five different objects and each object is assumed to be of a uniform colour. The intensity in each of the three colour component images is shown on each object. The value of intensity can range from 0 to 255, that is, 256 intensity levels.
Figure 1B shows an image of the triangular object positioned in the artificial background which also has been recorded by the full colour camera and conveniently digitised and stored in the computer. The intensity of the triangular object in each of the three colour component images is also shown. Similarly, the triangular object is also assumed to be of a uniform colour.
It can be appreciated that in the specific application of the object image definition process in the method of evaluating carcasses, the background in an abattoir will be much simpler, for example, a plain painted wall, and the carcasses will have numerous varying colours.
Figure 1C shows the resultant images obtained when the images of Figure 1B showing the triangular object within the artificial background are differentiated from the images of Figure 1A showing only the artificial background. The differentiation is performed by subtracting one image from the other, for example, the artificial background image from the triangular object and artificial background image. The resultant values of intensity difference of each object are shown in the three difference images, one for each colour component. In this regard, the images of Figures 1A and 1B are recorded from the same viewing point with the same image recording system so that the differentiation or subtraction thereof can be easily carried out.
It can be noted that in the resultant images of Figure 1C, the areas outside the triangular object would ideally have an intensity of zero, that is exact cancellation. In reality, however, when sequential images of the triangular object and artificial background are captured and differentiated, the respective intensity difference values of the resultant images will vary due to the random variation in the measuring equipment. Given sufficient illumination, the random variations will be small with respect to the relevant intensities measured, for example, with a maximum intensity of 255 and object intensities of 50 to 100, the maximum variation for an object may be only 10. The random intensity differences in areas of the image where exact cancellation should occur are referred to as "noise".
Further, it can be noted that where the artificial background intensity is greater than the triangular object intensity in the same location, a negative resultant value of intensity difference will be derived from subtraction. In this regard, the absolute value of the resultant values of intensity difference is to be taken so that the resultant images show the magnitude of the intensity difference between the two images.
Based on measurements of the difference images of Figure 1C, in areas where the triangular object is known not to occur, a measurement can be made of the noise levels and the value of maximum noise can be used to provide a threshold level of intensity difference. The threshold level of intensity difference is used to discriminate the intensity difference in the resultant images, that is, comparing the intensity difference of each pixel in the difference images with the threshold level. If the intensity difference of a given pixel is above the threshold level, the given pixel is assigned a new binary intensity value of 1. If the intensity difference of the given pixel is below the threshold level, the given pixel is assigned a new binary intensity value of 0.
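The noise-threshold discrimination just described might be sketched as follows, assuming the noise is measured over a region of the difference image known to contain only background (function and argument names are illustrative):

```python
import numpy as np

def binarise(diff, noise_region):
    """Binarise a difference image using a noise-derived threshold.

    `diff` is an (H, W) intensity difference image and `noise_region`
    a boolean (H, W) mask over areas known not to contain the object.
    The maximum intensity difference inside that region is taken as
    the noise threshold; pixels exceeding it are set to 1, the rest 0.
    """
    threshold = int(diff[noise_region].max())
    return (diff > threshold).astype(np.uint8)
```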
In this way, binary images for each colour component can be obtained as shown in Figure 1D. The binary image for each colour component has a value of 1 where the intensity difference between the artificial background image and the triangular object and artificial background image in the same pixel location is greater than the threshold noise level. In this regard, it is noted that the binary images of Figure 1D correspond to white images of the triangular object against a black artificial background, as long as a colour component intensity of the triangular object image is not close to the same colour component intensity of the artificial background image.
It is appreciated that in the binary images of Figure 1D, the triangular object is not completely defined in any of the three colour component images due to certain parts of the triangular object having almost the same colour and intensity as the artificial background.
By using a logical "OR" process, a combined binary image can be produced from the three binary images of Figure ID, that is, for a given pixel, if the binary intensity value is 1 in any of the red, green or blue colour binary images, then the combined binary image will have a binary intensity value of 1 at that given pixel. In this way, an absolute difference image can be produced. Thus, as long as one of the colour components of the triangular object differs from the artificial background by the noise threshold level, the triangular object can be defined for that one colour. Further, it is appreciated that if the images are monochrome, then there is only one difference image for the monochrome images. The triangular object can still be defined as long as the overall intensity of the triangular object differs from the overall intensity of the artificial background by the appropriate noise threshold level. Preferably, the recorded or stored images consist of more than one colour component, so that there is more than one difference image which can be combined to form an absolute difference image in order to define the triangular object more accurately.
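The logical "OR" combination of the per-channel binary images can be sketched as below; the channel masks are hypothetical values, not taken from the figures:

```python
import numpy as np

# Per-channel binary images (1 where that colour component differs from
# the background by more than the noise threshold); values hypothetical.
red   = np.array([[1, 0], [0, 0]], dtype=np.uint8)
green = np.array([[0, 1], [0, 0]], dtype=np.uint8)
blue  = np.array([[0, 0], [1, 0]], dtype=np.uint8)

# Logical "OR": a pixel belongs to the object if ANY channel flags it.
combined = red | green | blue
print(combined)
```

A pixel is missed only when all three channels are simultaneously within the noise threshold of the background, which is exactly the failure case the circular area of Figure IE illustrates.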
Figure IE shows the combined binary image produced by the logical "OR" process. It is noted that a circular area within the triangular object is not defined as part of the triangular object due to a circular object in the artificial background having intensities close to the triangular object for all the three colour components, that is red 125 to 120, green 145 to 150, blue 25 to 20. Therefore, an error in defining the triangular object may result. In this regard, it will be relatively easy to ensure that no object in the artificial background has colours and intensities close to the triangular object colours and intensities to overcome the error in defining the triangular object.
Even when such error occurs in the object image definition process, as shown in Figure IE, if the non-defined area lies wholly within the boundaries of the defined area, that is, the non-defined area appears as a hole in the triangular object, then no errors will result. This is because the object image definition process is used to define the profile of the triangular object which is then assumed to contain no holes. In this case, only when errors occur which overlap the boundary of the triangular object will the triangular object be incorrectly defined. It can be appreciated that the combined binary image of Figure IE defining the location and profile of the triangular object in a black and white image can be used to obtain a defined image of the triangular object in its original colours from the colour component images of Figure IB.
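Since a hole lying wholly inside the defined area is harmless, such holes can simply be filled before the profile is traced. A minimal sketch of hole filling by flood-filling the background from the image border (the function name fill_holes and the mask are hypothetical illustrations):

```python
import numpy as np
from collections import deque

def fill_holes(mask):
    """Fill any 0-regions not connected to the image border.

    Because the object profile is assumed to contain no holes, any
    non-defined area lying wholly inside the defined area can be
    filled without affecting the traced profile.
    """
    h, w = mask.shape
    outside = np.zeros_like(mask, dtype=bool)
    # Seed the flood fill with every background pixel on the border.
    q = deque((r, c) for r in range(h) for c in range(w)
              if mask[r, c] == 0 and (r in (0, h - 1) or c in (0, w - 1)))
    for r, c in q:
        outside[r, c] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] == 0 \
                    and not outside[nr, nc]:
                outside[nr, nc] = True
                q.append((nr, nc))
    # Anything not reachable from the border is object or hole: set to 1.
    return np.where(outside, 0, 1).astype(np.uint8)

m = np.array([[0, 0, 0, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 1, 0, 1, 0],   # interior 0 is a hole
              [0, 1, 1, 1, 0],
              [0, 0, 0, 0, 0]], dtype=np.uint8)
print(fill_holes(m))
```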
Referring to Figures 2A and 2B, a practical arrangement of the relevant parts of the apparatus according to the present invention in an abattoir is shown in plan view and elevation view, respectively. An image or video camera 1 and an illumination or light source 2 are positioned at a location in an abattoir where carcasses 5 will be available for positioning in front of the camera via an overhead rail transport system 6 universally used in abattoirs. It is preferred that the background 4 as can be seen by the camera 1 contains no areas which have colours closely matching those appearing on the carcasses 5. The camera 1 and light source 2 are supported by a frame 3 and connected by electrical cables 7 to a computer and power supply. The background 4 can be simply an abattoir wall.
The practical arrangement of Figure 2A shows a particular configuration of camera 1 and light source 2.
A primary frontal light source 2 may be used to ensure that the camera 1 does not see any significant shadows cast by the carcasses 5 on the background 4. Close or adjacent positioning of the camera 1 and light source 2 means that the shadows of the carcasses 5 will essentially be completely behind the carcasses 5 when seen from the camera 1. Such relative positioning will eliminate the necessity to account for any shadows in the subsequent image analysis process.
In the particular configuration, the carcasses 5 are positioned 0.5 metres in front of the abattoir wall 4, and the camera 1 and the frontal light source 2 are positioned 5 metres in front of the carcasses. Calculations for the particular configuration as shown in Figures 2A and 2B show the maximum width of any shadow established around the carcass edges to be 0.5 cm on the carcasses. Such negligible shadows can be ignored when defining and analysing the profiles of carcasses. Further, if the shadows show strong contrast around the carcass edges, they may advantageously be used to trace and define the profile of the carcasses.
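The 0.5 cm figure is consistent with a simple similar-triangles estimate. The camera-to-lamp separation is not stated in the text, so the value below is an assumption chosen for illustration only:

```python
# Similar-triangles sketch of the visible shadow width.
s = 0.05   # camera-to-lamp separation (m) -- ASSUMED, not from the text
d = 0.5    # carcass-to-wall distance (m), from the text
D = 5.0    # lamp-to-carcass distance (m), from the text

# Offsetting the lamp from the camera by s shifts the shadow edge on
# the wall by roughly s * d / D; seen from the camera this appears as
# a thin shadow rim around the carcass edges.
w = s * d / D
print(f"visible shadow width ~ {w * 100:.1f} cm")  # ~ 0.5 cm
```

With these numbers a 5 cm lamp offset yields the 0.5 cm rim quoted above, which is why close positioning of camera and lamp makes the shadows negligible.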
Secondary side light sources can also be provided to illuminate specific sections of the carcasses, for example, to provide better illumination of the butt region. Similarly, the secondary side light sources are preferably positioned relative to the camera so that no shadows are seen from the camera.
In accordance with the present invention, the camera is activated to record an image of the background from a viewing point. The background image is digitised and stored in the computer.
Carcasses are sequentially introduced by the transport rail to the region in front of the camera to enable capture of images of each carcass. Positioning of carcasses and operation of the equipment can be manual. Preferably, automated systems are used and may involve the detection of the carcass position and activation of motors to move the carcass.
The camera is then again activated to record an image of the carcass positioned in the background from the same viewing point. This carcass image is also digitised and stored in the computer.
The background image is subtracted from the carcass image to detect regions of difference between the two images and to produce a difference image defining the location and profile of the carcass, as illustrated by the method of Figures 1A to IE. The difference image can then be used to obtain a defined image of the carcass from the image of the carcass positioned in the background. A data processing system having suitable computer software can be used to trace the profile of the difference image or the defined carcass image. Conveniently, the profile can be represented by a continuous sequence of individual, adjoining pixels in the direction of tracing. The set of pixels describing a profile may also be represented by a range of numbers for each pixel, that is, the pixel number sequentially from a starting pixel, the x and y co-ordinates of the pixel location in terms of the x,y co-ordinate grid of the digitised image, and/or the direction in degrees of rotation from each pixel to the next pixel, clockwise around the edge. The direction information can only be an integer multiple of 45° due to the grid array of pixels. The complete set of direction values is also numerically filtered to provide a more accurate, average "bearing" value for each pixel which represents the tangential direction of the profile at each pixel.
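The chain-code style description of a traced profile, with directions restricted to multiples of 45° and a numerically filtered "bearing", can be sketched as follows. The pixel sequence is hypothetical, and a simple 3-point moving average stands in for the unspecified filter; a real implementation would also need to handle the 0°/360° wrap-around:

```python
import math

# Profile as a sequence of adjoining pixel (x, y) co-ordinates, traced
# clockwise; the values are hypothetical.
profile = [(0, 0), (1, 0), (2, 0), (3, 1), (3, 2), (3, 3)]

# Direction from each pixel to the next, in degrees. Because adjoining
# pixels lie on a grid, each value is an integer multiple of 45.
directions = [math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360
              for (x1, y1), (x2, y2) in zip(profile, profile[1:])]

# Numerically filter (3-point moving average, endpoints clamped) to get
# a smoother "bearing" approximating the tangential direction.
bearings = [(directions[max(i - 1, 0)] + directions[i]
             + directions[min(i + 1, len(directions) - 1)]) / 3
            for i in range(len(directions))]
print(directions)  # [0.0, 0.0, 45.0, 90.0, 90.0]
print(bearings)
```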
After the carcass profile is traced, suitable software can be used to analyse this description of the traced carcass profile to identify particular anatomical points, as shown in Figure 3, for example, point A indicates start of rump, point B indicates tail, point C indicates start of butt profile, point D indicates butt profile mid-point, point E indicates end of butt profile, point F indicates end hind leg, point G indicates start foreleg, point H indicates end foreleg, point I indicates start loin, and line J indicates dimension reference line.
The anatomical points can be identified by algorithms derived from experimental data. The identification may specify a particular anatomical point to fall within a prescribed window of pixel numbers with the x or y co-ordinate having a maximum or minimum value and/or a prescribed tangential direction on the profile. From knowledge of the carcass shape and experimental analysis of the profile descriptions, techniques can be developed to reliably identify and select sections of the carcass which have significance for descriptive purposes, for example, butt start and end. The anatomical points may be identified by profile matching as well as by searching for particular combinations of pixel numbers, x,y co-ordinates, and bearing values. The profile matching involves storing a large number of carcass profiles with defined anatomical points as reference profiles. As a new carcass profile is analysed, it is matched by searching to a reference profile which has the smallest area difference between the two profiles. The defined anatomical points on the reference profile are used to identify anatomical points on the carcass profile being analysed.
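Window-based identification of an anatomical point can be sketched as below; the profile co-ordinates, the window bounds and the helper name locate_point are hypothetical illustrations, not experimental data:

```python
# Locate an anatomical point as the pixel with an extreme co-ordinate
# inside a prescribed window of pixel numbers along the traced profile.
profile = [(10, 5), (11, 7), (12, 9), (13, 8), (14, 6), (15, 4)]

def locate_point(profile, window, key=lambda p: p[1], use_max=True):
    """Return (pixel_number, (x, y)) of the extreme pixel in the window.

    window: inclusive (lo, hi) range of pixel numbers to search.
    key:    which co-ordinate to extremise (default: y).
    """
    lo, hi = window
    candidates = list(enumerate(profile))[lo:hi + 1]
    pick = max if use_max else min
    return pick(candidates, key=lambda item: key(item[1]))

# e.g. "the point within pixel numbers 1..4 having maximum y":
n, (x, y) = locate_point(profile, (1, 4))
print(n, x, y)  # 2 12 9
```

The profile-matching alternative described above would instead compare the whole traced profile against stored reference profiles and transfer the reference's labelled points onto the best match.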
With reference to Figure 3, in using the anatomical points, quantitative dimensional measurements can be taken from the profile of the carcass. Experiments involving imaging numerous carcasses, which are then subsequently dissected to reveal detailed carcass composition, can allow the evaluation of a large number of carcass dimensions to predict composition. In this regard, the relevant dimensional measurements can be taken from the carcass which are known to indicate parameters and yield of the carcass. Also dimensional measurements can be taken to determine the shape and size of carcasses or selected sections thereof, for example, the butt region for enabling classification of the AUS-MEAT butt profile. The positions at which the dimensional measurements are taken are related to the anatomical points previously located.
The anatomical points detected on the carcass profile can also be used to identify and delineate the standard AUS-MEAT bruise detecting and scoring areas, as shown in Figure 4, for example, section K indicates shoulder, section L indicates loin, section M indicates rump and section N indicates butt. Such areas may represent sections of the carcass which will attract higher values.
The three numbers describing the colour and intensity of each pixel contained within the carcass image, that is, red, green and blue intensity, may be operated on by equations which classify each pixel as to its likely composition property and material. In this regard, particular algebraic combinations and algebraic operations on the red, green and blue intensity values can provide distinctive values for the various colours. The algebraic combinations can be developed by statistically analysing pixels representing different composition properties to determine the relative distribution of intensity in each colour component. For each pixel therefore, the algebraic combination involving the appropriate relevant proportions of red, green and blue intensities is calculated and the calculated value is checked against predetermined ranges to ascertain the colour category. Evaluation of numerous images can produce the predetermined ranges to enable separation of pixel colours into six general composition property categories: bruises, lean meat, thin fat, yellow fat, creamy fat and white fat or membranes. The carcass image may therefore be transformed into a categorised image which has only six colour classifications corresponding to these composition property categories.
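The per-pixel classification by an algebraic combination of red, green and blue intensities can be sketched as follows. The weights and category ranges below are hypothetical placeholders, since the real values are derived from statistical analysis of dissected carcasses:

```python
# Placeholder category ranges over the combined value (hypothetical).
CATEGORIES = [
    ("bruise",             0,   40),
    ("lean meat",         40,   90),
    ("thin fat",          90,  130),
    ("yellow fat",       130,  170),
    ("creamy fat",       170,  210),
    ("white fat/membrane", 210, 256),
]

def classify(r, g, b, weights=(0.5, 0.3, 0.2)):
    """Compute a weighted algebraic combination of the R, G and B
    intensities and check it against the predetermined ranges."""
    value = weights[0] * r + weights[1] * g + weights[2] * b
    for name, lo, hi in CATEGORIES:
        if lo <= value < hi:
            return name
    return "unclassified"

# 0.5*200 + 0.3*60 + 0.2*30 = 124, which falls in the 90-130 band:
print(classify(200, 60, 30))
```

Applying classify to every pixel of the defined carcass image yields the categorised image with six colour classifications described above.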
By counting the number of pixels in the categorised carcass image belonging to a particular property category and comparing to the total number of pixels in the image, a percentage measure of the particular property covering the total surface area of the carcass can be made, for example, the fat cover percentage.
Similarly, for each of the bruise detecting and scoring areas as shown in Figure 4, that is the shoulder, loin, rump and butt regions, the pixel colour categories are counted to detect the presence of colours in the "bruise" category. If the number of pixels corresponding to a particular surface area exceeds an acceptable threshold value for that particular surface area, then a bruise score, varying from 1 to 7, is assigned to the carcass and applied as a penalty to the evaluation.

Although the present invention has been described and shown in the drawings as relating to whole or split half carcasses and external features thereof, the method and apparatus in accordance with the present invention are equally applicable to cut portions of carcasses and the exposed internal features, for example, quartered beef carcasses and cross-sections thereof.
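The pixel-counting and bruise-scoring steps above can be sketched as below; the score thresholds are hypothetical placeholders, as the actual acceptable threshold values are determined per surface area under the AUS-MEAT scheme:

```python
def category_percentage(categorised, category):
    """Share of pixels in a categorised image belonging to one category."""
    pixels = [c for row in categorised for c in row]
    return 100.0 * pixels.count(category) / len(pixels)

def bruise_score(percent, thresholds=(1, 2, 4, 8, 16, 32, 64)):
    """Return 0 (no penalty) up to 7 as the bruise percentage passes
    successive thresholds (the threshold values are illustrative)."""
    return sum(1 for t in thresholds if percent >= t)

# A tiny hypothetical categorised region:
categorised = [["lean", "fat", "bruise"],
               ["lean", "bruise", "fat"]]
pct = category_percentage(categorised, "bruise")
print(f"{pct:.1f}% -> score {bruise_score(pct)}")  # 33.3% -> score 6
```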
Within the beef processing industry, there has been a practice that whole carcasses are cut firstly into halves by splitting along the spine and then, further along the processing chain after the split half carcasses have been hung or chilled for a predetermined period of time, into quarters by cutting either between the 10th and 11th ribs or between the 12th and 13th ribs. Such cutting practice produces a hindquarter and a forequarter from each half for further cutting and boning processes to produce primal cuts or subsequent sale to wholesalers or retailers.
The quartering process produces carcass portions exposing a cross-sectional view of the composition properties within the carcass portion, namely the rib-eye or longissimus dorsi area, as well as several other muscles, intramuscular fat and the subcutaneous fat layer. Objective measurement and analysis of such features exposed in the cross section have been used to provide an estimation or determination of the quality and quantity of meat products within the carcass, thus evaluating the carcass.
In applying the present invention, the specific parameters such as extent of rib-eye area, intramuscular fat content of the rib-eye, known as marbling, fat colour, meat colour and subcutaneous fat depth, can be objectively measured and determined from the exposed internal features of the quartered carcass cross-sections, for use in setting a value to the carcass portions.

Claims

THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:
1. A method of evaluating a carcass or carcass portion, characterised by a process of object image definition, said object image definition process comprising the steps of:
(a) recording a first image of a background taken from a viewing point;
(b) positioning the carcass or carcass portion in said background;
(c) recording a second image taken from said viewing point of said background with the carcass or carcass portion positioned therein; and
(d) analysing said first and second images to differentiate the carcass or carcass portion image from the background image for providing a defined image of said carcass or carcass portion, thus resolving the location and/or profile of the carcass or carcass portion positioned in the background.
2. A method of evaluating a carcass or carcass portion according to claim 1, characterised by the further step of analysing said defined image to determine qualitative properties of the carcass or carcass portion.
3. A method according to claim 1, characterised in that the differentiation includes the subtraction of the first or second image from the other to provide a difference image, for resolving the location and/or profile of the carcass or carcass portion positioned in the background.
4. A method according to claim 1, characterised in that the recorded first and second images are multi-colour images which are broken down into their respective colour component images.
5. A method according to claim 1, characterised in that the first and second images are recorded repeatedly but in different colour components.
6. A method according to claim 4, characterised in that the differentiation includes the subtraction of the first or second image from the other for each colour component respectively, so to provide two or three component difference images which are then recombined to provide an absolute difference image, for resolving the location and/or profile of the carcass or carcass portion positioned in the background.
7. A method according to claim 5, characterised in that the differentiation includes the subtraction of the first or second image from the other for each colour component respectively, so to provide two or three component difference images which are then recombined to provide an absolute difference image, for resolving the location and/or profile of the carcass or carcass portion positioned in the background.
8. A method according to claim 1, comprising the steps of illuminating the background and the carcass or carcass portion when the first and second images are being taken to be recorded, characterised in arranging the illumination so minimal shadows appear in the recorded images.
9. A method according to claim 8, characterised in establishing shadows having a maximum width of 0.5 cm around the edges of the carcass or carcass portion.
10. A method according to claim 8, characterised in that an illumination source or sources are positioned relative to the viewing point from which the first and second images are taken, such that any shadow cast by the carcass or carcass portion is cast behind the carcass or carcass portion as seen from the viewing point.
11. A method according to claim 1, characterised in that substantially uniform illumination is provided to at least the carcass or carcass portion which will result in substantially the natural colours thereof being recorded in the second image.
12. A method according to any one of claims 1 and 3 to 11, characterised by the further step of analysing said difference and/or defined image to determine qualitative properties of the carcass or carcass portion.
13. A method according to claim 12, characterised in that the further analysis includes tracing the profile of the carcass or carcass portion from the difference and/or defined image and providing a sequence of values representing the profile.
14. A method according to claim 13, characterised in that said sequence of values are the respective locations or co-ordinates of adjoining pixels in the direction of tracing.
15. A method according to claim 13, characterised in that said sequence of values are the respective directions, in degrees of rotation, between adjoining pixels in the direction of tracing.
16. A method according to claim 13, characterised in that said further analysis includes locating anatomical points of the carcass or carcass portion from the traced profile or sequence of values.
17. A method according to claim 14, characterised in that said further analysis includes locating anatomical points of the carcass or carcass portion from the traced profile or sequence of values.
18. A method according to claim 15, characterised in that said further analysis includes locating anatomical points of the carcass or carcass portion from the traced profile or sequence of values.
19. A method according to claim 16, characterised in that said further analysis includes measuring the size of the carcass or carcass portion in terms of dimensional measurements of different sections thereof, said dimensional measurements being taken between preselected anatomical points.
20. A method according to claim 16, characterised in that said further analysis includes identifying and delineating selected sections of the carcass or carcass portion.
21. A method according to claim 12, characterised in that the further analysis includes measuring the intensity of each colour component for each pixel in the defined image of the carcass or carcass portion and performing an algebraic operation on the measured intensity of each colour component to determine the property represented by said each pixel, for determining the composition properties of the carcass or carcass portion.
22. A method according to claim 21, characterised in that said algebraic operation involves the relevant proportions of each colour component.
23. A method according to claim 21, characterised in that said further analysis includes counting the number of pixels representing a selected property and determining the proportion of said selected property, in the carcass or carcass portion or in selected sections thereof.
24. A method according to claim 22, characterised in that said further analysis includes counting the number of pixels representing a selected property and determining the proportion of said selected property, in the carcass or carcass portion or in selected sections thereof.
25. A method according to claim 23, characterised in that the selected property is fat, bruise, lean meat, or membrane.
26. A method according to claim 24, characterised in that the selected property is fat, bruise, lean meat, or membrane.
27. An apparatus for evaluating a carcass or carcass portion, characterised by comprising:
(a) transport system to position the carcass or carcass portion in a background;
(b) image recording system to record a first image of said background taken from a viewing point and a second image taken from said viewing point of said background with the carcass or carcass portion positioned therein; and
(c) data processing system to analyse said first and second images to differentiate the carcass or carcass portion image from the background image for providing a defined image of the carcass or carcass portion, thus resolving the location and/or profile of the carcass or carcass portion positioned in the background.
28. An apparatus according to claim 27, characterised in that said data processing system includes computer means programmed to analyse the defined image to determine qualitative properties of the carcass or carcass portion.
29. An apparatus according to claim 27, characterised in that said image recording system is adapted to record the first and second images in multi-colour which are broken down into their respective colour component images.
30. An apparatus according to claim 27, characterised in that said image recording system is adapted to record the first and second images repeatedly but in different colour components.
31. An apparatus according to claim 27, 29 or 30, characterised in that said data processing system includes image digitiser means to provide digitised images of said first and second images and subtractor means to provide a difference image of said first and second images, or two or three difference images, one for each colour component, which are then recombined to provide an absolute difference image of said first and second images, for resolving the location and/or profile of the carcass or carcass portion positioned in the background.
32. An apparatus according to claim 31, characterised in that said data processing system includes computer means programmed to analyse the difference and/or defined image to determine qualitative properties of the carcass or carcass portion.
33. An apparatus according to claim 27, 28, 29 or 30, comprising lighting means to illuminate the background and the carcass or carcass portion when the first and second images are being taken to be recorded, characterised in that said lighting means is so arranged to establish shadows having a maximum width of 0.5 cm around the edges of the carcass or carcass portion.
34. An apparatus according to claim 27, 28, 29 or 30, wherein said image recording system includes camera means positioned at the viewing point for taking the first and second images, characterised in that lighting means is positioned relative to said camera means such that any shadow cast by the carcass or carcass portion is cast behind the carcass or carcass portion as viewed by the camera means.
35. An apparatus according to claim 27, 28, 29 or 30, characterised in that tungsten lamps are provided to illuminate at least the carcass or carcass portion so to provide substantially uniform illumination.
36. An apparatus according to claim 32, characterised in that said computer means is programmed to trace the profile of the carcass or carcass portion from the difference and/or defined image and to provide a sequence of values representing the profile.
37. An apparatus according to claim 36, characterised in that said computer means is programmed to provide the respective locations or co-ordinates of adjoining pixels in the direction of tracing as said sequence of values.
38. An apparatus according to claim 36, characterised in that said computer means is programmed to provide the respective directions, in degrees of rotation, between adjoining pixels in the direction of tracing as said sequence of values.
39. An apparatus according to claim 36, characterised in that said computer means is programmed to locate anatomical points of the carcass or carcass portion from the traced profile or sequence of values.
40. An apparatus according to claim 37, characterised in that said computer means is programmed to locate anatomical points of the carcass or carcass portion from the traced profile or sequence of values.
41. An apparatus according to claim 38, characterised in that said computer means is programmed to locate anatomical points of the carcass or carcass portion from the traced profile or sequence of values.
42. An apparatus according to claim 39, characterised in that said computer means is programmed to measure the size of the carcass or carcass portion in terms of dimensional measurements of different sections thereof, said dimensional measurements being taken between preselected anatomical points.
43. An apparatus according to claim 39, characterised in that said computer means is programmed to identify and delineate selected sections of the carcass or carcass portion.
44. An apparatus according to claim 32, characterised in that said computer means is programmed to measure the intensity of each colour component for each pixel in the defined image of the carcass or carcass portion and to perform an algebraic operation on the measured intensity of each colour component to determine the property represented by said each pixel, for determining the composition properties of the carcass or carcass portion.
45. An apparatus according to claim 44, characterised in that said computer means is programmed to use the relevant proportions of each colour component in performing said algebraic operation.
46. An apparatus according to claim 44, characterised in that said computer means is programmed to count the number of pixels representing a selected property and to determine the proportion of said selected property, in the carcass or carcass portion or in selected sections thereof.
47. An apparatus according to claim 45, characterised in that said computer means is programmed to count the number of pixels representing a selected property and to determine the proportion of said selected property, in the carcass or carcass portion or in selected sections thereof.
48. An apparatus according to claim 46, characterised in that said computer means is programmed to indicate said selected property as fat, bruise, lean meat or membrane.
49. An apparatus according to claim 47, characterised in that said computer means is programmed to indicate said selected property as fat, bruise, lean meat or membrane.
PCT/AU1991/000091 1990-03-14 1991-03-14 Evaluating carcasses by image analysis and object definition WO1991014180A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AUPJ9090 1990-03-14
AUPJ909090 1990-03-14

Publications (1)

Publication Number Publication Date
WO1991014180A1 true WO1991014180A1 (en) 1991-09-19

Family

ID=3774548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU1991/000091 WO1991014180A1 (en) 1990-03-14 1991-03-14 Evaluating carcasses by image analysis and object definition

Country Status (1)

Country Link
WO (1) WO1991014180A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994000997A1 (en) * 1992-07-03 1994-01-20 Paul Bernard David Newman A quality control and grading system for meat
GB2276001A (en) * 1993-03-11 1994-09-14 British Nuclear Fuels Plc Object length measurement
WO1995001567A1 (en) * 1993-07-02 1995-01-12 Her Majesty The Queen In Right Of Canada, Represented By The Department Of Agriculture And Agri-Food Canada Method for detecting poor meat quality in live animals
EP0636262A1 (en) * 1992-04-13 1995-02-01 Meat Research Corporation Image analysis for meat
DE4408604A1 (en) * 1994-03-08 1995-12-21 Horst Dipl Ing Eger Commercial assessment of carcasses by video camera
EP0730146A2 (en) * 1995-03-01 1996-09-04 Slagteriernes Forskningsinstitut Method for the determination of quality properties of individual carcasses, lamp for the illumination of a carcass and use of the method and lamp
US5595444A (en) * 1993-07-02 1997-01-21 Her Majesty The Queen In Right Of Canada, As Represented By The Department Of Agriculture And Agri-Food Canada Method for detecting poor meat quality in groups of live animals
WO1998008088A1 (en) * 1996-08-23 1998-02-26 Her Majesty The Queen In Right Of Canada, As Represented By The Department Of Agriculture And Agri-Food Canada Method and apparatus for using image analysis to determine meat and carcass characteristics
WO2000011936A1 (en) * 1998-08-31 2000-03-09 Alfa Laval Agri Ab An improved apparatus and method for monitoring an animal related volume
WO2000011940A1 (en) * 1998-08-31 2000-03-09 Alfa Laval Agri Ab An apparatus and a method for monitoring an animal related area
US8147299B2 (en) 2005-02-08 2012-04-03 Cargill, Incorporated Meat sortation
CN104950091A (en) * 2015-06-16 2015-09-30 黄涛 Method for quickly determining intramuscular fat content of fattening duroc landrace big white
WO2017118757A1 (en) * 2016-01-08 2017-07-13 Teknologisk Institut A system and method for determining the presence and/or position of at least one bone in a meat piece
US11803958B1 (en) 2021-10-21 2023-10-31 Triumph Foods Llc Systems and methods for determining muscle fascicle fracturing

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3851156A (en) * 1972-09-05 1974-11-26 Green James E Analysis method and apparatus utilizing color algebra and image processing techniques
GB2000280A (en) * 1977-06-27 1979-01-04 Breitsameter H Method and apparatus for classifying meat
US4226540A (en) * 1977-06-25 1980-10-07 Pfister Gmbh Method for the contactless determination of features of meat quality
US4413279A (en) * 1980-12-17 1983-11-01 Pfister Gmbh Method for contact-free determination of quality features of a test subject of the meat goods class
AU2708188A (en) * 1987-12-22 1989-06-22 Slagteriernes Forskningsinstitut A method and apparatus for the determination of quality properties of individual cattle carcasses
EP0402877A1 (en) * 1989-06-12 1990-12-19 Slagteriernes Forskningsinstitut Method and apparatus for photometric determination of properties of meat pieces

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6104827A (en) * 1992-04-13 2000-08-15 Meat & Livestock Australia Limited Image analysis for meat
US5793879A (en) * 1992-04-13 1998-08-11 Meat Research Corporation Image analysis for meat
EP0636262A1 (en) * 1992-04-13 1995-02-01 Meat Research Corporation Image analysis for meat
EP0636262A4 (en) * 1992-04-13 1995-05-03 Meat Research Corp Image analysis for meat.
WO1994000997A1 (en) * 1992-07-03 1994-01-20 Paul Bernard David Newman A quality control and grading system for meat
GB2276001A (en) * 1993-03-11 1994-09-14 British Nuclear Fuels Plc Object length measurement
FR2702558A1 (en) * 1993-03-11 1994-09-16 British Nuclear Fuels Plc Method for measuring the linear dimension of an object
GB2276001B (en) * 1993-03-11 1997-03-05 British Nuclear Fuels Plc Optical measuring system
US5530728A (en) * 1993-03-11 1996-06-25 British Nuclear Fuels Plc Optical measuring system
WO1995001567A1 (en) * 1993-07-02 1995-01-12 Her Majesty The Queen In Right Of Canada, Represented By The Department Of Agriculture And Agri-Food Canada Method for detecting poor meat quality in live animals
AU673942B2 (en) * 1993-07-02 1996-11-28 Her Majesty The Queen In Right Of Canada As Represented By The Department Of Agriculture And Agri-Food Canada Method for detecting poor meat quality in live animals
US5595444A (en) * 1993-07-02 1997-01-21 Her Majesty The Queen In Right Of Canada, As Represented By The Department Of Agriculture And Agri-Food Canada Method for detecting poor meat quality in groups of live animals
US5458418A (en) * 1993-07-02 1995-10-17 Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Agriculture Method for detecting poor meat quality in live animals
DE4408604A1 (en) * 1994-03-08 1995-12-21 Horst Dipl Ing Eger Commercial assessment of carcasses by video camera
EP0730146A3 (en) * 1995-03-01 1997-03-05 Slagteriernes Forskningsinst Method for the determination of quality properties of individual carcasses, lamp for the illumination of a carcass and use of the method and lamp
EP0730146A2 (en) * 1995-03-01 1996-09-04 Slagteriernes Forskningsinstitut Method for the determination of quality properties of individual carcasses, lamp for the illumination of a carcass and use of the method and lamp
WO1998008088A1 (en) * 1996-08-23 1998-02-26 Her Majesty The Queen In Right Of Canada, As Represented By The Department Of Agriculture And Agri-Food Canada Method and apparatus for using image analysis to determine meat and carcass characteristics
US5944598A (en) * 1996-08-23 1999-08-31 Her Majesty The Queen In Right Of Canada As Represented By The Department Of Agriculture Method and apparatus for using image analysis to determine meat and carcass characteristics
AU722769B2 (en) * 1996-08-23 2000-08-10 Her Majesty The Queen In Right Of Canada As Represented By The Department Of Agriculture And Agri-Food Canada Method and apparatus for using image analysis to determine meat and carcass characteristics
WO2000011936A1 (en) * 1998-08-31 2000-03-09 Alfa Laval Agri Ab An improved apparatus and method for monitoring an animal related volume
WO2000011940A1 (en) * 1998-08-31 2000-03-09 Alfa Laval Agri Ab An apparatus and a method for monitoring an animal related area
US8147299B2 (en) 2005-02-08 2012-04-03 Cargill, Incorporated Meat sortation
US8721405B2 (en) 2005-02-08 2014-05-13 Cargill, Incorporated Meat sortation
US9386781B2 (en) 2005-02-08 2016-07-12 Cargill, Incorporated Meat sortation
CN104950091A (en) * 2015-06-16 2015-09-30 黄涛 Method for quickly determining intramuscular fat content of fattening Duroc × Landrace × Large White pigs
WO2017118757A1 (en) * 2016-01-08 2017-07-13 Teknologisk Institut A system and method for determining the presence and/or position of at least one bone in a meat piece
US11803958B1 (en) 2021-10-21 2023-10-31 Triumph Foods Llc Systems and methods for determining muscle fascicle fracturing

Similar Documents

Publication Publication Date Title
CA2133825C (en) Image analysis for meat
CA2263763C (en) Method and apparatus for using image analysis to determine meat and carcass characteristics
EP1060391B1 (en) Meat color imaging system for palatability and yield prediction
US4939574A (en) Method and apparatus for classifying livestock carcasses and in particular cattle carcasses using a data processing system to determine the properties of the carcass
WO1991014180A1 (en) Evaluating carcasses by image analysis and object definition
US7110572B1 (en) Animal carcase analysis
Liu et al. Pork carcass evaluation with an automated and computerized ultrasonic system
GB2247524A (en) Automatic carcass grading apparatus and method
PT1827116E (en) Data acquisition for classifying slaughtered animal bodies and for their qualitative and quantitative determination
Jia et al. Prediction of lean and fat composition in swine carcasses from ham area measurements with image analysis
ES2145728T1 (en) Procedure for the evaluation of slaughter-animal carcass halves by optical image processing
UA88458C2 (en) Method for qualitative and quantitative assessment of bodies
EP3567551A1 (en) Method of analyzing three-dimensional images for the purpose of animal carcass assessment
CA2466289A1 (en) Method and apparatus for using image analysis to determine meat and carcass characteristics
Carnier et al. Computer image analysis for measuring lean and fatty areas in cross-sectioned dry-cured hams
AU767212B2 (en) Animal carcase analysis
Mendizábal et al. Predicting beef carcass fatness using an image analysis system. Animals 2021, 11, 2897
MXPA99001702A (en) Method and apparatus for using image analysis to determine meat and carcass characteristics

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

NENP Non-entry into the national phase

Ref country code: CA