WO2006011891A1 - System and methods of automatic view recognition of echocardiogram videos using parts-based representation - Google Patents

System and methods of automatic view recognition of echocardiogram videos using parts-based representation

Info

Publication number
WO2006011891A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
int
graph
nodes
image
Prior art date
Application number
PCT/US2004/028722
Other languages
French (fr)
Inventor
Shahram Ebadollahi
Shih-Fu Chang
Henry Wu
Original Assignee
The Trustees Of Columbia University In The City Of New York
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Trustees Of Columbia University In The City Of New York filed Critical The Trustees Of Columbia University In The City Of New York
Publication of WO2006011891A1 publication Critical patent/WO2006011891A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/77Determining position or orientation of objects or cameras using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/422Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation for representing the structure of the pattern or shape of an object therefor
    • G06V10/426Graphical representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/196Recognition using electronic means using sequential comparisons of the image signals with a plurality of references
    • G06V30/1983Syntactic or structural pattern recognition, e.g. symbolic string recognition
    • G06V30/1988Graph matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20072Graph-based image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • This invention relates to video indexing and summarization, and more particularly to systems and methods for automatic identification of the images containing a plurality of objects, using the characteristics, such as the spatial arrangement, of such objects.
  • Echocardiography is a common diagnostic imaging modality that uses ultrasound to capture the structure and function of the heart.
  • a comprehensive evaluation typically entails imaging the heart in several planes (e.g., "aspects") by placing the ultrasound transducer at various locations on the patient's chest wall.
  • the recorded image sequence also referred to herein as "echocardiogram video,” or “echo video”
  • echo video displays the three-dimensional heart from a sequence of different two-dimensional cross sections (also referred to herein as "views").
  • Under different views, different sets of cardiac cavities are visible. The spatial arrangement of those objects is unique to each view.
  • Figure 1 illustrates a representative image, or frame 10, taken from the apical four chamber view.
  • the echo videos consist of a sequence of frames or images which comprise a particular view. Several views are typically recorded consecutively, with no easily discernible demarcation between different views. Such echo videos may be as long as 10 minutes in duration.
  • a problem associated with the currently available techniques for analyzing echo videos is the necessity of reviewing long sections of video in order to locate a particular view which may be relevant to a physician's diagnosis. Indexing the echo videos provides more efficient access to the video content.
  • U.S. Patent No. 6,514,207 to Ebadollahi et al. provides a technique for indexing echo videos, and is incorporated by reference in its entirety herein.
  • a further useful technique is to provide automatic recognition of various cardiac objects and their spatial arrangement.
  • the use of the spatial arrangement of the chambers of the heart in order to identify the appropriate view was proven effective by Tagare et al. (see, e.g., H.D. Tagare, F.M. Vos, C.C. Jaffe, and J.S. Duncan, "Arrangement: A Spatial Relation Between Parts For Evaluating Similarity of Tomographic Section," IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(9):880-893, September 1995, which is incorporated by reference in its entirety herein).
  • Tagare et al. found similar image planes in tomographic images.
  • a first challenge concerns the variations in the cardiac structures appearing in different portions of the echo videos. For example, the appearance of the images captured under the same view of the heart among different patients is subject to a high degree of variations. In addition, images for the same patient taken at different times are also subject to variation. This variation arises for at least the following reasons: First, different patients may have slightly different heart structures based on their physical characteristics. Second, the absence of natural markers for placing the ultrasound transducer on the patient's body for imaging makes it difficult for a technician to identically replicate the same view on the same patient taken at two different times.
  • a second challenge facing those skilled in the art is the image quality of the echo videos being analyzed. Because echo videos are the result of the ultrasound interrogation of the structure of the heart, the images may be highly degraded by multiplicative noise. Therefore, techniques for automatic detection of the cardiac cavities may result in the failure to detect cavities (also referred to herein as "occlusion” of the images) and/or the “detection” of false cavities (also referred to herein as "clutter”). In addition, there is a great degree of uncertainty in the properties of the chambers in the "constellation,” or grouping of cardiac structures or objects, which uncertainty is a result of artifacts such as those mentioned above.
  • a parts-based representation addresses some of the uncertainty in the features of image objects or parts, accommodating the variation in the properties of such objects or parts in the echo video.
  • This approach identifies an instance of an object when correct parts are observed in the correct spatial configuration (see, e.g., M. Weber, M. Welling, and P. Perona, "Unsupervised Learning of Models For Recognition," 6th European Conference on Computer Vision (ECCV '00), June 2000).
  • the features (parts) are distinct, and specific detectors are available for each of the different types of parts.
  • the goal is then to choose a set of foreground parts in the correct spatial configuration in order to be able to identify the objects of interest.
  • Another method that is used to model parts or structures is referred to as "pictorial structures."
  • the parts are described using appearance models, and their constellation is described using a spring-like model (see, e.g., P. Felzenszwalb and D. Huttenlocher, "Efficient Matching of Pictorial Structures," IEEE Conference on Computer Vision and Pattern Recognition, pages 66-73, 2000; M. Burl, M. Weber, and P. Perona, "A Probabilistic Approach to Object Recognition Using Local Photometry and Global Geometry,” European Conference on Computer Vision, Volume 2, pages 628-641, 1998).
  • the image is subsequently searched for the optimal position of the parts based on their match with their appearance model and their spatial configuration.
  • Boshra and Bhanu addressed the issues of recognition performance and its dependence on object model similarity and the effects of distortions such as occlusion and clutter (see, e.g., M. Boshra and B. Bhanu, "Predicting Performance of Object Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(9):956-969, September 2000). They also obtained bounds on recognition performance in the presence of the distortion factors for the case where the uncertainties of the parts' properties were expressed using uniform distributions and the correspondence between the model and the scene was found using a voting scheme.
  • the fusion framework is employed to resolve ambiguities and achieve correct recognition.
  • a method for labeling video images includes providing a representation of an image comprising nodes and edges corresponding to objects in the image.
  • a representation of a model for each label in a set of labels is provided.
  • Another step in the method includes determining an optimal mapping of the representation of the image with the representation of each of said respective models.
  • a confidence measure is determined, in which the confidence measure comprises a set of confidence values, such that each confidence value corresponds to the optimal mapping of the representation of the image with the representation of each respective model.
  • a further step includes training a classifier, and classifying the image by applying the learned classifier to the confidence measure.
  • the step of providing a representation of the image may include determining an attribution-relational graph of the image.
  • the step of providing a representation of a model for each label may include determining an attribution-relational graph of the model.
  • the step of determining an optimal mapping of the representation of the image with the representation of each of said respective models may include determining the optimal configuration of the set of random variables, i.e., a random field. Such optimal configuration of the random field may be a configuration that results in the maximum posterior probability, or equivalently, which minimizes the posterior energy.
  • the step of determining a confidence measure may include determining an energy vector comprising a set of energy values. In an exemplary embodiment, the step of training a classifier may include training a discriminative classifier, such as a Support Vector Machine classifier.
  • the step of classifying the image by applying the learned classifier to the confidence measure comprises classifying the confidence measure using the learned Support Vector Machine classifier.
  • the step of providing a representation of an image comprising nodes and edges corresponding to objects in the image may include providing a representation of an echocardiogram image comprising nodes and edges corresponding to cardiac chambers in the image. In the exemplary embodiment, the label may refer to a particular view, such as the Parasternal Long Axis view, the Parasternal Short Axis view, and the Apical view.
  • the step of providing a representation of a model for each label in a set of labels may include providing a representation of an echocardiogram image comprising providing a representation of a model for each view label in a set of view labels.
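The steps summarized above can be sketched as a toy pipeline: build a graph-like representation of the detected objects, score it against each view model to obtain a confidence (energy) vector, and pick a label from that vector. This is a hedged illustration only; the function names, the simplistic energy formula, and the data are assumptions, not the patent's actual implementation (which uses MRF energies and an SVM classifier, described later).

```python
# Illustrative sketch of the overall labeling pipeline. All names and the
# toy energy formula are assumptions; the patent minimizes an MRF posterior
# energy per view model and fuses the energies with an SVM.

def image_to_graph(detections):
    """Build a toy parts representation: nodes are detected chambers,
    edges fully connect every pair of nodes."""
    nodes = list(detections)
    edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))]
    return {"nodes": nodes, "edges": edges}

def match_energy(graph, model):
    """Stand-in confidence value: lower energy = better fit of the
    observed constellation to a view model."""
    n_diff = abs(len(graph["nodes"]) - model["n_parts"])
    areas = [n["area"] for n in graph["nodes"]]
    a_diff = abs(sum(areas) / len(areas) - model["mean_area"])
    return n_diff + 0.01 * a_diff

def energy_vector(graph, models):
    """One confidence value per view model, as in the patent's energy vector."""
    return [match_energy(graph, m) for m in models]

# toy usage: four detected chambers, two candidate view models
detections = [{"area": 100.0}, {"area": 120.0}, {"area": 90.0}, {"area": 110.0}]
g = image_to_graph(detections)
models = [{"n_parts": 2, "mean_area": 80.0},    # e.g. a two-chamber view
          {"n_parts": 4, "mean_area": 105.0}]   # e.g. a four-chamber view
ev = energy_vector(g, models)
best = min(range(len(ev)), key=ev.__getitem__)  # lowest-energy model wins
```

In the patent the final decision is not this simple argmin: the whole energy vector is passed to a learned classifier, which is what disambiguates structurally similar views.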
  • FIG. 1 is a representative frame of an echo video depicting several cardiac objects.
  • FIG. 2 depicts several representative frames of echo video depicting a plurality of cardiac objects as identified in accordance with the present invention.
  • FIG. 3 is a representative frame of an echo video undergoing analysis in accordance with the present invention.
  • FIG. 4 is a flowchart illustrating a plurality of steps in accordance with the present invention.
  • FIG. 5 is a series of plots representing a statistical analysis in accordance with the invention.
  • FIG. 6 is a series of plots representing a statistical analysis in accordance with the invention.
  • image acquisition equipment which may include well-known echocardiogram acquisition equipment, such as ultrasound equipment, is used for acquiring the images of the patient's cardiac structure.
  • Video capture and storage equipment is typically used, which may include a video capture card to digitize the analog video and video storage, such as a hard drive or other storage medium, to store the resulting video.
  • Each temporal segment of the echo video, corresponding to a view of the heart, is represented by a set of representative frames or key-frames which are sampled from the content at semantically meaningful time instances (see, e.g., U.S. Patent 6,514,207 to Ebadollahi et al., and S. Ebadollahi, S.-F. Chang, and H. Wu, "Echocardiogram Videos: Summarization, Temporal Segmentation and Browsing," IEEE International Conference on Image Processing (ICIP '02), Volume 1, pages 613-616, September 2002, which are incorporated by reference in their entirety herein).
  • As the heart passes through different phases of activity, the parts, their spatial relationships, and their properties change in each of the views: chambers change size, change position, or disappear from the viewing plane.
  • Such structural variability may introduce additional complications to the automatic recognition process. In the exemplary embodiment, the key-frames being analyzed are all taken from the same state to reduce this variability in the image data.
  • the selection of each key-frame in the exemplary embodiment corresponds to the structural state where the heart is most expanded, i.e., end-diastole. According to this embodiment, images of each view are taken only from this structurally unique state. It is understood that the end-diastole is but one representative structural state, and that other cardiac states may also be analyzed.
  • a generic cardiac chamber detector may be used on the images to locate the chambers.
  • An exemplary approach for cardiac chamber segmentation is that proposed by Bailes (see, e.g., D.R. Bailes, "The Use of the Gray Level SAT to Find the Salient Cavities in Echocardiograms," Journal of Visual Communication and Image Representation, 7(2):169-195, June 1996, which is incorporated by reference in its entirety herein), which is based on the Gray-Level Symmetric Axis Transform (GSAT) to detect the cardiac cavities.
  • Figure 2 shows the application of the GSAT method to key-frames taken from a view of the echo video.
  • Figure 2 illustrates four key-frames 20, 22, 24, and 26.
  • the chambers, which are automatically detected by this process, are depicted as solid closed shapes or blobs 28.
  • the boundary lines of the actual location of the chambers are illustrated by solid outlines 30.
  • Errors in the chamber detection process are illustrated in key-frame 22, which illustrates a false positive, i.e., a chamber 32 was "detected" by the automatic detection technique, whereas the chamber does not, in fact, exist.
  • Key-frame 24 depicts an error in the recognition process, i.e., a missing chamber, in which an actual cardiac chamber 34 was not detected at all. The presence of such false and missed chambers is discussed in greater detail hereinbelow.
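The patent relies on Bailes's GSAT method for cavity detection; as a much simpler, hedged stand-in, the sketch below finds dark "cavity" blobs by thresholding and flood fill. It illustrates only the generic idea of a chamber detector producing blobs (some possibly false, some possibly missed), not the GSAT algorithm itself.

```python
# Hedged stand-in for a generic cardiac cavity detector: cavities appear
# as dark blobs in ultrasound images, so threshold and collect 4-connected
# components. Purely illustrative; the patent uses the gray-level SAT.

def find_cavities(img, thresh=50):
    """Return connected components (lists of (y, x) pixels) of pixels
    darker than `thresh`."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if img[y][x] < thresh and not seen[y][x]:
                stack, blob = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           img[ny][nx] < thresh and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

# toy 5x6 "image": two dark cavities (0) on a bright background (200)
img = [[200, 200, 200, 200, 200, 200],
       [200,   0,   0, 200, 200, 200],
       [200,   0,   0, 200,   0, 200],
       [200, 200, 200, 200,   0, 200],
       [200, 200, 200, 200, 200, 200]]
blobs = find_cavities(img)
```

Each returned blob would then contribute a node (with properties such as location and area) to the constellation analyzed in the following sections.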
  • FIG. 4 illustrates a pictorial representation of an attributed relational structure for the constellation of parts of an image 50 taken from one of the views.
  • the constellation of cardiac cavities for the apical four chamber view is depicted.
  • Each node 52 in such relational structure represents a cavity or chamber of the heart.
  • the attributes of the nodes 52 are the properties of the cavities.
  • the edges 54 of the relational structure are depicted as lines connecting the various nodes 52.
  • the attributes of the edges 54 may be, e.g., the spatial relationships between two neighboring nodes 52.
  • All the nodes 52 are allowed to be related to each other in the relational structure.
  • N represents the fully connected neighborhood structure in the relational structure.
  • the model of a view is represented as the attributed graph.
  • N' denotes the full connectivity between the parts in the model.
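The attributed relational structure described above can be sketched concretely: nodes carry per-chamber properties (location, area, directionality) and the fully connected edge set carries pairwise spatial relations (distance, angle), matching the neighborhood structure N. The data structure and toy values below are illustrative assumptions, not the patent's implementation.

```python
import math

# Hedged sketch of an attributed relational graph (ARG) for a chamber
# constellation. Node attributes: location, area, directionality.
# Edge attributes: pairwise distance and angle. Fully connected.

def build_arg(chambers):
    """chambers: list of dicts with keys 'x', 'y', 'area', 'direction'."""
    nodes = [{"location": (c["x"], c["y"]),
              "area": c["area"],
              "directionality": c["direction"]} for c in chambers]
    edges = {}
    for i in range(len(chambers)):
        for j in range(i + 1, len(chambers)):   # every pair: neighborhood N
            dx = chambers[j]["x"] - chambers[i]["x"]
            dy = chambers[j]["y"] - chambers[i]["y"]
            edges[(i, j)] = {"distance": math.hypot(dx, dy),
                             "angle": math.atan2(dy, dx)}
    return {"nodes": nodes, "edges": edges}

# toy three-chamber constellation
chambers = [{"x": 0.0, "y": 0.0, "area": 120.0, "direction": 0.3},
            {"x": 3.0, "y": 4.0, "area": 80.0, "direction": 1.1},
            {"x": 6.0, "y": 0.0, "area": 95.0, "direction": 0.7}]
arg = build_arg(chambers)
```

The same structure serves both the observed constellation and the view models; matching the two is the correspondence problem posed next.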
  • the step of finding the correspondence between the two relational structures expressing (1) the observed constellation and (2) the model of a view may be posed as a search for the optimal configuration of a random field defined on the parts of the observed constellation.
  • the optimal configuration is the one that minimizes the overall posterior energy of the field, as will be described in greater detail herein.
  • An exemplary statistical modeling method is the Markov Random Field ("MRF") (see, e.g., S.Z. Li, Markov Random Field Modeling in Image Analysis, Springer-Verlag, 2001, which is incorporated by reference in its entirety herein), in which one can efficiently express the contextual constraints using locally defined dependencies between the interacting entities. Through the Hammersley-Clifford theorem, these local dependencies lead to the encapsulation of the joint global characteristics.
  • MRF Markov Random Field
  • the task of labeling the observed constellation using the model of a certain view may be posed as finding the best mapping between the nodes of the two attributed graphs: f : G → G'.
  • a set of random variables is defined
  • the posterior energy U(f|d) is the sum of the prior energy U(f) and the likelihood energy U(d|f).
  • the prior energy of a configuration is defined as (note that we only consider cliques of size up to two): U(f) = Σ_i V1(f_i) + Σ_(i,i') V2(f_i, f_i'), where the single-site and pair-site clique potentials depend on prior parameters v1 and v2, which are positive values.
  • the pair-site prior potential encourages the sites to have distinct labels. This is derived from the fact that in the constellation of the heart chambers a chamber cannot appear more than once.
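The two-clique prior energy can be sketched directly from this description: a single-site term penalizing NULL (false-part) labels with v1, and a pair-site term penalizing duplicate non-NULL labels with v2. The exact potential functions in the patent may differ; this is a hedged illustration of the structure only.

```python
# Hedged sketch of the size-<=2 clique prior energy. v1 penalizes a NULL
# (false part) label at a site; v2 penalizes two sites sharing the same
# non-NULL label, since a chamber cannot appear more than once.
# The actual potentials in the patent may differ in form.

NULL = None  # label for a part judged to be a false detection

def prior_energy(labels, v1=1.0, v2=2.0):
    u = 0.0
    for lab in labels:                       # single-site cliques
        if lab is NULL:
            u += v1
    n = len(labels)
    for i in range(n):                       # pair-site cliques
        for j in range(i + 1, n):
            if labels[i] is not NULL and labels[i] == labels[j]:
                u += v2                      # duplicate chamber label
    return u

# a consistent labeling has zero prior energy;
# a duplicate plus a NULL label incurs v2 + v1
clean = prior_energy(["LV", "RV", "LA", "RA"])
penalized = prior_energy(["LV", "LV", NULL, "RA"])
```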
  • the likelihood energy is defined as: U(d|f) = Σ_i Σ_(k=1..K1) V(d1_k(i) | f_i) + Σ_(i,i') Σ_(k=1..K2) V(d2_k(i,i') | f_i, f_i') (4), where K1 and K2 are the total numbers of unary and binary features defined on single and pair-site cliques.
  • location, area, and directionality are used as the properties of each individual part, and distance and angle between a pair of parts as their joint properties.
  • the properties of the single and double site cliques are considered to be distributed according to Gaussian distributions.
  • the maximum likelihood estimate of the parameters of the potential functions may be obtained from the manually labeled training data.
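The maximum-likelihood fitting of the Gaussian clique potentials can be sketched for a single unary feature such as chamber area. The feature choice and training values below are illustrative assumptions; the negative log-likelihood is shown because it is the natural clique energy for a Gaussian model.

```python
import math

# Hedged sketch: ML Gaussian fit for one unary feature (e.g. the area of
# a given chamber label) from manually labeled training constellations.
# Data values are toy numbers.

def fit_gaussian(samples):
    """Return (mean, variance): ML estimates for a 1-D Gaussian."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / n   # ML uses 1/n, not 1/(n-1)
    return mu, var

def gaussian_energy(x, mu, var):
    """Negative log-likelihood, usable as a single-site clique energy."""
    return 0.5 * ((x - mu) ** 2 / var + math.log(2 * math.pi * var))

# toy "left ventricle" areas taken from hand-labeled training frames
areas_lv = [118.0, 125.0, 121.0, 130.0, 126.0]
mu, var = fit_gaussian(areas_lv)
```

An observed area close to the training mean then yields a low likelihood energy, and an outlying area a high one, which is what drives the labeling toward plausible assignments.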
  • the bounds on the prior parameters v1 and v2 are estimated such that the training data are embedded at the minimum energy configurations of the configuration space.
  • Equations 6 and 7 illustrate the conditions to obtain the lower and the upper bounds for the prior parameters.
  • The indicator term in those conditions is zero if the label assigned to the previously NULL site is a label that is missing from the observed constellation, and is one if that label already exists in the constellation; in the latter case, a penalty is incurred by having two sites with the same label.
  • This estimation of the bounds of the prior parameters is applied to each constellation in the training data, and the common value over all those constellations is taken.
  • the value of the local energy resulting from assigning the label y_i to a site i is defined as the sum of the potentials of the cliques containing that site.
  • the method described herein is used to determine the correct view-label V* for a constellation C obtained from a test image I.
  • the optimal labeling of the chambers in the observed constellation according to each of the models is inferred by minimizing the posterior energy of the configuration of the labels such that the resulting configuration is both consistent with the prior knowledge and the evidence.
  • the Highest Confidence First ("HCF") method proposed by Chou and Brown may be used for this minimization (see, e.g., P.B. Chou and C.M. Brown, "The Theory and Practice of Bayesian Image Labeling," International Journal of Computer Vision, 4(3):185-210, 1990).
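The posterior-energy minimization can be sketched with iterated conditional modes (ICM), a simple greedy relaxation used here as a stand-in for the HCF optimizer cited above: each site is repeatedly given the label that lowers the total (unary plus pair-site) energy until no single-site change helps. The unary energies and the duplicate-label penalty are toy values.

```python
# Hedged sketch: ICM-style labeling as a stand-in for HCF. "NULL" marks
# a part judged false. Unary energies are toy stand-ins for the Gaussian
# likelihood energies; the pair-site term penalizes duplicate labels.

def total_energy(labels, unary, v2=2.0):
    u = sum(unary[i][lab] for i, lab in enumerate(labels))
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j] and labels[i] != "NULL":
                u += v2          # a chamber cannot appear twice
    return u

def icm(unary, label_set, iters=10):
    labels = ["NULL"] * len(unary)
    for _ in range(iters):
        changed = False
        for i in range(len(labels)):
            best = min(label_set,
                       key=lambda lab: total_energy(
                           labels[:i] + [lab] + labels[i + 1:], unary))
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:          # converged: no single-site move helps
            break
    return labels

# toy unary energies for two detected parts (lower = better evidence)
unary = [{"LV": 0.1, "RV": 2.0, "NULL": 1.0},
         {"LV": 1.8, "RV": 0.2, "NULL": 1.0}]
labels = icm(unary, ["LV", "RV", "NULL"])
```

ICM only finds a local minimum; HCF's site-ordering by confidence is one way the patent's cited optimizer improves on this kind of greedy sweep.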
  • the optimal labeling of its parts may be obtained according to view-model M 1 62, M 2 64, M 3 66, ... M k 68.
  • the vector 70 of the energies assigned to the observed constellation at the optimal configuration of the random field according to each view-model is also illustrated.
  • the energy vector quantifies how each model "sees" the constellation C.
  • the constellation C is taken to be complete, meaning that it does not have any false parts and there are no chambers missing from it.
  • A Support Vector Machine ("SVM") classifier uses the vector 70 of the energies assigned to the observed constellation at the optimal configuration of the random field according to the different view-models to classify the constellation into the correct view-class.
  • the assessments of the observed constellation by the different view-models are fused in order to correctly identify its originating view.
  • the SVM classifier learns the decision boundaries to correctly classify the instances of the different views in this space. This approach is similar in nature to the one proposed by Selinger and Nelson (see, e.g., A. Selinger and R.C. Nelson).
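The fusion step — classifying the per-model energy vector — can be sketched with a dependency-free nearest-centroid classifier standing in for the SVM. The training energy vectors are toy data, and nearest-centroid is an assumption made here only to keep the sketch self-contained; the patent trains a discriminative SVM in exactly this space.

```python
# Hedged sketch: classify energy vectors (one entry per view-model) with
# a nearest-centroid rule, as a simple stand-in for the patent's SVM.

def fit_centroids(X, y):
    """Group training vectors by label and average them per coordinate."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {lab: [sum(col) / len(col) for col in zip(*vecs)]
            for lab, vecs in groups.items()}

def predict(cents, x):
    """Return the label whose centroid is closest in the energy space."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda lab: dist2(cents[lab], x))

# toy energy vectors: entry k is the energy assigned by view-model k
X = [[0.2, 3.1, 2.8], [0.4, 2.9, 3.0],    # constellations from view 1
     [3.0, 0.3, 2.7], [2.8, 0.5, 2.9]]    # constellations from view 2
y = [1, 1, 2, 2]
cents = fit_centroids(X, y)
view = predict(cents, [0.3, 3.0, 2.9])
```

The point of operating on the whole vector, as the next bullets illustrate, is that a view's own model may not separate two similar views, while another model's energies can.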
  • graph 80 shows the energies (E4) 81 assigned to constellations taken from view 9 (C9) 82 and view 10 (C10) 84 by the model of view 4 (M4).
  • Graph 86 shows the energies (E9) 87 assigned to constellations taken from views 9 (C9) 88 and 10 (C10) 90 by the model of view 9 (M9).
  • Graph 92 shows the energies (E10) 93 assigned to constellations taken from views 9 (C9) 94 and 10 (C10) 96 by the model of view 10 (M10). It is apparent from the graphs 80, 86, and 92 that neither the model of view 9 (M9) (as depicted in graph 86) nor the model of view 10 (M10) (as depicted in graph 92) could be used to distinguish the constellations.
  • the model of view 4 (M4) (as depicted in graph 80) could, however, be used to distinguish the constellations taken from those two views, as indicated by the discontinuity 98 in the graph.
  • the model of view 4 (M4) "sees" the constellations taken from views 9 and 10 differently.
  • a multi-hypothesis testing case is formed.
  • the observed constellation was taken from that view, i.e., C ∈ V_k, where V_k has the model M_k.
  • the constellation C is labeled according to the model M_k, and the optimal label of the parts is found. Then the parts labeled as false according to this labeling are deleted, and the energy vector for the filtered constellation is obtained.
  • If the observed constellation C was truly taken from view V_k, it would be correctly classified by the classifier learned for the case where only missing parts were allowed.
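The multi-hypothesis test just described can be sketched as a loop over hypothesized views: label with model k, drop the parts labeled false, recompute the energy vector on the filtered constellation, and accept hypothesis k only if the classifier then votes for view k. The model stubs and classifier below are toy assumptions serving only to make the control flow concrete.

```python
# Hedged sketch of the multi-hypothesis test. Each "model" here is a toy
# stub: `label_with_model` returns per-part labels ("NULL" = false part),
# and `energy` scores a filtered constellation. Both are assumptions.

def multi_hypothesis(constellation, models, classify):
    accepted = []
    for k, model in enumerate(models):
        labels = model["label_with_model"](constellation)
        # delete parts labeled false under hypothesis k
        filtered = [p for p, lab in zip(constellation, labels) if lab != "NULL"]
        # recompute the energy vector on the filtered constellation
        ev = [m["energy"](filtered) for m in models]
        if classify(ev) == k:      # hypothesis k is self-consistent
            accepted.append(k)
    return accepted

models = [
    {"label_with_model": lambda c: ["P"] * len(c),          # keeps every part
     "energy": lambda c: abs(len(c) - 2)},                  # expects 2 parts
    {"label_with_model": lambda c: ["P"] * 3 + ["NULL"] * (len(c) - 3),
     "energy": lambda c: abs(len(c) - 3)},                  # expects 3 parts
]
classify = lambda ev: min(range(len(ev)), key=ev.__getitem__)

constellation = ["a", "b", "c"]   # three detected parts
hyps = multi_hypothesis(constellation, models, classify)
```

Here only the second hypothesis survives its own consistency check, mirroring the patent's logic that a true view's model filters its constellation into something its classifier recognizes.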
  • the system includes image acquisition equipment.
  • the image acquisition equipment includes well-known echocardiogram acquisition equipment, such as ultrasound equipment, for acquiring the images of the patient's cardiac structure.
  • the resulting images are typically in analog videotape form.
  • Video capture and storage equipment is typically used, which may include a video capture card to digitize the analog video and video storage, such as a hard drive or other storage medium, to store the resulting video.
  • A commercial video capture card was used for the digitization of the analog echocardiogram videos. All videos were digitized into MPEG-1 format at 30 frames per second and with a frame resolution of 352x240.
  • MPEG-1 videos were decoded into PPM (uncompressed) image format prior to processing. Videos and uncompressed images were stored on a computer hard disk.
  • An echo video in compressed format typically takes about 200 to 300 MBytes.
  • the echocardiogram video may be directly captured in digital format, thereby eliminating the need for an intermediate digitizing step.
  • Detection of objects, e.g., cardiac chambers, in the images may be performed by a processor or processing subsection, such as a CPU or processing chip. In the exemplary embodiment, such processing is performed according to the routine "chamber_segment" attached hereto.
  • Modeling of the various views may likewise be performed by a processor or processing subsection, such as a CPU or processing chip.
  • training data is obtained by manually annotating the segmented echo frames (e.g., in the exemplary embodiment, only key-frames which are sampled from the content of the echocardiogram at the End-Diastole state of the heart).
  • all the training data may be annotated.
  • the code in the directory "view_rec" should be used for learning the MRF model and performing the view recognition.
  • processors for performing the functions listed above and discussed in greater detail herein may be located on a single CPU or PC, or may be distributed among several locations by a LAN, intranet, the internet, wireless connection, or other arrangement known in the art. For example, one may capture the videos at one location, perform chamber segmentation at another location, and perform view recognition at yet another location.
  • the echos of the normal heart with a total of 2657 key-frames are used both for training and testing (in "leave-one-out” fashion, as discussed below), while those of the abnormal echo videos with a total of 552 key-frames are only used for testing purposes. Every key-frame in the data set is manually labeled by an expert.
  • There are K = 10 different views present in these echo videos, taken from the Parasternal Long Axis (2 views), Parasternal Short Axis (4 views), and Apical (4 views) angles of imaging.
  • the experiments were conducted in a leave-one-out scheme, such that during each round (15 rounds total), one echo video is left out as the test data and the models are learned from the remaining 14 echos.
  • the priors and the parameters of the Gaussian distributions of the single and double cliques are learned from the hand-labeled training data.
  • the SVM classifiers are also learned for each round of experiment. The experiments and results are discussed for four different cases below.
  • a multi-class SVM classifier is then trained for each round.
  • the energy vectors of the test key-frames are then classified using the learned multi-class SVM classifier.
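The leave-one-out protocol used in these experiments can be sketched generically: in each round one echo video is held out for testing and the models are learned from the rest. The data and the train/evaluate stubs below are illustrative assumptions, not the patent's actual training code.

```python
# Hedged sketch of the leave-one-out experimental protocol: each round
# holds one video out, trains on the remainder, and scores the held-out
# item. `train` and `evaluate` are toy stubs.

def leave_one_out(videos, train, evaluate):
    scores = []
    for i in range(len(videos)):
        held_out = videos[i]
        training = videos[:i] + videos[i + 1:]   # everything else
        model = train(training)
        scores.append(evaluate(model, held_out))
    return scores

# toy example: the "model" is just the mean of the training values,
# and the score is the absolute error on the held-out value
videos = [1.0, 2.0, 3.0, 4.0, 5.0]
train = lambda vs: sum(vs) / len(vs)
evaluate = lambda m, v: abs(m - v)
scores = leave_one_out(videos, train, evaluate)
```

In the patent's experiments the same structure holds with 15 echo videos, with both the MRF parameters and the SVM classifiers relearned in every round.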
  • the top two images in Figure 6 show the Hinton diagram of the confusion matrix for this case without considering clinical similarities (image 100) and with considering clinical similarities (image 110).
  • a confusion matrix is an array showing relationships between true and predicted classes. Entries on the diagonal of the matrix count the correct calls. Entries off the diagonal count the misclassifications.
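The confusion matrix just described is straightforward to compute; the sketch below uses toy true/predicted view labels (the real experiments summarize thousands of key-frames over 10 views).

```python
# Confusion matrix as described above: rows index the true class, columns
# the predicted class; diagonal entries count correct calls. Labels below
# are toy data.

def confusion_matrix(true, pred, n_classes):
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(true, pred):
        m[t][p] += 1
    return m

true = [0, 0, 1, 1, 2, 2]
pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(true, pred, 3)
accuracy = sum(cm[i][i] for i in range(3)) / len(true)
```

A Hinton diagram then simply draws each entry of this matrix as a square whose size reflects the count, making the diagonal-vs-off-diagonal structure visible at a glance.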
  • the Hinton diagram provides a qualitative display of the values in a data matrix (normally a weight matrix).
  • the multi-hypothesis approach may be used as explained hereinabove, and the same SVM classifiers learned for the complete case may be used.
  • the Hinton diagrams of the confusion matrices for the cases without taking into account the clinical similarities and with taking into account the clinical similarities are shown in the middle row of Figure 6 as diagrams 120 and 130, respectively.
  • the average precisions of 54.1% and 74.34% are obtained for the original and the clinically similar cases, respectively.
  • CASE 3: Normal Echos, Constellations with Missed and False Chambers.
  • the SVM classifiers are learned for each round of experiment using constellations with their false chambers removed. Then, the multi-hypothesis approach is used to classify each real constellation (in which both false chambers and missed chambers exist).
  • the techniques described herein are used to detect an abnormality or disease in the patient's heart function.
  • This approach is particularly useful where the abnormality is discernible in a single frame or image.
  • the particular frame may be a key- frame, such as the end-diastole state.
  • A relational structure or attributed graph of the image of the abnormal heart is expressed. In addition, a relational structure is expressed for a model which is trained on images of hearts having the particular disease or abnormality. Additional models may be expressed for normal heart images or hearts having different abnormalities.
  • a set of random variables, i.e., a Markov Random Field, is defined on the cardiac objects in the image.
  • an optimal mapping is obtained between the attributed graph of the image of the abnormal heart and the attributed graph of the model of the abnormality.
  • A confidence value, e.g., an energy value, is determined from the optimal mapping, e.g., the optimal configuration of the random field defined on the cardiac objects in the image when viewed by the model for the abnormality.
  • the disambiguation techniques discussed herein, e.g., the use of SVM classifiers, are used to determine whether the abnormality is a proper label for the view being analyzed.
  • a parts-based representation for the constellation of the heart chambers in the different views was used and Markov Random Fields were used to model such representations.
  • the collection of the energies obtained from comparing a test image to the models of the different views was then used as the input to an SVM classifier to find the view label. This fusion scheme helped to disambiguate the constellations taken from the views with structural similarities.
  • more complex distributions may be used to represent the properties of the parts.
  • the training of the MRF model of the constellations of the cardiac chambers and the SVM classifiers were performed separately.
  • the training of the MRF model and the SVM classifiers are combined to obtain better results.
  • the models of the different views of the echocardiogram video were learned from the key-frames extracted from the content of the videos.
  • the spatio-temporal characteristics of the constellation of the chambers of the heart are used, which may recover the different phases of activity of the heart during each cycle of its activity.
  • the invention provides the capability to automatically index and annotate the content of the echocardiogram videos by identifying cardiac objects and identifying particular views. This provides efficient access to the desired content of the echocardiogram videos, which can be a short sequence in an otherwise lengthy video.
  • random access to the desired content of the videos is provided, which allows efficient browsing of the video content.
  • the capability of annotating the cardiac objects identified herein permits the augmentation of videos with contextual information, which is useful in categorizing and archiving the video.
  • An exemplary embodiment of the invention is a teaching application in which the techniques described herein are employed to index and annotate the video images.
  • Multimedia files are created thereby having annotations that provide additional information on the various cardiac objects and views. These multimedia files may be accumulated and organized into libraries or files to be reviewed by students, with or without teaching supervision, to gain a better understanding of the physiological structure and functioning of the heart, as well as the techniques for recognizing and diagnosing pathology.
  • the system is embedded in the image acquisition equipment itself, e.g., ultrasound machines, being used to capture the echocardiogram images.
  • Such an application enables the indexing and/or annotation of the echocardiogram images simultaneously with, or shortly after, the acquisition of the images, which allows for a more rapid diagnosis.
  • the system is used to index and select different segments of the video for tele-medicine applications, such as the Picture Archiving & Communication System ("PACS"), which is a combination of hardware and software that digitally stores and manages medical images and information. Using PACS, medical professionals can access these radiological images.
  • PACS Picture Archiving & Communication System
  • the system and techniques described herein provide the capability to provide this ancillary data automatically.
  • the system and techniques described herein are used in computer-assisted diagnostics.
  • the libraries of indexed and annotated echocardiogram images, which have been automatically indexed at the view and cardiac object levels, would be a valuable resource for a physician diagnosing a new case.
  • the echo video of the new case may be automatically indexed so that certain key frames may be quickly accessed.
  • the physician may consult such library of previously diagnosed cases and compare the structure and functioning of the heart for the new case with the archived, annotated cases.
  • [The remaining extracted fragments are garbled excerpts of the computer program listing submitted on CD (see the Computer Program Listing section and the attached Appendix), including routines such as link_NodeLists, link_BranchVertices, link_VertexLists, the SFP_Attribute node-attribute class, LUT::getNode, and graph construction and template-matching helpers (nodes_begin/nodes_end iterators, new_edge calls, complete_SetOfEdges, delete_ComplexEdges, and GML output).]

Abstract

A system and methods (Figure 4) for the automatic identification of a test image, such as images from an echocardiogram video, is disclosed herein. The structure of the image is represented by the constellation (Fig. 4, 60) of its parts (e.g., chambers of the heart) under the different labels (e.g., different views). The statistical variations of the parts in the constellation and their spatial relationships are modeled using a generative model, such as Markov Random Field models. A discriminative method is then used for view recognition which fuses the assessments of a test image by all the view-models (Fig. 4, M1 62, M2 64, Mk 68).

Description

SYSTEM AND METHODS OF AUTOMATIC VIEW
RECOGNITION OF ECHOCARDIOGRAM VIDEOS
USING PARTS-BASED REPRESENTATION
SPECIFICATION
Statement Of Government Rights
The present invention was made in part with support from the National Science Foundation, Grant No. NSF 9817434 Digital Library Phase 2. Accordingly, the United States Government may have certain rights to this invention.
Copyright Notice
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyrights whatsoever.
Computer Program Listing
A computer program listing is submitted in duplicate on CD. Each CD contains the routines listed in the attached Appendix. The CD was created on June 25, 2004. The files on this CD are incorporated by reference in their entirety herein.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to video indexing and summarization, and more particularly to systems and methods for automatic identification of the images containing a plurality of objects, using the characteristics, such as the spatial arrangement, of such objects.
2. Background
Echocardiography is a common diagnostic imaging modality that uses ultrasound to capture the structure and function of the heart. A comprehensive evaluation typically entails imaging the heart in several planes (e.g., "aspects") by placing the ultrasound transducer at various locations on the patient's chest wall. Accordingly, the recorded image sequence (also referred to herein as "echocardiogram video," or "echo video") displays the three-dimensional heart from a sequence of different two-dimensional cross sections (also referred to herein as "views"). Under different views, different sets of cardiac cavities ("objects") are visible. The spatial arrangement of those objects is unique to each view. For example, Figure 1 illustrates a representative image, or frame 10, taken from the apical four chamber view. (It is understood that the terms frame and image are closely related.) Several cardiac structures are depicted in the image, including the left ventricle LV 12, the left atrium LA 14, the right ventricle RV 16, and the right atrium RA 18. Typically, echo videos consist of sequences of frames or images, each of which comprises a particular view. Several views are typically recorded consecutively, with no easily discernible demarcation between different views. Such echo videos may be as long as 10 minutes in duration.
A problem associated with the currently available techniques for analyzing echo videos is the necessity of reviewing long sections of video in order to locate a particular view which may be relevant to a physician's diagnosis. Indexing the echo videos provides more efficient access to the video content. U.S. Patent No. 6,514,207 to Ebadollahi et al. provides a technique for indexing echo videos, and is incorporated by reference in its entirety herein.
A further useful technique is to provide automatic recognition of various cardiac objects and their spatial arrangement. The use of the spatial arrangement of the chambers of the heart in order to identify the appropriate view was proven effective by Tagare et al. (see, e.g., H.T. Tagare, F.M. Vos, C.C. Jaffe, and J.S. Duncan, "Arrangement: A Spatial Relation Between Parts For Evaluating Similarity of Tomographic Section," IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(9):880-893, September 1995, which is incorporated by reference in its entirety herein). Tagare et al. found similar image planes in tomographic images. The premise of this analysis was that if two images contain the same chambers, and each chamber is surrounded in a similar manner by other chambers, then one would be able to conclude that those two images have been taken from the same view of the heart. This approach considers the images in the echo video as cross sections of the imaging plane and the three-dimensional structure of the heart at different angles. A substantial limitation to the analysis of Tagare et al., however, is that the chambers of the heart must be manually located and labeled by an expert. The focus of the analysis was expressing the spatial arrangements and assessing the similarity between such representations. In practice, however, physicians do not rely solely on the spatial arrangement of the chambers for identifying the different views and will use additional contextual information in doing so. Characteristics of the echo video images themselves have posed challenges to the successful practice of automatic view recognition. A first challenge concerns the variations in the cardiac structures appearing in different portions of the echo videos. For example, the appearance of the images captured under the same view of the heart among different patients is subject to a high degree of variation.
In addition, images for the same patient taken at different times are also subject to variation. This variation arises for at least the following reasons: First, different patients may have slightly different heart structures based on their physical characteristics. Second, the absence of natural markers for placing the ultrasound transducer on the patient's body for imaging makes it difficult for a technician to identically replicate the same view on the same patient taken at two different times. Given these variations, "appearance-based" methods, which have been successfully applied to some recognition problems (see, e.g., M.A. Turk and A.P. Pentland, "Face Recognition Using Eigen-Faces," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 586-591, June 1991), cannot, however, be successfully employed for the task of view recognition in echo videos.
A second challenge facing those skilled in the art is the image quality of the echo videos being analyzed. Because echo videos are the result of the ultrasound interrogation of the structure of the heart, the images may be highly degraded by multiplicative noise. Therefore, techniques for automatic detection of the cardiac cavities may result in the failure to detect cavities (also referred to herein as "occlusion" of the images) and/or the "detection" of false cavities (also referred to herein as "clutter"). In addition, there is a great degree of uncertainty in the properties of the chambers in the "constellation," or grouping of cardiac structures or objects; this uncertainty is a result of artifacts such as those mentioned above.
Even if the prior art provided techniques to correctly detect cardiac cavities and avoid the false chambers, one skilled in the art nevertheless is faced with a third challenge, i.e., difficulties in distinguishing between views, given the high degree of structural similarity among the constellations of the different views, e.g., the commonality of relative chamber sizes and arrangements. The ambiguities inherent in the structural similarity between the views are further complicated by the possibility of missing and false chambers. In view of the above difficulties, the challenging task is to be able to distinguish between the instances of the different views of the echo video in the presence of occlusion (missing parts), clutter (false parts), uncertainty in the features of the parts, and structural similarity between the different views.
A parts-based representation, as is known in the art, addresses some of the uncertainty in the features of image objects or parts, such as the variation in the properties of those objects or parts across the echo video. This approach identifies an instance of an object when the correct parts are observed in the correct spatial configuration (see, e.g., M. Weber, M. Welling, and P. Perona, "Unsupervised Learning of Models For Recognition," 6th European Conference on Computer Vision, ECCV '00, June 2000). According to this type of modeling, the features (parts) are distinct, and specific detectors are available for each of the different types of parts.
The goal is then to choose a set of foreground parts in the correct spatial configuration in order to be able to identify the objects of interest.
Another method that is used to model parts or structures is referred to as "pictorial structures". In a pictorial structure, the parts are described using appearance models, and their constellation is described using a spring-like model (see, e.g., P. Felzenszwalb and D. Huttenlocher, "Efficient Matching of Pictorial Structures," IEEE Conference on Computer Vision and Pattern Recognition, pages 66-73, 2000; M. Burl, M. Weber, and P. Perona, "A Probabilistic Approach to Object Recognition Using Local Photometry and Global Geometry," European Conference on Computer Vision, Volume 2, pages 628-641, 1998). The image is subsequently searched for the optimal position of the parts based on their match with their appearance model and their spatial configuration. Although the above-mentioned approaches, and similar techniques known in the art, focus on object/clutter discrimination, they are not meant to distinguish between different objects with high structural similarity, such as the different views from which echo videos are recorded. Boshra and Bhanu addressed the issues of recognition performance and its dependence on object model similarity and the effects of distortions such as occlusion and clutter (see, e.g., M. Boshra and B. Bhanu, "Predicting Performance of Object Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(9):956-969, September 2000). They also obtained bounds on the performance of the recognition in the presence of the distortion factors for the case where the uncertainties of the parts' properties were expressed using uniform distributions and the correspondence between the model and the scene was found using a voting scheme.
Therefore there is a need to develop a technique for automatic view recognition which addresses the limitations of the prior art when faced with constellations of parts having a high degree of structural similarity between different views and with uncertainties or variations in the properties of the various objects in the images.
SUMMARY OF THE INVENTION
It is an object of the invention to automatically index the content of videos at the view and object levels using the characteristics of the spatial arrangement of the image objects in the different views.
It is another object of the invention to provide a view-centered modeling approach in which the spatial arrangement of image objects, e.g., the cardiac cavities, and the statistical variations of their properties are modeled for each view.
It is a further object of the invention to provide a view recognition technique which resolves ambiguities due to the structural similarities between the constellations of image objects, e.g., cardiac chambers, in the different views of the video. It is a still further object of the invention to classify an image into one of the view classes by fusing the assessments of it by all the view-models. The fusion framework is employed to resolve ambiguities and achieve correct recognition.
A method for labeling video images is disclosed herein. The method includes providing a representation of an image comprising nodes and edges corresponding to objects in the image. A representation of a model for each label in a set of labels is provided. Another step in the method includes determining an optimal mapping of the representation of the image with the representation of each of said respective models. A confidence measure is determined, in which the confidence measure comprises a set of confidence values, such that each confidence value corresponds to the optimal mapping of the representation of the image with the representation of each respective model. A further step includes training a classifier, and classifying the image by applying the learned classifier to the confidence measure. In an exemplary embodiment, the step of providing a representation of the image may include determining an attribution-relational graph of the image. The step of providing a representation of a model for each label may include determining an attribution-relational graph of the model. The attribution-relational graph of the image and the attribution-relational graph of the model may each include nodes and edges, and the step of determining an optimal mapping of the representation of the image with the representation of each of said respective models may include defining a generative model, such as a set of random variables on the nodes of the attributed graph of the image. The step of determining an optimal mapping of the representation of the image with the representation of each of said respective models may include determining the optimal configuration of the set of random variables, i.e., a random field. Such optimal configuration of the random field may be a configuration that results in the maximum posterior probability, or equivalently, which minimizes the posterior energy.
The step of determining a confidence measure may include determining an energy vector comprising a set of energy values. In an exemplary embodiment, the step of training a classifier may include training a discriminative classifier, such as a Support Vector Machine classifier. The step of classifying the image by applying the learned classifier to the confidence measure comprises classifying the confidence measure using the learned Support Vector Machine classifier.
The system and methods herein are particularly useful in analyzing echocardiogram video images. Accordingly, the step of providing a representation of an image comprising nodes and edges corresponding to objects in the image may include providing a representation of an echocardiogram image comprising nodes and edges corresponding to cardiac chambers in the image. In the exemplary embodiment, the label may refer to a particular view, such as the Parasternal Long Axis view, the Parasternal Short Axis view, and the Apical view. The step of providing a representation of a model for each label in a set of labels may include providing a representation of a model for each view label in a set of view labels.
In accordance with the invention, the objects described above have been met, and the need in the art for a technique for automatic view recognition has been satisfied.
BRIEF DESCRIPTION OF THE DRAWINGS
Further objects, features and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying figures showing illustrative embodiments of the invention, in which:
FIG. 1 is a representative frame of an echo video depicting several cardiac objects.
FIG. 2 depicts several representative frames of echo video depicting a plurality of cardiac objects as identified in accordance with the present invention. FIG. 3 is a representative frame of an echo video undergoing analysis in accordance with the present invention.
FIG. 4 is a flowchart illustrating a plurality of steps in accordance with the present invention.
FIG. 5 is a series of plots representing a statistical analysis in accordance with the invention. FIG. 6 is a series of plots representing a statistical analysis in accordance with the invention.
Throughout the figures, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the subject invention will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments. It is intended that changes and modifications can be made to the described embodiments without departing from the true scope and spirit of the subject invention as defined by the appended claims.
DETAILED DESCRIPTION OF THE INVENTION
A preferred embodiment of the system and methods for automatic view recognition is described herein. Although this exemplary embodiment is directed to a technique for view recognition of echocardiogram videos, it is understood that the invention has application to any type of image which may be represented by a parts-based relational structure, as will be described in greater detail below.
In the exemplary embodiment, image acquisition equipment, which may include well-known echocardiogram acquisition equipment, such as ultrasound equipment, is used for acquiring the images of the patient's cardiac structure. Video capture and storage equipment is typically used, which may include a video capture card to digitize the analog video and video storage, such as a hard drive or other storage medium, to store the resulting video.
Each temporal segment of the echo video, which corresponds to a view of the heart, is represented by a set of representative frames or key-frames which are sampled from the content at semantically meaningful time instances (see, e.g., U.S. Patent 6,514,207 to Ebadollahi et al.; and S. Ebadollahi, S.-F. Chang, and H. Wu, "Echocardiogram Videos: Summarization, Temporal Segmentation and Browsing," IEEE International Conference on Image Processing (ICIP '02), Volume 1, pages 613-616, September 2002, which are incorporated by reference in their entirety herein). The heart passes through different phases of activity, wherein the parts, their spatial relationships, and their properties change. Such changes are reflected in each of the views, in which chambers change size, position, or disappear from the viewing plane. Such structural variability may introduce additional complications to the automatic recognition process. In the exemplary embodiment, the key-frames being analyzed are all taken from the same state to reduce this variability in the image data. Further, the selection of each key-frame in the exemplary embodiment corresponds to the structural state where the heart is most expanded, i.e., end-diastole. According to this embodiment, images of each view are taken only from this structurally unique state. It is understood that the end-diastole is but one representative structural state, and that other cardiac states may also be analyzed. One stage in the process disclosed herein is creating a representation of the image which identifies cardiac objects, their properties, and their spatial arrangement. In the exemplary embodiment, a generic cardiac chamber detector may be used on the images to locate the chambers. An exemplary approach for cardiac chamber segmentation is that proposed by Bailes (see, e.g., D.R.
Bailes, "The Use of the Gray Level SAT to Find the Salient Cavities in Echocardiogram," Journal of Visual Communication and Image Representation, 7(2):169-195, June 1996, which is incorporated by reference in its entirety herein), which is based on the Gray-Level Symmetric Axis Transform (GSAT) to detect the cardiac cavities. This method assumes that there exists a distinct cavity in the image for each chamber of the heart. It then tries to locate the deepest cavities in the image. Figure 2 shows the application of the GSAT method to key-frames taken from a view of the echo video. Figure 2 illustrates four key-frames 20, 22, 24, and 26. The chambers, which are automatically detected by this process, are depicted as solid closed shapes or blobs 28. The boundary lines of the actual location of the chambers are illustrated by solid outlines 30. Errors in the chamber detection process are illustrated in key-frame 22, which illustrates a false positive, i.e., a chamber 32 was "detected" by the automatic detection technique, whereas the chamber does not, in fact, exist. Key-frame 24 depicts an error in the recognition process, i.e., a missing chamber, in which an actual cardiac chamber 34 was not detected at all. The presence of such false and missed chambers is discussed in greater detail hereinbelow.
The spatial arrangement and the properties of the cardiac cavities are expressed using an attributed relational structure. Figure 4 illustrates a pictorial representation of an attributed relational structure for the constellation of parts of an image 50 taken from one of the views. In this case, the constellation of cardiac cavities for the apical four chamber view is depicted. Each node 52 in such relational structure represents a cavity or chamber of the heart. The attributes of the nodes 52 are the properties of the cavities. The edges 54 of the relational structure are depicted as lines connecting the various nodes 52. The attributes of the edges 54 may be, e.g., the spatial relationships between two neighboring nodes 52. For handling the missed and false cavities in the observed constellation of parts, all the nodes 52 are allowed to be related to each other in the relational structure. (Models of the various views, which represent the cardiac cavities, including the spatial arrangement and the distribution of their properties, derived from domain knowledge, are also expressed as a relational structure.)
The relational structure of the observed constellation is represented as the attributed graph G = (S, N, d). The n parts in the observed constellation are indexed by the set S = {1, 2, ..., n}, and d = [d1(i), d2(i, i')], where i, i' ∈ S, are the unary and binary properties of the parts (the notation used herein is presented in S.Z. Li, Markov Random Field Modeling in Image Analysis, Springer-Verlag, 2001). N represents the fully connected neighborhood structure in the relational structure.
Likewise, the model of a view is represented as an attributed graph. In particular, a model with m parts having labels L = {l1, l2, ..., lm}, and the parts' properties D = [D1(l), D2(l, l')], where l ∈ L, may be expressed as the attributed graph G' = (L, N', D), in which N' denotes the full connectivity between the parts in the model. In the ideal case where there are no missing and false cavities in the observed constellation, one would have m = n. The step of finding the correspondence between the two relational structures expressing (1) the observed constellation and (2) the model of a view may be posed as a search for the optimal configuration of a random field defined on the parts of the observed constellation. The optimal configuration is the one that minimizes the overall posterior energy of the field, as will be described in greater detail herein. An exemplary statistical modeling method is the Markov Random Field ("MRF") (see, e.g., S.Z. Li, Markov Random Field Modeling in Image Analysis, Springer-Verlag, 2001, which is incorporated by reference in its entirety herein), in which one could efficiently express the contextual constraints using locally defined dependencies between the interacting entities. Through the Hammersley-Clifford theorem, these local dependencies lead to the encapsulation of the joint global characteristics.
The task of labeling the observed constellation using the model of a certain view may be posed as finding the best mapping between the nodes of the two attributed graphs, f : G → G'. A set of random variables f = {f1, f2, ..., fN} (a random field) is defined on the nodes of the attributed graph of the scene, where each of the random variables takes values in the set of labels, fi ∈ L, of the attributed graph of the model of the view. Using this framework, the problem of finding the optimal mapping between the two attributed graphs could be posed as a search for the optimal configuration of the random field f in the space of possible configurations Ω = L^N.
The optimal configuration of the random field is the one that results in the maximum posterior probability, f* = arg max_{f∈Ω} P(f|d) (MAP-MRF), or equivalently minimizes the posterior energy, i.e.,

f* = arg min_{f∈Ω} U(f|d) = arg min_{f∈Ω} (U(f) + U(d|f))     (1)
where U(f) is the prior energy and U(d|f) the likelihood energy. The prior energy of a configuration is defined as (note that we only consider cliques of size up to two):

U(f) = Σ_{i∈S} V1(fi) + Σ_{i∈S} Σ_{i'∈N_i} V2(fi, fi')     (2)
where the potentials are defined as follows:

V1(fi) = v1 if fi = 0, and 0 otherwise;
V2(fi, fi') = v2 if fi = fi' ≠ 0, and 0 otherwise;     (3)

where both v1 and v2 are positive values. The theory associated with this definition is that a NULL label assignment (hereafter NULL and fi = 0 will be used interchangeably) to a site incurs a positive energy. This is to avoid all sites in the field being labeled NULL. A NULL label will only be assigned to a site if its relative penalty is smaller than the likelihood of the site having any other label. The pair-site prior potential encourages the sites to have distinct labels. This is derived from the fact that in the constellation of the heart chambers a chamber cannot appear more than once.
The likelihood energy is defined as:

U(d|f) = Σ_{i∈S} V1(d1(i)|fi) + Σ_{i∈S} Σ_{i'∈N_i} V2(d2(i, i')|fi, fi')     (4)
where the likelihood potentials on single cliques and double cliques are defined as:
Figure imgf000013_0003
K1 and K2 are the total number of unary and binary features defined on single and pair-site cliques. In the present work, location, area, and directionality are used as the properties of each individual part, and distance and angle between a pair of parts as their joint properties. The properties of the single and double site cliques are considered to be distributed according to Gaussian distributions with means and variances estimated from the training data.
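For a single Gaussian-distributed feature, the corresponding likelihood energy contribution is the negative log of the Gaussian density. A minimal sketch (the mean and variance would come from the maximum likelihood estimates discussed below; the scalar-feature form shown here is an illustrative simplification):

```cpp
#include <cmath>

// Negative log-likelihood of a scalar feature d under a Gaussian with the
// given mean and variance; one term of the clique likelihood potentials.
double gaussian_energy(double d, double mean, double var) {
    const double kTwoPi = 6.283185307179586;
    double diff = d - mean;
    return 0.5 * diff * diff / var + 0.5 * std::log(kTwoPi * var);
}
```

The energy grows quadratically as the observed feature moves away from the model mean, so atypical part properties are penalized smoothly rather than rejected outright.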
The maximum likelihood estimate of the parameters of the potential functions may be obtained from the manually labeled training data. The bounds on the prior parameters v1 and v2 are estimated such that the training data are embedded at the minimum energy configurations of the configuration space.
To embed the training constellations at the minimum energy configuration in the configuration space, any perturbation of the configuration of the labels on the training set should result in a non-optimal configuration. By changing the non-NULL labeled parts to NULL and vice versa and computing the energy of the new configuration, a set of inequalities is obtained which provides the bounds on the prior parameters. Equations 6 and 7 illustrate the conditions to obtain the lower and the upper bounds for the prior parameters.
∀i ∈ S, f_i ≠ 0 → f_i = 0: E(f_i = 0 | f_{S-{i}}, v1, v2) − E(f_i* | f_{S-{i}}, v1, v2) > 0    (6)
  ⟹ v1 > E(f_i* | f_{S-{i}}, v1, v2)

∀i ∈ S, f_i = 0 → f_i ≠ 0: E(f_i | f_{S-{i}}, v1, v2) − E(f_i = 0 | f_{S-{i}}, v1, v2) > 0    (7)
  ⟹ v1 < E(f_i | f_{S-{i}}, v1, v2) + α · v2
where α is zero if the label assigned to the previously NULL site is a label that is missing from the observed constellation, and is one if that label already exists in the constellation, in which case a penalty is incurred by having two sites with the same label. This estimation of the bounds of the prior parameters is applied to each constellation in the training data, and the common value over all those constellations is taken. The value of the local energy resulting from assigning the label f_i to a site i is defined as:
E(f_i) = E1(f_i) + Σ_{i'} E2(f_i, f_{i'})    (8)

where

E1(f_i) = v1 if f_i = 0, and V1(d1(i)|f_i) otherwise;

E2(f_i, f_{i'}) = V2(d2(i,i')|f_i, f_{i'}) if f_i ≠ 0, f_{i'} ≠ 0,    (9)
E2(f_i, f_{i'}) = v2 if f_i = f_{i'} ≠ 0,
E2(f_i, f_{i'}) = 0 otherwise.
Having discussed the technique for learning the model of the constellation of the cardiac chambers for each view of the echo video, the method described herein is used to determine the correct view-label V* for a constellation C obtained from a test image I.
The chamber constellation C is matched against the model of each view. In the discussion which follows, each model is denoted Mk, where k = 1, ..., K (the total number of views in the example is K = 10). The optimal labeling of the chambers in the observed constellation according to each of the models is inferred by minimizing the posterior energy of the configuration of the labels such that the resulting configuration is consistent with both the prior knowledge and the evidence.
In the exemplary method, the HCF method proposed by Chou and Brown (see, e.g., P.B. Chou and C.M. Brown, "The Theory and Practice of Bayesian Image Labeling," International Journal of Computer Vision, 4:185-210, 1990), which is a deterministic algorithm for combinatorial optimization, may be used to obtain the optimal configuration of the random field. The main advantage of this method is its low complexity O(n) (n = number of sites). According to the method described herein, labeling the constellation C against all the K models results in the set of optimal configurations and their corresponding energy values,
F*(C) = {F_k*(C) | k = 1, ..., K},  E(C) = {E_k*(C) | k = 1, ..., K}    (10)

where F_k*(C) is the optimal configuration of the random field defined on the sites in the constellation C according to the model of the k-th view, and E_k*(C) is its corresponding energy. Each element of the energy vector is a
"confidence measure," i.e., an indicator of how well the corresponding model matches the constellation. As illustrated in FIG. 4, for each observed constellation 60, the optimal labeling of its parts may be obtained according to view-model M1 62, M2 64, M3 66, ... Mk 68. The vector 70 of the energies assigned to the observed constellation at the optimal configuration of the random field according to each view-model is also illustrated. The energy vector quantifies how each model "sees" the constellation C. According to an exemplary embodiment of the method, the constellation C is taken to be complete, meaning that it does not have any false parts and there are no chambers missing from it. In this case, if one slightly perturbs the properties of the nodes and edges in the attributed relational structure of C, the corresponding optimal energy vector will change to E*(C) + δE(C). Therefore, the energy vectors for all the complete constellations taken from the same view populate the same region in the energy space 72. (See Figure 4.)
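The construction of the K-dimensional energy vector can be sketched as follows. Here the per-model matching energy is stubbed with a squared distance to a prototype vector; in the actual method it is the posterior energy of the optimal labeling under each view-model, and the prototype representation is purely illustrative:

```cpp
#include <cstddef>
#include <vector>

// Toy matching energy: squared distance between a feature vector standing
// in for the constellation and a per-view prototype (an illustrative
// stand-in for the MAP-MRF energy E_k*(C); lower means a better match).
double match_energy(const std::vector<double>& model,
                    const std::vector<double>& C) {
    double e = 0.0;
    for (std::size_t i = 0; i < C.size(); ++i)
        e += (C[i] - model[i]) * (C[i] - model[i]);
    return e;
}

// Energy vector E(C) = (E_1*(C), ..., E_K*(C)); this whole vector, not a
// single minimum, is what the SVM classifier later operates on.
std::vector<double> energy_vector(const std::vector<std::vector<double>>& models,
                                  const std::vector<double>& C) {
    std::vector<double> E;
    for (const auto& m : models)
        E.push_back(match_energy(m, C));
    return E;
}
```

Keeping the full vector preserves how every model "sees" the constellation, which is what allows a model of a third view to disambiguate two otherwise confusable views.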
In order to disambiguate between the constellations taken from the different views and to be able to recognize the correct view-label for them, a discriminative approach may be used. Methods such as, e.g., Support Vector Machines ("SVM") (see, e.g., N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000) have been shown to perform well in discriminating between the instances of the different classes in general pattern recognition applications.
With continued reference to Figure 4, the vector 70 of the energies assigned to the observed constellation at the optimal configuration of the random field according to the different view-models is used to classify the constellation into the correct view-class. By use of this method, the assessments of the observed constellation by the different view-models are fused in order to correctly identify its originating view. The SVM classifier learns the decision boundaries to correctly classify the instances of the different views in this space. This approach is similar in nature to the one proposed by Selinger and Nelson (see, e.g., A. Selinger and R.C. Nelson, "Appearance-Based Object Recognition Using Multiple Views," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 1, pages 905-911, 2001) in the way that all the models are used to disambiguate the object; however, there are some fundamental differences between them. Therefore, the results of matching the observed constellation C against all the models are fused together in order to decide the correct view label. As illustrated in Figure 5, graph 80 shows the energies (E4) 81 assigned to constellations taken from view 9 (C9) 82 and view 10 (C10) 84 by the model of view 4 (M4). Graph 86 shows the energies (E9) 87 assigned to constellations taken from views 9 (C9) 88 and 10 (C10) 90 by the model of view 9 (M9). Graph 92 shows the energies (E10) 93 assigned to constellations taken from views 9 (C9) 94 and 10 (C10) 96 by the model of view 10 (M10). It is apparent from the graphs 80, 86, and 92 that neither the model of view 9 (M9) (as depicted in graph 86) nor the model of view 10 (M10) (as depicted in graph 92) could be used to distinguish the constellations. However, the model of view 4 (M4) (as depicted in graph 80) could be used to distinguish the constellations taken from those two views, as indicated by the discontinuity 98 in the graph.
In other words, the model of view 4 (M4) "sees" the constellations taken from views 9 and 10 differently.
Two views (classes) will become indistinguishable from each other in this framework when the energy vectors of the constellations taken from them populate the same region in the energy space. This happens when none of the view-models can distinguish the identities of the two views; in this case the representation used and the models of the constellation of the parts in the different views do not possess enough complexity. If missing parts are allowed (although still no false parts are observed), the distributions of the energy vectors of the constellations of the different views will tend to spread out and have more overlap with the ones belonging to the other models. The reason is that the observed constellation could potentially become more similar to the typical constellations in another view, depending upon which combination of parts is missing. Still, one could learn a classifier to separate the constellations of the different views, although one would naturally expect higher error rates in the recognition results. A more challenging case is the one where both missed and false parts are allowed in the constellations. The effect of the false positives can completely deviate the energy vector associated with the observed constellation from its normal distribution, because the introduction of the false positives will potentially contribute to a higher degree of similarity between the two models, based on the properties of those false parts and their numbers.
In this case, a multi-hypothesis testing scheme is formed. For each possible view k ∈ 1, ..., K, it is assumed that the observed constellation was taken from that view, i.e., C ∈ Vk, where Vk has the model Mk. For each such assumption, the constellation C is labeled according to the model Mk, and the optimal label of the parts is found. The parts labeled as false according to this labeling are then deleted, and the energy vector for the filtered constellation is obtained. According to this method, if the observed constellation C was truly taken from view Vk, it would be correctly classified by the classifier learned for the case where only missing parts were allowed. The same process is applied to the observed constellation under all the different view assumptions, and the corresponding classification results for each of the obtained energy vectors are found. One can then decide on the correct view label for the observed constellation by comparing the confidence scores of the different classification results under the different view assumptions and fusing the decisions.
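The multi-hypothesis loop can be sketched generically. Here label, filter, and score are placeholders standing in for the model-specific labeling, false-part removal, and classifier-confidence steps; none of them is the actual implementation:

```cpp
// Try every view assumption k: label the constellation under M_k, drop
// parts labeled false under that assumption, score the filtered result,
// and keep the most confident hypothesis.
struct Hypothesis {
    int view;
    double confidence;
};

template <class LabelFn, class FilterFn, class ScoreFn>
Hypothesis best_view(int K, LabelFn label, FilterFn filter, ScoreFn score) {
    Hypothesis best = { -1, -1e300 };
    for (int k = 0; k < K; ++k) {
        auto labels   = label(k);            // optimal labeling under M_k
        auto filtered = filter(k, labels);   // remove parts labeled false
        double s      = score(k, filtered);  // classifier confidence
        if (s > best.confidence)
            best = Hypothesis{ k, s };
    }
    return best;
}
```

The fusion here is a simple argmax over confidences; richer fusion rules over the per-hypothesis scores fit the same skeleton.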
An exemplary embodiment of the system for automatic view recognition is described herein. The system includes image acquisition equipment. In the exemplary case of echo video analysis, the image acquisition equipment includes well-known echocardiogram acquisition equipment, such as ultrasound equipment, for acquiring the images of the patient's cardiac structure. The resulting images are typically in analog videotape form. Video capture and storage equipment is typically used, which may include a video capture card to digitize the analog video and video storage, such as a hard drive or other storage medium, to store the resulting video. In the exemplary embodiment, a commercial video capture card was used for the digitization of the analog echocardiogram videos. All videos were digitized into MPEG-1 format at 30 frames per second and with a frame resolution of 352x240. MPEG-1 videos were decoded into PPM image (uncompressed) format prior to processing. Videos and uncompressed images were stored on computer hard disk. An echo video in compressed format typically takes about 200 to 300 MBytes. Alternatively, the echocardiogram video may be directly captured in digital format, thereby eliminating the need for an intermediate digitizing step. Detection of objects, e.g., cardiac chambers, in the images may be performed by a processor or processing subsection, such as a CPU or processing chip. In the exemplary embodiment, such processing is performed according to the routine "chamber_segment" attached hereto. Similarly, modeling of the various views may be performed by a processor or processing subsection, such as a CPU or processing chip. For modeling the constellation of cardiac chambers, training data is obtained by manually annotating the segmented echo frames (e.g., in the exemplary embodiment, only key-frames which are sampled from the content of the echocardiogram at the End-Diastole state of the heart).
Using the code in the "datacollect" directory of the attached code, which provides a user interface for annotating the segmented cardiac chambers according to the domain knowledge, all the training data may be annotated. After annotation and obtaining the training data, the code in the directory "view_rec" should be used for learning the MRF model and performing the view recognition.
It is understood that the processors for performing the functions listed above and discussed in greater detail herein may be located on a single CPU or PC, or may be distributed among several locations by a LAN, intranet, the internet, wireless connection, or other arrangement known in the art. For example, one may capture the videos at one location, perform chamber segmentation at another location, and perform view recognition at yet another location.
EXAMPLE
Without limiting the foregoing, a particular example of view recognition in echo videos using the proposed method is discussed herein. The data set consisted of N1 = 15 echo videos of normal, and N2 = 6 echos of abnormal cases. The echos of the normal heart, with a total of 2657 key-frames, are used both for training and testing (in "leave-one-out" fashion, as discussed below), while those of the abnormal echo videos, with a total of 552 key-frames, are only used for testing purposes. Every key-frame in the data set is manually labeled by an expert. There are K = 10 different views present in these echo videos, taken from the Parasternal Long Axis (2 views), Parasternal Short Axis (4 views), and Apical (4 views) angles of imaging. The experiments were conducted in a leave-one-out scheme, such that during each round (15 rounds total), one echo video is left out as the test data and the models are learned from the remaining 14 echos. For each of the views, the priors and the parameters of the Gaussian distributions of the single and double cliques are learned from the hand-labeled training data. The SVM classifiers are also learned for each round of experiment. The experiments and results are discussed for four different cases below.
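The leave-one-out protocol above can be sketched as a skeleton; train and test are placeholders for learning the view-models from the 14 retained videos and evaluating on the held-out one:

```cpp
#include <vector>

// One round per video: hold it out, train on the rest, record the score.
template <class TrainFn, class TestFn>
std::vector<double> leave_one_out(int n, TrainFn train, TestFn test) {
    std::vector<double> scores;
    for (int held_out = 0; held_out < n; ++held_out) {
        auto model = train(held_out);             // learn from the other n-1 echos
        scores.push_back(test(model, held_out));  // evaluate on the held-out echo
    }
    return scores;
}
```

With 15 videos this yields 15 rounds, and the reported precisions are averages over these rounds.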
CASE 1 (Normal Echos-Complete Constellations). In the first set of experiments, the performance of the view recognizer was investigated for the case where the constellations are complete, i.e., they do not have false parts and none of the parts are missing. In this case, the uncertainties in the properties of the parts in the model of each view and the closeness of the geometry of the constellations creates ambiguities between the constellations of the different views.
From the available training data, only the key-frames which contain complete constellations and perhaps false positive chambers are selected. If the constellation contains false parts, those parts are removed from the constellation (filtered constellations). Each filtered constellation is then labeled according to the models of the different views, and the corresponding energy vector at the optimal configurations is found.
Using the energy vectors obtained for all training data, a multi-class SVM classifier is then trained for each round. The energy vectors of the test key-frames (also filtered constellations) are then classified using the learned multi-class SVM classifier. The top two images in Figure 6 show the Hinton diagram of the confusion matrix for this case without considering clinical similarities (image 100) and with considering clinical similarities (image 110). (A confusion matrix is an array showing relationships between true and predicted classes. Entries on the diagonal of the matrix count the correct calls. Entries off the diagonal count the misclassifications. The Hinton diagram provides a qualitative display of the values in a data matrix (normally a weight matrix). Each value is represented by a square whose size is associated with the magnitude, and whose color indicates the sign.) "Clinically similar views" may be defined as the ones that even the human expert could not discriminate and that for clinical purposes are regarded as identical. In the present case, the views in each of the following sets {4, 5, 6}, {7, 8}, and {9, 10} are considered clinically similar.
The average precision over all rounds of experiment and all views is 67.8% without and 88.35% with taking into account the clinical similarities. As shown in Figure 6, for the original case the set of views that are often confused with each other are {4, 5, 6}, {1, 3, 7}, and {2, 10}. In the first set, all views have a single part (degenerate case) with highly overlapping property distributions; in the second case all views have constellations with four parts; and in the last case, all have two part constellations.
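The clinical-similarity grouping used in these precision figures can be sketched as follows (the group representatives chosen here are arbitrary; only the grouping matters):

```cpp
#include <utility>
#include <vector>

// Map a view (1-based, as in the text) to its clinical group:
// {4,5,6}, {7,8}, and {9,10} are regarded as identical.
int clinical_group(int view) {
    if (view >= 4 && view <= 6) return 4;
    if (view == 7 || view == 8) return 7;
    if (view == 9 || view == 10) return 9;
    return view;  // views 1-3 form their own groups
}

// Fraction of (true, predicted) pairs that agree, optionally collapsing
// clinically similar views before comparing.
double precision(const std::vector<std::pair<int, int>>& pairs, bool clinical) {
    if (pairs.empty()) return 0.0;
    int correct = 0;
    for (const auto& p : pairs) {
        int t = clinical ? clinical_group(p.first) : p.first;
        int y = clinical ? clinical_group(p.second) : p.second;
        if (t == y) ++correct;
    }
    return double(correct) / pairs.size();
}
```

Collapsing the groups can only raise the precision, which is why the "clinically similar" figures are uniformly higher than the raw ones.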
The direct comparison of the energy values obtained from comparing the test images with the different view-models results in an average precision of 20% for this case.
CASE 2 (Normal Echos-Complete Constellations with False Chambers). In this case, the introduction of the false chambers contributes to the similarity between the constellations taken from the different views. The false chambers only occur at certain regions of the image based on the view. However, because of the Gaussian model used for the cliques in the random field modeling, they still could be labeled non-NULL under certain view assumptions based on their properties in the image.
In order to do the view recognition in this case, the multi-hypothesis approach may be used as explained hereinabove, and the same SVM classifiers learned for the complete case may be used.
The Hinton diagrams of the confusion matrices for the cases without taking into account the clinical similarities and with taking into account the clinical similarities are shown in the middle row of Figure 6 as diagrams 120 and 130, respectively. The average precisions of 54.1% and 74.34% are obtained for the original and the clinically similar cases, respectively.

CASE 3 (Normal Echos-Constellations with Missed and False Chambers). For this case, the SVM classifiers are learned for each round of experiment using constellations with their false chambers removed. Then, the multi-hypothesis approach was used to classify each real constellation (both false chambers and missed chambers exist).
As seen from the confusion matrices for the cases without and with taking into account the clinical similarities (shown in the bottom row of Figure 6 as diagrams 140 and 150, respectively), the ambiguity between the views naturally becomes worse than the case where only false positives were allowed. The average precision in this case drops to 34% without and 52% with the clinical similarity taken into account.
These results may be compared to random guessing of the views with non-uniform priors. Given that the prior probability of correctly guessing the view label for the constellations taken from view k is p_k, the overall rate of error becomes:

e = Σ_{k=1..K} p_k (1 − p_k) = 1 − Σ_{k=1..K} p_k^2

The average rate of error of randomly guessing the view labels with non-uniform priors over the 15 rounds of experiments became e = 88.13%, that is, the average rate of correct recognition by guessing is 11.87%. Therefore, the automatic recognition is almost three times better than random guessing in the worst case.

CASE 4 (Abnormal Echos). In this experiment, it was desired to observe how the models learned from the key-frames taken from the normal echos perform on the test images taken from the abnormal ones. There are 6 abnormal echo videos on which the performance of the view recognizer was tested. Only the complete constellation case was considered in this experiment. The confusion matrix of this case is very similar to that of case 1, above. The average precision is 56% without and 78% with taking into account the clinical similarities. Compared to the similar case for the normal echos, the precision has slightly dropped.
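The random-guessing baseline e = 1 − Σ p_k^2 is straightforward to compute; for example, with uniform priors over K = 10 views the expected error is 0.9:

```cpp
#include <vector>

// Expected error of guessing view labels at random with non-uniform
// priors p: e = sum_k p_k * (1 - p_k) = 1 - sum_k p_k^2.
double random_guess_error(const std::vector<double>& p) {
    double sum_sq = 0.0;
    for (double pk : p)
        sum_sq += pk * pk;
    return 1.0 - sum_sq;
}
```

The reported 88.13% error corresponds to the actual (non-uniform) view priors averaged over the 15 experimental rounds.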
According to another embodiment of the invention, the techniques described herein are used to detect an abnormality or disease in the patient's heart function. This approach is particularly useful where the abnormality is discernible in a single frame or image. As discussed above, the particular frame may be a key-frame, such as the end-diastole state. Accordingly, a relational structure or attributed graph of the image of the abnormal heart is expressed. In addition, a relational structure is expressed for a model which is trained on images of hearts having the particular disease or abnormality. Additional models may be expressed for normal heart images or hearts having different abnormalities. A set of random variables (i.e., a Markov Random Field) may be defined on the nodes of the attributed graph of the image. As discussed hereinabove, an optimal mapping is obtained between the attributed graph of the image of the abnormal heart and the attributed graph of the model of the abnormality. A confidence value (e.g., energy value) is determined for an optimal mapping, e.g., an optimal configuration of the random field defined on the cardiac objects in the image when viewed by the model for the abnormality. The disambiguation techniques discussed herein, e.g., the use of SVM classifiers, are used to determine whether the abnormality is a proper label for the view being analyzed.
The issue of automatic view recognition in echocardiogram videos and its application to the indexing of the content of these videos is discussed herein. It is understood that the automatic indexing process could be applied to both the analog echo videos (after digitization) and the digital ones. It is very expensive to manually index the content of the analog echo videos. For the digital echos, although the new acquisition devices provide online annotation tools, their use is burdensome because the focus during the acquisition process is on capturing the best and clearest images rather than annotating the content.
A parts-based representation for the constellation of the heart chambers in the different views was used and Markov Random Fields were used to model such representations. The collection of the energies obtained from comparing a test image to the models of the different views was then used as the input to an SVM classifier to find the view label. This fusion scheme helped to disambiguate the constellations taken from the views with structural similarities.
In another embodiment, more complex distributions are used to represent the properties of the parts. Also, the training of the MRF model of the constellations of the cardiac chambers and the SVM classifiers were performed separately. In yet another embodiment, the training of the MRF model and the SVM classifiers are combined to obtain better results. According to the exemplary embodiment described herein, the models of the different views of the echocardiogram video were learned from the key-frames extracted from the content of the videos. In still another embodiment, the spatio-temporal characteristics of the constellation of the chambers of the heart are used, which may recover the different phases of activity of the heart during each cycle of its activity.
The features and advantages of the system and methods described herein are particularly useful in numerous applications. As described above, the invention provides the capability to automatically index and annotate the content of the echocardiogram videos by identifying cardiac objects and identifying particular views. This provides efficient access to the desired content of the echocardiogram videos, which can be a short sequence in an otherwise lengthy video. By application of this system to an echocardiogram video viewer, random access to the desired content of the videos is provided, which allows efficient browsing of the video content. The capability of annotating the cardiac objects identified herein permits the augmentation of videos with contextual information, which is useful in categorizing and archiving the video.
The capabilities of the system and techniques described herein are particularly useful in the educational context. An exemplary embodiment of the invention is a teaching application in which the techniques described herein are employed to index and annotate the video images. Multimedia files are created thereby having annotations that provide additional information on the various cardiac objects and views. These multimedia files may be accumulated and organized into libraries or files to be reviewed by students, with or without teaching supervision, to gain a better understanding of the physiological structure and functioning of the heart, as well as the techniques for recognizing and diagnosing pathology.
In another exemplary embodiment, the system is embedded in the image acquisition equipment itself, e.g., ultrasound machines, being used to capture the echocardiogram images. Such application enables the indexing and/or annotation of the echocardiogram images simultaneous with, or shortly after, the acquisition of the images, which allows for a more rapid diagnosis. In a further exemplary embodiment, the system is used to index and select different segments of the video for tele-medicine applications, such as the Picture Archiving & Communication System ("PACS"), which is a combination of hardware and software that digitally stores and manages medical images and information. Using PACS, medical professionals can access these radiological images. Unlike older film-only systems, multiple users, even those in different facilities, can view images in PACS simultaneously. In order to integrate echocardiogram images into the PACS system, one needs to provide ancillary data that identifies the content. The system and techniques described herein provide the capability to provide this ancillary data automatically.

In yet another embodiment, the system and techniques described herein are used in computer-assisted diagnostics. The libraries of indexed and annotated echocardiogram images, which have been automatically indexed at the view and cardiac object levels, would be a valuable resource for a physician diagnosing a new case. First, the echo video of the new case may be automatically indexed so that certain key frames may be quickly accessed. Second, the physician may consult such a library of previously diagnosed cases and compare the structure and functioning of the heart for the new case with the archived, annotated cases.
Although the exemplary embodiment is described herein as particularly useful for echo images, it is understood that the principles discussed herein may be applied to any multiple object recognition problem when the objects are represented by the constellation of their parts and there are ambiguities due to similarity in their structures. Various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention.
APPENDIX
CHAMBER SEGMENTATION
it = (G[I]).nodes_begin();
end = (G[I]).nodes_end();
while (it != end) {
    T = ((A[I])[*it]).getType();
    if (T == TERMINAL && ((A[I])[*it]).isVisited() == FALSE) {
        LI.clear();
        LJ.clear();
        this_V = *it;
        last_V = this_V;

DO_LINKING1:
        ref_I = this_V;
        if (find_LinkableVertex2(ref_I, ref_J, A, L, I, J)) {
            // Only consider vertices which are not linked yet
            if (!((A[J])[ref_J]).isLinked()) {
                find_VList(this_V, last_V, ref_J, G, A, LI, LJ, I, J);
                // Link the two vertex lists.
                link_NodeLists(LI, LJ, A, I, J, SFPG);
#ifdef DEBUG
                printf("Vertex list at level %d of length %d is:\n", I, LI.size());
                for (l = 0; l < LI.size(); l++) {
                    printf("%d ", (A[I][LI[l]]).getIndex());
                }
                printf("\nVertex list at level %d of length %d is:\n", J, LJ.size());
                for (l = 0; l < LJ.size(); l++) {
                    printf("%d ", (A[J][LJ[l]]).getIndex());
                }
                printf("\n\n");
#endif
            }
        } else {
            if (get_NextVertex(this_V, last_V, next_V, G[I], A[I])) {
                if (((A[I])[next_V]).getType() != BRANCH) {
                    last_V = this_V;
                    this_V = next_V;
                    goto DO_LINKING1;
                }
            }
        }
    }
    ++it;
}
/** Get all possible neighbors of linked BRANCH vertices */
it = (G[I]).nodes_begin();
end = (G[I]).nodes_end();
while (it != end) {
    T = ((A[I])[*it]).getType();
    if (T == BRANCH && ((A[I])[*it]).isLinked() == TRUE)
    {
        eib = (*it).out_edges_begin();
        eie = (*it).out_edges_end();
        while (eib != eie) {
            // get the neighbor vertex
            this_V = eib->target();
            // add neighbor vertex to vector if not linked already
            if (((A[I])[this_V]).isVisited() == FALSE) {
                LI.clear();
                LJ.clear();
                last_V = *it;

DO_LINKING2:
                ref_I = this_V;
                if (find_LinkableVertex2(ref_I, ref_J, A, L, I, J)) {
                    // Only consider vertices which are not linked yet
                    if (!((A[J])[ref_J]).isLinked()) {
                        find_VList(this_V, last_V, ref_J, G, A, LI, LJ, I, J);
                        // Link the two vertex lists.
                        link_NodeLists(LI, LJ, A, I, J, SFPG);
#ifdef DEBUG
                        printf("Vertex list at level %d of length %d is:\n", I, LI.size());
                        for (l = 0; l < LI.size(); l++) {
                            printf("%d ", (A[I][LI[l]]).getIndex());
                        }
                        printf("\nVertex list at level %d of length %d is:\n", J, LJ.size());
                        for (l = 0; l < LJ.size(); l++) {
                            printf("%d ", (A[J][LJ[l]]).getIndex());
                        }
                        printf("\n\n");
#endif
                    }
                } else {
                    if (get_NextVertex(this_V, last_V, next_V, G[I], A[I]))
                        if (((A[I])[next_V]).getType() != BRANCH) {
                            last_V = this_V;
                            this_V = next_V;
                            goto DO_LINKING2;
                        }
                }
            }
            ++eib;
        }
    }
    ++it;
}
/** Get all possible neighbors of non-linked BRANCH vertices */
it = (G[I]).nodes_begin();
end = (G[I]).nodes_end();
while (it != end) {
    T = ((A[I])[*it]).getType();
    if (T == BRANCH && ((A[I])[*it]).isLinked() == FALSE) {
        eib = (*it).out_edges_begin();
        eie = (*it).out_edges_end();
        while (eib != eie) {
            // get the neighbor vertex
            this_V = eib->target();
            // add neighbor vertex to vector if not linked already
            if (((A[I])[this_V]).isVisited() == FALSE) {
                LI.clear();
                LJ.clear();
                last_V = *it;

DO_LINKING3:
                ref_I = this_V;
                if (find_LinkableVertex2(ref_I, ref_J, A, L, I, J)) {
                    // Only consider vertices which are not linked yet
                    if (!((A[J])[ref_J]).isLinked()) {
                        find_VList(this_V, last_V, ref_J, G, A, LI, LJ, I, J);
                        link_NodeLists(LI, LJ, A, I, J, SFPG);
#ifdef DEBUG
                        printf("Vertex list at level %d of length %d is:\n", I, LI.size());
                        for (l = 0; l < LI.size(); l++) {
                            printf("%d ", (A[I][LI[l]]).getIndex());
                        }
                        printf("\nVertex list at level %d of length %d is:\n", J, LJ.size());
                        for (l = 0; l < LJ.size(); l++) {
                            printf("%d ", (A[J][LJ[l]]).getIndex());
                        }
                        printf("\n\n");
#endif
                    }
                } else {
                    if (get_NextVertex(this_V, last_V, next_V, G[I], A[I]))
                        if (((A[I])[next_V]).getType() != BRANCH) {
                            last_V = this_V;
                            this_V = next_V;
                            goto DO_LINKING3;
                        }
                }
            }
            ++eib;
        }
    }
    ++it;
}

/**
 * form_SurfacePaths()
 *
 * Given the "l_max-l_min+1" sub-graphs in 'G' with attributes
 * given in 'A', this function will construct a graph 'SFPG'
 * with attributes 'SFPA'. The nodes of this graph correspond
 * to the nodes in 'G' and edges are determined by the connectivity
 * criteria as specified by "Baile's" paper.
 */
void form_SurfacePaths(
    graph* G,                       // stack of subset graphs
    node_map<NodeAttribute>* A,     // their attributes
    LUT* L,                         // their look-up tables
    graph& SFPG,                    // surface-paths graph
    node_map<NodeAttribute>& SFPA,  // their attributes
    int l_min,                      // min and max subsets to process
    int l_max)
{
    // Making the surface-path graph undirected.
    (SFPG).make_undirected();
    int l;
    int num_nodes = 0;
    int node_cnt = 0;
    for (l = l_min; l < l_max; l++)
        num_nodes += (G[l]).number_of_nodes();
    node* nodes = new node[num_nodes];  // = nodes of the SFP

    // Linking branch vertices from sub-graphs together.
    link_BranchVertices(G, A, SFPG, SFPA, l_min, l_max, nodes, node_cnt);

    // Linking the other vertices together.
    for (l = l_min; l < l_max; l++) {
        reset_AllLinkTags(G[l], A[l]);  // Unlink vertices except BRANCH
        link_OtherVertices(G, A, L, SFPG, SFPA, l, l + 1, nodes, node_cnt);
    }

    // Writing surface path graph.
    save_SubGraph(SFPG, SFPA, "SP3D.GML", 1);
    delete [] nodes;
}
/**
* link_BranchVertices ()
* , _ ,
* Links the Branch Vertices of the different
* sub-graphs together .
*/ void link_BranchVertices ( graph* G, // sub-set graph set node_map<NodeAttribute>* A, // their attributes graph& SFPG, // surface-paths graphs node_map<NodeAttribute>& SFPA, // their attributes int l_min, // min and max levels int l_max, // of sub-graphs to be used node* nodes, // total number of nodes in SFPG int& node_cnt) // counts nodes of SFPG
{
CvPoint Pl, P2; int i, index; double min, dist; int cntrl_cnt = 0; vector <NodeAttribute> una; // a vector of attributes of upper graph nodes
// Iterators for going over the sub-graphs. graph :: node_iterator itl; graph :: node_iterator endl; graph :: node_iterator it2; graph :: node_iterator end2;
/*
* Go over all sub-graphs and
* extract the Branch Vertices .
*/ for (int l=l_min; l < l_max; l++) { itl = (G[l]).nodes_begin(); endl = (G[l]).nodes_end(); while (itl != endl) // Go over vertices of lower graph
{ if ( ((A[l])[*itl]).getType() > 2 ) // only get BVs
{ cntrl_cnt = 0; una.clear();
// location of the lower graph vertex Pl = ((A[l])[*itl]).getPosition(); it2 = (G[l+1]).nodes_begin(); end2 = (G[l+1]).nodes_end(); while (it2 != end2) // Go over vertices of upper graph
{ if ( ((A[l+1])[*it2]).getType() > 2 ) // only get BVs
{
// location of the upper graph vertex P2 = ((A[l+1])[*it2]).getPosition();
// Find out if the two vertices are 8-connected or not. if (fabs (double(Pl.x - P2.x)) <= 1 && fabs (double(Pl.y - P2.y)) <= 1)
{
// See if BVs are linkable if (linkable_BranchVertices (*itl, *it2, A, l, l+1))
{
// put attributes of linkable node in vector una.insert (una.begin() + una.size(), A[l+1][*it2]); // increase number of BVs from upper graph cntrl_cnt++; }
} ++it2;
}
// If there are linkable vertices available if (cntrl_cnt == 1) { link_Vertices ( (A[l])[*itl], una[0], l, l+1, nodes, node_cnt, SFPG, SFPA );
#ifdef DEBUG printf("N: %d, G: %d Coord.: (%d, %d) ", ((A[l])[*itl]).getIndex(), l, Pl.x, Pl.y ); printf (" >"); printf(" N: %d G: %d Coord.: (%d, %d)\n",
(una[0]).getIndex(), l+1, ((una[0]).getPosition()).x, ((una[0]).getPosition()).y ); #endif
} else if (cntrl_cnt > 1) { for (i=0; i < una.size(); i++) { dist = euclideanDistance (Pl, (una[i]).getPosition()); if (!i) { min = dist; index = i; } else { if (dist < min) { min = dist; index = i; } }
} link_Vertices ( (A[l])[*itl], una[index], l, l+1, nodes, node_cnt, SFPG, SFPA ); #ifdef DEBUG printf("N: %d, G: %d Coord.: (%d,%d) ", ((A[l])[*itl]).getIndex(), l, Pl.x, Pl.y ); printf (" >"); printf (" N: %d G: %d Coord.: (%d, %d)\n",
(una[index]).getIndex(), l+1, ((una[index]).getPosition()).x, ((una[index]).getPosition()).y ); #endif }
}
++itl;
}
#ifdef DEBUG printf ( "**********************\n" ); #endif } /**
* link_OtherVertices ()
* Linking the non-branch vertices in different
* sub-graphs together to make surface-paths.
*/ void link_OtherVertices ( graph* G, node_map<NodeAttribute>* A, LUT* L, graph& SFPG, node_map<NodeAttribute>& SFPA, int I, int J, node* nodes, int& node_cnt )
{ int i, j, k; NodeType T; node ref_I, ref_J; node this_V, next_V, last_V, temp_V; node last_linked_VI;
// Lists of nodes corresponding to the 2 input sub-graphs vector <node> LI; vector <node> LJ;
// iterators for traversing the graph graph :: node_iterator it; graph :: node_iterator end; node :: out_edges_iterator eib; node :: out_edges_iterator eie;
/** Get all possible starting TERMINAL vertices */ //printf ("Starting from unvisited TERMINAL vertices.\n"); it = (G[I]).nodes_begin(); end = (G[I]).nodes_end(); while (it != end) {
T = ((A[I])[*it]).getType(); if (T == TERMINAL && ((A[I])[*it]).isVisited() == FALSE)
{
LI.clear(); LJ.clear(); this_V = *it; last_V = this_V;
DO_LINKING1: ref_I = this_V; if (find_LinkableVertex2 (ref_I, ref_J, A, L, I, J)) { // Only consider vertices which are not linked yet if ( !((A[J])[ref_J]).isLinked() ) { find_VList (this_V, last_V, ref_J, G, A, LI, LJ, I, J); link_VertexLists (LI, LJ, A, I, J, SFPG, SFPA, nodes, node_cnt, last_linked_VI);
#ifdef DEBUG printf ("Linked vertices:"); for (k=0; k < LI.size(); k++) { if (k) printf ("\t\t"); printf ("\tNode %d of G %d < > Node %d of G %d\n",
((A[I])[LI[k]]).getIndex(), I, ((A[J])[LJ[k]]).getIndex(), J );
}
#endif
} else { if (get_NextVertex (this_V, last_V, next_V, G[I], A[I])) { if ( ((A[I])[next_V]).getType() != BRANCH ) { last_V = this_V; this_V = next_V; goto DO_LINKING1; } } } } }
++it; }
/** Get all possible neighbors of linked BRANCH vertices */
//printf ("Starting from unvisited neighbors of linked BRANCH vertices.\n"); it = (G[I]).nodes_begin(); end = (G[I]).nodes_end(); while (it != end) {
T = ((A[I])[*it]).getType(); if (T == BRANCH && ((A[I])[*it]).isLinked() == TRUE) { eib = (*it).out_edges_begin(); eie = (*it).out_edges_end(); while (eib != eie) {
// get the neighbor vertex this_V = eib->target();
// add neighbor vertex to vector if not linked already if ( ((A[I])[this_V]).isVisited() == FALSE ) { LI.clear(); LJ.clear(); last_V = *it;
DO_LINKING2: ref_I = this_V; if (find_LinkableVertex2 (ref_I, ref_J, A, L, I, J)) {
// Only consider vertices which are not linked yet if ( !((A[J])[ref_J]).isLinked() ) { find_VList (this_V, last_V, ref_J, G, A, LI, LJ, I, J); #ifdef DEBUG printf (" \n");
#endif link_VertexLists (LI, LJ, A, I, J, SFPG, SFPA, nodes, node_cnt, last_linked_VI);
#ifdef DEBUG printf ("Linked vertices:"); for (k=0; k < LI.size(); k++) { if (k) printf ("\t\t"); printf ("\tNode %d of G %d<- >Node %d of G %d\n",
((A[I])[LI[k]]).getIndex(), I, ((A[J])[LJ[k]]).getIndex(), J );
}
#endif
} } else { if (get_NextVertex (this_V, last_V, next_V, G[I], A[I]))
{ if ( ((A[I])[next_V]).getType() != BRANCH ) { last_V = this_V; this_V = next_V; goto DO_LINKING2; }
}
} ++eib;
}
++it; }
/** Get all possible neighbors of non-linked BRANCH vertices */ //printf ("Starting from unvisited neighbors of non-linked BRANCH vertices.\n"); it = (G[I]).nodes_begin(); end = (G[I]).nodes_end(); while (it != end) {
T = ((A[I])[*it]).getType(); if (T == BRANCH && ((A[I])[*it]).isLinked() == FALSE) { eib = (*it).out_edges_begin(); eie = (*it).out_edges_end(); while (eib != eie) {
// get the neighbor vertex this_V = eib->target();
// add neighbor vertex to vector if not linked already if ( ((A[I])[this_V]).isVisited() == FALSE ) { LI.clear(); LJ.clear(); last_V = *it; DO_LINKING3: ref_I = this_V; if (find_LinkableVertex2 (ref_I, ref_J, A, L, I, J)) {
// Only consider vertices which are not linked yet if ( !((A[J])[ref_J]).isLinked() ) { find_VList (this_V, last_V, ref_J, G, A, LI, LJ, I, J);
#ifdef DEBUG printf (" \n");
#endif link_VertexLists (LI, LJ, A, I, J, SFPG, SFPA, nodes, node_cnt, last_linked_VI); #ifdef DEBUG printf ("Linked vertices:"); for (k=0; k < LI.size(); k++) { if (k) printf ("\t\t"); printf ("\tNode %d of G %d< >Node %d of G %d\n",
((A[I])[LI[k]]).getIndex(), I, ((A[J])[LJ[k]]).getIndex(), J );
}
#endif
} } else { if (get_NextVertex (this_V, last_V, next_V, G[I], A[I])) { if ( ((A[I])[next_V]).getType() != BRANCH ) { last_V = this_V; this_V = next_V; goto DO_LINKING3; } }
} } ++eib;
}
++it;
}
/** * NodeExists ()
* Checks in the nodes in level 'L' to see if a node
* with position 'P' already exists. If it does it
* returns the index of the node, otherwise returns zero. */ bool NodeExists (graph& G, node_map<NodeAttribute>& A, int L,
CvPoint P, int& node_index) { #ifndef _SFP_H #define _SFP_H
#include <CV.h> #include <GTL/graph.h> #include "node_attrib.h" #include "sub_graph.h" #include "surface_path.h" void construct_SurfacePath (graph*, node_map<NodeAttribute>*, LUT*, vector<SFP>&, int, int); void form_SurfacePaths (graph*, node_map<NodeAttribute>*, LUT*, graph&, node_map<NodeAttribute>&, int, int); void link_BranchVertices (graph*, node_map<NodeAttribute>*, graph&, node_map<NodeAttribute>&, int, int, node*, int&); void link_OtherVertices (graph*, node_map<NodeAttribute>*, LUT*, graph&, node_map<NodeAttribute>&, int, int, node*, int&); bool NodeExists (graph&, node_map<NodeAttribute>&, int, CvPoint, int&); void link_Vertices ( NodeAttribute&, NodeAttribute&, int, int, node*, int&, graph&, node_map<NodeAttribute>&); void link_VertexLists ( vector<node>, vector<node>, node_map<NodeAttribute>*, int, int, graph&, node_map<NodeAttribute>&, node*, int&, node&); void link_BranchNodes(graph*, node_map<NodeAttribute>*, vector<SFP>&, int, int); void link_OtherNodes(graph*, node_map<NodeAttribute>*, LUT*, vector<SFP>&, int, int); void link_NodeLists (vector <node>, vector <node>, node_map <NodeAttribute>*, int, int, vector <SFP>&); void link_Nodes ( NodeAttribute&, NodeAttribute&, vector <CvPoint>&, vector <CvPoint>&, int, int, vector <SFP>& ); void getNeighbors ( node, node_map <NodeAttribute>&, vector <CvPoint>& ); void transform_SFPG ( vector <SFP>&, graph&, node_map<NodeAttribute>&); void merge_SurfacePaths (vector <SFP>&); bool are_Mergeable(SFP&, SFP&); #endif
/* ***************************************************************************
* sfp_attrib.cc
* Written by Shahram Ebadollahi
* Digital Video|Multimedia Group
* Columbia University
* ***************************************************************************/
#include <GTL/graph.h>
#include "../util/filename.h" #include "../util/ppmio.h" #include "../util/util.h" #include "sfp_attrib.h"
SFP_Attribute :: SFP_Attribute()
{
_S = 0;
_D = 0; _G = 0; _N = 0;
_P.x = 0; _P.y = 0;
_ADJ.clear(); _OL.clear(); }
SFP_Attribute :: SFP_Attribute (CvPoint p, unsigned char s, unsigned char d, int n)
{
_G = 0;
_N = n;
_S = s;
_D = d;
_P.x = p.x;
_P.y = p.y;
_ADJ.clear(); _OL.clear();
}
SFP_Attribute :: SFP_Attribute (const SFP_Attribute& NA)
{
CvPoint P = NA._P; _P.x = P.x; _P.y = P.y;
_G = NA._G; _N = NA._N; _S = NA._S; _D = NA._D;
_ADJ.clear(); for (int i=0; i < NA._ADJ.size(); i++)
_ADJ.insert (_ADJ.begin() + _ADJ.size(), NA._ADJ[i]);
_OL.clear(); for (int j=0; j < NA._OL.size(); j++)
_OL.insert (_OL.begin() + _OL.size(), NA._OL[j]);
} void SFP_Attribute :: set (CvPoint p, unsigned char s, unsigned char d, int n)
{
_G = 0;
_N = n;
_S = s;
_D = d;
_P.x = p.x; _P.y = p.y;
} bool SFP_Attribute :: operator==( const SFP_Attribute& right )
{ if (_N == right._N && _P.x == right._P.x && _P.y == right._P.y) return TRUE; else return FALSE; } const SFP_Attribute &SFP_Attribute :: operator=( const SFP_Attribute &right )
{ if( &right != this ) {
CvPoint P = right._P; _P.x = P.x; _P.y = P.y;
_G = right._G;
_N = right._N;
_S = right._S;
_D = right._D;
_ADJ.clear(); for (int i=0; i<right._ADJ.size(); i++)
_ADJ.insert (_ADJ.begin() + _ADJ.size(), right._ADJ[i] );
_OL.clear(); for (int j=0; j<right._OL.size(); j++)
_OL.insert (_OL.begin() + _OL.size(), right._OL[j]); } return *this; // enables cascaded assignments } void SFP_Attribute :: setPosition (CvPoint p)
{
_P.x = p.x; _P.y = p.y; } /**
* appendToAdj ()
* Add new neighbor nodes to the _ADJ field of the
* object. For this check to verify that the new node
* is not already in the _ADJ field of object and it
* also is not one of the members of _OL field.
*/ void SFP_Attribute :: appendToAdj (vector <CvPoint>& ANL)
{ int i, j, k; bool exists; CvPoint p_ANL, p_ADJ, p_OL; int size_ANL = ANL.size(); int size_ADJ = _ADJ.size(); int size_OL = _OL.size(); for (i=0; i<size_ANL; i++) { p_ANL = ANL[i]; exists = FALSE;
for (j=0; j<size_ADJ; j++) { p_ADJ = _ADJ[j]; if (p_ANL.x==p_ADJ.x && p_ANL.y==p_ADJ.y) { exists = TRUE;
break; } } for (k=0; k<size_OL; k++) { p_OL = _OL[k]; if (p_ANL.x==p_OL.x && p_ANL.y==p_OL.y) { exists = TRUE; break; } }
// Add i-th point of ANL to adjacent nodes _ADJ. if (!exists) addNeighbor(p_ANL); } }
/**
* appendToOL()
*
*/ void SFP_Attribute :: appendToOL (CvPoint P)
{ int i, k;
CvPoint p_OL; bool exists = FALSE; int size_OL = _OL.size(); for (i=0; i < size_OL; i++) { p_OL = _OL[i]; if (P.x == p_OL.x && P.y == p_OL.y) exists = TRUE;
} if (!exists) addToOL(P); }
/**
* checkNodeAdjacency ()
*
* Friend function of class <SFP_Attribute>
* which checks if two SFP nodes are adjacent.
*/ bool checkNodeAdjacency (SFP_Attribute& Al, SFP_Attribute& A2)
{ int i, j; CvPoint pl, p2; int size_ADJ; int size_OL;
/** Check _ADJ points of A2 against _OL of Al */ size_ADJ = A2._ADJ.size(); size_OL = Al._OL.size(); for (i=0; i<size_OL; i++) { // Go over Original Locations of Al pl = Al._OL[i]; for (j=0; j<size_ADJ; j++) { // Go over Adjacent Nodes of A2 p2 = A2._ADJ[j]; if (euclideanDistance(pl,p2) < DIST_THRESH) return TRUE; } }
/** Check _ADJ points of Al against _OL of A2 */ size_ADJ = Al._ADJ.size(); size_OL = A2._OL.size(); for (i=0; i<size_OL; i++) { // Go over Original Locations of A2 pl = A2._OL[i]; for (j=0; j<size_ADJ; j++) { // Go over Adjacent Nodes of Al p2 = Al._ADJ[j]; if (euclideanDistance(pl,p2) < DIST_THRESH) return TRUE; } } return FALSE; }
/* ***************************************************************************
* sfp_attrib.h
*
* Written by Shahram Ebadollahi
* Digital Video|Multimedia Group
* Columbia University
* ***************************************************************************/ #ifndef SFP_ATTRIBUTE_H #define SFP_ATTRIBUTE_H
#include <cstring> #include <cstdlib> #include <GTL/graph.h> #include <GTL/node.h> #include <CV.h> #include "node_attrib.h" const double DIST_THRESH = 3; class SFP_Attribute { friend bool checkNodeAdjacency (SFP_Attribute&, SFP_Attribute&); public:
SFP_Attribute ();
SFP_Attribute (CvPoint, unsigned char, unsigned char, int) ;
SFP_Attribute (const SFP_Attribute &) ;
~SFP_Attribute() {} bool operator==(const SFP_Attribute&); const SFP_Attribute &operator=(const SFP_Attribute &); void set (CvPoint, unsigned char, unsigned char, int); unsigned char getS () { return _S; } unsigned char getD () { return _D; }
CvPoint& getPosition () { return _P; } void setPosition (CvPoint); void setN (int n) { _N = n; } int getN () { return _N; } void setG (int g) { _G = g; } int getG () { return _G; } void addNeighbor(CvPoint p) { _ADJ.insert(_ADJ.begin() + _ADJ.size(), p); } int sizeNeighbor() { return _ADJ.size(); } CvPoint& getNeighbor(int i) { return _ADJ[i]; } vector <CvPoint>& getAllNeighbors () { return _ADJ; } void appendToAdj (vector <CvPoint>&); void addToOL (CvPoint p) { _OL.insert (_OL.begin() + _OL.size(), p); } void appendToOL(CvPoint); vector <CvPoint>& getOL () { return _OL; } private: int _N; // Size of the maximal disc associated with vertex int _G; // Number of gray levels associated with disc size _N
CvPoint _P; // Position of the vertex in the graph unsigned char _S; // Number of maximal discs at the vertex location unsigned char _D; // Value of the image at the vertex location vector <CvPoint> _ADJ; // Location of the neighbors of current node vector <CvPoint> _OL; // Original Location of the nodes.
}; bool checkNodeAdjacency (SFP_Attribute&, SFP_Attribute&) ; #endif
/* *************************************************************************** * sub_graph.cc
* This file contains the functions for constructing the sub-graphs
* from the maximal discs. It uses the GTL+ graph library for
* the graph operations. *
* Written by Shahram Ebadollahi
* Digital Video|Multimedia Group
* Columbia University
* ***********************************************************************#***/
#include <iostream>
#include <fstream>
#include <cmath>
#include <vector>
#include <ipl/ipl.h>
#include <CV.h>
#include <GTL/graph.h>
#include "../util/olist.h" #include "../util/filename.h" #include "../util/ppmio.h"
#include "sub_graph.h" #include "node_attrib.h"
//#define DEBUG
//
// Defining the methods of class LUT.
/ /
LUT : : LUT()
{ columns = 0; rows = 0; node_array = 0; used_flag = 0; }
LUT :: ~LUT()
{ if (node_array) { for (int k=0; k<columns; k++) delete [] node_array[k]; delete [] node_array; } if (used_flag) { for (int l=0; l<columns; l++) delete [] used_flag[l]; delete [] used_flag; } }
void LUT :: setsize(int c, int r)
{ columns = c; rows = r; node_array = new node* [c]; for (int k=0; k<c; k++) node_array[k] = new node [r]; used_flag = new bool* [c]; for (int k=0; k<c; k++) used_flag[k] = new bool [r]; for (int j=0; j<columns; j++) { for (int i=0; i<rows; i++) { used_flag[j][i] = FALSE; } }
} void LUT :: putNode (CvPoint P, node N)
{ if (P.x < columns && P.y < rows) { node_array[P.x][P.y] = N; used_flag[P.x][P.y] = TRUE; } } bool LUT :: getNode (CvPoint P, node& N)
{ if (P.x < columns && P.y < rows) { if (used_flag[P.x][P.y]) {
N = node_array[P.x][P.y]; return TRUE; } else return FALSE; } else return FALSE; } bool LUT :: getNode(int x, int y, node& N)
{ if (x < columns && y < rows) { if (used_flag[x][y]) {
N = node_array[x][y]; return TRUE; } else return FALSE; } else return FALSE; }
//
// Definition of LUT finished!
//
/**
* sub_graph ()
*
* Calculates the sub-graph for a given S image which has the number of * the maximal discs at each pixel point, and D image which is the
* blurred image at each level. The results are kept in the undirected
* graph 'G'.
* The initial number of vertices is equal to the non-zero pixels in
* the subset 'S'.
*/ void sub_graph (IplImage* S, IplImage* D, graph& G, node_map<NodeAttribute>& A, int level)
{ int h, w, i, j; int node_cnt=0; int p_center, pix_loc; unsigned char s, d;
CvPoint P, P_0; int W = S->width; // width and height of the images int H = S->height;
// there is one node per non-zero pixel in the graph int num_nodes = cvCountNonZero(S);
// initializing the graph with 'num_nodes' number of nodes node* nodes = new node [num_nodes]; for( i=0; i < num_nodes; i++ ) nodes[i] = G.new_node();
/**
* Assigning attributes to each node of the graph. */ int noffset; char *pSData = S->imageData; char *pDData = D->imageData; node_cnt = 0; for( h=0; h < H; h++ ) { noffset = h * S->widthStep; for( w=0; w < W; w++ ) { s = (unsigned char)pSData[noffset + w]; d = (unsigned char)pDData[noffset + w]; if (s) {
P.x = w; P.y = h;
// setting the attributes of the current node in the sub-graph NodeAttribute na(P, s, d, node_cnt); A[nodes[node_cnt]] = na; node_cnt++; } } }
/**
* Creating edges between 8-connected nodes of the graph. */ graph :: node_iterator it = G.nodes_begin(); graph :: node_iterator end = G.nodes_end(); while (it != end)
{
P_0 = (A[*it]).getPosition(); i = (A[*it]).getIndex(); graph :: node_iterator it_2 = G.nodes_begin(); while (it_2 != end)
{
P = (A[*it_2]).getPosition(); j = (A[*it_2]).getIndex(); if (i!=j && fabs (double(P.x - P_0.x)) <= 1 && fabs (double(P.y - P_0.y)) <= 1)
G.new_edge(nodes[i], nodes[j]);
// only create an edge if two nodes are 8-neighbors ++it_2; }
/* NOTE: moved these 3 lines from down to here to save one complete traverse of graph */ int node_degree = it->outdeg();
// Getting the degree of the current node = number of incident edges.
(A[*it]).setType(node_degree);
// Setting the type of the node based on its degree. (A[*it]).setLevel(level);
// Setting the size of the maximal disc for the current iteration.
++it; }
G.make_undirected() ; /**
* Setting the type of each node of the graph 'G' .
* Each node can be of the type: ISOLATED, TERMINAL, ARC, and BRANCH
* based on the number of neighbors that it has.
* We iterate through the nodes of the graph and determine the types. */ delete [] nodes; }
/**
* connected_subgraph()
*
* Starts from 'vertex' and finds all the other vertices that are
* connected to it and collectively form a connected sub-graph.
* It sets the attributes of each vertex, and creates an edge from
* each vertex to the newly-found vertex. *
* - nodes : the array of nodes of the graph
* - node_cnt: the number of the present node * - 'S', 'D': refer to Baile's paper
* - G : is the subset graph
* - FLAGS : is a boolean matrix showing which image pixs are processed
* - A: is a map that carries the node attributes
*/ void connected_subgraph( node* nodes, int& node_cnt, IplImage* S, IplImage* D, graph& G, bool* FLAGS, node_map<NodeAttribute>& A, int total_nodes )
{ int W = S->width; // width of the images int cnt_root = node_cnt;
CvPoint P = (A[nodes[node_cnt]]).getPosition();
// Getting all the neighboring nodes of the current node vector <CvPoint>* neighbor_points = find_neighbors( P, S, FLAGS );
// Total number of 8-connected neighbors of the current node int neighbor_cnt = neighbor_points->size();
// End of this branch of the connected sub-graph if ( neighbor_cnt == 0 ) return;
/**
* For each neighbor point set the attributes for the corresponding
* node in the graph and create an edge between the current node
* and the neighbor node in the graph.
*/ for( int n=0; n < neighbor_cnt; n++ )
{ node_cnt++; if( node_cnt >= total_nodes )
{ cerr << "node count exceeded total nodes" << endl; return; }
CvPoint new_point = neighbor_points->operator[](n); int pix_loc = new_point.x + new_point.y * W;
// finding the attributes of the neighbor node unsigned char s = static_cast<unsigned char>( S->imageData[pix_loc] ); unsigned char d = static_cast<unsigned char>( D->imageData[pix_loc] );
// Setting the appropriate flag for the current node FLAGS[ pix_loc ] = true;
// Setting the attributes of the current node NodeAttribute na( new_point, s, d ); A[ nodes[node_cnt] ] = na;
// creating an edge between the current and the neighbor node G.new_edge( nodes[cnt_root], nodes[node_cnt] );
// calling 'connected_subgraph()' recursively connected_subgraph( nodes, node_cnt, S, D, G, FLAGS, A, total_nodes ); } }
/**
* find_neighbors ()
*
* Finds the neighbors of a node corresponding to point 'P' in image 'S'.
* The neighbor of point 'P' is a point in its 8-neighborhood in image
* 'S' which is non-zero and it hasn't been processed yet as can be
* known from the 'FLAGS' matrix. *
*/ vector <CvPoint>* find_neighbors (CvPoint P, IplImage* S, bool* FLAGS)
{ vector <CvPoint>* P_list = new vector <CvPoint>; int W = S->width; // width and height of the images int H = S->height; int i, j, cnt=0; int pix_loc; unsigned char s;
// Look at the 8-neighbors of the current pixel for( i=-1; i<=1; i++ )
{ for( j=-1; j<=1; j++ )
{
// Taking care of the image boundaries if ( (j + P.x) > -1 && (j + P.x) < W &&
(i + P.y) > -1 && (i + P.y) < H && (j!=0 || i!=0) ) { pix_loc = (j + P.x) + (i + P.y) * W; s = static_cast<unsigned char>( S->imageData[pix_loc] );
// If current pixel is not zero and it's not processed so far
// add it to the list of the neighbors if( (s != 0) && (FLAGS[pix_loc] == false) )
{
// Put the pixel location in the neighbor list CvPoint P_new; P_new.x = j + P.x; P_new.y = i + P.y; P_list->insert ( P_list->begin() + P_list->size(), P_new ); cnt++;
} } } }
return( P_list ); }
/**
* simplify_BranchVertexClusters () * Simplifies the branch vertex clusters. An example of such clusters is
* as follows:
* [diagram: a branch vertex cluster (left) and its simplified form (right)]
* The branch vertex cluster on the left has been simplified and the
* result is shown on the right.
* The graph is cleaned using a set of primitive loop patterns which
* are: square, diamond, roof and triangle.
* 7/24/03: Added template for bow-tie shape! */ void simplify_BranchVertexClusters (graph& G, node_map <NodeAttribute>& GA)
{ graph :: node_iterator it = G.nodes_begin(); graph :: node_iterator end = G.nodes_end(); graph triangleT; graph roofT; graph diamondT; graph squareT; graph tieT;
// Template graphs for different structures
CvPoint P_0;
P_0.x = 0;
P_0.y = 0;
NodeAttribute NA_0 ( P_0, 0, 0 ) ;
// default attribute of the nodes of the template graphs node_map <NodeAttribute> triangleA(triangleT, NA_0); node_map <NodeAttribute> roofA(roofT, NA_0); node_map <NodeAttribute> diamondA(diamondT, NA_0); node_map <NodeAttribute> squareA(squareT, NA_0); node_map <NodeAttribute> tieA(tieT, NA_0); // setting attributes create_TemplateGraph(triangleT, triangleA, TRIANGL); create_TemplateGraph(roofT, roofA, ROOF); create_TemplateGraph(diamondT, diamondA, DIAMOND); create_TemplateGraph(squareT, squareA, SQR); create_TemplateGraph(tieT, tieA, TIE); // Making up the template graphs mark_BranchVertices (G, GA, tieT, tieA, TIE); mark_BranchVertices (G, GA, squareT, squareA, SQR); mark_BranchVertices (G, GA, diamondT, diamondA, DIAMOND); mark_BranchVertices (G, GA, roofT, roofA, ROOF); mark_BranchVertices (G, GA, triangleT, triangleA, TRIANGL);
// marking the vertices of graph 'G' which coincide with a template simplify_BranchVertices (G, GA, TRIANGL); simplify_BranchVertices (G, GA, ROOF); simplify_BranchVertices (G, GA, DIAMOND); simplify_BranchVertices (G, GA, SQR); simplify_BranchVertices (G, GA, TIE);
// simplify the branch vertices for each label mentioned
// Setting the type of the vertices after simplifying them, it = G.nodes_begin() ; end = G.nodes_end() ; while ( it != end )
{ int node_degree = it->outdeg() ;
// Getting the degree of the current node = number of incident edges (GA[*it] ) .setType( node_degree ) ;
// Setting the type of the node based on its degree (GA[*it] ) .resetTagO ;
// resetting the tags of all the vertices for later. ++it; } }
/**
* simplify_BranchVertexClusters_fine()
* Like simplify_BranchVertexClusters () function but is used only
* for cleaning slanted rhombuses. Slanted rhombuses occur after
* the initial simplification. */ void simplify_BranchVertexClusters_fine ( graph& G, node_map<NodeAttribute>& GA )
{ graph :: node_iterator it = G.nodes_begin() ; graph : : node_iterator end = G.nodes_end() ; graph rhombusTl; graph rhombusT2;
CvPoint P_0;
P_0.x = 0;
P_0.y = 0;
NodeAttribute NA_0 ( P_0, 0, 0 );
// default attribute of the nodes of the template graphs node_map <NodeAttribute> rhombusAl ( rhombusTl, NA_0 ); node_map <NodeAttribute> rhombusA2 ( rhombusT2, NA_0 ); // setting attributes create_TemplateGraph( rhombusTl, rhombusAl, RHOMBUSl ); create_TemplateGraph( rhombusT2, rhombusA2, RHOMBUS2 ); // Making up the template graphs
// Clean the RHOMBUSl structures. mark_BranchVertices ( G, GA, rhombusTl, rhombusAl, RHOMBUSl ) ;
// Clean the RHOMBUS2 structures. mark_BranchVertices ( G, GA, rhombusT2, rhombusA2, RHOMBUS2 ); // Setting the type of the vertices after simplifying them. it = G.nodes_begin(); end = G.nodes_end(); while ( it != end )
{ int node_degree = it->outdeg() ;
// Getting the degree of the current node = number of incident edges (GA[*it] ) .setType( node_degree ) ;
// Setting the type of the node based on its degree (GA[*it] ) .resetTagO ;
// resetting the tags of all the vertices for later. ++it; } }
/**
* create_TemplateGraph()
* Creates a template graph based on the
* type of template passed to it. */ void create_TemplateGraph( graph& G, node_map <NodeAttribute>& A, TEMPLATE_TYPE T )
{
G.make_undirected() ;
CvPoint P;
NodeAttribute na; node nO, nl, n2, n3, n4, n5, n6; switch(T)
{ case(TRIANGL): nO = G.new_node(); nl = G.new_node(); n2 = G.new_node();
// creating nodes for template graph
P.x = 0; P.y = 0; na.set (P, 0, 0); A[nO] = na;
P.x = 1; P.y = 0; na.set (P, 0, 0); A[nl] = na;
P.x = 0; P.y = 1; na.set (P, 0, 0); A[n2] = na;
// assigning attributes to each node of template
G.new_edge(nO, nl); G.new_edge(nO, n2); G.new_edge(nl, n2); // creating edges between each pair of nodes break; case(ROOF): nO = G.new_node(); nl = G.new_node(); n2 = G.new_node(); n3 = G.new_node();
// creating nodes for template graph
P.x = 0; P.y = 0; na.set (P, 0, 0); A[nO] = na;
P.x = -1; P.y = 1; na.set (P, 0, 0); A[nl] = na;
P.x = 0; P.y = 1; na.set (P, 0, 0); A[n2] = na;
P.x = 1; P.y = 1; na.set (P, 0, 0); A[n3] = na;
// assigning attributes to each node of template
G.new_edge(nO, nl);
G.new_edge(nO, n2);
G.new_edge(nO, n3);
G.new_edge(nl, n2);
G.new_edge(n2, n3);
// creating edges between each pair of nodes break; case(DIAMOND): nO = G.new_node(); nl = G.new_node(); n2 = G.new_node(); n3 = G.new_node(); n4 = G.new_node();
// creating nodes for template graph
P.x = 0; P.y = 0; na.set (P, 0, 0); A[nO] = na;
P.x = 1; P.y = 0; na.set (P, 0, 0); A[nl] = na;
P.x = 0; P.y = 1; na.set (P, 0, 0); A[n2] = na;
P.x = -1; P.y = 0; na.set (P, 0, 0); A[n3] = na; P.x = 0; P.y = -1; na.set (P, 0, 0); A[n4] = na;
// assigning attributes to each node of template
G.new_edge(nO, nl);
G.new_edge(nO, n2);
G.new_edge(nO, n3);
G.new_edge(nO, n4);
G.new_edge(nl, n2);
G.new_edge(n2, n3);
G.new_edge(n3, n4);
G.new_edge(n4, nl);
// creating edges between each pair of nodes break; case(SQR): nO = G.new_node(); nl = G.new_node(); n2 = G.new_node(); n3 = G.new_node();
// creating nodes for template graph
P.x = 0; P.y = 0; na.set (P, 0, 0); A[nO] = na;
P.x = 1; P.y = 0; na.set (P, 0, 0); A[nl] = na;
P.x = 1; P.y = 1; na.set (P, 0, 0); A[n2] = na;
P.x = 0; P.y = 1; na.set (P, 0, 0); A[n3] = na;
// assigning attributes to each node of template
G.new_edge(nO, nl); G.new_edge(nO, n2); G.new_edge(nO, n3); G.new_edge(nl, n2); G.new_edge(nl, n3); G.new_edge(n2, n3);
// creating edges between each pair of nodes break; case(TIE): nO = G.new_node(); nl = G.new_node(); n2 = G.new_node(); n3 = G.new_node(); n4 = G.new_node(); n5 = G.new_node(); n6 = G.new_node();
// Creating nodes for template graph
P.x = 0; P.y = 0; na.set (P, 0, 0); A[nO] = na;
P.x = 1; P.y = 1; na.set (P, 0, 0); A[nl] = na;
P.x = 0; P.y = 1; na.set (P, 0, 0); A[n2] = na;
P.x = -1; P.y = 1; na.set (P, 0, 0); A[n3] = na;
P.x = -1; P.y = -1; na.set (P, 0, 0); A[n4] = na;
P.x = 0; P.y = -1; na.set (P, 0, 0); A[n5] = na;
P.x = 1; P.y = -1; na.set (P, 0, 0); A[n6] = na;
// assigning attributes to each node of template
G.new_edge(nO, nl); G.new_edge(nO, n2); G.new_edge(nO, n3); G.new_edge(nl, n2); G.new_edge(n2, n3); G.new_edge(nO, n4); G.new_edge(nO, n5); G.new_edge(nO, n6); G.new_edge(n4, n5); G.new_edge(n5, n6);
// creating edges between each pair of nodes break; case(RHOMBUSl): nO = G.new_node(); nl = G.new_node(); n2 = G.new_node(); n3 = G.new_node();
// creating nodes for template graph
P.x = 0; P.y = 0; na.set (P, 0, 0); A[nO] = na;
P.x = 1; P.y = 0; na.set (P, 0, 0); A[nl] = na;
P.x = 2; P.y = 1; na.set (P, 0, 0); A[n2] = na;
P.x = 1; P.y = 1; na.set (P, 0, 0); A[n3] = na;
// assigning attributes to each node of template
G.new_edge(nO, nl); G.new_edge(nl, n2); G.new_edge(n2, n3); G.new_edge(nO, n3);
// creating edges between each pair of nodes break; case(RHOMBUS2): nO = G.new_node(); nl = G.new_node(); n2 = G.new_node(); n3 = G.new_node();
// creating nodes for template graph
P.x = 0; P.y = 0; na.set (P, 0, 0); A[nO] = na;
P.x = 1; P.y = -1; na.set (P, 0, 0); A[nl] = na;
P.x = 2; P.y = -1; na.set (P, 0, 0); A[n2] = na;
P.x = 1; P.y = 0; na.set (P, 0, 0); A[n3] = na;
// assigning attributes to each node of template
G.new_edge(nO, nl); G.new_edge(nl, n2); G.new_edge(n2, n3); G.new_edge(nO, n3);
// creating edges between each pair of nodes break;
}
/**
* mark_BranchVertices ()
*
* This function marks the branch vertices of graph 'G' with attribute
* 'GA' using the template 'T' with attribute 'TA'. */ void mark_BranchVertices( graph& G, node_map <NodeAttribute>& GA, graph& T, node_map <NodeAttribute>& TA, TEMPLATE_TYPE type )
{ vector <node> cluster;
// holds the cluster of nodes around the current BRANCH node
NodeType node_type; bool tag; CvPoint P_0, P_l; int fit_counter = 0; graph :: node_iterator node_it = G.nodes_begin(); graph :: node_iterator node_end = G.nodes_end();
// iterators for iterating over the nodes of the graph graph :: node_iterator _it; graph :: node_iterator _end;
// iterators for iterating over the nodes of the graph node :: out_edges_iterator edge_it; node :: out_edges_iterator edge_end;
// iterators for iterating over all the edges going out of a node
/**
* Iterate over all the nodes of the graph 'G'
* and if they are branch vertices get all their
* neighbor nodes. Determine if the template with
* the type specified in the input matches the
* cluster of vertices. */ while ( node_it != node_end )
{ node_type = (GA[*node_it]).getType(); tag = (GA[*node_it]).Tag();
P_0 = (GA[*node_it]).getPosition(); cluster.clear(); if( node_type == BRANCH && !tag ) // only process the BRANCH vertices which are not processed yet
{ if( RHOMBUSl == type || RHOMBUS2 == type )
{
/*
* For RHOMBUS structures should get all the
* nodes which are in the 2-radius of the current
* BRANCH vertex. */
_it = G.nodes_begin(); _end = G.nodes_end();
// Add the reference node to cluster. cluster.insert( cluster.begin() + cluster.size(), *node_it ); while( _it != _end ){ if( *_it != *node_it )
{
P_l = (GA[*_it]).getPosition(); if( fabs (double(P_0.x - P_l.x)) <= 2 && fabs (double(P_0.y - P_l.y)) <= 2) cluster.insert (cluster.begin() + cluster.size(), *_it);
}
_it++;
}
} else
{
/*
* For Non-RHOMBUS nodes it's enough to collect
* neighboring nodes of the current BRANCH vertex.
*/ edge_it = (*node_it).out_edges_begin(); edge_end = (*node_it).out_edges_end(); cluster.insert(cluster.begin()+cluster.size(), edge_it->source());
/*
* Iterate through all neighbor nodes of current
* node and add them to the cluster.
*/ while( edge_it != edge_end ){ cluster.insert(cluster.begin()+cluster.size(), edge_it->target()); edge_it++; }
} match_TemplateToGraph(cluster, G, GA, T, TA, type, fit_counter); // see if the template 'T' fits the cluster
} node_it++;
} }
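The RHOMBUS branch of the function above gathers every node whose x and y offsets from the reference vertex are both at most 2 (a Chebyshev-distance test). A minimal stand-alone sketch of that clustering step; the `Point` struct and the `gather_cluster` name are illustrative stand-ins, not part of the listing:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Minimal stand-in for OpenCV's CvPoint used by the patent listing.
struct Point { int x, y; };

// Collect every point within a Chebyshev distance of 2 of the reference
// point 'p0' (the "2-radius" used for RHOMBUS clusters). The reference
// point itself is placed first, mirroring the listing above.
std::vector<Point> gather_cluster(const Point& p0, const std::vector<Point>& nodes)
{
    std::vector<Point> cluster;
    cluster.push_back(p0);
    for (size_t i = 0; i < nodes.size(); i++) {
        if (nodes[i].x == p0.x && nodes[i].y == p0.y) continue;  // skip the reference itself
        if (std::abs(nodes[i].x - p0.x) <= 2 && std::abs(nodes[i].y - p0.y) <= 2)
            cluster.push_back(nodes[i]);
    }
    return cluster;
}
```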
/** * match_TemplateToGraph()
* For the cluster of nodes 'VN' with attributes in 'GA', this function
* tries to see if template 'T' with attribute 'TA' which is of type
* 'type' fits the cluster. It also rotates 'T' by 90 degrees and
* tries to fit the rotated 'T' to the cluster. In the case
* that the template graph 'T' matches the cluster, the vertices
* of the graph which are present in the vector 'VN' are tagged.
* 'counter' determines the sequence number of the primitive template
* that matches the cluster 'VN'. */ void match_TemplateToGraph( vector<node> VN, graph& G, node_map <NodeAttribute>& GA, graph& T, node_map <NodeAttribute>& TA, TEMPLATE_TYPE type, int& counter )
{ int i, j, k; int cnt = 0; int match_cnt = 0; int rot_cnt = 0; int not_matched; int node_cnt = 0; int cntr, s; node* rhombus_nodes; vector <node> matched; vector <CvPoint> used_Ps; int cluster_size = VN.size();
// number of vertices in the cluster int tmpl_node_num = T.number_of_nodes();
// number of vertices in the template graph graph :: node_iterator tmpl_it = T.nodes_begin(); graph :: node_iterator tmpl_end = T.nodes_end(); // iterators for the template graph
CvPoint anchor = ( GA[VN[0]] ).getPosition();
CvPoint tmpl_point, P_old; CvPoint vrtx_point; switch( type )
{ case( RHOMBUS1 ): case( RHOMBUS2 ): rhombus_nodes = new node[tmpl_node_num]; while( rot_cnt < 4 )
{ match_cnt = 0; matched.clear();
// fit the template to the cluster for( i=0; i < cluster_size; i++)
{ vrtx_point = ( GA[VN[i]] ).getPosition(); // the current vertex of the cluster
// which vertex in the template matches current cluster vertex node_cnt = 0; tmpl_it = T.nodes_begin(); tmpl_end = T.nodes_end(); while( tmpl_it != tmpl_end )
{ tmpl_point = ( TA[*tmpl_it] ).getPosition(); if( vertex_Match( anchor, tmpl_point, vrtx_point ) )
{
rhombus_nodes[node_cnt] = VN[i]; match_cnt++;
} tmpl_it++; node_cnt++; } } if( match_cnt == tmpl_node_num )
{ for( j=0; j < tmpl_node_num; j++ ) matched.insert(matched.begin() + matched.size(), rhombus_nodes[j]); if( complete_SetOfEdges( matched, GA, type ) ) delete_Edge( rhombus_nodes[0], rhombus_nodes[1], G, GA ); goto quit; } rotate_Template( T, TA ); rot_cnt++;
} quit: delete [] rhombus_nodes; break; case( TIE ): while( rot_cnt < 2 )
{ match_cnt = 0; matched.clear();
// fit the template to the cluster for( i=0; i < cluster_size; i++)
{ vrtx_point = ( GA[VN[i]] ).getPosition(); // the current vertex of the cluster
// which vertex in the template matches current cluster vertex tmpl_it = T.nodes_begin(); tmpl_end = T.nodes_end(); while( tmpl_it != tmpl_end )
{ tmpl_point = ( TA[*tmpl_it] ).getPosition(); if( vertex_Match( anchor, tmpl_point, vrtx_point ) )
{ matched.insert(matched.begin() + matched.size(), VN[i]); match_cnt++;
} tmpl_it++;
} } if( match_cnt == tmpl_node_num )
{
// set tags for all the vertices that match template if( tag_Vertices( matched, GA, type, counter ) )
{ counter++; if( rot_cnt > 0 )
{ for( k=0; k < (4 - rot_cnt); k++ ) rotate_Template( T, TA ); } goto quit2; } } rotate_Template( T, TA ); rot_cnt++;
} break; case( ROOF ): case( SQR ): while( rot_cnt < 4 )
{ match_cnt = 0; matched.clear();
// fit the template to the cluster for( i=0; i < cluster_size; i++)
{ vrtx_point = ( GA[VN[i]] ).getPosition(); // the current vertex of the cluster
// which vertex in the template matches current cluster vertex tmpl_it = T.nodes_begin(); tmpl_end = T.nodes_end(); while( tmpl_it != tmpl_end )
{ tmpl_point = ( TA[*tmpl_it] ).getPosition(); if( vertex_Match( anchor, tmpl_point, vrtx_point ) )
{ matched.insert(matched.begin() + matched.size(), VN[i]); match_cnt++;
} tmpl_it++;
}
} if( match_cnt == tmpl_node_num )
{
// set tags for all the vertices that match template if( tag_Vertices( matched, GA, type, counter ) )
{ counter++; if( rot_cnt > 0 )
{ for( k=0; k < (4 - rot_cnt); k++ ) rotate_Template( T, TA ); } goto quit2;
} } rotate_Template( T, TA ); rot_cnt++;
} quit2: break; case( TRIANGL ): not_matched = 0; label: rot_cnt = 0;
/*
* Try the 4 different rotations of the template.
*/ while( rot_cnt < 4 )
{ match_cnt = 0; matched.clear();
// fit the template to the cluster for( i=0; i < cluster_size; i++ )
{ vrtx_point = ( GA[VN[i]] ).getPosition(); // the current vertex of the cluster
// which vertex in the template matches current cluster vertex tmpl_it = T.nodes_begin(); tmpl_end = T.nodes_end(); used_Ps.clear(); while( tmpl_it != tmpl_end )
{ tmpl_point = ( TA[*tmpl_it] ).getPosition();
// check to see if the template vertex was used before cntr = 0; for( k = 0; k < used_Ps.size(); k++ )
{ if( tmpl_point.x == (used_Ps[k]).x && tmpl_point.y == (used_Ps[k]).y ) cntr++; }
/*
* Only take into account the template
* vertices that haven't been used. */ if( cntr == 0 )
{
// Check to see if: anchor + tmpl_point == vrtx_point if( vertex_Match( anchor, tmpl_point, vrtx_point ) )
{ if( i ) { if( edge_Exists( VN[0], VN[i], GA ) )
{ matched.insert( matched.begin() + matched.size(), VN[i] ); used_Ps.insert( used_Ps.begin() + used_Ps.size(), tmpl_point ); match_cnt++; } } else
{ matched.insert( matched.begin() + matched.size(), VN[i] ); used_Ps.insert( used_Ps.begin() + used_Ps.size(), tmpl_point ); match_cnt++; } } } tmpl_it++;
}
} if( match_cnt == tmpl_node_num )
{
// set tags for all the vertices that match template if( tag_Vertices( matched, GA, type, counter ) )
{ counter++; if( rot_cnt > 0 )
{ for( k=0; k < (4 - rot_cnt); k++ ) rotate_Template( T, TA );
} break;
} } rotate_Template( T, TA ); rot_cnt++;
} if( rot_cnt == 4 && not_matched < 2 )
{
// change the order of vertices of the template and try again not_matched++; shift_TriangularTemplateVertices( T, TA, not_matched ); goto label; } if( not_matched != 0 ) { for( int n = not_matched + 1; n < 4; n++ ) shift_TriangularTemplateVertices( T, TA, n );
// this loop is to bring the template back to its original orientation
} break; case( DIAMOND ): match_cnt = 0; matched.clear();
// fit the template to the cluster for( i=0; i < cluster_size; i++)
{ vrtx_point = ( GA[VN[i]] ).getPosition(); // the current vertex of the cluster
// which vertex in the template matches current cluster vertex tmpl_it = T.nodes_begin(); tmpl_end = T.nodes_end(); while( tmpl_it != tmpl_end )
{ tmpl_point = ( TA[*tmpl_it] ).getPosition(); if( vertex_Match( anchor, tmpl_point, vrtx_point ) )
{ matched.insert(matched.begin() + matched.size(), VN[i]); match_cnt++;
} tmpl_it++;
} } if( match_cnt == tmpl_node_num )
{
// set tags for all the vertices that match template if( tag_Vertices( matched, GA, type, counter ) ) counter++;
} break;
};
}
/**
* edge_Exists()
*
* Checks to see if the two nodes of the input
* are actually connected by an edge.
*/ bool edge_Exists( node N1, node N2, node_map <NodeAttribute>& NM )
{ int i; node :: out_edges_iterator edge_it = N1.out_edges_begin(); node :: out_edges_iterator edge_end = N1.out_edges_end(); while( edge_it != edge_end ) { node n_tmp = edge_it->target(); if( n_tmp == N2 ) return( TRUE ); edge_it++; } return( FALSE ); }
/**
* vertex_Match()
*
* Checks to see if the following holds:
* P2 = P1 + O
* where 'O' is the origin and 'P1' and 'P2' are two points in
* the 2D plane. It returns 'TRUE' if the identity holds.
*/ bool vertex_Match( CvPoint& O, CvPoint& P1, CvPoint& P2 )
{ if( P2.x == (P1.x + O.x) && P2.y == (P1.y + O.y) ) return TRUE; else return FALSE; }
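The matching predicate above is a pure translation test: a template vertex P1 matches a cluster vertex P2 exactly when P2 is P1 shifted by the anchor O. A self-contained sketch of the same identity; the `Point` struct and lowercase `vertex_match` name are illustrative stand-ins for the listing's CvPoint-based version:

```cpp
#include <cassert>

// Minimal stand-in for OpenCV's CvPoint.
struct Point { int x, y; };

// A template vertex P1 matches a cluster vertex P2 when P2 equals P1
// translated by the anchor O, i.e. P2 = P1 + O (integer grid coordinates).
bool vertex_match(const Point& O, const Point& P1, const Point& P2)
{
    return P2.x == (P1.x + O.x) && P2.y == (P1.y + O.y);
}
```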
/** * tag_Vertices()
* Sets the 'TAG' attribute of the nodes in the vector. This
* way those nodes in the graph are not processed any more.
*
* Note: Only tag those nodes which have edges between them.
* Note: 'node_vect' is a list of nodes that have matched the template.
*/ int tag_Vertices( vector<node> node_vect, node_map<NodeAttribute>& graph_attrib,
TEMPLATE_TYPE T, int N )
{ int i; int vect_size = node_vect.size(); if( complete_SetOfEdges(node_vect, graph_attrib, T) )
{ for(i=0; i < vect_size; i++)
{
(graph_attrib[node_vect[i]]).setTag(); (graph_attrib[node_vect[i]]).appendTNP( T, N ); } } else return(0); return(1); }
/** * complete_SetOfEdges()
* Used to verify that a cluster of nodes are joined
* with the proper number of edges together.
*/ bool complete_SetOfEdges( vector<node> node_vect, node_map<NodeAttribute>& graph_attrib, TEMPLATE_TYPE T )
{ int i, j, cntr; int vect_size = node_vect.size(); int NUM_EDGES;
CvPoint P1, P2; node :: out_edges_iterator ei_i; node :: out_edges_iterator ei_f; cntr = 0; for( i=0; i < vect_size; i++ )
{ int outdeg = (node_vect[i]).outdeg(); if( outdeg < 2 ) break; else
{ ei_i = (node_vect[i]).out_edges_begin(); ei_f = (node_vect[i]).out_edges_end(); while( ei_i != ei_f )
{ node target = ei_i->target();
P1 = (graph_attrib[target]).getPosition(); for( j=0; j < vect_size; j++ )
{
P2 = (graph_attrib[node_vect[j]]).getPosition(); if( P1.x == P2.x && P1.y == P2.y ) cntr++;
} ei_i++;
} } } switch( T )
{ case TRIANGL:
NUM_EDGES = 6; break; case ROOF:
NUM_EDGES = 10; break; case SQR:
NUM_EDGES = 12; break; case DIAMOND:
NUM_EDGES = 16; break; case TIE:
NUM_EDGES = 20; break; case RHOMBUS1: case RHOMBUS2:
NUM_EDGES = 8; break;
}; if( cntr == NUM_EDGES ) return TRUE; else return FALSE;
}
/**
* rotate_Template()
*
* Using the rotation matrix:
* A = [ 0 -1
*       1  0 ]
* it rotates the vertices of the graph 'G' with attributes
* as defined in 'GA'. */ void rotate_Template( graph& G, node_map <NodeAttribute>& GA )
{
CvPoint P_old; graph :: node_iterator it = G.nodes_begin(); graph :: node_iterator end = G.nodes_end();
// iterators for iterating over graph vertices while( it != end )
{
P_old = (GA[*it]).getPosition();
(GA[*it]).rotateVertex(); it++; } }
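The rotation matrix in the header comment maps a vertex (x, y) to (-y, x), a 90-degree rotation about the origin; four applications are the identity, which is why the matching loops above rotate at most four times. A minimal sketch of that per-vertex step (the `rotate90` helper and `Point` struct are illustrative, not from the listing):

```cpp
#include <cassert>

// Minimal stand-in for OpenCV's CvPoint.
struct Point { int x, y; };

// One application of the rotation matrix A = [0 -1; 1 0], i.e. a
// 90-degree rotation about the origin: (x, y) -> (-y, x).
Point rotate90(const Point& p)
{
    Point q;
    q.x = -p.y;
    q.y = p.x;
    return q;
}
```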
/**
* shift_TriangularTemplateVertices()
* For a triangular template graph changes the origin of the triangle
* each time it is called. It does this by adding (-1,0) to the
* coordinates of each vertex of the graph if N=1 and (1,-1) if
* N=2. The caller should note the following:
* The triangular template should be like the following when N=1:
*
* (0,0) *---* (1,0)
*       |  /
*       | /
*       |/
*       * (0,1)
* Using the function with N=2 should only happen after calling the
* function with the above template and N=1 first.
*/ void shift_TriangularTemplateVertices( graph& T, node_map <NodeAttribute>& TA, int N )
{ if( N > 3 )
{ cerr << "Error in calling function: shift_TriangularTemplateVertices"; exit(EXIT_FAILURE); } graph :: node_iterator it = T.nodes_begin(); graph :: node_iterator end = T.nodes_end();
// iterators for iterating over graph vertices
NodeAttribute na; CvPoint P; CvPoint P_old; switch( N )
{ case 1: while( it != end )
{
P_old = ( TA[*it] ).getPosition(); // current position of the vertex if( P_old.x == 1 && P_old.y == 0 )
{
P.x = -1; P.y = 1; na.set(P, 0, 0);
TA[*it] = na;
} else if( P_old.x == 0 && P_old.y == 1 )
{
P.x = -1; P.y = 0; na.set(P, 0, 0);
TA[*it] = na;
} it++;
} break; case 2: while( it != end )
{
P_old = ( TA[*it] ).getPosition(); // current position of the vertex if( P_old.x == -1 && P_old.y == 1 )
{
P.x = 0; P.y = -1; na.set(P, 0, 0);
TA[*it] = na;
} else if( P_old.x == -1 && P_old.y == 0 )
{
P.x = 1; P.y = -1; na.set(P, 0, 0); TA[*it] = na;
} it++;
} break; case 3: while( it != end )
{
P_old = ( TA[*it] ).getPosition(); // current position of the vertex if( P_old.x == 0 && P_old.y == -1 )
{
P.x = 1;
P.y = 0; na.set(P, 0, 0);
TA[*it] = na;
} else if( P_old.x == 1 && P_old.y == -1 )
{
P.x = 0;
P.y = 1; na.set(P, 0, 0);
TA[*it] = na;
} it++;
} break;
};
}
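The three shift cases compose to the identity on the triangle's non-origin vertices, which is what the caller relies on when it runs the remaining shifts to restore the template. A stand-alone sketch of the per-vertex moves (the `Point` struct and `shift` helper are illustrative stand-ins for the graph-based version above):

```cpp
#include <cassert>

// Minimal stand-in for OpenCV's CvPoint.
struct Point { int x, y; };

// Re-implementation of the per-vertex coordinate moves for N = 1, 2, 3
// from shift_TriangularTemplateVertices; the origin vertex (0,0) is
// left fixed, exactly as in the listing.
Point shift(const Point& p, int n)
{
    Point q = p;
    if (n == 1) {
        if (p.x == 1 && p.y == 0) { q.x = -1; q.y = 1; }
        else if (p.x == 0 && p.y == 1) { q.x = -1; q.y = 0; }
    } else if (n == 2) {
        if (p.x == -1 && p.y == 1) { q.x = 0; q.y = -1; }
        else if (p.x == -1 && p.y == 0) { q.x = 1; q.y = -1; }
    } else if (n == 3) {
        if (p.x == 0 && p.y == -1) { q.x = 1; q.y = 0; }
        else if (p.x == 1 && p.y == -1) { q.x = 0; q.y = 1; }
    }
    return q;
}
```

Applying N=1, then N=2, then N=3 returns every vertex to its starting coordinates.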
/**
* simplify_BranchVertices()
*
* For a graph 'G' with attributes 'GA' in which the branch vertices
* are already tagged and labeled, this function eliminates some
* edges between vertices of type 'type'. */ void simplify_BranchVertices( graph& G, node_map<NodeAttribute>& GA, TEMPLATE_TYPE type )
{ vector <node> cluster;
// holds the cluster of nodes around the current node with the same type
TYPE_NUMBER_PAIR tnp1, tnp2;
// type-number pair for the current vertex int BVC_count;
// the sequence of the branch vertex cluster graph :: node_iterator V_it = G.nodes_begin(); graph :: node_iterator V_end = G.nodes_end();
// iterators for iterating over the vertices of the graph node :: out_edges_iterator edge_it; node :: out_edges_iterator edge_end;
// iterators for iterating over all the edges going out of a node while( V_end != V_it )
// iterate over all the nodes of the graph 'G'
{ if( (GA[*V_it]).Tag() )
// iterate through vertices that are already tagged
{
(GA[*V_it]).getLastTNP(tnp1);
BVC_count = tnp1.PN; if( tnp1.PT == type )
// only consider vertices with the same type as the input
{ #ifdef DEBUG if(RHOMBUS1==type || RHOMBUS2==type) printf("\tType matches.\n"); #endif edge_it = (*V_it).out_edges_begin(); edge_end = (*V_it).out_edges_end();
// iterators for going over all the edges out of this vertex cluster.clear(); cluster.insert( cluster.begin() + cluster.size(), (*V_it)); while(edge_it != edge_end)
{ node n_tmp = edge_it->target();
(GA[n_tmp]).getLastTNP(tnp2); if( (GA[n_tmp]).Tag() )
{ if( tnp2.PT == type ) cluster.insert(cluster.begin()+cluster.size(), n_tmp);
} edge_it++; } // get neighbors which are of same type and BVC number switch(type)
{ case TRIANGL: if( cluster.size() < 3 ) break; else delete_ComplexEdges(cluster, G, GA, type); break; case ROOF: case SQR: if( cluster.size() < 4 ) break; else delete_ComplexEdges(cluster, G, GA, type); break; case DIAMOND: if( cluster.size() < 5 ) break; else delete_ComplexEdges(cluster, G, GA, type); break; case TIE: if( cluster.size() < 7 ) break; else delete_ComplexEdges(cluster, G, GA, type); break;
/* case RHOMBUS1: case RHOMBUS2: printf("Cluster size: %d\n", cluster.size()); if( cluster.size() < 4 ) break; else delete_ComplexEdges( cluster, G, GA, type ); break;
*/
}; // only clean neighborhood when there are enough nodes
} }
V_it++; }
}
/**
* delete_ComplexEdges()
*
* For the vector of neighbor nodes 'VN' with attributes as defined
* in the node_map 'A' and of type 'T', this function based on the
* type of the neighborhood deletes appropriate edges to relax the
* complexity of the neighborhood. */ void delete_ComplexEdges( vector <node>& VN, graph& G, node_map<NodeAttribute>& A,
TEMPLATE_TYPE T )
{ bool edge_FLAG = FALSE; int i, j; vector <node> V; vector <int> ind; CvPoint P1, P2, P3; node :: out_edges_iterator edge_it; node :: out_edges_iterator edge_end; switch( T )
{ case TRIANGL:
P1 = (A[VN[0]]).getPosition(); for( i=1; i < VN.size(); i++ ) {
P2 = (A[VN[i]]).getPosition(); if( P2.x != P1.x && P2.y != P1.y )
{ delete_Edge( VN[0], VN[i], G, A ); edge_FLAG = TRUE; }
} if( edge_FLAG )
{ for( i=0; i < VN.size(); i++ )
{
(A[VN[i]]).eraseLastTNP(); if( (A[VN[i]]).sizeTNP() == 0 )
(A[VN[i]]).resetTag(); } }
break; case ROOF:
P1 = (A[VN[0]]).getPosition(); for( i=1; i < VN.size(); i++ ) {
P2 = (A[VN[i]]).getPosition(); if( P2.x != P1.x && P2.y != P1.y )
{ delete_Edge( VN[0], VN[i], G, A ); edge_FLAG = TRUE; } } if(edge_FLAG)
{ for( i=0; i < VN.size(); i++ )
{
(A[VN[i]]).eraseLastTNP(); if( (A[VN[i]]).sizeTNP() == 0 )
(A[VN[i]]).resetTag(); } } break; case DIAMOND:
// delete edges on the perimeter of DIAMOND delete_Edge(VN[1], VN[2], G, A); delete_Edge(VN[2], VN[3], G, A); delete_Edge(VN[3], VN[4], G, A); delete_Edge(VN[4], VN[1], G, A); for( i=0; i < VN.size(); i++ ) { (A[VN[i]]).eraseLastTNP(); if( (A[VN[i]]).sizeTNP() == 0) (A[VN[i]]).resetTag();
} break; case SQR:
P1 = (A[VN[0]]).getPosition();
V.insert( V.begin() + V.size(), VN[0] ); for( i=1; i < VN.size(); i++ )
{
P2 = (A[VN[i]]).getPosition(); if( P2.x == P1.x || P2.y == P1.y )
V.insert( V.begin() + V.size(), VN[i] ); } for( i=0; i < V.size(); i++ )
{ for( j=i+1; j < V.size(); j++ ) delete_Edge( V[i], V[j], G, A ); } for( i=0; i < VN.size(); i++ )
{
(A[VN[i]]).eraseLastTNP(); if( (A[VN[i]]).sizeTNP() == 0 )
(A[VN[i]]).resetTag(); } break; case TIE:
P1 = (A[VN[0]]).getPosition();
// Get the 2 nodes which form the middle // axis of the tie together with the center for(i=1; i < VN.size(); i++)
{
P2 = (A[VN[i]]).getPosition(); if (P2.x == P1.x || P2.y == P1.y) { ind.insert(ind.begin() + ind.size(), i); } }
// For each of the nodes found above find the 3 neighbor // nodes and delete their edges. Delete the node itself last for (i=0; i<ind.size(); i++)
{
P1 = (A[VN[ind[i]]]).getPosition(); for (j=0; j<VN.size(); j++)
{ if (j != ind[i]) { P2 = (A[VN[j]]).getPosition(); if (P2.x == P1.x || P2.y == P1.y) { delete_Edge(VN[j], VN[ind[i]], G, A); } }
} }
for( i=0; i < VN.size(); i++ )
{ (A[VN[i]]).eraseLastTNP(); if( (A[VN[i]]).sizeTNP() == 0 )
(A[VN[i]]).resetTag(); } break; case RHOMBUS1: case RHOMBUS2: delete_Edge( VN[0], VN[1], G, A ); for( i=0; i < VN.size(); i++ )
{ (A[VN[i]]).eraseLastTNP(); if( (A[VN[i]]).sizeTNP() == 0 ) (A[VN[i]]).resetTag();
} break;
};
}
/**
* delete_Edge()
*
* In graph 'G' deletes the edge between nodes 'N1' and 'N2'.
*/ void delete_Edge( node N1, node N2, graph& G, node_map < NodeAttribute >& A )
{ vector <edge> EV; node source, target; graph :: edge_iterator e_it = G.edges_begin(); graph :: edge_iterator e_end = G.edges_end(); while( e_it != e_end )
{ edge E = *e_it; source = e_it->source(); target = e_it->target(); if( (source == N1 && target == N2) || (source == N2 && target == N1) )
EV.insert( EV.begin() + EV.size(), *e_it ); e_it++; } // getting the edges that are to be deleted for( int i=0; i < EV.size(); i++ ) G.del_edge( EV[i] ); // deleting the edges }
/** * delete_IsolatedVertices()
* Deletes the ISOLATED vertices of the graph.
*/ void delete_IsolatedVertices( graph& G, node_map <NodeAttribute>& A )
{
NodeType T; vector <node> NV; graph :: node_iterator it = G.nodes_begin(); graph :: node_iterator end = G.nodes_end(); node :: adj_nodes_iterator ani_b; while( it != end )
{
T = (A[*it]).getType(); if( T == ISOLATED )
NV.insert( NV.begin() + NV.size(), *it ); else if( T == TERMINAL )
{ ani_b = (*it).adj_nodes_begin(); if( (A[*ani_b]).getType() == TERMINAL )
NV.insert( NV.begin() + NV.size(), *it );
} ++it;
} for( int i=0; i < NV.size(); i++ )
G.del_node( NV[i] ); }
/**
* save_SubGraph()
* This function is used to output a sub-graph to GML format.
* Each node in the output has the coordinate information.
* We could extend the 'graph' class and overwrite its 'save'
* method as well. */ void save_SubGraph(graph& G, node_map <NodeAttribute>& A, const char* F, int O)
{
// opening the file for writing the graph to disk ofstream out_file( F, ios::out ); if( !out_file )
{ cerr << "sub_graph:save_SubGraph(): Can not open file for output" << endl; exit(EXIT_FAILURE); } out_file << "graph [" << endl; out_file << "directed "; if( G.is_directed() ) out_file << "1" << endl; else out_file << "0" << endl; int id, x, y, type, level;
CvPoint P; graph :: node_iterator n_it = G.nodes_begin(); graph :: node_iterator n_end = G.nodes_end();
//n_end--; while (n_it != n_end)
{
P = (A[*n_it]).getPosition(); id = (A[*n_it]).getIndex(); level = (A[*n_it]).getLevel(); out_file << "node [" << endl; out_file << "id " << id << endl; if( O != 0 ) out_file << "label \"" << id << "\"" << endl; out_file << "graphics [" << endl; out_file << "center [" << endl; out_file << "x " << P.x << endl; out_file << "y " << P.y << endl; out_file << "z " << level << endl; out_file << "] " << endl;
// closing bracket for center out_file << "width " << 0.1 << endl; out_file << "height " << 0.1 << endl; out_file << "depth " << 0.1 << endl; out_file << "] " << endl;
// closing bracket for graphics out_file << "vgj [" << endl; out_file << "labelPosition \"below\"" << endl; out_file << "shape \"Oval\"" << endl; out_file << "] " << endl;
// closing bracket for vgj out_file << "]" << endl;
// closing bracket for node n_it++; } int source, target; graph :: edge_iterator e_it = G.edges_begin(); graph :: edge_iterator e_end = G.edges_end(); while (e_it != e_end)
{ source = A[e_it->source()].getIndex(); target = A[e_it->target()].getIndex(); out_file << "edge [" << endl; out_file << "linestyle \"solid\"" << endl; out_file << "source " << source << endl; out_file << "target " << target << endl; out_file << "]" << endl;
// closing bracket for edge e_it++;
} out_file << "] " << endl;
// closing bracket for graph out_file.close(); }
/**
* add_GraphToImage()
*
* Turns a graph into an image and superimposes it on a given image
* I: given image
* G: graph
* A: node attributes of the graph
* F: name of file to output to */ void add_GraphToImage(IplImage* I, graph& G, node_map<NodeAttribute>& A, const char* F)
{
CvPoint P, P1, P2; int rad = 1; graph :: node_iterator n_it = G.nodes_begin(); graph :: node_iterator n_end = G.nodes_end(); while (n_it != n_end)
{
// get location of the node P = (A[*n_it]).getPosition(); P.x *= 2; P.y *= 2;
// add node to image cvCircle(I, P, rad, CV_RGB(255,255,255), -1); n_it++; } int source, target; graph :: edge_iterator e_it = G.edges_begin(); graph :: edge_iterator e_end = G.edges_end(); while (e_it != e_end)
{
// Get the two ends of an edge P1 = A[e_it->source()].getPosition(); P1.x *= 2; P1.y *= 2;
P2 = A[e_it->target()].getPosition(); P2.x *= 2; P2.y *= 2;
// Add edge to image cvLine(I, P1, P2, rad, CV_RGB(255,255,255)); e_it++; } writeIPL2IMG(I, F); }
/****************************************************************************
* sub_graph.h
* Written by Shahram Ebadollahi
* Digital Video|Multimedia Group
* Columbia University
****************************************************************************/
#ifndef SUB_GRAPH_H
#define SUB_GRAPH_H
#include "node_attrib.h" #include "gsat_util.h" #include "../util/olist.h"
//
// Class LUT is used for speeding up
// the access of a graph's vertices.
// class LUT { public:
LUT();
~LUT(); void setsize(int, int); void putNode(CvPoint, node); bool getNode(CvPoint, node&); bool getNode(int, int, node&); private: int columns; int rows; node** node_array; bool** used_flag; };
void sub_graph( IplImage*, IplImage*, graph&, node_map<NodeAttribute>&, int ); void connected_subgraph( node*, int&, IplImage*, IplImage*, graph&, bool*, node_map<NodeAttribute>&, int );
//OrderedList< CvPoint >* find_neighbors( CvPoint*, IplImage*, bool* ); void save_SubGraph(graph&, node_map<NodeAttribute>&, const char*, int i=0); void create_TemplateGraph( graph&, node_map <NodeAttribute>&, TEMPLATE_TYPE ); void simplify_BranchVertexClusters( graph&, node_map <NodeAttribute>& ); void simplify_BranchVertexClusters_fine(graph& G, node_map<NodeAttribute>& GA); void match_TemplateToGraph( vector<node>, graph&, node_map <NodeAttribute>&, graph&, node_map <NodeAttribute>&, TEMPLATE_TYPE, int& ); void mark_BranchVertices( graph&, node_map <NodeAttribute>&, graph&, node_map <NodeAttribute>&, TEMPLATE_TYPE ); bool vertex_Match( CvPoint&, CvPoint&, CvPoint& ); int tag_Vertices( vector <node>, node_map<NodeAttribute>&, TEMPLATE_TYPE, int ); void rotate_Template( graph&, node_map <NodeAttribute>& ); void shift_TriangularTemplateVertices( graph&, node_map <NodeAttribute>&, int ); void simplify_BranchVertices( graph&, node_map<NodeAttribute>&, TEMPLATE_TYPE ); void delete_ComplexEdges(vector<node>&, graph&, node_map<NodeAttribute>&, TEMPLATE_TYPE); void delete_Edge( node, node, graph&, node_map <NodeAttribute>& ); vector< CvPoint >* find_neighbors( CvPoint, IplImage*, bool* ); void delete_IsolatedVertices( graph&, node_map <NodeAttribute>& ); bool edge_Exists( node, node, node_map <NodeAttribute>& ); bool complete_SetOfEdges(vector<node>, node_map<NodeAttribute>&, TEMPLATE_TYPE); void add_GraphToImage(IplImage*, graph&, node_map<NodeAttribute>&, const char*);
#endif
/****************************************************************************
* surface_path.cc
* Definition of the interface of the class 'SFP' which is defined in
* "surface_path.h".
*
* Written by Shahram Ebadollahi
* Digital Video|Multimedia Group
* Columbia University
****************************************************************************/
#include "surface_path.h" //#define DEBUG
SFP :: SFP()
{
_NOV = 0;
_I = 0;
//_I = totalSFPs;
//totalSFPs++;
// Using undirected graph only _SFPG.make_undirected();
// Associating attributes to the graph CvPoint p0; p0.x = 0; p0.y = 0;
SFP_Attribute sfpa0(p0, 0, 0, 0);
_SFPA = node_map <SFP_Attribute> (_SFPG, sfpa0);
_Nmin = 0;
_Nmax = 0;
_Ignore = FALSE; }
SFP :: SFP(NodeAttribute& NA, vector<CvPoint> adjacent, int I) {
_I = I;
// Using undirected graph only _SFPG.make_undirected();
// Create a new node in the graph node new_node = _SFPG.new_node();
// Associating attributes to the graph CvPoint p0; p0.x = 0; p0.y = 0;
SFP_Attribute sfpa0(p0, 0, 0, 0);
_SFPA = node_map <SFP_Attribute> (_SFPG, sfpa0);
CvPoint p = NA.getPosition(); int s = NA.getNum_discs(); int d = NA.getImg_val(); int n = NA.getLevel(); SFP_Attribute sfpa(p, s, d, n); for (int i=0; i < adjacent.size(); i++) sfpa.addNeighbor(adjacent[i]); sfpa.addToOL(p); _SFPA[new_node] = sfpa;
_Nmin = n; _Nmax = n;
_NOV = 1; _Ignore = FALSE; } void SFP :: Set(NodeAttribute NA, vector <CvPoint> adjacent, int I)
{
// Create a new node in the graph node new_node = _SFPG.new_node();
CvPoint p = NA.getPosition(); int s = NA.getNum_discs(); int d = NA.getImg_val(); int n = NA.getLevel(); SFP_Attribute sfpa(p, s, d, n); for (int i=0; i < adjacent.size(); i++) sfpa.addNeighbor(adjacent[i]);
_SFPA[new_node] = sfpa;
_Nmin = n; _Nmax = n;
_I = I; _NOV = 1; } void SFP :: addVertex(NodeAttribute& NA, vector <CvPoint> adjacent)
{ //
// create a new node in the surface graph
// and assign its attributes to it.
// node new_node = _SFPG.new_node();
CvPoint p = NA.getPosition(); int s = NA.getNum_discs(); int d = NA.getImg_val(); int n = NA.getLevel(); SFP_Attribute sfpa(p, s, d, n); for (int i=0; i < adjacent.size(); i++) sfpa.addNeighbor(adjacent[i]); sfpa.addToOL(p); _SFPA[new_node] = sfpa;
_Nmax = n; _NOV++;
//
// Find the node in the surface graph
// which has level 'n-1' and connect this
// node to the newly created node with
// an edge.
// graph :: node_iterator nib = _SFPG.nodes_begin(); graph :: node_iterator nie = _SFPG.nodes_end(); node previous_node; int level = n - 1; while (nib != nie) { if ( (_SFPA[*nib]).getN() == level ) { previous_node = *nib;
_SFPG.new_edge(previous_node, new_node);
} ++nib;
} }
/**
* appendVertex()
* Appends a new vertex to the Surface Path graph
* at the specified level 'L'. */ void SFP :: appendVertex(SFP_Attribute& A, int L)
{ //
// create a new node in the surface graph
// and assign its attributes to it.
// node new_node = _SFPG.new_node();
CvPoint p = A.getPosition(); int s = A.getS(); int d = A.getD(); int n = A.getN(); SFP_Attribute sfpa(p, s, d, n);
vector <CvPoint> pl = A.getAllNeighbors(); for (int i=0; i < pl.size(); i++) sfpa.addNeighbor(pl[i]); sfpa.addToOL(p); _SFPA[new_node] = sfpa; //
// Find the node in the surface graph
// which has level 'n-1' and connect this
// node to the newly created node with
// an edge.
// graph :: node_iterator nib = _SFPG.nodes_begin(); graph :: node_iterator nie = _SFPG.nodes_end(); node previous_node; while (nib != nie) { if ( (_SFPA[*nib]).getN() == L ) { previous_node = *nib;
_SFPG.new_edge(previous_node, new_node);
} ++nib;
}
_NOV++; }
/** * vtxExists()
* Does the vertex exist in the surface path?
*/ bool SFP :: vtxExists(CvPoint p, int i)
{ int n0;
CvPoint p0; if (_Nmax != i) return FALSE; else { graph :: node_iterator nib = _SFPG.nodes_begin(); graph :: node_iterator nie = _SFPG.nodes_end(); while (nib != nie) { p0 = _SFPA[*nib].getPosition(); n0 = _SFPA[*nib].getN(); if (n0 != i) {
++nib; continue; } else { if (p.x == p0.x && p.y == p0.y) return TRUE; else ++nib; } } return FALSE; } }
/**
* dumpInfo()
*
* Writes the information about the nodes
* of the surface path to the disk.
*/ void SFP :: dumpInfo()
{ int n; int s, d, g;
CvPoint p; graph :: node_iterator nib = _SFPG.nodes_begin(); graph :: node_iterator nie = _SFPG.nodes_end(); while (nib != nie) { p = _SFPA[*nib].getPosition(); n = _SFPA[*nib].getN(); s = _SFPA[*nib].getS(); d = _SFPA[*nib].getD(); printf("Size:%d, Position:(%d,%d), NMD:%d, D:%d\n", n, p.x, p.y, s, d); ++nib; } printf("Info about SFP collectively:\n"); printf("\tSFP Index: %d\n", getIndex()); printf("\tNumber of Vertices: %d\n", getNOV()); printf("\tMin Level: %d\n", MinLevel()); printf("\tMax Level: %d\n", MaxLevel()); }
/**
* method 'setMDNumber()'
* Sets the number of gray-levels associated
* with the maximal disc size at each node in
* the surface path.
* This is used for path segmentation and
* finding the average values of path attributes.
* It finds it according to:
* G = D_{n+1} - D_n if n_min <= n < n_max
* G = S_n if n = n_max */ void SFP :: setMDNumber(void)
{ int n_current, n_next; int S_max, D_n1, D_n2; graph :: node_iterator nib1 = _SFPG.nodes_begin(); graph :: node_iterator nie1 = _SFPG.nodes_end(); graph :: node_iterator nib2; graph :: node_iterator nie2;
// Iterate through all nodes in SFP graph while (nib1 != nie1) { n_current = getMDSize(*nib1); if (n_current == _Nmax) { S_max = getMDS(*nib1); _SFPA[*nib1].setG(S_max); #ifdef DEBUG printf("G=%d\n", S_max); #endif } else { nib2 = _SFPG.nodes_begin(); nie2 = _SFPG.nodes_end(); while (nib2 != nie2) { n_next = getMDSize(*nib2); if (n_next == (n_current + 1)) { D_n1 = getMDD(*nib1); D_n2 = getMDD(*nib2);
_SFPA[*nib1].setG(D_n2 - D_n1); #ifdef DEBUG printf("G=%d\n", D_n2 - D_n1); #endif break;
} ++nib2;
}
} ++nib1;
} }
/**
* Finds the average size of the maximal
* discs in the surface path.
*/ int SFP :: getAverageMDSize()
{ int totalGrayLevelSpan = 0; int avgMDsize = 0; int graylevel_span; graph :: node_iterator itb = _SFPG.nodes_begin(); graph :: node_iterator ite = _SFPG.nodes_end(); while (itb != ite) { graylevel_span = _SFPA[*itb].getG(); totalGrayLevelSpan += graylevel_span; avgMDsize += (graylevel_span * _SFPA[*itb].getN());
++itb; } if (totalGrayLevelSpan != 0) return ((int)((float)avgMDsize / (float)totalGrayLevelSpan)); else return (0); }
/**
* Returns the gray level value of the
* bottom of the surface path.
*/ int SFP :: getMinGrayLevel()
{ graph :: node_iterator itb = _SFPG.nodes_begin(); graph :: node_iterator ite = _SFPG.nodes_end(); while (itb != ite) { if ( _SFPA[*itb].getN() == _Nmin ) return ( _SFPA[*itb].getD() + 1 );
++itb; } return (0); }
/**
* Finds the total number of gray levels
* that the surface path spans. */ int SFP :: getDeltaGrayLevel()
{ int totalGrayLevelSpan = 0; graph :: node_iterator itb = _SFPG.nodes_begin(); graph :: node_iterator ite = _SFPG.nodes_end(); while (itb != ite) { totalGrayLevelSpan += _SFPA[*itb].getG();
++itb; } return (totalGrayLevelSpan); }
/**
* Finds the average spatial position of the surface path
* p_avg = SUM(p * g_n) /SUM(g_n) */ void SFP : : getAveragePosition(CvPointk avgPosition)
{
    int totalGrayLevelSpan = 0;
    int graylevel_span;
    CvPoint p;
    avgPosition.x = 0;
    avgPosition.y = 0;
    graph::node_iterator itb = _SFPG.nodes_begin();
    graph::node_iterator ite = _SFPG.nodes_end();
    while (itb != ite) {
        graylevel_span = _SFPA[*itb].getG();
        totalGrayLevelSpan += graylevel_span;
        p = _SFPA[*itb].getPosition();
        avgPosition.x += p.x * graylevel_span;
        avgPosition.y += p.y * graylevel_span;
        ++itb;
    }
    if (totalGrayLevelSpan != 0) {
        avgPosition.x /= totalGrayLevelSpan;
        avgPosition.y /= totalGrayLevelSpan;
    } else {
        avgPosition.x = 0;
        avgPosition.y = 0;
    }
}

bool matchPointToList ( vector <CvPoint> V,
CvPoint P )
{ int i; for (i=0; i<V.size() ; i++) { if ( (V[i]) .x == P.x && (V[i]) .y == P.y ) return TRUE;
} return FALSE;
}
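The gray-level-weighted centroid that getAveragePosition() computes above (p_avg = SUM(p * g_n) / SUM(g_n)) can be sketched independently of the GTL types. This Python sketch uses plain (x, y, graylevel_span) tuples as hypothetical stand-ins for each node's position and getG() value; it is an illustration, not the patent's implementation:

```python
def average_position(nodes):
    """Gray-level-weighted centroid of a surface path.

    `nodes` is a hypothetical list of (x, y, graylevel_span) tuples
    standing in for each node's position and gray-level span.
    """
    total_span = sum(g for _, _, g in nodes)
    if total_span == 0:
        return (0, 0)
    # Integer division mirrors the C++ int arithmetic.
    x = sum(px * g for px, _, g in nodes) // total_span
    y = sum(py * g for _, py, g in nodes) // total_span
    return (x, y)

# Two nodes; the second spans twice as many gray levels, so it
# pulls the average position twice as hard.
print(average_position([(10, 20, 1), (40, 50, 2)]))  # (30, 40)
```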
/**
 * mergeSFPs_Vertically()
 * Merges SFPs with the same average location.
 */ void mergeSFPs_Vertically (SFP& sfpl, SFP& sfp2)
{
CvPoint pi, p2, p_avg;
int nl, n2, n_avg, n_cntr;
unsigned char si, s2, s_avg;
unsigned char dl, d2, d_avg;
node Nl, N2;
// Iterators for going over the nodes of surface paths.
graph::node_iterator itbl;
graph::node_iterator itel;
graph::node_iterator itb2;
graph::node_iterator ite2;
//
// Based on the coplanarity of min and max
// disc of the two SFPs take different actions.
//
int minLl = sfpl._Nmin;
int maxLl = sfpl._Nmax;
int minL2 = sfp2._Nmin;
int maxL2 = sfp2._Nmax;
if (maxLl <= minL2) {
    if (maxLl == minL2) {
/ /
// Average out the corresponding nodes of
// the two surface paths.
//
itbl = sfpl._SFPG.nodes_begin();
itel = sfpl._SFPG.nodes_end();
while (itbl != itel) {
    nl = sfpl.getMDSize (*itbl);
    if (nl == maxLl) {
        Nl = *itbl;
        break;
}
++itbl;
}
itb2 = sfp2.getGraph().nodes_begin();
ite2 = sfp2.getGraph().nodes_end();
while (itb2 != ite2) {
    n2 = sfp2.getMDSize (*itb2);
    if (n2 == minL2) {
        N2 = *itb2;
        break;
}
++itb2;
}
// Get the attributes of the nodes.
pi = sfpl.getMDPosition (Nl);
si = sfpl.getMDS (Nl);
dl = sfpl.getMDD (Nl);
p2 = sfp2.getMDPosition (N2);
s2 = sfp2.getMDS (N2);
d2 = sfp2.getMDD (N2);
// Average out node attributes
p_avg.x = (pi.x + p2.x) / 2;
p_avg.y = (pi.y + p2.y) / 2;
s_avg = (si + s2) / 2;
d_avg = (dl + d2) / 2;
n_avg = (nl + n2) / 2;
// Replace node attributes with the averages sfpl._SFPA[Nl] .set (p_avg, s_avg, d_avg, n_avg) ;
// Add adjacent vertices of second sp to first one sfpl._SFPA[Nl] .appendToAdj (sfp2._SFPA[N2] .getAllNeighbors () ) ;
// Add the original location of second
// node to the _OL member of first node.
sfpl._SFPA[Nl].appendToOL (p2);
}
//
// Add all nodes of SFP2 to SFPl.
//
while (sfp2._Nmax - sfpl._Nmax > 0) {
    itb2 = sfp2.getGraph().nodes_begin();
    ite2 = sfp2.getGraph().nodes_end();
    n_cntr = sfp2._Nmin;
    while (itb2 != ite2) {
        n2 = sfp2.getMDSize (*itb2);
        if (n2 == n_cntr) {
            sfpl.appendVertex (sfp2._SFPA[*itb2], sfpl._Nmax);
            sfpl._Nmax = n_cntr;
            n_cntr++;
        }
        ++itb2;
    }
}
} else {
}
}
/**
 * mergeSFPs_Horizontally()
* Is a friend of class <SFP>.
* It is used to merge attributes of corresponding
* nodes of two surface paths together.
*/ void mergeSFPs_Horizontally (SFP& sfpl, SFP& sfp2)
{
CvPoint pi, p2, p_avg;
int nl, n2, n_avg;
unsigned char si, s2, s_avg;
unsigned char dl, d2, d_avg;
// Iterators for going over the nodes of surface paths.
graph::node_iterator itbl;
graph::node_iterator itel;
graph::node_iterator itb2;
graph::node_iterator ite2;
/ /
// Average out the corresponding nodes of
// the two surface paths.
// itbl = sfpl._SFPG.nodes_begin() ; itel = sfpl._SFPG.nodes_end() ; while (itbl != itel) {
// Get the attributes of the current node in 1st SP. nl = sfpl.getMDSize (*itbl) ; pi = sfpl.getMDPosition (*itbl) ; si = sfpl.getMDS (*itbl) ; dl = sfpl.getMDD (*itbl) ; itb2 = sfp2.getGraph() .nodes_begin() ; ite2 = sfp2.getGraph() .nodes_end() ; while (itb2 != ite2) {
// Get the attributes of the current node in 2nd SP.
n2 = sfp2.getMDSize (*itb2);
p2 = sfp2.getMDPosition (*itb2);
s2 = sfp2.getMDS (*itb2);
d2 = sfp2.getMDD (*itb2);
// If the two vertices are co-planar
if (nl == n2) {
    //if (pi.x != p2.x || pi.y != p2.y) {
    // Average out node attributes
    p_avg.x = (pi.x + p2.x) / 2;
    p_avg.y = (pi.y + p2.y) / 2;
    s_avg = (si + s2) / 2;
    d_avg = (dl + d2) / 2;
    n_avg = (nl + n2) / 2;
    // Replace node attributes with the averages
    sfpl._SFPA[*itbl].set (p_avg, s_avg, d_avg, n_avg);
    // Add adjacent vertices of second sp to first one
    sfpl._SFPA[*itbl].appendToAdj (sfp2._SFPA[*itb2].getAllNeighbors ());
    // Add the original location of second
    // node to the _OL member of first node.
    sfpl._SFPA[*itbl].addToOL (p2);
}
++itb2;
}
++itbl;
}
//
// Add nodes in second sfp which are not
// present in the first sfp.
// if (sfpl._Nmin > sfp2._Nmin) {
// Keep adding till bottoms become the same.
while (sfpl._Nmin - sfp2._Nmin > 0) {
    // Find the node in sfp2 at level (Nmin_1 - 1)
    itb2 = sfp2.getGraph().nodes_begin();
    ite2 = sfp2.getGraph().nodes_end();
    while (itb2 != ite2) {
        n2 = sfp2.getMDSize (*itb2);
        if (n2 == (sfpl._Nmin - 1)) {
            sfpl.appendVertex (sfp2._SFPA[*itb2], sfpl._Nmin);
            sfpl._Nmin--;
        }
        ++itb2;
}
}
}
if (sfpl._Nmax < sfp2._Nmax) {
// Keep adding till tops become the same.
while (sfp2._Nmax - sfpl._Nmax > 0) {
    // Find the node in sfp2 at level (Nmax_1 + 1)
    itb2 = sfp2.getGraph().nodes_begin();
    ite2 = sfp2.getGraph().nodes_end();
    while (itb2 != ite2) {
        n2 = sfp2.getMDSize (*itb2);
        if (n2 == (sfpl._Nmax + 1)) {
            sfpl.appendVertex (sfp2._SFPA[*itb2], sfpl._Nmax);
            sfpl._Nmax++;
        }
        ++itb2;
    }
}
}
}
/**
* areAdjacent{)
* Finds out if two SFPs are connected
* at one of the subgraph levels.
*/ bool areAdjacent (SFP& spl, SFP& sp2)
{
// Case where there is no overlap of surface path stretch //if (spl._Nmin > sp2._Nmax | | sp2._Nmin > spl._Nmax) // return FALSE,-
//
// Compare the vertices in one sfp to the other. // For nodes at same level check if they are in
// each others neighborhood.
//
int nl, n2;
graph::node_iterator nibl;
graph::node_iterator niel;
graph::node_iterator nib2;
graph::node_iterator nie2;
nibl = spl._SFPG.nodes_begin();
niel = spl._SFPG.nodes_end();
while (nibl != niel) {
    nl = spl._SFPA[*nibl].getN();
    nib2 = sp2._SFPG.nodes_begin();
    nie2 = sp2._SFPG.nodes_end();
    while (nib2 != nie2) {
        n2 = sp2._SFPA[*nib2].getN();
        if (nl == n2) {
            if (checkNodeAdjacency (spl._SFPA[*nibl], sp2._SFPA[*nib2])) {
#ifdef DEBUG
                printf ("SFPs %d and %d are adjacent at disk size level %d\n",
                        spl.getIndex(), sp2.getIndex(), nl);
#endif
                return TRUE;
            }
        }
        ++nib2;
    }
    ++nibl;
}
return FALSE;
}

/***************************************************************************
 * surface_path.h
*
* Written by Shahram Ebadollahi
* Digital Video(Multimedia Group
* Columbia University
* ***************************************************************************/
#ifndef _SURFACE_PATH_H
#define _SURFACE_PATH_H

#include <cstring>
#include <cstdlib>
#include <GTL/graph.h>
#include <GTL/node.h>
#include <CV.h>
#include "sfp_attrib.h"
/**
* class <SFP>
* This class is a template for expressing
* the Surface Paths with their associated
* parameters.
*/ class SFP {
    friend void mergeSFPs_Vertically (SFP&, SFP&);
    friend void mergeSFPs_Horizontally (SFP&, SFP&);
    friend bool areAdjacent (SFP&, SFP&);
public:
    SFP ();
    SFP (NodeAttribute&, vector<CvPoint>, int i=0);
    ~SFP () {};
    void Set (NodeAttribute, vector<CvPoint>, int i=0);
    void addVertex (NodeAttribute&, vector<CvPoint>);
    void appendVertex (SFP_Attribute&, int);
    bool vtxExists (CvPoint, int); // Does node exist in the SFP?
    int getNOV () { return _NOV; }
    graph& getGraph () { return _SFPG; }
    node_map<SFP_Attribute>& getNodeAttribs () { return _SFPA; }

    // Methods to access values of attributes
    void setMDNumber ();
    int getMDNumber (node n) { return _SFPA[n].getG(); }
    int getMDSize (node n) { return _SFPA[n].getN(); }
    CvPoint& getMDPosition (node n) { return _SFPA[n].getPosition(); }
    unsigned char getMDS (node n) { return _SFPA[n].getS(); }
    unsigned char getMDD (node n) { return _SFPA[n].getD(); }

    // These methods are used when building a GSAT graph
    int getAverageMDSize ();
    int getMinGrayLevel ();
    int getDeltaGrayLevel ();
    void getAveragePosition (CvPoint&);
    int MinLevel () { return _Nmin; } // Min and Max disc size
    int MaxLevel () { return _Nmax; } // in the surface path
    int getIndex () { return _I; }
    void setIndex (int i) { _I = i; }
    void Ignore () { _Ignore = TRUE; }
    void resetIgnore () { _Ignore = FALSE; }
    bool isIgnored () { return _Ignore; }
    void dumpInfo ();
private:
    graph _SFPG;                     // graph of SFP
    node_map <SFP_Attribute> _SFPA;  // associated attributes
    int _I;                          // index of the path
    int _NOV;                        // Number Of Vertices in SFP
    int _Nmin;                       // min and max sizes of maximal discs in SFP
    int _Nmax;
    bool _Ignore; // if set means that this sfp will be ignored
};

bool matchPointToList (vector <CvPoint>, CvPoint);
void mergeSFPs_Horizontally (SFP&, SFP&);
void mergeSFPs_Vertically (SFP&, SFP&);
bool areAdjacent (SFP&, SFP&);
#endif
#include <algorithm>
#include <vector>
#include " tree . h"
#include "gsat_graph.h"

treeNode :: ~treeNode ()
{
    if (_upPtr != NULL)
        delete _upPtr;
    if (_downPtr.size() != 0)
    {
        for (int i=0; i < _downPtr.size(); i++)
        {
            if (_downPtr[i] != NULL)
                delete _downPtr[i];
        }
        _downPtr.clear();
    }
}
/**
* Constructor of class 'treeGraph' .
*/ treeGraph :: treeGraph( node& initial_vertex, graph& G, node_map<NodeAttribute>& A )
{
    int level = 0;
    _num_leaves = 0;
    _num_branches = 0;
    _graph = new treeNode (level);
    _graph->_upPtr = NULL;
    construct_TreeGraph (initial_vertex, initial_vertex, level, G, A, _graph);
}
/**
* This method finds the longest path of the tree
* and returns a vector of nodes of that path.
* It calls 'traverseTree() ' method recursively.
*/ void treeGraph : : get_LongestBranch ( vector <node>& LP, node_map<NodeAttribute>& A )
{
    int i;
    int path_cntr = 0;
    // We have one path for each TERMINAL node in the 2D graph.
    vector<node>* paths = new vector<node> [_num_branches];
    paths[path_cntr] = _graph->_node_list;
    traverseTree ( _graph, path_cntr, paths, A );
    int* sizes = new int [_num_branches];
    int* tmp = new int [_num_branches];
    int longest_path_size;
    int longest_path_index;
    for (i=0; i < _num_branches; i++)
    {
        sizes[i] = (paths[i]).size();
        tmp[i] = (paths[i]).size();
    }
    sort ( tmp, tmp + _num_branches );
    longest_path_size = tmp[_num_branches - 1];
    for (i=0; i < _num_branches; i++)
    {
        if ( sizes[i] == longest_path_size )
            longest_path_index = i;
    }
    for (int k=0; k < (paths[longest_path_index]).size(); k++)
        LP.insert ( LP.begin() + LP.size(), (paths[longest_path_index])[k] );
    delete [] tmp;
    delete [] sizes;
    delete [] paths;
}
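get_LongestBranch() above collects one node list per root-to-leaf branch and keeps the longest; the same idea can be sketched in Python on a hypothetical (node_list, children) tuple tree mirroring treeNode::_node_list and treeNode::_downPtr. This is an illustrative sketch, not the patent's implementation:

```python
def longest_branch(tree):
    """Longest root-to-leaf node sequence in a tree.

    `tree` is a hypothetical (node_list, children) pair: node_list is
    the run of vertices stored at one tree node, children a list of
    sub-trees (the down-ward pointer vector).
    """
    node_list, children = tree
    if not children:
        return list(node_list)
    # Duplicate the partial path at each branch point and keep the
    # longest completed descendant path.
    best = max((longest_branch(c) for c in children), key=len)
    return list(node_list) + best

t = (['a', 'b'], [(['c'], []), (['d', 'e'], [(['f'], [])])])
print(longest_branch(t))  # ['a', 'b', 'd', 'e', 'f']
```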
/ * *
* At each branch point in the tree it duplicates
* the previous partial path and concatenates the
 * nodes of the descendant nodes to the previous
* paths.
*/ void treeGraph :: traverseTree( treeNode* N, int& path_cntr, vector<node>* P, node_map<NodeAttribute>& A )
{
    int i, j, k;
    int vect_size;
    int size = (N->_downPtr).size();
    vector <node> tmp = P[path_cntr];
    if ( size != 0 )
    {
        for (i=0; i < size; i++)
        {
            path_cntr++;
            P[path_cntr] = tmp;
            // Get the partial number of nodes at current branch.
            vect_size = (((N->_downPtr)[i])->_node_list).size();
            for (j=0; j < vect_size; j++)
                (P[path_cntr]).insert ( (P[path_cntr]).begin() + (P[path_cntr]).size(),
                                        (((N->_downPtr)[i])->_node_list)[j] );
            traverseTree ( (N->_downPtr)[i], path_cntr, P, A );
        }
    }
}
/** * construct_TreeGraph
* Used by the constructer to iteratively build
* the tree graph.
*/ void treeGraph :: construct_TreeGraph( node& current_vertex, node& previous_vertex, int& level, graph& G, node_map <NodeAttribute>& A, treeNode* current_Node )
{ int num_neighbors; node vi_bvn;
// Iterators for outgoing edges of the branch vertex.
node::out_edges_iterator edge_it;
node::out_edges_iterator edge_end;
node this_v, last_v, next_v;
NodeType type;
this_v = current_vertex;
last_v = previous_vertex;
type = (A[this_v]).getType();
if ( (TERMINAL != type) || (this_v == last_v) )
{
__num_branches++; // Add a new branch.
// Continue traversing a path till you hit a BRANCH vertex.
while ( type != BRANCH && !(A[this_v]).Tag() )
{
    // Adding the node to the list.
    (current_Node->_node_list).insert ( (current_Node->_node_list).begin()
                                        + (current_Node->_node_list).size(), this_v );
    (A[this_v]).setTag();
    // Get the next_vertex to this_vertex.
    if ( !get_NextVertex( this_v, last_v, next_v, G, A ) )
        break;
    // What is the type of next_vertex?
    type = (A[next_v]).getType();
    last_v = this_v;
    this_v = next_v;
    // Case where we have hit a TERMINAL node.
    if ( TERMINAL == type )
    {
        _num_leaves++;
        break;
    }
}
// Add the BRANCH -or- TERMINAL (ending) vertex to the path list.
(current_Node->_node_list).insert ( (current_Node->_node_list).begin()
                                    + (current_Node->_node_list).size(), this_v );
(A[this_v]).setTag();
/*
* If a BRANCH vertex was hit
*/ if (BRANCH == type)
{
// Iterator for the branch vertex.
edge_it = this_v.out_edges_begin();
edge_end = this_v.out_edges_end();
// Number of neighbors of the branch vertex excluding this_vertex.
num_neighbors = this_v.outdeg() - 1;
// Go one step down the hierarchy.
++level;
while ( edge_it != edge_end )
{
    vi_bvn = edge_it->target();
    // Should not take into account 'last_vertex'!
    if ( last_v == vi_bvn )
    {
        ++edge_it;
        continue;
    }
    treeNode* new_node = new treeNode(level);
    new_node->_upPtr = current_Node;
    (current_Node->_downPtr).insert ( (current_Node->_downPtr).begin()
                                      + (current_Node->_downPtr).size(), new_node );
    construct_TreeGraph (vi_bvn, this_v, level, G, A, new_node);
    ++edge_it;
}
}
}
else
{
    _num_branches++;
    _num_leaves++;
    // Adding the node to the list.
    (current_Node->_node_list).insert ( (current_Node->_node_list).begin()
                                        + (current_Node->_node_list).size(), this_v );
    (A[this_v]).setTag();
}
}
/***************************************************************************
 * tree.h
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 ***************************************************************************/
#ifndef _TREE_H
#define _TREE_H

#include <GTL/graph.h>
#include "node_attrib.h"

class treeGraph;
/**
* This class is the ADT for the nodes of a tree data structure.
 * Probably it's better to change this to a template to accommodate
* different node list values. Currently it assumes that the node
* has a list of nodes.
*/ class treeNode {
    friend class treeGraph;
public:
    treeNode (const int& stage)
        : _level(stage), _upPtr(NULL) {}
    ~treeNode ();
    int getNodeLevel () const { return _level; }
private:
    int _level;                  // last stage of dendogram for this node
    vector <node> _node_list;    // List of vertices at this node.
    treeNode* _upPtr;            // points to the next in hierarchy.
    vector <treeNode*> _downPtr; // A vector of down-ward pointers.
};
/**
* This class is used for finding the longest vertex
 * path in a 2D planar graph.
 */ class treeGraph {
public:
    // Constructor accepts a list of feature vectors
    treeGraph ( node&, graph&, node_map<NodeAttribute>& );
    ~treeGraph () {};
    void get_LongestBranch( vector<node>&, node_map<NodeAttribute>& );
    int get_NumLeaves (void) { return _num_leaves; }
    int get_NumBranches (void) { return _num_branches; }
private:
    int _num_leaves;    // Number of TERMINAL nodes in the 2D graph.
    int _num_branches;  // Total number of branches in the 2D graph.
    treeNode* _graph;
    void construct_TreeGraph( node&, node&, int&, graph&, node_map<NodeAttribute>&, treeNode* );
    void traverseTree( treeNode*, int&, vector<node>*, node_map<NodeAttribute>& );
};
#endif
VIEW REC
code : contains MATLAB m-files
osu_svm : third-party software for Support Vector Machine classifier
function [label] = Change_Label(sitelndex, config, constel, objNames, VM, priorModels, vn, rn, O)
% function [label] = Change_Label(sitelndex, config, constel, objNames, VM, priorModels, vn, rn, O)
%
% Finds the new label for the given site with index 'sitelndex'.
M = size(objNames{vn}, 1 ); % number of objects N = length(constel); % number of sites if (config(sitelndex))
% *****************************
% if site is already committed
% *****************************
% ----------------------------------------------------------------
% Find local energy of the site given the current configuration
% ----------------------------------------------------------------
if (config(sitelndex)==M)
    Ek = priorModels(rn,vn,1);
else
    Ek = compSumDist(config(sitelndex), constel(sitelndex), VM, vn, rn, O);
    if (O(end))
        for n = 1:N
            if (n~=sitelndex && config(n)~=0 && config(n)~=M)
                if (config(n)~=config(sitelndex))
                    SI = [sitelndex n];
                    Ek = Ek + compSumDist(config(SI), constel(SI), VM, vn, rn, O);
                else
                    % add penalty for 2 sites when they have the same label
                    Ek = Ek + priorModels(rn,vn,2);
                end
            end
        end
    end
end
% Now change the label of site in order to minimize energy difference
E = zeros(M-1,1);
l = zeros(M-1,1);
cnt = 1;
for m = 1:M
    if (m ~= config(sitelndex))
        if (m == M)
% label of the site is NULL
E(cnt) = priorModels(rn,vn,1); l(cnt) = m; cnt = cnt + 1 ; else
%
% label of the site is non-NULL
% /////////////////////////////// E(cnt) = E(cnt) + compSumDist(m, constel(sitelndex), VM, vn, rn, O); l(cnt) = m;
% energy of pair-site cliques
if (O(end))
    for n = 1:N
        if (n~=sitelndex && config(n)~=M && config(n)~=0)
            if (config(n)~=m)
                SI = [sitelndex n];
                LI = [m config(n)];
E(cnt) = E(cnt) + compSumDist(LI, constel(SI), VM, vn, rn, O); else
% add penalty for 2 sites when they have the same label E(cnt) = E(cnt) + priorModels(m,vn,2); end end end end cnt = cnt + 1 ; end end end
E = E - Ek;
[minE,label2] = sort(E);
label = l(label2(1));
else
% ****************************
% if site is not committed yet
% ****************************
E = zeros(M,1); for m = 1 :M if (m == M)
% ////////////////////////////// % label of the site is NULL % ////////////////////////////// E(m) = priorModels(m,vn,1); else
% ////////////////////////////// % label of the site is non-NULL % ////////////////////////////// E(m) = E(m) + compSumDist(m, constel(sitelndex), VM, vn, rn, O);
% energy of pair-site cliques if (O(end)) for n = 1 :N if (n~=sitelndex && config(n)~=0 && config(n)~=M) if (config(n)~=m) SI = [sitelndex n]; LI = [m config(n)];
E(m) = E(m) + compSumDist(U, constel(SI), VM, vn, rn, O); else
% add penalty for 2 sites when they have the same label E(m) = E(m) + priorModels(rn,vn,2); end end end end end end
[minE,label2] = sort(E);
label = label2(1);
end
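At its core, Change_Label evaluates a candidate energy for every label and commits the one with the smallest energy (the sort followed by taking the first element). A minimal Python sketch of that final selection step, with hypothetical energies and label names:

```python
def best_label(energies, labels):
    """Return the label whose candidate energy is smallest, i.e. the
    sort-then-take-first step at the end of Change_Label."""
    order = sorted(range(len(energies)), key=lambda i: energies[i])
    return labels[order[0]]

# Hypothetical energies for three candidate labels of one site.
print(best_label([2.5, 0.7, 1.9], ['LV', 'RV', 'NULL']))  # RV
```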
function [E] = compSumDist(labels, site, VM, vn, rn, O);
% function [E] = compSumDist(labels, features, VM, vn, rn, O);
% Finds the sum of Mahalanobis distances for the properties of each part defined in 'labels'
% E = Sum(||d_11(i)-D_11(f_i)|| + ||d_12(i)-D_12(f_i)|| + ... + ||d_1K(i)-D_1K(f_i)||)
%
% Inputs:
% labels: labels of two neighboring sites
% site: site properties
% based on definition in 'O'
% VM: models of views
% vn: view number
% rn: round number (in jacknife experiment)
% O: an array defining which features to be used
%
E = 0;
N = length(site); % number of objects
featurelndex = find(O==1); % indices of the features to be considered
if (N==1)
    for n = 1:length(featurelndex)
        clear p;
        property_set = 0;
        if (featurelndex(n)==1)
            p(1) = site(1).point.X;
            p(2) = site(1).point.Y;
            property_index = 1;
            property_set = 1;
        elseif (featurelndex(n)==2)
            p = site(1).area;
            property_index = 2;
            property_set = 1;
        elseif (featurelndex(n)==3)
            p = site(1).angle;
            property_index = 3;
            property_set = 1;
        elseif (featurelndex(n)==4)
            p = site(1).eccen;
            property_index = 4;
            property_set = 1;
        end
        if (property_set)
% Get mean and covariance for the selected property
M = VM{rn,property_index}{vn}{labels(1),labels(1)}(:,1);
S = VM{rn,property_index}{vn}{labels(1),labels(1)}(:,2);
% Find the energy contribution of the property
ind = find(S == 0);
SS = S;
if (length(ind) > 0)
SS = ones(size(S));
end
E = E + mahalonobis(p', M, diag(SS));
end
end
elseif (N==2 && O(end)==1) % double site clique
if (labels(1) > labels(2))
    L1 = labels(2);
    L2 = labels(1);
else
L1 = labels(1); L2 = labels(2); end
% Get distance and angle between the sites p(1 ) = site(1 ).point.X; p(2) = site(1 ).point.Y; p(3) = site(2).point.X; p(4) = site(2).point.Y; dist = sqrt((p(1 )-p(3))Λ2 + (p(2)-p(4))Λ2); ang = atan((p(4)-p(2))/(p(3)-p(1 ))); if (length(VM{rn,5}{vn}{L1 ,L2}))
% Get mean and covariance for the selected property
distM = VM{rn,5}{vn}{L1,L2}(:,1);
distS = VM{rn,5}{vn}{L1,L2}(:,2);
angM = VM{rn,6}{vn}{L1,L2}(:,1);
angS = VM{rn,6}{vn}{L1,L2}(:,2);
indDist = find(distS == 0);
distSS = distS;
if (length(indDist) > 0)
    distSS = ones(size(distS));
end
indAng = find(angS == 0);
angSS = angS;
if (length(indAng) > 0)
    angSS = ones(size(angS));
end
% Find the energy contribution of the property
E = E + mahalonobis(dist, distM, diag(distSS));
E = E + mahalonobis(ang, angM, diag(angSS));
end
end
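Each term that compSumDist accumulates is a Mahalanobis distance between an observed part property and the view model's mean under a diagonal covariance, with zero variances replaced by one. A Python sketch of that distance; the `mahalonobis` helper invoked by the m-files is assumed to behave roughly like this:

```python
import math

def mahalanobis_diag(p, mean, var):
    """Mahalanobis distance under a diagonal covariance.

    p, mean and var are equal-length sequences; zero variances are
    replaced by one, mirroring the m-file's handling of S == 0.
    """
    var = [v if v != 0 else 1.0 for v in var]
    return math.sqrt(sum((x - m) ** 2 / v
                         for x, m, v in zip(p, mean, var)))

# A part at (3, 4) scored against a model centered at the origin with
# unit variances reduces to the Euclidean distance.
print(mahalanobis_diag([3.0, 4.0], [0.0, 0.0], [1.0, 1.0]))  # 5.0
```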
function [E] = configEnergy(config, constel, view_model, prior_model, M, O, vn, rn);
% function [E] = configEnergy(config, constel, view_model, prior_model, M, O, vn, rn);
%
% Find the associated energy for a given configuration of the field for an observed
% constellation, and given model parameters.
%
% input:
%   config:      A given configuration on the field
%   constel:     An observed constellation in the form of parts properties
%   view_model:  model of the parts parameters (Gaussian)
%   prior_model: parameters of the prior field
%   M:           total number of objects in current view
%   O:           determines which properties of the sites should be used and
%                if the properties of pairs of sites should also be used.
%                format: [l, ar, an, ec, p], where each element set to '1' will be used
%                l: location, ar: area, an: angle, ec: eccentricity, p: pair of sites
%   vn, rn:      view-index and round-index
%
% Output:
%   E: Energy assigned to the configuration
%
% 11/15/03: Fixed the problem of null label index (l=0) vs. (l=M)
E = 0;
num_sites = size(config, 1); % total number of sites
% find the local energy for each site and sum % them up to get the total energy of the config. for n = 1:num_sites if (config(n)~=M)
% label of site is non-NULL
SI = [n]; % add site to Sitelndex
% energy of single-site clique E = E + compSumDist(config(SI),constel(SI),view_model,vn,rn,O);
% energy of pair-site cliques if (O(end)) for m = 1 :num_sites if (m~=n)
% if sites are distinct if (config(m)~=M)
% if site is non-Null labeled if (config(m)~=config(n))
% if sites have distinct labels
si = [SI m];
E = E + compSumDist(config(si), constel(si), view_model, vn, rn, O);
else
% add penalty for 2 sites when they have the same label
E = E + prior_model(rn,vn,2);
end
end
end
end
end
else
%
% label of site is NULL
%
E = E + prior_model(rn,vn,1);
end
end
function [I] = confmatrixToimg(C);
I = zeros(100);
Cmax = max(max(C));
L = 256/Cmax;
for i = 1:10
    for j = 1:10
        I((i-1)*10+1:i*10, (j-1)*10+1:j*10) = floor(C(i,j)*L);
    end
end
function [S] = Create_Heap(config, constel, objNames, viewModels, priorModels, vn, rn, O);
% [S] = Create_Heap(config, constel, VM, PM, vn, rn, O);
% used in HCF. Look at comments there.
%
M = size(objNames{vn}, 1 ); % number of objects
N = length(constel); % number of sites
S = zeros(N,1 );
% Go over each site and determine value of single-site local energy
for n = 1:N
% For each site try different labels E = zeros(M,1); for m = 1 :M if (m == M)
E(m) = priorModels(rn,vn,1);
else
E(m) = compSumDist(m, constel(n), viewModels, vn, rn, O); end end
E = sort(E); S(n) = -(E(2)-E(1 )); end
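Create_Heap scores each uncommitted site by the negated gap between its two lowest label energies; the most negative scores mark the least ambiguous sites, which the HCF (Highest Confidence First) loop commits first. A Python sketch of that stability measure:

```python
def stability(energies):
    """HCF single-site stability: minus the gap between the two
    lowest candidate-label energies.  A wide gap (very negative
    stability) means the site's best label is unambiguous."""
    e = sorted(energies)
    return -(e[1] - e[0])

# A site whose best label clearly beats the runner-up.
print(stability([5.0, 1.0, 4.0]))  # -3.0
```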
function [LL] = crossModelLabeler(C,viewModels,priorModels,objNames,viewNames,fOpt,RorSOpt,pppOpt,rn)
% function
% [LL] = crossModelLabeler(C,viewModels,priorModels,objNames,viewNames,fOpt,RorSOpt,pppOpt,rn)
%
% Main function for assigning labels to the constellations by using all models.
%
% Inputs:
% C: constellations to be labeled
% viewModels: models of the different views
% priorModels: estimate of the prior MRF parameters
% fOpt: options determining which features to be used
% RorSOpt: tells if the input constellations are real ones or the sampled ones
% 0: if real, view-index: if sampled
% pppOpt: indicates if calculation of pseudo-posterior probability is needed
% 1 : if needed, 0: if not
% rn: round number
%
% Output:
% LL: a cell array of format:LL(t).echo_index
% .viewjndex
% .kfjndex
% .config each row corresponds to one model
% .energy same
% .pseudo same
%
num_views = length(viewNames)-1;
reshapedViews = reshape(char(viewNames)', 1, 3*(num_views+1));
if (~RorSOpt)
% Processing real constellations of test sequence in round 'rn'
T = length(C{rn}); % number of key-frames in sequence
cnt = 1;
for t = 1:T
% Constellation of parts for current key-frame
sites = C{rn}(t).RI;
numSites = length(sites);
disp(numSites);
% Get the index of the sites which are not false-positive (Cheating!!)
viewlnd = 1 + (findstr(reshapedViews, C{rn}(t).view_name)-1)/3;
if (numSites)
    fpObjlnd = 1 + (findstr(reshape(char(C{rn}(t).RI.label)', 1, 2*numSites), 'fp')-1)/2;
    tmp1 = ones(1, numSites);
    tmp2 = zeros(1, numSites);
    tmp2(fpObjlnd) = 1;
    nonFPObjlnd = find((tmp1-tmp2) ~= 0);
    okSites = sites(nonFPObjlnd);
    if (length(nonFPObjlnd))
        % Assigning these meta-data for future use
        LL(cnt).echo_index = str2num(C{rn}(t).echo_num);
        LL(cnt).view_index = viewlnd;
        LL(cnt).kf_index = C{rn}(t).seq_num;
num_sites = length(okSites); labels = zeros(num_sites, num_views); energy = zeros(num_views,1); ppp = zeros(num_views, 1 ); for vn = 1 :num_views
% Find optimal configuration under current view assumption
[labels(:,vn), energy(vn)] = HCF(okSites, viewModels, priorModels, objNames, vn, rn, fOpt);
% Find pseudo-posterior probability if needed if (pppθpt) ppp(vn)=PseudoProb(labels(:,vn),okSites,viewModels,priorModels,objNames,vn,rn,fOpt); end end
LL(cnt).config = labels';
LL(cnt).energy = energy;
LL(cnt).pseudo = ppp;
cnt = cnt + 1;
end
end
end
else
T = length(C); % Note treatment of C is different here cnt = 1 ; for t = 1 :T
% Constellation of parts for current key-frame sites = C(t).RI; numSites = length(sites); viewlnd = RorSOpt; if (numSites>0) %fpθbjlnd = 1 + (findstr(reshape(char(sites.label)', 1 , 2*numSites), 'fp')-1)/2;
%tmp1 = ones(1, numSites); %tmp2 = zeros(1.numSites); %tmp2(fpθbjlnd) = 1;
%nonFPObjlnd = find((tmp1-tmp2) ~= 0) ; %okSites = sites(noπFPObjlnd);
%if (length(okSites))
% Assigning these meta-data for future use
LL(cnt).echo_index = 0;
LL(cnt).view_index = RorSOpt;
LL(cnt).kf_index = '00000';
num_sites = length(sites);
labels = zeros(num_sites, num_views);
energy = zeros(num_views,1);
ppp = zeros(num_views, 1);
for vn = 1:num_views
% Find optimal configuration under current view assumption
[labels(:,vn), energy(vn)] = HCF(sites, viewModels, priorModels, objNames, vn, rn, fOpt);
% Find pseudo-posterior probability if needed if (pppOpt) ppp(vn) = PseudoProb(labels(:,vn),sites,viewModels,priorModels,objNames,vn,rn,fOpt); end end
LL(cnt).config = labels'; LL(cnt).energy = energy; LL(cnt).pseudo = ppp; cnt = cnt + 1 ; %end end end end
function [wronglndex] = detectWrongLabels(kflnfo, objNames, viewNames)
% function [wronglndex] = detectWrongLabels(kflnfo, objNames, viewNames)
%
% Some of the key-frames have their parts labeled wrongly in the ground truth.
% Use this function to identify those key-frames.
%
wronglndex = [];
T = length(kflnfo); % number of key-frames
M = length(viewNames); % number of views for t = 1:T message = strcat('Processing key-frame: ', num2str(t)); disp(message);
% get view index for current key-frame reshapedViews = reshape(char(viewNames)',1 ,3*M); vn = 1 + (findstr(reshapedViews, kflnfo(t).view_name)-1)/3;
N = length(kflnfo(t).RI); % number of parts in current constellation
config = zeros(1,N); % configuration for current constellation
if (vn ~= M)
    numObjs = size(objNames{vn},1);
    reshapedObjs = reshape(objNames{vn}',1,2*numObjs);
% Find the list of Null-labeled sites nulllnd = 1 + (findstr([kflnfo(t).RI. label], objNames{vn}(end,:))-1)/2; nullCnt = length(nulllnd);
% Find the list of Non-Null-labeled sites
nonNullCnt = N - nullCnt;
nonNulllnd = []; % keeps index of sites with non-null label
for n = 1:N
    if (length(find(nulllnd == n)) == 0)
        nonNulllnd = [nonNulllnd n];
        labelObj = kflnfo(t).RI(n).label;
l = findstr(reshapedObjs, labelObj);
if (~length(l))
    wronglndex = [wronglndex t];
else
    disp(1+(l-1)/2);
    config(n) = 1 + (l-1)/2;
end
end
end
for n = 1:(numObjs-1)
    repeatlnd = find(config == n);
    if (length(repeatlnd) > 1)
        wronglndex = [wronglndex t];
    end
end
end
end
num_echos = length(KF_INFO); % total number of echos
% ---------------------------------------------
% Initialize a cell array that keeps the count
% of occurrence of each object
objCount = cell(length(OBJ_NAMES),1);
for l = 1:length(objCount)
    N = length(OBJ_NAMES{l}); % number of objects in current view
    objCount{l} = zeros(1,N);
end
totalKFCount = zeros(length(VIEW_NAMES)-1,1);
% ---------------------------------------------
% Go over all echos and count occurrence of
% each of the objects and update 'objCount'
% ---------------------------------------------
for n = 1:num_echos
    disp(' ');
    message = strcat('Processing KFs of echo number: ', num2str(n));
    disp(message);
L = length(KF_INFO{n});
for l = 1:L % Go over all key-frames
    disp(' *** ');
    message = strcat('Processing KF number: ', num2str(l));
    disp(message);
M = size(KF_INFO{n}(l).RI,2); % Number of objects in current key-frame
% Find the index of the view label of key-frame
v_name = KF_INFO{n}(l).view_name;
message = strcat('Name of current view is: ', v_name);
disp(message);
if (~strcmp(VIEW_NAMES{end}, v_name))
    rep_v_name = repmat(v_name, length(VIEW_NAMES), 1);
    view_index = find(strncmp(VIEW_NAMES', rep_v_name, 3) == 1);
    % Number and label of objects in current view
    total_num_objs = length(OBJ_NAMES{view_index});
    gt_obj_names = cellstr(OBJ_NAMES{view_index});
    message = strcat('Index of current view is: ', num2str(view_index));
    disp(message);
    message = strcat('Total number of objects in current view (GT): ', num2str(total_num_objs));
    disp(message);
    for m = 1:M % Go over objects in each key-frame
        current_obj_label = KF_INFO{n}(l).RI(m).label;
        current_obj_name = repmat(current_obj_label, total_num_objs, 1);

This file contains brief information on how to use the code for automatic cardiac chamber segmentation using the GSAT method.
Required software and libraries to make this work:
1) Intel Image Processing Library (IPL)
2) Intel OpenCV Library
3) Graph Template Library version 1.2 (GTL v1.2)
4) C++ Standard Template Library
5) files in ../echo_video directory: ecg.cc, ecg.h frame.cc, frame.h roi.cc, roi.h
6) files in ../util directory: util.cc, util.h ppmIO.cc, ppmIO.h img_util.cc, img_util.h
* Change the entries of the Makefile to reflect the correct paths to the different directories.
* Use script "extractGSAT" (change settings in it to fit your case) to segment cardiac chambers.
/***************************************************************************
 * chambereg.cc
 *
* This file includes the main function for performing the GSAT segmentation. *
* Written by Shahram Ebadollahi
* Digital Video|Multimedia Group
* Columbia University
* ***************************************************************************/
#include <iostream> #include <fstream> #include <cstdlib> #include <cstring> #include <cstdio> #include <cmath> #include <ctype.h> #include <unistd.h> #include <dirent.h> #include <sys/stat.h> #include <CV.h> #include <GTL/graph.h>
#include "../util/filename.h"
#include "../util/ppmIO.h"
#include "../echo_video/frame.h"
#include "sat.h"
#include "sub_graph.h"
#include "gsat_graph.h"
#include "sfp.h"
#include "surface_path.h"
#include "gsat.h"
//#DEBUG
void printusage (char*);

int main( int argc, char* argv[] )
{
CvPoint P_0;
int W, H;
int graph_option;
int level_min, level_max;
char image_name [FILE_NAME_SIZE];
char output_name [FILE_NAME_SIZE];
char outImg_name [FILE_NAME_SIZE];
char resultImg_name [FILE_NAME_SIZE];
char clevel [FILE_NAME_SIZE];
char graph_name [FILE_NAME_SIZE];

IplImage* S [TOTAL_STAGES]; // Stack of subset images at
IplImage* D [TOTAL_STAGES]; // different scales.

if ( argc < 6 ) {
    printusage (argv[0]);
    exit (EXIT_FAILURE);
}
/*** Scan command line ***/
for (int ind = 1; ind < argc; ind++) {
    /*** parse switches ***/
    if (argv[ind][0] == '-') {
        switch (argv[ind][1]) {
        case 'i':
            strcpy (image_name, argv[++ind]);
            break;
        case 'g':
            strcpy (output_name, argv[++ind]);
            break;
        case 'O':
            graph_option = atoi (argv[++ind]);
            break;
        case 'm':
            level_min = atoi (argv[++ind]);
            break;
        case 'M':
            level_max = atoi (argv[++ind]);
            break;
        case 'f':
            strcpy (outImg_name, argv[++ind]);
            break;
        default:
            printf ("Unknown switch '%c'\n", argv[ind][1]);
            break;
        }
    }
}
int cnt = 0 ;
IplImage* mask;
char* mask_name = new char [FILE_NAME_SIZE];
strcpy (mask_name, MASK_DIR);
strcat (mask_name, BW_MASK);
mask = readIMG2IPL (mask_name, 0);
delete [] mask_name;
//
// Read image from disk
//
IplImage* img = readIMG2IPL (image_name, NULL);
Frame F (image_name, 0);
ROI* roi = F.getROI();
IplImage* roi_img = roi->getImg();
W = roi_img->width; H = roi_img->height;
//
// Define a gray image with same size as input.
//
Ipllmage* img__gray = iplCreatelmageHeader (1 , 0 , IPL_DEPTH__8U, "GRAY" ,
"GRAY" , IPL DATA ORDER PIXEL, IPL_ORIGIN_TL , IPL_ALIGN_QWORD , W, H, NULL , NULL, NULL, NULL) ; if ( img_gray == NULL) exi t f EXITJFAILURE) ; iplAllocatelmage ( img_gray, 0 , 0 ) ; if (NULL == img_gray->imageData) exit (EXIT_FAILURE) ;
Ipllmage* roi_masked = iplClonelmage (img_gray) ; Ipllraage* roi_filtered = iplClonelmage (img__gray) ,-
/ /
// Changing from color image to gray.
/ / iplColorToGray (roi_img, img_gray) ; iplAnd(mask, img_gray, roi_masked) ; iplMedianFilter(roi_masked, roi_filtered, 3, 3, 1, 1) ;
//
// Finding the centers of the maximal discs
// at different scales.
/ / printf ("Calculating the maximal disc centers .\n") ; maximal_disc_centers (roi_filtered, S, D, TOTAL_STAGES) ;
    //
    // Define a graph and initialize its attributes.
    //
    graph SG[TOTAL_STAGES];  // stack of subset graphs
    LUT lut[TOTAL_STAGES];
    for (int j = 0; j < TOTAL_STAGES; j++)
        (lut[j]).setSize(W, H);

    P_0.x = 0;
    P_0.y = 0;
    NodeAttribute NA_0(P_0, 0, 0);
    node_map <NodeAttribute> attribute[TOTAL_STAGES];  // node attributes

    //
    // Turn each subset to a subset graph and do
    // some processing on those graphs before
    // trying to construct the GSAT.
    //
    for ( int l = level_min; l < level_max; l++ )
    {
#ifdef DEBUG
        // making up the name of the graph
        sprintf(clevel, "%d", l);
        strcpy(graph_name, "tmp/");
        strcat(graph_name, output_name);
        strcat(graph_name, clevel);
        strcat(graph_name, ".GML");
#endif
        // undirected graph
        (SG[l]).make_undirected();
        // initialize node attributes of graph
        attribute[l] = node_map <NodeAttribute> (SG[l], NA_0);

        // calculating the sub-graph at stage 'l'
        sub_graph(S[l], D[l], SG[l], attribute[l], l);

        // simplifying the branch-vertex clusters
        simplify_BranchVertexClusters(SG[l], attribute[l]);
        simplify_BranchVertexClusters(SG[l], attribute[l]);
        // NOTE: with fewer than 2 filterings the GSAT gets jumbled at branches
        // Date: 7/24/03

        // deleting the isolated vertices
        delete_IsolatedVertices(SG[l], attribute[l]);
        create_RJT(SG[l], attribute[l], lut[l]);

#ifdef DEBUG
        // Save info on sub-graphs to disk
        save_SubGraph(SG[l], attribute[l], graph_name, graph_option);  // write sub-graphs to disk in GML format

        // add image of the graphs to the original image
        IplImage* graph_img = iplCloneImage(roi_masked);
        strcpy(image_name, "tmp/");
        strcat(image_name, output_name);
        strcat(image_name, clevel);
        add_GraphToImage(graph_img, SG[l], attribute[l], image_name);
        iplDeallocate(graph_img, IPL_IMAGE_ALL);
#endif
    }
    //
    // Creating a 3D graph. Each node in this graph corresponds
    // to a node in the sub-graphs. An edge exists between two
    // nodes if they are connectable and they belong to two
    // consecutive levels.
    //
    printf("Constructing the surface paths.\n");
    vector <SFP> sfps;
    construct_SurfacePath(SG, attribute, lut, sfps, level_min, level_max);

    // Creating the GSAT from surface paths.
    printf("Constructing the GSAT graph\n");
    GSAT gsat(sfps);

#ifdef DEBUG
    // Save GSAT graph to disk
    gsat.saveToDisk("tmp/totalGSAT.GML", 0);
#endif

    // For each node in the GSAT graph give the neighbors
    graph :: node_iterator iti1 = gsat.getGraph().nodes_begin();
    graph :: node_iterator ite1 = gsat.getGraph().nodes_end();
    graph :: node_iterator iti2;
    graph :: node_iterator ite2;
    node_map <GSAT_Attrib>& gsa = gsat.getAttrib();

    //
    // Getting a seed vertex and growing a subgraph corresponding
    // to a cavity starting from that seed vertex.
    //
    IplImage* img_hole = iplCloneImage(img);
    IplImage* img_result_total = iplCloneImage(img);
    IplImage* img_result = iplCloneImage(img);

    IplImage* img_tmpg = iplCreateImageHeader(1, 0, IPL_DEPTH_8U, "GRAY",
                                              "GRAY", IPL_DATA_ORDER_PIXEL,
                                              IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
                                              img->width, img->height,
                                              NULL, NULL, NULL, NULL);
    if (img_tmpg == NULL)
        exit(EXIT_FAILURE);
    iplAllocateImage(img_tmpg, 0, 0);
    if (NULL == img_tmpg->imageData)
        exit(EXIT_FAILURE);

    IplImage* img_tmp = iplCloneImage(img_tmpg);
    int min, del, rad, pix;
    char pos_txt[FILE_NAME_SIZE];
    char tmp_txt[FILE_NAME_SIZE];
    vector <node> deep_nodes;

    CvPoint pos;
    node seed;
    int color = 0;
    while (gsat.getNextSeed(seed)) {
#ifdef DEBUG
        /* [code unrecoverable in source: rendered as Figure imgf000125_0001] */
        rad = gsa[seed].getAvgDisc();
        pos = gsa[seed].getAvgPos();
        printf("node=%d, pos=(%d,%d), minG=%d, delG=%d, rad=%d\n",
               gsa[seed].getIndex(), 2*pos.x, 2*pos.y, min, del, 2*rad);
#endif
        // Extract subgraph corresponding to cavity
        gsat.getSubGraph(seed, color);
        color++;
    }

    cnt = 0;
    for (int clr = 0; clr < color; clr++) {
        iplSet(img_hole, 0);
        iplSet(img_result, 0);
        iplSet(img_tmp, 0);
        iti1 = gsat.getGraph().nodes_begin();
        ite1 = gsat.getGraph().nodes_end();
        while (iti1 != ite1) {
            if (gsa[*iti1].getColor() == clr) {
                rad = gsa[*iti1].getAvgDisc();
                pos = gsa[*iti1].getAvgPos();
                pos.x *= 2;
                pos.y *= 2;
                cvCircle(img_hole, pos, rad, CV_RGB(255, 255, 255), -1);
            }
            ++iti1;
        }
        iplColorToGray(img_hole, img_tmpg);
        iplOr(mask, img_tmpg, img_tmp);
        int pix_cnt1 = cvCountNonZero(mask);
        int pix_cnt2 = cvCountNonZero(img_tmpg);
        int pix_cnt3 = cvCountNonZero(img_tmp);

        /****************************************************
         * Print the regions corresponding to chambers      *
         ****************************************************/
        if (pix_cnt3 == pix_cnt1 || pix_cnt3 == pix_cnt2) {
            //iplAdd(img_hole, img, img_result);
            iplAdd(img_hole, img_result_total, img_result_total);
            strcpy(resultImg_name, outImg_name);
            sprintf(clevel, "%d", ++cnt);
            strcat(resultImg_name, clevel);
            writeIPL2IMG(img_tmpg, resultImg_name);
        }
        /* [code unrecoverable in source: rendered as Figure imgf000126_0001] */
    }

    /****************************************************
     * Print all regions as overlay on original image   *
     ****************************************************/
    strcpy(resultImg_name, outImg_name);
    strcat(resultImg_name, "_over");
    writeIPL2IMG(img_result_total, resultImg_name);

    /* Deallocate all the images */
    iplDeallocate(img_tmpg, IPL_IMAGE_ALL);
    iplDeallocate(roi_img, IPL_IMAGE_ALL);
    iplDeallocate(img_hole, IPL_IMAGE_ALL);
    iplDeallocate(img_tmp, IPL_IMAGE_ALL);
    iplDeallocate(img_result_total, IPL_IMAGE_ALL);
    iplDeallocate(img_result, IPL_IMAGE_ALL);
    iplDeallocate(mask, IPL_IMAGE_ALL);
    iplDeallocate(roi_filtered, IPL_IMAGE_ALL);
    iplDeallocate(roi_masked, IPL_IMAGE_ALL);
    iplDeallocate(img_gray, IPL_IMAGE_ALL);
    iplDeallocate(img, IPL_IMAGE_ALL);
    return (0);
}
void printusage(char* prog)
{
    printf("USAGE: %s\n", prog);
    printf("\t-i <input image name>\n");
    printf("\t-g <output graph name>\n");
    printf("\t-o <1: if output graph is labeled, 0: otherwise>\n");
    printf("\t-m <minimum graph level (>1)>\n");
    printf("\t-M <maximum graph level (<max gray level)>\n");
    printf("\t-f <result image name>\n");
}
#!/usr/bin/perl -w
$start = 1;
$stop = 4;
$graph = "SG";
$srcdir = "./testimg/echon/key-frames";
$tgtdir = "./testimg/echon3/regions";
$key_frame = "key_frames";
$regions = "regions";
$src = $srcdir;
$tgt = $tgtdir;
opendir(SRCDIR, $src) || die "no directory: $!";
while ($imgIn = readdir(SRCDIR)) {
    $imgOut = $imgIn;
    $imgOut =~ s/.ppm/_seg/;
    $imgOut = $tgt.'/'.$imgOut;
    $imgIn = $src.'/'.$imgIn;
    print "$imgIn\n";
    print "$imgOut\n";
    system "./gsat -i $imgIn -g $graph -o 0 -m 4 -M 20 -f $imgOut";
}
closedir(SRCDIR);
/****************************************************************************
 * graph_test.cc
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 ****************************************************************************/
#include <stdio.h>
#include <CV.h>
#include <GTL/graph.h>
#include <GTL/node_map.h>
#include "node_attrib.h"

int main( int argc, char** argv )
{
    graph G;
    G.make_undirected();

    node* v = new node[6];
    v[0] = G.new_node();
    v[1] = G.new_node();
    v[2] = G.new_node();
    v[3] = G.new_node();
    v[4] = G.new_node();
    v[5] = G.new_node();

    G.new_edge(v[0], v[1]);
    G.new_edge(v[1], v[2]);
    G.new_edge(v[2], v[3]);
    G.new_edge(v[3], v[4]);
    G.new_edge(v[4], v[5]);
    G.new_edge(v[5], v[0]);
    cout << G << endl;

    node_map<string> label( G, "Default Label" );
    label[ v[0] ] = "V0";
    label[ v[1] ] = "V1";
    label[ v[2] ] = "V2";
    label[ v[3] ] = "V3";
    label[ v[4] ] = "V4";
    label[ v[5] ] = "V5";
    for( int i=0; i < 6; i++ )
        cout << label[ v[i] ] << endl;

    /*
    for( int i=0; i < non_zero; i++ )
        n[i] = G.new_node();

    // creating a map for assigning attributes to the nodes of G
    CvPoint* P_0 = new CvPoint;
    P_0->x = 0;
    P_0->y = 0;
    NodeAttribute NA_0( P_0, 0, 0 );
    node_map < NodeAttribute > label( G, NA_0 );

    CvPoint* P1 = new CvPoint;
    CvPoint* P2 = new CvPoint;
    CvPoint* P3 = new CvPoint;
    P1->x = 0;  P1->y = 0;
    P2->x = 1;  P2->y = 1;
    P3->x = 2;  P3->y = 2;

    NodeAttribute na1( P1, 10, 10 );
    NodeAttribute na2( P2, 20, 20 );
    NodeAttribute na3( P3, 30, 30 );
    label[ v[0] ] = na1;
    label[ v[1] ] = na2;
    label[ v[2] ] = na3;

    cout << "Printing the node attributes: " << endl;
    for( int i=0; i < 3; i++ )
    {
        NodeAttribute na = label[ v[i] ];
        printf( "%d\n", na.getNum_discs() );
        printf( "%d\n", na.getImg_val() );
        CvPoint* P = na.getPosition();
        cout << "x: " << P->x << "\t" << "y: " << P->y << endl;
    }
    */

    G.save( "g2.GML" );
}
/****************************************************************************
 * gsat.cc
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 ****************************************************************************/
#include <iostream>
#include <fstream>
#include <algorithm>
#include "gsat_attrib.h"
#include "gsat.h"
#include "sfp.h"

//#define DEBUG

GSAT :: GSAT()
{
    // GSAT is an undirected graph
    G.make_undirected();

    // Initializing the node attributes of GSAT
    CvPoint p;
    p.x = 0;
    p.y = 0;

    GSAT_Attrib gsa0(p, 0, 0, 0);
    A = node_map <GSAT_Attrib> (G, gsa0);
    _FIRST_SEED_FLAG = false;
    // changed from true to false: 8/9/03
}
/**
 * Constructor.
 * Given the collection of surface paths
 * create the GSAT graph and associate
 * attributes to its nodes.
 *
 * 8/6/03: Added section to avoid multiple GSAT nodes at same location.
 * This usually happens when there are multiple colinear SFPs.
 */
GSAT :: GSAT(vector <SFP>& sfp)
{
    int i, j, k, l;
    int num_nodes;
    int minG, avgN, deltaG;
    int minG_prime, avgN_prime, deltaG_prime;
    int minL, maxL;
    int minL_prime, maxL_prime;
    int rad, minRad, maxRad, minRad_prime, maxRad_prime;
    int node_cntr = 0;
    int index;
    int numV;
    bool flag;
    bool foundFLAG;
    CvPoint avgPos;
    CvPoint minLoc, maxLoc;
    CvPoint minLoc_prime, maxLoc_prime;
    CvPoint p;
    node node1, node2;

    // Merging the surface paths
    //merge_SurfacePaths(sfp);

    graph :: node_iterator it;
    graph :: node_iterator end;

    // GSAT is an undirected graph
    G.make_undirected();

    /** Initializing the node attributes of GSAT */
    p.x = 0;
    p.y = 0;
    GSAT_Attrib gsa0(p, 0, 0, 0);
    A = node_map <GSAT_Attrib> (G, gsa0);

    graph :: node_iterator iti;
    graph :: node_iterator ite;

#ifdef DEBUG
    printf("-----------------------------------------\n");
    printf("Printing the specs of each surface path:\n");
#endif

    /** First create the nodes in the GSAT and assign their attributes **/
    for (l = 0; l < sfp.size(); l++) {
        if (!sfp[l].isIgnored()) {  // Only if SFP should not be ignored
            flag = false;

            // Get average attributes of each SFP
            sfp[l].setMDNumber();
            minG = sfp[l].getMinGrayLevel();
            deltaG = sfp[l].getDeltaGrayLevel();
            avgN = sfp[l].getAverageMDSize();
            sfp[l].getAveragePosition(avgPos);

#ifdef DEBUG
            printf("*****************************************\n");
            printf("Surface Path Index = %d\n", sfp[l].getIndex());
#endif

            // Get attributes of top and bottom node in SFP
            iti = sfp[l].getGraph().nodes_begin();
            ite = sfp[l].getGraph().nodes_end();
            minL = sfp[l].MinLevel();
            maxL = sfp[l].MaxLevel();
            int sfp_node_cnt = 0;
            while (iti != ite) {
#ifdef DEBUG
                /* Printing for testing purposes */
                // Printing parameters of each node of current SFP
                printf("\t\tNode: %d\n", sfp_node_cnt++);
                printf("\t\tNumber of maximal discs: %d\n", sfp[l].getMDNumber(*iti));
                printf("\t\tSize of maximal discs: %d\n", sfp[l].getMDSize(*iti));
                printf("\t\tPosition: (%d, %d)\n",
                       (sfp[l].getMDPosition(*iti)).x,
                       (sfp[l].getMDPosition(*iti)).y);
                printf("\t\tValue of S: %d\n", sfp[l].getMDS(*iti));
                printf("\t\tValue of D: %d\n", sfp[l].getMDD(*iti));
#endif
                rad = sfp[l].getMDSize(*iti);
                if (rad == minL) {
                    minRad = rad;
                    minLoc = sfp[l].getMDPosition(*iti);
                } else if (rad == maxL) {
                    maxRad = rad;
                    maxLoc = sfp[l].getMDPosition(*iti);
                }
                ++iti;
            }

#ifdef DEBUG
            // Printing the specs of the Surface Path
            printf("\tAvg Position: (%d,%d)\n", avgPos.x, avgPos.y);
            printf("\tGray-Level Delta: %d\n", deltaG);
            printf("\tAvg Disc Size: %d\n", avgN);
            printf("\tMinimum Gray-Level: %d\n", minG);
            printf("\tLocation and size of min: (%d,%d) - %d\n", minLoc.x, minLoc.y, minRad);
            printf("\tLocation and size of max: (%d,%d) - %d\n", maxLoc.x, maxLoc.y, maxRad);
            printf("*****************************************\n");
#endif

            /*
             * Avoid multiple nodes at same position.
             * Check to see if a node already exists in
             * 'G' with the same location as the current SFP.
             */
            it = G.nodes_begin();
            end = G.nodes_end();
            while (it != end) {
                p = A[*it].getAvgPos();
                if (p.x == avgPos.x && p.y == avgPos.y) {  // co-located nodes
#ifdef DEBUG
                    printf("SFP %d is colocated with GSAT's node %d\n",
                           l, A[*it].getIndex());
#endif
                    // Update the parameters of GSAT's node:
                    // get the parameters first
                    minG_prime = A[*it].getGmin();
                    deltaG_prime = A[*it].getDeltaG();
                    avgN_prime = A[*it].getAvgDisc();
                    minLoc_prime = A[*it].getBotLoc();
                    minRad_prime = A[*it].getBotRad();
                    minL_prime = A[*it].getBotLevel();
                    maxLoc_prime = A[*it].getTopLoc();
                    maxRad_prime = A[*it].getTopRad();
                    maxL_prime = A[*it].getTopLevel();

                    // update the parameters
                    if (minG_prime < minG)
                        minG = minG_prime;
                    avgN = (avgN*deltaG + avgN_prime*deltaG_prime) / (deltaG + deltaG_prime);
                    deltaG += deltaG_prime;
                    if (minRad_prime < minRad) {
                        minLoc.x = minLoc_prime.x;
                        minLoc.y = minLoc_prime.y;
                        minRad = minRad_prime;
                    }
                    if (maxRad_prime > maxRad) {
                        maxLoc.x = maxLoc_prime.x;
                        maxLoc.y = maxLoc_prime.y;
                        maxRad = maxRad_prime;
                    }
                    if (maxL_prime > maxL)
                        maxL = maxL_prime;
                    if (minL_prime > minL)
                        minL = minL_prime;

                    GSAT_Attrib gsa(avgPos, minG, deltaG, avgN);
                    gsa.setIndex(A[*it].getIndex());
                    gsa.setTop(maxRad, maxLoc, maxL);
                    gsa.setBot(minRad, minLoc, minL);
                    A[*it] = gsa;
                    A[*it].addSFPIndex(sfp[l].getIndex());
                    flag = true;
                }
                ++it;
            }

            // SFP does not correspond to any node in 'G'; hence add
            // a new node in 'G' and assign its associated parameters
            if (!flag) {
                // create a node corresponding to current surface path
                node new_node = G.new_node();

                // assign attributes to the newly created node
                GSAT_Attrib gsa(avgPos, minG, deltaG, avgN);
                gsa.setIndex(++node_cntr);
                gsa.setTop(maxRad, maxLoc, maxL);
                gsa.setBot(minRad, minLoc, minL);
                A[new_node] = gsa;
                A[new_node].addSFPIndex(sfp[l].getIndex());
            }
        }
    }

    int linkFlag;
    int numNodes = G.number_of_nodes();

#ifdef DEBUG
    printf("\n\n-----------------------------------------\n");
    printf("\n\nCreated a GSAT with %d nodes for %d SFPs. Now linking the nodes with edges.\n",
           numNodes, sfp.size());
    printf("-----------------------------------------\n\n");
#endif

    /** Place edges between the nodes of the GSAT based on SFP adjacency **/
    for (k = 0; k < sfp.size(); k++) {
        if (!sfp[k].isIgnored()) {
            for (l = 0; l < sfp.size(); l++) {
                if (l != k) {
                    if (!sfp[l].isIgnored()) {
                        /******************************************************
                         * Link nodes in 'G' corresponding to 2 neighbor SFPs *
                         ******************************************************/
                        if (areAdjacent(sfp[k], sfp[l])) {
                            // Go over the nodes of GSAT
                            it = G.nodes_begin();
                            end = G.nodes_end();
                            foundFLAG = FALSE;
                            node_cntr = 0;

                            // In graph 'G' find the nodes corresponding to the
                            // 2 surface paths 'k' and 'l' and join them with an edge
                            while (it != end) {
                                // get node index and compare it with the ones above
                                //index = A[*it].getIndex();
                                vector <int> sfpInd = A[*it].getSFPIndex();  // added 8/6/03
                                int indSize = sfpInd.size();
                                for (int ii=0; ii < indSize; ii++) {
                                    if (sfpInd[ii] == sfp[l].getIndex() ||
                                        sfpInd[ii] == sfp[k].getIndex()) {
                                        if (!foundFLAG) {
                                            node1 = *it;
                                            node_cntr++;
                                            foundFLAG = TRUE;
                                        } else {
                                            node2 = *it;
                                            node_cntr++;
                                            foundFLAG = FALSE;
                                            break;
                                        }
                                    }
                                }
                                ++it;
                            }

                            if (node_cntr == 2 && foundFLAG == FALSE) {
                                linkFlag = 1;
                                // only join if 2 nodes aren't already joined
                                vector <int> n1 = A[node1].getNode();
                                vector <int> n2 = A[node2].getNode();
                                int sizeN1 = n1.size();
                                int sizeN2 = n2.size();
                                for (int r=0; r<sizeN1; r++) {
                                    int currentN1 = n1[r];
                                    if (currentN1 == A[node2].getIndex())
                                        linkFlag = 0;
                                }
                                for (int s=0; s<sizeN2; s++) {
                                    int currentN2 = n2[s];
                                    if (currentN2 == A[node1].getIndex())
                                        linkFlag = 0;
                                }
                                if (linkFlag) {
                                    G.new_edge(node1, node2);
                                    G.new_edge(node2, node1);
                                    A[node1].addNode(A[node2].getIndex());
                                    A[node2].addNode(A[node1].getIndex());
                                }
                            }
                        }
                    }
                }
            }
        }
    }

    G.make_undirected();
    _FIRST_SEED_FLAG = false;
    // changed from true to false: 8/9/03
}
/**
 * dumpInfo()
 * Writes node info to output.
 */
void GSAT :: dumpInfo()
{
    int id, avg_n, min_g, del_g;
    float fac;
    CvPoint p;
    graph :: node_iterator n_it = G.nodes_begin();
    graph :: node_iterator n_end = G.nodes_end();
    while (n_it != n_end)
    {
        id = (A[*n_it]).getIndex();
        avg_n = (A[*n_it]).getAvgDisc();
        p = (A[*n_it]).getAvgPos();
        min_g = (A[*n_it]).getGmin();
        del_g = (A[*n_it]).getDeltaG();
        fac = (float)del_g / (float)min_g;
        printf("id:%d, Size:%d, Position:(%d,%d), MinG:%d, DelG:%d, F:%f\n",
               id, avg_n, p.x, p.y, min_g, del_g, fac);
        n_it++;
    }
}

/**
 * This method writes the graph to disk in GML format.
 * If option 'O' is set it also writes the node indices
 * along with the nodes to the graph.
 */
void GSAT :: saveToDisk(const char* F, int O)
{
    // opening the file for writing the graph to disk
    ofstream out_file( F, ios::out );
    if (!out_file)
    {
        cerr << "GSAT::saveToDisk(): Can not open file for output" << endl;
        exit(EXIT_FAILURE);
    }

    out_file << "graph [" << endl;
    out_file << "directed ";
    if ( G.is_directed() )
        out_file << "1" << endl;
    else
        out_file << "0" << endl;

    int id, x, y, type, level;
    CvPoint P;
    graph :: node_iterator n_it = G.nodes_begin();
    graph :: node_iterator n_end = G.nodes_end();
    //n_end--;

    /** Dump the nodes to the GML file **/
    while (n_it != n_end)
    {
        P = (A[*n_it]).getAvgPos();
        id = (A[*n_it]).getIndex();
        level = 0;
        //printf("Writing node %d at location (%d,%d) to file\n", id, P.x, P.y);
        out_file << "node [" << endl;
        out_file << "id " << id << endl;
        if( O != 0 )
            out_file << "label \"" << id << "\"" << endl;
        out_file << "graphics [" << endl;
        out_file << "center [" << endl;
        out_file << "x " << P.x << endl;
        out_file << "y " << P.y << endl;
        out_file << "z " << level << endl;
        out_file << "]" << endl;    // closing bracket for center
        out_file << "width " << 0.1 << endl;
        out_file << "height " << 0.1 << endl;
        out_file << "depth " << 0.1 << endl;
        out_file << "]" << endl;    // closing bracket for graphics
        out_file << "vgj [" << endl;
        out_file << "labelPosition \"below\"" << endl;
        out_file << "shape \"Oval\"" << endl;
        out_file << "]" << endl;    // closing bracket for vgj
        out_file << "]" << endl;    // closing bracket for node
        n_it++;
    }

    int source, target;
    graph :: edge_iterator e_it = G.edges_begin();
    graph :: edge_iterator e_end = G.edges_end();

    /** Dump the edges to the GML file **/
    while( e_it != e_end )
    {
        source = A[e_it->source()].getIndex();
        target = A[e_it->target()].getIndex();
        //printf("Linking source %d to target %d\n", source, target);
        out_file << "edge [" << endl;
        out_file << "linestyle \"solid\"" << endl;
        out_file << "source " << source << endl;
        out_file << "target " << target << endl;
        out_file << "]" << endl;    // closing bracket for edge
        e_it++;
    }

    out_file << "]" << endl;    // closing bracket for graph
    out_file.close();
}
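The node/edge layout that saveToDisk() emits can be sketched without the GTL and file-I/O dependencies. This is a hedged illustration, not part of the listing: the plain `ids`/`edges` inputs and the `toGML` name are stand-ins; only the GML keys (`graph`, `directed`, `node`, `id`, `edge`, `source`, `target`) follow the code above.

```cpp
// Sketch: minimal GML writer mirroring the structure produced by saveToDisk().
// The input containers are illustrative stand-ins for the GTL graph.
#include <sstream>
#include <string>
#include <vector>
#include <utility>

// Write one "node [...]" block per id and one "edge [...]" block per pair.
std::string toGML(const std::vector<int>& ids,
                  const std::vector<std::pair<int, int> >& edges) {
    std::ostringstream out;
    out << "graph [\n";
    out << "directed 0\n";                      // the GSAT graph is undirected
    for (std::size_t i = 0; i < ids.size(); ++i)
        out << "node [\n" << "id " << ids[i] << "\n" << "]\n";
    for (std::size_t i = 0; i < edges.size(); ++i)
        out << "edge [\n"
            << "source " << edges[i].first << "\n"
            << "target " << edges[i].second << "\n"
            << "]\n";
    out << "]\n";                               // closing bracket for graph
    return out.str();
}
```

Writing to an `ostringstream` rather than an `ofstream` makes the formatting testable; the listing's version streams the same bracket structure straight to disk.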
/**
 * getNextSeed()
 * Returns the next available seed vertex in 'N'.
 * If no seed is found returns FALSE.
 * A seed vertex is a vertex not already used in a
 * subgraph which has the highest (current) difference
 * between top and bottom gray-level values of
 * the corresponding surface path.
 */
bool GSAT :: getNextSeed(node& N)
{
    int cntr = 0;
    int deltaG, deltaG_Max;
    int minG, minG_Min;
    double glc, fac, fac_max;
    vector <int> deltaG_v;
    vector <int> minG_v;
    vector <float> F_v;
    float F_max;

    /*******************************************************************
     * Get depth of all unused vertices of the GSAT graph              *
     *******************************************************************/
    graph :: node_iterator iti = G.nodes_begin();
    graph :: node_iterator ite = G.nodes_end();
    while (iti != ite) {
        if (!A[*iti].isUsed()) {
            deltaG = A[*iti].getDeltaG();
            minG = A[*iti].getGmin();
            deltaG_v.insert(deltaG_v.begin() + deltaG_v.size(), deltaG);
            //F_v.insert(F_v.begin() + F_v.size(), (double)deltaG/(double)minG);
        }
        ++iti;
    }
    sort(deltaG_v.begin(), deltaG_v.begin() + deltaG_v.size());
    deltaG_Max = deltaG_v[deltaG_v.size() - 1];
    //sort(F_v.begin(), F_v.begin() + F_v.size());
    //F_max = F_v[F_v.size() - 1];

    // Count number of points that have the deltaG equal to the max
    for (int i=0; i < deltaG_v.size(); i++) {
        if (deltaG_v[i] == deltaG_Max)
            cntr++;
    }
    //for (int i=0; i < F_v.size(); i++) {
    //    if (F_v[i] == F_max)
    //        cntr++;
    //}

    /*******************************************************************
     * Only accept vertices as new seeds which are deep enough         *
     *******************************************************************/
    if (_FIRST_SEED_FLAG) {
        //#ifdef DEBUG
        //printf("deltaG of current node: %d\tdeltaG of main node: %d\t", deltaG_Max, A[_FIRST_SEED].getDeltaG());
        //printf("Their ratio: %lf\n", (double)deltaG_Max / (double)A[_FIRST_SEED].getDeltaG());
        //#endif

        // Don't accept seeds that are comparatively shallow
        if (((double)deltaG_Max / (double)A[_FIRST_SEED].getDeltaG()) < GSAT_SEED_TH)
            return FALSE;

        //int seed_delG = A[_FIRST_SEED].getDeltaG();
        //int seed_minG = A[_FIRST_SEED].getGmin();
        //printf("First seed's F: %f\t Current F_max: %f\n", (double)seed_delG/(double)seed_minG, F_max);
        //if ((F_max / ((double)seed_delG/(double)seed_minG)) < GSAT_SEED_TH)
        //    return FALSE;
    }

    minG_v.clear();
    deltaG_v.clear();
    F_v.clear();

    /*******************************************************************
     * Find the vertice(s) corresponding to the new deepest level      *
     *******************************************************************/
    if (cntr == 1) {  // Only one vertex with deltaG equal to deltaG_Max
        iti = G.nodes_begin();
        ite = G.nodes_end();
        while (iti != ite) {
            if (!A[*iti].isUsed()) {
                deltaG = A[*iti].getDeltaG();
                minG = A[*iti].getGmin();
                //if (((double)deltaG/(double)minG) == F_max) {
                if (deltaG == deltaG_Max) {
                    if (!_FIRST_SEED_FLAG) {
                        _FIRST_SEED_FLAG = true;
                        _FIRST_SEED = *iti;
#ifdef DEBUG
                        printf("Index of first seed: %d, depth: %d, minG: %d, F: %f\n",
                               A[_FIRST_SEED].getIndex(),
                               A[_FIRST_SEED].getDeltaG(),
                               A[_FIRST_SEED].getGmin(),
                               (double)A[_FIRST_SEED].getDeltaG() / (double)A[_FIRST_SEED].getGmin());
#endif
                    }
                    N = *iti;
                    return TRUE;
                }
            }
            ++iti;
        }
    } else {  // More than one vertex with deltaG equal to deltaG_Max
        iti = G.nodes_begin();
        ite = G.nodes_end();
        while (iti != ite) {
            if (!A[*iti].isUsed()) {
                deltaG = A[*iti].getDeltaG();
                minG = A[*iti].getGmin();
                if (deltaG == deltaG_Max)
                    minG_v.insert(minG_v.begin() + minG_v.size(), minG);
                //if (((double)deltaG/(double)minG) == F_max)
                //    deltaG_v.insert(deltaG_v.begin() + deltaG_v.size(), deltaG);
            }
            ++iti;
        }
        sort(minG_v.begin(), minG_v.begin() + minG_v.size());
        minG_Min = minG_v[0];
        //sort(deltaG_v.begin(), deltaG_v.begin() + deltaG_v.size());
        //deltaG_Max = deltaG_v[deltaG_v.size() - 1];

        iti = G.nodes_begin();
        ite = G.nodes_end();
        while (iti != ite) {
            if (!A[*iti].isUsed()) {
                deltaG = A[*iti].getDeltaG();
                minG = A[*iti].getGmin();
                //if (((double)deltaG/(double)minG) == F_max && deltaG == deltaG_Max) {
                if (minG == minG_Min && deltaG == deltaG_Max) {
                    if (!_FIRST_SEED_FLAG) {
                        _FIRST_SEED_FLAG = true;
                        _FIRST_SEED = *iti;
#ifdef DEBUG
                        printf("Index of first seed: %d, depth: %d, minG: %d, F: %f\n",
                               A[_FIRST_SEED].getIndex(), A[_FIRST_SEED].getDeltaG(),
                               A[_FIRST_SEED].getGmin(),
                               (double)A[_FIRST_SEED].getDeltaG() / (double)A[_FIRST_SEED].getGmin());
#endif
                    }
                    N = *iti;
                    return TRUE;
                }
            }
            ++iti;
        }
    }
    return FALSE;
}
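The selection rule of getNextSeed() can be condensed into a few lines once the GTL iteration is factored out: among unused nodes, take the one with maximal deltaG, break ties by the smallest Gmin, and reject a candidate whose depth falls below a fixed fraction of the first seed's depth. This is a hedged sketch: the `SeedNode` struct, `nextSeed` name, and `firstDeltaG` parameter are illustrative stand-ins; only the rule itself and the 0.35 threshold (GSAT_SEED_TH) come from the listing.

```cpp
// Sketch of the seed-selection rule in getNextSeed(), over plain structs.
#include <vector>
#include <cstddef>

struct SeedNode {
    int  deltaG;  // gray-level depth of the corresponding cavity
    int  gmin;    // minimum gray level of the cavity
    bool used;    // already consumed by a grown subgraph
};

const double SEED_TH = 0.35;  // same role as GSAT_SEED_TH in the listing

// Returns the index of the next seed, or -1 when none qualifies.
// 'firstDeltaG' is the depth of the very first seed; pass 0 when none exists yet.
int nextSeed(const std::vector<SeedNode>& nodes, int firstDeltaG) {
    int best = -1;
    for (std::size_t i = 0; i < nodes.size(); ++i) {
        if (nodes[i].used)
            continue;
        // Maximal deltaG wins; equal deltaG is broken by the smaller gmin.
        if (best < 0 ||
            nodes[i].deltaG > nodes[best].deltaG ||
            (nodes[i].deltaG == nodes[best].deltaG &&
             nodes[i].gmin < nodes[best].gmin))
            best = (int)i;
    }
    if (best < 0)
        return -1;
    // Reject comparatively shallow seeds once a first seed exists.
    if (firstDeltaG > 0 &&
        (double)nodes[best].deltaG / (double)firstDeltaG < SEED_TH)
        return -1;
    return best;
}
```

The listing reaches the same result with two passes (sort the deltaG values, then rescan for the matching node); a single scan carrying the best candidate is equivalent here because only the maximum is needed.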
/**
 * getSubGraph()
 * Given a seed vertex it starts growing a
 * subgraph based on cavity compatibility
 * and gray-level compatibility of the
 * neighbor nodes.
 * See "Baile's" paper for details.
 */
void GSAT :: getSubGraph(node seed, int color)
{
    vector <node> SG;  // Nodes belonging to the subgraph
    SG.clear();

    // Grow the subgraph
    growSubGraph(SG, seed, color);
}

/**
 * growSubGraph()
 *
 * Starting from a vertex 'c' which is already in
 * the subgraph, this method grows the sub-graph
 * recursively.
 */
void GSAT :: growSubGraph(vector <node>& sg, node& c, int color)
{
    int i, j;
    node q, q_prime;
    vector <node> NN;   // Vector for temporarily holding neighbors
    vector <node> T_q;  // Vector holding current T_q nodes

    // Mark and input the new vertex into sub-graph
    if (A[c].isUsed())
        return;
    A[c].Used();
    A[c].setColor(color);
    sg.insert(sg.begin() + sg.size(), c);

#ifdef DEBUG
    printf("<--------------------------------------->\n\n");
    printf("Inserted node %d at delG=%d and minG=%d with coloring=%d in SG.\n",
           A[c].getIndex(),
           A[c].getDeltaG(),
           A[c].getGmin(),
           A[c].getColor());
    printf("SG currently has %d nodes.\t", sg.size());
    for (i=0; i<sg.size(); i++)
        printf("%d, ", A[sg[i]].getIndex());
    printf("\n");
#endif

    // Get the non-used neighbors of 'c'
    getNeighbors(c, NN);

    //
    // For each neighbor of 'c' check if it's eligible
    // to be added to the sub-graph or not.
    // Refer to section 7 of Baile's paper for details.
    //
#ifdef DEBUG
    printf("Number of neighbors of current node %d = %d\n", A[c].getIndex(), NN.size());
    printf("Neighbors are:\t");
    for (j=0; j<NN.size(); j++)
        printf("%d, ", A[NN[j]].getIndex());
    printf("\n\n");
#endif

    for (j=0; j<NN.size(); j++) {
        q = NN[j];
        if (cavityCompatible(q, sg)) {
#ifdef DEBUG
            printf("node %d and subgraph are cavity compatible!\n", A[q].getIndex());
            printf("Constructing T_q!\n");
#endif
            constructT(q, sg, T_q);
#ifdef DEBUG
            printf("T_q has %d nodes.\n", T_q.size());
#endif
            if (checkGLC(q, T_q)) {  // Check gray-level compatibility
#ifdef DEBUG
                printf("Node %d and T_q are gray-level compatible!\n", A[q].getIndex());
                printf("Calling growSubGraph() recursively!!\n");
#endif
                growSubGraph(sg, q, color);
            } else {
                if (getQ_prime(q, q_prime)) {  // get node in 0.9*R_q
#ifdef DEBUG
                    printf("Not GLC compatible! Get q_prime!\n");
#endif
                    if (checkGLC(q_prime, T_q)) {  // check its compatibility
#ifdef DEBUG
                        printf("Q_prime = %d is GLC with T_q!\n", A[q_prime].getIndex());
                        printf("Calling growSubGraph() recursively!!\n");
#endif
                        A[q].DontReconstruct();
                        A[q].setNoise();
                        growSubGraph(sg, q, color);
                    } else {
#ifdef DEBUG
                        printf("Mark neighborhood of node %d as useless\n", A[q].getIndex());
#endif
                        markNeighborhood(c, q);  // mark 'c' neighborhood not
                                                 // to be used later for growing
                    }
                } else {
#ifdef DEBUG
                    printf("No Q_prime exists. Mark %d as useless.\n", A[q].getIndex());
#endif
                    A[q].Used();
                    A[q].DontReconstruct();
                }
            }
        } else {
#ifdef DEBUG
            printf("Node %d is not compatible with cavity!\n", A[q].getIndex());
#endif
            A[q].Used();
            A[q].DontReconstruct();
        }
#ifdef DEBUG
        printf("Going to the next vertex.\n");
#endif
    }
}
/**
 * getNeighbors()
 * Get neighbor nodes of input node 'N' and
 * return the list 'NN' of neighbors.
 */
void GSAT :: getNeighbors(node N, vector <node>& NN)
{
    node :: out_edges_iterator eiti = N.out_edges_begin();
    node :: out_edges_iterator eite = N.out_edges_end();

    NN.clear();

#ifdef DEBUG
    printf("Getting neighbors for node: %d\n", A[N].getIndex());
#endif

    // Get neighbors and add them to the list NN
    while (eiti != eite)
    {
        node n_tmp = eiti->target();
#ifdef DEBUG
        printf("\t%d", A[n_tmp].getIndex());
#endif
        if (!A[n_tmp].isUsed()) {
            NN.insert(NN.begin() + NN.size(), n_tmp);
#ifdef DEBUG
            printf("\tNot Used!\n");
#endif
        } else {
#ifdef DEBUG
            printf("\tIs Used!\n");
#endif
        }
        eiti++;
    }
}
/**
 * constructT()
 *
 * Given a reference node 'Q' and the sub-graph nodes
 * 'S', this method constructs a subset of 'S' and
 * returns it in 'T'. 'T' is defined as follows:
 *
 * T = {v of 'S' | D(Q, v) <= R_v}.
 */
void GSAT :: constructT(node Q, vector <node>& S, vector <node>& T)
{
    int i;
    CvPoint center1, center2;
    double center_dist, radius;

    T.clear();
    for (i=0; i < S.size(); i++) {
        center1 = A[S[i]].getAvgPos();
        center2 = A[Q].getAvgPos();
        center_dist = sqrt(pow(center1.x-center2.x, 2.0) +
                           pow(center1.y-center2.y, 2.0));
        radius = (double) A[S[i]].getAvgDisc();
        if (center_dist <= radius && A[S[i]].isNoise() == FALSE)
            T.insert(T.begin() + T.size(), S[i]);
    }
}
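The set definition T = {v of S | D(Q, v) <= R_v} used above can be restated without the GTL node types: a subgraph member v joins T when the reference node Q lies within v's own average-disc radius, and v is not flagged as noise. This is a hedged sketch; the `Disc` struct and index-returning interface are illustrative stand-ins for the GSAT node attributes.

```cpp
// Sketch of constructT() over plain structs: keep the indices of every
// subgraph node whose disc covers the reference point Q.
#include <vector>
#include <cmath>
#include <cstddef>

struct Disc {
    double x, y;    // average position of the maximal discs
    double radius;  // average maximal-disc radius (R_v)
    bool   noise;   // flagged as noise -> excluded from T
};

std::vector<std::size_t> constructT(const Disc& Q, const std::vector<Disc>& S) {
    std::vector<std::size_t> T;
    for (std::size_t i = 0; i < S.size(); ++i) {
        // Euclidean distance D(Q, v) between average positions
        double d = std::sqrt((S[i].x - Q.x) * (S[i].x - Q.x) +
                             (S[i].y - Q.y) * (S[i].y - Q.y));
        if (d <= S[i].radius && !S[i].noise)  // D(Q, v) <= R_v
            T.push_back(i);
    }
    return T;
}
```

Note the asymmetry: the comparison uses the *member's* radius R_v, not Q's, so T collects exactly those discs that overlap the candidate's center.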
/**
 * cavityCompatible()
 *
 * Check to see if node 'Q' is cavity compatible with
 * the vector of nodes in a subgraph 'S'.
 */
bool GSAT :: cavityCompatible(node Q, vector <node>& S)
{
    int c_cntr = 0;
    for (int i=0; i < S.size(); i++) {
        if (CC(A[Q], A[S[i]]))
            c_cntr++;
    }

    // All of the nodes in 'S' should be
    // cavity-compatible with node 'Q'.
    if (c_cntr == S.size())
        return TRUE;
    else
        return FALSE;
}
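cavityCompatible() counts matches and then compares the count against `S.size()`; the "every node must pass" rule it encodes can be expressed more directly with `std::all_of`, which also short-circuits on the first failure. A hedged sketch follows: the `compatible` predicate is an illustrative stand-in for the listing's CC() test, whose definition lies outside this excerpt.

```cpp
// Sketch: the all-members-must-pass rule of cavityCompatible(),
// written with std::all_of instead of a counter.
#include <algorithm>
#include <vector>

// Illustrative pairwise test standing in for CC(): here, two values are
// "compatible" when the candidate is at least as deep as the subgraph member.
bool compatible(int q_depth, int s_depth) { return q_depth >= s_depth; }

bool cavityCompatibleAll(int q_depth, const std::vector<int>& s_depths) {
    return std::all_of(s_depths.begin(), s_depths.end(),
                       [q_depth](int d) { return compatible(q_depth, d); });
}
```

As with the counter version, an empty subgraph is vacuously compatible, which is what lets the very first seed enter the subgraph.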
/**
 * checkGLC()
 * Checks gray-level compatibility of node 'Q'
 * with all nodes in set of nodes 'T'.
 */
bool GSAT :: checkGLC(node Q, vector <node>& T)
{
    double f1;
    int c_cntr = 0;
    for (int i=0; i < T.size(); i++) {
        f1 = Factor1(A[Q], A[T[i]]);
#ifdef DEBUG
        printf("For nodes %d with: Gmin=%d, DeltaG=%d\t-AND-\t%d with: Gmin=%d, DeltaG=%d\tFactor=%lf\n",
               A[Q].getIndex(), A[Q].getGmin(), A[Q].getDeltaG(),
               A[T[i]].getIndex(), A[T[i]].getGmin(),
               A[T[i]].getDeltaG(), f1);
#endif
        if (f1 < GSAT_THRESH)
            c_cntr++;
    }

    // Only GLC when all nodes in T are GLC with Q
    //if (c_cntr > int(floor(0.5*T.size())))
    if (c_cntr == T.size())
        return TRUE;
    else
        return FALSE;
}
/**
 * getQ_prime()
 *
 * Given node 'C' and its neighbor 'Q' and
 * the vector of nodes 'T', this method
 * gets a neighbor of 'Q' which is at most 0.9*R_Q
 * away from 'Q'. Returns TRUE if such a node
 * exists, FALSE otherwise.
 */
bool GSAT :: getQ_prime(node Q, node& Q_prime)
{
    // Get all neighbors of node Q.
    vector <node> NN;
    getNeighbors(Q, NN);

    CvPoint center1 = A[Q].getAvgPos();
    double radius = 0.9 * (double)(A[Q].getAvgDisc());
    for (int i=0; i < NN.size(); i++) {
        CvPoint center2 = A[NN[i]].getAvgPos();
        double center_dist = sqrt(pow(center1.x-center2.x, 2.0) +
                                  pow(center1.y-center2.y, 2.0));
        if (center_dist <= radius) {
            Q_prime = NN[i];
            return TRUE;
        }
    }
    return FALSE;
}
/**
 * markNeighborhood()
 *
 * Recursively marks all the nodes in the path
 * starting from 'Q' which are less than the
 * average radius of 'C' away from it and
 * have increasing Gmin values.
 */
void GSAT :: markNeighborhood(node& C, node& Q)
{
    // Q is a bad node; mark it so it's
    // not used later in reconstruction
    A[Q].Used();
    A[Q].DontReconstruct();

    // Get all neighbors of node Q.
    vector <node> NN;
    getNeighbors(Q, NN);

    int g_min1, g_min2;
    double center_dist, radius;
    CvPoint center1, center2;
    for (int i=0; i < NN.size(); i++) {
        center1 = A[C].getAvgPos();
        center2 = A[NN[i]].getAvgPos();
        center_dist = sqrt(pow(center1.x-center2.x, 2.0) +
                           pow(center1.y-center2.y, 2.0));
        radius = (double)(A[C].getAvgDisc());
        g_min1 = A[Q].getGmin();
        g_min2 = A[NN[i]].getGmin();

        // New node should fall in C's radius and also
        // it should be in the direction of increasing
        // minimum gray-level.
        if (center_dist <= radius && g_min1 <= g_min2)
            markNeighborhood(C, NN[i]);
    }
}
* gsat . h
* Written by Shahram Ebadollahi
* Digital Video ( Multimedia Group
* Columbia University
* ************** * * ***** **** * * *************** ** * ****** * * ******* **** ***** *** ***/
#ifndef _GSAT_H #define _GSAT_H
#include "gsat_attrib.h" #include "surface_path.h" const double GSATJTHRESH = 0.1; const double GSAT_SEED_TH = 0.35; class GSAT { public:
    GSAT ();
    GSAT (vector <SFP>&);
    ~GSAT () { }

    // Returns GSAT graph itself
    graph& getGraph () { return G; }

    // Returns attributes of GSAT graph
    node_map <GSAT_Attrib>& getAttrib () { return A; }

    // Gives the next not yet visited seed vertex
    bool getNextSeed (node&);

    // Extract subgraph, corresponding to cavity
    void getSubGraph (node, int);

    // Writes GSAT graph to disk
    void saveToDisk (const char*, int i=0);

    // Write GSAT node info
    void dumpInfo ();

    // Find all neighbors of a node in GSAT.
    void getNeighbors (node, vector <node>&);

private:
    graph G;
    node_map <GSAT_Attrib> A;
    bool _FIRST_SEED_FLAG;
    node _FIRST_SEED;
    //
    // Look at 'gsat.cc' for details on these
    // helping methods of the class.
    //
    void growSubGraph (vector <node>&, node&, int);
    bool cavityCompatible (node, vector <node>&);
    bool checkGLC (node, vector <node>&);
    void constructT (node, vector <node>&, vector <node>&);
    bool getQ_prime (node, node&);
    void markNeighborhood (node&, node&);
};
#endif
/******************************************************************************
 * gsat_attrib.cc
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 *****************************************************************************/
#include "gsat_attrib.h"
//#define DEBUG
GSAT_Attrib :: GSAT_Attrib()
{
    _avgPos.x = 0;
    _avgPos.y = 0;
    _Gmin = 0;
    _DeltaG = 0;
    _avgDisc = 0;
    _index = 0;
    _USED = FALSE;
    _NO_RECONST = FALSE;
    _NOISE = FALSE;
}

GSAT_Attrib :: GSAT_Attrib(CvPoint p, int gmin, int deltag, int avgdisc)
{
    _avgPos.x = p.x;
    _avgPos.y = p.y;
    _Gmin = gmin;
    _DeltaG = deltag;
    _avgDisc = avgdisc;
    _USED = FALSE;
    _NO_RECONST = FALSE;
    _NOISE = FALSE;
}

const GSAT_Attrib &GSAT_Attrib :: operator=(const GSAT_Attrib &right)
{
    if ( &right != this ) {
        _index = right._index;
        _USED = right._USED;
        _NO_RECONST = right._NO_RECONST;
        _NOISE = right._NOISE;
        _Gmin = right._Gmin;
        _DeltaG = right._DeltaG;
        _avgDisc = right._avgDisc;
        _minR = right._minR;
        _maxR = right._maxR;
        _avgPos.x = right._avgPos.x;
        _avgPos.y = right._avgPos.y;
        _minL.x = right._minL.x;
        _minL.y = right._minL.y;
        _maxL.x = right._maxL.x;
        _maxL.y = right._maxL.y;
    }
    return *this; // enables cascaded assignments
}
/**
 * CC()
 * Tests if cavity corresponding to one surface path
 * is spatially included in another cavity represented
 * by some other surface path.
 */
bool CC (GSAT_Attrib& gsa1, GSAT_Attrib& gsa2)
{
    double center_dist;
    double inclusion;
    int rad_min1 = gsa1._minR;
    int rad_max1 = gsa1._maxR;
    int rad_min2 = gsa2._minR;
    int rad_max2 = gsa2._maxR;
    int level_min1 = gsa1._minN;
    int level_max1 = gsa1._maxN;
    int level_min2 = gsa2._minN;
    int level_max2 = gsa2._maxN;

    CvPoint pos_min1 = gsa1._minL;
    CvPoint pos_max1 = gsa1._maxL;
    CvPoint pos_min2 = gsa2._minL;
    CvPoint pos_max2 = gsa2._maxL;

    /** if (rad_min1 >= rad_max2) { **/
    if (level_min1 >= level_max2) {
        center_dist = sqrt(pow(pos_min1.x-pos_max2.x, 2.0) +
                           pow(pos_min1.y-pos_max2.y, 2.0));
        inclusion = center_dist + rad_max2 - rad_min1;
        if (inclusion < 0) {
#ifdef DEBUG
            printf("Node:%d,min_level1:%d,rad_min1:%d,node:%d,max_level2:%d,rad_max2=%d,distance:%lf,incl:%lf,NOT CC!\n",
                   gsa1._index, level_min1, rad_min1, gsa2._index,
                   level_max2, rad_max2, center_dist, inclusion);
#endif
            return FALSE; // not cavity compatible
        } else
            return TRUE; // cavity compatible
    /** } else if (rad_min2 >= rad_max1) { **/
    } else if (level_min2 >= level_max1) {
        center_dist = sqrt(pow(pos_min2.x-pos_max1.x, 2.0) +
                           pow(pos_min2.y-pos_max1.y, 2.0));
        inclusion = center_dist + rad_max1 - rad_min2;
        if (inclusion < 0) {
#ifdef DEBUG
            printf("Node:%d,min_level2:%d,rad_min2:%d,node:%d,max_level1:%d,rad_max1=%d,distance:%lf,incl:%lf,NOT CC!\n",
                   gsa2._index, level_min2, rad_min2, gsa1._index,
                   level_max1, rad_max1, center_dist, inclusion);
#endif
            return FALSE; // not cavity compatible
        } else
            return TRUE; // cavity compatible
    } else {
#ifdef DEBUG
        printf("None of the maximal discs is included in the other one!\n");
#endif
        return TRUE; // cavity compatible
    }
}
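The geometric core of CC() is a disc-inclusion test. The sketch below restates it on plain coordinates, free of the GSAT_Attrib fields (the function name is illustrative): a disc of radius r2 lies strictly inside a disc of radius r1 exactly when center distance + r2 - r1 is negative, which is the `inclusion < 0` condition the listing uses to declare two cavities incompatible.

```cpp
#include <cassert>
#include <cmath>

// Disc-inclusion test as used inside CC(): returns true when the disc
// (x2, y2, r2) is strictly contained in the disc (x1, y1, r1).
bool discIncluded(double x1, double y1, double r1,
                  double x2, double y2, double r2) {
    double d = std::sqrt(std::pow(x1 - x2, 2.0) + std::pow(y1 - y2, 2.0));
    return d + r2 - r1 < 0; // 'inclusion < 0' in the listing
}
```

For instance, a radius-5 disc centered 1 pixel away is inside a radius-10 disc, but not inside another radius-5 disc centered 8 pixels away.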
/**
* Returns the gray-level compatibility indicator
* for two vertices 'q' and 'r' given by:
 * GLC(q,r) = (Gmin(q) - Gmin(r))/DeltaG(q).
 */
double GLCFactor (GSAT_Attrib& gsa1, GSAT_Attrib& gsa2)
{
#ifdef DEBUG
    printf("G1_delta=%d, G2_delta=%d\n", gsa1._DeltaG, gsa2._DeltaG);
#endif
    return ((double)(gsa1._DeltaG - gsa2._DeltaG))/(double)gsa2._DeltaG;
}

double GLCFactor2 (GSAT_Attrib& gsa1, GSAT_Attrib& gsa2)
{
    printf("G1_min=%d, G2_min=%d, G2_delta=%d\n",
           gsa1._Gmin, gsa2._Gmin, gsa2._DeltaG);
    return ((double)(gsa2._Gmin - gsa1._Gmin))/(double)gsa2._DeltaG;
}

double Factor0 (GSAT_Attrib& gsa1, GSAT_Attrib& gsa2)
{
    return (fabs(((double)gsa1._Gmin/(double)gsa1._DeltaG) -
                 ((double)gsa2._Gmin/(double)gsa2._DeltaG)));
}

double Factor1 (GSAT_Attrib& gsa1, GSAT_Attrib& gsa2)
{
    //return (fabs((double)(gsa1._Gmin - gsa2._Gmin))/(double)gsa1._DeltaG);
    int minG1 = gsa1._Gmin;
    int minG2 = gsa2._Gmin;
#ifdef DEBUG
    printf("minG1=%d, \tminG2=%d, \tdeltaG1=%d, \tdeltaG2=%d\n",
           minG1, minG2, gsa1._DeltaG, gsa2._DeltaG);
#endif
    if (minG1 <= minG2)
        return ((double)(minG2-minG1)/(double)gsa2._DeltaG);
    else
        return ((double)(minG1-minG2)/(double)gsa1._DeltaG);
}
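Factor1's symmetric variant of the gray-level compatibility measure can be written down without the GSAT_Attrib class at all. The sketch below assumes the same convention as the listing: the difference of the two minimum gray levels, normalized by the DeltaG of the node whose minimum is larger (the function and parameter names are illustrative only).

```cpp
#include <cassert>

// Symmetric gray-level compatibility, as computed by Factor1 in the listing:
// the node with the larger minimum gray level supplies the normalizer.
double glcFactor(int minG1, int deltaG1, int minG2, int deltaG2) {
    if (minG1 <= minG2)
        return (double)(minG2 - minG1) / (double)deltaG2;
    return (double)(minG1 - minG2) / (double)deltaG1;
}
```

Swapping the two nodes leaves the result unchanged, which is the point of this variant over the one-sided GLCFactor.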
/******************************************************************************
 * gsat_attrib.h
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 *****************************************************************************/
#ifndef _GSAT_ATTRIB_H
#define _GSAT_ATTRIB_H
#include <cmath>
#include "sfp_attrib.h"

class GSAT_Attrib {
    friend bool CC (GSAT_Attrib&, GSAT_Attrib&);
    friend double GLCFactor (GSAT_Attrib&, GSAT_Attrib&);
    friend double GLCFactor2 (GSAT_Attrib&, GSAT_Attrib&);
    friend double Factor0 (GSAT_Attrib&, GSAT_Attrib&);
    friend double Factor1 (GSAT_Attrib&, GSAT_Attrib&);
public:
    GSAT_Attrib ();
    GSAT_Attrib (CvPoint, int, int, int);
    ~GSAT_Attrib () {};

    // Copy operator
    const GSAT_Attrib &operator=(const GSAT_Attrib &);

    void setIndex (int i) { _index = i; }
    int getIndex () { return _index; }

    void setTop (int r, CvPoint p, int n)
        { _maxR=r, _maxL.x = p.x, _maxL.y = p.y, _maxN = n; }
    void setBot (int r, CvPoint p, int n)
        { _minR=r, _minL.x = p.x, _minL.y = p.y, _minN = n; }

    int getTopRad () { return _maxR; }
    int getBotRad () { return _minR; }
    CvPoint& getTopLoc () { return _maxL; }
    CvPoint& getBotLoc () { return _minL; }
    int getTopLevel () { return _maxN; }
    int getBotLevel () { return _minN; }
    int getGmin () { return _Gmin; }
    int getDeltaG () { return _DeltaG; }
    int getAvgDisc () { return _avgDisc; }
    CvPoint& getAvgPos () { return _avgPos; }

    void Used () { _USED = TRUE; }
    void NotUsed () { _USED = FALSE; }
    bool isUsed () { return _USED; }
    void DontReconstruct () { _NO_RECONST = TRUE; }
    bool isDontReconstruct () { return _NO_RECONST; }
    void setNoise () { _NOISE = TRUE; }
    void resetNoise () { _NOISE = FALSE; }
    bool isNoise () { return _NOISE; }

    void addNode (int n) { _cnctd.insert(_cnctd.begin()+_cnctd.size(), n); }
    vector <int> getNode () { return _cnctd; }

    // These 2 methods were added on: 8/4/03
    void addSFPIndex (int n) { _sfpInd.insert(_sfpInd.begin()+_sfpInd.size(), n); }
    vector <int> getSFPIndex () { return _sfpInd; }

    // Was added on: 8/6/03
    void setColor (int c) { _color = c; }
    int getColor () { return _color; }

private:
    int _color;       // coloring of the vertex
    int _index;       // index of node in GSAT
    bool _USED;       // TRUE if node was used in subgraph finding
    bool _NO_RECONST; // Don't use for reconstruction if TRUE
    bool _NOISE;      // if TRUE shows that this vertex is noise
    int _Gmin;        // minimum gray-level of the associated SF path
    int _DeltaG;      // gray-level difference between SF path top and bottom
    int _avgDisc;     // average size of maximal disc in SF path
    CvPoint _avgPos;  // average position of the associated SF path
    int _minR;        // Radius of node at bottom and top of corresponding SP
    int _maxR;
    CvPoint _minL;    // Location of node at bottom and top of corresponding SP
    CvPoint _maxL;
    int _minN;        // Min and Max level of corresponding SP
    int _maxN;
    // last 2 lines were added 8/7/03. Needed in computing cavity compatibility

    vector <int> _cnctd;  // List of nodes to which current gsat node is connected
    // Was added on: 8/4/03

    vector <int> _sfpInd; // List of SFPs corresponding to this node
    // Was added on: 8/6/03
};

bool CC (GSAT_Attrib&, GSAT_Attrib&);
double GLCFactor (GSAT_Attrib&, GSAT_Attrib&);
double GLCFactor2 (GSAT_Attrib&, GSAT_Attrib&);
double Factor0 (GSAT_Attrib&, GSAT_Attrib&);
double Factor1 (GSAT_Attrib&, GSAT_Attrib&);
#endif
/******************************************************************************
 * gsat_graph.cc
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 *****************************************************************************/
#include <iostream>
#include <fstream>
#include <cmath>
#include <vector>

#include "gsat_graph.h"
#include "gsat_util.h"
#include "tree.h"
/**
 * get_16Dir()
 * Finds the 16-direction of 'this_v' vertex which
 * should not be a Branch Vertex, with respect to
 * the 'last_v' and 'next_v' vertices.
 * 16-Dir is defined in "Bailes" paper.
 */
bool get_16Dir ( NodeAttribute& this_v, NodeAttribute& next_v,
                 NodeAttribute& last_v, int& dir_16 )
{
    int dir1, dir2;
    NodeType type = this_v.getType();

    if (type == ISOLATED || type == BRANCH)
        return FALSE;
    else if (type == ARC)
    {
        dir1 = get_8Dir (this_v, next_v);
        dir2 = get_8Dir (last_v, this_v);
        if (fabs (double(dir1 - dir2)) < 4)
            dir_16 = dir1 + dir2;
        else
            dir_16 = (dir1 + dir2 + 8) % 16;
    }
    else
    {
        if (last_v == this_v) // going out of TERMINAL vertex
            dir_16 = 2 * get_8Dir (this_v, next_v);
        else // going into the TERMINAL vertex
        {
            dir_16 = 2 * get_8Dir (this_v, last_v);
            dir_16 += 8;
            dir_16 %= 16;
        }
    }
    return TRUE;
}
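The ARC-vertex branch of get_16Dir reduces to a small arithmetic rule on two 8-directions. The sketch below restates just that rule under the convention the listing assumes (the function name is illustrative): when the incoming and outgoing 8-directions are close, their plain sum is the 16-direction; otherwise the sum is shifted by 8 and taken modulo 16.

```cpp
#include <cassert>
#include <cstdlib>

// Combines an outgoing and an incoming 8-direction of an ARC vertex
// into a 16-direction, as in the ARC branch of get_16Dir.
int combine16(int dir1, int dir2) {
    if (std::abs(dir1 - dir2) < 4)
        return dir1 + dir2;
    return (dir1 + dir2 + 8) % 16;
}
```

So two nearby directions such as 2 and 3 combine to 5, while a wrap-around pair such as 0 and 7 combines to 15 rather than 7.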
/*
 * set_16Dir()
 * For the vertices in list 'LI' which are
 * from a graph with attributes 'A', this
 * function sets the 16-direction of the
 * vertex.
 * Note: list 'LI' of nodes is a list of connected
 * nodes in the graph.
 */
void set_16Dir (node_map<NodeAttribute>& A, graph& G, vector<node>& LI)
{
    int dir; // the 16-direction of the current vertex
    node this_vtx, last_vtx, next_vtx;

    for (int i=0; i < LI.size() - 1; i++) {
        if (0 == i) {
            if ( (A[LI[i]]).getType() == TERMINAL ) {
                this_vtx = LI[i];
                next_vtx = LI[i+1];
                last_vtx = LI[i];
            } else {
                this_vtx = LI[i];
                next_vtx = LI[i+1];
                get_LastVertex (this_vtx, next_vtx, last_vtx, G, A);
            }
        } else if ( (LI.size() - 1) == i ) {
            if ( (A[LI[i]]).getType() == TERMINAL ) {
                this_vtx = LI[i];
                next_vtx = LI[i];
                last_vtx = LI[i-1];
            } else {
                this_vtx = LI[i];
                next_vtx = LI[i];
                get_LastVertex (this_vtx, next_vtx, last_vtx, G, A);
            }
        } else {
            this_vtx = LI[i];
            next_vtx = LI[i+1];
            last_vtx = LI[i-1];
        }
        get_16Dir (A[this_vtx], A[next_vtx], A[last_vtx], dir);
        (A[LI[i]]).set16Dir (dir);
    }
}
/**
 * get_8Dir()
 * Finds the 8-direction of the "target" node
 * with respect to the 8-direction of the
 * "source" node.
 * Note that the "source" and "target" should
 * be 8-neighbors. This should be checked by
 * the caller.
 */
int get_8Dir( NodeAttribute& source, NodeAttribute& target )
{
    CvPoint P_source, P_target;
    P_source = source.getPosition();
    P_target = target.getPosition();

    int x_diff = P_target.x - P_source.x;
    int y_diff = P_target.y - P_source.y;

    if ( x_diff == -1 )
    {
        if ( y_diff == -1 ) return(1);
        else if ( y_diff == 1 ) return(7);
        else return(0);
    }
    else if ( x_diff == 1 )
    {
        if ( y_diff == -1 ) return(3);
        else if ( y_diff == 1 ) return(5);
        else return(4);
    }
    else
    {
        if ( y_diff == -1 ) return(2);
        else if ( y_diff == 1 ) return(6);
        else return(0); // caller guarantees the two nodes differ
    }
}
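The if/else ladder in get_8Dir is just a fixed mapping from a neighbor offset to a direction code. The sketch below tabulates that mapping, assuming the same convention as the listing (the function name is illustrative): dx and dy are the target-minus-source offsets, each in {-1, 0, 1}.

```cpp
#include <cassert>

// Offset-to-direction encoding used by get_8Dir in the listing:
// (-1,-1)->1, (-1,1)->7, (-1,0)->0, (1,-1)->3, (1,1)->5, (1,0)->4,
// (0,-1)->2, (0,1)->6.
int dir8(int dx, int dy) {
    if (dx == -1) return (dy == -1) ? 1 : (dy == 1) ? 7 : 0;
    if (dx ==  1) return (dy == -1) ? 3 : (dy == 1) ? 5 : 4;
    return (dy == -1) ? 2 : 6; // dx == 0, dy != 0
}
```

Note the codes advance by one per 45-degree step, which is what lets get_16Dir combine two of them arithmetically.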
/**
 * areEightConnected()
 * Determines if 2 nodes with the given attributes
 * as "A1" and "A2" are 8-neighbors with each other.
 */
bool areEightConnected( NodeAttribute& A1, NodeAttribute& A2 )
{
    CvPoint P1, P2;
    P1 = A1.getPosition();
    P2 = A2.getPosition();
    if ( fabs (double(P1.x - P2.x)) <= 1 && fabs (double(P1.y - P2.y)) <= 1 )
        return (TRUE);
    else
        return (FALSE);
}
/**
 * linkable_BranchVertices()
 * Given two branch-vertices N1, and N2 from graphs
 * at levels I, and J respectively, with attributes
 * as defined in 'A', this function determines if
 * the two branch-vertices are linkable. It returns
 * 'TRUE' if they are linkable and 'FALSE' if not.
 */
bool linkable_BranchVertices ( node N1, node N2,
                               node_map<NodeAttribute>* A,
                               int I,   // Level of the first graph
                               int J )  // Level of the second graph
{
    int i, j;
    int eight_dir;
    NodeAttribute NA_0, NA;

    // Lists for holding the neighbors of 'N1' and 'N2'.
    vector<node> node_list1;
    vector<node> node_list2;

    // Iterators for walking through neighbors.
    node :: out_edges_iterator eib;
    node :: out_edges_iterator eie;

    // List of eight-directions of the neighbors of each BV.
    vector <int> eight_dirs1;
    vector <int> eight_dirs2;

    // Adding 'N1' and its neighbors to node list.
    node_list1.insert (node_list1.begin() + node_list1.size(), N1);
    eib = N1.out_edges_begin();
    eie = N1.out_edges_end();
    while (eib != eie)
    {
        node_list1.insert (node_list1.begin() + node_list1.size(), eib->target());
        ++eib;
    }

    // Adding 'N2' and its neighbors to node list.
    node_list2.insert (node_list2.begin() + node_list2.size(), N2);
    eib = N2.out_edges_begin();
    eie = N2.out_edges_end();
    while (eib != eie)
    {
        node_list2.insert (node_list2.begin() + node_list2.size(), eib->target());
        ++eib;
    }

    // Getting the 8-dirs of each neighbor node of 'N1'.
    int size1 = node_list1.size();
    NA_0 = ((A[I])[node_list1[0]]);
    for (i=1; i < size1; i++)
    {
        NA = ((A[I])[node_list1[i]]);
        eight_dir = get_8Dir( NA_0, NA );
        eight_dirs1.insert (eight_dirs1.begin() + eight_dirs1.size(), eight_dir);
    }

    // Getting the 8-dirs of each neighbor node of 'N2'.
    int size2 = node_list2.size();
    NA_0 = ((A[J])[node_list2[0]]);
    for (i=1; i < size2; i++)
    {
        NA = ((A[J])[node_list2[i]]);
        eight_dir = get_8Dir( NA_0, NA );
        eight_dirs2.insert (eight_dirs2.begin() + eight_dirs2.size(), eight_dir);
    }

    if ( size1 < size2 )
        return( neighbors_8dir_Match( eight_dirs1, eight_dirs2 ) );
    else
        return( neighbors_8dir_Match( eight_dirs2, eight_dirs1 ) );
}
/**
 * linkable_Vertices()
 * Used to find if node 'N1' from sub-graph 'I'
 * and node 'N2' from sub-graph 'J' with attributes
 * as defined in 'A' are linkable according to
 * Fig. 23 in "Bailes'" paper.
 */
bool linkable_Vertices ( node N1, node N2,
                         node_map<NodeAttribute>* A,
                         int I,    // Level of the first graph
                         int J )   // Level of the second graph
{
    bool LINKABLE = FALSE;

    // Get the position of the two vertices
    CvPoint P1 = ((A[I])[N1]).getPosition();
    CvPoint P2 = ((A[J])[N2]).getPosition();

    // get the difference of the position of two vertices
    int x_dif = P1.x - P2.x;
    int y_dif = P1.y - P2.y;

    // Only accepts nodes which are in the 8-neighborhood of reference.
    if (fabs (double(x_dif)) > 1 || fabs (double(y_dif)) > 1)
        return FALSE;

    switch ( ((A[I])[N1]).get16Dir() ) {
    case 0: case 8:
        if (0 == x_dif)
            LINKABLE = TRUE;
        break;
    case 1: case 9:
        if ( 0 == x_dif ||
             (1 == x_dif && -1 == y_dif) ||
             (-1 == x_dif && 1 == y_dif) )
            LINKABLE = TRUE;
        break;
    case 2: case 10:
        if ( (0 == x_dif && 0 == y_dif) ||
             (1 == x_dif && -1 == y_dif) ||
             (-1 == x_dif && 1 == y_dif) )
            LINKABLE = TRUE;
        break;
    case 3: case 11:
        if ( 0 == y_dif ||
             (1 == x_dif && -1 == y_dif) ||
             (-1 == x_dif && 1 == y_dif) )
            LINKABLE = TRUE;
        break;
    case 4: case 12:
        if (0 == y_dif)
            LINKABLE = TRUE;
        break;
    case 5: case 13:
        if ( 0 == y_dif ||
             (1 == x_dif && 1 == y_dif) ||
             (-1 == x_dif && -1 == y_dif) )
            LINKABLE = TRUE;
        break;
    case 6: case 14:
        if ( (0 == x_dif && 0 == y_dif) ||
             (1 == x_dif && 1 == y_dif) ||
             (-1 == x_dif && -1 == y_dif) )
            LINKABLE = TRUE;
        break;
    case 7: case 15:
        if ( 0 == x_dif ||
             (1 == x_dif && 1 == y_dif) ||
             (-1 == x_dif && -1 == y_dif) )
            LINKABLE = TRUE;
        break;
    };

    if (LINKABLE) {
        if ( compatible_16Dir(((A[I])[N1]).get16Dir(), ((A[J])[N2]).get16Dir()) )
            return TRUE;
    }
    return FALSE;
}
/**
 * neighbors_8dir_Match()
 * Examines the 8-direction of one neighborhood with
 * that of another. It returns 'TRUE' if the neighborhood
 * with fewer nodes matches with the neighborhood with
 * more vertices.
 */
bool neighbors_8dir_Match( vector<int> V1, vector<int> V2 )
{
    int i, j;
    int cntr;
    int size1 = V1.size();
    int size2 = V2.size();
    int bins[9] = {0};

    /*
     * All elements of 'bins' are set to zero initially.
     * We go over the first vector of 8-dirs. We increment
     * the location of each 8-dir in 'bins' by '+1'. We
     * do the same thing for the second vector but this
     * time we decrement by '-2'. We will have a match
     * in one of the following cases:
     *     bins[i] = -1
     *     bins[i] = 1  & bins[i+1] = -2
     *     bins[i] = -2 & bins[i+1] = 1
     */
    for (i=0; i < size1; i++)
    {
        bins[V1[i]]++;
        if (V1[i] == 0) bins[8]++;
    }
    for (j=0; j < size2; j++)
    {
        bins[V2[j]] += -2;
        if (V2[j] == 0) bins[8] += -2;
    }

    cntr = 0;
    for (i=0; i < 8; i++)
    {
        if (bins[i] == -1)
            cntr++;
        else if (bins[i] == 1) {
            if (bins[i+1] == -2)
            {
                cntr++;
                i++;
            }
        }
        else if (bins[i] == -2) { // third match case: bins[i] = -2, bins[i+1] = 1
            if (bins[i+1] == 1)
            {
                cntr++;
                i++;
            }
        }
    }

    if (cntr == size1)
        return TRUE;
    else
        return FALSE;
}
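The binning trick described in the comment above can be restated compactly. The sketch below follows the listing's scheme (function name illustrative, and the branch for the `(-2, 1)` pattern is inferred from the comment's third match case): +1 per direction from the smaller neighborhood, -2 per direction from the larger, with direction 0 mirrored into bin 8 so the adjacency test can wrap.

```cpp
#include <cassert>
#include <vector>

// Tolerant matching of two 8-direction multisets, per the scheme in
// neighbors_8dir_Match: exact hits leave a bin at -1; off-by-one hits
// leave the (1, -2) or (-2, 1) pattern in adjacent bins.
bool dirsMatch(const std::vector<int>& v1, const std::vector<int>& v2) {
    int bins[9] = {0};
    for (int d : v1) { bins[d]++;    if (d == 0) bins[8]++;    }
    for (int d : v2) { bins[d] -= 2; if (d == 0) bins[8] -= 2; }
    int cntr = 0;
    for (int i = 0; i < 8; i++) {
        if (bins[i] == -1) cntr++;
        else if (bins[i] == 1  && bins[i+1] == -2) { cntr++; i++; }
        else if (bins[i] == -2 && bins[i+1] == 1)  { cntr++; i++; }
    }
    return cntr == (int)v1.size();
}
```

Exact matches and one-step-rotated directions both count, so {2} matches {3}, but {2} does not match {5}.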
/**
 * find_LinkableVertex2()
 * Using the Look-Up Table finds a linkable vertex
 * in graph 'J' to the reference vertex in graph
 * 'I'.
 */
bool find_LinkableVertex2 ( node ref_I,   // reference node in graph I
                            node& ref_J,  // corresponding node in J
                            node_map <NodeAttribute>* A, // their attributes
                            LUT* L,       // stack of look-up tables
                            int I,        // subgraph numbers
                            int J )
{
    int dir1;
    CvPoint PI, PJ;
    vector <CvPoint> pos;

    PI = ((A[I])[ref_I]).getPosition();
    dir1 = ((A[I])[ref_I]).get16Dir();

    switch (dir1) {
    case 0: case 1: case 7: case 8: case 9: case 15:
        PJ.x = PI.x;     PJ.y = PI.y;     pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.x;     PJ.y = PI.y - 1; pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.x;     PJ.y = PI.y + 1; pos.insert (pos.begin() + pos.size(), PJ);
        if (find_CorrespondingVertex (dir1, A[J], L[J], pos, ref_J)) {
            if (((A[J])[ref_J]).getImg_val() >= ((A[I])[ref_I]).getImg_val()) {
                pos.clear();
                return (TRUE);
            }
        }
        break;
    case 3: case 4: case 5: case 11: case 12: case 13:
        PJ.x = PI.x;     PJ.y = PI.y;     pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.x - 1; PJ.y = PI.y;     pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.x + 1; PJ.y = PI.y;     pos.insert (pos.begin() + pos.size(), PJ);
        if (find_CorrespondingVertex (dir1, A[J], L[J], pos, ref_J)) {
            if (((A[J])[ref_J]).getImg_val() >= ((A[I])[ref_I]).getImg_val()) {
                pos.clear();
                return (TRUE);
            }
        }
        break;
    case 2:
        PJ.x = PI.x;     PJ.y = PI.y;     pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.x - 1; PJ.y = PI.y + 1; pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.x + 1; PJ.y = PI.y - 1; pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.y - 1; PJ.y = PI.y;     pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.x;     PJ.y = PI.y - 1; pos.insert (pos.begin() + pos.size(), PJ);
        if (find_CorrespondingVertex (dir1, A[J], L[J], pos, ref_J)) {
            if (((A[J])[ref_J]).getImg_val() >= ((A[I])[ref_I]).getImg_val()) {
                pos.clear();
                return (TRUE);
            }
        }
        break;
    case 6:
        PJ.x = PI.x;     PJ.y = PI.y;     pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.x + 1; PJ.y = PI.y + 1; pos.insert (pos.begin() + pos.size(), PJ);
        PJ.x = PI.x - 1; PJ.y = PI.y - 1; pos.insert (pos.begin() + pos.size(), PJ);
/******************************************************************************
 * gsat_graph.h
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 *****************************************************************************/
#ifndef GSAT_GRAPH_INCLUDED
#define GSAT_GRAPH_INCLUDED
#include <GTL/graph.h>
#include "node_attrib.h"
#include "sub_graph.h"

int get_8Dir( NodeAttribute&, NodeAttribute& );
bool get_16Dir( NodeAttribute&, NodeAttribute&, NodeAttribute&, int& );
void set_16Dir (node_map<NodeAttribute>&, graph&, vector<node>&);
bool areEightConnected(NodeAttribute&, NodeAttribute&);
bool linkable_BranchVertices( node, node, node_map<NodeAttribute>*, int, int );
bool linkable_Vertices( node, node, node_map<NodeAttribute>*, int, int );
bool neighbors_8dir_Match( vector<int>, vector<int> );
bool find_LinkableVertex (node, node, node, node&, graph*,
                          node_map<NodeAttribute>*, int, int);
bool find_LinkableVertex2 (node, node&, node_map<NodeAttribute>*, LUT*, int, int);
void create_LongestVertexList( node&, node&, graph&,
                               node_map<NodeAttribute>&, vector<node>& );
bool get_NextVertex( node, node, node&, graph&, node_map<NodeAttribute>& );
bool get_LastVertex( node, node, node&, graph&, node_map<NodeAttribute>& );
void reset_AllLinkTags (graph&, node_map<NodeAttribute>&);
bool compatible_16Dir (int, int);
void create_LUT (graph&, node_map <NodeAttribute>&, LUT&);
bool find_CorrespondingVertex (int, node_map<NodeAttribute>&, LUT&,
                               vector<CvPoint>&, node&);
bool verify_CorrespondingVertex (int, node_map<NodeAttribute>&,
                                 vector<CvPoint>&, node);
bool find_VList (node, node, node, graph*, node_map<NodeAttribute>*,
                 vector<node>&, vector<node>&, int, int);
bool is_Linkable (node, node, node_map<NodeAttribute>*, int, int);
#endif
#include <iostream>
#include <fstream>
#include <cstdlib>
#include <cstring>
#include <cstdio>
#include <cmath>
#include <ctype.h>
#include <CV.h>
#include <GTL/graph.h>

#include "../util/filename.h"
#include "../util/ppmIO.h"
#include "sat.h"

int main( int argc, char* argv[] )
{
    if ( argc < 2 )
    {
        cerr << "Usage: gsat <input image>" << endl;
        exit(EXIT_FAILURE);
    }
    IplImage* S[TOTAL_STAGES];
    IplImage* D[TOTAL_STAGES];
    char image_name[FILE_NAME_SIZE];
    char clevel[FILE_NAME_SIZE];
    strcpy( image_name, argv[1] );

    IplImage* img = readIMG2IPL( image_name, NULL );
    IplImage* img_gray = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY",
            "GRAY", IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
            img->width, img->height, NULL, NULL, NULL, NULL );
    if ( img_gray == NULL )
    {
        cout << "Can not create new image header" << endl;
        exit(EXIT_FAILURE);
    }
    iplAllocateImage( img_gray, 0, 0 );
    if ( NULL == img_gray->imageData )
    {
        cout << "Can not allocate new image" << endl;
        exit(EXIT_FAILURE);
    }

    // Changing from color image to gray.
    iplColorToGray( img, img_gray );

    // Finding the centers of the maximal discs at different levels
    maximal_disc_centers(img_gray, S, D, TOTAL_STAGES);

    iplDeallocate(img_gray, IPL_IMAGE_ALL);
    iplDeallocate(img, IPL_IMAGE_ALL);
    return (0);
}

#ifndef GSAT_UTIL_H
#define GSAT_UTIL_H
// Defines the different types of template graphs
enum TEMPLATE_TYPE { TRIANGL, ROOF, DIAMOND, SQR, TIE,
                     RHOMBUS1, RHOMBUS2 };

// used to mark each vertex of the graph with the type and number
// of the template graph that it belongs to.
typedef struct {
    int PN;           // Primitive Number
    TEMPLATE_TYPE PT; // Primitive Type
} TYPE_NUMBER_PAIR;
#endif
#include "imageSet.h"

ImageSet :: ImageSet()
{
    _imgSet.clear();
    _imgFeature.clear();
}

ImageSet :: ImageSet(IplImage* combImg)
{
    _imgSet.clear();
    _imgFeature.clear();

    // From the given image build set of images
    createSet(combImg);
}

ImageSet :: ~ImageSet() { }

void ImageSet :: createSet(IplImage* img)
{
    /* Declaring and initializing gray_img */
    IplImage* gray_img = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
            IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
            img->width, img->height, NULL, NULL, NULL, NULL);
    if (gray_img == NULL)
        exit(EXIT_FAILURE);
    iplAllocateImage (gray_img, 0, 0);
    if (NULL == gray_img->imageData)
        exit(EXIT_FAILURE);

    /* Making gray-level image out of input image */
    if (strcmp(img->colorModel, "GRAY") != 0)
        iplColorToGray( img, gray_img );
    else
        iplCopy (img, gray_img);
}

IplImage* ImageSet :: getImage(int index) { }

void ImageSet :: selectImage(int index) { }

void ImageSet :: deselectImage(int index) { }

bool ImageSet :: imageSelectable(int index) { }

double imageSetDistance (ImageSet& IS1, ImageSet& IS2) { }

double tanimotoDistance (IplImage* img1, IplImage* img2) { }
/******************************************************************************
 * imageSet.h
 *
 * Defines the class 'ImageSet' which is used for
 * operations on image sets.
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 *****************************************************************************/
#ifndef _IMAGE_SET
#define _IMAGE_SET

#include <ipl/ipl.h>
#include <CV.h>
#include "../util/filename.h"
#include "../util/ppmIO.h"

class ImageSet {
    friend double imageSetDistance(ImageSet&, ImageSet&);

private:
    vector <IplImage*> _imgSet;
    vector <Feature> _imgFeature;
    void createSet (IplImage*);

public:
    ImageSet();
    ImageSet(IplImage*);
    ~ImageSet();

    IplImage* getImage(int);
    void selectImage(int);
    void deselectImage(int);
    bool imageSelectable(int);
};

double imageSetDistance(ImageSet&, ImageSet&);
double tanimotoDistance(IplImage*, IplImage*);
#endif
CARDIOHOME = /home/shahram/cardio/source
VIEWRECHOME = $(CARDIOHOME)/chamber_segment
UTILHOME = $(CARDIOHOME)/util
ECHOVIDEOHOME = $(CARDIOHOME)/echo_video

CC = gcc
CPPFLAGS = -g -Wno-deprecated
INCPATH = -I$(CARDIOHOME)/include -I/usr/local/include
LIBPATH = -L/usr/local/lib
LIBS = -liplm6 -lipla6 -liplpx -lopencv -lGTL -lstdc++

.SUFFIXES = .cc

PROG = gsat
SRCS = $(UTILHOME)/util.cc $(UTILHOME)/ppmIO.cc $(UTILHOME)/img_util.cc \
       $(ECHOVIDEOHOME)/ecg.cc $(ECHOVIDEOHOME)/roi.cc $(ECHOVIDEOHOME)/frame.cc \
       $(VIEWRECHOME)/sat.cc $(VIEWRECHOME)/node_attrib.cc $(VIEWRECHOME)/sub_graph.cc \
       $(VIEWRECHOME)/tree.cc $(VIEWRECHOME)/gsat_graph.cc $(VIEWRECHOME)/sfp.cc \
       $(VIEWRECHOME)/sfp_attrib.cc $(VIEWRECHOME)/surface_path.cc \
       $(VIEWRECHOME)/gsat_attrib.cc $(VIEWRECHOME)/gsat.cc \
       $(VIEWRECHOME)/chamberseg.cc
OBJS = $(SRCS:%.cc=%.o)

.cc.o:
	$(CC) $(CPPFLAGS) $(INCPATH) -c $<

all: $(PROG)

$(PROG): $(OBJS)
	$(CC) $(CPPFLAGS) -o $@ $(OBJS) $(LIBPATH) $(INCPATH) $(LIBS)

clean:
	rm -f $(PROG)
	rm -f core
	rm -f *.o

stats:
	wc *.cc *.h
/******************************************************************************
 * chamberseg.cc
 *
 * This file includes the main function for performing the GSAT segmentation.
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 *****************************************************************************/
#include <GTL/graph.h>
#include "../util/filename.h"
#include "../util/ppmIO.h"
#include "node_attrib.h"

//#define DEBUG
NodeAttribute :: NodeAttribute()
{
    _S = 0;
    _D = 0;
    _index = 0;
    _position.x = 0;
    _position.y = 0;
    _level = 0;
    _type = ISOLATED;
    _linkTAG = FALSE;
    _visitTAG = FALSE;
}

NodeAttribute :: NodeAttribute( CvPoint p, unsigned char s, unsigned char d, int i )
{
    _type = ISOLATED; // initially every node is isolated
    _position.x = p.x;
    _position.y = p.y;
    _level = 0;
    _S = s;
    _D = d;
    _index = i;
    _TAG = FALSE;
    _linkTAG = FALSE;
    _visitTAG = FALSE;
}

NodeAttribute :: NodeAttribute( const NodeAttribute& NA )
{
    _type = NA._type;
    CvPoint P = NA._position;
    _position.x = P.x;
    _position.y = P.y;
    _level = NA._level;
    _S = NA._S;
    _D = NA._D;
    _index = NA._index;
    _TAG = NA._TAG;
    _linkTAG = NA._linkTAG;
    _visitTAG = NA._visitTAG;
}

void NodeAttribute :: set( CvPoint p, unsigned char s, unsigned char d, int i )
{
    _type = ISOLATED; // initially every node is isolated
    _position.x = p.x;
    _position.y = p.y;
    _level = 0;
    _S = s;
    _D = d;
    _index = i;
    _TAG = FALSE;
    _linkTAG = FALSE;
    _visitTAG = FALSE;
}

bool NodeAttribute :: operator==( const NodeAttribute& right )
{
    if (_index == right._index && _type == right._type && _level == right._level)
        return TRUE;
    else
        return FALSE;
}

const NodeAttribute &NodeAttribute :: operator=( const NodeAttribute &right )
{
    if ( &right != this ) {
        _type = right._type;
        CvPoint P = right._position;
        _position.x = P.x;
        _position.y = P.y;
        _level = right._level;
        _S = right._S;
        _D = right._D;
        _index = right._index;
    }
    return *this; // enables cascaded assignments
}

void NodeAttribute :: setType( const int TN )
{
    switch ( TN ) {
    case( 0 ):
        _type = ISOLATED;
        break;
    case( 1 ):
        _type = TERMINAL;
        break;
    case( 2 ):
        _type = ARC;
        break;
    default:
        _type = BRANCH;
        break;
    }
}

/**
 * Rotating the vertex clockwise.
 */
void NodeAttribute :: rotateVertex()
{
    int x, y;
    x = _position.x;
    y = _position.y;
    _position.x = -y;
    _position.y = x;
}

void NodeAttribute :: appendTNP( TEMPLATE_TYPE type, int number )
{
    TYPE_NUMBER_PAIR tnp;
    tnp.PN = number;
    tnp.PT = type;
    _TNP.insert( _TNP.begin() + _TNP.size(), tnp );
}

int NodeAttribute :: getLastTNP( TYPE_NUMBER_PAIR& T )
{
    if ( _TNP.size() != 0 )
    {
        T = _TNP.back(); // return the last element
        return 1;
    }
    else
        return 0;
}

int NodeAttribute :: eraseLastTNP()
{
    if ( _TNP.size() != 0 )
    {
        _TNP.pop_back(); // erase the last element
        return 1;
    }
    else
        return 0;
}
/**
 * Writes a graph to a bi-level image.
 */
void dumpAsImage( graph& G, node_map<NodeAttribute>& A, IplImage* I,
                  IplImage* G_img, const char* img_name)
{
    CvPoint P;
    unsigned char S;
    int pix_loc;
    int W = I->width;
    int H = I->height;

    IplImage* total = iplCloneImage(I);
    IplImage* thresh = iplCloneImage(I);

    //
    // Going over the graph and writing each node to the correct
    // location in the image.
    //
    graph :: node_iterator it = G.nodes_begin();
    graph :: node_iterator end = G.nodes_end();
    while (it != end)
    {
        P = (A[*it]).getPosition();  // position of the node
        pix_loc = P.x + W * P.y;     // position of corresponding pixel
        S = (A[*it]).getNum_discs();
#ifdef DEBUG
        printf("S=%d, at (%d,%d)\n", S, P.x, P.y);
#endif
        ((char*)G_img->imageData)[pix_loc] = S; // 255
        ++it;
    }

    // make up a name for image
#ifdef DEBUG
    printf("Writing image %s.pgm\n", img_name);
#endif
    iplThreshold(G_img, thresh, 1);
    iplAdd(thresh, I, total);
    writeIPL2IMG (total, img_name);
    iplDeallocate(thresh, IPL_IMAGE_ALL);
    iplDeallocate(total, IPL_IMAGE_ALL);
}
/******************************************************************************
 * node_attrib.h
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 *****************************************************************************/
#ifndef NODE_ATTRIB_H
#define NODE_ATTRIB_H

#include <cstring>
#include <cstdlib>
#include <GTL/node.h>
#include <CV.h>
#include "gsat_util.h"
#include "../util/olist.h"
#include "../util/filename.h"

enum NodeType { ISOLATED, TERMINAL, ARC, BRANCH };

class NodeAttribute {
public:
    // The default constructor
    NodeAttribute ();

    NodeAttribute (CvPoint p, unsigned char s, unsigned char d, int i=0);

    // Copy constructor
    NodeAttribute (const NodeAttribute &);

    ~NodeAttribute () {};

    // Equal operator
    bool operator==(const NodeAttribute&);

    // Sets the elements of the NodeAttribute object
    void set (CvPoint p, unsigned char s, unsigned char d, int i=0);

    // Assignment method for assigning node attributes
    const NodeAttribute &operator=(const NodeAttribute &);
// Returns the position of a vertex
CvPoint& getPosition () { return _position; }
// Sets new values for the position void setPosition {CvPoint P) { _position.x = P.x; _jposition.y = P.y,- }
// rotates the vertex location 90 degrees clockwise void rotateVertex () ;
// Returns the index of a vertex int getlndex () { return _index; }
// The 2 following functions return the number of maximal discs // and the value of the image at the location of the vertex unsigned char getNum_discs () { return _S; } unsigned char getlmg_val () { return _D; }
    // Sets the type of the vertex
    void setType (const NodeType& T) { _type = T; }
    void setType (const int TN);

    // Gets the type of the vertex
    NodeType getType() { return _type; }

    // setting and getting the node's tag
    void setTag() { _TAG = TRUE; }
    void resetTag() { _TAG = FALSE; }
    bool Tag() { return _TAG; }

    // setting and getting the node's link-tag
    void setLink() { _linkTAG = TRUE; }
    void resetLink() { _linkTAG = FALSE; }
    bool isLinked() { return _linkTAG; }

    // setting and getting the node's visit tag
    void Visited() { _visitTAG = TRUE; }
    void resetVisit() { _visitTAG = FALSE; }
    bool isVisited() { return _visitTAG; }

    // methods for the type_number pair vector
    void appendTNP (TEMPLATE_TYPE, int);  // adds a new pair to end of TNP
    int getLastTNP (TYPE_NUMBER_PAIR&);   // returns the last TNP
    int eraseLastTNP();                   // erases the last TNP
    int sizeTNP() { return _TNP.size(); }
    void setLevel (int level) { _level = level; }
    int getLevel() { return _level; }
    void set16Dir (int dir) { _16Dir = dir; }
    int get16Dir() { return _16Dir; }

private:
    int _index;       // index of the node
    bool _TAG;        // a boolean tag to find out if node has been processed
    bool _linkTAG;    // used in the process of linking the subset graphs
    bool _visitTAG;   // TRUE if node has been visited during search
    NodeType _type;     // determines the type of the vertex as one of the
                        // following: - isolated vertex (0)
                        //            - terminal vertex (1)
                        //            - arc vertex (2)
                        //            - branch vertex (3)
    int _level;         // level corresponding to 'n' (size of maximal disc)
    CvPoint _position;  // position of a vertex in the graph
                        // Combination of _position and _level gives the 3-D
                        // position of the vertex in the stack of sub-graphs.
    unsigned char _S;   // number of maximal discs at the vertex location
    unsigned char _D;   // value of the image at the vertex location
                        // variable names 'S' and 'D' are taken from the
                        // Bailes paper
    vector <TYPE_NUMBER_PAIR> _TNP;  // holds all the different primitives
                                     // which the vertex belongs to
    int _16Dir;         // the 16-direction of the current vertex
};

void dumpAsImage(graph&, node_map<NodeAttribute>&, IplImage*, IplImage*, const char*);

#endif
/*
* This file contains the operations related to the calculation of the
* "Symmetric Axis Transform" or the skeleton of a gray level image. *
* The method used here is the one discussed in the paper:
* "The use of Gray Level SAT to Find the Salient Cavities in
* Echocardiograms", by David R. Bailes, in JVCIR, vol.7, no.2, June 96
* pp: 169-195. *
* Written by Shahram Ebadollahi
* Digital Video|Multimedia Group
* Columbia University */
#include <iostream>
#include <cstdlib>
#include <cstring>
#include <cstdio>
#include <ctype.h>
#include <cassert>
#include <math.h>
#include <ipl/ipl.h>
#include <CV.h>
#include "sat.h"
#include "../util/img_util.h"
#include "../util/ppmIO.h"
#include "../util/filename.h"

//#define DEBUG

void stageSAT2 (IplImage* D_0, IplImage* S_0, IplImage* D_1)
{
    IplConvKernel *B = cvCreateStructuringElementEx (7, 7, 3, 3, CV_SHAPE_CUSTOM, QUAD);
    IplImage* fi = iplCloneImage (D_0);
    IplImage* f1 = iplCloneImage (D_0);
    IplImage* f2 = iplCloneImage (D_0);
    IplImage* fR = iplCloneImage (D_0);

    cvDilate (fi, f1, B, 1);
    cvErode (f1, f2, B, 1);
    iplSubtract (f2, fi, fR);
    iplCopy (f1, D_1);
    iplCopy (fR, S_0);

    cvReleaseStructuringElement (&B);
    iplDeallocate (fi, IPL_IMAGE_ALL);
    iplDeallocate (f1, IPL_IMAGE_ALL);
    iplDeallocate (f2, IPL_IMAGE_ALL);
    iplDeallocate (fR, IPL_IMAGE_ALL);
}
/*
 * stageSAT()
 * Purpose: implements one stage of the process of finding the centers
 *          of the dodecagon maximal disc centers. It follows the
 *          algorithm mentioned in the paper cited at the heading
 *          of this file, at page 178, Fig. 12
 * Input:   the image at gray-level n.
 * Output:  img_S0 and img_D1, which are the centers of the maximal
 *          discs at this stage, and the image of the next layer.
 */
void stageSAT (IplImage* D_0, IplImage* S_0, IplImage* D_1, int stage)
{
    int w, h, i, j, k;
    int pix, pix1, pix2;
    unsigned char pixV1, pixV2, pixV;
    IplConvKernel* B;
    int W = 2 * D_0->width;
    int H = 2 * D_0->height;

    // ------------------------------------------------------------------
    // Zooming up the input image in order to create two hexagonal
    // grids for use in the extraction of sub-sets.
    // ------------------------------------------------------------------
    IplImage* f_in = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
        IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, W, H,
        NULL, NULL, NULL, NULL);
    if (NULL == f_in) exit (EXIT_FAILURE);
    iplAllocateImage (f_in, 0, 0);
    if (NULL == f_in->imageData) exit (EXIT_FAILURE);
    iplSet (f_in, 0);

    int noffset, noffset1;
    char *pf_inData = f_in->imageData;
    char *pD_0Data = D_0->imageData;
    for (h = 0; h < f_in->height; h += 2) {
        noffset = h * f_in->widthStep;
        noffset1 = (h/2) * D_0->widthStep;
        for (w = 0; w < f_in->width; w += 2)
            pf_inData[noffset + w] = (unsigned char)pD_0Data[noffset1 + (w/2)];
    }
    // ------------------------------------------------------------------
    // The grids and the SE's alternate between two choices. Once
    // the rows are interpolated and once the columns are. The SE
    // changes correspondingly. Each time we use two grids: one is
    // called the 'even' grid and the other is called the 'odd'.
    // ------------------------------------------------------------------
    IplImage* fi_e = iplCloneImage (f_in);
    IplImage* fi_o = iplCloneImage (f_in);
    IplImage* f1_e = iplCloneImage (f_in);
    IplImage* f2_e = iplCloneImage (f_in);
    IplImage* fR_e = iplCloneImage (f_in);
    IplImage* f1_o = iplCloneImage (f_in);
    IplImage* f2_o = iplCloneImage (f_in);
    IplImage* fR_o = iplCloneImage (f_in);
    IplImage* tmp1 = iplCloneImage (fi_e);
    IplImage* tmp2 = iplCloneImage (fi_e);
    IplImage* add1 = iplCloneImage (f_in);
    IplImage* add2 = iplCloneImage (f_in);

    iplSet (fi_e, 0); iplSet (fi_o, 0);
    iplSet (f1_e, 0); iplSet (f1_o, 0);
    iplSet (fR_e, 0); iplSet (fR_o, 0);

    char *pfi_eData = fi_e->imageData;
    char *pfi_oData = fi_o->imageData;
    char *ptmp1Data = tmp1->imageData;
    char *ptmp2Data = tmp2->imageData;
    char *pf1_eData, *pfR_eData;
    char *pf1_oData, *pfR_oData;

    switch (stage % 2) {
    case 0:  // rows are interpolated
        //B = B1;
        B = cvCreateStructuringElementEx (5, 5, 2, 2, CV_SHAPE_CUSTOM, HEXAGON);
        for (h = 0; h < H; h += 2) {
            noffset = h * f_in->widthStep;
            i = h/2;
            if (i%2 == 0) {  // Even rows on the grid
                for (w = 0; w < W; w++) {
                    if (w%2 == 0) {  // Even columns
                        pfi_eData[noffset + w] = pf_inData[noffset + w];
                        pfi_oData[noffset + w] = 0;
                    } else {  // Odd columns
                        pixV1 = pf_inData[noffset + (w-1)];
                        pixV2 = pf_inData[noffset + (w+1)];
                        pfi_oData[noffset + w] = (unsigned char)((pixV1+pixV2)/2);
                        pfi_eData[noffset + w] = 0;
                    }
                }
            } else {  // Odd rows on the grid
                for (w = 0; w < W; w++) {
                    if (w%2 == 0) {  // Even columns
                        pfi_oData[noffset + w] = pf_inData[noffset + w];
                        pfi_eData[noffset + w] = 0;
                    } else {  // Odd columns
                        pixV1 = pf_inData[noffset + (w-1)];
                        pixV2 = pf_inData[noffset + (w+1)];
                        pfi_eData[noffset + w] = (unsigned char)((pixV1+pixV2)/2);
                        pfi_oData[noffset + w] = 0;
                    }
                }
            }
        }
        /* <-- Finding the partial subsets at current scale --> */
        // < Even grid. >
        cvDilate (fi_e, f1_e, B, 1);
        cvErode (f1_e, f2_e, B, 1);
        iplSubtract (f2_e, fi_e, fR_e);
        // < Odd grid. >
        cvDilate (fi_o, f1_o, B, 1);
        cvErode (f1_o, f2_o, B, 1);
        iplSubtract (f2_o, fi_o, fR_o);

        // --------------------------------------------------------------
        // Get even rows from even grid and odd rows from odd grid.
        // --------------------------------------------------------------
        pf1_eData = f1_e->imageData;
        pf1_oData = f1_o->imageData;
        pfR_eData = fR_e->imageData;
        pfR_oData = fR_o->imageData;
        for (h = 0; h < H; h += 2) {  // Skipping the odd rows.
            noffset = h * f_in->widthStep;
            i = h/2;
            for (w = 0; w < W; w += 2) {
                if (i%2 == 0) {  // Even row
                    ptmp1Data[noffset + w] = pf1_eData[noffset + w];
                    ptmp2Data[noffset + w] = pfR_eData[noffset + w];
                } else {  // Odd row
                    ptmp1Data[noffset + w] = pf1_oData[noffset + w];
                    ptmp2Data[noffset + w] = pfR_oData[noffset + w];
                }
            }
        }
        cvReleaseStructuringElement (&B);
        break;

    case 1:  // columns are interpolated
        //B = B2;
        B = cvCreateStructuringElementEx (5, 5, 2, 2, CV_SHAPE_CUSTOM, HEXQUAD);
        for (h = 0; h < H; h++) {
            noffset = h * f_in->widthStep;
            for (w = 0; w < W; w += 2) {
                j = w/2;
                if (j%2 == 0) {  // Even columns on the grid
                    if (h%2 == 0) {  // Even rows
                        pfi_eData[noffset + w] = pf_inData[noffset + w];
                        pfi_oData[noffset + w] = 0;
                    } else {  // Odd rows
                        int noffset_prime1 = noffset - f_in->widthStep;
                        int noffset_prime2 = noffset + f_in->widthStep;
                        pixV1 = pf_inData[noffset_prime1 + w];
                        pixV2 = pf_inData[noffset_prime2 + w];
                        pfi_oData[noffset + w] = (unsigned char)((pixV1+pixV2)/2);
                        pfi_eData[noffset + w] = 0;
                    }
                } else {  // Odd columns on the grid
                    if (h%2 == 0) {  // Even rows
                        pfi_oData[noffset + w] = pf_inData[noffset + w];
                        pfi_eData[noffset + w] = 0;
                    } else {  // Odd rows
                        int noffset_prime1 = noffset - f_in->widthStep;
                        int noffset_prime2 = noffset + f_in->widthStep;
                        pixV1 = pf_inData[noffset_prime1 + w];
                        pixV2 = pf_inData[noffset_prime2 + w];
                        pfi_eData[noffset + w] = (unsigned char)((pixV1+pixV2)/2);
                        pfi_oData[noffset + w] = 0;
                    }
                }
            }
        }
        /* <-- Finding the partial subsets at current scale --> */
        // < Even grid. >
        cvDilate (fi_e, f1_e, B, 1);
        cvErode (f1_e, f2_e, B, 1);
        iplSubtract (f2_e, fi_e, fR_e);
        // < Odd grid. >
        cvDilate (fi_o, f1_o, B, 1);
        cvErode (f1_o, f2_o, B, 1);
        iplSubtract (f2_o, fi_o, fR_o);

        // --------------------------------------------------------------
        // Get even columns from even grid and odd columns from odd grid.
        // --------------------------------------------------------------
        pf1_eData = f1_e->imageData;
        pf1_oData = f1_o->imageData;
        pfR_eData = fR_e->imageData;
        pfR_oData = fR_o->imageData;
        for (h = 0; h < H; h += 2) {
            noffset = h * f_in->widthStep;
            for (w = 0; w < W; w += 2) {
                j = w/2;
                if (j%2 == 0) {  // Even columns on the grid
                    ptmp1Data[noffset + w] = pf1_eData[noffset + w];
                    ptmp2Data[noffset + w] = pfR_eData[noffset + w];
                } else {  // Odd columns on the grid
                    ptmp1Data[noffset + w] = pf1_oData[noffset + w];
                    ptmp2Data[noffset + w] = pfR_oData[noffset + w];
                }
            }
        }
        cvReleaseStructuringElement (&B);
        break;
    };
    char *pD1Data = D_1->imageData;
    char *pS0Data = S_0->imageData;
    for (h = 0; h < H; h += 2) {
        noffset = h * tmp1->widthStep;
        noffset1 = (h/2) * D_1->widthStep;
        for (w = 0; w < W; w += 2) {
            pD1Data[noffset1 + w/2] = ptmp1Data[noffset + w];
            pS0Data[noffset1 + w/2] = ptmp2Data[noffset + w];
        }
    }

    iplDeallocate (add1, IPL_IMAGE_ALL);
    iplDeallocate (add2, IPL_IMAGE_ALL);
    iplDeallocate (f_in, IPL_IMAGE_ALL);
    iplDeallocate (fi_e, IPL_IMAGE_ALL);
    iplDeallocate (fi_o, IPL_IMAGE_ALL);
    iplDeallocate (f1_e, IPL_IMAGE_ALL);
    iplDeallocate (f1_o, IPL_IMAGE_ALL);
    iplDeallocate (f2_e, IPL_IMAGE_ALL);
    iplDeallocate (f2_o, IPL_IMAGE_ALL);
    iplDeallocate (fR_e, IPL_IMAGE_ALL);
    iplDeallocate (fR_o, IPL_IMAGE_ALL);
    iplDeallocate (tmp1, IPL_IMAGE_ALL);
    iplDeallocate (tmp2, IPL_IMAGE_ALL);
}
/*
 * stageSAT_Inverse()
 * Is one stage of the reconstruction procedure of
 * the original object from the corresponding SAT.
 */
void stageSAT_Inverse (IplImage* imgS, IplImage* imgT, int scale)
{
    int w, h, i, j, pix;
    int W = 2 * imgS->width;
    int H = 2 * imgS->height;
    // ------------------------------------------------------------------
    // The hexagonal structuring elements used for scaling the image.
    //
    //   B1:  - x - x -        B2:  - - x - -
    //        - - - - -             x - - - x
    //        x - x - x             - - x - -
    //        - - - - -             x - - - x
    //        - x - x -             - - x - -
    //
    // B1 is used for a hexagonal grid where the rows are
    // interpolated, and B2 is used where the columns are
    // interpolated.
    // ------------------------------------------------------------------
    IplConvKernel *B1 = cvCreateStructuringElementEx (5, 5, 2, 2, CV_SHAPE_CUSTOM, HEXAGON);
    IplConvKernel *B2 = cvCreateStructuringElementEx (5, 5, 2, 2, CV_SHAPE_CUSTOM, HEXQUAD);
    IplConvKernel *B;
    // ------------------------------------------------------------------
    // Zooming up the input image in order to create two hexagonal
    // grids for use in the extraction of sub-sets.
    // ------------------------------------------------------------------
    IplImage* f_in = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
        IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, W, H,
        NULL, NULL, NULL, NULL);
    if (NULL == f_in) exit (EXIT_FAILURE);
    iplAllocateImage (f_in, 1, 0);
    if (NULL == f_in->imageData) exit (EXIT_FAILURE);
    iplZoom (imgS, f_in, 2, 1, 2, 1, IPL_INTER_LINEAR);
    // ------------------------------------------------------------------
    // The grids and the SE's alternate between two choices. Once
    // the rows are interpolated and once the columns are. The SE
    // changes correspondingly. Each time we use two grids: one is
    // called the 'even' grid and the other is called the 'odd'.
    // ------------------------------------------------------------------
    IplImage* fi_e = iplCloneImage (f_in);
    IplImage* fi_o = iplCloneImage (f_in);
    IplImage* fo_e = iplCloneImage (f_in);
    IplImage* fo_o = iplCloneImage (f_in);
    IplImage* tmp;

    switch (scale % 2) {
    case 0:  // rows are interpolated
        B = B1;
        for (h = 0; h < H; h++) {
            if (h%2) {  // delete rows not on the grid
                for (w = 0; w < W; w++) {
                    pix = w + h * W;
                    ((char*) fi_e->imageData)[pix] = 0;
                    ((char*) fi_o->imageData)[pix] = 0;
                }
            } else {  // rows on the grid
                i = h/2;
                if (i%2) {  // odd rows on the grid
                    for (w = 0; w < W; w++) {
                        pix = w + h * W;
                        if (w%2)  // odd columns
                            ((char*) fi_o->imageData)[pix] = 0;
                        else      // even columns
                            ((char*) fi_e->imageData)[pix] = 0;
                    }
                } else {  // even rows on the grid
                    for (w = 0; w < W; w++) {
                        pix = w + h * W;
                        if (w%2)  // odd columns
                            ((char*) fi_e->imageData)[pix] = 0;
                        else      // even columns
                            ((char*) fi_o->imageData)[pix] = 0;
                    }
                }
            }
        }
        /* <-- Finding the partial subsets at current scale --> */
        // < Even grid. >
        cvDilate (fi_e, fo_e, B, 1);
        // < Odd grid. >
        cvDilate (fi_o, fo_o, B, 1);

        // Get even rows from even grid and odd rows from odd grid.
        tmp = iplCloneImage (fi_e);
        for (h = 0; h < H; h++) {
            if (h%2 == 0) {  // rows on the grid
                i = h/2;
                if (i%2) {  // odd rows on the grid
                    for (w = 0; w < W; w++) {
                        pix = w + h * W;
                        ((char*) tmp->imageData)[pix] = fo_o->imageData[pix];
                    }
                } else {  // even rows on the grid
                    for (w = 0; w < W; w++) {
                        pix = w + h * W;
                        ((char*) tmp->imageData)[pix] = fo_e->imageData[pix];
                    }
                }
            }
        }
        break;

    case 1:  // columns are interpolated
        B = B2;
        for (w = 0; w < W; w++) {
            if (w%2) {  // delete columns not on the grid
                for (h = 0; h < H; h++) {
                    pix = w + h * W;
                    ((char*) fi_e->imageData)[pix] = 0;
                    ((char*) fi_o->imageData)[pix] = 0;
                }
            } else {  // columns on the grid
                j = w/2;
                if (j%2) {  // odd columns on the grid
                    for (h = 0; h < H; h++) {
                        pix = w + h * W;
                        if (h%2)  // odd rows
                            ((char*) fi_o->imageData)[pix] = 0;
                        else      // even rows
                            ((char*) fi_e->imageData)[pix] = 0;
                    }
                } else {  // even columns on the grid
                    for (h = 0; h < H; h++) {
                        pix = w + h * W;
                        if (h%2)  // odd rows
                            ((char*) fi_e->imageData)[pix] = 0;
                        else      // even rows
                            ((char*) fi_o->imageData)[pix] = 0;
                    }
                }
            }
        }
        /* <-- Finding the partial subsets at current scale --> */
        // < Even grid. >
        cvDilate (fi_e, fo_e, B, 1);
        // < Odd grid. >
        cvDilate (fi_o, fo_o, B, 1);

        // Get even columns from even grid and odd columns from odd grid.
        tmp = iplCloneImage (fi_o);
        for (w = 0; w < W; w++) {
            if (w%2 == 0) {  // columns on the grid
                j = w/2;
                if (j%2) {  // odd columns on the grid
                    for (h = 0; h < H; h++) {
                        pix = w + h * W;
                        ((char*) tmp->imageData)[pix] = fo_o->imageData[pix];
                    }
                } else {  // even columns on the grid
                    for (h = 0; h < H; h++) {
                        pix = w + h * W;
                        ((char*) tmp->imageData)[pix] = fo_e->imageData[pix];
                    }
                }
            }
        }
        break;
    }

    iplDecimate (tmp, imgT, 1, 2, 1, 2, IPL_INTER_LINEAR);
    iplDeallocate (f_in, IPL_IMAGE_ALL);
    iplDeallocate (fi_e, IPL_IMAGE_ALL);
    iplDeallocate (fi_o, IPL_IMAGE_ALL);
    iplDeallocate (fo_e, IPL_IMAGE_ALL);
    iplDeallocate (fo_o, IPL_IMAGE_ALL);
    iplDeallocate (tmp, IPL_IMAGE_ALL);
    cvReleaseStructuringElement (&B1);
    cvReleaseStructuringElement (&B2);
}
/*
 * thinGoetcherian()
 * Purpose: implements the skeletonization algorithm proposed by
 *          Goetcherian in the paper:
 *          Vartkes Goetcherian, "From binary to grey tone image
 *          processing using fuzzy logic concepts", PR, vol. 12, pp. 7-15
 * Input:   the image to be skeletonized
 * Output:  the same image after skeletonization
 */
void thinGoetcherian (IplImage* img)
{
    int i, j, k, m, n;
    double image_distance = 1;
    IplImage *I0 = iplCloneImage (img);
    IplImage *I0_last = iplCloneImage (img);
    int W = img->width;
    int H = img->height;
    int mask[3][3];
    int pix, pix_center, pix_val, pix_center_val;
    int cnt, cnt1, cnt2;
    IplImage* I1 = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
        IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, W, H,
        NULL, NULL, NULL, NULL);
    if (I1 == NULL) {
        printf ("Can not create new image header");
        exit (EXIT_FAILURE);
    }
    iplAllocateImage (I1, 1, 0);
    if (NULL == I1->imageData) {
        printf ("Can not create new image header");
        exit (EXIT_FAILURE);
    }

    IplImage* I2 = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
        IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, W, H,
        NULL, NULL, NULL, NULL);
    if (I2 == NULL) {
        printf ("Can not create new image header");
        exit (EXIT_FAILURE);
    }
    iplAllocateImage (I2, 1, 0);
    if (NULL == I2->imageData) {
        printf ("Can not create new image header");
        exit (EXIT_FAILURE);
    }

    IplImage* I3 = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
        IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, W, H,
        NULL, NULL, NULL, NULL);
    if (I3 == NULL) {
        printf ("Can not create new image header");
        exit (EXIT_FAILURE);
    }
    iplAllocateImage (I3, 1, 0);
    if (NULL == I3->imageData) {
        printf ("Can not create new image header");
        exit (EXIT_FAILURE);
    }

    while (image_distance != 0)
{
        // extracting the masks highlighting the pixels that are greater
        // than their neighbors with -1 tag in the mask, and less than or
        // equal to the neighbors with +1 tag in the mask.
        IplImage *I_dif = iplCloneImage (img);
        for (k = 1; k <= 8; k++) {
            // determining which mask to use at the current iteration
            switch (k) {
            case 1:
                for (i = 0; i < 3; i++)
                    for (j = 0; j < 3; j++)
                        mask[i][j] = A1[i][j];
                //mask = A1;
                break;
            case 2:
                for (i = 0; i < 3; i++)
                    for (j = 0; j < 3; j++)
                        mask[i][j] = B1[i][j];
                //mask = B1;
                break;
            case 3:
                for (i = 0; i < 3; i++)
                    for (j = 0; j < 3; j++)
                        mask[i][j] = A2[i][j];
                //mask = A2;
                break;
            case 4:
                for (i = 0; i < 3; i++)
                    for (j = 0; j < 3; j++)
                        mask[i][j] = B2[i][j];
                //mask = B2;
                break;
            case 5:
                for (i = 0; i < 3; i++)
                    for (j = 0; j < 3; j++)
                        mask[i][j] = A3[i][j];
                //mask = A3;
                break;
            case 6:
                for (i = 0; i < 3; i++)
                    for (j = 0; j < 3; j++)
                        mask[i][j] = B3[i][j];
                //mask = B3;
                break;
            case 7:
                for (i = 0; i < 3; i++)
                    for (j = 0; j < 3; j++)
                        mask[i][j] = A4[i][j];
                //mask = A4;
                break;
            case 8:
                for (i = 0; i < 3; i++)
                    for (j = 0; j < 3; j++)
                        mask[i][j] = B4[i][j];
                //mask = B4;
                break;
            };
            for (i = 1; i < (H - 1); i++) {
                for (j = 1; j < (W - 1); j++) {
                    pix_center = j + W * i;
                    pix_center_val = static_cast <unsigned char> (I0->imageData[pix_center]);
                    cnt1 = 0;
                    cnt2 = 0;
                    for (m = -1; m <= 1; m++) {
                        for (n = -1; n <= 1; n++) {
                            pix = (j + n) + W * (i + m);
                            if (mask[m+1][n+1] == -1) {
                                pix_val = static_cast <unsigned char> (I0->imageData[pix]);
                                if (pix_val < pix_center_val) cnt1++;
                            } else if (mask[m+1][n+1] == 1) {
                                pix_val = static_cast <unsigned char> (I0->imageData[pix]);
                                if (pix_val >= pix_center_val) cnt2++;
                            }
                        }
                    }
                    if (cnt1 == 3) ((char*) I1->imageData)[pix_center] = 1;
                    if (cnt2 == 3) ((char*) I2->imageData)[pix_center] = 1;
                }
            }
            // take all points for which the two above conditions hold
            for (i = 0; i < H; i++) {
                for (j = 0; j < W; j++) {
                    pix = j + W * i;
                    int pix_val1 = static_cast<unsigned char>(I1->imageData[pix]);
                    int pix_val2 = static_cast<unsigned char>(I2->imageData[pix]);
                    if (pix_val1 == 1 && pix_val2 == 1)
                        ((char*) I3->imageData)[pix] = 1;
                }
            }
            iplDeallocateImage (I2);
            iplDeallocateImage (I1);
            iplAllocateImage (I1, 1, 0);
            iplAllocateImage (I2, 1, 0);
            // Setting the new image to the maximum of the pixels of the
            // old image that are under the mask elements '-1'
            IplImage *I4 = iplCloneImage (I0);
            for (i = 1; i < (H - 1); i++) {
                for (j = 1; j < (W - 1); j++) {
                    pix_center = j + W * i;
                    pix_center_val = static_cast <unsigned char> (I3->imageData[pix_center]);
                    if (pix_center_val == 1) {
                        int new_pix_val;
                        cnt = 0;
                        for (m = -1; m <= 1; m++) {
                            for (n = -1; n <= 1; n++) {
                                // only take points with '-1' tag
                                if (mask[m+1][n+1] == -1) {
                                    pix = (j + n) + W * (i + m);
                                    pix_val = static_cast <unsigned char> (I0->imageData[pix]);
                                    if (!cnt) new_pix_val = pix_val;
                                    else if (pix_val > new_pix_val) new_pix_val = pix_val;
                                    cnt++;
                                }
                            }
                        }
                        ((char*) I4->imageData)[pix_center] = static_cast<char>(new_pix_val);
                    }
                }
            }
            iplCopy (I4, I0);
            iplDeallocate (I4, IPL_IMAGE_ALL);
            iplDeallocateImage (I3);
            iplAllocateImage (I3, 1, 0);
        }
        iplSubtract (I0_last, I0, I_dif);
        image_distance = cvSumPixels (I_dif);
        iplCopy (I0, I0_last);
        iplDeallocate (I_dif, IPL_IMAGE_ALL);
    }
    iplCopy (I0, img);
}

/****************************************************************************
 * sat.h
 * Header file related to the operations and classes
 * used in the process of finding the symmetric axis
 * transform of images
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 ****************************************************************************/
#ifndef SAT_INCLUDED
#define SAT_INCLUDED

#include <CV.h>
// ************************************************************************
// Masks for the skeletonization algorithm used in the method
// presented in the paper:
// "From Binary to Grey Tone Image Processing using Fuzzy Logic Concepts",
// Vartkes Goetcherian, PR, vol. 12, pp. 7-15
// The elements of the masks are numbered as follows:  1 2 3
//                                                     8 x 4
//                                                     7 6 5
// ************************************************************************
static int A1[3][3] = { -1, -1,  0,
                        -1,  1,  1,
                         0,  1,  0 };
static int A2[3][3] = {  0, -1, -1,
                         1,  1, -1,
                         0,  1,  0 };
static int A3[3][3] = {  0,  1,  0,
                         1,  1, -1,
                         0, -1, -1 };
static int A4[3][3] = {  0,  1,  0,
                        -1,  1,  1,
                        -1, -1,  0 };
static int B1[3][3] = { -1, -1, -1,
                         0,  1,  0,
                         1,  1,  0 };
static int B2[3][3] = {  1,  0, -1,
                         1,  1, -1,
                         0,  0, -1 };
static int B3[3][3] = {  0,  1,  1,
                         0,  1,  0,
                        -1, -1, -1 };
static int B4[3][3] = { -1,  0,  0,
                        -1,  1,  1,
                        -1,  0,  1 };
// ------------------------------------------------------------------
// The hexagonal structuring elements used for scaling the image.
//
//   B1:  - x - x -        B2:  - - x - -
//        - - - - -             x - - - x
//        x - x - x             - - x - -
//        - - - - -             x - - - x
//        - x - x -             - - x - -
//
// B1 is used for a hexagonal grid where the rows are
// interpolated, and B2 is used where the columns are
// interpolated.
// ------------------------------------------------------------------
// B1:
static int HEXAGON[25] = { 0, 1, 0, 1, 0,
                           0, 0, 0, 0, 0,
                           1, 0, 1, 0, 1,
                           0, 0, 0, 0, 0,
                           0, 1, 0, 1, 0 };
// B2:
static int HEXQUAD[25] = { 0, 0, 1, 0, 0,
                           1, 0, 0, 0, 1,
                           0, 0, 1, 0, 0,
                           1, 0, 0, 0, 1,
                           0, 0, 1, 0, 0 };
static int HEX[25] = { 0, 1, 1, 1, 0,
                       1, 1, 1, 1, 1,
                       1, 1, 1, 1, 1,
                       1, 1, 1, 1, 1,
                       0, 1, 1, 1, 0 };
static int QUAD[49] = { 0, 0, 1, 1, 1, 0, 0,
                        0, 1, 1, 1, 1, 1, 0,
                        1, 1, 1, 1, 1, 1, 1,
                        1, 1, 1, 1, 1, 1, 1,
                        1, 1, 1, 1, 1, 1, 1,
                        0, 1, 1, 1, 1, 1, 0,
                        0, 0, 1, 1, 1, 0, 0 };

// Total number of stages in the SAT algorithm
const int TOTAL_STAGES = 250; //60;

// Function declarations
void stageSAT2 (IplImage*, IplImage*, IplImage*);
void stageSAT (IplImage*, IplImage*, IplImage*, int);
void stageSAT_Inverse (IplImage*, IplImage*, int);
void thinGoetcherian (IplImage*);
void binaryThinning (IplImage*);
void thinImageFAI (IplImage*);
void maximal_disc_centers (IplImage*, IplImage**, IplImage**, int);
void reconstructCavity (IplImage*, int, char*);

#endif
/****************************************************************************
 * sfp.cc
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 ****************************************************************************/
#include <iostream>
#include <cstdlib>
#include <cstring>
#include <cstdio>
#include <cmath>
#include <ctype.h>
#include <algorithm>
#include <vector>
#include <CV.h>
#include "../util/util.h"
#include "gsat_graph.h"
#include "sub_graph.h"
#include "sfp.h"
#include "sfp_attrib.h"
#include "surface_path.h"

//#define DEBUG

void construct_SurfacePath ( graph* G, node_map <NodeAttribute>* A, LUT* L, vector <SFP>& SFPG, int l_min, int l_max )
{
    int l;

    // Linking branch vertices together.
    link_BranchNodes (G, A, SFPG, l_min, l_max);

    // Linking the other vertices together.
    for (l = l_min; l < l_max; l++) {
        reset_AllLinkTags (G[l], A[l]);  // Unlink vertices except BRANCH
        link_OtherNodes (G, A, L, SFPG, l, l+1);
    }
    printf ("Total number of distinct SFPs: %d\n\n\n", SFPG.size());
}
/*
 * link_BranchNodes()
 */
void link_BranchNodes ( graph* G, node_map<NodeAttribute>* A, vector <SFP>& SFPG, int l_min, int l_max )
{
    int i, index;
    int cntrl_cnt = 0;
    double min, dist;
    CvPoint P1, P2;
    vector <CvPoint> NN1;  // List of neighbor points of a node
    vector <CvPoint> NN2;
    vector <NodeAttribute> una;  // a vector of attributes of upper graph nodes
    vector <node> unn;

    /** Iterators for going over the sub-graphs */
    graph :: node_iterator it1;
    graph :: node_iterator end1;
    graph :: node_iterator it2;
    graph :: node_iterator end2;
    /**
     * Go over all sub-graphs and
     * extract the Branch Vertices.
     */
    for (int l = l_min; l < l_max-1; l++) {
#ifdef DEBUG
        printf ("Go over vertices of graph at level: %d to find Branch Vertices.\n", l);
#endif
        it1 = (G[l]).nodes_begin();
        end1 = (G[l]).nodes_end();
        while (it1 != end1)  // Go over vertices of lower graph
        {
            if (((A[l])[*it1]).getType() > 2)  // only get BVs
            {
                cntrl_cnt = 0;
                una.clear();
                unn.clear();
                NN1.clear();
                NN2.clear();

                // location of the lower graph vertex
                P1 = ((A[l])[*it1]).getPosition();
                it2 = (G[l+1]).nodes_begin();
                end2 = (G[l+1]).nodes_end();
                while (it2 != end2)  // Go over vertices of upper graph
                {
                    if (((A[l+1])[*it2]).getType() > 2)  // only get BVs
                    {
                        // location of the upper graph vertex
                        P2 = ((A[l+1])[*it2]).getPosition();
                        // Are the vertices in the 8-neighborhood of each other?
                        if (fabs (double(P1.x - P2.x)) <= 1 && fabs (double(P1.y - P2.y)) <= 1)
                        {
                            // See if BVs are linkable
                            if (linkable_BranchVertices (*it1, *it2, A, l, l+1))
                            {
#ifdef DEBUG
                                printf ("Node %d at level %d at position (%d,%d) is linkable to node %d at level %d at position (%d,%d)\n",
                                        (A[l][*it1]).getIndex(), l, P1.x, P1.y,
                                        (A[l+1][*it2]).getIndex(), l+1, P2.x, P2.y);
#endif
                                // put attributes of linkable node in vector
                                una.insert (una.begin() + una.size(), A[l+1][*it2]);
                                unn.insert (unn.begin() + unn.size(), *it2);
                                cntrl_cnt++;
                            }
                        }
                    }
                    ++it2;
                }
                // If there are linkable vertices available
                if (cntrl_cnt == 1) {
                    NN1.clear();
                    NN2.clear();

                    // Get neighbors of each BV
                    getNeighbors (*it1, A[l], NN1);
                    getNeighbors (unn[0], A[l+1], NN2);

                    // Link the nodes together and add to SFPG
                    link_Nodes ((A[l])[*it1], una[0], NN1, NN2, l, l+1, SFPG);
#ifdef DEBUG
                    printf ("Linked BVNs: %d of G:%d, to %d of G:%d\n",
                            (A[l])[*it1].getIndex(), l, una[0].getIndex(), l+1);
#endif
                } else if (cntrl_cnt > 1) {  // More than one linkable BN
                    for (i = 0; i < una.size(); i++) {
                        dist = euclideanDistance (P1, (una[i]).getPosition());
                        if (0 == i) {
                            min = dist;
                            index = i;
                        } else if (dist < min) {
                            min = dist;
                            index = i;
                        }
                    }

                    // Get neighbors of each BV
                    NN1.clear();
                    NN2.clear();
                    getNeighbors (*it1, A[l], NN1);
                    getNeighbors (unn[index], A[l+1], NN2);

                    // Link the nodes together and add to SFPG
                    link_Nodes ((A[l])[*it1], una[index], NN1, NN2, l, l+1, SFPG);
#ifdef DEBUG
                    printf ("Linked BVNs: %d of G:%d, to %d of G:%d\n",
                            (A[l])[*it1].getIndex(), l, una[index].getIndex(), l+1);
#endif
                }
            }
            ++it1;
        }
    }
}
/*
 * link_OtherNodes()
 */
void link_OtherNodes ( graph* G, node_map<NodeAttribute>* A, LUT* L, vector <SFP>& SFPG, int I, int J )
{
    int i, j, k, l;
    NodeType T;
    node ref_I, ref_J;
    node this_V, next_V, last_V, temp_V;
    node last_linked_VI;

    // List of nodes for the two input sub-graphs
    vector <node> LI;
    vector <node> LJ;

    // iterators for traversing the graph
    graph :: node_iterator it;
    graph :: node_iterator end;
    node :: out_edges_iterator eib;
    node :: out_edges_iterator eie;
    /** Get all possible starting TERMINAL vertices */
#ifdef DEBUG
    printf ("**********************************************\n");
    printf ("Starting from unvisited TERMINAL vertices.\n");
#endif

message = strcat('Current object is: ', current_obj_label);
disp(message);
obj_index = find(strncmp(gt_obj_names, current_obj_name, 2) == 1);
objCount{view_index}(obj_index) = objCount{view_index}(obj_index) + 1;
end
totalKFCount(view_index) = totalKFCount(view_index) + 1;
end
end
end
clear message; clear rep_v_name; clear num_echos;
clear n; clear N; clear I; clear L; clear m; clear M;
clear view_index; clear v_name; clear total_num_objs;
clear current_obj_label; clear current_obj_name;
clear obj_index; clear gt_obj_names;
N = length(OBJ_PROPS_NORM);
Model_all_independent = cell(1,N);

% Estimate model parameters for chambers of each view
for n = 1:N
    M = length(OBJ_PROPS_NORM{n});  % Number of objects in current view
    Model_all_independent{n} = cell(1,M);
    for m = 1:M
        Model_all_independent{n}{m} = mean(OBJ_PROPS_NORM{n}{m}')';
        Model_all_independent{n}{m} = [Model_all_independent{n}{m} diag(cov((OBJ_PROPS_NORM{n}{m}')))];
    end
end
clear N; clear M; clear n; clear m;
function [paramsCurView] = estimatePriorParams(kfInfo, objNames, viewNames, viewModels, round_num, option)
% function [prior_params] = estimatePriorParams(kfInfo, objNames, viewNames, viewModels, round_num, option)
% Estimates the prior parameters of the MRF. See Stan Z. Li, section 7.3.4 and
% more detail in personal notes.
%
% Inputs:
%   kfInfo: all info about the training constellation
%   objNames: names of the objects
%   viewNames: names of the views
%   viewModels: models of the objects for different views
%   round_num: number of the round in jackknife
%   option: vector [location, area, angle, eccen, pair], element 1 => use that property
%
% Outputs:
%   V1, V2: min and max values of the prior parameters for each view

J = length(viewNames)-1;
N = length(kfInfo);

% ---------------------------------------------------------------
% Find v_01, v_02 for each key-frame in each view
for j = 1:J
    paramsCurView{j} = [];

    % Obtain and process key-frames of all echos for view 'j'
    for n = 1:N
        if (round_num ~= n)
            % find the KFs in current echo with view name equal to the current view 'j'
            kf_ind = 1 + (findstr([kfInfo{n}.view_name], viewNames{j})-1)/3;
            L = length(kf_ind);
            for l = 1:L
                % Get the parameters for current key-frame
                [v] = getOptimalParams(kfInfo{n}(kf_ind(l)).RI, objNames, viewModels, j, round_num, option);
                % add parameters to set of parameters obtained so far for current view
                paramsCurView{j} = [paramsCurView{j} v];
            end
        end
    end
end
function [FP_props] = extractFPProps(KF_INFO, excluded_echo, VIEW_NAMES, OBJ_NAMES);
% function [FP_props] = extractFPProps(KF_INFO, excluded_echo, VIEW_NAMES, OBJ_NAMES);
%
% Given the properties of each object in each echo, this function
% finds the bounds of the properties of the false positive regions
%
% Input:
%   KF_INFO: contains properties info about each object in each key-frame of each echo
%            KF_INFO{echo_num}(kf_num)
%   VIEW_NAMES: names of the different views
%               VIEW_NAMES{view_num}
%   OBJ_NAMES: names of objects in each view
%              OBJ_NAMES{view_num}
%
% Output:
%   FP_props: FP_props{view_num} is a matrix with each row having the properties of
%             a false positive region in the order:
%             locX, locY, area, angle, eccen
%
J = length(VIEW_NAMES)-1;
N = length(KF_INFO);

FP_props = cell(1,J);
ref_label = 'fp';

% Extract object features for each view
for j = 1:J
    % read object properties for each echo and update elements of matrix 'FP_props{j}'
    for n = 1:N
        if (n ~= excluded_echo)
            % find the KFs in current echo with view name equal to the current view 'j'
            kf_ind = 1+(findstr([KF_INFO{n}.view_name], VIEW_NAMES{j})-1)/3;

            % For each kf in the list find the properties of its objects
            for kf = 1:length(kf_ind)
                % number of objects in current KF
                M = length(KF_INFO{n}(kf_ind(kf)).RI);
                for m = 1:M
                    obj_label = KF_INFO{n}(kf_ind(kf)).RI(m).label;
                    if (strcmp(ref_label, obj_label))
                        props(1) = KF_INFO{n}(kf_ind(kf)).RI(m).point.X;
                        props(2) = KF_INFO{n}(kf_ind(kf)).RI(m).point.Y;
                        props(3) = KF_INFO{n}(kf_ind(kf)).RI(m).area;
                        props(4) = KF_INFO{n}(kf_ind(kf)).RI(m).angle;
                        props(5) = KF_INFO{n}(kf_ind(kf)).RI(m).eccen;
                        FP_props{j} = [FP_props{j}; props];
                    end
                end
            end
        end
    end
end
function [OBJ_Location, OBJ_Area, OBJ_Angle, OBJ_Eccen] = extractObjProps(KF_INFO, excluded_echo, VIEW_NAMES, OBJ_NAMES);
% function [OBJ_Location, OBJ_Area, OBJ^Angle, OBJJ≡ccen] = extractObjProps(KF_INFO,
VIEW_NAMES, OBJJMAMES)
%
% Given the properties of each object in each echo, this function
% finds the properties of each object across different views.
%
% Input:
% KFJNFO: contains properties info about each object in each key-frame of each echo
% KF_INFO{echo_num}(kf_num)
% VIEW_NAMES: names of the different views
% VIEWJMAMES{view_num}
% OBJ_NAMES: names of objects in each view
% OBJ_NAMES{view_num}
% exclude_echo: index of the echo that has to be left out
%
% Output:
% OBJ_Location, OBJ_Angle, OBJ_Area, 0BJ_Eccen: joint and properties of objects in each view
%
J = length(VIEW_NAMES)-1;
N = length(KF_INFO);
OBJ_Location = cell(1,J);  % Holds the joint location of pair of objects
OBJ_Area = cell(1,J);      % Holds the joint Area of pair of objects
OBJ_Angle = cell(1,J);     % Holds the joint angle of pair of objects
OBJ_Eccen = cell(1,J);     % Holds the joint eccentricity of pair of objects
% Extract object features for each view
for j = 1:J
    % Number of objects in current view
    L = length(OBJ_NAMES{j})-1;
    LL = L*(L-1)/2;
    OBJ_Location{j} = cell(L); OBJ_Area{j} = cell(L); OBJ_Angle{j} = cell(L); OBJ_Eccen{j} = cell(L);
    % read object properties for each echo and update elements of matrix OBJ_PROPS{j}
    for n = 1:N
        if (n ~= excluded_echo)
            % find the KFs in current echo with view name equal to the current view 'j'
            kf_ind = 1 + (findstr([KF_INFO{n}.view_name], VIEW_NAMES{j}) - 1)/3;
            % For each kf in the list find the properties of its objects
            for kf = 1:length(kf_ind)
                flag = zeros(L);  % Use this binary valued matrix to keep track of the set elements
                % number of objects in current KF
                M = length(KF_INFO{n}(kf_ind(kf)).RI);
                for m = 1:M
                    obj_label = KF_INFO{n}(kf_ind(kf)).RI(m).label;
                    for l = 1:L
                        ref_label = [OBJ_NAMES{j}(l,:)];
                        if (strcmp(ref_label, obj_label))
                            if (~flag(l,l))
                                props(1) = KF_INFO{n}(kf_ind(kf)).RI(m).point.X;
                                props(2) = KF_INFO{n}(kf_ind(kf)).RI(m).point.Y;
                                props(3) = KF_INFO{n}(kf_ind(kf)).RI(m).area;
                                props(4) = KF_INFO{n}(kf_ind(kf)).RI(m).angle;
                                props(5) = KF_INFO{n}(kf_ind(kf)).RI(m).eccen;
                                OBJ_Location{j}{l,l} = [OBJ_Location{j}{l,l}; props(1:2)];
                                OBJ_Area{j}{l,l} = [OBJ_Area{j}{l,l}; props(3)];
                                OBJ_Angle{j}{l,l} = [OBJ_Angle{j}{l,l}; props(4)];
                                OBJ_Eccen{j}{l,l} = [OBJ_Eccen{j}{l,l}; props(5)];
                                flag(l,l) = 1;
                            end
                            % Add to properties of pair of objects
                            if (LL > 0)
                                index_keeper = [l];
                                % Condition where pair of objects have been observed
                                for mm = 1:M
                                    if (mm ~= m)
                                        obj_label_p = KF_INFO{n}(kf_ind(kf)).RI(mm).label;
                                        for ll = 1:L
                                            ref_label_p = [OBJ_NAMES{j}(ll,:)];
                                            if (strcmp(ref_label_p, obj_label_p))
                                                index_keeper = [index_keeper ll];
                                                props2(1) = KF_INFO{n}(kf_ind(kf)).RI(mm).point.X;
                                                props2(2) = KF_INFO{n}(kf_ind(kf)).RI(mm).point.Y;
                                                props2(3) = KF_INFO{n}(kf_ind(kf)).RI(mm).area;
                                                props2(4) = KF_INFO{n}(kf_ind(kf)).RI(mm).angle;
                                                props2(5) = KF_INFO{n}(kf_ind(kf)).RI(mm).eccen;
                                                if (ll < l)
                                                    if (~flag(ll,l))
                                                        M1 = [props2(1:2) props(1:2)];
                                                        M2 = [props2(3) props(3)];
                                                        M3 = [props2(4) props(4)];
                                                        M4 = [props2(5) props(5)];
                                                        OBJ_Location{j}{ll,l} = [OBJ_Location{j}{ll,l}; M1];
                                                        OBJ_Area{j}{ll,l} = [OBJ_Area{j}{ll,l}; M2];
                                                        OBJ_Angle{j}{ll,l} = [OBJ_Angle{j}{ll,l}; M3];
                                                        OBJ_Eccen{j}{ll,l} = [OBJ_Eccen{j}{ll,l}; M4];
                                                        flag(ll,l) = 1;
                                                    end
                                                else
                                                    if (~flag(l,ll))
                                                        M1 = [props(1:2) props2(1:2)];
                                                        M2 = [props(3) props2(3)];
                                                        M3 = [props(4) props2(4)];
                                                        M4 = [props(5) props2(5)];
                                                        OBJ_Location{j}{l,ll} = [OBJ_Location{j}{l,ll}; M1];
                                                        OBJ_Area{j}{l,ll} = [OBJ_Area{j}{l,ll}; M2];
                                                        OBJ_Angle{j}{l,ll} = [OBJ_Angle{j}{l,ll}; M3];
                                                        OBJ_Eccen{j}{l,ll} = [OBJ_Eccen{j}{l,ll}; M4];
                                                        flag(l,ll) = 1;
                                                    end
                                                end
                                            end
                                        end
                                    end
                                end
                                % Condition where only one object has been observed
                                if (length(index_keeper) ~= L)
                                    for ll = 1:L
                                        if (length(find(index_keeper==ll))==0)
                                            if (ll < l)
                                                if (~flag(ll,l))
                                                    M1 = [zeros(1,2) props(1:2)];
                                                    M2 = [0 props(3)];
                                                    M3 = [0 props(4)];
                                                    M4 = [0 props(5)];
                                                    OBJ_Location{j}{ll,l} = [OBJ_Location{j}{ll,l}; M1];
                                                    OBJ_Area{j}{ll,l} = [OBJ_Area{j}{ll,l}; M2];
                                                    OBJ_Angle{j}{ll,l} = [OBJ_Angle{j}{ll,l}; M3];
                                                    OBJ_Eccen{j}{ll,l} = [OBJ_Eccen{j}{ll,l}; M4];
                                                    flag(ll,l) = 1;
                                                end
                                            else
                                                if (~flag(l,ll))
                                                    M1 = [props(1:2) zeros(1,2)];
                                                    M2 = [props(3) 0];
                                                    M3 = [props(4) 0];
                                                    M4 = [props(5) 0];
                                                    OBJ_Location{j}{l,ll} = [OBJ_Location{j}{l,ll}; M1];
                                                    OBJ_Area{j}{l,ll} = [OBJ_Area{j}{l,ll}; M2];
                                                    OBJ_Angle{j}{l,ll} = [OBJ_Angle{j}{l,ll}; M3];
                                                    OBJ_Eccen{j}{l,ll} = [OBJ_Eccen{j}{l,ll}; M4];
                                                    flag(l,ll) = 1;
                                                end
                                            end
                                        end
                                    end
                                end
                            end
                        end
                    end
                end
            end
        end
    end
end
function [T] = extractToken(input_string, delimiter, token_number)
% [T] = extractToken(input_string, delimiter, token_number)
% If 'token_number' is provided then it finds the corresponding token in the 'input_string' using
% the provided 'delimiter'. If 'token_number' is missing, it finds all tokens using the 'delimiter'.
if nargin < 3
    % Index of segment of name corresponding to view
    tokCnt = 1;
    [t, r] = strtok(input_string, delimiter);
    s = r;
    T{tokCnt} = t;
    while (isempty(r) == 0)
        tokCnt = tokCnt + 1;
        [t, r] = strtok(s, delimiter);
        s = r;
        T{tokCnt} = t;
    end
elseif nargin == 3
    % Index of segment of name corresponding to view
    tokCnt = 1;
    [t, r] = strtok(input_string, delimiter);
    s = r;
    while (isempty(r) == 0)
        tokCnt = tokCnt + 1;
        [t, r] = strtok(s, delimiter);
        s = r;
        if (tokCnt == token_number)
            T = t;
        end
    end
else
    disp('Usage: extractToken(input_string, delimiter, token_number)');
end
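For readers without MATLAB, the token-splitting behavior of the listing above can be sketched in plain Python (the helper name `extract_token` and the 1-based `token_number` convention are illustrative, mirroring the MATLAB interface rather than reproducing it exactly):

```python
def extract_token(input_string, delimiter, token_number=None):
    """Split input_string on delimiter; return the list of all tokens,
    or the single token at 1-based position token_number."""
    # drop empty fragments, as repeated strtok calls would
    tokens = [t for t in input_string.split(delimiter) if t]
    if token_number is None:
        return tokens
    return tokens[token_number - 1]

print(extract_token("psx_01_kf3", "_"))
print(extract_token("psx_01_kf3", "_", 1))
```

Used, for example, to pull the view-name segment out of a key-frame file name.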
R = length(FP_props);  % number of rounds
% for each round find min and max values of parameters
for r = 1:R
    N = length(FP_props{r});  % number of views
    % for each view find those parameters
    for n = 1:N
        paramsMin = min(FP_props{r}{n});
        paramsMax = max(FP_props{r}{n});
        FP_params{r,n} = [paramsMin' paramsMax'];
    end
end
clear R; clear N; clear r; clear n; clear params*;
function [prior_bounds, priors] = findPriorMRFBounds(prior_param, option)
E = length(prior_param);     % number of jackknife rounds
N = length(prior_param{1});  % number of views
prior_bounds = zeros(E,N,3);
priors = zeros(E,N,2);
for e = 1:E
    for n = 1:N
        P = prior_param{e}{n};
        if (option)
            ind = find(P(3,:) < 1000);
            p(1) = mean(P(1,ind));
            p(2) = mean(P(3,ind));
            p(3) = mean(P(4,ind));
            if (p(2) > p(1))
                prior_bounds(e,n,:) = p;
                priors(e,n,1) = (p(1)+p(2))/2;
                priors(e,n,2) = min(P(4,ind));
            else
                message = strcat('Error! view:', num2str(n), ' of round:', num2str(e));
                disp(message);
            end
        else
            prior_bounds(e,n,1) = max(P(1,:));
            priors(e,n,1) = max(P(1,:));
        end
    end
end
function [PRIOR_PARAMS_11101, PRIOR_PARAMS_11001, PRIOR_PARAMS_10001, PRIOR_PARAMS_11100, PRIOR_PARAMS_11000, PRIOR_PARAMS_10000] = findpriorparams(KF_INFO_NORM, OBJ_NAMES, VIEW_NAMES, VM_COMP)
PRIOR_PARAMS_11101 = cell(15,1);
PRIOR_PARAMS_11001 = cell(15,1);
PRIOR_PARAMS_10001 = cell(15,1);
PRIOR_PARAMS_11100 = cell(15,1);
PRIOR_PARAMS_11000 = cell(15,1);
PRIOR_PARAMS_10000 = cell(15,1);
for l = 1:15
    PRIOR_PARAMS_11101{l} = estimatePriorParams(KF_INFO_NORM, OBJ_NAMES, VIEW_NAMES, VM_COMP, l, [1 1 1 0 1]);
    PRIOR_PARAMS_11001{l} = estimatePriorParams(KF_INFO_NORM, OBJ_NAMES, VIEW_NAMES, VM_COMP, l, [1 1 0 0 1]);
    PRIOR_PARAMS_10001{l} = estimatePriorParams(KF_INFO_NORM, OBJ_NAMES, VIEW_NAMES, VM_COMP, l, [1 0 0 0 1]);
    PRIOR_PARAMS_11100{l} = estimatePriorParams(KF_INFO_NORM, OBJ_NAMES, VIEW_NAMES, VM_COMP, l, [1 1 1 0 0]);
    PRIOR_PARAMS_11000{l} = estimatePriorParams(KF_INFO_NORM, OBJ_NAMES, VIEW_NAMES, VM_COMP, l, [1 1 0 0 0]);
    PRIOR_PARAMS_10000{l} = estimatePriorParams(KF_INFO_NORM, OBJ_NAMES, VIEW_NAMES, VM_COMP, l, [1 0 0 0 0]);
end
% Finding average precision for noMiss+noFP
for R = 1:15
    prec_noFP_noMiss(R) = length(find((SVMPreLab_completeConst{R} - GTLab_test_completeConst{R})==0)) / length(GTLab_test_completeConst{R});
end
prec_perView_noFP_noMiss = zeros(15,10);
for R = 1:15
    for i = 1:10
        ind = find(GTLab_test_completeConst{R}==i);
        if (length(ind) > 0)
            prec_perView_noFP_noMiss(R,i) = length(find((SVMPreLab_completeConst{R}(ind) - GTLab_test_completeConst{R}(ind))==0)) / length(ind);
        end
    end
end
prec_perView_noFP_noMiss_avg = mean(prec_perView_noFP_noMiss);
ConfusionMat_noFP_noMiss_avg = zeros(10,10);
for R = 1:15
    ConfusionMat_noFP_noMiss_avg = ConfusionMat_noFP_noMiss_avg + SVMConfMatrix_test_completeConst{R}(1:10,1:10);
end
for i = 1:10
    ConfusionMat_noFP_noMiss_avg(i,:) = ConfusionMat_noFP_noMiss_avg(i,:) / sum(ConfusionMat_noFP_noMiss_avg(i,:));
end
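The final loop above row-normalizes the summed confusion counts so each row becomes per-class recognition rates. A minimal Python sketch of the same normalization, using illustrative 2x2 counts rather than the patent's 10-view data:

```python
# summed confusion counts over all rounds (illustrative values only)
conf = [[8.0, 2.0],
        [1.0, 9.0]]

# divide each row by its total so entries are per-class rates summing to 1
conf_norm = [[v / sum(row) for v in row] for row in conf]

print(conf_norm)
```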
function [E] = getGTEnergy(KF_INFO, VM, PM, objNames, viewNames, rn, options)
% function [E] = getGTEnergy(KF_INFO, VM, PM, objNames, viewNames, rn, options)
%
% For the constellations in ground truth in each view of the given round, finds the
% energy of the constellation and returns it in the matrix 'energy'.
%
% Input:
%   KF_INFO: key-frame information for all echos
%   VM: View Models of the different views for the different rounds
%   PM: Prior Models for each view under different rounds
%   objNames: names of the objects for different views
%   rn: round number
%
% Output:
%   E: is a cell array holding the energy values
%
J = length(viewNames)-1;  % number of views
N = length(KF_INFO);      % number of echos
E = cell(J,1);
% Find energy values for KFs under each view
for vn = 1:J
    numObjs = length(objNames{vn});  % number of objects in current view
    % Go over all echos and only get the ones
    % in current round. For them find the energy
    % values for each of their key-frames
    for n = 1:N
        if (n ~= rn)
            % find the KFs in current echo with view name equal to the current view 'j'
            kf_ind = 1 + (findstr([KF_INFO{n}.view_name], viewNames{vn}) - 1)/3;
            % For each kf in the list find the properties of its objects
            for kf = 1:length(kf_ind)
                % sites in current constellation
                sites = KF_INFO{n}(kf_ind(kf)).RI;
                % number of sites in current KF
                M = length(sites);
                % **********************************************
                % Find the indexes of the configuration of sites
                % **********************************************
                % config = ones(1,M)*numObjs;
                % get NULL labeled sites
                nullInd = 1 + (findstr([sites.label], objNames{vn}(end,:)) - 1)/2;
                nullCnt = length(nullInd);
                % get non-NULL labeled sites
                nonNullCnt = M - nullCnt;
                nonNullInd = [];  % keeps index of sites with non-null label
                for m = 1:M
                    if (length(find(nullInd == m)) == 0)
                        % if site is not in null list already
                        nonNullInd = [nonNullInd m];
                        reshapedMat = reshape(objNames{vn}', 1, 2*numObjs);
                        labelObj = sites(m).label;
                        config(m) = 1 + (findstr(reshapedMat, labelObj) - 1)/2;
                    end
                end
                % ***************************************
                % Get the energy of the configuration now
                % ***************************************
                E{vn} = [E{vn} configEnergy(config, sites, VM, PM, numObjs, options, vn, rn)];
            end
        end
    end
end
function [miu, sigma] = getNormalVariables(b);
% Finds the mean (miu) and the covariance matrix (sigma) of a
% multivariate normal distribution, from the observed samples
% as given by the matrix 'a'. Matrix 'a' has dimensions of MxN
% where 'M' is the number of samples and 'N' is the dimensionality
% of the observed space.
a = b';
[M, N] = size(a);
miu = zeros(1, N);
sigma = zeros(N, N);
% Calculate the mean vector
for m = 1:M
    miu = miu + a(m,:);
end
miu = miu / M;
% Calculate the covariance matrix
for m = 1:M
    diff = a(m,:) - miu;
    sigma = sigma + diff' * diff;
end
sigma = sigma / (M-1);
miu = miu';
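The mean/covariance estimate above (sample mean, unbiased covariance with the 1/(M-1) normalization) can be written in plain Python; `normal_mle` is an illustrative name, and samples are given as rows rather than the MATLAB listing's transposed input:

```python
def normal_mle(samples):
    """Sample mean and unbiased covariance of a list of equal-length
    feature vectors (one sample per row), as in getNormalVariables."""
    m = len(samples)
    n = len(samples[0])
    mean = [sum(s[i] for s in samples) / m for i in range(n)]
    cov = [[0.0] * n for _ in range(n)]
    for s in samples:
        d = [s[i] - mean[i] for i in range(n)]   # deviation from the mean
        for i in range(n):
            for j in range(n):
                cov[i][j] += d[i] * d[j]         # accumulate outer product
    cov = [[c / (m - 1) for c in row] for row in cov]
    return mean, cov

print(normal_mle([[0.0, 0.0], [2.0, 2.0]]))
```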
function [V] = getOptimalParams(constellation, objNames, viewModels, vn, rn, options)
%
% [V] = getOptimalParams(constellation, objNames, viewModels, vn, rn, options)
%
% Given a constellation of parts and the models of the distribution of
% parts properties for each view, this function estimates the best parameters
% of the prior MRF model such that the ground truth is embedded at the
% location of minimum energy in the configuration space.
M = length(objNames{vn}) - 1;  % number of non-null labels
N = length(constellation);     % number of sites
config = zeros(1,N);           % keeps the configuration of the sites
V = zeros(4,1);
if (isempty(constellation))
    % following settings won't change the final results
    V(1,1) = 0;
    V(2,1) = 0;
    V(3,1) = realmax;
    V(4,1) = realmax;
    V_min = 0;
    V_max = 0;
    return;
end
% -------------------------------------------------------------
% Find configuration of labels on the sites set by ground truth
% -------------------------------------------------------------
% Find the list of Null-labeled sites
nullInd = 1 + (findstr([constellation.label], objNames{vn}(end,:)) - 1)/2;
nullCnt = length(nullInd);
% Find the list of Non-Null-labeled sites
nonNullCnt = N - nullCnt;
nonNullInd = [];  % keeps index of sites with non-null label
for n = 1:N
    if (length(find(nullInd == n)) == 0)
        nonNullInd = [nonNullInd n];
        reshapedMat = reshape(objNames{vn}', 1, 2*(M+1));
        labelObj = constellation(n).label;
        config(n) = 1 + (findstr(reshapedMat, labelObj) - 1)/2;
    end
end
% Do not continue if there are no non-null labels
if (~nonNullCnt)
    % following settings won't change the final results
    V(1,1) = 0;
    V(2,1) = 0;
    V(3,1) = realmax;
    V(4,1) = realmax;
    V_min = 0;
    V_max = 0;
    return;
end
% -------------------------------------------------------------
% (1) Find lower bound on v_n0
% Visit each non-null site and change its label to null;
% the change in local energy will be: deltaE = E_after - E_before.
% By putting deltaE > 0, using linear programming find lower
% bounds on v_10 and v_20.
% -------------------------------------------------------------
V_min = [];
for n = 1:nonNullCnt
    % Find prior energy value before flipping the site's label to null
    SI = [nonNullInd(n)];  % add site to SiteIndex
    priorE = compSumDist(config(SI), constellation(SI), viewModels, vn, rn, options);
    % if context should be used
    if (options(end)==1 && nonNullCnt~=0)
        for m = 1:nonNullCnt
            if (m~=n)
                si = [SI nonNullInd(m)];
                priorE = priorE + compSumDist(config(si), constellation(si), viewModels, vn, rn, options);
            end
        end
    end
    V_min = [V_min [priorE;0]];
end
% Get common min value across all examples
V(1:2,1) = max(V_min')';
vmin = V(1,1);
% -------------------------------------------------------------
% (2) Find upper bound on v_n0
% Visit each null site and change its label to one of the possible
% non-null labels.
% Again set the change in local energy deltaE = E_after - E_before
% as deltaE > 0. Using linear programming find upper bounds on
% v_10 and v_20.
% -------------------------------------------------------------
V_max = [];
for n = 1:nullCnt
    % Find posterior energy value after flipping site
    % labels to any of M possible non-null labels
    for m = 1:M
        % Energy after the label of the current null-labeled site is flipped to 'm'
        LI = [m];
        SI = [nullInd(n)];
        postE = compSumDist(LI, constellation(SI), viewModels, vn, rn, options);
        % if context should be used
        Ni = 0;
        if (options(end)==1 && nonNullCnt~=0)
            for p = 1:nonNullCnt
                if (m ~= config(nonNullInd(p)))
                    li = [LI config(nonNullInd(p))];
                    si = [SI nonNullInd(p)];
                    postE = postE + compSumDist(li, constellation(si), viewModels, vn, rn, options);
                else
                    Ni = Ni + 1;
                end
            end
        end
        if (Ni)
            % Construct inequality and solve for max values of v_10 and v_20
            f = (-1)*eye(2,1);
            A = zeros(3,2);
            b = zeros(3,1);
            f(2) = Ni;
            A(1,1) = 1;
            A(1,2) = -Ni;
            A(2:3,1:2) = (-1)*eye(2);
            b(1,1) = postE;
            b(2,1) = -vmin;
            V_max = [V_max linprog(f,A,b)];
        end
    end
end
% Get common max value across all examples
if (length(V_max) ~= 0)
    V(3:4,1) = min(V_max')';
else
    V(3:4,1) = realmax;
end
function [E] = getSampCompEnergy(SC, VM, PM, objNames, rn, options)
% function [E] = getSampCompEnergy(SC, VM, PM, objNames, rn)
% For the sampled constellations in each view of the given round, finds the
% energy of the constellation and returns it in the matrix 'energy'.
%
% Input:
%   SC: Sampled Constellations for the current round: SC{view_num}(const_num).RI(site_index)
%   VM: View Models of the different views for the different rounds
%   PM: Prior Models for each view under different rounds
%   objNames: names of the objects for different views
%   rn: round number
%
% Output:
%   E: a matrix of energy values, TxN, T=number of constellations in the views, N=number of views
%
N = length(SC);     % number of views
T = length(SC{1});  % number of constellations under each view;
                    % it is the same for all views in the case of
                    % sampled constellations
%
% Find the energy for the constellations of each view
%
for vn = 1:N
    numObjs = length(objNames{vn});  % number of labels in current view
    for t = 1:T
        % **********************************************
        % Find the indexes of the configuration of sites
        % **********************************************
        sites = SC{vn}(t).RI;
        M = length(sites);           % number of sites in current constellation
        config = ones(1,M)*numObjs;  % keeps the configuration of the sites
        % get NULL labeled sites
        nullInd = 1 + (findstr([sites.label], objNames{vn}(end,:)) - 1)/2;
        nullCnt = length(nullInd);
        % get non-NULL labeled sites
        nonNullCnt = M - nullCnt;
        nonNullInd = [];  % keeps index of sites with non-null label
        for m = 1:M
            if (length(find(nullInd == m)) == 0)
                % if site is not in null list already
                nonNullInd = [nonNullInd m];
                reshapedMat = reshape(objNames{vn}', 1, 2*numObjs);
                labelObj = sites(m).label;
                config(m) = 1 + (findstr(reshapedMat, labelObj) - 1)/2;
            end
        end
        % ***************************************
        % Get the energy of the configuration now
        % ***************************************
        E(t,vn) = configEnergy(config, sites, VM, PM, numObjs, options, vn, rn);
    end
end
SC_2 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 2, 5000);
save 'SampledConstellation_noFP_R2_5000perView_Nov26.mat' SC_2
clear SC_2
SC_3 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 3, 5000);
save 'SampledConstellation_noFP_R3_5000perView_Nov26.mat' SC_3
clear SC_3
SC_4 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 4, 5000);
save 'SampledConstellation_noFP_R4_5000perView_Nov26.mat' SC_4
clear SC_4
SC_5 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 5, 5000);
save 'SampledConstellation_noFP_R5_5000perView_Nov26.mat' SC_5
clear SC_5
SC_6 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 6, 5000);
save 'SampledConstellation_noFP_R6_5000perView_Nov26.mat' SC_6
clear SC_6
SC_7 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 7, 5000);
save 'SampledConstellation_noFP_R7_5000perView_Nov26.mat' SC_7
clear SC_7
SC_8 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 8, 5000);
save 'SampledConstellation_noFP_R8_5000perView_Nov26.mat' SC_8
clear SC_8
SC_9 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 9, 5000);
save 'SampledConstellation_noFP_R9_5000perView_Nov26.mat' SC_9
clear SC_9
SC_10 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 10, 5000);
save 'SampledConstellation_noFP_R10_5000perView_Nov26.mat' SC_10
clear SC_10
SC_11 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 11, 5000);
save 'SampledConstellation_noFP_R11_5000perView_Nov26.mat' SC_11
clear SC_11
SC_12 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 12, 5000);
save 'SampledConstellation_noFP_R12_5000perView_Nov26.mat' SC_12
clear SC_12
SC_13 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 13, 5000);
save 'SampledConstellation_noFP_R13_5000perView_Nov26.mat' SC_13
clear SC_13
SC_14 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 14, 5000);
save 'SampledConstellation_noFP_R14_5000perView_Nov26.mat' SC_14
clear SC_14
SC_15 = sampleConstellations(OBJ_NAMES, objMissProb, avgFPregions, VM_COMP, FP_params, 15, 5000);
save 'SampledConstellation_noFP_R15_5000perView_Nov26.mat' SC_15
clear SC_15
function [optConfig, optEnergy] = HCF(constel, viewModel, priorModel, objNames, vn, rn, options)
% function [optConfig, optEnergy, numIterations] =
%          HCF(constel, viewModel, priorModel, objNames, view_num, round_num, options)
%
% Given a constellation of parts, this function uses the "Highest Confidence First" algorithm
% proposed by Chou & Brown to assign labels to the parts according to the given MRF model.
% The HCF method is a deterministic combinatorial optimization technique that has been shown to
% perform well in a reasonable time. For details see: Stan Z. Li's book (2001), pp. 235-237.
%
% Inputs:
%   constel: constellation of points which has to be labeled
%   viewModel: models of the different views
%   priorModel: parameters of the prior for the different views
%   objNames: names of the objects
%   view_num: index of the view to be used for labeling
%   round_num: round of jackknife
%   options: a vector [loc, area, angle, eccen, pair] which
%            determines which features should be used
%
% Outputs:
%   optConfig: configuration resulting in lowest possible energy
%   optEnergy: value of energy at the optimal configuration
%   numIterations: total number of iterations before stopping
M = size(objNames{vn}, 1);  % number of objects
N = length(constel);        % number of sites
numIterations = 0;
uncommittedLabel = 0;  % uncommitted sites are assigned the label '0' which is different from NULL;
                       % the NULL label is the last label in the list of object names for each view
% Initialize all sites to uncommitted
config = zeros(N,1);  % NOTE: maintain order. First element always is the label of first site in constel
%
% Go over all sites and find their S_i(f).
% Also sort the values of S_i(f).
%
[S] = Create_Heap(config, constel, objNames, viewModel, priorModel, vn, rn, options);
% Sort energy values in descending order
[heapS, indexS] = sort(S);
%
% Start iterating over sites and updating their labels
%
while (heapS(1) < 0)
    K = indexS(1);  % index of least stable site
    % for current site with the given label find the best new label
    config(K) = Change_Label(K, config, constel, objNames, viewModel, priorModel, vn, rn, options);
    % Now having the label, update S_K(l)
    S(K) = Update_S(K, config, constel, objNames, viewModel, priorModel, vn, rn, options);
    numIterations = numIterations + 1;
    % Adjust the heap to reflect new changes
    [heapS, indexS] = sort(S);
    for J = 1:N
        if (J ~= K)
            S(J) = Update_S(J, config, constel, objNames, viewModel, priorModel, vn, rn, options);
            numIterations = numIterations + 1;
            [heapS, indexS] = sort(S);
        end
    end
end
optConfig = config;
optEnergy = configEnergy(config, constel, viewModel, priorModel, M, options, vn, rn);
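The Highest Confidence First idea above, commit the least stable site first, then re-evaluate, can be illustrated with a heavily simplified Python toy. The energy here is a unary table only (no pairwise MRF terms, no uncommitted-site bookkeeping as in Chou & Brown), so this is a sketch of the greedy commit order, not of the patent's labeling model:

```python
def hcf(num_sites, labels, unary):
    """Greedy HCF-style sketch: at each step, apply the single label change
    with the largest energy decrease; stop when no change lowers the energy.
    `unary[s][lab]` is the (illustrative) energy of label `lab` at site s;
    uncommitted sites (None) contribute zero energy."""
    config = [None] * num_sites

    def local_energy(site, lab):
        return unary[site][lab]

    while True:
        best = None  # (delta, site, label) for the most negative delta
        for s in range(num_sites):
            cur = local_energy(s, config[s]) if config[s] is not None else 0.0
            for lab in labels:
                delta = local_energy(s, lab) - cur
                if delta < 0 and (best is None or delta < best[0]):
                    best = (delta, s, lab)
        if best is None:
            break  # stable: no label change decreases the energy
        _, s, lab = best
        config[s] = lab
    return config

print(hcf(2, [0, 1], [[-1.0, 0.5], [0.2, -2.0]]))
```

Here site 1 commits first (its best label lowers the energy the most), then site 0, mirroring the "least stable site first" visiting order of the MATLAB loop.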
root_dir = '/home/shahram/cardio/source/chamber_segment/testimg/echon';
file_index = 'list.dat';
index = [1 2 3 4 5 6 7 8 9 12 13 14 15 16 17];
cnt = 1;
for i = 1:length(index)
    data_dir = strcat(root_dir, num2str(index(i)), '/');
    [S{cnt}] = importData(data_dir, file_index);
    clear data_dir;
    cnt = cnt + 1;
end
clear root_dir; clear file_index; clear index; clear i; clear cnt;
function [S] = importData(data_dir, file_index)
% function [S] = importData(data_dir, file_index)
%
% Use this function to read the .dat files listed in the file 'file_index' in the directory
% 'data_dir' to import the information about each annotated key-frame of the echocardiogram video.
% Returns the structure array 'S' with the following elements:
%   S:
%     echo_num
%     view_name
%     seq_num
%     RI (Roi Info)
%
% where, roi_info is itself a structure with the following fields:
%   RI:
%     label
%     point
%     area
%     angle
%     eccentricity
%   point:
%     X
%     Y

% Get the name of KF files to be read
index_filename = strcat(data_dir, file_index);
fid = fopen(index_filename);
cnt = 0;
while (~feof(fid))
    cnt = cnt + 1;
    kf_name{cnt} = fgetl(fid);
end
fclose(fid);
% import the data from each of the KF files
fs = length(kf_name);
for i = 1:fs
    kf_filename = strcat(data_dir, 'features/', kf_name{i});
    fid = fopen(kf_filename);
    % read fields of the current .dat file
    cnt = 0;
    roi_cnt = 0;
    while (~feof(fid))
        if (cnt == 0)
            % Set the echo number
            S(i).echo_num = fgetl(fid);
        elseif (cnt == 1)
            % Set the view name
            vn = fgetl(fid);
            if (strcmp(vn, 'UNKN') || strcmp(vn, ''))
                vn_final = 'ukn';
            else
                if (strcmp(vn, 'PSAB'))
                    vn_final = 'psb';
                elseif (strcmp(vn, 'PSAPM'))
                    vn_final = 'psp';
                elseif (strcmp(vn, 'PSAM'))
                    vn_final = 'psm';
                elseif (strcmp(vn, 'PSAX'))
                    vn_final = 'psx';
                else
                    vn_final = lower(vn);
                end
            end
            S(i).view_name = vn_final;
        elseif (cnt == 2)
            % Set the sequence number
            S(i).seq_num = fgetl(fid);
        else
            % Set the ROI fields
            roi_cnt = roi_cnt + 1;
            current_roi = fgetl(fid);
            T = extractToken(current_roi, ' ');
            if (ischar(current_roi))
                % Set object's label
                lbl = T{1};
                if (strcmp(lbl, 'UNKN'))
                    L = 'fp';
                else
                    L = lower(lbl);
                end
                roi_info.label = L;
                % Set object's center-of-mass location
                p.X = str2num(T{2});
                p.Y = str2num(T{3});
                roi_info.point = p;
                % Set object's area, angle, and eccentricity
                roi_info.area = str2num(T{4});
                roi_info.angle = str2num(T{5});
                roi_info.eccen = str2num(T{6});
                S(i).RI(roi_cnt) = roi_info;
            end
            clear T;
        end
        cnt = cnt + 1;
    end
    fclose(fid);
end
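A single ROI line of these .dat files carries the fields in the order read above (label, X, Y, area, angle, eccentricity). A hedged Python sketch of that per-line parse, with an illustrative input line (the function name and the sample label values are assumptions, not from the patent data):

```python
def parse_roi_line(line):
    """Parse one ROI line ("LABEL X Y AREA ANGLE ECCEN") into a dict,
    mirroring importData's field order; 'UNKN' labels become 'fp'
    (false positive), other labels are lowercased."""
    tok = line.split()
    label = 'fp' if tok[0] == 'UNKN' else tok[0].lower()
    return {'label': label,
            'point': {'X': float(tok[1]), 'Y': float(tok[2])},
            'area': float(tok[3]),
            'angle': float(tok[4]),
            'eccen': float(tok[5])}

print(parse_roi_line('LV 176 120 2400 15.5 0.8'))
```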
function D = mahalonobis(X, M, S);
% function D = mahalonobis(X, M, S);
% Finds the Mahalanobis distance of the vector 'X' to the mean of the
% Gaussian defined by (M,S)
%
% Input:
%   X: feature vector
%   M: mean of Gaussian
%   S: covariance of Gaussian
D = (X-M)'*inv(S)*(X-M);
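The one-liner above computes the squared Mahalanobis distance (X-M)' * inv(S) * (X-M). A self-contained Python sketch for the 2-D case used throughout this listing, with the 2x2 matrix inverse written out by hand (the helper name `mahalanobis_2d` is illustrative):

```python
def mahalanobis_2d(x, m, s):
    """Squared Mahalanobis distance of 2-vector x to mean m under
    2x2 covariance s, i.e. (x-m)' * inv(s) * (x-m)."""
    dx, dy = x[0] - m[0], x[1] - m[1]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[ s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det,  s[0][0] / det]]   # closed-form 2x2 inverse
    return (dx * (inv[0][0] * dx + inv[0][1] * dy)
            + dy * (inv[1][0] * dx + inv[1][1] * dy))

# with identity covariance this reduces to squared Euclidean distance
print(mahalanobis_2d([1.0, 0.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))
```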
[numRounds, numViews] = size(L_samp_noFP_noMiss);
% obtain training data for NN for each round
for rn = 1:numRounds
    inVects = [];
    outVects = [];
    for vn = 1:numViews
        %vect = (-1)*ones(numViews,1);
        %vect(vn) = 1;
        %inVects = [inVects [L_samp_noFP{rn,vn}.energy]];
        %outVects = [outVects repmat(vect, 1, T)];
        T = length(L_samp_noFP_noMiss{rn,vn});
        vect = repmat(vn, 1, T);
        inVects = [inVects [L_samp_noFP_noMiss{rn,vn}.energy]];
        outVects = [outVects repmat(vn, 1, T)];
    end
    SVMtrain(rn).IN = inVects;
    SVMtrain(rn).OUT = outVects;
end
clear numRounds; clear numViews; clear rn; clear vn; clear inVects; clear outVects; clear T; clear vect;
KF_INFO_NORM = KF_INFO;
E = length(KF_INFO);
for e = 1:E
    T = length(KF_INFO{e});
    for t = 1:T
        N = length(KF_INFO{e}(t).RI);
        for n = 1:N
            p.X = KF_INFO{e}(t).RI(n).point.X / 352;
            p.Y = KF_INFO{e}(t).RI(n).point.Y / 240;
            KF_INFO_NORM{e}(t).RI(n).point = p;
            A = KF_INFO{e}(t).RI(n).area / (240*352);
            KF_INFO_NORM{e}(t).RI(n).area = A;
        end
    end
end
clear E; clear T; clear N; clear e; clear t; clear n; clear A; clear p;
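The script above normalizes ROI geometry against the 352x240 frame: x by the width, y by the height, and area by the total pixel count. The same step in Python (illustrative helper; the frame size comes from the listing above):

```python
FRAME_W, FRAME_H = 352, 240  # frame dimensions used in the listing above

def normalize_roi(x, y, area):
    """Scale a center-of-mass location and area into [0, 1] ranges."""
    return x / FRAME_W, y / FRAME_H, area / (FRAME_W * FRAME_H)

print(normalize_roi(176, 120, 352 * 240 / 4))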
function [optConfig, optEnergy, energyTrace] = optimalConfigSearch(viewModel, priorModel, constel, objNames, view_num, round_num, options)
% function [optConfig, optEnergy] = optimalConfigSearch(viewModel, priorModel, constel, objNames, view_num, round_num, options)
%
% Given a constellation of parts, this function uses SA+MCMC to search the configuration
% space for the configuration that results in the minimum overall energy value.
% We use the Metropolis-Hastings proposal function for the MCMC sampling.
%
% Note: '0' corresponds to the Null label
%
% Inputs:
%   viewModel: models of the different views
%   priorModel: parameters of the prior for the different views
%   constel: constellation of points which has to be labeled
%   objNames: names of the objects
%   view_num: index of the view to be used for labeling
%   round_num: round of jackknife
%   options: a vector [loc, area, angle, eccen, pair] which
%            determines which features should be used
%
% Outputs:
%   optConfig: configuration resulting in lowest possible energy
%   optEnergy: value of energy at the optimal configuration
%   energyTrace: keeps track of the energy of the accepted configurations
T = 10^9;  % initial temperature
k = 0.95;  % temperature decrease rate
M = size(objNames{view_num}, 1);  % number of objects
N = length(constel);              % number of sites
energyTrace = [];
%
% Assign random colorings to the sites
% and find the corresponding energy value
%
rand('state', sum(100*clock));
config = unidrnd(M, 1, N);
E = configEnergy(config, constel, viewModel, priorModel, M, options, view_num, round_num);
energyTrace = [energyTrace E];
% Start cooling procedure. Find coloring
% in each temperature.
iterate1 = 1;       % set to '0' to stop the process completely
unsuccessTemp = 0;  % if 3 unsuccessful temperatures back to back then stop
numTries = 0;
while (iterate1)
    iterate2 = 1;   % set to '0' to stop visiting sites in the same temperature
    numAccept = 0;  % number of accepted proposals in current temperature
    while (iterate2)
        for n = 1:N
            numTries = numTries + 1;
            % Get a new configuration
            new_config = config;
            new_config(n) = unidrnd(M);
            % find energy due to new configuration
            newE = configEnergy(new_config, constel, viewModel, priorModel, M, options, view_num, round_num);
            % accept or reject configuration
            if (newE < E)
                config = new_config;
                E = newE;
                numAccept = numAccept + 1;
                energyTrace = [energyTrace E];
            else
                if (rand < exp(-(newE - E)/T))
                    config = new_config;
                    E = newE;
                    numAccept = numAccept + 1;
                    energyTrace = [energyTrace E];
                end
            end
        end
        if (numAccept > 10*N)
            iterate2 = 0;
            unsuccessTemp = 0;
        elseif (numTries > 100*N)
            iterate2 = 0;
            unsuccessTemp = unsuccessTemp + 1;
        end
    end
    % decrease temperature
    T = k*T;
    % stop annealing if at 3 consecutive temperatures nothing happens
    if (unsuccessTemp > 3)
        iterate1 = 0;
    end
end
% assign the final energy and configuration to output
optConfig = objNames{view_num}(config,:);
optEnergy = E;

function [distMean, distSig, angMean, angSig] = pairSitePropertyMLE(Y, L, R, jointMean, jointSig)
% function [distMean, distSig, angMean, angSig] = pairSitePropertyMLE(Y, L, R, jointMean, jointSig)
%
% Given the estimates of the parameters of the distribution of the joint location of 2 sites,
% this function could be used to estimate the parameters of the Gaussian distribution for the
% distance and angle between those 2 sites.
% It first fills in the missing values according to the missing pattern of each observation and
% then calculates the mean and covariance of the entire distances and angles.
%
% Inputs:
%   Y: observations, NxP, where: N=# observations, P=dimensionality of data
%   L: index of the missingness pattern for each observation
%   R: matrix of missingness patterns
%   (jointMean, jointSig): estimate of parameters of joint distribution of the location of sites
% Outputs:
%   (distMean, distSig): estimates of the parameters of the Gaussian distribution of distance
%   (angMean, angSig): estimates of the parameters of the Gaussian distribution of angle
%
[N, P] = size(Y);
[numMP, lengthMP] = size(R);
% pattern templates
secondMiss = '1100';
firstMiss = '0011';
% decouple the mean and sigma for different elements
mX = jointMean(1:2);
mY = jointMean(3:4);
sXX = jointSig(1:2,1:2);
sYY = jointSig(3:4,3:4);
sXY = jointSig(1:2,3:4);
sYX = jointSig(3:4,1:2);
O = Y;
dist = zeros(N,1);
ang = zeros(N,1);
% Go through the observations and fill in the missing values
for n = 1:N
    % find the missingness pattern for current observation
    % and fill in the missing values accordingly
    if (strcmp(R(L(n),:), firstMiss))  % first site missing
        % get conditional mean and variance
        meanX = mX + sXY*inv(sYY)*(Y(n,3:4)' - mY);
        sigX = sXX - sXY*inv(sYY)*sYX;
        % fill in the missing values using the mean of their distribution
        O(n,1:2) = meanX';
    elseif (strcmp(R(L(n),:), secondMiss))  % second site missing
        % get conditional mean and variance
        meanY = mY + sYX*inv(sXX)*(Y(n,1:2)' - mX);
        sigY = sYY - sYX*inv(sXX)*sXY;
        % fill in the missing values using the mean of their distribution
        O(n,3:4) = meanY';
    end
    dist(n) = sqrt((O(n,1)-O(n,3))^2 + (O(n,2)-O(n,4))^2);
    ang(n) = atan((O(n,4)-O(n,2))/(O(n,3)-O(n,1)));
end
% -----------------------------------------
% estimate parameters of distance and angle
% -----------------------------------------
distMean = mean(dist);
distSig = cov(dist);
angMean = mean(ang);
angSig = cov(ang);
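The fill-in rule above is conditional-mean imputation for a jointly Gaussian pair: a missing block is replaced by its conditional expectation given the observed block. A scalar Python sketch of that rule (the patent code applies it to 2-D blocks of a 4x4 covariance; the helper name and values here are illustrative):

```python
def fill_missing_x(y, m_x, m_y, s_xy, s_yy):
    """Conditional-mean imputation for a bivariate Gaussian:
    E[X | Y = y] = m_x + (s_xy / s_yy) * (y - m_y)."""
    return m_x + (s_xy / s_yy) * (y - m_y)

print(fill_missing_x(2.0, 0.0, 1.0, 0.5, 1.0))
```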
function p = pathdef
%PATHDEF Search path defaults.
% PATHDEF returns a string that can be used as input to MATLABPATH
% in order to set the path.
% Copyright 1984-2000 The MathWorks, Inc. % $Revision: 1.4 $ $Date: 2000/06/01 16:19:21 $
% PATH DEFINED HERE - Don"t change this line.
P = I-
'e:\users\shahram\svm_toolbox2;', ...
'e:\users\naoko\matlab\fullbnt;',...
'e:\users\naoko\matlab\fullbnt\bnt;',...
'e:\users\naoko\matlab\fullbnt\bnt\cpds;1,...
'e:\users\naoko\matlab\fullbnt\bnt\cpds\old;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\nhmm;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\hhmm\map;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\hhmm\map\old;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\hhmm\mgram;1,...
'eΛusersVnaoko^atlabWullbnftbnttexamplesWynamic^hmmVmgramtold;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\hhmm\motif;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\hhmm\old;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\hhmm\square;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\hhmm\square\old;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\old;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\slam;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\dynamic\slam\old;1,...
'eΛusers^aokoVmatlabyfullbntVbntVexamplesMimids;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\belprop;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\brutti;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\hme;',...
'eΛusersVnaokoVmatlabWullbnftbnftexamples^tatic^isc;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\models;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\models\old;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\scg;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\structlearn;1,...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\zoubin;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\dtree;',...
'e:\users\naoko\matlab\fullbnt\bnt\examples\static\fgraph;1,...
'e:\users\naoko\matlab\fullbnt\bnt\general;',...
'e:\users\naoko\matlab\fullbnt\bnt\general\old;',...
'e:\users\naoko\matlab\fullbnt\bnt\inference;',...
'e:\users\naoko\matlab\fullbnt\bnt\inference\dynamic;',...
'e:\users\naoko\matlab\fullbnt\bnt\inference\online;',...
'e:\users\naoko\matlab\fullbnt\bnt\inference\static;',...
'e:\users\naoko\matlab\fullbnt\bnt\learning;',...
'e:\users\naoko\matlab\fullbnt\bnt\potentials;',...
'e:\users\naoko\matlab\fullbnt\bnt\potentials\old;',...
'e:\users\naoko\matlab\fullbnt\bnt\potentials\tables;',...
'e:\users\naoko\matlab\fullbnt\graphviz;',...
'e:\users\naoko\matlab\fullbnt\hmm;',...
'e:\users\naoko\matlab\fullbnt\hmm\old;',...
'e:\users\naoko\matlab\fullbnt\kpmstats;',...
'e:\users\naoko\matlab\fullbnt\kpmtools;',...
'e:\users\naoko\matlab\fullbnt\kalman;',...
'e:\users\naoko\matlab\fullbnt\graph;',...
'e:\users\naoko\matlab\fullbnt\graph\c;',...
'e:\users\naoko\matlab\fullbnt\graph\old;',...
'e:\users\naoko\matlab\fullbnt\netlab;',...
'e:\users\naoko\matlab\fullbnt\netlab2;',...
'e:\users\shahram\svm_km\examples;',...
'e:\users\shahram\svm_km\toolsvm;',...
'e:\users\shahram\osu_svm;',...
'e:\users\masaki\vss\2hmm;',...
'e:\matlab_toolbox\bnt\examples\dynamic;',...
'e:\matlab_toolbox\bnt\examples\static\hme;',...
'e:\matlab_toolbox\bnt\examples\static\zoubin;',...
'e:\matlab_toolbox\bnt\examples\static\models;',...
'e:\matlab_toolbox\bnt\examples\static\misc;',...
'e:\matlab_toolbox\bnt\examples\static;',...
'e:\matlab_toolbox\bnt\potentials\tables;',...
'e:\matlab_toolbox\bnt\potentials;',...
'e:\matlab_toolbox\bnt\learning;',...
'e:\matlab_toolbox\bnt\inference\dynamic;',...
'e:\matlab_toolbox\bnt\inference\static;',...
'e:\matlab_toolbox\bnt\inference;',...
'e:\matlab_toolbox\bnt\kalman;',...
'e:\matlab_toolbox\bnt\hmm;',...
'e:\matlab_toolbox\bnt\netlab2;',...
'e:\matlab_toolbox\bnt\netlab;',...
'e:\matlab_toolbox\bnt\stats2;',...
'e:\matlab_toolbox\bnt\stats1;',...
'e:\matlab_toolbox\bnt\graphdraw;',...
'e:\matlab_toolbox\bnt\graph;',...
'e:\matlab_toolbox\bnt\misc;',...
'e:\matlab_toolbox\bnt\general;',...
'e:\matlab_toolbox\bnt\cpds;',...
'e:\matlab_toolbox\bnt;',...
'e:\users\masaki\app\matlab;',...
'e:\users\masaki\vss\data\surveillance;',...
'e:\users\masaki\vss\2hmm-mix;',...
matlabroot,'\toolbox\matlab\general;',...
matlabroot,'\toolbox\matlab\ops;',...
matlabroot,'\toolbox\matlab\lang;',...
matlabroot,'\toolbox\matlab\elmat;',...
matlabroot,'\toolbox\matlab\elfun;',...
matlabroot,'\toolbox\matlab\specfun;',...
matlabroot,'\toolbox\matlab\matfun;',...
matlabroot,'\toolbox\matlab\datafun;',...
matlabroot,'\toolbox\matlab\audio;',...
matlabroot,'\toolbox\matlab\polyfun;',...
matlabroot,'\toolbox\matlab\funfun;',...
matlabroot,'\toolbox\matlab\sparfun;',...
matlabroot,'\toolbox\matlab\graph2d;',...
matlabroot,'\toolbox\matlab\graph3d;',...
matlabroot,'\toolbox\matlab\specgraph;',...
matlabroot,'\toolbox\matlab\graphics;',...
matlabroot,'\toolbox\matlab\uitools;',...
matlabroot,'\toolbox\matlab\strfun;',...
matlabroot,'\toolbox\matlab\iofun;',...
matlabroot,'\toolbox\matlab\timefun;',...
matlabroot,'\toolbox\matlab\datatypes;',...
matlabroot,'\toolbox\matlab\verctrl;',...
matlabroot,'\toolbox\matlab\winfun;',...
matlabroot,'\toolbox\matlab\winfun\comcli;',...
matlabroot,'\toolbox\matlab\demos;',...
matlabroot,'\toolbox\local;',...
matlabroot,'\toolbox\simulink\simulink;',...
matlabroot,'\toolbox\simulink\blocks;',...
matlabroot,'\toolbox\simulink\components;',...
matlabroot,'\toolbox\simulink\fixedandfloat;',...
matlabroot,'\toolbox\simulink\fixedandfloat\fxpdemos;',...
matlabroot,'\toolbox\simulink\fixedandfloat\obsolete;',...
matlabroot,'\toolbox\simulink\simdemos;',...
matlabroot,'\toolbox\simulink\simdemos\aerospace;',...
matlabroot,'\toolbox\simulink\simdemos\automotive;',...
matlabroot,'\toolbox\simulink\simdemos\simfeatures;',...
matlabroot,'\toolbox\simulink\simdemos\simgeneral;',...
matlabroot,'\toolbox\simulink\simdemos\simnew;',...
matlabroot,'\toolbox\simulink\dee;',...
matlabroot,'\toolbox\simulink\dastudio;',...
matlabroot,'\toolbox\stateflow\stateflow;',...
matlabroot,'\toolbox\stateflow\sfdemos;',...
matlabroot,'\toolbox\stateflow\coder;',...
matlabroot,'\toolbox\rtw\rtw;',...
matlabroot,'\toolbox\rtw\rtwdemos;',...
matlabroot,'\toolbox\rtw\rtwdemos\rsimdemos;',...
matlabroot,'\toolbox\rtw\targets\asap2\asap2;',...
matlabroot,'\toolbox\rtw\targets\asap2\asap2\user;',...
matlabroot,'\toolbox\rtw\targets\rtwin\rtwin;',...
matlabroot,'\toolbox\aeroblks\aeroblks;',...
matlabroot,'\toolbox\aeroblks\aerodemos;',...
matlabroot,'\toolbox\ccslink\ccslink;',...
matlabroot,'\toolbox\ccslink\ccsdemos;',...
matlabroot,'\toolbox\ccslink\rtdxblks;',...
matlabroot,'\toolbox\cdma\cdma;',...
matlabroot,'\toolbox\cdma\cdmamasks;',...
matlabroot,'\toolbox\cdma\cdmamex;',...
matlabroot,'\toolbox\cdma\cdmademos;',...
matlabroot,'\toolbox\combuilder\combuilder;',...
matlabroot,'\toolbox\comm\comm;',...
matlabroot,'\toolbox\comm\commdemos;',...
matlabroot,'\toolbox\comm\commobsolete;',...
matlabroot,'\toolbox\commblks\commblks;',...
matlabroot,'\toolbox\commblks\commmasks;',...
matlabroot,'\toolbox\commblks\commmex;',...
matlabroot,'\toolbox\commblks\commblksdemos;',...
matlabroot,'\toolbox\commblks\commblksobsolete\commblksobsolete;',...
matlabroot,'\toolbox\compiler;',...
matlabroot,'\toolbox\control\control;',...
matlabroot,'\toolbox\control\ctrlguis;',...
matlabroot,'\toolbox\control\ctrlobsolete;',...
matlabroot,'\toolbox\control\ctrlutil;',...
matlabroot,'\toolbox\control\ctrldemos;',...
matlabroot,'\toolbox\curvefit\curvefit;',...
matlabroot,'\toolbox\curvefit\cftoolgui;',...
matlabroot,'\toolbox\daq\daq;',...
matlabroot,'\toolbox\daq\daqguis;',...
matlabroot,'\toolbox\daq\daqdemos;',...
matlabroot,'\toolbox\database\database;',...
matlabroot,'\toolbox\database\dbdemos;',...
matlabroot,'\toolbox\database\vqb;',...
matlabroot,'\toolbox\datafeed\datafeed;',...
matlabroot,'\toolbox\datafeed\dfgui;',...
matlabroot,'\toolbox\dials;',...
matlabroot,'\toolbox\dspblks\dspblks;',...
matlabroot,'\toolbox\dspblks\dspmasks;',...
matlabroot,'\toolbox\dspblks\dspmex;',...
matlabroot,'\toolbox\dspblks\dspdemos;',...
matlabroot,'\toolbox\rtw\targets\ecoder;',...
matlabroot,'\toolbox\rtw\targets\ecoder\ecoderdemos;',...
matlabroot,'\toolbox\exlink;',...
matlabroot,'\toolbox\filterdesign\filterdesign;',...
matlabroot,'\toolbox\filterdesign\quantization;',...
matlabroot,'\toolbox\filterdesign\filtdesdemos;',...
matlabroot,'\toolbox\finance\finance;',...
matlabroot,'\toolbox\finance\calendar;',...
matlabroot,'\toolbox\finance\findemos;',...
matlabroot,'\toolbox\finance\finsupport;',...
matlabroot,'\toolbox\finderiv\finderiv;',...
matlabroot,'\toolbox\fixpoint;',...
matlabroot,'\toolbox\ftseries\ftseries;',...
matlabroot,'\toolbox\ftseries\ftsdemos;',...
matlabroot,'\toolbox\ftseries\ftsdata;',...
matlabroot,'\toolbox\ftseries\ftstutorials;',...
matlabroot,'\toolbox\fuzzy\fuzzy;',...
matlabroot,'\toolbox\fuzzy\fuzdemos;',...
matlabroot,'\toolbox\garch\garch;',...
matlabroot,'\toolbox\garch\garchdemos;',...
matlabroot,'\toolbox\ident\ident;',...
matlabroot,'\toolbox\ident\idobsolete;',...
matlabroot,'\toolbox\ident\idguis;',...
matlabroot,'\toolbox\ident\idutils;',...
matlabroot,'\toolbox\ident\iddemos;',...
matlabroot,'\toolbox\ident\idhelp;',...
matlabroot,'\toolbox\images\images;',...
matlabroot,'\toolbox\images\imdemos;',...
matlabroot,'\toolbox\instrument\instrument;',...
matlabroot,'\toolbox\instrument\instrumentdemos;',...
matlabroot,'\toolbox\lmi\lmictrl;',...
matlabroot,'\toolbox\lmi\lmilab;',...
matlabroot,'\toolbox\map\map;',...
matlabroot,'\toolbox\map\mapdisp;',...
matlabroot,'\toolbox\map\mapproj;',...
matlabroot,'\toolbox\matlabxl\matlabxl;',...
matlabroot,'\toolbox\mbc\mbc;',...
matlabroot,'\toolbox\mbc\mbcdata;',...
matlabroot,'\toolbox\mbc\mbcdesign;',...
matlabroot,'\toolbox\mbc\mbcexpr;',...
matlabroot,'\toolbox\mbc\mbcguitools;',...
matlabroot,'\toolbox\mbc\mbclayouts;',...
matlabroot,'\toolbox\mbc\mbcmodels;',...
matlabroot,'\toolbox\mbc\mbcsimulink;',...
matlabroot,'\toolbox\mbc\mbctools;',...
matlabroot,'\toolbox\mbc\mbcview;',...
matlabroot,'\toolbox\physmod\mech\mech;',...
matlabroot,'\toolbox\physmod\mech\mechdemos;',...
matlabroot,'\toolbox\mpc\mpccmds;',...
matlabroot,'\toolbox\mpc\mpcdemos;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\mpc555dk;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\common\canlib\blockset;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\common\canlib\blockset\mfiles;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\common\vectorlib\blockset;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\common\vectorlib\blockset\mfiles;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\common\configuration;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\pil;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\rt\blockset;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\rt\blockset\mfiles;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\rt\blockset\mfiles\simulinkutilities;',...
matlabroot,'\toolbox\rtw\targets\mpc555dk\mpc555demos;',...
matlabroot,'\toolbox\mutools\commands;',...
matlabroot,'\toolbox\mutools\subs;',...
matlabroot,'\toolbox\ncd;',...
matlabroot,'\toolbox\nnet\nnet;',...
matlabroot,'\toolbox\nnet\nnutils;',...
matlabroot,'\toolbox\nnet\nncontrol;',...
matlabroot,'\toolbox\nnet\nndemos;',...
matlabroot,'\toolbox\nnet\nnobsolete;',...
matlabroot,'\toolbox\optim;',...
matlabroot,'\toolbox\pde;',...
matlabroot,'\toolbox\simulink\perftools;',...
matlabroot,'\toolbox\simulink\mdldiff;',...
matlabroot,'\toolbox\simulink\simcoverage;',...
matlabroot,'\toolbox\rtw\accel;',...
matlabroot,'\toolbox\powersys\powersys;',...
matlabroot,'\toolbox\powersys\powerdemo;',...
matlabroot,'\toolbox\reqmgt;',...
matlabroot,'\toolbox\robust;',...
matlabroot,'\toolbox\rptgen;',...
matlabroot,'\toolbox\rptgenext;',...
matlabroot,'\toolbox\runtime;',...
matlabroot,'\toolbox\sb2sl;',...
matlabroot,'\toolbox\signal\signal;',...
matlabroot,'\toolbox\signal\sigtools;',...
matlabroot,'\toolbox\signal\sptoolgui;',...
matlabroot,'\toolbox\signal\sigdemos;',...
matlabroot,'\toolbox\splines;',...
matlabroot,'\toolbox\symbolic;',...
matlabroot,'\toolbox\rtw\targets\tic6000\tic6000;',...
matlabroot,'\toolbox\rtw\targets\tic6000\blks;',...
matlabroot,'\toolbox\vr\vr;',...
matlabroot,'\toolbox\vr\vrdemos;',...
matlabroot,'\toolbox\wavelet\wavelet;',...
matlabroot,'\toolbox\wavelet\wavedemo;',...
matlabroot,'\toolbox\webserver\webserver;',...
matlabroot,'\toolbox\webserver\wsdemos;',...
matlabroot,'\toolbox\rtw\targets\xpc\xpc;',...
matlabroot,'\toolbox\rtw\targets\xpc\target\build\xpcblocks;',...
matlabroot,'\toolbox\rtw\targets\xpc\xpcdemos;',...
matlabroot,'\toolbox\rtw\targets\xpc\target\kernel\embedded;',...
matlabroot,'\work;',...
matlabroot,'\toolbox\stats;',...
];
p = [userpath,p];
M = length(OBJ_PROPS_NORM); % Number of views
CHAMBER_DIST = cell(1,M);
color_vect = cell(1,M);

% Plot a scatterplot for location of the chambers of each view
for m = 1:M
    N = length(OBJ_PROPS_NORM{m});
    % concatenate all chambers feature vectors
    for n = 1:N
        CHAMBER_DIST{m} = [CHAMBER_DIST{m} OBJ_PROPS_NORM{m}{n}];
        color_vect{m} = [color_vect{m} repmat(n,1,length(OBJ_PROPS_NORM{m}{n}))];
    end
end
clear M; clear N; clear m; clear n;
% NOTE:
% Ran with data in: 'allModelsEstimated__changedPriors_Nov14.mat'. The plot results are under
% the 'VIEWREC_HOME/results' directory and are named: 'priors_R1_Vn.fig'
% 11/15/03
for n = 1:10
    ind = find(prior_params_11001{2}{n}(3,:) < 1000);
    figure(n);
    plot(prior_params_11001{2}{n}(1,ind));
    hold on;
    plot(prior_params_11001{2}{n}(3,ind),'r');
    plot(repmat(priors_bounds_11001(2,n,1), length(ind),1),'c*');
    plot(repmat(priors_bounds_11001(2,n,2), length(ind),1),'y*');
    plot(repmat(priors_11001(2,n,1), length(ind),1),'k^');
    xlabel('Key-Frame number'); ylabel('Energy');
    message = strcat('Bounds of penalty incurred by NULL assignment - Round 2, View ', num2str(n));
    title(message);
    legend('Min', 'Max', 'Min-Average', 'Max-Average', 'Estimated Prior');
    hold off
end
clear n; clear ind; clear message;
function [PP] = PseudoProb(config, sites, viewModels, priorModels, objNames, vn, rn, O)
% function [PP] = PseudoProb(config, sites, viewModels, priorModels, objNames, vn, rn, O)
%
% Given the configuration of a constellation, finds the pseudo-posterior probability of
% the configuration as an approximation to the actual posterior probability.
%
% Inputs:
% config: vector of labels
% sites: constellation of parts
% viewModels:
% priorModels:
% objNames:
% vn, rn: view index and round index
% O: options
% Output:
% PP: pseudo-posterior probability
M = length(objNames{vn}); % number of labels
N = length(sites);        % number of parts
localPseudoProbs = zeros(N,1);
% *********************************************************************
% Go over each part and find its normalized local posterior probability
% *********************************************************************
for siteIndex = 1:N
    % -----------------------------------------------------------------
    % Find energy corresponding to every possible labeling of the site
    E = zeros(M,1);
    for m = 1:M
        if (m == M)
            E(m) = priorModels(m,vn,1);
            % actual local energy of site
            if (m == config(siteIndex))
                Elocal = E(m);
            end
        else
            E(m) = E(m) + compSumDist(m, sites(siteIndex), viewModels, vn, rn, O);
            % energy of pair-site cliques
            if (O(end))
                for n = 1:N
                    if (n ~= siteIndex && config(n) ~= M)
                        if (config(n) ~= m)
                            SI = [siteIndex n];
                            LI = [m config(n)];
                            E(m) = E(m) + compSumDist(LI, sites(SI), viewModels, vn, rn, O);
                        else
                            % add penalty for 2 sites when they have the same label
                            E(m) = E(m) + priorModels(m,vn,2);
                        end
                    end
                end
            end
            % actual local energy of site
            if (m == config(siteIndex))
                Elocal = E(m);
            end
        end
    end
    localPseudoProbs(siteIndex) = exp(-Elocal) / sum(exp(-E));
end
PP = prod(localPseudoProbs);
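The normalization inside PseudoProb is a Boltzmann (softmax) form: each site's local posterior is exp(-Elocal) divided by the sum of exp(-E(m)) over all candidate labels, and PP is the product of the per-site terms. A minimal Python sketch of that computation (the energy values below are hypothetical, not taken from the listing):

```python
import math

def pseudo_posterior(energies_per_site, chosen):
    """energies_per_site[i][m] = energy of label m at site i;
    chosen[i] = label actually assigned to site i.
    Returns prod_i exp(-E_i[chosen]) / sum_m exp(-E_i[m])."""
    pp = 1.0
    for E, c in zip(energies_per_site, chosen):
        Z = sum(math.exp(-e) for e in E)   # per-site partition function
        pp *= math.exp(-E[c]) / Z          # normalized local posterior
    return pp

# Example with two sites and three labels (made-up energies)
E = [[0.5, 1.2, 2.0], [0.1, 0.9, 1.5]]
pp = pseudo_posterior(E, [0, 0])
```

Because each factor is a normalized probability, pp always lies in (0, 1); equal energies at a two-label site contribute exactly 1/2.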
for rn = 2:15
    disp(rn);
    E_samp_11001{rn} = getSampConstEnergy(SampConstNorm{rn}, VM_COMP, priors_11001, OBJ_NAMES, rn, [1 1 0 0 1]);
end
clear rn;
%for l = 1:15
% Round 1
load 'SampledConstellation_noFP_R1_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(1), ' View: ', num2str(n));
    disp(message);
    a = SC_1{n};
    L_samp_noFP_R1{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 1);
end
save 'SC_noFP_R1_5000perView_energy_Nov26.mat' L_samp_noFP_R1
clear SC_1; clear L_samp_noFP_R1;
%end
% Round 2
load 'SampledConstellation_noFP_R2_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(2), ' View: ', num2str(n));
    disp(message);
    a = SC_2{n};
    L_samp_noFP_R2{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 2);
end
save 'SC_noFP_R2_5000perView_energy_Nov26.mat' L_samp_noFP_R2
clear SC_2; clear L_samp_noFP_R2;
% Round 3
load 'SampledConstellation_noFP_R3_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(3), ' View: ', num2str(n));
    disp(message);
    a = SC_3{n};
    L_samp_noFP_R3{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 3);
end
save 'SC_noFP_R3_5000perView_energy_Nov26.mat' L_samp_noFP_R3
clear SC_3; clear L_samp_noFP_R3;
% Round 4
load 'SampledConstellation_noFP_R4_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(4), ' View: ', num2str(n));
    disp(message);
    a = SC_4{n};
    L_samp_noFP_R4{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 4);
end
save 'SC_noFP_R4_5000perView_energy_Nov26.mat' L_samp_noFP_R4
clear SC_4; clear L_samp_noFP_R4;
% Round 5
load 'SampledConstellation_noFP_R5_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(5), ' View: ', num2str(n));
    disp(message);
    a = SC_5{n};
    L_samp_noFP_R5{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 5);
end
save 'SC_noFP_R5_5000perView_energy_Nov26.mat' L_samp_noFP_R5
clear SC_5; clear L_samp_noFP_R5;
% Round 6
load 'SampledConstellation_noFP_R6_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(6), ' View: ', num2str(n));
    disp(message);
    a = SC_6{n};
    L_samp_noFP_R6{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 6);
end
save 'SC_noFP_R6_5000perView_energy_Nov26.mat' L_samp_noFP_R6
clear SC_6; clear L_samp_noFP_R6;
% Round 7
load 'SampledConstellation_noFP_R7_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(7), ' View: ', num2str(n));
    disp(message);
    a = SC_7{n};
    L_samp_noFP_R7{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 7);
end
save 'SC_noFP_R7_5000perView_energy_Nov26.mat' L_samp_noFP_R7
clear SC_7; clear L_samp_noFP_R7;
% Round 8
load 'SampledConstellation_noFP_R8_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(8), ' View: ', num2str(n));
    disp(message);
    a = SC_8{n};
    L_samp_noFP_R8{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 8);
end
save 'SC_noFP_R8_5000perView_energy_Nov26.mat' L_samp_noFP_R8
clear SC_8; clear L_samp_noFP_R8;
% Round 9
load 'SampledConstellation_noFP_R9_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(9), ' View: ', num2str(n));
    disp(message);
    a = SC_9{n};
    L_samp_noFP_R9{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 9);
end
save 'SC_noFP_R9_5000perView_energy_Nov26.mat' L_samp_noFP_R9
clear SC_9; clear L_samp_noFP_R9;
% Round 10
load 'SampledConstellation_noFP_R10_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(10), ' View: ', num2str(n));
    disp(message);
    a = SC_10{n};
    L_samp_noFP_R10{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 10);
end
save 'SC_noFP_R10_5000perView_energy_Nov26.mat' L_samp_noFP_R10
clear SC_10; clear L_samp_noFP_R10;
% Round 11
load 'SampledConstellation_noFP_R11_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(11), ' View: ', num2str(n));
    disp(message);
    a = SC_11{n};
    L_samp_noFP_R11{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 11);
end
save 'SC_noFP_R11_5000perView_energy_Nov26.mat' L_samp_noFP_R11
clear SC_11; clear L_samp_noFP_R11;
% Round 12
load 'SampledConstellation_noFP_R12_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(12), ' View: ', num2str(n));
    disp(message);
    a = SC_12{n};
    L_samp_noFP_R12{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 12);
end
save 'SC_noFP_R12_5000perView_energy_Nov26.mat' L_samp_noFP_R12
clear SC_12; clear L_samp_noFP_R12;
% Round 13
load 'SampledConstellation_noFP_R13_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(13), ' View: ', num2str(n));
    disp(message);
    a = SC_13{n};
    L_samp_noFP_R13{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 13);
end
save 'SC_noFP_R13_5000perView_energy_Nov26.mat' L_samp_noFP_R13
clear SC_13; clear L_samp_noFP_R13;
% Round 14
load 'SampledConstellation_noFP_R14_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(14), ' View: ', num2str(n));
    disp(message);
    a = SC_14{n};
    L_samp_noFP_R14{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 14);
end
save 'SC_noFP_R14_5000perView_energy_Nov26.mat' L_samp_noFP_R14
clear SC_14; clear L_samp_noFP_R14;
% Round 15
load 'SampledConstellation_noFP_R15_5000perView_Nov26.mat'
for n = 1:10
    message = strcat('Round: ', num2str(15), ' View: ', num2str(n));
    disp(message);
    a = SC_15{n};
    L_samp_noFP_R15{n} = crossModelLabeler(a, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], n, 0, 15);
end
save 'SC_noFP_R15_5000perView_energy_Nov26.mat' L_samp_noFP_R15
clear SC_15; clear L_samp_noFP_R15;
clear l; clear n; clear a; clear message;
% NOTE: rn = index of round of experiment
for rn = 1:15
    % NOTE: vn = index of view
    for vn = 1:10
        % Get all key-frames under current view across the 14 training seqs
        % in this round of experiment
        numObjs = length(OBJ_NAMES{vn}); % number of objects in current view

        % Go over all echos and only get the ones in current round.
        % Put them in a big list and pass them to crossLabeler.
        viewConstellations = []; % holds all the constellations under current view

        % NOTE: n = index of echo sequences
        for n = 1:15
            if (n ~= rn)
                % find the KFs in current echo with view name equal to the current view
                kf_ind = 1 + (findstr([KF_INFO_NORM{n}.view_name], VIEW_NAMES{vn}) - 1)/3;
                viewConstellations = [viewConstellations KF_INFO_NORM{n}(kf_ind)];
            end
        end
        message = strcat('Round: ', num2str(rn), ' View: ', num2str(vn));
        disp(message);
        L_train_noFP{rn,vn} = crossModelLabeler(viewConstellations, VM_COMP, priors_11001, OBJ_NAMES, VIEW_NAMES, [1 1 0 0 1], vn, 0, rn);
    end
end
clear rn; clear vn; clear a; clear message; clear kf_ind; clear n;
clear viewConstellations; clear numObjs;
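The nested loops above implement a leave-one-sequence-out protocol: in round rn, key-frames from every echo sequence except rn (the `if (n ~= rn)` test) form the training pool. A hedged Python sketch of just that partitioning, using the listing's count of 15 sequences:

```python
def leave_one_out_rounds(n_seqs=15):
    """Map each held-out sequence index rn to its training pool:
    every other sequence, mirroring the `if (n ~= rn)` filter."""
    return {rn: [n for n in range(1, n_seqs + 1) if n != rn]
            for rn in range(1, n_seqs + 1)}

rounds = leave_one_out_rounds()
# every round trains on the 14 remaining echo sequences
```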
runTraining.m
% Round 11
disp('*************************************************');
message = strcat('Learning model at round: ', num2str(11));
disp(message);

% load data
load 'tmp.mat';
load 'SC_noFP_R11_5000perView_energy_Nov26.mat';

% Learning best classifier for each pair of classes
classifiers_R11 = cell(10);
for i = 1:10
    for j = (i+1):10
        message = strcat('Learning classifier: (', num2str(i), ',', num2str(j), ')');
        disp(message);
        Ti = length(L_samp_noFP_R11{i});
        Tj = length(L_samp_noFP_R11{j});

        % Classifier for distinguishing between classes 'i' and 'j'

        % Get training data
        inVects = [L_samp_noFP_R11{i}.energy L_samp_noFP_R11{j}.energy];
        outVects = [repmat(1,1,Ti) repmat(-1,1,Tj)];

        % Normalize data
        [pn, minp, maxp] = premnmx(inVects);
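premnmx maps each row of its input linearly onto [-1, 1] and returns the row minimum and maximum so the identical mapping can later be applied to test vectors. A rough Python equivalent for the single energy row used here (the sample values are made up):

```python
def premnmx_row(x):
    """Linearly map a 1-D sequence onto [-1, 1], as MATLAB's premnmx
    does per row; also return (minp, maxp) so new data can be rescaled
    with the same mapping."""
    minp, maxp = min(x), max(x)
    pn = [2.0 * (v - minp) / (maxp - minp) - 1.0 for v in x]
    return pn, minp, maxp

# Made-up energies, just to show the mapping
pn, minp, maxp = premnmx_row([3.0, 5.0, 7.0])
# pn == [-1.0, 0.0, 1.0]
```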
        % Learn the best combination of (C,Gamma)
        % for RBF-kerneled classifier
        errorCG = zeros(6);
        for L1 = 1:6
            C = 2^(2*(L1-4)+1);
            for L2 = 1:6
                Gamma = 2^(2*(L2-4)+1);
                message = strcat('SVM classifier for parameters with index: (', num2str(L1), ',', num2str(L2), ')');
                disp(message);

                % Divide the training data into 5 segments and learn a
                % classifier for each of the 5 folds, and find error on
                % validation set
                er = 0;
                for m = 1:5
                    Ii = unidrnd(5);
                    Ij = unidrnd(5);

                    % Divide the data into training and validation sets
                    Si = 1 + floor(Ti/5)*(Ii-1); Ei = floor(Ti/5)*Ii;
                    Sj = 1 + floor(Tj/5)*(Ij-1); Ej = floor(Tj/5)*Ij;
                    validationSetX = [pn(Si:Ei) pn(Ti+Sj:Ti+Ej)];
                    validationSetY = [outVects(Si:Ei) outVects(Ti+Sj:Ti+Ej)];
                    trainingSetX = [pn(1:(Si-1)) pn((Ei+1):(Ti+Sj-1)) pn((Ti+Ej+1):(Ti+Tj))];
                    trainingSetY = [outVects(1:(Si-1)) outVects((Ei+1):(Ti+Sj-1)) outVects((Ti+Ej+1):(Ti+Tj))];

                    % Train a binary SVM classifier
                    Parameters = [2 3 Gamma 0 C];
                    [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(trainingSetX, trainingSetY, Parameters);
                    [ClassRate, DecVal, Ns, ConfMatrix, PreLab] = SVMTest(validationSetX, validationSetY, AlphaY, SVs, Bias, Parameters, nSV, nLabel);

                    % Add error to total error
                    er = er + length(find((PreLab-validationSetY) ~= 0))/length(PreLab);
                end
                errorCG(L1,L2) = er/5;
            end
        end

        % Get the combination (C,Gamma) that gives lowest error
        [minV, minI] = min(errorCG);
        [minVV, minII] = min(minV);
        bestL1 = minI(minII);
        bestL2 = minII;

        % Now train a classifier using the entire training data
        % with this optimal combination of (C,Gamma)
        bestGamma = 2^(2*(bestL2-4)+1);
        bestC = 2^(2*(bestL1-4)+1);
        Parameters = [2 3 bestGamma 0 bestC];
        [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(pn, outVects, Parameters);
        classifiers_property.minp = minp;
        classifiers_property.maxp = maxp;
        classifiers_property.C = bestC;
        classifiers_property.Gamma = bestGamma;
        classifiers_property.AlphaY = AlphaY;
        classifiers_property.SVs = SVs;
        classifiers_property.Bias = Bias;
        classifiers_property.Parameters = Parameters;
        classifiers_property.nSV = nSV;
        classifiers_property.nLabel = nLabel;
        classifiers_R11{i,j}.P = classifiers_property;
    end
end
save 'SVM_classifier_noFP_R11_Nov30.mat' classifiers_R11;
clear;
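The parameter search above evaluates C = 2^(2*(L1-4)+1) and Gamma = 2^(2*(L2-4)+1) for L1, L2 = 1..6, i.e. the logarithmic grid {2^-5, 2^-3, 2^-1, 2, 8, 32}, and keeps the pair with the lowest averaged 5-fold validation error. A Python sketch of the grid and the argmin step (the errorCG matrix here is a hypothetical error surface, not real results):

```python
# Logarithmic grid from the listing: exponents 2*(L-4)+1 = -5, -3, ..., 5
grid = [2.0 ** (2 * (L - 4) + 1) for L in range(1, 7)]

def best_pair(errorCG):
    """Pick (C, Gamma) minimizing a 6x6 validation-error matrix,
    where the row index selects C and the column index selects Gamma."""
    _, i, j = min((errorCG[i][j], i, j) for i in range(6) for j in range(6))
    return grid[i], grid[j]

# Hypothetical error surface whose minimum sits at row 3, column 2
errorCG = [[1.0] * 6 for _ in range(6)]
errorCG[3][2] = 0.1
C, Gamma = best_pair(errorCG)
# C == grid[3] == 2.0 and Gamma == grid[2] == 0.5
```

A flat min over (error, row, column) tuples stands in for the listing's two-stage `min` over columns and then over the column minima; both select the same cell.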
% Round 12
disp('*************************************************');
message = strcat('Learning model at round: ', num2str(12));
disp(message);

% load data
load 'tmp.mat';
load 'SC_noFP_R12_5000perView_energy_Nov26.mat';

% Learning best classifier for each pair of classes
classifiers_R12 = cell(10);
for i = 1:10
    for j = (i+1):10
        message = strcat('Learning classifier: (', num2str(i), ',', num2str(j), ')');
        disp(message);
        Ti = length(L_samp_noFP_R12{i});
        Tj = length(L_samp_noFP_R12{j});

        % Classifier for distinguishing between classes 'i' and 'j'

        % Get training data
        inVects = [L_samp_noFP_R12{i}.energy L_samp_noFP_R12{j}.energy];
        outVects = [repmat(1,1,Ti) repmat(-1,1,Tj)];

        % Normalize data
        [pn, minp, maxp] = premnmx(inVects);

        % Learn the best combination of (C,Gamma)
        % for RBF-kerneled classifier
        errorCG = zeros(6);
        for L1 = 1:6
            C = 2^(2*(L1-4)+1);
            for L2 = 1:6
                Gamma = 2^(2*(L2-4)+1);
                message = strcat('SVM classifier for parameters with index: (', num2str(L1), ',', num2str(L2), ')');
                disp(message);

                % Divide the training data into 5 segments and learn a
                % classifier for each of the 5 folds, and find error on
                % validation set
                er = 0;
                for m = 1:5
                    Ii = unidrnd(5);
                    Ij = unidrnd(5);

                    % Divide the data into training and validation sets
                    Si = 1 + floor(Ti/5)*(Ii-1); Ei = floor(Ti/5)*Ii;
                    Sj = 1 + floor(Tj/5)*(Ij-1); Ej = floor(Tj/5)*Ij;
                    validationSetX = [pn(Si:Ei) pn(Ti+Sj:Ti+Ej)];
                    validationSetY = [outVects(Si:Ei) outVects(Ti+Sj:Ti+Ej)];
                    trainingSetX = [pn(1:(Si-1)) pn((Ei+1):(Ti+Sj-1)) pn((Ti+Ej+1):(Ti+Tj))];
                    trainingSetY = [outVects(1:(Si-1)) outVects((Ei+1):(Ti+Sj-1)) outVects((Ti+Ej+1):(Ti+Tj))];

                    % Train a binary SVM classifier
                    Parameters = [2 3 Gamma 0 C];
                    [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(trainingSetX, trainingSetY, Parameters);
                    [ClassRate, DecVal, Ns, ConfMatrix, PreLab] = SVMTest(validationSetX, validationSetY, AlphaY, SVs, Bias, Parameters, nSV, nLabel);

                    % Add error to total error
                    er = er + length(find((PreLab-validationSetY) ~= 0))/length(PreLab);
                end
                errorCG(L1,L2) = er/5;
            end
        end

        % Get the combination (C,Gamma) that gives lowest error
        [minV, minI] = min(errorCG);
        [minVV, minII] = min(minV);
        bestL1 = minI(minII);
        bestL2 = minII;

        % Now train a classifier using the entire training data
        % with this optimal combination of (C,Gamma)
        bestGamma = 2^(2*(bestL2-4)+1);
        bestC = 2^(2*(bestL1-4)+1);
        Parameters = [2 3 bestGamma 0 bestC];
        [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(pn, outVects, Parameters);
        classifiers_property.minp = minp;
        classifiers_property.maxp = maxp;
        classifiers_property.C = bestC;
        classifiers_property.Gamma = bestGamma;
        classifiers_property.AlphaY = AlphaY;
        classifiers_property.SVs = SVs;
        classifiers_property.Bias = Bias;
        classifiers_property.Parameters = Parameters;
        classifiers_property.nSV = nSV;
        classifiers_property.nLabel = nLabel;
        classifiers_R12{i,j}.P = classifiers_property;
    end
end
save 'SVM_classifier_noFP_R12_Nov30.mat' classifiers_R12;
clear;
% Round 13
message = strcat('Learning model at round: ', num2str(13));
disp(message);

% load data
load 'tmp.mat';
load 'SC_noFP_R13_5000perView_energy_Nov26.mat';

% Learning best classifier for each pair of classes
classifiers_R13 = cell(10);
for i = 1:10
    for j = (i+1):10
        message = strcat('Learning classifier: (', num2str(i), ',', num2str(j), ')');
        disp(message);
        Ti = length(L_samp_noFP_R13{i});
        Tj = length(L_samp_noFP_R13{j});

        % Classifier for distinguishing between classes 'i' and 'j'

        % Get training data
        inVects = [L_samp_noFP_R13{i}.energy L_samp_noFP_R13{j}.energy];
        outVects = [repmat(1,1,Ti) repmat(-1,1,Tj)];

        % Normalize data
        [pn, minp, maxp] = premnmx(inVects);

        % Learn the best combination of (C,Gamma)
        % for RBF-kerneled classifier
        errorCG = zeros(6);
        for L1 = 1:6
            C = 2^(2*(L1-4)+1);
            for L2 = 1:6
                Gamma = 2^(2*(L2-4)+1);
                message = strcat('SVM classifier for parameters with index: (', num2str(L1), ',', num2str(L2), ')');
                disp(message);

                % Divide the training data into 5 segments and learn a
                % classifier for each of the 5 folds, and find error on
                % validation set
                er = 0;
                for m = 1:5
                    Ii = unidrnd(5);
                    Ij = unidrnd(5);

                    % Divide the data into training and validation sets
                    Si = 1 + floor(Ti/5)*(Ii-1); Ei = floor(Ti/5)*Ii;
                    Sj = 1 + floor(Tj/5)*(Ij-1); Ej = floor(Tj/5)*Ij;
                    validationSetX = [pn(Si:Ei) pn(Ti+Sj:Ti+Ej)];
                    validationSetY = [outVects(Si:Ei) outVects(Ti+Sj:Ti+Ej)];
                    trainingSetX = [pn(1:(Si-1)) pn((Ei+1):(Ti+Sj-1)) pn((Ti+Ej+1):(Ti+Tj))];
                    trainingSetY = [outVects(1:(Si-1)) outVects((Ei+1):(Ti+Sj-1)) outVects((Ti+Ej+1):(Ti+Tj))];

                    % Train a binary SVM classifier
                    Parameters = [2 3 Gamma 0 C];
                    [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(trainingSetX, trainingSetY, Parameters);
                    [ClassRate, DecVal, Ns, ConfMatrix, PreLab] = SVMTest(validationSetX, validationSetY, AlphaY, SVs, Bias, Parameters, nSV, nLabel);

                    % Add error to total error
                    er = er + length(find((PreLab-validationSetY) ~= 0))/length(PreLab);
                end
                errorCG(L1,L2) = er/5;
            end
        end

        % Get the combination (C,Gamma) that gives lowest error
        [minV, minI] = min(errorCG);
        [minVV, minII] = min(minV);
        bestL1 = minI(minII);
        bestL2 = minII;

        % Now train a classifier using the entire training data
        % with this optimal combination of (C,Gamma)
        bestGamma = 2^(2*(bestL2-4)+1);
        bestC = 2^(2*(bestL1-4)+1);
        Parameters = [2 3 bestGamma 0 bestC];
        [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(pn, outVects, Parameters);
        classifiers_property.minp = minp;
        classifiers_property.maxp = maxp;
        classifiers_property.C = bestC;
        classifiers_property.Gamma = bestGamma;
        classifiers_property.AlphaY = AlphaY;
        classifiers_property.SVs = SVs;
        classifiers_property.Bias = Bias;
        classifiers_property.Parameters = Parameters;
        classifiers_property.nSV = nSV;
        classifiers_property.nLabel = nLabel;
        classifiers_R13{i,j}.P = classifiers_property;
    end
end
save 'SVM_classifier_noFP_R13_Nov30.mat' classifiers_R13;
clear;
% Round 14
message = strcat('Learning model at round: ', num2str(14));
disp(message);

% load data
load 'tmp.mat';
load 'SC_noFP_R14_5000perView_energy_Nov26.mat';

% Learning best classifier for each pair of classes
classifiers_R14 = cell(10);
for i = 1:10
    for j = (i+1):10
        message = strcat('Learning classifier: (', num2str(i), ',', num2str(j), ')');
        disp(message);
        Ti = length(L_samp_noFP_R14{i});
        Tj = length(L_samp_noFP_R14{j});

        % Classifier for distinguishing between classes 'i' and 'j'

        % Get training data
        inVects = [L_samp_noFP_R14{i}.energy L_samp_noFP_R14{j}.energy];
        outVects = [repmat(1,1,Ti) repmat(-1,1,Tj)];

        % Normalize data
        [pn, minp, maxp] = premnmx(inVects);

        % Learn the best combination of (C,Gamma)
        % for RBF-kerneled classifier
        errorCG = zeros(6);
        for L1 = 1:6
            C = 2^(2*(L1-4)+1);
            for L2 = 1:6
                Gamma = 2^(2*(L2-4)+1);
                message = strcat('SVM classifier for parameters with index: (', num2str(L1), ',', num2str(L2), ')');
                disp(message);

                % Divide the training data into 5 segments and learn a
                % classifier for each of the 5 folds, and find error on
                % validation set
                er = 0;
                for m = 1:5
                    Ii = unidrnd(5);
                    Ij = unidrnd(5);

                    % Divide the data into training and validation sets
                    Si = 1 + floor(Ti/5)*(Ii-1); Ei = floor(Ti/5)*Ii;
                    Sj = 1 + floor(Tj/5)*(Ij-1); Ej = floor(Tj/5)*Ij;
                    validationSetX = [pn(Si:Ei) pn(Ti+Sj:Ti+Ej)];
                    validationSetY = [outVects(Si:Ei) outVects(Ti+Sj:Ti+Ej)];
                    trainingSetX = [pn(1:(Si-1)) pn((Ei+1):(Ti+Sj-1)) pn((Ti+Ej+1):(Ti+Tj))];
                    trainingSetY = [outVects(1:(Si-1)) outVects((Ei+1):(Ti+Sj-1)) outVects((Ti+Ej+1):(Ti+Tj))];

                    % Train a binary SVM classifier
                    Parameters = [2 3 Gamma 0 C];
                    [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(trainingSetX, trainingSetY, Parameters);
                    [ClassRate, DecVal, Ns, ConfMatrix, PreLab] = SVMTest(validationSetX, validationSetY, AlphaY, SVs, Bias, Parameters, nSV, nLabel);

                    % Add error to total error
                    er = er + length(find((PreLab-validationSetY) ~= 0))/length(PreLab);
                end
                errorCG(L1,L2) = er/5;
            end
        end

        % Get the combination (C,Gamma) that gives lowest error
        [minV, minI] = min(errorCG);
        [minVV, minII] = min(minV);
        bestL1 = minI(minII);
        bestL2 = minII;

        % Now train a classifier using the entire training data
        % with this optimal combination of (C,Gamma)
        bestGamma = 2^(2*(bestL2-4)+1);
        bestC = 2^(2*(bestL1-4)+1);
        Parameters = [2 3 bestGamma 0 bestC];
        [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(pn, outVects, Parameters);
        classifiers_property.minp = minp;
        classifiers_property.maxp = maxp;
        classifiers_property.C = bestC;
        classifiers_property.Gamma = bestGamma;
        classifiers_property.AlphaY = AlphaY;
        classifiers_property.SVs = SVs;
        classifiers_property.Bias = Bias;
        classifiers_property.Parameters = Parameters;
        classifiers_property.nSV = nSV;
        classifiers_property.nLabel = nLabel;
        classifiers_R14{i,j}.P = classifiers_property;
    end
end
save 'SVM_classifier_noFP_R14_Nov30.mat' classifiers_R14;
clear;
% Round 15
disp('*************************************************');
message = strcat('Learning model at round: ', num2str(15));
disp(message);

% load data
load 'tmp.mat';
load 'SC_noFP_R15_5000perView_energy_Nov26.mat';

% Learn the best classifier for each pair of classes
classifiers_R15 = cell(10);
for i = 1:10
for j = (i+1):10
    message = strcat('Learning classifier: (', num2str(i), ',', num2str(j), ')');
    disp(message);

    Ti = length(L_samp_noFP_R15{i});
    Tj = length(L_samp_noFP_R15{j});

    % Classifier for distinguishing between classes 'i' and 'j'

    % Get training data
    inVects = [L_samp_noFP_R15{i}.energy L_samp_noFP_R15{j}.energy];
    outVects = [repmat(1,1,Ti) repmat(-1,1,Tj)];

    % Normalize data
    [pn, minp, maxp] = premnmx(inVects);

    % Learn the best combination of (C, Gamma)
    % for the RBF-kerneled classifier
    errorCG = zeros(6);
    for L1 = 1:6
        C = 2^(2*(L1-4)+1);
        for L2 = 1:6
            Gamma = 2^(2*(L2-4)+1);
            message = strcat('SVM classifier for parameters with index: (', num2str(L1), ',', num2str(L2), ')');
            disp(message);

            % Divide the training data into 5 segments, learn a
            % classifier for each of the 5 folds, and find the error on
            % the validation set
            er = 0;
            for m = 1:5
                Ii = unidrnd(5); Ij = unidrnd(5);

                % Divide the data into training and validation sets
                Si = 1 + floor(Ti/5)*(Ii-1); Ei = floor(Ti/5)*Ii;
                Sj = 1 + floor(Tj/5)*(Ij-1); Ej = floor(Tj/5)*Ij;
                validationSetX = [pn(Si:Ei) pn(Ti+Sj:Ti+Ej)];
                validationSetY = [outVects(Si:Ei) outVects(Ti+Sj:Ti+Ej)];
                trainingSetX = [pn(1:(Si-1)) pn((Ei+1):(Ti+Sj-1)) pn((Ti+Ej+1):(Ti+Tj))];
                trainingSetY = [outVects(1:(Si-1)) outVects((Ei+1):(Ti+Sj-1)) outVects((Ti+Ej+1):(Ti+Tj))];

                % Train a binary SVM classifier
                Parameters = [2 3 Gamma 0 C];
                [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(trainingSetX, trainingSetY, Parameters);
                [ClassRate, DecVal, Ns, ConfMatrix, PreLab] = SVMTest(validationSetX, validationSetY, AlphaY, SVs, Bias, Parameters, nSV, nLabel);

                % Add error to total error
                er = er + length(find((PreLab-validationSetY) ~= 0))/length(PreLab);
            end
            errorCG(L1,L2) = er/5;
        end
    end

    % Get the combination (C, Gamma) that gives the lowest error
    [minV, minI] = min(errorCG);
    [minVV, minII] = min(minV);
    bestL1 = minI(minII);
    bestL2 = minII;

    % Now train a classifier using the entire training data
    % with this optimal combination of (C, Gamma)
    bestGamma = 2^(2*(bestL2-4)+1);
    bestC = 2^(2*(bestL1-4)+1);
    Parameters = [2 3 bestGamma 0 bestC];
    [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(pn, outVects, Parameters);
    classifiers_property.minp = minp;
    classifiers_property.maxp = maxp;
    classifiers_property.C = bestC;
    classifiers_property.Gamma = bestGamma;
    classifiers_property.AlphaY = AlphaY;
    classifiers_property.SVs = SVs;
    classifiers_property.Bias = Bias;
    classifiers_property.Parameters = Parameters;
    classifiers_property.nSV = nSV;
    classifiers_property.nLabel = nLabel;
    classifiers_R15{i,j}.P = classifiers_property;
end
end
save 'SVM_classifier_noFP_R15_Nov30.mat' classifiers_R15;
clear;
% Round 11
message = strcat('Learning model at round: ', num2str(11));
disp(message);

% load data
load 'tmp.mat';
load 'SC_noFP_R11_5000perView_energy_Nov26.mat';

% Learn the best classifier for each pair of classes
classifiers_R11 = cell(10);
for i = 1:10
for j = (i+1):10
    message = strcat('Learning classifier: (', num2str(i), ',', num2str(j), ')');
    disp(message);

    Ti = length(L_samp_noFP_R11{i});
    Tj = length(L_samp_noFP_R11{j});

    % Classifier for distinguishing between classes 'i' and 'j'

    % Get training data
    inVects = [L_samp_noFP_R11{i}.energy L_samp_noFP_R11{j}.energy];
    outVects = [repmat(1,1,Ti) repmat(-1,1,Tj)];

    % Normalize data
    [pn, minp, maxp] = premnmx(inVects);

    % Learn the best combination of (C, Gamma)
    % for the RBF-kerneled classifier
    errorCG = zeros(6);
    for L1 = 1:6
        C = 2^(2*(L1-4)+1);
        for L2 = 1:6
            Gamma = 2^(2*(L2-4)+1);
            message = strcat('SVM classifier for parameters with index: (', num2str(L1), ',', num2str(L2), ')');
            disp(message);

            % Divide the training data into 3 segments, learn a
            % classifier for each of the 3 folds, and find the error on
            % the validation set
            er = 0;
            for m = 1:3
                Ii = unidrnd(3); Ij = unidrnd(3);

                % Divide the data into training and validation sets
                Si = 1 + floor(Ti/3)*(Ii-1); Ei = floor(Ti/3)*Ii;
                Sj = 1 + floor(Tj/3)*(Ij-1); Ej = floor(Tj/3)*Ij;
                validationSetX = [pn(Si:Ei) pn(Ti+Sj:Ti+Ej)];
                validationSetY = [outVects(Si:Ei) outVects(Ti+Sj:Ti+Ej)];
                trainingSetX = [pn(1:(Si-1)) pn((Ei+1):(Ti+Sj-1)) pn((Ti+Ej+1):(Ti+Tj))];
                trainingSetY = [outVects(1:(Si-1)) outVects((Ei+1):(Ti+Sj-1)) outVects((Ti+Ej+1):(Ti+Tj))];

                % Train a binary SVM classifier
                Parameters = [2 3 Gamma 0 C];
                [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(trainingSetX, trainingSetY, Parameters);
                [ClassRate, DecVal, Ns, ConfMatrix, PreLab] = SVMTest(validationSetX, validationSetY, AlphaY, SVs, Bias, Parameters, nSV, nLabel);

                % Add error to total error
                er = er + length(find((PreLab-validationSetY) ~= 0))/length(PreLab);
            end
            errorCG(L1,L2) = er/3;
        end
    end

    % Get the combination (C, Gamma) that gives the lowest error
    [minV, minI] = min(errorCG);
    [minVV, minII] = min(minV);
    bestL1 = minI(minII);
    bestL2 = minII;

    % Now train a classifier using the entire training data
    % with this optimal combination of (C, Gamma)
    bestGamma = 2^(2*(bestL2-4)+1);
    bestC = 2^(2*(bestL1-4)+1);
    Parameters = [2 3 bestGamma 0 bestC];
    [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(pn, outVects, Parameters);
    classifiers_property.minp = minp;
    classifiers_property.maxp = maxp;
    classifiers_property.C = bestC;
    classifiers_property.Gamma = bestGamma;
    classifiers_property.AlphaY = AlphaY;
    classifiers_property.SVs = SVs;
    classifiers_property.Bias = Bias;
    classifiers_property.Parameters = Parameters;
    classifiers_property.nSV = nSV;
    classifiers_property.nLabel = nLabel;
    classifiers_R11{i,j}.P = classifiers_property;
end
end
save 'SVM_classifier_noFP_R11_Nov30.mat' classifiers_R11;
clear;
% Round 12
disp('*************************************************');
message = strcat('Learning model at round: ', num2str(12));
disp(message);

% load data
load 'tmp.mat';
load 'SC_noFP_R12_5000perView_energy_Nov26.mat';

% Learn the best classifier for each pair of classes
classifiers_R12 = cell(10);
for i = 1:10
for j = (i+1):10
    message = strcat('Learning classifier: (', num2str(i), ',', num2str(j), ')');
    disp(message);

    Ti = length(L_samp_noFP_R12{i});
    Tj = length(L_samp_noFP_R12{j});

    % Classifier for distinguishing between classes 'i' and 'j'

    % Get training data
    inVects = [L_samp_noFP_R12{i}.energy L_samp_noFP_R12{j}.energy];
    outVects = [repmat(1,1,Ti) repmat(-1,1,Tj)];

    % Normalize data
    [pn, minp, maxp] = premnmx(inVects);

    % Learn the best combination of (C, Gamma)
    % for the RBF-kerneled classifier
    errorCG = zeros(6);
    for L1 = 1:6
        C = 2^(2*(L1-4)+1);
        for L2 = 1:6
            Gamma = 2^(2*(L2-4)+1);
            message = strcat('SVM classifier for parameters with index: (', num2str(L1), ',', num2str(L2), ')');
            disp(message);

            % Divide the training data into 3 segments, learn a
            % classifier for each of the 3 folds, and find the error on
            % the validation set
            er = 0;
            for m = 1:3
                Ii = unidrnd(3); Ij = unidrnd(3);

                % Divide the data into training and validation sets
                Si = 1 + floor(Ti/3)*(Ii-1); Ei = floor(Ti/3)*Ii;
                Sj = 1 + floor(Tj/3)*(Ij-1); Ej = floor(Tj/3)*Ij;
                validationSetX = [pn(Si:Ei) pn(Ti+Sj:Ti+Ej)];
                validationSetY = [outVects(Si:Ei) outVects(Ti+Sj:Ti+Ej)];
                trainingSetX = [pn(1:(Si-1)) pn((Ei+1):(Ti+Sj-1)) pn((Ti+Ej+1):(Ti+Tj))];
                trainingSetY = [outVects(1:(Si-1)) outVects((Ei+1):(Ti+Sj-1)) outVects((Ti+Ej+1):(Ti+Tj))];

                % Train a binary SVM classifier
                Parameters = [2 3 Gamma 0 C];
                [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(trainingSetX, trainingSetY, Parameters);
                [ClassRate, DecVal, Ns, ConfMatrix, PreLab] = SVMTest(validationSetX, validationSetY, AlphaY, SVs, Bias, Parameters, nSV, nLabel);

                % Add error to total error
                er = er + length(find((PreLab-validationSetY) ~= 0))/length(PreLab);
            end
            errorCG(L1,L2) = er/3;
        end
    end

    % Get the combination (C, Gamma) that gives the lowest error
    [minV, minI] = min(errorCG);
    [minVV, minII] = min(minV);
    bestL1 = minI(minII);
    bestL2 = minII;

    % Now train a classifier using the entire training data
    % with this optimal combination of (C, Gamma)
    bestGamma = 2^(2*(bestL2-4)+1);
    bestC = 2^(2*(bestL1-4)+1);
    Parameters = [2 3 bestGamma 0 bestC];
    [AlphaY, SVs, Bias, Parameters, nSV, nLabel] = SVMTrain(pn, outVects, Parameters);
    classifiers_property.minp = minp;
    classifiers_property.maxp = maxp;
    classifiers_property.C = bestC;
    classifiers_property.Gamma = bestGamma;
    classifiers_property.AlphaY = AlphaY;
    classifiers_property.SVs = SVs;
    classifiers_property.Bias = Bias;
    classifiers_property.Parameters = Parameters;
    classifiers_property.nSV = nSV;
    classifiers_property.nLabel = nLabel;
    classifiers_R12{i,j}.P = classifiers_property;
end
end
save 'SVM_classifier_noFP_R12_Nov30.mat' classifiers_R12;
clear;
function [SC] = sampleConstellations(objNames, objMissProbs, fpAvg, viewModels, fpModels, R, T);
% function [SC] = sampleConstellations(objNames, objMissProbs, fpAvg, viewModels, fpModels, R, T);
%
% Given the models of the different properties of the views, the average number
% of false positives, and the probability of the objects missing, this function
% generates 'T' samples of the constellations under each view.
%
% Input:
%   objNames: name of the objects under each view
%   objMissProbs: probability of an object missing
%   fpAvg: average number of false positive regions in each view
%   viewModels: mean and covariance of the properties of each object in each view
%   fpModels: min, max values of each parameter for False Positive regions
%   R: round in jackknife
%   T: number of desired samples
%
% Output:
%   SC: a cell array of 'Sampled Constellation' properties
%

L = length(objNames);    % get number of views
SC = cell(1,L);          % initialize SC
constellation = [];

% Get sample of constellations for each view
for l = 1:L
    M = length(objNames{l}) - 1;    % number of objects in current view

    % find total number of combinations of labels
    N = 0;
    for m = 1:M
        N = N + nchoosek(M,m);
    end
    message = strcat('View: ', num2str(l), ' , Total # of combinations: ', num2str(N));
    disp(message);

    T1 = floor(T/N);    % number of samples for each combination
    V = [1:M];          % vector of object indices
    cnt = 1;

    % -----
    % Generate samples for the different combinations of chambers
    % in the current view
    % -----
    for m = 1:M
        % Get all sets of 'm' objects out of total 'M' objects
        C = nchoosek(V,m);
        [S1,S2] = size(C);
        for s1 = 1:S1
            % For each such combination generate 'T1' samples
            for t = 1:T1
                ri = [];
                for s2 = 1:S2
                    q = C(s1,s2);    % current chamber

                    % decide to drop or keep this part
                    rand('state', sum(100*clock));
                    P.X = normrnd(viewModels{R,1}{l}{q,q}(1,1), viewModels{R,1}{l}{q,q}(1,2));
                    P.Y = normrnd(viewModels{R,1}{l}{q,q}(2,1), viewModels{R,1}{l}{q,q}(2,2));
                    region_info.label = [objNames{l}(q,:)];
                    region_info.point = P;
                    region_info.area = normrnd(viewModels{R,2}{l}{q,q}(1,1), viewModels{R,2}{l}{q,q}(1,2));
                    region_info.angle = normrnd(viewModels{R,3}{l}{q,q}(1,1), viewModels{R,3}{l}{q,q}(1,2));
                    region_info.eccen = normrnd(viewModels{R,4}{l}{q,q}(1,1), viewModels{R,4}{l}{q,q}(1,2));
                    ri = [ri region_info];
                end
                constellation(cnt).RI = ri;
                cnt = cnt + 1;
            end
        end
    end
    SC{l} = constellation;
    constellation = [];
end
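sampleConstellations enumerates every non-empty subset of a view's parts and, for each subset, draws T1 = floor(T/N) synthetic constellations, sampling each present part's properties (location, area, angle, eccentricity) from its per-view Gaussian model. The same idea in a simplified Python sketch; the two-part model dictionary in the test is illustrative, not taken from the listing.

```python
import itertools
import random

def sample_constellations(part_models, n_samples, rng=random.Random(0)):
    """Draw synthetic part constellations, as sampleConstellations does.

    part_models maps part name -> {property: (mean, std)}.  For every
    non-empty subset of parts, n_per samples are drawn, with each
    present part's properties sampled independently from its Gaussian
    model.  A stand-in for the listing's per-view loop.
    """
    parts = list(part_models)
    M = len(parts)
    n_subsets = 2 ** M - 1            # non-empty subsets, sum of C(M, m)
    n_per = max(1, n_samples // n_subsets)
    out = []
    for m in range(1, M + 1):
        for subset in itertools.combinations(parts, m):
            for _ in range(n_per):
                constel = {
                    p: {prop: rng.gauss(mu, sd)
                        for prop, (mu, sd) in part_models[p].items()}
                    for p in subset
                }
                out.append(constel)
    return out
```

The listing additionally models false-positive regions (fpAvg, fpModels); the sketch covers only the true-part sampling.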
for R = 1:15
    message = strcat('Learning model at round: ', num2str(R));
    disp(message);

    [pn, minp, maxp] = premnmx(SVMtrain(R).IN);
    Q = length(SVMtrain(R).IN);
    iitst = 2:4:Q;
    iitr = [1:4:Q 3:4:Q 4:4:Q];
    [SVMalphaY, SVs, Bias, SVMParams, nSV, SVMnLabel] = RbfSVC(pn(:,iitr), SVMtrain(R).OUT(iitr));
    disp('Classify test sequences:');

    % Get results on test set from the training data
    [ClassRate, DecVal, Ns, SVMConfMatrix{R}, SVMPreLab{R}] = SVMTest(pn(:,iitst), SVMtrain(R).OUT(iitst), SVMalphaY, SVs, Bias, SVMParams, nSV, SVMnLabel);

    % Get results on actual test sequence for this round of jackknife
    testData{R} = tramnmx([energy_completeKF_noFP{R}.energy], minp, maxp);
    GTLab_test_completeConst{R} = [energy_completeKF_noFP{R}.view_index];
    [ClassRate, DecVal, Ns, SVMConfMatrix_test_completeConst{R}, SVMPreLab_completeConst{R}] = SVMTest(testData{R}, GTLab_test_completeConst{R}, SVMalphaY, SVs, Bias, SVMParams, nSV, SVMnLabel);
end
clear SVMalphaY; clear SVs; clear Bias; clear SVMParams; clear nSV; clear SVMnLabel;
clear testData; clear cnt; clear t; clear i; clear j;
clear ClassRate; clear DecVal; clear Ns; clear T; clear R;

K = length(KF_INFO);
V = length(VIEW_NAMES);
view_contrib_cnt = zeros(V,K);
for k = 1:K
    L = length(KF_INFO{k});
    for l = 1:L
        for v = 1:V
            if (strcmp(VIEW_NAMES{v}, KF_INFO{k}(l).view_name))
                view_contrib_cnt(v,k) = view_contrib_cnt(v,k) + 1;
            end
        end
    end
end
clear K; clear V; clear L; clear k; clear v; clear l;
function [SK] = Update_S(siteIndex, config, constel, objNames, VM, priorModels, vn, rn, O)
% function [SK] = Update_S(siteIndex, config, constel, viewModel, priorModel, vn, rn, O)
%
% Given the new configuration for the site with index 'siteIndex', this function
% updates the local energy value corresponding to the site.

M = size(objNames{vn}, 1);    % number of objects
N = length(constel);          % number of sites

if (~config(siteIndex))
    % ///////////////////////////
    % CASE 1: Site is uncommitted
    % ///////////////////////////

    % Try different labels for the site
    E = zeros(M,1);
    for m = 1:M
        if (m == M)
            % -----
            % label of the site is NULL
            % -----
            E(m) = priorModels(m,vn,1);
        else
            % -----
            % label of the site is non-NULL
            % -----
            E(m) = E(m) + compSumDist(m, constel(siteIndex), VM, vn, rn, O);

            % energy of pair-site cliques
            if (O(end))
                for n = 1:N
                    if (n~=siteIndex && config(n)~=M && config(n)~=0)
                        if (config(n)~=m)
                            SI = [siteIndex n];
                            LI = [m config(n)];
                            E(m) = E(m) + compSumDist(LI, constel(SI), VM, vn, rn, O);
                        else
                            % add penalty for 2 sites when they have the same label
                            E(m) = E(m) + priorModels(m,vn,2);
                        end
                    end
                end
            end
        end
    end
    E = sort(E);
    SK = -(E(2)-E(1));
else
    % ///////////////////////////
    % CASE 2: Site has label
    % ///////////////////////////
    if (config(siteIndex) ~= M)
        % -----
        % Find local energy of the site given the current configuration
        % -----
        Ek = compSumDist(config(siteIndex), constel(siteIndex), VM, vn, rn, O);
        if (O(end))
            for n = 1:N
                if (n~=siteIndex && config(n)~=M && config(n)~=0)
                    if (config(n)~=config(siteIndex))
                        SI = [siteIndex n];
                        Ek = Ek + compSumDist(config(SI), constel(SI), VM, vn, rn, O);
                    else
                        % add penalty for 2 sites when they have the same label
                        Ek = Ek + priorModels(config(siteIndex),vn,2);
                    end
                end
            end
        end
    else
        Ek = priorModels(M,vn,1);
    end

    % -----
    % Now change the label of the site in order to minimize the energy difference
    % -----
    E = zeros(M-1,1);
    I = zeros(M-1,1);
    cnt = 1;
    for m = 1:M
        if (m ~= config(siteIndex))
            if (m == M)
                % ///////////////////////////////
                % label of the site is NULL
                % ///////////////////////////////
                E(cnt) = priorModels(m,vn,1);
                I(cnt) = m;
                cnt = cnt + 1;
            else
                % ///////////////////////////////
                % label of the site is non-NULL
                % ///////////////////////////////
                E(cnt) = E(cnt) + compSumDist(m, constel(siteIndex), VM, vn, rn, O);
                I(cnt) = m;

                % energy of pair-site cliques
                if (O(end))
                    for n = 1:N
                        if (n~=siteIndex && config(n)~=M && config(n)~=0)
                            if (config(n)~=m)
                                SI = [siteIndex n];
                                LI = [m config(n)];
                                E(cnt) = E(cnt) + compSumDist(LI, constel(SI), VM, vn, rn, O);
                            else
                                % add penalty for 2 sites when they have the same label
                                E(cnt) = E(cnt) + priorModels(m,vn,2);
                            end
                        end
                    end
                end
                cnt = cnt + 1;
            end
        end
    end
    EE = sort(E - Ek);
    SK = EE(1);
end
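For an uncommitted site, Update_S scores stability as the negative gap between the two lowest candidate-label energies (SK = -(E(2) - E(1)) after sorting): the larger the margin by which the best label beats the runner-up, the more negative SK and the more confidently the site can be committed first. That scoring rule, isolated in Python:

```python
def site_stability(energies):
    """Negative gap between the two lowest candidate-label energies.

    Mirrors Update_S for an uncommitted site: after sorting the
    candidate energies, SK = -(E[1] - E[0]).  A clear winner gives a
    large negative (stable) score; a near-tie gives a score near zero.
    """
    e = sorted(energies)
    return -(e[1] - e[0])
```

Sites are then typically visited in order of increasing stability score, so the most unambiguous labels are fixed first.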
function [view_models] = viewMLE(KF_INFO, VIEW_NAMES, OBJ_NAMES)
% function [view_models] = viewMLE(KF_INFO, VIEW_NAMES, OBJ_NAMES)
%
% Used to estimate the parameters of the Gaussian distribution of the object
% properties in the different views.
% It chooses (N-1) out of N available echos and finds ML estimates for the set.
% The echo left out from each set will later be used for testing.
%
% Output:
%   view_models{set_num, property_index}{view_num}{obj1,obj2}: a matrix of estimates
%   of the parameters. The first column contains the mean, and the second column
%   contains the variance. Note: we use a diagonal covariance matrix here.

echo_index = [1:length(KF_INFO)];    % vector of indices of the echos
view_models = cell(length(echo_index), 6);

% Prepare the missingness pattern matrices for future use
P = 2;
for p = 1:(2^P)-1
    R2(p,:) = dec2bin(p,P);
end
P = 4;
for p = 1:(2^P)-1
    R4(p,:) = dec2bin(p,P);
end

% -----
% For each choice of training set estimate the parameters
% of the corresponding distribution
% -----
for e = 1:length(echo_index)
    message = strcat('Iteration for test set: ', num2str(e));
    disp(message);

    % Get properties of objects in different views of the chosen set of echos
    disp('Extracting object properties for each view.');
    [Location, Area, Angle, Eccen] = extractObjProps(KF_INFO, e, VIEW_NAMES, OBJ_NAMES);

    % Find the ML estimates for each single or pair of objects' properties
    L = length(Location);    % Number of views
    view_models{e,1} = cell(1,L);
    view_models{e,2} = cell(1,L);
    view_models{e,3} = cell(1,L);
    view_models{e,4} = cell(1,L);
    view_models{e,5} = cell(1,L);
    view_models{e,6} = cell(1,L);
    for l = 1:L
        message = strcat('Learning models for: ', VIEW_NAMES{l});
        disp(message);

        M = length(Location{l});    % Number of objects in current view
        view_models{e,1}{l} = cell(M);
        view_models{e,2}{l} = cell(M);
        view_models{e,3}{l} = cell(M);
        view_models{e,4}{l} = cell(M);
        view_models{e,5}{l} = cell(M);
        view_models{e,6}{l} = cell(M);
        for m1 = 1:M
            for m2 = m1:M
                if (m1 == m2)
                    message = strcat('Estimate params of single object: ', num2str(m1));
                    disp(message);

                    % Estimate for property 'Location'
                    view_models{e,1}{l}{m1,m2}(:,1) = mean(Location{l}{m1,m2});
                    view_models{e,1}{l}{m1,m2}(:,2) = diag(cov(Location{l}{m1,m2}));

                    % Estimate for property 'Area'
                    view_models{e,2}{l}{m1,m2}(:,1) = mean(Area{l}{m1,m2});
                    view_models{e,2}{l}{m1,m2}(:,2) = diag(cov(Area{l}{m1,m2}));

                    % Estimate for property 'Angle'
                    view_models{e,3}{l}{m1,m2}(:,1) = mean(Angle{l}{m1,m2});
                    view_models{e,3}{l}{m1,m2}(:,2) = diag(cov(Angle{l}{m1,m2}));

                    % Estimate for property 'Eccen'
                    view_models{e,4}{l}{m1,m2}(:,1) = mean(Eccen{l}{m1,m2});
                    view_models{e,4}{l}{m1,m2}(:,2) = diag(cov(Eccen{l}{m1,m2}));
                else
                    message = strcat('Estimate params of pair of objects: ', num2str(m1), ' and: ', num2str(m2));
                    disp(message);

                    % Find the index of the missingness pattern for each observation
                    curM2 = Area{l}{m1,m2};
                    MPI2 = findMissPatternIndex(curM2, R2);
                    curM4 = Location{l}{m1,m2};
                    MPI4 = findMissPatternIndex(curM4, R4);

                    % Estimate for property 'Location'
                    [thetaM, thetaS, stat] = multivariateNormalMLE(Location{l}{m1,m2}, MPI4, R4, 1);
                    if (stat)
                        view_models{e,1}{l}{m1,m2}(:,1) = thetaM';
                        view_models{e,1}{l}{m1,m2}(:,2) = diag(thetaS);

                        % Derive parameters of distribution of distance and angle between sites
                        [dM,dS,aM,aS] = pairSitePropertyMLE(Location{l}{m1,m2}, MPI4, R4, thetaM', thetaS);
                        view_models{e,5}{l}{m1,m2}(:,1) = dM;
                        view_models{e,5}{l}{m1,m2}(:,2) = dS;
                        view_models{e,6}{l}{m1,m2}(:,1) = aM;
                        view_models{e,6}{l}{m1,m2}(:,2) = aS;
                    end

                    % Estimate for property 'Area'
                    [thetaM, thetaS, stat] = multivariateNormalMLE(Area{l}{m1,m2}, MPI2, R2, 1);
                    if (stat)
                        view_models{e,2}{l}{m1,m2}(:,1) = thetaM';
                        view_models{e,2}{l}{m1,m2}(:,2) = diag(thetaS);
                    end

                    % Estimate for property 'Angle'
                    [thetaM, thetaS, stat] = multivariateNormalMLE(Angle{l}{m1,m2}, MPI2, R2, 1);
                    if (stat)
                        view_models{e,3}{l}{m1,m2}(:,1) = thetaM';
                        view_models{e,3}{l}{m1,m2}(:,2) = diag(thetaS);
                    end

                    % Estimate for property 'Eccen'
                    [thetaM, thetaS, stat] = multivariateNormalMLE(Eccen{l}{m1,m2}, MPI2, R2, 1);
                    if (stat)
                        view_models{e,4}{l}{m1,m2}(:,1) = thetaM';
                        view_models{e,4}{l}{m1,m2}(:,2) = diag(thetaS);
                    end
                end
            end
        end
    end
end
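In viewMLE's single-object case, each property's model is just the sample mean (first column) and the diagonal of the sample covariance (second column). A minimal Python equivalent of that estimate, using the same n-1 normalization as MATLAB's cov:

```python
def gaussian_mle_diag(samples):
    """Per-dimension mean and variance of a list of d-dimensional
    samples (n-1 normalization, matching MATLAB's cov), i.e. the
    diagonal-covariance Gaussian estimate viewMLE stores per property.
    """
    n = len(samples)
    d = len(samples[0])
    mean = [sum(s[k] for s in samples) / n for k in range(d)]
    var = [sum((s[k] - mean[k]) ** 2 for s in samples) / (n - 1)
           for k in range(d)]
    return mean, var
```

The pairwise case in the listing is harder only because observations can have missing parts, which is why it routes through the missingness-pattern matrices and multivariateNormalMLE instead.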
UTIL
#ifndef FILENAME_H
#define FILENAME_H

const char* const PPMExtension = ".ppm";
const char* const PGMExtension = ".pgm";
const char* const PBMExtension = ".pbm";
const char* const JPGExtension = ".jpg";
const char* const MPGExtension = ".mpg";
const char* const ANONExtension = "_anon";
const char* const DEFAULT_FILE_NAME = "out";

const char* const VIDEO = "video.mpg";
// name of the echo video under a specific directory
const char* const IMG_DIR = "img/";
// image directory related to an echo video
const char* const ED_FILE = "ed.dat";
// File containing all the end-diastole frames
const char* const KF_FILE = "kf.dat";
// File containing all the key-frames
const char* const DYN_SUM = "dynamic_summary/";
// directory where the dynamic summary is
const char* const STAT_SUM = "static_summary.html";
// directory where the static summary is
const char* const TEMPLATE_DIR = "/home/shahram/cardio/echos/template/";
// directory containing the templates
const char* const BW_TMPLT = "bw.pgm";
const char* const BW_TMPLT_DAT = "bw.dat";
// template for BW triangular frames
const char* const DOP_TMPLT = "doppler.pgm";
const char* const DOP_TMPLT_DAT = "doppler.dat";
// template for doppler frames
const char* const TRAP_Z_TMPLT = "trapzoom.pgm";
const char* const TRAP_Z_TMPLT_DAT = "trapzoom.dat";
// template for trapezoidal zoom frame
const char* const SQR_Z_TMPLT = "sqrzoom.pgm";
const char* const SQR_Z_TMPLT_DAT = "sqrzoom.dat";
// template for square zoom frame
const char* const MASK_DIR = "/home/shahram/cardio/echos/mask/";
// directory containing the mask images
const char* const BW_MASK = "bw_mask.pgm";
// mask image for 2D bw frames
const char* const TRAP_Z_MASK = "trapz_mask.pgm";
// mask image for trapezoidal zoom frame
const char* const SQR_Z_MASK = "sqr_mask.pgm";
// mask image for square zoom frame
const char* const DOP_MASK = "dop_mask.pgm";
// mask frame for doppler frame
const char* const TXT_DOP_MASK_TXT = "txt_dop_mask.dat";
const char* const TXT_DOP_MASK = "txt_dop_mask.pgm";
// mask for blocking the text in the doppler frames
const char* const TXT_BLANK_MASK = "txt_blank_mask.pgm";
// mask for blocking text in blank frames
const char* const TXT_OTHER_MASK = "txt_other_mask.pgm";
// mask for blocking text in other types of frames
const char* const TXT_ECHO_MASK_TXT = "txt_echo_mask.dat";
const char* const TXT_ECHO_MASK = "txt_echo_mask.pgm";
// mask for blocking text in color, bw, and zoom frames
const char* const INFO = "info.dat";
// info file used for views, echo_videos, ...
const char* const VIEW_DIR = "views/";
// specifies the view directory relative to the current one
const char* const CYCLE_DIR = "cycle/";
// specifies the cycle directory under the view directory
const char* const UNDECIDED = "not-known";
// name of the views before view recognition
const char* const KEY_FRAME = "key_frame";
// name of the key_frame file of a view
const char* const TMP_DIR = "/home/shahram/cardio/echos/temp/";
// a temporary directory for inflating files
const char* const MPEG2ENC = "mpeg2encode";
const char* const MPEG2DEC = "mpeg2decode";
// mpeg2 codec system calls
const char* const MPEG_TEMPLATE = "/home/shahram/cardio/echos/template/template.par";
// a template .par file for mpeg2 encoding

const int MAX_LINE = 280;
// maximum length of a line in a file
const int FILE_NAME_SIZE = 256;
// max length of the name of a file
const int ED_SIZE = 50;
// number of frames to process to find R-peaks
const int MAX_VIEW_NAME = 10;
// max length of the view name
const int MAX_PIX_VAL = 255;
const int MIN_PIX_VAL = 0;
// min and max pel values for a gray-level image
const int MAX_NUM_CLASS = 20;
// maximum number of classes in clusterImage()
const int REF_FRAME_W = 352;
const int REF_FRAME_H = 240;
// Width and Height of a reference frame

// View names as used in the file extensions
const char* const A4C = "A4C";
const char* const A5C = "A5C";
const char* const A2C = "A2C";
const char* const A3C = "A3C";
const char* const PLA = "PLA";
const char* const VIT = "VIT";
const char* const PSAB = "PSAB";
const char* const PSAM = "PSAM";
const char* const PSAX = "PSAX";

// enumeration of the different frame types
enum TYPE {
    BW,
    COLOR,
    ZOOM,
    DOPPLER,
    BLANK,
    OTHER
};

// enumeration of the different forms that a time-marker blob can have
// DYNAMIC: when the time marker is either companding or moving
// STATIC: when the time marker is neither moving nor companding
enum TM_STATUS {
    DYNAMIC,
    STATIC,
    NOT_SET
};

#endif
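The img_util.cc listing that follows opens with a Hough transform for line detection: every edge pixel votes for all quantized (rho, theta) bins consistent with it, and the strongest bins are reported as lines. A plain-Python sketch of that accumulator voting, with the bin counts and quantization simplified relative to the C++:

```python
import math

def hough_lines(points, width, height, n_rho=64, n_theta=36, top_k=2):
    """Vote edge points into a (rho, theta) accumulator and return the
    top_k strongest lines, following the HoughLines routine below:
    rho = y*cos(theta) + x*sin(theta), normalized into [0, n_rho)."""
    diag = math.hypot(width, height)          # max possible |rho|
    acc = {}
    for (x, y) in points:
        for m in range(n_theta):
            theta = math.pi * m / n_theta
            r = y * math.cos(theta) + x * math.sin(theta)
            # map r from [-diag, diag] to a bin index in [0, n_rho)
            n = int((r + diag) / (2 * diag) * (n_rho - 1) + 0.5)
            acc[(n, m)] = acc.get((n, m), 0) + 1
    bins = sorted(acc.items(), key=lambda kv: -kv[1])[:top_k]
    lines = []
    for (n, m), votes in bins:
        rho = (n / (n_rho - 1)) * 2 * diag - diag   # undo quantization
        lines.append((rho, 180.0 * m / n_theta, votes))
    return lines
```

The C++ version preallocates a dense M x N accumulator and a sine/cosine lookup table instead of a dictionary, but the voting and peak-picking logic is the same.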
/*
 * img_util.cc
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 */
#include <iostream.h>
#include <algorithm>
#include <stdlib.h>
#include <stdio.h>
#include <assert.h>
#include <math.h>

#include "../util/ppmIO.h"
#include "../util/util.h"
#include "../util/olist.h"
#include "img_util.h"

using namespace std;

/*
 * HoughLines
 * Purpose: Performs the Hough Transform on the input image to detect
 *          straight lines.
 * Input: edge image of the image we want to find lines in
 * Output: (rho, theta) pairs returned in "lines"
 * Note: theta_res is in degrees.
 */
void HoughLines(IplImage* img, double rho_res, double theta_res, float* lines, int no_lines)
{
    int W = img->width;
    int H = img->height;
    float limit_length = sqrt(pow((double)W,2) + pow((double)H,2));
    int N = static_cast<int>(limit_length / rho_res);
    // transforming the rho resolution to number of steps
    int M = static_cast<int>(180.0 / theta_res);
    // transforming the theta resolution to number of angles

    float* COS = new float[M];
    float* SIN = new float[M];
    // having cosine and sine values handy
    LookUpTable(COS, SIN, M);
    // creating a LUT for sine and cosine values

    int* Accumulator = new int[M * N];
    for(int a=0; a < M*N; a++)
        Accumulator[a] = 0;
    // the accumulator matrix for the Hough Transform

    // populating the Accumulator matrix
    float r;
    int m, n;
    for( int h=0; h < H; h++ )
    {
        for( int w=0; w < W; w++ )
        {
            int pix_pos = w + W * h;
            int pix = static_cast<int>(img->imageData[pix_pos]);
            if (pix != 0)
            {
                for(m=0; m < M; m++)
                {
                    r = h * COS[m] + w * SIN[m];
                    r += limit_length;
                    r /= (2.0 * limit_length);
                    r *= (N-1);
                    r += 0.5;
                    n = static_cast<int>(floor(r));
                    Accumulator[n + m * N]++;
                }
            }
        }
    }

    // Sorting the Accumulator matrix
    int* tmp_acc = new int[M * N];
    for(m=0; m < M*N; m++)
        tmp_acc[m] = Accumulator[m];
    sort(tmp_acc, tmp_acc + M*N);

    // Only keeping the biggest "no_lines" bins of the Accumulator matrix
    int cnt = 0;
    for(m=0; m < M; m++)
    {
        for(n=0; n < N; n++)
        {
            if(Accumulator[n + m * N] > tmp_acc[M * N - no_lines - 1])
            {
                float R = ((n - 0.5) * 2 * limit_length / static_cast<float>(N-1)) - limit_length;
                lines[cnt++] = R;
                lines[cnt++] = static_cast<float>(m * theta_res);
            }
        }
    }

    delete [] tmp_acc;
    delete [] Accumulator;
    delete [] SIN;
    delete [] COS;
}

/*
 * sobelEdgeDetect
 * Purpose: Finds the edge image of the input gray-scale image.
 * Input: Gray-scale image.
 * Output: Edge image.
 */
void sobelEdgeDetect(IplImage* img_src, IplImage* img_dst)
{
    IplImage* tmp1 = iplCloneImage(img_src);
    IplImage* tmp2 = iplCloneImage(img_src);

    int horizontal[9] = { 1, 2, 1, 0, 0, 0, -1, -2, -1 };
    int vertical[9] = { 1, 0, -1, 2, 0, -2, 1, 0, -1 };

    IplConvKernel* k1 = iplCreateConvKernel( 3, 3, 1, 1, horizontal, 0 );
    IplConvKernel* k2 = iplCreateConvKernel( 3, 3, 1, 1, vertical, 0 );

    iplConvolve2D(img_src, tmp1, &k1, 1, IPL_SUMSQROOT);
    iplConvolve2D(img_src, tmp2, &k2, 1, IPL_SUMSQROOT);
    iplAdd(tmp1, tmp2, img_dst);

    iplDeallocate(tmp1, IPL_IMAGE_ALL);
    iplDeallocate(tmp2, IPL_IMAGE_ALL);
}

/*
 * cropImage
 * Purpose: crops a rectangle out from the input image 'img'. The left
 *          hand corner of the rectangle has co-ordinates (LU_x, LU_y)
 *          and its width and height are W and H respectively.
 * Input: a gray-level image 'img' and the coordinates of the box
 * Output: the cropped image
 */
IplImage* cropImage( const IplImage* img, int LU_x, int LU_y, int W, int H )
{
    //
    // instantiating a new IPL header for the smaller image
    //
    IplImage* crop_img = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                                               IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                                               IPL_ALIGN_QWORD, W, H,
                                               NULL, NULL, NULL, NULL );
    if ( crop_img == NULL )
        exit( 1 );
    iplAllocateImage( crop_img, 0, 0 );
    if( NULL == crop_img->imageData )
        exit( 1 );

    // the 2 ends of the box
    int start_pixel_pos = LU_y * img->width + LU_x;
    int stop_pixel_pos = ( LU_y + H ) * img->width + ( LU_x + W );

    int cnt = 0;
    for( int i = start_pixel_pos; i < stop_pixel_pos; i++ )
    {
        int y = (int) floor( (double)(i) / (double)( img->width ) );
        int x = i - y * ( img->width );

        // only accepting points that fall in the rectangle box
        if( ( x > LU_x - 1 ) && ( x < ( LU_x + W ) ) && ( y < ( LU_y + H ) ) )
        {
            ( (char*)crop_img->imageData )[ cnt ] = img->imageData[ i ];
            cnt++;
        }
    }
    return( crop_img );
}
/*
 * bilevelizeImage
 * Purpose: transforms a gray-scale image with two distinct gray-levels
 *          into a bilevel image by automatically finding a threshold
 *          and applying it to the image.
 * Input: a gray-level image 'in_img'
 * Output: the bilevel image 'bi_img'
 */
void bilevelizeImage( IplImage* in_img, IplImage* bi_img )
{
    int W = in_img->width;
    int H = in_img->height;
    int i, j;
    double median;
    double *minVal = new double;
    double *maxVal = new double;
    CvPoint *minLoc = new CvPoint;
    CvPoint *maxLoc = new CvPoint;

    median = imageMedian( in_img );
    cvMinMaxLoc( in_img, minVal, maxVal, minLoc, maxLoc );
    cout << "min: " << (*minVal) << "\t" << "max: " << (*maxVal) << endl;
    cout << "median: " << median << endl;

    int T = (int)( ( (*maxVal) + median ) / 2.0 );
    cout << "T: " << T << endl;
    iplThreshold( in_img, bi_img, T );

    delete maxLoc;
    delete minLoc;
    delete maxVal;
    delete minVal;
}

/*
 * clusterImage
 * Purpose: using the K-means algorithm clusters the image pixels into
 *          'num_class' classes
 * Input: a gray-level image
 * Output: the clustered image
 */
void clusterImage( IplImage* src_img, unsigned char* pix_label, const unsigned char num_class )
{
    int width = src_img->width;
    int height = src_img->height;
    unsigned char LABEL, pel;
    double pixel;
    double total_population = 0;
    double points;
    int i, j;
    double class_population[MAX_NUM_CLASS];
    double old_class_population[MAX_NUM_CLASS];
    double old_mean[MAX_NUM_CLASS];
    double new_mean[MAX_NUM_CLASS];
    double sum[MAX_NUM_CLASS];

    // Choosing the initial cluster centers
    double* CC = new double[width * height];
    for(i=0; i < width * height; i++)
    {
        unsigned char pix = static_cast<unsigned char>( src_img->imageData[i] );
        CC[i] = (double)pix;
    }
    sort(CC, CC + width * height);
    int K = (int)( (double)(width * height) / (double)(num_class - 1) );
    for(i=0; i < num_class; i++)
        old_mean[i] = CC[i * (K-1)];
    delete [] CC;

    // Main loop for clustering the pixels.
    for (int l=0; l < 100; l++)
    {
        /* Initializing 'sum' and 'class_population' */
        for(i=0; i < num_class; i++)
        {
            sum[i] = 0;
            class_population[i] = 0;
        }

        for( i=0; i < width * height; i++ )
        {
            pel = static_cast<unsigned char>( src_img->imageData[i] );
            pixel = static_cast<double>( pel );
            LABEL = classifyPixel( pixel, old_mean, num_class );
            class_population[ LABEL ]++;
            pix_label[i] = LABEL;
            sum[LABEL] += pixel;
        }

        // Find the new averages of the clusters
        for(i=0; i < num_class; i++)
        {
            if(class_population[i])
                new_mean[i] = sum[i] / class_population[i];
        }

        // Find how many points have migrated; few migrations end the process
        points = migratingPoints(class_population, old_class_population, num_class);
        if (points < 100.0)
            break;

        total_population = 0;
        for (i=0; i < num_class; i++)
        {
            total_population += class_population[i];
            old_class_population[i] = class_population[i];
            old_mean[i] = new_mean[i];
        }
    }
}
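clusterImage runs plain K-means on the 1-D intensity values: it seeds num_class centers from the sorted pixel values, then alternates nearest-center assignment and mean updates until few points migrate between clusters. The same loop in Python, with a fixed iteration cap standing in for the migration test:

```python
def kmeans_1d(values, k, iters=20):
    """1-D K-means over pixel intensities, as in clusterImage: seed
    the k centers from evenly spaced positions in the sorted values,
    then alternate nearest-center assignment and mean updates."""
    vals = sorted(values)
    step = max(1, len(vals) // k)
    centers = [vals[min(i * step, len(vals) - 1)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assignment step: nearest center by absolute distance
        for i, v in enumerate(values):
            labels[i] = min(range(k), key=lambda c: abs(v - centers[c]))
        # update step: recompute each center as its members' mean
        for c in range(k):
            members = [values[i] for i in range(len(values)) if labels[i] == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers
```

mergeTo2Regions then collapses the clustered labels to a binary foreground/background image by treating the largest cluster as background.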
/*
 * migratingPoints
 * Purpose: finds the difference in the total population in two cluster sets
 * Input: population of clusters C1 and C2 and number of classes in cluster
 * Output: the difference between populations
 */
double migratingPoints( double* C1, double* C2, unsigned char num_class )
{
    double difference = 0;
    for( int i=0; i < num_class; i++ )
        difference += fabs(C1[i] - C2[i]);
    return(difference);
}

/*
 * classifyPixel
 * Purpose: assigns a pixel to one of the many classes using proximity
 *          in the Euclidean space
 * Input: a 'pixel' and the class means, and number of classes
 * Output: label of the class whose center is closest to current pixel
 */
unsigned char classifyPixel( double pixel, double* class_means, unsigned char num_class )
{
    double* distance = new double[num_class];
    double* tmp_dist = new double[num_class];
    for(int c=0; c < num_class; c++)
    {
        distance[c] = fabs(pixel - class_means[c]);
        tmp_dist[c] = distance[c];
    }
    sort(tmp_dist, tmp_dist + num_class);
    double minimum = tmp_dist[0];
    delete [] tmp_dist;

    unsigned char min_index;
    for( int i=0; i < num_class; i++ )
    {
        if( distance[i] == minimum )
        {
            min_index = i;
            break;
        }
    }
    delete [] distance;
    return(min_index);
}
/*
 * mergeTo2Regions
 * Purpose: bi-levelizes an already clustered image into a foreground and
 *          a background. Background is the region with the largest area.
 * Input: 'img' and its pixel 'label's, and the number of clusters
 * Output: implicitly sets the pixel values of 'img'
 */
void mergeTo2Regions( IplImage* img, unsigned char* label, unsigned char num_class )
{
    int size = img->width * img->height;

    //
    // initializing the region areas
    //
    double* region_area = new double[num_class];
    assert( region_area != NULL );
    for(int i=0; i < num_class; i++)
        region_area[i] = 0;

    unsigned char L;
    for(int j=0; j < size; j++)
    {
        L = label[j];
        region_area[L]++;
    }

    // finding the label of the region with the largest area
    unsigned char largest_region_label = sort(region_area, num_class);
    delete [] region_area;

    for( int j=0; j < size; j++ )
    {
        if( label[j] == largest_region_label )
            ( (char*)img->imageData )[j] = static_cast<char>(0);
        else
            ( (char*)img->imageData )[j] = static_cast<char>(255);
    }
}
/* spatialStd
 * Purpose: finds the amount of spread of the points in a bi-level image
 * Input: bi-level image
 * Output: the standard deviation in the x and y directions
 */
void spatialStd( const IplImage* img, double &x_std, double &y_std )
{
    int i, j;
    int W = img->width;
    int H = img->height;
    double mean_x = 0;
    double mean_y = 0;
    double count = 0;  // number of foreground (255) pixels

    for( i=0; i < H; i++ ) {
        for( j=0; j < W; j++ ) {
            int pix_pos = j + i * W;
            int pix = static_cast<unsigned char>( img->imageData[ pix_pos ] );
            if ( static_cast<int>( pix ) == 255 ) {
                mean_x += j;
                mean_y += i;
                count++;
            }
        }
    }
    if ( count == 0 ) { x_std = 0; y_std = 0; return; }
    mean_x /= count;
    mean_y /= count;

    x_std = 0;
    y_std = 0;
    for( i=0; i < H; i++ ) {
        for( j=0; j < W; j++ ) {
            int pix_pos = j + i * W;
            int pix = static_cast<unsigned char>( img->imageData[ pix_pos ] );
            if ( static_cast<int>( pix ) == 255 ) {
                x_std += pow( j - mean_x, 2.0 );
                y_std += pow( i - mean_y, 2.0 );
            }
        }
    }
    x_std = sqrt( x_std / count );
    y_std = sqrt( y_std / count );
}
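The spread computation can be exercised on a toy binary grid; a minimal standalone sketch of the same idea (`spatial_std` is an illustrative helper, not the patent's function):

```cpp
#include <cmath>
#include <vector>

// Spread of foreground points: standard deviation of the x and y
// coordinates of the set pixels of a binary W x H grid (row-major).
void spatial_std(const std::vector<int>& img, int W, int H,
                 double& x_std, double& y_std)
{
    double mx = 0, my = 0, n = 0;
    for (int i = 0; i < H; i++)
        for (int j = 0; j < W; j++)
            if (img[j + i * W]) { mx += j; my += i; n++; }
    if (n == 0) { x_std = y_std = 0; return; }
    mx /= n; my /= n;
    double vx = 0, vy = 0;
    for (int i = 0; i < H; i++)
        for (int j = 0; j < W; j++)
            if (img[j + i * W]) {
                vx += (j - mx) * (j - mx);
                vy += (i - my) * (i - my);
            }
    x_std = std::sqrt(vx / n);
    y_std = std::sqrt(vy / n);
}
```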
/* coloredBlobs --
 * Extracts the colored regions of an image and
 * returns the map of those colored blobs. */
IplImage* coloredBlobs(IplImage* img)
{
    int width = img->width;
    int height = img->height;
    int n_ch = img->nChannels;
    printf("Image Width: %d, Image Height: %d, Num Channels: %d\n", width, height, n_ch);

    // allocate the output image
    IplImage* blob_img = iplCreateImageHeader(1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                                              IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                                              IPL_ALIGN_QWORD, width, height,
                                              NULL, NULL, NULL, NULL);
    if (NULL == blob_img)
        exit(EXIT_FAILURE);
    iplAllocateImage(blob_img, 0, 0);
    if (NULL == blob_img->imageData)
        exit(EXIT_FAILURE);

    // identify colored pixels and write to the output:
    // a pixel is gray when its three channels are equal
    for (int i=0; i<height; i++) {
        for (int j=0; j<width; j++) {
            int pix_pos = 3*(j + i * width);
            int pix_c1 = static_cast<unsigned char>(img->imageData[pix_pos]);
            int pix_c2 = static_cast<unsigned char>(img->imageData[pix_pos+1]);
            int pix_c3 = static_cast<unsigned char>(img->imageData[pix_pos+2]);
            //printf("(%d,%d,%d), ", pix_c1, pix_c2, pix_c3);
            if (pix_c1 == pix_c2 && pix_c2 == pix_c3) {
                (blob_img->imageData)[j+i*width] = MIN_PIX_VAL;
            } else {
                (blob_img->imageData)[j+i*width] = MAX_PIX_VAL;
            }
        }
    }
    return(blob_img);
}
int getLabel(IplImage* I, IplImage* mask)
{
    int W = I->width;
    int H = I->height;
    int nch = I->nChannels;
    for (int i=0; i<H; i++) {
        for (int j=0; j<W; j++) {
            int pix_pos = j + i * W;
            int pix = static_cast<unsigned char>(mask->imageData[pix_pos]);
            if (pix > 0) {
                int pix_c1 = static_cast<unsigned char>(I->imageData[3*pix_pos]);
                int pix_c2 = static_cast<unsigned char>(I->imageData[3*pix_pos+1]);
                int pix_c3 = static_cast<unsigned char>(I->imageData[3*pix_pos+2]);
                int L = (int)(4*((double)pix_c1/255.0) + 2*((double)pix_c2/255.0) + ((double)pix_c3/255.0));
                printf("%d \n", L);
                return L;
            }
        }
    }
    return 0;  // no labeled pixel found in the mask
}

/*
 * ConnectComponentLabeling
 * Purpose: labels the blobs in a bi-level image
 * Input: bi-level image
 * Output: the label map, the number of objects, and the object areas
 * Note: The pels of the input image will be changed during the
 *       course of operation; it is better to pass a copy of the
 *       actual image to the function
 */
int grassLabel (IplImage *img, unsigned char *label, int &numObjects, int *objAreas, int MAX_OBJ)
{
    int i, j;
    int width = img->width;
    int height = img->height;
    writeIPL2IMG(img, "ecg_img");

    // Find the maximum gray-level in the image.
    int* pixVals = new int[width * height];
    for( i=0; i<height; i++ ) {
        for( j=0; j<width; j++ ) {
            int pix_pos = j + i * width;
            int pix = static_cast<unsigned char>(img->imageData[pix_pos]);
            pixVals[pix_pos] = pix;
        }
    }
    sort(pixVals, pixVals + width*height);
    int maxPixVal = pixVals[width*height - 1];
    delete [] pixVals;

    // Initializing the object areas
    for (i=0; i < MAX_OBJ; i++)
        objAreas[i] = 0;

    // Identify the regions in the image.
    int number = 0;
    for( i=0; i<height; i++ ) {
        for( j=0; j<width; j++ ) {
            int pix_pos = j + i * width;
            int pix = static_cast<unsigned char>(img->imageData[pix_pos]);
            if (static_cast<int>(pix) == maxPixVal) {
                grass (img, label, i, j, number+1, objAreas+number, maxPixVal);
                number++;
                if (number >= MAX_OBJ)
                    goto RET;
            }
        }
    }

RET:
    numObjects = number;
    return(0);
}
/*
 * Given a seed pixel, grows a region similar to that pixel in a recursive fashion
 */
int grass (IplImage *img, unsigned char *label, int row, int col, int number, int *area, int MaxPixVal)
{
    int i, j;
    int width = img->width;
    int height = img->height;

    ((char*)img->imageData)[col + row * width] = 0;
    label[col + row * width] = number;
    (*area)++;
    for (i = -1; i <= 1; i++) {
        for (j = -1; j <= 1; j++) {
            if ( (row + i) >= 0 && (row + i) < height &&
                 (col + j) >= 0 && (col + j) < width ) {
                int pix_pos = (j+col) + (i+row) * width;
                int pix = static_cast<unsigned char>(img->imageData[pix_pos]);
                if (static_cast<int>(pix) == MaxPixVal)
                    grass(img, label, row + i, col + j, number, area, MaxPixVal);
            }
        }
    }
    return(0);
}
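The grass-fire growth can be sketched on a small grid without the image library; a standalone version of the same recursion (names and the toy grid are illustrative):

```cpp
#include <vector>

// Recursive grass-fire region growing on a toy grid: clears each
// visited foreground pixel, writes 'number' into the label map, and
// recurses into the 8-connected neighbours, as grass() does above.
void grow(std::vector<int>& img, std::vector<int>& label,
          int W, int H, int row, int col, int number, int fg)
{
    img[col + row * W] = 0;
    label[col + row * W] = number;
    for (int i = -1; i <= 1; i++)
        for (int j = -1; j <= 1; j++) {
            int r = row + i, c = col + j;
            if (r >= 0 && r < H && c >= 0 && c < W && img[c + r * W] == fg)
                grow(img, label, W, H, r, c, number, fg);
        }
}
```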
/*
 * eliminateSmallRegions
 * Purpose: Sets to background all the patches of foreground which are
 *          smaller than a certain area
 * Input: the image to be cleaned, and the threshold
 * Output: the cleaned image
 */
void eliminateSmallRegions (IplImage* img, int TH)
{
    int num_regions;
    int W = img->width;
    int H = img->height;

    IplImage* tmp_img = iplCreateImageHeader( img->nChannels, img->alphaChannel,
                                              img->depth, img->colorModel,
                                              img->channelSeq, img->dataOrder,
                                              img->origin, img->align,
                                              img->width, img->height, NULL,
                                              NULL, img->imageId, NULL );
    if ( NULL == tmp_img )
        exit(EXIT_FAILURE);
    iplAllocateImage( tmp_img, 0, 0 );
    if( NULL == tmp_img->imageData )
        exit(EXIT_FAILURE);
    iplCopy(img, tmp_img);

    unsigned char* label = new unsigned char[W * H];
    int *region_area = new int[W * H];
    grassLabel (tmp_img, label, num_regions, region_area, W * H);

    for(int i=0; i < num_regions; i++) {
        if(region_area[i] < TH) {
            for(int j = 0; j < (W * H); j++) {
                if(label[j] == (i+1))
                    (img->imageData)[j] = 0;
            }
        }
    }
    delete [] region_area;
    delete [] label;
    iplDeallocate (tmp_img, IPL_IMAGE_ALL);
}
/*
 * subtractMedian
 * Purpose: subtracts the median pixel value from every pixel of the image
 */
void subtractMedian( IplImage* src, IplImage* dst )
{
    double median = imageMedian( src );
    double new_pix;
    for( int i=0; i < src->height; i++ ) {
        for( int j=0; j < src->width; j++ ) {
            int pix_pos = j + i * src->width;
            unsigned char pix = static_cast<unsigned char>( src->imageData[ pix_pos ] );
            new_pix = fabs( (double)pix - median );
            ((char*)dst->imageData)[ pix_pos ] = static_cast<char>(new_pix);
        }
    }
}
/*
 * imageMedian
 * Purpose: returns the median pixel value of the image
 */
double imageMedian( IplImage* img )
{
    int img_size = img->width * img->height;
    double *img_array = new double[img_size];
    for( int i=0; i < img_size; i++ ) {
        unsigned char pix = static_cast<unsigned char>(img->imageData[i]);
        img_array[i] = (double)pix;
    }
    sort(img_array, img_array + img_size);
    double median = img_array[ img_size / 2 ];
    delete [] img_array;
    return(median);
}
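The median-by-sort idea is easy to check on a small array; a standalone sketch (`median_of` is an illustrative name):

```cpp
#include <algorithm>
#include <vector>

// Median by full sort, mirroring imageMedian(): copy the pixel values,
// sort them, and take the element at index size/2 of the sorted array.
double median_of(std::vector<double> v)   // taken by value: sorts a copy
{
    std::sort(v.begin(), v.end());
    return v[v.size() / 2];
}
```

For even-sized inputs the `size/2` index picks the upper of the two middle elements, matching the listing above.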
/*
 * maskImage
 * Purpose: Masks one image with the other
 * Input: the image to be masked, and the mask image
 * Output: the same image after masking
 */
void maskImage( IplImage* img, IplImage* mask )
{
    // image and the mask have to be of the same dimensions
    int W = img->width;
    int H = img->height;
    int W_mask = mask->width;
    int H_mask = mask->height;
    if(W != W_mask || H != H_mask)
    {
        cerr << "Mask and image do not have the same dimensions";
        exit(EXIT_FAILURE);
    }

    int pix_pos, pix;
    CvPixelPosition8u pp;  // pixel position structure for 8U image
    CvRect roi;            // roi is the whole image
    roi.x = 0;
    roi.y = 0;
    roi.width = W;
    roi.height = H;
    CV_INIT_PIXEL_POS( pp, (unsigned char*)mask->imageData, mask->widthStep, roi, 0, 0, mask->origin );

    int cnt1 = 0;
    for(int i=0; i < H; i++)
    {
        for(int j=0; j < W; j++)
        {
            pix_pos = j + W * i;
            pix = static_cast<unsigned char>(*(pp.currline + pp.x));
            if(pix != MAX_PIX_VAL)
            {
                ( (char*)img->imageData )[pix_pos] = MIN_PIX_VAL;
                cnt1++;
            }
            CV_MOVE_RIGHT_WRAP(pp, 1);
        }
        CV_MOVE_DOWN(pp, 1);
    }
}
/*
 * tanimotoDistance()
 *
 * Finds the distance between two binary image sets.
 * Note: This distance metric has been modified to better suit
 *       the situation in the segmented echo videos.
 */
double tanimotoDistance(IplImage* img1, IplImage* img2)
{
    int img1_N = cvCountNonZero(img1);
    int img2_N = cvCountNonZero(img2);

    IplImage* imgAND = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                                             IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                                             IPL_ALIGN_QWORD, img1->width, img2->height,
                                             NULL, NULL, NULL, NULL);
    if (imgAND == NULL)
        exit(EXIT_FAILURE);
    iplAllocateImage(imgAND, 0, 0);
    if (NULL == imgAND->imageData)
        exit(EXIT_FAILURE);

    iplAnd(img1, img2, imgAND);
    int imgAND_N = cvCountNonZero(imgAND);
    printf("img1_N: %d, img2_N: %d, imgAND_N: %d\n", img1_N, img2_N, imgAND_N);

    double tanimoto;
    tanimoto = double(imgAND_N) / (double)(img1_N + img2_N - imgAND_N);
    iplDeallocate(imgAND, IPL_IMAGE_ALL);
    return tanimoto;
}

double modifiedTanimotoDistance(IplImage* img1, IplImage* img2)
{
    int img1_N = cvCountNonZero(img1);
    int img2_N = cvCountNonZero(img2);
    int imgMin_N = (img1_N < img2_N ? img1_N : img2_N);

    IplImage* imgAND = iplCloneImage(img1);
    iplSet(imgAND, 0);
    iplAnd(img1, img2, imgAND);
    int imgAND_N = cvCountNonZero(imgAND);

    double tanimoto;
    if (img1_N != 0 && img2_N != 0)
        tanimoto = double(imgAND_N) / (double)(imgMin_N);
    else
        tanimoto = 0;
    iplDeallocate(imgAND, IPL_IMAGE_ALL);
    return tanimoto;
}
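The Tanimoto coefficient computed above from image masks is the same quantity as on plain bit vectors; a standalone sketch without the image library (names are illustrative):

```cpp
#include <vector>
#include <cstddef>

// Tanimoto coefficient on binary vectors: |A AND B| / |A OR B|,
// where |A OR B| = |A| + |B| - |A AND B|, as in tanimotoDistance().
double tanimoto(const std::vector<int>& a, const std::vector<int>& b)
{
    int n_a = 0, n_b = 0, n_and = 0;
    for (std::size_t i = 0; i < a.size(); i++) {
        if (a[i]) n_a++;
        if (b[i]) n_b++;
        if (a[i] && b[i]) n_and++;
    }
    return (double)n_and / (double)(n_a + n_b - n_and);
}
```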
/*
 * Calculates the histogram of a given block of data.
 */
void findHistogram (int* data, int sizeData, int maxBins, double* hist)
{
    // Reset the histogram array
    for(int i=0; i < maxBins; i++)
        hist[i] = 0;

    // Build the histogram
    for (int j=0; j < sizeData; j++)
        hist[data[j]]++;

    // Normalize the histogram
    for(int i=0; i < maxBins; i++)
        hist[i] /= (double)sizeData;
}
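The count-then-normalize step can be sketched on its own; a standalone version (`find_histogram` is an illustrative name):

```cpp
#include <cstddef>
#include <vector>

// Normalized histogram of non-negative integer data, as findHistogram()
// builds it: bin counts divided by the number of samples.
std::vector<double> find_histogram(const std::vector<int>& data, int max_bins)
{
    std::vector<double> hist(max_bins, 0.0);
    for (std::size_t j = 0; j < data.size(); j++)
        hist[data[j]]++;
    for (int i = 0; i < max_bins; i++)
        hist[i] /= (double)data.size();
    return hist;
}
```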
/*
 * Gets the intersection between 2 histograms.
 */
double histIntersection(double* H1, double* H2, int size)
{
    double numerator = 0;
    double sum1 = 0, sum2 = 0;
    for (int m=0; m < size; m++) {
        numerator += (H1[m] < H2[m] ? H1[m] : H2[m]);
        sum1 += H1[m];
        sum2 += H2[m];
    }
    return ( numerator / (sum1 < sum2 ? sum1 : sum2) );
}
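Histogram intersection reduces to the sum of the bin-wise minima, normalized by the smaller total mass; a standalone sketch of that computation:

```cpp
// Histogram intersection: sum of bin-wise minima over the smaller
// total mass; 1.0 for identical normalized histograms.
double hist_intersection(const double* h1, const double* h2, int size)
{
    double numerator = 0, sum1 = 0, sum2 = 0;
    for (int m = 0; m < size; m++) {
        numerator += (h1[m] < h2[m] ? h1[m] : h2[m]);
        sum1 += h1[m];
        sum2 += h2[m];
    }
    return numerator / (sum1 < sum2 ? sum1 : sum2);
}
```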
/*
 * Finds the Kullback-Leibler distance between 2 distributions.
 */
double KLDistance(double* H1, double* H2, int size)
{
    double distance = 0;
    for (int m=0; m < size; m++)
        if (H1[m] != 0 && H2[m] != 0)
            distance += H1[m] * log(H1[m] / H2[m]);
    return distance / log(2.0);  // convert to bits
}
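The KL computation above (skipping zero bins, converting to bits by dividing by log 2) can be checked on a toy pair of distributions; a standalone sketch with illustrative names:

```cpp
#include <cmath>

// Kullback-Leibler divergence in bits; bins where either histogram
// is zero are skipped, as in KLDistance() above.
double kl_distance(const double* h1, const double* h2, int size)
{
    double d = 0;
    for (int m = 0; m < size; m++)
        if (h1[m] != 0 && h2[m] != 0)
            d += h1[m] * std::log(h1[m] / h2[m]);
    return d / std::log(2.0);
}
```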
/*
 * Ordinal histogram difference according to
 * Cha and Srihari's paper.
 */
double ordinalHistDistance(double *H1, double *H2, int size)
{
    double prefixSum = 0;
    double histDist = 0;
    for (int i=0; i < size; i++) {
        prefixSum += H1[i] - H2[i];
        histDist += fabs(prefixSum);
    }
    return (histDist);
}
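The ordinal distance accumulates the running prefix sum of bin differences, so mass moved between distant bins costs more than mass moved between neighbouring bins; a standalone sketch:

```cpp
#include <cmath>

// Ordinal histogram distance: sum of the absolute prefix sums of the
// bin-wise differences, as in ordinalHistDistance() above.
double ordinal_hist_distance(const double* h1, const double* h2, int size)
{
    double prefix = 0, dist = 0;
    for (int i = 0; i < size; i++) {
        prefix += h1[i] - h2[i];
        dist += std::fabs(prefix);
    }
    return dist;
}
```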
/*
 * img_util.h
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 */
#ifndef _IMG_UTIL_H
#define _IMG_UTIL_H
#include <ipl/ipl.h>
#include <cv.h>
#include "../util/olist.h"

IplImage* cropImage( const IplImage*, const int, const int, const int, const int );
void clusterImage( IplImage*, unsigned char*, const unsigned char );
inline double migratingPoints( double*, double*, unsigned char );
inline unsigned char classifyPixel( double, double*, unsigned char );
void mergeTo2Regions( IplImage*, unsigned char*, unsigned char );
void spatialStd( const IplImage*, double&, double& );
void subtractMedian( IplImage*, IplImage* );
double imageMedian( IplImage* );

IplImage* coloredBlobs(IplImage*);
int getLabel(IplImage*, IplImage*);
int grassLabel (IplImage*, unsigned char*, int&, int*, int);
int grass (IplImage*, unsigned char*, int, int, int, int*, int);
void eliminateSmallRegions( IplImage*, int );
void bilevelizeImage( IplImage*, IplImage* );
void maskImage( IplImage*, IplImage* );
void sobelEdgeDetect(IplImage*, IplImage*);
void HoughLines(IplImage*, double, double, float*, int);
/*
 * Distance measures for comparing a collection of binary objects.
 */
double tanimotoDistance(IplImage*, IplImage*);
double modifiedTanimotoDistance(IplImage*, IplImage*);

/*
 * Bunch of histogram difference measures.
 */
void findHistogram (int*, int, int, double*);
double histIntersection(double*, double*, int);
double KLDistance(double*, double*, int);
double ordinalHistDistance(double*, double*, int);

#endif
CARDIOHOME = /home/shahram/cardio/source
UTILHOME = $(CARDIOHOME)/util
ECHOVIDEOHOME = $(CARDIOHOME)/echo_video
CC = gcc
CPPFLAGS = -g -Wno-deprecated
INCPATH = -I$(CARDIOHOME)/include -I/usr/local/include
LIBPATH = -L/usr/local/lib
LIBS = -liplm6 -lipla6 -liplpx -lopencv -lGTL -lstdc++
.SUFFIXES: .cc .o .h
PROG = extractColoredBlobs
SRCS = $(UTILHOME)/util.cc $(UTILHOME)/ppmIO.cc $(UTILHOME)/img_util.cc $(UTILHOME)/Blob.cc $(UTILHOME)/extractColoredBlobs.cc
OBJS = $(SRCS:%.cc=%.o)

.cc.o:
	$(CC) $(CPPFLAGS) $(INCPATH) -c $<

all: $(PROG)

$(PROG): $(OBJS)
	$(CC) $(CPPFLAGS) -o $@ $(OBJS) $(LIBPATH) $(INCPATH) $(LIBS)

clean:
	rm -f $(PROG)
	rm -f core
	rm -f *.o

stats:
	wc *.cc *.h
/*
 * ppmIO.cc
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 */
#include <iostream.h>
#include <fstream.h>
#include <stdlib.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>

#include "ppmIO.h"
#include "filename.h"
/* readIMG2IPL
 * Purpose: reads in a ".ppm" format image from a file, either in
 *          ASCII or RAW format, and initializes an 'IplImage' object
 *          with the read image.
 * Inputs: image file name, image id
 * Outputs: an object of the 'IplImage' structure.
 *
 * Revised: 11/26/00
 * Revised: 6/7/01
 */
IplImage* readIMG2IPL( const char* img_name, char* img_ID )
{
    ifstream in_image( img_name, ios::in | ios::binary );
    if ( !in_image )
    {
        cerr << "readIMG2IPL: Image could not be opened!\n";
        exit(EXIT_FAILURE);
    }

    // Checking that the file identifier matches the PNM format
    char c;
    if ( (c = in_image.get()) != MAGICK )
    {
        cerr << "Error! Not a PNM image format.\n";
        exit(EXIT_FAILURE);
    }

    IMG_TYPE type = (IMG_TYPE) in_image.get();
    if ( (type != PBM_ASCII) && (type != PBM_RAW) &&
         (type != PGM_ASCII) && (type != PGM_RAW) &&
         (type != PPM_ASCII) && (type != PPM_RAW) )
    {
        cerr << "Error! Not a PNM image format.\n";
        exit(EXIT_FAILURE);
    }
    in_image.ignore(10, '\n');  // getting rid of the rest of the line

    // reading the image dimensions from the header
    int width;
    int height;
    int max_val;
    char tmp_s[100];
    int field_num = 0;
    int header_end_flag = 0;
    int n = 0;
    while ( ( c = in_image.get() ) != EOF )
    {
        if ( c == '#' )
            in_image.ignore(200, '\n');  // ignoring the comments in the header
        else if ( !isspace( c ) )
            tmp_s[n++] = c;
        else
        {
            tmp_s[ n ] = '\0';
            switch( field_num )
            {
                case 0:
                    width = atoi( tmp_s );
                    break;
                case 1:
                    height = atoi( tmp_s );
                    break;
                case 2:
                    if( (type == PBM_ASCII) || (type == PBM_RAW) )
                        break;
                    max_val = atoi( tmp_s );
                    if ( max_val != MAX_PIX_VAL )
                    {
                        cout << "Wrong maximum pixel value" << endl;
                        exit(EXIT_FAILURE);
                    }
                    header_end_flag = 1;
                    break;
            }
            field_num++;
            n = 0;
        }
        if ( header_end_flag )
            break;
    }

    int pix_size;
    char colorModel[5];
    char channelSeq[5];
    int channel_depth;
    switch(type)
    {
        case(PBM_ASCII):
        case(PBM_RAW):
            pix_size = 1;
            strcpy(colorModel, "GRAY");
            strcpy(channelSeq, "G");
            channel_depth = IPL_DEPTH_1U;
            break;
        case(PGM_ASCII):
        case(PGM_RAW):
            pix_size = 1;
            strcpy(colorModel, "GRAY");
            strcpy(channelSeq, "GRAY");
            channel_depth = IPL_DEPTH_8U;
            break;
        case(PPM_ASCII):
        case(PPM_RAW):
            pix_size = 3;
            strcpy(colorModel, "RGB");
            strcpy(channelSeq, "RGB");
            channel_depth = IPL_DEPTH_8U;
            break;
    };

    // Initializing an object of structure "IplImage" with the image values
    IplImage* img = iplCreateImageHeader( pix_size,             // num of channels
                                          0,                    // no alpha channel
                                          channel_depth,        // data of byte type
                                          colorModel,           // color model
                                          channelSeq,           // color order
                                          IPL_DATA_ORDER_PIXEL, // arrangement
                                          IPL_ORIGIN_TL,        // top-left orientation
                                          IPL_ALIGN_QWORD,      // 8-byte aligned
                                          width,                // image width
                                          height,               // image height
                                          NULL,                 // no ROI
                                          NULL,                 // no mask ROI
                                          img_ID,               // image id
                                          NULL);                // not tiled
    if (img == NULL) {
        cout << "Can not create new image header" << endl;
        exit(EXIT_FAILURE);
    }
    iplAllocateImage(img, 0, 0);
    if (NULL == img->imageData) {
        cout << "Error! Can't allocate memory to imageData" << endl;
        exit(EXIT_FAILURE);
    }

    // Reading the pixel values according to the file format
    int img_size = width * height;
    int img_length = img_size * img->nChannels;
    int counter = 0;
    int m = 0;
    int val_flag = 0;

    // Note: Reading PBM RAW (maybe ASCII too) should be handled differently
    switch ( type )
    {
        case PBM_ASCII:
        case PGM_ASCII:
        case PPM_ASCII:
            while ( (c = in_image.get()) != EOF )
            {
                if ( !isspace( c ) )
                {
                    val_flag = 1;
                    tmp_s[m++] = c;
                }
                else if ( !val_flag )
                    continue;
                else
                {
                    tmp_s[ m ] = '\0';
                    int pix_val = atoi( tmp_s );
                    ((char*)img->imageData)[counter] = static_cast<char>(pix_val);
                    counter++;
                    m = 0;
                    val_flag = 0;
                }
            }
            break;
        case PBM_RAW:
        case PGM_RAW:
        case PPM_RAW:
        {
            unsigned char *img_buffer = new unsigned char[img_length];
            in_image.read (reinterpret_cast<char *>(img_buffer), sizeof(unsigned char) * img_length);
            for (int i = 0; i < img_length; i++)
                ((char*)img->imageData)[i] = img_buffer[i];
            delete [] img_buffer;
            break;
        }
    }
    in_image.close();
    return (img);
}
IplImage* readROI2IPL (const char* img_name, float ul_x, float ul_y, float lr_x, float lr_y)
{
    ifstream in_image (img_name, ios::in | ios::binary);
    if (!in_image) {
        cerr << "readROI2IPL: Image could not be opened!\n";
        exit(EXIT_FAILURE);
    }

    // Checking that the file identifier matches the PNM format
    char c;
    if ((c = in_image.get()) != MAGICK) {
        cerr << "Error! Not a PNM image format.\n";
        exit(EXIT_FAILURE);
    }

    IMG_TYPE type = (IMG_TYPE) in_image.get();
    if ( (type != PBM_ASCII) && (type != PBM_RAW) &&
         (type != PGM_ASCII) && (type != PGM_RAW) &&
         (type != PPM_ASCII) && (type != PPM_RAW) )
    {
        cerr << "Error! Not a PNM image format.\n";
        exit(EXIT_FAILURE);
    }
    in_image.ignore(10, '\n');
    // getting rid of the rest of the line

    // reading the image dimensions from the header
    int width;
    int height;
    int max_val;
    char tmp_s[100];
    int field_num = 0;
    int header_end_flag = 0;
    int n = 0;
    while ((c = in_image.get()) != EOF)
    {
        if (c == '#')
            in_image.ignore(200, '\n');  // ignoring the comments in the header
        else if (!isspace(c))
            tmp_s[n++] = c;
        else
        {
            tmp_s[n] = '\0';
            switch (field_num)
            {
                case 0:
                    width = atoi (tmp_s);
                    break;
                case 1:
                    height = atoi (tmp_s);
                    break;
                case 2:
                    if ((type == PBM_ASCII) || (type == PBM_RAW))
                        break;
                    max_val = atoi (tmp_s);
                    if (max_val != MAX_PIX_VAL) {
                        cout << "Wrong maximum pixel value" << endl;
                        exit(EXIT_FAILURE);
                    }
                    header_end_flag = 1;
                    break;
            }
            field_num++;
            n = 0;
        }
        if (header_end_flag)
            break;
    }

    int pix_size;
    char colorModel[5];
    char channelSeq[5];
    int channel_depth;
    switch(type)
    {
        case(PBM_ASCII):
        case(PBM_RAW):
            pix_size = 1;
            strcpy(colorModel, "GRAY");
            strcpy(channelSeq, "G");
            channel_depth = IPL_DEPTH_1U;
            break;
        case(PGM_ASCII):
        case(PGM_RAW):
            pix_size = 1;
            strcpy(colorModel, "GRAY");
            strcpy(channelSeq, "GRAY");
            channel_depth = IPL_DEPTH_8U;
            break;
        case(PPM_ASCII):
        case(PPM_RAW):
            pix_size = 3;
            strcpy(colorModel, "RGB");
            strcpy(channelSeq, "RGB");
            channel_depth = IPL_DEPTH_8U;
            break;
    };
    // Initializing an object of structure "IplImage" with the image values
    int w_l = (int) (ul_x * (float)width);
    int w_r = (int) (lr_x * (float)width);
    int h_t = (int) (ul_y * (float)height);
    int h_b = (int) (lr_y * (float)height);
    int W = w_r - w_l;
    int H = h_b - h_t;
    printf ("w_l=%d, w_r=%d, h_t=%d, h_b=%d\n", w_l, w_r, h_t, h_b);
    printf ("W=%d, H=%d\n", W, H);
    /*
    IplImage* img = iplCreateImageHeader (1,                    // num of channels
                                          0,                    // no alpha channel
                                          IPL_DEPTH_8U,         // data of byte type
                                          "GRAY",               // color model
                                          "GRAY",               // color order
                                          IPL_DATA_ORDER_PIXEL, // arrangement
                                          IPL_ORIGIN_TL,        // top-left orientation
                                          IPL_ALIGN_QWORD,      // 8-byte aligned
                                          W,                    // image width
                                          H,                    // image height
                                          NULL,                 // no ROI
                                          NULL,                 // no mask ROI
                                          NULL,                 // image id
                                          NULL);                // not tiled
    */
    IplImage* img = iplCreateImageHeader (pix_size,             // num of channels
                                          0,                    // no alpha channel
                                          channel_depth,        // data of byte type
                                          colorModel,           // color model
                                          channelSeq,           // color order
                                          IPL_DATA_ORDER_PIXEL, // arrangement
                                          IPL_ORIGIN_TL,        // top-left orientation
                                          IPL_ALIGN_QWORD,      // 8-byte aligned
                                          W,                    // image width
                                          H,                    // image height
                                          NULL,                 // no ROI
                                          NULL,                 // no mask ROI
                                          NULL,                 // image id
                                          NULL);                // not tiled
    if (img == NULL) {
        cout << "Can not create new image header" << endl;
        exit(EXIT_FAILURE);
    }
    iplAllocateImage (img, 0, 0);
    if (NULL == img->imageData) {
        cout << "Error! Can't allocate memory to imageData" << endl;
        exit(EXIT_FAILURE);
    }
    // Reading the pixel values of the ROI row by row
    int initial_skip = (width * h_t + w_l) * pix_size;
    int read_span = (w_r - w_l + 1) * pix_size;
    int skip_span = (width - w_r + w_l + 1) * pix_size;
    printf ("initial_skip = %d, read_span = %d, skip_span = %d\n", initial_skip, read_span, skip_span);
    int noffset;
    char* pimg = (char*)(img->imageData);

    in_image.seekg (initial_skip * sizeof(unsigned char), ios::cur);
    for (int i=0; i < H; i++) {
        unsigned char *buffer = new unsigned char[read_span];
        in_image.read (reinterpret_cast<char*>(buffer), sizeof(unsigned char) * read_span);
        noffset = i * img->widthStep;
        for (int j=0; j < read_span; j++) {
            pimg[j + noffset] = buffer[j];
            /*
            int pix_val = (int) (0.212671 * (float)buffer[j] +
                                 0.715160 * (float)buffer[j+1] +
                                 0.072169 * (float)buffer[j+2]);
            pimg[j/3 + noffset] = (unsigned char)pix_val;
            */
        }
        in_image.seekg (skip_span * sizeof(unsigned char), ios::cur);
        delete [] buffer;
    }
    in_image.close();
    return (img);
}

/* writeIPL2IMG
 * Purpose: Writes an IPL image into one of the PPM, PGM or PBM formats
 *          based on the specifications of the IPL image.
 * Inputs: The image that has to be written to output and the output
 *         file name. If no file name is provided, writes to the default file.
 *
 * Outputs: No output, just writes to the file.
 *
 * Revised: 11/26/00
 * Revised: 6/7/01
 */
void writeIPL2IMG (IplImage* img, const char* file_name)
{
    int no_channels;
    int color_model;
    IMG_TYPE imgTyp;
    IplImage* img_out;

    IplImage* img_copy = iplCreateImageHeader( img->nChannels, img->alphaChannel,
                                               img->depth, img->colorModel,
                                               img->channelSeq, img->dataOrder,
                                               img->origin, img->align,
                                               img->width, img->height,
                                               NULL, NULL, NULL, NULL );
    if (NULL == img_copy)
        exit(EXIT_FAILURE);
    iplAllocateImage (img_copy, 0, 0);
    if (NULL == img_copy->imageData)
        exit(EXIT_FAILURE);
    iplCopy (img, img_copy);
    //
    // if the format of the image is not RGB, change it to RGB
    //
    if( !strcmp( "RGB", img->colorModel ) )
        color_model = 0;
    if( !strcmp( "YUV", img->colorModel ) )
        color_model = 1;
    if( !strcmp( "GRAY", img->colorModel ) || !strcmp( "Gray", img->colorModel ) ||
        !strcmp( "GRAYGRAY", img->colorModel ) )
        color_model = 2;

    switch (color_model)
    {
        case 0: // Makes the order of pixels "RGB"
            img_out = iplCreateImageHeader( 3, 0, IPL_DEPTH_8U, "RGB", "RGB",
                                            IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                                            IPL_ALIGN_QWORD, img->width, img->height,
                                            NULL, NULL, NULL, NULL );
            if (img_out == NULL) {
                cout << "Can not create new image header" << endl;
                exit(EXIT_FAILURE);
            }
            iplAllocateImage( img_out, 0, 0 );
            if (NULL == img_out->imageData) {
                cout << "Can not create new image header" << endl;
                exit(EXIT_FAILURE);
            }
            iplCopy( img_copy, img_out );
            imgTyp = PPM_RAW;
            break;
        case 1: // Reformats to RGB
            img_out = iplCreateImageHeader( 3, 0, IPL_DEPTH_8U, "RGB", "RGB",
                                            IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                                            IPL_ALIGN_QWORD, img->width, img->height,
                                            NULL, NULL, NULL, NULL );
            if (img_out == NULL) {
                cout << "Can not create new image header" << endl;
                exit(EXIT_FAILURE);
            }
            iplAllocateImage( img_out, 0, 0 );
            if (NULL == img_out->imageData) {
                cout << "Can not create new image header" << endl;
                exit(EXIT_FAILURE);
            }
            iplYUV2RGB (img_copy, img_out);
            imgTyp = PPM_RAW;
            break;
        case 2:
            if (img->depth == IPL_DEPTH_8U) {
                img_out = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                                                IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                                                IPL_ALIGN_QWORD, img->width, img->height,
                                                NULL, NULL, NULL, NULL );
                if (img_out == NULL) {
                    cout << "Can not create new image header" << endl;
                    exit(EXIT_FAILURE);
                }
                iplAllocateImage( img_out, 0, 0 );
                if (NULL == img_out->imageData) {
                    cout << "Can not create new image header" << endl;
                    exit(EXIT_FAILURE);
                }
                iplCopy( img_copy, img_out );
                imgTyp = PGM_RAW;
            } else {
                img_out = iplCreateImageHeader( 1, 0, IPL_DEPTH_1U, "G", "GRAY",
                                                IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                                                IPL_ALIGN_QWORD, img->width, img->height,
                                                NULL, NULL, NULL, NULL );
                if (img_out == NULL) {
                    cout << "Can not create new image header" << endl;
                    exit(EXIT_FAILURE);
                }
                iplAllocateImage( img_out, 0, 0 );
                if (NULL == img_out->imageData) {
                    cout << "Can not create new image header" << endl;
                    exit(EXIT_FAILURE);
                }
                iplCopy (img_copy, img_out);
                imgTyp = PBM_RAW;
            }
            break;
    }
    //
    // writing the image to the output file in RAW format.
    // Output will be one of the ".ppm", ".pgm", or ".pbm" formats.
    //
    char img_name[FILE_NAME_SIZE];
    if( file_name )
        strcpy( img_name, file_name );
    else
        strcpy( img_name, DEFAULT_FILE_NAME );
    if( imgTyp == PPM_RAW )
        strcat( img_name, PPMExtension );
    else if( imgTyp == PGM_RAW )
        strcat( img_name, PGMExtension );
    else
        strcat( img_name, PBMExtension );

    // opening the output file for writing in binary
    ofstream out_image_file( img_name, ios::out | ios::binary );
    if (!out_image_file) {
        cerr << "writeIPL2IMG: Can not open file for writing" << endl;
        exit(EXIT_FAILURE);
    }

    // writing the image file header
    out_image_file << MAGICK << atoi(reinterpret_cast<char*>( &imgTyp )) << endl;
    out_image_file << img_out->width << endl;
    out_image_file << img_out->height << endl;
    if (imgTyp != PBM_RAW)
        out_image_file << MAX_PIX_VAL << endl;

    int img_length = img_out->width * img_out->height * img_out->nChannels;
    out_image_file.write(img_out->imageData, img_length);
    out_image_file.close();
    iplDeallocateImage (img_out);
    iplDeallocateImage (img_copy);
}
/*
 * ppmIO.h
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 */
#ifndef PPM_IO_INCLUDED
#define PPM_IO_INCLUDED

#include <ipl/ipl.h>
#include "filename.h"

const int PIX_SIZE = 3;
const char MAGICK = 'P';

enum IMG_TYPE
{
    PBM_ASCII = '1',
    PGM_ASCII = '2',
    PPM_ASCII = '3',
    PBM_RAW = '4',
    PGM_RAW = '5',
    PPM_RAW = '6'
};
IplImage* readIMG2IPL (const char*, char*);
IplImage* readROI2IPL (const char*, float, float, float, float);
void writeIPL2IMG (IplImage*, const char*);

#endif
/*
 * util.cc
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 */
#include <iostream.h>
#include <fstream.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <stdio.h>
#include <vector.h>
#include <unistd.h>
#include <dirent.h>
#include <sys/stat.h>
#include <fcntl.h>

#include "util.h"
#include "filename.h"
/*
 * Comparison function to be used with the 'qsort' function
 * (orders in decreasing value).
 */
int cmp (const void* vp, const void* vq)
{
    const double *p = (double*) vp;
    const double *q = (double*) vq;
    double diff = *p - *q;
    return ( ( diff >= 0 ) ? ( (diff > 0) ? -1 : 0 ) : +1 );
}
/*
 * Sorts 'array' in decreasing order, carrying labels along with the
 * values, and returns the original index (label) of the largest element.
 */
unsigned char sort( double* array, unsigned char num_elements )
{
    unsigned char *label = new unsigned char[ num_elements ];
    assert( label != NULL );
    for( int l=0; l < num_elements; l++ )
        label[l] = l;

    double a;
    unsigned char l;
    // bubble sort
    for( int i=0; i < num_elements - 1; i++ ) {
        for( int j = num_elements-1; j > i; --j ) {
            if( array[j-1] < array[j] ) {
                a = array[j-1];
                l = label[j-1];
                array[j-1] = array[j];
                label[j-1] = label[j];
                array[j] = a;
                label[j] = l;
            }
        }
    }
    l = label[0];
    delete [] label;
    return( l );
}
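The label-carrying sort above is used only to recover the index of the largest element (e.g. by mergeTo2Regions to find the background region); with the standard library that reduces to a one-line argmax (`argmax` is an illustrative helper name):

```cpp
#include <algorithm>
#include <cstddef>

// Index of the (first) largest element of the array, i.e. what the
// label-carrying bubble sort above ultimately returns.
std::size_t argmax(const double* a, std::size_t n)
{
    return std::max_element(a, a + n) - a;
}
```

Unlike the sort above, this leaves the input array untouched.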
/* LookUpTable
 * Purpose: Makes a look-up table of cos and sin values for
 *          the specified resolution M. M is the number of angles in
 *          the range [-90, 90) degrees.
 */
void LookUpTable(float* COSINE, float* SINE, int M)
{
    float theta;
    for( int m=0; m < M; m++ )
    {
        theta = ((m * 180.0) / static_cast<float>(M)) - 90.0;
        theta *= Deg_to_Rad;
        COSINE[m] = cos(theta);
        SINE[m] = sin(theta);
    }
}
/*
 * deleteExtension()
 *
 * Used for deleting a file extension from the end of a file name,
 * i.e. deleting '.ppm' from '/directory/foo.ppm'.
 */
void deleteExtension(char* name)
{
    int l;
    int L = strlen(name);
    char* tmp_name = new char[FILE_NAME_SIZE];
    strcpy(tmp_name, name);  // keeping the name for future use

    // this vector holds the characters of the name
    vector<char> name_chars;
    for(l=0; l <= L; l++)
        name_chars.insert(name_chars.begin() + l, name[l]);

    // erasing characters from the end until a '.' (or a '/') is reached
    l = L;
    while(l >= 0)
    {
        if( (name_chars[l] != '.') && (name_chars[l] != '/') )
            name_chars.erase(name_chars.begin() + l);
        else
        {
            name_chars.erase(name_chars.begin() + l);
            break;
        }
        l--;
    }

    L = name_chars.size();
    for(l=0; l < L; l++)
        tmp_name[l] = name_chars[l];
    tmp_name[l] = '\0';
    strcpy(name, tmp_name);
    delete [] tmp_name;
}

/*
 * deleteDirectory()
 *
 * Used for deleting the directory path from a file name,
 * i.e. deleting '/foo_dir1/foo_dir2/' from '/foo_dir1/foo_dir2/foo.ext'.
 */
void deleteDirectory(char* name)
{
    int l;
    int L = strlen(name);
    char* tmp_name = new char[FILE_NAME_SIZE];
    strcpy(tmp_name, name);  // keeping the name for future use

    // this vector holds the characters of the name
    vector<char> name_chars;
    for(l=0; l <= L; l++)
        name_chars.insert(name_chars.begin() + l, name[l]);

    // finding the last '/' in the name
    l = L;
    while(l >= 0)
    {
        if( name_chars[l] == '/' )
            break;
        else
            l--;
    }
    name_chars.erase( name_chars.begin(), name_chars.begin() + l );

    L = name_chars.size();
    for(l=0; l < L; l++)
        tmp_name[l] = name_chars[l];
    tmp_name[l] = '\0';
    strcpy(name, tmp_name);
    delete [] tmp_name;
}

int NumOfFiles(char* dir)
{
    DIR *dp;
    struct dirent *F;
    struct stat statbuf;
    char current_dir[FILE_NAME_SIZE];
    getcwd(current_dir, FILE_NAME_SIZE);  // keeping the current working directory

    // opening the image directory
    if( (dp = opendir(dir)) == NULL ) {
        cerr << "util.cc: NumOfFiles(): Cannot open directory:" << dir;
        exit(EXIT_FAILURE);
    }
    chdir(dir);  // changing directory to the image directory

    int cnt = 0;
    while( (F = readdir(dp)) != NULL ) {
        lstat( F->d_name, &statbuf );
        if( !S_ISDIR(statbuf.st_mode) )
            cnt++;
    }

    // going back to the directory we were in before, and closing the directory
    chdir(current_dir);
    closedir(dp);

    // '.' and '..' are directories, so the S_ISDIR check above
    // already excludes them from the count
    return(cnt);
}
UNITED STATES PATENT AND TRADEMARK OFFICE DOCUMENT CLASSIFICATION BARCODE SHEET
DATA COLLECTION / XMLIO
echo1_seg1_kf1_PLA.xml
<?xml version="1.0"?>
<echoFrame path="/home/shahram/cardio/echos/videos/echo1/" name="echo1_seg1_kf1_PLA.ppm" echoNum="1" segNum="1" kfNum="1" view="PLA"><ROI roiPath="/home/shahram/cardio/echos/videos/echo1/regions/echo1_seg1_kf1_PLA_seg_1.pgm" index="1" label="UNKN" valid="FALSE"></ROI><ROI roiPath="/home/shahram/cardio/echos/videos/echo1/regions/echo1_seg1_kf1_PLA_seg_2.pgm" index="2" label="UNKN" valid="FALSE"></ROI><ROI roiPath="/home/shahram/cardio/echos/videos/echo1/regions/echo1_seg1_kf1_PLA_seg_3.pgm" index="3" label="UNKN" valid="FALSE"></ROI><ROI roiPath="/home/shahram/cardio/echos/videos/echo1/regions/echo1_seg1_kf1_PLA_seg_4.pgm" index="4" label="UNKN" valid="FALSE"></ROI></echoFrame>
package xmlio; import java.util.*; import java.util.Vector; import java.io.*; importjava.net.*; import Java. lang. Exception; import javax.xml.parsers.*; import org.xml.sax.*; import org.xml.sax.helpers.*; import org.w3c.dom.*; import com.docuverse.dom.DOM; public class EchoFrame implements XMLConstants { private static String featDir = "features"; private static String ppmSuffix = "ppm"; private static String datSuffix = "dat"; private static EchoFrame instance; private String path; private String name; private String view; private String echoNum; private String segNum; private String kfNum; private ArrayList rois; private String inputXMLfile; private Document inputDoc; public static synchronized EchoFrame getlnstanceQ { if (instance == null) instance = new EchoFrame(); return instance; }
// Default constructor public EchoFrame() { rois = new ArrayList(); }
// Parses the input XML file public void read(String inputfile) throws Exception {
String fileContents = "";
String line;
BufferedReader fileReader = new BufferedReader(new FileReader(inputfile)); while((line=fileReader.readLine())!=null) fileContents += (line + "\n"); parselnput(fileContents.getBytes()); }
// Parses the input xml file and sets the fields public void parselnput(byte[] input_xml) throws Exception
{ inputDoc = ParseXML.getDocument(new ByteArrayInputStream(input_xml)); createEchoFrame(); createROI(); }
// From the XML DOM structure reads in the attributes of EchoFrame private void createEchoFrame() {
NodeList el = inputDoc.getElementsByTagName(ECHO_FRAME);
NamedNodeMap attlist = el.item(0).getAttributes(); int att_length = attlist.getLength(); for (int i=0; i < att_length; i++) { Node node = attlist.item(i); String node_name = node.getNodeName(); if (node_name.equals(FRAME_PATH)) { setPath(node.getNodeValue()); } else if (node_name.equals(FRAME_NAME)) { setName(node.getNodeValue()); } else if (node_name.equals(FRAME_VIEW)) { setView(node.getNodeValue()); } else if (node_name.equals(ECHO_NUM)) { setEchoNum(node.getNodeValue()); } else if (node_name.equals(SEG_NUM)) { setSegNum(node.getNodeValue()); } else if (node_name.equals(KF_NUM)) { setKFNum(node.getNodeValue()); } else {
System.out.println("Error!"); } } }
// From the DOM tree reads the attributes of the ROIs in the EchoFrame // 10/10/03: Added section for reading attributes of descendents of ROIs // look at Roi.java for additions related to class <Neighbor> private void createROI() { if (rois.isEmpty() == false) rois.clear();
// get all ROI elements
NodeList el = inputDoc.getElementsByTagName(ROI); int num_rois = el.getLength();
// For each ROI either get attributes or get "Neighbor" elements for(int i=0; i < num_rois; i++) {
Roi roi = new Roi();
NodeList roi_children = el.item(i).getChildNodes(); int num_roi_children = roi_children.getLength();
// Get attributes of the current ROI NamedNodeMap attlist = el.item(i).getAttributes(); int att_length = attlist.getLength(); for (int j=0; j < att_length; j++) { Node node = attlist.item(j); String node_name = node.getNodeName(); if (node_name.equals(ROI_PATH)) { roi.setPath(node.getNodeValue()); } else if (node_name.equals(ROI_INDEX)) { roi.setIndex(node.getNodeValue()); } else if (node_name.equals(ROI_LABEL)) { roi.setLabel(node.getNodeValue()); } else if (node_name.equals(ROI_VALID)) { roi.setValid(node.getNodeValue()); } else if (node_name.equals(ROI_COMX)) { roi.setComX(node.getNodeValue()); } else if (node_name.equals(ROI_COMY)) { roi.setComY(node.getNodeValue()); } else if (node_name.equals(ROI_AREA)) { roi.setArea(node.getNodeValue()); } else if (node_name.equals(ROI_ANGLE)) { roi.setAngle(node.getNodeValue()); } else if (node_name.equals(ROI_ECCEN)) { roi.setEccentricity(node.getNodeValue()); } else {
System.out.println("Error!"); } }
// Get all children (Neighbors) of current ROI NodeList neighbor_list = el.item(i).getChildNodes(); int neighbor_num = neighbor_list.getLength(); if (neighbor_num != 0) { // If current ROI has any neighbors for (int k = 0; k < neighbor_num; k++) {
NamedNodeMap neighbor_attlist = neighbor_list.item(k).getAttributes(); int neighbor_att_length = neighbor_attlist.getLength(); Neighbor neighbor = new Neighbor(); for (int l=0; l < neighbor_att_length; l++) { Node nd = neighbor_attlist.item(l); String neighbor_node_name = nd.getNodeName(); if (neighbor_node_name.equals(NEIGHBOR_LABEL)) { neighbor.setLabel(nd.getNodeValue()); } else if (neighbor_node_name.equals(NEIGHBOR_DIST)) { neighbor.setDistance(nd.getNodeValue()); } else if (neighbor_node_name.equals(NEIGHBOR_ANGLE)) { neighbor.setAngle(nd.getNodeValue()); } else {
System.out.println("Error!"); } }
// add neighbor to the ROI roi.setNeighbor(neighbor, k); } } rois.add(i, roi); } rois.trimToSize(); }
// This should be called when changes have been made to the attributes of the // EchoFrame document before writing the updated XML structure to disk private void updateDOM() {
NodeList el_echo = inputDoc.getElementsByTagName(ECHO_FRAME);
NamedNodeMap echo_attlist = el_echo.item(0).getAttributes(); int att_length = echo_attlist.getLength();
/*
* Set the attributes of the EchoFrame */ for (int i=0; i < att_length; i++) { Node node = echo_attlist.item(i); String node_name = node.getNodeName(); if (node_name.equals(FRAME_PATH)) { node.setNodeValue(path); } else if (node_name.equals(FRAME_NAME)) { node.setNodeValue(name); } else if (node_name.equals(FRAME_VIEW)) { node.setNodeValue(view); } else if (node_name.equals(ECHO_NUM)) { node.setNodeValue(echoNum); } else if (node_name.equals(SEG_NUM)) { node.setNodeValue(segNum); } else if (node_name.equals(KF_NUM)) { node.setNodeValue(kfNum); } else {
System.out.println("Error!"); } }
/*
* Get all ROI elements and set either their attributes
* or add new Neighbor elements to them if necessary. */
NodeList el_roi = inputDoc.getElementsByTagName(ROI); for(int j=0; j < el_roi.getLength(); j++) { Roi currentROI = (Roi)rois.get(j); Node currentROINode = el_roi.item(j); NamedNodeMap roi_attlist = currentROINode.getAttributes(); att_length = roi_attlist.getLength();
/*
* Set the attributes of the current ROI
* These are all singleton attributes
* Attributes of pairs of nodes are set below */ for (int k=0; k < att_length; k++) { Node node = roi_attlist.item(k); String node_name = node.getNodeName(); if (node_name.equals(ROI_PATH)) { node.setNodeValue(currentROI.getPath()); } else if (node_name.equals(ROI_INDEX)) { node.setNodeValue(currentROI.getIndex()); } else if (node_name.equals(ROI_LABEL)) { node.setNodeValue(currentROI.getLabel()); } else if (node_name.equals(ROI_VALID)) { node.setNodeValue(currentROI.getValidity()); } else if (node_name.equals(ROI_COMX)) { node.setNodeValue(currentROI.getComX()); } else if (node_name.equals(ROI_COMY)) { node.setNodeValue(currentROI.getComY()); } else if (node_name.equals(ROI_AREA)) { node.setNodeValue(currentROI.getArea()); } else if (node_name.equals(ROI_ANGLE)) { node.setNodeValue(currentROI.getAngle()); } else if (node_name.equals(ROI_ECCEN)) { node.setNodeValue(currentROI.getEccentricity()); } else {
System.out.println("Error!"); } }
Element currentROIElement = (Element)currentROINode; if (currentROI.comXUpdated == true) { currentROIElement.setAttribute(ROI_COMX, currentROI.getComX());
} if (currentROI.comYUpdated == true) { currentROIElement.setAttribute(ROI_COMY, currentROI.getComY());
} if (currentROI.areaUpdated == true) { currentROIElement.setAttribute(ROI_AREA, currentROI.getArea());
} if (currentROI.angleUpdated == true) { currentROIElement.setAttribute(ROI_ANGLE, currentROI.getAngle());
} if (currentROI.eccenUpdated == true) { currentROIElement.setAttribute(ROI_ECCEN, currentROI.getEccentricity()); }
/*
* Set the 'Neighbor' elements and their attributes which
* specify the attributes of pair of ROIs */ if (currentROI.neighborAdded == true) { int num_neighbors = currentROI.getNumNeighbors(); for (int m=0; m < num_neighbors; m++) {
// Add a new node to the XML document
Element neighbor_node = (Element)inputDoc.createElement(NEIGHBOR);
Neighbor current_neighbor = currentROI.getNeighbor(m); neighbor_node.setAttribute(NEIGHBOR_LABEL, current_neighbor.getLabel()); neighbor_node.setAttribute(NEIGHBOR_DIST, current_neighbor.getDistance()); neighbor_node.setAttribute(NEIGHBOR_ANGLE, current_neighbor.getAngle()); currentROINode.appendChild(neighbor_node); } } } } public void writeToFile(String outputfile) throws Exception { // Write changes made to EchoFrame attributes to the DOM tree updateDOM();
// Output DOM tree as byte array
DOM dom = new DOM();
ByteArrayOutputStream os = new ByteArrayOutputStream(); dom.writeDocument(inputDoc, os);
ByteArrayInputStream output_xml = new ByteArrayInputStream(os.toByteArray()); Document outputDoc = ParseXML.getDocument(output_xml); DOM dom_out = new DOM(); dom_out.writeDocument(outputDoc, outputfile);
} public void setPath(String p) { path = new String(p); } public void setName(String n) { name = new String(n); } public void setView(String v) { view = new String(v); } public void setEchoNum(String s) { echoNum = new String(s); } public void setSegNum(String s) { segNum = new String(s); } public void setKFNum(String c) { kfNum = new String(c); } public int getNumROIs() { return rois.size(); } public int getNumValidROIs() { int cnt = 0; for (int i=0; i < rois.size(); i++) { if ((((Roi)rois.get(i)).getValidity()).equals("true")) { cnt++; } } return cnt;
} public Roi getROI(int i) { return ((Roi)rois.get(i-1)); } public String getPath() { return path; } public String getName() { return name; } public String getView() { return view; } public String getEchoNum() { return echoNum; } public String getSegNum() { return segNum; } public String getKFNum() { return kfNum; }
// A bunch of methods for updating the ROI attributes public void setROIPath(int i, String p) {
Roi roi = new Roi(); roi.setPath(p); roi.setIndex(((Roi)rois.get(i-1)).getIndex()); roi.setValid(((Roi)rois.get(i-1)).getValidity()); roi.setLabel(((Roi)rois.get(i-1)).getLabel()); if (((Roi)rois.get(i-1)).comXUpdated == true) roi.setComX(((Roi)rois.get(i-1)).getComX()); if (((Roi)rois.get(i-1)).comYUpdated == true) roi.setComY(((Roi)rois.get(i-1)).getComY()); if (((Roi)rois.get(i-1)).areaUpdated == true) roi.setArea(((Roi)rois.get(i-1)).getArea()); if (((Roi)rois.get(i-1)).angleUpdated == true) roi.setAngle(((Roi)rois.get(i-1)).getAngle()); if (((Roi)rois.get(i-1)).eccenUpdated == true) roi.setEccentricity(((Roi)rois.get(i-1)).getEccentricity()); rois.set(i-1, roi); } public void setROIIndex(int i, String index) { Roi roi = new Roi(); roi.setPath(((Roi)rois.get(i-1)).getPath()); roi.setIndex(index); roi.setValid(((Roi)rois.get(i-1)).getValidity()); roi.setLabel(((Roi)rois.get(i-1)).getLabel()); if (((Roi)rois.get(i-1)).comXUpdated == true) roi.setComX(((Roi)rois.get(i-1)).getComX()); if (((Roi)rois.get(i-1)).comYUpdated == true) roi.setComY(((Roi)rois.get(i-1)).getComY()); if (((Roi)rois.get(i-1)).areaUpdated == true) roi.setArea(((Roi)rois.get(i-1)).getArea()); if (((Roi)rois.get(i-1)).angleUpdated == true) roi.setAngle(((Roi)rois.get(i-1)).getAngle()); if (((Roi)rois.get(i-1)).eccenUpdated == true) roi.setEccentricity(((Roi)rois.get(i-1)).getEccentricity()); rois.set(i-1, roi); } public void setROILabel(int i, String label) { Roi roi = new Roi(); roi.setPath(((Roi)rois.get(i-1)).getPath()); roi.setIndex(((Roi)rois.get(i-1)).getIndex()); roi.setLabel(label); roi.setValid(((Roi)rois.get(i-1)).getValidity()); if (((Roi)rois.get(i-1)).comXUpdated == true) roi.setComX(((Roi)rois.get(i-1)).getComX()); if (((Roi)rois.get(i-1)).comYUpdated == true) roi.setComY(((Roi)rois.get(i-1)).getComY()); if (((Roi)rois.get(i-1)).areaUpdated == true) roi.setArea(((Roi)rois.get(i-1)).getArea()); if (((Roi)rois.get(i-1)).angleUpdated == true) roi.setAngle(((Roi)rois.get(i-1)).getAngle());
if (((Roi)rois.get(i-1)).eccenUpdated == true) roi.setEccentricity(((Roi)rois.get(i-1)).getEccentricity()); rois.set(i-1, roi); } public void setROIValid(int i, String v) { Roi roi = new Roi(); roi.setPath(((Roi)rois.get(i-1)).getPath()); roi.setIndex(((Roi)rois.get(i-1)).getIndex()); roi.setValid(v); roi.setLabel(((Roi)rois.get(i-1)).getLabel()); if (((Roi)rois.get(i-1)).comXUpdated == true) roi.setComX(((Roi)rois.get(i-1)).getComX()); if (((Roi)rois.get(i-1)).comYUpdated == true) roi.setComY(((Roi)rois.get(i-1)).getComY()); if (((Roi)rois.get(i-1)).areaUpdated == true) roi.setArea(((Roi)rois.get(i-1)).getArea()); if (((Roi)rois.get(i-1)).angleUpdated == true) roi.setAngle(((Roi)rois.get(i-1)).getAngle()); if (((Roi)rois.get(i-1)).eccenUpdated == true) roi.setEccentricity(((Roi)rois.get(i-1)).getEccentricity()); rois.set(i-1, roi); } public void setROICom(int i, double x, double y) { Roi roi = new Roi(); roi.setPath(((Roi)rois.get(i-1)).getPath()); roi.setIndex(((Roi)rois.get(i-1)).getIndex()); roi.setValid(((Roi)rois.get(i-1)).getValidity()); roi.setLabel(((Roi)rois.get(i-1)).getLabel()); roi.setCOM(x, y); if (((Roi)rois.get(i-1)).areaUpdated == true) roi.setArea(((Roi)rois.get(i-1)).getArea()); if (((Roi)rois.get(i-1)).angleUpdated == true) roi.setAngle(((Roi)rois.get(i-1)).getAngle()); if (((Roi)rois.get(i-1)).eccenUpdated == true) roi.setEccentricity(((Roi)rois.get(i-1)).getEccentricity()); rois.set(i-1, roi); } public void setROIArea(int i, double a) { Roi roi = new Roi(); roi.setPath(((Roi)rois.get(i-1)).getPath()); roi.setIndex(((Roi)rois.get(i-1)).getIndex()); roi.setValid(((Roi)rois.get(i-1)).getValidity()); roi.setLabel(((Roi)rois.get(i-1)).getLabel());
roi.setArea(a); if (((Roi)rois.get(i-1)).comXUpdated == true) roi.setComX(((Roi)rois.get(i-1)).getComX()); if (((Roi)rois.get(i-1)).comYUpdated == true) roi.setComY(((Roi)rois.get(i-1)).getComY()); if (((Roi)rois.get(i-1)).angleUpdated == true) roi.setAngle(((Roi)rois.get(i-1)).getAngle()); if (((Roi)rois.get(i-1)).eccenUpdated == true) roi.setEccentricity(((Roi)rois.get(i-1)).getEccentricity()); rois.set(i-1, roi); } public void setROIAngle(int i, double a) { Roi roi = new Roi(); roi.setPath(((Roi)rois.get(i-1)).getPath()); roi.setIndex(((Roi)rois.get(i-1)).getIndex()); roi.setValid(((Roi)rois.get(i-1)).getValidity()); roi.setLabel(((Roi)rois.get(i-1)).getLabel()); roi.setAngle(a); if (((Roi)rois.get(i-1)).comXUpdated == true) roi.setComX(((Roi)rois.get(i-1)).getComX()); if (((Roi)rois.get(i-1)).comYUpdated == true) roi.setComY(((Roi)rois.get(i-1)).getComY()); if (((Roi)rois.get(i-1)).areaUpdated == true) roi.setArea(((Roi)rois.get(i-1)).getArea()); if (((Roi)rois.get(i-1)).eccenUpdated == true) roi.setEccentricity(((Roi)rois.get(i-1)).getEccentricity()); rois.set(i-1, roi); } public void setROIEccen(int i, double e) { Roi roi = new Roi(); roi.setPath(((Roi)rois.get(i-1)).getPath()); roi.setIndex(((Roi)rois.get(i-1)).getIndex()); roi.setValid(((Roi)rois.get(i-1)).getValidity()); roi.setLabel(((Roi)rois.get(i-1)).getLabel()); roi.setEccentricity(e); if (((Roi)rois.get(i-1)).comXUpdated == true) roi.setComX(((Roi)rois.get(i-1)).getComX()); if (((Roi)rois.get(i-1)).comYUpdated == true) roi.setComY(((Roi)rois.get(i-1)).getComY()); if (((Roi)rois.get(i-1)).angleUpdated == true) roi.setAngle(((Roi)rois.get(i-1)).getAngle()); if (((Roi)rois.get(i-1)).areaUpdated == true) roi.setArea(((Roi)rois.get(i-1)).getArea()); rois.set(i-1, roi); } public void addROINeighbor(int i, Neighbor N) { Roi roi = new Roi(); roi.setPath(((Roi)rois.get(i-1)).getPath()); roi.setIndex(((Roi)rois.get(i-1)).getIndex()); roi.setValid(((Roi)rois.get(i-1)).getValidity());
roi.setLabel(((Roi)rois.get(i-1)).getLabel()); if (((Roi)rois.get(i-1)).comXUpdated == true) roi.setComX(((Roi)rois.get(i-1)).getComX()); if (((Roi)rois.get(i-1)).comYUpdated == true) roi.setComY(((Roi)rois.get(i-1)).getComY()); if (((Roi)rois.get(i-1)).angleUpdated == true) roi.setAngle(((Roi)rois.get(i-1)).getAngle()); if (((Roi)rois.get(i-1)).areaUpdated == true) roi.setArea(((Roi)rois.get(i-1)).getArea()); if (((Roi)rois.get(i-1)).eccenUpdated == true) roi.setEccentricity(((Roi)rois.get(i-1)).getEccentricity()); int num_neighbors = ((Roi)rois.get(i-1)).getNumNeighbors(); int j = 0; for (j=0; j < num_neighbors; j++) {
Neighbor n = ((Roi)rois.get(i-1)).getNeighbor(j); roi.setNeighbor(n, j); } roi.setNeighbor(N, j); rois.set(i-1, roi);
} public static void main(String[] args) { try {
EchoFrame echo = new EchoFrame(); echo.read(args[0]);
Roi roi = echo.getROI(1); System.out.println("Attributes of the first ROI"); System.out.println("Path: " + roi.getPath()); System.out.println("Index: " + roi.getIndex()); System.out.println("Label: " + roi.getLabel()); System.out.println("Valid: " + roi.getValidity()); echo.setROICom(1, 100.0, 200.0); echo.setROIArea(1, 150.0); for (int i=1; i < echo.getNumROIs(); i++) { Roi roi_cur = echo.getROI(i+1);
Neighbor n = new Neighbor(); n.setLabel(roi_cur.getLabel()); n.setDistance(roi.getDistance(roi_cur)); n.setAngle(roi.getOrientation(roi_cur)); echo.addROINeighbor(1, n); } echo.writeToFile("output.xml");
} catch (MalformedURLException urle) {
System.out.println("URL exception."); } catch (ParserConfigurationException pce) {
System.out.println("Parser exception.");
} catch (SAXParseException spe) { System.out.println("SAX exception.");
} catch (IOException e) {
System.out.println("IO Error.");
System.out.println(e.getMessage()); e.printStackTrace();
} catch (Exception e) {
System.out.println("Error."); e.printStackTrace(System.err); }
} package xmlio; public class Neighbor { private String label; private double distance; private double angle; public Neighbor() { distance = 0; angle = 0;
} public void setLabel(String L) { label = new String(L); } public void setDistance(double D) { distance = D; } public void setDistance(String D) { distance = Double.valueOf(D).doubleValue(); } public void setAngle(double A) { angle = A; } public void setAngle(String A) { angle = Double.valueOf(A).doubleValue(); } public String getLabel() { return label; } public String getDistance() { return String.valueOf(distance); } public String getAngle() { return String.valueOf(angle); } }
<?xml version="1.0"?>
<echoFrame path="/home/shahram/cardio/echos/videos/echo1/" name="echo1_seg1_kf1_PLA.ppm" echoNum="1" segNum="1" kfNum="1" view="PLA"><ROI roiPath="/home/shahram/cardio/echos/videos/echo1/regions/echo1_seg1_kf1_PLA_seg_1.pgm" index="1" label="UNKN" valid="FALSE" comX="100.0" comY="200.0" area="150.0"><NEIGHBOR nlabel="UNKN" ndist="0.0" nangle="0.0"></NEIGHBOR><NEIGHBOR nlabel="UNKN" ndist="0.0" nangle="0.0"></NEIGHBOR><NEIGHBOR nlabel="UNKN" ndist="0.0" nangle="0.0"></NEIGHBOR></ROI><ROI roiPath="/home/shahram/cardio/echos/videos/echo1/regions/echo1_seg1_kf1_PLA_seg_2.pgm" index="2" label="UNKN" valid="FALSE"></ROI><ROI roiPath="/home/shahram/cardio/echos/videos/echo1/regions/echo1_seg1_kf1_PLA_seg_3.pgm" index="3" label="UNKN" valid="FALSE"></ROI><ROI roiPath="/home/shahram/cardio/echos/videos/echo1/regions/echo1_seg1_kf1_PLA_seg_4.pgm" index="4" label="UNKN" valid="FALSE"></ROI></echoFrame>
package xmlio;
/*
* ParseXML.java
*
* By: Sergey Sigelman (ss1792@cs.columbia.edu) */
// JAXP packages import javax.xml.parsers.*; import javax.xml.parsers.DocumentBuilderFactory; import org.xml.sax.*; import org.xml.sax.helpers.*; import org.w3c.dom.*; import java.util.*; import java.io.*; public class ParseXML
{
/** All output will use this encoding */ static final String outputEncoding = "UTF-8"; public static Document getDocument(InputStream input) throws Exception
{
DocumentBuilderFactory dbf = getDocumentBuilderFactory(); DocumentBuilder db = getDocumentBuilder(dbf);
// parse the input file
Document doc = null; doc = db.parse(input, "file:/home/shahram/cardio/project/datacollect/xmlio/"); return doc; }
public static DocumentBuilderFactory getDocumentBuilderFactory() throws Exception
{
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); dbf.setValidating(false); dbf.setIgnoringComments(true); dbf.setCoalescing(false); dbf.setExpandEntityReferences(false); dbf.setIgnoringElementContentWhitespace(true); dbf.setNamespaceAware(true); return dbf; }
public static DocumentBuilder getDocumentBuilder(DocumentBuilderFactory dbf) throws Exception
{
// create a DocumentBuilder that satisfies the constraints // specified by the DocumentBuilderFactory DocumentBuilder db = null; db = dbf.newDocumentBuilder();
// set an ErrorHandler before parsing db.setErrorHandler(new MyErrorHandler(new PrintWriter( new OutputStreamWriter(System.err)))); return db; } public static void printDOMNode(Node n, Writer writer) throws IOException
{ switch(n.getNodeType())
{ case Node.ELEMENT_NODE: writer.write("<" + n.getNodeName());
NamedNodeMap atts = n.getAttributes();
Node node; for (int i = 0; i < atts.getLength(); i++)
{ node = atts.item(i); writer.write(
" " + node.getNodeName() + "=\"" + node.getNodeValue() + "\"");
} if (n.hasChildNodes())
{ writer.write(">"); for (node = n.getFirstChild(); node != null; node = node.getNextSibling())
{ printDOMNode(node, writer);
} writer.write("</" + n.getNodeName() + ">");
} else
{ writer.write("/>");
} break; // without this break, element nodes would fall through into the text-node case case Node.TEXT_NODE: String val = n.getNodeValue(); if (val != null && !val.equals("null"))
{ writer.write(makeLegal(val));
} break; default: for (node = n.getFirstChild(); node != null; node = node.getNextSibling())
{ printDOMNode(node, writer);
} } } public static String makeLegal(String val)
{
// Check to see if we need to do anything boolean foundGTLT = false; char[] illegalChars = new char[] {'<', '>', '&'}; for (int i=0; i < illegalChars.length; i++) if (val.indexOf((int)illegalChars[i]) != -1)
{ foundGTLT = true; break; } if (foundGTLT)
{
StringBuffer sb = new StringBuffer(val.length()); int len = val.length(); for (int i=0; i<len; i++)
{ char c = val.charAt(i); if (c == '<') sb.append("&lt;"); else if (c == '>') sb.append("&gt;"); else if (c == '&') sb.append("&amp;"); else sb.append(c);
} val = sb.toString();
} return val; }
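The makeLegal() method above replaces the three XML-significant characters with their entities before text nodes are written out. A condensed, stand-alone replica of that escaping loop (the pre-scan for illegal characters is omitted here; the class name is illustrative):

```java
public class EscapeDemo {
    // Replace <, > and & with XML entities, as in makeLegal() above.
    static String makeLegal(String val) {
        StringBuilder sb = new StringBuilder(val.length());
        for (int i = 0; i < val.length(); i++) {
            char c = val.charAt(i);
            if (c == '<') sb.append("&lt;");
            else if (c == '>') sb.append("&gt;");
            else if (c == '&') sb.append("&amp;");
            else sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // All three special characters appear in this sample text.
        System.out.println(makeLegal("area<150 & angle>0"));
    }
}
```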
// Error handler to report errors and warnings
// TAKEN STRAIGHT FROM JAXP SAMPLES public static class MyErrorHandler implements ErrorHandler
{
/** Error handler output goes here */ private PrintWriter out;
MyErrorHandler(PrintWriter out) { this.out = out; } /**
* Returns a string describing parse exception details
*/ private String getParseExceptionInfo(SAXParseException spe) {
String systemId = spe.getSystemId(); if (systemId == null) { systemId = "null";
}
String info = "URI=" + systemId + " Line=" + spe.getLineNumber() + ": " + spe.getMessage(); return info; }
// The following methods are standard SAX ErrorHandler methods. // See SAX documentation for more info. public void warning(SAXParseException spe) throws SAXException { out.println("Warning: " + getParseExceptionInfo(spe)); } public void error(SAXParseException spe) throws SAXException { String message = "Error: " + getParseExceptionInfo(spe); throw new SAXException(message);
} public void fatalError(SAXParseException spe) throws SAXException { String message = "Fatal Error: " + getParseExceptionInfo(spe); throw new SAXException(message); } } }
package xmlio; import java.lang.Math; import java.util.*; public class Roi implements XMLConstants { private String path; private String index; private String valid; private String label; private double comX; private double comY; private double area; private double angle; private ArrayList neighbors; private double eccentricity; public boolean comXUpdated; public boolean comYUpdated; public boolean areaUpdated; public boolean angleUpdated; public boolean eccenUpdated; public boolean neighborAdded; public Roi() { neighbors = new ArrayList(); comXUpdated = false; comYUpdated = false; areaUpdated = false; angleUpdated = false; eccenUpdated = false; neighborAdded = false; } public void setNeighbor(Neighbor N, int l) { neighbors.add(l, N); neighbors.trimToSize(); neighborAdded = true;
} public Neighbor getNeighbor(int i) { return (Neighbor)neighbors.get(i); } public int getNumNeighbors() { return neighbors.size(); } public void setPath(String p) { path = new String(p); } public void setIndex(String i) { index = new String(i); } public void setLabel(String l) { label = new String(l); } public void setValid(String v) { valid = new String(v); } public void setCOM(double x, double y) { comX = x; comY = y; comXUpdated = true; comYUpdated = true;
} public void setComX(String x) { comX = Double.valueOf(x).doubleValue(); comXUpdated = true; } public void setComY(String y) { comY = Double.valueOf(y).doubleValue(); comYUpdated = true; } public void setComX(double x) { comX = x; comXUpdated = true; } public void setComY(double y) { comY = y; comYUpdated = true; } public void setArea(double a) { area = a; areaUpdated = true; } public void setArea(String a) { area = Double.valueOf(a).doubleValue(); areaUpdated = true; } public void setAngle(double a) { angle = a; angleUpdated = true; } public void setAngle(String a) { angle = Double.valueOf(a).doubleValue(); angleUpdated = true; } public void setEccentricity(double a) { eccentricity = a; eccenUpdated = true; } public void setEccentricity(String a) { eccentricity = Double.valueOf(a).doubleValue(); eccenUpdated = true; } public String getPath() { return path; } public String getIndex() { return index; } public String getLabel() { return label; } public String getValidity() { return valid; } public String getComX() { return String.valueOf(comX); } public String getComY() { return String.valueOf(comY); } public String getArea() { return String.valueOf(area); } public String getAngle() { return String.valueOf(angle); } public String getEccentricity() { return String.valueOf(eccentricity); } public String toString() { return label; }
// Used to find the distance between 2 ROIs public String getDistance(Roi R) { return String.valueOf(java.lang.Math.sqrt( java.lang.Math.pow(comX - Double.valueOf(R.getComX()).doubleValue(), 2.0) + java.lang.Math.pow(comY - Double.valueOf(R.getComY()).doubleValue(), 2.0)));
}
// Used to find the angle between 2 ROIs public String getOrientation(Roi R) { return String.valueOf(java.lang.Math.atan2(comY - Double.valueOf(R.getComY()).doubleValue(), comX - Double.valueOf(R.getComX()).doubleValue())); } }
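The pairwise ROI features computed by getDistance() and getOrientation() above are plain Euclidean distance and atan2 orientation between the regions' centers of mass. Shown here stand-alone on raw coordinates (class and method names are illustrative, not part of the patent code):

```java
public class RoiGeometry {
    // Euclidean distance between two centers of mass,
    // matching Roi.getDistance() above.
    static double distance(double x1, double y1, double x2, double y2) {
        return Math.sqrt(Math.pow(x1 - x2, 2.0) + Math.pow(y1 - y2, 2.0));
    }

    // Orientation of the line joining the two centers of mass; note the
    // argument order Roi.getOrientation() uses: atan2(dy, dx) with
    // differences taken as (this - other).
    static double orientation(double x1, double y1, double x2, double y2) {
        return Math.atan2(y1 - y2, x1 - x2);
    }

    public static void main(String[] args) {
        System.out.println(distance(0.0, 0.0, 3.0, 4.0));    // 3-4-5 triangle
        System.out.println(orientation(0.0, 1.0, 0.0, 0.0)); // straight up: pi/2
    }
}
```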
package xmlio; interface XMLConstants { String ECHO_FRAME = "echoFrame"; String FRAME_PATH = "path";
String FRAME_NAME = "name";
String FRAME_VIEW = "view"; String ECHO_NUM = "echoNum"; String SEG_NUM = "segNum"; String KF_NUM = "kfNum";
String ROI = "ROI";
String ROI_INDEX = "index";
String ROI_LABEL = "label";
String ROI_PATH = "roiPath";
String ROI_VALID = "valid";
String ROI_COMX = "comX";
String ROI_COMY = "comY";
String ROI_AREA = "area";
String ROI_ANGLE = "angle";
String ROI_ECCEN = "eccentricity";
String NEIGHBOR = "NEIGHBOR"; String NEIGHBOR_ANGLE = "nangle"; String NEIGHBOR_DIST = "ndist"; String NEIGHBOR_LABEL = "nlabel";
String PLA = "PLA";
String RVIT = "RVIT";
String PLAP = "PLAP";
String PSAB = "PSAB";
String PSAPM = "PSAPM";
String PSAM = "PSAM";
String PSAX = "PSAX";
String A2C = "A2C";
String A3C = "A3C";
String A4C = "A4C";
String A5C = "A5C";
String SC4C = "SC4C";
String SCIVC = "SCIVC";
String LV = "LV";
String RV = "RV";
String LA = "LA";
String RA = "RA";
String AO = "AO";
String AV = "AV";
String UNKN = "UNKN";
String TRUE = "true";
String FALSE = "false"; } DATA COLLECTION
dcgui : contains the code for the user interface used to annotate echocardiogram frames to obtain the training data for learning descGen : Generates descriptions of the regions and their properties in the segmented echo frames.
Stores those descriptions in XML format. xmlio : Code for reading from and writing to the XML files of the descriptions of the frames. dataGen : Code for dumping the information in the XML frame description files into flat file format to be read by MATLAB code for view recognition.
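The dataGen step described above flattens each XML frame description into a plain-text table that MATLAB can load directly. The listing does not show the exact column layout, so the field order below is an assumption; a minimal sketch of one ROI's attributes as a whitespace-separated line might look like:

```java
public class FlatDump {
    // Hypothetical sketch of the dataGen flattening step: one ROI per line,
    // fields named after the XML attributes shown above (index, label,
    // comX, comY, area). The column order is an assumption.
    static String flatten(String index, String label,
                          double comX, double comY, double area) {
        return index + " " + label + " " + comX + " " + comY + " " + area;
    }

    public static void main(String[] args) {
        // Values taken from the sample echoFrame XML above.
        System.out.println(flatten("1", "UNKN", 100.0, 200.0, 150.0));
    }
}
```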
DATA COLLECTION / DATAGEN
#!/usr/bin/perl -w
$start = 2;
$stop = 3;
$srcdir = "/home/shahram/cardio/source/chamber_segment/testimg/echon";
$src_suffix = "/desc"; for ($i=$start; $i < $stop; $i++ )
{ $src = $srcdir.$i.$src_suffix; opendir (SRCDIR, $src) || die "no directory: $!"; while ($xmlIn = readdir(SRCDIR)) {
$inFile = $src."/".$xmlIn; print "$inFile\n"; system "java -classpath
/home/shahram/cardio/source/datacollect:/usr/java/jars/domsdk.jar:/usr/java/jars/echoframe.jar dataGen.DataGen $inFile";
} closedir (SRCDIR);
}
P2.x = P1.x; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x + 1; P2.y = P1.y - 1; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x - 1; P2.y = P1.y + 1; pos.insert(pos.begin() + pos.size(), P2); if (verify_CorrespondingVertex(dir1, A[I2], pos, V2)) { if (((A[I2])[V2]).getImg_val() >= ((A[I1])[V1]).getImg_val()) { pos.clear(); return (TRUE); } } break; case 3:
P2.x = P1.x; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x + 1; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x - 1; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x + 1; P2.y = P1.y - 1; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x - 1; P2.y = P1.y + 1; pos.insert(pos.begin() + pos.size(), P2); if (verify_CorrespondingVertex(dir1, A[I2], pos, V2)) { if (((A[I2])[V2]).getImg_val() >= ((A[I1])[V1]).getImg_val()) { pos.clear(); return (TRUE);
} } break; case 4:
P2.x = P1.x; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x + 1; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x - 1; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2); if (verify_CorrespondingVertex(dir1, A[I2], pos, V2)) { if (((A[I2])[V2]).getImg_val() >= ((A[I1])[V1]).getImg_val()) { pos.clear(); return (TRUE); } } break; case 5:
P2.x = P1.x; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x + 1; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x - 1; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x + 1; P2.y = P1.y + 1; pos.insert(pos.begin() + pos.size(), P2); P2.x = P1.x - 1; P2.y = P1.y - 1; pos.insert(pos.begin() + pos.size(), P2); if (verify_CorrespondingVertex(dir1, A[I2], pos, V2)) { if (((A[I2])[V2]).getImg_val() >= ((A[I1])[V1]).getImg_val()) { pos.clear(); return (TRUE);
} } break; case 6:
P2.x = P1.x; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x + 1; P2.y = P1.y + 1; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x - 1; P2.y = P1.y - 1; pos.insert(pos.begin() + pos.size(), P2); if (verify_CorrespondingVertex(dir1, A[I2], pos, V2)) { if (((A[I2])[V2]).getImg_val() >= ((A[I1])[V1]).getImg_val()) { pos.clear(); return (TRUE); } } break; case 7:
P2.x = P1.x; P2.y = P1.y; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x; P2.y = P1.y + 1; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x; P2.y = P1.y - 1; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x + 1; P2.y = P1.y + 1; pos.insert(pos.begin() + pos.size(), P2);
P2.x = P1.x - 1; P2.y = P1.y - 1; pos.insert(pos.begin() + pos.size(), P2); if (verify_CorrespondingVertex(dir1, A[I2], pos, V2)) { if (((A[I2])[V2]).getImg_val() >= ((A[I1])[V1]).getImg_val()) { pos.clear(); return (TRUE); } } break; };
/**
* find_LinkableVertex()
* Iterating through all the nodes in graph at level 'J' and
* determining which one is located in the designated positions
* for the reference node in level 'I'.
* After finding such a node we have to verify if its 16-dir
* matches with that of the reference node. */ bool find_LinkableVertex( node this_v, // Node in 'I' we want to link, node last_v, // last and next nodes of this_v node next_v, node& linkable_vertex, // function returns this, graph* G, // Stack of graphs. node_map<NodeAttribute>* A, // Graph stack attribs. int I, // Level of reference node, int J ) // Level of graph to look in
// for a linkable node with 'N1' { int dir0_16 = 0, dir1_16 = 0, dir2_16 = 0;
CvPoint P_0, P; int x_dif, y_dif; bool vertice_FOUND = FALSE; node potentially_linkable; node tv, nv, lv;
// Iterators for outgoing edges of the potentially linkable node. node::out_edges_iterator edge_it; node::out_edges_iterator edge_end;
// Iterators for graph at level 'J' in the stack. graph::node_iterator Nit = (G[J]).nodes_begin(); graph::node_iterator Nend = (G[J]).nodes_end();
// Finding the 16-dir of the reference vertex in graph 'I'. if (!get_16Dir((A[I])[this_v], (A[I])[next_v], (A[I])[last_v], dir0_16)) return FALSE;
// Location of 'this_v' in graph 'I'. P_0 = ((A[I])[this_v]).getPosition();
// Iterating through nodes of the level 'J' graph. while (Nit != Nend)
{
// position of current node in graph 'J' P = ((A[J])[*Nit]).getPosition();
// Only accept nodes which are in the 8-neighborhood of the reference. if (fabs(double(P.x - P_0.x)) > 1 || fabs(double(P.y - P_0.y)) > 1)
{
++Nit; continue;
}
x_dif = P.x - P_0.x; y_dif = P.y - P_0.y;
        /*
         * Based on the 16-dir of the reference node in graph 'I',
         * we check the appropriate places in the neighborhood of
         * this node in graph 'J' to see if a node exists. If the
         * answer is positive, such a node is potentially linkable. */
        switch (dir0_16)
        {
        case 0: case 1: case 7: case 8: case 9: case 15:
            if (0 == x_dif)
            {
                potentially_linkable = (*Nit);
                vertice_FOUND = TRUE;
            }
            break;
        case 3: case 4: case 5: case 11: case 12: case 13:
            if (0 == y_dif)
            {
                potentially_linkable = (*Nit);
                vertice_FOUND = TRUE;
            }
            break;
        case 2:
            if (((-1 != x_dif) && (-1 == y_dif)) || ((-1 == x_dif) && (-1 != y_dif)))
            {
                potentially_linkable = (*Nit);
                vertice_FOUND = TRUE;
            }
            break;
        case 6:
            if (((1 != x_dif) && (-1 == y_dif)) || ((1 == x_dif) && (-1 != y_dif)))
            {
                potentially_linkable = (*Nit);
                vertice_FOUND = TRUE;
            }
            break;
        case 10:
            if (((1 != x_dif) && (1 == y_dif)) || ((1 == x_dif) && (1 != y_dif)))
            {
                potentially_linkable = (*Nit);
                vertice_FOUND = TRUE;
            }
            break;
        case 14:
            if (((-1 != x_dif) && (1 == y_dif)) || ((-1 == x_dif) && (1 != y_dif)))
            {
                potentially_linkable = (*Nit);
                vertice_FOUND = TRUE;
            }
            break;
        };

        if (vertice_FOUND)
            break;
        ++Nit;
    }
    /*
     * After finding a potentially linkable vertex
     * in graph 'J', we have to see if its 16-dir
     * matches that of the reference vertex in
     * graph 'I'. */
    if (vertice_FOUND)
    {
        // The vertex should not already be linked.
        if (((A[J])[*Nit]).Tag())
            return FALSE;
        if (BRANCH == ((A[J])[*Nit]).getType())
        {
            linkable_vertex = potentially_linkable;
            return TRUE;
        }
        else
        {
            edge_it = potentially_linkable.out_edges_begin();
            switch (potentially_linkable.outdeg())
            {
            case (1): // This is a TERMINAL vertex.
                // Taking into account both outgoing and ingoing dirs.
                if (get_16Dir((A[J])[edge_it->source()], (A[J])[edge_it->source()],
                              (A[J])[edge_it->target()], dir1_16) &&
                    get_16Dir((A[J])[edge_it->source()], (A[J])[edge_it->target()],
                              (A[J])[edge_it->source()], dir2_16))
                {
                    // The 2 vertices are linkable only if their 16-dirs match.
                    if (compatible_16Dir(dir0_16, dir1_16) || compatible_16Dir(dir0_16, dir2_16))
                    {
                        linkable_vertex = potentially_linkable;
                        return TRUE;
                    }
                }
                break;
            case (2):
                tv = edge_it->source();
                nv = edge_it->target();
                lv = (++edge_it)->target();
                // Taking the two possible directions into account.
                if (get_16Dir((A[J])[tv], (A[J])[nv], (A[J])[lv], dir1_16) &&
                    get_16Dir((A[J])[tv], (A[J])[lv], (A[J])[nv], dir2_16))
                {
                    // The 2 vertices are linkable only if their 16-dirs match.
                    if (compatible_16Dir(dir0_16, dir1_16) || compatible_16Dir(dir0_16, dir2_16))
                    {
                        linkable_vertex = potentially_linkable;
                        return TRUE;
                    }
                }
                break;
            default:
                cerr << "Have to be either ARC or TERMINAL vertex" << endl;
                break;
            };
        }
    }
    return FALSE;
}
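The switch statement above encodes, for each 16-direction sector of the reference node, which neighbor offsets in the adjacent pyramid level may hold a linkable node. A minimal standalone sketch of that sector-to-offset test follows; the function name and the distilled form are illustrative, not part of the original listing:

```cpp
// Returns true if a neighbor at offset (x_dif, y_dif) from the reference
// node is an acceptable link candidate for the given 16-direction sector.
// The sector grouping mirrors the switch above: near-vertical sectors
// accept nodes in the same column, near-horizontal ones the same row, and
// each diagonal sector accepts the two off-diagonal neighbor offsets.
bool offset_matches_sector(int dir16, int x_dif, int y_dif) {
    switch (dir16) {
        case 0: case 1: case 7: case 8: case 9: case 15:
            return x_dif == 0;   // same column
        case 3: case 4: case 5: case 11: case 12: case 13:
            return y_dif == 0;   // same row
        case 2:
            return (x_dif != -1 && y_dif == -1) || (x_dif == -1 && y_dif != -1);
        case 6:
            return (x_dif != 1 && y_dif == -1) || (x_dif == 1 && y_dif != -1);
        case 10:
            return (x_dif != 1 && y_dif == 1) || (x_dif == 1 && y_dif != 1);
        case 14:
            return (x_dif != -1 && y_dif == 1) || (x_dif == -1 && y_dif != 1);
        default:
            return false;
    }
}
```

Note that, as in the original switch, a diagonal sector excludes the exact diagonal offset itself and accepts only the two offsets flanking it.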
/**
 * find_VList()
 * Given two initial vertices, one in each graph 'I' and 'J',
 * where each graph is described by 'G', 'A', and 'L', this
 * function finds two lists of vertices which start from
 * these two initial points and have the longest lengths.
 */
bool find_VList (node this_vtx_I, node last_vtx_I, node this_vtx_J,
                 graph* G, node_map<NodeAttribute>* A,
                 vector<node>& LI, vector<node>& LJ, int I, int J)
{
    int degree, deg_cntr;
    vector<node> tmp_Vlist;
    node next_vtx_J;
    node next_vtx_I;
    CvPoint PI, PJ;
    node::out_edges_iterator eib;
    node::out_edges_iterator eie;
    /** Initialize the lists with the 2 input vertices. */
    LI.insert(LI.begin() + LI.size(), this_vtx_I);
    LJ.insert(LJ.begin() + LJ.size(), this_vtx_J);

    /*
     * Making note not to visit these vertices
     * in the search process again.
     */
    ((A[I])[this_vtx_I]).Visited();
    ((A[J])[this_vtx_J]).Visited();
    /* Traverse graph 'I'. */
    while (get_NextVertex(this_vtx_I, last_vtx_I, next_vtx_I, G[I], A[I])) {
        //printf("Next vertex on graph %d is: %d\n", I, ((A[I])[next_vtx_I]).getIndex());

        // Proceed for all cases except when linked BRANCH vertex.
        if (!(((A[I])[next_vtx_I]).getType() == BRANCH &&
              ((A[I])[next_vtx_I]).isLinked() == TRUE)) {
            // Location of the next vertex.
            PI = ((A[I])[next_vtx_I]).getPosition();

            // Iterators for going over neighbors.
            eib = this_vtx_J.out_edges_begin();
            eie = this_vtx_J.out_edges_end();
            while (eib != eie) {
                next_vtx_J = eib->target();

                // Location of the neighbor.
                PJ = ((A[J])[next_vtx_J]).getPosition();

                // The neighbor should not have been visited before.
                if (!((A[J])[next_vtx_J]).isVisited()) {
                    if (fabs(double(PI.x - PJ.x)) <= 1 && fabs(double(PI.y - PJ.y)) <= 1) {
                        LI.insert(LI.begin() + LI.size(), next_vtx_I);
                        LJ.insert(LJ.begin() + LJ.size(), next_vtx_J);
                        ((A[I])[next_vtx_I]).Visited();
                        ((A[J])[next_vtx_J]).Visited();
                        this_vtx_J = next_vtx_J;
                        break;
                    }
                }
                ++eib;
            }
            last_vtx_I = this_vtx_I;
            this_vtx_I = next_vtx_I;
        } else {
            return FALSE;
        }
    }
    return TRUE;
}

/**
 * create_LongestVertexList()
 * Given an initial vertex from graph 'I',
 * this function finds the longest lists
 * of vertices starting from the initial
 * linkable nodes.
 * The list created is returned in 'LI'.
 *
 * Note: 'this_vtx' is not a BRANCH vertex! The user should observe that.
 * Note: if 'last_vtx == this_vtx' then 'this_vtx' is TERMINAL. */
void create_LongestVertexList(node& this_vtx, // Input vertices for graph I.
                              node& last_vtx,
                              graph& G,       // Graph and its attribs.
                              node_map<NodeAttribute>& A,
                              vector<node>& LI) // Returned list of vertices.
{
    treeGraph TG(this_vtx, G, A);
    // Creating a tree graph of the paths in the 2D graph.
    TG.get_LongestBranch(LI, A);
    // Finding the longest branch in the tree graph.
}
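create_LongestVertexList() above delegates to a treeGraph helper that enumerates the paths from the start vertex and keeps the longest branch. Assuming the skeleton graph is acyclic from that start vertex, as the tree construction implies, the same result can be sketched with a plain depth-first search over an adjacency-list stand-in for the GTL graph (all names here are illustrative):

```cpp
#include <vector>

// Depth-first search that tracks the current root-to-node path and records
// the longest one seen. 'parent' prevents walking back along the edge we
// arrived on.
void dfs_longest(const std::vector<std::vector<int>>& adj, int cur, int parent,
                 std::vector<int>& path, std::vector<int>& best) {
    path.push_back(cur);
    if (path.size() > best.size()) best = path;
    for (int nb : adj[cur])
        if (nb != parent) dfs_longest(adj, nb, cur, path, best);
    path.pop_back();
}

// Returns the longest branch (sequence of node ids) starting at 'start'.
std::vector<int> longest_branch(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<int> path, best;
    dfs_longest(adj, start, -1, path, best);
    return best;
}
```

On the tree 0-1, 1-2, 0-3 this returns the branch 0, 1, 2 when started at node 0.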
/**
 * get_NextVertex()
 * Passing the vertices 'this_vertex' and 'last_vertex', it
 * finds the 'next_vertex' of graph 'G', with attributes given
 * by 'A'.
 * Note: Returns 'next_v = this_v' if 'this_v' is a BRANCH vertex
 * or a TERMINAL one. */
bool get_NextVertex (node this_v, node last_v, node& next_v,
                     graph& G, node_map<NodeAttribute>& A)
{
    node tmp_v;
    NodeType type = (A[this_v]).getType();

    // When vertex is BRANCH or TERMINAL.
    if ((type == BRANCH) || ((type == TERMINAL) && (this_v != last_v))) {
        next_v = this_v;
        return FALSE; // No next_vertex for BRANCH or TERMINAL vertex.
    } else {
        // Iterators for outgoing-edges of the vertex.
        node::out_edges_iterator edge_it  = this_v.out_edges_begin();
        node::out_edges_iterator edge_end = this_v.out_edges_end();
        while (edge_it != edge_end)
        {
            tmp_v = edge_it->target();

            // Should not take into account 'last_vertex'!
            if (tmp_v == last_v) {
                ++edge_it;
                continue;
            } else {
                next_v = tmp_v;
                break;
            }
        }
    }
    return TRUE;
}
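get_NextVertex() advances along a chain by choosing the neighbor of 'this_v' that is not 'last_v'. A self-contained sketch of that stepping rule over a simple adjacency-list graph, used here as a stand-in for the GTL 'graph' type (function and variable names are illustrative):

```cpp
#include <vector>

// Step from 'cur' to the neighbor that is not 'last', i.e. the next vertex
// along a chain. Returns -1 at a dead end (a TERMINAL-like vertex).
int next_vertex(const std::vector<std::vector<int>>& adj, int last, int cur) {
    for (int nb : adj[cur])
        if (nb != last) return nb;
    return -1;
}

// Walk a chain from the edge (start -> second) until a dead end,
// collecting the visited node ids in order.
std::vector<int> walk_chain(const std::vector<std::vector<int>>& adj,
                            int start, int second) {
    std::vector<int> path = {start, second};
    int last = start, cur = second;
    for (int nxt; (nxt = next_vertex(adj, last, cur)) != -1; ) {
        path.push_back(nxt);
        last = cur;
        cur = nxt;
    }
    return path;
}
```

On the path graph 0-1-2-3, walking from edge (0, 1) visits all four nodes and stops at the terminal node 3.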
/**
 * get_LastVertex()
 * Passing the vertices 'this_vertex' and 'next_vertex', it
 * finds the 'last_vertex' of graph 'G', with attributes given by 'A'.
 * Note: Returns 'last_v = this_v' if 'this_v' is a BRANCH vertex
 * or a TERMINAL one. */
bool get_LastVertex (node this_v, node next_v, node& last_v,
                     graph& G, node_map<NodeAttribute>& A)
{
    node tmp_v;
    NodeType type = (A[this_v]).getType();

    // When vertex is BRANCH or TERMINAL.
    if ((type == BRANCH) || ((type == TERMINAL) && (this_v != next_v))) {
        last_v = this_v;
        return FALSE; // No last_vertex for BRANCH or TERMINAL vertex.
    } else {
        // Iterators for outgoing-edges of the vertex.
        node::out_edges_iterator edge_it  = this_v.out_edges_begin();
        node::out_edges_iterator edge_end = this_v.out_edges_end();
        while (edge_it != edge_end)
        {
            tmp_v = edge_it->target();

            // Should not take into account 'next_vertex'!
            if (tmp_v == next_v) {
                ++edge_it;
                continue;
            } else {
                last_v = tmp_v;
                break;
            }
        }
    }
    return TRUE;
}

/*
 * reset_AllLinkTags()
 * Resets all the linkTAG fields of all nodes in graph. */
void reset_AllLinkTags ( graph& G, node_map<NodeAttribute>& A )
{
    // Iterators for the nodes of the graph.
    graph::node_iterator Nit  = G.nodes_begin();
    graph::node_iterator Nend = G.nodes_end();
    while (Nit != Nend) {
        if ((A[*Nit]).getType() != BRANCH) {
            (A[*Nit]).resetLink();
            (A[*Nit]).resetVisit();
        }
        Nit++;
    }
}
/*
 * compatible_16Dir()
 * Finds if two 16-directions are compatible.
 */
bool compatible_16Dir (int dir1, int dir2)
{
    // The two 16Dir values are compatible if they
    // differ by less than or equal to 2. We take
    // into account the different combinations of
    // directions.
    if (fabs(double(dir1 - dir2)) < 3 || fabs(double((dir1 + 8) % 16 - dir2)) < 3)
        return TRUE;
    else
        return FALSE;
}
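compatible_16Dir() accepts two sector indices when they differ by at most 2 sectors, either directly or after rotating one of them by 8 sectors (180 degrees), since a chain may be traversed in either direction. A standalone sketch of the same test (the lowercase name distinguishes this illustrative copy from the original):

```cpp
#include <cstdlib>

// Two 16-sector directions are compatible when they differ by at most 2
// sectors, directly or after a 180-degree flip of the first direction
// ((dir1 + 8) mod 16), mirroring the test in compatible_16Dir() above.
bool compatible_16dir(int dir1, int dir2) {
    return std::abs(dir1 - dir2) < 3 || std::abs((dir1 + 8) % 16 - dir2) < 3;
}
```

For example, sectors 0 and 8 (exactly opposite) are compatible via the flip term, while sectors 0 and 4 (perpendicular) are not.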
/*
 * create_LUT()
 *
 * Given a subset graph with its attributes it creates a
 * 2D Look-Up Table where each element is the corresponding
 * vertex of the graph. This should speed up the operations
 * compared to traversing the graph each time. */
void create_LUT( graph& G, node_map<NodeAttribute>& A, LUT& L )
{
    //
    // Going over the graph, calculating the 16Dir
    // values of each vertex and setting the LUT.
    //
    int dir = 0;
    node last_v, this_v, next_v;

    // Iterators for going over the nodes of the graph.
    graph::node_iterator it  = G.nodes_begin();
    graph::node_iterator end = G.nodes_end();

    // Iterators for going over the edges of a node.
    node::out_edges_iterator eib;
    node::inout_edges_iterator ioeb;

    // Going over the nodes in graph.
    while (it != end)
    {
        switch ((A[*it]).getType())
        {
        case ARC:
            this_v = *it;
            eib = (*it).out_edges_begin();
            last_v = eib->target();
            ++eib;
            next_v = eib->target();
            get_16Dir(A[this_v], A[next_v], A[last_v], dir);
            (A[this_v]).set16Dir(dir);
            break;
        case TERMINAL:
            this_v = *it;
            ioeb = (*it).inout_edges_begin();
            if ((*ioeb).source() == this_v)
                next_v = (*ioeb).target();
            else
                next_v = (*ioeb).source();
            get_16Dir(A[this_v], A[next_v], A[this_v], dir);
            (A[this_v]).set16Dir(dir);
            break;
        };
        CvPoint P = (A[*it]).getPosition();
        L.putNode(P, *it);
        ++it;
    }
}
/**
 * find_CorrespondingVertex()
 * Given a reference point 'P', and a vector of
 * differential points 'diffP', it finds a vertex
 * in the graph with attributes 'A' which has a
 * compatible 16Dir to dir_ref.
 */
bool find_CorrespondingVertex ( int dir_ref, node_map<NodeAttribute>& A,
                                LUT& L, vector<CvPoint>& P, node& N)
{
    node vertex;
    int dir_other;
    for (int i = 0; i < P.size(); i++) {
        if (L.getNode(P[i], vertex)) {
            if ((A[vertex]).getType() != BRANCH && (A[vertex]).getType() != ISOLATED) {
                dir_other = (A[vertex]).get16Dir();
                if (compatible_16Dir(dir_ref, dir_other)) {
                    N = vertex;
                    return TRUE;
                }
            } else {
                N = vertex;
                return TRUE;
            }
        }
    }
    return FALSE;
}
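The LUT built by create_LUT() turns spatial queries like the loop above into direct position lookups instead of graph traversals. A minimal sketch of such a position-indexed table; the class name and interface are illustrative assumptions, not the original LUT API:

```cpp
#include <map>
#include <utility>

// A position-indexed lookup table: each node id is stored under its (x, y)
// position, so neighborhood probes become map lookups rather than scans
// over all graph nodes.
class NodeLUT {
public:
    void putNode(int x, int y, int node_id) { table_[{x, y}] = node_id; }

    // Returns true and fills node_id if a node was stored at (x, y).
    bool getNode(int x, int y, int& node_id) const {
        auto it = table_.find({x, y});
        if (it == table_.end()) return false;
        node_id = it->second;
        return true;
    }

private:
    std::map<std::pair<int, int>, int> table_;
};
```

With the table populated once per graph, each of the up-to-eight neighborhood probes in the correspondence search costs one logarithmic lookup.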
/**
 * verify_CorrespondingVertex()
 * Given a point with 16Dir 'dir_ref' and
 * a bunch of points 'P', this function
 * looks in the LUT 'L' and other attributes
 * 'A' to verify that node 'N' is linkable
 * to the reference one.
 */
bool verify_CorrespondingVertex ( int dir_ref, node_map<NodeAttribute>& A,
                                  vector<CvPoint>& P, node N)
{
    CvPoint PP = A[N].getPosition();
    int dir_other = (A[N]).get16Dir();
    for (int i = 0; i < P.size(); i++) {
        if (P[i].x == PP.x && P[i].y == PP.y) {
            if (compatible_16Dir(dir_ref, dir_other))
                return TRUE;
        }
    }
    return FALSE;
}
/****************************************************************************
 * gsat_attrib.h
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 ****************************************************************************/
#ifndef _GSAT_ATTRIB_H
#define _GSAT_ATTRIB_H

#include <cmath>
#include "sfp_attrib.h"

class GSAT_Attrib {
    friend bool CC (GSAT_Attrib&, GSAT_Attrib&);
    friend double GLCFactor (GSAT_Attrib&, GSAT_Attrib&);
    friend double GLCFactor2 (GSAT_Attrib&, GSAT_Attrib&);
    friend double Factor0 (GSAT_Attrib&, GSAT_Attrib&);
    friend double Factor1 (GSAT_Attrib&, GSAT_Attrib&);
public:
    GSAT_Attrib();
    GSAT_Attrib(CvPoint, int, int, int);
    ~GSAT_Attrib() { };

    // Copy operator
    const GSAT_Attrib &operator=(const GSAT_Attrib &);

    void setIndex(int i) { _index = i; }
    int getIndex() { return _index; }
    void setTop(int r, CvPoint p, int n) { _maxR = r, _maxL.x = p.x, _maxL.y = p.y, _maxN = n; }
    void setBot(int r, CvPoint p, int n) { _minR = r, _minL.x = p.x, _minL.y = p.y, _minN = n; }
    int getTopRad () { return _maxR; }
    int getBotRad () { return _minR; }
    CvPoint& getTopLoc () { return _maxL; }
    CvPoint& getBotLoc () { return _minL; }
    int getTopLevel () { return _maxN; }
    int getBotLevel () { return _minN; }
    int getGmin() { return _Gmin; }
    int getDeltaG() { return _DeltaG; }
    int getAvgDisc() { return _avgDisc; }
    CvPoint& getAvgPos () { return _avgPos; }
    void Used() { _USED = TRUE; }
    void NotUsed() { _USED = FALSE; }
    bool isUsed() { return _USED; }
    void DontReconstruct () { _NO_RECONST = TRUE; }
    bool isDontReconstruct() { return _NO_RECONST; }
    void setNoise() { _NOISE = TRUE; }
    void resetNoise() { _NOISE = FALSE; }
    bool isNoise() { return _NOISE; }
    void addNode(int n) { _cnctd.insert(_cnctd.begin() + _cnctd.size(), n); }
    vector<int> getNode() { return _cnctd; }

    // These 2 methods were added on: 8/4/03
    void addSFPIndex(int n) { _sfpInd.insert(_sfpInd.begin() + _sfpInd.size(), n); }
    vector<int> getSFPIndex() { return _sfpInd; }

    // Was added on: 8/6/03
    void setColor(int c) { _color = c; }
    int getColor() { return _color; }

private:
    int _color;       // coloring of the vertex
    int _index;       // index of node in GSAT
    bool _USED;       // TRUE if node was used in subgraph finding
    bool _NO_RECONST; // Don't use for reconstruction if TRUE
    bool _NOISE;      // if TRUE shows that this vertex is noise
    int _Gmin;        // minimum gray-level of the associated SF path
    int _DeltaG;      // gray-level difference between SF path top and bottom
    int _avgDisc;     // average size of maximal disc in SF path
    CvPoint _avgPos;  // average position of the associated SF path
    int _minR;        // Radius of node at bottom and top of corresponding SP
    int _maxR;
    CvPoint _minL;    // Location of node at bottom and top of corresponding SP
    CvPoint _maxL;
    int _minN;        // Min and Max level of corresponding SP
    int _maxN;
    // Last 2 lines were added 8/7/03. Needed in computing cavity compatibility.
    vector<int> _cnctd;  // List of nodes to which current GSAT node is connected
    // Was added on: 8/4/03
    vector<int> _sfpInd; // List of SFPs corresponding to this node
    // Was added on: 8/6/03
};

bool CC (GSAT_Attrib&, GSAT_Attrib&);
double GLCFactor (GSAT_Attrib&, GSAT_Attrib&);
double GLCFactor2 (GSAT_Attrib&, GSAT_Attrib&);
double Factor0 (GSAT_Attrib&, GSAT_Attrib&);
double Factor1 (GSAT_Attrib&, GSAT_Attrib&);
#endif

#!/usr/bin/perl -w

$start = 2;
$stop = 3;
$srcdir = "/home/shahram/cardio/source/chamber_segment/testimg/echon";
$src_suffix = "/desc";

for ($i = $start; $i < $stop; $i++)
{
    $src = $srcdir.$i.$src_suffix;
    opendir (SRCDIR, $src) || die "no directory: $!";
    while ($xmlln = readdir(SRCDIR)) {
        $inFile = $src."/".$xmlln;
        print "$inFile\n";
        system "java -classpath /home/shahram/cardio/source/datacollect:/usr/java/jars/domsdk.jar:/usr/java/jars/echoframe.jar dataGen.DataGen $inFile";
    }
    closedir (SRCDIR);
}
package dataGen;

import java.awt.image.*;
import java.io.File;
import java.io.FileWriter;
import java.lang.String;
import java.util.StringTokenizer;
import java.util.Vector;
import java.lang.Exception;
import javax.media.jai.*;
import com.sun.media.jai.codec.*;
import xmlio.EchoFrame;
import xmlio.Roi;

public class DataGen implements XMLConstants {
    private static String featureDirRel = "features";
    private static String keysDirRel = "keys";
    private static String descDirRel = "desc";
    private static String descBkpDirRel = "desc_bkp";
    private static String masksDirRel = "masks";
    private static String regionsDirRel = "regions";
    private static String labelsDirRel = "labels";
    private static String PGM = ".pgm";
    private static String PPM = ".ppm";
    private static String DAT = ".dat";
    private static String SEG = "_seg"; // was _seg__

    private EchoFrame echoFrame;
    private String inputFileName;

    public DataGen (String xmlName) throws Exception {
        // Read the xml file and parse it
        echoFrame = new EchoFrame();
        echoFrame.read(xmlName);

        // Keep the name of input file
        inputFileName = getFileName(xmlName);
    }

    private String getFileName (String S) {
        String delim = new String("/");
        StringTokenizer st = new StringTokenizer(S, delim);
        int numTokens = st.countTokens();

        // Find the name of the input file
        for (int i = 0; i < numTokens - 1; i++) {
            String word = st.nextToken();
        }
        String lastToken = new String(st.nextToken());
        return (lastToken);
    }
    public void checkViewModified() throws Exception {
        // Get the source name and the view label
        String name = echoFrame.getName();
        String view = echoFrame.getView();
        String path = echoFrame.getPath();

        // Get the stem name
        StringTokenizer st = new StringTokenizer(name, ".");
        String stem = new String(st.nextToken());

        // If view's name was changed, change the name of echo accordingly
        if (stem.endsWith(view) == false) {
            System.out.println("<<<<<<<<<<<<<< >>>>>>>>>>>");
            System.out.println("View label has been modified!!!!");
            System.out.println(" ");

            // Update the stem name
            String newStem = changeName(stem, view);
            System.out.println("Original stem: " + stem + ", new stem: " + newStem);

            /* Make the necessary changes inside the xml file */

            // change the echo name
            String newName = name.replaceFirst(stem, newStem);
            echoFrame.setName(newName);
            System.out.println("Original name: " + name + ", New Echo Name: " + newName);
            // change each object path
            int numObjs = echoFrame.getNumROIs();
            System.out.println("Number of ROIs: " + numObjs);
            for (int i = 0; i < numObjs; i++) {
                // get the roi's path
                Roi currentROI = echoFrame.getROI(i+1);
                String oldPath = currentROI.getPath();

                // correct the path
                String newPath = oldPath.replaceFirst(stem, newStem);
                System.out.println("Original path: " + oldPath + ", New path: " + newPath);

                // write it back to roi object
                echoFrame.setROIPath(i+1, newPath);
            }

            /* Make necessary changes to files associated with this xml */
            String newInputFileName = inputFileName.replaceFirst(stem, newStem);

            // change xml file name
            String descNameOld = new String(path + descDirRel + "/" + inputFileName);
            String descNameNew = new String(path + descDirRel + "/" + newInputFileName);
            String descNameBkp = new String(path + descBkpDirRel + "/" + inputFileName + ".bkp");
            System.out.println("xml old: " + descNameOld + ", xml new: " + descNameNew);
            echoFrame.writeToFile(descNameNew);

            File descFileOld = new File(descNameOld);
            File descFileBkp = new File(descNameBkp);
            descFileOld.renameTo(descFileBkp);
            // change ppm file in keys directory
            String keysNameOld = new String(path + keysDirRel + "/" + name);
            String keysNameNew = new String(path + keysDirRel + "/" + newName);
            System.out.println("keys ppm old: " + keysNameOld + ", keys ppm new: " + keysNameNew);
            File keysFileOld = new File(keysNameOld);
            File keysFileNew = new File(keysNameNew);
            keysFileOld.renameTo(keysFileNew);

            // change pgm file in masks directory
            String maskNameOld = new String(path + masksDirRel + "/" + stem + PGM);
            String maskNameNew = new String(path + masksDirRel + "/" + newStem + PGM);
            System.out.println("mask old: " + maskNameOld + ", mask new: " + maskNameNew);
            File maskFileOld = new File(maskNameOld);
            File maskFileNew = new File(maskNameNew);
            maskFileOld.renameTo(maskFileNew);

            // locate the related pgm image regions and change their names
            String regionsDirName = new String(path + regionsDirRel);
            File regionsDir = new File(regionsDirName);
            File[] regionsFile = regionsDir.listFiles();
            int numFiles = regionsFile.length;
            for (int j = 0; j < numFiles; j++) {
                //String currentFileName = new String(regionsFileNames[j]);
                if (regionsFile[j].isFile()) {
                    String fileName = getFileName(regionsFile[j].getCanonicalPath());
                    if (fileName.startsWith(stem)) {
                        // make the proper file name
                        String newFileName = fileName.replaceFirst(stem, newStem);
                        String regionsNameOld = new String(path + regionsDirRel + "/" + fileName);
                        String regionsNameNew = new String(path + regionsDirRel + "/" + newFileName);
                        System.out.println("regions old: " + regionsNameOld + ", regions new: " + regionsNameNew);
                        File regionsFileOld = new File(regionsNameOld);
                        File regionsFileNew = new File(regionsNameNew);
                        regionsFileOld.renameTo(regionsFileNew);
                    }
                }
            }
        } else {
            System.out.println("<<<<<<<<<<<<<< >>>>>>>>>>>");
            System.out.println("View label has NOT been modified!!!!");
            System.out.println(" ");
        }
    }
    /*
     * Changes the last token of the string 'N' to 'V' */
    private String changeName (String N, String V) {
        String delim = new String("_");
        StringTokenizer st = new StringTokenizer(N, delim);
        int numTokens = st.countTokens();

        // Get the last token of 'N'
        for (int i = 0; i < numTokens - 1; i++) {
            String word = st.nextToken();
        }
        String oldV = new String(st.nextToken());
        String newN = N.replaceFirst(oldV, V);
        return (newN);
    }
    /*
     * Writes the attributes of the valid objects in the echo
     * frame to appropriate files under the 'features' directory
     * of the current echo video */
    public void exportEchoInfo () throws Exception {
        // get the view label of current echo frame
        String viewLabel = echoFrame.getView();
        String path = echoFrame.getPath();
        System.out.println("view label: " + viewLabel + ", path: " + path);

        // For each object get its label, create an output file
        // and export its information to that file
        int numObjs = echoFrame.getNumROIs();
        System.out.println("Number of objects: " + numObjs);
        for (int i = 0; i < numObjs; i++) {
            Roi currentROI = echoFrame.getROI(i+1);
            if ((currentROI.getValidity()).equals(TRUE)) {
                // Get current ROI's label
                String objLabel = currentROI.getLabel();

                // Make up the name of export file
                String relativeName = new String(viewLabel + "_" + objLabel + DAT);
                String exportFileName = new String(path + featureDirRel + "/" + relativeName);
                System.out.println("Export File Name: " + exportFileName);

                // List object attributes for export
                StringBuffer sb = new StringBuffer();
                sb.append(currentROI.getComX()); sb.append(" ");
                sb.append(currentROI.getComY()); sb.append(" ");
                sb.append(currentROI.getArea()); sb.append(" ");
                sb.append(currentROI.getAngle()); sb.append(" ");
                sb.append(currentROI.getEccentricity()); sb.append("\n");
                System.out.println("Output attributes: " + sb.toString());

                // Open the file for exporting object info to it
                File exportFile = new File(exportFileName);
                FileWriter out = new FileWriter(exportFile, true);
                out.write(sb.toString());
                out.close();
            }
        }
    }

    public void exportInfo() throws Exception {
        // Make up the name of export file
        String path = echoFrame.getPath();
        String name = echoFrame.getName();
        String name_base = name.replaceAll(PPM, " ");
        name_base = name_base.trim();
        String exportFileName = new String(path + featureDirRel + "/" + name_base + DAT);
        System.out.println("Export File Name: " + exportFileName);

        // Output string buffer
        StringBuffer sb = new StringBuffer();

        // Output the echo number
        int index = path.lastIndexOf("echo"); // was "echon"
        String echonum_tmp = path.substring(index + 5);
        String echonum_str = echonum_tmp.replaceAll("/", " ");
        echonum_str = echonum_str.trim();
        sb.append("0"); //echonum_str); NOTE: was this before.
        sb.append("\n");

        // Output the name of the view
        sb.append(echoFrame.getView());
        sb.append("\n");

        // Output the sequence number of the key-frame
        sb.append(name_base);
        sb.append("\n");

        int numObjs = echoFrame.getNumROIs();
        for (int i = 0; i < numObjs; i++) {
            Roi currentROI = echoFrame.getROI(i+1);
            if ((currentROI.getValidity()).equals(TRUE)) {
                // Get current ROI's label
                String objLabel = currentROI.getLabel();

                // List object attributes for export
                sb.append(currentROI.getLabel()); sb.append(" ");
                sb.append(currentROI.getComX()); sb.append(" ");
                sb.append(currentROI.getComY()); sb.append(" ");
                sb.append(currentROI.getArea()); sb.append(" ");
                sb.append(currentROI.getAngle()); sb.append(" ");
                sb.append(currentROI.getEccentricity()); sb.append("\n");
                System.out.println("Output attributes: " + sb.toString());
            }
        }

        // Open the file for exporting object info to it
        File exportFile = new File(exportFileName);
        FileWriter out = new FileWriter(exportFile, true);
        out.write(sb.toString());
        out.close();
    }
    // For each region in the mask export its label to appropriate file
    public void exportRegionLabels () throws Exception {
        boolean FLAG;

        // get the view label of current echo frame
        String viewLabel = echoFrame.getView();
        String path = echoFrame.getPath();
        String name = echoFrame.getName();
        String name_seg = name.replaceAll(PPM, SEG);
        String name_base = name.replaceAll(PPM, " ");
        name_base = name_base.trim();
        String regionNameBase = new String(path + regionsDirRel + "/" + name_seg);
        System.out.println("View label: " + viewLabel + ", path: " + path);

        // For each object get its label, create an output file
        // and export its information to that file
        int numObjs = echoFrame.getNumROIs();
        System.out.println("Number of objects: " + numObjs);

        // Make up the name of export file
        String exportFileName = new String(path + labelsDirRel + "/" + name_base + DAT);
        System.out.println("Export File Name: " + exportFileName);

        // Output string buffer
        StringBuffer sb = new StringBuffer();
        for (int k = 0; k < numObjs; k++) {
            FLAG = false;
            /* Get ROI's image */
            String regionName = new String(regionNameBase + String.valueOf(k+1) + PGM);
            System.out.println("Name of region mask: " + regionName);

            // open image
            PlanarImage image = JAI.create("fileload", regionName);
            BufferedImage bufImage = image.getAsBufferedImage();
            SampleModel sm = bufImage.getSampleModel();
            WritableRaster wr = bufImage.getRaster();
            DataBuffer db = wr.getDataBuffer();
            int imageWidth = image.getWidth();
            int imageHeight = image.getHeight();
            int pixel[][] = new int[imageWidth * imageHeight][];
            for (int i = 0; i < imageHeight; i++) {
                for (int j = 0; j < imageWidth; j++) {
                    int pix[] = null;
                    pixel[i*imageWidth + j] = sm.getPixel(j, i, pix, db);
                }
            }

            // Get COM of image
            double cx = 0;
            double cy = 0;
            double totalG = 0;
            int len = pixel[0].length;
            for (int j = 0; j < len; j++) {
                for (int i = 0; i < pixel.length; i++) {
                    int pix = pixel[i][j];
                    int row = i / imageWidth;
                    int col = i - row*imageWidth;
                    cx += col * pix;
                    cy += row * pix;
                    totalG += pix;
                }
            }
            double comX = cx / totalG;
            double comY = cy / totalG;
            System.out.println("COM from object: " + comX + ", " + comY);

            /* Find the non-null region having the same com */
            for (int l = 0; l < numObjs; l++) {
                Roi currentROI = echoFrame.getROI(l+1);

                // find com of valid roi
                if ((currentROI.getValidity()).equals(TRUE)) {
                    double comx = Double.valueOf(currentROI.getComX()).doubleValue();
                    double comy = Double.valueOf(currentROI.getComY()).doubleValue();
                    System.out.println("COM from XML: " + comx + ", " + comy);
                    if ((comx == comX) && (comy == comY)) {
                        sb.append(currentROI.getLabel());
                        sb.append(" ");
                        FLAG = true;
                        break;
                    }
                }
            }

            // no region found to match the COM
            if (FLAG == false) {
                sb.append("nn ");
            }
        }

        String output_string = sb.toString();
        output_string = output_string.trim();
        String output = new String(output_string + "\n");

        // Open the file for exporting object info to it
        File exportFile = new File(exportFileName);
        FileWriter out = new FileWriter(exportFile, true);
        out.write(output);
        out.close();
    }

    public static void main(String args[]) {
        try {
            DataGen dataGen = new DataGen(args[0]);

            // Check to see if view has been modified. If yes change name.
            //dataGen.checkViewModified();

            // Export view's objects information to appropriate files
            //dataGen.exportRegionLabels();
            //dataGen.exportEchoInfo();
            dataGen.exportInfo();
        } catch (Exception e) {
            System.out.println("Error.");
            e.printStackTrace(System.err);
        }
    }
}

package dataGen;

interface XMLConstants {
    String ECHO_FRAME = "echoFrame";
    String FRAME_PATH = "path";
    String FRAME_NAME = "name";
    String FRAME_VIEW = "view";
    String ECHO_NUM = "echoNum";
    String SEG_NUM = "segNum";
    String KF_NUM = "kfNum";
    String ROI = "ROI";
    String ROI_INDEX = "index";
    String ROI_LABEL = "label";
    String ROI_PATH = "roiPath";
    String ROI_VALID = "valid";
    String ROI_COMX = "comX";
    String ROI_COMY = "comY";
    String ROI_AREA = "area";
    String ROI_ANGLE = "angle";
    String ROI_FEAT = "fFile";
    String PLA = "PLA";
String RVIT = "RVIT";
String PLAP = "PLAP";
String PSAB = "PSAB";
String PSAPM = "PSAPM";
String PSAM = "PSAM";
String PSAX = "PSAX";
String A2C = "A2C";
String A3C = "A3C";
String A4C = "A4C";
String A5C = "A5C";
String SC4C = "SC4C";
String SCIVC = "SCIVC";
String LV = "LV";
String RV = "RV";
String LA = "LA";
String RA = "RA";
String AO = "AO";
String AV = "AV";
    String UNKN = "UNKN";
String TRUE = "TRUE";
String FALSE = "FALSE";
}

DATA COLLECTION /DCGUI

javac -classpath /home/shahram/cardio/source/datacollect:/usr/java/jars/domsdk.jar:/home/shahram/package/imaging/classes/ LaunchPad.java

package dcgui;

import java.io.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.event.*;
import javax.swing.border.*;
import javax.swing.JTable;
import javax.swing.table.AbstractTableModel;
import javax.xml.parsers.*;
import org.xml.sax.*;
import org.w3c.dom.*;
import java.net.*;
import java.util.StringTokenizer;
import java.util.ArrayList;
import xmlio.EchoFrame;
public class ControlPanel extends JPanel implements GUIConstants
{
    private static ControlPanel instance;
    private DisplayPanel display;
    private InfoViewer metadataArea;
    private ControlViewer controlArea;
    private EchoFrame echoframe;

    public static synchronized ControlPanel getInstance() {
        if (instance == null)
            instance = new ControlPanel();
        return instance;
    }

    public ControlPanel () {
        super();
        setLayout(null);
        //setSize(560, 200);
        setMetaArea();
        setControlArea();

        Insets insets = getInsets();
        metadataArea.setBounds(insets.left, insets.top, 175, 32);      // 175 x 32
        controlArea.setBounds(insets.left, 32 + insets.top, 175, 528); // 175 x 248
        add(metadataArea);
        add(controlArea);
    }

    // Set up the text area that is used to show meta-data of images
    void setMetaArea() { metadataArea = InfoViewer.getInstance(); }

    // Set up the control area
    void setControlArea() { controlArea = ControlViewer.getInstance(); }

    // Update the sub panels
    public void update(String fileName, String viewName, ArrayList rois) {
        System.out.println("Updating the ROIs!!");
        metadataArea.update(fileName, viewName);
        controlArea.update(rois);
    }
}
package dcgui; import java.io. *; import java.util.*; import java.awt.*; import Java.awt.Color; import javax.swing.*; import java.awt.event.*; import javax.swing.event.*; import javax.swing.border.*; import xmlio.EchoFrame; import xmlio.Roi; public class ControlViewer extends JPanel implements ActionListener, ItemListener,
GUIConstants
{ private static final int width= 165, SUBMITJHEIGHT = 30, QUIT_HEIGHT = 30, COMBO_HEIGHT=30; private static String VIEW_SEIiECTOR = "view", LABEL_SELECTOR = "label"; private static ControlViewer instance,- private int NUMBERJBUTTONS = 10; private ArrayList roiButtons; private JComboBox selectionBox; private JPanel roiButtonPanel; private JButton submitButton,- private Container verticalBox; private String current__view; private String current_label; private String current__roi_name; private String[] labels = (LABELO, LABELl, LABEL2, LABEL3, LABEL4, LABEL5, LABEL6} ; public static synchronized ControlViewer getlnstance() { if (instance == null) instance = new ControlViewer(); return instance; } public ControlViewer{) { super() ; setLayout (null) ; setBackground(Color.gray) ; setBorder(BorderFactory.createLineBorder(Color.black) ) ; roiButtons = new ArrayList (); createSubmitButton() ; createROIPanel () ; createViewSelectionBox() ;
Insets insets = getlnsetsO; roiButtonPanel.setBounds (insets.left, insets.top, 175, 400); selectionBox.setBounds (insets.left, 400+insets.top, 175, 50); submitButton.setBounds (insets . left, 450+insets.top, 175, 50); add(roiButtonPanel) ; add(submitButton) ; add(selectionBox) ; } private void createSubmitButtonO { submitButton = new JButton{"Analyze"); submitButton.setEnabled(true) ; submitButton.addActionListener(this) ; submitButton.setBorder(BorderFactory.createBevelBorder(BevelBorder.RAISED) ) ; } private void createROIPanel () { roiButtonPanel = new JPanelO; roiButtonPanel.setBackground(Color.lightGray) ; createCheckBoxes () ; } private void createCheckBoxes () { boolean split = false; int panel_width = roiButtonPanel.getWidth() ; int panel__height = roiButtonPanel.getHeight(); int box_width = panel_width, box_height = 20; int K = panel_height / box_height; roiButtonPanel.setLayout(null) ;
Insets insets = roiButtonPanel.getInsets(); if (roiButtons.isEmpty() == false) {
// split check boxes into two columns if this happens if (box_height * roiButtons.size() > panel_height) box_width /= 2; for (int i=0; i < roiButtons.size(); i++) { int ht = i%K * box_height; int wl = (i/K) * box_width + 3; System.out.println("ht: " + ht + ", wl: " + wl);
String text = new String("ROI " + String.valueOf(i+1));
JCheckBox box = new JCheckBox(text); box.addItemListener(this); box.setBackground(Color.lightGray); box.setBounds(wl+insets.left, ht+insets.top, box_width/2, box_height);
// labels have to change with the choice of view. JComboBox L = new JComboBox(labels); L.setSelectedIndex(0); L.addActionListener(this);
L.setBounds(wl+box_width/2+insets.left, ht+insets.top, box_width/2, box_height);
L.setName(text); roiButtonPanel.add(L); roiButtonPanel.add(box); } } } private void createViewSelectionBox() {
String[] views = { VIEW0, VIEW1, VIEW2, VIEW3, VIEW4, VIEW5, VIEW6, VIEW7,
VIEW8, VIEW9, VIEW10, VIEW11, VIEW12, VIEW13 }; selectionBox = new JComboBox(views); selectionBox.setSelectedIndex(0); current_view = (String) selectionBox.getSelectedItem(); selectionBox.addActionListener(this);
Dimension d = new Dimension(170, 50); selectionBox.setPreferredSize(d); selectionBox.setBorder(BorderFactory.createBevelBorder(BevelBorder.RAISED)); selectionBox.setName(VIEW_SELECTOR); } public void update(ArrayList roi) { // update ROIs roiButtons.clear(); for (int i=0; i < roi.size(); i++) roiButtons.add((Roi)roi.get(i));
// recreate the roi panel roiButtonPanel.removeAll(); createCheckBoxes(); roiButtonPanel.repaint(); } public void actionPerformed(ActionEvent evt) { String command = evt.getActionCommand(); Object source = evt.getSource(); if (command.equals(SELECTION_BOX_CHANGED)) {
JComboBox cb = (JComboBox)evt.getSource(); String source_name = cb.getName(); if (source_name.equals(VIEW_SELECTOR)) { current_view = (String) cb.getSelectedItem();
String viewName = new String(); if (current_view.equals(VIEW0)) { viewName = (VIEW0); } else if (current_view.equals(VIEW1)) { viewName = (VIEW1_LABEL); } else if (current_view.equals(VIEW2)) { viewName = (VIEW2_LABEL); } else if (current_view.equals(VIEW3)) { viewName = (VIEW3_LABEL); } else if (current_view.equals(VIEW4)) { viewName = (VIEW4_LABEL); } else if (current_view.equals(VIEW5)) { viewName = (VIEW5_LABEL); } else if (current_view.equals(VIEW6)) { viewName = (VIEW6_LABEL); } else if (current_view.equals(VIEW7)) { viewName = (VIEW7_LABEL); } else if (current_view.equals(VIEW8)) { viewName = (VIEW8_LABEL); } else if (current_view.equals(VIEW9)) { viewName = (VIEW9_LABEL); } else if (current_view.equals(VIEW10)) { viewName = (VIEW10_LABEL); } else if (current_view.equals(VIEW11)) { viewName = (VIEW11_LABEL); } else if (current_view.equals(VIEW12)) { viewName = (VIEW12_LABEL); } else if (current_view.equals(VIEW13)) { viewName = (VIEW13_LABEL);
}
System.out.println("Chosen view: " + current_view); System.out.println("View Name: " + viewName);
// updating the echoframe view field EchoFrame echoframe = EchoFrame.getInstance(); echoframe.setView(viewName);
InfoViewer metadata = InfoViewer.getInstance(); metadata.update(current_view);
GuidePanel guideP = GuidePanel.getInstance(); guideP.update(viewName); } else { current_label = (String) cb.getSelectedItem(); current_roi_name = cb.getName();
System.out.println("********************************");
System.out.println("Chosen roi: " + current_roi_name);
System.out.println("Chosen label: " + current_label);
System.out.println("********************************");
// Get the id of the selected/deselected ROI
StringTokenizer token = new StringTokenizer(current_roi_name, " "); int num_tokens = token.countTokens(); int roi_id; for (int i=0; i < num_tokens - 1; i++) { String tmp_token = token.nextToken();
} roi_id = Integer.valueOf(token.nextToken()).intValue();
System.out.println("roi id: " + roi_id);
// updating the appropriate ROI
EchoFrame echoframe = EchoFrame.getInstance(); echoframe.setROILabel(roi_id, current_label);
} } else if (command.equals(ANALYZE_EVENT)) {
// Analyze the ROIs and their relations and write to disk ROIAnalyzer ra = ROIAnalyzer.getInstance(); ra.initialize(EchoFrame.getInstance()); ra.writeRoisToDisk();
// Write the composite image to disk - Added 10/15/03
EchoFrame echoframe = EchoFrame.getInstance();
String path_name = echoframe.getPath();
String img_nameC = echoframe.getName();
String img_name = img_nameC.replaceAll(".ppm", ".jpg");
String img_filename = path_name.concat("/overlay/").concat(img_name);
ImageViewer img_view = ImageViewer.getInstance(); img_view.saveImageAsJPEG(img_filename); } } public void itemStateChanged(ItemEvent evt) { System.out.println(evt.getStateChange()); JCheckBox box = (JCheckBox) evt.getItem(); String label = box.getText();
// Get the id of the selected/deselected ROI System.out.println("Label is: " + label); StringTokenizer token = new StringTokenizer(label, " "); int num_tokens = token.countTokens(); int roi_id; for (int i=0; i < num_tokens - 1; i++) { String tmp_token = token.nextToken();
} roi_id = Integer.valueOf(token.nextToken()).intValue();
EchoFrame echoframe = EchoFrame.getInstance(); if (evt.getStateChange() == ItemEvent.SELECTED) { echoframe.setROIValid(roi_id, TRUE); } else { echoframe.setROIValid(roi_id, FALSE); }
// Get the selected/deselected mask image and send to ImageViewer
Roi roi; if (roi_id - 1 < roiButtons.size()) { roi = (Roi)roiButtons.get(roi_id - 1);
ImageViewer imgV = ImageViewer.getInstance(); imgV.updateROI(roi, evt.getStateChange()); } else
System.out.println("Error. Index out of bounds."); } }
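Both `actionPerformed` and `itemStateChanged` above recover the ROI id by walking a `StringTokenizer` to the last whitespace-separated token of a widget label such as "ROI 3". A stand-alone sketch of just that parsing step (the class name `RoiIdParser` is ours, not from the listing):

```java
import java.util.StringTokenizer;

public class RoiIdParser {
    // Mirrors ControlViewer's id extraction: skip all but the last
    // whitespace-separated token, then parse it as an integer.
    static int parse(String label) {
        StringTokenizer token = new StringTokenizer(label, " ");
        int numTokens = token.countTokens();
        for (int i = 0; i < numTokens - 1; i++) token.nextToken();
        return Integer.valueOf(token.nextToken()).intValue();
    }

    public static void main(String[] args) {
        System.out.println(parse("ROI 7")); // prints 7
    }
}
```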
package dcgui; import java.io.*; import java.awt.*; import java.awt.event.*; import javax.swing.*; import javax.swing.event.*; import javax.swing.border.*; import javax.swing.JTable; import javax.swing.table.AbstractTableModel; import javax.xml.parsers.*; import org.xml.sax.*; import org.w3c.dom.*; import java.net.*; import java.util.StringTokenizer; import java.util.ArrayList; import xmlio.EchoFrame;
public class DisplayPanel extends JPanel implements ActionListener,
GUIConstants
{ private static DisplayPanel instance; private SelectionPanel selection; private InputPanel input; private ImageViewer imageArea; //private InfoViewer metadataArea; private ControlPanel controlArea; private EchoFrame echoframe; public static synchronized DisplayPanel getInstance() { if (instance == null) instance = new DisplayPanel(); return instance; } public DisplayPanel() { super(); setLayout(null); //setSize(550, 300); // 550 x 300 setDisplayArea(); setControlArea(); //setMetaArea(); //setControlArea();
Insets insets = getInsets(); imageArea.setBounds(insets.left, insets.top, 375, 280); //metadataArea.setBounds(375+insets.left, insets.top, 175, 32); //controlArea.setBounds(375+insets.left, 32+insets.top, 175, 248); add(imageArea); //add(metadataArea); //add(controlArea); } // Set up the display area for showing images void setDisplayArea() { imageArea = ImageViewer.getInstance(); }
// Set up the text area that is used to show meta-data of images
//void setMetaArea() {
// metadataArea = InfoViewer.getInstance(); //}
// Set up the control area void setControlArea() { controlArea = ControlPanel.getInstance(); } public void actionPerformed(ActionEvent evt) { String command = evt.getActionCommand(); Object source = evt.getSource(); if (command.equals(setInputFile)) { String fname = selection.getSelectionName(); imageArea.reset(); updateDisplay(fname);
} } public void setSelection() { selection = SelectionPanel.getInstance();
} public void setInput() { input = InputPanel.getInstance(); }
// Gets the input xml and updates the contents // of the DisplayPanel accordingly. private void updateDisplay(String inputfilename) { // Get the input file and extract info from it echoframe = EchoFrame.getInstance(); try { echoframe.read(inputfilename);
} catch (Exception e) {
System.out.println("Error."); e.printStackTrace(System.err); }
// Get input parameters
String file_path = echoframe.getPath();
String file_name = echoframe.getName();
// Get number of ROIs
String num_rois = String.valueOf(echoframe.getNumROIs());
// Get View name String view_name = echoframe.getView(); if (view_name.equals("")) view_name = NOT_SPECIFIED;
// Get the ROIs of the echoframe ArrayList rois = new ArrayList(); for (int j=0; j < Integer.valueOf(num_rois).intValue(); j++) rois.add(j, echoframe.getROI(j+1));
// Update elements of the display panel
//metadataArea.update(file_name, num_rois, view_name, status); //metadataArea.update(file_name, view_name);
// Get echo image and display
String imgFile = new String(file_path + "keys/" + file_name); imageArea.update(imgFile);
// Get and display the proper guide image GuidePanel gp = GuidePanel.getInstance(); gp.update(view_name);
// Get rois and display them controlArea.update(file_name, view_name, rois); //controlArea.update(rois); } }
package dcgui; interface GUIConstants { String setInputFile = "SetInputFile"; String validFileExtension = "xml"; String TABLE_NAME_CELL = "Name:"; String TABLE_ROI_CELL = "ROIs:"; String TABLE_VIEW_CELL = "View:"; String TABLE_STATUS_CELL = "Status:";
String SELECTION_BOX_CHANGED = "comboBoxChanged"; String ANALYZE_EVENT = "Analyze"; String QUIT_EVENT = "Quit";
String TRUE = "TRUE";
String FALSE = "FALSE";
String VIEW0 = "UNKN",
VIEW1 = "PLA",
VIEW2 = "PLA - RVIT",
VIEW3 = "PLA - Pulmonic",
VIEW4 = "PSA - Base",
VIEW5 = "PSA - MV",
VIEW6 = "PSA - PM",
VIEW7 = "PSA - Apex",
VIEW8 = "A4C",
VIEW9 = "A5C",
VIEW10 = "A2C",
VIEW11 = "A3C",
VIEW12 = "SC4C",
VIEW13 = "SCIVC";
String LABEL0 = "UNKN",
LABEL1 = "LV",
LABEL2 = "RV",
LABEL3 = "LA",
LABEL4 = "RA",
LABEL5 = "AO",
LABEL6 = "AV";
String VIEW1_MODEL = "/home/shahram/cardio/echos/models/pla.jpg";
String VIEW2_MODEL = "/home/shahram/cardio/echos/models/vit.jpg";
String VIEW3_MODEL = "/home/shahram/cardio/echos/models/plap.jpg";
String VIEW4_MODEL = "/home/shahram/cardio/echos/models/psab.jpg";
String VIEW5_MODEL = "/home/shahram/cardio/echos/models/psam.jpg";
String VIEW6_MODEL = "/home/shahram/cardio/echos/models/psapm.jpg";
String VIEW7_MODEL = "/home/shahram/cardio/echos/models/psax.jpg";
String VIEW8_MODEL = "/home/shahram/cardio/echos/models/a4c.jpg";
String VIEW9_MODEL = "/home/shahram/cardio/echos/models/a5c.jpg";
String VIEW10_MODEL = "/home/shahram/cardio/echos/models/a2c.jpg";
String VIEW11_MODEL = "/home/shahram/cardio/echos/models/a3c.jpg";
String VIEW12_MODEL = "/home/shahram/cardio/echos/models/sc4c.jpg";
String VIEW13_MODEL = "/home/shahram/cardio/echos/models/scivc.jpg";
String VIEW1_LABEL = "PLA"; String VIEW2_LABEL = "VIT"; String VIEW3_LABEL = "PLAP"; String VIEW4_LABEL = "PSAB"; String VIEW5_LABEL = "PSAM"; String VIEW6_LABEL = "PSAPM"; String VIEW7_LABEL = "PSAX"; String VIEW8_LABEL = "A4C"; String VIEW9_LABEL = "A5C"; String VIEW10_LABEL = "A2C"; String VIEW11_LABEL = "A3C"; String VIEW12_LABEL = "SC4C"; String VIEW13_LABEL = "SCIVC"; String NOT_SPECIFIED = "UNKN"; int SELECTION=0, DISPLAY=1, GUIDE=2, CONTROL=3; int NUM_OF_PANELS = 4;
int MAX_VAL = 255, MIN_VAL = 0; }
package dcgui; import java.io.*; import java.awt. *; import java.util.ArrayList; import java.awt.event. *; import java.awt.geotn.*; import java.awt.event.*; import javax.swing.*; import javax.swing.event. *; import javax.media.jai.*; import javax.media.jai.operator.CompositeDestAlpha; import com.sun.media.jai.codec.*; public class GuidePanel extends JPanel implements GUIConstants { private static GuidePanel instance; protected ImagePanel viewer; private Planarlmage srclmage; private int display_width; private int display_height; public GuidePanel () { setBackground(Color.gray) ; } public static synchronized GuidePanel getlnstance() { if (instance == null) instance = new GuidePanel (); return instance; } public void update(String viewName) { String fileName = new String(); if (viewName.equals (VIEW1_LABEL) ) { fileName = VIEW1_MODEL; } else if (viewName.equals (VIEW2_LABEL) ) { fileName = VIEW2_MODEL; } else if (viewName.equals (VIEW3_LABEL) ) { fileName = VIEW3_MODEL; } else if (viewName.equals (VIEW4_LABEL) ) { fileName = VIEW4_MODEL; } else if (viewName.equals (VIEW5_LABEL) ) { fileName = VIEW5_MODEL; } else if (viewName.equals (VIEW6_LABEL) ) { fileName = VIEW6_MODEL; } else if (viewName.equals (VIEW7_LABEL) ) { fileName = VIEW7_MODEL; } else if (viewName.equals (VIEW8_LABEL) ) { fileName = VIEW8_MODEL; } else if (viewName.equals (VIEW9_LABEL) ) { fileName = VIEW9_MODEL; } else if (viewName.equals (VIEWlO-LABEL) ) { fileName = VIEWl0_MODEL; } else if (viewName.equals (VIEWl1_LABEL) ) { fileName = VIEW11_MODEL; } else if (viewName.equals (VIEW12_LABEL) ) { fileName = VIEW12_MODEL; } else if (viewName.equals (VIEW13 LABEL)) { fileName = VIEW13_MODEL; ~ } if (fileName.length() != 0) { srclmage = JAI.create ("fileload" , fileName); displaylmage () ; } } public void displaylmage() {
Rectangle rect = this.getBounds () ; display_width = rect.width; display_height= rect.height; launchPanel () ; } public void launchPanel () { if (viewer == null) viewer = new TmagePanel ();
Dimension d = getViewerSize (srclmage.getwidth() / (double)srclmage.getHeight () ) ; viewer.set(srclmage, d) ; viewer.setPreferredSize(new Dimension(d.width, d.height)); setLayout (null) ;
Insets insets = getlnsets () ; int wl = display_width/2 - d.width/2; int ht = display__height/2 - d.height/2; viewer.setBounds (wl+insets.left, ht+insets.top, d.width, d.height) ; add(viewer) ; viewer.repaint () ; } protected Dimension getViewerSize (double imageAspectRatio) { int maxWidth = (int) (display_width * 9/10.0); int maxHeight = (int) (display_height* 9/10.0); double screenAspectRatio = maxWidth / (double)maxHeight; int viewerWidth, viewerHeight; if (imageAspectRatio > screenAspectRatio) { viewerWidth = maxWidth; viewerHeight = (int) (maxHeight * screenAspectRatio / imageAspectRatio) ; } else { viewerHeight = maxHeight; viewerWidth = (int) (maxWidth * imageAspectRatio / screenAspectRatio) ;
} return new Dimension(viewerWidth, viewerHeight);
} protected class ImagePanel extends JComponent { protected Planarlmage image;
AffineTransform atx = new AffineTransformO ; protected int width, height; public ImagePanel() {} public void set (Planarlmage i, Dimension d) { width = d.width; height= d.height; float scaleX = width / (float) i.getWidth(); float scaleY = height/ (float) i.getHeight ();
RenderedOp scaledlmg = ImageOperation.scale(i, scaleX, scaleY, 0, 0) ; image = scaledlmg.createlnstance (); } public void paintComponent (Graphics gc) {
Graphics2D g = (Graphics2D)gc; if(image != null) g.drawRenderedlmage(image, atx); } public int getWidth() { return width; } public int getHeight () { return height; } }
package dcgui; import java.io.*; import java.awt.*; import java.awt.image.*; import java.awt.image.renderable.ParameterBlock; import java.util.ArrayList; import java.awt.event.*; import java.awt.geom.*; import javax.swing.*; import javax.swing.event.*; import javax.media.jai.*; import javax.media.jai.ROI; import com.sun.media.jai.codec.*; public class ImageOperation {
// Method for scaling an image public static RenderedOp scale(RenderedImage image, float magx, float magy, float transx, float transy) {
ParameterBlock pb = new ParameterBlock(); pb.addSource(image); pb.add(magx); pb.add(magy); pb.add(transx); pb.add(transy); pb.add(Interpolation.getInstance(Interpolation.INTERP_NEAREST)); return JAI.create("scale", pb); }
// Method for adding two images public static RenderedOp addImages(PlanarImage image1, PlanarImage image2) {
ParameterBlock pb = new ParameterBlock(); pb.addSource(image1); pb.addSource(image2); return JAI.create("add", pb); }
// Method for creating a ROI from an image public static ROI[] createROIFromImage(RenderedImage img) { int numbands = img.getSampleModel().getNumBands(); ROI[] roi = new ROI[numbands]; if (numbands == 1) { roi[0] = new ROI(img); return roi;
} int[] bandindices = new int[1]; for (int i=0; i<numbands; i++) { bandindices[0] = i;
RenderedOp opImage = JAI.create("bandselect", img, bandindices); roi[i] = new ROI((PlanarImage)opImage);
} return roi; }
// Adds two images together public static RenderedOp compositeImage(PlanarImage sourceImage1,
PlanarImage sourceImage2, boolean alphaPremultiplied, int destAlpha) {
ParameterBlock pb = new ParameterBlock(); pb.addSource(sourceImage1); pb.addSource(sourceImage2); pb.add(sourceImage1); pb.add(sourceImage2); pb.add(new Boolean(alphaPremultiplied)); pb.add(destAlpha); return JAI.create("composite", pb); }
// Overlays two images public static RenderedOp overlayImages(PlanarImage image1,
PlanarImage image2) { ParameterBlock pb = new ParameterBlock(); pb.addSource(image1); pb.addSource(image2); return JAI.create("overlay", pb); }
// Change color image to gray public static RenderedOp grayBandCombine(PlanarImage image) { double[][] grayBandCombineMatrix = {{0.212671f, 0.715160f, 0.071169f, 0.0f}}; return JAI.create("BandCombine", image, grayBandCombineMatrix); } }
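The single-row matrix in `grayBandCombine()` above weights the R, G and B bands by (approximately) the Rec. 709 luminance coefficients, with the fourth entry zeroing out any alpha band. A minimal per-pixel sketch of the same weighted combination, independent of JAI (the helper name `Luma` is ours):

```java
public class Luma {
    // Same weights as ImageOperation's grayBandCombineMatrix;
    // the matrix's fourth (0.0) entry would apply to an alpha band.
    static double toGray(double r, double g, double b) {
        return 0.212671 * r + 0.715160 * g + 0.071169 * b;
    }

    public static void main(String[] args) {
        // Green contributes far more to perceived brightness than red.
        System.out.println(toGray(0, 255, 0) > toGray(255, 0, 0)); // prints true
    }
}
```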
package dcgui; import java.io.*; import java.awt.*; import java.awt.image.*; import java.lang.Math; import java.util.ArrayList; import java.awt.event.*; import java.awt.geom.*; import javax.swing.*; import javax.swing.event.*; import javax.media.jai.*; import com.sun.media.jai.codec.*; import xmlio.EchoFrame; import xmlio.Roi; import xmlio.Neighbor; public class ImageRoi implements GUIConstants { private Roi roi; private int imageWidth; private int imageHeight; private int pixel[][]; private Point2D.Double centerOfMass; private double area; private double eccentricity; private double orientation; private ArrayList neighbors;
ImageRoi(Roi R) { neighbors = new ArrayList(); roi = R; set(); }
//public String getFile() { return roi.getFile(); } public String getIndex() { return roi.getIndex(); } public String getLabel() { return roi.getLabel(); } private void set() {
// Read in the image of the ROI and transform it to gray level PlanarImage image = JAI.create("fileload", roi.getPath()); BufferedImage bufImage = image.getAsBufferedImage(); SampleModel sm = bufImage.getSampleModel(); WritableRaster wr = bufImage.getRaster(); DataBuffer db = wr.getDataBuffer();
// Writing the image data to an array of pixels imageWidth = image.getWidth(); imageHeight = image.getHeight();
System.out.println("imageWidth = " + imageWidth + ", imageHeight = " + imageHeight); pixel = new int[imageHeight * imageWidth][]; for (int i=0; i < imageHeight; i++) { for (int j=0; j < imageWidth; j++) { int pix[] = null; pixel[i*imageWidth + j] = sm.getPixel(j, i, pix, db); } }
// Finding image features findArea();
System.out.println("Area = " + area); findCOM();
System.out.println("COM.x = " + centerOfMass.getX() + " COM.y = " + centerOfMass.getY()); findOrientation();
System.out.println("orientation = " + orientation); findEccentricity();
System.out.println("eccentricity = " + eccentricity); }
// Writes the features of the roi image to file public void storeToFile() { try { roi.setComX(centerOfMass.getX()); roi.setComY(centerOfMass.getY()); roi.setArea(area); roi.setAngle(orientation); roi.setEccentricity(eccentricity);
System.out.println("Area: " + area);
System.out.println("Eccentricity: " + eccentricity); System.out.println("Orientation: " + orientation);
System.out.println("Center of Mass: " + centerOfMass.getX() + ", " + centerOfMass.getY());
System.out.println("Neighbors' info: "); for (int i=0; i < neighbors.size(); i++) {
Roi nroi = (Roi)neighbors.get(i);
// Set neighborhood relationship between 'roi' and 'nroi'
Neighbor ni = new Neighbor();
String dist = roi.getDistance(nroi);
String orient = roi.getOrientation(nroi); ni.setLabel(nroi.getLabel()); ni.setDistance(dist); ni.setAngle(orient); roi.setNeighbor(ni, i);
System.out.println("Neighbor " + (i+1)); System.out.println("Label: " + nroi.getLabel()); System.out.println("Distance: " + dist); System.out.println("Angle: " + orient); }
System.out.println("****************************************");
} catch (Exception e) {
System.out.println("Error."); e.printStackTrace(System.err); } }
// Returns the center of mass of roi public Point2D.Double getCOM() { return centerOfMass; } public void addNeighbor(Roi N) { neighbors.add(N); } private void findCOM() { double cX = 0; double cY = 0; double totalG = 0; int len = pixel[0].length; for (int j=0; j < len; j++) { for (int i=0; i < pixel.length; i++) { int pix = pixel[i][j]; if (pix > MIN_VAL) { int row = i / imageWidth; int col = i - row * imageWidth; cX += col; // * pix; cY += row; // * pix; totalG += 1; // pix;
} } }
cX = cX / totalG; cY = cY / totalG; centerOfMass = new Point2D.Double(cX, cY); } private void findArea() { area = 0; int len = pixel[0].length; for (int j=0; j < len; j++) { for (int i=0; i < pixel.length; i++) { int pix = pixel[i][j];
if (pix > MIN_VAL)
area++;
} } }
private void findEccentricity() { double miu_20 = centralMoment(2, 0); double miu_02 = centralMoment(0, 2); double miu_11 = centralMoment(1, 1); eccentricity = Math.pow(miu_20 - miu_02, 2.0) + 4 * miu_11; eccentricity = eccentricity / area; } private void findOrientation() { double miu_20 = centralMoment(2, 0); double miu_02 = centralMoment(0, 2); double miu_11 = centralMoment(1, 1); double tmp = 2 * miu_11 / (miu_20 - miu_02); orientation = 0.5 * Math.atan(tmp); } private double centralMoment(int p, int q) { double centralMoment = 0; int len = pixel[0].length; for (int j=0; j < len; j++) { for (int i=0; i < pixel.length; i++) { int pix = pixel[i][j]; int row = i / imageWidth; int col = i - row * imageWidth; if (pix > MIN_VAL) { centralMoment += Math.pow(col-centerOfMass.getX(), p) *
Math.pow(row-centerOfMass.getY(), q);
} } }
return centralMoment; // /(imageWidth * imageHeight);
} }
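ImageRoi above reduces a binary ROI mask to moment-based shape features. The simplified stand-alone sketch below (class and method names are ours; it operates on a small `int[][]` mask indexed `[row][col]` rather than a JAI image) shows the same area, center-of-mass, central-moment and principal-axis computations:

```java
public class MomentFeatures {
    // Zeroth moment: count of foreground (non-zero) pixels.
    static double area(int[][] mask) {
        double a = 0;
        for (int[] row : mask)
            for (int v : row)
                if (v > 0) a++;
        return a;
    }

    // First moments give the center of mass, returned as {meanCol, meanRow}.
    static double[] centerOfMass(int[][] mask) {
        double cx = 0, cy = 0, n = 0;
        for (int r = 0; r < mask.length; r++)
            for (int c = 0; c < mask[r].length; c++)
                if (mask[r][c] > 0) { cx += c; cy += r; n++; }
        return new double[] { cx / n, cy / n };
    }

    // Central moment mu_pq about the center of mass.
    static double centralMoment(int[][] mask, int p, int q) {
        double[] com = centerOfMass(mask);
        double m = 0;
        for (int r = 0; r < mask.length; r++)
            for (int c = 0; c < mask[r].length; c++)
                if (mask[r][c] > 0)
                    m += Math.pow(c - com[0], p) * Math.pow(r - com[1], q);
        return m;
    }

    // Principal-axis angle, as in ImageRoi.findOrientation().
    static double orientation(int[][] mask) {
        double mu20 = centralMoment(mask, 2, 0);
        double mu02 = centralMoment(mask, 0, 2);
        double mu11 = centralMoment(mask, 1, 1);
        return 0.5 * Math.atan(2 * mu11 / (mu20 - mu02));
    }

    public static void main(String[] args) {
        int[][] mask = { { 0, 255, 0 }, { 0, 255, 0 }, { 0, 255, 0 } };
        System.out.println(area(mask));                    // prints 3.0
        double[] com = centerOfMass(mask);
        System.out.println(com[0] + " " + com[1]);         // prints 1.0 1.0
    }
}
```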
package dcgui; import java.io.*; import java.awt.*; import java.util.ArrayList; import java.awt.image.*; import java.awt.event.*; import java.awt.geom.*; import javax.swing.*; import javax.swing.event.*; import javax.media.jai.*; import javax.media.jai.operator.CompositeDestAlpha; import com.sun.media.jai.codec.*; import xmlio.EchoFrame; import xmlio.Roi; import com.vistech.util.*; public class ImageViewer extends JPanel { private static ImageViewer instance; protected ImagePanel viewer; private PlanarImage srcImage; private PlanarImage dstImage; private int display_width; private int display_height; private ArrayList rois; private ArrayList roiImg; public ImageViewer() { setBackground(Color.black); rois = new ArrayList(); roiImg = new ArrayList(); } public static synchronized ImageViewer getInstance() { if (instance == null) instance = new ImageViewer(); return instance; } public void update(String filename) {
PlanarImage img = JAI.create("fileload", filename);
RenderedOp colorConverted = ImageOperation.grayBandCombine(img); srcImage = colorConverted.createInstance(); dstImage = srcImage; displayImage(); } public void displayImage() {
Rectangle rect = this.getBounds(); display_width = rect.width; display_height = rect.height; launchPanel(); } public void reset() { if (rois != null) rois.clear(); if (roiImg != null) roiImg.clear(); } public void launchPanel() { if (viewer == null) viewer = new ImagePanel(); viewer.set(dstImage); setLayout(null);
Insets insets = getInsets(); int wl = display_width/2 - viewer.getWidth()/2; int ht = display_height/2 - viewer.getHeight()/2; viewer.setBounds(wl+insets.left, ht+insets.top, viewer.getWidth(), viewer.getHeight()); add(viewer); viewer.repaint(); } protected class ImagePanel extends JComponent { protected PlanarImage image;
AffineTransform atx = new AffineTransform(); protected int width, height; public ImagePanel() {} public void set(PlanarImage i) { image = i; width = i.getWidth(); height = i.getHeight(); } public void paintComponent(Graphics gc) {
Graphics2D g = (Graphics2D)gc; if (image != null) g.drawRenderedImage(image, atx); } public int getWidth() { return width; } public int getHeight() { return height; } }
public void updateROI(Roi roi, int status) { if (status == ItemEvent.SELECTED) { // Adding ROI image to original image System.out.println("Updating(selecting) ROI: " + roi.getPath()); rois.add(roi);
PlanarImage img = JAI.create("fileload", roi.getPath()); roiImg.add(img); updateROIImage();
} else if (status == ItemEvent.DESELECTED) { // Removing ROI from image System.out.println("Updating(deselecting) ROI: " + roi.getPath()); for (int i=0; i < rois.size(); i++) { if (((Roi)(rois.get(i))).getIndex().equals(roi.getIndex())) { int index = rois.indexOf(rois.get(i)); rois.remove(index); roiImg.remove(index); updateROIImage(); }
}
System.out.println("ROIs size: " + rois.size());
} }
// Overlays the ROI images on the original image private void updateROIImage() { dstImage = srcImage; for (int i=0; i < roiImg.size(); i++) {
PlanarImage img = ((PlanarImage)(roiImg.get(i)));
RenderedOp composite = ImageOperation.addImages(dstImage, img); dstImage = composite.createInstance();
} launchPanel();
}
// Save the overlay image to file public void saveImageAsJPEG(String filename) {
BufferedImage bf_img = dstImage.getAsBufferedImage(); JpegUtil.saveImageAsJPEG(bf_img, filename); } }
package dcgui; import java.io.*; import java.awt.*; import java.awt.event.*; import javax.swing.*; import javax.swing.event.*; import javax.swing.JTable; import javax.swing.table.AbstractTableModel; public class InfoViewer extends JTable implements GUIConstants
{ private static InfoViewer instance; private MyTableModel myModel; private String table_values[]; public static synchronized InfoViewer getInstance() { if (instance == null) instance = new InfoViewer(); return instance; } public InfoViewer() { table_values = new String[2]; MyTableModel myModel = new MyTableModel(); setModel(myModel); setShowGrid(false); setShowHorizontalLines(true); setBackground(Color.white); } class MyTableModel extends AbstractTableModel { private Object[][] data = new Object[][] {
{ TABLE_NAME_CELL, table_values[0]}, { TABLE_VIEW_CELL, table_values[1]}
}; public int getRowCount() { return data.length; } public int getColumnCount() { return data[0].length; } public Object getValueAt(int row, int column) { return data[row][column]; } }
public void update(String name, String view) { table_values[0] = name; table_values[1] = view;
MyTableModel myModel = new MyTableModel(); setModel(myModel); repaint(); } public void update(String view) { table_values[1] = view; MyTableModel myModel = new MyTableModel(); setModel(myModel); repaint(); } }
package dcgui; import javax.swing.*; import java.awt.*; public class InputPanel extends JPanel
{ private static InputPanel instance; private DisplayPanel display; public static synchronized InputPanel getInstance() { if (instance == null) instance = new InputPanel(); return instance; } public InputPanel() { } public void setDisplay() { display = DisplayPanel.getInstance(); } }
package dcgui; import javax.swing.*; import java.awt.image.*; import java.awt.*; import java.awt.event.*; public class LaunchPad extends JFrame implements ActionListener,
GUIConstants
{
// define constants private static LaunchPad instance; private static final int WIDTH=850, HEIGHT=560; //W=765, H=475 private static final String TITLE = "Data Collection GUI v1.0"; private static JPanel panels[]; private JMenuBar menubar; private JPanel globalpanel; public static synchronized LaunchPad getInstance() { if (instance == null) instance = new LaunchPad(); return instance; }
// constructor public LaunchPad() { super(TITLE); setDefaultCloseOperation(EXIT_ON_CLOSE); setSize(WIDTH, HEIGHT); setResizable(false);
// set look and feel to cross-platform try {
UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName()); } catch (Exception exc) {
System.err.println("Error loading L&F: " + exc); } buildGUI(); exchangeMessages(); } private void buildGUI() {
//menubar = createMenuBar();
//setJMenuBar(menubar); createPanels(); layoutPanels(); getContentPane().add(globalpanel, BorderLayout.CENTER); } private void createPanels() { panels = new JPanel[NUM_OF_PANELS]; panels[SELECTION] = SelectionPanel.getInstance(); panels[DISPLAY] = DisplayPanel.getInstance(); panels[GUIDE] = GuidePanel.getInstance(); panels[CONTROL] = ControlPanel.getInstance();
//panels[RESULT] = ResultPanel.getInstance();
} private void layoutPanels() { globalpanel = new JPanel(); globalpanel.setLayout(null); globalpanel.setSize(WIDTH, HEIGHT);
Insets insets = getInsets(); panels[SELECTION].setBounds(insets.left, insets.top, 285, 560); // 200 x 450 panels[GUIDE].setBounds(285+insets.left, insets.top, 375, 260); // 375 x 170 panels[DISPLAY].setBounds(285+insets.left, 260+insets.top, 375, 300); // 550 x 280 panels[CONTROL].setBounds(660+insets.left, insets.top, 215, 560);
//panels[RESULT].setBounds(575+insets.left, insets.top, 175, 170); globalpanel.add(panels[SELECTION]); globalpanel.add(panels[DISPLAY]); globalpanel.add(panels[GUIDE]); globalpanel.add(panels[CONTROL]); } private void exchangeMessages() {
((DisplayPanel)panels[DISPLAY]).setSelection();
((SelectionPanel)panels[SELECTION]).setDisplay();
((DisplayPanel)panels[DISPLAY]).setInput();
//((InputPanel)panels[INPUT]).setDisplay(); }
// create the menu bar private JMenuBar createMenuBar() {
JPopupMenu.setDefaultLightWeightPopupEnabled(false);
JMenuBar menuBar = new JMenuBar();
JMenu file = (JMenu) menuBar.add(new JMenu("File")); file.add(new JMenuItem("New")); file.add(new JMenuItem("Open"));
JMenuItem exit = (JMenuItem) file.add(new JMenuItem("Exit")); exit.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { System.exit(0);
} });
JMenu tools = (JMenu) menuBar.add(new JMenu("Tools"));
JMenu help = (JMenu) menuBar.add(new JMenu("Help")); help.add(new JMenuItem("Topics")); return menuBar; } public void actionPerformed(ActionEvent evt) { String command = evt.getActionCommand(); if (command.equals(setInputFile)) {
System.exit(0); } }
public static void main(String args[]) { // create the frame JFrame frame = LaunchPad.getInstance();
// listen for window closing frame.addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent e) {
System.exit(0); } });
// set the background and size of frame frame.setBackground(Color.lightGray);
Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize(); frame.setLocation(screenSize.width/2 - WIDTH/2, screenSize.height/2 - HEIGHT/2); frame.setVisible(true); } }
PLA: LA
PLA-RVIT: RV RA
PSA-Base: LA RA
PSA-MV: LA
PSA-PM: LA
PSA-Apex: LA
A4C: LV LA RV RA
A5C: LV LA RV RA AO
A2C: LV LA
SC4C: LV LA RV RA
SCIVC: RV RA
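The list above pairs each echocardiogram view with the chambers expected to be visible in it. Read as a lookup table, it could be represented as follows (a sketch transcribed from the list; the class name and map representation are ours, not from the listing):

```java
import java.util.*;

public class ViewChamberTable {
    // View -> expected visible chambers, transcribed from the list above.
    static final Map<String, List<String>> CHAMBERS = new LinkedHashMap<>();
    static {
        CHAMBERS.put("PLA", Arrays.asList("LA"));
        CHAMBERS.put("PLA-RVIT", Arrays.asList("RV", "RA"));
        CHAMBERS.put("PSA-Base", Arrays.asList("LA", "RA"));
        CHAMBERS.put("PSA-MV", Arrays.asList("LA"));
        CHAMBERS.put("PSA-PM", Arrays.asList("LA"));
        CHAMBERS.put("PSA-Apex", Arrays.asList("LA"));
        CHAMBERS.put("A4C", Arrays.asList("LV", "LA", "RV", "RA"));
        CHAMBERS.put("A5C", Arrays.asList("LV", "LA", "RV", "RA", "AO"));
        CHAMBERS.put("A2C", Arrays.asList("LV", "LA"));
        CHAMBERS.put("SC4C", Arrays.asList("LV", "LA", "RV", "RA"));
        CHAMBERS.put("SCIVC", Arrays.asList("RV", "RA"));
    }

    public static void main(String[] args) {
        System.out.println(CHAMBERS.get("A4C")); // prints [LV, LA, RV, RA]
    }
}
```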
ECHO VIDEO
/*
 * ecg.cc
 * See ecg.h for more details.
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 */
#include <iostream.h> #include <stdlib.h> #include <string.h> #include <stdio.h> #include <assert.h> #include <algorithm> #include <vector.h> #include <math.h> #include "ecg.h" void colorToGray(IplImage*, IplImage*); const int MAX_TM = 5; // Maximum allowed number of blobs in the ECG int ECG :: count = 0; int ECG :: getCount() { return count; }
ECG :: ECG()
{ ++count;
_sn = 0;
_VALID = FALSE;
_R_peaks.clear(); _tm_blobs.clear(); _RPs.clear(); }
// Constructor of ECG class. Input is the image of a frame. ECG :: ECG(IplImage* img)
{ ++count;
_sn = 0; _VALID = TRUE;
// declaring and initializing gray_img used for intermediate operations IplImage* gray_img = iplCreateImageHeader(1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, img->width, img->height, NULL, NULL, NULL, NULL); if (gray_img == NULL) exit(EXIT_FAILURE); iplAllocateImage(gray_img, 0, 0); if (NULL == gray_img->imageData) exit(EXIT_FAILURE);
// Making gray-level image out of input image if (strcmp(img->colorModel, "GRAY") != 0) iplColorToGray(img, gray_img);
// Changing image from color to gray-level else iplCopy(img, gray_img);
// Just copying one gray-level image to other extractECG_Region(gray_img);
// crops the ECG region, and sets _ecg_img
Gray_Level_Segment();
// Segments the gray-level image into ECG and background float img_size = static_cast<float>(_ecg_bin->width * _ecg_bin->height); float non_zero = static_cast<float>(cvCountNonZero(_ecg_bin)); float percent_non_zero = non_zero * 100 / img_size; float percent_TH = 8;
// threshold obtained heuristically if (percent_non_zero < percent_TH) { _tm_img = iplCloneImage(_ecg_bin); _qrs_img = iplCloneImage(_ecg_bin); extractBlobs(); iplSubtract(_ecg_bin, _tm_img, _qrs_img); extractPeaks();
} else _VALID = FALSE; iplDeallocate(gray_img, IPL_IMAGE_ALL);
}
ECG :: ECG (const char* filename, int sn)
{ ++count;
_sn = sn; _VALID = TRUE;
_R_peaks.clear(); _tm_blobs.clear(); _RPs.clear();
IplImage* img = readIMG2IPL(filename, 0);
// declaring and initializing gray_img used for intermediate operations
IplImage* gray_img = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, img->width, img->height, NULL, NULL, NULL, NULL); if (gray_img == NULL) exit(EXIT_FAILURE); iplAllocateImage (gray_img, 0, 0); if (NULL == gray_img->imageData) exit(EXIT_FAILURE);
// Making gray-level image out of input image if (strcmp(img->colorModel, "GRAY") != 0) iplColorToGray (img, gray_img);
// Changing image from color to gray-level else iplCopy (img, gray_img);
// Just copying one gray-level image to other extractECG_Region(gray_img); iplDeallocate (img, IPL_IMAGE_ALL); iplDeallocate (gray_img, IPL_IMAGE_ALL);
// Extract statistics of the pixel values //_ecg_bin = NULL; //_tm_img = NULL; //_qrs_img = NULL; double mean; double max_val; double min_val; double ecg_ratio; CvPoint maxP, minP; mean = cvMean(_ecg_img); cvMinMaxLoc(_ecg_img, &min_val, &max_val, &minP, &maxP); ecg_ratio = mean / max_val; cout << mean << ", " << ", " << max_val << ", " << ecg_ratio << '\n'; if (ecg_ratio > 0.9 || ecg_ratio < 0.25) // was 0.4
_VALID = FALSE; else {
Gray_Level_Segment(); //_ecg_img, _ecg_bin); float img_size = static_cast<float>(_ecg_bin->width * _ecg_bin->height); float non_zero = static_cast<float>(cvCountNonZero(_ecg_bin)); float percent_non_zero = non_zero * 100 / img_size; float percent_TH = 8;
//cout << percent_non_zero << '\n'; if (percent_non_zero < percent_TH) { _tm_img = iplCloneImage(_ecg_bin); _qrs_img = iplCloneImage(_ecg_bin); extractBlobs(); //_ecg_bin, _tm_img); writeIPL2IMG(_ecg_bin, "ecg_bin"); writeIPL2IMG(_tm_img, "tm_img"); writeIPL2IMG(_qrs_img, "qrs_img"); iplSubtract (_ecg_bin, _tm_img, _qrs_img); extractPeaks(); //_qrs_img);
} else _VALID = FALSE;
}
_ecg_img_W = _ecg_img->width; _ecg_img_H = _ecg_img->height; _bin_img_W = _ecg_bin->width; _bin_img_H = _ecg_bin->height; }
ECG& ECG :: operator=(const ECG& right)
{ int i; if (&right != this) { _sn = right._sn; _VALID = right._VALID;
_ecg_img_W = right._ecg_img_W; _ecg_img_H = right._ecg_img_H; _bin_img_W = right._bin_img_W; _bin_img_H = right._bin_img_H; for (i=0; i < right._R_peaks.size(); i++)
_R_peaks.insert (_R_peaks.begin() + i, right._R_peaks[i]); for (i=0; i < right._tm_blobs.size(); i++)
_tm_blobs.insert (_tm_blobs.begin() + i, right._tm_blobs[i]); for (i=0; i < right._RPs.size(); i++)
_RPs.insert (_RPs.begin() + i, right._RPs[i]); } return *this; }
/* * Destructor of ECG
*/ ECG :: ~ECG()
{
_R_peaks.clear(); _tm_blobs.clear(); _RPs.clear();
--count; } void ECG :: extractBlobs() //IplImage* _ecg_bin, IplImage* _tm_img)
{ int k; int W = _ecg_bin->width; int H = _ecg_bin->height; int SE_size = 32; int cnctd_regions; CvPoint center, left_edge, right_edge; // closing the image first
IplConvKernel *SE1 = cvCreateStructuringElementEx(2, 2, 0, 0, CV_SHAPE_RECT, NULL);
IplConvKernel *SE2 = cvCreateStructuringElementEx(2, 2, 1, 1, CV_SHAPE_RECT, NULL); cvDilate(_ecg_bin, _ecg_bin, SE1, 1 ); cvErode (_ecg_bin, _ecg_bin, SE2, 1 ); cvReleaseStructuringElement(&SE2); cvReleaseStructuringElement(&SE1);
// setting the structuring elements needed for morphological operations
IplConvKernel *SE = cvCreateStructuringElementEx(4, 8, 0, 0, CV_SHAPE_RECT, NULL);
IplConvKernel *SE_prime = cvCreateStructuringElementEx(4, 8, 3, 7, CV_SHAPE_RECT, NULL);
// finding the location of the time marker cvErode(_ecg_bin, _tm_img, SE, 1 ); cvDilate(_tm_img, _tm_img, SE_prime, 1); cvReleaseStructuringElement(&SE_prime); cvReleaseStructuringElement(&SE);
// finding the number of blobs in the time-marker image unsigned char* label = new unsigned char[W * H]; assert( NULL != label ); int* region_area = new int[MAX_TM]; assert( NULL != region_area );
IplImage* img = iplCloneImage(_tm_img); printf("Calling grassLabel function!!\n"); grassLabel(img, label, cnctd_regions, region_area, MAX_TM);
// using grass-fire method labeling the blobs printf("Number of connected regions: %d\n", cnctd_regions); for( int i=0; i < cnctd_regions; i++ ) { printf("Area of region %d: %d\n", i, region_area[i]); if( region_area[i] >= SE_size ) {
// initializing and allocating "tmp_image"
IplImage* tmp_img = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, _ecg_bin->width, _ecg_bin->height, NULL, NULL, NULL, NULL); if(tmp_img == NULL) exit(EXIT_FAILURE); iplAllocateImage(tmp_img, 0, 0); if(NULL == tmp_img->imageData) exit(EXIT_FAILURE); for ( int h=0; h < H; h++ ) { for ( int w=0; w < W; w++ ) { int pix_pos = w + h * W; if( label[pix_pos] == (i+1) ) (tmp_img->imageData)[pix_pos] = MAX_PIX_VAL; else
(tmp_img->imageData)[pix_pos] = 0; } }
// finding info about the blob CvMoments* moments = new CvMoments; cvMoments(tmp_img, moments, 1 ); double M_10 = cvGetSpatialMoment( moments, 1, 0 ); double M_01 = cvGetSpatialMoment( moments, 0, 1 ); double M_00 = cvGetSpatialMoment( moments, 0, 0 ); delete moments; center.x = static_cast<int>(M_10/M_00); center.y = static_cast<int>(M_01/M_00);
// blob's center of mass printf("COM of blob: (x,y) = (%d,%d)\n", center.x, center.y); left_edge.y = static_cast<int>(M_01/M_00); right_edge.y = static_cast<int>(M_01/M_00);
// finding the x-value of the left edge of the blob for( k=0; k < W; k++ ) { int pix_pos = k + W * (center.y); int pix = static_cast<unsigned char>(tmp_img->imageData[pix_pos]); if (pix == MAX_PIX_VAL) { left_edge.x = k; break; } }
// finding the x-value of the right edge of the blob for( k=W-1; k >= 0; k-- ) { int pix_pos = k + W * (center.y); int pix = static_cast<unsigned char>(tmp_img->imageData[pix_pos]); if (pix == MAX_PIX_VAL) { right_edge.x = k; break; } }
//TM_Blob* b = new TM_Blob( right_edge, left_edge, center ); //_tm_blobs->append(b); TM_Blob b(right_edge, left_edge, center ); _tm_blobs.insert(_tm_blobs.begin() + _tm_blobs.size(), b); iplDeallocate(tmp_img, IPL_IMAGE_ALL); } } iplDeallocate(img, IPL_IMAGE_ALL); delete [] label; delete [] region_area; cvDilate (_tm_img, _tm_img, NULL, 2); writeIPL2IMG(_tm_img, "tmp_TMimg"); }
/*
* Extracting the R-peaks of the ECG graph.
*/ void ECG :: extractPeaks() //IplImage* _qrs_img)
{ int W = _qrs_img->width; int H = _qrs_img->height;
IplImage* tmp_img1 = iplCloneImage(_qrs_img); IplImage* tmp_img2 = iplCloneImage(_qrs_img); IplImage* tmp_img3 = iplCloneImage(_qrs_img); IplImage* Rwave_peak = iplCloneImage(_qrs_img);
// The SE's for extracting the Rwave Peaks int se_mask[ 15 ] = { 0, 1, 0,
1, 1, 1,
1, 1, 1,
1, 1, 1,
1, 1, 1 }; int se_mask_upside[ 15 ] = { 1, 1, 1,
1, 1, 1,
1, 1, 1,
1, 1, 1,
0, 1, 0 }; int se_vertical_mask[ 2 ] = { -1, -1 };
IplConvKernel *SE_vertical = cvCreateStructuringElementEx ( 1, 2, 0, 0, CV_SHAPE_CUSTOM, se_vertical_mask );
IplConvKernel *SE_Rwave_upside = cvCreateStructuringElementEx ( 3, 5, 1, 4, CV_SHAPE_CUSTOM, se_mask_upside);
IplConvKernel *SE_Rwave = cvCreateStructuringElementEx ( 3, 5, 1, 0, CV_SHAPE_CUSTOM, se_mask);
// finding upward and downward pointing R-peaks cvDilate( _qrs_img, tmp_img1, SE_vertical, 1 ); cvErode( tmp_img1, tmp_img2, SE_Rwave, 1 ); cvErode( tmp_img1, tmp_img3, SE_Rwave_upside, 1 );
// releasing the structuring elements cvReleaseStructuringElement( &SE_vertical ); cvReleaseStructuringElement( &SE_Rwave_upside ); cvReleaseStructuringElement( &SE_Rwave ); // adding upward and downward pointing peaks together iplAdd( tmp_img2, tmp_img3, Rwave_peak );
/* * Extracting all the points belonging to the
* peak-point areas and putting them in a list of points. */
// picking only the foreground points for(int i=0; i < H; i++)
{ for(int j=0; j < W; j++)
{ int pix = static_cast<unsigned char>(Rwave_peak->imageData[j+i*W]); if (pix == MAX_PIX_VAL)
{
CvPoint peak_point; peak_point.x = j; peak_point.y = i;
_R_peaks.insert(_R_peaks.begin() + _R_peaks.size(), peak_point); _RPs.insert( _RPs.begin() + _RPs.size(), peak_point ); } }
}
// replacing the bunch of points representing each peak with single point aggregatePeakPoints(); iplDeallocate( Rwave_peak, IPL_IMAGE_ALL ); iplDeallocate( tmp_img3, IPL_IMAGE_ALL ); iplDeallocate( tmp_img2, IPL_IMAGE_ALL ); iplDeallocate( tmp_img1, IPL_IMAGE_ALL ); }
/*
* Using some rules it determines if the current ECG graph is
* a valid one and is actually an ECG rather than something
* similar to it which has been extracted as ECG due to noise.
* Returns TRUE if ECG is valid and FALSE otherwise. */ bool ECG :: isValid()
{ if (FALSE == _VALID) return(FALSE); bool FLAG = FALSE; int i, j, cnt; int num_peaks = NumR_Peaks(); int num_blobs = NumBlobs(); float ecg_line = 0.43 * static_cast<float>(_bin_img_H); //(_ecg_bin->height);
// REPLACE above line with width and height in *.h file. - LEFT HERE. 5:50pm June 13
CvPoint P1, P2; vector<int> diffs; vector<int>::const_iterator it; if( (num_blobs > 4) || (num_blobs < 1) ) return(FALSE); cnt = 0; if(num_peaks > 1 )
{ for(i=0; i < num_peaks-1; i++)
{
P1 = _R_peaks[i]; for(j=i+1; j < num_peaks; j++)
{
P2 = _R_peaks[j]; diffs.insert( diffs.begin() + cnt, static_cast<int>(fabs((double)(P1.y - P2.y))) ); cnt++; if (fabs((double)(P1.y - ecg_line)) > 8 || fabs((double)(P2.y - ecg_line)) > 8 )
_VALID = FALSE; } } } if(num_blobs > 1 ) { for(i=0; i < num_blobs-1; i++)
{
P1 = Center_Blob(i); for(j=i+1; j < num_blobs; j++)
{
P2 = Center_Blob(j); diffs.insert( diffs.begin() + cnt, static_cast<int>(fabs((double)(P1.y - P2.y))) ); cnt++; if( fabs((double)(P1.y - ecg_line)) > 8 || fabs((double)(P2.y - ecg_line)) > 8 )
_VALID = FALSE; } } } if( (num_peaks > 1) && (num_blobs > 1) )
{ for(i=0; i < num_peaks; i++)
{
P1 = _R_peaks[i]; for(j=0; j < num_blobs; j++) {
P2 = Center_Blob(j); diffs.insert( diffs.begin() + cnt, static_cast<int>(fabs((double)(P1.y - P2.y))) ); cnt++; } } } if( (num_peaks > 1 ) || (num_blobs > 1 ) )
{ it = max_element(diffs.begin(), diffs.end()); if(*it > 10) // value set from heuristics! Make it percent of height _VALID = FALSE;
} return(_VALID);
}
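The validity rules above (between 1 and 4 time-marker blobs, every R-peak within a fixed vertical tolerance of the nominal baseline at 0.43 x height, and peak heights within 10 px of each other) can be restated in isolation. A minimal sketch under those stated thresholds; `ecgLooksValid` is an illustrative name, not the patent's API:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Standalone restatement of the heuristics used by ECG::isValid():
// blob count must be 1..4, each peak's y must lie within 8 px of the
// baseline (0.43 * image height), and any two peaks' y values must
// agree within 10 px.
bool ecgLooksValid(const std::vector<int>& peakYs, int numBlobs, int imgH) {
    if (numBlobs > 4 || numBlobs < 1) return false;
    double line = 0.43 * imgH;  // nominal ECG baseline
    for (std::size_t i = 0; i < peakYs.size(); ++i) {
        if (std::fabs(peakYs[i] - line) > 8) return false;
        for (std::size_t j = i + 1; j < peakYs.size(); ++j)
            if (std::fabs(double(peakYs[i] - peakYs[j])) > 10) return false;
    }
    return true;
}
```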
/*
* Crops the input image and extracts the area which is supposed to
* contain the ECG graph. The size of the ECG is found using heuristics. */ void ECG :: extractECG_Region(IplImage* img) //, IplImage* _ecg_img)
{ // crop image to take out the ECG area only // This region is located in the box: //
// (0.12W, 0.82H) (0.88W, 0.82H)
// |          |
// |          |
// |          |
// (0.12W, 0.96H) (0.88W, 0.96H)
//
// W = width of image
// H = height of image
int UL_x = (int)(0.12 * img->width); int UL_y = (int)(0.82 * img->height);
// Upper-left corner of the box int W = (int)(0.76 * img->width); int H = (int)(0.14 * img->height);
// width and height of the box
// Initializing _ecg_img
_ecg_img = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY", IPL_DATA_ORDER_PIXEL,
IPL_ORIGIN_TL, IPL_ALIGN_QWORD, W, H, NULL, NULL, NULL, NULL); if (_ecg_img == NULL) exit(EXIT_FAILURE); iplAllocateImage (_ecg_img, 0, 0); if (NULL == _ecg_img->imageData) exit(EXIT_FAILURE); int pix_pos1, pix_pos2; for (int i=0; i < img->height; i++) { for (int j=0; j < img->width; j++) { pix_pos1 = j + i * img->width; pix_pos2 = (j - UL_x) + (i - UL_y) * W; if (i >= UL_y && i < (UL_y + H) && j >= UL_x && j < (UL_x + W))
(_ecg_img->imageData)[pix_pos2] = img->imageData[pix_pos1]; } } }
/* * Using the inter- and intra-cluster distances, packs the points
* representing each R-peak to a single point. */ void ECG :: aggregatePeakPoints()
{ int M = _R_peaks.size(); double new_distance, old_distance; int counter = 0; int i; while(M > 1 )
{ int index_I, index_J; new_distance = mergeClosestPoints( index_I, index_J );
// Merge 2 closest points and modify list accordingly if (counter == 0) old_distance = new_distance; if( (new_distance - old_distance) / new_distance > 0.95 ) break;
// stop only if each cluster of points is sufficiently packed
// otherwise merge the two closest points together CvPoint P1 = _R_peaks[index_I]; CvPoint P2 = _R_peaks[index_J];
CvPoint P_avg;
P_avg.x = (int)( (double)( P1.x + P2.x ) / 2.0 );
P_avg.y = (int)( (double)( P1.y + P2.y ) / 2.0 ); vector <CvPoint> R_copy; for(i=0; i < _R_peaks.size(); i++)
{ if( i == index_I )
R_copy.insert(R_copy.begin() + R_copy.size(), P_avg); else if( i == index_J ) continue; else
R_copy.insert(R_copy.begin() + R_copy.size(), _R_peaks[i] ); }
_R_peaks.swap( R_copy ); R_copy.clear();
M = _R_peaks.size(); counter++; }
//verify_R_Peaks(); }
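The merge loop above is agglomerative clustering: repeatedly fuse the two closest points into their midpoint, and stop when the merge distance jumps sharply relative to the first merge, which signals that each peak's cluster has collapsed to a single representative point. A standalone sketch of that stop rule, with illustrative names (`Pt`, `aggregate`) not taken from the patent:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Repeatedly merge the two closest points into their midpoint; stop when
// the relative jump (new - first) / new exceeds jumpRatio, mirroring the
// 0.95 threshold used by aggregatePeakPoints() above.
std::vector<Pt> aggregate(std::vector<Pt> pts, double jumpRatio = 0.95) {
    double first = -1.0;
    while (pts.size() > 1) {
        std::size_t bi = 0, bj = 1;
        double best = 1e300;
        // find the closest pair (brute force, as in the original)
        for (std::size_t i = 0; i + 1 < pts.size(); ++i)
            for (std::size_t j = i + 1; j < pts.size(); ++j) {
                double d = std::hypot(pts[i].x - pts[j].x, pts[i].y - pts[j].y);
                if (d < best) { best = d; bi = i; bj = j; }
            }
        if (first < 0) first = best;
        if ((best - first) / best > jumpRatio) break;  // clusters are packed
        Pt avg { (pts[bi].x + pts[bj].x) / 2.0, (pts[bi].y + pts[bj].y) / 2.0 };
        pts[bi] = avg;
        pts.erase(pts.begin() + bj);
    }
    return pts;
}
```

With two tight clusters far apart, the loop merges within each cluster and then stops before fusing the clusters together.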
/*
* Verifies the validity of the R-peak clusters */ void ECG :: verify_R_Peaks() { int i, j; int cnt; //CvPoint *P; CvPoint P; int num_points = _RPs.size(); int num_clusters = _R_peaks.size(); double min_dist; double* tmp_dist = new double[num_clusters]; double* dist = new double[num_clusters]; vector <CvPoint>* C_V = new vector <CvPoint> [num_clusters]; vector <int> DPs; for( i=0; i < num_points; i++ )
{ cnt = 0; for( j=0; j < num_clusters; j++ )
{ P = _R_peaks[j]; tmp_dist[cnt] = sqrt( pow(P.x - (_RPs[i]).x, 2.0) + pow(P.y - (_RPs[i]).y, 2.0) ); dist[cnt] = tmp_dist[cnt];
DPs.insert( DPs.begin() + DPs.size(), j ); cnt++; } // find distance from i-th peak point to all cluster centers assert( cnt == num_clusters ); sort(tmp_dist, tmp_dist + num_clusters); min_dist = tmp_dist[0]; // sort the distances for ( j=0; j < num_clusters; j++ )
{ if( dist[j] == min_dist )
(C_V[j]).insert( (C_V[j]).begin() + (C_V[j]).size(), _RPs[i]); } // add the i-th R-peak point to the j-th peak cluster } for( i=0; i < num_clusters; i++ )
{ if( !verify_R_Cluster( C_V[i] ) )
_R_peaks.erase(_R_peaks.begin(), _R_peaks.begin() + DPs[i]); } delete [] C_V; delete [] dist; delete [] tmp_dist; }
/*
* Verifies that there are enough points in the R-peak cluster
* and that the points in the cluster are stretched in the * vertical direction as they should be and not in the horizontal
* direction. It uses the moments to determine this. */ int ECG :: verify_R_Cluster( vector <CvPoint>& RV )
{ int size = RV.size(); if( size < 4 ) return 0; return 1;
// number 4 is set by heuristics!
// change the verification measure to something that uses the // spread of the points in the x and y directions and their // relations }
/*
* Merges 2 points I and J together. */ double ECG :: mergeClosestPoints(int &index_I, int &index_J)
{ int M = _R_peaks.size(); int num_combinations = (M * (M-1)) / 2; // total number of point pairs
// the elements of the 'mutual' vector hold info about the distance // between two points and their corresponding id's Element* mutual = new Element[num_combinations]; double* tmp_distance = new double[num_combinations];
// Finding the distances between every pair of points int cnt = 0; for (int i=0; i < (M-1); i++) { CvPoint P1 = _R_peaks[i]; for (int j=i+1; j < M; j++) { CvPoint P2 = _R_peaks[j]; tmp_distance[cnt] = sqrt( pow(P1.x - P2.x, 2.0) + pow(P1.y - P2.y, 2.0) ); mutual[cnt].distance = tmp_distance[cnt]; mutual[cnt].I = i; mutual[cnt].J = j; cnt++;
} }
// finding the minimum distance between any two of the points sort(tmp_distance, tmp_distance + num_combinations); double min_distance = tmp_distance[0]; delete [] tmp_distance;
// finding which pair of points are the closest for (int k=0; k < num_combinations; k++) { if (mutual[k].distance == min_distance) { if(mutual[k].I < mutual[k].J) { index_I = mutual[k].I; index_J = mutual[k].J;
} else { index_I = mutual[k].J; index_J = mutual[k].I; }
} } delete [] mutual; return(min_distance); }
/*
*
*/ void ECG :: Gray_Level_Segment() //IplImage* _ecg_img, IplImage* _ecg_bin)
{ int WW = _ecg_img->width; int HH = _ecg_img->height;
// Initializing a temp image
IplImage* tmp_img = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY", IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, WW, HH, NULL, NULL, NULL, NULL ); if (tmp_img == NULL) exit(EXIT_FAILURE); iplAllocateImage(tmp_img, 0, 0); if (NULL == tmp_img->imageData) exit(EXIT_FAILURE); unsigned char* label = new unsigned char[WW * HH]; // 'label' holds the label of each pixel of the image clusterImage(_ecg_img, label, 2);
// using K-means clustering to classify ECG into 2 regions mergeTo2Regions(tmp_img, label, 2);
// merging the small regions and bi-levelizing the image delete [] label;
// Making the boundaries of the '_ecg_bin' bigger to be able to do // morphological operations later on int W = (int) (1.2 * WW); int H = (int) (2.0 * HH);
_ecg_bin = iplCreateImageHeader (1, 0, IPL_DEPTH_8U, "GRAY", "GRAY", IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD, W, H, NULL, NULL, NULL, NULL); if (_ecg_bin == NULL) exit(EXIT_FAILURE); iplAllocateImage(_ecg_bin, 0, 0); if (NULL == _ecg_bin->imageData) exit(EXIT_FAILURE);
// w_0 and h_0 are the upper left corners of the place the image
// should be put in the new image int pix_pos, pix_pos_ekg; int w_0 = static_cast<int> ( (double)(W - WW) / 2.0 ); int h_0 = static_cast<int> ( (double)(H - HH) / 2.0 ); for (int i=0; i < H; i++) { for (int j=0; j < W; j++) { pix_pos = j + i * W; pix_pos_ekg = (j - w_0) + (i - h_0) * WW; if( i >= h_0 && i < (H - h_0 - 1) && j >= w_0 && j < (W - w_0 - 1) )
(_ecg_bin->imageData)[pix_pos] = tmp_img->imageData[pix_pos_ekg]; else
(_ecg_bin->imageData)[pix_pos] = MIN_PIX_VAL; } } iplDeallocate(tmp_img, IPL_IMAGE_ALL); tmp_img = 0; eliminateSmallRegions(_ecg_bin, 20);
// Do something about '20'. It is hard coded. Why? }
/*
* Returns the i-th R-wave peak. */
CvPoint& ECG :: getR_Peak(int I)
{ if( I > _R_peaks.size() ) exit(EXIT_FAILURE); return(_R_peaks[I]); }
/*
* Returns the i-th time marker blob. */
TM_Blob& ECG :: getBlob(int I)
{ int no_blobs = _tm_blobs.size(); if( I > no_blobs ) exit(EXIT_FAILURE); return((_tm_blobs[I])); }
CvPoint& ECG :: LeftEdge_Blob(int I)
{ int no_blobs = _tm_blobs.size(); if( I > no_blobs ) exit(EXIT_FAILURE); return( (_tm_blobs[I]).getLeftEdge() );
}
CvPoint& ECG :: RightEdge_Blob(int I)
{ int no_blobs = _tm_blobs.size(); if( I > no_blobs ) exit(EXIT_FAILURE); return( (_tm_blobs[I]).getRightEdge() ); }
CvPoint& ECG :: Center_Blob(int I) { int no_blobs = _tm_blobs.size(); if( I > no_blobs ) exit(EXIT_FAILURE); return( (_tm_blobs[I]).getCenter() ); } void ECG :: setTM_Status(int I, TM_STATUS S)
{ if( I > NumBlobs() ) exit(EXIT_FAILURE);
(_tm_blobs[I]).setStatus(S); }
TM_STATUS ECG :: getTM_Status(int I)
{ if( I > NumBlobs() ) exit(EXIT_FAILURE); return( (_tm_blobs[I]).getStatus() ); } /*
* This method is used to determine, based on two input ecg's,
* the status of the time-markers of the first one. The time-
* marker can be either DYNAMIC or STATIC. A DYNAMIC time-marker
* is one which is either moving with a fixed width or is companding.
* A STATIC time-marker is one that is the opposite of DYNAMIC. */ void setStatus(ECG* ecg1, ECG* ecg2)
{ int i, j; int num_TMs1, num_TMs2; CvPoint P1, P2; CvPoint P1_left, P1_right; CvPoint P2_left, P2_right; float dif, dif1, dif2; if ((ecg1 != NULL) && (ecg2 != NULL)) { num_TMs1 = ecg1->NumBlobs(); num_TMs2 = ecg2->NumBlobs();
} else { num_TMs1 = 0; num_TMs2 = 0; }
/*
* Setting the status of the ecg for different possible
* combinations of the first and second ecg status. */ if (num_TMs1 == 1) ecg1->setTM_Status(0, DYNAMIC); else if (num_TMs1 == 2) {
TM_Blob blb1[2];
blb1[0] = ecg1->getBlob(0); blb1[1] = ecg1->getBlob(1);
CvPoint P1_1_center = (blb1[0]).getCenter(); CvPoint P1_2_center = (blb1[1]).getCenter(); /* NUM_TMs_1 = 2 AND NUM_TMs_2 = 1
[diagram of ECG1/ECG2 time-marker positions omitted in source]
*/ if (num_TMs2 == 1 ) {
// the TM closest to the one in next frame is the STATIC one. CvPoint P2 = ecg2->Center_Blob(0); float dif1 = fabs((double)(P1_1_center.x - P2.x)); float dif2 = fabs((double)(P1_2_center.x - P2.x)); if( dif1 < dif2 ) ecg1->setTM_Status(0, STATIC); else ecg1->setTM_Status(1, STATIC);
}
/*
NUM_TMs_1 = 2 AND NUM_TMs_2 = 2
[diagram of ECG1/ECG2 time-marker positions omitted in source]
*/ else if (num_TMs2 == 2) { // 2->2: Find which two are STATIC
// The TM's with the same centroid are STATIC. for (i = 0; i < num_TMs1; i++) { P1_left = ecg1->LeftEdge_Blob(i); P1_right = ecg1->RightEdge_Blob(i); for (j = 0; j < num_TMs2; j++) { P2_left = ecg2->LeftEdge_Blob(j); P2_right = ecg2->RightEdge_Blob(j); dif1 = fabs((double)(P1_left.x - P2_left.x)); dif2 = fabs((double)(P1_right.x - P2_right.x)); if( dif1 < 2 && dif2 < 2 )
{ ecg1->setTM_Status(i, STATIC); break; } } } } /*
NUM_TMs_1 = 2 AND NUM_TMs_2 = 3
[diagram of ECG1/ECG2 time-marker positions omitted in source]
*/ else if (num_TMs2 == 3) { for (i = 0; i < num_TMs1; i++) { P1_left = ecg1->LeftEdge_Blob(i); P1_right = ecg1->RightEdge_Blob(i); for (j = 0; j < num_TMs2; j++) { P2_left = ecg2->LeftEdge_Blob(j); P2_right = ecg2->RightEdge_Blob(j); dif1 = fabs((double)(P1_left.x - P2_left.x)); dif2 = fabs((double)(P1_right.x - P2_right.x)); if( dif1 < 2 && dif2 < 2 )
{ ecg1->setTM_Status(i, STATIC); break; } } } } else printf("Not possible. Error in finding the time marker.\n");
} else if( num_TMs1 == 3 )
{ if( num_TMs2 == 3 )
{ for( i = 0; i < num_TMs1; i++ )
{
P1_left = ecg1->LeftEdge_Blob(i); P1_right = ecg1->RightEdge_Blob(i); for( j = 0; j < num_TMs2; j++ )
{
P2_left = ecg2->LeftEdge_Blob(j); P2_right = ecg2->RightEdge_Blob(j); dif1 = fabs((double)(P1_left.x - P2_left.x)); dif2 = fabs((double)(P1_right.x - P2_right.x)); if( dif1 < 2 && dif2 < 2 )
{ ecg1->setTM_Status(i, STATIC); break; } }
} } else if( num_TMs2 == 2 )
{ for( i = 0; i < num_TMs1; i++ )
{
P1 = ecg1->Center_Blob(i); P1_left = ecg1->LeftEdge_Blob(i); P1_right = ecg1->RightEdge_Blob(i); for( j = 0; j < num_TMs2; j++ )
{
P2 = ecg2->Center_Blob(j); P2_left = ecg2->LeftEdge_Blob(j); P2_right = ecg2->RightEdge_Blob(j); dif = fabs((double)(P1.x - P2.x)); dif1 = fabs((double)(P1_left.x - P2_left.x)); dif2 = fabs((double)(P1_right.x - P2_right.x)); if( (dif1 < 2 && dif2 < 2) || dif == 0 )
{ ecg1->setTM_Status(i, STATIC); break; } } } } else if( num_TMs2 == 4 )
{ for( i = 0; i < num_TMs1; i++ )
{
P1_left = ecg1->LeftEdge_Blob(i); P1_right = ecg1->RightEdge_Blob(i); for( j = 0; j < num_TMs2; j++ )
{
P2_left = ecg2->LeftEdge_Blob(j); P2_right = ecg2->RightEdge_Blob(j); dif1 = fabs((double)(P1_left.x - P2_left.x)); dif2 = fabs((double)(P1_right.x - P2_right.x)); if( dif1 < 2 && dif2 < 2 )
{ ecg1->setTM_Status(i, STATIC); break; } } } } else printf("Not possible. Error in finding the time marker.\n");
} else if( num_TMs1 == 4 ) { if( num_TMs2 == 3 )
{ for( i = 0; i < num_TMs1; i++ )
{
P1 = ecg1->Center_Blob(i); P1_left = ecg1->LeftEdge_Blob(i); P1_right = ecg1->RightEdge_Blob(i); for( j = 0; j < num_TMs2; j++ )
{
P2 = ecg2->Center_Blob(j); P2_left = ecg2->LeftEdge_Blob(j); P2_right = ecg2->RightEdge_Blob(j);
dif = fabs((double)(P1.x - P2.x)); dif1 = fabs((double)(P1_left.x - P2_left.x)); dif2 = fabs((double)(P1_right.x - P2_right.x));
if( (dif1 < 2 && dif2 < 2) || dif == 0 )
{ ecg1->setTM_Status(i, STATIC); break; } } } } else if( num_TMs2 == 4 )
{ for( i = 0; i < num_TMs1; i++ )
{
P1_left = ecg1->LeftEdge_Blob(i); P1_right = ecg1->RightEdge_Blob(i); for( j = 0; j < num_TMs2; j++ )
{
P2_left = ecg2->LeftEdge_Blob(j); P2_right = ecg2->RightEdge_Blob(j); dif1 = fabs((double)(P1_left.x - P2_left.x)); dif2 = fabs((double)(P1_right.x - P2_right.x)); if( dif1 < 2 && dif2 < 2 )
{ ecg1->setTM_Status(i, STATIC); break; } } } } } }
/*
* ecg.h
*
* The ECG class is initialized by giving it the image of the ECG region
* of an EchoFrame. The user of this class can access the R-peaks and
* the multitude of the time-marker blobs (if applicable. Usually there
* is only one time-marker blob (not elongated) ).
* It uses basic morphological operations to process the ecg signal's
* image to extract the components.
*
* Written by Shahram Ebadollahi
* Digital Video|Multimedia Group
* Columbia University
*/
#ifndef _ECG_H #define _ECG_H #include <vector.h> #include <ipl/ipl.h> #include <CV.h>
#include "../util/ppmIO.h" #include "../util/olist.h" #include "../util/util.h" #include "../util/img_util.h" #include "tm_blob.h" using namespace std; class ECG
{ private:
IplImage* _ecg_img; // the gray-level image of the ECG area
IplImage* _ecg_bin; // segmented image of the ecg region
IplImage* _tm_img; // the image of the time-marker blobs
IplImage* _qrs_img; // image of ECG without the time-markers bool _VALID; // TRUE if ECG is valid int _sn; // sequence number int _ecg_img_W; // Height and Width of the original ecg area image int _ecg_img_H; int _bin_img_H; // Height and Width of the bigger binary ecg image int _bin_img_W; vector <CvPoint> _R_peaks; vector <TM_Blob> _tm_blobs; vector <CvPoint> _RPs; // vector of all R-peaks static int count; protected: void extractBlobs(); // extracts the blobs of time-marker void extractPeaks(); // extracts the QRS peak points void extractECG_Region(IplImage*); // extracts the ECG region from input void aggregatePeakPoints(); // packs points based on proximity double mergeClosestPoints(int &, int &); // used by above method void Gray_Level_Segment(); // creating _ecg_bin image void verify_R_Peaks(); // Checks the validity of R-peak clusters int verify_R_Cluster(vector <CvPoint>&); public: ECG();
ECG(IplImage*); ECG(const char*, int); ~ECG();
ECG& operator=(const ECG&);
//void setECG(IplImage*); // sets the ECG params in case default was used bool isValid(); // determines if this is in fact a valid ECG int SN() { return _sn; } int NumR_Peaks() { return _R_peaks.size(); } int NumBlobs() { return _tm_blobs.size(); }
CvPoint& getR_Peak(int I); // returns a pointer to the i-th R-wave peak TM_Blob& getBlob(int I); // returns a pointer to the i-th time-marker
CvPoint& LeftEdge_Blob(int I); // returns the left edge of tm blob I
CvPoint& RightEdge_Blob(int I); // returns the right edge of tm blob I
CvPoint& Center_Blob(int I); // returns the center of tm blob I void setTM_Status(int I, TM_STATUS); // sets the status of the I-th tm blob
TM_STATUS getTM_Status(int I); // returns the status of I-th tm blob static int getCount();
};
/*
* Used to determine the status of the first ECG of the two ECGs passed
* as the arguments. */ void setStatus(ECG*, ECG*);
#endif
/*
* frame.cc
* The definition of the "Frame" class is in this file.
* A Frame is a class to capture the notion of a video frame.
* A frame is closely related to an image.
*
* Written by Shahram Ebadollahi
* Digital Video|Multimedia Group
* Columbia University
*/
#include <iostream.h> #include <fstream.h> #include <stdlib.h> #include <stdio.h> #include <math.h> #include <unistd.h>
#include "../util/img_util.h" #include "frame.h"
/*
* The default constructor of the class. */
Frame :: Frame()
{
_sn = 0;
_name = 0;
_type_flag = FALSE;
_blank_FLAG = FALSE;
_BLANK = FALSE; }
/*
* Constructing the frame using the image name
* and path and its sequence number. */
Frame :: Frame(const char* file_name, int sn)
{ _sn = sn;
_name = new char[strlen(file_name)+1]; strcpy( _name, file_name );
_img = readlMG2IPL(file_name, 0);
// reading the corresponding image in
_type_flag = FALSE;
// resetting the type flag _blank_FLAG = FALSE; _BLANK = FALSE;
}
Frame :: ~Frame() { if(_img) { iplDeallocate( _img, IPL_IMAGE_ALL ); _img = 0;
} if(_name) delete [] _name; }
/*
* This method is used to set up a frame when that frame
* is initialized by the default constructor. */ void Frame :: Set(const char* file_name, int sn)
{ if(!_name)
{
_name = new char[strlen(file_name) + 1]; strcpy( _name, file_name );
_img = readIMG2IPL(file_name, 0);
// reading the corresponding image in
} else
{ delete [] _name;
_name = new char[strlen(file_name) + 1]; strcpy( _name, file_name ); if(_img) iplDeallocate(_img, IPL_IMAGE_ALL); _img = readIMG2IPL(file_name, 0);
}
_blank_FLAG = FALSE; _BLANK = FALSE; }
/*
* Returns the ECG of a frame if the frame has a valid ECG. */
ECG* Frame :: getECG()
{ ECG* ecg = new ECG(_img); if( ecg->isValid() ) return (ecg); else return(NULL); }
/*
* Returns the ROI of a frame. 7
ROI* Frame :: getROIQ
{
ROI* roi = new ROI(_img); return roi; } /*
* Returns the type of the frame if the type has been determined already.
* If not, it first determines the type of the frame by calling setType(). */
TYPE Frame :: getType()
{ if(_type_flag == TRUE) return(_type); else
{ setType(); return(_type); } }
/*
* Finds the type of the frame as one of the following list:
* - BW: 2D black/white triangular ROI frames
* - COLOR: color doppler frames showing the blood flow
* - ZOOM: either a trapezoidal or a square zoom frame
* - DOPPLER: a doppler frame
* - BLANK: blank frames between views
* - OTHER: none of the above */ void Frame :: setType()
{ if(isBlank())
{
_type = BLANK;
//printf("Image is Blank-Blank\n");
} else if( isColored() ) _type = COLOR; else
{
ROI* roi = getROI(); switch(roi->getShape()) { case TRIANGLE: _type = BW; break; case SQUARE: case TRAPEZOID: _type = ZOOM; break; case RECTANGLE: _type = DOPPLER; break; case NO_SHAPE: _type = BW;
//printf("Image is Blank\n"); break;
}; delete roi; }
_type_flag = TRUE; }
/*
* This method determines if a frame is a colored one or not.
* Returns TRUE for colored and FALSE for bw frames. */ bool Frame :: isColored()
{ int W = _img->width; int H = _img->height; float color_content = 0; for( int h=0; h<H; h++ )
{ for( int w=0; w< 3*W; w+=3 )
{ int pix_pos = w + h*3*W; int r = static_cast<unsigned char>((_img->imageData)[pix_pos]); int g = static_cast<unsigned char>((_img->imageData)[pix_pos+1]); int b = static_cast<unsigned char>((_img->imageData)[pix_pos+2]);
// Finding the 'I' and 'Q' color components float I = ( 0.596 * static_cast<float>(r) +
-0.274 * static_cast<float>(g) +
-0.322 * static_cast<float>(b) ); float Q = ( 0.211 * static_cast<float>(r) + -0.523 * static_cast<float>(g) + 0.312 * static_cast<float>(b) ); color_content += ( fabs(I) + fabs(Q) ); } } if( (color_content / static_cast<float>(W*H)) < 1 ) return(FALSE); else return(TRUE);
}
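The color test above sums the absolute I and Q chrominance components of the YIQ transform over all pixels; a gray pixel (r == g == b) contributes essentially zero, while a saturated color pixel contributes strongly. The per-pixel term can be isolated as a small sketch (function name `chrominance` is illustrative, using the same coefficients as the method above):

```cpp
#include <cmath>

// Chrominance magnitude of one RGB pixel: |I| + |Q| from the YIQ color
// space, with the same coefficients used by Frame::isColored() above.
// Gray pixels yield (near) zero; colored pixels yield large values.
double chrominance(int r, int g, int b) {
    double I = 0.596 * r - 0.274 * g - 0.322 * b;
    double Q = 0.211 * r - 0.523 * g + 0.312 * b;
    return std::fabs(I) + std::fabs(Q);
}
```

Averaging this quantity over the frame and thresholding it (at 1, in the method above) separates color-Doppler frames from black-and-white ones.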
/*
* Examines a frame to see if the majority of the regions of
* interest are blank by counting the pixels and thresholding
* the value against a pre-set threshold. */ bool Frame :: isBlank()
{ if(_blank_FLAG) return(_BLANK);
// if frame has already been processed for blankness
_blank_FLAG = TRUE; // processing the view for blankness bool BLANK1, BLANK2; int h, w, i, j; double non_zero, percent_non_zero; int pix_pos1, pix_pos2, pix; int W_start, W_stop, H_start, H_stop; int W = _img->width; int H = _img->height; int W_p = (int)((double)W * 0.2); int H_p = (int)((double)H * 0.1);
// width and height of test blocks unsigned char* label1 = new unsigned char[W_p * H_p]; assert(label1 != NULL); unsigned char* label2 = new unsigned char[W_p * H_p * 2]; assert(label2 != NULL);
IplImage* img_gray = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY",
"GRAY", IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
IPL_ALIGN_QWORD, W, H, NULL, NULL, NULL, NULL ); if(img_gray == NULL) exit(EXIT_FAILURE); iplAllocateImage(img_gray, 0, 0); if( NULL == img_gray->imageData ) exit(EXIT_FAILURE); iplColorToGray(_img, img_gray);
// Changing the image from color to gray-level
IplImage* img_S1 = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY",
"GRAY", IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
IPL_ALIGN_QWORD, W_p, H_p, NULL, NULL,
NULL, NULL ); if(NULL == img_S1) exit(EXIT_FAILURE); iplAllocateImage(img_S1, 0, 0); if( NULL == img_S1->imageData ) exit(EXIT_FAILURE);
IplImage* img_S2 = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY",
"GRAY", IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
IPL_ALIGN_QWORD, W_p*2, H_p, NULL,
NULL, NULL, NULL);
if(NULL == img_S2) exit(EXIT_FAILURE); iplAllocateImage(img_S2, 0, 0); if( NULL == img_S2->imageData ) exit(EXIT_FAILURE);
    /*
     * Copying a small rectangle from the upper right corner of
     * 'img_gray' to 'img_S1'.
     * Rectangle:
     *   (W_start, H_start) ************ (W_stop, H_start)
     *   (W_start, H_stop)  ************ (W_stop, H_stop)
     */
    W_start = (int)((double)W * 0.8);
    W_stop  = W_start + W_p;
    H_start = (int)((double)H * 0.1);
    H_stop  = H_start + H_p;
    for( h = H_start; h < H_stop; h++ )
    {
        for( w = W_start; w < W_stop; w++ )
        {
            pix_pos1 = w + W * h;
            pix_pos2 = (w - W_start) + W_p * (h - H_start);
            pix = static_cast<unsigned char>(img_gray->imageData[pix_pos1]);
            (img_S1->imageData)[pix_pos2] = pix;
        }
    }

    IplImage* imgBlur1 = iplCloneImage(img_S1);
    IplImage* img_S_edge1 = iplCloneImage(img_S1);

    // Smoothing the image and finding the edges
    iplBlur(img_S1, imgBlur1, 3, 3, 1, 1);
    clusterImage(imgBlur1, label1, 2);
    mergeTo2Regions(img_S1, label1, 2);
    sobelEdgeDetect(img_S1, img_S_edge1);
    //writeIPL2IMG( img_S_edge1, "urc" );
    /*
     * Assigning an inner rectangle of 'img_S_edge' to 'img_SS' to
     * eliminate the effect of smoothing on the blank rectangles.
     */
    int W_n1 = W_p - 10;
    int H_n1 = H_p - 10;
    IplImage* img_SS1 = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY",
                            "GRAY", IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                            IPL_ALIGN_QWORD, W_n1, H_n1, NULL, NULL, NULL, NULL );
    if(NULL == img_SS1) exit(EXIT_FAILURE);
    iplAllocateImage(img_SS1, 0, 1);
    if( NULL == img_SS1->imageData ) exit(EXIT_FAILURE);
    for( h = 5; h < H_n1 + 5; h++ )
    {
        for( w = 5; w < W_n1 + 5; w++ )
        {
            pix_pos1 = w + W_p * h;
            pix_pos2 = (w - 5) + W_n1 * (h - 5);
            pix = static_cast<unsigned char>(img_S_edge1->imageData[pix_pos1]);
            (img_SS1->imageData)[pix_pos2] = pix;
        }
    }
    non_zero = static_cast<double>(cvCountNonZero(img_SS1));
    // counting the edge pixels
    percent_non_zero = (non_zero / (double)(W_n1 * H_n1)) * 100.0;
    // ratio of the edge pixels to total pixels in the middle area
    printf("For Upper-Right\t");
    cout << "non_zero: " << non_zero << ", percentage: " << percent_non_zero << endl;
    /*
     * If the upper-right hand corner is not blank then frame is not blank!
     * Otherwise it may be blank. Frame should be checked to see if it's an
     * M-mode Doppler frame.
     */
    if(percent_non_zero > NON_ZERO1)
        BLANK1 = false;
    else
        BLANK1 = true;
    /*
     * Copying a small rectangle from the center of 'img_gray' to 'img_S2'.
     * Rectangle:
     *   (W_start, H_start) ************ (W_stop, H_start)
     *   (W_start, H_stop)  ************ (W_stop, H_stop)
     */
    W_start = (int)((double)W * 0.3);
    W_stop  = W_start + W_p*2;
    H_start = (int)((double)H * 0.45);
    H_stop  = H_start + H_p;
    for( h = H_start; h < H_stop; h++ )
    {
        for( w = W_start; w < W_stop; w++ )
        {
            pix_pos1 = w + W * h;
            pix_pos2 = (w - W_start) + W_p * 2 * (h - H_start);
            pix = static_cast<unsigned char>(img_gray->imageData[pix_pos1]);
            (img_S2->imageData)[pix_pos2] = pix;
        }
    }

    IplImage* imgBlur2 = iplCloneImage(img_S2);
    IplImage* img_S_edge2 = iplCloneImage(img_S2);
    iplBlur(img_S2, imgBlur2, 3, 3, 1, 1);
    clusterImage(imgBlur2, label2, 2);
    mergeTo2Regions(img_S2, label2, 2);
    sobelEdgeDetect(img_S2, img_S_edge2);
    //writeIPL2IMG( img_S_edge2, "c" );

    int W_n2 = W_p*2 - 10;
    int H_n2 = H_p - 10;
    IplImage* img_SS2 = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY",
                            "GRAY", IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                            IPL_ALIGN_QWORD, W_n2, H_n2, NULL, NULL, NULL, NULL );
    if(NULL == img_SS2) exit(EXIT_FAILURE);
    iplAllocateImage(img_SS2, 0, 1);
    if( NULL == img_SS2->imageData ) exit(EXIT_FAILURE);
    for( h = 5; h < H_n2 + 5; h++ )
    {
        for( w = 5; w < W_n2 + 5; w++ )
        {
            pix_pos1 = w + W_p * 2 * h;
            pix_pos2 = (w - 5) + W_n2 * (h - 5);
            pix = static_cast<unsigned char>(img_S_edge2->imageData[pix_pos1]);
            (img_SS2->imageData)[pix_pos2] = pix;
        }
    }
    non_zero = static_cast<double>(cvCountNonZero(img_SS2));
    // counting the edge pixels
    percent_non_zero = (non_zero / (double)(W_n2 * H_n2)) * 100.0;
    // ratio of the edge pixels to total pixels in the middle area
    printf("For Center\t");
    cout << "non_zero: " << non_zero << ", percentage: " << percent_non_zero << endl;
    if(percent_non_zero > NON_ZERO2)
        BLANK2 = false;
    else
        BLANK2 = true;

    delete [] label2;
    delete [] label1;
    iplDeallocate(img_gray, IPL_IMAGE_ALL);
    iplDeallocate(img_S1, IPL_IMAGE_ALL);
    iplDeallocate(img_S2, IPL_IMAGE_ALL);
    iplDeallocate(img_S_edge1, IPL_IMAGE_ALL);
    iplDeallocate(img_S_edge2, IPL_IMAGE_ALL);
    iplDeallocate(imgBlur1, IPL_IMAGE_ALL);
    iplDeallocate(imgBlur2, IPL_IMAGE_ALL);
    iplDeallocate(img_SS1, IPL_IMAGE_ALL);
    iplDeallocate(img_SS2, IPL_IMAGE_ALL);

    //if( BLANK1 ) _BLANK = TRUE;  // no DR bar on the frame => BLANK frame
    //else _BLANK = FALSE;
    if(BLANK1 == TRUE || BLANK2 == TRUE)
        _BLANK = TRUE;
    else
        _BLANK = FALSE;
    return _BLANK;
}
/*
 * This method can be used to anonymize a frame of the echo video.
 * It first gets the type of the frame by calling the method 'getType()'
 * and then based on the type opens a predefined mask. It then masks
 * the frame with the specified mask and overwrites the frame on the
 * disk.
 */
void Frame :: anonimize(bool FLAG, TYPE type)
{
    TYPE T;
    int w, h;
    int pix_pos, pix_pos_size;
    fstream mask_file;
    char mask_name[FILE_NAME_SIZE];
    char tmp_name[FILE_NAME_SIZE];
    char tmp_line[MAX_LINE];
    CvPoint P1, P2;
    float UL_x = 0, UL_y = 0,
          LR_x = 0, LR_y = 0;
    unsigned char pix, pix_median;
    int H = _img->height;
    int W = _img->width;
    int pix_size = _img->nChannels;

    // getting the type of the frame
    if(FLAG == FALSE)
        T = getType();
    else
        T = type;

    // Creating the mask image
    IplImage* mask = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                         IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
                         W, H, NULL, NULL, NULL, NULL );
    if(mask == NULL) exit(EXIT_FAILURE);
    iplAllocateImage(mask, 1, 0);
    if(NULL == mask->imageData) exit(EXIT_FAILURE);
    // setting the directory where the mask could be found
    strcpy(mask_name, MASK_DIR);

    // getting the correct mask image based on the type of the frame
    switch(T)
    {
    case(BW):
    case(COLOR):
    case(ZOOM):
        strcat(mask_name, TXT_ECHO_MASK_TXT);
        //printf("mask name: %s\n", mask_name);
        //mask = readIMG2IPL(mask_name, 0);
        // open the mask file
        mask_file.open(mask_name, ios::in);
        if (!mask_file)
        {
            cerr << "FRAME::anonimize(): Cannot open file" << endl;
            exit(EXIT_FAILURE);
        }
        // read the mask values and make up the image
        while (mask_file.getline(tmp_line, MAX_LINE, '\n'))
        {
            //printf("mask info: %s\n", tmp_line);
            sscanf(tmp_line, "%f %f %f %f", &UL_x, &UL_y, &LR_x, &LR_y);
            P1.x = (int)(UL_x * W);
            P1.y = (int)(UL_y * H);
            P2.x = (int)(LR_x * W);
            P2.y = (int)(LR_y * H);
            //printf("P1: (%d,%d), P2: (%d,%d)\n", P1.x, P1.y, P2.x, P2.y);
            cvRectangle(mask, P1, P2, -1, CV_RGB(255,255,255));
        }
        writeIPL2IMG(mask, "mask");
        mask_file.close();
        break;
    case(DOPPLER):
        strcat(mask_name, TXT_DOP_MASK_TXT);
        //mask = readIMG2IPL(mask_name, 0);
        // open the mask file
        mask_file.open(mask_name, ios::in);
        if (!mask_file)
        {
            cerr << "FRAME::anonimize(): Cannot open file" << endl;
            exit(EXIT_FAILURE);
        }
        // read the mask values and make up the image
        while (mask_file.getline(tmp_line, MAX_LINE, '\n'))
        {
            //printf("mask info: %s\n", tmp_line);
            sscanf(tmp_line, "%f %f %f %f", &UL_x, &UL_y, &LR_x, &LR_y);
            P1.x = (int)(UL_x * W);
            P1.y = (int)(UL_y * H);
            P2.x = (int)(LR_x * W);
            P2.y = (int)(LR_y * H);
            //printf("P1: (%d,%d), P2: (%d,%d)\n", P1.x, P1.y, P2.x, P2.y);
            cvRectangle(mask, P1, P2, -1, CV_RGB(255,255,255));
        }
        writeIPL2IMG(mask, "mask");
        mask_file.close();
        break;
    case(BLANK):
    case(OTHER):
        // the mask for a blank frame is the sum of the echo and dop masks
        strcpy(tmp_name, mask_name);
        strcat(mask_name, TXT_ECHO_MASK_TXT);
        // open the mask file
        mask_file.open(mask_name, ios::in);
        if (!mask_file)
        {
            cerr << "FRAME::anonimize(): Cannot open file" << endl;
            exit(EXIT_FAILURE);
        }
        // read the mask values and make up the image
        while (mask_file.getline(tmp_line, MAX_LINE, '\n'))
        {
            sscanf(tmp_line, "%f %f %f %f", &UL_x, &UL_y, &LR_x, &LR_y);
            P1.x = (int)(UL_x * W);
            P1.y = (int)(UL_y * H);
            P2.x = (int)(LR_x * W);
            P2.y = (int)(LR_y * H);
            //printf("P1: (%d,%d), P2: (%d,%d)\n", P1.x, P1.y, P2.x, P2.y);
            cvRectangle(mask, P1, P2, -1, CV_RGB(255,255,255));
        }
        writeIPL2IMG(mask, "mask");
        mask_file.close();
        break;
    };
    // declaring and initializing gray_img used for intermediate operations
    IplImage* gray_img = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY",
                             "GRAY", IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL,
                             IPL_ALIGN_QWORD, W, H, NULL, NULL, NULL, NULL );
    if(gray_img == NULL) exit(EXIT_FAILURE);
    iplAllocateImage(gray_img, 0, 0);
    if(NULL == gray_img->imageData) exit(EXIT_FAILURE);

    // Making gray-level image out of input image
    if( strcmp(_img->colorModel, "GRAY") != 0 )
        iplColorToGray(_img, gray_img);  // changing image from color to gray-level
    else
        iplCopy(_img, gray_img);

    pix_median = static_cast<unsigned char>( imageMedian(gray_img) );
    // finding the median pixel value of the image
    iplDeallocate(gray_img, IPL_IMAGE_ALL);

    // masking the frame's image with the mask
    for(h = 0; h < H; h++)
    {
        for(w = 0; w < W; w++)
        {
            pix_pos = w + W * h;
            pix_pos_size = pix_pos * pix_size;
            pix = static_cast<unsigned char>(mask->imageData[pix_pos]);
            if(pix == MAX_PIX_VAL)
            {
                ( (char*)_img->imageData )[pix_pos_size] = pix_median;
                if( pix_size > 1 )
                {
                    ( (char*)_img->imageData )[pix_pos_size + 1] = pix_median;
                    ( (char*)_img->imageData )[pix_pos_size + 2] = pix_median;
                }
            }
        }
    }
    iplDeallocate(mask, IPL_IMAGE_ALL);

    unlink(_name);             // delete the frame's image from disk
    deleteExtension(_name);    // deleting the file extension from the end of image name
    writeIPL2IMG(_img, _name); // replacing the frame with the anonymized one
}
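The masking step of anonimize() overwrites every pixel flagged by the mask with the frame's median gray value, so patient text is irrecoverably blotted out rather than merely blacked. That fill step can be sketched as a standalone helper (the name `maskFill` and the interleaved-buffer interface are assumptions; no IPL types involved):

```cpp
#include <cstddef>

// Replaces every pixel whose mask value is 255 with 'fill'
// (the median gray value in anonimize() above). 'img' is an
// interleaved image with 'channels' bytes per pixel; 'mask'
// holds one byte per pixel.
void maskFill(unsigned char* img, const unsigned char* mask,
              int W, int H, int channels, unsigned char fill)
{
    for (int p = 0; p < W * H; ++p) {
        if (mask[p] == 255) {
            for (int c = 0; c < channels; ++c)
                img[p * channels + c] = fill;
        }
    }
}
```

Using the median rather than a constant keeps the blotted region visually close to the surrounding background.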
/*
 * frame.h
 * The definition of the "Frame" class is in this file.
 * A Frame is a class to capture the notion of a video frame.
 * A frame is closely related to an image.
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 */
#ifndef _FRAME_H
#define _FRAME_H

#include <ipl/ipl.h>
#include <cv.h>
#include "ecg.h"
#include "roi.h"
#include "../util/filename.h"

const double NON_ZERO1 = 10.0;  // percent of non-zero pixels
const double NON_ZERO2 = 5.0;   // percent of non-zero pixels

class Frame
{
public:
    Frame();
    Frame(const char*, int);
    ~Frame();
    void Set(const char*, int);
    void setSN(int i);
    ECG* getECG();                // returns the ECG of the frame
    ROI* getROI();                // returns the ROI of the frame
    IplImage* getIMG() { return _img; }
                                  // returns the image associated with the frame
    int SN() { return _sn; }
    char* getName() { return _name; }
    TYPE getType();               // utility function for finding the type of the frame
    void anonimize(bool FLAG, TYPE type = OTHER);
                                  // blocks out the patient info from frame based on type;
                                  // if FLAG is false or not set, the type will be
                                  // determined from the frame itself, otherwise it will
                                  // be the type set by the caller
    bool isBlank();               // returns TRUE if frame is blank
    bool isColored();             // returns TRUE if frame is colored

private:
    IplImage* _img;       // each frame has an image
    char* _name;          // name of the image file corresponding to frame
    int _sn;              // sequence number of the frame
    TYPE _type;           // type of the frame
    bool _type_flag;      // TRUE if the type has been determined already
    bool _BLANK;          // TRUE if the frame is blank
    bool _blank_FLAG;     // TRUE if the frame has been processed for blankness
    void setType();
};
#endif
CARDIOHOME = /home/shahram/cardio/source
CC = g++
CPPFLAGS = -g -O2
INCLS = -I$(CARDIOHOME)/include
LIBS = -lopencv -lipl -lstdc++
.SUFFIXES: .cc .o
PROG = anonimize
SRCS = $(CARDIOHOME)/util/ppmIO.cc $(CARDIOHOME)/echo_video/img_util.cc $(CARDIOHOME)/util/util.cc $(CARDIOHOME)/echo_video/ecg.cc $(CARDIOHOME)/echo_video/tm_blob.cc $(CARDIOHOME)/echo_video/roi.cc $(CARDIOHOME)/echo_video/frame.cc $(CARDIOHOME)/echo_video/echo_anonimizer.cc
OBJS = $(SRCS:%.cc=%.o)

.cc.o:
	$(CC) $(CPPFLAGS) $(INCLS) -c $<

all: $(PROG)

$(PROG): $(OBJS)
	$(CC) $(CPPFLAGS) -o $@ $(OBJS) $(INCLS) $(LIBS)

clean:
	rm *.o $(PROG)
/*
 * roi.cc
 *
 * Definition of the methods of class ROI.
 * For the definition of class ROI look at roi.h
 *
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 */
#include <iostream.h>
#include <fstream.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <assert.h>
#include <math.h>

#include "roi.h"
#include "../util/img_util.h"
#include "../util/filename.h"
#include "../util/ppmIO.h"
/*
 * Constructor for class ROI.
 * It gets the frame image that we want the ROI for as the input.
 * It assumes that the size of the input frame has been
 * checked before and complies with the correct one.
 */
ROI :: ROI(IplImage* img)
{
    int W = img->width;
    int H = img->height;

    // setting the _in_img to the input image
    _in_img = iplCloneImage(img);

    /*--------------------------------------------------------------*/
    /*- Decimated version of input image for faster processing time -*/
    /*--------------------------------------------------------------*/
    _in_img_D = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                    IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
                    static_cast<int>(0.25*W), static_cast<int>(0.25*H),
                    NULL, NULL, NULL, NULL );
    if(_in_img_D == NULL)
    {
        cerr << "Cannot initialize image";
        exit(EXIT_FAILURE);
    }
    iplAllocateImage(_in_img_D, 0, 0);
    if(NULL == _in_img_D->imageData)
    {
        cerr << "Cannot initialize image";
        exit(EXIT_FAILURE);
    }

    /*-----------------------------------------------*/
    /*- Temporary image for intermediate operations  -*/
    /*-----------------------------------------------*/
    IplImage* tmp_img = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                            IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
                            img->width, img->height, NULL, NULL, NULL, NULL );
    if(tmp_img == NULL)
    {
        cerr << "Cannot initialize image";
        exit(EXIT_FAILURE);
    }
    iplAllocateImage(tmp_img, 0, 0);
    if(NULL == tmp_img->imageData)
    {
        cerr << "Cannot initialize image";
        exit(EXIT_FAILURE);
    }

    /*----------------------------------*/
    /*- Make sure image is gray-level   -*/
    /*----------------------------------*/
    if( strcmp(img->colorModel, "GRAY") != 0 )
        iplColorToGray(img, tmp_img);  // changing image from color to gray-level
    else
        iplCopy(img, tmp_img);         // just copying one gray-level image to the other

    /*----------------------------------*/
    /*- Segment the decimated image     -*/
    /*----------------------------------*/
    iplDecimate(tmp_img, _in_img_D, 1, 4, 1, 4, IPL_INTER_LINEAR);
    // decimating the input image by a factor of 4 (=> area by 1/16)
    iplDeallocate(tmp_img, IPL_IMAGE_ALL);
    Gray_Level_Segment();
    // segmenting the input image into background and foreground

    /*----------------------------------*/
    /*- Closing the input image         -*/
    /*----------------------------------*/
    IplConvKernel* SE1 = cvCreateStructuringElementEx(3, 3, 0, 0,
                             CV_SHAPE_RECT, NULL);
    IplConvKernel* SE2 = cvCreateStructuringElementEx(3, 3, 2, 2,
                             CV_SHAPE_RECT, NULL);
    cvDilate(_in_img, _in_img, SE1, 1);
    cvErode(_in_img, _in_img, SE2, 1);
    cvReleaseStructuringElement(&SE2);
    cvReleaseStructuringElement(&SE1);

    /*----------------------------------*/
    /*- Smoothing the segmented image   -*/
    /*----------------------------------*/
    IplImage* tmp = iplCloneImage(_in_img_D);
    iplMedianFilter(_in_img_D, tmp, 5, 5, 2, 2);
    iplCopy(tmp, _in_img_D);
    iplDeallocate(tmp, IPL_IMAGE_ALL);

    // Shape of ROI hasn't been set yet
    _shape_flag = FALSE;
}
ROI :: ~ROI()
{
    if(_in_img)
        iplDeallocate(_in_img, IPL_IMAGE_ALL);
    if(_in_img_D)
        iplDeallocate(_in_img_D, IPL_IMAGE_ALL);
}
/*
 * Used to segment the input frame image into a foreground consisting of
 * the ROI, TXT, DRI and ECG areas. It can be changed by a derivative of
 * this class if needed.
 */
void ROI :: Gray_Level_Segment()
{
    int H = _in_img_D->height;
    int W = _in_img_D->width;

    // Initializing a temp image
    IplImage* img = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                        IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
                        W, H, NULL, NULL, NULL, NULL );
    if(img == NULL) exit(EXIT_FAILURE);
    iplAllocateImage(img, 0, 0);
    if(NULL == img->imageData) exit(EXIT_FAILURE);

    // 'label' holds the label of each pixel of the image
    unsigned char* label = new unsigned char[W * H];
    assert(label != NULL);

    // using the K-means clustering algorithm, the image is classified
    // into 5 clusters
    clusterImage(_in_img_D, label, 5);

    // merging the small regions and bi-levelizing the image
    mergeTo2Regions(img, label, 5);
    delete [] label;
    iplCopy(img, _in_img_D);
    iplDeallocate(img, IPL_IMAGE_ALL);
    // NOTE: Previously I was copying 'img' to '_in_img_D' pixel by pixel.
}
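Gray_Level_Segment() leans on clusterImage(), a K-means clustering of gray values, before mergeTo2Regions() collapses the result down to a two-level foreground/background image. clusterImage's implementation is not part of this listing, so the following is only a sketch of 1-D K-means on pixel intensities; the function name, initialization scheme, and iteration count are assumptions:

```cpp
#include <cmath>
#include <vector>

// Labels each gray-level pixel with the index of the nearest of K
// cluster centers: Lloyd-style K-means on 1-D intensities.
void clusterImageSketch(const unsigned char* img, unsigned char* label,
                        int n, int K, int iters = 10)
{
    std::vector<double> center(K);
    for (int k = 0; k < K; ++k)               // spread initial centers
        center[k] = 255.0 * (k + 0.5) / K;    // evenly over [0,255]
    for (int it = 0; it < iters; ++it) {
        std::vector<double> sum(K, 0.0);
        std::vector<int> cnt(K, 0);
        for (int p = 0; p < n; ++p) {         // assignment step
            int best = 0;
            for (int k = 1; k < K; ++k)
                if (std::fabs(img[p] - center[k]) <
                    std::fabs(img[p] - center[best]))
                    best = k;
            label[p] = (unsigned char)best;
            sum[best] += img[p];
            cnt[best]++;
        }
        for (int k = 0; k < K; ++k)           // update step
            if (cnt[k]) center[k] = sum[k] / cnt[k];
    }
}
```

With K = 2 this separates dark background from bright foreground; the listing above uses K = 5 and then merges the small regions.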
/*
 * This method calculates the edge-map for the ROI and then compares
 * this edge-map with the templates of different ROI types to
 * determine the correct shape of the ROI.
 * NOTE: Add template zm2 to the list
 */
void ROI :: setShape()
{
    char tmp_line[MAX_LINE];
    fstream tmplt_file1, tmplt_file2, tmplt_file3;
    CvPoint P1, P2;
    float UL_x = 0, UL_y = 0,
          LR_x = 0, LR_y = 0;
    IplImage *bw_tmp,
             *dop_tmp,
             *zm1_tmp;
    int W = _in_img->width;
    int H = _in_img->height;

    // making up the names of templates
    char* bw_name = new char[FILE_NAME_SIZE];
    strcpy(bw_name, TEMPLATE_DIR);
    strcat(bw_name, BW_TMPLT_DAT);
    char* zm1_name = new char[FILE_NAME_SIZE];
    strcpy(zm1_name, TEMPLATE_DIR);
    strcat(zm1_name, TRAP_Z_TMPLT_DAT);
    char* dop_name = new char[FILE_NAME_SIZE];
    strcpy(dop_name, TEMPLATE_DIR);
    strcat(dop_name, DOP_TMPLT_DAT);
    /*------------------------------------------*/
    /*- Reading in the template for BW frames  -*/
    /*------------------------------------------*/
    bw_tmp = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                 IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
                 W, H, NULL, NULL, NULL, NULL );
    if(bw_tmp == NULL) exit(EXIT_FAILURE);
    iplAllocateImage(bw_tmp, 1, 0);
    if(NULL == bw_tmp->imageData) exit(EXIT_FAILURE);

    // open the mask file
    cout << bw_name << endl;
    tmplt_file1.open(bw_name, ios::in);
    if (!tmplt_file1)
    {
        cerr << "ROI::setShape(): Cannot open file" << endl;
        exit(EXIT_FAILURE);
    }

    // read the mask values and make up the image
    while (tmplt_file1.getline(tmp_line, MAX_LINE, '\n'))
    {
        sscanf(tmp_line, "%f %f %f %f", &UL_x, &UL_y, &LR_x, &LR_y);
        P1.x = (int)(UL_x * W);
        P1.y = (int)(UL_y * H);
        P2.x = (int)(LR_x * W);
        P2.y = (int)(LR_y * H);
        cvLine(bw_tmp, P1, P2, 1, CV_RGB(255,255,255));
    }
    tmplt_file1.close();
    /*-----------------------------------------------*/
    /*- Reading in the template for Doppler frames  -*/
    /*-----------------------------------------------*/
    dop_tmp = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                  IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
                  W, H, NULL, NULL, NULL, NULL );
    if(dop_tmp == NULL) exit(EXIT_FAILURE);
    iplAllocateImage(dop_tmp, 1, 0);
    if(NULL == dop_tmp->imageData) exit(EXIT_FAILURE);

    // open the mask file
    cout << dop_name << endl;
    tmplt_file2.open(dop_name, ios::in);
    if (!tmplt_file2)
    {
        cerr << "ROI::setShape(): Cannot open file" << endl;
        exit(EXIT_FAILURE);
    }

    // read the mask values and make up the image
    while (tmplt_file2.getline(tmp_line, MAX_LINE, '\n'))
    {
        sscanf(tmp_line, "%f %f %f %f", &UL_x, &UL_y, &LR_x, &LR_y);
        P1.x = (int)(UL_x * W);
        P1.y = (int)(UL_y * H);
        P2.x = (int)(LR_x * W);
        P2.y = (int)(LR_y * H);
        cvLine(dop_tmp, P1, P2, 1, CV_RGB(255,255,255));
    }
    tmplt_file2.close();
    /*--------------------------------------------*/
    /*- Reading in the template for zoom frames  -*/
    /*--------------------------------------------*/
    zm1_tmp = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                  IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
                  W, H, NULL, NULL, NULL, NULL );
    if(zm1_tmp == NULL) exit(EXIT_FAILURE);
    iplAllocateImage(zm1_tmp, 1, 0);
    if(NULL == zm1_tmp->imageData) exit(EXIT_FAILURE);

    // open the mask file
    cout << zm1_name << endl;
    tmplt_file3.open(zm1_name, ios::in);
    if (!tmplt_file3)
    {
        cerr << "ROI::setShape(): Cannot open file" << endl;
        exit(EXIT_FAILURE);
    }

    // read the mask values and make up the image
    while (tmplt_file3.getline(tmp_line, MAX_LINE, '\n'))
    {
        sscanf(tmp_line, "%f %f %f %f", &UL_x, &UL_y, &LR_x, &LR_y);
        P1.x = (int)(UL_x * W);
        P1.y = (int)(UL_y * H);
        P2.x = (int)(LR_x * W);
        P2.y = (int)(LR_y * H);
        cvLine(zm1_tmp, P1, P2, 1, CV_RGB(255,255,255));
    }
    tmplt_file3.close();
    /*----------------------------*/
    /*- Create a temporary image -*/
    /*----------------------------*/
    IplImage* img = iplCreateImageHeader( 1, 0, IPL_DEPTH_8U, "GRAY", "GRAY",
                        IPL_DATA_ORDER_PIXEL, IPL_ORIGIN_TL, IPL_ALIGN_QWORD,
                        W, H, NULL, NULL, NULL, NULL );
    if(img == NULL) exit(EXIT_FAILURE);
    iplAllocateImage(img, 0, 0);
    if(NULL == img->imageData) exit(EXIT_FAILURE);

    /*-----------------------------------*/
    /*- Get edge image and zoom it out  -*/
    /*-----------------------------------*/
    IplImage* edge_img = iplCloneImage(_in_img_D);
    sobelEdgeDetect(_in_img_D, edge_img);
    iplZoom(edge_img, img, 4, 1, 4, 1, IPL_INTER_LINEAR);
    iplDeallocate(edge_img, IPL_IMAGE_ALL);

    /*--------------------------------*/
    /*- Match each template to edge  -*/
    /*--------------------------------*/
    Shape s;
    if( Match_Template(img, bw_tmp) )
    {
        s = TRIANGLE;
        cout << "triangle" << "\n";
    }
    else if( Match_Template(img, zm1_tmp) )
    {
        s = TRAPEZOID;
        cout << "trapezoid" << "\n";
    }
    else if( Match_Template(img, dop_tmp) )
    {
        s = RECTANGLE;
        cout << "rectangle" << "\n";
    }
    else
    {
        s = NO_SHAPE;
        cout << "no shape" << "\n";
    }

    // Setting flags
    _shape_flag = TRUE;
    _roi_shape = s;

    /*----------------*/
    /*- Cleaning up! -*/
    /*----------------*/
    delete [] dop_name;
    //delete [] zm2_name;
    delete [] zm1_name;
    delete [] bw_name;
    iplDeallocate(img, IPL_IMAGE_ALL);
    iplDeallocate(dop_tmp, IPL_IMAGE_ALL);
    //iplDeallocate(zm2_tmp, IPL_IMAGE_ALL);
    iplDeallocate(zm1_tmp, IPL_IMAGE_ALL);
    iplDeallocate(bw_tmp, IPL_IMAGE_ALL);
}

/*
* This method returns the shape of the ROI. */
Shape ROI :: getShape()
{
    if(!_shape_flag)
        setShape();
    return(_roi_shape);
}
/*
 * Based on the shape of the ROI, this method applies a proper mask
 * and returns an image that shows the ROI only.
 */
IplImage* ROI :: getImg()
{
    int h, w, i;
    int pix, mask_pix, pix_pos, pix_pos_size;
    int W = _in_img->width;
    int H = _in_img->height;
    int pix_size = _in_img->nChannels;

    if(!_shape_flag)
        setShape();

    if(_roi_shape != NO_SHAPE)
    {
        // the mask image for the specific shape
        IplImage* mask = GET_ROI_MASK(_roi_shape);

        // negative of the mask image
        IplImage* mask_NOT = iplCloneImage(mask);
        iplNot(mask, mask_NOT);

        IplImage* out_img = iplCloneImage(_in_img);
        IplImage* tmp_img = iplCloneImage(_in_img);

        // setting tmp_img to the complement of the ROI
        for(h = 0; h < H; h++)
        {
            for(w = 0; w < W; w++)
            {
                pix_pos = w + W * h;
                pix_pos_size = pix_pos * pix_size;
                pix = static_cast<unsigned char>(mask_NOT->imageData[pix_pos]);
                if(pix == MAX_PIX_VAL)
                {
                    ((char*)tmp_img->imageData)[pix_pos_size] =
                        _in_img->imageData[pix_pos_size];
                    if( pix_size > 1 )
                    {
                        ((char*)tmp_img->imageData)[pix_pos_size + 1] =
                            _in_img->imageData[pix_pos_size + 1];
                        ((char*)tmp_img->imageData)[pix_pos_size + 2] =
                            _in_img->imageData[pix_pos_size + 2];
                    }
                }
                else
                {
                    ((char*)tmp_img->imageData)[pix_pos_size] = 0;
                    if( pix_size > 1 )
                    {
                        ((char*)tmp_img->imageData)[pix_pos_size + 1] = 0;
                        ((char*)tmp_img->imageData)[pix_pos_size + 2] = 0;
                    }
                }
            }
        }

        int* mean = new int[pix_size];

        // setting up the mean of the complement of the ROI
        for(i = 0; i < pix_size; i++)
        {
            cvSetImageCOI(tmp_img, i+1);
            mean[i] = static_cast<int>(cvMean(tmp_img));
        }

        // masking the input frame and setting out_img to the ROI region
        for(h = 0; h < H; h++)
        {
            for(w = 0; w < W; w++)
            {
                pix_pos = w + W * h;
                pix_pos_size = pix_pos * pix_size;
                mask_pix = static_cast<unsigned char>(mask->imageData[pix_pos]);
                if(mask_pix == MAX_PIX_VAL)
                {
                    ((char*)out_img->imageData)[pix_pos_size] =
                        _in_img->imageData[pix_pos_size];
                    if( pix_size > 1 )
                    {
                        ((char*)out_img->imageData)[pix_pos_size + 1] =
                            _in_img->imageData[pix_pos_size + 1];
                        ((char*)out_img->imageData)[pix_pos_size + 2] =
                            _in_img->imageData[pix_pos_size + 2];
                    }
                }
                else
                {
                    ((char*)out_img->imageData)[pix_pos_size] = mean[0]; //255
                    if( pix_size > 1 )
                    {
                        ((char*)out_img->imageData)[pix_pos_size + 1] = mean[1];
                        ((char*)out_img->imageData)[pix_pos_size + 2] = mean[2];
                    }
                }
            }
        }

        delete [] mean;
        iplDeallocate(mask_NOT, IPL_IMAGE_ALL);
        iplDeallocate(mask, IPL_IMAGE_ALL);
        iplDeallocate(tmp_img, IPL_IMAGE_ALL);
        return(out_img);
    }
    else
        return(_in_img);
}
/*
 * This function is used to match the edge-map "I" to a template "T".
 */
bool Match_Template(IplImage* I, IplImage* T)
{
    int W = I->width;
    int H = I->height;

    // making sure that the two images are of the same size
    if( (W != T->width) || (H != T->height) )
    {
        cout << "Error! Images not of same size" << endl;
        exit(EXIT_FAILURE);
    }

    IplImage* temp_img = iplCloneImage(I);
    iplSet(temp_img, 0);

    int noffset;
    char* TData = T->imageData;
    char* IData = I->imageData;
    char* tmpData = temp_img->imageData;
    for(int h = 0; h < H; h++)
    {
        noffset = h * T->widthStep;
        for(int w = 0; w < W; w++)
        {
            int pix = (unsigned char)TData[noffset + w];
            if(pix == MAX_PIX_VAL)
                tmpData[noffset + w] = IData[noffset + w];
        }
    }

    int N1 = cvCountNonZero(temp_img);  // number of non-zero pixels of the result
    int N2 = cvCountNonZero(T);         // number of non-zero pixels of the template
    iplDeallocate(temp_img, IPL_IMAGE_ALL);

    if( N1 > static_cast<int>(0.6 * N2) )
        return(TRUE);
    else
        return(FALSE);
}
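Match_Template declares a match when more than 60% of the template's on-pixels also fire in the edge map. That overlap test can be written as a standalone helper (the name `matchTemplateSketch` and the flat-array interface are assumptions; the 0.6 threshold comes from the listing above):

```cpp
#include <cstddef>

// Declares a match when the edge map is non-zero at more than
// 'ratio' of the template's non-zero positions, mirroring the
// 0.6 threshold used by Match_Template() above.
bool matchTemplateSketch(const unsigned char* edge,
                         const unsigned char* tmpl,
                         int n, double ratio = 0.6)
{
    int onTemplate = 0, onBoth = 0;
    for (int p = 0; p < n; ++p) {
        if (tmpl[p] != 0) {
            ++onTemplate;                  // template pixel is "on"
            if (edge[p] != 0) ++onBoth;    // edge map agrees here
        }
    }
    if (onTemplate == 0) return false;     // empty template never matches
    return onBoth > ratio * onTemplate;
}
```

Counting only where the template is on makes the test insensitive to extra edges elsewhere in the frame, which is why setShape() can try the triangle, trapezoid, and rectangle templates in sequence.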
/*
 * This function gets the correct mask for a specific ROI shape.
 */
IplImage* GET_ROI_MASK(Shape S)
{
    IplImage* mask;
    char* mask_name = new char[FILE_NAME_SIZE];
    strcpy(mask_name, MASK_DIR);
    switch(S)
    {
    case(TRIANGLE):
        strcat(mask_name, BW_MASK);
        mask = readIMG2IPL(mask_name, 0);
        break;
    case(TRAPEZOID):
        strcat(mask_name, TRAP_Z_MASK);
        mask = readIMG2IPL(mask_name, 0);
        break;
    case(SQUARE):
        strcat(mask_name, SQR_Z_MASK);
        mask = readIMG2IPL(mask_name, 0);
        break;
    case(RECTANGLE):
        strcat(mask_name, DOP_MASK);
        mask = readIMG2IPL(mask_name, 0);
        break;
    };
    delete [] mask_name;
    return(mask);
}
/*
 * roi.h
 * Written by Shahram Ebadollahi
 * Digital Video|Multimedia Group
 * Columbia University
 */
#ifndef _ROI_H
#define _ROI_H

#include <ipl/ipl.h>
#include <cv.h>

enum Shape
{
    TRIANGLE,
    SQUARE,
    RECTANGLE,
    TRAPEZOID,
    NO_SHAPE
};  // Different possible ROI shapes

class ROI
{
private:
    IplImage* _in_img;    // input image frame containing the ROI
    IplImage* _in_img_D;  // decimated and segmented version of _in_img
    Shape _roi_shape;     // shape of the ROI
    bool _shape_flag;     // indicating if the shape of the ROI has been
                          // determined already
protected:
    void Gray_Level_Segment();  // segmenting into background and foreground
    void setShape();            // determines the shape of the ROI
public:
    ROI(IplImage*);
    ~ROI();
    Shape getShape();     // returns shape of ROI as one of possible shapes
    IplImage* getImg();   // returns the image of the ROI region
    IplImage* getImgD() { return _in_img_D; }
};

bool Match_Template(IplImage*, IplImage*);
IplImage* GET_ROI_MASK(Shape);

#endif

<?xml version="1.0"?>
<echoFrame framePath="/home/shahram/cardio/echos/videos/echo1/img/00000.ppm"
    frameType="bw" frameView="PLA" frameStatus="simple" cycleNum="1" sequenceNum="0">
  <ROI index="1" label="LV" valid="true"
      roiMaskPath="/home/shahram/cardio/echos/videos/echo1/views/1/masks/mask1.ppm"
      featureFile="/home/shahram/cardio/echos/videos/echo1/views/1/features/f1.dat"></ROI>
  <ROI index="2" label="LA" valid="true"
      roiMaskPath="/home/shahram/cardio/echos/videos/echo1/views/1/masks/mask2.ppm"
      featureFile="/home/shahram/cardio/echos/videos/echo1/views/1/features/f2.dat"></ROI>
  <ROI index="3" label="RA" valid="true"
      roiMaskPath="/home/shahram/cardio/echos/videos/echo1/views/1/masks/mask3.ppm"
      featureFile="/home/shahram/cardio/echos/videos/echo1/views/1/features/f3.dat"></ROI>
</echoFrame>
// panelQuery.java
package dvmm;

import javax.swing.*;
import javax.swing.text.*;
import java.awt.*;
import java.awt.Stroke;
import java.awt.event.*;
import com.jgraph.*;
import com.jgraph.event.*;
import com.jgraph.graph.*;
import java.util.*;
import java.io.*;
/**
 * Title: DVMM project
 * Description:
 * Copyright: Copyright (c) 2001
 * Company: Columbia University
 * @author Hrair Mekhsian
 * @version 1.0
 */
public class panelQuery extends JLayeredPane {

    public MainFrame frame;
    private JTextPane description;
    private JPanel panelSouth;
    private JPanel panelNorth;
    public JGraph mainGraph;
    public GraphData graphdata;
    // private VertexLevel1[] topCategories;
    // private VertexLevel2[][] subCategories;
    // private VertexLevel3[][][] subsubCategories;
    // private VertexLevel3[][][][] severityCategories;
    // private myEdge[][] top2subEdges;
    // private myEdge[][][] sub2subsubEdges;
    // private myEdge[][][][] subsub2sevEdges;
    private static String newline = "\n";
    public JList listVideos;
    private DefaultListModel listModel;
    private JPanel animatedLogo;
    public myGraphBar popup;
    private myMenuBar menu;
    private EchoImage ProjectLogo;

    public panelQuery(MainFrame f) {
        frame = f;
        setBackground(frame.backBlue);
        //topCategories = new VertexLevel1[5];
        //subCategories = new VertexLevel2[5][10];
        //subsubCategories = new VertexLevel3[5][10][10];
        //severityCategories = new VertexLevel3[5][10][10][3];
        //top2subEdges = new myEdge[5][10];
        //sub2subsubEdges = new myEdge[5][10][10];
        //subsub2sevEdges = new myEdge[5][10][10][3];
        listModel = new DefaultListModel();
        listModel.addElement("Patient A");
        listModel.addElement("Patient B");
        listModel.addElement("Lisa Friendly");
        listModel.addElement("Mary Campione");
        listVideos = new JList(listModel);
        listVideos.setBackground(Color.black);
        listVideos.setForeground(Color.white);
        listVideos.setBorder(BorderFactory.createBevelBorder(0,
            new Color(160, 160, 160), Color.blue));
        listVideos.setSize(150, 120);
        listVideos.setVisible(true);

        mainGraph = new JGraph();
        Object[] Cells = mainGraph.getModel().getCells(mainGraph, null);
        mainGraph.getDriver().remove(mainGraph, Cells);
        mainGraph.setBackground(frame.backBlue);
        Object obj;
        try {
            // Read from disk using FileInputStream
            FileInputStream f_in = new FileInputStream("GraphData.txt");
            // Read object using ObjectInputStream
            ObjectInputStream obj_in = new ObjectInputStream(f_in);
// Read an object
+ " Error in read");
Figure imgf000455_0001
instead of loading");
Figure imgf000455_0002
            FileOutputStream f_out = new FileOutputStream("GraphData.txt");
            // Write object with ObjectOutputStream
            ObjectOutputStream obj_out = new ObjectOutputStream(f_out);
            // Write object out to disk
            obj_out.writeObject( graphdata );
        } catch (Exception e) { System.out.println(e.toString()); }
        panelNorth = new JPanel();
        panelNorth.setBackground(Color.black);
        panelNorth.setSize(100, 100);
        panelSouth = new JPanel();
        panelSouth.setBackground(Color.black);

        description = new JTextPane();
        String[] initString =
        {
            "D", "igital ", "E", "chocardiogram ", "V", "ideo ", "L", "ibrary " + newline,
            "DEVL contains 50 different textbook example cases of cardiac " +
            "abnormalities. These 50 categories fall under five main categories. " +
            "For each abnormality there are different levels of severity. You can " +
            "use this panel to browse the contents of the DEVL. By choosing a " +
            "particular patient you'll be able to see a summary of the echo study."
        };
        String[] initStyles =
        {
            "capital", "title", "capital", "title", "capital", "title", "capital", "title",
            "regular", "regular", "regular", "regular", "regular"
        };
        initStylesForTextPane(description);
        Document doc = description.getDocument();
        try {
            for (int i = 0; i < initString.length; i++) {
                doc.insertString(doc.getLength(), initString[i],
                    description.getStyle(initStyles[i]));
            }
        } catch (BadLocationException ble) {
            System.err.println("Couldn't insert initial text.");
        }
        description.setBackground(Color.black);
        description.setForeground(Color.white);
        description.setBounds(150, 15, 500, 100);

        animatedLogo = new JPanel();

        listVideos.addMouseListener(new MouseAdapter() {
            public void mousePressed(MouseEvent e) {
                if (e.getClickCount() == 2 && listVideos.isVisible()
                        && listVideos.getSelectedValue() != null) {
                    frame.Results1.LoadData(listVideos.getSelectedValue());
                }
            }
        });

        popup = new myGraphBar(frame);
        popup.setBounds(300, 300, 102, 140);
        add(popup);
        popup.setVisible(false);
        menu = new myMenuBar(frame);
        menu.setBounds(0, 0, 102, 140);
        add(menu);

        ProjectLogo = new EchoImage(frame);
        ProjectLogo.setImage("heart.gif");
        ProjectLogo.Xscale = 1;
        ProjectLogo.Yscale = 1;
        ProjectLogo.setBounds(855, 7, 150, 131);
        ProjectLogo.setBorder(null);
        add(ProjectLogo);

        panelNorth.setBorder(BorderFactory.createLineBorder(Color.white, 1));
        add(description);
        add(mainGraph);
        add(panelNorth);
        mainGraph.add(listVideos);
        mainGraph.setVisible(true);
        mainGraph.setEditable(false);
        listVideos.setVisible(false);
        panelNorth.setVisible(true);

        int left = 0;
        int top = 0;
        int width = frame.getContentPane().getWidth();
        int height = frame.getContentPane().getHeight();

        // Responding to interaction
        GraphStatus gs = new GraphStatus();
        // Add selection listener
        mainGraph.addGraphSelectionListener(gs);
        // Add model listener
        mainGraph.getModel().addGraphModelListener(gs);
        // Click support
        mainGraph.addMouseListener(new MouseAdapter() {
            public void mousePressed(MouseEvent e) {
                int x = e.getX(), y = e.getY();
                frame.Query.graphdata.clickGraph(x, y, e);
            }
        });
    }
protected void initstylesForTextPane(DTextPane textpane) {
Style def = StyleContext.getDefaultstyleContextO . getstyle(StyleContext.DEFAULT_STYLE); style regular = textPane.addstyle("regular", def); StyleConstants.setAlignment(def,styleconstants.ALIGN_CENTER) ; Styleconstants.setFontFami1y(def, "sansserif") ; page 4 panel Query. Java
Style s = textPane.addStyle("title", regular);
StyleConstants.setFontSize(s, 18);
s = textPane.addStyle("bold", regular);
StyleConstants.setBold(s, true);
s = textPane.addStyle("small", regular);
StyleConstants.setFontSize(s, 10);
s = textPane.addStyle("capital", regular);
StyleConstants.setForeground(s, Color.red);
StyleConstants.setFontSize(s, 20);
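The style setup above derives every named style ("title", "bold", "small", "capital") from a "regular" style, which in turn derives from the default. A minimal headless sketch of that inheritance chain using only `javax.swing.text` (the style names mirror the listing; the standalone `StyleContext` is an assumption for illustration):

```java
import javax.swing.text.Style;
import javax.swing.text.StyleConstants;
import javax.swing.text.StyleContext;

public class StyleSketch {
    public static void main(String[] args) {
        StyleContext sc = new StyleContext();
        Style def = sc.getStyle(StyleContext.DEFAULT_STYLE);
        // Derive "regular" from the default, then "title" from "regular".
        Style regular = sc.addStyle("regular", def);
        StyleConstants.setFontFamily(regular, "SansSerif");
        Style title = sc.addStyle("title", regular);
        StyleConstants.setFontSize(title, 18);
        // An attribute set on the style itself is resolved directly...
        System.out.println(StyleConstants.getFontSize(title));   // 18
        // ...while unset attributes fall back through the parent chain.
        System.out.println(StyleConstants.getFontFamily(title)); // SansSerif
    }
}
```

Because resolution walks the parent chain, changing "regular" (as the listing does with `setAlignment`/`setFontFamily` on the shared ancestor) retroactively affects every derived style that has not overridden the attribute.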
public void windowResized(int width, int height) {
int left = 0;
int top = 0;
panelNorth.setBounds(left, top, width-8, 140);
mainGraph.setBounds(left, top+140, width, height-140);
}
public class GraphStatus implements GraphSelectionListener, GraphModelListener, java.io.Serializable {
// from GraphSelectionListener
public void valueChanged(GraphSelectionEvent e) { }
// from GraphModelListener
public void graphCellsChanged(GraphModelEvent e) { }
// from GraphModelListener
public void graphCellsInserted(GraphModelEvent e) { }
// from GraphModelListener
public void graphCellsRemoved(GraphModelEvent e) { }
}
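The GraphData class below marks its `query` back-reference `transient` and provides `restoreLinks()` to re-attach the `mainGraph` reference to every vertex and edge. A minimal sketch (with a hypothetical `Node` class standing in for the vertex types) of why such a pass is needed: Java serialization skips transient fields, so they come back `null` after a round trip and must be re-wired by hand.

```java
import java.io.*;

public class TransientSketch {
    // Hypothetical node with a transient back-reference, as in GraphData's vertices.
    static class Node implements Serializable {
        String name;
        transient Object mainGraph;  // skipped by serialization
        Node(String name, Object graph) { this.name = name; this.mainGraph = graph; }
    }

    // Serialize the node to a byte array and read it back.
    static Node roundTrip(Node n) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(n);
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (Node) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Object graph = new Object();
        Node restored = roundTrip(new Node("Mitral Stenosis", graph));
        System.out.println(restored.name);              // non-transient state survives
        System.out.println(restored.mainGraph == null); // true: transient field was dropped
        restored.mainGraph = graph;                     // what restoreLinks() re-establishes
    }
}
```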
class GraphData implements java.io.Serializable {
public VertexLevel1[] topCategories;
public VertexLevel2[][] subCategories;
public VertexLevel3[][][] subsubCategories;
public VertexLevel3[][][][] severityCategories;
public myEdge[][] top2subEdges;
public myEdge[][][] sub2subsubEdges;
public myEdge[][][][] subsub2sevEdges;
public transient panelQuery query;
public GraphData(panelQuery par) {
query = par;
topCategories = new VertexLevel1[5];
subCategories = new VertexLevel2[5][10];
subsubCategories = new VertexLevel3[5][10][10];
severityCategories = new VertexLevel3[5][10][10][3];
top2subEdges = new myEdge[5][10];
sub2subsubEdges = new myEdge[5][10][10];
subsub2sevEdges = new myEdge[5][10][10][3];
}
protected void restoreLinks() {
for (int i=0; i<topCategories.length; i++)
for (int j=0; j<subCategories[i].length; j++)
for (int n=0; n<subsubCategories[i][j].length; n++)
for (int l=0; l<3; l++) {
if (topCategories[i] != null) topCategories[i].mainGraph = query.mainGraph;
if (subCategories[i][j] != null) subCategories[i][j].mainGraph = query.mainGraph;
if (subsubCategories[i][j][n] != null) subsubCategories[i][j][n].mainGraph = query.mainGraph;
if (severityCategories[i][j][n][l] != null) severityCategories[i][j][n][l].mainGraph = query.mainGraph;
if (top2subEdges[i][j] != null) top2subEdges[i][j].mainGraph = query.mainGraph;
if (sub2subsubEdges[i][j][n] != null) sub2subsubEdges[i][j][n].mainGraph = query.mainGraph;
if (subsub2sevEdges[i][j][n][l] != null) subsub2sevEdges[i][j][n][l].mainGraph = query.mainGraph;
}
for (int i=0; i<topCategories.length; i++)
if (topCategories[i] != null) topCategories[i].ShowVertex(30 + i*200, 30);
}
protected void initGraph() {
// Create base graph structure
topCategories[0] = new VertexLevel1("Acquired Valvular Heart Disease", 10, 10, query.mainGraph);
subCategories[0][0] = new VertexLevel2("Mitral Valve Disease", 0, 0, query.mainGraph);
subsubCategories[0][0][0] = new VertexLevel3("Mitral Stenosis", 0, 0, query.mainGraph);
subsubCategories[0][0][1] = new VertexLevel3("Mitral Regurgitation", 0, 0, query.mainGraph);
subsubCategories[0][0][2] = new VertexLevel3("Mitral Valve Prolapse", 0, 0, query.mainGraph);
subsubCategories[0][0][3] = new VertexLevel3("Papillary Muscle Dysfunction", 0, 0, query.mainGraph);
subsubCategories[0][0][4] = new VertexLevel3("Flail Mitral Valve", 0, 0, query.mainGraph);
subCategories[0][1] = new VertexLevel2("Aortic Valve Disease", 0, 0, query.mainGraph);
subsubCategories[0][1][0] = new VertexLevel3("Aortic Stenosis", 0, 0,
query.mainGraph);
subsubCategories[0][1][1] = new VertexLevel3("Aortic Regurgitation", 0, 0, query.mainGraph);
subCategories[0][2] = new VertexLevel2("Tricuspid Valve Disease", 0, 0, query.mainGraph);
subsubCategories[0][2][0] = new VertexLevel3("Tricuspid Stenosis", 0, 0, query.mainGraph);
subsubCategories[0][2][1] = new VertexLevel3("Tricuspid Regurgitation", 0, 0, query.mainGraph);
subCategories[0][3] = new VertexLevel2("Pulmonary Valve Disease", 0, 0, query.mainGraph);
subsubCategories[0][3][0] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[0][3][1] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[0][3][2] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[0][3][3] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[0][3][4] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subCategories[0][4] = new VertexLevel2("Endocarditis", 0, 0, query.mainGraph);
subsubCategories[0][4][0] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[0][4][1] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[0][4][2] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[0][4][3] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[0][4][4] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subCategories[0][5] = new VertexLevel2("Calcified Mitral Annulus", 0, 0, query.mainGraph);
topCategories[1] = new VertexLevel1("Congenital Heart Disease", 210, 10, query.mainGraph);
subCategories[1][0] = new VertexLevel2("Abnormalities of Cardiac Septation", 0, 0, query.mainGraph);
subsubCategories[1][0][0] = new VertexLevel3("Atrial Septal Defect", 0, 0, query.mainGraph);
subsubCategories[1][0][1] = new VertexLevel3("Ventricular Septal Defect", 0, 0, query.mainGraph);
subsubCategories[1][0][2] = new VertexLevel3("Endocardial Cushion Defect", 0, 0, query.mainGraph);
subCategories[1][1] = new VertexLevel2("Abnormal Vascular Connections and Structures", 0, 0, query.mainGraph);
subsubCategories[1][1][0] = new VertexLevel3("Patent Ductus Arteriosus", 0, 0, query.mainGraph);
subsubCategories[1][1][1] = new VertexLevel3("Abnormal Systemic Venous", 0, 0, query.mainGraph);
subsubCategories[1][1][2] = new VertexLevel3("Abnormal Pulmonary Venous", 0, 0, query.mainGraph);
subsubCategories[1][1][3] = new VertexLevel3("Abnormalities of the Coronary Circulation", 0, 0, query.mainGraph);
subCategories[1][2] = new VertexLevel2("Conotruncal Abnormalities", 0, 0, query.mainGraph);
subsubCategories[1][2][0] = new VertexLevel3("Tetralogy of Fallot", 0, 0, query.mainGraph);
subsubCategories[1][2][1] = new VertexLevel3("Transposition of the Great Arteries", 0, 0, query.mainGraph);
subsubCategories[1][2][2] = new VertexLevel3("Double-Outlet Right Ventricle", 0, 0, query.mainGraph);
subsubCategories[1][2][3] = new VertexLevel3("Persistent Truncus Arteriosus", 0, 0, query.mainGraph);
subCategories[1][3] = new VertexLevel2("Abnormalities of Left Ventricle", 0, 0, query.mainGraph);
subsubCategories[1][3][0] = new VertexLevel3("Subvalvular Obstruction", 0, 0, query.mainGraph);
subsubCategories[1][3][1] = new VertexLevel3("Valvular Aortic Stenosis", 0, 0, query.mainGraph);
subsubCategories[1][3][2] = new VertexLevel3("Supravalvular Aortic Stenosis", 0, 0, query.mainGraph);
subsubCategories[1][3][3] = new VertexLevel3("Coarctation of the Aorta", 0, 0, query.mainGraph);
subCategories[1][4] = new VertexLevel2("Abnormalities of Right Ventricular Outflow", 0, 0, query.mainGraph);
subsubCategories[1][4][0] = new VertexLevel3("Right Ventricle", 0, 0, query.mainGraph);
subsubCategories[1][4][1] = new VertexLevel3("Pulmonary Valve", 0, 0, query.mainGraph);
subsubCategories[1][4][2] = new VertexLevel3("Pulmonary Artery", 0, 0, query.mainGraph);
subCategories[1][5] = new VertexLevel2("Abnormalities of Left Ventricular Inflow", 0, 0, query.mainGraph);
subsubCategories[1][5][0] = new VertexLevel3("Pulmonary Veins", 0, 0, query.mainGraph);
subsubCategories[1][5][1] = new VertexLevel3("Left Atrium", 0, 0, query.mainGraph);
subsubCategories[1][5][2] = new VertexLevel3("Mitral Valve", 0, 0, query.mainGraph);
// subCategories[1][6] = new VertexLevel2("Abnormalities of Right Ventricular Inflow", 0, 0, query.mainGraph);
// subsubCategories[1][6][0] = new VertexLevel3("Right Atrium", 0, 0, query.mainGraph);
// subsubCategories[1][6][1] = new VertexLevel3("Right Ventricular Inflow", 0, 0, query.mainGraph);
topCategories[2] = new VertexLevel1("Coronary Artery Disease", 410, 10, query.mainGraph);
subCategories[2][0] = new VertexLevel2("Myocardial Infarction", 0, 0, query.mainGraph);
subCategories[2][1] = new VertexLevel2("Complications of Myocardial Infarction", 0, 0, query.mainGraph);
subsubCategories[2][1][0] = new VertexLevel3("Ventricular Aneurysm", 0, 0, query.mainGraph);
subsubCategories[2][1][1] = new VertexLevel3("Ventricular Pseudoaneurysm", 0, 0, query.mainGraph);
subsubCategories[2][1][2] = new VertexLevel3("Ventricular Septal Defect", 0, 0, query.mainGraph);
subsubCategories[2][1][3] = new VertexLevel3("Mural Thrombi", 0, 0, query.mainGraph);
subsubCategories[2][1][4] = new VertexLevel3("Mitral Regurgitation", 0, 0, query.mainGraph);
subsubCategories[2][1][5] = new VertexLevel3("Right Ventricular Infarction", 0, 0, query.mainGraph);
subCategories[2][2] = new VertexLevel2("Examination of the Coronary Arteries", 0, 0, query.mainGraph);
subsubCategories[2][2][0] = new VertexLevel3("Coronary Atherosclerosis", 0, 0, query.mainGraph);
subsubCategories[2][2][1] = new VertexLevel3("Kawasaki Disease", 0, 0, query.mainGraph);
subsubCategories[2][2][2] = new VertexLevel3("Congenital Anomalies of the Coronary Arteries", 0, 0, query.mainGraph);
topCategories[3] = new VertexLevel1("Diseases of the Myocardium", 610, 10, query.mainGraph);
subCategories[3][0] = new VertexLevel2("Hypertrophic Cardiomyopathy", 0, 0, query.mainGraph);
subsubCategories[3][0][0] = new VertexLevel3("Asymmetric Hypertrophy", 0, 0, query.mainGraph);
subsubCategories[3][0][1] = new VertexLevel3("Left Ventricular Outflow Obstruction", 0, 0, query.mainGraph);
subCategories[3][1] = new VertexLevel2("Idiopathic Dilated Cardiomyopathy", 0, 0, query.mainGraph);
subCategories[3][2] = new VertexLevel2("Restrictive Cardiomyopathy", 0, 0, query.mainGraph);
subCategories[3][3] = new VertexLevel2("Infiltrative Cardiomyopathy", 0, 0, query.mainGraph);
subCategories[3][4] = new VertexLevel2("Fibroplastic Cardiomyopathy", 0, 0, query.mainGraph);
topCategories[4] = new VertexLevel1("Pericardial Disease", 810, 10, query.mainGraph);
subCategories[4][0] = new VertexLevel2("Sub Category 0:", 0, 0, query.mainGraph);
subsubCategories[4][0][0] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][0][1] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][0][2] = new VertexLevel3("Sub Sub Category", 0, 0,
query.mainGraph);
subsubCategories[4][0][3] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][0][4] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subCategories[4][1] = new VertexLevel2("Sub Category 1:", 0, 0, query.mainGraph);
subsubCategories[4][1][0] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][1][1] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][1][2] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][1][3] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][1][4] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subCategories[4][2] = new VertexLevel2("Sub Category 2:", 0, 0, query.mainGraph);
subsubCategories[4][2][0] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][2][1] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][2][2] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][2][3] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][2][4] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subCategories[4][3] = new VertexLevel2("Sub Category 3:", 0, 0, query.mainGraph);
subsubCategories[4][3][0] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][3][1] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][3][2] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][3][3] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][3][4] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subCategories[4][4] = new VertexLevel2("Sub Category 4:", 0, 0, query.mainGraph);
subsubCategories[4][4][0] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][4][1] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][4][2] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][4][3] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
subsubCategories[4][4][4] = new VertexLevel3("Sub Sub Category", 0, 0, query.mainGraph);
System.out.println("check");
for (int i=0; i<topCategories.length; i++) {
for (int j=0; j<subCategories[i].length; j++) {
if (topCategories[i] != null && subCategories[i][j] != null) {
if (topCategories[i].name != "null" && subCategories[i][j].name != "null")
top2subEdges[i][j] = new myEdge("just an edge", topCategories[i].GetVertex(), subCategories[i][j].GetVertex(), query.mainGraph);
for (int n=0; n<subsubCategories[i][j].length; n++) {
if (subCategories[i][j] != null && subsubCategories[i][j][n] != null)
if (subCategories[i][j].name != "null" && subsubCategories[i][j][n].name != "null") {
sub2subsubEdges[i][j][n] = new myEdge("just an edge", subCategories[i][j].GetVertex(), subsubCategories[i][j][n].GetVertex(), query.mainGraph);
severityCategories[i][j][n][0] = new VertexLevel3("Low", 0, 0, query.mainGraph);
severityCategories[i][j][n][1] = new VertexLevel3("Medium", 0, 0, query.mainGraph);
severityCategories[i][j][n][2] = new VertexLevel3("High", 0, 0, query.mainGraph);
}
for (int l=0; l<3; l++) {
if (subsubCategories[i][j][n] != null && severityCategories[i][j][n][l] != null)
if (subsubCategories[i][j][n].name != "null" && severityCategories[i][j][n][l].name != "null")
subsub2sevEdges[i][j][n][l] = new myEdge("just an edge", subsubCategories[i][j][n].GetVertex(), severityCategories[i][j][n][l].GetVertex(), query.mainGraph);
}
}
for (int i=0; i<topCategories.length; i++)
if (topCategories[i] != null) topCategories[i].ShowVertex(30 + i*200, 30);
query.mainGraph.clearSelection();
}
protected void clickGraph(int x, int y, MouseEvent e) {
Object cell = query.mainGraph.getFirstCellForLocation(x, y);
if (cell != null && e.getClickCount() == 1 && (e.getModifiers() & e.BUTTON3_MASK) != 0) {
query.popup.setBounds(x, y+140, 102, 140);
query.popup.setVisible(true);
for (int i=0; i<topCategories.length; i++) {
if (topCategories[i] != null)
if (topCategories[i].isClicked(x, y)) query.popup.workingVertex = topCategories[i];
for (int j=0; j<subCategories[i].length; j++) {
if (subCategories[i][j] != null)
if (subCategories[i][j].isClicked(x, y)) query.popup.workingVertex = subCategories[i][j];
for (int n=0; n<subsubCategories[i][j].length; n++) {
if (subsubCategories[i][j][n] != null)
if (subsubCategories[i][j][n].isClicked(x, y)) query.popup.workingVertex = subsubCategories[i][j][n];
}
if (cell != null && e.getClickCount() == 2) {
for (int i=0; i<topCategories.length; i++) {
boolean subCatClicked = false;
boolean subsubCatClicked = false;
boolean sevCatClicked = false;
for (int j=0; j<subCategories[i].length; j++) {
subsubCatClicked = false;
for (int n=0; n<subsubCategories[i][j].length; n++) {
sevCatClicked = false;
for (int l=0; l<3; l++) {
if (severityCategories[i][j][n][l] != null) {
if (severityCategories[i][j][n][l].isClicked(x, y)) {
sevCatClicked = true;
subsubCatClicked = true;
subCatClicked = true;
severityCategories[i][j][n][l].Select();
query.listVideos.setLocation(severityCategories[i][j][n][l].x, severityCategories[i][j][n][l].y + 70);
query.setPosition(query.listVideos, 0);
query.listVideos.setVisible(true);
} else {
severityCategories[i][j][n][l].unselect();
}
}
Figure imgf000465_0001
Figure imgf000466_0001
if (subCategories[i][j] != null) {
if (subCategories[i][j].isClicked(x, y)) {
subCategories[i][j].Select();
subCatClicked = true;
Figure imgf000466_0002
Page 13 ResultPanel.Java
package dcgui;
import java.awt.*;
import java.awt.geom.*;
import javax.swing.*;
//import com.jgraph.*;
//import com.jgraph.event.*;
//import com.jgraph.graph.*;
import java.util.Hashtable;
import java.awt.Rectangle;
import java.util.Map;
import java.lang.Math;
import java.io.*;
import java.util.Properties;
import javax.swing.event.*;
import java.net.*;
import javax.swing.text.*;
import xmlio.EchoFrame;
import xmlio.Roi;
public class ResultPanel extends JPanel {
private static ResultPanel instance;
public static synchronized ResultPanel getInstance() {
if (instance == null) instance = new ResultPanel();
return instance;
}
public ResultPanel() {
System.out.println("Result panel is empty");
}
/*
public class ResultPanel extends JPanel {
private static ResultPanel instance;
private JGraph resultGraph;
private GraphModel model;
private static int H1 = 280, W1 = 375;
private static int H = 170, W = 375;
public static synchronized ResultPanel getInstance() {
if (instance == null) instance = new ResultPanel();
return instance;
}
public ResultPanel() {
model = new DefaultGraphModel();
resultGraph = new JGraph(model);
resultGraph.setBackground(new Color(220, 220, 255));
resultGraph.setVisible(true);
resultGraph.setEditable(false);
resultGraph.setPreferredSize(new Dimension(W, H));
add(resultGraph);
public void update(EchoFrame ef) {
System.out.println("Updating the Result Panel");
String nodeName = new String("Node");
int numNodes = ef.getNumROIs();
int numValidNodes = ef.getNumValidROIs();
//myEdge[] edges;
myVertex[] vertices = new myVertex[numValidNodes];
//if (numValidNodes > 0)
myEdge[] edges = new myEdge[numValidNodes * (numValidNodes-1)/2];
int i, k, l;
int cnt = 0;
for (i=0; i < numNodes; i++) {
Roi roi = (Roi)ef.getROI(i+1);
if ((roi.getValidity()).equals("true")) {
ImageRoi imgRoi = new ImageRoi(roi);
Point2D.Double com = imgRoi.getCOM();
double xLoc = com.getX() * W / (double) W1;
double yLoc = com.getY() * H / (double) H1;
System.out.println("xLoc: " + xLoc + ", yLoc: " + yLoc);
//insert(roi, new Rectangle((int)(W/2.0 + xLoc), (int)(yLoc)));
vertices[cnt] = new myVertex(roi, (int)(xLoc), (int)(yLoc), resultGraph);
cnt++;
}
System.out.println("Number of vertices: " + vertices.length);
if (numValidNodes > 0) {
cnt = 0;
for (k=0; k < vertices.length - 1; k++) {
for (l=k+1; l < vertices.length; l++) {
edges[cnt] = new myEdge("just an edge", vertices[k].GetVertex(), vertices[l].GetVertex(), resultGraph);
cnt++;
for (i=0; i < vertices.length; i++) (vertices[i]).ShowVertex();
for (i=0; i < edges.length; i++) (edges[i]).ShowEdge();
}
private class myVertex implements java.io.Serializable {
private JGraph mainGraph;
private DefaultVertex vertex;
public String name;
public String nameSelected;
private boolean isDisplayed;
private boolean isSelected;
public int x, y;
public int width;
public int height;
public myVertex(Object obj, int x, int y, JGraph parent) {
mainGraph = parent;
width = 50;
height = 30;
isDisplayed = false;
isSelected = false;
this.x = x;
this.y = y;
name = obj.toString();
// Position in MODEL coordinates
Object pos = mainGraph.toModel(x, y);
// Create a vertex using the passed in user object
vertex = new DefaultVertex(obj, pos);
// Set the border attribute
Object key, val;
key = VertexRenderer.BORDER_ATTRIBUTE;
val = BorderFactory.createBevelBorder(0, new Color(160, 160, 160), Color.blue);
vertex.setAttribute(key, val);
key = VertexRenderer.FOREGROUND_ATTRIBUTE;
val = Color.white;
vertex.setAttribute(key, val);
key = VertexRenderer.BACKGROUND_ATTRIBUTE;
val = Color.black;
vertex.setAttribute(key, val);
com.jgraph.plaf.GraphUI ui = mainGraph.getUI();
Dimension d = new Dimension(width, height);
vertex.setSize(mainGraph.toModel(d.width, d.height));
public void ShowVertex() { // (int x, int y)
Figure imgf000469_0001
Object[]{vertex});
isDisplayed = true;
}
public void HideVertex() {
mainGraph.getDriver().remove(mainGraph, new Object[]{vertex});
isDisplayed = false;
}
public DefaultVertex GetVertex() { return vertex; }
private class myEdge implements java.io.Serializable {
private JGraph mainGraph;
private DefaultVertex vertex1;
private DefaultVertex vertex2;
private DefaultEdge edge;
private String name;
private boolean isDisplayed;
public myEdge(Object obj, DefaultVertex vert1, DefaultVertex vert2, JGraph parent) {
mainGraph = parent;
vertex1 = vert1;
vertex2 = vert2;
name = (String) obj;
edge = new DefaultEdge(obj, vertex1, vertex2);
public void ShowEdge() {
Point s = mainGraph.toScreen(vertex1.getDefaultPort());
Point t = mainGraph.toScreen(vertex2.getDefaultPort());
if (s.y == 0 || s.x == 0 || t.y == 0 || t.x == 0) return;
//s.y = s.y + 30;
//t.y = t.y - 30;
Object[] points = new Object[]{mainGraph.toModel(s.x, s.y), mainGraph.toModel(t.x, t.y)};
edge = new DefaultEdge(name, points);
Object key, val;
key = EdgeRenderer.COLOR_ATTRIBUTE;
val = Color.white;
edge.setAttribute(key, val);
EdgeRenderer tmp = new EdgeRenderer();
isDisplayed = true;
mainGraph.getDriver().add(mainGraph, new Object[]{edge});
public void HideEdge() {
isDisplayed = false;
mainGraph.getDriver().remove(mainGraph, new Object[]{edge});
}
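The commented-out `update()` above links every pair of valid vertices, which is why it sizes the edge array as `numValidNodes * (numValidNodes-1)/2`, and it rescales each ROI centre of mass from the W1 x H1 model frame to the W x H display panel. Both computations are sketched below (the method names are illustrative, not from the listing):

```java
public class GraphMathSketch {
    // Edge count of a complete graph on n vertices, matching the
    // allocation numValidNodes * (numValidNodes - 1) / 2 above.
    static int completeEdges(int n) { return n * (n - 1) / 2; }

    // Rescale one coordinate from a model extent to a display extent,
    // as done for each ROI centre of mass (W1 x H1 -> W x H).
    static double rescale(double v, double modelExtent, double displayExtent) {
        return v * displayExtent / modelExtent;
    }

    public static void main(String[] args) {
        System.out.println(completeEdges(4));             // 6 pairwise edges
        System.out.println(rescale(140.0, 280.0, 170.0)); // 85.0 for H1=280, H=170
    }
}
```

The nested k/l loops in `update()` enumerate exactly those n(n-1)/2 unordered pairs, so `cnt` ends equal to `completeEdges(numValidNodes)`.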
package dcgui;
import java.io.*;
import java.awt.*;
import java.awt.image.*;
import java.lang.Math;
import java.util.ArrayList;
import java.awt.event.*;
import java.awt.geom.*;
import javax.swing.*;
import javax.swing.event.*;
import javax.media.jai.*;
import com.sun.media.jai.codec.*;
import xmlio.EchoFrame;
import xmlio.Roi;
import xmlio.Neighbor;
public class ROIAnalyzer implements GUIConstants {
private static ROIAnalyzer instance;
private EchoFrame echoFrame;
private ArrayList imRois;
public ROIAnalyzer() {
imRois = new ArrayList();
}
public static synchronized ROIAnalyzer getInstance() {
if (instance == null) instance = new ROIAnalyzer();
return instance;
}
public void initialize(EchoFrame ef) {
try {
imRois.clear();
echoFrame = ef;
analyzeValidROIs();
} catch (Exception e) {
System.out.println("Error.");
e.printStackTrace(System.err);
}
}
private void analyzeValidROIs() {
int numRoi = echoFrame.getNumROIs();
if (numRoi > 0) {
// Create the image rois
for (int i=0; i < numRoi; i++) {
Roi roi = echoFrame.getROI(i+1);
String roiValid = roi.getValidity();
if (roiValid.equals(TRUE)) {
ImageRoi currentRoi = new ImageRoi(roi);
// add all other valid ROIs as neighbors
for (int j=0; j < numRoi; j++) {
Roi roi2 = echoFrame.getROI(j+1);
String roiValid2 = roi2.getValidity();
if (i != j) {
if (roiValid2.equals(TRUE))
currentRoi.addNeighbor(roi2);
System.out.println("Adding ROI " + roi2.getLabel() + " as neighbor of ROI " + roi.getLabel());
}
} imRois.add(currentRoi);
} }
} }
// Store the features of all rois in their respective files
public void writeRoisToDisk() {
try {
for (int i=0; i < imRois.size(); i++) {
ImageRoi currentRoi = (ImageRoi)imRois.get(i);
currentRoi.storeToFile();
}
SelectionPanel selection = SelectionPanel.getInstance();
String xmlName = selection.getSelectionName();
System.out.println("Writing modified XML to file: " + xmlName);
echoFrame.writeToFile(xmlName);
} catch (Exception e) {
System.out.println("Error.");
e.printStackTrace(System.err);
}
}
}
/*
* @(#) ROIContextJAI.java 1.0 2000/10/3
* Copyright (c) 2000 Larry Rodrigues
*/
package com.vistech.jai.roi;
import java.awt.*;
import java.awt.color.*;
import java.awt.image.*;
import java.awt.geom.*;
import java.io.*;
import java.util.*;
import javax.media.jai.*;
/** A convenient class that represents the context of an ROI. It contains the image
* in which the ROI is deposited. All the ROIs deposited on an image have the same ROI context.
* @version 1.0 3 Oct 2000
* @author Lawrence Rodrigues
*/
public class ROIContextJAI {
protected PlanarImage origImage;
protected TiledImage displayImage;
protected Vector roiCollection = new Vector();
protected ROI currentROI;
private AffineTransform atx = new AffineTransform();
public ROIContextJAI(PlanarImage img) {
origImage = img;
roiCollection = new Vector();
reset();
}
public PlanarImage getOriginalImage() { return origImage; }
public TiledImage getDisplayImage() { return displayImage; }
protected void createDisplayImage() {
SampleModel sampleModel = origImage.getSampleModel();
ColorModel colorModel = origImage.getColorModel();
displayImage = new TiledImage(origImage.getMinX(), origImage.getMinY(),
origImage.getWidth(), origImage.getHeight(),
origImage.getTileGridXOffset(), origImage.getTileGridYOffset(),
sampleModel, colorModel);
displayImage.setData(origImage.copyData());
}
public WritableRaster getOriginalRaster() {
if (origImage != null) return origImage.copyData();
return null;
}
public WritableRaster getDisplayRaster() {
if (displayImage != null) return displayImage.copyData();
return null;
}
public void setCurrentROI(ROI roi) { currentROI = roi; }
public ROI getCurrentROI() { return currentROI; }
public void addROI(ROI roi) { roiCollection.add(roi); }
public void removeROI(ROI roi) { roiCollection.remove(roi); }
public void depositROI(ROIShape roi) {
displayImage.setProperty("ROI", roiCollection);
}
public String[] getComponentNames() {
if (origImage == null) return null;
ColorModel cm = origImage.getColorModel();
ColorSpace cs = cm.getColorSpace();
int numComponents = cs.getNumComponents();
String[] componentName = new String[numComponents];
for (int i=0; i<numComponents; i++) {
componentName[i] = cs.getName(i);
} return componentName;
}
public void reset() {
displayImage = null;
createDisplayImage();
}
}
java -classpath /home/shahram/cardio/source/datacollect:/usr/java/jars/domsdk.jar:/home/shahram/package/imaging/classes/ dcgui.LaunchPad
package dcgui;
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.event.*;
import javax.swing.tree.*;
import java.io.File;
import java.lang.*;
import java.lang.String;
import java.util.Random;
public class SelectionPanel extends JPanel implements GUIConstants
{
private static SelectionPanel instance;
private static final String rootDirName = "/home/shahram/cardio/source";
private static final String DEFAULT_IMG = "test.ppm";
private DisplayPanel display;
private JButton submitButton;
private JButton resetButton;
private JTree fileTree;
private String filename;
private MutableTreeNode root;
public static synchronized SelectionPanel getInstance() {
if (instance == null) instance = new SelectionPanel();
return instance;
}
public SelectionPanel() {
super();
setLayout(null);
setBackground(Color.white);
try { createFileLocatorTree(); } catch (Exception e) { e.printStackTrace(); }
display = DisplayPanel.getInstance();
createSubmitButton();
createResetButton();
Insets insets = getInsets();
JScrollPane scrollPane = new JScrollPane(fileTree);
scrollPane.setBounds(insets.left, insets.top, 285, 435); // 200 x 350
submitButton.setBounds(insets.left, 435+insets.top, 285, 50); // 200 x 50
resetButton.setBounds(insets.left, 485+insets.top, 285, 50); // 200 x 50
add(scrollPane);
add(submitButton);
add(resetButton);
//filename = new String(DEFAULT_IMG);
}
// Create file locator tree private void createFileLocatorTree() throws Exception {
File rootDir = new File(rootDirName);
root = new DefaultMutableTreeNode(rootDirName);
initializeFileLocator(rootDir, root);
// create the JTree
final DefaultTreeModel model = new DefaultTreeModel(root);
fileTree = new JTree(model);
fileTree.addTreeSelectionListener(new TreeSelectionListener() {
public void valueChanged(TreeSelectionEvent e) {
DefaultMutableTreeNode node = (DefaultMutableTreeNode) fileTree.getLastSelectedPathComponent();
if (node == null) return;
TreeNode[] nodePath = node.getPath();
if (node.isLeaf()) {
StringBuffer path = new StringBuffer(nodePath[0].toString());
for (int i=1; i < nodePath.length; i++) {
path.append("/");
path.append(nodePath[i].toString());
}
setFilename(path.toString());
} }
});
} private void setFilename(String fname) { filename = new String(fname); }
// Setting up the initial appearance of the file locator
void initializeFileLocator(File rootDir, MutableTreeNode root) {
int counter = 0;
String[] contents = rootDir.list();
int L = contents.length;
for (int i=0; i < L; i++) {
File current = new File(rootDir, contents[i]);
if (current.isHidden() == false) { // if only it's not a hidden file/directory
if (current.isFile() == true) {
if (contents[i].endsWith(validFileExtension) == true)
root.insert(new DefaultMutableTreeNode(contents[i]), counter++);
} else if (validDirectory(current) == true) { // current element is a directory
MutableTreeNode sub_dir = new DefaultMutableTreeNode(contents[i]);
root.insert(sub_dir, counter++);
initializeFileLocator(current, sub_dir);
}
}
}
}
// Recursively goes down the directory structure starting with
// the provided node to see if there are any files with the
// specified file extension. Returns 'true' if positive.
private boolean validDirectory(File dir) {
String[] contents = dir.list();
for (int j=0; j < contents.length; j++) {
File sub_dir = new File(dir, contents[j]);
if (sub_dir.isHidden() == false) {
if (sub_dir.isDirectory() == true) {
boolean result = this.validDirectory(sub_dir);
if (result == true) return (true);
} else {
if (contents[j].endsWith(validFileExtension) == true) {
return (true);
}
}
}
}
return (false);
}
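The validDirectory method above recursively descends subdirectories and answers whether any descendant file carries the accepted extension. The same logic, sketched against an in-memory tree (the `Entry` type is a hypothetical stand-in for `File`) so the recursion can be checked without touching the filesystem:

```java
import java.util.List;

public class ValidDirSketch {
    // Hypothetical stand-in for File: a name plus children (null for plain files).
    record Entry(String name, List<Entry> children) {
        boolean isDirectory() { return children != null; }
    }

    // Mirrors validDirectory(): true if any descendant file ends with ext.
    static boolean containsValidFile(Entry dir, String ext) {
        for (Entry e : dir.children()) {
            if (e.isDirectory()) {
                if (containsValidFile(e, ext)) return true; // recurse into subdirectory
            } else if (e.name().endsWith(ext)) {
                return true;                                // matching file found here
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Entry tree = new Entry("root", List.of(
            new Entry("notes.txt", null),
            new Entry("sub", List.of(new Entry("study1.xml", null)))));
        System.out.println(containsValidFile(tree, ".xml")); // true: found one level down
        System.out.println(containsValidFile(tree, ".ppm")); // false: no such file anywhere
    }
}
```

Short-circuiting on the first match, as both the original and this sketch do, prunes the traversal: sibling subtrees after a hit are never visited.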
// Create button and handle interactions
private void createSubmitButton() {
submitButton = new JButton("Get Selection");
submitButton.setEnabled(true);
submitButton.setActionCommand(setInputFile);
submitButton.addActionListener(display);
}
private void createResetButton() {
resetButton = new JButton("Reset!");
resetButton.setEnabled(true);
//resetButton.addActionListener(LaunchPad.getInstance());
}
public String getSelectionName() { return filename; }
public void setDisplay() { display = DisplayPanel.getInstance(); }
}
DATA COLLECTION / DESCGEN
#!/usr/bin/perl -w
$srcdir = "/home/shahram/cardio/source/chamber_segment/testimg/echo"; $src_suffix = "/keys";
$start = 1 ; $stop = 7; for ($i=$start; $i < $stop; $i++ )
{
$src = $srcdir.$i.$src_suffix;
opendir (SRCDIR, $src) || die "no directory: $!";
while ($imgIn = readdir(SRCDIR)) {
$inFile = $src."/".$imgIn;
print "$inFile\n";
system "java -classpath /home/shahram/cardio/source/datacollect/:/usr/java/jars/domsdk.jar:/usr/java/jars/dom.jar descGen.FrameAttrib $inFile";
} closedir (SRCDIR);
}
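The Perl driver above enumerates the per-study key-frame directories (apparently `$srcdir . $i . $src_suffix`, i.e. `.../echo1/keys` through `.../echo6/keys`) and hands every entry to `descGen.FrameAttrib`. The path construction it relies on is plain string concatenation, sketched here in Java with a hypothetical root directory:

```java
public class KeyDirSketch {
    // Build the per-study key-frame directories the Perl driver iterates over:
    // srcDir . i . suffix for i in [start, stop), matching the script's loop bounds.
    static String[] keyDirs(String srcDir, String suffix, int start, int stop) {
        String[] dirs = new String[stop - start];
        for (int i = start; i < stop; i++)
            dirs[i - start] = srcDir + i + suffix;
        return dirs;
    }

    public static void main(String[] args) {
        String[] dirs = keyDirs("/data/echo", "/keys", 1, 7); // hypothetical root
        System.out.println(dirs.length); // 6 study directories
        System.out.println(dirs[0]);     // /data/echo1/keys
    }
}
```

Note the half-open range: with `$start = 1` and `$stop = 7`, studies 1 through 6 are processed, never study 7.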
package descGen;
import java.io.File;
import java.lang.String;
import java.util.StringTokenizer;
import java.util.Vector;
import javax.xml.parsers.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.*;
import org.xml.sax.helpers.*;
import org.w3c.dom.*;
import org.w3c.dom.DOMException;
import com.docuverse.dom.DOM;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
public class FrameAttrib implements XMLConstants
{
// define constants
private static final String keysDir = "keys";
private static final String regionDir = "regions";
private static final String descDir = "desc";
private static final String suffix = "_seg"; // was "_seg_"
private static final String echoPrefix = "echo";
private static final String segPrefix = "_seg"; // "seg";
private static final String kfPrefix = "kf";

// Fields of the class
private static String inputName;
private static String stemName;
private static String pathName;
private static String viewName;
private static int echoNum;
private static int segNum;
private static int kfNum;
private static int numRegions;
private static Document xmlDoc;

public FrameAttrib (String frame) {
    parseInput(frame);
}

private void parseInput(String name) {
    String delim = new String("/");
    StringTokenizer st = new StringTokenizer(name, delim);
    int numTokens = st.countTokens();

    // Find the name of the input file
    for (int i=0; i < numTokens-1; i++) {
        String word = st.nextToken();
    }
    inputName = new String(st.nextToken());

    // Get the path
    StringBuffer strBuf = new StringBuffer(name);
    int location = strBuf.indexOf(keysDir);
    pathName = new String(name.substring(0, location));

    // Get the stem name
    StringTokenizer st2 = new StringTokenizer(inputName, ".");
    stemName = new String(st2.nextToken());

    // Find the values of the different attributes of the input frame
    getFrameFields();

    // Get regions associated with input frame
    getNumRegions();
}
/*
 * Find the specifications of the input frame.
 */
private void getFrameFields() {
    String delim = new String("_");
    StringTokenizer st = new StringTokenizer(stemName, delim);

    //if (st.countTokens() == 1) {
    viewName = "";
    echoNum = 0;
    segNum = 0;
    kfNum = 0;

    // NOTE: Commented out 12/16/03 - Didn't want to tokenize frame names! turn it back!
    //} else {
    // Get the different fields
    // String echoName = new String( st.nextToken() );
    // String segName = new String( st.nextToken() );
    // String kfName = new String( st.nextToken() );
    // viewName = new String( st.nextToken() );

    // String echoNumStr = new String( echoName.substring( echoPrefix.length(), echoName.length() ));
    // echoNum = Integer.valueOf(echoNumStr).intValue();

    // String segNumStr = new String( segName.substring( segPrefix.length(), segName.length() ));
    // segNum = Integer.valueOf(segNumStr).intValue();

    // String kfNumStr = new String( kfName.substring( kfPrefix.length(), kfName.length() ));
    // kfNum = Integer.valueOf(kfNumStr).intValue();
    //}
}
/*
 * Find how many region masks are associated with
 * the current input echo frame and update the
 * value of numRegions.
 */
private void getNumRegions() {
    // Open the regions directory for this frame
    String regionDirName = new String(pathName + regionDir);
    File regionDir = new File(regionDirName);

    // See if regions corresponding to the frame exist
    String[] contents = regionDir.list();
    numRegions = 0;
    for (int i=0; i < contents.length; i++) {
        if (contents[i].startsWith(stemName) == true) {
            numRegions++;
        }
    }
}

public static String getPath() { return pathName; }
public static String getName() { return inputName; }
public static String getEchoNum() { return String.valueOf(echoNum); }
public static String getSegNum() { return String.valueOf(segNum); }
public static String getKFNum() { return String.valueOf(kfNum); }
public static String getView() { return viewName; }
public static int getNumROIs() { return numRegions; }

public static String getROIPath (int I) {
    StringBuffer path = new StringBuffer(pathName);
    path.append(regionDir);
    path.append("/");
    path.append(stemName);
    path.append(suffix);
    path.append(String.valueOf(I+1));
    path.append(".pgm");
    return path.toString();
}

public static void writeToXML() throws Exception {
    // Make the output filename
    StringBuffer outFileName = new StringBuffer(pathName);
    outFileName.append(descDir);
    outFileName.append("/");
    outFileName.append(stemName);
    outFileName.append(".xml");

    // Create a XML document for current file
    DocumentBuilderFactory dbf = ParseXML.getDocumentBuilderFactory();
    DocumentBuilder db = ParseXML.getDocumentBuilder(dbf);
    xmlDoc = db.newDocument();

    // Write frame info to XML structure
    populateXMLDoc();

    // Output xml to file
    DOM dom = new DOM();
    dom.writeDocument(xmlDoc, outFileName.toString());

    /*
     * NOTE: when using the following block to write the XML to disk
     * it doesn't put the closing tags in the XML file.
    TransformerFactory tFactory = TransformerFactory.newInstance();
    Transformer transformer = tFactory.newTransformer();
    DOMSource source = new DOMSource(xmlDoc);
    StreamResult result = new StreamResult(outFileName.toString());
    transformer.transform(source, result);
    */
}
/*
 * Populates the Document with frame info
 */
private static void populateXMLDoc() throws Exception {
    // Root of the xml document
    Element root = xmlDoc.createElement(ECHO_FRAME);
    root.setAttribute(FRAME_PATH, getPath());
    root.setAttribute(FRAME_NAME, getName());
    root.setAttribute(ECHO_NUM, getEchoNum());
    root.setAttribute(SEG_NUM, getSegNum());
    root.setAttribute(KF_NUM, getKFNum());
    root.setAttribute(FRAME_VIEW, getView());
    xmlDoc.appendChild(root);

    // Append ROI's to the echoFrame
    int numROIs = getNumROIs();
    for (int i=0; i < numROIs; i++) {
        Element roi = xmlDoc.createElement(ROI);
        roi.setAttribute(ROI_PATH, getROIPath(i));
        roi.setAttribute(ROI_INDEX, String.valueOf(i+1));
        roi.setAttribute(ROI_LABEL, "UNKN");
        roi.setAttribute(ROI_VALID, "FALSE");
        root.appendChild(roi);
    }
}

public static void main(String args[]) {
    try {
        // Find attributes of input echo frame
        FrameAttrib FA = new FrameAttrib(args[0]);

        // Print them out
        //System.out.println(FA.getPath());
        //System.out.println(FA.getName());
        //System.out.println(FA.getEchoNum());
        //System.out.println(FA.getSegNum());
        //System.out.println(FA.getKFNum());
        //System.out.println(FA.getView());
        //System.out.println(FA.getNumROIs());

        // Write them in xml format to disc
        FA.writeToXML();
    } catch (TransformerConfigurationException tce) {
        // Error generated by the parser
        System.out.println("\n** Transformer Factory error");
        System.out.println("   " + tce.getMessage());

        // Use the contained exception, if any
        Throwable x = tce;
        if (tce.getException() != null)
            x = tce.getException();
        x.printStackTrace();
    } catch (TransformerException te) {
        // Error generated by the parser
        System.out.println("\n** Transformation error");
        System.out.println("   " + te.getMessage());

        // Use the contained exception, if any
        Throwable x = te;
        if (te.getException() != null)
            x = te.getException();
        x.printStackTrace();
    } catch (DOMException de) {
        System.out.println("DOM exception.");
        de.printStackTrace(System.err);
    } catch (Exception e) {
        System.out.println("Error.");
        e.printStackTrace(System.err);
    }
}
}

package descGen;

import java.io.File;
import java.lang.String;
import java.util.StringTokenizer;
import java.util.Vector;
import javax.xml.parsers.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.*;
import org.xml.sax.helpers.*;
import org.w3c.dom.*;
import org.w3c.dom.DOMException;
import com.docuverse.dom.DOM;

public class FrameAttrib implements XMLConstants
{
// define constants
private static final String keysDir = "keys";
private static final String regionDir = "regions";
private static final String descDir = "desc";
private static final String suffix = "_seg_";
private static final String echoPrefix = "echo";
private static final String segPrefix = "_seg"; // "seg";
private static final String kfPrefix = "kf";

// Fields of the class
private static String inputName;
private static String stemName;
private static String pathName;
private static String viewName;
private static int echoNum;
private static int segNum;
private static int kfNum;
private static int numRegions;
private static Document xmlDoc;

public FrameAttrib (String frame) {
    parseInput(frame);
}

private void parseInput(String name) {
    String delim = new String("/");
    StringTokenizer st = new StringTokenizer( name, delim );
    int numTokens = st.countTokens();

    // Find the name of the input file
    for (int i=0; i < numTokens-1; i++) {
        String word = st.nextToken();
    }
    inputName = new String( st.nextToken() );

    // Get the path
    StringBuffer strBuf = new StringBuffer(name);
    int location = strBuf.indexOf(keysDir);
    pathName = new String( name.substring(0, location) );

    // Get the stem name
    StringTokenizer st2 = new StringTokenizer( inputName, "." );
    stemName = new String( st2.nextToken() );

    // Find the values of the different attributes of the input frame
    getFrameFields();

    // Get regions associated with input frame
    getNumRegions();
}
/*
 * Find the specifications of the input frame.
 */
private void getFrameFields() {
    String delim = new String("_");
    StringTokenizer st = new StringTokenizer( stemName, delim );

    // Get the different fields
    String echoName = new String( st.nextToken() );
    String segName = new String( st.nextToken() );
    String kfName = new String( st.nextToken() );
    viewName = new String( st.nextToken() );

    String echoNumStr = new String( echoName.substring( echoPrefix.length(), echoName.length() ) );
    echoNum = Integer.valueOf(echoNumStr).intValue();

    String segNumStr = new String( segName.substring( segPrefix.length(), segName.length() ) );
    segNum = Integer.valueOf(segNumStr).intValue();

    String kfNumStr = new String( kfName.substring( kfPrefix.length(), kfName.length() ) );
    kfNum = Integer.valueOf(kfNumStr).intValue();
}
/*
 * Find how many region masks are associated with
 * the current input echo frame and update the
 * value of numRegions.
 */
private void getNumRegions() {
    // Open the regions directory for this frame
    String regionDirName = new String(pathName + regionDir);
    File regionDir = new File(regionDirName);

    // See if regions corresponding to the frame exist
    String[] contents = regionDir.list();
    numRegions = 0;
    for (int i=0; i < contents.length; i++) {
        if (contents[i].startsWith(stemName) == true) {
            numRegions++;
        }
    }
}

public static String getPath() { return pathName; }
public static String getName() { return inputName; }
public static String getEchoNum() { return String.valueOf(echoNum); }
public static String getSegNum() { return String.valueOf(segNum); }
public static String getKFNum() { return String.valueOf(kfNum); }
public static String getView() { return viewName; }
public static int getNumROIs() { return numRegions; }

public static String getROIPath (int I) {
    StringBuffer path = new StringBuffer(pathName);
    path.append(regionDir);
    path.append("/");
    path.append(stemName);
    path.append(suffix);
    path.append(String.valueOf(I+1));
    path.append(".pgm");
    return path.toString();
}

public static void writeToXML() throws Exception {
    // Make the output filename
    StringBuffer outFileName = new StringBuffer(pathName);
    outFileName.append(descDir);
    outFileName.append("/");
    outFileName.append(stemName);
    outFileName.append(".xml");
    System.out.println(outFileName.toString());

    // Create a XML document for current file
    DocumentBuilderFactory dbf = ParseXML.getDocumentBuilderFactory();
    DocumentBuilder db = ParseXML.getDocumentBuilder(dbf);
    xmlDoc = db.newDocument();
    //DOM dom = new DOM();
    //xmlDoc = dom.createDocument("XML");

    // Write frame info to XML structure
    populateXMLDoc();

    // Output xml to file
    DOM domOut = new DOM();
    domOut.writeDocument(xmlDoc, outFileName.toString());
    //dom.writeDocument(xmlDoc, outFileName.toString());
}
/*
 * Populates the Document with frame info
 */
private static void populateXMLDoc() throws Exception {
    // Root of the xml document
    Element root = xmlDoc.createElement(ECHO_FRAME);
    root.setAttribute(FRAME_PATH, getPath());
    root.setAttribute(FRAME_NAME, getName());
    root.setAttribute(ECHO_NUM, getEchoNum());
    root.setAttribute(SEG_NUM, getSegNum());
    root.setAttribute(KF_NUM, getKFNum());
    root.setAttribute(FRAME_VIEW, getView());
    xmlDoc.appendChild(root);

    // Append ROI's to the echoFrame
    int numROIs = getNumROIs();
    for (int i=0; i < numROIs; i++) {
        Element roi = xmlDoc.createElement(ROI);
        roi.setAttribute(ROI_PATH, getROIPath(i));
        roi.setAttribute(ROI_INDEX, String.valueOf(i+1));
        roi.setAttribute(ROI_LABEL, "UNKN");
        roi.setAttribute(ROI_VALID, "FALSE");
        root.appendChild(roi);
    }
}

public static void main(String args[]) {
    try {
        // Find attributes of input echo frame
        FrameAttrib FA = new FrameAttrib(args[0]);

        // Print them out
        System.out.println(FA.getPath());
        System.out.println(FA.getName());
        System.out.println(FA.getEchoNum());
        System.out.println(FA.getSegNum());
        System.out.println(FA.getKFNum());
        System.out.println(FA.getView());
        System.out.println(FA.getNumROIs());

        // Write them in xml format to disc
        FA.writeToXML();
    } catch (DOMException de) {
        System.out.println("DOM exception.");
        de.printStackTrace(System.err);
    } catch (Exception e) {
        System.out.println("Error.");
        e.printStackTrace(System.err);
    }
}
}
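For reference, a descriptor written by writeToXML() above has roughly the following shape, given the attribute names defined in XMLConstants. The path, numbers, and view value shown here are made-up examples, not output from an actual run:

```xml
<echoFrame path="/data/echo1/" name="echo1_seg2_kf3_A4C.jpg"
           echoNum="1" segNum="2" kfNum="3" view="A4C">
  <ROI roiPath="/data/echo1/regions/echo1_seg2_kf3_A4C_seg_1.pgm"
       index="1" label="UNKN" valid="FALSE"/>
</echoFrame>
```

Each ROI element starts out with label "UNKN" and valid "FALSE"; the RegionValidate tool later in this listing reads these attributes back to find unvalidated regions.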
package descGen;
// JAXP packages
import javax.xml.parsers.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.*;
import org.xml.sax.helpers.*;
import org.w3c.dom.*;
import java.util.*;
import java.io.*;

public class ParseXML
{
/** All output will use this encoding */
static final String outputEncoding = "UTF-8";

public static Document getDocument(InputStream input) throws Exception
{
    DocumentBuilderFactory dbf = getDocumentBuilderFactory();
    DocumentBuilder db = getDocumentBuilder(dbf);

    // parse the input file
    Document doc = null;
    doc = db.parse(input, "file:/home/shahram/cardio/project/datacollect/xmlio/");
    return doc;
}

public static DocumentBuilderFactory getDocumentBuilderFactory() throws Exception
{
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    dbf.setValidating(false);
    dbf.setIgnoringComments(true);
    dbf.setCoalescing(false);
    dbf.setExpandEntityReferences(false);
    dbf.setIgnoringElementContentWhitespace(true);
    dbf.setNamespaceAware(true);
    return dbf;
}

public static DocumentBuilder getDocumentBuilder(DocumentBuilderFactory dbf) throws Exception
{
    // create a DocumentBuilder that satisfies the constraints
    // specified by the DocumentBuilderFactory
    DocumentBuilder db = null;
    db = dbf.newDocumentBuilder();

    // set an ErrorHandler before parsing
    db.setErrorHandler(new MyErrorHandler(new PrintWriter(
        new OutputStreamWriter(System.err))));
    return db;
}

public static void printDOMNode(Node n, Writer writer) throws IOException
{
    switch (n.getNodeType())
    {
    case Node.ELEMENT_NODE:
        writer.write("<" + n.getNodeName());
        NamedNodeMap atts = n.getAttributes();
        Node node;
        for (int i = 0; i < atts.getLength(); i++)
        {
            node = atts.item(i);
            writer.write(" " + node.getNodeName() + "=\"" + node.getNodeValue() + "\"");
        }
        if (n.hasChildNodes())
        {
            writer.write(">");
            for (node = n.getFirstChild(); node != null; node = node.getNextSibling())
            {
                printDOMNode(node, writer);
            }
            writer.write("</" + n.getNodeName() + ">");
        }
        else
        {
            writer.write("/>");
        }
        break;
    case Node.TEXT_NODE:
        String val = n.getNodeValue();
        if (val != null && !val.equals("null"))
        {
            writer.write(makeLegal(val));
        }
        break;
    default:
        for (node = n.getFirstChild(); node != null; node = node.getNextSibling())
        {
            printDOMNode(node, writer);
        }
    }
}

public static String makeLegal(String val)
{
    // Check to see if we need to do anything
    boolean foundGTLT = false;
    char[] illegalChars = new char[] {'<', '>', '&'};
    for (int i=0; i < illegalChars.length; i++)
        if (val.indexOf((int)illegalChars[i]) != -1)
        {
            foundGTLT = true;
            break;
        }
    if (foundGTLT)
    {
        StringBuffer sb = new StringBuffer(val.length());
        int len = val.length();
        for (int i=0; i < len; i++)
        {
            char c = val.charAt(i);
            if (c == '<') sb.append("&lt;");
            else if (c == '>') sb.append("&gt;");
            else if (c == '&') sb.append("&amp;");
            else sb.append(c);
        }
        val = sb.toString();
    }
    return val;
}
// Error handler to report errors and warnings
// TAKEN STRAIGHT FROM JAXP SAMPLES
public static class MyErrorHandler implements ErrorHandler
{
    /** Error handler output goes here */
    private PrintWriter out;

    MyErrorHandler(PrintWriter out) { this.out = out; }

    /**
     * Returns a string describing parse exception details
     */
    private String getParseExceptionInfo(SAXParseException spe) {
        String systemId = spe.getSystemId();
        if (systemId == null) {
            systemId = "null";
        }
        String info = "URI=" + systemId +
            " Line=" + spe.getLineNumber() +
            ": " + spe.getMessage();
        return info;
    }

    // The following methods are standard SAX ErrorHandler methods.
    // See SAX documentation for more info.
    public void warning(SAXParseException spe) throws SAXException {
        out.println("Warning: " + getParseExceptionInfo(spe));
    }

    public void error(SAXParseException spe) throws SAXException {
        String message = "Error: " + getParseExceptionInfo(spe);
        throw new SAXException(message);
    }

    public void fatalError(SAXParseException spe) throws SAXException {
        String message = "Fatal Error: " + getParseExceptionInfo(spe);
        throw new SAXException(message);
    }
}
}
// For write operation
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.FactoryConfigurationError;
import javax.xml.parsers.ParserConfigurationException;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import org.w3c.dom.Document;
import org.w3c.dom.DOMException;
import java.io.*;

public class TransformationApp
{
    // Global value so it can be ref'd by the tree-adapter
    static Document document;

    public static void main(String argv[])
    {
        if (argv.length != 1) {
            System.err.println("Usage: java TransformationApp filename");
            System.exit(1);
        }

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        //factory.setNamespaceAware(true);
        //factory.setValidating(true);
        try {
            File f = new File(argv[0]);
            DocumentBuilder builder = factory.newDocumentBuilder();
            document = builder.parse(f);

            // Use a Transformer for output
            TransformerFactory tFactory = TransformerFactory.newInstance();
            Transformer transformer = tFactory.newTransformer();

            DOMSource source = new DOMSource(document);
            StreamResult result = new StreamResult(System.out);
            transformer.transform(source, result);
        } catch (TransformerConfigurationException tce) {
            // Error generated by the parser
            System.out.println("\n** Transformer Factory error");
            System.out.println("   " + tce.getMessage());

            // Use the contained exception, if any
            Throwable x = tce;
            if (tce.getException() != null)
                x = tce.getException();
            x.printStackTrace();
        } catch (TransformerException te) {
            // Error generated by the parser
            System.out.println("\n** Transformation error");
            System.out.println("   " + te.getMessage());

            // Use the contained exception, if any
            Throwable x = te;
            if (te.getException() != null)
                x = te.getException();
            x.printStackTrace();
        } catch (SAXParseException spe) {
            // Error generated by the parser
            System.out.println("\n** Parsing error"
                + ", line " + spe.getLineNumber()
                + ", uri " + spe.getSystemId());
            System.out.println("   " + spe.getMessage());

            // Use the contained exception, if any
            Exception x = spe;
            if (spe.getException() != null)
                x = spe.getException();
            x.printStackTrace();
        } catch (SAXException sxe) {
            // Error generated by this application
            // (or a parser-initialization error)
            Exception x = sxe;
            if (sxe.getException() != null)
                x = sxe.getException();
            x.printStackTrace();
        } catch (ParserConfigurationException pce) {
            // Parser with specified options can't be built
            pce.printStackTrace();
        } catch (IOException ioe) {
            // I/O error
            ioe.printStackTrace();
        }
    } // main
}

package descGen;

interface XMLConstants {
    String ECHO_FRAME = "echoFrame";
    String FRAME_PATH = "path";
    String FRAME_NAME = "name";
    String FRAME_VIEW = "view";
    String ECHO_NUM = "echoNum";
    String SEG_NUM = "segNum";
    String KF_NUM = "kfNum";

    String ROI = "ROI";
    String ROI_INDEX = "index";
    String ROI_LABEL = "label";
    String ROI_PATH = "roiPath";
    String ROI_VALID = "valid";
    String ROI_COMX = "comX";
    String ROI_COMY = "comY";
    String ROI_AREA = "area";
    String ROI_ANGLE = "angle";
    String ROI_FEAT = "fFile";

    String PLA = "PLA";
    String RVIT = "RVIT";
    String PLAP = "PLAP";
    String PSAB = "PSAB";
    String PSAPM = "PSAPM";
    String PSAM = "PSAM";
    String PSAX = "PSAX";
    String A2C = "A2C";
    String A3C = "A3C";
    String A4C = "A4C";
    String A5C = "A5C";
    String SC4C = "SC4C";
    String SCIVC = "SCIVC";

    String LV = "LV";
    String RV = "RV";
    String LA = "LA";
    String RA = "RA";
    String AO = "AO";
    String AV = "AV";

    String UNKN = "UNKN";

    String TRUE = "true";
    String FALSE = "false";
}

DATA COLLECTION / REGION VALIDATE
package regionValidate;

import java.io.File;
import java.io.FileWriter;
import java.lang.String;
import java.util.StringTokenizer;
import java.util.Vector;
import java.lang.Exception;
import xmlio.EchoFrame;
import xmlio.Roi;

public class RegionValidate
{
    private static String descDirRel = "/desc";
    private static String DAT = ".dat";
    private static String XML = ".xml";
    private String srcDir;
    private String xmlDir;

    public RegionValidate (String echoName) throws Exception {
        srcDir = new String(echoName);
        xmlDir = new String(echoName + descDirRel);
        System.out.println("source dir: " + srcDir + ", xml dir: " + xmlDir);
    }

    public void Validate(String filename) throws Exception {
        // open the output file
        File outFile = new File(filename);
        FileWriter out = new FileWriter(outFile, true);

        // open the input xml directory
        File inputDir = new File(xmlDir);
        String[] xmlFiles = inputDir.list();

        // For each xml file find the invalid regions
        for (int i=0; i < xmlFiles.length; i++) {
            String xmlFileName = new String(xmlDir + "/" + xmlFiles[i]);
            System.out.println(xmlFileName);

            EchoFrame EF = new EchoFrame();
            EF.read(xmlFileName);
            int numROIs = EF.getNumROIs();
            int count = 0;
            for (int k=0; k < numROIs; k++) {
                Roi roi = EF.getROI(k+1);
                if ((roi.getValidity()).equals("false") || (roi.getValidity()).equals("FALSE")) {
                    count++;
                }
            }
            if (count == numROIs) continue;
            for (int j=0; j < numROIs; j++) {
                Roi roi = EF.getROI(j+1);
                if ((roi.getValidity()).equals("false") || (roi.getValidity()).equals("FALSE")) {
                    StringBuffer sb = new StringBuffer();
                    sb.append(EF.getView());
                    sb.append(" ");
                    sb.append(roi.getPath());
                    sb.append("\n");
                    out.write(sb.toString());
                    System.out.println("View: " + EF.getView());
                    System.out.println("Path: " + roi.getPath());
                }
            }
        }
        out.close();
    }

    public static void main(String args[]) {
        try {
            System.out.println(args[0]);
            RegionValidate rv = new RegionValidate(args[0]);
            rv.Validate(args[1]);
        } catch (Exception e) {
            System.out.println("Error.");
            e.printStackTrace(System.err);
        }
    }
}

Claims

WE CLAIM: 1. A method for labeling video images comprising the steps of: providing a representation of a test image comprising nodes and edges corresponding to objects in the test image; providing a representation of a model for each label in a set of labels; determining an optimal mapping of the representation of the test image with the representation of each of said respective models; determining a confidence measure comprising a set of confidence values, each confidence value corresponding to said optimal mapping of the representation of the test image with the representation of each respective model; training a classifier; and classifying the test image by applying the learned classifier to the confidence measure.
2. The method recited in claim 1, wherein the step of providing a representation of the test image comprises determining an attributed relational graph.
3. The method recited in claim 1, wherein the step of providing a representation of a model for each label comprises determining an attributed relational graph.
4. The method recited in claim 3, wherein the attributed relational graph of the test image and the attributed relational graph of the model each comprise nodes and edges, and wherein the step of determining an optimal mapping of the representation of the test image with the representation of each of said respective models comprises defining a set of random variables on the nodes of the attributed graph of the test image.
5. The method recited in claim 4, wherein the step of determining an optimal mapping of the representation of the test image with the representation of each of said respective models further comprises determining the optimal configuration of the set of random variables.
6. The method recited in claim 5, wherein the step of determining a confidence measure comprising a set of confidence values comprises determining an energy vector comprising a set of energy values.
7. The method recited in claim 1, wherein the step of training a classifier comprises training a Support Vector Machine classifier.
8. The method recited in claim 6, wherein the step of classifying the test image by applying the learned classifier to the confidence measure comprises classifying the confidence measure using the learned Support Vector Machine classifier.
9. The method recited in claim 1, wherein the test image comprises an echocardiogram image and wherein the step of providing a representation of a test image comprising nodes and edges corresponding to objects in the test image comprises providing a representation of an echocardiogram image comprising nodes and edges corresponding to cardiac chambers in the test image.
10. The method recited in claim 1, wherein the label comprises a view and wherein the step of providing a representation of a model for each label in a set of labels comprises providing a representation of a model for each view label in a set of view labels.
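The final stage of the claimed method, classifying a test image from the vector of per-view matching confidences (claims 1 and 6-8), can be sketched in Java. This is a minimal, hypothetical illustration only: the class and method names are invented, the graph-matching energies are toy values rather than outputs of the patent's attributed-graph matching, and an argmax over confidences stands in for the trained Support Vector Machine classifier, which is not reproduced here:

```java
public class ViewClassifierSketch {

    // Map per-view matching energies (lower = better fit of that view's
    // model graph to the test image) to a normalized confidence vector
    // via a softmin, so the lowest-energy model gets the highest value.
    static double[] confidenceFromEnergies(double[] energies) {
        double[] conf = new double[energies.length];
        double sum = 0.0;
        for (int i = 0; i < energies.length; i++) {
            conf[i] = Math.exp(-energies[i]);
            sum += conf[i];
        }
        for (int i = 0; i < conf.length; i++) {
            conf[i] /= sum;
        }
        return conf;
    }

    // Stand-in for the learned classifier: pick the view label with the
    // highest confidence value.
    static String classify(double[] conf, String[] labels) {
        int best = 0;
        for (int i = 1; i < conf.length; i++) {
            if (conf[i] > conf[best]) best = i;
        }
        return labels[best];
    }

    public static void main(String[] args) {
        // View labels as in XMLConstants; energies are made-up toy values.
        String[] labels = {"PLA", "PSAB", "A2C", "A4C"};
        double[] energies = {4.2, 3.1, 5.0, 1.3};
        double[] conf = confidenceFromEnergies(energies);
        System.out.println(classify(conf, labels)); // lowest energy wins: A4C
    }
}
```

In the claimed method the confidence vector would instead be fed to the trained SVM, which can learn decision boundaries over the whole vector rather than trusting only its single largest entry.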
PCT/US2004/028722 2004-06-25 2004-08-26 System and methods of automatic view recognition of echocardiogram videos using parts-based representation WO2006011891A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US58306504P 2004-06-25 2004-06-25
US60/583,065 2004-06-25

Publications (1)

Publication Number Publication Date
WO2006011891A1 true WO2006011891A1 (en) 2006-02-02

Family

ID=35786508

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/028722 WO2006011891A1 (en) 2004-06-25 2004-08-26 System and methods of automatic view recognition of echocardiogram videos using parts-based representation

Country Status (1)

Country Link
WO (1) WO2006011891A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009042074A1 (en) * 2007-09-25 2009-04-02 Siemens Medical Solutions Usa, Inc. Automated view classification with echocardiographic data for gate localization or other purposes
US7996762B2 (en) 2007-09-21 2011-08-09 Microsoft Corporation Correlative multi-label image annotation
US8744152B2 (en) 2010-06-19 2014-06-03 International Business Machines Corporation Echocardiogram view classification using edge filtered scale-invariant motion features
CN107045624A (en) * 2017-01-06 2017-08-15 南京航空航天大学 A kind of EEG signals pretreatment rolled into a ball based on maximum weighted and sorting technique
CN107609552A (en) * 2017-08-23 2018-01-19 西安电子科技大学 Salient region detection method based on markov absorbing model
CN113283378A (en) * 2021-06-10 2021-08-20 合肥工业大学 Pig face detection method based on trapezoidal region normalized pixel difference characteristics
CN115564778A (en) * 2022-12-06 2023-01-03 深圳思谋信息科技有限公司 Defect detection method and device, electronic equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5694945A (en) * 1993-07-20 1997-12-09 Biosense, Inc. Apparatus and method for intrabody mapping
US6556695B1 (en) * 1999-02-05 2003-04-29 Mayo Foundation For Medical Education And Research Method for producing high resolution real-time images, of structure and function during medical procedures
US6708055B2 (en) * 1998-08-25 2004-03-16 University Of Florida Method for automated analysis of apical four-chamber images of the heart
US6716175B2 (en) * 1998-08-25 2004-04-06 University Of Florida Autonomous boundary detection system for echocardiographic images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5694945A (en) * 1993-07-20 1997-12-09 Biosense, Inc. Apparatus and method for intrabody mapping
US6708055B2 (en) * 1998-08-25 2004-03-16 University Of Florida Method for automated analysis of apical four-chamber images of the heart
US6716175B2 (en) * 1998-08-25 2004-04-06 University Of Florida Autonomous boundary detection system for echocardiographic images
US6556695B1 (en) * 1999-02-05 2003-04-29 Mayo Foundation For Medical Education And Research Method for producing high resolution real-time images, of structure and function during medical procedures

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7996762B2 (en) 2007-09-21 2011-08-09 Microsoft Corporation Correlative multi-label image annotation
WO2009042074A1 (en) * 2007-09-25 2009-04-02 Siemens Medical Solutions Usa, Inc. Automated view classification with echocardiographic data for gate localization or other purposes
US8092388B2 (en) 2007-09-25 2012-01-10 Siemens Medical Solutions Usa, Inc. Automated view classification with echocardiographic data for gate localization or other purposes
US8744152B2 (en) 2010-06-19 2014-06-03 International Business Machines Corporation Echocardiogram view classification using edge filtered scale-invariant motion features
US8750375B2 (en) 2010-06-19 2014-06-10 International Business Machines Corporation Echocardiogram view classification using edge filtered scale-invariant motion features
CN107045624A (en) * 2017-01-06 2017-08-15 南京航空航天大学 A kind of EEG signals pretreatment rolled into a ball based on maximum weighted and sorting technique
CN107045624B (en) * 2017-01-06 2020-04-10 南京航空航天大学 Electroencephalogram signal preprocessing and classifying method based on maximum weighted cluster
CN107609552A (en) * 2017-08-23 2018-01-19 西安电子科技大学 Salient region detection method based on markov absorbing model
CN113283378A (en) * 2021-06-10 2021-08-20 合肥工业大学 Pig face detection method based on trapezoidal region normalized pixel difference characteristics
CN113283378B (en) * 2021-06-10 2022-09-27 合肥工业大学 Pig face detection method based on trapezoidal region normalized pixel difference characteristics
CN115564778A (en) * 2022-12-06 2023-01-03 深圳思谋信息科技有限公司 Defect detection method and device, electronic equipment and computer readable storage medium
CN115564778B (en) * 2022-12-06 2023-03-14 深圳思谋信息科技有限公司 Defect detection method and device, electronic equipment and computer readable storage medium


Legal Events

Date Code Title Description
AK Designated states
Kind code of ref document: A1
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents
Kind code of ref document: A1
Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application

NENP Non-entry into the national phase
Ref country code: DE

WWW Wipo information: withdrawn in national office
Country of ref document: DE

122 Ep: pct application non-entry in european phase