Results 1–10 of 17
Measuring Shape: Ellipticity, Rectangularity, and Triangularity
Machine Vision and Applications, forthcoming, 2000
Cited by 48 (13 self)
Object classification often operates by making decisions based on the values of several shape properties measured from the image. This paper describes and tests several algorithms for calculating ellipticity, rectangularity, and triangularity shape descriptors.
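One of the descriptors this paper evaluates, rectangularity, is commonly estimated as the ratio of a region's area to the area of its minimum-area bounding rectangle. A minimal sketch of that idea, using a brute-force orientation sweep rather than the paper's specific algorithms:

```python
import math

def polygon_area(pts):
    # Shoelace formula for a simple polygon given as (x, y) vertex pairs.
    return 0.5 * abs(sum(x0 * y1 - x1 * y0
                         for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1])))

def rectangularity(pts, steps=180):
    """Polygon area divided by the area of its minimum bounding rectangle,
    approximated by sweeping candidate orientations in [0, 90) degrees."""
    area = polygon_area(pts)
    best = float("inf")
    for k in range(steps):
        t = math.pi * k / (2 * steps)
        c, s = math.cos(t), math.sin(t)
        xs = [c * x + s * y for x, y in pts]
        ys = [-s * x + c * y for x, y in pts]
        best = min(best, (max(xs) - min(xs)) * (max(ys) - min(ys)))
    return area / best

# A rectangle scores 1; a right triangle fills half of its best box.
square = [(0, 0), (2, 0), (2, 1), (0, 1)]
triangle = [(0, 0), (1, 0), (0, 1)]
```

The sweep is quadratic-free but approximate; rotating-calipers over hull edges gives the exact minimum rectangle.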
Road Network Extraction and Intersection Detection From Aerial Images by Tracking Road Footprints
 IEEE Transactions on Geoscience and Remote Sensing
Cited by 39 (0 self)
Abstract—In this paper, a new two-step approach (detecting and pruning) for automatic extraction of road networks from aerial images is presented. The road detection step is based on shape classification of a local homogeneous region around a pixel. The local homogeneous region is enclosed by a polygon, called the footprint of the pixel. This step involves detecting road footprints, tracking roads, and growing a road tree. We use a spoke wheel operator to obtain the road footprint. We propose an automatic road seeding method based on rectangular approximations to road footprints and a toe-finding algorithm to classify footprints for growing a road tree. The road tree pruning step makes use of a Bayes decision model based on the area-to-perimeter ratio (the A/P ratio) of the footprint to prune the paths that leak into the surroundings. We introduce a log-normal distribution to characterize the conditional probability of A/P ratios of the footprints in the road tree and present an automatic method to estimate the parameters that are related to the Bayes decision model. Results are presented for various aerial images. Evaluation of the extracted road networks using representative aerial images shows that the completeness of our road tracker ranges from 84% to 94%, correctness is above 81%, and quality is from 82% to 92%. Index Terms—Bayes decision rule, road extraction, road footprint, road tracking, road tree pruning.
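The pruning step's central quantity is the footprint's area-to-perimeter (A/P) ratio, modelled with a log-normal conditional distribution inside a Bayes decision rule. A rough sketch of that decision; the distribution parameters and prior here are made-up placeholders, not values estimated by the paper's method:

```python
import math

def ap_ratio(pts):
    # Area-to-perimeter ratio of a polygonal footprint (shoelace / edge sum).
    edges = list(zip(pts, pts[1:] + pts[:1]))
    area = 0.5 * abs(sum(x0 * y1 - x1 * y0 for (x0, y0), (x1, y1) in edges))
    perim = sum(math.hypot(x1 - x0, y1 - y0) for (x0, y0), (x1, y1) in edges)
    return area / perim

def lognormal_pdf(x, mu, sigma):
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

def keep_branch(ap, mu_road=0.0, sigma_road=0.4, mu_bg=2.0, sigma_bg=0.8,
                prior_road=0.5):
    """Bayes rule: keep a road-tree branch if P(road | A/P) > P(background | A/P).
    Elongated road footprints have small A/P; leaks into open regions are
    blob-like with large A/P. All parameters are illustrative placeholders."""
    p_road = prior_road * lognormal_pdf(ap, mu_road, sigma_road)
    p_bg = (1 - prior_road) * lognormal_pdf(ap, mu_bg, sigma_bg)
    return p_road > p_bg
```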
A Rectilinearity Measurement for Polygons
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
Cited by 15 (4 self)
In this paper we define a function R(P), defined for any polygon P, which maps a given polygon P into a number from the interval (0, 1]. The number R(P) can be used as an estimate of the rectilinearity of P. The mapping R(P) has the following desirable properties: any polygon P has an estimated rectilinearity R(P) which is a number from (0, 1]; R(P) = 1 if and only if P is a rectilinear polygon, i.e., all interior angles of P belong to the set {π/2, 3π/2}; inf_{P∈Π} R(P) = 0, where Π denotes the set of all polygons; a polygon's rectilinearity measure is invariant under similarity transformations.
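One way to obtain such a measure is to compare the polygon's Euclidean (l2) perimeter with its city-block (l1) perimeter, maximised over rotations, and rescale so the result lies in (0, 1]. The sketch below implements that reading with a brute-force rotation sweep; the normalisation constants are chosen so that an axis-alignable rectilinear polygon scores exactly 1 and disc-like shapes score near 0, and may differ from the paper's exact definition:

```python
import math

def rectilinearity(pts, steps=360):
    """Rescaled maximum over orientations of l2-perimeter / l1-perimeter.
    1 for a rectilinear polygon, near 0 for 'round' shapes; a discretised
    sketch, not the paper's exact algorithm."""
    edges = [(x1 - x0, y1 - y0)
             for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1])]
    p2 = sum(math.hypot(dx, dy) for dx, dy in edges)
    best = 0.0
    for k in range(steps):
        t = math.pi * k / (2 * steps)      # rotations in [0, 90) degrees
        c, s = math.cos(t), math.sin(t)
        p1 = sum(abs(c * dx + s * dy) + abs(-s * dx + c * dy)
                 for dx, dy in edges)
        best = max(best, p2 / p1)
    return 4 / (4 - math.pi) * (best - math.pi / 4)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# Regular 64-gon as a stand-in for a circle: rectilinearity near 0.
ngon = [(math.cos(2 * math.pi * i / 64), math.sin(2 * math.pi * i / 64))
        for i in range(64)]
```

The ratio p2/p1 never exceeds 1 and is at least π/4 after maximising over rotations, which is what the affine rescaling exploits.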
Remote sensing image thresholding methods for determining landslide activity
2005
Cited by 12 (0 self)
Detecting landslides and monitoring their activity is of great relevance for disaster prevention, preparedness and mitigation in hilly areas. To this end, change detection techniques are developed and applied to multi-temporal digital aerial photographs, simulating the very high spatial resolution of new satellite sensor optical imagery, over the Tessina complex landslide in northeastern Italy. Several automatic thresholding algorithms are compared on the difference of orthorectified and radiometrically normalised images, including some standard methods based on clustering, statistics, moments, and entropy, as well as some more novel techniques previously developed by the authors. In addition, a variety of filters are employed to eliminate much of the undesirable residual clutter in the thresholded difference image, mainly a result of natural vegetation and man-made land cover changes. These filters are based on shape and size properties of the connected sets of pixels in the threshold maps. This has enabled us to discriminate most ground surface changes related to the movement of a pre-existing landslide.
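A representative member of the clustering-based family such a comparison typically includes is Otsu's method, which picks the threshold maximising the between-class variance of the difference image's histogram. A minimal sketch (Otsu stands in for the class of methods compared; it is not necessarily the one the authors favour):

```python
def otsu_threshold(pixels, bins=256):
    """Return the threshold maximising between-class variance for
    integer-valued pixels in [0, bins)."""
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w0 += hist[t]          # pixels at or below t form class 0
        sum0 += t * hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal "change" histogram: background noise near 20, real changes near 200.
pixels = [18, 20, 22, 19, 21] * 40 + [198, 200, 202, 199] * 10
```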
Road extraction in suburban areas based on normalized cuts
 International Archives of Photogrammetry and Remote Sensing
Cited by 8 (3 self)
This paper deals with road extraction from high resolution aerial images of suburban scenes based on segmentation using the Normalized Cuts algorithm. The aim of our project is the extraction of roads for the assessment of a road database; however, this paper is restricted to road extraction. The segmentation, as our basic step, is designed to yield a good division between road areas and the surroundings. We use the Normalized Cuts algorithm, which is a graph-based approach that divides the image on the basis of pixel similarities. The definition of these similarities can incorporate several features, which is necessary for segmentation in complex surroundings such as built-up areas. The features used for segmentation comprise colour, hue, edges and road colour derived with prior information about the position of the centerline from the database. The initial segments have to be grouped due to an enforced over-segmentation. The grouping is based on the criteria of mean colour difference, edge strength of the shared borders and colour standard deviation of merged initial segments. The grouped segments are then evaluated using shape criteria in order to extract road parts. Results on some test images show that the approach provides reliable road parts. Concluding remarks are given at the end to point out further investigations concerning the evaluation of the road segments and their use in database assessment.
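Normalized Cuts (Shi and Malik) partitions a weighted graph whose edge weights encode pixel similarity; here those weights combine colour, hue and edge features. A sketch of a Shi-Malik-style feature affinity and of the kind of merge test used for the over-segmented regions; the feature vectors and thresholds below are illustrative placeholders, not the paper's values:

```python
import math

def affinity(f_i, f_j, x_i, x_j, sigma_f=10.0, sigma_x=15.0):
    """Graph edge weight: near 1 when two pixels have similar features AND
    are spatially close, decaying as a Gaussian in both distances."""
    df = sum((a - b) ** 2 for a, b in zip(f_i, f_j))   # e.g. colour + hue
    dx = sum((a - b) ** 2 for a, b in zip(x_i, x_j))   # image coordinates
    return math.exp(-df / sigma_f ** 2) * math.exp(-dx / sigma_x ** 2)

def should_merge(mean_a, mean_b, border_edge_strength,
                 max_colour_diff=12.0, max_edge=0.3):
    """Group two initial segments when their mean colours are close and the
    shared border shows little edge evidence (illustrative thresholds)."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(mean_a, mean_b)))
    return d < max_colour_diff and border_edge_strength < max_edge
```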
Rectilinearity measurements for polygons
IEEE Transactions on Pattern Analysis and Machine Intelligence
Cited by 6 (5 self)
The paper introduces a shape measure intended to describe the extent to which a closed polygon is rectilinear. Other than somewhat obvious measures of rectilinearity (e.g. the sum of the differences of each corner's angle from 90°) there has been little work in deriving a measure that is straightforward to compute, is invariant under scale and translation, and corresponds with the intuitive notion of rectilinear shapes. There are applications in a number of different areas of computer vision and photogrammetry. Rectilinear structures often correspond to human-made structure, and are therefore justified as attentional cues for further processing. For instance, in aerial image processing and reconstruction, where building footprints are often rectilinear on the local ground plane, building structures, once recognized as rectilinear, can be matched to corresponding shapes in other views for stereo reconstruction. Perceptual grouping algorithms may seek to complete shapes based on the assumption that the object in question is rectilinear. Using the proposed measure, such systems can verify this assumption.
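The "obvious" angle-based measure the abstract alludes to can be written down directly: score how far each turning angle is from a multiple of 90°. This naive version (for comparison only, not the paper's measure) ignores edge lengths, so one tiny skewed corner counts as much as a long skewed wall:

```python
import math

def naive_rectilinearity(pts):
    """1 minus the mean deviation of each turning angle from the nearest
    multiple of 90 degrees, normalised by the worst case (45 degrees)."""
    n = len(pts)
    devs = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % n], pts[(i + 2) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)      # direction of incoming edge
        a2 = math.atan2(y2 - y1, x2 - x1)      # direction of outgoing edge
        turn = math.degrees(a2 - a1) % 90.0    # offset into a 90-degree bin
        devs.append(min(turn, 90.0 - turn))
    return 1.0 - sum(devs) / (45.0 * n)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(0, 0), (1, 0), (0.5, math.sqrt(3) / 2)]
```

A square scores 1; an equilateral triangle (every turn 30° away from a right angle) scores 1/3.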
A Rectilinearity Measurement for 3D Meshes
2008
Cited by 5 (2 self)
In this paper, we propose and evaluate a novel shape measurement describing the extent to which a 3D mesh is rectilinear. Since the rectilinearity measure corresponds proportionally to the ratio of the sum of three orthogonal projected areas to the surface area of the mesh, it has the following desirable properties: 1) the estimated rectilinearity is always a number from (0,1]; 2) the estimated rectilinearity is 1 if and only if the measured 3D shape is rectilinear; 3) there are shapes whose estimated rectilinearity is arbitrarily close to 0; 4) the measurement is invariant under scale, rotation, and translation; 5) the 3D objects can be either open or closed meshes, and we can also deal with poor quality meshes; 6) the measurement is insensitive to noise and stable under small topology errors; and 7) a Genetic Algorithm (GA) can be applied to calculate the approximate rectilinearity efficiently. We have also implemented two experiments on its applications. The first experiment shows that, in some cases, the calculation of rectilinearity provides a better tool for registering the pose of 3D meshes compared to PCA. The second experiment demonstrates that the combination of this measurement and other shape descriptors can significantly improve 3D shape retrieval performance.
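A toy version of the underlying quantity can be built from face normals alone: for each face, the sum of the absolute components of its unit normal is 1 exactly when the face is axis-aligned, so maximising surface-area over the l1-weighted area across rotations yields a measure that is 1 for an axis-alignable rectilinear mesh. The coarse rotation sweep below replaces the paper's Genetic Algorithm, and the rescaling constants are a plausible choice rather than the paper's exact definition:

```python
import math

def tri_normal_area(a, b, c):
    # Cross product of two edges: a vector of length 2 * triangle area.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    return n, 0.5 * math.hypot(*n)

def rectilinearity3d(tris, steps=24):
    """Maximise surface-area / l1-weighted-area over sampled rotations and
    rescale to (0, 1]; 1 when every face can be made axis-aligned at once."""
    faces = []
    for tri in tris:
        n, area = tri_normal_area(*tri)
        if area > 0.0:
            faces.append(([c / (2 * area) for c in n], area))  # unit normal
    surf = sum(area for _, area in faces)
    best = 0.0
    for i in range(steps):
        for j in range(steps):
            a, b = math.pi * i / steps, math.pi * j / steps
            ca, sa, cb, sb = math.cos(a), math.sin(a), math.cos(b), math.sin(b)
            l1 = 0.0
            for n, area in faces:
                nx = n[0] * ca - n[1] * sa       # rotate normal by Rz(a)...
                ny = n[0] * sa + n[1] * ca
                nz = n[2]
                ny, nz = ny * cb - nz * sb, ny * sb + nz * cb   # ...then Rx(b)
                l1 += area * (abs(nx) + abs(ny) + abs(nz))
            best = max(best, surf / l1)
    s3 = math.sqrt(3)
    return s3 / (s3 - 1) * (best - 1 / s3)

def cube_tris():
    # Unit cube as 12 triangles (two per face); orientation is irrelevant here.
    quads = [((0,0,0),(1,0,0),(1,1,0),(0,1,0)), ((0,0,1),(1,0,1),(1,1,1),(0,1,1)),
             ((0,0,0),(1,0,0),(1,0,1),(0,0,1)), ((0,1,0),(1,1,0),(1,1,1),(0,1,1)),
             ((0,0,0),(0,1,0),(0,1,1),(0,0,1)), ((1,0,0),(1,1,0),(1,1,1),(1,0,1))]
    return [t for a, b, c, d in quads for t in ((a, b, c), (a, c, d))]
```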
Turning shape decision problems into measures
International Journal of Shape Modelling
Cited by 4 (1 self)
This paper considers the problem of constructing shape measures; we start by giving a short overview of areas of practical application of such measures. Shapes can be characterised in terms of a set of properties, some of which are Boolean in nature, e.g. is this shape convex? We show how it is possible in many cases to turn such Boolean properties into continuous measures of that property, e.g. convexity, in the range [0, 1]. We give two general principles for constructing measures in this way, and show how they can be applied to construct various shape measures, including ones for convexity, circularity, ellipticity, triangularity, rectilinearity, rectangularity and symmetry in two dimensions, and 2.5D-ness, stability, and imperforateness in three dimensions. Some of these measures are new; others are well known and we show how they fit into this general framework. We also show how such measures for a single shape can be generalised to multiple shapes, and briefly consider as particular examples measures for containment, resemblance, congruence, and similarity.
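The principle, turning the Boolean test "is P equal to its convex hull?" into a continuous score, is easiest to see with the classic area-based convexity measure, area(P) / area(convex hull(P)), which is 1 exactly for convex shapes:

```python
def shoelace(pts):
    return 0.5 * abs(sum(x0 * y1 - x1 * y0
                         for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1])))

def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices in order.
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        chain = []
        for p in points:
            while len(chain) >= 2 and (
                (chain[-1][0] - chain[-2][0]) * (p[1] - chain[-2][1])
                - (chain[-1][1] - chain[-2][1]) * (p[0] - chain[-2][0])) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def convexity(pts):
    """Continuous convexity in (0, 1]: 1 iff the polygon is convex."""
    return shoelace(pts) / shoelace(convex_hull(pts))
```

An L-shaped polygon of area 3 whose hull has area 3.5 scores 6/7, matching the intuition that it is "mostly" convex.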
A New Convexity Measurement for 3D Meshes
Cited by 3 (1 self)
This paper presents a novel convexity measurement for 3D meshes. The new convexity measure is calculated by minimizing the ratio of the summed area of valid regions in a mesh's six views, which are projected onto the faces of the axis-aligned bounding box, to the sum of three orthogonal projected areas of the mesh. The complete definition, theoretical analysis, and a computing algorithm for our convexity measure are explicitly described. This paper also proposes a new 3D shape descriptor, CD (Convexity Distribution), based on the distribution of the above-mentioned ratios, computed by randomly rotating the mesh around its center, to better describe the object's convexity-related properties compared to existing convexity measurements. Our experiments not only show that the proposed convexity measure corresponds well with human intuition, but also demonstrate the effectiveness of the new convexity measure and the new shape descriptor by significantly improving the performance of other methods in the application of 3D shape retrieval.
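Projection machinery aside, a common baseline convexity measure for meshes is volume(M) / volume(convex hull(M)); the mesh volume itself comes from the divergence theorem, summing signed tetrahedra against the origin. A sketch of that baseline (not the paper's projection-based measure); the hull mesh is taken as given, e.g. from scipy.spatial.ConvexHull in practice:

```python
def mesh_volume(tris):
    """Volume of a closed, consistently oriented triangle mesh: absolute
    value of the summed signed tetrahedra (origin, a, b, c)."""
    vol = 0.0
    for a, b, c in tris:
        vol += (a[0] * (b[1] * c[2] - b[2] * c[1])
                - a[1] * (b[0] * c[2] - b[2] * c[0])
                + a[2] * (b[0] * c[1] - b[1] * c[0])) / 6.0
    return abs(vol)

def convexity3d(mesh_tris, hull_tris):
    """Volume-ratio convexity: 1 iff the mesh fills its convex hull."""
    return mesh_volume(mesh_tris) / mesh_volume(hull_tris)
```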
ANALYSIS OF CRAQUELURE PATTERNS FOR CONTENT-BASED RETRIEVAL
2004
Cited by 2 (0 self)
The advent of multimedia technology has offered a new dimension in computerised applications. Art-based applications are among those which have benefited, and will continue to benefit, from this advancement. Content-based image retrieval (CBIR) and analysis is attracting attention from museums and art institutions. One of the image-based requirements from museums is to automatically classify craquelure (cracks) in paintings for the purpose of aiding damage assessment using non-destructive monitoring and testing. Craquelure in paintings can be an important element in judging authenticity, use of material, as well as environmental and physical impact, all of which can contribute to different craquelure patterns. Mass screening of craquelure patterns will help to establish a better platform for conservators to identify the cause of damage, and a content-based approach is seen as an appropriate path. This thesis covers the issues of crack enhancement and detection, using a mathematical morphology technique, namely the top-hat operator, and also a grid-based automatic thresholding. Craquelure representation aids the processes of craquelure pattern analysis, in which the Freeman chain code is used as a basis for converting the image-based representation
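The crack-enhancement step described here, a morphological top-hat, can be sketched in a few lines. Cracks are thin dark structures on a brighter surface, so the black top-hat (morphological closing minus the image) is the natural variant; the 3x3 window and toy image below are illustrative choices, not the thesis's parameters:

```python
def _morph(img, op):
    # Apply max (dilation) or min (erosion) over each pixel's 3x3 neighbourhood.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = op(vals)
    return out

def black_tophat(img):
    """Closing (dilate then erode) minus the image: responds strongly on
    thin dark structures such as cracks, and is zero on flat background."""
    closed = _morph(_morph(img, max), min)
    return [[c - v for c, v in zip(crow, vrow)]
            for crow, vrow in zip(closed, img)]

# Bright canvas (200) with a one-pixel-wide dark crack (50) down column 3.
canvas = [[50 if x == 3 else 200 for x in range(7)] for y in range(7)]
response = black_tophat(canvas)
```

The response image then feeds thresholding and chain-code tracing of the detected crack curves.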