Complex Texture Classification with Edge Information

J.K.P. Kuan    P.H. Lewis

Multimedia Research Group

Dept. of Electronics and Computer Science,

University of Southampton,

Highfield, Southampton,

SO17 1BJ, United Kingdom

{jk94r,phl}@ecs.soton.ac.uk

Abstract

We introduce a novel texture description scheme and demonstrate it with our fast similarity search technique for content-based retrieval and navigation applications. The texture representation uses a combination of edge and region statistics. It is compared with the Multi-Resolution Simultaneous Auto-Regressive Model and Statistical Geometrical Features techniques using the entire Brodatz texture set and on a collection of more complex texture images obtained from a product catalogue. In both cases, the edge based representation gives the best classification.

1. Introduction

Texture analysis has been widely studied, and a large number of approaches have been developed. Amongst these methods, some of the popular models include Markov Random Fields (MRF) [6], Simultaneous Auto-Regressive (SAR) models [15], Gabor filters [5], the Wold transform [13], and wavelets. Texture analysis techniques may be described in three main categories: structural, statistical, and structural-statistical approaches. Structural methods use the geometrical features of texture primitives as the texture features. However, they require substantial image pre-processing to extract the texture primitives; they are mostly time-consuming, and often only regular textures can be recognized, although the features are normally rotation-invariant. Statistical methods are the dominant approach for texture matching, and they can work well on regular, random and quasi-random textures. Few researchers have developed texture analysis techniques using combined statistical-structural methods. Chen et al. [4] use statistical geometrical features to classify textures from the entire Brodatz texture album [3], and the method shows good classification performance.

Several studies [2][5][7][18][16] have shown that using edge information in the texture features can achieve good classification performance. In this paper, we propose a novel edge based method which achieves a high classification rate on the entire Brodatz texture database. One of our objectives was to develop a texture matching technique which is effective with complex textures such as those in commercial furniture catalogues (see figure 3). We have compared our edge based method, the Statistical Geometrical Features (SGF) [4] and the Multi-Resolution Simultaneous Auto-Regressive model (MR-SAR) [15] using the entire Brodatz texture database and a set of complex textures from the catalogue. In both cases, the new edge based method gives the best retrieval results.

2. Edge Based Texture Classification

Various methods have been developed by researchers to extract edge information from a texture image. These include Gabor filters by Coggins et al. [5] and Generalized Co-occurrence matrices by Davis et al. [7]. Patel et al. [16] calculate edge direction using 3×3 masks and then use rank order statistics to produce the texture features. Our approach to edge based texture feature calculation begins like that of Patel et al., but where they provide only edge information, our method also captures details of regions with no edge information, as these too can contribute valuable information to the texture features. We also introduce other low and high level texture measures as described below.

We calculate grey value variances in four different directions (0°, 45°, 90°, 135°) from a 3×3 mask. The direction with the minimum variance is chosen as the label on the centre pixel of the mask. However, some areas in an image may have no edge information, and these can also be used as part of the texture features. Before the direction from a mask is determined, we must decide whether there is any edge information inside the mask. To do this we calculate the sum of differences in each 3×3 window.

Figure 1. Effect of edge extraction from an image (a: original; b: edge labels)

sum_of_difference = Σ_(i,j) |g(i,j) − μ|   (1)

μ = (1/9) Σ_(i,j) g(i,j)   (2)

where μ is the mean grey level value of the entire window and g(i,j) is the grey level value of pixel (i,j).

if sum_of_difference < area_of_mask × T (T is the threshold; in our example, T = 8)
then
    the centre pixel is labelled as Blank
else
    find the strong edge orientation, i.e. the direction with minimum variance, and label the central pixel with that direction

When the direction is decided, the labelling is performed as 0 – Horizontal, 45 – Right Diagonal, 90 – Vertical, and 135 – Left Diagonal.
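The labelling rule above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the use of numpy are assumptions.

```python
import numpy as np

# Direction labels: four orientations plus the plain-region label.
BLANK, H0, D45, V90, D135 = "Blank", 0, 45, 90, 135

def label_pixel(window, T=8):
    """Label the centre pixel of a 3x3 grey-level window.

    If the sum of absolute differences from the window mean is below
    area_of_mask * T, the centre pixel is Blank; otherwise it takes the
    direction (0, 45, 90, 135) whose pixel line through the centre has
    minimum grey-value variance.
    """
    w = np.asarray(window, dtype=float)
    assert w.shape == (3, 3)
    if np.abs(w - w.mean()).sum() < w.size * T:
        return BLANK
    # Pixel triples through the centre for each direction.
    lines = {
        H0:   w[1, :],                                 # horizontal
        D45:  np.array([w[2, 0], w[1, 1], w[0, 2]]),   # right diagonal
        V90:  w[:, 1],                                 # vertical
        D135: np.array([w[0, 0], w[1, 1], w[2, 2]]),   # left diagonal
    }
    # Minimum variance marks the strongest edge orientation.
    return min(lines, key=lambda d: lines[d].var())
```

A window containing a bright vertical stripe, for example, has zero variance along its centre column and is labelled 90, while a flat window falls below the threshold and is labelled Blank.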

Figure 1a shows an original texture image, and figure 1b shows that accurate edge retrieval is accomplished. The light areas in 1b indicate a blank label and the darker areas are labelled with a direction.

After processing the entire image, the ratio for each edge direction and plain region is calculated as a fraction of the total number of labels:

R_k = N_k / N   (3)

where N_k is the number of pixels carrying label k, N is the total number of labels, and G is the maximum intensity level used to normalize the associated contrast measure.

The higher level texture edge features are evaluated by using the conditional probability between the edge direction of the centre of the mask and the surrounding locations. Figure 2 shows a matrix of conditional probabilities which has a similar form to the Generalized Co-occurrence Matrix suggested by Davis et al. [7]. The differences are the use of conditional probabilities and the inclusion of plain region statistics. For example, P(Vertical | Horizontal) is the conditional probability of getting Vertical labels in a 3×3 window given that the central pixel is Horizontal. This is computed by counting the number of appearances of vertical edge labels in the surrounding locations divided by the area of the mask (excluding the centre pixel). Each entry in the conditional probability matrix is accumulated according to the labels of the central and surrounding locations in the mask. A 5×5 probability matrix is then generated and normalized into the range [0, 1].
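The accumulation of the 5×5 matrix can be sketched as below. This is an illustrative reading of the description, not the authors' code; in particular, normalizing each row by the number of windows whose centre carries that label is an assumption, since the normalization detail is garbled in the source.

```python
import numpy as np

LABELS = [0, 45, 90, 135, "Blank"]          # four directions + plain region
IDX = {lab: i for i, lab in enumerate(LABELS)}

def cond_prob_matrix(label_image):
    """Build the 5x5 conditional probability matrix from a labelled image.

    For every interior 3x3 window, entry (c, s) accumulates the fraction
    of the 8 surrounding pixels carrying label s given that the centre
    carries label c; each row is then normalized by the number of
    windows contributing to it, giving values in [0, 1].
    """
    L = np.asarray(label_image, dtype=object)
    M = np.zeros((5, 5))
    counts = np.zeros(5)                    # windows seen per centre label
    h, w = L.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = IDX[L[y, x]]
            counts[c] += 1
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue            # exclude the centre pixel
                    M[c, IDX[L[y + dy, x + dx]]] += 1.0 / 8.0
    nz = counts > 0
    M[nz] /= counts[nz, None]               # normalize into [0, 1]
    return M
```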

2.1. Similarity Measurement

In [15], Mao et al. reported that using a large number of parameters causes severe averaging of the discriminatory power of the features. When comparing textures, contributions to the similarity measure from edge information should be weighted according to the fractions of those edges occurring in the images, i.e. by the ratios. If two images have a high ratio of horizontal edges, their similarity value on horizontal edges should also increase. If two images have high and low ratios on an edge property, then the weight is taken as the average of the two ratios. In this way, we can match two images based on both their similarity and dissimilarity. The weight is evaluated as:

w_k = (R_k^A + R_k^B) / 2   (4)

where R_k^A and R_k^B are the ratios for the general term k of the two different images. The similarity, S, is the sum of the squared Euclidean distances of the conditional probabilities and the contrasts, multiplied by the weights.

S = Σ_k w_k [ (P_k^A − P_k^B)² + (C_k^A − C_k^B)² ]   (5)

where P_k and C_k are the general terms of the conditional probabilities and the contrasts. This measure uses a weighted combination of contrast across edges and conditional probability of edge directions and plain regions to assess the similarity between two images.
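A sketch of equations (4) and (5) in code, under the assumptions stated for the reconstructed equations (the exact grouping of the probability and contrast terms is inferred from the surrounding text, not given explicitly by the source):

```python
import numpy as np

def similarity(ratios_a, probs_a, contr_a, ratios_b, probs_b, contr_b):
    """Weighted squared-distance similarity between two textures.

    Each term's weight is the average of the corresponding label ratios
    in the two images (equation 4); the score sums weighted squared
    differences of conditional probabilities and contrasts (equation 5).
    Smaller values mean more similar textures.
    """
    Ra, Rb = np.asarray(ratios_a, float), np.asarray(ratios_b, float)
    Pa, Pb = np.asarray(probs_a, float), np.asarray(probs_b, float)
    Ca, Cb = np.asarray(contr_a, float), np.asarray(contr_b, float)
    w = (Ra + Rb) / 2.0                                  # equation (4)
    # Weight each row of the 5x5 probability matrix and each contrast.
    return ((w[:, None] * (Pa - Pb) ** 2).sum()
            + (w * (Ca - Cb) ** 2).sum())                # equation (5)
```

Two identical feature sets score exactly zero, and any difference in the probability matrix or contrasts increases the distance in proportion to the shared label ratios.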

2.2. Brodatz Texture Database

Each Brodatz image is digitized into a 512×512, 256 grey level image, and cut into 16 subimages of 128×128. A total of 1792 (112×16) images are produced from the texture album. Eight out of each set of 16 subimages are randomly taken, texture features are extracted and pre-indexed into a training database. The rest of the subimages are used for testing. A similar experiment was also performed by Manjunath et al. [14]. They used a nearly complete set of Brodatz textures (except D31, D32, and D99) for comparison between Gabor wavelets, MRSAR, and pyramid-structured and tree-structured wavelet transforms. The MRSAR result was fractionally lower than the Gabor wavelets, with 73% and 74% respectively.
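The tiling step is straightforward; a minimal sketch (hypothetical helper, numpy assumed):

```python
import numpy as np

def cut_subimages(img, size=128):
    """Cut a grey-level image into non-overlapping size x size tiles.

    For a 512x512 Brodatz image with size=128 this yields the 16
    subimages used in the experiment, in row-major order.
    """
    img = np.asarray(img)
    h, w = img.shape
    return [img[y:y + size, x:x + size]
            for y in range(0, h, size)
            for x in range(0, w, size)]
```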

2.3. Complicated Texture Database

Thirty texture patterns (11 classes) were extracted from a commercial furniture catalogue. Figures 3a-3d show some of the patterns, which are categorised into groups such as Abbey Stripe, Georgia Damask, Tournament Stripe, etc. Some of the images were taken directly from texture samples in the catalogue; others were extracted from the pictures of furniture.

Figure 3. Examples of commercial complicated texture patterns (a-d)

2.4. Results

Two different techniques with good matching capabilities, SGF [4] and MRSAR [15], are chosen for comparison with our method. For each test image, we calculate the Euclidean distance between feature vectors to retrieve the top 15 nearest matches out of the 896 features in the image database. If all 8 subimages from the same original texture image are retrieved, the test image scores a 100% retrieval rate; if 7 subimages are retrieved, the result is 87.5%, and so on. This experiment is repeated three times and the results are averaged to even out the random selection of the test and sample sets in the Brodatz database. For all 8 test images of each class, we averaged the classification rate. The number of Brodatz textures that scored a classification rate in a certain percentage range is presented in table 1. Our method clearly outperforms the other two methods, in that more than half of the entire Brodatz textures have an accuracy between 90% and 100%. Our method also obtains an 83% correct classification rate on average over all the Brodatz textures, compared to 75.5% and 71.4% achieved by MRSAR and SGF respectively.

Some textures have very low classification success with all the matching techniques tested, for example D43, D44, and D58. This is due to a highly inhomogeneous pattern spread over the whole original uncropped image. A preliminary

Figure 4. Content Based Retrieval from a lace texture

Table 1. Classification results over the entire Brodatz texture set (number of textures per classification-rate band):

  Rate band   SGF
  100%-90%    31
  89%-80%     20
  79%-70%     7
  69%-60%     17
  59%-0%      37
  Average     71.4%

Table 2. The classification result of complex textures (number of classes per band):

              MRSAR   Edge based method
              2       6
              1       1
              0       0
              1       1
              2       2
              1       0
              4       1
  Average     36.4%   72.0%

3. Content-based Retrieval and Navigation

A hypermedia package [12], the Multimedia Architecture for Video, Image and Sound (MAVIS), is being developed at the University of Southampton which is capable of content based retrieval and navigation for non-text media. In this section, the edge based texture classification is demonstrated with MAVIS for retrieval of similar complex furniture textures and navigation to different media using links based on texture matching.

To index multidimensional image features, the R-tree [9] is one of the popular choices, and Beckmann et al. [1] developed the R*-tree, which improved space utilization compared with the R-tree. In other popular content-based retrieval applications such as QBIC [8], the R*-tree is also used for indexing image features.

Figure 5. Content Based Navigation from a commercial furniture complex pattern

For efficient content-based retrieval, we use the Hilbert R-tree [10] for fast multi-dimensional indexing and retrieval; it has been shown to outperform the R*-tree. We have compared the performance of the Hilbert R-tree and the R*-tree on an image database. The results showed that the Hilbert R-tree accesses significantly fewer nodes, is easier to implement, and is much less time consuming when building an indexing tree. We have experimented with the nearest neighbour queries for R-trees by Roussopoulos et al. [17], which give good performance. However, it is possible to improve on the nearest neighbour search for a clustered image database. In [11], we showed that fewer data comparisons are needed than with the Roussopoulos et al. nearest neighbour search on image feature data, and that less computation is required, giving faster retrieval.

We use a subset of the features (low level: Ratio and Contrast) indexed by a Hilbert R-tree, and an enlarged set of nearest neighbours is retrieved using the normal Euclidean distance as the similarity measure. The weighted Euclidean distance measurement is then performed among these retrievals using the full feature vectors described above. With the enlarged neighbour set at 40, the accuracy of classifying all Brodatz textures drops by only around 3% compared with a sequential normal nearest neighbour search on the full feature set.
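The two-stage scheme can be sketched as a filter-and-refine search. This illustration replaces the Hilbert R-tree probe with a linear scan and uses plain Euclidean distance in both stages for brevity; the function and parameter names are hypothetical.

```python
import numpy as np

def two_stage_search(query_low, query_full, db_low, db_full,
                     k=15, k_enlarged=40):
    """Filter-and-refine retrieval sketch.

    Stage 1 stands in for the index probe: distances on a small
    low-level feature subset (Ratio, Contrast) select k_enlarged
    candidates.  Stage 2 re-ranks only those candidates using the
    full feature vectors, returning the top k database indices.
    """
    db_low = np.asarray(db_low, float)
    db_full = np.asarray(db_full, float)
    # Stage 1: cheap distances on the low-level subset.
    d1 = np.linalg.norm(db_low - np.asarray(query_low, float), axis=1)
    cand = np.argsort(d1)[:k_enlarged]
    # Stage 2: full-feature distances over the candidates only.
    d2 = np.linalg.norm(db_full[cand] - np.asarray(query_full, float),
                        axis=1)
    return cand[np.argsort(d2)[:k]]
```

Because stage 2 only examines k_enlarged candidates rather than the whole database, retrieval cost is dominated by the cheap first pass, which is what the index accelerates in practice.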

Figure 4 shows content-based retrieval of a Brodatz lace texture (D40), and the results show high accuracy retrieval with 1792 images stored in the database. All 15 subimages are retrieved in the top 20 nearest matches.

In the MAVIS system it is possible to author generic links [12] from images to other parts of the information space using texture as the key. Once authored, the link may be followed from similar instances of the texture. In the next example, generic links have been authored from a texture patch to an image of a sofa with a similar texture. A link has also been authored to some text describing the texture. Figure 5 shows an application using a similar commercial pattern (Tournament Stripe) to navigate to other information with related content. An ordered list of links is displayed in the Image links window with their iconic images. The related text information for the furniture pattern is shown in the txt window when the text media link (in the Image links window) is selected and the Follow link button is clicked. A sofa image (in the mavis viewer window) with the same furniture pattern is located with similar actions.

4. Conclusion

A new texture classification technique has been proposed which uses edge and plain region information to characterize a texture. The method has been compared to MRSAR and SGF on the entire Brodatz texture database. The results show that our method outperforms the other two methods; more than half of the entire texture database is matched with 90%-100% reliability. On average, our method achieves at least 83% matching accuracy over all the Brodatz textures. With complex commercial textures, our method also gives a better classification rate. We demonstrate content based retrieval and navigation with the edge based texture scheme, which provides an accurate method for content-based multimedia applications. Currently, our method is not rotation and scale invariant, but modifications to include rotation invariance are in progress.

References

[1] N. Beckmann, H. P. Kriegel, R. Schneider, and B. Seeger. The R*-tree: an efficient and robust access method for points and rectangles. ACM SIGMOD, pages 322-331, May 1990.

[2] A. C. Bovik, M. Clark, and W. S. Geisler. Multichannel texture analysis using localized spatial filters. IEEE Trans. on Pattern Analysis and Machine Intelligence, 12, 1990.

[3] P. Brodatz. Textures: A Photographic Album for Artists & Designers. New York: Dover, 1966.

[4] Y. Q. Chen, M. S. Nixon, and D. W. Thomas. Statistical geometrical features for texture classification. Pattern Recognition, 28(4):537-552, April 1995.

[5] J. M. Coggins and A. K. Jain. A spatial filtering approach to texture analysis. Pattern Recognition Letters, 3:195-203, 1985.

[6] G. R. Cross and A. K. Jain. Markov random field texture models. IEEE Trans. on Pattern Analysis and Machine Intelligence, PAMI-5(1):25-39, January 1983.

[7] L. S. Davis, S. A. Johns, and J. K. Aggarwal. Texture analysis using generalized co-occurrence matrices. IEEE Trans. on Pattern Analysis and Machine Intelligence, PAMI-1(3):251-259, July 1979.

[8] M. Flickner, H. Sawhney, W. Niblack, J. Ashley, Q. Huang, B. Dom, M. Gorkani, J. Hafner, D. Lee, D. Petkovic, D. Steele, and P. Yanker. Query by image and video content: The QBIC system. IEEE Computer, 28(9):23-32, September 1995.

[9] A. Guttman. R-trees: A dynamic index structure for spatial searching. In Proc. ACM SIGMOD Int. Conf. on Management of Data, pages 45-57, 1984.

[10] I. Kamel and C. Faloutsos. Hilbert R-tree: an improved R-tree using fractals. In Proc. of Int. Conf. on Information and Knowledge Management, pages 490-499, 1993.

[11] J. K. P. Kuan and P. H. Lewis. Fast nearest neighbour search for the R-tree family. In Proc. of First Int. Conf. on Information, Communication and Signal Processing, Singapore, 9-12 September 1997.

[12] P. H. Lewis, J. K. P. Kuan, S. T. Perry, M. R. Dobie, H. Davis, and W. Hall. Navigating from images using generic links based on image content. In SPIE Conference on Storage and Retrieval for Image and Video Databases, pages 238-248, San Jose, February 1997.

[13] F. Liu and R. W. Picard. Periodicity, directionality, and randomness: Wold features for image modeling and retrieval. IEEE Trans. on Pattern Analysis and Machine Intelligence, 18(7):722-733, July 1996.

[14] B. S. Manjunath and W. Y. Ma. Texture features for browsing and retrieval of image data. IEEE Trans. on Pattern Analysis and Machine Intelligence, 18(8):837-842, August 1996.

[15] J. C. Mao and A. K. Jain. Texture classification and segmentation using multiresolution simultaneous autoregressive models. Pattern Recognition, 25(2):173-188, 1992.

[16] D. Patel and T. J. Stonham. Texture image classification and segmentation using RANK-order clustering. In 11th IAPR International Conference on Pattern Recognition, pages 92-95, The Hague, The Netherlands, August 30 - September 3, 1992.

[17] N. Roussopoulos, S. Kelley, and F. Vincent. Nearest neighbor queries. In Proceedings of the 1995 ACM SIGMOD Intl. Conf. on Management of Data, San Jose, CA, June 1995.

[18] H. Tamura, S. Mori, and T. Yamawaki. Textural features corresponding to visual perception. IEEE Trans. on Systems, Man, and Cybernetics, SMC-8(6):460-473, June 1978.
