British Machine Vision Association and Society for Pattern Recognition


Multi-resolution and multiscale methods in image processing


Joint meeting of the British Machine Vision Association and RSS Study Group in Statistical Image Analysis and Processing to be held on 5th April 2000 at the Royal Statistical Society, 12 Errol Street, London EC1Y 8LX

Chairpersons: Chris Glasbey (RSS), Josef Kittler (BMVA)

10.00 coffee

10.30 Simplifying images, Lewis Griffin (Vision Sciences, Aston)

11.15 Location dependent models of image variation, Idris Eckley (Mathematics, Bristol)

12.00 A multirate, circular Hough transform using wavelets, A. A. Bharath (Biological & Medical Systems, Imperial College)

12.30 Nonlinear gray-scale granulometries for texture analysis, Jennifer McKenzie (Electronic and Electrical Engineering, Strathclyde)

13.00 lunch

14.00 Multi-resolution algorithms and subpixel restoration, Chris Jennison (Mathematical Sciences, Bath)

14.40 Multiresolution colour texture segmentation, Maria Petrou (Electronic Engineering, Surrey)

15.20 tea

15.40 Multiscale entropy for semantic description of images and signals, Fionn Murtagh (Computer Science, Queen's, Belfast)

16.20 Dynamic Tree belief network models of images, Amos Storkey (Informatics, Edinburgh)

17.00 end


There is no need to pre-register for this meeting.
Sandwiches will be on sale during lunch time.

For more details contact Chris Glasbey or Josef Kittler

Directions: Exit Barbican underground station, cross Aldersgate Street and walk up Beech Street through the underpass. Take the second left, along Whitecross Street, and then the second right (just after Safeways), which is Errol Street. RSS HQ is on the right just as the street bends to the right.


Simplifying images
Lewis Griffin (Vision Sciences, Aston)

If an image is simplified by repeatedly replacing its values with the mean in an infinitesimal neighbourhood, it evolves according to the diffusion equation Lt=Lxx+Lyy (subscripts denote differentiation). This equation can alternatively be written in gauge coordinates as Lt=Lvv+Lww, where the v-direction is tangent to the isophote and the w-direction is along the gradient. The alternative evolution scheme Lt=Lvv, known as mean curvature flow, has also attracted attention. Guichard & Morel [1996, Ceremade Technical Report #9335] showed that this equation describes the operation of repeatedly applied infinitesimal median filtering. I have proved that repeated infinitesimal mode filtering is described by Lt=Lvv-2Lww.

Location dependent models of image variation
Idris Eckley (Mathematics, Bristol)

This talk considers the application of wavelets to the problem of modelling and estimating the covariance structure of an image. Our
approach introduces the concept of locally stationary, two-dimensional wavelet processes; a class of processes which permits the second order structure of an image to vary over space. In contrast to the Fourier decomposition, the locally stationary wavelet process supplies a form of location-scale decomposition. As a consequence, certain location-dependent measures of variation may be defined which could, for
example, be applied in the analysis of textured images. We conclude by outlining some recent developments within this modelling approach.
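A crude numerical analogue of such a location-scale decomposition (a sketch under simplifying assumptions, not the authors' estimator) is the map of squared detail coefficients per level and direction from a 2-D Haar transform:

```python
import numpy as np

def haar2_level(x):
    """One level of the 2-D Haar transform: approximation plus the
    vertical/horizontal/diagonal detail subbands (even-sized input)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def local_wavelet_spectrum(x, levels=3):
    """Squared detail coefficients at each level -- a rough
    location-scale analogue of a local periodogram, so that variation
    can be measured as a function of position as well as scale."""
    spec = []
    for _ in range(levels):
        x, lh, hl, hh = haar2_level(x)
        spec.append({"LH": lh**2, "HL": hl**2, "HH": hh**2})
    return spec
```

For an image that is smooth on one side and textured on the other, the detail energy at the finest level is concentrated over the textured side, which is the kind of location-dependent measure of variation the abstract refers to.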

A multirate, circular Hough transform using wavelets
A. A. Bharath (Biological & Medical Systems, Imperial College)

The Compact Hough Transform can be expressed as a spatial filtering of image feature fields. Using steerable filters, this can be implemented
as a convolution between a number of wavelet-like basis masks, and a series of "control" images. We apply this idea to design a circular
Hough transform which constructs accumulator spaces for a number of circle sizes. The operators used for estimating gradient data are the
multi-rate operators of a steerable quadrature wavelet pyramid, and the gradient magnitude data is used to weight accumulator contributions. The issue of interpolating between accumulator spaces to achieve fine precision in parameter space is proposed as an area for further research.
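A minimal sketch of the underlying idea, using plain finite-difference gradients rather than the steerable pyramid operators of the talk (the function name and radii are hypothetical), is a circular Hough accumulator in which each pixel votes one radius away along its gradient direction, weighted by gradient magnitude:

```python
import numpy as np

def circular_hough(img, radii):
    """Gradient-weighted circular Hough accumulators, one per radius.
    Each edge pixel votes at the points one radius away along its
    gradient direction (both signs, so edge polarity does not matter)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    H, W = img.shape
    acc = {r: np.zeros((H, W)) for r in radii}
    ys, xs = np.nonzero(mag > 1e-9)
    for y, x in zip(ys, xs):
        m = mag[y, x]
        uy, ux = gy[y, x] / m, gx[y, x] / m
        for r in radii:
            for s in (+1, -1):            # vote on both sides of the edge
                cy = int(round(y + s * r * uy))
                cx = int(round(x + s * r * ux))
                if 0 <= cy < H and 0 <= cx < W:
                    acc[r][cy, cx] += m   # magnitude-weighted vote
    return acc
```

For a disk of radius 5, the accumulator for r = 5 peaks at the disk's centre; running several radii at once gives the stack of accumulator spaces described in the abstract.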

Nonlinear gray-scale granulometries for texture analysis
Jennifer McKenzie (Electronic and Electrical Engineering, Strathclyde)

Nonlinear gray-scale granulometries are multiresolution tools which may be used for image decomposition. They act as image sieves, allowing larger and larger 'grains' of image data to fall through at each step of the granulometric process. The degree to which the volume of the image is decreased at each sieving stage can be used for statistical analysis and comparisons of the image and the size, shape and orientation of its 'grains', allowing analysis and classification of image texture content.

The above properties allow segmentation and texture analysis which is translation and rotation invariant, and which has a high tolerance to
changes in lighting levels. This paper investigates their usefulness for automatic inspection processes, and investigates how the mesh
dimensions used affect the accuracy of the results.
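A toy version of the sieving idea (a sketch with flat square structuring elements and naive boundary handling; the function names are hypothetical) computes the volume removed by grey-scale openings of increasing size:

```python
import numpy as np

def grey_opening(img, size):
    """Flat grey-scale opening with a size x size square (odd size):
    erosion (local min) followed by dilation (local max)."""
    def local_extremum(x, func):
        H, W = x.shape
        r = size // 2
        out = np.empty_like(x)
        for i in range(H):
            for j in range(W):
                out[i, j] = func(x[max(0, i - r):i + r + 1,
                                   max(0, j - r):j + r + 1])
        return out
    return local_extremum(local_extremum(img, np.min), np.max)

def granulometry(img, sizes):
    """Volume removed at each successive mesh size -- the granulometric
    'pattern spectrum' used as a texture descriptor."""
    prev = img.sum()
    curve = []
    for s in sizes:
        v = grey_opening(img, s).sum()
        curve.append(prev - v)   # grains falling through at this mesh size
        prev = v
    return np.array(curve, dtype=float)
```

A small bright blob falls through a small mesh while a large blob survives until the mesh exceeds its size, so the curve records the grain-size distribution of the image.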

Multi-resolution algorithms and subpixel restoration
Chris Jennison (Mathematical Sciences, Bath)

The forms of probabilistic model assumed in Bayesian image analysis can be applied at various scales of resolution. In particular, they are not
restricted to the pixel scale of the imaging process. Thus, under suitable assumptions, one may restore detail at below the pixel level.
In implementing subpixel algorithms it is natural to start work at coarser levels, especially when data are noisy; indeed, such a
multi-resolution strategy can be of benefit more generally when pixel-based MCMC methods run too slowly. I shall present research with
colleagues at Bath and a variety of examples to illustrate these ideas.
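As a toy illustration of the coarse-to-fine strategy (a sketch with a simple Gaussian-prior restoration step, not the MCMC methods of the talk; all names are hypothetical), one can restore a half-resolution image first and use its upsampled result to initialise the full-resolution restoration:

```python
import numpy as np

def restore(y, x0, beta, iters):
    """Synchronous (Jacobi) sweeps of the exact conditional means for a
    Gaussian MRF posterior: x minimises |x - y|^2 plus beta times the
    sum of squared neighbour differences."""
    x = x0.copy()
    for _ in range(iters):
        p = np.pad(x, 1, mode="edge")
        nsum = (p[:-2, 1:-1] + p[2:, 1:-1]
                + p[1:-1, :-2] + p[1:-1, 2:])
        x = (y + beta * nsum) / (1.0 + 4.0 * beta)
    return x

def coarse_to_fine(y, beta=2.0, iters=5):
    """Restore at half resolution, then upsample the result to
    initialise the fine-level restoration."""
    yc = 0.25 * (y[0::2, 0::2] + y[1::2, 0::2]
                 + y[0::2, 1::2] + y[1::2, 1::2])
    xc = restore(yc, yc, beta, iters)
    x0 = np.kron(xc, np.ones((2, 2)))   # nearest-neighbour upsample
    return restore(y, x0, beta, iters)
```

The coarse level already removes most of the noise, so the fine level starts close to its solution; the same initialisation idea is what makes multi-resolution schemes attractive when pixel-based iterative methods run slowly.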

Multiresolution colour texture segmentation
Maria Petrou (Electronic Engineering, Surrey)

Colour is a property that is meaningful only in relation to the human vision system. The quality of a generic colour texture segmentation
algorithm, therefore, can only be judged if it agrees with the segmentation performed by humans. Although texture is a property that
is often associated with scale, this is not so for colour. However, recent psychophysical studies have shown that colour perception is
resolution dependent. A colour texture segmentation scheme will be presented that takes human colour perception into consideration as a function of resolution. The scheme utilises a multiresolution probabilistic relaxation approach to propagate the segmentation results
from one level of resolution to the next.
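A much-simplified sketch of propagating segmentation results across resolutions (a stand-in for one relaxation step, not the scheme of the talk; the function name and 2x2 parent-child layout are assumptions) combines upsampled coarse-level class probabilities with fine-level class likelihoods:

```python
import numpy as np

def propagate(p_coarse, evidence_fine):
    """Upsample coarse-level class probabilities (each coarse cell
    covers a 2x2 block of fine pixels), multiply by the fine-level
    class likelihoods, and renormalise per pixel."""
    prior = np.kron(p_coarse, np.ones((2, 2, 1)))
    p = prior * evidence_fine
    return p / p.sum(axis=-1, keepdims=True)
```

Where the fine-level evidence is uninformative, the coarse-level result dominates, so labels found at low resolution guide the segmentation at the next finer level.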

Multiscale entropy for semantic description of images and signals
Fionn Murtagh (Computer Science, Queen's, Belfast)

Multiscale entropy is based on the wavelet transform and noise modeling. We describe how it can be used for signal and image
filtering and deconvolution. We then proceed to the use of multiscale entropy for description of image content. We pursue
two directions of enquiry: determining whether signal is present in the image, possibly at or below the image's noise level; and showing that multiscale entropy correlates very well with image content in the case of astronomical stellar fields. Knowing that multiscale entropy represents the content of the image well, we finally use it to define the optimal compression rate of the image. In all cases, a range of examples illustrates these new results.
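A bare-bones sketch of the idea (omitting the noise modelling that the talk's definition rests on; the function names are hypothetical) computes a Shannon entropy from the normalised wavelet coefficient energies at each scale:

```python
import numpy as np

def haar_details(x, levels):
    """Detail coefficients of a 1-D Haar transform, finest scale first
    (length of x must be divisible by 2**levels)."""
    details = []
    for _ in range(levels):
        details.append((x[0::2] - x[1::2]) / 2.0)
        x = (x[0::2] + x[1::2]) / 2.0
    return details

def multiscale_entropy(x, levels=3, eps=1e-12):
    """Shannon entropy of the normalised coefficient energies at each
    scale; structured signals concentrate their energy in a few
    coefficients and so give low entropy, noise gives high entropy."""
    out = []
    for d in haar_details(x, levels):
        e = d**2 + eps
        p = e / e.sum()
        out.append(float(-(p * np.log2(p)).sum()))
    return out
```

An isolated spike yields near-zero entropy at the finest scale while white noise spreads its energy across all coefficients, which is the signal-versus-noise distinction the abstract exploits.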

Dynamic Tree belief network models of images
Amos Storkey (Informatics, Edinburgh)

We are interested in segmenting an image into a number of pre-defined classes. Markov Random Fields are commonly used as a prior distribution over segmentation labellings. However, they have a number of problems as prior models, including a lack of hierarchical structure and the fact that inference is NP-hard. An alternative is a hierarchical prior model called a Tree Structured Belief Network (TSBN) (Bouman and Shapiro, 1994). Inference in TSBNs is efficient (linear time) and parameter estimation can be carried out using the EM algorithm. However TSBNs have the disadvantage that they can give rise to "blocky" segmentations.

In recent work on "Dynamic Tree" (DT) belief networks, we specify a prior over a large number of TSBNs. Experiments show that DTs are capable of generating images that are less blocky, and the models have better translation invariance properties than a fixed, "balanced" TSBN. We report results on both exact and approximate inference methods for DTs and on parameter estimation for these networks.
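To illustrate where the "blocky" behaviour of a fixed TSBN comes from (a toy sampler under assumed transition probabilities, not the models of the talk), one can sample a labelling from a balanced quadtree:

```python
import numpy as np

def sample_tsbn(levels, n_classes=3, flip=0.1, rng=None):
    """Sample a label image from a balanced quadtree TSBN: the root
    draws a class uniformly; each child keeps its parent's class with
    probability 1 - flip, otherwise redraws uniformly. Whole subtrees
    inherit their root's label, which makes the samples blocky."""
    rng = np.random.default_rng(rng)
    labels = np.array([[rng.integers(n_classes)]])
    for _ in range(levels):
        labels = np.kron(labels, np.ones((2, 2), dtype=int))  # 4 children
        redraw = rng.random(labels.shape) < flip
        labels[redraw] = rng.integers(n_classes, size=redraw.sum())
    return labels
```

Because label changes can only occur along the fixed quadtree boundaries, region edges align with the block structure; a DT prior over many tree structures relaxes exactly this constraint.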