British Machine Vision Association and Society for Pattern Recognition

Machine Vision in Photogrammetry

A one-day joint BMVA and Photogrammetric Society technical meeting, in association with IEE/E4, to be held on 26th May 1999 at the British Institute of Radiology, 36 Portland Place, London.

Chairpersons: Tim Ellis (City University); Stuart Robson (UCL)

ABSTRACT: The aim of this meeting is to provide a forum for researchers in the fields of machine vision and photogrammetry to meet and exchange information on active research issues within the two communities.
    Current research in machine vision has seen an increasing use of multiple cameras for a variety of applications (broadcasting of sporting events, security surveillance, traffic monitoring, inspection), and the techniques and algorithms needed to analyse such image data can benefit from the methods currently used in photogrammetry. Conversely, the need to extract and understand information in images is important in automating the process of high accuracy measurement.
    The meeting will comprise talks covering both the theoretical and practical aspects of topics common to both areas of research, and will include a tutorial on camera calibration.

10:30    Registration and coffee

10:55    Introduction and welcome, Tim Ellis (City University); Stuart Robson (UCL)

11:00    Aspects of Camera Self-calibration, Ian Reid, University of Oxford

11:45    Gauge Methods in 3D Reconstruction, Phil McLauchlan, University of Surrey

12:15    Fast and Robust Integration of Multiple Range Models, Tim Niblett, Turing Institute

12:45    Surface-based Structure from Motion, Phil McLauchlan, University of Surrey

13:15    Lunch

14:00    Camera Calibration – a photogrammetric viewpoint, Tim Clarke, City University

14:30    An Integrated System for 3D Scene Reconstruction, Kia Ng, University of Leeds

15:00    Tea

15:30    Industrial Applications of Geometrically Indexed Panoramic Image Archives, Andy Deacon, As Built Solutions Ltd

16:00    A Visualisation Methodology Suited to Engineering Measurement Using Vision Metrology, Neil Woodhouse, University College London

16:30   Summary and discussion

16:40    Closing remarks and finish


Please return this form to Richard Bowden, Dept M & ES, Brunel University, Uxbridge, UB8 3PH, or via email. The meeting is free to members of the BMVA, the Photogrammetric Society or the IEE, but a charge of £20 is payable by non-members. A sandwich lunch is bookable on the day. When registering, please enclose a cheque for the appropriate amount made payable to "The British Machine Vision Association".

NAME: ………………………………………………………………………………….

ADDRESS: ………………………………………………………………………………….


TEL: ………………………………………………………………………………….


The BMVA is an accredited provider for the IEE/IMechE Continuing Professional Development scheme: attendance at this meeting will earn delegates 3 CPD points.

Title: Aspects of camera self-calibration
Dr Ian Reid, Department of Engineering, University of Oxford.

Camera calibration is a prerequisite for the computation of metric structure from any number of views of a scene. Although this can be achieved through the use of calibration grids or other accurately known scene structure, much interest in computer vision during the last six years has centred on methods for camera "self"-calibration.

In this presentation I will give a tutorial introduction to the ideas behind camera self-calibration and then, without attempting to be comprehensive, discuss some recent results.
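As background to the tutorial topic, the quantities that calibration (self- or otherwise) estimates can be illustrated with a minimal pinhole-camera sketch; the numeric values below are purely illustrative and are not taken from the talk.

```python
# Minimal pinhole-camera sketch: the intrinsic matrix K that
# calibration seeks to estimate (focal lengths, principal point,
# skew), and the projection of one 3D point given in camera
# coordinates. Values are illustrative only.

def make_K(fx, fy, cx, cy, skew=0.0):
    """3x3 intrinsic matrix as nested lists."""
    return [[fx, skew, cx],
            [0.0,  fy, cy],
            [0.0, 0.0, 1.0]]

def project(K, X):
    """Project a camera-frame 3D point X = (x, y, z) to pixel (u, v)."""
    x, y, z = X
    u = (K[0][0] * x + K[0][1] * y + K[0][2] * z) / z
    v = (K[1][1] * y + K[1][2] * z) / z
    return u, v

K = make_K(800.0, 800.0, 320.0, 240.0)
uv = project(K, (0.1, -0.2, 2.0))
```

Self-calibration methods recover the entries of K from image measurements alone, without a calibration grid.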

Further information can be found at:


Title: Gauge Methods in 3D Reconstruction
Phil McLauchlan, Department of Electronic and Electrical Engineering, University of Surrey

Bundle adjustment is a standard photogrammetric technique for optimizing the 3D reconstruction of a scene from multiple images. There is an inherent gauge (coordinate frame) ambiguity in 3D reconstruction that can seriously affect the convergence of bundle adjustment algorithms. Existing schemes for dealing with this ambiguity, both in photogrammetry and computer vision, have drawbacks. In those schemes, the results of bundle adjustment depend on the initially chosen coordinate frame, or on the order of the images, or both. Our scheme, which eliminates both these effects, involves first normalizing an initial reconstruction to achieve coordinate frame invariance, and then selecting gauge constraints on the parameter updates so that the normalization conditions applied are maintained to first order by the bundle adjustment iteration. The new approach applies to all the well-known 3D reconstruction models: projective, affine and Euclidean, and has been implemented for the reconstruction of 3D point & line features as well as planar surfaces.

The normalization stage partially removes the gauge freedom, reducing the coordinate frame choice from a general 3D homography/affinity/similarity transformation to an orthogonal transformation, which is a 3x3 rotation in the affine & Euclidean cases, and a 4x4 orthogonal matrix in the projective case. In the projective case the normalisation relies on a general conjecture concerning projective vectors and matrices, which we strongly believe to be correct, based on extensive experimental evidence. The conjecture provides a new way to compute with projective quantities, allowing the effects of the choice of arbitrary scale factors and coordinate frame to be eliminated completely.

Our results suggest that our new treatment of the coordinate frame ambiguity problem in bundle adjustment achieves faster and more stable convergence than existing methods.
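A much-simplified illustration of the normalization idea in the Euclidean case: fixing a point cloud's centroid at the origin and its RMS radius at 1 removes the translation and scale parts of the coordinate-frame ambiguity (the rotation freedom remains, as the abstract notes). This is a sketch of the general concept, not the authors' algorithm.

```python
import math

def normalise_points(points):
    """Normalise a 3D point cloud: centroid at the origin, RMS radius 1.

    This removes the translation and scale gauge freedoms of a
    similarity transformation, leaving only a rotation undetermined.
    """
    n = len(points)
    # Remove translation gauge: subtract the centroid.
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    centred = [(x - cx, y - cy, z - cz) for x, y, z in points]
    # Remove scale gauge: divide by the RMS radius.
    rms = math.sqrt(sum(x * x + y * y + z * z for x, y, z in centred) / n)
    return [(x / rms, y / rms, z / rms) for x, y, z in centred]

pts = normalise_points([(1.0, 0.0, 0.0), (3.0, 0.0, 0.0)])
```

In the full scheme, gauge constraints on the parameter updates then keep these normalization conditions satisfied to first order throughout the bundle adjustment iteration.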

Further information can be found at:

Title: Fast and Robust Integration of Multiple Range Models
Tim Niblett, The Turing Institute

The Turing Institute's C3D(R) 3-D capture system captures all-round models of 3-D objects. The models are obtained by "integrating" multiple range maps, each range map being obtained from a pair of stereo images using matching and photogrammetric procedures.

This talk will provide a brief overview of the various approaches to range-model integration that have been discussed in the literature. The process used by C3D will be described and related to these, with particular emphasis on robustness and speed. Some examples of the output of C3D's integration method will be given.

For further information see:

Title: Surface-based Structure from Motion
Phil McLauchlan, Department of Electronic and Electrical Engineering, University of Surrey

The existing state of the art in structure-from-motion systems is the construction of sparse feature-based scene representations, e.g. from points and lines. The main drawback of such systems is the lack of surface information, which restricts their usefulness. Although it is possible to build surface modelling on top of the feature information, we have designed an algorithm that allows surface information to be built directly into the reconstruction algorithm, so that surface parameters may be computed along with the features. Constraints between surfaces and features, and between the surfaces themselves, may be incorporated.

Our work may be seen as extending existing photogrammetric bundle adjustment algorithms to compute surface parameters. We employ the recursive partitioning algorithm, well-known in the photogrammetry community, to obtain efficient iterative updates of the reconstruction. Incorporating the surface constraints modifies the sparse structure of the normal equations, making it important to order the feature, surface and motion parameter blocks appropriately to achieve the best performance. We employ the Variable State Dimension Filter (VSDF) to effect both batch and recursive updates within the same framework.
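The sparse elimination behind recursive partitioning can be sketched in miniature. In bundle adjustment the normal equations have the block form [U W; W' V][dc; dp] = [a; b], with V block-diagonal over the structure parameters, so the structure updates dp can be eliminated cheaply via the Schur complement S = U - W V⁻¹ W'. The sketch below uses scalar blocks (one camera parameter, scalar per-point blocks) purely to show the algebra; real systems use 6x6 and 3x3 blocks, and this is not the authors' implementation.

```python
# Toy reduced-camera-system solve via the Schur complement.
# U: camera block (scalar), W: list of camera-point coupling terms,
# V: list of per-point blocks (scalars), a, b: right-hand sides.

def solve_reduced(U, W, V, a, b):
    n = len(V)
    # Schur complement: eliminate the per-point parameters.
    S = U - sum(W[i] * W[i] / V[i] for i in range(n))
    rhs = a - sum(W[i] * b[i] / V[i] for i in range(n))
    dc = rhs / S                                  # camera update
    # Back-substitute for the per-point updates.
    dp = [(b[i] - W[i] * dc) / V[i] for i in range(n)]
    return dc, dp

dc, dp = solve_reduced(4.0, [1.0, 1.0], [2.0, 2.0], 3.0, [1.0, 1.0])
```

The cost of the elimination is linear in the number of points, which is what makes bundle adjustment tractable for large feature sets; incorporating surface constraints, as the abstract notes, alters this sparsity pattern and makes the ordering of the parameter blocks important.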

Further information can be found at: 

Title: Camera calibration - a photogrammetric viewpoint
Dr. Tim Clarke and Dr. Xinchi Wang, School of Engineering, City University

It can be argued that, with the possible exception of astronomers, photogrammetrists have derived the greatest quantity and quality of geometric information from images. For example, images from low-level aerial surveys to remote sensing from satellites have produced maps of high quality for civil and military uses. As a result of such activity, "camera calibration" (almost always meaning the estimation of the camera's interior parameters) has often been of national importance, resulting in multi-million pound camera calibration facilities. On a practical level, a wide range of calibration methods and models have been developed; some of these have become so standard that they have remained little changed over the past thirty or forty years. Nevertheless, camera calibration remains a live topic and further advances are constantly being made.

The developments in photogrammetry of aerial surveying for mapping are, of course, large, and there are many aspects where machine vision techniques are used to automatically extract structures such as buildings and roads from such imagery. However, an area of closer overlap with machine vision is what is variously called close-range photogrammetry, videogrammetry, vision metrology, videometrics, or digital photogrammetry. Here there are a number of areas where the two communities can learn from each other; camera calibration is a good example. This tutorial will briefly review the development of photogrammetric camera calibration methods and models and give some practical examples of the calibration of a variety of cameras.
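One standard interior-orientation component of photogrammetric calibration models can be sketched briefly: Brown-style radial lens distortion, modelled as a polynomial in the radial distance from the principal point. The coefficient values below are made up for illustration and are not from the tutorial.

```python
# Radial lens distortion sketch (Brown model, first two terms).
# (x, y) are image coordinates measured relative to the principal
# point; k1, k2 are radial distortion coefficients.
#   x' = x * (1 + k1*r^2 + k2*r^4),  y' = y * (1 + k1*r^2 + k2*r^4)

def radial_distort(x, y, k1, k2):
    r2 = x * x + y * y
    f = k1 * r2 + k2 * r2 * r2
    return x * (1.0 + f), y * (1.0 + f)
```

Calibration estimates k1 and k2 (along with the principal distance and principal point) by least-squares adjustment over many image observations; the distortion vanishes at the principal point and grows with radial distance.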

For further information see:

Title: An Integrated System for 3D Scene Reconstruction
Kia Ng, Department of Computing, University of Leeds

This talk will describe an integrated multi-sensory system for the acquisition and reconstruction of textured 3D scene models from laser range data and digital images, developed within the EU ACTS project RESOLV. The approach has been implemented as a collection of algorithms and sensors within a prototype device for 3D reconstruction, known as the Environmental Sensor for Telepresence (EST). The EST can take the form of a push trolley or of an autonomous mobile platform. The Autonomous EST (AEST) has been designed to provide an integrated solution for automating the creation of complete models. Embedded software performs several functions, including triangulation of the range data, registration of video texture, and registration and integration of data acquired from different capture points. Potential applications include facilities management for the construction industry and the creation of reality models for use in general areas of virtual reality, for example virtual studios, virtualised reality for content-related applications (e.g., CD-ROMs), social telepresence and architecture. I will describe the main components of the EST/AEST and present some example results. The reconstructed model is encoded in VRML format so that it is possible to access and view the model via the World Wide Web.

Further information about the project and example reconstructions can be found on the RESOLV web page:

Title: Industrial Applications of Geometrically Indexed Panoramic Image Archives.
Andy Deacon, As Built Solutions

The talk will describe the use of large scale, geometrically indexed visual archives in the process industries. Applications include providing spatial data for the creation and update of 'as-built' 3D CAD models and providing a visual interface with facilities management systems.

For further information see:

Title: A visualisation methodology suited to engineering measurement using vision metrology

Neil Woodhouse, Department of Geomatic Engineering, University College London

Techniques employing vision metrology to make high precision measurements of engineering structures for manufacturing purposes are in widespread use within the aeronautic, automobile and shipbuilding industries. Extending the applicability of such techniques to engineering disciplines where measurement is undertaken for monitoring or verification purposes requires the rapid representation and visualisation of both spatial information and other engineering data.

This presentation will introduce a generalised technique for the generation of computer graphics surface models from geometrically precise image networks taken for vision metrology purposes. The methodology incorporates a triangulation technique allied with robust testing routines that utilise image content, including target location, occlusion and image texture information, to provide a solution that is solely dependent on network geometry. End-use examples taken from a variety of civil and mechanical engineering applications will be given.

For further information see: