British Machine Vision Association and Society for Pattern Recognition
One Day BMVA Technical Meeting in association with UKIVA and IEE/E4 to be held on the 1st December 1999 at British Institute of Radiology, 36 Portland Place, London.
Chairperson: Majid Mirmehdi (University of Bristol)
Industrial Inspection is one of the most profitable areas of Computer Vision as it is one of the few areas which can truly provide working, practical, real-world solutions to real-world problems. It combines image acquisition devices, high-speed computers, simple and complex algorithms, and robotics to oversee and automate manufacturing processes. The aim of this meeting is to bring together a number of talks on the application of novel image processing and computer vision techniques to solve real-world problems.
10:30 Registration and coffee
10:55 Introduction and welcome, Majid Mirmehdi (Bristol University)
11:00 Algorithms for real-time location of contaminants in cereals, E.R. Davies et al. (Royal Holloway College, University of London)
11:25 Scene in the Right Light, G. Awcock (University of Brighton)
11:50 Characterising the Illumination In an Automated Visual Inspection Work Cell, B. G. Batchelor (Cardiff University)
12:15 Measuring and Evaluation of Rolled Products, V. Smutny (Czech Technical University of Prague)
13:45 Automatically Detecting Defects in the Bodywork of Vehicles, M. Evans et al. (University of Bristol)
14:10 Planning Viewpoints for 3D Visual Inspection, D.R. Roberts et al. (Cardiff University)
14:35 Fast MRF image segmentation with applications in the food industry, A. Ripke and A. R. Allen (Aberdeen University)
15:35 The use of X-ray Machine Vision for on-line Poultry Inspection, M. Graves et al. (Intelligent Manufacturing Systems Ltd)
16:00 A Framework for Motion Parameter Estimation, M. A. Rodrigues (University of Hull)
16:25 Summary and discussion
16:40 Closing remarks and finish
Please return this form to Richard Bowden, Dept M & ES, Brunel University, Uxbridge, UB8 3PH, or via email to email@example.com. The meeting is free to members of the BMVA, UKIVA or IEE, but a charge of £20 is payable by non-members. A sandwich lunch is bookable on the day. When registering, please enclose a cheque for the appropriate amount made payable to "The British Machine Vision Association".
NAME:
BMVA MEMBER: YES/NO
The BMVA is an accredited provider for the IEE/IMechE Continuing Professional Development scheme: attendance at this meeting will earn delegates 3 CPD points.
Algorithms for real-time location of contaminants in cereals
E.R. Davies*, M. Bateman*, D.R. Mason*, J. Chambers** and C. Ridgway**
* Machine Vision Group, Department of Physics, Royal Holloway, University of London, Egham Hill, Egham, Surrey, TW20 0EX
** Central Science Laboratory, Sand Hutton, York, YO41 1LZ
This work is concerned with the need to check consignments of grain quickly for contaminants. The contaminants of greatest concern to the cereals industry are insects, rodent droppings and moulds such as ergot. All can in principle be located from their darker coloration. However, this approach proved unreliable, and instead we have used new types of linear feature detector to locate insects and novel morphological operations to locate rodent droppings and ergot. A key aspect of the work has been the need to devise algorithms that will work in real time on limited hardware, the target being to inspect a 3 kg sample of grain in 3 minutes.
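As a rough illustration of the dark-coloration route described above (not the authors' actual detectors), the sketch below thresholds a toy grain image and applies a 3x3 morphological opening, so that isolated dark speckle is suppressed while a compact dark region, standing in for a rodent dropping, survives. The image, threshold and structuring element are all invented for the demo.

```python
# Hedged sketch: threshold dark pixels, then morphological opening
# (erosion followed by dilation) with a 3x3 structuring element.

def threshold_dark(img, t):
    """1 where the pixel is darker than t, else 0."""
    return [[1 if p < t else 0 for p in row] for row in img]

def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def opening(mask):
    return dilate(erode(mask))

# Toy 6x6 grain image: 200 = bright grain, 40 = compact dark blob
# (simulated dropping), 30 = one isolated dark speckle (noise).
img = [[200] * 6 for _ in range(6)]
for y in (2, 3, 4):
    for x in (2, 3, 4):
        img[y][x] = 40
img[0][5] = 30

mask = opening(threshold_dark(img, 100))
print(sum(map(sum, mask)))   # only the compact blob survives the opening
```

In a real system the structuring element size would be matched to the expected contaminant size, which is the essence of using morphology rather than bare thresholding.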
Scene in the Right Light
G. Awcock, University of Brighton, Lewes Road, Moulsecoomb, East Sussex BN2 4GJ
When setting out to design a practicable machine vision system, it is impossible to overstate the importance of fully exploiting opportunities to control scene illumination. This talk will 'illuminate' that message with practical case histories.
Characterising the Illumination in an Automated Visual Inspection Work Cell
B. G. Batchelor
Dept of Computer Science, Cardiff University, 5 The Parade, Cardiff, CF24 3XF, Wales
It is common practice to begin the design of an illumination-viewing sub-system for Machine Vision by experimentation: simply manoeuvring the lamps and camera around until a high-contrast image has been obtained. There then follows the process of duplicating that lighting pattern in a rugged, well-engineered rig that will withstand the rigours of the factory floor. This requires careful measurement, to obtain the same illumination angles. Two alternative approaches are suggested in this paper. In the first, an image is created in a hemispherical mirror, so that the lights appear as bright spots. By measuring the positions of these spots, the geometry of a lighting rig can be determined very quickly and easily. In the second approach, a video camera fitted with a fish-eye lens can be used to obtain a map that is then analysed in a similar way. A third technique is described for characterising the lighting when a large, diffusely reflecting object is to be examined. This is also able to provide sufficient data to enable a lighting pattern to be duplicated.
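A hedged sketch of the first, hemispherical-mirror technique, under the simplifying assumptions of a distant lamp and an orthographic view straight down onto a mirror of radius R: a spot at radial distance r corresponds to a mirror normal at polar angle phi = asin(r/R), and by the reflection law the lamp then sits at elevation 90 - 2*phi degrees above the horizon, with azimuth read directly from the spot's direction. All numbers are invented.

```python
import math

def lamp_direction(spot_x, spot_y, mirror_radius):
    """Recover (elevation_deg, azimuth_deg) of a distant lamp from its
    spot position relative to the mirror centre (same units as the radius).
    Assumes an orthographic view straight down onto the hemisphere."""
    r = math.hypot(spot_x, spot_y)
    phi = math.asin(min(r / mirror_radius, 1.0))   # normal's polar angle
    elevation = 90.0 - 2.0 * math.degrees(phi)     # reflection law
    azimuth = math.degrees(math.atan2(spot_y, spot_x))
    return elevation, azimuth

# A spot at the mirror centre means the lamp is at the zenith:
print(lamp_direction(0.0, 0.0, 50.0))
# A spot halfway out (r = R/2) means the normal is at 30 degrees,
# so the lamp is 30 degrees above the horizon:
print(lamp_direction(25.0, 0.0, 50.0))
```

The same table of spot positions measured on the prototype rig would then be enough to set out the lamps on the production rig without angle gauges.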
Measuring and Evaluation of Rolled Products
Vladimir Smutny, Center for Machine Perception, Czech Technical University of Prague, Czech Republic
A device for measuring and evaluating the profiles of rolled products is presented. The device is a stand-alone station which measures the profiles of samples cut from rolled products. The measurement is based on the well-known laser plane range finder principle. The sample is rotated to compose its complete profile from partial measurements. The complete profile is segmented using a CAD model and a few important dimensions on the profile are measured. The contribution of the work lies mainly in the suitable combination of algorithms to achieve maximal measurement precision with the components used.
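The composition of a complete profile from partial measurements can be sketched as follows, assuming a turntable with known rotation angles and 2D profile points measured in a fixed sensor frame (the laser-triangulation and CAD-segmentation stages are not shown). The points and angles are invented.

```python
import math

def to_world(points, turntable_deg):
    """Rotate a partial profile, measured in the sensor frame, back by the
    turntable angle so that all partial measurements share one frame."""
    a = math.radians(-turntable_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# Merge four partial scans taken at 90-degree turntable steps; each
# partial scan here is a single stand-in point 10 units from the axis.
profile = []
for angle in (0, 90, 180, 270):
    partial = [(10.0, 0.0)]          # stand-in for one laser-plane scan
    profile.extend(to_world(partial, angle))
print(profile)
```

The real precision problem is in calibrating the rotation axis and sensor pose; once those are known, the merge itself is the rigid transform shown above.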
Automatically Detecting Defects in the Bodywork of Vehicles
M. Evans, C. Setchell, B. Thomas and T. Troscianko
Department of Computer Science, University of Bristol, Bristol, BS8 1UB
We have developed a system for automatically detecting defects in the bodywork of vehicles before they leave the factory. Each vehicle is imaged by a set of 36 cameras mounted in a 'drive through' tunnel which forms the last stage of the production line. For each camera we have a golden image, which is simply an image of a defect-free vehicle. The image from each camera is aligned, via an affine transformation, with its golden image. Frame differencing reveals regions of suspected defects, from which features are extracted (size, shape, intensity, etc.). A small set of heuristic rules in conjunction with a neural network then decides which regions are truly defects.
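The golden-image comparison step can be sketched as below; the affine alignment and the rule/neural-network classification are omitted, and all pixel values and the noise threshold are invented.

```python
# Hedged sketch: subtract the inspected panel from a defect-free golden
# image, threshold the absolute difference, and report simple features
# (size, mean intensity difference) of the suspect region.

golden = [[120, 120, 120, 120],
          [120, 120, 120, 120],
          [120, 120, 120, 120]]

panel  = [[120, 120, 120, 120],
          [120,  60,  70, 120],     # a dark, scratch-like defect
          [120, 120, 120, 120]]

THRESH = 25                         # assumed camera-noise margin

suspect = [(y, x)
           for y, row in enumerate(golden)
           for x, g in enumerate(row)
           if abs(panel[y][x] - g) > THRESH]

size = len(suspect)
mean_diff = sum(abs(panel[y][x] - golden[y][x]) for y, x in suspect) / size
print(size, mean_diff)
```

Features such as these would then feed the heuristic rules and the neural network; the alignment step matters because any misregistration shows up as spurious difference regions.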
Planning Viewpoints for 3D Visual Inspection
D.R. Roberts and A.D. Marshall, Department of Computer Science, Cardiff University, PO Box 916, Cardiff, CF2 3XF
Many machine vision tasks, e.g. object recognition and object inspection, cannot be performed robustly from a single image. For certain tasks (e.g. 3D object recognition and automated inspection) the availability of multiple views of an object is a requirement. However, until recently, little consideration has been given to identifying strategies for selecting such viewpoints.
The work presented in this talk details a novel approach to selecting a minimised number of views that allow each object face to be adequately viewed according to specified constraints on viewpoints and other features important to inspection. The planner is generic and can be employed for a wide range of multiple-view acquisition/inspection systems, ranging from camera systems mounted on the end of a robot arm, i.e. an eye-in-hand camera setup, to a turntable with fixed stereo cameras that allows different views of an object to be obtained.
Additional constraints can also be specified for the purpose of adapting the planner to search for viewpoints suitable for the specific inspection of features such as datum faces, parallel and perpendicular faces, and special views.
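Selecting a small set of views that together see every face adequately is, at heart, a covering problem. The sketch below shows a greedy set-cover pass over invented viewpoint visibility sets; the real planner additionally enforces the geometric viewing constraints described above.

```python
# Hedged sketch: greedy set cover over candidate viewpoints. Each
# viewpoint maps to the set of faces it sees adequately; we repeatedly
# pick the viewpoint covering the most not-yet-inspected faces.

def plan_views(visible, faces):
    """visible: viewpoint name -> set of faces it sees adequately."""
    uncovered, plan = set(faces), []
    while uncovered:
        best = max(visible, key=lambda v: len(visible[v] & uncovered))
        if not visible[best] & uncovered:
            raise ValueError("some faces cannot be covered")
        plan.append(best)
        uncovered -= visible[best]
    return plan

# Invented visibility sets for three candidate viewpoints:
visible = {"v1": {"top", "front"},
           "v2": {"front", "left"},
           "v3": {"back", "right", "bottom"}}
faces = ["top", "front", "left", "back", "right", "bottom"]
print(plan_views(visible, faces))
```

Greedy cover is not guaranteed minimal, which is why a planner that must certify coverage under viewpoint constraints needs the more careful search the talk describes.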
Fast MRF image segmentation with applications in the food industry
A. Ripke and A. R. Allen, Department of Engineering, University of Aberdeen, Aberdeen AB24 3FX
We report some experiments in the segmentation of colour images and methods of fast implementation for embedded vision systems. The application stems from the problems a food processing company has with the automation of vegetable sorting. The images exhibit areas of highlight and shadow, as well as variations in colour and luminance: these features are typical of many real applications without constrained lighting. We use a combination of approaches to reliably find areas of darker staining and different colours: (i) an adaptive clustering algorithm based on Markov random fields (MRF); (ii) an MRF filter followed by watershed segmentation. The results are encouraging: however, the algorithms are computationally intensive. We have developed fast implementations on multiple processors, and we discuss algorithmic optimisations for compilation into programmable hardware.
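One classical way to realise such MRF relaxation (not necessarily the authors' exact algorithm) is iterated conditional modes: each pixel's label trades off fit to a class mean against agreement with its neighbours, which smooths away isolated misclassifications such as a highlight inside a stain. The sketch below runs ICM on an invented 1-D signal with two classes and an assumed smoothness weight.

```python
# Hedged sketch: ICM on a 1-D signal. Label energy = squared distance to
# the class mean (data term) + BETA per disagreeing neighbour (MRF prior).

MEANS = {0: 20.0, 1: 200.0}   # dark stain vs. normal vegetable surface
BETA = 2000.0                 # assumed smoothness weight

def icm(signal, iters=5):
    # Initialise with the maximum-likelihood (nearest-mean) labelling.
    labels = [min(MEANS, key=lambda k: (s - MEANS[k]) ** 2) for s in signal]
    for _ in range(iters):
        for i in range(len(signal)):
            def energy(k):
                e = (signal[i] - MEANS[k]) ** 2           # data term
                for j in (i - 1, i + 1):                  # neighbour term
                    if 0 <= j < len(signal) and labels[j] != k:
                        e += BETA
                return e
            labels[i] = min(MEANS, key=energy)
    return labels

# A highlight (value 115) inside a dark stain is smoothed into the stain:
signal = [25, 18, 115, 22, 30, 190, 205, 198]
print(icm(signal))
```

The computational cost the authors mention comes from repeating this local update over a full 2-D image for many sweeps, which is what motivates their multi-processor and programmable-hardware implementations.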
The use of X-ray Machine Vision for on-line Poultry Inspection
Mark Graves, Intelligent Manufacturing Systems Limited
Bruce Batchelor, University of Cardiff
Peter Dobson, Alex Smith, University of Oxford
This presentation will outline the authors' research into the use of X-ray machine vision techniques for the detection of bones in poultry meat. The limitations of conventional systems based on thresholding will be described, followed by a system developed by the authors using morphological segmentation and neural network classification. Results from extensive on-line testing will be discussed, along with the use of such a system within a production environment with closed-loop control and the limitations of this system. Finally, results from the latest trials of a novel dual-energy sensing technology will be presented.
A Framework for Motion Parameter Estimation
Marcos A Rodrigues
Department of Computer Science, The University of Hull, Hull HU6 7RX
We are investigating a system design for fast-response, automatic 3D machine vision inspection of high-volume manufacturing components. A case study of a production line at Donaldson Filter Components Ltd (Hull) is used, which includes air filter components for petrol and diesel engines, and industrial pollution control. Conformance to dimensional specifications according to a Quality Control Plan is achieved by inspection at every stage of the manufacturing process. Distinct inspection strategies are used for different components, which can be inspected either at setup or through statistical process control. Up to now, Donaldson have relied on optimisation of the manufacturing process and on human visual inspection to reduce the incidence of, and weed out, defective components. This has proved unsatisfactory given the sheer volume of production and the number of parameters and possible defects that need to be verified at inspection time. The more urgent inspection problems are those related to conformance to dimensional specifications. The main parameters of interest are width, height, depth, relationships (perpendicular, parallel, non-parallel sides) and internal measurements, and thus involve metric measurements or structural analysis of 3D objects from 2D images and 3D range image data.
Chasles demonstrated over a century ago that any rigid planar displacement can be represented by a pure rotation about a fixed point, or pole, in the plane. Equally, any rigid spatial displacement can be represented by a screw motion consisting of a rotation about a unique screw axis and a translation along the same axis. While such concepts have been widely used to analyse the nature and characteristics of rigid body motions in kinematics and mechanics, their implications for the calibration of transformation parameters in computer vision have not yet been properly assessed.
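Chasles' planar result can be made concrete: for a displacement x -> R(theta)x + t with theta != 0, the pole p is the fixed point satisfying (I - R)p = t, which a direct 2x2 solve recovers. The sketch below, with invented numbers, computes the pole and checks that the displacement leaves it fixed; it is an illustration of the theorem, not the authors' calibration algorithms.

```python
import math

def pole(theta, tx, ty):
    """Fixed point of the planar displacement x -> R(theta) x + t."""
    c, s = math.cos(theta), math.sin(theta)
    # (I - R) = [[1-c, s], [-s, 1-c]]; invert the 2x2 directly.
    det = (1 - c) ** 2 + s ** 2          # nonzero whenever theta != 0
    px = ((1 - c) * tx - s * ty) / det
    py = (s * tx + (1 - c) * ty) / det
    return px, py

theta, tx, ty = math.pi / 2, 1.0, 0.0    # invented displacement
px, py = pole(theta, tx, ty)

# Verify the pole is indeed fixed under the displacement:
c, s = math.cos(theta), math.sin(theta)
moved = (c * px - s * py + tx, s * px + c * py + ty)
print((px, py), moved)
```

In a calibration setting, image correspondence vectors constrain exactly such (theta, t) parameters, which is the link the framework in this talk develops.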
In this presentation, we analyse rigid planar and spatial displacements from a computer vision perspective, in which calibration of transformation parameters is performed from image correspondences. We first describe geometric properties of image correspondence vectors. We then formalise such properties by extending Chasles' work and put forward a new general theoretical framework for the analysis of rigid body transformations in 2D and 3D. This framework can be used as a basis for the development of calibration algorithms, as demonstrated by the description of two novel algorithms to calibrate transformation parameters, which are validated through experiments. Our analysis addresses central issues in computer vision applications as it provides closed-form solutions for all calibrated parameters and an accurate insight into the number of calibration solutions. Finally, consideration is given to how such a framework can be translated into real-world applications in connection with the problem of industrial inspection of filter components.