Calibration of CCD Cameras

Whenever a camera is used for geometric measurement, i.e., to determine the position or size of an object or its distance to another object, the properties of the camera's optical system must be known. Determining these properties is a classic research topic in computer vision. The main parameters needed for measurement are the linear extrinsic and intrinsic camera parameters. The extrinsic parameters describe the position (translation with respect to the origin of the world coordinate system) and the orientation (rotation) of the camera. The linear intrinsic parameters describe the linear properties of the optical system within the camera: the focal length, the aspect ratio of the pixels, the shear (skew), and the displacement of the principal point.
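As an illustration, a minimal Python sketch of these linear parameters (all numeric values below are made-up placeholders, not calibration results):

```python
import numpy as np

# Intrinsic matrix K: focal lengths fx, fy in pixel units (their ratio
# encodes the pixel aspect ratio), shear s, and principal point (cx, cy).
fx, fy = 1200.0, 1180.0
s = 0.0
cx, cy = 640.0, 360.0
K = np.array([[fx,  s, cx],
              [0., fy, cy],
              [0., 0., 1.]])

# Extrinsic parameters: rotation R (here a small rotation about the
# y-axis) and translation t of the camera relative to the world origin.
theta = np.deg2rad(5.0)
R = np.array([[ np.cos(theta), 0., np.sin(theta)],
              [ 0.,            1., 0.           ],
              [-np.sin(theta), 0., np.cos(theta)]])
t = np.array([[0.1], [0.0], [2.0]])

# 3x4 projection matrix of the pinhole model: P = K [R | t].
P = K @ np.hstack([R, t])
```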

Non-linear parameters describing radial, tangential, or prism distortion are also considered intrinsic parameters. Most calibration methods for single cameras derive the parameters of the camera model from corresponding points in the world coordinate system and on the camera target (the image sensor). For this procedure, they need one or more camera images of a planar calibration pattern or a calibration object whose surface carries marks of known size or with known distances to each other. For each visible mark, the position on the camera target is determined, yielding a list of correspondences between points in world coordinates and points in camera target coordinates, i.e., pixels. Using the mapping equation of a pinhole camera, these correspondences can then be modeled algebraically.
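A hedged sketch of this pinhole mapping, reusing the projection matrix P from the sketch above (the example point is invented):

```python
import numpy as np

def project(P, X_world):
    """Map a 3D world point to pixel coordinates via the pinhole model."""
    X_h = np.append(X_world, 1.0)   # homogeneous world point
    x_h = P @ X_h                   # homogeneous image point: x ~ P X
    return x_h[:2] / x_h[2]         # dehomogenize to (u, v) in pixels

# One mark of the calibration pattern, e.g. at (5 cm, 2 cm, 0) in world
# coordinates; paired with its measured pixel position, it forms one
# world/target correspondence.
X_world = np.array([0.05, 0.02, 0.0])
u, v = project(P, X_world)
```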

Traditional calibration methods determine the coefficients of the camera matrix by minimizing the sum of the errors made when mapping the known world points to the measured camera points. A prerequisite for this approach is that the correspondences between points in the world and on the target can be described by the pinhole camera model.
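One classic linear way to fit the camera matrix to such correspondences is the direct linear transform (DLT), sketched below; note that it minimizes an algebraic rather than the geometric reprojection error, which is typically reduced further by a subsequent non-linear refinement:

```python
import numpy as np

def dlt(world_pts, image_pts):
    """Estimate the 3x4 camera matrix P from at least six correspondences
    (world_pts: Nx3 array, image_pts: Nx2 array)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    # The least-squares solution is the right singular vector belonging
    # to the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```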

The image of a scene taken by an ideal pinhole camera is undistorted. A real camera, however, uses a system of several lenses and an aperture in the path of the light rays to bundle the light when projecting a scene onto the camera target, so the image of the scene becomes distorted.
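A common algebraic description of such distortions is the Brown-Conrady model; a minimal sketch, with illustrative radial coefficients k1, k2 and tangential coefficients p1, p2 (prism terms omitted for brevity):

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial and tangential distortion to normalized (undistorted)
    image coordinates, following the Brown-Conrady model."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```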

In general, the equations for distorted mappings cannot be solved analytically. For this reason, an initial set of distortion parameters is assumed in order to estimate the camera matrix. The distortion parameters are then adjusted to minimize the projection error, after which the camera matrix is estimated again. This is repeated iteratively until the projection error is small enough or adjusting the distortion parameters yields no further improvement.
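In practice, this alternating optimization is implemented by calibration libraries; as an illustration, OpenCV's calibrateCamera performs such a joint non-linear refinement. The 9x6 board, the 25 mm square size, and the variable captured_images are assumptions for this sketch, not details from the project:

```python
import cv2
import numpy as np

# World coordinates of a planar 9x6 chessboard (z = 0 plane), 25 mm squares.
pattern = np.zeros((9 * 6, 3), np.float32)
pattern[:, :2] = 0.025 * np.mgrid[0:9, 0:6].T.reshape(-1, 2)

obj_points, img_points = [], []
for image in captured_images:            # hypothetical grayscale views
    found, corners = cv2.findChessboardCorners(image, (9, 6))
    if found:
        obj_points.append(pattern)
        img_points.append(corners)

# Jointly estimates K, the distortion coefficients (k1, k2, p1, p2, k3),
# and per-view extrinsics by minimizing the total reprojection error.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image.shape[::-1], None, None)
```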

The basic premise of the traditional approach is that the distortion of a real lens system can be described completely by algebraic formulas and that its degrees of freedom can be computed. Both assumptions are usually not valid: the lenses differ from each other and typically have unknown characteristics. There are manufacturing tolerances in the fabrication of both the individual lenses and the lens system; the lenses are not perfectly symmetrical, not exactly concentric, not exactly orthogonal to the optical axis, and so on. In addition, the global optimum of the resulting optimization problem cannot be computed in general. The goal of this project is therefore to estimate and compensate the distortions introduced by the optical system before determining the linear camera parameters.

To this end, the distortion is measured by associating every point of the camera target with its corresponding point on a known image plane. It can be shown that if, for every point of the camera target, the point of the image plane it is the mapping of is known, then every camera image can be undistorted by reprojecting it onto that image plane, as sketched below.
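A sketch of that reprojection step, assuming the measured correspondence has already been converted into per-pixel sampling maps (the image size and the zero-filled maps and frame are placeholders):

```python
import cv2
import numpy as np

height, width = 480, 640                               # illustrative size
distorted_image = np.zeros((height, width), np.uint8)  # placeholder frame

# map_x[v, u] and map_y[v, u] give, for each pixel (u, v) of the
# undistorted output on the known image plane, the position on the
# camera target to sample from; in this project they would be filled
# from the measured per-pixel correspondence.
map_x = np.zeros((height, width), np.float32)
map_y = np.zeros((height, width), np.float32)

undistorted = cv2.remap(distorted_image, map_x, map_y,
                        interpolation=cv2.INTER_LINEAR)
```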

Publications
  • Conference Contributions
    • Tobias Elbrandt, Andreas Reisch, Jörn Ostermann
      Kompensation von chromatischer Aberration und geometrischen Verzerrungen
      Zweiter Workshop Optische Technologien, Tagungsband, pp. 161-163, Hannover, November 2008
    • Tobias Elbrandt, Ralf Dragon, Jörn Ostermann
      Non-iterative Camera Calibration Procedure Using A Virtual Camera
      Vision, Modeling, and Visualisation, Max-Planck-Institut für Informatik, pp. 143-150, Saarbrücken, Germany, November 2007
    • Tobias Elbrandt, Jörn Ostermann
      Kompensation optischer Abbildungsfehler
      Erster Workshop Optische Technologien, HOT, Hannover, November 2007
    • Patrick Mikulastik, Raphael Hoever, Onay Urfalioglu
      Error analysis of subpixel edge localisation
      Signal Processing for Image Enhancement and Multimedia Processing, Springer US, Vol. 31, No. 2, pp. 103-113, December 2006, edited by Ernesto Damiani, Kokou Yétongnon, Peter Schelkens, Albert Dipanda, Louis Legrand, Richard Chbeir
  • Journals
    • Tobias Elbrandt, Jörn Ostermann
      Enabling accurate measurement of camera distortions using dynamic continuous-tone patterns
      Integrated Computer-Aided Engineering, IOS Press, Vol. 18, pp. 3-14, 2011, edited by Hojjat Adeli