

  • Research Article
  • Open Access

Lane Tracking with Omnidirectional Cameras: Algorithms and Evaluation

EURASIP Journal on Embedded Systems 2007, 2007:046972

  • Received: 13 November 2006
  • Accepted: 29 May 2007
  • Published:


With a panoramic view of the scene, a single omnidirectional camera can monitor the 360-degree surround of a vehicle, or monitor the interior and exterior of the vehicle at the same time. We investigate the problems that arise when driver assistance functionalities designed for rectilinear cameras are instead integrated with a single omnidirectional camera. Specifically, omnidirectional cameras have been shown effective in determining head gaze orientation from within a vehicle. We examine the issues involved in integrating lane tracking functions using the same omnidirectional camera, which provides a view of both the driver and the road ahead of the vehicle. We present an analysis of the impact of the omnidirectional camera's reduced image resolution, the price of gaining the expansive view, on lane tracking accuracy. To do so, we present Omni-VioLET, a modified implementation of the vision-based lane estimation and tracking system (VioLET), and conduct a systematic performance evaluation of lane trackers operating on monocular rectilinear images and on omnidirectional images. We compare the lane tracking of Omni-VioLET and Recti-VioLET against ground truth using images captured along the same freeway road in a specified course. The results are surprising: with 1/10th the number of pixels representing the same space and about 1/3rd the horizontal image resolution of a rectilinear image of the same road, the omnidirectional camera implementation incurs only three times the mean absolute error in tracking the left lane boundary position.
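The resolution trade-off described above can be illustrated with a back-of-the-envelope comparison of horizontal pixel density. This is a minimal sketch with hypothetical sensor parameters (the image widths, field of view, and ring radius below are illustrative assumptions, not the paper's actual camera specifications): a rectilinear camera spreads its image width over a narrow field of view, while an omnidirectional (catadioptric) camera maps the entire 360-degree horizon onto the circumference of a ring in the image.

```python
import math

def rect_px_per_deg(width_px: float, hfov_deg: float) -> float:
    """Approximate horizontal pixel density of a rectilinear camera,
    treating the pixels as uniformly spread over the field of view."""
    return width_px / hfov_deg

def omni_px_per_deg(ring_radius_px: float) -> float:
    """Pixel density along a ring of an omnidirectional image:
    the full 360-degree horizon maps onto the ring's circumference."""
    return 2.0 * math.pi * ring_radius_px / 360.0

# Hypothetical parameters, for illustration only.
rect = rect_px_per_deg(width_px=640, hfov_deg=45)   # ~14.2 px/deg
omni = omni_px_per_deg(ring_radius_px=200)          # ~3.5 px/deg

print(f"rectilinear: {rect:.1f} px/deg, omni: {omni:.1f} px/deg, "
      f"ratio: {rect / omni:.1f}x")
```

With these assumed numbers the omnidirectional view retains roughly a quarter of the rectilinear horizontal resolution, in the same spirit as the roughly 1/3rd figure reported in the abstract; the exact ratio depends on the chosen field of view and where on the omnidirectional image the road falls.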


  • Ground Truth
  • Image Resolution
  • Tracking System
  • Electronic Circuit
  • Tracking Accuracy


Authors’ Affiliations

Laboratory for Intelligent and Safe Automobiles (LISA), University of California, San Diego, La Jolla, CA 92093-0434, USA




© S. Y. Cheng and M. M. Trivedi. 2007

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.