Abstract (in English)
Camera tracking, or estimation of the camera's extrinsic parameters, finds extensive applications: robot navigation and localization in robotics, 3D reconstruction from camera motion parameters and augmented reality in computer vision, and construction of digital elevation models from aerial images in photogrammetry are all instances of camera tracking. In this thesis, a filtering-based camera tracking approach for sequences of monocular images is presented. The video sequences are captured in indoor environments with a maximum depth of 5 to 6 meters, and the objects observed in the scene are static. In the proposed approach, camera localization and mapping are initialized using a few calibrated images. This enables the system to construct a map without scale ambiguity, contrary to most visual simultaneous localization and mapping (visual SLAM) algorithms; in other words, the estimated camera pose and scene structure have metric values. The proposed algorithm is classified as a feature-based approach: the camera's extrinsic parameters are estimated from the information provided by tracking features across successive images. Additionally, the camera moves slowly enough that motion blur in the video frames does not disrupt the feature-tracking routine. For consecutive video frames, extracted features are tracked using a pyramidal method based on the Lucas-Kanade optical flow approach, and the camera pose parameters are estimated within a particle filter framework. To evaluate the performance of the proposed algorithm, two sets of video sequences were used. The first set was recorded around a computer desk in the laboratory using a simple camera with VGA resolution; to provide ground-truth camera poses, a chessboard with known cell size is employed. The second set consists of four indoor sequences selected from the RGB-D benchmark of the Computer Vision Group of the Technical University of Munich (TUM).
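The per-frame tracking step described above can be illustrated with a minimal, single-level Lucas-Kanade sketch. Everything below is an assumption made for the demonstration (a synthetic smoothed image pair and a single global window); the pyramidal tracker used in the thesis applies this least-squares step per feature window and per pyramid level.

```python
import numpy as np

# Minimal single-level Lucas-Kanade sketch: estimate the translation d that
# maps frame I onto frame J by solving the linearized brightness-constancy
# equations in least squares. The synthetic frames are demo assumptions.
rng = np.random.default_rng(1)

def lk_translation(I, J):
    """One Lucas-Kanade step: solve A d = -It for a global shift d."""
    Ix = np.gradient(I, axis=1)          # horizontal spatial gradient
    Iy = np.gradient(I, axis=0)          # vertical spatial gradient
    It = J - I                           # temporal difference
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    d, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return d                             # (dx, dy) in pixels

# Build a smooth synthetic frame and shift it one pixel to the right.
base = rng.random((64, 64))
kernel = np.ones(5) / 5.0
smooth = np.apply_along_axis(lambda v: np.convolve(v, kernel, "same"), 1, base)
smooth = np.apply_along_axis(lambda v: np.convolve(v, kernel, "same"), 0, smooth)
I, J = smooth, np.roll(smooth, 1, axis=1)

dx, dy = lk_translation(I, J)            # dx should be close to 1, dy to 0
```

A real tracker repeats this step coarse-to-fine over an image pyramid so that displacements larger than the linearization range remain recoverable.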
For both test sets, the camera's internal parameters are calibrated in advance. The experimental results show that the proposed approach tracks the camera trajectory with high accuracy. Moreover, under relatively smooth camera motion, the algorithm is capable of extending the currently constructed map to regions that were initially outside the camera's field of view without losing the correct trajectory.
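The particle-filter estimation step can likewise be sketched in one dimension. This is an illustration only, not the thesis implementation, which estimates the full 6-DoF camera pose; the random-walk motion model, Gaussian observation likelihood, systematic resampling scheme, and all numeric parameters below are assumptions chosen for the demo.

```python
import numpy as np

# Minimal 1D particle filter: track a moving position from noisy observations.
rng = np.random.default_rng(0)

def particle_filter(observations, n=500, motion_std=0.1, obs_std=0.2):
    """Estimate a 1D trajectory from noisy position observations."""
    particles = rng.normal(0.0, 1.0, n)      # initial belief around 0
    estimates = []
    for z in observations:
        # Predict: propagate every particle with the motion model.
        particles += rng.normal(0.0, motion_std, n)
        # Update: weight particles by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        # Systematic resampling to avoid weight degeneracy.
        u = (np.arange(n) + rng.random()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(w), u), n - 1)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Demo: a camera moving at constant speed, observed with noise.
true_path = np.linspace(0.0, 2.0, 40)
observations = true_path + rng.normal(0.0, 0.2, true_path.size)
estimated_path = particle_filter(observations)
```

In the thesis setting, the observation likelihood would instead be derived from the reprojection of tracked map features into each frame, but the predict-weight-resample cycle has the same shape.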