Automatic Dense Reconstruction from Uncalibrated Video Sequences. David Nistér, KTH. The work aims at completely automatic Euclidean reconstruction from uncalibrated handheld amateur video; the system is demonstrated on a number of sequences grabbed directly from a low-end video camera, after which the views are calibrated and a dense graphical model is produced.


The running times of the algorithm are recorded in Table 2, and the precision is 1 s. Figure 7c reports the number of points in the point cloud generated by MicMac.

Literature Review

The general 3D reconstruction algorithm without a priori position and orientation information can be roughly divided into two steps.

In this paper, a fully automatic approach to key-frame extraction without initial pose information is proposed. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment.

After the first update of the image queue, the formula for the reprojection error of the bundle adjustment used in step 6 is altered. Then, after depth-map refinement and depth-map fusion, a dense 3D point cloud can be obtained. Bundle adjustment itself is a nonlinear least-squares problem that optimizes the camera and structural parameters; the calculation time increases with the number of parameters.
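The paper's Equation 9 is not reproduced here, but a weighted reprojection error of the kind described can be sketched as follows. The function names and the toy pinhole projection are illustrative assumptions, not the authors' implementation; in practice the residual vector would be passed to a nonlinear least-squares solver such as `scipy.optimize.least_squares`.

```python
import numpy as np

def weighted_reprojection_residuals(points_3d, observations, cam_proj, weights):
    """Per-observation weighted reprojection errors.

    points_3d    : (n, 3) estimated 3D feature points
    observations : (n, 2) measured image coordinates
    cam_proj     : function mapping a 3D point to 2D pixel coordinates
    weights      : (n,) weight w_j per point (control points get a large w_j)
    """
    projected = np.array([cam_proj(p) for p in points_3d])
    # Each 2D residual is scaled by its point's weight before stacking.
    return (weights[:, None] * (projected - observations)).ravel()

def project(p, f=100.0):
    # Toy pinhole camera at the origin with focal length f (hypothetical).
    return np.array([f * p[0] / p[2], f * p[1] / p[2]])
```

Giving control points a large weight (the text reports w_j = 20) makes the solver penalize their movement heavily, which approximates keeping them fixed.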

The distance point clouds are shown in Figure 8a–c. An implementation of this method can be found in the open-source software openMVS [ 16 ]. The weight is w_j; after an experimental comparison, a value of 20 was found suitable for w_j. When m is chosen as a smaller number, the speed increases, but the precision decreases correspondingly.

Among these methods, a very typical one was proposed by Snavely [ 13 ], who used it in the 3D reconstruction of real-world objects. SLAM mainly consists of the simultaneous estimation of the localization of the robot and the map of the environment. First, we use the scale-invariant feature transform (SIFT) [ 19 ] feature detection algorithm to detect the feature points of each image (Figure 2a). The flight height is around 40 m and is kept constant. Because of the rapid development of the unmanned aerial vehicle (UAV) industry in recent years, civil UAVs have been used in agriculture, energy, environment, public safety, infrastructure, and other fields.
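As a sketch of the SIFT detection stage mentioned above, the snippet below finds candidate keypoints as local maxima of a difference-of-Gaussians (DoG) response, which is the detection idea underlying SIFT. The function name, parameters, and threshold are illustrative assumptions; a real pipeline would use a full SIFT implementation (e.g., OpenCV's `cv2.SIFT_create()`), which also handles scale-space octaves, orientations, and descriptors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(image, sigma=1.6, k=1.5, threshold=0.01):
    """Candidate keypoints: local maxima of a single difference-of-Gaussians
    response above a contrast threshold (descriptor step omitted)."""
    img = np.asarray(image, dtype=float)
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)
    # A pixel is a candidate if it equals the max of its 3x3 neighbourhood
    # and its DoG response exceeds the contrast threshold.
    local_max = (maximum_filter(dog, size=3) == dog) & (dog > threshold)
    ys, xs = np.nonzero(local_max)
    return list(zip(ys.tolist(), xs.tolist()))
```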


The total number of images in C is assumed to be N.

By carrying a digital camera on a UAV, two-dimensional (2D) images can be obtained. The positions and orientations of a monocular camera, together with a sparse point map, can be obtained from the images by using a SLAM algorithm.

This method can easily and rapidly obtain a dense point cloud.

Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

In many applications, the SfM algorithm must meet higher requirements for computing speed and accuracy. The map obtained by SLAM is often required to support other tasks. Without priors, MAP estimation reduces to maximum-likelihood estimation. The structure of the images in C_r is known, and the structural information contains the coordinates of the 3D feature points, marked as P_r.
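The reduction of MAP to maximum likelihood can be made explicit. With measurements $z$ and unknown state $x$:

```latex
\hat{x}_{\mathrm{MAP}}
  = \arg\max_x p(x \mid z)
  = \arg\max_x \frac{p(z \mid x)\, p(x)}{p(z)}
  = \arg\max_x p(z \mid x)\, p(x),
```

since $p(z)$ does not depend on $x$. With a flat (uninformative) prior, $p(x) \propto \text{const}$, the prior factor drops out and the estimate becomes $\arg\max_x p(z \mid x) = \hat{x}_{\mathrm{ML}}$, i.e., ordinary maximum likelihood.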

Green points represent the camera positions, red points are control points, and white points are structural feature points. This method estimates the 3D coordinates of the initial points by matching the difference-of-Gaussians and Harris corner points between different images, followed by patch expansion, point filtering, and other processing.


Journal: Sensors (Basel). Considering the continuity of the images taken by a UAV camera, this paper proposes a 3D reconstruction method based on an image queue. Theory and Practice; Corfu, Greece. The accuracy of our result is almost the same as that of openMVG and MicMac, but our algorithm is faster than both.

The flight distance is around m. It usually returns a completely wrong estimate.

Urban 3D Modelling from Video

The process is illustrated in Figure 3.

As the number of images and their resolution increase, the computational time of the algorithms increases significantly, limiting their use in some high-speed reconstruction applications. Second, these key images are inserted into a fixed-length image queue. Equation 9 is the reprojection error formula of the weighted bundle adjustment.
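A minimal sketch of a fixed-length image queue with batch overlap, assuming (as the surrounding text suggests) that a full queue is reconstructed as a batch and the newest images are retained to link adjacent batches via control points. The class name, `maxlen`, and `overlap` parameters are invented for illustration.

```python
class ImageQueue:
    """Fixed-length queue of key images. When the queue fills, the whole
    batch is handed off for SfM reconstruction and only the newest `overlap`
    images are kept to connect the next batch (illustrative scheme)."""

    def __init__(self, maxlen=10, overlap=3):
        self.maxlen = maxlen
        self.overlap = overlap
        self.images = []

    def push(self, image):
        """Add a key image; return a full batch when the queue fills."""
        self.images.append(image)
        if len(self.images) == self.maxlen:
            batch = self.images[:]                     # reconstruct this batch
            self.images = self.images[-self.overlap:]  # keep overlap images
            return batch
        return None
```

The retained overlap images are exactly where the shared feature tracks (control points) between consecutive batches come from.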

Maxime Lhuillier’s home page

When we use bundle adjustment to optimize the structure, we must keep the control points unchanged, or change them as little as possible.


This problem can be addressed by using control points, which are the points connecting two sets of adjacent feature points of the image, as shown in Figure 5.
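Under the assumption that control points are simply the feature tracks observed in both of two adjacent batches, they can be identified by set intersection and given the large bundle-adjustment weight reported in the text (w_j = 20). The function and parameter names are illustrative.

```python
def control_point_weights(prev_ids, curr_ids, w_control=20.0):
    """Map each feature-track id in the current batch to a bundle-adjustment
    weight: tracks shared with the previous batch are control points and get
    w_control (20 per the text); all others get weight 1."""
    shared = set(prev_ids) & set(curr_ids)
    return {i: (w_control if i in shared else 1.0) for i in curr_ids}
```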

There must be at least four feature points, and the centroid of these feature points can then be calculated as the mean of their coordinates, c = (1/n) Σ_{i=1..n} p_i. We propose the use of the incremental SfM algorithm. Adaptive structure from motion with a contrario model estimation; Proceedings of the Asian Conference on Computer Vision; Daejeon, Korea.
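The centroid computation is straightforward; a minimal NumPy version, with the four-point minimum from the text enforced (the function name is illustrative):

```python
import numpy as np

def centroid(points):
    """Centroid of a set of 3D feature points: the mean over rows.
    At least four points are required, per the selection rule in the text."""
    pts = np.asarray(points, dtype=float)
    if pts.shape[0] < 4:
        raise ValueError("need at least four feature points")
    return pts.mean(axis=0)
```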

MicMac is a free open-source photogrammetric suite that can be used in a variety of 3D reconstruction scenarios. Improving the performance of the algorithm in parameter selection is also part of our future work. The two major contributions of this paper are the key-image selection method and the SfM calculation for sequence images.

Although m and k are fixed, and their values are generally much smaller than N, the speed of the matching is greatly improved. Moreover, the algorithm is not time-consuming for either the calculation of the PCPs or the estimation of the distance between PCPs.
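A sketch of PCP-based candidate selection: instead of matching every image against all N others (O(N²) pairs), each image is matched only against its m nearest neighbours in PCP space. The function name and the brute-force distance matrix are illustrative; the paper's exact use of m and k is not reproduced here.

```python
import numpy as np

def nearest_match_candidates(pcps, m=3):
    """For each image, return the indices of the m images whose projection
    center points (PCPs) are closest, reducing matching to O(N*m) pairs."""
    pcps = np.asarray(pcps, dtype=float)
    # Pairwise Euclidean distances between all PCPs (fine for moderate N).
    d = np.linalg.norm(pcps[:, None, :] - pcps[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)   # an image never matches itself
    return np.argsort(d, axis=1)[:, :m]
```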

The distance histograms in Figure 12a–c are statistics of the distance point clouds in Figure 11a–c. The first step involves recovering the 3D structure of the scene and the camera motion from the images. The first two terms of the radial and tangential distortion parameters are also obtained and used for image rectification.
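Distance point clouds of this kind are typically computed as nearest-neighbour distances from each point of one cloud to a reference cloud, for example with a k-d tree; this is a generic sketch, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(cloud, reference):
    """Distance from each point of `cloud` to its nearest neighbour in
    `reference`; histogramming these values yields plots like Figure 12."""
    tree = cKDTree(np.asarray(reference, dtype=float))
    dists, _ = tree.query(np.asarray(cloud, dtype=float))
    return dists
```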

If two images are captured at almost the same position, their PCPs almost coincide. The scene in this case was captured by a UAV camera in a village.

In addition, a high-precision Newmark Systems RT-5 turntable is used to provide automatic rotation of the object. One of the most representative methods was proposed by Furukawa [ 15 ]. The number of points in the point cloud is 4,