TY - JOUR
T1 - A Calculation Method for Vehicle Movement Reconstruction from Videos
AU - Feng, Hao
AU - Shi, Weiguo
AU - Chen, Feng
AU - Byon, Young Ji
AU - Heng, Weiwei
AU - Pan, Shaoyou
N1 - Funding Information:
National Key Research and Development Program of China: Research on Digital Simulation and Reappearance Technology of Road Traffic Accidents (2016YFC0800702-1); Science and Technology Commission of Shanghai Municipality (17DZ1205500); National Natural Science Foundation of China (81571851); ADEK Award for Research Excellence (AARE-2017, 8-434000104); Central-Level Research Institutes Public Welfare Project (GY2020G-7, GY2018Z-3)
Publisher Copyright:
© 2020 Hao Feng et al.
PY - 2020
Y1 - 2020
N2 - This paper proposes a new enhanced method based on one-dimensional direct linear transformation for estimating vehicle movement states in video sequences. The proposed method utilizes the contoured structure of target vehicles, and the data collection procedure is found to be relatively stable and effective, providing better applicability. The movements of vehicles in the video are captured by active calibration regions while spatial consistency between the vehicle's driving track and the calibration information is maintained. The vehicle movement states in the verification phase are first estimated using the proposed method and then compared with the actual movement states recorded in the experimental test. The results show that, at a camera perspective of 90 degrees, for all driving states (low speed, high speed, or deceleration), the error between the estimated and recorded speeds is less than 1.5%, the error in accelerations is less than 7%, and the error in distances is less than 2%; similarly, at a camera perspective of 30 degrees, the errors in speeds, distances, and accelerations are less than 4%, 5%, and 10%, respectively. The proposed method is found to be superior to other existing methods.
AB - This paper proposes a new enhanced method based on one-dimensional direct linear transformation for estimating vehicle movement states in video sequences. The proposed method utilizes the contoured structure of target vehicles, and the data collection procedure is found to be relatively stable and effective, providing better applicability. The movements of vehicles in the video are captured by active calibration regions while spatial consistency between the vehicle's driving track and the calibration information is maintained. The vehicle movement states in the verification phase are first estimated using the proposed method and then compared with the actual movement states recorded in the experimental test. The results show that, at a camera perspective of 90 degrees, for all driving states (low speed, high speed, or deceleration), the error between the estimated and recorded speeds is less than 1.5%, the error in accelerations is less than 7%, and the error in distances is less than 2%; similarly, at a camera perspective of 30 degrees, the errors in speeds, distances, and accelerations are less than 4%, 5%, and 10%, respectively. The proposed method is found to be superior to other existing methods.
UR - http://www.scopus.com/inward/record.url?scp=85087995687&partnerID=8YFLogxK
U2 - 10.1155/2020/8896826
DO - 10.1155/2020/8896826
M3 - Article
AN - SCOPUS:85087995687
SN - 0197-6729
VL - 2020
JO - Journal of Advanced Transportation
JF - Journal of Advanced Transportation
M1 - 8896826
ER -