• Title/Summary/Keyword: Traffic calibration


A Framework for Real Time Vehicle Pose Estimation based on synthetic method of obtaining 2D-to-3D Point Correspondence

  • Yun, Sergey;Jeon, Moongu
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.904-907 / 2014
  • In this work we present a robust and fast approach to estimating 3D vehicle pose that provides results under specific traffic surveillance conditions: a single fixed CCTV camera located relatively high above the ground, with its pitch axis parallel to the reference plane and a known camera focal length. The benefit of our framework is that it requires no prior training or camera calibration and does not rely heavily on a 3D model shape, as most common techniques do. It also copes with poorly shaped objects, since we focus on low-resolution surveillance scenes. The pose estimation task is cast as a PnP problem, which we solve with the well-known POSIT algorithm [1]. This algorithm requires at least four non-coplanar point correspondences; to find them, we propose a set of techniques based on model and scene geometry. Our framework can be applied to real-time video sequences, and results for the estimated vehicle pose are shown on real image scenes.
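
A minimal sketch of the PnP step using OpenCV's solvePnP rather than POSIT (which the paper uses); the 3D model points, 2D image points, focal length, and principal point below are hypothetical placeholders for the correspondences the framework would supply.

```python
import numpy as np
import cv2

# Hypothetical vehicle-frame 3D model points (metres) and their 2D image
# correspondences (pixels); in the paper these come from model and scene geometry.
object_points = np.array([[0.0, 0.0, 0.0], [1.8, 0.0, 0.0],
                          [0.0, 4.5, 0.0], [1.8, 4.5, 0.0],
                          [0.9, 1.2, 1.4], [0.9, 3.3, 1.4]], dtype=np.float64)
image_points = np.array([[320., 410.], [381., 406.], [300., 352.],
                         [362., 349.], [355., 300.], [338., 268.]], dtype=np.float64)

# Camera matrix from an assumed-known focal length; principal point taken as the image centre.
f, cx, cy = 800.0, 640.0, 360.0
camera_matrix = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(4)  # no lens distortion assumed

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation matrix of the estimated pose
    print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```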

Monitoring of bridge overlay using shrinkage-modified high performance concrete based on strain and moisture evolution

  • Yifeng Ling;Gilson Lomboy;Zhi Ge;Kejin Wang
    • Structural Monitoring and Maintenance / v.10 no.2 / pp.155-174 / 2023
  • High performance concrete (HPC) has been extensively used in thin overlays for repair purposes due to its excellent strength and durability. This paper presents an experiment in which sensor-instrumented HPC overlays were monitored for dynamic strain and moisture content for one year under normal traffic. Vibrating wire and soil moisture sensors were embedded in the overlays before construction. Four HPC mixes (two original mixes and their shrinkage-modified counterparts) were used for the overlays to contrast the strain and moisture results. A calibration method was established to accurately measure the moisture content of a given concrete mixture using a soil moisture sensor. The monitoring results indicated that the modified mixes performed much better than the original mixes in controlling shrinkage cracking. Weather conditions and concrete maturity at early age greatly affected the strain in the concrete. The strain in the HPC overlays was primarily in the longitudinal direction, leading to transverse cracks. Additionally, most of the moisture loss in the concrete occurred at early age, at a rate strongly dependent on weather. After one year, a visual cracking survey was carried out to verify the strain direction, and no cracks were observed in the shrinkage-modified mixes.
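
A minimal sketch of a sensor-to-moisture calibration of the kind the abstract mentions, assuming hypothetical pairs of raw soil-moisture-sensor readings and gravimetrically measured moisture contents for one concrete mix, fitted with a simple least-squares polynomial; the paper's actual calibration procedure is not reproduced here.

```python
import numpy as np

# Hypothetical calibration pairs for one HPC mix: raw sensor output vs.
# moisture content (%) measured gravimetrically on companion specimens.
sensor_raw   = np.array([412., 478., 530., 595., 640., 702.])
moisture_pct = np.array([3.1, 4.0, 4.8, 5.9, 6.6, 7.5])

# Fit a low-order polynomial mapping raw reading -> moisture content.
calibrate = np.poly1d(np.polyfit(sensor_raw, moisture_pct, deg=2))

# Apply the calibration curve to a new in-situ reading from the overlay.
print(f"estimated moisture content: {calibrate(560.0):.2f} %")
```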

The Quartile Deviation and the Control Chart Model of Improvement Confidence for Link Travel Speed from GPS Probe Data (사분위편차 및 관리도 모형에 의한 GPS 수집기반 구간통행속도 데이터 이상치 제거방안 연구)

  • Han, Won-Sub;Kim, Dong-Hyo;Hyun, Cheol-Seung;Lee, Ho-Won;Oh, Yong-Tae;Lee, Choul-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.7 no.6 / pp.21-30 / 2008
  • Travel speeds collected by GPS-equipped probe cars on interrupted traffic flow suffer from instability and from difficulty in finding a representative travel speed, owing to the influence of traffic signals and other disturbances. This study was conducted to develop methods for filtering outliers from probe-car data. Two approaches were applied to the serially collected probe data: a quartile deviation statistical model and a control chart statistical model. The outlier removal rate was 0~3.7% for the quartile deviation method and 0.3~7.2% for the control chart method. Both methods show low removal rates in the early morning hours when traffic is light, whereas the removal rate is high during the daytime. However, both methods share the problem that the threshold for removing outliers is set at a low bound in cases where the data already fit the statistical model well, so empirical calibration of the threshold is required.
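
A minimal sketch of a quartile-deviation (interquartile-range) filter of the kind described above, assuming a set of probe speeds for one link and time window; the 1.5 multiplier is a common default, not necessarily the threshold used in the study.

```python
import numpy as np

def filter_outliers_iqr(speeds, k=1.5):
    """Keep link travel speeds inside [Q1 - k*IQR, Q3 + k*IQR]."""
    speeds = np.asarray(speeds, dtype=float)
    q1, q3 = np.percentile(speeds, [25, 75])
    iqr = q3 - q1
    return speeds[(speeds >= q1 - k * iqr) & (speeds <= q3 + k * iqr)]

# Hypothetical GPS probe speeds (km/h) for one link during one interval.
probe_speeds = [42, 45, 38, 41, 44, 12, 40, 43, 95, 39]
print(filter_outliers_iqr(probe_speeds))   # the 12 and 95 km/h records are dropped
```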


Study on Enhancement of TRANSGUIDE Outlier Filter Method under Unstable Traffic Flow for Reliable Travel Time Estimation -Focus on Dedicated Short Range Communications Probes- (불안정한 교통류상태에서 TRANSGUIDE 이상치 제거 기법 개선을 통한 교통 통행시간 예측 향상 연구 -DSRC 수집정보를 중심으로-)

  • Khedher, Moataz Bellah Ben;Yun, Duk Geun
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.3 / pp.249-257 / 2017
  • Filtering travel time records obtained from DSRC probes is essential for better estimation of link travel times. This study addresses a major deficiency of TRANSGUIDE in removing anomalous data: the algorithm cannot handle unstable traffic flow conditions in time intervals where fluctuations are observed. In this regard, this study proposes an algorithm capable of overcoming the weaknesses of TRANSGUIDE. If TRANSGUIDE fails to validate a sufficient number of observations within one time interval, a second process specifies a new validity range based on the median absolute deviation (MAD), a common statistical approach. The proposed algorithm introduces two parameters, α and β, to set the maximum allowed outliers within a single time interval in response to particular traffic flow conditions. Parameter estimation relies on historical data because the parameters need to be updated frequently. To test the proposed algorithm, DSRC probe travel time data were collected from a multilane highway section. Calibration of the model was performed through statistical analysis of cumulative relative frequencies. A qualitative evaluation shows satisfactory performance, and the proposed model overcomes the deficiency associated with rapid changes in travel time.
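
A minimal sketch of the MAD-based fallback validity range described above, assuming travel-time records (seconds) for one interval; the cutoff multiplier k is a hypothetical stand-in for the paper's α and β parameters.

```python
import numpy as np

def mad_validity_range(travel_times, k=3.0):
    """Return (lower, upper) bounds built from the median absolute deviation."""
    t = np.asarray(travel_times, dtype=float)
    med = np.median(t)
    mad = np.median(np.abs(t - med))
    return med - k * mad, med + k * mad

# Hypothetical DSRC travel-time records (seconds) for one 5-minute interval.
records = np.array([118., 121., 115., 119., 260., 123., 117., 45., 120.])
lo, hi = mad_validity_range(records)
print(f"validity range: [{lo:.1f}, {hi:.1f}] s ->",
      records[(records >= lo) & (records <= hi)])
```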

Implications and numerical application of the asymptotical shock wave model (점진적 충격파모형의 함축적 의미와 검산)

  • Cho, Seong-Kil
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.11 no.4 / pp.51-62 / 2012
  • According to Lighthill and Whitham's shock wave model, a shock wave exists even under a homogeneous speed condition. They described this wave as unobservable, analogous to a radio wave that cannot be seen. Recent research has attempted to identify how such a counterintuitive conclusion results from Lighthill and Whitham's shock wave model and to derive a new asymptotical shock wave model. The asymptotical model showed that the shock wave speed in a homogeneous-speed traffic stream is identical to the ambient vehicle speed, so no radio-wave-like shock wave exists. However, performance tests of the asymptotical model with numerical values had not yet been carried out. We investigated the new asymptotical model by examining its implications and tested it with numerical values based on a test scenario. Our investigation showed that the only difference between the two models lies in the third term of the equations, and that this difference plays a crucial role in the model output. Incorporation of the model parameter α is another distinctive feature of the asymptotical model. This parameter makes the asymptotical model more flexible; because various values of α can be chosen, the model can be calibrated to accommodate various traffic flow situations, which is not possible in Lighthill and Whitham's model. Our numerical test results showed that the new model yields significantly different outputs: the shock wave speeds predicted by the asymptotical model tend to lean toward the downstream direction in most cases compared with the speeds from Lighthill and Whitham's model for the same test environment. Statistical tests of significance also indicate that the outputs of the new model differ significantly from the corresponding outputs of Lighthill and Whitham's model.
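
For reference, the classical Lighthill-Whitham shock wave speed between two traffic states is the slope of the chord connecting them on the flow-density diagram, w = (q2 - q1)/(k2 - k1); the minimal numeric sketch below uses hypothetical flow and density values and does not reproduce the asymptotical model discussed in the paper.

```python
def lw_shock_speed(q1, k1, q2, k2):
    """Lighthill-Whitham shock wave speed between states (q1, k1) and (q2, k2)."""
    return (q2 - q1) / (k2 - k1)

# Hypothetical upstream and downstream states: flow q in veh/h, density k in veh/km.
w = lw_shock_speed(q1=1800, k1=30, q2=1200, k2=80)
print(f"shock wave speed = {w:.1f} km/h")   # -12.0 km/h, i.e. the wave moves upstream
```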

A Study on Reliability Based Design Criteria for Reinforced Concrete Bridge Superstructures (철근(鐵筋)콘크리트 도로교(道路橋) 상부구조(上部構造) 신뢰성(信賴性) 설계규준(設計規準)에 관한 연구(研究))

  • Cho, Hyo Nam
    • KSCE Journal of Civil and Environmental Engineering Research / v.2 no.3 / pp.87-99 / 1982
  • This study proposes reliability-based design criteria for the R.C. superstructures of highway bridges. Uncertainties associated with the resistance of T and rectangular sections are investigated, and a set of appropriate uncertainties for bridge dead and traffic live loads is proposed to reflect our level of practice. The major second-moment reliability analysis and design theories, including both Cornell's MFOSM (Mean First Order Second Moment) methods and Lind-Hasofer's AFOSM (Advanced First Order Second Moment) methods, are summarized and compared, and it is found that Ellingwood's algorithm and an approximate lognormal reliability formula are well suited to the proposed reliability study. A target reliability index (β₀ = 3.5) is selected as an optimal value for our practice, based on calibration against the current R.C. bridge design safety provisions. A set of load and resistance factors is derived from the proposed uncertainties and the methods corresponding to the target reliability. Furthermore, a set of nominal safety factors and allowable stresses is proposed for the current W.S.D. design provisions. It is asserted that the proposed L.R.F.D. reliability-based design criteria for R.C. highway bridges should be incorporated into the current R.C. bridge design codes as a design provision corresponding to the U.S.D. provisions of the current R.C. design code.
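
A minimal sketch of the standard approximate lognormal reliability index, β ≈ ln(R̄/Q̄)/√(V_R² + V_Q²), of the kind the abstract refers to; the mean resistance, mean load effect, and coefficients of variation below are hypothetical.

```python
import math

def lognormal_beta(mean_R, mean_Q, V_R, V_Q):
    """Approximate lognormal reliability index: ln(R/Q) / sqrt(V_R^2 + V_Q^2)."""
    return math.log(mean_R / mean_Q) / math.sqrt(V_R**2 + V_Q**2)

# Hypothetical mean resistance and load effect (same units) and their coefficients of variation.
beta = lognormal_beta(mean_R=1500.0, mean_Q=800.0, V_R=0.12, V_Q=0.13)
print(f"beta = {beta:.2f}")   # compare against the target reliability index of 3.5
```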


Automatic Detection Approach of Ship using RADARSAT-1 Synthetic Aperture Radar

  • Yang, Chan-Su
    • Journal of the Korean Society of Marine Environment & Safety / v.14 no.2 / pp.163-168 / 2008
  • Ship detection from satellite remote sensing is a crucial application for global monitoring aimed at protecting the marine environment and ensuring maritime security. It permits monitoring of sea traffic, including fisheries, and associating ships with oil discharges. An automatic ship detection approach for RADARSAT Fine-mode Synthetic Aperture Radar (SAR) imagery is described and assessed using in situ ship validation information collected during field experiments conducted on August 6, 2004. The ship detection algorithm developed here consists of five stages: calibration, land masking, prescreening, point positioning, and discrimination. The Fine image was acquired over Ulsan Port in southeast Korea, and wind speeds between 0 m/s and 0.4 m/s were reported during the acquisition. The detection approach is applied to ships anchored in the port's anchorage area, and its results are compared with validation data based on Vessel Traffic Service (VTS) radar. Our analysis of anchored ships longer than 68 m (LOA) indicates a 100% ship detection rate for the RADARSAT single beam mode. It is shown that the ship detection performance of SAR for smaller vessels such as barges can be higher than that of the land-based radar. The proposed method is also applied to estimate ship length and breadth from the SAR radar cross section (RCS), but these estimates were higher than the actual sizes because of SAR layover and shadow effects.
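
A minimal sketch of the prescreening stage as a simple cell-averaging CFAR-style threshold applied to a calibrated, land-masked SAR intensity image; the window sizes, threshold factor, and synthetic image are assumptions for illustration, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cfar_prescreen(intensity, guard=5, background=21, factor=5.0):
    """Flag pixels much brighter than the surrounding sea-clutter background."""
    big = uniform_filter(intensity, size=background)   # mean over the outer window
    small = uniform_filter(intensity, size=guard)      # mean over the guard window
    n_big, n_small = background**2, guard**2
    clutter = (big * n_big - small * n_small) / (n_big - n_small)  # mean of the annulus
    return intensity > factor * clutter

# Hypothetical calibrated, land-masked intensity image with two bright ship-like targets.
rng = np.random.default_rng(0)
img = rng.exponential(scale=1.0, size=(200, 200))
img[50:53, 80:83] += 40.0
img[120:124, 30:33] += 60.0
targets = np.argwhere(cfar_prescreen(img))
print(targets[:5])   # pixel coordinates of candidate detections
```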


Estimating O-D Trips Between Sub-divided Smaller Zones Within a Traffic Analysis Zone (대존 세분화에 따른 내부 소존 간의 O-D 통행량 추정 방법)

  • KIM, Jung In;KIM, Ikki
    • Journal of Korean Society of Transportation / v.33 no.6 / pp.575-583 / 2015
  • The Korea Transport Institute (KOTI) builds origin-destination (O-D) trip data with relatively small zones, such as Eup, Myeon, and Dong administrative districts, in metropolitan areas, whereas O-D trip data for rural areas are built with larger traffic analysis zones (TAZ), such as Si, Gun, and Gu administrative districts. In some cases a rural zone needs to be divided into several sub-zones in order to analyze travel distribution patterns in detail for a particular highway or rail project. This study suggests a method to estimate O-D trips between sub-zones within the larger rural zones. Two distribution models, a direct demand model and a gravity model, were calibrated for sub-zone intra-zonal O-D trip patterns using metropolitan-area O-D data, whose smaller (sub-zone) zones were categorized into low, middle, and high population density groups, and the calibration results were compared between the two models. The gravity model with a power-form impedance function was selected, with better explanatory power for all groups in the metropolitan area; the adjusted R² was 0.7426, 0.6456, and 0.7194 for the low, middle, and high population density groups, respectively. The suggested O-D trip estimation method is expected to produce improved trip patterns for sub-divided small zones.
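
A minimal sketch of calibrating a gravity model with a power impedance function, T_ij = k · P_i · A_j · d_ij^(-γ), by least squares on logarithms; the productions, attractions, distances, and observed trips are hypothetical, and the paper's actual calibration procedure may differ.

```python
import numpy as np

# Hypothetical sub-zone data: productions P, attractions A, distances d (km),
# and observed trips T from the metropolitan O-D data.
P = np.array([1200.,  900., 1500.])
A = np.array([ 800., 1100.,  700.])
d = np.array([[1.0, 3.2, 5.1],
              [3.2, 1.0, 2.4],
              [5.1, 2.4, 1.0]])
T_obs = np.array([[410., 120.,  60.],
                  [130., 520., 190.],
                  [ 70., 210., 480.]])

# Fit log T = log k + b*log(P*A) - gamma*log d by ordinary least squares.
y = np.log(T_obs).ravel()
X = np.column_stack([np.ones(y.size),
                     np.log(np.outer(P, A)).ravel(),
                     np.log(d).ravel()])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"power-function exponent gamma = {-coef[2]:.2f}")
```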

A Study on the Theme Park Users' Choice behavior -Application of Constraints-Induced Conjoint Choice Model- (주제공원 이용자들의 선택행동 연구 -Constraints-Induced Conjoint Choice Model의 적용-)

  • 홍성권;이용훈
    • Journal of the Korean Institute of Landscape Architecture / v.28 no.2 / pp.18-27 / 2000
  • The importance of constraints has been one of the major issues in recreation research for predicting choice behavior; however, traditional conjoint choice models either ignore these variables or fail to integrate them adequately into the choice model. The purposes of this research are (a) to estimate the effects of constraints on theme park choice behavior with the constraints-induced conjoint choice model, and (b) to test the additional explanatory power of the constraints in this suggested model against the more parsimonious traditional model. A leading polling agency was employed to select respondents. Both alternative-generating and choice-set-generating fractional factorial designs were used to meet the necessary and sufficient conditions for calibrating the constraints-induced conjoint choice model. The alternative-specific model was calibrated. A log-likelihood ratio test revealed that the suggested model was accepted over the traditional model, and the goodness of fit (ρ²) of the suggested and traditional models was 0.48427 and 0.47950, respectively. There was no difference between the traditional and suggested models in the estimates for the attribute levels of car and shuttle bus, because the alternatives were created to estimate the effects of constraints independently of mode-related variables. Most parameter values for the constraints had the expected sign and magnitude; the results reflected the characteristics of the theme parks, such as the abundance of natural attractions and poor accessibility of Everland, the indoor location of major rides in Lotte World, the city-park-like character of Dream Land, and traffic jams in Seoul. Instead of the multinomial logit model, the nested logit model is recommended for future research because it more reasonably reflects the real decision-making process in park choice. Development of a new methodology to integrate this hierarchical decision-making into the choice model is anticipated.
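
A minimal sketch of the two statistics mentioned above, McFadden's ρ² and the log-likelihood ratio test between a restricted (traditional) and an extended (constraints-induced) model; the log-likelihood values and degrees of freedom are hypothetical.

```python
from scipy.stats import chi2

def mcfadden_rho2(ll_model, ll_null):
    """McFadden's rho-squared goodness of fit for a discrete choice model."""
    return 1.0 - ll_model / ll_null

def lr_test(ll_restricted, ll_full, df):
    """Likelihood-ratio test statistic and p-value for nested choice models."""
    stat = -2.0 * (ll_restricted - ll_full)
    return stat, chi2.sf(stat, df)

# Hypothetical log-likelihoods: null (constants-only), traditional, and suggested models.
ll_null, ll_trad, ll_sugg = -2500.0, -1301.2, -1289.3
print("rho2 (suggested):", round(mcfadden_rho2(ll_sugg, ll_null), 4))
stat, p = lr_test(ll_trad, ll_sugg, df=4)
print(f"LR statistic = {stat:.1f}, p = {p:.4f}")
```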


An Efficient Pedestrian Recognition Method based on PCA Reconstruction and HOG Feature Descriptor (PCA 복원과 HOG 특징 기술자 기반의 효율적인 보행자 인식 방법)

  • Kim, Cheol-Mun;Baek, Yeul-Min;Kim, Whoi-Yul
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.10 / pp.162-170 / 2013
  • In recent years, interest in and need for Pedestrian Protection Systems (PPS), which are mounted on vehicles to improve traffic safety, have been increasing. In this paper, we propose a pedestrian candidate window extraction method and a unit-cell-histogram-based HOG descriptor calculation method. At the candidate window extraction stage, the brightness ratio between a pedestrian and its surrounding region, vertical edge projection, an edge factor, and a PCA reconstruction image are used. Dalal's HOG requires pixel-based histogram calculation with Gaussian weighting and trilinear interpolation over overlapping blocks, but our method applies Gaussian down-weighting and computes histograms on a per-cell basis, combining each histogram with those of adjacent cells, so it can be computed faster than Dalal's method. Our PCA-reconstruction-error-based candidate window extraction method efficiently rejects background regions based on differences around the pedestrian's head and shoulder area. The proposed method improves detection speed compared with conventional HOG, using only the image, without any prior information from camera calibration or a depth map obtained from stereo cameras.
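
A minimal sketch of a cell-based HOG descriptor computed with scikit-image, just to illustrate per-cell orientation histograms and block normalization; it does not reproduce the paper's Gaussian down-weighting or PCA-reconstruction candidate extraction, and the window here is a random placeholder.

```python
import numpy as np
from skimage.feature import hog

# Hypothetical 128x64 grayscale candidate window (a common pedestrian-detection size).
window = np.random.rand(128, 64)

# Orientation histograms computed per 8x8 cell, then normalized over 2x2-cell blocks.
descriptor = hog(window,
                 orientations=9,
                 pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2),
                 block_norm='L2-Hys',
                 feature_vector=True)
print(descriptor.shape)   # (3780,) for a 128x64 window with these settings
```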