• Title/Summary/Keyword: KM algorithm

Quality Test and Control of Kinematic DGPS Survey Results

  • Lim, Sam-Sung
    • Journal of Korean Society for Geospatial Information Science / v.10 no.5 s.23 / pp.75-80 / 2002
  • Depending on the geographical features and surrounding error sources in the survey field, some inaccurate positioning is inevitable in a kinematic DGPS survey. Therefore, a data-inaccuracy detection algorithm and an interpolation algorithm are essential to meet the requirements of a digital map. In this study, GPS characteristics are taken into account to develop the data-inaccuracy detection algorithm. Then, the data-interpolation algorithm is derived based on the feature type of the survey. A digital map of 20 km of a rural highway is produced by the kinematic DGPS survey, and the features of interest are lines associated with the road. Since the vertical variation of GPS data is relatively large, the trimmed mean of the vertical variation is used as the criterion for inaccuracy detection. Four trimming levels of 0.5%, 1%, 2.5% and 5% were tested, yielding criteria of 69 cm, 65 cm, 61 cm and 42 cm, respectively. For curved-line features, cubic spline interpolation is used to correct the inaccurate data; when the feature is more or less a straight line, the interpolation is done with a linear polynomial. The difference between the actual distance and the interpolated distance is a few centimeters in RMS.
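A trimmed-mean inaccuracy check of the kind the abstract describes can be sketched in a few lines of Python. The trimming fraction, the `scale` factor on the criterion, and the sample heights below are illustrative assumptions, not values from the paper:

```python
def trimmed_mean(values, trim_frac):
    """Mean after discarding a trim_frac share of samples at each end."""
    s = sorted(values)
    k = int(len(s) * trim_frac)
    core = s[k:len(s) - k] if k > 0 else s
    return sum(core) / len(core)

def flag_outliers(heights, trim_frac=0.05, scale=3.0):
    """Flag points whose vertical jump exceeds scale x the trimmed-mean jump."""
    jumps = [abs(b - a) for a, b in zip(heights, heights[1:])]
    crit = scale * trimmed_mean(jumps, trim_frac)
    flagged = [i + 1 for i, j in enumerate(jumps) if j > crit]
    return flagged, crit
```

Flagged points would then be replaced by spline (curved features) or linear (straight features) interpolation, as in the paper.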


Lane Detection System Based on Vision Sensors Using a Robust Filter for Inner Edge Detection (차선 인접 에지 검출에 강인한 필터를 이용한 비전 센서 기반 차선 검출 시스템)

  • Shin, Juseok;Jung, Jehan;Kim, Minkyu
    • Journal of Sensor Science and Technology / v.28 no.3 / pp.164-170 / 2019
  • In this paper, a lane detection and tracking algorithm based on vision sensors, employing a filter robust for inner-edge detection, is proposed for developing a lane departure warning system (LDWS). The lateral offset value is precisely calculated by applying the proposed inner-edge detection filter in the region of interest. The proposed algorithm was compared with an existing algorithm in terms of lateral-offset-based warning alarm occurrence time, and an average error of approximately 15 ms was observed. Tests were also conducted to verify whether a warning alarm is generated when a driver departs from a lane, and an average accuracy of approximately 94% was observed. Additionally, the proposed LDWS was implemented as an embedded system, mounted on a test vehicle, and driven for approximately 100 km to obtain experimental results. The results indicate that the average lane detection rates in daytime and nighttime are approximately 97% and 96%, respectively, and that the embedded system processes approximately 12 frames per second.
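The paper's actual filter is not reproduced here, but the idea of locating the inner edges of the two lane markings in a scanline and deriving a lateral offset can be sketched as follows; the gradient threshold and the synthetic intensity row are assumptions for illustration:

```python
def lateral_offset(row, thresh=100):
    """Estimate the lateral offset (in pixels) of the lane centre from the
    image centre, using the inner edges of the two lane markings found in
    one scanline of intensities."""
    grad = [b - a for a, b in zip(row, row[1:])]
    rising = [i for i, g in enumerate(grad) if g > thresh]
    falling = [i for i, g in enumerate(grad) if g < -thresh]
    centre = len(row) // 2
    # inner edge of the left marking: last falling edge left of centre
    left = max(i for i in falling if i < centre)
    # inner edge of the right marking: first rising edge right of centre
    right = min(i for i in rising if i >= centre)
    return (left + right) / 2 - centre
```

A negative result means the vehicle sits right of the lane centre; thresholding this offset over time is what triggers a departure warning.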

Development of Halfway Station Recommendation Application Using Dijkstra's Algorithm (다익스트라 알고리즘을 활용한 중간지점 추천 애플리케이션 개발)

  • Park, Naeun;Mun, Jiyeon;Jeoung, Yuna;Cho, Seoyeon;Huh, Won Whoi
    • Journal of Korea Multimedia Society / v.24 no.2 / pp.312-319 / 2021
  • This study aims to help users arrange a more satisfying meeting, based on problems found by comparing and analyzing similar applications. An application is proposed that derives an intermediate point via the subway, a means of public transportation, and provides information on nearby convenience facilities. The midpoint calculation uses Dijkstra's algorithm: a stack and an ArrayList are used to search all paths from the first input location to the last, and the path with the smallest number of nodes is selected. The number of stations on that path is then divided in half, and the resulting station is output. In addition, to differentiate itself from similar applications, this study provides information on convenience facilities near the intermediate point: facilities within a 1 km radius of the point are categorized, helping users conveniently identify only the facilities around the midpoint. In particular, by visualizing the number of convenience facilities with radar charts and numbers, the commercial district around the midpoint can be grasped at a glance.
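The midpoint idea, shortest path between two stations, then the station halfway along it, can be sketched with a standard heap-based Dijkstra; the toy five-station graph is an assumption, not the paper's subway data:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a dict {station: [(neighbour, weight), ...]};
    returns the station sequence from start to goal."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

def halfway_station(graph, a, b):
    """Station at the midpoint of the shortest a-b path."""
    path = shortest_path(graph, a, b)
    return path[len(path) // 2]
```

With unit weights this minimises the station count, matching the abstract's "path with the smallest number of nodes".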

Terrain Referenced Navigation Simulation using Area-based Matching Method and TERCOM (영역기반 정합 기법 및 TERCOM에 기반한 지형 참조 항법 시뮬레이션)

  • Lee, Bo-Mi;Kwon, Jay-Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.1 / pp.73-82 / 2010
  • TERCOM (TERrain COntour Matching), one of the terrain-referenced navigation techniques used in cruise missile navigation systems, is still under development. In this study, TERCOM based on an area-based matching algorithm and an extended Kalman filter is analyzed through simulation. For area-based matching, the mean square difference (MSD) and cross-correlation matching algorithms are applied. The simulation assumes that a barometric altimeter, a radar altimeter and the SRTM DTM are loaded on board, and that the vehicle navigates along a square track for 545 seconds at a velocity of 1,000 km per hour. The MSD and cross-correlation matching algorithms show standard deviations of position error of 99.6 m and 34.3 m, respectively. The correlation matching algorithm appears to be less sensitive than the MSD algorithm to topographic undulation, and the position accuracy of both algorithms depends strongly on the terrain. Therefore, for reliable terrain-referenced navigation, it is necessary to develop an algorithm that remains sensitive even over terrain with little undulation. Furthermore, studies should be conducted on the proper matching window size for long-term flight, and on the terrain database resolution required for a given flight velocity and area.
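The MSD matching step is simple enough to sketch: slide the measured terrain profile along a strip of the terrain database and keep the offset with the smallest mean-square difference. The profile and DEM values below are made-up illustrations, not SRTM data:

```python
def msd_match(profile, dem_strip):
    """TERCOM-style area matching: return the offset into dem_strip at
    which the measured elevation profile has minimum mean-square
    difference against the stored terrain."""
    n = len(profile)
    best, best_off = float('inf'), None
    for off in range(len(dem_strip) - n + 1):
        msd = sum((p - d) ** 2
                  for p, d in zip(profile, dem_strip[off:off + n])) / n
        if msd < best:
            best, best_off = msd, off
    return best_off
```

The abstract's sensitivity observation follows directly: over flat terrain many offsets give nearly equal MSD, so the minimum is poorly defined.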

Adaptive Channel Estimation Algorithm for DVB-T (DVB-시스템을 위한 적응형 채널 추정 알고리즘)

  • Kim, Seung-Hwan;Lee, Jin-Beom;Lee, Jin-Yong;Kim, Young-Lok
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.6A / pp.676-684 / 2008
  • In digital video broadcasting-terrestrial (DVB-T), the European digital terrestrial television standard, orthogonal frequency division multiplexing (OFDM) has been adopted for signal transmission. The main reasons for using OFDM are to increase robustness against frequency-selective fading and impulse noise, and to use the available bandwidth efficiently. However, channel variation within an OFDM symbol destroys the orthogonality between subcarriers, resulting in inter-carrier interference (ICI), which raises the error floor in proportion to the maximum Doppler spread. While the existing literature analyzes the ICI effects mainly in the frequency domain, this paper provides an ICI analysis in both the time and frequency domains, and proposes algorithms that estimate the channel impulse response and channel variation using the least squares (LS) algorithm, the simplest channel estimation technique. We also propose an adaptive channel estimation algorithm that estimates the velocity of terminals. The simulation results show that, in low-speed environments, the proposed algorithm achieves performance similar to the noise- and ICI-reduction LS algorithm at about 1.5% of its computational complexity.
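The LS estimator the abstract builds on is the textbook per-pilot division, followed by interpolation to the data subcarriers. This is a generic sketch of that baseline, not the paper's adaptive algorithm; the pilot values and spacing are assumptions:

```python
def ls_channel_estimate(tx_pilots, rx_pilots):
    """Least-squares estimate H_k = Y_k / X_k at each pilot subcarrier."""
    return [y / x for x, y in zip(tx_pilots, rx_pilots)]

def interpolate_channel(h_pilots, spacing):
    """Linear interpolation of pilot estimates onto the subcarriers
    between pilots (spacing = subcarriers per pilot interval)."""
    h = []
    for i in range(len(h_pilots) - 1):
        for k in range(spacing):
            t = k / spacing
            h.append((1 - t) * h_pilots[i] + t * h_pilots[i + 1])
    h.append(h_pilots[-1])
    return h
```

A time-varying channel makes these per-symbol estimates insufficient, which is what motivates the paper's estimation of channel variation within the symbol.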

AUTOMATIC DETECTION AND EXTRACTION ALGORITHM OF INTER-GRANULAR BRIGHT POINTS

  • Feng, Song;Ji, Kai-Fan;Deng, Hui;Wang, Feng;Fu, Xiao-Dong
    • Journal of The Korean Astronomical Society / v.45 no.6 / pp.167-173 / 2012
  • Inter-granular Bright Points (igBPs) are small-scale objects in the solar photosphere that can be seen within dark inter-granular lanes. We present a new algorithm, employing the Laplacian and Morphological Dilation (LMD) technique, to automatically detect and extract igBPs. It involves three basic processing steps: (1) obtaining candidate "seed" regions with the Laplacian; (2) determining the boundary and size of the igBPs by morphological dilation; (3) discarding brighter granules by a probability criterion. To validate the algorithm, we used samples observed with the Dutch Open Telescope (DOT), collected on April 12, 2007. They contain 180 high-resolution images, each with an 85 × 68 arcsec² field of view (FOV). Two important results are obtained: first, the identification rate of igBPs reaches 95%, higher than previous results; second, the diameter distribution is 220 ± 25 km, fully consistent with previously published data. We conclude that the presented algorithm can detect and extract igBPs automatically and effectively.
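Steps (1) and (2) can be sketched on a plain 2-D list: a 4-neighbour Laplacian picks seed pixels, and one binary dilation grows them to recover the bright-point extent. The kernel, threshold, and single dilation step are illustrative simplifications of the paper's LMD pipeline:

```python
def laplacian(img):
    """4-neighbour Laplacian of a 2-D list; strong positive response
    marks pixels brighter than their surroundings (candidate seeds)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                         - img[y][x - 1] - img[y][x + 1])
    return out

def dilate(mask):
    """One 4-neighbour binary dilation step to grow seed regions."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = 1
    return out

def detect_bright_points(img, seed_thresh):
    """Seeds from the Laplacian, then dilation to recover the extent."""
    lap = laplacian(img)
    seeds = [[1 if v > seed_thresh else 0 for v in row] for row in lap]
    return dilate(seeds)
```

The paper's third step, rejecting bright granules by a probability criterion, would filter the resulting regions and is omitted here.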

The Accuracy Analysis of Methods to solve the Geodetic Inverse Problem (측지 역 문제 해석기법의 정확도 분석)

  • Lee, Yong-Chang
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.4 / pp.329-341 / 2011
  • The object of this paper is to compare the accuracy and characteristics of various methods for solving the geodetic inverse problem, for geodesics in the standard case and in special cases (antipodal, near-antipodal, equatorial, and near-equatorial) on the WGS84 reference ellipsoid. To this end, various algorithms (classical and recent solutions) for the geodetic inverse problem are examined and programmed in order to evaluate each method's ability to determine geodesics precisely. The main outputs of the geodetic inverse problem, the distance and the forward azimuths between two points on the sphere (or ellipsoid), are determined by 18 methods for the geodetic inverse solution. The results from the other 17 methods, in both the standard and special cases, are then compared with those from the Karney method as a reference. For standard geodesics whose length does not exceed 100 km, all of the methods perform almost identically to the Karney method, whereas for geodesics longer than 4,000 km only two methods (Vincenty and Pittman) show ability similar to the Karney method. For the special geodesics, the comparison with the Karney method shows that no method except the modified Vincenty method is suitable for solving the geodetic inverse problem. Therefore, the algorithms of the individual methods need to be modified and compensated by examining the behavior of geodesics in these special regions.
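For orientation, the inverse problem's two outputs, distance and forward azimuth, can be written down for the simpler spherical case. This is only the spherical approximation, not any of the 18 ellipsoidal methods compared in the paper, and the mean radius is an assumption:

```python
import math

def spherical_inverse(lat1, lon1, lat2, lon2, r=6371.0):
    """Spherical approximation of the geodetic inverse problem:
    great-circle distance (km) and forward azimuth (degrees, 0-360)
    from point 1 to point 2, both given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # central angle via the spherical law of cosines
    d = math.acos(math.sin(p1) * math.sin(p2)
                  + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    az = math.atan2(math.sin(dlon) * math.cos(p2),
                    math.cos(p1) * math.sin(p2)
                    - math.sin(p1) * math.cos(p2) * math.cos(dlon))
    return r * d, math.degrees(az) % 360
```

Ellipsoidal methods such as Vincenty's iterate corrections on top of an auxiliary sphere like this one; the special cases in the paper (antipodal, equatorial) are exactly where those iterations struggle.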

The Consideration for Optimum 3D Seismic Processing Procedures in Block II, Northern Part of South Yellow Sea Basin (대륙붕 2광구 서해분지 북부지역의 3D전산처리 최적화 방안시 고려점)

  • Ko, Seung-Won;Shin, Kook-Sun;Jung, Hyun-Young
    • The Korean Journal of Petroleum Geology / v.11 no.1 s.12 / pp.9-17 / 2005
  • In the main target area of Block II, large-scale faults occur below an unconformity developed at around 1 km depth. The seismic velocity contrast around the unconformity is generally so large that strong multiples and radical velocity variation seriously distort and deteriorate the quality of the migrated section. More than 15 kinds of data processing techniques were applied to improve the image resolution of the structures formed by this active crustal activity. As a first step, bad and noisy traces were edited on the common shot gathers to remove acquisition problems that could arise from unfavorable conditions, such as climatic change during data acquisition. Correction of the amplitude attenuation caused by spherical divergence and inelastic attenuation was also applied. A mild F/K filter was used to attenuate coherent noise such as guided waves and side scatter. Predictive deconvolution was applied before stacking to remove peg-leg multiples and water reverberations. Velocity analysis was conducted at every 2 km interval to analyze the migration velocity, and iterated to obtain a high-fidelity image. The strum noise caused by the streamer was completely removed by applying predictive deconvolution in the time-space and τ-p domains. Residual multiples caused by thin layers or the water bottom were eliminated through a parabolic Radon transform demultiple process. Migration using a curved-ray Kirchhoff-style algorithm was applied to the stacked data, and the velocity obtained after several iterations of MVA (migration velocity analysis) was used as the migration velocity instead of DMO. Using these various tests, optimum seismic processing parameters can be obtained for structural and stratigraphic interpretation in Block II, Yellow Sea Basin.
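Of the steps listed, the spherical-divergence correction is the simplest to sketch. The paper does not give its gain law; the common t·v² gain below, and the sample interval and velocity, are assumptions for illustration only:

```python
def spherical_divergence_correction(trace, dt, v=1500.0):
    """Scale each sample of a seismic trace by t * v**2 (t = i * dt),
    a common simple gain compensating geometric spreading loss."""
    return [s * (i * dt) * v * v for i, s in enumerate(trace)]
```

In practice v would be a time-varying RMS velocity from the velocity analysis rather than a constant.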


Stereo Matching For Satellite Images using The Classified Terrain Information (지형식별정보를 이용한 입체위성영상매칭)

  • Bang, Soo-Nam;Cho, Bong-Whan
    • Journal of Korean Society for Geospatial Information Science / v.4 no.1 s.6 / pp.93-102 / 1996
  • For automatic generation of a DEM (Digital Elevation Model) by computer, determining adequate matches from stereo images is time-consuming. Correlation over evenly distributed area-based windows is generally used for the matching operation. In this paper, we propose a new approach that computes matches efficiently by changing the size of the mask window and search area according to the given terrain information. For image segmentation, an edge-preserving smoothing filter is first used for preprocessing, and a region-growing algorithm is then applied to the filtered images. The segmented regions are classified into mountain, plain and water areas using an MRF (Markov Random Field) model. Matching consists of parallax prediction and fine matching. The predicted parallax determines the location of the search area in the fine matching stage, and the sizes of the search area and mask window are determined from the terrain information for each pixel. The execution time of matching is reduced by shrinking the search area in the plain and water cases. For the experiments, four images, each covering 10 km × 10 km (1024 × 1024 pixels) of the Taejeon-Kumsan area, were studied. The results show that the computing time of the proposed terrain-informed matching is reduced by 25% to 35%.
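The terrain-dependent matching idea can be sketched in one dimension with a sum-of-absolute-differences (SAD) cost; the per-class window and search-range values, and the SAD cost itself, are illustrative assumptions rather than the paper's correlation settings:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

# (window size, search range) per terrain class -- illustrative values only
PARAMS = {'mountain': (7, 16), 'plain': (3, 6), 'water': (3, 2)}

def match_pixel(left_row, right_row, x, terrain):
    """Disparity minimising SAD along one scanline; window and search
    range come from the pixel's terrain class, so flat plain/water
    pixels cost far less work than mountain pixels."""
    win, search = PARAMS[terrain]
    h = win // 2
    ref = left_row[x - h:x + h + 1]
    best_d, best = 0, float('inf')
    for d in range(search + 1):
        if x - d - h < 0:
            break  # candidate window would fall off the image
        s = sad(ref, right_row[x - d - h:x - d + h + 1])
        if s < best:
            best, best_d = s, d
    return best_d
```

The 25-35% saving reported in the abstract comes precisely from the smaller `(window, search)` pairs assigned to plain and water pixels.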


A Model Study of Dissolved Oxygen Change by Waste Water Discharge in the River (하수방류에 따른 하천의 용존산소변화 예측)

  • Sung, Dong-Gwon;Kim, Tae-Keun;Choi, Kyoung-Sik
    • Korean Journal of Ecology and Environment / v.34 no.2 s.94 / pp.126-132 / 2001
  • Urbanization and population increase result in the construction of STPs (Sewage Treatment Plants). Discharge from STPs greatly influences the water quality of the receiving stream, so the siting of an STP should consider both its discharge capacity and the self-purification of the river from a water-quality perspective. In this study, the change in dissolved oxygen (DO) in a river affected by STP discharge was simulated with the STELLA model. The minimum DO was 4.98 ppm at 42.6 km downstream of the STP. It takes approximately 8 days for the DO to recover by self-purification, at a location 340 km downstream from the STP. When the model was run considering self-purification without the phytoplankton algorithms, the minimum DO was 4.92 ppm, and reaching the minimum DO took 0.25 day longer than with the phytoplankton functions; without the phytoplankton algorithm, the DO took 11 days to recover. This demonstrates the importance of phytoplankton in the self-purification process. Additionally, the effect of adjacent STP discharges should be considered when constructing a new STP.
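The DO sag the abstract describes is classically modeled by the Streeter-Phelps equation; the paper itself used a STELLA model, so the sketch below, with assumed BOD, deficit, and rate constants, is only the textbook form of the same mechanism (without phytoplankton terms):

```python
import math

def do_sag(L0, D0, kd, ka, t, do_sat=8.0):
    """Streeter-Phelps oxygen sag: DO (ppm) at travel time t (days)
    downstream of a discharge, for initial BOD L0, initial deficit D0,
    deoxygenation rate kd and reaeration rate ka (both 1/day)."""
    deficit = (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
              + D0 * math.exp(-ka * t)
    return do_sat - deficit
```

Evaluating this along the river (t = distance / velocity) reproduces the sag-and-recovery profile: DO falls to a minimum some distance downstream, then climbs back toward saturation as reaeration overtakes decay.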
