• Title/Summary/Keyword: Segment Algorithm


A Study on Non-uniformity Correction Method through Uniform Area Detection Using KOMPSAT-3 Side-Slider Image (사이드 슬리더 촬영 기반 KOMPSAT-3 위성 영상의 균일 영역 검출을 통한 비균일 보정 기법 연구)

  • Kim, Hyun-ho;Seo, Doochun;Jung, JaeHeon;Kim, Yongwoo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1013-1027
    • /
    • 2021
  • Images taken with KOMPSAT-3 have additional NIR and PAN bands, as well as the RGB regions of the visible band, compared to images taken with a standard camera. Furthermore, electrical and optical properties must be considered because a wide area of approximately 17 km radius or more is photographed from an altitude of 685 km above the ground. In other words, the camera sensor of KOMPSAT-3 is distorted by each CCD pixel, the characteristics of each band, sensitivity, time-dependent change, and CCD geometry. To remove this distortion, correction of the sensors is essential. In this paper, we propose a method for detecting uniform regions in side-slider-based KOMPSAT-3 images using segment-based noise analysis. After detecting a uniform area with the corresponding algorithm, a correction table was created for each sensor to apply the non-uniformity correction algorithm, and satellite image correction was performed using the created correction table. As a result, the proposed method reduced distortions of the satellite image, such as vertical noise, compared to the conventional method. The relative radiometric accuracy indices, an index based on mean square error (RA) and an index based on absolute error (RE), were found to have a comparative advantage of 0.3 percent and 0.15 percent, respectively, over the conventional method.
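The per-sensor correction table described above can be illustrated with a minimal sketch of gain-table non-uniformity correction. This is not the authors' segment-based detection algorithm; the function names and the toy data are assumptions for illustration only.

```python
import numpy as np

def build_correction_table(uniform_area):
    """Estimate a per-column (per-detector) gain from a region judged
    uniform; the ideal response is the mean over all detectors."""
    col_means = uniform_area.mean(axis=0)
    return col_means.mean() / col_means      # multiplicative gain per column

def apply_correction(image, gains):
    """Apply the per-column gain table to a full image."""
    return image * gains[np.newaxis, :]

# Toy example: a flat scene viewed through detectors with unequal gains
# produces vertical striping; the gain table removes it.
rng = np.random.default_rng(0)
flat_scene = np.full((100, 8), 200.0)
detector_gain = rng.uniform(0.9, 1.1, size=8)
raw = flat_scene * detector_gain             # striped raw image
gains = build_correction_table(raw)          # here the whole scene is uniform
corrected = apply_correction(raw, gains)
```

After correction every detector column reports the same level, which is the effect the paper measures with its RA and RE indices.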

Analysis of the Individual Tree Growth for Urban Forest using Multi-temporal airborne LiDAR dataset (다중시기 항공 LiDAR를 활용한 도시림 개체목 수고생장분석)

  • Kim, Seoung-Yeal;Kim, Whee-Moon;Song, Won-Kyong;Choi, Young-Eun;Choi, Jae-Yong;Moon, Guen-Soo
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.22 no.5
    • /
    • pp.1-12
    • /
    • 2019
  • Measuring tree height is an essential element for assessing forest health in urban areas. An automated method that can measure the height of individual trees as three-dimensional forest information is therefore needed for extensive and dense forests. Since airborne LiDAR datasets make it easy to analyze the tree height (z-coordinate) of forests, individual tree height measurement can serve as a forest health assessment, especially in urban forests, which are adversely affected by habitat fragmentation and isolation. This study therefore measured the height of individual trees to assess urban forest health and to identify the environmental factors that affect tree growth. The survey was conducted on Mt. Bongseo, located in Seobuk-gu, Cheonan-si (South Chungcheong Province). We segmented individual coniferous trees automatically using airborne LiDAR datasets from two periods (2016 and 2017) to determine individual tree growth. Segmentation of individual trees was performed using the watershed algorithm and local maxima, and tree growth was determined from the difference in tree height between the two periods. We then examined the relationship between tree growth and environmental factors. The tree growth on Mt. Bongseo was about 20 cm per year, which is lower than the 23.9 cm/year growth of the dominant species, Pinus rigida. This may reflect an adverse effect on the growth of isolated urban forests. Tree growth also differed according to age, diameter, and density class in the stock map, and effective soil depth and drainage grade in the soil map. There was a statistically significant positive correlation between tree growth and both the distance to the road and solar radiation. Since the correlations were weak, other factors affecting tree growth in urban forests, beyond anthropogenic influences, need to be identified. This study provides the first data for segmentation and growth analysis of individual trees, and it can be used as scientific data for urban forest health assessment and management.
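The local-maximum step of the individual-tree segmentation above can be sketched in a few lines. This is a generic tree-top detector on a canopy height model (CHM), assuming a 3×3 neighbourhood and a height threshold; the paper's watershed stage is not reproduced, and the toy CHM values are illustrative.

```python
import numpy as np

def tree_tops(chm, min_height=2.0):
    """Candidate tree tops: cells equal to the maximum of their 3x3
    neighbourhood and taller than min_height (metres)."""
    h, w = chm.shape
    padded = np.pad(chm, 1, constant_values=-np.inf)
    neigh = np.max([padded[i:i + h, j:j + w]
                    for i in range(3) for j in range(3)], axis=0)
    return np.argwhere((chm == neigh) & (chm >= min_height))

# Height growth at each detected top, from two co-registered CHMs
# (two acquisition years, as in the paper's 2016/2017 setup).
chm_2016 = np.zeros((5, 5)); chm_2016[1, 1] = 10.0; chm_2016[3, 3] = 8.0
chm_2017 = chm_2016.copy(); chm_2017[1, 1] = 10.2; chm_2017[3, 3] = 8.3
tops = tree_tops(chm_2016)
growth = [chm_2017[r, c] - chm_2016[r, c] for r, c in tops]
```

Differencing the height at each matched tree top between the two epochs yields the per-tree annual growth that the study compares against stand and soil attributes.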

Examination of Aggregate Quality Using Image Processing Based on Deep-Learning (딥러닝 기반 영상처리를 이용한 골재 품질 검사)

  • Kim, Seong Kyu;Choi, Woo Bin;Lee, Jong Se;Lee, Won Gok;Choi, Gun Oh;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.6
    • /
    • pp.255-266
    • /
    • 2022
  • The quality control of coarse aggregate, a main ingredient of concrete, is currently carried out by the SPC (Statistical Process Control) method through sampling. We build toward a smart factory for manufacturing innovation by changing the quality control of coarse aggregates from the current sieve analysis to an image-based inspection using images acquired through a camera. First, the obtained images were preprocessed, and HED (Holistically-Nested Edge Detection), an edge filter learned by deep learning, segmented each object. After analyzing each aggregate by image-processing the segmentation result, the fineness modulus and the aggregate shape rate were determined. The quality of the aggregate captured in the video was examined by calculating the fineness modulus and aggregate shape rate, and the accuracy of the algorithm was more than 90% compared to that of sieve analysis. Furthermore, while the aggregate shape rate could not be examined by conventional methods, the method in this paper also allows its measurement. The aggregate shape rate was verified against the lengths of models, showing a difference of ±4.5%. In the case of measuring the length of the aggregate, the algorithm result and the actual length showed a ±6% difference. Analyzing actual three-dimensional objects in a two-dimensional video introduced differences from the actual data, which requires further research.
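The fineness modulus computed by the pipeline above follows the standard sieve-analysis definition, which can be sketched as follows. The sieve count and percent-retained figures are illustrative, not taken from the paper.

```python
def fineness_modulus(retained_pct):
    """Fineness modulus: sum of the cumulative percentages retained on
    the standard sieves (coarsest first), divided by 100."""
    cumulative, total = 0.0, 0.0
    for pct in retained_pct:
        cumulative += pct
        total += cumulative
    return total / 100.0

# Illustrative percent-retained figures on six standard sieves.
# Cumulative retained: 0, 10, 35, 65, 85, 100 -> FM = 295 / 100 = 2.95
fm = fineness_modulus([0, 10, 25, 30, 20, 15])
```

The image-based method replaces the physical sieving with per-particle size estimates, but the modulus itself is computed from the same cumulative-retained fractions.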

Vessel Tracking Algorithm using Multiple Local Smooth Paths (지역적 다수의 경로를 이용한 혈관 추적 알고리즘)

  • Jeon, Byunghwan;Jang, Yeonggul;Han, Dongjin;Shim, Hackjoon;Park, Hyungbok;Chang, Hyuk-Jae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.6
    • /
    • pp.137-145
    • /
    • 2016
  • A novel tracking method is proposed to find the coronary artery using a high-order curve model in coronary CTA (Computed Tomography Angiography). The proposed method quickly generates numerous artificial trajectories represented by high-order curves, and each trajectory has its own cost. Only the high-ranked trajectories, located in the target structure, are selected according to their costs, and then an optimal curve is found as the centerline. After tracking, the optimal curve segments are connected, where they share the same point, into a single piecewise-smooth curve. We demonstrate that the high-order curve is a proper model for classification of the coronary artery. The experimental results on a public data set show that the proposed method is comparable in both accuracy and running time to the state-of-the-art methods.
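The curve-and-cost idea above can be sketched as polynomial fits in a normalized path parameter, with each candidate trajectory scored by its fitting cost. This is a schematic illustration only; the paper's trajectory generation and ranking are not reproduced, and all names are hypothetical.

```python
import numpy as np

def fit_trajectory(points, order=3):
    """Fit x(t) and y(t) as polynomials of the given order over t in [0, 1]."""
    t = np.linspace(0.0, 1.0, len(points))
    return np.polyfit(t, points[:, 0], order), np.polyfit(t, points[:, 1], order)

def trajectory_cost(points, px, py):
    """Mean squared distance between the points and the fitted curve;
    lower cost ranks a candidate trajectory higher."""
    t = np.linspace(0.0, 1.0, len(points))
    fitted = np.stack([np.polyval(px, t), np.polyval(py, t)], axis=1)
    return float(np.mean(np.sum((points - fitted) ** 2, axis=1)))

# A gently curving centreline is fitted almost exactly by a cubic,
# so its cost is near zero; scattered off-vessel points would score higher.
t = np.linspace(0.0, 1.0, 20)
centreline = np.stack([t, 0.3 * t ** 3 - 0.1 * t], axis=1)
px, py = fit_trajectory(centreline)
cost = trajectory_cost(centreline, px, py)
```

Selecting the minimum-cost curve among many such candidates is the ranking step the abstract describes; connecting segments that share an endpoint yields the piecewise-smooth centerline.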

Improvement of Fetal Heart Rate Extraction from Doppler Ultrasound Signal (도플러 초음파 신호에서의 태아 심박 검출 개선)

  • Kwon, Ja Young;Lee, Yu Bin;Cho, Ju Hyun;Lee, Yoo Jin;Choi, Young Deuk;Nam, Ki Chang
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.328-334
    • /
    • 2012
  • Continuous fetal heart beat monitoring has assisted clinicians in assuring fetal well-being during the antepartum and intrapartum periods. Fetal heart rate (FHR) is an important parameter of fetal health during pregnancy. Doppler ultrasound is a very useful method for non-invasively measuring FHR. Although it has been commonly used in the clinic, inaccurate heart rate readings have not been completely resolved. The objective of this study is to improve the FHR detection algorithm for Doppler ultrasound signals with a simple method. We modified the autocorrelation function to enhance signal periodicity and adopted an adaptive window size and shift for the data segment to be analyzed. The proposed method was applied to real measured data, and it was verified that the beat-to-beat FHR estimation result was comparable with the reference fetal ECG data. This simple and effective method is expected to be implemented in embedded systems.
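The autocorrelation core of such an FHR estimator can be sketched as below. The authors' modified autocorrelation and adaptive windowing are not reproduced; the bpm limits, sampling rate, and synthetic envelope are assumptions for illustration.

```python
import numpy as np

def estimate_fhr(envelope, fs, min_bpm=90, max_bpm=240):
    """Estimate heart rate (bpm) from the autocorrelation peak within
    the physiologically plausible lag range."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo = int(fs * 60.0 / max_bpm)     # shortest plausible beat period
    hi = int(fs * 60.0 / min_bpm)     # longest plausible beat period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * fs / lag

# Synthetic Doppler envelope beating at 120 bpm (2 Hz), sampled at 250 Hz.
fs = 250
t = np.arange(0, 4.0, 1.0 / fs)
envelope = 1.0 + np.cos(2 * np.pi * 2.0 * t)
bpm = estimate_fhr(envelope, fs)
```

Restricting the search to a plausible lag window is what keeps the estimator from locking onto harmonics or sub-harmonics of the true beat period.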

Three-Dimensional Conversion of Two-Dimensional Movie Using Optical Flow and Normalized Cut (Optical Flow와 Normalized Cut을 이용한 2차원 동영상의 3차원 동영상 변환)

  • Jung, Jae-Hyun;Park, Gil-Bae;Kim, Joo-Hwan;Kang, Jin-Mo;Lee, Byoung-Ho
    • Korean Journal of Optics and Photonics
    • /
    • v.20 no.1
    • /
    • pp.16-22
    • /
    • 2009
  • We propose a method to convert a two-dimensional movie to a three-dimensional movie using normalized cut and optical flow. In this paper, we first segment an image of the two-dimensional movie into objects and then estimate the depth of each object. Normalized cut is an image segmentation algorithm; to improve its speed and accuracy, we use a watershed algorithm and a weight function based on optical flow. We estimate the depth of the objects segmented by the improved normalized cut using optical flow. Ordinal depth is estimated from the change of the segmented object labels in an occluded region, which is found from the difference of absolute values of optical flow. To complement the ordinal depth, we generate a relational depth, the absolute value of the optical flow, as motion parallax. A final depth map is determined by multiplying the ordinal depth by the relational depth and dividing by the average optical flow. We propose a two-dimensional/three-dimensional movie conversion method that is applicable to all three-dimensional display devices and all two-dimensional movie formats, and we present experimental results using sample two-dimensional movies.
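The depth-combination rule described above, ordinal depth scaled by motion parallax relative to the mean flow, can be sketched per segmented object. The label map, flow magnitudes, and ordinal ranks below are illustrative stand-ins, not outputs of the paper's normalized-cut pipeline.

```python
import numpy as np

def object_depth_map(labels, flow_mag, ordinal_rank):
    """Final depth per pixel: the object's ordinal depth times its mean
    flow magnitude (motion parallax), divided by the global mean flow."""
    mean_flow = flow_mag.mean()
    depth = np.zeros_like(flow_mag)
    for obj, rank in ordinal_rank.items():
        mask = labels == obj
        depth[mask] = rank * flow_mag[mask].mean() / mean_flow
    return depth

# Two segmented objects: the near one (rank 2) also moves faster,
# so both cues push it toward larger depth values.
labels = np.array([[1, 1], [2, 2]])
flow = np.array([[1.0, 1.0], [3.0, 3.0]])
depth = object_depth_map(labels, flow, {1: 1, 2: 2})
```

Combining the two cues this way lets the fast-moving, front-ranked object receive a proportionally larger depth value than the ordinal ranking alone would give.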

Load balancing of a deployment server using P2P (P2P를 이용한 배포 서버의 부하 분산)

  • Son Sei-Il;Lee Suk-Kyoon
    • The KIPS Transactions:PartA
    • /
    • v.13A no.1 s.98
    • /
    • pp.45-52
    • /
    • 2006
  • To perform on-line maintenance for a distributed information system, it is indispensable to disseminate files to participant nodes in the network. When users' requests for file deployment occur simultaneously in a short period, the deployment server enters an overload phase, often called Flash Crowds. A common solution to avoid Flash Crowds is to increase hardware capacity. In this paper, we propose a P2P-based software solution that incurs no additional expense. In the proposed solution, nodes in the network are grouped into subnetworks, each composed only of neighboring nodes. In each subnetwork, copies of deployment files can be transferred between nodes, which produces a load-balancing effect on the deployment server. To raise the effectiveness, the target files for deployment are packed into one package, and each package is divided into multiple equal-sized segments before being transferred. A deployment server in the normal phase transmits a requested package to nodes in segment units. When the deployment server is overloaded, however, and the segments already exist in the subnetwork, participant nodes receive the necessary segments from neighboring nodes. In this paper, we propose data structures and an algorithm for this approach and show performance improvement through simulation.
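The packaging step, splitting a deployment package into equal-sized segments that peers can exchange, can be sketched as follows. The segment size and helper names are illustrative; the paper's data structures are not reproduced.

```python
def split_package(package: bytes, seg_size: int):
    """Divide a deployment package into fixed-size segments; the last
    segment may be shorter."""
    return [package[i:i + seg_size] for i in range(0, len(package), seg_size)]

def missing_segments(have: set, total: int):
    """Segment indices a node still needs, to request from neighbours
    in its subnetwork before falling back to the deployment server."""
    return sorted(set(range(total)) - have)

package = b"0123456789abcdef!"           # 17-byte package
segments = split_package(package, 4)     # 5 segments, last one 1 byte
needed = missing_segments({0, 2, 4}, len(segments))
```

Because any subset of segments reassembles by concatenation, a node can collect different segments from different neighbours and still rebuild the original package.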

English Phoneme Recognition using Segmental-Feature HMM (분절 특징 HMM을 이용한 영어 음소 인식)

  • Yun, Young-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.3
    • /
    • pp.167-179
    • /
    • 2002
  • In this paper, we propose a new acoustic model for characterizing segmental features and an algorithm based upon the general framework of hidden Markov models (HMMs) in order to compensate for the weaknesses of the HMM assumptions. The segmental features are represented as a trajectory of observed vector sequences by a polynomial regression function, because a single frame feature cannot effectively represent the temporal dynamics of speech signals. To apply the segmental features to pattern classification, we adopted the segmental HMM (SHMM), which is known as an effective method to represent the trend of speech signals. SHMM separates the observation probability of a given state into extra- and intra-segmental variations, which express the long-term and short-term variabilities, respectively. To consider segmental characteristics in the acoustic model, we present the segmental-feature HMM (SFHMM) by modifying the SHMM. The SFHMM represents the external and internal variation as the observation probability of the trajectory in a given state and the trajectory estimation error for the given segment, respectively. We conducted several experiments on the TIMIT database to establish the effectiveness of the proposed method and the characteristics of the segmental features. From the experimental results, we conclude that although its number of parameters is greater than that of the conventional HMM, the proposed method is valuable for its flexible and informative feature representation and its performance improvement.
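The trajectory model at the heart of this approach, a polynomial regression of the feature sequence over a segment with its residual as the intra-segment error, can be sketched as below. The toy 2-D feature is illustrative, not a TIMIT feature, and the function is a generic least-squares fit rather than the paper's SFHMM training procedure.

```python
import numpy as np

def segment_trajectory(frames, order=2):
    """Fit every feature dimension as a polynomial in normalised time;
    return the coefficients and the mean squared trajectory-estimation
    error (the intra-segment variation)."""
    t = np.linspace(0.0, 1.0, len(frames))
    design = np.vander(t, order + 1)           # columns: t^order ... t^0
    coeff, *_ = np.linalg.lstsq(design, frames, rcond=None)
    residual = frames - design @ coeff
    return coeff, float(np.mean(residual ** 2))

# A segment whose two feature dimensions drift linearly is captured
# exactly by a first-order trajectory, so the estimation error is ~0.
t = np.linspace(0.0, 1.0, 10)
frames = np.stack([1.0 + 2.0 * t, 3.0 - t], axis=1)
coeff, err = segment_trajectory(frames, order=1)
```

The coefficients summarize the long-term trend of the segment, while the residual plays the role of the short-term (intra-segmental) variability the abstract describes.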

A Proxy based QoS Provisioning Mechanism for Streaming Service in Wireless Networks (무선이동통신망에서 스트리밍 서비스를 위한 프락시 기반Qos 보장 방안)

  • Kim Yong-Sul;Hong Jung-Pyo;Kim Hwa-Sung;Yoo Ji-Sang;Kim Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.7B
    • /
    • pp.608-618
    • /
    • 2006
  • The increasing popularity of multimedia streaming services introduces new challenges in content distribution. In particular, it is important to provide QoS guarantees, as these services are increasingly expected to support multimedia applications. Service providers can improve the performance of multimedia streaming by caching the initial segment (prefix) of popular streams at proxies near the requesting clients. The proxy can initiate transmission to the client while requesting the remainder of the stream from the server. In this paper, in order to apply prefix caching based on the IETF's RTSP environment to wireless networks, we propose an effective RTSP handling scheme that can adapt to the radio conditions in a wireless network and reduce the cutting phenomenon. We also propose a traffic-based caching algorithm (TSLRU) to improve the performance of the caching proxy. TSLRU classifies traffic into three types and improves caching performance by reflecting several elements, such as traffic type, recency, frequency, and object size, when making replacement decisions. In simulation, the TSLRU and RTSP handling schemes perform better than the existing schemes in terms of byte hit rate, hit rate, startup latency, and throughput.
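A replacement policy in the spirit of TSLRU, scoring recency, frequency, and object size together, can be sketched as follows. The paper's traffic-type classification and exact weighting are not given in the abstract, so the score below is an illustrative stand-in, not the published algorithm.

```python
class WeightedCache:
    """Sketch of a proxy cache whose eviction score combines recency,
    frequency, and object size (TSLRU additionally weighs traffic type)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.clock = 0                       # logical time for recency
        self.items = {}                      # key -> (size, last_access, hits)

    def _score(self, key):
        size, last, hits = self.items[key]
        # Lower score evicts first: old, rarely used, large objects.
        return (last + hits) / size

    def put(self, key, size):
        self.clock += 1
        while self.used + size > self.capacity and self.items:
            victim = min(self.items, key=self._score)
            self.used -= self.items.pop(victim)[0]
        if size <= self.capacity:
            self.items[key] = (size, self.clock, 0)
            self.used += size

    def get(self, key):
        self.clock += 1
        if key in self.items:
            size, _, hits = self.items[key]
            self.items[key] = (size, self.clock, hits + 1)
            return True
        return False

cache = WeightedCache(capacity=10)
cache.put("a", 4); cache.put("b", 4)
cache.get("a"); cache.get("a")               # "a" becomes recent and popular
cache.put("c", 4)                            # forces eviction of cold "b"
```

Scoring all three signals in one value is what lets the policy keep a popular prefix resident even when a newer but less valuable object arrives.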

Vision-based Mobile Robot Localization and Mapping using fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.4
    • /
    • pp.256-262
    • /
    • 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot. These features are used in map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling and wall regions of the calibrated image. In the initial map building process, features are calculated for each segmented region and stored in the map database. Features are continuously calculated for sequential input images and matched to the map. When some features are not matched, those features are added to the map. This map matching and updating process continues until map building is finished. Localization is performed both during map building and when searching for the location of the robot on the map. The features calculated at the robot's position are matched to the existing map to estimate the real position of the robot, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50㎡ region, the positioning accuracy is ±13 cm, and the error in the positioning angle of the robot is ±3 degrees.
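The incremental map-update step described above, matching current features against the map and appending the unmatched ones as new landmarks, can be sketched as follows. The 2-D feature positions and the match threshold are illustrative assumptions, not the paper's ceiling-feature representation.

```python
import math

def update_map(map_features, observed, match_dist=0.5):
    """Match each observed feature to its nearest map feature; unmatched
    observations become new map entries (incremental map building)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    new_map = list(map_features)
    matched = []
    for obs in observed:
        nearest = min(new_map, key=lambda m: dist(m, obs), default=None)
        if nearest is not None and dist(nearest, obs) <= match_dist:
            matched.append(nearest)       # localisation uses these matches
        else:
            new_map.append(obs)           # map update adds the new feature
    return new_map, matched

known_map = [(0.0, 0.0), (2.0, 2.0)]
observed = [(0.1, 0.0), (5.0, 5.0)]       # one re-observation, one new feature
new_map, matched = update_map(known_map, observed)
```

Running this on every frame interleaves localization (the matched set) with mapping (the growing map), which is the simultaneous loop the abstract describes.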
