• Title/Summary/Keyword: Density-based Clustering

A Comparative Analysis of Land Cover Changes Among Different Source Regions of Dust Emission in East Asia: Gobi Desert and Manchuria (동아시아의 황사발원지들에 대한 토지피복 비교 연구: 고비사막과 만주)

  • Pi, Kyoung-Jin;Han, Kyung-Soo;Park, Soo-Jae
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.2
    • /
    • pp.175-184
    • /
    • 2009
  • This study analyzes the differences in ecological distribution between the Gobi desert and Manchuria through satellite-based land cover classification. It was motivated by two well-known facts: 1) the Gobi desert, an old source region, has been gradually expanding eastward; 2) Manchuria, located east of the Gobi desert, has been observed as a new source region of yellow dust. An unsupervised classification method, ISODATA clustering, was employed to detect land cover change and to characterize the status of desertification and its expansion trends, using NDVI (Normalized Difference Vegetation Index) derived from the VEGETATION sensor onboard the SPOT satellite for 1999 and 2007. We analyzed the annual NDVI variation pattern of every class and divided the classes into five levels according to their vegetation density. As a result, the Gobi desert showed a positive change, with a decrease of $78,066km^2$ in the barest class (level-0) over the central Gobi desert and its outskirts, whereas the Manchuria area worsened compared to the earlier period, with an increase of $25,744km^2$.
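The sketch below is only a rough illustration of the clustering step described above, under assumptions not taken from the paper: a synthetic per-pixel NDVI array, and K-means standing in for ISODATA, with clusters ranked into five vegetation levels by mean NDVI.

```python
# Minimal sketch (not the authors' pipeline): cluster per-pixel annual NDVI profiles
# and rank the clusters by mean NDVI into coarse vegetation levels.
# K-means is used here as a simple stand-in for ISODATA; the NDVI array is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ndvi = rng.random((10000, 12))        # hypothetical: 10,000 pixels x 12 monthly NDVI composites

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(ndvi)

# Order clusters by mean annual NDVI so that level 0 = sparsest vegetation.
order = np.argsort([ndvi[labels == k].mean() for k in range(5)])
level = np.empty_like(labels)
for lvl, k in enumerate(order):
    level[labels == k] = lvl

print(np.bincount(level))             # pixel count per vegetation level
```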

Analysis of Characteristics of NPS Runoff and Pollution Contribution Rate in Songya-stream Watershed (송야천 유역의 비점오염물질 유출 특성 및 오염기여율 분석)

  • Kang Taeseong;Yu Nayeong;Shin Minhwan;Lim Kyoungjae;Park Minji;Park Baekyung;Kim Jonggun
    • Journal of Korean Society on Water Environment
    • /
    • v.39 no.4
    • /
    • pp.316-328
    • /
    • 2023
  • In this study, the nonpoint pollutant runoff characteristics and pollution contribution rates of the Songya-stream mainstream and its tributaries were analyzed, and water pollution management and improvement measures for the pollution-affected streams were proposed. An on-site investigation was conducted to determine the inflow of major pollutants into the basin, and it was found that pollutants generated from agricultural land and livestock facilities flowed into the river, resulting in high concentrations of turbid water. Based on the pollution load data calculated through field monitoring (flow and water quality) and the generated and discharged load data calculated from the national pollution source survey, tributaries S3 and S6 were selected as the pollution tributaries of concern in the Songya-stream basin. Cluster analysis using Pearson correlation coefficients and the density-based spatial clustering of applications with noise (DBSCAN) technique showed that S3 and S6 were most consistent with the C2 cluster, the cluster corresponding to the Songya-stream mainstream area. Analysis of the major pollutants in these tributaries showed that livestock and land pollutants were dominant. Consequently, optimal management techniques such as fertilizer management, water gate management in paddy fields, vegetated filter strips, and public treatment of livestock manure were proposed to reduce livestock and land pollutants.
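As a rough illustration of the DBSCAN step mentioned in the abstract (not the study's actual data or parameters), the sketch below standardizes a synthetic site-by-parameter water-quality matrix and groups monitoring sites with DBSCAN; the `eps` and `min_samples` values are assumptions.

```python
# Minimal sketch (synthetic water-quality matrix, not the study's data):
# standardize features and group monitoring sites with DBSCAN.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# hypothetical rows = monitoring sites, columns = mean BOD, T-N, T-P, SS concentrations
wq = rng.normal(size=(20, 4))

X = StandardScaler().fit_transform(wq)
labels = DBSCAN(eps=1.2, min_samples=3).fit_predict(X)   # -1 marks noise sites
print(labels)
```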

Modeling of the Cluster-based Multi-hop Sensor Networks (클러스터 기반 다중 홉 센서 네트워크의 모델링 기법)

  • Choi Jin-Chul;Lee Chae-Woo
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.1 s.343
    • /
    • pp.57-70
    • /
    • 2006
  • A wireless sensor network consisting of a number of small sensors equipped with transceivers and data processors is an effective means of gathering data in a variety of environments. The data collected by each sensor are transmitted to a processing center that uses all reported data to estimate characteristics of the environment or detect an event. This process must be designed to conserve the limited energy resources of the sensors, since neighboring sensors generally report similar information. Therefore, a clustering scheme that sends aggregated information to the processing center may save energy. Existing multi-hop cluster energy consumption models cannot accurately estimate the energy consumption of an individual sensor. In this paper, we propose a new cluster energy consumption model that corrects this problem. Using a Voronoi tessellation, the total energy consumption can be estimated more accurately as a function of the number of cluster heads, enabling energy-efficient cluster formation. Our model achieves an accuracy of over $90\%$ when compared with simulation and is considerably superior to the existing modeling scheme (by about $60\%$). We also confirmed that the proposed model becomes more accurate as the sensor density increases.
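The following sketch is an illustrative toy, not the paper's model: it assigns sensors to their nearest cluster head, which is exactly the Voronoi-cell partition, and uses member counts and squared head-to-member distances as a crude per-cluster energy proxy. Sensor and head positions are synthetic.

```python
# Minimal sketch (illustrative only, not the paper's model): assign sensors to their
# nearest cluster head -- i.e., partition the field into Voronoi cells -- and use the
# member count and head-to-member distances of each cell as a rough energy proxy.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
sensors = rng.random((500, 2))         # hypothetical sensor positions in a unit square
heads = rng.random((10, 2))            # hypothetical cluster-head positions

dist, cell = cKDTree(heads).query(sensors)   # Voronoi-cell assignment = nearest head
members = np.bincount(cell, minlength=len(heads))
intra_energy = np.bincount(cell, weights=dist**2, minlength=len(heads))  # ~ d^2 path-loss proxy

for k, (m, e) in enumerate(zip(members, intra_energy)):
    print(f"cluster {k}: {m:3d} members, intra-cluster energy proxy {e:.3f}")
```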

Classification of Music Data using Fuzzy c-Means with Divergence Kernel (분산커널 기반의 퍼지 c-평균을 이용한 음악 데이터의 장르 분류)

  • Park, Dong-Chul
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.46 no.3
    • /
    • pp.1-7
    • /
    • 2009
  • An approach to the classification of music genres using Fuzzy c-Means (FcM) with a divergence-based kernel is proposed in this paper. The proposed model utilizes the mean and covariance information of feature vectors extracted from music data and modeled by a Gaussian probability density function (GPDF). Furthermore, since the classifier utilizes a kernel method that can convert a complicated nonlinear classification boundary into a simpler linear one, it can improve its classification accuracy over conventional algorithms. Experiments on collected music data sets demonstrate that the proposed classification scheme outperforms conventional algorithms, including FcM and SOM, by 17.73%-21.84% on average in terms of classification accuracy.
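As a hedged illustration of the clustering machinery only, the sketch below implements standard Euclidean fuzzy c-means in plain NumPy; it stands in for, and is simpler than, the paper's divergence-kernel variant on GPDF features. The feature matrix is synthetic.

```python
# Minimal sketch: standard Euclidean fuzzy c-means in plain NumPy, used here as a
# stand-in for the paper's divergence-kernel variant; the feature matrix is synthetic.
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features). Returns (cluster centers, membership matrix U of shape (n_samples, c))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))             # standard FcM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.random.default_rng(3).normal(size=(200, 13))   # hypothetical 13-dim timbre features
centers, U = fuzzy_c_means(X, c=4)
print(U.argmax(axis=1)[:10])                          # hard genre assignment of the first 10 clips
```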

A Probabilistic Approach for Mobile Robot Localization under RFID Tag Infrastructures (RFID Tag 기반 이동 로봇의 위치 인식을 위한 확률적 접근)

  • Won Dae-Heui;Yang Gwang-Woong;Choi Moo-Sung;Park Sang-Deok;Lee Ho-Gil
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2005.06a
    • /
    • pp.1034-1039
    • /
    • 2005
  • SLAM (simultaneous localization and mapping) and AI (artificial intelligence) have been active research areas in robotics for two decades. In particular, localization is one of the most important tasks in mobile robot research. Until now, expensive sensors such as laser scanners have been used for mobile robot localization. Meanwhile, RFID technology is proliferating rapidly, and RFID readers, antennas, and tags are becoming increasingly smaller and cheaper. In this paper, a smart floor using passive RFID tags is therefore proposed, in which the tags are used mainly to identify the location of a mobile robot on the smart floor. We discuss a number of challenges related to this approach, such as tag distribution (density and structure), typing, and clustering. On a smart floor using RFID tags, localization error results from the sensing area of the RFID reader, because the reader only knows whether a tag is within its sensing range; moreover, there has been no study on estimating the heading of a mobile robot using RFID tags. Therefore, two algorithms are suggested: the Markov localization method is used to reduce the position (X, Y) error, and the Kalman filter method is used to estimate the heading ($\theta$) of the mobile robot. Because algorithms based on Markov localization require high computing power, a fast Markov localization algorithm is also suggested. Finally, we applied these algorithms to our personal robot, CMR-P3, and show the feasibility of this probabilistic approach for mobile robot localization on the smart floor using cheap sensors such as odometers and RFID tags.
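The sketch below is a generic one-dimensional grid Markov localization step with a binary "tag detected" measurement model, offered only as an illustration of the probabilistic update, not as the paper's algorithm; the grid size, motion kernel, and detection radius are assumptions.

```python
# Minimal sketch (illustrative, not the paper's algorithm): one-dimensional grid
# Markov localization with a binary "tag detected" measurement model.
import numpy as np

cells = 50
belief = np.full(cells, 1.0 / cells)             # uniform prior over grid cells
tag_cell = 30                                     # hypothetical known tag position

def predict(belief, kernel=(0.1, 0.8, 0.1)):
    """Motion update: blur the belief with a simple odometry-noise kernel."""
    b = np.convolve(belief, kernel, mode="same")
    return b / b.sum()

def correct(belief, detected, radius=2):
    """Measurement update: a tag detection is likely only near the tag's cell."""
    near = np.abs(np.arange(cells) - tag_cell) <= radius
    likelihood = np.where(near, 0.9, 0.05) if detected else np.where(near, 0.1, 0.95)
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = correct(predict(belief), detected=True)
print(belief.argmax())                            # most likely cell after one step
```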

Evaluation of Water Quality for the Han River Tributaries Using Multivariate Analysis (다변량 통계 분석기법을 이용한 한강수계 지천의 수질 평가)

  • Kim, Yo-Yong;Lee, Si-Jin
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.33 no.7
    • /
    • pp.501-510
    • /
    • 2011
  • In this study, the water pollution sources of 14 major tributaries of the Han river and the water quality characteristics of each target stream were evaluated based on water quality data from January 2007 to December 2009 (14 data sets) using the statistical package SPSS 17.0. Cluster analysis over time and space for each stream produced 4 groups for the spatial variation, in which the type and density of pollution sources in the basins had the greatest impact on the grouping. Cluster analysis of the temporal variation, to which rainfall, temperature, and eutrophication were shown to contribute, produced 2 groups: summer to fall (July-Oct.) and winter to early summer (Nov.-June). Four factors were found to be responsible for the data structure, explaining 71-90% of the total variance of the data set depending on the stream; they included organic matter, nutrients, and bacterial contamination. Factor analysis showed that the main factors (water pollutants) changed with the season, with a different pattern for each stream. This study demonstrated that evaluating the factors together with the pollution sources of the basin can produce useful information about the water quality of each stream.
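As an illustrative sketch only (synthetic data, not the study's SPSS workflow), the code below groups monitoring stations by Ward hierarchical clustering and uses PCA as a simple surrogate for factor analysis to see how much variance the leading components explain.

```python
# Minimal sketch (synthetic data, not the study's): hierarchical clustering of
# monitoring stations followed by PCA as a rough surrogate for factor analysis.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
wq = rng.normal(size=(14, 8))              # hypothetical: 14 stations x 8 water-quality parameters

X = StandardScaler().fit_transform(wq)
groups = fcluster(linkage(X, method="ward"), t=4, criterion="maxclust")   # spatial grouping
pca = PCA(n_components=4).fit(X)

print(groups)
print(pca.explained_variance_ratio_.cumsum())   # variance explained by the leading components
```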

Automatic Detection of Foreign Body through Template Matching in Industrial CT Volume Data (산업용 CT 볼륨데이터에서 템플릿 매칭을 통한 이물질 자동 검출)

  • Ji, Hye-Rim;Hong, Helen
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.12
    • /
    • pp.1376-1384
    • /
    • 2013
  • In this paper, we propose an automatic detection method for foreign bodies in industrial CT volume data based on template matching. Our method is composed of three main steps. First, in the down-sampled data, the product region is separated from the background after noise reduction, and initial foreign-body candidates are extracted using the mean and standard deviation of the product region; the foreign-body candidates are then extracted using K-means clustering. Second, foreign bodies whose intensity differs from that of the product region are detected using template matching, in which SSD or joint entropy is evaluated according to the size of the detected foreign-body candidates. Third, to improve the detection rate in the original volume data, the final foreign bodies are detected using a percolation method. For performance evaluation, industrial CT volume data and simulation data are used; visual inspection and accuracy assessment are performed, and the processing time is measured. For the accuracy assessment, a density-based detection method is used for comparison and the Dice coefficient is measured.
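The sketch below illustrates only the SSD template-matching idea on a synthetic 2D slice, using brute-force search; it is not the paper's pipeline, which operates on down-sampled CT volumes and also uses joint entropy and a percolation step.

```python
# Minimal sketch (illustrative only): brute-force SSD template matching on a 2D slice.
# The slice and template here are synthetic arrays; a real CT volume would be matched slab-by-slab.
import numpy as np

rng = np.random.default_rng(5)
slice_img = rng.random((64, 64))
template = slice_img[20:26, 30:36].copy()       # hypothetical foreign-body patch

th, tw = template.shape
best, best_pos = np.inf, None
for y in range(slice_img.shape[0] - th + 1):
    for x in range(slice_img.shape[1] - tw + 1):
        ssd = np.sum((slice_img[y:y+th, x:x+tw] - template) ** 2)   # sum of squared differences
        if ssd < best:
            best, best_pos = ssd, (y, x)

print(best_pos)   # expected to recover (20, 30)
```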

Clustering for Improved Actor Connectivity and Coverage in Wireless Sensor and Actor Networks (무선 센서 액터 네트워크에서 액터의 연결성과 커버리지를 향상시키기 위한 클러스터 구성)

  • Kim, Young-Kyun;Jeon, Chang-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.8
    • /
    • pp.63-71
    • /
    • 2014
  • This paper proposes an algorithm that forms clusters on the basis of hop distance in order to improve actor coverage and connectivity in sink-based wireless sensor and actor networks. The proposed algorithm forms clusters that are distributed evenly over the target area by electing cluster heads (CHs) at regular hop intervals from a sink. The CHs are elected sequentially from the sink in order to ensure connectivity between the sink and the actors located at the CHs. Additionally, CHs are preferentially elected in areas of higher sensor density in order to improve the actor coverage. By forming regularly distributed clusters and minimizing their overlap, the proposed algorithm reduces the number of clusters created in the target area and thus the number of actors placed at the CH positions. Simulations are performed to verify that the proposed algorithm constructs an actor network that is connected to the sink. Moreover, we show that the proposed algorithm improves the actor coverage and therefore reduces the number of actors that must be deployed in the region by 9~20% compared to the IDSC algorithm.
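As a hedged toy version of hop-based cluster-head election (not the paper's algorithm), the sketch below computes hop distances from a sink on a random geometric graph and elects heads at regular hop intervals, preferring higher-degree nodes as a crude proxy for local sensor density; the graph and the interval are assumptions.

```python
# Minimal sketch (not the paper's algorithm): elect cluster heads at regular hop
# intervals from a sink, preferring nodes with more neighbors as a rough
# density proxy. Graph, radius, and interval are assumed values.
import networkx as nx

G = nx.random_geometric_graph(200, radius=0.15, seed=6)   # hypothetical sensor field
sink = 0
hop = nx.single_source_shortest_path_length(G, sink)      # hop distance from the sink

interval, heads, covered = 3, [], set()
for node in sorted(hop, key=lambda n: (hop[n], -G.degree[n])):   # sweep outward from the sink
    if hop[node] % interval == 0 and node not in covered:
        heads.append(node)
        covered |= set(nx.single_source_shortest_path_length(G, node, cutoff=interval))

print(len(heads), "cluster heads elected")
```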

Application of Bioinformatics for the Functional Genomics Analysis of Prostate Cancer Therapy

  • Mousses, Spyro
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2000.11a
    • /
    • pp.74-82
    • /
    • 2000
  • Prostate cancer initially responds and regresses in response to androgen depletion therapy, but most human prostate cancers will eventually recur and re-grow as androgen-independent tumors. Once these tumors become hormone refractory, they are usually incurable, leading to death for the patient. Little is known about the molecular details of how prostate cancer cells regress following androgen ablation and which genes are involved in the androgen-independent growth that follows the development of resistance to therapy. Such knowledge would reveal putative drug targets useful in rational therapeutic design to prevent therapy resistance and control androgen-independent growth. The application of genome-scale technologies has permitted new insights into the molecular mechanisms associated with these processes. Specifically, we have applied functional genomics using high-density cDNA microarray analysis for parallel gene expression analysis of prostate cancer in an experimental xenograft system during androgen withdrawal therapy and following therapy resistance. The large amount of expression data generated posed a formidable bioinformatics challenge. A novel template-based gene clustering algorithm was developed and applied to the data to discover the genes that respond to androgen ablation. The data show restoration of expression of androgen-dependent genes in the recurrent tumors, along with other signaling genes. Together, the discovered genes appear to be involved in prostate cancer cell growth and therapy resistance in this system. We have also developed and applied tissue microarray (TMA) technology for high-throughput molecular analysis of hundreds to thousands of clinical specimens simultaneously. TMA analysis was used for rapid clinical translation of candidate genes discovered by cDNA microarray analysis to determine their clinical utility as diagnostic, prognostic, and therapeutic targets. Finally, we have developed a bioinformatic approach that combines pharmacogenomic data on the efficacy and specificity of various drugs to target the discovered prostate-cancer-growth-associated candidate genes, in an attempt to improve current therapeutics.
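Purely as a generic illustration of template-based clustering of expression profiles (not the authors' algorithm or data), the sketch below correlates each synthetic gene profile against a few idealized response templates and assigns each gene to its best-matching template.

```python
# Minimal sketch (generic template-correlation clustering, not the paper's own method):
# correlate each gene's expression profile with idealized response templates and
# assign the gene to its best-matching template. Data and templates are synthetic.
import numpy as np

rng = np.random.default_rng(7)
expr = rng.normal(size=(1000, 6))     # hypothetical: 1,000 genes x 6 time points (ablation -> recurrence)

templates = np.array([
    [1, 0, 0, 0, 1, 1],               # falls after ablation, restored at recurrence
    [0, 0, 1, 1, 1, 1],               # rises with therapy resistance
    [1, 1, 1, 0, 0, 0],               # falls and stays down
], dtype=float)

def zscore(a, axis=-1):
    return (a - a.mean(axis, keepdims=True)) / a.std(axis, keepdims=True)

corr = zscore(expr) @ zscore(templates).T / expr.shape[1]   # Pearson correlation with each template
assignment = corr.argmax(axis=1)
print(np.bincount(assignment))        # number of genes matched to each template
```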

An Automatic Pattern Recognition Algorithm for Identifying the Spatio-temporal Congestion Evolution Patterns in Freeway Historic Data (고속도로 이력데이터에 포함된 정체 시공간 전개 패턴 자동인식 알고리즘 개발)

  • Park, Eun Mi;Oh, Hyun Sun
    • Journal of Korean Society of Transportation
    • /
    • v.32 no.5
    • /
    • pp.522-530
    • /
    • 2014
  • Spatio-temporal congestion evolution patterns can be reproduced using the historical VDS (Vehicle Detection System) speed datasets stored in TMCs (Traffic Management Centers). Such datasets provide a pool of spatio-temporally experienced traffic conditions. Traffic flow patterns are known to recur in space and time, and even non-recurrent congestion caused by incidents follows patterns according to the incident conditions. This implies that such information should be useful for traffic prediction and traffic management. Traffic flow prediction is generally performed using black-box approaches such as neural networks and genetic algorithms. Black-box approaches are not designed to explain their modeling and reasoning process or to estimate the benefits and risks of implementing such a solution, so TMCs are reluctant to employ them even though numerous valuable studies exist. This research proposes a more readily understandable and intuitively appealing data-driven approach and develops an algorithm for identifying congestion patterns for recurrent and non-recurrent congestion management and information provision.
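As a rough, generic illustration of extracting spatio-temporal congestion patterns (not the paper's algorithm), the sketch below thresholds a synthetic space-time speed matrix and labels connected low-speed regions; the 60 km/h threshold and matrix dimensions are assumptions.

```python
# Minimal sketch (generic illustration, not the paper's algorithm): threshold a
# space-time speed matrix and label connected low-speed regions as congestion patterns.
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(8)
speed = rng.uniform(40, 110, size=(288, 50))   # hypothetical: 288 five-minute intervals x 50 VDS stations

congested = speed < 60                         # congestion mask (assumed 60 km/h threshold)
regions, n = label(congested)                  # 4-connected space-time congestion blobs
sizes = np.bincount(regions.ravel())[1:]       # cells per congestion region

print(n, "candidate congestion patterns; largest spans", sizes.max(), "space-time cells")
```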