• Title/Summary/Keyword: random map

Search results: 259

Development of Random Forest Model for Sewer-induced Sinkhole Susceptibility (손상 하수관으로 인한 지반함몰의 위험도 평가를 위한 랜덤 포레스트 모델 개발)

  • Kim, Joonyoung;Kang, Jae Mo;Baek, Sung-Ha
    • Journal of the Korean Geotechnical Society / v.37 no.12 / pp.117-125 / 2021
  • Ground subsidence and sinkholes in downtown areas, which threaten the safety of citizens, have been reported frequently. Among the various sinkhole mechanisms, soil erosion through damaged parts of sewer pipes was found to be the main cause in Seoul. In this study, a random forest model for predicting the occurrence of sinkholes caused by damaged sewer pipes was trained on sewer pipe attributes and the locations of recorded sinkhole occurrences in Seoul. After hyperparameter optimization, the random forest model showed excellent performance in predicting sinkhole occurrence. In addition, among the sewer pipe attributes used as input variables, pipe length, elevation above sea level, slope, and burial depth influenced the predicted ground subsidence risk, in that order of importance. The results of this study are expected to serve as basic data for preparing a sinkhole susceptibility map and for establishing underground cavity exploration plans and sewer pipe maintenance plans.
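
A minimal sketch (not from the paper) of the kind of random forest workflow the abstract describes, assuming hypothetical sewer-pipe attributes (pipe length, elevation, slope, burial depth) as features; the feature names, value ranges, and labels are illustrative stand-ins, not the Seoul dataset.

```python
# Illustrative only: hypothetical sewer-pipe features and random labels,
# not the Seoul dataset used in the paper. Scores here are meaningless;
# the point is the workflow (tuning, scoring, feature importances).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(5, 300, n),    # pipe length (m)
    rng.uniform(0, 120, n),    # elevation above sea level (m)
    rng.uniform(0, 15, n),     # ground slope (deg)
    rng.uniform(0.5, 6, n),    # burial depth (m)
])
y = rng.integers(0, 2, n)      # 1 = sinkhole recorded near the pipe

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Hyperparameter optimization via grid search, since the abstract mentions tuning.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=5,
)
grid.fit(X_tr, y_tr)
print("test accuracy:", grid.score(X_te, y_te))
print("feature importances:", grid.best_estimator_.feature_importances_)
```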

An Algorithms for Tournament-based Big Data Analysis (토너먼트 기반의 빅데이터 분석 알고리즘)

  • Lee, Hyunjin
    • Journal of Digital Contents Society / v.16 no.4 / pp.545-553 / 2015
  • While all data has value in itself, most data collected in the real world is random and unstructured. Extracting useful information from it requires data transformation and analysis algorithms, which is the purpose of data mining. Today there is a need not only for a variety of data mining techniques but also for the computational capacity and fast analysis times demanded by huge volumes of data. Such data is commonly stored in Hadoop and analyzed there with the MapReduce framework. In this paper, we develop a tournament-based MapReduce method that makes it efficient to port an algorithm developed on a single machine to the MapReduce framework. The proposed method can be applied to many analysis algorithms, and we demonstrate its usefulness by applying it to two frequently used data mining algorithms, k-means clustering and k-nearest neighbor classification.
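
The paper's MapReduce implementation is not reproduced here; as a hedged sketch of the general tournament idea, the code below reduces a set of candidates by pairwise "matches", illustrated on the k-means assignment step (finding the nearest centroid for a point). The function names and data are illustrative.

```python
# Conceptual sketch: tournament-style pairwise reduction, not the paper's method.
import numpy as np

def tournament_reduce(candidates, better):
    """Repeatedly pit candidates against each other in pairs until one remains."""
    while len(candidates) > 1:
        nxt = []
        for i in range(0, len(candidates) - 1, 2):
            nxt.append(better(candidates[i], candidates[i + 1]))
        if len(candidates) % 2:           # odd candidate advances automatically
            nxt.append(candidates[-1])
        candidates = nxt
    return candidates[0]

point = np.array([1.0, 2.0])
centroids = [np.array(c, dtype=float) for c in [(0, 0), (3, 3), (1, 1), (5, 2)]]

# "Match" rule: the centroid closer to the point wins (k-means assignment step).
closer = lambda a, b: a if np.linalg.norm(point - a) <= np.linalg.norm(point - b) else b
print("nearest centroid:", tournament_reduce(centroids, closer))
```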

Design of an RFID Authentication Protocol Using Nonlinear Tent-Map (비선형 Tent-Map을 이용한 RFID 인증 프로토콜 설계)

  • Han, Kyu-Kwang;Yim, Geo-Su
    • The Journal of the Korea institute of electronic communication sciences / v.9 no.10 / pp.1145-1152 / 2014
  • RFID (Radio-Frequency Identification) is a technology for identifying objects by radio and a groundbreaking way to improve product management in areas such as distribution, transport, logistics, and inventory control. However, because RFID communicates over radio, it is exposed to information leakage and falsification through the security vulnerabilities of the communication channel. We designed a new authentication protocol by applying the tent map, a representative chaotic (complex) system, to the RFID communication system. A simpler and more robust authentication system was obtained by applying the initial-value sensitivity and irregularity characteristic of such systems to the RFID reader and tag. The purpose of this paper is to verify the usability of an RFID authentication protocol based on this nonlinear system, as an alternative to authentication systems that depend on conventional hash functions or random numbers.
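
For reference, the tent map itself is a simple piecewise-linear chaotic map; the sketch below (not the paper's protocol, which specifies how reader and tag exchange values) iterates it to show the initial-value sensitivity the abstract relies on. The parameter value and seeds are illustrative.

```python
# Standard tent map: x_{n+1} = mu*x_n if x_n < 0.5 else mu*(1 - x_n), 0 < mu <= 2.
# Two nearly identical seeds diverge quickly, the sensitivity the protocol exploits.
def tent(x, mu=1.9999):
    return mu * x if x < 0.5 else mu * (1.0 - x)

def iterate(seed, steps=30):
    xs = [seed]
    for _ in range(steps):
        xs.append(tent(xs[-1]))
    return xs

a = iterate(0.400000)
b = iterate(0.400001)          # almost identical seed
print(abs(a[-1] - b[-1]))      # trajectories are far apart after a few steps
```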

A Video Watermarking Method using Global Masking (전역 마스킹을 이용한 비디오 워터마킹 방법)

  • 문지영;호요성
    • Journal of Broadcast Engineering / v.8 no.3 / pp.268-277 / 2003
  • In this paper, we propose a new video watermarking method that exploits the human visual system (HVS) to find locations in the video frames that make the watermark simultaneously robust and imperceptible. In particular, we propose a new HVS-optimized weighting map for hiding the watermark that considers the HVS in three different aspects: frequency, spatial, and motion masking effects. The global masking map is modeled by combining frequency masking, spatial masking, and motion masking. The watermark is generated by a bitwise exclusive-OR operation between a logo image and a random sequence, and its embedding strength is weighted by a control parameter. Furthermore, we embed the watermark in the uncompressed video sequence so that the method remains applicable across various coding schemes. Simulation results show that the watermark is imperceptible and that the proposed method provides good watermark capacity. It is also demonstrated that the proposed method is robust against various attacks, such as MPEG coding, MPEG re-encoding, and frame attacks.
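
A minimal sketch of the watermark-generation step the abstract states explicitly, XORing a binary logo with a pseudorandom sequence and scaling the result by a masking weight and a control parameter. The masking map below is a uniform placeholder, not the paper's HVS-derived global masking model, and the frame and logo are synthetic.

```python
# Sketch of the construction described in the abstract:
# watermark = logo XOR random sequence, scaled by a masking weight before embedding.
import numpy as np

rng = np.random.default_rng(42)
logo = rng.integers(0, 2, size=(64, 64))              # stand-in binary logo
prn  = rng.integers(0, 2, size=(64, 64))              # pseudorandom binary sequence
watermark = np.bitwise_xor(logo, prn)                  # bitwise exclusive-OR, as stated

frame = rng.uniform(0, 255, size=(64, 64))             # stand-in luminance frame
masking = np.full_like(frame, 1.0)                      # placeholder global masking map
alpha = 2.0                                             # embedding-strength control parameter
marked = frame + alpha * masking * (2 * watermark - 1)  # map {0,1} -> {-1,+1} then add
```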

Development of Radius Search System based on Raster Map in the Flash Environment (플래시 환경에서 래스터 지도를 기반으로 한 반경 검색 시스템 개발)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.8 no.4 / pp.39-47 / 2008
  • This paper describes an everyday-life GIS (Geographic Information System) that enables users to look up a final destination on a map automatically by setting an arbitrary search radius around the user's current position. Existing GIS-based information systems have the disadvantage that searching is difficult when the destination is not precisely known, and they are inefficient because processing is delayed by complex configurations and the large amounts of information aimed at specialized business use. This paper addresses these problems and proposes a customized life GIS for general users built in the Flash (ActionScript) environment. The customized life GIS presents detailed items from the search results, namely the destinations that satisfy the user's conditions within the chosen radius of the current position, so that the user can easily identify a suitable final destination on the map. The results of this paper, which are based on a sample of large-unit locations, are expected to support more detailed, extensive, and varied guidance information.
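
A hedged sketch of the core radius-filtering step, written in Python rather than ActionScript: keep only the destinations that fall within a user-chosen radius of the current position. On a raster map this distance would typically be computed in pixel or projected map coordinates; the destination list, coordinates, and radius below are made up.

```python
# Illustrative radius search over projected (x, y) map coordinates.
import math

destinations = [("cafe", 120.0, 80.0), ("library", 900.0, 400.0), ("park", 300.0, 310.0)]
user = (250.0, 260.0)        # current position in map units
radius = 200.0               # user-chosen search radius

def within_radius(dest, center, r):
    _, x, y = dest
    return math.hypot(x - center[0], y - center[1]) <= r

hits = [d for d in destinations if within_radius(d, user, radius)]
print(hits)                  # destinations inside the search circle
```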

Estimation of Fractional Vegetation Cover in Sand Dunes Using Multi-spectral Images from Fixed-wing UAV

  • Choi, Seok Keun;Lee, Soung Ki;Jung, Sung Heuk;Choi, Jae Wan;Choi, Do Yoen;Chun, Sook Jin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.4 / pp.431-441 / 2016
  • Since UAVs (Unmanned Aerial Vehicles) make it convenient to acquire data over broad or inaccessible regions, they are nowadays used to build spatial information in fields such as the environment, ecosystems, forestry, and military applications. In this study, a process for estimating FVC (Fractional Vegetation Cover) from multi-spectral UAV imagery is proposed to overcome the limitations of conventional methods, so that the FVC map is generated from multi-spectral images. First, two classification results were obtained with RF (Random Forest): one using RGB images only and one using NDVI (Normalized Difference Vegetation Index) together with RGB images. The resulting maps were then reclassified into vegetation and non-vegetation. Finally, an RF-based FVC map was generated by pixel calculation, and a GI (Gutman and Ignatov) model-based FVC map was derived indirectly using fixed parameters. Adding NDVI gives relatively higher accuracy than using RGB alone, and in particular the GI model shows a lower RMSE (Root Mean Square Error), 0.182, than the RF-based map. In this regard, the GI model, which uses only NDVI values, is more readily applicable than the RF approach, whose accuracy varies with the classification results. Our results showed that the GI model ensures the quality of the FVC estimate provided the NDVI is kept at a uniform quality level, which can easily be achieved with a UAV; UAV vegetation data can therefore improve FVC estimation.
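
The Gutman-Ignatov (GI) model estimates FVC directly from NDVI as a linear mixing between bare-soil and full-vegetation endmembers. A minimal sketch follows; the endmember values are placeholders standing in for the fixed parameters the abstract mentions, and the NDVI raster is synthetic.

```python
# GI model: FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil), clipped to [0, 1].
import numpy as np

def fvc_gi(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

ndvi_map = np.array([[0.10, 0.45], [0.70, 0.90]])   # stand-in NDVI raster
print(fvc_gi(ndvi_map))                              # per-pixel fractional vegetation cover
```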

AN EFFICIENT AND SECURE STRONG DESIGNATED VERIFIER SIGNATURE SCHEME WITHOUT BILINEAR PAIRINGS

  • Islam, Sk Hafizul;Biswas, G.P.
    • Journal of applied mathematics & informatics / v.31 no.3_4 / pp.425-441 / 2013
  • In the literature, several strong designated verifier signature (SDVS) schemes have been devised using elliptic curve bilinear pairing and the map-to-point (MTP) hash function. Bilinear pairing requires a supersingular elliptic curve group with a large number of elements, and its relative computation cost is approximately two to three times that of an elliptic curve point multiplication, which makes it an expensive operation. Moreover, the MTP function, which maps a user identity into an elliptic curve point, is more expensive than an elliptic curve scalar point multiplication. Hence, SDVS schemes built from bilinear pairing and the MTP hash function are not efficient in real environments. Thus, a cost-efficient SDVS scheme using pairing-free elliptic curve cryptography is proposed in this paper, which uses a general cryptographic hash function instead of the MTP hash function. The security analysis shows that our scheme is secure in the random oracle model under the hardness assumption of the CDH problem. In addition, formal security validation of the proposed scheme is performed using the AVISPA tool (Automated Validation of Internet Security Protocols and Applications), which demonstrates that our scheme is unforgeable against passive and active attacks. Our scheme also satisfies the different properties of an SDVS scheme, including strongness, source hiding, non-transferability, and unforgeability. A comparison of our scheme with others is given, which shows that it outperforms them in terms of security, computation cost, and bandwidth requirement.
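
The paper's construction is not reproduced here. As a rough intuition for the "designated verifier" property only, the sketch below MACs a message under a value that both signer and verifier could compute (a stand-in for a CDH/ECDH-type shared secret): the tag convinces the designated verifier but proves nothing to third parties, since the verifier could have produced the same tag. This is a conceptual illustration under stated assumptions, not the proposed SDVS scheme.

```python
# Conceptual sketch of non-transferability via a shared secret; illustrative only.
import hashlib
import hmac

# Stand-in for a shared value derivable by both signer and designated verifier
# (in the actual setting this would come from the parties' key pairs, with
# security resting on the hardness of the CDH problem).
shared_secret = hashlib.sha256(b"stand-in for the common Diffie-Hellman value").digest()

def dv_sign(message: bytes) -> bytes:
    return hmac.new(shared_secret, message, hashlib.sha256).digest()

def dv_verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(dv_sign(message), tag)

tag = dv_sign(b"hello designated verifier")
print(dv_verify(b"hello designated verifier", tag))   # True
```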

The genetic structure of taro: a comparison of RAPD and isozyme markers

  • Sharma, Kamal;Mishra, Ajay Kumar;Misra, Raj Shekhar
    • Plant Biotechnology Reports / v.2 no.3 / pp.191-198 / 2008
  • Germplasm characterization and the evolutionary processes acting in viable populations are important links between the conservation and utilization of plant genetic resources. Here, an investigation based on molecular and biochemical techniques is made to assess and exploit genetic variability in the germplasm characterization of taro, which would be useful for plant breeding and the ex situ conservation of taro plant genetic resources. The geographical differentiation and phylogenetic relationships of Indian taro, Colocasia esculenta (L.) Schott, were analyzed by random amplified polymorphic DNA (RAPD) and by isozymes of seven enzyme systems, with specific reference to the Muktakeshi accession, which has been proved resistant to taro leaf blight caused by P. colocasiae. Significant differentiation among Indian taro cultivars was clearly demonstrated by the RAPD and isozyme analyses. RAPD markers showed higher values for genetic differentiation among taro cultivars and a lower coefficient of variation than those obtained from isozymes. Genetic differentiation was evident in the taro accessions collected from different regions of India. It appears that when taro cultivation was introduced to a new area, only a small fraction of the genetic variability in the heterogeneous taro populations was transferred, possibly causing random differentiation among locally adapted taro populations. The selected primers will be useful for future genetic analysis and provide taro breeders with a genetic basis for selecting parents for crop improvement. The polymorphic markers identified in this DNA fingerprinting study will be useful for screening a segregating population, which is being generated in our laboratory with the aim of developing a taro genetic linkage map.
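
RAPD scoring typically yields a presence/absence band matrix per accession. As a generic, hedged illustration of how pairwise similarity can be computed from such a matrix (not the paper's analysis), the sketch below uses made-up band profiles and a simple Jaccard coefficient.

```python
# Illustrative only: pairwise Jaccard similarity from binary RAPD band profiles.
import numpy as np

bands = {
    "Muktakeshi": np.array([1, 0, 1, 1, 0, 1, 0, 1]),   # made-up band pattern
    "AccessionA": np.array([1, 1, 1, 0, 0, 1, 0, 0]),
    "AccessionB": np.array([0, 0, 1, 1, 1, 1, 1, 1]),
}

def jaccard(a, b):
    shared = np.sum((a == 1) & (b == 1))
    union = np.sum((a == 1) | (b == 1))
    return shared / union

names = list(bands)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(names[i], names[j], round(jaccard(bands[names[i]], bands[names[j]]), 2))
```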

A Bottom-up and Top-down Based Disparity Computation

  • Kim, Jung-Gu;Hong Jeong
    • Journal of Electrical Engineering and information Science / v.3 no.2 / pp.211-221 / 1998
  • It is becoming apparent that stereo matching algorithms need much information from high-level cognitive processes; conventional algorithms based on bottom-up control alone are susceptible to local minima. We introduce a system that consists of two levels: a lower level that uses a usual matching method based on the local neighborhood, and a second level that integrates the partial information and is aimed at contextual matching. Conceptually, introducing a bottom-up and top-down feedback loop into the usual matching algorithm improves the overall performance. For this purpose, we model the image attributes using a Markov random field (MRF) and from it derive a maximum a posteriori (MAP) estimate. The energy equation corresponding to the estimate efficiently represents natural constraints such as occlusion as well as the partial information from the other level. In addition to recognition, we derive a training method that can determine the system parameters automatically. As an experiment, we test the algorithms using random dot stereograms (RDS) as well as natural scenes. It is shown that the overall recognition error is drastically reduced by the introduction of contextual matching.
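
As a rough sketch of MAP disparity estimation as energy minimization (the usual data term plus smoothness term), the code below solves a single scanline by dynamic programming. The signals, costs, and weights are illustrative and do not reproduce the paper's MRF energy or its occlusion handling.

```python
# Hedged sketch: minimize E(d) = sum_x |L(x) - R(x - d_x)| + lam * sum_x |d_x - d_{x+1}|
# over one scanline by dynamic programming. Data and parameters are illustrative.
import numpy as np

left  = np.array([5, 5, 9, 9, 9, 2, 2, 2], dtype=float)
right = np.array([5, 9, 9, 9, 2, 2, 2, 2], dtype=float)   # roughly left shifted by 1
max_d, lam = 3, 1.0
n = len(left)

def data_cost(x, d):
    xr = x - d
    return abs(left[x] - right[xr]) if 0 <= xr < n else 10.0   # penalty off-image

# cost[x, d] = best energy of the scanline prefix ending at pixel x with disparity d
cost = np.zeros((n, max_d + 1))
for d in range(max_d + 1):
    cost[0, d] = data_cost(0, d)
for x in range(1, n):
    for d in range(max_d + 1):
        trans = [cost[x - 1, dp] + lam * abs(d - dp) for dp in range(max_d + 1)]
        cost[x, d] = data_cost(x, d) + min(trans)

# Backtrack the minimizing disparity sequence (MAP estimate for this toy energy).
d = int(np.argmin(cost[-1]))
disp = [d]
for x in range(n - 1, 0, -1):
    d = int(np.argmin([cost[x - 1, dp] + lam * abs(disp[-1] - dp) for dp in range(max_d + 1)]))
    disp.append(d)
print(disp[::-1])
```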


Data Mining-Aided Automatic Landslide Detection Using Airborne Laser Scanning Data in Densely Forested Tropical Areas

  • Mezaal, Mustafa Ridha;Pradhan, Biswajeet
    • Korean Journal of Remote Sensing / v.34 no.1 / pp.45-74 / 2018
  • Landslides are natural hazards that threaten lives and property in many areas around the world, and they are difficult to recognize, particularly in rainforest regions. Thus, an accurate, detailed, and up-to-date inventory map is required for landslide susceptibility, hazard, and risk analyses. The inconsistency of results obtained with different feature selection techniques in the literature has highlighted the importance of evaluating these techniques, so six feature selection techniques were evaluated in this study. Very-high-resolution LiDAR point clouds and orthophotos were acquired simultaneously by airborne laser scanning (LiDAR) over a rainforest area of the Cameron Highlands, Malaysia. A fuzzy-based segmentation parameter optimizer (FbSP optimizer) was used to optimize the segmentation parameters. Training samples were selected with a stratified random sampling method, with 70% of the samples used for training. Two machine-learning algorithms, Support Vector Machine (SVM) and Random Forest (RF), were used to evaluate the performance of each feature selection algorithm. The overall accuracies of the SVM and RF models revealed that three of the six algorithms ranked higher for landslide detection. The results indicated that the classification accuracies of the RF classifier were higher than those of the SVM classifier, whether all features or only the optimal features were used. The proposed techniques performed well in detecting landslides in a rainforest area of Malaysia and can easily be extended to similar regions.
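
A minimal sketch of the comparison pattern the abstract describes, training RF and SVM on the same features with a stratified 70/30 split and comparing overall accuracy. The dataset is a synthetic stand-in, not the LiDAR-derived landslide features used in the study.

```python
# Illustrative only: RF vs. SVM overall accuracy on the same (synthetic) features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=12, n_informative=6, random_state=1)

# Stratified 70/30 split, mirroring the 70% training share mentioned in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, stratify=y, random_state=1)

for name, model in [("RF", RandomForestClassifier(random_state=1)), ("SVM", SVC())]:
    model.fit(X_tr, y_tr)
    print(name, "overall accuracy:", accuracy_score(y_te, model.predict(X_te)))
```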