• Title/Summary/Keyword: mapping method

Efficient Construction of Open Source-based Sewage Facility Database (오픈소스 기반의 하수 시설물 데이터베이스의 효율적 구축)

  • Ko, Jeongsang;Xu, Chunxu;Yun, Heecheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.5
    • /
    • pp.393-402
    • /
    • 2022
  • Effective data management of underground facilities is very important for human safety, and it requires up-to-date, high-accuracy data as input; an efficient data input method is therefore essential. In this study, a sewage facility field survey program was developed with open source software so that paper drawings could be replaced with tablet PCs. Using a tablet PC, geometry and attribute information acquired in the field are transmitted in real time through a database server. PostGIS queries were developed to automate structured editing and minimize manual work in constructing a GIS (Geographic Information System) database for sewage facilities. In addition, the database was built using the sewage facility GIS database building program. A comparison with the existing sewage facility database construction in terms of work process and work time showed that the work process was simplified and the work time was shortened. Furthermore, with simple customization of the open source software, the approach can be applied to field surveys and database construction in other fields.
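
The paper's PostGIS queries are not reproduced in the abstract; the following is only a minimal sketch of the kind of structured-editing automation it describes, here snapping digitized sewer pipe geometries to nearby manhole points so the network topology stays consistent. The schema (`sewer_pipe`, `manhole`, `geom`), tolerance, and connection string are hypothetical.

```python
# Minimal sketch (hypothetical schema): snap sewer pipe geometry to the
# nearest manhole within 0.5 m so that pipe/manhole topology stays consistent.
import psycopg2  # assumes a reachable PostGIS-enabled PostgreSQL server

SNAP_ENDPOINTS_SQL = """
UPDATE sewer_pipe p
SET geom = ST_Snap(p.geom, m.geom, 0.5)
FROM manhole m
WHERE ST_DWithin(p.geom, m.geom, 0.5);
"""

def run_structured_edit(dsn: str) -> None:
    """Run one automated structured-editing step on the facility database."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(SNAP_ENDPOINTS_SQL)
        conn.commit()

if __name__ == "__main__":
    run_structured_edit("dbname=sewage user=gis password=gis host=localhost")
```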

Combination Key Generation Scheme Robust to Updates of Personal Information (결합키 생성항목의 갱신에 강건한 결합키 생성 기법)

  • Jang, Hobin;Noh, Geontae;Jeong, Ik Rae;Chun, Ji Young
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.5
    • /
    • pp.915-932
    • /
    • 2022
  • According to the Personal Information Protection Act and the Pseudonymization Guidelines, when different combination applicants wish to combine data, the combination is performed by mapping records to hash values of the combination key generation items together with a salt value. Examples of combination key generation items include personal information such as name, phone number, date of birth, and address. Because of the properties of hash functions, the combination proceeds without any problem when different applicants store their items in exactly the same form. However, this method is vulnerable in scenarios such as address changes and renaming, which occur because combination applicants update their databases at different times. We therefore propose a privacy-preserving combination key generation scheme that remains robust to updates of the items used to generate the combination key, even under address changes and renaming, based on thresholds obtained through probabilistic record linkage; it can contribute to the development of domestic big data and artificial intelligence business.
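
A rough sketch of the two ideas the abstract contrasts: the conventional salted-hash combination key (which breaks when any item is updated) and a threshold-based linkage decision. The field list, the salt handling, and the use of difflib as a stand-in similarity measure are illustrative assumptions; the proposed scheme operates on protected values and proper record-linkage weights, not on plain text as shown here.

```python
import hashlib
from difflib import SequenceMatcher

def combination_key(items: list[str], salt: str) -> str:
    """Conventional scheme: hash the concatenated key items with a shared salt.
    Exact match only, so an updated address or name breaks the linkage."""
    payload = "|".join(items) + salt
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def fuzzy_link(record_a: dict, record_b: dict, threshold: float = 0.85) -> bool:
    """Stand-in for threshold-based probabilistic record linkage:
    the average per-field string similarity must exceed a threshold."""
    fields = ["name", "phone", "birth", "address"]  # hypothetical field names
    scores = [SequenceMatcher(None, record_a[f], record_b[f]).ratio() for f in fields]
    return sum(scores) / len(scores) >= threshold

# An exact hash comparison fails after an address update; fuzzy linkage still matches.
a = {"name": "Kim Minsu", "phone": "010-1234-5678", "birth": "1990-01-01",
     "address": "12 Teheran-ro, Gangnam-gu, Seoul"}
b = dict(a, address="12 Teheran-ro 4-gil, Gangnam-gu, Seoul")  # updated address

print(combination_key(list(a.values()), "salt") == combination_key(list(b.values()), "salt"))  # False
print(fuzzy_link(a, b))  # True
```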

An effective automated ontology construction based on the agriculture domain

  • Deepa, Rajendran;Vigneshwari, Srinivasan
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.573-587
    • /
    • 2022
  • The agricultural sector is completely different from other sectors since it relies entirely on natural and climatic factors. Climate change has many effects on land and agriculture, including reduced annual rainfall, pests, heat waves, sea-level change, and global ozone/atmospheric CO2 fluctuation, and it also affects the wider environment. Based on these factors, farmers choose their crops to increase productivity in their fields. Many existing agricultural ontologies are either domain-specific or built with minimal vocabulary, and no proper evaluation framework has been implemented for them. A new agricultural ontology focused on subdomains is designed to assist farmers using a Jaccard relative extractor (JRE) and the Naïve Bayes algorithm. The JRE finds the similarity between two sentences and words in agricultural documents, and the relationship between two terms is identified via the Naïve Bayes algorithm. In the proposed method, data preprocessing is carried out with natural language processing techniques, and the dimension-reduced tags are subjected to rule-based formal concept analysis and mapping. The subdomain ontologies of weather, pest, and soil are built separately, and the overall agricultural ontology is built around them. The gold standard for the lexical layer is used to evaluate the proposed technique, and its performance is analyzed by comparing it with different state-of-the-art systems. Precision, recall, F-measure, Matthews correlation coefficient, receiver operating characteristic curve area, and precision-recall curve area are the performance metrics used. The proposed methodology gives a precision score of 94.40%, compared with 83.94% for the decision tree and 86.89% for the K-nearest neighbor algorithm, for agricultural ontology construction.
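
The Jaccard relative extractor is described only at a high level; the core Jaccard similarity it builds on can be sketched as a token-set overlap measure. The whitespace tokenization and lowercasing here are illustrative choices, not the paper's exact preprocessing.

```python
def jaccard_similarity(sentence_a: str, sentence_b: str) -> float:
    """Jaccard similarity between the token sets of two sentences:
    |A ∩ B| / |A ∪ B|, in [0, 1]."""
    a = set(sentence_a.lower().split())
    b = set(sentence_b.lower().split())
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

print(jaccard_similarity("rice crops need heavy monsoon rainfall",
                         "heavy rainfall damages rice crops"))  # ≈ 0.57
```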

Real-time Data Enhancement of 3D Underwater Terrain Map Using Nonlinear Interpolation on Image Sonar (비선형 보간법을 이용한 수중 이미지 소나의 3 차원 해저지형 실시간 생성기법)

  • Ingyu Lee;Jason Kim;Sehwan Rho;Kee-Cheol Shin;Jaejun Lee;Son-Cheol Yu
    • Journal of Sensor Science and Technology
    • /
    • v.32 no.2
    • /
    • pp.110-117
    • /
    • 2023
  • Reconstructing underwater geometry in real time with forward-looking sonar is critical for applications such as localization, mapping, and path planning. Geometrical data must be repeatedly calculated and overwritten in real time because the reliability of the acoustic data is affected by various factors. Moreover, scattering of signal data during the coordinate conversion process may lead to geometrical errors, which lowers the accuracy of the information obtained by the sensor system. In this study, we propose a three-step data processing method with low computational cost for real-time operation. First, the number of data points to be interpolated is determined with respect to the distance between each point and the size of the data grid in a Cartesian coordinate system. Then, the data are processed with a nonlinear interpolation so that they exhibit linear properties in the coordinate system. Finally, the data are transformed based on variations in the position and orientation of the sonar over time. The results of an evaluation of our proposed approach in a simulation show that the nonlinear interpolation operation constructed a continuous underwater geometry dataset with low geometrical error.
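
The three processing steps are described only verbally. A rough sketch of the first two, under stated assumptions, follows: the number of interpolated points is chosen from the Cartesian gap between neighbouring returns and the grid cell size, and the interpolation is done in the sonar's polar frame (linear in range and bearing, hence nonlinear in x-y) so the densified points follow the beam geometry. The beam values and cell size are placeholders, not the paper's parameters.

```python
import math
import numpy as np

def densify_beam(r0, th0, r1, th1, cell=0.05):
    """Insert enough points between two sonar returns (range, bearing in rad)
    that neighbouring Cartesian points are no farther apart than one grid cell."""
    # Cartesian endpoints of the two returns
    p0 = np.array([r0 * math.cos(th0), r0 * math.sin(th0)])
    p1 = np.array([r1 * math.cos(th1), r1 * math.sin(th1)])
    # Step 1: number of points from the Cartesian gap and the grid cell size
    n = max(2, math.ceil(np.linalg.norm(p1 - p0) / cell) + 1)
    # Step 2: interpolate linearly in (range, bearing); this is nonlinear in x-y,
    # so the densified points follow the beam geometry instead of a straight chord
    t = np.linspace(0.0, 1.0, n)
    r = r0 + t * (r1 - r0)
    th = th0 + t * (th1 - th0)
    return np.column_stack((r * np.cos(th), r * np.sin(th)))

pts = densify_beam(4.8, math.radians(10), 5.2, math.radians(14))
print(len(pts), pts[0], pts[-1])
```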

Highly catalysis Zinc MOF-loaded nanogold coupled with aptamer to assay trace carbendazim by SERS

  • Jinling Shi;Jingjing Li;Aihui Liang;Zhiliang Jiang
    • Advances in nano research
    • /
    • v.14 no.4
    • /
    • pp.313-327
    • /
    • 2023
  • A zinc metal-organic framework (MOFZn) loaded with gold nanoparticles (AuNPs), an Au@MOFZn sol, was conveniently prepared by a solvothermal method and characterized by TEM, elemental mapping, FTIR, XRD, and molecular spectroscopy. The results indicated that Au@MOFZn had a very strong catalytic effect on the nanoreaction between sodium oxalate (SO) and HAuCl4 that forms AuNPs. The AuNPs in this new indicator reaction gave a strong resonance Rayleigh scattering (RRS) signal at 370 nm, and the indicator AuNPs generated by the reaction had their most intense surface-enhanced Raman scattering (SERS) peak at 1621 cm⁻¹. The new SERS/RRS indicator reaction was combined with a specific aptamer (Apt) to fabricate a sensitive and selective Au@MOFZn catalytic amplification-aptamer SERS/RRS assay platform for carbendazim (CBZ), with a SERS/RRS linear range of 0.025-0.5 ng/mL and a detection limit of 0.02 ng/mL. The assay platform was also utilized to detect oxytetracycline (OTC) and profenofos (PF).

A Transformation Technique for Constraints-preserving of XML Data (XML 데이터의 제약조건 보존을 위한 변환 기법)

  • Cho, Jung-Gil;Keum, Young-Wook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.5
    • /
    • pp.1-9
    • /
    • 2009
  • Many techniques have been proposed to efficiently store and query XML data. One way of achieving this goal is to use a relational database by transforming XML data into relational format. However, most studies transformed only the content and structure of the XML schema; even when they transformed semantic constraints of the XML schema, they did not preserve all of the semantics. In this paper, we propose a systematic technique for extracting semantic constraints from an XML schema and a storing method in which the extracted result is transformed into a relational schema without any loss of semantic constraints. The transformation algorithm extracts and stores semantic constraints from the XML schema and shows how the extracted information is stored according to the schema notation. It also provides the semantic knowledge that needs to be confirmed during the transformation to ensure a correct relational schema. The technique can reduce storage redundancy and preserve content and structure together with integrity constraints.
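
The transformation algorithm itself is not given in the abstract. As a rough illustration of the idea of carrying constraints, not just structure, over to the relational side, the toy sketch below maps required child elements of a complex type to NOT NULL columns and an xs:key to a PRIMARY KEY. The sample schema and the type-mapping table are hypothetical.

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"
TYPE_MAP = {"xs:string": "VARCHAR(255)", "xs:int": "INTEGER", "xs:date": "DATE"}

XSD = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType><xs:sequence>
      <xs:element name="isbn"  type="xs:string" minOccurs="1"/>
      <xs:element name="title" type="xs:string" minOccurs="1"/>
      <xs:element name="note"  type="xs:string" minOccurs="0"/>
    </xs:sequence></xs:complexType>
    <xs:key name="bookKey"><xs:selector xpath="."/><xs:field xpath="isbn"/></xs:key>
  </xs:element>
</xs:schema>"""

root = ET.fromstring(XSD)
table = root.find(f"{XS}element")            # the top-level element becomes a table
cols = []
for el in table.iter(f"{XS}element"):
    if el is table:
        continue                             # skip the table element itself
    not_null = " NOT NULL" if el.get("minOccurs", "1") != "0" else ""
    cols.append(f"  {el.get('name')} {TYPE_MAP[el.get('type')]}{not_null}")
key_field = table.find(f"{XS}key/{XS}field")
if key_field is not None:                    # xs:key -> PRIMARY KEY constraint
    cols.append(f"  PRIMARY KEY ({key_field.get('xpath')})")
print(f"CREATE TABLE {table.get('name')} (\n" + ",\n".join(cols) + "\n);")
```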

A Study on Traceback by WAS Bypass Access Query Information of DataBase (DBMS WAS 우회접속의 쿼리정보 역추적 연구)

  • Baek, Jong-Il;Park, Dea-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.181-190
    • /
    • 2009
  • DBMS access through high-speed internet web services via a WAS (Web Application Server) is increasing, so DB security technology for 3-tier architectures is needed to control bypass connections to the DBMS and the privileges of unspecified users. When a bypass connection to the DBMS is made through a WAS, the DBMS server stores the WAS's information rather than the IP information of the bypassing user who connects to the system. This study investigates traceback of the query information of bypass connections to the DBMS through a WAS, for security audit logging and forensic data. A MetaDB is built on the communication path to store the sessions of users who log in through the web together with their query information; the actual user is then identified by comparing and mapping the timestamps of this query information against the query information stored in the DBMS server log. To improve the reliability of security, rules are created after pattern analysis of the collected logs, a module is developed, and the information is kept in a data store through collection and compression. The stored information can minimize false positives in traceback through the analysis control and policy-based management module that utilizes an intelligent DBMS security client.
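
The core matching step, relating a web login session recorded in the MetaDB to queries recorded in the DBMS server log via timestamps, can be sketched roughly as below. The log formats, field names, and the 2-second matching window are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical MetaDB session records: who logged in through the web, and when
sessions = [
    {"user": "alice", "login": datetime(2009, 10, 5, 9, 0, 0),
     "logout": datetime(2009, 10, 5, 9, 30, 0), "query": "SELECT * FROM account"},
]
# Hypothetical DBMS server log: queries arrive attributed to the WAS, not the user
dbms_log = [
    {"source": "WAS-01", "ts": datetime(2009, 10, 5, 9, 12, 3),
     "query": "SELECT * FROM account"},
]

def trace_user(log_entry, sessions, window=timedelta(seconds=2)):
    """Map a DBMS log entry back to the real web user by comparing the query
    text and checking that the timestamp falls inside the user's session."""
    for s in sessions:
        if (s["query"] == log_entry["query"]
                and s["login"] - window <= log_entry["ts"] <= s["logout"] + window):
            return s["user"]
    return None  # unmatched entries remain attributed to the WAS only

for entry in dbms_log:
    print(entry["source"], "->", trace_user(entry, sessions))
```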

Landslide risk zoning using support vector machine algorithm

  • Vahed Ghiasi;Nur Irfah Mohd Pauzi;Shahab Karimi;Mahyar Yousefi
    • Geomechanics and Engineering
    • /
    • v.34 no.3
    • /
    • pp.267-284
    • /
    • 2023
  • Landslides are among the most dangerous natural disasters, causing many human and financial losses in most parts of the world, especially in mountainous areas. Because of the climatic conditions and topography, people in the northern and western regions of Iran live with the risk of landslides. One measure that can effectively reduce the possible risks of landslides and support crisis management is identifying potential landslide-prone areas through a multi-criteria modeling approach. This research aims to model landslide-prone areas in the Oshvand watershed using a support vector machine algorithm. For this purpose, evidence maps of seven factors effective in the occurrence of landslides, namely slope, slope direction, elevation, distance from the fault, density of waterways, rainfall, and geology, were prepared. The maps were generated and weighted using continuous fuzzification with logistic functions, yielding weights in the zero-to-one range. The weighted maps were then combined using the support vector machine algorithm. For training and testing, 81 landslide points and 81 non-landslide points were used. The modeling procedure was carried out with four kernels: linear, polynomial, Gaussian, and sigmoid. The efficiency of each model was compared using the area under the receiver operating characteristic curve, the root mean square error, and the correlation coefficient. Finally, the landslide potential model obtained with the Gaussian kernel was selected as the best one for landslide susceptibility in the Oshvand watershed.
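
The kernel comparison described above can be reproduced in outline with scikit-learn, where the Gaussian kernel corresponds to 'rbf'. The random data below merely stands in for the 81 landslide / 81 non-landslide points and the seven fuzzified evidence layers; it is not the study's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((162, 7))                 # 162 points x 7 fuzzified evidence layers in [0, 1]
y = np.r_[np.ones(81), np.zeros(81)]     # 81 landslide, 81 non-landslide points
X[:81] += 0.3                            # make the synthetic classes roughly separable
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:   # 'rbf' = Gaussian kernel
    model = SVC(kernel=kernel, probability=True, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{kernel:>8s}  AUC = {auc:.3f}")
```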

Estimation of flood in Suncheon Dongcheon watershed using dynamic water resources assessment Tool (동적수자원평가모형을 이용한 순천동천 유역의 홍수량 산정)

  • Kim, Deokhwan;Kim, Hyeonjun;Jang, Cheolhee
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.285-285
    • /
    • 2022
  • As climate change becomes a reality, interest in and the importance of water resources assessment are increasing. In this study, flood discharge was quantitatively analyzed for Suncheon, a target site of the resident-participatory problem-planning living lab, with a focus on the Dongcheon stream. Flooding and inundation are among the most urgent issues in Suncheon: over the last three years (2018-2020), heavy rainfall and poor interior drainage caused damage such as flooded houses and roads and landslides. The analysis was carried out to prevent the chronic, recurring inundation problems in the area around the Dongcheon and to reduce the frequency and scale of damage. To this end, flood discharge was estimated quantitatively using the Dynamic Water Resources Assessment Tool (DWAT), jointly developed by the Han River Flood Control Office and the Korea Institute of Civil Engineering and Building Technology with support from the Ministry of Environment. DWAT is a water resources assessment tool freely available worldwide; it includes GIS preprocessing functions for user convenience, so catchment parameters and areal mean rainfall can be estimated automatically using the Thiessen method. The water cycle is divided into pervious and impervious areas; the pervious area consists of one soil layer and one unconfined aquifer, the catchment can be divided into runoff-contributing and recharge areas, and groundwater flow can be estimated through the aquifer. Meteorological data from the Korea Meteorological Administration were analyzed, three observed rainfall events were selected, and calibration and validation were performed; the resulting Nash-Sutcliffe efficiency and coefficient of determination were 0.78-0.94 and 0.82-0.94, respectively, indicating good simulation performance. Design floods were then estimated from frequency-based design rainfall distributed with the Huff quartile method. RCP (Representative Concentration Pathways) climate change scenarios were used to estimate future changes in flood discharge, and the scenario data were bias-corrected using quantile mapping, which maps the cumulative probability distribution of the simulated values onto that of the observed values. The flood discharges estimated in this study are expected to serve as basic data for structural and non-structural measures to prevent inundation damage.
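
Quantile mapping as described (mapping the simulated CDF onto the observed CDF) can be sketched with an empirical-CDF implementation in NumPy. The gamma-distributed arrays below are placeholders for the observed series and the RCP-scenario simulations, not the study's data.

```python
import numpy as np

def quantile_mapping(obs, sim_hist, sim_fut):
    """Empirical quantile mapping: for each future simulated value, find its
    quantile in the historical simulation and replace it with the observed
    value at the same quantile."""
    obs, sim_hist = np.sort(obs), np.sort(sim_hist)
    # non-exceedance probability of each future value in the simulated CDF
    q = np.searchsorted(sim_hist, sim_fut, side="right") / len(sim_hist)
    q = np.clip(q, 0.0, 1.0)
    # map that quantile onto the observed distribution
    return np.quantile(obs, q)

rng = np.random.default_rng(1)
obs      = rng.gamma(2.0, 20.0, 1000)    # placeholder observed daily rainfall
sim_hist = rng.gamma(2.0, 15.0, 1000)    # biased historical simulation
sim_fut  = rng.gamma(2.2, 15.0, 1000)    # RCP-scenario simulation
corrected = quantile_mapping(obs, sim_hist, sim_fut)
print(round(sim_fut.mean(), 1), "->", round(corrected.mean(), 1))
```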

Landslide Risk Assessment of Cropland and Man-made Infrastructures using Bayesian Predictive Model (베이지안 예측모델을 활용한 농업 및 인공 인프라의 산사태 재해 위험 평가)

  • Al, Mamun;Jang, Dong-Ho
    • Journal of The Geomorphological Association of Korea
    • /
    • v.27 no.3
    • /
    • pp.87-103
    • /
    • 2020
  • The purpose of this study is to evaluate the risk to cropland and man-made infrastructure in a landslide-prone area using a GIS-based method. To achieve this goal, a landslide inventory map was prepared based on aerial photograph analysis as well as field observations, and a total of 550 landslides were counted in the entire study area. For model analysis and validation, the extracted landslides were randomly selected and divided into two groups. The landslide causative factors slope, aspect, curvature, topographic wetness index, elevation, forest type, forest crown density, geology, land use, soil drainage, and soil texture were used in the analysis. To identify the correlation between landslides and causative factors, pixels were divided into several classes and the frequency ratio was extracted. A landslide susceptibility map was constructed using a Bayesian predictive model (BPM) based on all events. In the cross-validation process, the landslide susceptibility map and the observation data were plotted as a receiver operating characteristic (ROC) curve, the area under the curve (AUC) was calculated, and a success-rate curve was extracted. The results showed that the BPM produced 85.8% accuracy, which we considered acceptable for landslide susceptibility analysis in the study area. For the risk assessment, a local monetary value and a vulnerability scale were assigned to each social thematic data layer and converted into US dollars, considering the time of landslide occurrence. The total number of pixels in the study area and the number of pixels predicted to be affected by landslides were used to build a probability table, and, matching the affected count, 5,000 landslide pixels were assumed for the final calculation. Based on the result, the estimated total risk was US$35.4 million for cropland and US$39.3 million for man-made infrastructure.
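
The frequency-ratio step mentioned in the abstract, which measures how strongly each class of a causative factor is associated with observed landslides, can be expressed compactly. The class counts below are illustrative placeholders, not the study's data.

```python
def frequency_ratio(landslide_pixels_in_class, class_pixels,
                    total_landslide_pixels, total_pixels):
    """FR = (landslide pixels in class / all landslide pixels)
          / (pixels in class / all pixels).
    FR > 1 means the class is more landslide-prone than the study-area average."""
    landslide_share = landslide_pixels_in_class / total_landslide_pixels
    area_share = class_pixels / total_pixels
    return landslide_share / area_share

# Illustrative numbers for one slope class (not from the study)
print(round(frequency_ratio(150, 40_000, 550, 1_000_000), 2))  # 6.82
```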