• Title/Summary/Keyword: learning spatio-temporal data (학습 시.공간 데이터)


A Study on Clustering Representative Color of Natural Environment of Korean Peninsula for Optimal Camouflage Pattern Design (최적 위장무늬 디자인을 위한 한반도 자연환경 대표 색상 군집화 연구)

  • Chun, Sungkuk;Kim, Hoemin;Yoon, Seon Kyu;Yun, Jeongrok;Kim, Un Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.315-316 / 2019
  • Camouflage patterns used on combat uniforms, military tents, and similar equipment imitate the colors and patterns of the surrounding environment during military operations to maximize the concealment of individual soldiers and weapon systems, thereby minimizing friendly casualties and damage to facilities. As military operating environments and missions have recently become more complex and diverse, there is a growing need for research that acquires and quantitatively analyzes operational-environment data in order to extract camouflage patterns and colors optimized for the battlefield. This paper describes a method for clustering representative colors of the natural environment of the Korean Peninsula based on a self-organizing map (SOM). To this end, natural-environment images are collected by time of day and season at locations chosen with the peninsula's range of latitudes in mind, and a two-dimensional SOM is used to cluster the large number of pixels in the collected images. When the SOM is trained on each pixel's color value, clustering is performed with the CIEDE2000 color-difference formula to avoid the perceptual distortion of color differences that arises in RGB space. The experimental results show representative-color clusters obtained from summer and autumn images collected online, as well as clusters computed over more than 600 million pixels from the seasonal natural-environment photographs collected so far.
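
A minimal sketch of the clustering step described above: a small 2D SOM trained on Lab-converted pixel colors, with the best-matching unit chosen by the CIEDE2000 color difference (skimage's deltaE_ciede2000). The grid size, learning schedule, and the assumption that pixels arrive as an (N, 3) float RGB array in [0, 1] are illustrative choices, not the paper's.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def train_som_ciede2000(pixels_rgb, grid=(8, 8), epochs=5, lr0=0.5, sigma0=2.0, seed=0):
    """Cluster RGB pixels with a 2D SOM whose BMU search uses CIEDE2000 in Lab space.
    pixels_rgb: (N, 3) float array in [0, 1]. Written for clarity, not speed."""
    rng = np.random.default_rng(seed)
    pixels_lab = rgb2lab(pixels_rgb.reshape(-1, 1, 3)).reshape(-1, 3)
    h, w = grid
    # Initialise the codebook with randomly chosen input pixels.
    weights = pixels_lab[rng.choice(len(pixels_lab), h * w, replace=False)].copy()
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)

    n_steps = epochs * len(pixels_lab)
    step = 0
    for _ in range(epochs):
        for x in pixels_lab[rng.permutation(len(pixels_lab))]:
            # Best-matching unit under the CIEDE2000 colour difference.
            d = deltaE_ciede2000(np.tile(x, (h * w, 1)), weights)
            bmu = int(np.argmin(d))
            # Neighbourhood update with decaying learning rate and radius.
            frac = step / n_steps
            lr = lr0 * np.exp(-frac)
            sigma = sigma0 * np.exp(-frac)
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            influence = np.exp(-dist2 / (2 * sigma ** 2))
            weights += lr * influence[:, None] * (x - weights)
            step += 1
    return weights  # (grid cells, 3): representative colours in Lab space
```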


Development of Tree Detection Methods for Estimating LULUCF Settlement Greenhouse Gas Inventories Using Vegetation Indices (식생지수를 활용한 LULUCF 정주지 온실가스 인벤토리 산정을 위한 수목탐지 방법 개발)

  • Joon-Woo Lee;Yu-Han Han;Jeong-Taek Lee;Jin-Hyuk Park;Geun-Han Kim
    • Korean Journal of Remote Sensing / v.39 no.6_3 / pp.1721-1730 / 2023
  • As awareness of global warming grows around the world, the role of carbon sinks in settlements is increasingly emphasized for achieving carbon neutrality in urban areas. Managing carbon sinks in settlements requires identifying their current status, which in turn demands considerable manpower, time, and budget. In this study, therefore, a map predicting the locations of trees was created for Seoul using existing tree-location information and Sentinel-2 satellite imagery. To this end, a tree presence/absence dataset was constructed, and structured data were generated from 16 types of vegetation-index information derived from the satellite images. An Extreme Gradient Boosting (XGBoost) model was trained on these data to produce a tree prediction map. The relationship between the independent variables and the dependent variable in the trained model was then examined using the Shapley values of Shapley Additive exPlanations (SHAP). A comparative analysis was performed between maps produced for selected parts of Seoul and sub-categorized land-cover maps. The tree prediction model produced in this study was confirmed to identify even hard-to-detect street trees along major roads as trees.
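
A hedged sketch of the workflow in the abstract: tabular vegetation-index features, an XGBoost classifier for tree presence/absence, and SHAP values to inspect which indices drive the predictions. The split ratio, hyperparameters, and feature names are placeholders; the study's 16 actual indices and tuning are not reproduced here.

```python
import numpy as np
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_tree_detector(features: pd.DataFrame, tree_label: np.ndarray):
    """features: one row per location, columns = vegetation indices (e.g. NDVI, EVI, ...).
    tree_label: 1 if a tree is present at that location, else 0."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, tree_label, test_size=0.3, stratify=tree_label, random_state=42)

    model = xgb.XGBClassifier(
        n_estimators=300, max_depth=6, learning_rate=0.1,
        eval_metric="logloss", random_state=42)
    model.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # SHAP values show how much each vegetation index pushes the prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    mean_abs = np.abs(shap_values).mean(axis=0)
    for name, value in sorted(zip(features.columns, mean_abs), key=lambda t: -t[1]):
        print(f"{name}: mean |SHAP| = {value:.4f}")
    return model
```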

A Study on the Prediction of Disc Cutter Wear Using TBM Data and Machine Learning Algorithm (TBM 데이터와 머신러닝 기법을 이용한 디스크 커터마모 예측에 관한 연구)

  • Tae-Ho, Kang;Soon-Wook, Choi;Chulho, Lee;Soo-Ho, Chang
    • Tunnel and Underground Space / v.32 no.6 / pp.502-517 / 2022
  • As the use of TBMs increases, research has recently grown on analyzing TBM data with machine learning techniques to predict the disc cutter exchange cycle and the TBM advance rate. In this study, disc cutter wear at a slurry shield TBM site was predicted by regression, applying machine learning to the machine data and the geotechnical data obtained during excavation. The data were split 7:3 for training and testing the disc cutter wear prediction, and the hyperparameters were optimized by cross-validated grid search over a parameter grid. Gradient boosting, an ensemble model, showed good performance with a coefficient of determination of 0.852 and a root-mean-square error of 3.111, and was especially strong in fit time as well as learning performance. Based on these results, a prediction model that uses both machine data and geotechnical information is judged to be well suited to the task. Further research is needed to increase the diversity of ground conditions and the amount of disc cutter data.
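
The regression setup described above (7:3 split, cross-validated grid search, ensemble gradient boosting, evaluation by R² and RMSE) maps closely onto standard scikit-learn components. A sketch under the assumption that the machine and geotechnical data are already assembled into a feature matrix X and a wear target y; the parameter grid is illustrative, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import r2_score, mean_squared_error

def fit_cutter_wear_model(X, y):
    """X: TBM machine data plus geotechnical features; y: measured disc cutter wear."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)  # 7:3 split as in the abstract

    param_grid = {                      # illustrative grid, not the study's
        "n_estimators": [100, 300, 500],
        "learning_rate": [0.05, 0.1],
        "max_depth": [3, 5],
    }
    search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                          param_grid, cv=5, scoring="neg_root_mean_squared_error")
    search.fit(X_train, y_train)

    pred = search.best_estimator_.predict(X_test)
    rmse = float(np.sqrt(mean_squared_error(y_test, pred)))
    print("best params:", search.best_params_)
    print(f"R^2 = {r2_score(y_test, pred):.3f}, RMSE = {rmse:.3f}")
    return search.best_estimator_
```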

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method for disambiguating target word selection in English-Korean machine translation that uses only a raw corpus, without additional human effort. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent the complex semantic structure of a given context, such as a text passage. We construct linguistic semantic knowledge with them and use that knowledge for target word selection in English-Korean machine translation, exploiting grammatical relationships stored in a dictionary. We use a k-nearest neighbor learning algorithm to resolve the data sparseness problem in target word selection, estimating the distance between instances based on these models. In the experiments, we use TREC data from AP news to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, we show the relationship between the accuracy and two important factors, the dimensionality of the latent space and the k value in k-NN learning, using correlation analysis.
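
A rough illustration of the LSA half of the method: build a term-passage matrix from raw text, reduce it with truncated SVD, and pick the target word with k-nearest neighbours in the latent space. The PLSA variant, the dictionary-based grammatical relations, and the corpus handling are omitted; function and variable names are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_target_word_selector(context_texts, target_word_labels, n_dims=300, k=5):
    """context_texts: raw (untagged) example contexts for an ambiguous English word.
    target_word_labels: the Korean target word each context supports.
    n_dims must be smaller than the vocabulary size of the corpus."""
    model = make_pipeline(
        TfidfVectorizer(),                      # term-passage matrix from the raw corpus
        TruncatedSVD(n_components=n_dims),      # latent semantic space (LSA)
        KNeighborsClassifier(n_neighbors=k, metric="cosine"),
    )
    model.fit(context_texts, target_word_labels)
    return model

# usage: selector = build_target_word_selector(contexts, labels)
#        selector.predict(["the bank raised its interest rate"])
```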

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility / v.13 no.1 / pp.47-60 / 2010
  • Most classification research has used learning-based models such as kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), or statistics-based methods such as the Bayesian classifier and neural network algorithms (NNA). However, these approaches face space and time limitations when classifying the vast number of web pages on today's internet. Moreover, most classification studies use a unigram feature representation, which poorly captures the real meaning of words. Korean web page classification faces additional problems because Korean words often have multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) has been proposed as a way to classify well in this environment of large data sets and word polysemy. LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimensionality. This produces a new low-dimensional semantic space for representing vectors, which makes classification efficient and allows the latent meaning of words and documents (or web pages) to be analyzed. Although LSA works well for classification, it has a drawback: as SVD reduces the dimensionality of the matrix and creates the new semantic space, it considers which dimensions represent the vectors well rather than which dimensions discriminate between them, which is why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions to both discriminate and represent vectors well, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain a further improvement in classification by creating and selecting features, removing stopwords, and weighting specific values statistically.
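
The core proposal, keeping the SVD dimensions that discriminate classes rather than simply the top-variance ones, can be approximated by scoring each latent dimension against the class labels and selecting the best. The sketch below uses an ANOVA F-score for that selection as a stand-in for the authors' criterion, which the abstract does not specify; dimension counts are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def supervised_lsa_classifier(n_svd_dims=300, n_selected_dims=50, k=5):
    """LSA followed by label-aware selection of latent dimensions, then kNN classification."""
    return make_pipeline(
        TfidfVectorizer(stop_words="english"),      # remove stopwords before weighting
        TruncatedSVD(n_components=n_svd_dims),      # ordinary LSA projection
        SelectKBest(f_classif, k=n_selected_dims),  # keep dimensions that separate classes
        KNeighborsClassifier(n_neighbors=k, metric="cosine"),
    )

# usage: clf = supervised_lsa_classifier(); clf.fit(train_pages, train_categories)
```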


Measurement of the Visibility of the Smoke Images using PCA (PCA를 이용한 연기 영상의 가시도 측정)

  • Yu, Young-Jung;Moon, Sang-ho;Park, Seong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.11 / pp.1474-1480 / 2018
  • When a fire breaks out in a high-rise building, its complex structure makes it difficult to determine whether each escape route is safe. It is therefore necessary to assess the safety of escape routes and provide them to residents quickly. We propose a method that measures, by analyzing images, the visibility of an escape route degraded by the smoke generated in a fire. Visibility could be measured easily if the density of the smoke detected in the input image were known, but this approach is hard to use because no suitable method for measuring smoke density exists. In this paper, we extract background images from the input images and use them as training data for principal component analysis. Background images and smoke images extracted from the input are then mapped into the new feature space by the learned principal component analysis, and the change in that space is calculated to measure the visibility loss due to smoke.
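
One plausible reading of the described approach, sketched below: fit PCA on smoke-free background frames, project new frames into the learned space, and treat the departure from the background subspace (here, reconstruction error) as a proxy for the visibility change caused by smoke. The exact change measure and its mapping to visibility are assumptions, since the abstract does not give them.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_background_model(background_frames, n_components=20):
    """background_frames: array (n_frames, H, W) of smoke-free grayscale images."""
    X = background_frames.reshape(len(background_frames), -1).astype(float)
    return PCA(n_components=n_components).fit(X)

def visibility_change(pca, frame):
    """Project a new frame onto the background subspace and measure how far it departs."""
    x = frame.reshape(1, -1).astype(float)
    reconstructed = pca.inverse_transform(pca.transform(x))
    # Larger reconstruction error -> the scene deviates more from the clear background,
    # which is interpreted here as lower visibility due to smoke.
    return float(np.mean((x - reconstructed) ** 2))
```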

Speaker Recognition Based on Robust PCA (강인한 주성분 분석법을 갖는 화자인식)

  • Lee Youn Jeong;Lee Ki Yong
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.225-228 / 2002
  • In this paper, we propose a speaker recognition method that uses robust principal component analysis (robust PCA). Robust PCA is used to build a robust speaker model while reducing the feature vectors to k dimensions when outliers are present among them. Because conventional PCA allows pure speaker information to be corrupted by outliers such as noise, robust PCA is used to reduce the influence of those outliers. When training the k-dimensional diagonal GMM for each speaker, the number of mixtures is adapted to minimize the required data storage. Experiments on isolated digit utterances from 200 speakers show that the proposed method achieves a verification rate about 1.5% higher than the conventional diagonal GMM method.
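
A simplified sketch of the pipeline: reduce the feature vectors to k dimensions with a PCA made less sensitive to outliers, then model each speaker with a diagonal-covariance GMM and score utterances by log-likelihood. The outlier-trimming loop here is only a crude stand-in for the paper's robust PCA, and the acoustic feature extraction and mixture-count adaptation are not shown.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def robust_pca_fit(X, k, trim_frac=0.05, n_iter=3):
    """Crude outlier-resistant PCA: refit after discarding the points with the largest
    reconstruction error so that outliers influence the subspace less."""
    keep = np.ones(len(X), dtype=bool)
    pca = PCA(n_components=k)
    for _ in range(n_iter):
        pca.fit(X[keep])
        err = np.sum((X - pca.inverse_transform(pca.transform(X))) ** 2, axis=1)
        keep = err <= np.quantile(err, 1.0 - trim_frac)
    return pca

def train_speaker_model(features, k=12, n_mixtures=8):
    """features: (n_frames, dim) acoustic feature vectors for one speaker."""
    pca = robust_pca_fit(features, k)
    gmm = GaussianMixture(n_components=n_mixtures, covariance_type="diag", random_state=0)
    gmm.fit(pca.transform(features))
    return pca, gmm

def score_speaker(pca, gmm, features):
    """Average log-likelihood of an utterance under one speaker's model."""
    return gmm.score(pca.transform(features))
```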


Development of path travel time forecasting model using wavelet transformation and RBF neural network (웨이브렛 변환과 RBF 신경망을 이용한 경로통행시간 예측모형 개발 -시내버스 노선운행시간을 중심으로-)

  • 신승원;노정현
    • Journal of Korean Society of Transportation / v.16 no.4 / pp.153-166 / 1998
  • In this study, a prediction model combining the wavelet transform, a form of time-frequency analysis, with an RBF neural network was developed to predict link travel times on an urban street network. By analyzing travel time series with the wavelet transform, the features of the various patterns inherent in travel times were extracted, revealing the regularities that periodically recurring factors, such as morning and afternoon peaks and the cycle times of signalized intersections, impose on the travel time series. The extracted pattern information was inspected visually for regularity using time-delay coordinates based on chaos theory and then used to build the prediction model. An RBF neural network was adopted to minimize the time required to build the model as the prediction scope expands in space and time, and predicting running times between stops on a city bus route addressed problems raised in earlier studies, such as oversimplification of the real world and accuracy in multi-step prediction. The prediction experiments showed that inserting the wavelet transform as a data preprocessing step and using it to predict link travel time patterns yields far more accurate predictions than existing models, and that the RBF neural network outperforms a back-propagation neural network despite its short training time.
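
A minimal sketch of the two-stage idea: expose the periodic structure of the travel-time series with a discrete wavelet transform (pywt), then predict the next value from lagged values with an RBF network. Since scikit-learn has no built-in RBF network, it is approximated here with k-means centres, Gaussian basis features, and a linear readout; the wavelet family, decomposition level, and lag window are illustrative.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def wavelet_denoise(series, wavelet="db4", level=3):
    """Suppress the finest-scale detail coefficients to keep the periodic structure."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    coeffs[-1] = np.zeros_like(coeffs[-1])
    return pywt.waverec(coeffs, wavelet)[: len(series)]

class SimpleRBFNet:
    """RBF network approximation: k-means centres, Gaussian activations, linear output."""
    def __init__(self, n_centers=20, gamma=1.0, alpha=1e-3):
        self.n_centers, self.gamma, self.alpha = n_centers, gamma, alpha
    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)
    def fit(self, X, y):
        self.centers_ = KMeans(self.n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
        self.readout_ = Ridge(alpha=self.alpha).fit(self._phi(X), y)
        return self
    def predict(self, X):
        return self.readout_.predict(self._phi(X))

def make_lagged(series, n_lags=6):
    """Use the previous n_lags travel times to predict the next one."""
    X = np.stack([series[i: len(series) - n_lags + i] for i in range(n_lags)], axis=1)
    return X, series[n_lags:]

# usage: smooth = wavelet_denoise(travel_times); X, y = make_lagged(smooth)
#        model = SimpleRBFNet().fit(X, y); next_tt = model.predict(X[-1:])
```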


3D Object Recognition Using Appearance Model of Feature Point (특징점 Appearance Model을 이용한 3차원 물체 인식)

  • Joo, Seong-Moon;Park, Jae-Wan;Lee, Chil-Woo
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.1536-1539 / 2013
  • Because a 3D object produces different images depending on the camera viewing direction, recognizing a 3D object from 2D images alone is not easy. In particular, when a strong perspective transformation occurs during image formation, the SIFT (Scale-Invariant Feature Transform) algorithm, which relies on 2D local features, is difficult to use for matching. This paper proposes an object recognition method that improves on the SIFT algorithm by using, as training data, multiple images obtained while rotating a 3D object about a single fixed axis. The method merges the feature points of the multiple images into a single feature space and checks the geometric constraints among those feature points to recognize the 3D object. In the experiments, the lighting conditions and camera position were kept constant to first verify the usefulness of the algorithm. With this method, the various appearances of 3D objects that were difficult to recognize with SIFT alone become recognizable.
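
A sketch of the described pipeline in OpenCV: pool SIFT descriptors from several views taken while the object rotates about one axis, match a query image against the pooled descriptors with the ratio test, and check geometric consistency with RANSAC. Treating the pooled keypoint coordinates as a single frame for the homography check is a simplification of the paper's geometric constraints; the thresholds are illustrative.

```python
import cv2
import numpy as np

def build_appearance_model(view_images):
    """Pool SIFT keypoints/descriptors from several rotated views into one model."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = [], []
    for img in view_images:
        kps, des = sift.detectAndCompute(img, None)
        if des is not None:
            keypoints.extend(kps)
            descriptors.append(des)
    return keypoints, np.vstack(descriptors)

def recognize(query_img, model_keypoints, model_descriptors, min_inliers=15):
    sift = cv2.SIFT_create()
    q_kps, q_des = sift.detectAndCompute(query_img, None)
    if q_des is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(q_des, model_descriptors, k=2)
    # Lowe's ratio test discards ambiguous matches.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < min_inliers:
        return False
    src = np.float32([q_kps[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([model_keypoints[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Geometric-consistency check between query and pooled model keypoints (simplified).
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return mask is not None and int(mask.sum()) >= min_inliers
```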

MLP-based 3D Geotechnical Layer Mapping Using Borehole Database in Seoul, South Korea (MLP 기반의 서울시 3차원 지반공간모델링 연구)

  • Ji, Yoonsoo;Kim, Han-Saem;Lee, Moon-Gyo;Cho, Hyung-Ik;Sun, Chang-Guk
    • Journal of the Korean Geotechnical Society / v.37 no.5 / pp.47-63 / 2021
  • Recently, demand for three-dimensional (3D) underground maps from the perspective of digital twins, and demand for their linked use, have been increasing. However, the sheer volume of national geotechnical survey data and the uncertainty involved in applying geostatistical techniques make it challenging to model regional subsurface geotechnical characteristics. In this study, an optimal learning model based on a multi-layer perceptron (MLP) was constructed for 3D subsurface lithological and geotechnical classification in Seoul, South Korea. First, the geotechnical layer and 3D spatial coordinates of each borehole dataset in the Seoul area were compiled into a geotechnical database in a standardized format, and pre-processing for machine learning, such as missing-value correction and normalization, was performed. An optimal model was designed through hyperparameter optimization of the MLP and model performance evaluation, including precision and accuracy tests. A 3D grid network that locally assigns a geotechnical layer classification was then constructed by applying the MLP-based best-fitting model to each unit cell. The constructed 3D geotechnical layer map was evaluated by comparing it with the results of a geostatistical interpolation technique and with the topsoil properties of the geological map.
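
A condensed sketch of the described workflow: normalize the borehole coordinates, grid-search the MLP hyperparameters, and then assign a geotechnical layer class to every node of a regular 3D grid. The feature set (3D coordinates only), the parameter grid, and the grid resolution are placeholders for the study's actual design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

def fit_layer_model(borehole_xyz, layer_labels):
    """borehole_xyz: (n, 3) coordinates of layer observations; layer_labels: class per point."""
    pipe = make_pipeline(StandardScaler(),
                         MLPClassifier(max_iter=2000, random_state=0))
    grid = {                                    # illustrative hyperparameter grid
        "mlpclassifier__hidden_layer_sizes": [(64,), (128, 64)],
        "mlpclassifier__alpha": [1e-4, 1e-3],
    }
    search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy")
    search.fit(borehole_xyz, layer_labels)
    return search.best_estimator_

def predict_layer_grid(model, x_range, y_range, z_range):
    """Assign a layer class to every node of a regular 3D grid."""
    gx, gy, gz = np.meshgrid(x_range, y_range, z_range, indexing="ij")
    cells = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
    return model.predict(cells).reshape(gx.shape)
```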