• Title/Summary/Keyword: 변환기반 학습 (transformation-based learning)

Search Results: 418

Korean Morphological Analysis Method Based on BERT-Fused Transformer Model (BERT-Fused Transformer 모델에 기반한 한국어 형태소 분석 기법)

  • Lee, Changjae; Ra, Dongyul
    • KIPS Transactions on Software and Data Engineering / v.11 no.4 / pp.169-178 / 2022
  • Morphemes are the most primitive units in a language; they lose their original meaning when segmented into smaller parts. In Korean, a sentence is a sequence of eojeols (words) separated by spaces, and each eojeol comprises one or more morphemes. Korean morphological analysis (KMA) is the task of dividing the eojeols in a given Korean sentence into morpheme units and assigning appropriate part-of-speech (POS) tags to the resulting morphemes. KMA is one of the most important tasks in Korean natural language processing (NLP), and improving its performance is closely related to increasing the performance of Korean NLP tasks. Recent research on KMA has begun to adopt the approach of machine translation (MT) models. MT converts a sequence (sentence) of units of one domain into a sequence (sentence) of units of another domain, and neural machine translation (NMT) refers to MT approaches that exploit neural network models. From the MT perspective, KMA transforms an input sequence of units belonging to the eojeol domain into a sequence of units in the morpheme domain. In this paper, we propose a deep learning model for KMA. The backbone of our model is the BERT-fused model, which was shown to achieve high performance on NMT. The BERT-fused model combines Transformer, a representative model employed in NMT, with BERT, a language representation model that has enabled significant advances in NLP. The experimental results show that our model achieves an F1-score of 98.24.
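The following is a minimal PyTorch sketch of the eojeol-to-morpheme translation framing, with a small attention block that fuses precomputed BERT features into the encoder memory. The vocabulary sizes, dimensions, and the random tensor standing in for Korean BERT output are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (assumptions: toy vocabulary sizes; a random tensor stands in
# for real Korean BERT subword features; fusion is simplified to one attention step).
import torch
import torch.nn as nn

class BertFusedKMA(nn.Module):
    """Eojeol-to-morpheme seq2seq model whose encoder memory also attends to BERT features."""
    def __init__(self, src_vocab, tgt_vocab, d_model=256, bert_dim=768, nhead=4):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.transformer = nn.Transformer(d_model=d_model, nhead=nhead,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)
        self.bert_proj = nn.Linear(bert_dim, d_model)      # project BERT features to model size
        self.bert_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_ids, tgt_ids, bert_feats):
        memory = self.transformer.encoder(self.src_emb(src_ids))
        # Fuse: let the encoder memory additionally attend to (projected) BERT features.
        bert_mem = self.bert_proj(bert_feats)
        fused, _ = self.bert_attn(memory, bert_mem, bert_mem)
        memory = memory + fused
        dec = self.transformer.decoder(self.tgt_emb(tgt_ids), memory)
        return self.out(dec)

# Toy usage: one sentence, 5 eojeol-domain tokens -> 8 morpheme/POS-domain tokens.
model = BertFusedKMA(src_vocab=100, tgt_vocab=120)
src = torch.randint(0, 100, (1, 5))
tgt = torch.randint(0, 120, (1, 8))
bert_feats = torch.randn(1, 5, 768)      # stand-in for BERT subword representations
logits = model(src, tgt, bert_feats)     # (1, 8, 120) scores over morpheme-domain tokens
```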

Comparison of CNN and GAN-based Deep Learning Models for Ground Roll Suppression (그라운드-롤 제거를 위한 CNN과 GAN 기반 딥러닝 모델 비교 분석)

  • Sangin Cho; Sukjoon Pyun
    • Geophysics and Geophysical Exploration / v.26 no.2 / pp.37-51 / 2023
  • The ground roll is the most common coherent noise in land seismic data and has an amplitude much larger than the reflection events we usually want to obtain. Therefore, ground roll suppression is a crucial step in seismic data processing. Several techniques, such as f-k filtering and the curvelet transform, have been developed to suppress the ground roll. However, the existing methods still require improvements in suppression performance and efficiency. Various studies on the suppression of ground roll in seismic data have recently been conducted using deep learning methods developed for image processing. In this paper, we introduce three models (DnCNN (De-noiseCNN), pix2pix, and CycleGAN), based on the convolutional neural network (CNN) or the conditional generative adversarial network (cGAN), for ground roll suppression and explain them in detail through numerical examples. Common shot gathers from the same field were divided into training and test datasets to compare the algorithms. We trained the models using the training data and evaluated their performance using the test data. When training these models with field data, data with the ground roll removed are required; therefore, the ground roll is suppressed by f-k filtering and the result is used as the ground-truth data. To evaluate the performance of the deep learning models and compare the training results, we utilized quantitative indicators such as the correlation coefficient and the structural similarity index measure (SSIM) based on the similarity to the ground-truth data. The DnCNN model exhibited the best performance, and we confirmed that the other models could also be applied to suppress the ground roll.
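As an illustration of the residual-learning idea behind DnCNN, here is a minimal PyTorch sketch; the depth, channel counts, and random patch are assumptions for demonstration, not the architecture or data used in the paper.

```python
# Minimal DnCNN-style sketch (assumption: a generic residual denoiser, not the
# authors' exact network or training setup for seismic shot gathers).
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Predicts the noise (ground roll) residual; the clean gather is input minus residual."""
    def __init__(self, channels=1, depth=8, features=64):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)   # residual learning: subtract estimated noise

# Toy usage: one single-channel shot-gather patch (time samples x traces).
model = DnCNN()
noisy_patch = torch.randn(1, 1, 128, 128)
denoised = model(noisy_patch)             # same shape, ground roll suppressed
```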

A Study on the Thermal Prediction Model of the Heat Storage Tank for the Optimal Use of Renewable Energy (신재생 에너지 최적 활용을 위한 축열조 온도 예측 모델 연구)

  • HanByeol Oh; KyeongMin Jang; JeeYoung Oh; MyeongBae Lee; JangWoo Park; YongYun Cho; ChangSun Shin
    • Smart Media Journal / v.12 no.10 / pp.63-70 / 2023
  • Recently, energy consumption for heating, which accounts for 35% of smart farm energy costs, has increased, making energy efficiency necessary, and the importance of new and renewable energy is growing amid concerns about rising electricity bills. Renewable energy includes hydropower, wind, and solar power; solar power generation, which converts sunlight into electrical energy, has little impact on the environment and is simple to maintain. In this study, based on greenhouse heat storage tank and heat pump data, the factors that affect the heat storage tank are selected and a model for predicting the supply temperature of the heat storage tank is developed. The temperature is predicted using Long Short-Term Memory (LSTM), which is effective for time-series analysis and prediction, and the XGBoost model, which outperforms other ensemble learning techniques. By predicting the temperature of the heat pump's heat storage tank, energy consumption and system operation can be optimized. In addition, we intend to link the model to a smart farm integrated energy operation system, for example to reduce heating and cooling costs and to improve the energy independence of farmers through the use of solar power. By managing the supply of waste heat energy through the platform and deriving the maximum heating load and energy values required for crop growth by season and time, an optimal energy management plan is derived.
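A minimal sketch of the XGBoost side of such a forecaster, assuming a synthetic supply-temperature series and simple hand-made lag features in place of the greenhouse heat pump data:

```python
# Minimal sketch (assumptions: a synthetic temperature series and lag features
# stand in for the greenhouse heat pump / heat storage tank measurements).
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
temp = 40 + 5 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.3, 500)  # supply temp (deg C)

def make_lagged(series, n_lags=6):
    """Use the previous n_lags readings to predict the next one."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lagged(temp)
split = int(0.8 * len(X))                         # chronological train/test split
model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE (deg C):", np.abs(pred - y[split:]).mean())
```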

Development of a deep learning-based cabbage core region detection and depth classification model (딥러닝 기반 배추 심 중심 영역 및 깊이 분류 모델 개발)

  • Ki Hyun Kwon; Jong Hyeok Roh; Ah-Na Kim; Tae Hyong Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.6 / pp.392-399 / 2023
  • This paper proposes a deep learning model that determines the region and depth of cabbage cores for robotic automation of the core removal step in the kimchi manufacturing process. Rather than predicting the measured depth of the cabbage directly, the model simultaneously detects the core area and classifies the depth after converting it into discrete classes. For model training and verification, RGB images of 522 harvested cabbages were obtained. Core region and depth labeling and data augmentation were applied to the acquired images. mAP, IoU, accuracy, sensitivity, specificity, and F1-score were selected to evaluate the performance of the proposed YOLO-v4-based cabbage core area detection and classification model. As a result, the mAP and IoU values were 0.97 and 0.91, respectively, and the accuracy and F1-score for depth classification were 96.2% and 95.5%, respectively. These results confirm that the depth information of cabbage can be classified and that the model can be used in the future development of a robot-automation system for the cabbage core removal process.
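For reference, the detection and classification metrics named above can be computed as in this small sketch; the boxes and depth labels are hypothetical examples, not the study's data.

```python
# Minimal sketch of the evaluation metrics (IoU for detection, accuracy/F1 for
# depth classes); the boxes and labels below are made-up examples.
import numpy as np

def iou(box_a, box_b):
    """Boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def accuracy_f1(y_true, y_pred, positive):
    """Overall accuracy plus F1 for one class treated as positive."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = (y_true == y_pred).mean()
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1

print(iou((10, 10, 50, 50), (20, 20, 60, 60)))                       # overlap of two boxes
print(accuracy_f1(["deep", "shallow", "deep"], ["deep", "deep", "deep"], "deep"))
```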

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee; Kim, Seung-Sep
    • Economic and Environmental Geology / v.55 no.5 / pp.551-561 / 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and such methods can be applied to the investigation of buried cultural properties and to determining their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics from high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the image feature extraction analyses is to identify the circular features from building remains and the linear features from ancient roads and fences. Feature extraction is implemented by applying the Canny edge detection and Hough transform algorithms. We applied the Hough transform to the edge image resulting from the Canny algorithm in order to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied the connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled. However, we often find that multiple labels are assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a train-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background field. With the Random Forest classifier, we find that the polygons on the LSMS image segmentation layer can be successfully classified into polygons of the buried relics and those of the background. Thus, we propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results when planning excavation processes.
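A minimal OpenCV sketch of the feature-extraction stage (Canny edges, Hough line and circle transforms, connected-component labeling); the file name and all parameter values are illustrative assumptions rather than the settings used in the study.

```python
# Minimal OpenCV sketch (assumptions: a GPR depth-slice exported as a grayscale
# image named "gpr_slice.png"; parameter values are illustrative only).
import cv2
import numpy as np

img = cv2.imread("gpr_slice.png", cv2.IMREAD_GRAYSCALE)

edges = cv2.Canny(img, threshold1=50, threshold2=150)

# Linear features (roads, fences): probabilistic Hough line transform on the edge image.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)

# Circular features (building remains): Hough circle transform on the raw image.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                           param1=150, param2=40, minRadius=10, maxRadius=60)

# Connected-component labeling of the binarized edge image.
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(edges)

print("lines:", 0 if lines is None else len(lines),
      "circles:", 0 if circles is None else circles.shape[1],
      "components:", n_labels - 1)
```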

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun; Jun, Changhyun; Kim, Hyeon-Joon; Lee, Jae Joon; Park, Hunil; Lee, Jinwook
    • Journal of Korea Water Resources Association / v.56 no.6 / pp.403-417 / 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was constructed in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that show high irregularity and variability in real environments. 1,728 scenarios were designed under five different experimental conditions, from which 36 scenarios and a total of 97,200 frames were selected. Rain streaks were extracted using the k-nearest neighbor algorithm by calculating the difference between each image and the background. To prevent overfitting, data with pixel values greater than a set threshold, compared to the average pixel value of each image, were selected. The area with the maximum pixel variability was determined by shifting a window in 10-pixel steps and was set as a representative 180×180 area of the original image. After resizing to 120×120 as input data for a convolutional neural network model, image augmentation was performed under unified shooting conditions. 92% of the data fell within an absolute PBIAS range of 10%. The final results of this study have the potential to enhance the accuracy and efficacy of existing real-world CCTV systems through transfer learning.
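The background-differencing and representative-window steps can be sketched roughly as below, assuming synthetic frames, a simple mean background in place of the k-NN model, and an illustrative threshold:

```python
# Rough NumPy sketch (assumptions: small synthetic frames; a mean background
# replaces the study's k-NN background model; threshold margin is illustrative).
import numpy as np

rng = np.random.default_rng(1)
frames = rng.integers(0, 255, size=(20, 360, 640)).astype(np.float32)  # downscaled stand-in CCTV frames

background = frames.mean(axis=0)
diff = np.abs(frames - background)                  # rain streaks appear as bright residuals

# Keep pixels whose residual exceeds the per-frame mean by a margin (overfitting guard).
margin = 5.0
thresh = diff.reshape(len(frames), -1).mean(axis=1) + margin
masks = diff > thresh[:, None, None]

# Find the 180x180 window with the highest variability of the mean residual image.
mean_diff = diff.mean(axis=0)
best, best_var = (0, 0), -1.0
for r in range(0, mean_diff.shape[0] - 180 + 1, 10):
    for c in range(0, mean_diff.shape[1] - 180 + 1, 10):
        v = mean_diff[r:r + 180, c:c + 180].var()
        if v > best_var:
            best, best_var = (r, c), v

r, c = best
representative = frames[:, r:r + 180, c:c + 180]    # later resized to 120x120 for the CNN
print("rain-pixel fraction:", round(float(masks.mean()), 3),
      "| representative window:", best)
```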

Derivation of Inherent Optical Properties Based on Deep Neural Network (심층신경망 기반의 해수 고유광특성 도출)

  • Hyeong-Tak Lee; Hey-Min Choi; Min-Kyu Kim; Suk Yoon; Kwang-Seok Kim; Jeong-Eon Moon; Hee-Jeong Han; Young-Je Park
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.695-713 / 2023
  • In coastal waters, phytoplankton, suspended particulate matter, and dissolved organic matter intricately and nonlinearly alter the reflectivity of seawater. Neural network technology, which has been advancing rapidly in recent years, offers the advantage of effectively representing complex nonlinear relationships. In previous studies, a three-stage neural network was constructed to extract the inherent optical properties of each component. However, this study proposes an algorithm that directly employs a deep neural network. The dataset used in this study consists of synthetic data provided by the International Ocean Colour Coordinating Group, with the input data comprising above-surface remote-sensing reflectance at nine different wavelengths. We derived the inherent optical properties from this dataset using a deep neural network. To evaluate performance, we compared it with a quasi-analytical algorithm and analyzed the impact of log transformation on the performance of the deep neural network algorithm in relation to the data distribution. As a result, we found that the deep neural network algorithm accurately estimated the inherent optical properties except for the absorption coefficient of suspended particulate matter (R2 ≥ 0.9), and successfully separated the combined absorption coefficient of suspended particulate matter and dissolved organic matter into its two components. We also observed that the algorithm showed little difference in performance when applied directly without log transformation of the data. To effectively apply the findings of this study to ocean color data processing, further research is needed to train on field data and additional datasets from various marine regions, to compare and analyze empirical and semi-analytical methods, and to appropriately assess the strengths and weaknesses of each algorithm.
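A minimal PyTorch sketch of the direct mapping from nine-band remote-sensing reflectance to inherent optical properties, with an optional log transform of the targets; the random stand-in data, network width, and IOP set are assumptions, not the IOCCG dataset or the authors' architecture.

```python
# Minimal sketch (assumptions: random stand-in data for the synthetic dataset;
# 9 Rrs wavelengths in, a few IOPs out; optional log10 transform of the targets).
import torch
import torch.nn as nn

n_samples, n_wavelengths, n_iops = 512, 9, 4
rrs = torch.rand(n_samples, n_wavelengths) * 0.02          # above-surface Rrs (sr^-1)
iops = torch.rand(n_samples, n_iops) * 2.0 + 1e-3          # stand-in absorption/backscatter values

use_log = True
targets = torch.log10(iops) if use_log else iops

model = nn.Sequential(
    nn.Linear(n_wavelengths, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_iops),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(rrs), targets)
    loss.backward()
    opt.step()

pred = model(rrs)
pred_iops = 10 ** pred if use_log else pred                # invert the log transform
print("final MSE (transformed space):", float(loss))
```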

Subimage Detection of Window Image Using AdaBoost (AdaBoost를 이용한 윈도우 영상의 하위 영상 검출)

  • Gil, Jong In; Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.5 / pp.578-589 / 2014
  • A window image is what is displayed on a monitor screen when application programs are executed on a computer; it includes webpages, video players, and many other applications. Compared with other applications, a webpage delivers a variety of information in various forms. Unlike a natural image captured by a camera, a window image such as a webpage contains diverse components such as text, logos, icons, and subimages, and each component delivers a different kind of information to the user. Because text and images are presented in various forms, components with different characteristics need to be separated locally. In this paper, we divide window images into many sub-blocks and classify each divided region into background, text, and subimage. The detected subimages can be applied to 2D-to-3D conversion, image retrieval, image browsing, and so forth. There are many subimage classification methods; in this paper, we utilize AdaBoost to verify that a machine learning-based algorithm can be efficient for subimage detection. In the experiments, we showed that the subimage detection ratio is 93.4% and the false alarm rate is 13%.
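A minimal scikit-learn sketch of block-level classification with AdaBoost, assuming synthetic per-block features (mean intensity, intensity spread, edge density) and the three classes named above; the feature set is hypothetical, not the paper's.

```python
# Minimal sketch (assumptions: synthetic block features and random labels;
# 0=background, 1=text, 2=subimage).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_blocks = 600
X = np.column_stack([
    rng.uniform(0, 255, n_blocks),      # mean intensity of the block
    rng.uniform(0, 80, n_blocks),       # intensity standard deviation
    rng.uniform(0, 1, n_blocks),        # edge-pixel density
])
y = rng.integers(0, 3, n_blocks)        # 0=background, 1=text, 2=subimage

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("block classification accuracy:", clf.score(X_te, y_te))
```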

Recognition method using stereo images-based 3D information for improvement of face recognition (얼굴인식의 향상을 위한 스테레오 영상기반의 3차원 정보를 이용한 인식)

  • Park Chang-Han; Paik Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.30-38 / 2006
  • In this paper, we improve the recognition rate, which drops with distance, by using distance and depth information derived in 3D from stereo face images. A monocular face image suffers from a drop in recognition rate due to uncertain information such as object distance, size, movement, rotation, and depth, and recognition also fails frequently when information about rotation, illumination, and pose change is not acquired. We aim to solve these problems. The proposed method consists of an eye-detection algorithm, face-pose analysis, and principal component analysis (PCA). We also convert from RGB to YCbCr color space for fast face detection in a limited region. We create a multi-layered relative intensity map in the face candidate region and decide from facial geometry whether it is a face. Depth information for the distance, eyes, and mouth is acquired from the stereo face images. The proposed method detects faces under scale changes, movement, and rotation by using distance and depth. We train PCA on the detected left-face image and the estimated direction difference. Simulation results show a face recognition rate of 95.83% at the front (100 cm) and 98.3% under pose change. Therefore, the proposed method can be used to obtain a high recognition rate with appropriate scaling and pose changes according to distance.
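The PCA (eigenface) recognition stage can be sketched as follows, assuming pre-aligned face crops and omitting the stereo depth and pose-analysis steps described above:

```python
# Minimal sketch of PCA-based (eigenface) matching (assumptions: random stand-in
# face crops; the stereo/depth detection and pose analysis are not shown).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_faces, h, w = 40, 32, 32
faces = rng.random((n_faces, h * w))            # flattened, already-aligned face crops

pca = PCA(n_components=20, whiten=True)
gallery_proj = pca.fit_transform(faces)         # project gallery faces onto eigenfaces

def recognize(probe):
    """Nearest-neighbor match in eigenface space."""
    probe_proj = pca.transform(probe.reshape(1, -1))
    dists = np.linalg.norm(gallery_proj - probe_proj, axis=1)
    return int(np.argmin(dists))

probe = faces[7] + 0.05 * rng.random(h * w)     # a slightly perturbed known face
print("matched identity index:", recognize(probe))
```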

Classification of Scaled Textured Images Using Normalized Pattern Spectrum Based on Mathematical Morphology (형태학적 정규화 패턴 스펙트럼을 이용한 질감영상 분류)

  • Song, Kun-Woen; Kim, Gi-Seok; Do, Kyeong-Hoon; Ha, Yeong-Ho
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.1 / pp.116-127 / 1996
  • In this paper, a scheme for classifying scaled textured images using a normalized pattern spectrum based on mathematical morphology is proposed; it accommodates arbitrary scale changes in more general environments that consider a camera's zoom-in and zoom-out functions. For the normalized pattern spectrum, the pattern spectrum is first calculated and interpolation is then performed to compensate for scale changes according to the scale-change ratio within the same textured-image class. The pattern spectrum is obtained efficiently by using both opening and closing: it is calculated by the opening method for pixels whose values are above a threshold and by the closing method for pixels whose values are below the threshold. We also compare the classification accuracy of the gray-scale and binary methods. The proposed approach has the advantages of efficient information extraction, high accuracy, low computation, and parallel implementation. An important advantage of the proposed method is that high classification accuracy can be obtained with only 1:1 scale images in the training phase.
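A rough sketch of a morphological pattern spectrum with the opening/closing split around a threshold, assuming a synthetic texture and square structuring elements; the paper's interpolation-based normalization by the scale-change ratio is only approximated here by a simple sum normalization.

```python
# Rough sketch (assumptions: synthetic texture, square structuring elements,
# sum normalization instead of the paper's scale-ratio interpolation).
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(128, 128)).astype(float)   # stand-in texture image

def pattern_spectrum(image, max_size=6, use_opening=True):
    """Volume removed between successive openings (or closings) of growing size."""
    op = grey_opening if use_opening else grey_closing
    volumes = [op(image, size=(k, k)).sum() for k in range(1, max_size + 2)]
    return np.abs(np.diff(volumes))              # one spectrum bin per structuring-element size

thr = img.mean()
bright = np.where(img >= thr, img, thr)          # above-threshold pixels -> opening spectrum
dark = np.where(img < thr, img, thr)             # below-threshold pixels -> closing spectrum
spectrum = np.concatenate([pattern_spectrum(bright, use_opening=True),
                           pattern_spectrum(dark, use_opening=False)])
spectrum /= spectrum.sum()                       # simple normalization of the combined spectrum
print(np.round(spectrum, 3))
```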
