• Title/Summary/Keyword: Software Engineering Level

Search Results: 1,007

Korean Dependency Parsing Using Stack-Pointer Networks and Subtree Information (스택-포인터 네트워크와 부분 트리 정보를 이용한 한국어 의존 구문 분석)

  • Choi, Yong-Seok; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering / v.10 no.6 / pp.235-242 / 2021
  • In this work, we develop a Korean dependency parser based on a stack-pointer network, which consists of a pointer network and an internal stack. The parser has an encoder and a decoder, and builds a dependency tree for an input sentence in a depth-first manner. The encoder encodes the input sentence, and at each step the decoder selects a child for the word at the top of the stack. Since the internal stack stores the search path, the parser can utilize information about previously derived subtrees when selecting a child node. Previous studies used only the grandparent and the most recently visited sibling, without considering the subtree structure. In this paper, we introduce graph attention networks that can represent a previously derived subtree, and we modify our stack-pointer-network-based parser to utilize the subtree information they produce. After training the dependency parser on the Sejong and Everyone's corpora, we evaluate its performance. Experimental results show that the proposed parser achieves better sentence-level accuracy than previous approaches when adopting 2-depth graph attention networks.
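The subtree-encoding idea in this abstract can be illustrated with a small sketch (not the paper's implementation): two stacked graph-attention layers over a partial dependency subtree, so each word's representation aggregates 2-hop subtree context. The toy tree, dimensions, and random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gat_layer(H, adj, W, a):
    """One graph-attention layer: each node attends over its neighbors
    (adjacency includes self-loops) and aggregates their features."""
    Z = H @ W                                             # (n, d) projections
    scores = np.tanh(Z[:, None, :] + Z[None, :, :]) @ a   # (n, n) pair scores
    scores = np.where(adj > 0, scores, -1e9)              # mask non-edges
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)      # attention weights
    return np.tanh(alpha @ Z)                             # (n, d) output

# toy partial subtree over 4 words: 0 -> 1, 0 -> 2, 2 -> 3
adj = np.eye(4)
for head, dep in [(0, 1), (0, 2), (2, 3)]:
    adj[head, dep] = adj[dep, head] = 1

d = 8
H = rng.normal(size=(4, d))                               # encoder word states
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
a1, a2 = rng.normal(size=d), rng.normal(size=d)

# 2-depth GAT: stacking two layers lets each node see 2-hop subtree context
H1 = gat_layer(H, adj, W1, a1)
subtree_repr = gat_layer(H1, adj, W2, a2)
print(subtree_repr.shape)   # (4, 8)
```

A decoder could then concatenate `subtree_repr` of the stack-top word with its encoder state before the pointer scoring step.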

Effective Utilization of Domain Knowledge for Relational Reinforcement Learning (관계형 강화 학습을 위한 도메인 지식의 효과적인 활용)

  • Kang, MinKyo; Kim, InCheol
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.141-148 / 2022
  • Recently, reinforcement learning combined with deep neural network technology has achieved remarkable success in various fields such as board games such as Go and chess, computer games such as Atari and StartCraft, and robot object manipulation tasks. However, such deep reinforcement learning describes states, actions, and policies in vector representation. Therefore, the existing deep reinforcement learning has some limitations in generality and interpretability of the learned policy, and it is difficult to effectively incorporate domain knowledge into policy learning. On the other hand, dNL-RRL, a new relational reinforcement learning framework proposed to solve these problems, uses a kind of vector representation for sensor input data and lower-level motion control as in the existing deep reinforcement learning. However, for states, actions, and learned policies, It uses a relational representation with logic predicates and rules. In this paper, we present dNL-RRL-based policy learning for transportation mobile robots in a manufacturing environment. In particular, this study proposes a effective method to utilize the prior domain knowledge of human experts to improve the efficiency of relational reinforcement learning. Through various experiments, we demonstrate the performance improvement of the relational reinforcement learning by using domain knowledge as proposed in this paper.
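The relational representation described here can be sketched in miniature (this is an illustration of the predicate-and-rule idea, not the dNL-RRL framework itself): a state is a set of ground atoms, and an expert rule filters which actions are promising. The predicates, objects, and rule below are invented for illustration.

```python
# Relational state: a set of ground atoms (predicate, arg1, arg2)
state = {
    ("at", "robot", "loading_dock"),
    ("holding", "robot", "none"),
    ("at", "pallet1", "loading_dock"),
}

# Expert rule (domain knowledge): if the robot and an object share a
# location and the gripper is free, then pick_up(object) is applicable.
def applicable_pickups(state):
    actions = []
    robot_loc = next(loc for (p, obj, loc) in state
                     if p == "at" and obj == "robot")
    gripper_free = ("holding", "robot", "none") in state
    for (p, obj, loc) in state:
        if p == "at" and obj != "robot" and loc == robot_loc and gripper_free:
            actions.append(("pick_up", obj))
    return actions

print(applicable_pickups(state))   # [('pick_up', 'pallet1')]
```

Such a rule prunes the action space before learning, which is one way prior domain knowledge can speed up relational policy learning.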

Method of ChatBot Implementation Using Bot Framework (봇 프레임워크를 활용한 챗봇 구현 방안)

  • Kim, Ki-Young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.1 / pp.56-61 / 2022
  • In this paper, we classify and present the AI algorithms and natural language processing methods used in chatbots, and we describe a framework that can be used to implement one. A chatbot is a system that presents its user interface in a conversational manner, interprets an input string, selects an appropriate answer to it from learned data, and outputs that answer. However, generating an appropriate set of answers to questions requires training, as well as hardware with considerable computational power; this limits hands-on practice not only for development companies but also for students learning AI development. Chatbots are currently replacing traditional tasks, and a practice course for understanding and implementing such systems is needed. Beyond responding only to standardized data, technologies such as deep learning are applied: RNNs and Char-CNNs are used to learn unstructured data and increase the accuracy of answers to questions. Understanding this theory is necessary to implement a chatbot. In addition, we present methods that can be used for coding education, a platform on which existing developers and students can implement chatbots, and examples of implementing the entire system.

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo; Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.457-464 / 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data. In particular, the performance of CNN algorithms has reached the level of adjusting the source data itself. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, but their features may be expressed differently depending on the photographing environment. This means that not only is the environmental information of each image different, but the same information is also represented by different features, which may degrade the performance of an image analysis model. Therefore, in this paper, we propose an adversarial-learning-based method to improve the performance of an image color constancy model that uses image data generated in heterogeneous environments simultaneously. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator', which predicts the environment in which an image was taken, and an 'Illumination Estimator', which predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology showed superior performance in terms of angular error compared to existing methods.
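The interaction of the two heads can be sketched as follows. This is a toy illustration of the adversarial objective (domain confusion combined with angular-error minimization), not the paper's architecture; the linear heads, the adversarial weight `lam`, and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# toy shared feature vector for one image
feat = rng.normal(size=16)

# Illumination Estimator head: predicts an RGB illuminant
W_ill = rng.normal(size=(3, 16)) * 0.1
illum_pred = W_ill @ feat

# Domain Discriminator head: predicts the capture environment (3 domains)
W_dom = rng.normal(size=(3, 16)) * 0.1
dom_prob = softmax(W_dom @ feat)

def angular_error(pred, gt):
    """Standard color-constancy metric: angle between illuminant vectors."""
    cos = pred @ gt / (np.linalg.norm(pred) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Adversarial objective: the feature extractor minimizes angular error
# while *maximizing* the discriminator's loss (domain confusion), which
# is the gradient-reversal effect written as a single combined loss.
gt_illum = np.array([0.7, 0.9, 0.5])
domain_label = 1
lam = 0.1                                  # assumed adversarial weight
loss_ill = angular_error(illum_pred, gt_illum)
loss_dom = -np.log(dom_prob[domain_label])
extractor_loss = loss_ill - lam * loss_dom
print(float(extractor_loss))
```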

Analyzing Korean Math Word Problem Data Classification Difficulty Level Using the KoEPT Model (KoEPT 기반 한국어 수학 문장제 문제 데이터 분류 난도 분석)

  • Rhim, Sangkyu; Ki, Kyung Seo; Kim, Bugeun; Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.315-324 / 2022
  • In this paper, we propose KoEPT, a Transformer-based generative model for automatically solving math word problems. A math word problem is written in human language and describes an everyday situation in mathematical form. Solving one requires an artificial intelligence model to understand the logic implied in the problem, so the task is studied widely across the world to improve the language understanding ability of artificial intelligence. For Korean, studies so far have mainly attempted to solve problems by classifying them into templates, but such techniques are difficult to apply to datasets with high classification difficulty. To address this, this paper uses the KoEPT model, which employs 'expression' tokens and pointer networks. To measure its performance, the classification difficulty scores of IL, CC, and ALG514, the existing Korean math word problem datasets, were measured, and the performance of KoEPT was then evaluated using 5-fold cross-validation. On the Korean datasets used for evaluation, KoEPT obtained state-of-the-art (SOTA) performance, with 99.1% on CC, comparable to the existing SOTA performance, and 89.3% and 80.5% on IL and ALG514, respectively. Moreover, KoEPT showed relatively improved performance on datasets with high classification difficulty. Through an ablation study, we found that the use of 'expression' tokens and pointer networks contributed to KoEPT being less affected by classification difficulty while achieving good performance.
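The 'expression' token idea can be illustrated in miniature (a hedged sketch of the decoding target, not the KoEPT model): instead of choosing a fixed template, the decoder emits operator tokens plus pointer tokens (`N0`, `N1`, ...) that refer back to numbers extracted from the problem text. The problem, token names, and evaluator below are illustrative assumptions.

```python
problem = "Chulsoo has 3 apples and buys 5 more. How many apples does he have?"
numbers = ["3", "5"]            # numbers extracted from the problem text

# prefix expression over pointer tokens: (+ N0 N1)
expression = ["+", "N0", "N1"]

def evaluate_prefix(tokens, numbers):
    """Evaluate a prefix expression whose operands point into `numbers`."""
    def helper(i):
        tok = tokens[i]
        if tok.startswith("N"):                 # pointer token -> operand
            return float(numbers[int(tok[1:])]), i + 1
        left, j = helper(i + 1)                 # operator: recurse twice
        right, k = helper(j)
        ops = {"+": left + right, "-": left - right,
               "*": left * right, "/": left / right}
        return ops[tok], k
    return helper(0)[0]

print(evaluate_prefix(expression, numbers))   # 8.0
```

Because operands are pointers rather than literal numbers, the same expression generalizes to any problem with the same structure, which is one reason this target is less sensitive to template classification difficulty.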

A Study on IT Curriculum Evaluation for College Students

  • Kim, Heon Joo; Kim, Kyung-mi; Yi, Kang
    • Journal of the Korea Society of Computer and Information / v.27 no.10 / pp.255-265 / 2022
  • We compared and analyzed the factors affecting the lecture evaluations of IT subjects that are mandatory for all students of H University. The purpose of this study is to determine whether lecture satisfaction has a significant correlation with academic achievement, attendance rate, and course category. In particular, we check whether lecture satisfaction for IT liberal arts subjects that require a lot of computer-based practice differs from that of other liberal arts subjects. We used 2,149 evaluations of 12 lectures submitted by 2,322 students in the first and second semesters of 2019 at H University. In addition to the scores on the multiple-choice questions, responses to the subjective questions were also quantified, by classifying the statements submitted by students as positive or negative, to make the lecture evaluation results objective. Our results show that student groups with higher attendance rates and academic achievement report higher lecture satisfaction and also use more positive than negative words in the subjective evaluation questions. Students with lower scores use more negative words, but the ratio of positive to negative words does not differ between groups. Groups with higher attendance rates in the basic programming and software application courses show higher lecture satisfaction, but in the intermediate programming courses, attendance rate and lecture satisfaction have no significant relationship. Students in the intermediate programming courses also use more negative words than those in the basic programming courses.

Super High-Resolution Image Style Transfer (초-고해상도 영상 스타일 전이)

  • Kim, Yong-Goo
    • Journal of Broadcast Engineering / v.27 no.1 / pp.104-123 / 2022
  • Style transfer based on neural networks reflects the high-level structural characteristics of images and thus provides very high-quality results, which has recently attracted great attention. This paper deals with the resolution limitation imposed by GPU memory when performing such neural style transfer. Because the receptive field has a fixed size, we can expect the gradient operation for style transfer computed on a partial image to produce the same result as the gradient operation computed on the entire image. Based on this idea, this paper analyzes each component of the style transfer loss function to obtain the conditions required for partitioning and padding, and to identify which of the information required for gradient calculation depends on the entire input. By structuring that information for use as an auxiliary constant input to the partition-based gradient calculation, this paper develops a recursive algorithm for super high-resolution image style transfer. Since the proposed method performs style transfer by partitioning the input image into pieces of a size the GPU can handle, the input image resolution is not limited by GPU memory size. With such super high-resolution support, the proposed method can reproduce the unique style characteristics of detailed areas, which can only be appreciated in super high-resolution style transfer.
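The core equivalence claim (partition-based computation matches whole-image computation when tiles overlap by the receptive-field radius) can be checked on a toy operation. The sketch below uses a plain 3x3 convolution as a stand-in for the network layers; the tile split and halo size are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def conv3x3(img, k):
    """'Same' 3x3 convolution with zero padding."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

rng = np.random.default_rng(2)
img = rng.normal(size=(8, 8))
k = rng.normal(size=(3, 3))

full = conv3x3(img, k)

# Tile-based evaluation: split into two halves with a 1-pixel halo
# (the receptive-field radius of a 3x3 kernel), then crop the halo.
halo = 1
top = conv3x3(img[:4 + halo, :], k)[:4, :]
bot = conv3x3(img[4 - halo:, :], k)[halo:, :]
tiled = np.vstack([top, bot])

print(np.allclose(full, tiled))   # True
```

For an actual style-transfer loss, the analogous halo is the receptive-field radius of the deepest feature layer used, and globally dependent terms (such as Gram-matrix statistics) must be passed in as the constant auxiliary input the abstract describes.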

Soil Moisture Estimation Using KOMPSAT-3 and KOMPSAT-5 SAR Images and Its Validation: A Case Study of Western Area in Jeju Island (KOMPSAT-3와 KOMPSAT-5 SAR 영상을 이용한 토양수분 산정과 결과 검증: 제주 서부지역 사례 연구)

  • Jihyun Lee; Hayoung Lee; Kwangseob Kim; Kiwon Lee
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1185-1193 / 2023
  • The increasing interest in soil moisture data from satellite imagery for applications in hydrology, meteorology, and agriculture has led to the development of methods to produce variable-resolution soil moisture maps. Research on accurate soil moisture estimation from satellite imagery is essential for remote sensing applications. The purpose of this study is to generate a soil moisture estimation map for a test area using KOMPSAT-3/3A and KOMPSAT-5 SAR imagery and to quantitatively compare the results, for accuracy validation, with soil moisture data from the Soil Moisture Active Passive (SMAP) mission provided by NASA. In addition, the Korean Environmental Geographic Information Service (EGIS) land cover map was used to determine soil moisture, especially in agricultural and forested regions. The selected test area is the western part of Jeju, South Korea, where input data were available for a soil moisture estimation algorithm based on the Water Cloud Model (WCM). Synthetic Aperture Radar (SAR) imagery from KOMPSAT-5 HV and Sentinel-1 VV was used for soil moisture estimation, while vegetation indices were calculated from the surface reflectance of KOMPSAT-3 imagery. Differencing the derived soil moisture results against SMAP (L-3) and SMAP (L-4) data showed mean differences of 4.13±3.60 p% and 14.24±2.10 p%, respectively, indicating the level of agreement attained. This research suggests the potential for producing highly accurate and precise soil moisture maps using future South Korean satellite imagery and publicly available data sources, as demonstrated in this study.
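The Water Cloud Model mentioned here expresses total backscatter as a vegetation contribution plus a two-way-attenuated soil contribution, and can be inverted in closed form for the soil term. The sketch below shows that forward/inverse pair; the parameter values A, B, the incidence angle, and the vegetation descriptor V are illustrative assumptions, not values from the study.

```python
import numpy as np

def wcm_backscatter(sigma_soil, V, theta_deg, A, B):
    """Water Cloud Model: total backscatter (linear units) from soil
    backscatter, vegetation descriptor V, and incidence angle theta."""
    cos_t = np.cos(np.radians(theta_deg))
    tau2 = np.exp(-2.0 * B * V / cos_t)        # two-way canopy attenuation
    sigma_veg = A * V * cos_t * (1.0 - tau2)   # vegetation contribution
    return sigma_veg + tau2 * sigma_soil

def wcm_invert(sigma_total, V, theta_deg, A, B):
    """Solve the WCM for the soil contribution (closed form); the soil
    term is what a soil-moisture retrieval then converts to moisture."""
    cos_t = np.cos(np.radians(theta_deg))
    tau2 = np.exp(-2.0 * B * V / cos_t)
    return (sigma_total - A * V * cos_t * (1.0 - tau2)) / tau2

# round trip with assumed illustrative parameters
A, B, theta, V = 0.12, 0.09, 38.0, 0.5
soil = 0.03
total = wcm_backscatter(soil, V, theta, A, B)
print(np.isclose(wcm_invert(total, V, theta, A, B), soil))   # True
```

In a retrieval like the one described, V would come from the optical vegetation index (KOMPSAT-3) and sigma_total from the SAR backscatter (KOMPSAT-5/Sentinel-1), with A and B fitted per land-cover class.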

Uncertainty Calculation Algorithm for the Estimation of the Radiochronometry of Nuclear Material (핵물질 연대측정을 위한 불확도 추정 알고리즘 연구)

  • JaeChan Park; TaeHoon Jeon; JungHo Song; MinSu Ju; JinYoung Chung; KiNam Kwon; WooChul Choi; JaeHak Cheong
    • Journal of Radiation Industry / v.17 no.4 / pp.345-357 / 2023
  • Nuclear forensics is understood as a mandatory component of international nuclear material control and non-proliferation verification. Radiochronometry for nuclear forensics uses the decay-series characteristics of nuclear materials and the Bateman equation to estimate when nuclear materials were purified and produced. Radiochronometry values carry measurement uncertainty arising from the uncertainty factors in the estimation process, and these uncertainties should be calculated with appropriate evaluation methods that represent accuracy and reliability. The IAEA, US, and EU have researched radiochronometry and its measurement uncertainty, but the uncertainty calculation method based directly on the Bateman equation is limited by underestimation of the decay constants and by the impossibility of estimating ages beyond one generation of the chain, so uncertainty calculation research using computer simulation, such as the Monte Carlo method, is needed to overcome these limitations. In this study, we analyzed mathematical models and the Latin Hypercube Sampling (LHS) method to enhance the reliability of radiochronometry, with the aim of developing an uncertainty algorithm for nuclear material radiochronometry based on the Bateman equation. We analyzed the LHS method, which can obtain effective statistical results with a small number of samples, and applied it to a Monte Carlo algorithm for uncertainty calculation by computer simulation, implemented in the MATLAB computational software. The uncertainty calculation model using mathematical models showed characteristics governed by the relationship between sensitivity coefficients and radioactive equilibrium, while the computational-simulation random sampling showed characteristics dependent on the sampling method, the number of sampling iterations, and the probability distributions of the uncertainty factors. For validation, we compared models from various international organizations, mathematical models, and the Monte Carlo method; the developed algorithm was found to calculate at a level of accuracy equivalent to that of overseas institutions and mathematical-model-based methods. To enhance usability, future research, comparison, and validation need to incorporate more complex decay chains and non-homogeneous conditions. The results of this study can serve as foundational technology in the nuclear forensics field, providing tools for identifying signature nuclides and aiding the research, development, comparison, and validation of related technologies.
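The combination of a two-member Bateman chain and LHS-based uncertainty propagation can be sketched as follows (an illustrative re-implementation in Python, not the study's MATLAB algorithm). The decay constants are the real Th-230/U-234 half-lives, but the measured ratio, its uncertainty, and the sample count are assumptions.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)

def lhs_normal(mean, sd, n):
    """Latin Hypercube draws from N(mean, sd): one point per equal-
    probability stratum (midpoints), shuffled for random pairing."""
    u = (np.arange(n) + 0.5) / n
    x = np.array([NormalDist(mean, sd).inv_cdf(p) for p in u])
    rng.shuffle(x)
    return x

def ratio(t, lam_p, lam_d):
    """Two-member Bateman chain: daughter/parent atom ratio at age t,
    for an initially pure parent."""
    return (lam_p / (lam_d - lam_p)
            * (np.exp(-lam_p * t) - np.exp(-lam_d * t)) / np.exp(-lam_p * t))

def solve_age(r, lam_p, lam_d, t_hi=200.0):
    """Bisection solve of ratio(t) = r for the model age t (years)."""
    lo, hi = 0.0, t_hi
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if ratio(mid, lam_p, lam_d) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam_p = np.log(2) / 245500.0      # U-234 decay constant (1/yr)
lam_d = np.log(2) / 75380.0       # Th-230 decay constant (1/yr)
n = 1000
r_meas = lhs_normal(2.0e-4, 0.05e-4, n)   # assumed measured ratio ± 1 sigma
ages = np.array([solve_age(r, lam_p, lam_d) for r in r_meas])
print(ages.mean(), ages.std(ddof=1))      # model age and its uncertainty
```

The spread of `ages` is the Monte Carlo uncertainty estimate; the stratified LHS draws stabilize that estimate at far fewer samples than plain random sampling.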

Proposal for Estimation Method of the Suspended Solid Concentration in EIA (환경영향평가에서 부유사 농도 추정 방법 제안)

  • Choo, Tai Ho; Kim, Young Hwan; Park, Bong Soo; Kwon, Jae Wook; Cho, Hyun Min
    • Journal of Wetlands Research / v.19 no.1 / pp.30-36 / 2017
  • The SS (Suspended Solid) concentration produced by soil erosion into rivers during the normal and flood seasons should, in principle, be measured. However, to present the variation of SS for various development projects, such as an EIA (Environmental Impact Assessment) or a River Master Plan, SS must be estimated rather than measured, and no established estimation method exists. In the present study, therefore, we propose a hydrologic method of estimating SS concentration using flood discharges of particular frequencies and the sediment discharge computed by the RUSLE method. SS consists of silt, clay, colloidal particles, and so on; in the present study, the silt and clay fractions of the sediment discharge, excluding sand, are taken as the basis for SS. The flow discharges used to estimate SS concentration have return periods of 1-2 years for the normal season and 30-100 years for the flood season. Probable rainfalls are analyzed with the FARD2006 software, and probable rainfalls with return periods under 2 years are estimated using rainfall data and the frequency factor of the Gumbel distribution. Estimating SS concentration from the runoff volume and the silt and clay sediment discharges by the above method yields SS concentrations at levels considered reliable for natural (pre-development) and barren (under-development) conditions. Notably, although the sediment discharge, and hence the SS concentration, differs greatly with channel slope, the value obtained by dividing the SS concentration by the channel slope was found to be relatively constant even when the topographic factors differ. Therefore, if the present study is extended to various watersheds, it can be developed into an estimation method for SS concentration.
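The proposed estimate reduces to a mass balance: RUSLE soil loss, restricted to its silt and clay fraction, divided by the runoff volume of the design discharge. The sketch below shows that arithmetic; every parameter value is an illustrative assumption, not a value from the paper.

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE: A = R*K*LS*C*P, soil loss in tonnes/ha/yr (factors are the
    rainfall erosivity, soil erodibility, slope length-steepness, cover,
    and support-practice factors)."""
    return R * K * LS * C * P

def ss_concentration_mg_per_l(soil_loss_t_ha, area_ha, fine_fraction,
                              runoff_volume_m3):
    """SS = fine-grained (silt + clay) sediment mass / runoff volume."""
    sediment_kg = soil_loss_t_ha * area_ha * 1000.0       # t -> kg
    fine_kg = sediment_kg * fine_fraction                 # silt + clay only
    return fine_kg * 1e6 / (runoff_volume_m3 * 1000.0)    # mg / L

# assumed illustrative watershed under barren (development) conditions
A = rusle_soil_loss(R=4000.0, K=0.03, LS=1.2, C=0.2, P=1.0)   # t/ha/yr
ss = ss_concentration_mg_per_l(A, area_ha=50.0, fine_fraction=0.4,
                               runoff_volume_m3=2.0e5)
print(A, ss)   # 28.8 t/ha/yr, 2880.0 mg/L
```

Dividing `ss` by the channel slope would then give the quantity the abstract reports as relatively constant across different topographies.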