• Title/Summary/Keyword: Paper Mapping


Ontology-based Course Mentoring System (온톨로지 기반의 수강지도 시스템)

  • Oh, Kyeong-Jin;Yoon, Ui-Nyoung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems, v.20 no.2, pp.149-162, 2014
  • Course guidance is a mentoring process performed before students register for upcoming classes. It plays a very important role in checking students' degree audits and in advising on the classes to be taken in the coming semester, and it is closely tied to graduation assessment and to completion of ABEEK certification. Currently, course guidance is performed manually by advisors at most universities in Korea because no electronic systems for it exist. Without such systems, advisors must analyze each student's degree audit together with the curriculum information of their own departments, a process whose complexity often causes human error. An electronic system is therefore essential to avoid human error in course guidance. Applying a relational data model-based system to the mentoring process would solve the problems of the manual approach, but such systems have limitations. Departmental curricula and certification requirements can change with new university policies or surrounding circumstances, and whenever they do, the schema of the existing system must be changed accordingly. Relational systems are also insufficient for semantic search because of the difficulty of extracting semantic relationships between subjects. In this paper, we model a course mentoring ontology based on an analysis of the computer science curriculum, the structure of the degree audit, and ABEEK certification. We also propose an ontology-based course guidance system that overcomes the limitations of the existing methods and makes the course mentoring process effective for both advisors and students. In the proposed system, all data consist of ontology instances. To create the instances, an ontology population module was developed using the JENA framework, which is designed for building semantic web and linked data applications. The module implements mapping rules that connect parts of a degree audit to the corresponding parts of the course mentoring ontology. All ontology instances are generated from the degree audits of the students who participated in the course mentoring test. The generated instances are stored in JENA TDB as a triple repository after an inference step using the JENA inference engine. A user interface for course guidance was implemented in Java with the JENA framework. Once an advisor or a student enters a student's information, such as name and student number, in the information request form, the system returns mentoring results based on the student's degree audit and on rules that check the scores for each part of the curriculum, such as special cultural subjects, major subjects, and MSC subjects covering mathematics and basic science. Recall and precision are used to evaluate the system: recall checks that the system retrieves all relevant subjects, and precision checks whether the retrieved subjects are relevant to the mentoring results. An officer of the computer science department verified the results derived from the system. Experimental results using real data from the participating students show that the proposed course guidance system based on the course mentoring ontology always provides correct mentoring results to students. Advisors can also reduce the time spent analyzing a student's degree audit and calculating the score for each part. As a result, the proposed ontology-based system removes the difficulties of manual mentoring and derives mentoring results as correct as those produced by hand.
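The population and evaluation steps above can be sketched in miniature. This is a hedged pure-Python illustration, not the paper's implementation (which uses Java, the JENA framework, and JENA TDB); the audit fields, ontology property names, and mapping rules below are hypothetical.

```python
# Hypothetical sketch: mapping degree-audit fields to ontology triples,
# plus the recall/precision check used for evaluation.

def populate(degree_audit, mapping_rules):
    """Map degree-audit fields to (subject, predicate, object) triples."""
    triples = set()
    for field, value in degree_audit.items():
        if field in mapping_rules:                 # only mapped fields
            predicate = mapping_rules[field]
            triples.add((degree_audit["studentId"], predicate, value))
    return triples

MAPPING_RULES = {              # degree-audit field -> ontology property
    "majorScore": "hasMajorScore",
    "mscScore":   "hasMSCScore",
}

def precision_recall(retrieved, relevant):
    """Precision: retrieved items that are relevant; recall: relevant
    items that were retrieved."""
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall
```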

Feature Analysis of Metadata Schemas for Records Management and Archives from the Viewpoint of Records Lifecycle (기록 생애주기 관점에서 본 기록관리 메타데이터 표준의 특징 분석)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • Journal of Korean Society of Archives and Records Management, v.10 no.2, pp.75-99, 2010
  • Digital resources are widely used in modern society, yet we face fundamental problems in maintaining and preserving them over time. Several standard methods for preserving digital resources have been developed and are in use, and it is widely recognized that metadata is one of the most important components of digital archiving and preservation. There are many metadata standards for archiving and preserving digital resources, each with features reflecting its primary application, so each schema must be appropriately selected and tailored for a particular application. In some cases, schemas are combined in a larger framework or in container metadata such as the DCMI application framework and METS. In this study, we used the following metadata standards for the feature analysis: AGLS Metadata, defined to improve search of both digital and non-digital resources; ISAD(G), a commonly used standard for archives; EAD, widely used for digital archives; OAIS, which defines a metadata framework for preserving digital objects; and PREMIS, designed primarily for the preservation of digital resources. In addition, we extracted attributes from the decision tree defined for the digital preservation process by the Digital Preservation Coalition (DPC) and compared this set of attributes with the metadata standards. This paper presents the features of these standards obtained through a feature analysis based on the records lifecycle model. The features are shown in a single framework that makes it easy to relate the tasks in the lifecycle to the metadata elements of the standards. Through a detailed analysis of the metadata elements, we clarified the features of the standards from the viewpoint of the relationships between elements and lifecycle stages. Mapping between metadata schemas is often required in long-term preservation because different schemas are used across the records lifecycle, so building a unified framework to enhance their interoperability is crucial. This study presents a basis for the interoperability of the different metadata schemas used in digital archiving and preservation.
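A schema crosswalk of the kind this analysis supports can be sketched as a simple element mapping. The ISAD(G) and EAD element names below follow common usage, but the mapping itself is illustrative and not taken from the paper.

```python
# Illustrative element crosswalk between two archival metadata schemas.
# Element names follow common ISAD(G)/EAD practice; the mapping is a
# hypothetical sketch, not the paper's actual analysis.

ISADG_TO_EAD = {
    "3.1.1 Reference code":  "unitid",
    "3.1.2 Title":           "unittitle",
    "3.1.3 Date":            "unitdate",
    "3.2.1 Name of creator": "origination",
}

def map_record(isadg_record):
    """Translate an ISAD(G)-keyed record into EAD-keyed fields,
    dropping elements with no counterpart in the crosswalk."""
    return {ISADG_TO_EAD[k]: v for k, v in isadg_record.items()
            if k in ISADG_TO_EAD}
```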

An Introduction of Korean Soil Information System (한국 토양정보시스템 소개)

  • Hong, S. Young;Zhang, Yong-Seon;Hyun, Byung-Keun;Sonn, Yeon-Kyu;Kim, Yi-Hyun;Jung, Sug-Jae;Park, Chan-Won;Song, Kwan-Cheol;Jang, Byoung-Choon;Choe, Eun-Young;Lee, Ye-Jin;Ha, Sang-Keun;Kim, Myung-Suk;Lee, Jong-Sik;Jung, Goo-Bok;Ko, Byong-Gu;Kim, Gun-Yeob
    • Korean Journal of Soil Science and Fertilizer, v.42 no.1, pp.21-28, 2009
  • Detailed information on soil characteristics is of great importance for the use and conservation of soil resources, which are essential for human welfare and ecosystem sustainability. This paper introduces the soil inventory of Korea, focusing on national soil database establishment, information systems, their use, and future directions for natural resources management. Soil maps surveyed at different scales and soil test data collected by the RDA (Rural Development Administration) were computerized to construct digital soil maps and databases. Soil chemical properties and heavy metal concentrations in agricultural soils, including vulnerable agricultural soils, were investigated regularly at fixed sampling points. Internet-based information systems for soil and agro-environmental resources were developed under the 'National Soil Survey Projects' to manage soil resources and provide soil information to the public, and a system under the 'Agroenvironmental Change Monitoring Project', which monitors spatial and temporal changes in the agricultural environment, will open soon. Soil data have great potential for further applications such as estimating soil carbon storage, water capacity, and soil loss. Digital mapping of soil and environment using state-of-the-art and emerging technologies, together with pedometrics concepts, will guide the future direction of this work.

SSP Climate Change Scenarios with 1km Resolution Over Korean Peninsula for Agricultural Uses (농업분야 활용을 위한 한반도 1km 격자형 SSP 기후변화 시나리오)

  • Jina Hur;Jae-Pil Cho;Sera Jo;Kyo-Moon Shim;Yong-Seok Kim;Min-Gu Kang;Chan-Sung Oh;Seung-Beom Seo;Eung-Sup Kim
    • Korean Journal of Agricultural and Forest Meteorology, v.26 no.1, pp.1-30, 2024
  • The international community has adopted the SSP (Shared Socioeconomic Pathways) scenarios as the new greenhouse gas emission pathways. To reflect these international trends and to support climate change adaptation measures in the agricultural sector, the National Institute of Agricultural Sciences (NAS) produced high-resolution (1 km) climate change scenarios for the Korean Peninsula based on the SSP scenarios, certified as a "National Climate Change Standard Scenario" in 2022. This paper introduces the SSP climate change scenarios of the NAS and presents the resulting climate change projections. To produce the future scenarios, global climate data from 18 GCMs participating in CMIP6 were collected for the past (1985-2014) and future (2015-2100) periods and statistically downscaled over the Korean Peninsula using 1 km resolution digital climate maps and the SQM method. By the end of the 21st century (2071-2100), the average annual maximum/minimum temperature of the Korean Peninsula is projected to increase by 2.6~6.1℃/2.5~6.3℃ and annual precipitation by 21.5~38.7%, depending on the scenario. The increases in temperature and precipitation under the low-carbon scenario are smaller than those under the high-carbon scenario. Average wind speed and solar radiation over the analysis region are projected to change little by the end of the 21st century compared to the present. These data are expected to help quantify future uncertainties due to climate change and to support rational decision-making for climate change adaptation.
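Statistical downscaling of this kind rests on quantile mapping. Below is a minimal pure-Python sketch of the idea, assuming SQM behaves like simple empirical quantile mapping; the actual NAS procedure also uses the 1 km digital climate maps and is more elaborate.

```python
import bisect

def quantile_map(value, model_hist, obs_hist):
    """Simple empirical quantile mapping: find the value's quantile in
    the model's historical distribution and return the observation at
    the same quantile. An illustrative stand-in for the SQM step."""
    m = sorted(model_hist)
    o = sorted(obs_hist)
    q = bisect.bisect_left(m, value) / len(m)   # empirical quantile
    idx = min(int(q * len(o)), len(o) - 1)      # same rank in obs
    return o[idx]
```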

Seismic Facies Classification of Igneous Bodies in the Gunsan Basin, Yellow Sea, Korea (탄성파 반사상에 따른 서해 군산분지 화성암 분류)

  • Yun-Hui Je;Ha-Young Sim;Hoon-Young Song;Sung-Ho Choi;Gi-Bom Kim
    • Journal of the Korean earth science society, v.45 no.2, pp.136-146, 2024
  • This paper introduces the seismic facies classification and mapping of igneous bodies found in the sedimentary sequences of the Yellow Sea shelf of Korea. In the research area, six extrusive and three intrusive types of igneous bodies were found in the Late Cretaceous, Eocene, Early Miocene, and Quaternary sedimentary sequences of the northeastern, southwestern, and southeastern sags of the Gunsan Basin. The extrusive igneous bodies comprise six facies: (1) monogenetic volcano (E.mono), showing a cone-shaped external geometry with height less than 200 m, which may have originated from a single monogenetic eruption; (2) complex volcano (E.comp), marked by clustered monogenetic cones with heights less than 500 m; (3) stratovolcano (E.strato), referring to internally stratified lofty volcanic edifices with heights greater than 1 km and diameters of more than 15 km; (4) fissure volcanics (E.fissure), marked by high-amplitude, discontinuous reflectors associated with normal faults that cut the acoustic basement; (5) maar-diatreme (E.maar), referring to gently sloped, low-profile volcanic edifices containing vent-shaped zones less than 2 km wide; and (6) hydrothermal vents (E.vent), marked by upright pipe- or funnel-shaped structures, less than 2 km in diameter, that disturb the sedimentary sequence. The intrusive igneous bodies comprise three facies: (1) dike and sill (I.dike/sill), showing variable horizontal, step-wise, or saucer-shaped intrusive geometries; (2) stock (I.stock), marked by pillar- or horn-shaped bodies with kilometer-wide intrusion diameters; and (3) batholiths and laccoliths (I.batho/lac), gigantic intrusive bodies that broadly deformed the overlying sedimentary sequence.
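The geometric criteria quoted above are essentially decision rules, which can be sketched as a toy classifier. The thresholds follow the abstract, but the function and attribute names are hypothetical, and the rules cover only three of the six extrusive facies.

```python
# Toy rule-based classifier for some extrusive facies, echoing the
# height/diameter criteria in the abstract. Names are hypothetical and
# the real classification relies on seismic reflection character too.

def classify_extrusive(height_m, diameter_km,
                       stratified=False, clustered=False):
    if stratified and height_m > 1000 and diameter_km > 15:
        return "E.strato"   # stratified lofty edifice, >1 km, >15 km wide
    if clustered and height_m < 500:
        return "E.comp"     # clustered monogenetic cones, <500 m
    if height_m < 200:
        return "E.mono"     # single cone, <200 m
    return "unclassified"
```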

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.1-25, 2020
  • In this paper, we present an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; some applications, however, need to ignore characters that are not of interest and focus only on specific types. Automatic gasometer reading is one such application: only the device ID and gas usage amount need to be extracted from gasometer images to bill users. Other character strings, such as the device type, manufacturer, manufacturing date, and specifications, are not valuable to the application. The application therefore analyzes only the regions of interest and specific character types to extract the valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the regions of interest. We built three neural networks for the application system. The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into spatial sequential feature vectors; and the third is a bidirectional long short-term memory network that converts the spatial sequential information into character strings by time-series analysis, mapping feature vectors to characters. The strings of interest in this research are the device ID, consisting of 12 Arabic numerals, and the gas usage amount, consisting of 4~5 Arabic numerals. All system components are implemented on Amazon Web Services with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient, fast parallel processing that copes with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with a FIFO (First In First Out) structure. The slave process comprises the three deep neural networks that perform character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests; when a request arrives, it converts the image into the device ID string, the gas usage amount string, and the position information of the strings, returns the information to an output queue, and switches back to idle mode to poll the input queue. The master process then takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three networks: 22,985 images for training and validation and 4,135 images for testing. For each training epoch, we randomly split the 22,985 images 8:2 into training and validation sets. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal means clean images, noise means images with noise, reflex means images with light reflection in the gasometer region, scale means images with small object sizes due to long-distance capture, and slant means images that are not horizontally level. The final character string recognition accuracies on normal data are 0.960 for the device ID and 0.864 for the gas usage amount.
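The master-slave queueing described above can be sketched with Python's standard library. This is a hedged simplification: the recognizer is stubbed out, whereas in the real system the slave runs three deep networks on a GPU and the queues and processes live in the AWS cloud.

```python
import queue
import threading

input_q, output_q = queue.Queue(), queue.Queue()

def recognize(image):
    """Stub for the CNN -> CRNN -> BiLSTM pipeline (values invented)."""
    return {"device_id": "123456789012", "usage": "0421"}

def slave():
    """Slave loop: poll the FIFO input queue, recognize, return result."""
    while True:
        image = input_q.get()
        if image is None:              # shutdown signal
            break
        output_q.put(recognize(image))
        input_q.task_done()

worker = threading.Thread(target=slave)
worker.start()
input_q.put(b"<gasometer image bytes>")  # master pushes a reading request
result = output_q.get()                  # master collects the result
input_q.put(None)                        # stop the slave
worker.join()
```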

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems, v.27 no.3, pp.57-73, 2021
  • Maintenance and failure prevention through anomaly detection in ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, and handling it requires considering both the characteristics of multidimensional data and those of time series data. For multidimensional data, correlation between variables should be considered; existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time series data, in turn, is preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis, and these techniques further increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is a long-standing research field in which statistical methods and regression analysis were used early on; current studies actively apply machine learning and artificial neural networks. Statistically based methods are difficult to apply when the data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values, but their performance drops when the model is not solid or the data contains noise or outliers, and they are restricted to learning from data free of noise and outliers. An autoencoder built with artificial neural networks is trained to produce output as similar as possible to its input. Compared with existing probability and linear models, cluster analysis, and supervised learning, it has many advantages: it can be applied to data that does not satisfy a probability distribution or a linearity assumption, and it can learn in an unsupervised manner without labeled training data. However, autoencoders still have limitations in identifying local outliers in multidimensional data, and the characteristics of time series data greatly increase the dimensionality. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that improves anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to overcome the limitations in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modals share the autoencoder's bottleneck and thereby learn their correlations. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the data dimensionality. Conditional inputs are usually categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance over 41 variables was examined for the proposed and comparison models. Reconstruction performance differs by variable; reconstruction works well for the memory, disk, and network modals, whose loss values are small in all three autoencoders. The process modal showed no significant difference across the three models, while the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared; on all indicators the ranking was CMAE, MAE, then UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all abnormalities; its accuracy improved to 87.12% and its F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has advantages beyond performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the dimensional increase they cause can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
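The indicators compared above (accuracy, precision, recall, F1-score) all derive from a confusion matrix. A minimal sketch, with illustrative counts rather than the paper's data:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts
    (true/false positives and negatives). Counts here are illustrative."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```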