• Title/Summary/Keyword: Preprocessing System


Fusion of Aerosol Optical Depth from the GOCI and the AHI Observations (GOCI와 AHI 자료를 활용한 에어로졸 광학두께 합성장 산출 연구)

  • Kang, Hyeongwoo; Choi, Wonei; Park, Jeonghyun; Kim, Serin; Lee, Hanlim
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.861-870 / 2021
  • In this study, fused Aerosol Optical Depth (AOD) data were produced using AOD products from the Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean and Meteorological Satellite (COMS) and the Advanced Himawari Imager (AHI) onboard Himawari-8. Since the spatial resolution and coordinate system differ between the two sensors, preprocessing was performed first. Then, using the level 1.5 AOD dataset of the ground-based AErosol RObotic NETwork (AERONET), the correlations and trends between each satellite AOD and AERONET AOD were used to produce satellite AOD data more accurate than the original satellite AODs. The fused AODs were found to be more accurate than the original satellite AODs: the Root Mean Square Error (RMSE) and mean bias of the fused AODs were calculated to be 0.13 and 0.05, respectively, compared with the original GOCI AOD (RMSE: 0.15, mean bias: 0.11) and the original AHI AOD (RMSE: 0.15, mean bias: 0.05). It was also confirmed that the fused AODs have better spatial coverage than the original AODs in areas where a single satellite has no observations due to the presence of cloud.
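
A rough sketch of the fusion logic described above: each satellite AOD is first mapped toward AERONET through a linear fit, and the corrected fields are then blended, falling back to the single available sensor where the other is cloud-masked. The equal-weight average and all array names are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_correction(sat_aod, aeronet_aod):
    # Linear fit of collocated satellite AOD against AERONET level 1.5 AOD.
    return LinearRegression().fit(sat_aod.reshape(-1, 1), aeronet_aod)

def fuse(goci, ahi, goci_fit, ahi_fit):
    # Apply each sensor's fitted correction on a common (regridded) grid;
    # NaN retrievals propagate through the linear map.
    g = goci_fit.coef_[0] * goci + goci_fit.intercept_
    a = ahi_fit.coef_[0] * ahi + ahi_fit.intercept_
    # Average where both sensors retrieved AOD; elsewhere keep whichever
    # value exists, so single-sensor cloud gaps are filled.
    both = ~np.isnan(g) & ~np.isnan(a)
    return np.where(both, 0.5 * (g + a), np.where(np.isnan(g), a, g))
```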

Analysis of Research Trends in Tax Compliance using Topic Modeling (토픽모델링을 활용한 조세순응 연구 동향 분석)

  • Kang, Min-Jo; Baek, Pyoung-Gu
    • The Journal of the Korea Contents Association / v.22 no.1 / pp.99-115 / 2022
  • In this study, domestic academic journal papers on tax compliance, tax consciousness, and faithful tax payment (hereinafter "tax compliance") were comprehensively analyzed from an interdisciplinary perspective, tax compliance being a representative research topic in the field of tax science. To achieve this purpose, a topic modeling technique was applied as part of text mining. Following a flow of data collection, keyword preprocessing, and topic model analysis, potential research topics were derived from the tax compliance related keywords registered by the researchers in a total of 347 papers. The results can be summarized as follows. First, in the keyword analysis, keywords such as tax investigation, tax avoidance, and the honest tax reporting system were among the top 5 keywords by simple term frequency, and also by TF-IDF value, which considers the relative importance of keywords. On the other hand, the keyword tax evasion was among the top keywords by TF-IDF value, whereas it was not highlighted by simple term frequency. Second, eight potential research topics were derived through topic modeling: (1) tax fairness and suppression of tax offenses, (2) the ideology of the tax law and the validity of tax policies, (3) the principle of substance over form and the guarantee of tax receivables, (4) tax compliance costs and tax administration services, (5) the tax return self-assessment system and tax experts, (6) tax climate and strategic tax behavior, (7) multifaceted tax behavior and differential compliance intentions, and (8) tax information systems and tax resource management. By comprehensively examining the various perspectives on tax compliance from an interdisciplinary viewpoint, the study grasps past research trends on tax compliance and suggests directions for future research.
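
A minimal sketch of the keyword analysis and topic modeling flow the abstract outlines, using scikit-learn; the toy keyword strings stand in for the 347 papers' registered keywords, and the eight topics match the number reported.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for the keyword fields of the collected papers.
docs = [
    "tax investigation tax avoidance honest tax reporting",
    "tax compliance costs tax administration services",
    "tax information system tax resource management",
    "substance over form tax receivables",
]

tf_vec = CountVectorizer()
X_tf = tf_vec.fit_transform(docs)                # simple term frequency
X_tfidf = TfidfVectorizer().fit_transform(docs)  # relative keyword importance

# LDA with eight topics, the count reported in the abstract.
lda = LatentDirichletAllocation(n_components=8, random_state=0).fit(X_tf)

# Top terms per derived topic.
terms = tf_vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {', '.join(top)}")
```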

Generation of Time-Series Data for Multisource Satellite Imagery through Automated Satellite Image Collection (자동 위성영상 수집을 통한 다종 위성영상의 시계열 데이터 생성)

  • Yunji Nam; Sungwoo Jung; Taejung Kim; Sooahm Rhee
    • Korean Journal of Remote Sensing / v.39 no.5_4 / pp.1085-1095 / 2023
  • Time-series data generated from satellite observations are a crucial resource for change detection and monitoring across various fields. Existing research on time-series data generation primarily relies on single-image analysis to maintain data uniformity, with ongoing efforts to enhance spatial and temporal resolution by utilizing diverse image sources. Despite the emphasized significance of time-series data, automated data collection and preprocessing for research purposes are largely absent. To address this limitation, this paper proposes a system that automates the collection of satellite imagery over user-specified areas to generate time-series data. The research aims to collect data from various satellite sources over a specific region and convert them into time-series data, and it develops an automatic satellite image collection system for this purpose. Using this system, users can collect and extract data for their regions of interest and use the data immediately. Experimental results show the feasibility of automatically acquiring freely available Landsat and Sentinel images from the web and incorporating manually input high-resolution satellite images. Comparisons between the automatically collected images and images edited from high-resolution satellite data showed minimal discrepancies, with no significant errors in the generated output.
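
One plausible way to automate the collection step is to query a public STAC catalog for Landsat and Sentinel scenes over a user-specified area; the endpoint, bounding box, period, and cloud-cover filter below are illustrative assumptions, since the abstract does not name the system's actual data sources.

```python
from pystac_client import Client

# Hypothetical AOI (lon/lat bounding box) and collection period.
BBOX = [126.7, 37.4, 127.2, 37.7]
PERIOD = "2023-01-01/2023-12-31"

# Earth Search is a public STAC endpoint serving both sensor families.
catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a", "landsat-c2-l2"],
    bbox=BBOX,
    datetime=PERIOD,
    query={"eo:cloud_cover": {"lt": 20}},  # skip heavily clouded scenes
)

# Sort the scenes by acquisition time to form the time series.
items = sorted(search.items(), key=lambda it: it.datetime)
for it in items:
    print(it.datetime.date(), it.collection_id, it.id)
```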

CNN-LSTM-based Upper Extremity Rehabilitation Exercise Real-time Monitoring System (CNN-LSTM 기반의 상지 재활운동 실시간 모니터링 시스템)

  • Jae-Jung Kim; Jung-Hyun Kim; Sol Lee; Ji-Yun Seo; Do-Un Jeong
    • Journal of the Institute of Convergence Signal Processing / v.24 no.3 / pp.134-139 / 2023
  • Rehabilitation patients undergo outpatient treatment and perform daily rehabilitation exercises to recover physical function, with the aim of returning to society quickly after surgical treatment. Unlike exercising in a hospital with the help of a professional therapist, performing rehabilitation exercises alone in daily life involves many difficulties. In this paper, we propose a CNN-LSTM-based real-time monitoring system for upper-limb rehabilitation so that patients can exercise efficiently and with correct posture in daily life. The proposed system measures biosignals through shoulder-mounted hardware equipped with EMG and IMU sensors, performs preprocessing and normalization for learning, and uses the result as a training dataset. The implemented model consists of three convolutional stacks, each with a pooling layer, for feature detection and two LSTM layers for classification, and it achieved 97.44% accuracy on the validation data. We then conducted a comparative evaluation against the Teachable Machine: our model achieved 93.6% and the Teachable Machine 94.4%, confirming that both models show similar classification performance.
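
A hedged Keras sketch of a comparable CNN-LSTM architecture, reading the abstract's three convolutional stacks with pooling plus two LSTM layers at face value; the window length, channel count (EMG plus 6-axis IMU), and number of exercise classes are assumptions the abstract does not specify.

```python
from tensorflow.keras import layers, models

WINDOW, CHANNELS, N_CLASSES = 200, 7, 4  # assumed signal window and labels

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    # Three Conv1D + pooling stages for feature detection.
    layers.Conv1D(32, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(128, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    # Two LSTM layers over the temporal features, then classification.
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```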

A Study on the Development of integrated Process Safety Management System based on Artificial Intelligence (AI) (인공지능(AI) 기반 통합 공정안전관리 시스템 개발에 관한 연구)

  • KyungHyun Lee; RackJune Baek; WooSu Kim; HeeJeong Choi
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.403-409 / 2024
  • In this paper, guidelines are proposed for the design of an Artificial Intelligence (AI) based integrated Process Safety Management (PSM) system to enhance workplace safety using data from the process safety reports submitted by operators of hazardous and risky facilities in accordance with the Occupational Safety and Health Act. A system following the proposed guidelines can be implemented separately by individual facility operators or by specialized process safety management agencies for single or multiple workplaces. It is structured around key components and stages, including data collection and preprocessing, expansion and segmentation, labeling, and the construction of training datasets. The system collects process operation data and change approval data from various processes, enabling potential fault prediction and maintenance planning through the analysis of all data generated in workplace operations, thereby supporting decision-making during process operation. Moreover, it offers savings in time and cost, detection and prediction of various risk factors including human error, and continuous model improvement through the use of accurate and reliable training data and specialized datasets. Through this approach, workplace safety can be enhanced and accidents prevented.
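
The abstract names the pipeline stages (collection and preprocessing, segmentation, labeling, training set construction) without detail, so the following is only a loose sketch under assumed column names, window length, and labeling rule.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

WINDOW = 60  # assumed segment length (records per training sample)

def build_dataset(raw: pd.DataFrame):
    """Turn process-operation records into a labeled training set."""
    # Preprocessing: drop incomplete records and standardize the reading
    # (column names "timestamp", "tag", "value", "fault" are hypothetical).
    df = raw.dropna(subset=["timestamp", "tag", "value", "fault"]).copy()
    df["value"] = (df["value"] - df["value"].mean()) / df["value"].std()

    # Segmentation: fixed-length windows per instrument tag.
    segments, labels = [], []
    for _, g in df.sort_values("timestamp").groupby("tag"):
        vals = g["value"].to_numpy()
        flt = g["fault"].to_numpy()
        for i in range(0, len(vals) - WINDOW, WINDOW):
            segments.append(vals[i:i + WINDOW])
            # Labeling: a window is positive if any fault occurred in it.
            labels.append(int(flt[i:i + WINDOW].any()))

    # Construction of the training/validation datasets.
    return train_test_split(segments, labels, test_size=0.2, random_state=0)
```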

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung; Song, Min-chae; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the models. Word vectors here generally refer to vector representations of words obtained by splitting sentences on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in sentiment analysis studies of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions then arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting the research questions and compare classification accuracy through a non-static CNN (Convolutional Neural Network) model taking the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of three criteria. First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing, namely, only splitting sentences versus additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: whether the morphemes themselves are entered or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. The results suggest that utilizing same-domain text even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags including the incomprehensible category lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for morphemes seem not to have any definite influence on classification accuracy.
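
For reference, a minimal gensim sketch of morpheme vector derivation under the stated settings (CBOW, window 5, 300 dimensions); the toy morpheme-split sentences stand in for the output of a real Korean morphological analyzer, and POS tags could be attached to each token for the tagged variants.

```python
from gensim.models import Word2Vec

# Hypothetical input: reviews already split into morphemes, the unit the
# study uses in place of space-separated words.
sentences = [
    ["이", "제품", "예쁘", "고", "좋", "다"],
    ["배송", "빠르", "고", "가격", "싸", "다"],
]

# CBOW (sg=0) with context window 5 and 300 dimensions, per the abstract.
model = Word2Vec(
    sentences,
    vector_size=300,
    window=5,
    sg=0,          # CBOW rather than skip-gram
    min_count=1,   # the study varies this minimum-frequency threshold
)
vec = model.wv["예쁘"]  # one 300-dimensional morpheme vector
```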

A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae; Kim, Seonghyeon; Tak, Onsik; Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.95-118 / 2017
  • Recently, centered on the downtown area, transactions of row housing and multiplex housing have become active, and platform services such as Zigbang and Dabang are growing. Row housing and multiplex housing are a blind spot of real estate information, and changes in market size and the information asymmetry caused by changing demand have become a social problem. In addition, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korea Appraisal Board (hereafter, KAB) were established along administrative boundaries and have been used in existing real estate studies; because they are urban planning zones, they are not a district classification suited to real estate research. Building on existing studies, this study finds that the spatial structure of Seoul needs to be redefined when estimating future housing prices. Accordingly, this study attempts to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing. In other words, the simple division by existing administrative districts has proved inefficient, so this study aims to cluster Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to actual transaction price data of row and multiplex housing, and the K-Means clustering algorithm was used to cluster the spatial structure of Seoul. The study used actual transaction prices of Seoul row and multiplex housing from January 2014 to December 2016 and the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Data preprocessing consisted of removing underground transactions, standardizing price per area, and removing outlier transaction cases (standardized values above 5 or below -5), reducing the data from 132,707 to 126,759 cases. The R program was used as the analysis tool. After preprocessing, a data model was constructed: K-means clustering was performed first, a regression analysis was then conducted using the hedonic model, and a cosine similarity analysis followed. Based on the constructed data model, Seoul was clustered on the basis of longitude and latitude and compared with the existing districts. The results indicate that the goodness of fit of the model was above 75% and the variables used in the hedonic model were significant. In other words, the 5 or 25 existing administrative districts were reorganized into 16 districts. This study thus derives a clustering method for row and multiplex housing in Seoul that uses the K-Means clustering algorithm and a hedonic model reflecting price characteristics, and it presents academic and practical implications along with the limitations of the study and directions for future research. The academic implications are that clustering by price characteristics improves on the districts used by the Seoul Metropolitan Government, KAB, and existing real estate research, and that, whereas apartments have been the main subject of existing real estate research, this study proposes a method of classifying areas in Seoul using public information (i.e., actual transaction data from MOLIT) under Government 3.0. The practical implications are that the results can serve as basic data for real estate research on row and multiplex housing, that research on row and multiplex housing is expected to be activated, and that the accuracy of models of actual transactions is expected to increase. Future research should conduct various analyses to overcome the limitations of this study.
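
A compact sketch of the two-stage method: K-Means on coordinates, then a hedonic regression with cluster dummies. The 16 clusters follow the study's result; the loader and attribute column names are hypothetical placeholders (the original analysis was done in R, so this Python version is only an analogue).

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.cluster import KMeans

df = load_transactions()  # hypothetical: preprocessed MOLIT records

# Spatial clustering of Seoul on longitude/latitude into 16 districts,
# the number the study arrived at.
df["cluster"] = KMeans(n_clusters=16, random_state=0).fit_predict(df[["lon", "lat"]])

# Hedonic model: standardized price per area regressed on housing
# attributes plus cluster dummies (attribute names are illustrative).
X = pd.get_dummies(df[["area", "age", "floor", "land_value", "cluster"]],
                   columns=["cluster"], drop_first=True)
ols = sm.OLS(df["price_per_area"], sm.add_constant(X.astype(float))).fit()
print(ols.rsquared)  # the paper reports a fit above 75%
```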

A Study on Forecasting Accuracy Improvement of Case Based Reasoning Approach Using Fuzzy Relation (퍼지 관계를 활용한 사례기반추론 예측 정확성 향상에 관한 연구)

  • Lee, In-Ho; Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.67-84 / 2010
  • In business, forecasting is the work of estimating what is expected to happen in the future in order to make managerial decisions and plans. Accurate forecasting is therefore very important for major managerial decision making and is the basis for various business strategies, but it is very difficult to make unbiased and consistent estimates because of the uncertainty and complexity of the future business environment. That is why we should use scientific forecasting models to support business decision making and strive to minimize a model's forecasting error, the difference between observation and estimate. Nevertheless, minimizing this error is not an easy task. Case-based reasoning is a problem-solving method that utilizes similar past cases to solve the current problem. To build successful case-based reasoning models, retrieving not only the most similar case but also the most relevant case is very important, and to retrieve similar and relevant cases from past cases, the measurement of similarity between cases is a key factor. In particular, if the cases contain symbolic data, measuring distances is more difficult. The purpose of this study is to improve the forecasting accuracy of the case-based reasoning approach using fuzzy relation and composition. Two methods are adopted to measure the similarity between cases containing symbolic data: one derives the similarity matrix following binary logic (a judgment of sameness between two symbolic values), and the other derives the similarity matrix following fuzzy relation and composition. The study proceeds in the following order: data gathering and preprocessing, model building and analysis, validation analysis, and conclusion. First, in data gathering and preprocessing, we collect a data set that includes categorical dependent variables. The data set is cross-sectional, and its independent variables include several qualitative variables expressed as symbolic data. The research data consist of financial ratios and the corresponding bond ratings of Korean companies; the ratings cover all bonds rated by one of the bond rating agencies in Korea. The total sample includes 1,816 companies whose commercial papers were rated in the period 1997-2000. Credit grades are defined as outputs and classified into 5 rating categories (A1, A2, A3, B, C) according to credit level. Second, in model building and analysis, we derive the similarity matrices following binary logic and fuzzy composition to measure the similarity between cases containing symbolic data; the types of fuzzy composition used are max-min, max-product, and max-average. The analysis is then carried out by the case-based reasoning approach with the derived similarity matrices. Third, in validation analysis, we verify the models through the McNemar test based on hit ratio. Finally, we draw conclusions. As a result, the similarity measuring method using fuzzy relation and composition shows good forecasting performance compared to the method using binary logic for similarity measurement between symbolic data, but the differences in forecasting performance among the types of fuzzy composition are not statistically significant. The contribution of this study is to propose a methodology in which fuzzy relation and fuzzy composition are applied to the similarity measurement between symbolic data, the most important factor in building a case-based reasoning model.
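
To make the two similarity schemes concrete, a small NumPy sketch of max-min fuzzy composition next to the binary exact-match rule; the membership values are illustrative, not drawn from the bond rating data.

```python
import numpy as np

def max_min_composition(R, S):
    # (R ∘ S)[i, j] = max_k min(R[i, k], S[k, j]); the max-product and
    # max-average variants swap min() for a product or an average.
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

# Toy fuzzy relations between cases and symbolic attribute values.
R = np.array([[1.0, 0.4], [0.3, 0.9]])
S = np.array([[0.8, 0.2], [0.5, 0.7]])
print(max_min_composition(R, S))

# Binary-logic similarity, by contrast, is just exact-match 0/1.
a = np.array(["A1", "B"])
b = np.array(["A1", "A2"])
print((a[:, None] == b[None, :]).astype(int))
```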

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok; Yang, Seok Woo; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require more computation and can eventually cause high computational cost and overfitting in the model, so a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, from merely lessening noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect the performance of classifiers for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition, word embeddings, low-dimensional vector space representations of words that can capture semantic and syntactic information, are also utilized. For improving performance, recent studies have suggested methods in which the word dictionary is modified according to positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we expect the words similar to them to also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally eliminate words similar to those low-information-gain words and build word embeddings. These filtered texts and word embeddings are then applied to deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes with a helpful-vote ratio over 70% were classified as helpful reviews; since Yelp only shows the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared them against Word2Vec and GloVe embeddings that used all the words, and showed that one of the proposed methods outperforms the embeddings using all words. By removing unimportant words, we obtain better performance; however, removing too many words lowers performance. For future research, diverse preprocessing approaches and in-depth analysis of word co-occurrence should be considered when measuring similarity values among words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, making it possible to identify promising combinations of word embedding and elimination methods.
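
A hedged sketch of the elimination procedure on toy data: information gain (mutual information) flags unimportant words, and Word2Vec cosine similarity expands the elimination set. The thresholds are assumptions; the abstract does not state the paper's cutoffs.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

texts = ["great battery life", "poor screen broke fast", "great value"]
labels = [1, 0, 1]  # hypothetical helpful / not-helpful flags

# Information gain (mutual information) of each word with the class label.
vec = CountVectorizer(binary=True).fit(texts)
X = vec.transform(texts)
ig = mutual_info_classif(X, labels, discrete_features=True)
words = vec.get_feature_names_out()

low_ig = {w for w, g in zip(words, ig) if g < 0.05}  # assumed cutoff

# Expand the elimination set with words similar (by cosine) to low-IG words.
w2v = Word2Vec([t.split() for t in texts], vector_size=50, min_count=1, seed=0)
expanded = set(low_ig)
for w in low_ig:
    expanded |= {s for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.9}

# Filtered texts feed the downstream CNN / attention-BiLSTM classifiers.
filtered = [" ".join(t for t in doc.split() if t not in expanded) for doc in texts]
print(filtered)
```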

Robust Semi-auto Calibration Method for Various Cameras and Illumination Changes (다양한 카메라와 조명의 변화에 강건한 반자동 카메라 캘리브레이션 방법)

  • Shin, Dong-Won; Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.21 no.1 / pp.36-42 / 2016
  • Recently, much 3D content has been produced through multiview camera systems. In such systems, since a difference of viewpoint between color and depth cameras is inevitable, the camera parameters play an important role in adjusting the viewpoint as a preprocessing step. The conventional camera calibration method is inconvenient for users, since pattern features must be chosen manually after capturing a planar chessboard in various poses. Therefore, we propose a semi-automatic camera calibration method using circular sampling and homography estimation. First, the proposed method extracts candidate pattern features from the images with the FAST corner detector. Next, we reduce the number of candidates by circular sampling and obtain the complete point cloud through homography estimation. Lastly, we compute accurate, sub-pixel positions of the pattern features by approximating a paraboloid surface. We investigated which factors affect the result of pattern feature detection at each step. Compared to the conventional method, the proposed method removes the inconvenience of manual operation while maintaining the accuracy of the camera parameters.
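
A loose OpenCV sketch of the three steps (FAST candidates, homography-based grid completion, sub-pixel refinement); the candidate-to-grid matching is a crude placeholder, and OpenCV's cornerSubPix quadratic fit stands in for the paper's paraboloid approximation.

```python
import cv2
import numpy as np

img = cv2.imread("chessboard.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Step 1: candidate pattern features via the FAST corner detector.
fast = cv2.FastFeatureDetector_create(threshold=40)
candidates = np.float32([kp.pt for kp in fast.detect(img, None)])

# Step 2 (stand-in): the paper prunes candidates by circular sampling and
# completes the grid by homography estimation; here we only illustrate a
# homography fit between an ideal 9x6 grid and matched corners.
ideal = np.float32([[x, y] for y in range(6) for x in range(9)]) * 30.0
matched = candidates[:54] if len(candidates) >= 54 else ideal  # placeholder match
H, _ = cv2.findHomography(ideal, matched, cv2.RANSAC)
grid = cv2.perspectiveTransform(ideal.reshape(-1, 1, 2), H)

# Step 3: sub-pixel refinement of the pattern feature positions.
term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
corners = cv2.cornerSubPix(img, grid.astype(np.float32), (5, 5), (-1, -1), term)
```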