• Title/Summary/Keyword: Data preprocessing technique

Analysis of Research Trends Related to Drug Repositioning Based on Machine Learning (머신러닝 기반의 신약 재창출 관련 연구 동향 분석)

  • So Yeon Yoo;Gyoo Gun Lim
    • Information Systems Review / v.24 no.1 / pp.21-37 / 2022
  • Drug repositioning, one of the methods of developing new drugs, is a useful way to discover new indications by allowing drugs that have already been approved for human use to be applied to other purposes. Recently, with the development of machine learning technology, cases of analyzing vast amounts of biological information and using it to develop new drugs are increasing. Applying machine learning to drug repositioning can help find effective treatments quickly. The world is currently struggling with a new disease, severe acute respiratory syndrome caused by a coronavirus (COVID-19). Drug repositioning, which repurposes drugs that have already been clinically approved, could be an alternative route to therapeutics for treating COVID-19 patients. This study examines research trends in the field of drug repositioning using machine learning techniques. A total of 4,821 papers were collected from PubMed with the keyword 'Drug Repositioning' using web scraping. After data preprocessing, frequency analysis, LDA-based topic modeling, random forest classification, and prediction performance evaluation were performed on 4,419 papers. Associated words were analyzed with a Word2vec model; after PCA dimensionality reduction, K-Means clustering generated labels, and the structure of the literature was visualized with the t-SNE algorithm. Hierarchical clustering was applied to the LDA results and visualized as a heat map. This study identified research topics related to drug repositioning and presented a method for deriving and visualizing meaningful topics from a large body of literature using machine learning algorithms. The results are expected to serve as basic data for establishing research or development strategies in the field of drug repositioning.
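To make the topic-modeling step concrete, here is a minimal sketch of LDA over a bag-of-words matrix using scikit-learn. The placeholder documents, stop-word setting, and topic count are assumptions for illustration, not the paper's actual corpus or configuration.

```python
# Minimal LDA topic-modeling sketch (illustrative; not the paper's setup).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder documents standing in for the 4,419 preprocessed abstracts.
abstracts = [
    "drug repositioning with machine learning for covid treatment",
    "network based drug target interaction prediction",
    "random forest classification of repurposed compounds",
]

# Bag-of-words term counts; removing English stop words is a typical choice.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(abstracts)

# Fit LDA; the number of topics would be tuned in practice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Report the top words per topic.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top)}")
```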

Measurement of Quality Parameters of Honey by Reflectance Spectra

  • Park, Chang-Hyun;Yang, Won-Jun;Sohn, Jae-Hyung;Kim, Jong-Hoon
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1530-1530 / 2001
  • The objectives of this study were to develop models to predict quality parameters of Korean bee honeys by visible and NIR spectroscopic techniques. Two kinds of bee honey, from acacia and polyflower sources, were tested. The honeys were harvested in the spring of 2000 and stored at 20°C during the experiments. A total of 394 honey samples were analyzed. Reflectance spectra, moisture content, ash, invert sugar, sucrose, F/G (fructose/glucose) ratio, HMF (hydroxymethylfurfural), and C12/C13 ratio were measured. The average values for the tested honeys were 19.9% moisture content, 0.12% ash, 68.4% invert sugar, 5.7% sucrose, 1.27 F/G ratio, 14.4 mg/kg HMF, and -19.1 C12/C13 ratio. A spectrophotometer equipped with a single-beam scanning monochromator (NIR Systems, Model 6500, USA) and a horizontal setup module was used to collect reflectance data from the honey. Reflectance spectra were measured over the wavelength range 400∼2,498 nm at 2 nm intervals. Thirty-two repetitive scans were averaged, transformed to log(1/Reflectance), and stored in a microcomputer file, forming one spectrum per measurement. A sample cell and reflectance plate were made to hold the honey samples consistently. The spectra were divided into a calibration set, used during model development, and a validation set, used to predict quality parameters from unknown spectra. PLS (Partial Least Squares) models were developed to predict the quality parameters of the honeys. The first and second derivatives of the raw spectra were also used to develop models with a proper smoothing gap. MSC (multiplicative scatter correction) and SNV & Dtr. (standard normal variate and detrending) preprocessing were applied to all spectra to minimize sample-to-sample light scatter differences. The PLS models showed good relationships between predicted and measured quality parameters in the wavelength range 1100∼2200 nm. However, the PLS analysis was not good enough to predict the HMF of the honeys.
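The SNV correction and PLS calibration named in the abstract can be sketched as below; the spectra and moisture values are synthetic stand-ins, and the number of latent variables is an assumed choice rather than the study's setting.

```python
# SNV preprocessing + PLS calibration sketch (synthetic data, assumed settings).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(394, 550))             # stand-in log(1/R) spectra
y = 19.9 + rng.normal(scale=0.5, size=394)  # stand-in moisture content (%)

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

X_cal, X_val, y_cal, y_val = train_test_split(snv(X), y, random_state=0)

pls = PLSRegression(n_components=10)  # latent variables tuned on validation
pls.fit(X_cal, y_cal)
print("validation R^2:", pls.score(X_val, y_val))
```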

Analysis of Research Trends in Tax Compliance using Topic Modeling (토픽모델링을 활용한 조세순응 연구 동향 분석)

  • Kang, Min-Jo;Baek, Pyoung-Gu
    • The Journal of the Korea Contents Association / v.22 no.1 / pp.99-115 / 2022
  • In this study, domestic academic journal papers on tax compliance, tax consciousness, and faithful tax payment (hereinafter "tax compliance") were comprehensively analyzed from an interdisciplinary perspective, tax compliance being a representative research topic in the field of tax science. To achieve the research purpose, a topic modeling technique was applied as part of text mining. In a flow of data collection, keyword preprocessing, and topic model analysis, potential research topics were derived from the tax compliance related keywords registered by the researchers in a total of 347 papers. The results can be summarized as follows. First, in the keyword analysis, keywords such as tax investigation, tax avoidance, and the honest tax reporting system appeared among the top five keywords both by simple term frequency and by TF-IDF value, which weights the relative importance of keywords. On the other hand, the keyword tax evasion ranked among the top keywords by TF-IDF value but was not prominent by simple term frequency. Second, eight potential research topics were derived through topic modeling: (1) tax fairness and suppression of tax offenses, (2) the ideology of the tax law and the validity of tax policies, (3) the principle of substance over form and guarantee of tax receivables, (4) tax compliance costs and tax administration services, (5) the tax return self-assessment system and tax experts, (6) tax climate and strategic tax behavior, (7) multifaceted tax behavior and differential compliance intentions, and (8) tax information systems and tax resource management. By comprehensively examining the various perspectives on tax compliance from an interdisciplinary standpoint, the research grasps past research trends on tax compliance and suggests directions for future research.
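The contrast between simple term frequency and TF-IDF weighting can be seen in a small sketch; the keyword strings are invented examples, not the 347 analyzed papers.

```python
# Term frequency vs. TF-IDF ranking sketch (invented keyword strings).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "tax investigation tax avoidance honest tax reporting",
    "tax evasion tax avoidance penalties",
    "tax compliance costs tax administration services",
]

for name, vec in [("TF", CountVectorizer()), ("TF-IDF", TfidfVectorizer())]:
    matrix = vec.fit_transform(docs)
    totals = np.asarray(matrix.sum(axis=0)).ravel()  # corpus-wide weight per term
    terms = vec.get_feature_names_out()
    top = [terms[i] for i in totals.argsort()[::-1][:5]]
    print(f"{name} top terms: {top}")
```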

A Code Clustering Technique for Unifying Method Full Path of Reusable Cloned Code Sets of a Product Family (제품군의 재사용 가능한 클론 코드의 메소드 경로 통일을 위한 코드 클러스터링 방법)

  • Kim, Taeyoung;Lee, Jihyun;Kim, Eunmi
    • KIPS Transactions on Software and Data Engineering / v.12 no.1 / pp.1-18 / 2023
  • Similar software is often developed with the Clone-And-Own (CAO) approach, which copies and modifies existing artifacts. The CAO approach is considered a bad practice because it makes maintenance difficult as the number of cloned products increases. Software product line engineering is a methodology that can resolve the issues of the CAO approach by developing a product family through systematic reuse. Migrating product families that have been developed with the CAO approach to product line engineering begins with finding, integrating, and building their artifacts as reusable assets. However, cloning occurs at various levels, from directories to code lines, and clone structures can change, making it difficult to build a product line code base simply by finding clones. Successful migration thus requires unifying the source code's file paths, class names, and method signatures. This paper proposes a clustering method that identifies sets of similar code scattered across product variants whose method full paths differ and therefore need to be unified. To show the effectiveness of the proposed method, we conducted an experiment using the Apo Games product line, which has evolved through the CAO approach. The average precision of clustering performed without preprocessing was 0.91 and the number of identified common clusters was 0, whereas our method achieved 0.98 and 15, respectively.
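A rough sketch of the underlying idea, under stated assumptions: near-identical method bodies are grouped greedily by textual similarity, and clusters whose method full paths disagree are flagged for unification. The sample data, difflib similarity measure, and threshold are illustrative; the paper's clustering method is more elaborate than this.

```python
# Greedy clustering of similar method bodies; flag path disagreements.
from difflib import SequenceMatcher

# (method full path, body) pairs standing in for code mined from variants.
methods = [
    ("gameA/src/engine/Render.draw", "for s in sprites: blit(s)"),
    ("gameB/src/gfx/Renderer.draw",  "for s in sprites: blit(s)"),
    ("gameB/src/gfx/Audio.play",     "mixer.play(sound)"),
]

def similar(a, b, threshold=0.9):
    """True when two bodies are textually near-identical."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

clusters = []
for path, body in methods:
    for cluster in clusters:
        if similar(cluster[0][1], body):   # compare to cluster representative
            cluster.append((path, body))
            break
    else:
        clusters.append([(path, body)])

for cluster in clusters:
    paths = {p for p, _ in cluster}
    if len(paths) > 1:
        print("needs path unification:", sorted(paths))
```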

The Fault Diagnosis Model of Ship Fuel System Equipment Reflecting Time Dependency in Conv1D Algorithm Based on the Convolution Network (합성곱 네트워크 기반의 Conv1D 알고리즘에서 시간 종속성을 반영한 선박 연료계통 장비의 고장 진단 모델)

  • Kim, Hyung-Jin;Kim, Kwang-Sik;Hwang, Se-Yun;Lee, Jang Hyun
    • Journal of Navigation and Port Research / v.46 no.4 / pp.367-374 / 2022
  • The purpose of this study was to propose a deep learning algorithm for the fault diagnosis of fuel pumps and purifiers of autonomous ships. A deep learning algorithm reflecting the time dependence of the measured signal was configured, and failure patterns were trained using vibration signals measured during the equipment's regular operation and failure states. Considering the sequential time dependence of the deterioration implied in the vibration signal, this study adopts Conv1D with sliding-window computation for fault detection. The time dependence was also reflected by transforming the measured signal from a two-dimensional to a three-dimensional representation. Additionally, the optimal values of the hyperparameters of the Conv1D model were determined using the grid search technique. The results show that the proposed data preprocessing method and Conv1D model reflect the sequential dependency between a fault and its effect on the measured signal, and appropriately perform both anomaly and failure detection for the equipment chosen for application.
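A minimal sketch of sliding-window segmentation feeding a Conv1D classifier, in Keras; the synthetic signal, window and stride values, and layer sizes are assumptions, not the study's grid-searched configuration.

```python
# Sliding-window Conv1D fault classifier sketch (illustrative settings).
import numpy as np
import tensorflow as tf

signal = np.random.randn(10_000)          # stand-in vibration measurement
window, stride = 256, 128

# Slice the 1-D signal into overlapping windows: (n_windows, window, 1).
starts = range(0, len(signal) - window + 1, stride)
X = np.stack([signal[s:s + window] for s in starts])[..., np.newaxis]
y = np.random.randint(0, 2, size=len(X))  # stand-in normal/fault labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=9, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, kernel_size=9, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```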

Development of Deep Learning Structure to Secure Visibility of Outdoor LED Display Board According to Weather Change (날씨 변화에 따른 실외 LED 전광판의 시인성 확보를 위한 딥러닝 구조 개발)

  • Sun-Gu Lee;Tae-Yoon Lee;Seung-Ho Lee
    • Journal of IKEEE / v.27 no.3 / pp.340-344 / 2023
  • In this paper, we propose the development of a deep learning structure to secure the visibility of an outdoor LED display board under changing weather. The proposed technique secures visibility by automatically adjusting the LED luminance according to weather changes, using deep learning with an imaging device. To adjust the luminance automatically, image data of the flattened background region first undergoes preprocessing, and a weather-classification model is then trained on it using a convolutional network. The applied network uses residual learning to reduce the difference between the input and output values, inducing learning that preserves the characteristics of the initial input. Next, a controller recognizes the weather and adjusts the luminance of the display board accordingly: when the surrounding environment becomes bright, the luminance is increased so that the board remains clearly visible, and when the environment becomes dark, where light scattering reduces visibility, the luminance is lowered so that the board can still be seen clearly. Applying the method proposed in this paper, a certified measurement test of luminance under weather changes confirmed that the visibility of the outdoor LED display board was secured across weather conditions.
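The residual learning mentioned above can be sketched as a skip-connection block in Keras; the input size, filter counts, the four weather classes, and the luminance mapping are all illustrative assumptions, not the paper's architecture.

```python
# Residual-block weather classifier sketch (assumed sizes and classes).
import tensorflow as tf

def residual_block(x, filters):
    """Two conv layers whose output is added back to the input (skip path)."""
    shortcut = x
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    x = tf.keras.layers.Add()([shortcut, x])  # residual connection
    return tf.keras.layers.ReLU()(x)

inputs = tf.keras.Input(shape=(64, 64, 3))    # preprocessed background patch
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = residual_block(x, 32)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)  # weather classes
model = tf.keras.Model(inputs, outputs)

# Hypothetical controller table: predicted class -> panel luminance (cd/m^2).
luminance_by_class = {0: 6000, 1: 4500, 2: 3000, 3: 1500}
```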

Development of Deep Learning Structure for Defective Pixel Detection of Next-Generation Smart LED Display Board using Imaging Device (영상장치를 이용한 차세대 스마트 LED 전광판의 불량픽셀 검출을 위한 딥러닝 구조 개발)

  • Sun-Gu Lee;Tae-Yoon Lee;Seung-Ho Lee
    • Journal of IKEEE / v.27 no.3 / pp.345-349 / 2023
  • In this paper, we propose the development of a deep learning structure for defective pixel detection in next-generation smart LED display boards using an imaging device. A technique utilizing imaging devices and deep learning is introduced to automatically detect defects in outdoor LED billboards, aiming at effective management of the billboards and resolution of various errors and issues. The research process consists of three stages. First, planarized image data of the billboard is calibrated to completely remove the background and undergoes the necessary preprocessing to generate a training dataset. Second, the generated dataset is used to train an object recognition network composed of a Backbone and a Head: the Backbone employs CSP-Darknet to extract feature maps, while the Head performs object detection based on the extracted feature maps. Throughout this process, the network is tuned against the confidence score and the Intersection over Union (IoU) error, sustaining continuous learning. In the third stage, the trained model is employed to automatically detect defective pixels on actual outdoor LED billboards. Accredited measurement experiments with the proposed method achieved 100% detection of defective pixels on real LED billboards, confirming improved efficiency in managing and maintaining them. These findings are anticipated to bring about a revolutionary advancement in the management of LED billboards.
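The Intersection over Union (IoU) error the network is tuned against has a simple closed form; a minimal sketch, with boxes given as corner coordinates:

```python
# IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted defective-pixel box vs. its ground-truth annotation.
print(iou((10, 10, 20, 20), (12, 12, 22, 22)))  # ~0.47
```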

Comparative study of flood detection methodologies using Sentinel-1 satellite imagery (Sentinel-1 위성 영상을 활용한 침수 탐지 기법 방법론 비교 연구)

  • Lee, Sungwoo;Kim, Wanyub;Lee, Seulchan;Jeong, Hagyu;Park, Jongsoo;Choi, Minha
    • Journal of Korea Water Resources Association / v.57 no.3 / pp.181-193 / 2024
  • The increasing atmospheric imbalance caused by climate change leads to heavier precipitation and hence more frequent flooding, creating a growing need for technology to detect and monitor these events. To minimize flood damage, continuous monitoring is essential, and flooded areas can be detected with Synthetic Aperture Radar (SAR) imagery, which is unaffected by weather conditions. The observed data undergo a preprocessing step using a median filter to reduce noise. To evaluate the effectiveness of each method for flood detection, two techniques were employed to classify water and non-water bodies: the Otsu method, an optimal threshold-based classifier, and the Support Vector Machine (SVM), a machine learning technique that minimizes misclassification through training. The overall performance of the models was assessed using a confusion matrix. The Otsu method delineated the boundaries between water and non-water bodies well but showed a higher rate of misclassification due to the influence of mixed substances. Conversely, SVM produced a lower false positive rate, was less sensitive to mixed substances, and thus exhibited higher accuracy under non-flood conditions. While the Otsu method showed slightly higher accuracy than SVM under flood conditions, the difference was less than 5% (Otsu: 0.93, SVM: 0.90); in pre-flood and post-flood conditions, however, the accuracy difference exceeded 15%, indicating that SVM is more suitable for water body and flood detection (Otsu: 0.77, SVM: 0.92). Based on these findings, more accurate detection of water bodies and floods is expected to contribute to minimizing flood-related damage and losses.
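A minimal sketch of the two classifiers being compared, on synthetic backscatter values standing in for the median-filtered SAR pixels; the distributions and the single-feature setup are assumptions for illustration.

```python
# Otsu threshold vs. SVM on synthetic SAR-like backscatter (illustrative).
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
water = rng.normal(-18, 2, 500)   # low backscatter (dB), stand-in for water
land = rng.normal(-8, 3, 500)     # higher backscatter, stand-in for land
pixels = np.concatenate([water, land])
labels = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = water

# Otsu: a single global threshold separating the bimodal histogram.
t = threshold_otsu(pixels)
otsu_pred = (pixels < t).astype(int)

# SVM: a trained decision boundary on the same single feature.
svm = SVC().fit(pixels.reshape(-1, 1), labels)
svm_pred = svm.predict(pixels.reshape(-1, 1))

print("Otsu:\n", confusion_matrix(labels, otsu_pred))
print("SVM:\n", confusion_matrix(labels, svm_pred))
```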

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the use and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) or Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy, which is inevitable since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, studies such as Jung and Kim (2012) have used ANN as an alternative, reporting that ANN generated more accurate predictions than statistical methods like MRA. ANN, however, has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model's prediction. Using SVR, we built a model that measures the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and the grid search technique to find the optimal values of the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer; the learning rate and momentum were set to 10%, and the sigmoid function was used as the transfer function of the hidden and output nodes. We repeated the experiments, varying the number of hidden nodes over n/2, n, 3n/2, and 2n, where n is the number of input variables, with a stopping condition of 50,000 learning events. MAE (Mean Absolute Error) was used as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA but showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners willing to build models for recognizing human emotions.
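A minimal sketch of SVR with the ε-insensitive loss and a grid search, using scikit-learn; the feature matrix is synthetic and the parameter grid is illustrative (scikit-learn's gamma plays the role of the σ² width parameter for the RBF kernel).

```python
# SVR + grid search sketch (synthetic features, illustrative grid).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 12))  # stand-in for selected facial features
y = X @ rng.normal(size=12) + rng.normal(scale=0.3, size=297)  # arousal level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Grid over C, the RBF width, and epsilon (half-width of the insensitive tube).
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1], "epsilon": [0.01, 0.1, 0.5]},
    scoring="neg_mean_absolute_error",
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("hold-out MAE:", mean_absolute_error(y_te, grid.predict(X_te)))
```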

Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network (Deep Neural Network와 Convolutional Neural Network 모델을 이용한 산사태 취약성 매핑)

  • Gong, Sung-Hyun;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1723-1735 / 2022
  • Landslides are among the most prevalent natural disasters, threatening both people and property, and can cause damage at the national level, so effective prediction and prevention are essential. Research to produce highly accurate landslide susceptibility maps is steadily being conducted, and various models have been applied to landslide susceptibility analysis: pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks have mainly been used. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data significantly affect the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide susceptibility using a pixel-based deep neural network model and a patch-based convolutional neural network model. The study area was set in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides occur frequently and have caused damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). These factors were built into a spatial database through data preprocessing, and landslide susceptibility maps were predicted using deep neural network (DNN) and CNN models. The models and maps were verified with average precision (AP) and root mean square error (RMSE); the patch-based CNN model showed a 3.4% performance improvement over the pixel-based DNN model. The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land use and landslide management policies.
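The pixel-based vs. patch-based contrast can be sketched as two small Keras models trained on the same labels; the sample counts, patch size, and layer choices are illustrative assumptions, not the study's architectures.

```python
# Pixel-based DNN vs. patch-based CNN sketch (illustrative shapes).
import numpy as np
import tensorflow as tf

n, factors, patch = 1000, 15, 9  # 15 landslide-related factors per pixel
X_pix = np.random.randn(n, factors)                  # factor vector per pixel
X_patch = np.random.randn(n, patch, patch, factors)  # factor patch per pixel
y = np.random.randint(0, 2, size=n)                  # landslide / not

dnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(factors,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(patch, patch, factors)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

for name, model, data in [("DNN", dnn, X_pix), ("CNN", cnn, X_patch)]:
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(data, y, epochs=2, verbose=0)
    print(name, "trained on input shape", data.shape)
```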