• Title/Summary/Keyword: Conventional machine learning


Improving Embedding Model for Triple Knowledge Graph Using Neighborliness Vector (인접성 벡터를 이용한 트리플 지식 그래프의 임베딩 모델 개선)

  • Cho, Sae-rom;Kim, Han-joon
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.3
    • /
    • pp.67-80
    • /
    • 2021
  • The node embedding technique for learning graph representations plays an important role in obtaining good-quality results in graph mining. Until now, representative node embedding techniques have been studied mainly for homogeneous graphs, which makes it difficult to learn knowledge graphs in which each edge carries its own meaning. To resolve this problem, the conventional Triple2Vec technique builds an embedding model by learning a triple graph in which each node pair and connecting edge of the knowledge graph becomes a single node. However, the Triple2Vec embedding model is limited in the performance it can reach because it computes the relationship between triple nodes with a simple measure. Therefore, this paper proposes a feature extraction technique based on a graph convolutional neural network to improve the Triple2Vec embedding model. The proposed method extracts a neighborliness vector from the triple graph and, for each node, learns the relationships with its neighboring nodes. Category classification experiments on the DBLP, DBpedia, and IMDB datasets show that the embedding model applying the proposed method is superior to the existing Triple2Vec model.
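
As a rough illustration of the neighbor-aggregation idea in the abstract above (not the authors' implementation), the sketch below performs a single symmetric-normalized graph-convolution step over a toy triple graph; the adjacency matrix, the random features, and the normalization choice are all placeholder assumptions.

```python
# Hedged sketch: one graph-convolution propagation step over a toy "triple graph".
# A (adjacency) and X (node features) are placeholders, not the paper's data.
import numpy as np

def gcn_propagate(A, X):
    """Aggregate neighbor features via A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))   # D^{-1/2}
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)      # 4 triple-nodes
X = np.random.rand(4, 3)                       # toy 3-dimensional initial features
neighborliness = gcn_propagate(A, X)           # each row mixes a node with its neighbors
print(neighborliness.shape)                    # (4, 3)
```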

Development of Automatic Segmentation Algorithm of Intima-media Thickness of Carotid Artery in Portable Ultrasound Image Based on Deep Learning (딥러닝 모델을 이용한 휴대용 무선 초음파 영상에서의 경동맥 내중막 두께 자동 분할 알고리즘 개발)

  • Choi, Ja-Young;Kim, Young Jae;You, Kyung Min;Jang, Albert Youngwoo;Chung, Wook-Jin;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research
    • /
    • v.42 no.3
    • /
    • pp.100-106
    • /
    • 2021
  • Measuring intima-media thickness (IMT) with ultrasound images can help in the early detection of coronary artery disease, and numerous machine learning studies have therefore been conducted to measure IMT. However, most of these studies require several preprocessing steps to extract the boundary, and some require manual intervention, so they are not suitable for on-site use in urgent situations. In this paper, we propose to use the deep learning networks U-Net, Attention U-Net, and Pretrained U-Net to automatically segment the intima-media complex. This study also applied the HE, HS, and CLAHE preprocessing techniques to images from a wireless portable ultrasound diagnostic device. As a result, the models with HE preprocessing reached an average Dice coefficient of 71% and those with CLAHE 70%, while the HS-preprocessed models improved to a 72% Dice coefficient. Among them, Pretrained U-Net showed the highest performance, with an average of 74%. When these results were compared with the mean IMT measured by conventional wired ultrasound equipment, the HS-preprocessed Pretrained U-Net showed the highest correlation coefficient.
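
A hedged sketch of two building blocks named in the abstract above, CLAHE/HE preprocessing of a grayscale ultrasound frame and the Dice coefficient used to score segmentations, is shown below; the file name and masks are hypothetical placeholders, and this is not the authors' pipeline.

```python
# Hedged sketch: HE/CLAHE preprocessing and a Dice score (placeholder inputs).
import cv2
import numpy as np

img = cv2.imread("carotid_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name

he_img = cv2.equalizeHist(img)                                 # histogram equalization (HE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_img = clahe.apply(img)                                   # CLAHE-enhanced frame

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (prediction vs. ground truth)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```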

A New Adaptive Kernel Estimation Method for Correntropy Equalizers (코렌트로피 이퀄라이져를 위한 새로운 커널 사이즈 적응 추정 방법)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.627-632
    • /
    • 2021
  • Information-theoretic learning (ITL) has been applied successfully to adaptive signal processing and machine learning applications, but deciding the kernel size, which has a great impact on system performance, remains difficult. The correntropy algorithm, one of the ITL methods, has superior properties of impulsive-noise robustness and channel-distortion compensation. On the other hand, it is also sensitive to the kernel size, and a poor choice can lead to system instability. In this paper, considering that the kernel size appears cubed in the denominator of the cost-function slope, a new adaptive kernel estimation method based on the rate of change of the error power with respect to the kernel size is proposed for the correntropy algorithm. The performance of the proposed kernel-adjusted correntropy algorithm was examined in a distortion-compensation experiment on an impulsive-noise, multipath-distorted channel. The proposed method converges about twice as fast as the conventional algorithm with a fixed kernel size. In addition, it converged appropriately for initial kernel sizes ranging from 2.0 to 6.0, so it has a wide acceptable margin of initial kernel sizes.
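
For context, the correntropy (maximum-correntropy-criterion) equalizer that the abstract builds on can be sketched as a kernel-weighted LMS-style update; the fixed kernel size below is exactly what the paper replaces with its adaptive estimate, and the step size and signals here are placeholder assumptions.

```python
# Hedged sketch of the fixed-kernel MCC (correntropy) weight update; the paper's
# kernel-size adaptation rule is not reproduced here.
import numpy as np

def mcc_update(w, x, d, sigma, mu=0.01):
    """One stochastic-gradient step of the correntropy cost for an FIR equalizer.

    w: weight vector, x: input tap vector, d: desired sample, sigma: kernel size.
    """
    e = d - np.dot(w, x)                           # output error
    kernel = np.exp(-e**2 / (2.0 * sigma**2))      # Gaussian kernel of the error
    return w + mu * (kernel * e / sigma**2) * x    # robust, kernel-weighted update
```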

COVID-19 Diagnosis from CXR images through pre-trained Deep Visual Embeddings

  • Khalid, Shahzaib;Syed, Muhammad Shehram Shah;Saba, Erum;Pirzada, Nasrullah
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.5
    • /
    • pp.175-181
    • /
    • 2022
  • COVID-19 is an acute respiratory syndrome that affects the host's breathing and respiratory system. The first case of the novel disease was reported in 2019; within months it had created a state of emergency across the whole world and was declared a global pandemic, bringing elements of socioeconomic crisis globally. The emergency has made it imperative for professionals to take the necessary measures to diagnose the disease early. The conventional diagnosis for COVID-19 is Polymerase Chain Reaction (PCR) testing. However, in many rural societies these tests are not available or take a long time to provide results. Hence, we propose a COVID-19 classification system based on machine learning and transfer learning models. The proposed approach identifies individuals with COVID-19 and distinguishes them from healthy individuals with the help of Deep Visual Embeddings (DVE). Five state-of-the-art models, VGG-19, ResNet-50, Inceptionv3, MobileNetv3, and EfficientNetB7, were used in this study along with five different pooling schemes to perform deep feature extraction. In addition, the features are normalized using standard scaling, and 4-fold cross-validation is used to validate the performance over multiple versions of the validation data. The best results of 88.86% UAR, 88.27% specificity, 89.44% sensitivity, 88.62% accuracy, 89.06% precision, and 87.52% F1-score were obtained using ResNet-50 with average pooling and class-weighted logistic regression as the classifier.
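
The classification pipeline the abstract describes, pretrained ResNet-50 embeddings with average pooling, standard scaling, class-weighted logistic regression, and 4-fold cross-validation, can be approximated with the hedged sketch below; the image and label arrays are random placeholders, not CXR data.

```python
# Hedged sketch: ResNet-50 average-pooled embeddings + scaled, class-weighted
# logistic regression, evaluated with 4-fold CV. Data are random placeholders.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

images = np.random.rand(32, 224, 224, 3).astype("float32")   # stand-in for CXR images
labels = np.array([0, 1] * 16)                                # stand-in COVID/healthy labels

backbone = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
features = backbone.predict(tf.keras.applications.resnet50.preprocess_input(images * 255.0))

clf = make_pipeline(StandardScaler(), LogisticRegression(class_weight="balanced", max_iter=1000))
scores = cross_val_score(clf, features, labels, cv=StratifiedKFold(n_splits=4), scoring="recall_macro")
print("4-fold UAR (macro recall): %.3f" % scores.mean())      # UAR = unweighted average recall
```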

Development of suspended solid concentration measurement technique based on multi-spectral satellite imagery in Nakdong River using machine learning model (기계학습모형을 이용한 다분광 위성 영상 기반 낙동강 부유 물질 농도 계측 기법 개발)

  • Kwon, Siyoon;Seo, Il Won;Beak, Donghae
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.2
    • /
    • pp.121-133
    • /
    • 2021
  • Suspended solids (SS) in rivers are mainly introduced from non-point pollutant sources or appear naturally in the water body, and they are an important water-quality factor that may cause long-term water pollution when deposited. However, the conventional method of measuring suspended-solids concentration is labor-intensive, and it is difficult to obtain a vast amount of data via point measurement. Therefore, in this study, a remote sensing-based model for measuring suspended-solids concentration in the Nakdong River was developed using Sentinel-2 data, which provides high-resolution multi-spectral satellite images. To overcome the limitations of existing remote sensing-based regression equations, the proposed model considers the spectral bands and band ratios of various wavelength ranges using a machine learning model, Support Vector Regression (SVR). The optimal combination of variables was derived using Recursive Feature Elimination (RFE) and the weight coefficients of each SVR variable. The results show that the 705 nm band, which belongs to the red-edge wavelength range, was estimated to be the most important spectral band, and the proposed SVR model produced more accurate measurements than the previous regression equations. By using RFE, the SVR model developed in this study reduces the variable dependence of the existing regression equations based on a single spectral band or band ratio and provides a more accurate prediction of the spatial distribution of suspended-solids concentration.
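
A minimal sketch of the variable-selection step described above, RFE wrapped around a linear-kernel SVR so that per-variable weights are available, is given below; the feature matrix of bands and band ratios and the SS concentrations are placeholder arrays.

```python
# Hedged sketch: recursive feature elimination (RFE) over candidate spectral
# bands/band ratios with a linear-kernel SVR. X and y are placeholders.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

X = np.random.rand(100, 12)   # 12 candidate bands / band ratios (placeholder)
y = np.random.rand(100)       # measured suspended-solids concentrations (placeholder)

selector = RFE(estimator=SVR(kernel="linear"), n_features_to_select=5)
selector.fit(X, y)
print("Selected feature mask:", selector.support_)
print("Feature ranking:", selector.ranking_)
```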

A Technical Analysis on Deep Learning based Image and Video Compression (딥 러닝 기반의 이미지와 비디오 압축 기술 분석)

  • Cho, Seunghyun;Kim, Younhee;Lim, Woong;Kim, Hui Yong;Choi, Jin Soo
    • Journal of Broadcast Engineering
    • /
    • v.23 no.3
    • /
    • pp.383-394
    • /
    • 2018
  • In this paper, we investigate recently and actively studied image and video compression techniques based on deep learning. A deep learning based image compression technique feeds the image to be compressed into a deep neural network, extracts a latent vector either recurrently or all at once, and encodes it. To increase compression efficiency, the network is trained so that the encoded latent vector can be expressed with fewer bits while the quality of the reconstructed image is enhanced. These techniques can produce images of superior quality compared to conventional image compression, especially at low bit rates. Deep learning based video compression, on the other hand, improves the performance of the coding tools employed in existing video codecs rather than directly processing the video to be compressed. The deep neural network technologies introduced in this paper replace the in-loop filter of the latest video codecs or serve as an additional post-processing filter, improving compression efficiency by improving the quality of the reconstructed image. Likewise, deep neural network techniques applied to intra prediction and encoding are used together with the existing intra prediction tools to improve compression efficiency by increasing prediction accuracy or adding a new intra coding process.
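
To make the rate-distortion idea in the image-compression part concrete, the conceptual sketch below trains a tiny convolutional autoencoder whose loss combines reconstruction error with an L1 penalty on the latent code as a crude rate stand-in; the architecture, trade-off weight, and penalty are illustrative assumptions, not any specific published codec.

```python
# Hedged sketch: a toy autoencoder trained with distortion + lambda * (rate proxy).
import tensorflow as tf

def build_autoencoder():
    inputs = tf.keras.Input(shape=(64, 64, 3))
    z = tf.keras.layers.Conv2D(32, 5, strides=2, padding="same", activation="relu")(inputs)
    z = tf.keras.layers.Conv2D(8, 5, strides=2, padding="same")(z)                  # latent code
    x = tf.keras.layers.Conv2DTranspose(32, 5, strides=2, padding="same", activation="relu")(z)
    outputs = tf.keras.layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inputs, [outputs, z])

model = build_autoencoder()
optimizer = tf.keras.optimizers.Adam()
lam = 0.01                                        # rate-distortion trade-off weight (hypothetical)

def train_step(batch):
    with tf.GradientTape() as tape:
        recon, latent = model(batch, training=True)
        distortion = tf.reduce_mean(tf.square(batch - recon))   # reconstruction (distortion) term
        rate_proxy = tf.reduce_mean(tf.abs(latent))             # L1 penalty as a crude rate stand-in
        loss = distortion + lam * rate_proxy
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

print(float(train_step(tf.random.uniform((8, 64, 64, 3)))))     # one step on random images
```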

Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho;Kim, Yoon-Jae;Kim, Tae-Young
    • Journal of Korea Game Society
    • /
    • v.17 no.6
    • /
    • pp.113-120
    • /
    • 2017
  • With the development of virtual reality technology, user-friendly hand-gesture interfaces for natural interaction with virtual 3D objects have been studied increasingly in recent years. Most earlier studies on hand-gesture interfaces use relatively simple hand gestures. In this paper, we suggest an intuitive hand-gesture interface for interacting with 3D objects in virtual reality applications. For hand-gesture recognition, we first preprocess various hand data and classify the data through a binary decision tree. The classified data are re-sampled, converted into a chain code, and then built into hand feature data using histograms of the chain code. Finally, the input gesture is recognized from the feature data by MCSVM-based machine learning. To test the proposed hand-gesture interface, we implemented a 'Virtual Block' game. Our experiments showed a recognition rate of about 99.2% for 16 kinds of command gestures, and the interface proved more intuitive and user-friendly than a conventional mouse interface.
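
A hedged sketch of the feature/classifier pairing mentioned above, a chain-code histogram fed to a multi-class SVM, appears below; the contour points, training data, and SVM settings are placeholder assumptions rather than the authors' implementation.

```python
# Hedged sketch: 8-direction Freeman chain-code histogram + multi-class SVM.
import numpy as np
from sklearn.svm import SVC

def chain_code_histogram(points):
    """Normalized 8-direction chain-code histogram for an ordered contour point list."""
    directions = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
                  (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    hist = np.zeros(8)
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        if step in directions:
            hist[directions[step]] += 1
    return hist / max(hist.sum(), 1)               # scale-invariant histogram

X = np.random.rand(64, 8)                          # placeholder feature histograms
y = np.random.randint(0, 16, size=64)              # 16 command-gesture labels (placeholder)
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)   # multi-class SVM
```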

Prediction of squeezing phenomenon in tunneling projects: Application of Gaussian process regression

  • Mirzaeiabdolyousefi, Majid;Mahmoodzadeh, Arsalan;Ibrahim, Hawkar Hashim;Rashidi, Shima;Majeed, Mohammed Kamal;Mohammed, Adil Hussein
    • Geomechanics and Engineering
    • /
    • v.30 no.1
    • /
    • pp.11-26
    • /
    • 2022
  • One of the most important issues in tunneling is the squeezing phenomenon. Squeezing can occur during excavation or after the construction of a tunnel, and in both cases it can lead to significant damage. It is therefore important to predict squeezing and consider it in the early design stage of tunnel construction. Different empirical, semi-empirical, and theoretical-analytical methods have been presented to determine squeezing, so it is necessary to examine the ability of each of these methods and identify the best among them. In this study, squeezing in a part of the Alborz service tunnel in Iran was estimated through a number of empirical, semi-empirical, and theoretical-analytical methods. The most robust of these methods was used to build a database of 300 data points for training and 33 for testing, with which machine learning (ML) methods were developed. To this end, three ML models, Gaussian process regression (GPR), artificial neural network (ANN), and support vector regression (SVR), were trained and tested to propose a robust model for predicting the squeezing phenomenon. A comparative analysis between the conventional methods and the ML methods utilized in this study showed that the GPR model is the most robust for predicting squeezing. A sensitivity analysis of the input parameters using the mutual information test (MIT) method showed that the most sensitive parameter for the squeezing phenomenon is the tangential strain (ε_θ^α), with a sensitivity score of 2.18. Finally, the GPR model is recommended for predicting the squeezing phenomenon in tunneling projects. The significance of this work is that it provides a good estimate of the squeezing phenomenon, based on which geotechnical engineers can take the necessary actions to deal with it in pre-construction designs.
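
The GPR model the study recommends can be sketched with scikit-learn as below; the input features, kernel choice, and the roughly 300/33 split mirror the abstract only loosely, and all data are placeholders.

```python
# Hedged sketch: Gaussian process regression on placeholder tunnel parameters.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

X = np.random.rand(333, 6)    # placeholder geotechnical input parameters
y = np.random.rand(333)       # placeholder squeezing measure (e.g. tangential strain)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=33, random_state=0)  # ~300/33 split

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_tr, y_tr)
mean, std = gpr.predict(X_te, return_std=True)    # predictive mean and uncertainty
```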

Cancer-Subtype Classification Based on Gene Expression Data (유전자 발현 데이터를 이용한 암의 유형 분류 기법)

  • Cho Ji-Hoon;Lee Dongkwon;Lee Min-Young;Lee In-Beum
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.10 no.12
    • /
    • pp.1172-1180
    • /
    • 2004
  • Recently, gene expression data, a product of high-throughput technology, have appeared in earnest, and the studies related to them (so-called bioinformatics) occupy an important position in biological and medical research. The microarray is a revolutionary technology that enables us to monitor several thousand genes simultaneously and thus to gain insight into phenomena in the human body (e.g., the mechanism of cancer progression) at the molecular level. To obtain useful information from such gene expression measurements, it is essential to analyze the data with appropriate techniques. However, the high dimensionality of the data can bring about problems such as the curse of dimensionality and singularity in matrix computation, which makes it difficult to apply conventional data analysis methods. Therefore, developing methods that can treat the data effectively has become a challenging issue in computational biology. This research focuses on gene selection and classification for cancer-subtype discrimination based on gene expression (microarray) data.
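
The generic two-stage pipeline the abstract points to, gene selection followed by subtype classification, can be sketched as below; the univariate F-test selector, the linear SVM, and the random expression matrix are illustrative assumptions, not the paper's specific method.

```python
# Hedged sketch: univariate gene selection + subtype classification on toy data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X = np.random.rand(60, 5000)          # 60 samples x 5000 genes (toy microarray matrix)
y = np.repeat([0, 1, 2], 20)          # three hypothetical cancer subtypes

pipe = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
print(cross_val_score(pipe, X, y, cv=5).mean())   # selection is refit inside each CV fold
```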

PM2.5 Estimation Based on Image Analysis

  • Li, Xiaoli;Zhang, Shan;Wang, Kang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.907-923
    • /
    • 2020
  • For the severe haze situation in the Beijing-Tianjin-Hebei region, conventional fine particulate matter (PM2.5) concentration prediction methods based on pollutant data face problems such as incomplete data, which may lead to poor prediction performance. Therefore, this paper proposes a method for predicting the PM2.5 concentration based on image analysis technology, combining image data, which reflect the original weather conditions, with currently popular machine learning methods. First, based on local parameter estimation, autoregressive (AR) model analysis, and local estimation of the increase in image blur, we extract features from the weather images using an approach inspired by free energy and a no-reference robust metric model. Next, we compare the coefficient energy and contrast difference of each pixel in the AR model and use the percentages to calculate the image sharpness, from which the overall mass fraction is derived. The relationship between the residual value and the PM2.5 concentration is then fitted by a generalized Gaussian distribution (GGD) model. Finally, nonlinear mapping is performed via a wavelet neural network (WNN) to obtain the PM2.5 concentration. Experimental results obtained on real data show that the proposed method offers improved prediction accuracy and a lower root mean square error (RMSE).
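
One intermediate step of the pipeline above, fitting a generalized Gaussian distribution to model residuals, can be sketched with SciPy as below; the residuals are synthetic stand-ins, not the paper's data.

```python
# Hedged sketch: fit a generalized Gaussian distribution (GGD) to synthetic residuals.
import numpy as np
from scipy.stats import gennorm

residuals = np.random.laplace(loc=0.0, scale=0.5, size=2000)   # stand-in for AR-model residuals
beta, loc, scale = gennorm.fit(residuals)                      # GGD shape, location, scale
print("GGD shape=%.3f, scale=%.3f" % (beta, scale))
```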