• Title/Summary/Keyword: dimension reduction method

Search Results: 251

Performance Improvement of Automatic Basal Cell Carcinoma Detection Using Half Hanning Window (Half Hanning 윈도우 전처리를 통한 기저 세포암 자동 검출 성능 개선)

  • Park, Aa-Ron;Baek, Seong-Joong;Min, So-Hee;You, Hong-Yoen;Kim, Jin-Young;Hong, Sung-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.12
    • /
    • pp.105-112
    • /
    • 2006
  • In this study, we propose a simple preprocessing method for the classification of basal cell carcinoma (BCC), one of the most common skin cancers. The preprocessing step consists of data clipping with a half Hanning window and dimension reduction with principal components analysis (PCA). The half Hanning window de-emphasizes the peak near $1650cm^{-1}$ and improves classification performance by lowering the false negative ratio. Classification results with various classifiers are presented to show the effectiveness of the proposed method. The classifiers include maximum a posteriori probability (MAP), k-nearest neighbor (KNN), probabilistic neural network (PNN), multilayer perceptron (MLP), support vector machine (SVM), and minimum squared error (MSE) classification. KNN classification of 216 spectra preprocessed with the proposed method gave 97.3% sensitivity, a very promising result for automatic BCC detection.

  • PDF
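As a rough illustration of the preprocessing and classification pipeline above, here is a minimal pure-Python sketch of a half Hanning taper followed by a k-nearest-neighbor vote. The window length, taper placement, and all function names are illustrative assumptions, not the authors' exact implementation (which also applies PCA and evaluates several other classifiers):

```python
import math

def half_hanning(n):
    # Decaying half of a Hann window: 1.0 at the start, tapering to 0.0.
    return [0.5 * (1.0 + math.cos(math.pi * i / (n - 1))) for i in range(n)]

def taper_tail(spectrum, tail_len):
    """Multiply the last `tail_len` points of a spectrum by a half Hanning
    window, de-emphasizing a peak near the end of the measured range."""
    w = half_hanning(tail_len)
    out = list(spectrum)
    for i in range(tail_len):
        out[len(out) - tail_len + i] *= w[i]
    return out

def knn_predict(train, labels, x, k=3):
    # Plain k-nearest-neighbor majority vote with squared Euclidean distance.
    order = sorted(range(len(train)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(train[j], x)))
    votes = [labels[j] for j in order[:k]]
    return max(set(votes), key=votes.count)
```

In this sketch the taper suppresses the end of each spectrum before any classifier sees it, which is the role the abstract attributes to the half Hanning clipping step.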

Cluster Feature Selection using Entropy Weighting and SVD (엔트로피 가중치 및 SVD를 이용한 군집 특징 선택)

  • Lee, Young-Seok;Lee, Soo-Won
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.4
    • /
    • pp.248-257
    • /
    • 2002
  • Clustering is a method for grouping objects with similar properties into the same cluster. SVD (Singular Value Decomposition) is known as an efficient preprocessing method for clustering because of its dimension reduction and noise elimination for high-dimensional, sparse data sets such as E-Commerce data. However, it is hard to evaluate the worth of the original attributes, because information about them is lost when the data set is transformed by SVD. This research proposes a cluster feature selection method, called ENTROPY-SVD, that finds important attributes for each cluster based on entropy weighting and SVD. Using SVD, one can exploit the latent structures in the association of attributes with similar objects; using entropy weighting, one can find highly dense attributes for each cluster. This paper also proposes a model-based collaborative filtering recommendation system with ENTROPY-SVD, called CFS-CF, and evaluates its efficiency and utility.
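The entropy-weighting idea above can be sketched as follows. This is a hypothetical formulation, assuming an attribute matters for a cluster when its mass concentrates in few clusters; the SVD step, which would normally use a linear algebra library, is omitted here:

```python
import math

def entropy_weight(values):
    """Entropy-based importance of one attribute: `values[c]` is the
    attribute's aggregate value (e.g. mean frequency) in cluster c.
    Concentration in few clusters -> low entropy -> weight near 1;
    an even spread across clusters -> weight near 0."""
    total = sum(values)
    if total == 0:
        return 0.0
    probs = [v / total for v in values if v > 0]
    h = -sum(p * math.log(p) for p in probs)       # Shannon entropy
    h_max = math.log(len(values))                   # uniform-spread entropy
    return 1.0 - h / h_max if h_max > 0 else 1.0
```

Ranking attributes by this weight within each cluster would give the "highly dense attributes per cluster" the abstract describes.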

Damage localization and quantification of a truss bridge using PCA and convolutional neural network

  • Jiajia, Hao;Xinqun, Zhu;Yang, Yu;Chunwei, Zhang;Jianchun, Li
    • Smart Structures and Systems
    • /
    • v.30 no.6
    • /
    • pp.673-686
    • /
    • 2022
  • Deep learning algorithms for Structural Health Monitoring (SHM) have been attracting the interest of researchers and engineers. These algorithms commonly use loss functions and evaluation indices, such as the mean square error (MSE), that were not originally designed for SHM problems. An updated loss function constructed specifically for deep-learning-based structural damage detection problems is proposed in this study. By tuning the coefficients of the loss function, the weights for damage localization and quantification can be adapted to the real situation, and the deep learning network can avoid unnecessary iterations on damage localization and focus on damage severity identification. To prove the efficiency of the proposed method, structural damage detection using convolutional neural networks (CNNs) was conducted on a truss bridge model. Results showed that the validation curve with the updated loss function converged faster than with the traditional MSE. Data augmentation was conducted to improve the anti-noise ability of the proposed method. To reduce the training time, the normalized modal strain energy change (NMSEC) was extracted, and principal component analysis (PCA) was adopted for dimension reduction. The results showed that the training time was reduced by 90% while the damage identification accuracy also increased slightly. Furthermore, the effect of different modes and elements on the training dataset was analyzed. The proposed method greatly improves structural damage detection in both training time and detection accuracy.
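The tunable localization/quantification trade-off described above might look like the following sketch. The loss form, argument layout, and coefficient names are assumptions for illustration, not the paper's exact loss function:

```python
def damage_loss(pred, target, alpha=1.0, beta=1.0):
    """Hypothetical two-term squared-error loss: a localization term over
    per-element damage scores and a quantification term over severity.
    `pred` and `target` are (location_scores, severity) pairs; raising
    `beta` relative to `alpha` shifts training effort toward severity."""
    loc_p, sev_p = pred
    loc_t, sev_t = target
    loc_err = sum((a - b) ** 2 for a, b in zip(loc_p, loc_t)) / len(loc_p)
    sev_err = (sev_p - sev_t) ** 2
    return alpha * loc_err + beta * sev_err
```

With `alpha = beta` this reduces to a plain MSE-style loss; the abstract's point is that decoupling the two coefficients lets the network stop over-iterating on localization.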

Semantic Visualization of Dynamic Topic Modeling (다이내믹 토픽 모델링의 의미적 시각화 방법론)

  • Yeon, Jinwook;Boo, Hyunkyung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.131-154
    • /
    • 2022
  • Recently, research on unstructured data analysis has been actively conducted with the development of information and communication technology. In particular, topic modeling is a representative technique for discovering core topics from massive text data. In the early stages of topic modeling, most studies focused only on topic discovery. As the field matured, studies on how topics change over time began to be carried out, and interest in dynamic topic modeling, which handles changes in the keywords constituting a topic, is also increasing. Dynamic topic modeling identifies major topics from the data of the initial period and manages the change and flow of topics by using topic information of the previous period to derive topics in subsequent periods. However, the results of dynamic topic modeling are very difficult to understand and interpret: traditional results simply reveal changes in keywords and their rankings, which is insufficient to represent how the meaning of a topic has changed. Therefore, in this study, we propose a method to visualize topics by period that reflects the meaning of the keywords in each topic, together with a method for intuitively interpreting changes in topics and the relationships among topics. The detailed method is as follows. In the first step, dynamic topic modeling is implemented to derive the top keywords of each period and their weights from the text data. In the second step, we derive vectors for the top keywords of each topic from a pre-trained word embedding model and perform dimension reduction on the extracted vectors. We then formulate a semantic vector for each topic as the weighted sum of its keyword vectors, using the topic weight of each keyword.
In the third step, we visualize the semantic vector of each topic using matplotlib and analyze the relationships among the topics based on the visualized result. The change of a topic can be interpreted as follows: from the result of dynamic topic modeling, we identify the top 5 rising keywords and top 5 descending keywords for each period. Many existing topic visualization studies visualize the keywords of each topic; the approach proposed in this study differs in that it attempts to visualize each topic itself. To evaluate the practical applicability of the proposed methodology, we performed an experiment on 1,847 abstracts of artificial intelligence-related papers, divided into three periods (2016-2017, 2018-2019, 2020-2021). We selected seven topics based on the coherence score and utilized a pre-trained Word2vec embedding model trained on Wikipedia. Based on the proposed methodology, we generated a semantic vector for each topic and, by reflecting the meaning of the keywords, visualized and interpreted the topics by period. Through these experiments, we confirmed that the rise and fall of a keyword's topic weight can be usefully used to interpret the semantic change of the corresponding topic and to grasp the relationships among topics. In this study, to overcome the limitations of dynamic topic modeling results, we used word embedding and dimension reduction techniques to visualize topics by period. The results of this study are meaningful in that they broaden the scope of topic understanding through the visualization of dynamic topic modeling results. In addition, the study lays the foundation for follow-up work using various word embeddings and dimensionality reduction techniques to improve the performance of the proposed methodology.
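The semantic-vector construction in the second step can be sketched directly: a weighted sum of keyword embedding vectors, normalized by the total topic weight. The data layout and names below are illustrative assumptions, with a toy two-dimensional embedding standing in for the Word2vec model:

```python
def topic_semantic_vector(keywords, weights, embeddings):
    """One semantic vector per topic: the weighted sum of each keyword's
    embedding vector, normalized by the total topic weight.
    `embeddings` maps keyword -> vector; `weights[i]` is the topic weight
    of `keywords[i]` from dynamic topic modeling."""
    dim = len(next(iter(embeddings.values())))
    vec = [0.0] * dim
    total = sum(weights)
    for kw, w in zip(keywords, weights):
        for i, x in enumerate(embeddings[kw]):
            vec[i] += w * x
    return [v / total for v in vec]
```

After dimension reduction, these per-topic vectors are what gets plotted per period, so the movement of a point reflects the semantic drift of the topic rather than just keyword-rank churn.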

Network-based regularization for analysis of high-dimensional genomic data with group structure (그룹 구조를 갖는 고차원 유전체 자료 분석을 위한 네트워크 기반의 규제화 방법)

  • Kim, Kipoong;Choi, Jiyun;Sun, Hokeun
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.6
    • /
    • pp.1117-1128
    • /
    • 2016
  • In genetic association studies with high-dimensional genomic data, regularization procedures based on penalized likelihood are often applied to identify genes or genetic regions associated with diseases or traits. A network-based regularization procedure can utilize biological network information (such as genetic pathways and signaling pathways) and offers outstanding selection performance over other regularization procedures such as the lasso and elastic-net. However, network-based regularization has a limitation in that it cannot be applied to high-dimensional genomic data with a group structure. In this article, we propose to combine dimension reduction techniques, such as principal component analysis and partial least squares, with network-based regularization for the analysis of high-dimensional genomic data with a group structure. The selection performance of the proposed method was evaluated by extensive simulation studies. The proposed method was also applied to real DNA methylation data generated from the Illumina Infinium HumanMethylation27K BeadChip, where methylation beta values of around 20,000 CpG sites over 12,770 genes were compared between 123 ovarian cancer patients and 152 healthy controls. This analysis was also able to identify a few cancer-related genes.
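A minimal sketch of the group-wise dimension reduction step: each group's data matrix (e.g. the CpG sites of one gene) could be summarized by its first principal component score before the regularized regression. This pure-Python power iteration is an illustrative stand-in for a library PCA, not the authors' code:

```python
def first_pc_scores(X, iters=200):
    """Scores on the first principal component of X (rows = samples,
    columns = variables in one group), via power iteration on the
    centered data. Summarizing a group this way reduces it to one
    feature per sample, to be fed into the network-based regularization."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    v = [1.0] * p
    for _ in range(iters):
        # w = (Xc^T Xc) v, computed as Xc^T (Xc v)
        s = [sum(r[j] * v[j] for j in range(p)) for r in Xc]
        w = [sum(Xc[i][j] * s[i] for i in range(n)) for j in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0:
            break
        v = [x / norm for x in w]
    return [sum(r[j] * v[j] for j in range(p)) for r in Xc]
```

One score per sample per gene keeps the group structure while making the design matrix small enough for network-based penalties to apply.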

CLINICO-STATISTICAL ANALYSIS OF POSSIBLE FACTORS LEADING TO PROBLEMS IN THE SURGICAL TREATMENT OF UNILATERAL MANDIBULAR CONDYLE FRACTURES (편측 하악 과두골절의 관혈적 치료에 있어서 예후에 영향을 줄 수 있는 인자들에 관한 임상 통계학적 연구)

  • Sung, Hun-Mo;Lee, Dong-Keun;Min, Seung-Ki;Oh, Seung-Hwan;Jang, Kwan-Sik
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.23 no.1
    • /
    • pp.31-39
    • /
    • 2001
  • The purpose of this study is to reveal the factors leading to problems in unilateral condylar fractures and to suggest a treatment guideline for a good prognosis in surgical treatment. The factors considered were age, sex, fracture site, degree of displacement, loss of posterior occlusion, post-operative alteration of condylar head position, post-operative condylar head resorption, and maxillomandibular fixation period. One hundred and eleven patients with unilateral condylar fractures, treated surgically from Feb. 1990 to Feb. 2000, were studied, with a minimum follow-up period of 6 months. The results were as follows: 1. In the age group of $41{\sim}60$, females had a significantly higher complication rate than males; therefore, treatment of females in this age group requires special care. 2. In level I fractures of the mandibular condyle, there were abundant complications when patients were treated with fragment removal, so conservative treatment is recommended over the surgical approach. 3. In level II and III fractures there were no differences in the complication rate, but there were severe complications in patients treated by Dr. Nam's method or fragment removal; therefore, open reduction and internal fixation is recommended over Dr. Nam's method or fragment removal. 4. In level IV fractures, open reduction and internal fixation is recommended. 5. Although the complication rate was higher with greater degrees of deviation, there was no correlation between the degree of deviation and the development of complications within each level of fracture. 6. Because the complication rate was higher in cases of condylar resorption, vertical dimension loss, and alteration of condylar head position, an effort must be made to prevent such complications during treatment.

  • PDF

Dimensionality Reduction Methods Analysis of Hyperspectral Imagery for Unsupervised Change Detection of Multi-sensor Images (이종 영상 간의 무감독 변화탐지를 위한 초분광 영상의 차원 축소 방법 분석)

  • PARK, Hong-Lyun;PARK, Wan-Yong;PARK, Hyun-Chun;CHOI, Seok-Keun;CHOI, Jae-Wan;IM, Hon-Ryang
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.4
    • /
    • pp.1-11
    • /
    • 2019
  • With the development of remote sensing sensor technology, it has become possible to acquire satellite images with various types of spectral information. In particular, since a hyperspectral image is composed of continuous and narrow spectral bands, it can be used effectively in various fields such as land cover classification, target detection, and environmental monitoring. Change detection techniques using remote sensing data are generally performed through differences between data of the same dimensions, so they are difficult to apply to heterogeneous sensors with different dimensions. In this study, we developed a change detection method applicable to a hyperspectral image and a high spatial resolution satellite image of different dimensions, and confirmed the applicability of change detection between heterogeneous images. For the application of the change detection method, the dimension of the hyperspectral image was reduced using correlation analysis and principal component analysis, and change vector analysis (CVA) was used as the change detection algorithm. The ROC curve and AUC were calculated using reference data to evaluate change detection performance. Experimental results show that the change detection performance is higher when using an image generated by adequate dimensionality reduction than when using the original hyperspectral image.
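The CVA step above can be sketched as follows, assuming both images have already been reduced to the same dimensionality and co-registered. The names and the simple global threshold are illustrative:

```python
def change_magnitude(before, after):
    """Change Vector Analysis (CVA): per-pixel Euclidean magnitude of the
    difference between two co-registered feature vectors. Both inputs
    must share the same (reduced) number of bands per pixel."""
    return [sum((a - b) ** 2 for a, b in zip(pa, pb)) ** 0.5
            for pa, pb in zip(before, after)]

def detect_changes(before, after, threshold):
    # Flag a pixel as changed when its change vector exceeds the threshold;
    # sweeping the threshold against reference data yields the ROC curve.
    return [m > threshold for m in change_magnitude(before, after)]
```

The dimensionality reduction is what makes this comparison possible between heterogeneous sensors: once both images live in the same feature space, the change vector is well-defined.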

Estimation of Buckling and Ultimate Strength of a Perforated Plate under Thrust (면내압축하중을 받는 유공판의 좌굴 및 최종강도 평가에 관한 연구)

  • Ko, Jae-Yong;Park, Joo-Shin;Joo, Jong-Gil
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.10 no.2 s.21
    • /
    • pp.41-47
    • /
    • 2004
  • Perforated plates with cutouts are widely used in hull structures such as inner bottoms, girders, and doors, for purposes of weight reduction, access for personnel and freight, and piping. Such plates are usually located in regions of low stress, but sometimes a cutout must be placed in a region where high stresses act, and its presence then has a large effect on the elastic buckling strength and the ultimate strength under load. Therefore, the elastic buckling strength and ultimate strength of a perforated plate are important design criteria for sizing structural elements at the early structural design stage of a ship, and a reasonable and reliable design formula for the elastic buckling strength of the perforated plate is needed. In this paper, the authors numerically computed the change in ultimate strength for several aspect ratios, cutout dimensions, and plate thicknesses using the ANSYS finite element analysis code.

  • PDF

MRS Pattern Classification Using Fusion Method based on SpPCA and MLP (SpPCA와 MLP에 기반을 둔 응합법칙에 의한 MRS 패턴분류)

  • Song Chang kyu;Lee Dae jong;Jeon Byeong seok;Ryu Jeong woong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.9C
    • /
    • pp.922-929
    • /
    • 2005
  • In this paper, we propose an MRS pattern classification technique using a fusion scheme based on SpPCA and MLP. The conventional PCA technique for dimension reduction has the problem that it cannot find an optimal transformation matrix if the input data are nonlinear. To overcome this drawback, we extract features with the SpPCA technique, which uses local patterns rather than whole patterns. In the next classification step, an individual MLP-based classifier calculates the similarity of each class for the local features. Finally, the MRS patterns are classified by the fusion scheme, which effectively combines the individual information. Simulation results verify that the proposed method gives better classification results than conventional methods.
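A sum rule is one common way to fuse per-classifier similarity scores; the paper's exact fusion rule may differ. A minimal sketch under that assumption, where each local-feature classifier contributes one class-to-score mapping:

```python
def fuse_scores(score_lists):
    """Sum-rule fusion of classifier outputs: each element of
    `score_lists` maps class -> similarity score from one classifier
    trained on one set of local (SpPCA) features. The fused decision
    is the class with the highest total score."""
    totals = {}
    for scores in score_lists:
        for cls, s in scores.items():
            totals[cls] = totals.get(cls, 0.0) + s
    return max(totals, key=totals.get)
```

The point of fusing at the score level, rather than hard votes, is that a confident local classifier can outweigh several uncertain ones.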

A Study on Improvement of 100 Tons Toggle Injection Molding Machine's Weight Using Numerical Analysis (수치해석을 이용한 토글식 100톤 사출성형기의 중량 개선에 관한 연구)

  • Han, Seong-Ryeol
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.10
    • /
    • pp.4713-4718
    • /
    • 2013
  • Nowadays, three-dimensional computer-aided design (3D CAD) tools are widely and actively used in mechanical design, because they are more effective than two-dimensional tools for understanding design concepts and for collaborating with other operations. In this study, the 3D CAD tool I-DEAS was applied to the three-dimensional modeling of the main parts and the assembly of the modeled parts, to identify the entire shape of an injection molding machine. In addition, a study was performed on reducing the weight of the main plates, to save production cost and energy in the machine. The finite element method (FEM) program in the I-DEAS tool was used for this improvement study. First, structural analysis of the current main plates was performed, and the plate deformations, weak regions, and stress distributions were obtained. Based on these FEM results, an improved design of the plates was produced by reinforcing or slimming the plate wall thickness. A second structural FEM analysis was performed to verify the redesigned plates, and its results were compared with those of the first. The weight of the main plates was reduced by approximately 3-7% on average. These results suggest that the improved plates are practically applicable.