• Title/Summary/Keyword: Quality Representation Classification


Prediction of compressive strength of GGBS based concrete using RVM

  • Prasanna, P.K.;Ramachandra Murthy, A.;Srinivasu, K.
    • Structural Engineering and Mechanics
    • /
    • v.68 no.6
    • /
    • pp.691-700
    • /
    • 2018
  • Ground granulated blast furnace slag (GGBS) is a by-product of the iron and steel industries, useful in the design and development of high-quality cement paste/mortar and concrete. This paper investigates the applicability of a relevance vector machine (RVM) based regression model to predict the compressive strength of various GGBS based concrete mixes. Compressive strength data for various GGBS based concrete mixes have been obtained by considering the effect of water-binder ratio and steel fibres. RVM is a machine learning technique which employs Bayesian inference to obtain parsimonious solutions for regression and classification. The RVM is an extension of the support vector machine which couples probabilistic classification and regression. RVM is established on a Bayesian formulation of a linear model with an appropriate prior that results in a sparse representation. A compressive strength model has been developed using MATLAB software for training and prediction. About 70% of the data has been used for development of the RVM model and 30% for validation. The predicted compressive strength for GGBS based concrete mixes is found to be in very good agreement with the corresponding experimental observations.
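The abstract does not include the mix data or the authors' MATLAB code, but the modeling idea (RBF-kernel basis, Bayesian evidence updates yielding a sparse set of relevance vectors) can be sketched in simplified form. The hyperparameter updates below follow Tipping's standard RVM formulation; the synthetic sinc data merely stands in for the compressive-strength measurements.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=0.5):
    # Squared-exponential kernel between row vectors
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rvm(X, t, n_iter=200, prune=1e6):
    """Simplified relevance vector regression (evidence-based updates)."""
    Phi = rbf_kernel(X, X)
    N = len(t)
    alpha = np.ones(N)          # one weight-precision per basis function
    beta = 1.0                  # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        gamma_eff = 1.0 - alpha * np.diag(Sigma)    # effective dof per weight
        alpha = np.clip(gamma_eff, 1e-12, None) / (mu ** 2 + 1e-12)
        alpha = np.minimum(alpha, prune)            # cap precisions (pruning)
        beta = (N - gamma_eff.sum()) / (((t - Phi @ mu) ** 2).sum() + 1e-12)
    keep = alpha < prune        # surviving "relevance vectors"
    return mu, keep

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(80, 1))
t = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(80)
mu, keep = train_rvm(X, t)
pred = rbf_kernel(X, X) @ mu
rmse = np.sqrt(np.mean((pred - t) ** 2))
```

Most precisions diverge during training, so only a small subset of basis functions survives — this sparsity is what distinguishes the RVM from a plain kernel regression.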

Data-based On-line Diagnosis Using Multivariate Statistical Techniques (다변량 통계기법을 활용한 데이터기반 실시간 진단)

  • Cho, Hyun-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.1
    • /
    • pp.538-543
    • /
    • 2016
  • For good product quality and plant safety, it is necessary to implement on-line monitoring and diagnosis schemes for industrial processes. Combined with monitoring systems, reliable diagnosis schemes seek to find the assignable causes of the process variables responsible for faults or special events in processes. This study deals with the real-time diagnosis of complicated industrial processes through the intelligent use of multivariate statistical techniques. The presented diagnosis scheme consists of classification-based diagnosis using nonlinear representation and filtering of process data. A case study based on simulation data was conducted, and diagnosis results were obtained using different diagnosis schemes. In addition, the choice of future estimation methods was evaluated. The results showed that the presented scheme outperformed the other schemes.
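The abstract gives only the outline of classification-based diagnosis; a minimal sketch of that idea is to project a new sample into a low-dimensional representation fitted to historical data and assign it to the nearest fault-class mean. Linear PCA is used here as a stand-in for the paper's nonlinear representation, and the fault patterns are invented.

```python
import numpy as np

def pca_fit(X, k=2):
    # Principal components of mean-centered data (via SVD)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def diagnose(sample, mu, W, class_means):
    # Project the new sample into score space and return the fault
    # class whose mean score is nearest (Euclidean distance)
    z = (sample - mu) @ W
    dists = {c: np.linalg.norm(z - m) for c, m in class_means.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(1)
normal = rng.standard_normal((200, 5))
fault_a = normal[:50] + np.array([6.0, 0, 0, 0, 0])   # shift in variable 1
fault_b = normal[:50] + np.array([0, 6.0, 0, 0, 0])   # shift in variable 2
mu, W = pca_fit(np.vstack([normal, fault_a, fault_b]), k=2)
class_means = {c: ((Xc - mu) @ W).mean(axis=0)
               for c, Xc in [("A", fault_a), ("B", fault_b)]}
labels = [diagnose(s, mu, W, class_means) for s in fault_a[:20]]
```

A real scheme would add a monitoring statistic (e.g., a residual threshold) to first decide *whether* a fault occurred before classifying *which* one.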

Optical In-Situ Plasma Process Monitoring Technique for Detection of Abnormal Plasma Discharge

  • Hong, Sang Jeen;Ahn, Jong Hwan;Park, Won Taek;May, Gary S.
    • Transactions on Electrical and Electronic Materials
    • /
    • v.14 no.2
    • /
    • pp.71-77
    • /
    • 2013
  • Advanced semiconductor manufacturing technology requires methods to maximize tool efficiency and improve product quality by reducing process variability. Real-time plasma process monitoring and diagnosis have become crucial for fault detection and classification (FDC) and advanced process control (APC). Additional sensors may increase the accuracy of detection of process anomalies, and optical monitoring methods are non-invasive. In this paper, we propose the use of a chromatic data acquisition system for real-time in-situ plasma process monitoring called the Plasma Eyes Chromatic System (PECS). The proposed system was initially tested in a six-inch research tool, and it was then further evaluated for its potential to detect process anomalies in an eight-inch production tool for etching blanket oxide films. Chromatic representation of the PECS output shows a clear correlation with small changes in process parameters, such as RF power, pressure, and gas flow. We also present how the PECS may be adapted as an in-situ plasma arc detector. The proposed system can provide useful indications of a faulty process in a timely and non-invasive manner for successful run-to-run (R2R) control and FDC.
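The hardware details of the PECS are not reproduced here, but chromatic monitoring of this kind commonly maps three broadband detector outputs to hue/lightness/saturation coordinates and tracks their drift. The standard-library HLS conversion illustrates that mapping; the detector readings below are made up for illustration.

```python
import colorsys

def chromatic_parameters(r, g, b):
    """Map three normalized detector outputs (0..1) to hue, lightness,
    and saturation -- typical parameters tracked in chromatic monitoring."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return h * 360.0, l, s   # hue in degrees

# Hypothetical baseline vs. drifted plasma emission readings
baseline = chromatic_parameters(0.60, 0.30, 0.10)
drifted  = chromatic_parameters(0.55, 0.35, 0.10)
hue_shift = abs(drifted[0] - baseline[0])
```

A persistent hue shift against the baseline would flag a change in RF power, pressure, or gas flow for follow-up, in the spirit of the correlations the paper reports.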

Analysis of Korea Earth Science Olympiad Items for the Enhancement of Item Quality (한국 지구과학 올림피아드 문항 분석을 통한 문항의 질 향상 방안)

  • Lee Ki-Young;Kim Chan-Jong
    • Journal of the Korean earth science society
    • /
    • v.26 no.6
    • /
    • pp.511-523
    • /
    • 2005
  • The purpose of this study is to analyze the 1st and 2nd Korea Earth Science Olympiad (KESO) items in order to find information for enhancing item quality. To do this, internal and external item classification frameworks were developed. Item difficulty (P), discrimination index (DI), correlation, and reliability were estimated using classical test theory. Generalizability was also estimated by applying generalizability theory. The results of item classification are as follows: (1) ‘Geology’ and ‘astronomy’ are dominant in the content domain, and ‘data analysis and interpretation’ in the inquiry process domain. Nearly every item has a textbook context. (2) There is no difference between the preliminary and final tests in terms of their thinking skills sections. (3) As a whole, the ratio of items with pictures is high in item representation. However, multiple-choice and short-answer items are more common in the preliminary competition, and essay-type items are found more often in the final competition. The ratio of simple items is high in the middle school section and preliminary competition, but composite items are dominant in the high school section and final competition. The findings of item analysis are as follows: (1) In the middle school section, P is low and DI is moderate. In the high school section, however, there are considerable differences between science high schools and other high schools in general. (2) The highest correlation is reported between the scores of the meteorology domain and the total score in middle school, whereas in high school the astronomy domain and total score show the highest correlation. (3) The general high school section shows the highest Cronbach's α and generalizability. (4) The general high school section shows an acceptable generalizability coefficient (> 0.80), but the middle and science high school sections should increase the number of items to reach an acceptable generalizability level.
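The classical-test-theory statistics named in the abstract (difficulty P, discrimination DI, Cronbach's α) have standard formulas; a minimal sketch on a made-up 0/1 score matrix, using the point-biserial item-total correlation as the discrimination index:

```python
import numpy as np

def item_statistics(scores):
    """scores: (examinees x items) matrix of 0/1 item scores."""
    total = scores.sum(axis=1)
    P = scores.mean(axis=0)                      # item difficulty
    # Discrimination: point-biserial correlation of item with total score
    DI = np.array([np.corrcoef(scores[:, j], total)[0, 1]
                   for j in range(scores.shape[1])])
    # Cronbach's alpha (internal-consistency reliability)
    k = scores.shape[1]
    alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                           / total.var(ddof=1))
    return P, DI, alpha

scores = np.array([[1, 1, 1, 0],     # hypothetical 5 examinees x 4 items
                   [1, 1, 0, 0],
                   [0, 1, 1, 1],
                   [0, 0, 0, 0],
                   [1, 1, 1, 1]])
P, DI, alpha = item_statistics(scores)
```

Generalizability coefficients would additionally require a variance-components decomposition (persons, items, and their interaction), which is beyond this sketch.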

Variable Rate IMBE-LP Coding Algorithm Using Band Information (주파수대역 정보를 이용한 가변률 IMBE-LP 음성부호화 알고리즘)

  • Park, Man-Ho;Bae, Geon-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.5
    • /
    • pp.576-582
    • /
    • 2001
  • The Multi-Band Excitation (MBE) speech coder uses a different approach to represent the excitation signal. It replaces the frame-based single voiced/unvoiced classification of a classical speech coder with a set of such decisions over harmonic intervals in the frequency domain. This enables each speech segment to be a mixture of voiced and unvoiced bands, and improves synthetic speech quality by reducing the decision errors that can occur in a frame-based single voiced/unvoiced decision process when the input speech is degraded by noise. IMBE-LP, an improved version of MBE with linear prediction, represents the spectral information of the MBE model with linear prediction coefficients to obtain a low bit rate of 2.4 kbps. In this paper, we propose a variable-rate IMBE-LP vocoder that has a lower bit rate than IMBE-LP without degrading synthetic speech quality. To determine the LP order, it uses the spectral band information of the MBE model, which is related to the characteristics of the input speech. Experimental results are given with our findings and discussions.
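The per-band voiced/unvoiced decision that MBE replaces the single frame decision with can be illustrated in simplified form. The sketch below isolates each band with an FFT mask and judges it voiced when the normalized autocorrelation at the pitch period is high; this is a proxy for MBE's actual harmonic-matching criterion, and the band edges and threshold are illustrative choices.

```python
import numpy as np

def band_voicing(x, fs, pitch_hz, edges, threshold=0.5):
    """Per-band voiced/unvoiced decisions, MBE-style.

    Each band is isolated with an FFT mask, then judged voiced if its
    normalized autocorrelation at the pitch period exceeds `threshold`.
    """
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    lag = int(round(fs / pitch_hz))          # pitch period in samples
    decisions = []
    for lo, hi in edges:
        band = np.fft.irfft(X * ((freqs >= lo) & (freqs < hi)), n)
        r0 = np.dot(band, band)
        r = np.dot(band[:-lag], band[lag:]) / (r0 + 1e-12)
        decisions.append(bool(r > threshold))
    return decisions

fs = 8000
t = np.arange(4000) / fs
rng = np.random.default_rng(2)
# 100 Hz tone in the low band, broadband noise everywhere
x = np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(4000)
v = band_voicing(x, fs, pitch_hz=100, edges=[(50, 400), (1000, 4000)])
```

For this input the low band comes out voiced and the noise-only high band unvoiced — the mixed-excitation classification the abstract describes.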


Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers seeking to manage such volumes of information. Addressing it also requires professionals capable of classifying relevant information, and hence text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most attempts have been made by proposing a new algorithm or modifying an existing one, and this line of research has arguably reached its limits for further improvement. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets most of the time contain noise, i.e., noisy data, which can affect the decisions made by classifiers built from those data.
In this study, we consider that data from different domains, i.e., heterogeneous data, may have noise characteristics which can be utilized in the classification process. Machine learning algorithms build a classifier on the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and target data differ, the features may also differ between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms because they were not developed to recognize different types of data representation at once and to combine them in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning in our study. However, unlabeled data may degrade the performance of the document classifier. We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making.
In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
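RSESLA itself is not specified in enough detail in the abstract to reproduce, but its core semi-supervised move — admit an unlabeled document into training only when the classifier is confident about it — can be sketched with plain self-training. The centroid classifier, margin-based confidence, and synthetic feature vectors below are all illustrative stand-ins, not the paper's components.

```python
import numpy as np

def centroid_fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(X, cents):
    classes = sorted(cents)
    d = np.stack([np.linalg.norm(X - cents[c], axis=1) for c in classes])
    idx = d.argmin(axis=0)
    # Confidence: margin between closest and second-closest centroid
    margin = np.sort(d, axis=0)[1] - np.sort(d, axis=0)[0]
    return np.array(classes)[idx], margin

def self_train(Xl, yl, Xu, rounds=3, min_margin=1.0):
    """Self-training: each round, pseudo-label only the unlabeled
    points whose prediction margin clears `min_margin`."""
    Xl, yl = Xl.copy(), yl.copy()
    for _ in range(rounds):
        cents = centroid_fit(Xl, yl)
        pred, margin = predict_with_confidence(Xu, cents)
        keep = margin > min_margin
        if not keep.any():
            break
        Xl = np.vstack([Xl, Xu[keep]])
        yl = np.concatenate([yl, pred[keep]])
        Xu = Xu[~keep]
    return centroid_fit(Xl, yl)

rng = np.random.default_rng(3)
Xl = np.vstack([rng.normal(0, 1, (5, 10)), rng.normal(4, 1, (5, 10))])
yl = np.array([0] * 5 + [1] * 5)
Xu = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(4, 1, (100, 10))])
cents = self_train(Xl, yl, Xu)
pred, _ = predict_with_confidence(Xu, cents)
truth = np.array([0] * 100 + [1] * 100)
acc = (pred == truth).mean()
```

RSESLA goes further by combining multiple views and rule selection in an ensemble; this sketch shows only the confidence-gated selection idea.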

Key Point Extraction from LiDAR Data for 3D Modeling (3차원 모델링을 위한 라이다 데이터로부터 특징점 추출 방법)

  • Lee, Dae Geon;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.5
    • /
    • pp.479-493
    • /
    • 2016
  • LiDAR (Light Detection and Ranging) data acquired from an ALS (Airborne Laser Scanner) has been intensively utilized to reconstruct object models. In particular, research on 3D modeling from LiDAR data has been performed to efficiently establish high-quality spatial information such as precise 3D city models and true orthoimages. To reconstruct object models from irregularly distributed LiDAR point clouds, sensor calibration, noise removal, and filtering to separate objects from ground surfaces are required as pre-processing. Classification and segmentation based on the geometric homogeneity of the features, grouping and representation of the segmented surfaces, topological analysis of the surface patches for modeling, and accuracy assessment accompany the modeling procedure. While many modeling methods are based on a segmentation process, this paper proposes extracting key points directly for building modeling without segmentation. The method was applied to simulated and real data sets with various roof shapes, and the results demonstrate the feasibility of the proposed method through accuracy analysis.
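The paper's exact key-point criterion is not reproduced in the abstract, but a common proxy for locating roof ridges and corners in a point cloud is the local "surface variation" (the smallest-eigenvalue fraction of the neighborhood covariance), which is near zero on planar faces and larger at creases. A sketch on a synthetic gable roof, with a brute-force k-nearest-neighbor search for simplicity:

```python
import numpy as np

def surface_variation(points, neighbors_of):
    """Surface variation lam_min / (lam1+lam2+lam3) of the local
    covariance; ~0 on a plane, larger at creases and corners."""
    out = np.empty(len(points))
    for i, nbrs in enumerate(neighbors_of):
        P = points[nbrs] - points[nbrs].mean(axis=0)
        lam = np.linalg.eigvalsh(P.T @ P)       # ascending eigenvalues
        out[i] = lam[0] / (lam.sum() + 1e-12)
    return out

def knn(points, k=12):
    # Brute-force k nearest neighbors (fine for small clouds)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    return np.argsort(d, axis=1)[:, :k]

# Synthetic gable roof: two planes z = 1 - |x| meeting at the ridge x = 0
xs, ys = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
pts = np.column_stack([xs.ravel(), ys.ravel(), 1 - np.abs(xs.ravel())])
sv = surface_variation(pts, knn(pts))
ridge = np.isclose(pts[:, 0], 0.0)              # points on the crease
```

Thresholding `sv` flags candidate key points along the ridge while leaving the planar roof faces untouched, without any prior segmentation step.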

Improving Embedding Model for Triple Knowledge Graph Using Neighborliness Vector (인접성 벡터를 이용한 트리플 지식 그래프의 임베딩 모델 개선)

  • Cho, Sae-rom;Kim, Han-joon
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.3
    • /
    • pp.67-80
    • /
    • 2021
  • The node embedding technique for learning graph representations plays an important role in obtaining good-quality results in graph mining. Until now, representative node embedding techniques have been studied for homogeneous graphs, and thus it is difficult to learn knowledge graphs with unique meanings for each edge. To resolve this problem, the conventional Triple2Vec technique builds an embedding model by learning a triple graph that takes a node pair and an edge of the knowledge graph as one node. However, the Triple2Vec embedding model has limited room for performance improvement because it computes the relationship between triple nodes with a simple measure. Therefore, this paper proposes a feature extraction technique based on a graph convolutional neural network to improve the Triple2Vec embedding model. The proposed method extracts the neighborliness vector of the triple graph and learns the relationship between neighboring nodes for each node in the triple graph. We prove that the embedding model applying the proposed method is superior to the existing Triple2Vec model through category classification experiments using the DBLP, DBpedia, and IMDB datasets.
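The abstract does not spell out how the neighborliness vector is computed, but the graph-convolutional building block it relies on is standard: symmetric-normalized neighborhood averaging followed by a learned linear map. A minimal single-layer sketch on a toy triple graph (the adjacency, feature sizes, and random weights are illustrative only):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: add self-loops, average neighbor
    features with symmetric normalization, apply weights and ReLU."""
    A_hat = A + np.eye(len(A))                  # adjacency with self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Tiny triple graph: 4 triple-nodes in a chain
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(4)
H = rng.standard_normal((4, 8))    # initial triple-node features
W = rng.standard_normal((8, 5))    # learnable weights (random here)
H1 = gcn_layer(A, H, W)
```

Stacking such layers lets each triple-node's representation absorb information from progressively larger neighborhoods, which is the effect the neighborliness vector is meant to capture.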

CNN based data anomaly detection using multi-channel imagery for structural health monitoring

  • Shajihan, Shaik Althaf V.;Wang, Shuo;Zhai, Guanghao;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.181-193
    • /
    • 2022
  • Data-driven structural health monitoring (SHM) of civil infrastructure can be used to continuously assess the state of a structure, allowing preemptive safety measures to be carried out. Long-term monitoring of large-scale civil infrastructure often involves data-collection using a network of numerous sensors of various types. Malfunctioning sensors in the network are common, which can disrupt the condition assessment and even lead to false-negative indications of damage. The overwhelming size of the data collected renders manual approaches to ensure data quality intractable. The task of detecting and classifying an anomaly in the raw data is non-trivial. We propose an approach to automate this task, improving upon the previously developed technique of image-based pre-processing on one-dimensional (1D) data by enriching the features of the neural network input data with multiple channels. In particular, feature engineering is employed to convert the measured time histories into a 3-channel image comprised of (i) the time history, (ii) the spectrogram, and (iii) the probability density function representation of the signal. To demonstrate this approach, a CNN model is designed and trained on a dataset consisting of acceleration records of sensors installed on a long-span bridge, with the goal of fault detection and classification. The effect of imbalance in anomaly patterns observed is studied to better account for unseen test cases. The proposed framework achieves high overall accuracy and recall even when tested on an unseen dataset that is much larger than the samples used for training, offering a viable solution for implementation on full-scale structures where limited labeled-training data is available.
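The three channels named in the abstract — (i) the time history, (ii) the spectrogram, and (iii) a probability density representation — can be assembled into a single image as follows. The 64×64 size, Hann window, log scaling, and nearest-neighbor resize are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbor resize of a 2D array to (size, size)."""
    r = np.arange(size) * img.shape[0] // size
    c = np.arange(size) * img.shape[1] // size
    return img[np.ix_(r, c)]

def normalize(img):
    span = img.max() - img.min()
    return (img - img.min()) / (span + 1e-12)

def to_3channel_image(x, size=64, win=128, hop=64):
    # (i) time history, wrapped row by row into a square image
    time_ch = x[:size * size].reshape(size, size)
    # (ii) magnitude spectrogram from a simple strided STFT
    starts = range(0, len(x) - win + 1, hop)
    frames = np.stack([x[s:s + win] * np.hanning(win) for s in starts])
    spec_ch = resize_nn(np.log1p(np.abs(np.fft.rfft(frames, axis=1))).T, size)
    # (iii) probability density estimate, repeated across rows
    hist, _ = np.histogram(x, bins=size, density=True)
    pdf_ch = np.tile(hist, (size, 1))
    return np.stack([normalize(time_ch), normalize(spec_ch),
                     normalize(pdf_ch)], axis=-1)

rng = np.random.default_rng(5)
accel = np.sin(2 * np.pi * 5 * np.linspace(0, 8, 4096))  # mock sensor record
accel += 0.1 * rng.standard_normal(4096)
img = to_3channel_image(accel)
```

Anomalies such as drift, spikes, or a dead channel leave distinct signatures in different channels (e.g., a dead sensor collapses the PDF channel), which is what lets a CNN separate fault classes from these images.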

An Analytical Approach Using Topic Mining for Improving the Service Quality of Hotels (호텔 산업의 서비스 품질 향상을 위한 토픽 마이닝 기반 분석 방법)

  • Moon, Hyun Sil;Sung, David;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.21-41
    • /
    • 2019
  • Thanks to the rapid development of information technologies, the data available on the Internet have grown rapidly. In this era of big data, many studies have attempted to offer insights and demonstrate the effects of data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than in any other type of media. However, there are some limitations to the improvements in service quality that can be made based on opinions on social media platforms. Users on social media platforms express their opinions as text, images, and so on, and the raw data sets from these reviews are unstructured. Moreover, these data sets are too big for new information and hidden knowledge to be extracted by human competence alone. To use them for business intelligence and analytics applications, proper big data techniques such as Natural Language Processing and data mining are needed. This study suggests an analytical approach to directly yield insights from these reviews to improve the service quality of hotels. Our proposed approach consists of topic mining to extract the topics contained in the reviews and decision tree modeling to explain the relationship between topics and ratings. Topic mining refers to a method for finding, from a collection of documents, a group of words that represents a document. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is considered the most universal. However, LDA alone is not enough to find insights that can improve service quality because it cannot find the relationship between topics and ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique.
Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. Therefore, this study aims to investigate an analytical approach for improving hotel service quality from unstructured review data sets. Through experiments on four hotels in Hong Kong, we can find the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. From positive reviews in particular, we find what these hotels should maintain for service quality; for example, compared with the other hotels, one hotel has a good location and room condition, as extracted from its positive reviews. In contrast, we also find from negative reviews what they should modify in their services; for example, a hotel should improve room conditions related to soundproofing. These results mean that our approach is useful for finding insights into the service quality of hotels. That is, from an enormous amount of review data, our approach can provide practical suggestions for hotel managers to improve their service quality. In the past, studies for improving service quality relied on surveys or interviews of customers. However, these methods are often costly and time-consuming, and the results may be biased by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through a type of big data analysis, so it is a more useful tool for overcoming the limitations of surveys or interviews. Moreover, our approach easily obtains service quality information for other hotels or services in the tourism industry because it needs only open online reviews and ratings as input data. Furthermore, the performance of our approach will be better if other structured and unstructured data sources are added.
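The paper pairs LDA topics with a CART tree; as a much simpler stand-in for that pipeline, the sketch below uses hand-picked keyword groups as crude "topics" and ranks them by correlation with star ratings instead of fitting a tree. The reviews, keyword groups, and ratings are entirely invented for illustration.

```python
import numpy as np

TOPICS = {   # hypothetical keyword groups standing in for LDA topics
    "location": {"location", "station", "central", "nearby"},
    "room":     {"room", "bed", "clean", "spacious"},
    "noise":    {"noisy", "loud", "thin", "soundproof"},
}

def topic_scores(review):
    words = set(review.lower().split())
    return {t: len(words & kw) for t, kw in TOPICS.items()}

def topic_rating_correlation(reviews, ratings):
    """Correlate each topic's presence with the star rating; strongly
    negative topics are candidate service-quality weaknesses."""
    ratings = np.asarray(ratings, dtype=float)
    corr = {}
    for t in TOPICS:
        x = np.array([topic_scores(r)[t] for r in reviews], dtype=float)
        corr[t] = np.corrcoef(x, ratings)[0, 1]
    return corr

reviews = [
    "great central location near the station",
    "clean spacious room and friendly staff",
    "walls are thin and the street is loud and noisy",
    "noisy at night poor soundproof walls",
    "location is central and the room is clean",
]
ratings = [5, 5, 2, 1, 4]
corr = topic_rating_correlation(reviews, ratings)
worst = min(corr, key=corr.get)
```

Here the noise-related topic correlates most negatively with ratings, mirroring the paper's soundproofing example; the actual approach learns the topics with LDA and the topic-rating splits with CART rather than fixing them by hand.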