• Title/Summary/Keyword: Classification of Quality

A Study of the Development of Apartment's Structural Cost Saving Checklist through the Case Research (사례분석을 통한 공동주택 골조공사의 원가절감 체크리스트 개발에 관한 연구)

  • Lee, Kyeong-Seob;Suh, Sang-Wook
    • Korean Journal of Construction Engineering and Management
    • /
    • v.11 no.6
    • /
    • pp.65-77
    • /
    • 2010
  • Housing accounted for over 32% of Korea's construction volume in 2007, and apartments made up over 67% of that housing. Given the current construction environment, the industry is struggling with the price cap policy and the real-estate recession caused by the global economic crisis, so the need for cost saving is growing in order to stay competitive and to supply affordable apartments. This research collected cost-saving examples from previously constructed apartment projects that affected construction cost, quality, and time. The collected cost-saving data were analyzed and sorted according to a classification system, and by analyzing correlations among the data, with exclusion and integration of overlapping items, we propose a cost-saving checklist that can serve as base data for practical use and for further research.

Improving Embedding Model for Triple Knowledge Graph Using Neighborliness Vector (인접성 벡터를 이용한 트리플 지식 그래프의 임베딩 모델 개선)

  • Cho, Sae-rom;Kim, Han-joon
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.3
    • /
    • pp.67-80
    • /
    • 2021
  • The node embedding technique for learning graph representations plays an important role in obtaining good-quality results in graph mining. Until now, representative node embedding techniques have been studied mainly for homogeneous graphs, so it is difficult for them to learn knowledge graphs in which each edge carries its own meaning. To resolve this problem, the conventional Triple2Vec technique builds an embedding model by learning a triple graph in which each node represents a node pair and an edge of the knowledge graph. However, the Triple2Vec embedding model is limited in how much its performance can improve because it computes the relationship between triple nodes with a simple measure. Therefore, this paper proposes a feature extraction technique based on a graph convolutional neural network to improve the Triple2Vec embedding model. The proposed method extracts the neighborliness vector of the triple graph and, for each node in the triple graph, learns the relationships between neighboring nodes. We show that the embedding model applying the proposed method is superior to the existing Triple2Vec model through category classification experiments on the DBLP, DBpedia, and IMDB datasets.
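
The abstract does not give implementation details, but a single graph-convolution step over a triple graph can be illustrated as follows. This is a minimal sketch under assumed inputs (random adjacency and feature matrices, a one-layer mean-aggregation rule), not the authors' actual model.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step: aggregate neighbor features, then project.

    adj      : (n, n) adjacency matrix of the triple graph (1 = neighboring triples)
    features : (n, d_in) node feature matrix
    weights  : (d_in, d_out) learnable projection
    """
    adj_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg_inv = 1.0 / adj_hat.sum(axis=1, keepdims=True)
    propagated = deg_inv * (adj_hat @ features)   # mean over each node's neighborhood
    return np.maximum(propagated @ weights, 0.0)  # ReLU

# Toy example: 5 triple nodes with 8-dimensional input features.
rng = np.random.default_rng(0)
adj = (rng.random((5, 5)) > 0.6).astype(float)
adj = np.maximum(adj, adj.T)                      # make the graph undirected
feats = rng.normal(size=(5, 8))
w = rng.normal(size=(8, 4))

neighborliness = gcn_layer(adj, feats, w)         # one row per triple node
print(neighborliness.shape)                       # (5, 4)
```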

Power Disturbance Detection using the Inflection Point Estimation (변곡점 추정을 이용한 전력선 신호의 이상현상 검출)

  • Iem, Byeong-Gwan
    • Journal of IKEEE
    • /
    • v.25 no.4
    • /
    • pp.710-715
    • /
    • 2021
  • Power line signals can show disturbances due to various causes. Typical anomalies are temporary sag/swell of the amplitude, flat-topped signals, and harmonic distortion. These disturbances need to be detected and treated properly to maintain the quality of the power signal. In this study, power disturbances are detected using inflection points (IPs), defined as points where local maxima/minima or slope changes occur. The power line signal has a fixed IP pattern since it is basically sinusoidal, and additional inflection points may appear if there is any disturbance. A disturbance is detected by comparing the IP patterns of the normal and distorted signals. In addition, by defining a cost function, the time instant at which the disturbance occurs can be determined. Computer simulations show that the proposed method is useful for detecting various disturbances. A simple sag or swell signal only shows amplitude changes at the detected inflection points, whereas the flat-top signal and the harmonically distorted signal produce additional inflection points and large values in the cost function. These results can be exploited for further processing such as disturbance classification.
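
As a rough illustration of inflection-point-based detection (a simplified reading of the abstract, not the paper's exact algorithm), the sketch below locates local extrema and abrupt slope changes in a sampled waveform; a cycle whose inflection-point count differs from that of a clean sine is flagged as distorted. The sampling rate and slope threshold are assumptions.

```python
import numpy as np

def inflection_points(signal, slope_jump=0.05):
    """Indices where the signal has a local max/min or an abrupt slope change."""
    d1 = np.diff(signal)
    extrema = np.where(np.sign(d1[:-1]) != np.sign(d1[1:]))[0] + 1   # slope sign flips
    d2 = np.abs(np.diff(d1))
    kinks = np.where(d2 > slope_jump)[0] + 1                         # sudden slope jumps
    return np.union1d(extrema, kinks)

fs, f0 = 3200, 60                       # assumed sampling rate and mains frequency
t = np.arange(0, 1 / f0, 1 / fs)        # one cycle
clean = np.sin(2 * np.pi * f0 * t)
flat_top = np.clip(clean, -0.8, 0.8)    # flat-topped (clipped) disturbance

n_clean = len(inflection_points(clean))
n_flat = len(inflection_points(flat_top))
print(n_clean, n_flat)                  # the flat-top cycle yields extra inflection points
```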

Hair Classification and Region Segmentation by Location Distribution and Graph Cutting (위치 분포 및 그래프 절단에 의한 모발 분류와 영역 분할)

  • Kim, Yong-Gil;Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.3
    • /
    • pp.1-8
    • /
    • 2022
  • Recently, Google MediaPipe presented a novel approach to neural-network-based hair segmentation from a single camera input, designed specifically for real-time mobile applications. Although the hair-segmentation network is relatively small, it produces a high-quality hair mask that is well suited to AR effects such as realistic hair recoloring. However, it produces undesirable segmentation results for certain hair styles or when the input contains noise and holes. In this study, an energy function for the test image is constructed from an estimated prior distribution of hair location and a hair-color likelihood function. The energy is then optimized with a graph-cut algorithm to obtain an initial hair region. Finally, a clustering algorithm and image post-processing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. The proposed method is applied to the MediaPipe hair segmentation pipeline.
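
The paper's exact energy formulation is not given in the abstract, so the following is only a rough sketch of the general "location prior + graph cut" idea, using OpenCV's GrabCut as a stand-in for the graph-cut step; the mask initialization, file names, and thresholds are assumptions.

```python
import cv2
import numpy as np

# Hypothetical input image and a coarse prior mask of likely hair pixels
# (in the paper this prior comes from the estimated hair-location distribution).
img = cv2.imread("portrait.jpg")
h, w = img.shape[:2]

mask = np.full((h, w), cv2.GC_PR_BGD, dtype=np.uint8)   # default: probably background
mask[: h // 2, :] = cv2.GC_PR_FGD                       # crude prior: hair tends to sit in the upper half

bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Graph-cut optimization of the color/location energy (GrabCut used as a stand-in).
cv2.grabCut(img, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

hair_region = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite("hair_mask.png", hair_region)
```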

Anomaly Detection of Generative Adversarial Networks considering Quality and Distortion of Images (이미지의 질과 왜곡을 고려한 적대적 생성 신경망과 이를 이용한 비정상 검출)

  • Seo, Tae-Moon;Kang, Min-Guk;Kang, Dong-Joong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.3
    • /
    • pp.171-179
    • /
    • 2020
  • Recently, studies have shown that convolutional neural networks achieve the best performance in image classification, object detection, and image generation. Vision-based defect inspection, which is more economical than other inspection methods, is very important for factory automation. Although supervised anomaly detection algorithms far exceed the performance of traditional machine-learning-based methods, they are impractical for real industrial settings because of the tedious annotation work they require. In this paper, we propose ADGAN, an unsupervised anomaly detection architecture that uses a variational autoencoder and a generative adversarial network, both of which give excellent results in image generation, and we demonstrate that the proposed architecture identifies anomalous images well on the MNIST benchmark dataset as well as on our own welding defect dataset.
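
ADGAN's full VAE-GAN architecture is not described in the abstract, so the sketch below only illustrates the underlying idea of unsupervised, reconstruction-based anomaly scoring, using a small convolutional autoencoder in PyTorch; the layer sizes and thresholding scheme are arbitrary assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Minimal convolutional autoencoder for 1x28x28 images (MNIST-sized)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 7 -> 14
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 14 -> 28
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, x):
    """Per-image reconstruction error; large errors suggest anomalous inputs."""
    with torch.no_grad():
        recon = model(x)
    return ((x - recon) ** 2).mean(dim=(1, 2, 3))

# Usage sketch: after training on normal images only, images whose score
# exceeds a validation-chosen threshold would be flagged as anomalous.
model = TinyAutoencoder().eval()
batch = torch.rand(8, 1, 28, 28)          # stand-in for test images
print(anomaly_score(model, batch))
```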

GAM: A Criticality Prediction Model for Large Telecommunication Systems (GAM: 대형 통신 시스템을 위한 위험도 예측 모델)

  • Hong, Euy-Seok
    • The Journal of Korean Association of Computer Education
    • /
    • v.6 no.2
    • /
    • pp.33-40
    • /
    • 2003
  • Criticality prediction models that determine whether a design entity is fault-prone or not play an important role in reducing system development costs, because problems in the early phases largely determine the quality of the final product. Real-time systems such as telecommunication systems are so large that criticality prediction is even more important in their design. Current models are based on techniques such as discriminant analysis, neural networks, and classification trees; they are hard to use for analyzing the causes of their predictions and have low extendability. This paper builds a new prediction model, GAM, based on a genetic algorithm. GAM differs from other models in that it produces a criticality function, so it can be used to compare entities by criticality. GAM is implemented and compared with a well-known prediction model, the backpropagation neural network model (BPM), in terms of internal characteristics and prediction accuracy.
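
The abstract does not spell out GAM's chromosome encoding or fitness function, so the following is only a generic sketch of evolving the weights of a linear criticality function with a genetic algorithm; the synthetic data, population size, and fitness definition are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: design-entity metrics and fault-proneness labels.
n_entities, n_metrics = 200, 5
X = rng.random((n_entities, n_metrics))
y = (X @ np.array([0.6, 0.1, 0.9, 0.2, 0.4]) + 0.1 * rng.normal(size=n_entities)) > 1.1

def fitness(weights):
    """Accuracy when entities scoring above the median are called fault-prone."""
    scores = X @ weights
    predicted = scores > np.median(scores)
    return (predicted == y).mean()

pop = rng.random((30, n_metrics))                   # population of weight vectors
for generation in range(100):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-10:]]            # keep the 10 fittest
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(n_metrics) < 0.5, a, b)      # uniform crossover
        child += rng.normal(scale=0.05, size=n_metrics)          # mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(w) for w in pop])]
print("evolved criticality weights:", np.round(best, 3))
```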

Low-dose CT Image Denoising Using Classification Densely Connected Residual Network

  • Ming, Jun;Yi, Benshun;Zhang, Yungang;Li, Huixin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.6
    • /
    • pp.2480-2496
    • /
    • 2020
  • Considering that the high X-ray dose delivered during CT scans may pose risks to patients, the medical imaging industry has placed increasing emphasis on low-dose CT. Because of the complex statistical characteristics of the noise found in low-dose CT images, many traditional methods have difficulty preserving structural details while suppressing noise and artifacts. Inspired by deep learning techniques, we propose a densely connected residual network (DCRN) for low-dose CT image denoising, which combines the ideas of dense connection and residual learning. On one hand, dense connection maximizes information flow between layers in the network, which helps preserve structural details when denoising images. On the other hand, residual learning paired with batch normalization allows faster training and better noise reduction. Experiments are performed on 100 CT images selected from a public medical dataset, TCIA (The Cancer Imaging Archive). Compared with three other competitive denoising algorithms, both subjective visual quality and objective evaluation indexes, including PSNR, RMSE, MAE, and SSIM, show that the proposed network improves LDCT image quality more effectively while maintaining a low computational cost. In the objective evaluation, the proposed method achieves the best PSNR of 33.67, RMSE of 5.659, MAE of 1.965, and SSIM of 0.9434; in particular, for RMSE, the proposed network improves on the best-performing comparison algorithm by 7 percentage points.
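
The exact DCRN layer configuration is not given in the abstract, so the block below is only a minimal PyTorch sketch of combining dense connections (each layer sees the concatenation of all earlier feature maps) with a residual shortcut around the whole block; the channel counts and depth are assumptions.

```python
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Dense connectivity inside the block, residual connection around it."""
    def __init__(self, channels=64, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth                        # next layer sees all previous outputs
        self.fuse = nn.Conv2d(in_ch, channels, 1)  # 1x1 conv back to the input width

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))   # residual shortcut

block = DenseResidualBlock()
print(block(torch.rand(1, 64, 32, 32)).shape)            # torch.Size([1, 64, 32, 32])
```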

Error Recovery by the Classification of Candidate Motion Vectors for H.263 Video Communications (후보벡터 분류에 의한 영상 에러 복원)

  • Son, Nam-Rye;Lee, Guee-Sang
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.163-168
    • /
    • 2003
  • When compressed video bit-streams are transmitted over the Internet, packet loss causes error propagation in both the spatial and temporal domains, which in turn leads to severe degradation of image quality. In this paper, a new approach is proposed for recovering lost or erroneous motion vectors (MVs) by classifying the movements of neighboring blocks according to their homogeneity. The MVs of neighboring blocks are classified by direction, and a representative value is determined for each class to obtain the candidate MV set. The distortion of each candidate is computed, and the MV with the minimum distortion is selected. Experimental results show that the proposed algorithm outperforms existing methods in many cases.
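
As a rough sketch of the candidate-selection idea (a simplified reading; the paper's homogeneity classification and distortion measure are not specified in the abstract), the code below groups neighboring motion vectors by direction, takes a representative per group, and picks the candidate minimizing a stand-in distortion function.

```python
import numpy as np

def candidate_mvs(neighbor_mvs, n_direction_bins=4):
    """Group neighboring MVs by direction; return one representative (median) per group."""
    neighbor_mvs = np.asarray(neighbor_mvs, dtype=float)
    angles = np.arctan2(neighbor_mvs[:, 1], neighbor_mvs[:, 0])
    bins = ((angles + np.pi) / (2 * np.pi) * n_direction_bins).astype(int) % n_direction_bins
    return [np.median(neighbor_mvs[bins == b], axis=0) for b in np.unique(bins)]

def recover_mv(candidates, distortion):
    """Pick the candidate MV whose distortion (e.g. boundary matching error) is smallest."""
    return min(candidates, key=distortion)

# Toy usage: neighboring block MVs and a stand-in distortion function.
neighbors = [(3, 1), (4, 0), (-2, -1), (3, 2)]
cands = candidate_mvs(neighbors)
best = recover_mv(cands, distortion=lambda mv: np.linalg.norm(mv - np.array([3.0, 1.0])))
print(best)
```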

Document Clustering using Term reweighting based on NMF (NMF 기반의 용어 가중치 재산정을 이용한 문서군집)

  • Lee, Ju-Hong;Park, Sun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.4
    • /
    • pp.11-18
    • /
    • 2008
  • Document clustering is an important method for document analysis and is used in many information retrieval applications. This paper proposes a new document clustering model that uses NMF (non-negative matrix factorization) with re-weighted terms to cluster documents relevant to a user's requirements. The proposed model re-weights terms using user feedback, which reduces the gap between the user's intent for document classification and the clusters produced by the machine. The method can improve the quality of document clustering because the re-weighted terms, together with the semantic feature matrix and the semantic variable matrix used in clustering, represent the inherent structure of the document set better. Experimental results demonstrate that applying the proposed method to document clustering achieves better performance than existing document clustering methods.
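
The paper's re-weighting formula is not given in the abstract, so the snippet below only sketches the general pipeline under assumed details: boost the TF-IDF weights of user-feedback terms, factorize with NMF, and assign each document to its strongest semantic feature.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "neural network image classification",
    "deep learning for image recognition",
    "stock market price prediction",
    "financial time series forecasting",
]
feedback_terms = {"image"}          # hypothetical terms the user marked as important

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs).toarray()

# Re-weight columns of the term-document matrix for user-preferred terms.
vocab = vectorizer.get_feature_names_out()
for j, term in enumerate(vocab):
    if term in feedback_terms:
        X[:, j] *= 2.0              # assumed boost factor

# W: document-by-feature (semantic variable) matrix, H: feature-by-term (semantic feature) matrix.
nmf = NMF(n_components=2, init="nndsvd", random_state=0, max_iter=500)
W = nmf.fit_transform(X)
clusters = W.argmax(axis=1)         # each document joins its strongest semantic feature
print(clusters)
```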

Meteorological Information Analysis Algorithm based on Weight for Outdoor Activity Decision-Making (야외활동 의사결정을 위한 가중치 기반 기상정보 분석 알고리즘)

  • Lee, Moo-Hun;Kim, Min-Gyu
    • Journal of Digital Convergence
    • /
    • v.14 no.3
    • /
    • pp.209-217
    • /
    • 2016
  • Recently, outdoor activities have increased along with economic growth and an improved quality of life, and weather and outdoor activities are closely related. Currently, decisions about outdoor activities are made from the Korea Meteorological Administration's forecasts and subjective experience, so an analysis method is needed that can provide a basis for such decisions using meteorological information. In this paper, we propose a data-mining-based algorithm that analyzes meteorological information to support decision-making about outdoor activities. We constructed training data by combining baseball game schedules with automatic weather system observation data, and we verified the improved performance of the proposed algorithm.
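
The abstract does not define the weighting scheme, so the sketch below only illustrates the general idea of a weighted score over weather attributes for a go/no-go decision; the attributes, weights, and threshold are all assumptions rather than the paper's parameters.

```python
# Hypothetical weighted scoring of weather attributes for an outdoor-activity decision.
WEIGHTS = {"precipitation_mm": -0.5, "wind_ms": -0.2, "temperature_gap": -0.1, "clear_sky": 0.4}
IDEAL_TEMP_C = 22.0

def activity_score(observation):
    """Weighted sum of weather features; higher means more favorable."""
    features = {
        "precipitation_mm": observation["precipitation_mm"],
        "wind_ms": observation["wind_ms"],
        "temperature_gap": abs(observation["temperature_c"] - IDEAL_TEMP_C),
        "clear_sky": 1.0 if observation["cloud_octas"] <= 2 else 0.0,
    }
    return sum(WEIGHTS[k] * v for k, v in features.items())

obs = {"precipitation_mm": 0.0, "wind_ms": 3.5, "temperature_c": 25.0, "cloud_octas": 1}
score = activity_score(obs)
print("score:", round(score, 2), "->", "go" if score > -1.0 else "no-go")
```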