• Title/Summary/Keyword: Quality Classification


A Feasibility Study on the Outsourcing of Cataloging in the Libraries (목록 아웃소싱의 타당성 분석에 관한 연구)

  • Chung Hye-Kyung
    • Journal of the Korean Society for Library and Information Science / v.39 no.2 / pp.35-55 / 2005
  • This study attempts a feasibility analysis of cataloging outsourcing. The economic analysis model, based on information economics, categorizes the benefit into direct benefit and value linking: direct benefit is measured by cost savings and cost avoidance, and value linking by the degree of improvement in cataloging quality. The results show no overall feasibility, because librarians spent more time controlling quality due to the vendors' lack of professionalism, resulting in little effect on cost savings. When cataloging outsourcing is forced under such economically infeasible conditions, the basic purposes of reducing operating costs and improving service functions cannot be achieved.

Scheduling Algorithms for QoS Provision in Broadband Convergence Network (광대역통합 네트워크에서의 스케쥴링 기법)

  • Jang, Hee-Seon;Cho, Ki-Sung;Shin, Hyun-Chul;Lee, Jang-Hee
    • Convergence Security Journal / v.7 no.2 / pp.39-47 / 2007
  • The scheduling algorithms that provide quality of service (QoS) in the broadband convergence network (BcN) are compared and analysed. The main QoS management methods, such as traffic classification, traffic processing in the input queue, and weighted queueing, are analysed first, followed by the major scheduling algorithms recently considered for BcN to support real-time multimedia communications: round robin, priority, and weighted round robin. The simulation results using NS-2 show that a scheduling algorithm with proper weights for each traffic class outperforms the priority algorithm.

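The abstract compares round robin, priority, and weighted round robin (WRR) scheduling for BcN traffic classes. A minimal WRR sketch, assuming illustrative class names and weights that are not taken from the paper:

```python
from collections import deque

def weighted_round_robin(queues, weights, budget):
    """Serve packets from per-class FIFO queues in proportion to their
    weights; `queues` maps class name -> deque of packets."""
    served = []
    while budget > 0 and any(queues.values()):
        for cls, weight in weights.items():
            # Each round, a class may send up to `weight` packets.
            for _ in range(weight):
                if budget == 0 or not queues[cls]:
                    break
                served.append((cls, queues[cls].popleft()))
                budget -= 1
    return served

# Illustrative traffic classes and weights (not from the paper).
queues = {
    "voice": deque(["v1", "v2", "v3"]),
    "video": deque(["d1", "d2"]),
    "data":  deque(["b1", "b2", "b3", "b4"]),
}
order = weighted_round_robin(queues, {"voice": 3, "video": 2, "data": 1}, budget=9)
```

With these weights, voice traffic drains first each round, which is the sense in which per-class weights let real-time traffic outperform a best-effort share.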

A Study on the Quality Control of UIS DB (UIS 데이터베이스 품질관리에 관한 연구)

  • Kim, Kye-Hyun;Kim, Tae-Hwa;Lee, Woo-Chul
    • Journal of Korean Society for Geospatial Information Science / v.8 no.2 s.16 / pp.79-88 / 2000
  • Building a high-quality database is essential when developing a UIS to enhance the administrative effectiveness of municipal governments. To secure such a high-quality DB, a proper quality control methodology should be established. Such a method must fit the UIS DB, considering that conventional methods have mainly focused on quality control of the digital layers themselves. Therefore, this study analyzed the city of Inchon's UIS DB to devise a proper method for categorizing the types of errors and identifying the major relevant items. The magnitude and frequency of each error, along with its major causes, were also analyzed to propose a quality control procedure that minimizes the errors.


Liquefaction Characteristics of HDPE by Pyrolysis (HDPE의 열분해에 의한 액화 특성)

  • 유홍정;이봉희;김대수
    • Polymer(Korea) / v.27 no.1 / pp.84-89 / 2003
  • Pyrolysis of high-density polyethylene (HDPE) was carried out to find the effects of temperature and time on the pyrolysis. The starting temperature and activation energy of HDPE pyrolysis increased with increasing heating rate. In general, conversion and liquid yield increased continuously with pyrolysis temperature and time; this tendency is very sensitive to pyrolysis time, especially at 450 °C. Pyrolysis temperature has more influence on conversion than pyrolysis time. Each liquid product formed during pyrolysis was classified into gasoline, kerosene, light oil, and wax according to the distillation temperature, based on the petroleum product quality standard of the Korea Petroleum Quality Inspection Institute. As a result, the amounts of liquid products produced during HDPE pyrolysis at 450 °C were in the order light oil > wax > kerosene > gasoline, and at 475 °C and 500 °C, wax > light oil > kerosene > gasoline.

Refining Software Vulnerability Analysis under ISO/IEC 15408 and 18045 (ISO/IEC 15408, 18045 기반 소프트웨어 취약성 분석 방법론)

  • Im, Jae-Woo
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.5 / pp.969-974 / 2014
  • CC (Common Criteria) evaluation of IT security products requires collecting vulnerability information and analyzing it through penetration testing. Under time constraints, developers cannot help but apply vulnerability analysis to their products somewhat at random. Without systematic vulnerability analysis, the results inevitably vary with the developers' competence in vulnerability analysis, so products end up with different security quality despite the same level of security assurance. The situation is even worse for IT products that are not obliged to undergo CC evaluation and its vulnerability analysis. This study describes not only how to apply a vulnerability taxonomy to IT security vulnerabilities but also how to manage the security quality of IT security products in practice.

Design of Distributed Processing Framework Based on H-RTGL One-class Classifier for Big Data (빅데이터를 위한 H-RTGL 기반 단일 분류기 분산 처리 프레임워크 설계)

  • Kim, Do Gyun;Choi, Jin Young
    • Journal of Korean Society for Quality Management / v.48 no.4 / pp.553-566 / 2020
  • Purpose: The purpose of this study was to design a framework for generating a one-class classification algorithm based on hyper-rectangles (H-RTGL) in a distributed environment connected by a network. Methods: First, we devised a one-class classifier based on H-RTGL that can be executed by distributed computing nodes, considering both model and data parallelism. We then designed facilitating components for the execution of distributed processing. Finally, we validated both the effectiveness and the efficiency of the classifier obtained from the proposed framework through a numerical experiment using a dataset from the UCI machine learning repository. Results: We designed a distributed processing framework capable of H-RTGL-based one-class classification in a distributed environment consisting of physically separated computing nodes. It includes components implementing model and data parallelism, which enable distributed generation of the classifier. From the numerical experiment, we observed no significant change in classification performance, as assessed by a statistical test, while the elapsed time was reduced by applying distributed processing to datasets of considerable size. Conclusion: Based on these results, we conclude that applying distributed processing to classifier generation preserves classification performance while improving the efficiency of the classification algorithm. We also suggest future research directions and note the limitations of this work.
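As a rough illustration of hyper-rectangle-based one-class classification, the sketch below fits a single axis-aligned bounding box to training points and labels a query point by membership. The actual H-RTGL algorithm builds multiple rectangles by merging intervals and supports the distributed generation described above, which this simplified sketch omits; the training points are made up:

```python
def fit_hyper_rectangle(points):
    """Fit one axis-aligned hyper-rectangle enclosing the training data:
    per-dimension (min, max) bounds. A simplification of H-RTGL, which
    builds several rectangles via interval merging."""
    dims = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dims)]
    hi = [max(p[d] for p in points) for d in range(dims)]
    return lo, hi

def classify(point, bounds):
    """Return True (target class) if the point lies inside the rectangle."""
    lo, hi = bounds
    return all(l <= x <= h for x, l, h in zip(point, lo, hi))

train = [(1.0, 2.0), (2.0, 3.0), (1.5, 2.5)]
bounds = fit_hyper_rectangle(train)      # lo=[1.0, 2.0], hi=[2.0, 3.0]
inside = classify((1.2, 2.2), bounds)    # inside the box
outside = classify((5.0, 0.0), bounds)   # outside the box
```

Because the per-dimension min/max reductions are independent, they parallelize naturally over data partitions, which is the intuition behind the framework's data parallelism.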

A Study on the Applicability of Digit Frequency Analysis to Hydrological Data (수문학적 데이터의 자릿수 빈도 분석 적용가능성 연구)

  • Jung Eun Park;Seung Jin Maeng;Kwang Suop Lim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.102-102 / 2023
  • Benford's Law describes the phenomenon that, when numerical data observed in everyday life are classified by their first digit, the observed frequency gradually decreases as the first digit grows larger. The law has been derived as a general formula that extends to further digit positions, and its validity has been confirmed on numerical data in accounting, the social sciences, physics, computer science, biology, and other fields. Analyzing digit frequencies alone makes it possible to detect data manipulation quickly and easily in large volumes of real-world data, and the technique is used effectively as a first-pass data quality check. In this study, taking a multidisciplinary approach, we apply Benford's Law, a mathematical and physical law, to various hydrological measurements such as daily discharge to examine its applicability, and we propose a methodology that can quickly detect heterogeneity in the data and assess their reliability. Hydrological data gain reliability through an official review process, but finalization and release take about two years, so users continue to ask for a shorter turnaround. This study therefore sets up the hypothesis that the observed digit frequencies of the target data follow the frequencies expected under Benford's Law, and tests the goodness of fit for statistical significance with the chi-square test or the Kolmogorov-Smirnov (K-S) test, allowing a quick, if approximate, judgment of the reliability of measured data. By attempting a new approach that combines different disciplines, this study is expected to help decision-making in the development, management, and operation of water resources in the big data era.

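The digit-frequency check described above can be sketched with the standard library alone. The sample data here are powers of two, a sequence known to follow Benford's Law closely, not hydrological records:

```python
import math
from collections import Counter

def first_digit(x):
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi_square(values):
    """Chi-square statistic of observed first-digit counts against
    Benford's Law, P(d) = log10(1 + 1/d)."""
    counts = Counter(first_digit(v) for v in values)
    n = len(values)
    stat = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        stat += (counts.get(d, 0) - expected) ** 2 / expected
    return stat

# Powers of two follow Benford's Law; a conforming sample yields a
# small chi-square statistic (good fit).
sample = [2 ** k for k in range(1, 101)]
stat = benford_chi_square(sample)
```

In practice the statistic would be compared against the chi-square critical value with 8 degrees of freedom to accept or reject the Benford hypothesis.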

A Study on Optimization of Classification Performance through Fourier Transform and Image Augmentation (푸리에 변환 및 이미지 증강을 통한 분류 성능 최적화에 관한 연구)

  • Kihyun Kim;Seong-Mok Kim;Yong Soo Kim
    • Journal of Korean Society for Quality Management / v.51 no.1 / pp.119-129 / 2023
  • Purpose: This study proposes a classification model for implementing condition-based maintenance (CBM) by monitoring the real-time status of a machine using acceleration sensor data collected from a vehicle. Methods: The classification model's performance was improved by applying Fourier transform to convert the acceleration sensor data from the time domain to the frequency domain. Additionally, the Generative Adversarial Network (GAN) algorithm was used to augment images and further enhance the classification model's performance. Results: Experimental results demonstrate that the GAN algorithm can effectively serve as an image augmentation technique to enhance the performance of the classification model. Consequently, the proposed approach yielded a significant improvement in the classification model's accuracy. Conclusion: While this study focused on the effectiveness of the GAN algorithm as an image augmentation method, further research is necessary to compare its performance with other image augmentation techniques. Additionally, it is essential to consider the potential for performance degradation due to class imbalance and conduct follow-up studies to address this issue.
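The time-to-frequency conversion step described in the Methods can be sketched with a naive discrete Fourier transform. The signal below is a synthetic sine, not real acceleration data, and the sampling parameters are illustrative:

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform: magnitude spectrum of a real
    time-domain signal (O(N^2), fine for a short illustrative window)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Synthetic "acceleration" window: an 8 Hz sine sampled at 64 Hz for 1 s.
fs, f0, n = 64, 8, 64
signal = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
spectrum = dft_magnitudes(signal)

# Ignoring the mirrored upper half, the dominant bin sits at f0 = 8 Hz.
peak_bin = max(range(n // 2), key=lambda k: spectrum[k])
```

The magnitude spectrum (or an image rendered from it) is what the classifier would consume instead of the raw time-domain samples; a real pipeline would use an FFT rather than this quadratic-time version.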

A Study on the Improvement of the Defense-related International Patent Classification using Patent Mining (특허 마이닝을 이용한 국방관련 국제특허분류 개선 방안 연구)

  • Kim, Kyung-Soo;Cho, Nam-Wook
    • Journal of Korean Society for Quality Management / v.50 no.1 / pp.21-33 / 2022
  • Purpose: As most defense technologies are classified as confidential, the corresponding International Patent Classifications (IPCs) require special attention; consequently, the list of defense-related IPCs has been managed by the government. This paper aims to evaluate the defense-related IPCs and propose a methodology to revalidate and improve the IPC classification scheme. Methods: Patents in military technology and their corresponding IPCs from 2009 to 2020 were utilized. Prior to the analysis, the patents were divided into private and public sectors. Social network analysis was used to analyze the convergence structure and central defense technologies, and association rule mining was used to analyze the convergence patterns. Results: While the public sector was highly cohesive, the private sector was characterized by easy convergence between technologies. Narrow convergence was observed in the public sector and wide convergence in the private sector. By analyzing the core technologies of defense technology, defense-related IPC candidates were identified. Conclusion: This paper presents a comprehensive perspective on the structure and patterns of convergence in defense technology, and it proposes a method for revising the defense-related IPCs. The results of this study are expected to serve as guidelines for preparing amendments to the government's defense-related IPC list.
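The support and confidence measures that underlie association rule mining can be sketched as follows; the IPC-code "baskets" here are made up for illustration and are not the paper's data:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    itemset = set(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent | antecedent) over the transaction list."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

# Illustrative "patents as baskets of IPC codes" (codes chosen arbitrarily).
patents = [
    {"F41A", "F41G"},
    {"F41A", "F41G", "G01S"},
    {"F41A", "G01S"},
    {"G06F"},
]
s = support({"F41A", "F41G"}, patents)          # co-occurrence in 2 of 4
c = confidence({"F41A"}, {"F41G"}, patents)     # 2 of the 3 F41A patents
```

Rules whose support and confidence clear chosen thresholds would indicate IPC pairs that frequently converge, which is how candidate defense-related IPCs can be surfaced.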

A Study on Prototyping and Classification of Meta Data for Teaching-Learning Content Management (교수-학습 컨텐츠 관리를 위한 메타데이터 분류 및 프로토타이핑에 관한 연구)

  • Song Yu-Jin;Kim Haeng-Kon;Moon Hyun Chang
    • Proceedings of the Korea Association of Information Systems Conference / 2004.05a / pp.265-268 / 2004
  • Recently, as e-Learning has emerged as an educational alternative suited to the digital knowledge-based society, research on standardization to support system interoperability and the specification and use of content has spread rapidly at home and abroad. In particular, international standards bodies have established the Learning Technology Standard Architecture (LTSA) for e-Learning development environments and the Sharable Content Object Reference Model (SCORM), enabling the use and interoperation of content, thereby increasing the efficiency of e-Learning and expanding its industrial market. Many educational companies now develop and provide learning content based on the SCORM framework. Accordingly, this paper defines the technology for developing a content management system that systematically supports learning content built on the SCORM international standard, and identifies and classifies content metadata from various viewpoints so that it can serve as basic information for content creation, storage, retrieval, and further configuration management. By presenting a prototype of a learning content management system based on this metadata, content development becomes easier and its quality and productivity improve through better reusability and maintainability.
