• Title/Abstract/Keyword: data process

Search results: 23,656 items (processing time: 0.043 seconds)

Process Capability Analysis by a New Process Incapability Index

  • Kim, Hee-Jung;Cho, Gyo-Young
    • Journal of the Korean Data and Information Science Society, Vol. 18, No. 2, pp. 457-469, 2007
  • Process Capability Indexes (PCIs) are used as measures for process capability analysis and provide a statistical method for efficient process control. The fourth-generation PCI, $C_{psk}$, is constructed from $C_{pmk}$ by introducing the factor $|\mu-T|$ in the numerator as an extra penalty for the departure of the process mean from the preassigned target value $T$. Process Incapability Indexes (PIIs) are obtained by inverting PCIs and retain the information of the corresponding PCI. This paper introduces the PII $C_{ss}^*$, which provides managers with various kinds of process information and incorporates Gage R&R. The PII $C_{ss}^*$ is obtained by inverting the PCI $C_{psk}$ and retains the information of $C_{psk}$.

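For reference, the standard capability indices that $C_{psk}$ builds on have the textbook definitions below; this is only a sketch of the generational lineage, and the exact forms of $C_{psk}$ and $C_{ss}^*$ are as defined in the paper.

```latex
% Textbook definitions of the first- through third-generation indices;
% the exact forms of C_{psk} and C_{ss}^* follow the paper.
\[
  C_p = \frac{USL - LSL}{6\sigma}, \qquad
  C_{pk} = \frac{d - |\mu - m|}{3\sigma},
\]
\[
  C_{pm} = \frac{USL - LSL}{6\sqrt{\sigma^2 + (\mu - T)^2}}, \qquad
  C_{pmk} = \frac{d - |\mu - m|}{3\sqrt{\sigma^2 + (\mu - T)^2}},
\]
% where d = (USL - LSL)/2 and m = (USL + LSL)/2. Per the abstract,
% C_{psk} adds an extra |mu - T| penalty to the numerator of C_{pmk},
% and the PII C_{ss}^* is obtained by inverting C_{psk}.
```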

Development of Process Analysis and Prediction System to Improve Yield in Plasma Etching Process Using Adaptively Trained Neural Network

  • 최문규;김훈모
    • 한국정밀공학회지, Vol. 16, No. 11, pp. 98-105, 1999
  • As ICs (Integrated Circuits) have become denser and more complicated, thorough process control is required to improve yield. For this purpose, experts have focused on automating process analysis, which stems from the strict data management practiced in semiconductor manufacturing. In this paper, we present a process analysis system that can analyze the causes behind an output after the processes have run. In addition, the plasma etching process, which strongly affects yield among semiconductor processes, is modeled to predict the output before the process runs. To approach this problem, we use adaptively trained neural networks, which exhibit superior accuracy over statistical techniques. In comparison with the methods of other papers, a method that takes the trend history of the input data into account is shown to offer advantages in both learning and prediction capability. This research takes the CD (Critical Dimension), which is critical in highly integrated circuits, as the output variable of the prediction model.

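As an illustration of the prediction side of such a system, the sketch below trains a small neural-network regressor to predict CD from hypothetical process settings plus a lagged trend-history feature. It assumes scikit-learn and synthetic data, and is not the paper's adaptive training scheme; all feature names are made up.

```python
# Minimal sketch (not the paper's adaptive training scheme): predict the
# etch CD from process settings plus a lagged "trend history" feature.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic lot data: [RF power, pressure, gas flow] per run (hypothetical).
X = rng.normal(size=(200, 3))
cd = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2] \
     + rng.normal(scale=0.05, size=200)

# Append the previous run's measured CD as a trend-history feature.
prev_cd = np.concatenate([[cd[0]], cd[:-1]])
X_hist = np.column_stack([X, prev_cd])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_hist[:150], cd[:150])
print("held-out R^2:", model.score(X_hist[150:], cd[150:]))
```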

A Study on the Platform for Big Data Analysis of Manufacturing Process

  • 구진희
    • 융합정보논문지, Vol. 7, No. 5, pp. 177-182, 2017
  • As major ICT technologies such as IoT, cloud computing, and big data are applied to manufacturing, the construction of smart factories is gathering pace. The key to implementing a smart factory is securing and analyzing data from inside and outside the factory, so the need for a big data analysis platform is growing. The purpose of this study is to configure a platform for big data analysis of manufacturing processes and to propose an integrated method for that analysis. The proposed platform is an RHadoop-based architecture that integrates the analysis tool R with Hadoop to process large data sets in a distributed manner; big data collected from the unit processes of an automated system and across the factory can be stored in and analyzed directly from Hadoop HBase, overcoming the limitations of conventional RDB-based analysis. Such a platform should be developed with the suitability of each unit process for a smart factory in mind, and it is expected to serve as a guide to building an IoT platform for small and medium-sized enterprises that want to introduce smart factories into their manufacturing processes.
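
The paper's platform is RHadoop-based (R integrated with Hadoop); as a rough stand-in for the HBase storage path it describes, the sketch below writes and scans process measurements from Python using the happybase Thrift client. The host, table, and column names are hypothetical.

```python
# Illustrative stand-in for the platform's HBase path (the paper itself
# uses RHadoop): store and scan unit-process measurements via happybase.
import happybase

conn = happybase.Connection('hbase-thrift-host')  # assumed Thrift gateway
table = conn.table('process_data')                # hypothetical table

# Store one sensor reading, keyed by unit process and timestamp.
table.put(b'line1-press-20170501T120000',
          {b'measure:temperature': b'182.4',
           b'measure:pressure': b'1.03'})

# Scan all rows for one unit process; aggregation happens client-side here.
for key, data in table.scan(row_prefix=b'line1-press-'):
    print(key, data)
```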

Data Load Process of Large-Sized Media for Avionics Using FTP and JSON

  • 최지환;최낙민;신재권
    • 한국항행학회논문지, Vol. 27, No. 5, pp. 610-620, 2023
  • Technological advances driven by the Fourth Industrial Revolution and competition among airlines to attract customers have increased interest in the aircraft interior market, and as part of this trend a CDS (Cabin Display System) for FAA Part 25 civil aircraft is being developed in Korea. The CDS provides passengers with various multimedia services using large flexible and transparent OLED (Organic Light Emitting Diode) displays controlled by an IDPM (Integrated Display Processing Module), and handling large media content is essential for delivering high-quality services. This paper presents a new approach to an efficient data load process for large files and describes its implementation and performance. Compared with the existing ARINC-615A process, the proposed approach is expected to reduce the development cost of the data load process and to be applicable as an alternative for avionics that require reliable transfer of large files.
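
The abstract does not spell out the protocol, so the following is only a generic sketch of the FTP-plus-JSON idea: upload a large media file in binary mode and pair it with a JSON manifest carrying a checksum that the target equipment can verify. Host, credentials, and file names are hypothetical.

```python
# Generic FTP + JSON data-load sketch (not the paper's exact protocol):
# push a large media file and a JSON manifest with an integrity checksum.
import ftplib
import hashlib
import io
import json
import os

MEDIA = "cabin_intro.mp4"  # hypothetical media file

# Hash the file so the loader on the target can verify the transfer.
sha256 = hashlib.sha256()
with open(MEDIA, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

manifest = json.dumps({
    "file": MEDIA,
    "size": os.path.getsize(MEDIA),
    "sha256": sha256.hexdigest(),
}).encode()

# Push the media file and its manifest to the loader (hypothetical host).
with ftplib.FTP("loader-host", "user", "password") as ftp:
    with open(MEDIA, "rb") as f:
        ftp.storbinary(f"STOR {MEDIA}", f)  # binary-mode upload
    ftp.storbinary("STOR manifest.json", io.BytesIO(manifest))
```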

Throughput Maximization for Cognitive Radio Users with Energy Constraints in an Underlay Paradigm

  • Vu, Van-Hiep;Koo, Insoo
    • Journal of information and communication convergence engineering, Vol. 15, No. 2, pp. 79-84, 2017
  • In a cognitive radio network (CRN), cognitive radio users (CUs) are typically powered by a small battery. A CU's operation includes spectrum sensing and data transmission. The spectrum sensing process helps the CU avoid collisions with the primary user (PU) and saves the energy otherwise wasted by transmitting while the PU is present. However, in a time-slotted system, the sensing process consumes energy and reduces the time available for transmitting data, which degrades the achieved throughput of the CRN. Consequently, the sensing process does not always offer a throughput advantage to the CRN. In this paper, we propose a scheme to find the optimal policy (i.e., perform spectrum sensing before transmitting data, or transmit data without sensing) for maximizing the achieved throughput of the CRN. In the proposed scheme, the data collection period is considered the main factor affecting the optimal policy. Simulation results show the advantages of the optimal policy.
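
A toy calculation makes the trade-off concrete: under a fixed energy budget, sensing spends time and energy in each slot but avoids transmissions wasted while the PU is busy. This is a simplified stand-in for the paper's model, with illustrative parameter values.

```python
# Toy comparison (a simplification, not the paper's model): expected total
# throughput of "sense then transmit" vs. "transmit blindly" under a
# fixed energy budget. All parameter values are illustrative.
import math

E = 100.0               # total energy budget (J)
e_s, e_t = 0.2, 1.0     # energy per slot: sensing, transmission (J)
T, tau = 0.1, 0.01      # slot length and sensing time (s)
p_idle = 0.6            # probability the PU is absent
p_fa = 0.1              # false-alarm probability when the PU is absent
c0 = math.log2(1 + 10.0)  # bits/s/Hz when the channel is free

# Policy A: sense first, transmit only if the slot looks idle.
e_slot_a = e_s + p_idle * (1 - p_fa) * e_t         # expected energy/slot
bits_a = (T - tau) / T * p_idle * (1 - p_fa) * c0  # expected bits/slot
total_a = (E / e_slot_a) * bits_a

# Policy B: skip sensing, transmit every slot (wasted when the PU is busy).
total_b = (E / e_t) * (p_idle * c0)

print(f"sense-first: {total_a:.1f}, no-sensing: {total_b:.1f}")
```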

Data Access Control Scheme Based on Blockchain and Outsourced Verifiable Attribute-Based Encryption in Edge Computing

  • Chao Ma;Xiaojun Jin;Song Luo;Yifei Wei;Xiaojun Wang
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 17, No. 7, pp. 1935-1950, 2023
  • The arrival of the Internet of Things and 5G technology enables users to rely on edge computing platforms to process massive data. Data sharing based on edge computing improves the efficiency of data collection and analysis and saves the communication cost of transmitting data back and forth, but it also risks leaking a large amount of private user data. Based on attribute-based encryption and blockchain technology, we design a fine-grained access control scheme for data in edge computing that is verifiable and supports outsourced decryption and user attribute revocation. User attributes are authorized by multiple attribute authorities, and the outsourced decryption computation of attribute-based encryption is performed by the edge server, which reduces the computing cost for end users. Meanwhile, we implement the user attribute revocation process through a dual encryption process involving the attribute authority and the blockchain. Compared with other schemes, our scheme manages users' attributes more flexibly. Blockchain technology also ensures verifiability in the outsourced decryption process, and the scheme reduces the space occupied by the ciphertext compared with other schemes. The user attribute revocation scheme realizes dynamic management of user attributes and protects their privacy.
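
No real cryptography is shown here, but the control flow of outsourced decryption can be sketched: the edge server checks the access policy and performs the expensive transformation, and the end device finishes with a cheap final step. The AND-gate policy and all names below are hypothetical simplifications, not the paper's scheme.

```python
# Conceptual control-flow sketch only (no real cryptography): outsourced
# decryption gates the expensive work on an attribute-policy check.
from dataclasses import dataclass

@dataclass
class Ciphertext:
    policy: frozenset   # attributes required (simplified AND-gate policy)
    payload: bytes      # stands in for the ABE-encrypted data

def satisfies(user_attrs: set, policy: frozenset) -> bool:
    """AND-gate policy: the user needs every attribute in the policy."""
    return policy <= user_attrs

def edge_partial_decrypt(ct: Ciphertext, user_attrs: set) -> bytes:
    # In a real scheme the edge server performs the heavy pairing
    # operations; here we only gate on the policy check.
    if not satisfies(user_attrs, ct.policy):
        raise PermissionError("attributes do not satisfy the policy")
    return ct.payload   # "transformed" ciphertext in a real scheme

ct = Ciphertext(policy=frozenset({"doctor", "cardiology"}), payload=b"...")
partial = edge_partial_decrypt(ct, {"doctor", "cardiology", "staff"})
# The end user then verifies and completes decryption locally (cheap step).
```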

A Study on Veracity of Raw Data Based on Value Creation - Focused on YouTube Monetization

  • CHOI, Seoyeon;SHIN, Seung-Jung
    • International Journal of Internet, Broadcasting and Communication, Vol. 13, No. 2, pp. 218-223, 2021
  • The five elements of big data are said to be Volume, Variety, Velocity, Veracity, and Value. Data lacking veracity, or outright fake data, not only leads to errors in decision making but also hinders the creation of value. Among these five factors, this study analyzed YouTube's revenue structure to focus on the effect of data veracity on data valuation. YouTube is one of the OTT service platforms, and owing to COVID-19 in 2020, YouTube creators emerged as a new profession. Among the revenue-generating models provided by YouTube, the process of generating advertising revenue based on click-based playback was analyzed, along with the process of subtracting the revenue generated by invalid activities, clicks that do not stem from viewers' genuine interest, before the final revenue is paid. Invalid activity in YouTube's revenue structure is raw data rather than viewers' genuine viewing activity, and it was confirmed to have a direct impact on revenue generation. Through the analysis of this process, a new Data Value Chain was proposed.
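
The revenue logic the study analyzes reduces to simple arithmetic: invalid activity is subtracted from gross ad revenue before payout, so low-veracity raw data directly destroys value. The rates below are made up for illustration and are not YouTube's actual figures.

```python
# Illustrative arithmetic only (rates are hypothetical): invalid activity
# is deducted from gross ad revenue before the final payout.
ad_impressions = 100_000   # click/playback-based ad impressions
cpm = 2.50                 # revenue per 1,000 valid impressions (USD)
invalid_rate = 0.12        # share flagged as invalid activity

gross = ad_impressions / 1000 * cpm
net = gross * (1 - invalid_rate)
print(f"gross: ${gross:.2f}, paid out after invalid activity: ${net:.2f}")
```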

A Design of the Improved Data Conversion Process for System Upgrade Project

  • Kim, Hee Wan
    • International journal of advanced smart convergence, Vol. 10, No. 2, pp. 187-193, 2021
  • Data conversion refers to the process of extracting the data that exists in an old system, that is, the past data accumulated by the legacy information system or by other means, and transferring it to the improved tables of the new system. The person in charge of data conversion manages the entire process of converting the data to the final destination tables according to rules designed and planned in advance. In most cases, data conversion design must be considered when an old system is replaced or when data from another existing system is converted and applied to a newly constructed information system. The goal of data conversion is to understand the current database system and operating environment, understand the characteristics of the DBMS in use, maintain an optimal database structure, and let the new system perform at its best. Data conversion methods are largely divided into conversion using a tool and conversion using purpose-written programs. In this paper, we examine the advantages and disadvantages of these data conversion methods and identify the problems of the existing approach. Based on this, an improved data conversion method for system upgrade projects is proposed and verified through a questionnaire survey of IT experts to demonstrate its effectiveness.
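
A program-based conversion of the kind the paper discusses boils down to extract-transform-load against pre-designed mapping rules; the sketch below uses a hypothetical legacy schema and SQLite stand-ins for the source and destination databases.

```python
# Minimal extract-transform-load sketch (hypothetical schema and rules):
# pull rows from a legacy table, apply a pre-designed mapping rule, and
# load them into the new system's improved table.
import sqlite3

src = sqlite3.connect("legacy.db")
dst = sqlite3.connect("new_system.db")
dst.execute("""CREATE TABLE IF NOT EXISTS customer (
                 id INTEGER PRIMARY KEY, name TEXT, status TEXT)""")

# Conversion rule designed in advance: legacy status codes -> new labels.
STATUS_MAP = {"1": "active", "2": "dormant", "9": "closed"}

rows = src.execute("SELECT cust_id, cust_nm, status_cd FROM tb_customer")
for cust_id, name, code in rows:
    dst.execute("INSERT INTO customer (id, name, status) VALUES (?, ?, ?)",
                (cust_id, name.strip(), STATUS_MAP.get(code, "unknown")))

dst.commit()  # verify row counts/checksums against the source afterwards
```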

Monitoring of Chemical Processes Using Modified Scale Space Filtering and Functional-Link-Associative Neural Network

  • 최중환;김윤식;장태석;윤인섭
    • 제어로봇시스템학회논문지, Vol. 6, No. 12, pp. 1113-1119, 2000
  • To operate a process plant safely and economically, process monitoring is very important. Process monitoring is the task of identifying the state of the system from sensor data, and it includes data acquisition, regulatory control, data reconciliation, fault detection, and so on. This research focuses on data reconciliation using scale-space filtering and on fault detection using functional-link associative neural networks. Scale-space filtering is a multi-resolution signal analysis method that can effectively extract the highest-frequency components (noise), but it suffers from large computational costs and end-effect problems. This research reduces the computational cost of scale-space filtering by applying a minimum limit to the Gaussian kernel, and the end effect that occurs at the end of the signal is overcome by extrapolation combined with a clustering-based change detection method. Nonlinear principal component analysis methods using neural networks are reviewed, and the separately expanded functional-link associative neural network is proposed for chemical process monitoring. It offers better learning capability, better generalization, and shorter training time than existing neural networks, and by expanding the input data separately it can express a statistical model close to the real process. Combining the proposed methods, modified scale-space filtering and fault detection with the separately expanded functional-link associative neural network, a process monitoring system is proposed in this research, and its usefulness is demonstrated by applying it to a boiler water supply unit.

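The sketch below shows plain Gaussian scale-space smoothing of a noisy sensor signal with a floor on the kernel width, echoing the paper's "minimum limit" idea; it is not the modified algorithm or the end-effect extrapolation, and the signal and parameters are illustrative.

```python
# Plain Gaussian scale-space smoothing with a floored kernel width
# (illustrative; not the paper's modified algorithm).
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
signal = np.sin(t) + 0.1 * (t > 5)  # step fault injected at t = 5
noisy = signal + rng.normal(scale=0.2, size=t.size)

SIGMA_MIN = 1.0                     # lower limit on the Gaussian kernel
scales = [max(s, SIGMA_MIN) for s in (0.5, 2.0, 8.0)]
smoothed = {s: gaussian_filter1d(noisy, sigma=s, mode="nearest")
            for s in scales}

# Coarser scales suppress more noise; comparing across scales helps
# localize abrupt changes such as the injected step fault.
for s, y in smoothed.items():
    print(f"sigma={s}: residual std = {np.std(noisy - y):.3f}")
```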

Statistical process control of dye solution stream using spectrophotometer

  • Lee, Won-Jae;Cho, Gyo-Young
    • Journal of the Korean Data and Information Science Society, Vol. 21, No. 6, pp. 1289-1303, 2010
  • The need for statistical process control to check the performance of a process is becoming more important in the chemical and pharmaceutical industries. This study illustrates a method to determine whether a process is in control and how to produce and interpret control charts. In the experiment, a stream of green-dyed water and a stream of pure water were continuously mixed in the process, and the concentration of the dye solution was measured before and after the mixer with a spectrophotometer. The in-line mixer benefited the dye and water mixture but not the stock dye solution. The control charts were analyzed, and the pre-mixer process was in control for both the stock and mixed solutions: the R and X-bar charts showed virtually all points within the control limits, and there were no patterns in the X-bar charts to suggest nonrandom data. However, the post-mixer process was shown to be out of control. While the R charts showed variability within the control limits, the X-bar charts were out of control and showed a steady increase in values, suggesting that the data were nonrandom. This steady increase in dye concentration was due to discontinuous, non-steady-state flow. To improve the experiment in the future, a mixer could be inserted into the stock dye tank to ensure that the dye concentration of the stock solution is more uniform before it enters the pre-mixer flow cell. Overall, this would create a better standard against which to judge the water and dye mixture data.
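
The X-bar and R limits described above follow the standard construction; a minimal computation for subgroups of size five, using the usual control-chart constants and synthetic absorbance data, looks like this:

```python
# Minimal X-bar / R chart computation (synthetic data; standard
# control-chart constants for subgroups of size n = 5).
import numpy as np

rng = np.random.default_rng(2)
# 20 subgroups of 5 absorbance readings from the spectrophotometer.
data = rng.normal(loc=1.00, scale=0.02, size=(20, 5))

xbar = data.mean(axis=1)                  # subgroup means
r = data.max(axis=1) - data.min(axis=1)   # subgroup ranges
xbarbar, rbar = xbar.mean(), r.mean()

A2, D3, D4 = 0.577, 0.0, 2.114            # constants for n = 5
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

out_x = np.where((xbar > ucl_x) | (xbar < lcl_x))[0]
out_r = np.where((r > ucl_r) | (r < lcl_r))[0]
print(f"X-bar limits: [{lcl_x:.4f}, {ucl_x:.4f}], out of control: {out_x}")
print(f"R limits:     [{lcl_r:.4f}, {ucl_r:.4f}], out of control: {out_r}")
```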