• Title/Summary/Keyword: quality measurement software


In-situ Phase Transition Study of Minerals using Micro-focusing Rotating-anode X-ray and 2-Dimensional Area Detector (집속 회전형 X-선원과 이차원 검출기를 이용한 광물의 실시간 상전이 연구)

  • Seoung, Dong-Hoon;Lee, Yong-Moon;Lee, Yong-Jae
    • Economic and Environmental Geology / v.45 no.2 / pp.79-88 / 2012
  • The increased brightness and focused X-ray beams now available from laboratory X-ray sources facilitate a variety of powder diffraction experiments not practical with conventional in-house sources. Furthermore, the increased availability of 2-dimensional area detectors, along with improved software and customized sample environment cells, makes new classes of in-situ and time-resolved diffraction experiments possible. These include phase transitions under variable pressure and temperature conditions and ion-exchange reactions. Examples of in-situ and time-resolved studies presented here include: (1) time-resolved data to evaluate the kinetics and mechanism of ion exchange in the mineral natrolite; (2) in-situ dehydration and thermal expansion behaviors of ion-exchanged natrolite; and (3) observations of the phases forming under controlled hydrostatic pressure conditions in ion-exchanged natrolite. Both the quantity and quality of the in-situ diffraction data are sufficient to allow evaluation of the reaction pathway and Rietveld analysis on selected datasets. These laboratory-based in-situ studies will increase the predictability of follow-up experiments at more specialized synchrotron beamlines.

Implementation of the Agent using Universal On-line Q-learning by Balancing Exploration and Exploitation in Reinforcement Learning (강화 학습에서의 탐색과 이용의 균형을 통한 범용적 온라인 Q-학습이 적용된 에이전트의 구현)

  • 박찬건;양성봉
    • Journal of KIISE: Software and Applications / v.30 no.7_8 / pp.672-680 / 2003
  • A shopbot is a software agent whose goal is to maximize buyer satisfaction by automatically gathering price and quality information on goods and services from on-line sellers. In response to shopbots' activities, sellers on the Internet need agents called pricebots that can help them maximize their own profits. In this paper we adopt Q-learning, one of the model-free reinforcement learning methods, as a price-setting algorithm for pricebots. A Q-learned agent increases profitability and eliminates cyclic price wars when compared with agents using the myoptimal (myopically optimal) pricing strategy. Q-learning needs to select a sequence of state-action pairs to converge. When the uniform random method is used to select state-action pairs, the number of accesses to the Q-tables required to obtain the optimal Q-values is quite large, so it is not appropriate for universal on-line learning in a real-world environment. This occurs because uniform random selection reflects the uncertainty of exploitation for the optimal policy. In this paper, we propose a Mixed Nonstationary Policy (MNP), which consists of both an auxiliary Markov process and the original Markov process. MNP tries to keep a balance between exploration and exploitation in reinforcement learning. Our experimental results show that the Q-learning agent using MNP converges to the optimal Q-values about 2.6 times faster on average than uniform random selection.
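
The abstract does not spell out MNP itself, so the sketch below instead shows plain tabular Q-learning with the standard epsilon-greedy rule, the most common baseline for balancing exploration and exploitation; the toy two-state MDP and all parameter values are invented for illustration and are not the paper's pricebot setting.

```python
import random

# Minimal tabular Q-learning with epsilon-greedy action selection.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
N_STATES, N_ACTIONS = 2, 2

# Toy reward table: reward[state][action].
REWARD = [[0.0, 1.0], [1.0, 0.0]]

def step(state, action):
    """Deterministic toy transition: the action chooses the next state."""
    return action, REWARD[state][action]

def epsilon_greedy(q, state):
    """Explore with probability EPSILON, otherwise exploit the best action."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q[state][a])

def train(steps=2000, seed=0):
    random.seed(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    state = 0
    for _ in range(steps):
        action = epsilon_greedy(q, state)
        next_state, reward = step(state, action)
        # Standard Q-learning update rule.
        best_next = max(q[next_state])
        q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])
        state = next_state
    return q

q = train()
# The learned greedy policy picks action 1 in state 0 and action 0 in state 1.
```

Uniform random selection would visit all state-action pairs equally often; epsilon-greedy concentrates visits on the currently-best actions while still guaranteeing occasional exploration, which is the trade-off MNP is designed to manage.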

Improvement Plan of NFRDI Serial Oceanographic Observation (NSO) System for Operational Oceanographic System (운용해양시스템을 위한 한국정선해양관측시스템 발전방향)

  • Lee, Joon-Soo;Suh, Young-Sang;Go, Woo-Jin;Hwang, Jae-Dong;Youn, Seok-Hyun;Han, In-Seong;Yang, Joon-Yong;Song, Ji-Young;Park, Myung-Hee;Lee, Keun-Jong
    • Journal of the Korean Society of Marine Environment & Safety / v.16 no.3 / pp.249-258 / 2010
  • This study seeks to improve the NFRDI Serial Oceanographic Observation (NSO) system, which has been operated at the current observation stations in the Korean seas since 1961, and suggests a direction for the NSO's practical use within a Korean operational oceanographic system. For improvement, the manual handling of data after CTD (Conductivity-Temperature-Depth) observation on deck, data transmission, data reception at the land station, and file storage into the database need to be automated. Developing software that executes QA/QC (Quality Assurance/Quality Control) on real-time oceanographic observation data and automatically transmits the data after conversion to an appropriate format will help accomplish this automation. The Inmarsat satellite telecommunication systems already equipped on board the observation vessels can realize real-time transmission of the data. For near real-time data transmission, CDMA (Code Division Multiple Access) wireless telecommunication can provide efficient transmission in coastal areas. A real-time QA/QC procedure after CTD observation will help prevent errors that can arise from various causes.
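
To make the automated QA/QC step concrete, here is a minimal sketch of two classic real-time checks applied to a CTD profile: a gross range check and a spike check. The flag values and thresholds are illustrative assumptions, not NFRDI's actual rules.

```python
# Illustrative QC flags: 1 = good, 4 = bad (a common convention).
GOOD, BAD = 1, 4

def gross_range_flag(values, lo, hi):
    """Flag values outside a physically plausible range."""
    return [GOOD if lo <= v <= hi else BAD for v in values]

def spike_flag(values, threshold):
    """Flag a point that deviates sharply from the mean of its neighbours."""
    flags = [GOOD] * len(values)
    for i in range(1, len(values) - 1):
        if abs(values[i] - (values[i - 1] + values[i + 1]) / 2) > threshold:
            flags[i] = BAD
    return flags

temps = [15.2, 15.1, 99.9, 14.8]            # one physically impossible value
print(gross_range_flag(temps, -2.5, 35.0))  # → [1, 1, 4, 1]

profile = [15.2, 15.1, 15.0, 20.0, 14.8, 14.7]  # one spike
print(spike_flag(profile, 3.0))                 # → [1, 1, 1, 4, 1, 1]
```

In an operational pipeline, checks like these would run immediately after acquisition, before the flagged data are formatted and transmitted to the land station.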

A Framework and Patterns for Efficient Service Monitoring (효율적인 서비스 모니터링 프레임워크 및 전송패턴)

  • Lee, Hyun-Min;Cheun, Du-Wan;Kim, Soo-Dong
    • Journal of KIISE: Software and Applications / v.37 no.11 / pp.812-825 / 2010
  • Service-Oriented Computing (SOC) is a reuse paradigm for developing business processes through dynamic service composition. Service consumers subscribe to services deployed by service providers only through service interfaces; therefore, server-side services are perceived as black boxes by consumers. Due to this nature of services, consumers have limited knowledge of service quality, which makes it hard to utilize services in critical domains. Hence, there is an increasing demand for effective methods of monitoring services. Because of the technical difficulty of monitoring, current techniques generally depend on a specific vendor's middleware without direct access to the services. However, these approaches have limitations, including low data comprehensibility and data accuracy, which creates a demand for an effective service monitoring framework. In this paper, we propose a framework for efficiently monitoring services. We first define requirements for designing a monitoring framework. Based on these requirements, we propose an architecture for the framework and define generic patterns for efficiently acquiring monitored data from services. We present the detailed design of the monitoring framework and its implementation. Finally, we implement a prototype of the monitor and present the functionality of the framework, along with experimental results that verify the efficiency of the patterns for transmitting monitoring data.

Deep Learning Description Language for Referring to Analysis Model Based on Trusted Deep Learning (신뢰성있는 딥러닝 기반 분석 모델을 참조하기 위한 딥러닝 기술 언어)

  • Mun, Jong Hyeok;Kim, Do Hyung;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering / v.10 no.4 / pp.133-142 / 2021
  • With the recent advances in deep learning, companies in fields such as smart homes, healthcare, and intelligent transportation systems are using it to provide high-quality services for vehicle detection, emergency situation detection, and controlling energy consumption. To provide reliable services in such sensitive systems, deep learning models are required to have high accuracy. To develop a deep learning model for analyzing the aforementioned services, developers should start from state-of-the-art models whose high accuracy has already been verified. A developer can verify the accuracy of a referenced model by validating it on a dataset. For this validation, the developer needs structural information to document and apply deep learning models, including metadata such as the training dataset, network architecture, and development environment. In this paper, we propose a description language that represents the network architecture of a deep learning model along with the metadata necessary to develop one. Through the proposed description language, developers can easily verify the accuracy of a referenced deep learning model. Our experiments demonstrate an application scenario in which a deep learning description document is used for license plate recognition to detect illegally parked vehicles.
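
As a rough illustration of what such a description document might carry, the sketch below encodes the metadata categories the abstract lists (dataset, architecture, environment) and validates that the required fields are present. The field names and schema are hypothetical, not the paper's actual language.

```python
# Hypothetical description document for a referenced model; every field
# name below is an invented example, not the paper's defined syntax.
EXAMPLE_DOC = {
    "model": "plate-recognition-cnn",
    "dataset": {"name": "example-plates", "train_size": 10000, "test_size": 2000},
    "architecture": [
        {"layer": "conv2d", "filters": 32, "kernel": [3, 3], "activation": "relu"},
        {"layer": "maxpool", "pool": [2, 2]},
        {"layer": "dense", "units": 10, "activation": "softmax"},
    ],
    "environment": {"framework": "tensorflow", "version": "2.11"},
    "reported_accuracy": 0.97,
}

REQUIRED = {"model", "dataset", "architecture", "environment", "reported_accuracy"}

def validate(doc):
    """Return the set of required metadata fields missing from a document."""
    return REQUIRED - doc.keys()

print(validate(EXAMPLE_DOC))       # → set() (nothing missing)
print(validate({"model": "x"}))    # reports the missing fields
```

A developer reproducing the validation step would read such a document, rebuild the network from the architecture list, and check the model against the named dataset before trusting the reported accuracy.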

Design and Implementation of Network-Adaptive High Definition MPEG-2 Streaming employing frame-based Prioritized Packetization (프레임 기반의 우선순위화를 적용한 네트워크 적응형 HD MPEG-2 스트리밍의 설계 및 구현)

  • Park SangHoon;Lee Sensjoo;Kim JongWon;Kim WooSuk
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.10A / pp.886-895 / 2005
  • As networked media technologies have matured in recent years, there has been much research on delivering high-quality video such as HDV and HDTV over the Internet. To realize high-quality media services over the Internet, however, a network-adaptive streaming scheme is required to adapt to the dynamic fluctuations of the underlying networks. In this paper, we design and implement a network-adaptive HD (high definition) MPEG-2 streaming system employing frame-based prioritized packetization. The delivered video is input from a JVC HDV camera to the streaming server in real time; it has a bit rate of 19.2 Mbps and is multiplexed into an MPEG-2 TS (MPEG-2 MP@HL). To monitor network status, the packet loss rate and average jitter are measured by parsing RTP packet headers at the streaming client and are sent to the streaming server periodically. The network adaptation manager in the streaming server estimates the current network status from the feedback packets and adaptively adjusts the sending rate by frame dropping. For this, we propose real-time parsing and frame-based prioritized packetization of the TS packets. The proposed system is implemented in software and evaluated over a LAN testbed. The experimental results show that the proposed system can enhance the end-to-end QoS of HD video streaming over a best-effort network.
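
The core idea of frame-based prioritization is that MPEG-2 B-frames can be dropped with the least visual impact (nothing predicts from them), then P-frames, with I-frames kept last. The sketch below applies that ordering to rate adaptation within a group of pictures; the frame sizes and target-rate rule are simplified assumptions, not the paper's exact mechanism.

```python
# Drop priority by MPEG-2 frame type: higher value = keep longer.
PRIORITY = {"I": 2, "P": 1, "B": 0}

def adapt_gop(frames, target_bits):
    """Drop lowest-priority frames from a GOP until it fits target_bits.

    `frames` is a list of (frame_type, size_in_bits) tuples.
    """
    total = sum(size for _, size in frames)
    # Consider frames in increasing priority order as drop candidates.
    candidates = sorted(range(len(frames)),
                        key=lambda i: PRIORITY[frames[i][0]])
    dropped = set()
    for i in candidates:
        if total <= target_bits:
            break
        total -= frames[i][1]
        dropped.add(i)
    return [f for i, f in enumerate(frames) if i not in dropped]

gop = [("I", 800), ("B", 200), ("B", 200), ("P", 400), ("B", 200), ("P", 400)]
kept = adapt_gop(gop, 1700)
# All B-frames are sacrificed first; the I- and P-frames survive.
```

In the real system this decision would be driven by the loss/jitter feedback from the client, with the server re-packetizing the surviving frames into TS packets.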

The Development of Biodegradable Fiber Tensile Tenacity and Elongation Prediction Model Considering Data Imbalance and Measurement Error (데이터 불균형과 측정 오차를 고려한 생분해성 섬유 인장 강신도 예측 모델 개발)

  • Park, Se-Chan;Kim, Deok-Yeop;Seo, Kang-Bok;Lee, Woo-Jin
    • KIPS Transactions on Software and Data Engineering / v.11 no.12 / pp.489-498 / 2022
  • Recently, the labor-intensive textile industry has been attempting to reduce process costs and optimize quality through artificial intelligence. However, the fiber spinning process has high data-collection costs and lacks a systematic data collection and processing system, so the amount of accumulated data is small. In addition, data imbalance arises because only data with changes in specific variables are preferentially collected according to the purpose of fiber spinning, and there are errors even between samples collected under the same spinning conditions due to differences in the environment in which physical properties are measured. If these data characteristics are not taken into account when the data are used for AI models, problems such as overfitting and performance degradation may occur. Therefore, in this paper, we propose an outlier handling technique and a data augmentation technique that consider the characteristics of spinning process data. By comparing them with existing outlier handling and data augmentation techniques, we show that the proposed techniques are more suitable for spinning process data. In addition, by feeding both the original data and the data processed with the proposed methods to various models, we show that tensile tenacity and elongation prediction performance improves for the models that use the proposed methods.
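
The paper's own outlier technique is not described in the abstract; as a point of reference, the sketch below shows the conventional interquartile-range (IQR) rule that such proposals are typically compared against. The fence factor 1.5 is the textbook default, not a value from the paper.

```python
def iqr_fences(values, k=1.5):
    """Return (low, high) fences; points outside them count as outliers."""
    s = sorted(values)

    def quartile(p):
        # Simple linear-interpolation quantile over the sorted sample.
        idx = p * (len(s) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (idx - lo)

    q1, q3 = quartile(0.25), quartile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Synthetic tenacity measurements with one gross measurement error.
data = [3.1, 3.3, 3.2, 3.4, 3.0, 9.9]
lo, hi = iqr_fences(data)
outliers = [v for v in data if not lo <= v <= hi]  # → [9.9]
```

A blanket rule like this is exactly what can misfire on spinning data, where legitimate samples from the same conditions already scatter due to measurement-environment differences; that gap motivates a process-aware technique.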

The Verification of Computer Simulation of Nitinol Wire Stent Using Finite Element Analysis (유한요소법을 이용한 나이티놀 와이어 스텐트의 전산모사 실험 데이터 검증)

  • Kim, Jin-Young;Jung, Won-Gyun;Jeon, Dong-Min;Shin, Il-Gyun;Kim, Han-Ki;Shin, Dong-Oh;Kim, Sang-Ho;Suh, Tae-Suk
    • Progress in Medical Physics / v.20 no.3 / pp.139-144 / 2009
  • Recently, the mathematical analysis of stent simulation has improved with the development of various tools that measure the mechanical properties and position of a stent in an artery. The most crucial parts of stent modeling are designing an ideal stent and evaluating the interaction between the stent and the artery. While there has been a great deal of research evaluating the expansion, stress distribution, and deformation of stents in terms of various parameters, little verification through computer simulation has been performed for the deformation and stress distribution of the stent. In this study, we obtained corresponding results between experimental tests using a Universal Testing Machine and computer simulations for an ideal stent model. We also analyzed and compared the stress distribution of the stent with and without a membrane. By enabling computer simulation analysis in the early stages of stent design, the results of this study should minimize design changes and support good quality for ideal stents replacing damaged arteries.

A Study on Iris Image Restoration Based on Focus Value of Iris Image (홍채 영상 초점 값에 기반한 홍채 영상 복원 연구)

  • Kang Byung-Jun;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.2 s.308 / pp.30-39 / 2006
  • Iris recognition identifies a user based on the unique texture patterns of the iris, the region that dilates and contracts the pupil. Iris recognition systems extract the iris pattern from an iris image captured by an iris recognition camera, so recognition performance is affected by the quality of the image that contains the pattern. If the iris image is blurred, the iris pattern is distorted, which increases the FRR (False Rejection Rate). Optical defocusing is the main cause of blurred iris images. Conventional iris recognition cameras use one of two focusing methods: fixed focus and auto-focus. With the fixed focusing method, users must repeatedly align their eyes within the DOF (Depth of Field) until the system acquires a well-focused iris image, which is very inconvenient. With the auto-focusing method, the camera moves a focus lens under an auto-focusing algorithm to capture the best-focused image. However, this requires additional hardware, such as a sensor to measure the distance between the user and the camera lens and a motor to move the focus lens, which increases the size and cost of the camera, so such cameras cannot be used in small mobile devices. To overcome these problems, we propose a method to increase the DOF with an iris image restoration algorithm based on the focus value of the iris image. When we tested the proposed algorithm with a Panasonic BM-ET100, the operating range increased from 48-53 cm to 46-56 cm.
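
A focus value of the kind the paper relies on is commonly computed by passing the image through a high-pass kernel and taking the response energy: a sharp image has strong high-frequency content, a defocused one does not. The 3x3 Laplacian-style kernel below is a generic choice for illustration, not necessarily the paper's kernel.

```python
# Generic high-pass kernel (discrete Laplacian); rows sum to zero, so a
# smooth gradient produces no response while a hard edge produces a large one.
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def focus_value(img):
    """Mean squared high-pass response over interior pixels; higher = sharper."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            r = sum(KERNEL[j][i] * img[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            total += r * r
            count += 1
    return total / count

sharp = [[0, 0, 255, 255]] * 4      # a hard vertical edge
blurred = [[0, 85, 170, 255]] * 4   # the same edge, smoothed to a ramp
assert focus_value(sharp) > focus_value(blurred)
```

In a restoration pipeline, a score like this would both trigger deblurring on out-of-focus captures and parameterize how aggressively to restore, which is how the effective DOF can be extended without extra focusing hardware.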

Proposal and Verification of Image Sensor Non-uniformity Correction Algorithm (영상센서 픽셀 불균일 보정 알고리즘 개발 및 시험)

  • Kim, Young-Sun;Kong, Jong-Pil;Heo, Haeng-Pal;Park, Jong-Euk
    • Journal of the Institute of Electronics Engineers of Korea SC / v.44 no.3 / pp.29-33 / 2007
  • The pixels of an image sensor do not all react uniformly, even when light of the same radiance enters the camera. This non-uniformity comes from the sensor pixels themselves and from the changing transmission of the telescope over the field of view. The first contribution has a high spatial frequency and influences the result and quality of data compression; the second has a low spatial frequency and does not influence the compression result. Since the contribution from the sensor PRNU (Photo Response Non-Uniformity) is corrected inside the camera electronics, the effect of the remaining non-uniformity on the compression result is negligible. The correction result differs greatly depending on the sensor model and the method used to calculate the correction coefficients. Usually, each pixel is modeled linearly with two coefficients, a gain and an offset, so in theory only two measurements are needed to obtain them; however, coefficients derived this way are not optimal over the whole range of illumination levels. This paper proposes an algorithm to calculate non-uniformity correction coefficients that are optimized over the whole illumination range. The proposed algorithm uses several measurements and the least-squares method to obtain the optimum coefficients. It is verified using in-house camera electronics including the sensor, electrical test equipment, and optical test equipment such as an integrating sphere.
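
The per-pixel model described above, response = gain x radiance + offset fitted by least squares over several illumination levels, can be sketched as follows. The pixel values below are synthetic examples, not measured data.

```python
def fit_gain_offset(radiances, responses):
    """Closed-form least-squares fit of a line through the measurements."""
    n = len(radiances)
    sx = sum(radiances)
    sy = sum(responses)
    sxx = sum(x * x for x in radiances)
    sxy = sum(x * y for x, y in zip(radiances, responses))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return gain, offset

def correct(response, gain, offset):
    """Recover the radiance implied by a raw pixel response."""
    return (response - offset) / gain

# Five illumination levels; a synthetic pixel with gain 1.2 and offset 10.
levels = [0, 50, 100, 150, 200]
raw = [1.2 * x + 10 for x in levels]
gain, offset = fit_gain_offset(levels, raw)
assert abs(gain - 1.2) < 1e-9 and abs(offset - 10) < 1e-9
assert abs(correct(130.0, gain, offset) - 100.0) < 1e-9
```

With only two measurements the fit is exact at those two levels but can drift elsewhere if the response is not perfectly linear; fitting over several levels, as the paper proposes, spreads the residual error across the whole illumination range.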