• Title/Summary/Keyword: measure of performance (성능의 척도)


Incentive Design Considerations for Free-riding Prevention in Cooperative Distributed Systems (협조적 분산시스템 환경에서 무임승차 방지를 위한 인센티브 디자인 고려사항 도출에 관한 연구)

  • Shin, Kyu-Yong;Yoo, Jin-Cheol;Lee, Jong-Deog;Park, Byoung-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.7
    • /
    • pp.137-148
    • /
    • 2011
  • Unlike the traditional client-server model, participants in a cooperative distributed system can receive quality services regardless of the number of participants, since they voluntarily pool or share their resources to achieve a common goal. However, some selfish participants try to avoid contributing their resources while still enjoying the benefits offered by the system, a behavior termed free-riding. Free-riding in cooperative distributed systems leads to system collapse because the system capacity (per participant) decreases as the number of free-riders increases, a phenomenon widely known as the tragedy of the commons. Consequently, designing an efficient incentive mechanism to prevent free-riding is mandatory for a successful cooperative distributed system. Because of the importance of incentive mechanisms in cooperative distributed systems, a myriad of incentive mechanisms have been proposed, but without a standard for performance evaluation. This paper derives general incentive design considerations which can be used as performance metrics through an extensive survey of the literature, providing future researchers with guidelines for effective incentive design in cooperative distributed systems.

Design of pHEMT channel structure for single-pole-double-throw MMIC switches (SPDT 단일고주파집적회로 스위치용 pHEMT 채널구조 설계)

  • Mun Jae Kyoung;Lim Jong Won;Jang Woo Jin;Ji, Hong Gu;Ahn Ho Kyun;Kim Hae Cheon;Park Chong Ook
    • Journal of the Korean Vacuum Society
    • /
    • v.14 no.4
    • /
    • pp.207-214
    • /
    • 2005
  • This paper presents a channel structure for a promising high-performance pseudomorphic high electron mobility transistor (pHEMT) switching device for the design and fabrication of microwave control circuits, such as switches, phase shifters, attenuators, and limiters, for application in personal mobile communication systems. Using the designed epitaxial channel layer structure and ETRI's 0.5 ㎛ pHEMT switch process, a single-pole double-throw (SPDT) Tx/Rx monolithic microwave integrated circuit (MMIC) switch was fabricated for 2.4 GHz and 5 GHz band wireless local area network (WLAN) systems. The SPDT switch exhibits a low insertion loss of 0.849 dB, high isolation of 32.638 dB, return loss of 11.006 dB, power transfer capability of 25 dBm, and a third-order intercept point of 42 dBm at a frequency of 5.8 GHz and a control voltage of 0/-3 V. These performances are sufficient for application to 5 GHz band WLAN systems.

Detecting Errors in POS-Tagged Corpus on XGBoost and Cross Validation (XGBoost와 교차검증을 이용한 품사부착말뭉치에서의 오류 탐지)

  • Choi, Min-Seok;Kim, Chang-Hyun;Park, Ho-Min;Cheon, Min-Ah;Yoon, Ho;Namgoong, Young;Kim, Jae-Kyun;Kim, Jae-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.7
    • /
    • pp.221-228
    • /
    • 2020
  • A part-of-speech (POS) tagged corpus is a collection of electronic text in which each word is annotated with its corresponding POS tag, and it is widely used as training data for natural language processing. Such training data are generally assumed to be error-free, but in reality they include various types of errors, which degrade the performance of systems trained on them. To alleviate this problem, we propose a novel method for detecting errors in an existing POS-tagged corpus using an XGBoost classifier and cross-validation. We first train a classifier for a POS tagger on the POS-tagged corpus, which contains some errors, and then detect errors in the corpus using cross-validation; the classifier cannot detect errors directly because there is no training data labeled for POS-tagging errors. We therefore detect errors by comparing the classifier's outputs (POS probabilities) against the annotated tags while adjusting hyperparameters (a minimal sketch follows this abstract). The hyperparameters are estimated using a small error-tagged corpus, sampled from the POS-tagged corpus and marked up with POS errors by experts. We use recall and precision, widely used in information retrieval, as evaluation metrics. Since not all detected errors can be checked manually, we validate the proposed method by comparing the distributions of the sample (the error-tagged corpus) and the population (the POS-tagged corpus). In the near future, we will apply the proposed method to a dependency-tree-tagged corpus and a semantic-role-tagged corpus.
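
The abstract does not give the comparison rule in code, so the following is a minimal sketch of the idea, assuming pre-extracted numeric context features and integer-encoded tags; `detect_tag_errors`, `prob_gap`, and all parameter values are hypothetical placeholders, not the paper's implementation.

```python
# Hedged sketch: cross-validated XGBoost tag classifier; a token is flagged
# when the classifier finds its annotated tag much less likely than its own
# best guess. prob_gap plays the role of the tuned hyperparameter.
import numpy as np
from sklearn.model_selection import KFold
from xgboost import XGBClassifier

def detect_tag_errors(X, y, prob_gap=0.5, n_splits=5):
    """Return indices of tokens whose corpus tag looks suspicious.

    X : (n_tokens, n_features) numeric context features per token
    y : (n_tokens,) integer-encoded POS tags from the (noisy) corpus
    prob_gap : threshold tuned on a small expert-checked error-tagged sample
    Assumes every tag value appears in each training fold.
    """
    suspects = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True,
                                     random_state=0).split(X):
        clf = XGBClassifier(n_estimators=200, max_depth=6)
        clf.fit(X[train_idx], y[train_idx])
        proba = clf.predict_proba(X[test_idx])               # (n_test, n_tags)
        best = proba.max(axis=1)                             # top predicted tag prob
        given = proba[np.arange(len(test_idx)), y[test_idx]] # prob of the corpus tag
        # Flag tokens where the annotated tag is much less likely than the best tag.
        suspects.extend(test_idx[best - given > prob_gap].tolist())
    return sorted(suspects)
```

Flagged tokens would then be compared against the expert-checked sample to estimate recall and precision, as the abstract describes.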

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, databases have shifted from static structures to dynamic stream structures. Traditional data mining techniques have served as decision-making tools for tasks such as establishing marketing strategies and DNA analysis. However, the ability to analyze real-time data quickly has become necessary in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of databases or on individual transactions, instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results (the basic procedure is sketched after this abstract). hMiner, a state-of-the-art algorithm based on the landmark window model, conducts a mining operation whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, the latest mining results reflect real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the original algorithm, Lossy counting, and the latest one, hMiner. As criteria for our performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms perform on databases featuring gradually increasing numbers of items. With respect to mining time and transaction processing, hMiner is faster than Lossy counting: since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must traverse multiple nodes to reach them. On the other hand, hMiner shows worse maximum memory usage than Lossy counting: hMiner must keep full information for each candidate frequent pattern in its hash buckets, while Lossy counting reduces this information through the lattice structure. Since Lossy counting's storage can share items concurrently included in multiple patterns, its memory usage is more efficient than hMiner's. However, hMiner shows better scalability for the following reasons: as the number of items increases, the number of shared items decreases, weakening Lossy counting's memory efficiency, and as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Hence, their data structures need to be made more efficient in order to utilize them in resource-constrained environments such as wireless sensor networks (WSNs).
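
For readers unfamiliar with Lossy counting, here is a minimal sketch of the classic algorithm (Manku and Motwani) in its simplest single-item form; the paper evaluates a pattern-level variant, so treat this only as the underlying idea, with the stream and `epsilon` as assumptions.

```python
# Classic Lossy counting over a data stream: every item with true frequency
# >= epsilon * N survives, and stored counts undercount by at most epsilon * N.
def lossy_counting(stream, epsilon=0.01):
    """Return {item: (count, max_error)} after consuming the stream."""
    bucket_width = int(1 / epsilon)
    counts = {}          # item -> (count, max undercount at insertion time)
    n = 0
    current_bucket = 1   # index of the bucket currently being filled
    for item in stream:
        n += 1
        if item in counts:
            c, d = counts[item]
            counts[item] = (c + 1, d)
        else:
            counts[item] = (1, current_bucket - 1)
        if n % bucket_width == 0:        # bucket boundary: prune rare entries
            counts = {k: (c, d) for k, (c, d) in counts.items()
                      if c + d > current_bucket}
            current_bucket += 1
    return counts

# For a support threshold s, items whose stored count exceeds (s - epsilon) * N
# are reported as frequent.
```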

Analysis of Hydrodynamics in a Directly-Irradiated Fluidized Bed Solar Receiver Using CPFD Simulation (CPFD를 이용한 태양열 유동층 흡열기의 수력학적 특성 해석)

  • Kim, Suyoung;Won, Geunhye;Lee, Min Ji;Kim, Sung Won
    • Korean Chemical Engineering Research
    • /
    • v.60 no.4
    • /
    • pp.535-543
    • /
    • 2022
  • A computational particle fluid dynamics (CPFD) model of a solar fluidized bed receiver of silicon carbide (SiC, average dp = 123 ㎛) particles was established, and the model was verified by comparing simulation and experimental results to analyze the effect of particle behavior on receiver performance. The relationship between heat-absorbing performance and particle behavior in the receiver was analyzed by simulating particle behavior near the bed surface, which is difficult to access experimentally. The CPFD simulation results agreed well with the experimental values of solids holdup and its standard deviation under the experimental conditions in the bed and freeboard regions. The local solids holdup near the bed surface, where particles primarily absorb solar heat energy and transfer it into the bed, showed a non-uniform distribution with a relatively low value at the center, related to bubble behavior in the bed. The local solids holdup became more axially and radially non-uniform in the freeboard region as gas velocity increased, which explains well why the increase in the relative standard deviation (RSD) of the pressure drop across the freeboard region is responsible for the loss of solar energy reflected by entrained particles in the receiver. The simulated local gas and particle velocities as functions of gas velocity confirmed that local particle behavior in the fluidized bed is closely related to the bubble behavior characteristic of Geldart B particles. The temperature difference of the fluidizing gas passing through the receiver per unit irradiance (∆T/I_DNI) was highly correlated with the RSD of the pressure drop across the bed surface and freeboard regions (the RSD metric is sketched below). The CPFD simulation results can be used to improve receiver performance through local particle behavior analysis.
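
The RSD used throughout the abstract is a standard fluctuation metric; a minimal sketch follows, with the pressure-drop sample array as a placeholder for measured or simulated ∆P signals.

```python
# Relative standard deviation (RSD) of a pressure-drop signal, the quantity
# the study correlates with particle entrainment and ∆T/I_DNI.
import numpy as np

def relative_standard_deviation(dp_signal):
    """RSD in percent: 100 * sample std / mean of the pressure-drop samples."""
    dp = np.asarray(dp_signal, dtype=float)
    return 100.0 * dp.std(ddof=1) / dp.mean()
```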

Testing for Measurement Invariance of Fashion Brand Equity (패션브랜드 자산 측정모델의 등치테스트에 관한 연구)

  • Kim Haejung;Lim Sook Ja;Crutsinger Christy;Knight Dee
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.28 no.12 s.138
    • /
    • pp.1583-1595
    • /
    • 2004
  • Simon and Sullivan (1993) estimated that clothing- and textile-related brand equity had the highest magnitude of any industry category. This reflects the fact that fashion brands carry symbolic, social values and emotional characteristics that distinguish them from generic brands. Recently, Kim and Lim (2002) developed a fashion brand equity scale to measure a brand's psychometric properties. However, they suggested that additional psychometric tests were needed to compare the relative magnitude of each brand's equity. The purpose of this study was to identify the psychometric constructs of fashion brand equity and validate Kim and Lim's fashion brand equity scale using a measurement invariance test of cross-group comparison. First, we identified the constructs of fashion brand equity using confirmatory factor analysis through structural equation modeling. Second, we compared the relative magnitude of two brands' equity using a measurement invariance test via multi-group simultaneous factor analysis. Data were collected at six major universities in Seoul, Korea, yielding 696 usable surveys for analysis. The results showed that fashion brand equity comprised 16 items representing six dimensions: customer-brand resonance, customer feeling, customer judgment, brand imagery, brand performance, and brand awareness. We could also support the measurement invariance of the two brands' equities through configural and metric invariance tests (the underlying nested-model comparison is sketched below). There were significant differences in five constructs' mean values; the greatest difference was in customer feeling, and the smallest in customer judgment.
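
Metric invariance is typically judged with a chi-square difference test between the configural (unconstrained) and metric (equal-loadings) multi-group models. The sketch below shows only that nested-model arithmetic, not the paper's CFA itself, and the fit statistics in the example are hypothetical numbers.

```python
# Chi-square difference test for nested multi-group CFA models: if constraining
# factor loadings to be equal across groups (brands) does not significantly
# worsen fit, metric invariance is supported.
from scipy.stats import chi2

def chi_square_difference(chi2_constrained, df_constrained,
                          chi2_configural, df_configural):
    """Return (delta chi2, delta df, p-value) for constrained vs. configural."""
    d_chi2 = chi2_constrained - chi2_configural
    d_df = df_constrained - df_configural
    return d_chi2, d_df, chi2.sf(d_chi2, d_df)

# Illustrative placeholder values: a non-significant p (> .05) would support
# metric invariance, allowing the two brands' equity scores to be compared.
d_chi2, d_df, p = chi_square_difference(412.3, 206, 398.1, 196)
```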

Application of Texture Feature Analysis Algorithm used the Statistical Characteristics in the Computed Tomography (CT): A base on the Hepatocellular Carcinoma (HCC) (전산화단층촬영 영상에서 통계적 특징을 이용한 질감특징분석 알고리즘의 적용: 간세포암 중심으로)

  • Yoo, Jueun;Jun, Taesung;Kwon, Jina;Jeong, Juyoung;Im, Inchul;Lee, Jaeseung;Park, Hyonghu;Kwak, Byungjoon;Yu, Yunsik
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.1
    • /
    • pp.9-15
    • /
    • 2013
  • In this study, we propose a texture feature analysis (TFA) algorithm for automatic recognition of liver disease in computed tomography (CT) images and apply it to the design of computer-aided diagnosis (CAD) for hepatocellular carcinoma (HCC). The performance of each algorithm was compared and evaluated. In the HCC images, a region of analysis (ROA; window size 40×40 pixels) was set, and the HCC recognition rate was calculated from six TFA parameters (average gray level, average contrast, measure of smoothness, skewness, measure of uniformity, and entropy; see the sketch below). As a result, TFA was found to be a significant measure of the HCC recognition rate. The measure of uniformity gave the highest recognition rate; average contrast, measure of smoothness, and skewness were relatively high; and average gray level and entropy showed relatively low recognition rates. The algorithms with high recognition rates (a maximum of 97.14%, a minimum of 82.86%) can be used to identify HCC lesions in imaging and to assist early clinical diagnosis, improving diagnostic efficiency over current practice. In future work, effective and quantitative analysis should be added, and criteria for generalized disease recognition need to be studied.
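
The six parameter names match the standard histogram-based (first-order) texture statistics of Gonzalez and Woods, so the following sketch assumes those definitions; `roa` is a placeholder 2-D grayscale array for one 40×40 window, and the exact scaling the paper used is not given in the abstract.

```python
# First-order statistical texture features computed from the normalized
# gray-level histogram of a region of analysis (ROA).
import numpy as np

def texture_features(roa, levels=256):
    z = np.arange(levels, dtype=float)
    hist, _ = np.histogram(roa, bins=levels, range=(0, levels))
    p = hist / hist.sum()                        # normalized histogram
    m = (z * p).sum()                            # average gray level
    var = ((z - m) ** 2 * p).sum()
    sigma = np.sqrt(var)                         # average contrast (std dev)
    smoothness = 1 - 1 / (1 + var / (levels - 1) ** 2)   # 0 flat .. 1 rough
    skewness = ((z - m) ** 3 * p).sum() / (levels - 1) ** 2  # scaled 3rd moment
    uniformity = (p ** 2).sum()                  # energy of the histogram
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return m, sigma, smoothness, skewness, uniformity, entropy
```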

Edge Enhanced Error Diffusion Halftoning Method Using Local Activity Measure (공간활성도를 이용한 에지 강조 오차확산법)

  • Kwak Nae-Joung;Ahn Jae-Hyeong
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.3
    • /
    • pp.313-321
    • /
    • 2005
  • Digital halftoning is a process that produces a binary image such that the original image and its binary counterpart appear similar when observed from a distance. Among digital halftoning methods, error diffusion generates high-quality bilevel images from continuous-tone images but blurs edge information in the bilevel images. To solve this problem, we propose an improved error diffusion method using local spatial information of the original image. Based on the fact that human vision perceives not a single pixel but the local mean of the input image, we compute edge enhancement information (EEI) by applying the ratio of a pixel and its adjacent pixels to the local mean. The weights applied to the local means are computed using the ratio of the local activity measure (LAM) to the difference between the input pixels of a 3×3 block and their mean. LAM measures luminance change in a local region and is obtained by summing the squared differences between the input pixels of a 3×3 block and their mean. We add this value to the quantizer input to enhance edges (a simplified version is sketched below). The performance of the proposed method is compared with conventional methods by measuring edge correlation. Halftone images produced by the proposed method show better quality due to the enhanced edges, and detailed edges are preserved. The proposed method also improves halftone image quality because patterns unpleasant to the human visual system are reduced.
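
To make the mechanism concrete, here is a simplified sketch: standard Floyd-Steinberg error diffusion with an edge term added to the quantizer input. The edge term below is merely each pixel's deviation from its 3×3 local mean scaled by `strength`; the paper's actual EEI/LAM weighting is more elaborate, so this is an illustrative approximation only.

```python
# Edge-enhanced error diffusion: quantize (pixel + edge term), but diffuse the
# error of the un-enhanced pixel so that mean tone is preserved.
import numpy as np
from scipy.ndimage import uniform_filter

def edge_enhanced_error_diffusion(img, strength=0.8):
    """img: 2-D float array in [0, 255]; returns a binary halftone (0/255)."""
    local_mean = uniform_filter(img.astype(float), size=3)
    eei = strength * (img - local_mean)      # crude edge-enhancement information
    work = img.astype(float)
    out = np.zeros_like(work)
    h, w = work.shape
    for y in range(h):
        for x in range(w):
            old = work[y, x] + eei[y, x]     # enhance edges before quantizing
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = work[y, x] - new           # error of the un-enhanced pixel
            # Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out
```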


Partial Denoising Boundary Image Matching Based on Time-Series Data (시계열 데이터 기반의 부분 노이즈 제거 윤곽선 이미지 매칭)

  • Kim, Bum-Soo;Lee, Sanghoon;Moon, Yang-Sae
    • Journal of KIISE
    • /
    • v.41 no.11
    • /
    • pp.943-957
    • /
    • 2014
  • Removing noise, called denoising, is essential for more intuitive and more accurate results in boundary image matching. This paper deals with a partial denoising problem that allows a limited amount of partial noise embedded in boundary images. To solve this problem, we first define partial denoising time-series, which can be generated from an original image time-series by removing various partial noises, and propose an efficient mechanism that quickly obtains those partial denoising time-series in the time-series domain rather than the image domain. We next present the partial denoising distance, the minimum distance from a query time-series to all possible partial denoising time-series generated from a data time-series, and use this distance as the similarity measure in boundary image matching. Using the partial denoising distance, however, incurs severe computational overhead, since a large number of partial denoising time-series must be considered. To solve this problem, we derive a tight lower bound for the partial denoising distance and formally prove its correctness (the idea is sketched below). We also propose range and k-NN search algorithms exploiting the partial denoising distance in boundary image matching. Through extensive experiments, we show that our lower bound-based approach improves search performance by up to an order of magnitude in partial denoising-based boundary image matching.
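
The following sketch illustrates the shape of the approach, not the paper's exact construction: the candidate "denoised" variants here are produced by moving-average smoothing of one sub-segment (an assumption), and the lower bound is an LB_Keogh-style envelope bound, which is valid because every variant lies inside the pointwise [lo, hi] envelope.

```python
# Partial denoising distance = min Euclidean distance from the query to all
# denoised variants of a data series; the envelope gives a cheap lower bound
# (the envelope can be precomputed offline per data series for pruning).
import numpy as np

def smoothed_variants(ts, win=5):
    """All variants of ts with one length-win segment moving-averaged."""
    smooth = np.convolve(ts, np.ones(win) / win, mode="same")
    for start in range(len(ts) - win + 1):
        v = ts.copy()
        v[start:start + win] = smooth[start:start + win]
        yield v

def partial_denoising_distance(query, data, win=5):
    return min(np.linalg.norm(query - v) for v in smoothed_variants(data, win))

def lower_bound(query, data, win=5):
    """Never exceeds the true distance: each variant stays within [lo, hi]."""
    variants = np.array(list(smoothed_variants(data, win)))
    lo, hi = variants.min(axis=0), variants.max(axis=0)
    d = np.where(query > hi, query - hi,
                 np.where(query < lo, lo - query, 0.0))
    return np.linalg.norm(d)
```

In a range or k-NN search, a data series is fully evaluated only when its lower bound falls below the current threshold, which is where the order-of-magnitude speedup reported in the abstract comes from.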

An Image Warping Method for Implementation of an Embedded Lens Distortion Correction Algorithm (내장형 렌즈 왜곡 보정 알고리즘 구현을 위한 이미지 워핑 방법)

  • Yu, Won-Pil;Chung, Yun-Koo
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.373-380
    • /
    • 2003
  • Most low-cost digital cameras exhibit relatively high lens distortion. The purpose of this research is to compensate for the degradation of image quality caused by the geometric distortion of a lens system. The proposed method consists of two stages: calculating a lens distortion coefficient with a simplified version of Tsai's camera calibration, and then warping the original distorted image to remove the geometric distortion based on the calculated coefficient. In the lens distortion coefficient calculation stage, a practical method for handling the scale factor ratio and the image center is proposed, and its feasibility is shown by measuring distortion-correction performance with a quantitative image quality measure. To apply image warping via inverse spatial mapping using the calculated lens distortion coefficient, a cubic polynomial derived from the adopted radial distortion lens model must be solved. For real-time operation, which is essential for embedding into an information device, this paper proposes an approximate solution to the cubic polynomial in the form of a solution to a quadratic equation (the exact cubic route is sketched below). Experiments show the potential for real-time implementation and performance equivalent to that of the cubic polynomial solution.
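
To show where the cubic comes from, here is a hedged sketch with a single radial coefficient, following Tsai's model x_u = x_d(1 + k·r_d²): for every pixel of the corrected image, the source radius r_d satisfies k·r_d³ + r_d = r_u, solved here exactly with np.roots, whereas the paper substitutes a cheaper quadratic approximation for embedded real-time use. All parameter values are illustrative placeholders.

```python
# Inverse-mapping lens distortion correction (nearest-neighbor sampling).
import numpy as np

def undistort_radius(r_u, k):
    """Solve k*r_d**3 + r_d - r_u = 0 for the real, non-negative root."""
    roots = np.roots([k, 0.0, 1.0, -r_u])
    real = roots[np.isreal(roots)].real
    return real[real >= 0].min()

def undistort(img, k, cx, cy):
    """Warp a distorted grayscale image into its corrected counterpart."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            r_u = np.hypot(dx, dy)           # radius in the corrected image
            if r_u == 0:
                out[y, x] = img[y, x]
                continue
            r_d = undistort_radius(r_u, k)   # source radius in distorted image
            s = r_d / r_u
            xs, ys = int(round(cx + dx * s)), int(round(cy + dy * s))
            if 0 <= xs < w and 0 <= ys < h:
                out[y, x] = img[ys, xs]
    return out
```

Replacing `undistort_radius` with a closed-form quadratic approximation, as the paper proposes, removes the per-pixel root-finding cost that makes the exact route unsuitable for embedded devices.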