• Title/Summary/Keyword: traditional experiments

Search Results: 1,060

Design and Performance Evaluation of Software RAID for Video-on-Demand Servers (주문형 비디오 서버를 위한 소프트웨어 RAID의 설계 및 성능 분석)

  • Koh, Jeong-Gook
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.3 no.2
    • /
    • pp.167-178
    • /
    • 2000
  • Software RAID (Redundant Arrays of Inexpensive Disks) is defined as a storage system that provides the capabilities of hardware RAID in software, guaranteeing high reliability as well as high performance. In this paper, we propose an enhanced disk scheduling algorithm and a scheme to guarantee data reliability, and we design and implement a software RAID that uses these mechanisms as a storage system for multimedia applications. Because the proposed algorithm remedies a defect of the traditional GSS algorithm, in which disk I/O requests are served in a fixed order, it minimizes buffer consumption and reduces the number of deadline misses through service group exchange. The software RAID also alleviates data copy overhead during disk services by sharing kernel memory. Even though the implemented software RAID uses the parity approach to guarantee data reliability, it adopts a different data allocation scheme; therefore, we reduce the disk accesses needed for the logical XOR operations that compute new parity data on write operations. Performance evaluation experiments showed that a software RAID implementing the proposed schemes can serve as a storage system for small-sized video-on-demand servers.
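The read-modify-write parity update that XOR-based RAID schemes rely on can be sketched as follows (a minimal illustration of XOR parity with made-up block values; it is not the paper's allocation scheme):

```python
def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    """Read-modify-write parity update:
        new_parity = old_parity XOR old_data XOR new_data
    Only the modified data block and the old parity block need to be read,
    instead of every block in the stripe."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# A stripe of three data blocks plus one parity block.
d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Overwrite d1 and update the parity incrementally.
new_d1 = b"\x33\x33"
parity = update_parity(parity, d1, new_d1)

# The incrementally updated parity matches a full recomputation.
assert parity == bytes(a ^ b ^ c for a, b, c in zip(d0, new_d1, d2))
```

Reducing how often this read-modify-write cycle touches the disk is exactly where a different data allocation scheme can save accesses.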

A Fast and Robust Algorithm for Fighting Behavior Detection Based on Motion Vectors

  • Xie, Jianbin;Liu, Tong;Yan, Wei;Li, Peiqin;Zhuang, Zhaowen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.11
    • /
    • pp.2191-2203
    • /
    • 2011
  • In this paper, we propose a fast and robust algorithm for fighting behavior detection based on Motion Vectors (MVs), in order to overcome the low speed and weak robustness of traditional fighting behavior detection. First, we analyze the characteristics of fighting scenes and activities, and use a block-matching motion estimation algorithm to calculate the MVs of motion regions. Second, we extract features from the magnitudes and directions of the MVs, normalize these features using a Joint Gaussian Membership Function, and fuse them using a weighted arithmetic average. Finally, we introduce the concept of the Average Maximum Violence Index (AMVI) to judge fighting behavior in surveillance scenes. Experiments show that the new algorithm achieves high speed and strong robustness for fighting behavior detection in surveillance scenes.
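The feature-fusion step can be sketched roughly as below: Gaussian membership functions normalize each motion-vector feature to [0, 1], a weighted average fuses them, and the index is averaged over the largest per-frame values. All parameters and the exact feature choices here are illustrative assumptions, not the authors' implementation:

```python
import math

def gauss_member(x, mu, sigma):
    """Gaussian membership in [0, 1]; peaks when the feature equals mu."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def frame_violence_index(magnitudes, directions,
                         mu_mag=8.0, sig_mag=3.0,     # "fighting-like" motion size
                         mu_spread=1.5, sig_spread=0.5,  # disordered directions
                         w_mag=0.6, w_dir=0.4):          # illustrative weights
    """Fuse one frame's MV magnitude and direction-spread features
    with a weighted arithmetic average."""
    mean_mag = sum(magnitudes) / len(magnitudes)
    mean_dir = sum(directions) / len(directions)
    spread = (sum((d - mean_dir) ** 2 for d in directions) / len(directions)) ** 0.5
    return (w_mag * gauss_member(mean_mag, mu_mag, sig_mag)
            + w_dir * gauss_member(spread, mu_spread, sig_spread))

def amvi(per_frame_indices, k=3):
    """Average Maximum Violence Index: mean of the k largest frame indices."""
    return sum(sorted(per_frame_indices, reverse=True)[:k]) / k

calm  = frame_violence_index([0.5, 0.7, 0.6], [0.10, 0.12, 0.09])
fight = frame_violence_index([7.5, 9.0, 8.2], [0.2, 2.1, -1.4])
assert fight > calm
```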

Integrating Granger Causality and Vector Auto-Regression for Traffic Prediction of Large-Scale WLANs

  • Lu, Zheng;Zhou, Chen;Wu, Jing;Jiang, Hao;Cui, Songyue
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.136-151
    • /
    • 2016
  • Flexible large-scale WLANs are now widely deployed in crowded and highly mobile places such as campuses, airports, shopping malls, and companies. However, network management is hard for large-scale WLANs because interference and throughput are highly uneven among links, so traffic is difficult to predict accurately. In this paper, through analysis of traffic in two real large-scale WLANs, Granger causality is found in both scenarios. In combination with information entropy, it is shown that traffic prediction for a target AP that considers Granger causality is more accurate than prediction using the target AP alone, or prediction that considers irrelevant APs. We therefore develop a new method, Granger Causality and Vector Auto-Regression (GCVAR), which feeds the AP series sharing Granger causality into a Vector Auto-Regression (VAR) model to predict traffic flow in the two real scenarios, so that the redundancy and noise introduced by multivariate time series can be removed. Experiments show that GCVAR is much more effective than traditional univariate time-series methods (e.g., ARIMA, WARIMA). In particular, GCVAR consumes two orders of magnitude less computation time than ARIMA/WARIMA.
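The VAR part of such an approach can be illustrated with a minimal lag-1 vector auto-regression fitted by ordinary least squares on synthetic two-AP traffic. This is our sketch only: the paper's GCVAR additionally selects which series to include via Granger-causality tests, which this toy omits (here the causal link is baked into the synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for two APs' traffic: ap_b "Granger-causes" ap_a,
# because ap_a depends on the lagged value of ap_b (coefficient 0.6).
T = 300
ap_a = np.zeros(T)
ap_b = np.zeros(T)
for t in range(1, T):
    ap_b[t] = 0.8 * ap_b[t - 1] + rng.normal(0, 0.5)
    ap_a[t] = 0.3 * ap_a[t - 1] + 0.6 * ap_b[t - 1] + rng.normal(0, 0.5)

# Fit a VAR(1) by ordinary least squares: y_t = y_{t-1} @ A + e_t.
Y = np.column_stack([ap_a[1:], ap_b[1:]])    # targets
X = np.column_stack([ap_a[:-1], ap_b[:-1]])  # lagged regressors
A, *_ = np.linalg.lstsq(X, Y, rcond=None)    # A[i, j]: lagged series i -> series j

forecast = X[-1] @ A                          # one-step-ahead prediction
```

The recovered coefficient matrix reflects the causal structure: the entry mapping lagged `ap_b` to `ap_a` comes out near 0.6, while the reverse entry stays near zero, which is the kind of asymmetry a Granger-causality test would detect.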

Quantitative Polymerase Chain Reaction for Microbial Growth Kinetics of Mixed Culture System

  • Cotto, Ada;Looper, Jessica K.;Mota, Linda C.;Son, Ahjeong
    • Journal of Microbiology and Biotechnology
    • /
    • v.25 no.11
    • /
    • pp.1928-1935
    • /
    • 2015
  • Microbial growth kinetics is often used to optimize environmental processes, owing to its relation to the breakdown of substrates (contaminants). However, quantifying bacterial populations in the environment is difficult because of the challenge of monitoring a specific bacterial population within a diverse microbial community. Conventional methods are unable to detect and quantify the growth of individual strains separately in a mixed-culture reactor. This work describes a novel quantitative PCR (qPCR)-based genomic approach to quantify each species in a mixed culture and interpret its growth kinetics in the mixed system. Batch experiments were performed for both single and dual cultures of Pseudomonas putida and Escherichia coli K12 to obtain Monod kinetic parameters (μmax and Ks). The growth curves and kinetics obtained by conventional methods (i.e., dry-weight measurement and absorbance readings) were compared with those obtained by the qPCR assay. We anticipate that the adoption of this qPCR-based genomic assay can contribute significantly to traditional microbial kinetics, modeling practice, and the operation of bioreactors, where the handling of complex mixed cultures is required.
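The Monod model underlying these kinetic parameters can be sketched with a simple Euler integration for a single species (all parameter values below are illustrative, not the paper's fitted values):

```python
def simulate_monod(mu_max, Ks, Y, X0, S0, dt=0.01, t_end=10.0):
    """Euler integration of single-species Monod growth kinetics:
        mu    = mu_max * S / (Ks + S)   # specific growth rate
        dX/dt =  mu * X                 # biomass
        dS/dt = -mu * X / Y             # substrate, with yield coefficient Y
    """
    X, S, t = X0, S0, 0.0
    while t < t_end:
        mu = mu_max * S / (Ks + S)
        dX = mu * X * dt
        X += dX
        S = max(S - dX / Y, 0.0)   # substrate cannot go negative
        t += dt
    return X, S

# Illustrative parameters: biomass grows until the substrate is exhausted,
# approaching X0 + Y * S0 by mass balance.
X_final, S_final = simulate_monod(mu_max=0.5, Ks=0.2, Y=0.4, X0=0.05, S0=2.0)
```

Fitting μmax and Ks for each species separately requires exactly the per-species population counts that the qPCR assay provides and that optical density or dry weight cannot resolve in a mixed culture.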

Frequent Items Mining based on Regression Model in Data Streams (스트림 데이터에서 회귀분석에 기반한 빈발항목 예측)

  • Lee, Uk-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.1
    • /
    • pp.147-158
    • /
    • 2009
  • Stream data are massive, continuous, and unbounded. However, stream data processing, such as query processing or data analysis, must be conducted with a limited amount of disk or memory. In this environment, traditional frequent-pattern discovery over a transaction database cannot be performed, because it is difficult to continuously maintain information about whether each item of a continuous stream is frequent or not. In this paper, we propose a method for predicting frequent items in a continuous stream data environment using a regression model. By constructing a regression model over the stream data, we can use it as a prediction model for indefinitely many items. We show through a variety of experiments that the proposed method can be used efficiently in a stream data environment.
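The core idea of extrapolating an item's frequency from a regression over its per-window counts can be sketched as follows (a minimal illustration with made-up counts and threshold, not the paper's model):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict_frequent(window_counts, threshold):
    """Fit a regression line to an item's per-window counts and flag it as
    frequent when the extrapolated next-window count reaches the threshold."""
    xs = list(range(len(window_counts)))
    a, b = linear_fit(xs, window_counts)
    next_count = a * len(window_counts) + b
    return next_count >= threshold, next_count

# A rising item: counts grow across stream windows, so it is predicted frequent.
is_freq, pred = predict_frequent([3, 5, 8, 12, 15], threshold=15)
assert is_freq
# A fading item: counts shrink, so its predicted count falls below the threshold.
is_freq2, _ = predict_frequent([15, 11, 8, 4, 2], threshold=15)
assert not is_freq2
```

Only the regression coefficients per item need to be kept, not the full history of windows, which is what makes this compatible with bounded memory.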

Design of New Smoothing Mask of Color Inverse Halftoning (칼라 역 해프토닝을 위한 새로운 평활화 마스크의 설계)

  • 김종민;김민환
    • Journal of Korea Multimedia Society
    • /
    • v.1 no.2
    • /
    • pp.183-193
    • /
    • 1998
  • Color inverse halftoning is the transformation of a color halftone image into a continuous-tone color image that looks more natural to human vision. In this paper, we propose a new smoothing mask that can effectively remove halftone patterns in each channel, and we apply it to color inverse halftoning. The proposed smoothing mask makes channel images smoother and more natural to human vision than traditional masks do, and its characteristics adapt automatically to various color halftone images. We analyze the resulting images from various aspects through experiments. Experimental results show that the mask is useful for color inverse halftoning and can be applied to multimedia application fields such as desktop publishing, color facsimile, and digital library construction.
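Applying a smoothing mask to one channel of a halftone image can be sketched as below. The fixed 3x3 mask here is a generic illustrative choice; the paper's mask adapts to the image, which this sketch omits:

```python
def smooth_channel(img, mask):
    """Apply a normalized smoothing mask to one channel (a 2-D list of
    pixel values). Border pixels are left unchanged for brevity."""
    h, w = len(img), len(img[0])
    k = len(mask) // 2
    norm = sum(sum(row) for row in mask)
    out = [row[:] for row in img]
    for y in range(k, h - k):
        for x in range(k, w - k):
            acc = 0.0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    acc += mask[dy + k][dx + k] * img[y + dy][x + dx]
            out[y][x] = acc / norm
    return out

# A checkerboard-like halftone pattern is averaged toward mid-gray.
halftone = [[255 if (x + y) % 2 == 0 else 0 for x in range(5)] for y in range(5)]
mask = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # illustrative fixed weights
smoothed = smooth_channel(halftone, mask)
```

Running this on each of the R, G, and B channels independently is the per-channel smoothing the abstract describes.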

Product Life Cycle Based Service Demand Forecasting Using Self-Organizing Map (SOM을 이용한 제품수명주기 기반 서비스 수요예측)

  • Chang, Nam-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.15 no.4
    • /
    • pp.37-51
    • /
    • 2009
  • One of the critical issues in the management of manufacturing companies is the efficient planning and operation of service resources such as people, parts, and facilities, and this begins with accurate service demand forecasting. In this research, service and sales data from an LCD monitor manufacturer are used for an empirical study of Product Life Cycle (PLC) based service demand forecasting. The proposed PLC forecasting approach consists of four steps: understanding the basic statistics of the data, clustering the products using a self-organizing map, developing a forecasting model for each segment, and comparing accuracy performance. Empirical experiments show that the PLC approach outperformed the traditional approaches in terms of root mean square error and mean absolute percentage error.
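The clustering step can be sketched with a minimal one-dimensional self-organizing map. The demand-curve samples below are synthetic stand-ins for PLC segments, and the map size, learning rate, and decay are our own illustrative choices, not the paper's configuration:

```python
import random

def train_som(samples, n_nodes=4, epochs=50, lr=0.5, radius=1, seed=1):
    """Minimal 1-D self-organizing map over small feature vectors."""
    rng = random.Random(seed)
    dim = len(samples[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for _ in range(epochs):
        for s in samples:
            # Best-matching unit: the node with the smallest squared distance.
            bmu = min(range(n_nodes),
                      key=lambda i: sum((nodes[i][d] - s[d]) ** 2
                                        for d in range(dim)))
            # Pull the BMU and its grid neighbours toward the sample.
            for i in range(n_nodes):
                if abs(i - bmu) <= radius:
                    for d in range(dim):
                        nodes[i][d] += lr * (s[d] - nodes[i][d])
        lr *= 0.95          # decay the learning rate each epoch
    return nodes

def assign(sample, nodes):
    return min(range(len(nodes)),
               key=lambda i: sum((nodes[i][d] - sample[d]) ** 2
                                 for d in range(len(sample))))

# Two synthetic demand shapes standing in for different PLC segments.
early_peak = [[1.0, 0.6, 0.2], [0.9, 0.5, 0.1], [1.0, 0.55, 0.15]]
late_peak  = [[0.1, 0.5, 1.0], [0.2, 0.6, 0.9], [0.15, 0.55, 1.0]]
nodes = train_som(early_peak + late_peak)

# The two shape families map to different nodes, i.e., different segments.
seg_early = assign(early_peak[0], nodes)
seg_late  = assign(late_peak[0], nodes)
```

A separate forecasting model would then be fitted per segment, which is the step this sketch stops short of.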

Positional Tracking System Using Smartphone Sensor Information

  • Kim, Jung Yee
    • Journal of Multimedia Information System
    • /
    • v.6 no.4
    • /
    • pp.265-270
    • /
    • 2019
  • The technology to locate an individual has enabled various services, and its utilization has increased. Most location technology studies have focused on the accuracy of location estimation, but they involve constraints such as separate expensive equipment or the installation of specific devices in a facility. These constraints can yield accuracy within a few tens of centimeters, but such methods cannot be applied to tracking a user's location in real time in daily life. Therefore, this paper aims to track a smartphone's location using only its basic built-in components. The goal is localization accuracy, based on smartphone sensor data, that is sufficient for verifying users' locations. Accelerometer, Wi-Fi radio map, and GPS sensor information are utilized. In building the radio map, signal maps were built at each vertex of a graph data structure; this approach reduces the traditional map-building effort in the offline phase. Accelerometer data determine the user's movement status, and the collected sensor data are fused using particle filters. Experiments show that the average location error is about 3.7 meters, which is reasonable for providing location-based services in everyday life.
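The particle-filter fusion of accelerometer-derived steps with radio-map observations can be sketched in one dimension. The linear path-loss model, noise levels, and step length below are all made-up illustrative assumptions, not the paper's system:

```python
import math
import random

def pf_step(particles, accel_step, rssi_obs, radio_map,
            motion_noise=0.5, rssi_sigma=4.0, rng=random):
    """One predict/update/resample cycle of a 1-D particle filter fusing an
    accelerometer-derived step with a Wi-Fi RSSI observation. radio_map(pos)
    returns the expected RSSI at a position; it stands in for the per-vertex
    signal map described in the abstract."""
    # Predict: move each particle by the step length plus motion noise.
    particles = [p + accel_step + rng.gauss(0, motion_noise) for p in particles]
    # Update: weight each particle by the Gaussian likelihood of the RSSI.
    weights = [math.exp(-((rssi_obs - radio_map(p)) ** 2) / (2 * rssi_sigma ** 2))
               for p in particles]
    # Resample: draw a new particle set proportionally to the weights.
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(7)
radio_map = lambda pos: -40.0 - 2.0 * pos          # toy path-loss model (dBm)
true_pos = 0.0
particles = [rng.uniform(-5.0, 5.0) for _ in range(500)]
for _ in range(20):
    true_pos += 1.0                                 # the user walks one unit
    obs = radio_map(true_pos) + rng.gauss(0, 1.0)   # noisy RSSI measurement
    particles = pf_step(particles, 1.0, obs, radio_map, rng=rng)

estimate = sum(particles) / len(particles)          # posterior mean position
assert abs(estimate - true_pos) < 3.0
```

A real deployment would use 2-D or graph positions and per-AP signal maps, but the predict/update/resample cycle is the same.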

A Buffer Replacement Algorithm utilizing Reference Interval Information (참조 시간 간격 정보를 활용하는 버퍼 교체 알고리즘)

  • Koh, Jeong-Gook;Kim, Gil-Yong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3175-3184
    • /
    • 1997
  • To support the large storage capacity and real-time characteristics of continuous media storage systems, the performance of disk I/O subsystems must be improved. To improve it, we exploited a buffer sharing scheme that reduces the number of disk I/Os, utilizing advance knowledge of continuous media streams to anticipate data demands and thus promote the sharing of blocks in buffers. In this paper, we propose a buffer replacement algorithm that enables subsequent users requesting the same data to share buffers efficiently. The proposed algorithm manages buffers by utilizing the reference interval information of blocks. To verify the validity of the proposed algorithm, we conducted simulation experiments and show performance improvements over traditional buffer replacement algorithms.
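An eviction policy driven by anticipated reference intervals, in the spirit of the abstract, might be sketched as follows. For continuous media, each stream's playback schedule makes next-reference times predictable; this toy model and its API are ours, not the paper's algorithm:

```python
class IntervalAwareBufferPool:
    """Toy buffer pool that evicts the cached block whose next anticipated
    reference is farthest in the future, so blocks another stream will soon
    reuse stay resident."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.next_ref = {}           # block id -> predicted next reference time

    def reference(self, block, next_time):
        """Record a reference to `block`, with its predicted next use time."""
        if block not in self.next_ref and len(self.next_ref) >= self.capacity:
            # Evict the block with the largest next-reference interval.
            victim = max(self.next_ref, key=self.next_ref.get)
            del self.next_ref[victim]
        self.next_ref[block] = next_time

pool = IntervalAwareBufferPool(capacity=2)
pool.reference("b1", next_time=5)    # b1 will be reused soon by another stream
pool.reference("b2", next_time=50)   # b2 is not needed for a long while
pool.reference("b3", next_time=10)   # pool full: b2 is evicted, not b1
assert "b1" in pool.next_ref and "b2" not in pool.next_ref
```

An LRU policy would have evicted b1 here, since b2 was referenced more recently; the interval information is what keeps the soon-to-be-shared block in the buffer.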

GPU-based Stereo Matching Algorithm with the Strategy of Population-based Incremental Learning

  • Nie, Dong-Hu;Han, Kyu-Phil;Lee, Heng-Suk
    • Journal of Information Processing Systems
    • /
    • v.5 no.2
    • /
    • pp.105-116
    • /
    • 2009
  • To solve the general problems surrounding the application of genetic algorithms to stereo matching, two measures are proposed. First, the strategy of simplified population-based incremental learning (PBIL) is adopted to reduce memory consumption and search inefficiency, and a scheme for controlling the distance of neighbors for disparity smoothness is inserted to obtain wide-area consistency of disparities. In addition, an alternative version of the proposed algorithm, without the use of a probability vector, is also presented for simpler set-ups. Second, a programmable graphics processing unit (GPU) consists of multiple multiprocessors and offers powerful parallelism that can perform operations in parallel at low cost; therefore, to decrease the running time further, a model of the proposed algorithm that runs on the GPU is presented for the first time. The algorithms are implemented on both the CPU and the GPU and are evaluated by experiments. The experimental results show that the proposed algorithm offers better performance than traditional BMA methods with a deliberate relaxation, and than its own modified version, in terms of both running speed and stability. A comparison of computation times on the GPU and the CPU shows that the GPU's speed-up over the CPU grows as the image size increases.
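The PBIL strategy the paper builds on can be sketched on a toy one-max problem: a probability vector over bits is sampled to form a population, and the vector is nudged toward the best individual each generation. This is our minimal generic version; the paper applies PBIL to disparity estimation with a smoothness term, which this sketch omits:

```python
import random

def pbil(fitness, n_bits, pop_size=20, generations=60, lr=0.1, seed=3):
    """Minimal Population-Based Incremental Learning: maintain a probability
    vector over bits, sample a population from it, and shift the vector
    toward the best sample each generation."""
    rng = random.Random(seed)
    prob = [0.5] * n_bits                     # start unbiased
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        pop = [[1 if rng.random() < p else 0 for p in prob]
               for _ in range(pop_size)]
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > best_fit:
            best, best_fit = gen_best, fitness(gen_best)
        # Move the probability vector toward the generation's best individual.
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, gen_best)]
    return best

# One-max: the optimum is the all-ones string, which PBIL converges toward.
solution = pbil(fitness=sum, n_bits=16)
assert sum(solution) >= 14
```

Only the probability vector (one float per bit) must persist across generations, which is the memory saving over carrying a full population that the abstract alludes to, and the per-individual sampling and evaluation are embarrassingly parallel, which is what makes a GPU mapping attractive.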