• Title/Summary/Keyword: 회로 중복사용 (redundant circuit use)


Development of Delay Test Architecture for Counter (카운터 회로에 대한 지연결함 검출구조의 개발)

  • 이창희;장영식
    • Journal of the Korea Society of Computer and Information
    • /
    • v.4 no.1
    • /
    • pp.28-37
    • /
    • 1999
  • In this paper, we develop a delay test architecture and test procedure for a clocked 5-bit asynchronous counter circuit based on the boundary scan architecture. To this end, we analyze the problems of the conventional delay test method for clocked sequential circuits under the boundary scan architecture: improper capture timing, insertion of identical patterns, and increased test time. We propose a delay test architecture and test procedure based on a clock count-generation technique that generates continuous clocks for the clocked input of the CUT. Simulation results for the 5-bit counter show the accurate operation and effectiveness of the proposed delay test architecture and procedure.


Server Replication Degree Reducing Location Management Cost in Cellular Networks (셀룰라 네트워크에서 위치 정보 관리 비용을 최소화하는 서버의 중복도)

  • Kim, Jai-Hoon;Lim, Sung-Hwa
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.3
    • /
    • pp.265-275
    • /
    • 2002
  • The default server strategy is a popular scheme for managing the location and state information of mobile hosts in cellular networks. However, the communication cost increases when call requests are frequent and the distance between the default server and the client is long. Moreover, no connection to a mobile host can be established when the default server of the destination mobile host fails. These problems can be solved by replicating the default server and letting the nearest replica process each query request sent by a client. It is therefore important to allocate replicated default servers efficiently in the network and to determine their number. In this paper, we suggest and evaluate a default server replication strategy that reduces communication costs and improves service availability. Furthermore, we propose and evaluate an optimal allocation algorithm and an optimal replication degree for replicating default servers in n×n grid networks and binary tree networks.
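
The trade-off the abstract describes, in which each added replica lowers query cost but adds update cost, can be illustrated with a small brute-force sketch. This is not the paper's allocation algorithm; it is a minimal toy model assuming Manhattan distances on an n×n grid, a unit query rate per node, and a fixed per-replica update cost:

```python
from itertools import combinations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def best_placement(n, k, update_cost_per_replica):
    """Brute-force k-replica placement on an n x n grid: every node
    queries its nearest replica, and each replica adds a fixed
    location-update cost. Returns (total_cost, replica_positions)."""
    nodes = [(x, y) for x in range(n) for y in range(n)]
    best = None
    for replicas in combinations(nodes, k):
        query_cost = sum(min(manhattan(c, r) for r in replicas) for c in nodes)
        total = query_cost + update_cost_per_replica * k
        if best is None or total < best[0]:
            best = (total, replicas)
    return best

def optimal_degree(n, update_cost_per_replica, max_k=4):
    """Sweep the replication degree k and keep the cheapest configuration."""
    return min((best_placement(n, k, update_cost_per_replica) + (k,)
                for k in range(1, max_k + 1)), key=lambda t: t[0])
```

Brute force is exponential in k and only usable for tiny grids; it serves here to make the cost model concrete, not to scale.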

Evaluation of the Redundancy in Decoy Database Generation for Tandem Mass Analysis (탠덤 질량 분석을 위한 디코이 데이터베이스 생성 방법의 중복성 관점에서의 성능 평가)

  • Li, Honglan;Liu, Duanhui;Lee, Kiwook;Hwang, Kyu-Baek
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.1
    • /
    • pp.56-60
    • /
    • 2016
  • Peptide identification in tandem mass spectrometry is usually done by searching the spectra against target databases consisting of reference protein sequences. To control false discovery rates for high-confidence peptide identification, spectra are also searched against decoy databases constructed by permuting the reference protein sequences. In this case, a peptide of the same sequence could be included in both the target and decoy databases, or multiple entries of the same peptide could exist in the decoy database. These phenomena complicate the protein identification problem, so it is important to minimize the number of such redundant peptides for accurate protein identification. In this regard, we examined two popular methods for decoy database generation: 'pseudo-shuffling' and 'pseudo-reversing'. We experimented with target databases of varying sizes and investigated the effect of the maximum number of missed cleavage sites allowed in a peptide (MC), which is one of the parameters for target and decoy database generation. In our experiments, the level of redundancy in decoy databases was proportional to the target database size and the value of MC, due to the increase in the number of short peptides (7 to 10 AA). Moreover, 'pseudo-reversing' always generated decoy databases with lower levels of redundancy compared to 'pseudo-shuffling'.
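
As a rough illustration of the 'pseudo-reversing' idea (not the authors' exact implementation), a common formulation reverses each tryptic fragment while keeping its C-terminal cleavage residue (K or R) fixed, so decoy peptides retain target-like length, mass, and cleavage pattern:

```python
import re

def pseudo_reverse(protein):
    """Toy 'pseudo-reversing' decoy generator: cut the protein after
    every K or R (tryptic cleavage sites), reverse each fragment
    except for its C-terminal residue, and re-join the fragments."""
    # split after each K or R, keeping the cleavage residue with its fragment
    fragments = re.findall(r'[^KR]*[KR]|[^KR]+$', protein)
    decoy = []
    for frag in fragments:
        if frag[-1] in 'KR':
            decoy.append(frag[:-1][::-1] + frag[-1])  # reverse, keep K/R in place
        else:
            decoy.append(frag[::-1])                  # trailing fragment, no site
    return ''.join(decoy)
```

Note how the redundancy the abstract measures can still arise: a very short or palindromic fragment reverses to itself, yielding a decoy peptide identical to its target counterpart.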

A study on utilizing electronic nautical charts for building an optimum marine transportation system (최적 해양물류시스템 구축을 위한 전자해도 활용에 관한 연구)

  • 김계현;최훈성;원대희
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2003.04a
    • /
    • pp.313-321
    • /
    • 2003
  • Ships account for more than about 90% of Korea's annual cargo volume, yet the marine logistics systems needed to handle this enormous volume efficiently remain underdeveloped. The government is preparing for facility shortages through physical measures such as expanding port logistics facilities. However, another important problem in marine logistics is that each shipping line and logistics company operates its own system and database for tasks they have in common. Operating such independent systems causes redundant investment in identical tasks, consuming much time and cost and hindering smooth information exchange. In this study, we therefore implemented a client-server vessel and cargo location search system for administrators using the Electronic Nautical Chart (ENC), for which an international standard exists, and proposed a way to prevent the redundant investment caused by independent systems by making the system usable at other ports as well. In addition, at a time when no new applications for ENCs have emerged, we demonstrated a new use for the ENC beyond its original purpose by employing it as the graphical database for displaying vessel and cargo locations in the search system.


Requirements Redundancy and Inconsistency Analysis for Use Case Modeling (유스케이스 모델링을 위한 요구사항 중복 및 불일치 분석)

  • 최진재;황선영
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.7
    • /
    • pp.869-882
    • /
    • 2004
  • This paper proposes an effective method to create a logically consistent and structured requirement model by applying the consistency control approach of formal methods to use-case modeling. The method integrates multi-perspective, scattered requirement segments that may overlap and conflict with each other into a structured requirement model. The model structure can be analyzed based on context goals and concerned-area overlap analysis, and model consistency can be achieved by using a specification-overlap-based consistency checking method as the integration vehicle. An experimental case study shows that the proposed method successfully identifies requirement overlaps and inconsistencies, and transfers multi-viewpoint requirement segments into a consistently integrated use-case model that clarifies software behavior and functionality. The method helps users identify specification inconsistencies in use-case modeling at an early stage of software development, and facilitates communication between users and developers to ensure customer satisfaction.

A Dual Filter-based Channel Selection for Classification of Motor Imagery EEG (동작 상상 EEG 분류를 위한 이중 filter-기반의 채널 선택)

  • Lee, David;Lee, Hee Jae;Park, Sang-Hoon;Lee, Sang-Goog
    • Journal of KIISE
    • /
    • v.44 no.9
    • /
    • pp.887-892
    • /
    • 2017
  • Brain-computer interface (BCI) is a technology that controls a computer and transmits intentions by measuring and analyzing the electroencephalogram (EEG) signals generated across multiple channels during mental tasks. Optimal EEG channel selection is necessary not only for the convenience and speed of a BCI but also for improved accuracy; the optimal channel set is obtained by removing duplicate (redundant) channels and noisy channels. This paper proposes a dual filter-based channel selection method to select the optimal EEG channels. The proposed method first removes duplicate channels using Spearman's rank correlation to eliminate redundancy between channels. Then, using the F score, the relevance between each channel and the class labels is computed, and only the top m channels are selected. The proposed method provides good classification accuracy by using features obtained from channels that are associated with the class labels and contain no duplicates, and it greatly reduces the number of channels required while improving the average classification accuracy.
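
The two-stage filter described in the abstract can be sketched as follows. This is a minimal reconstruction, not the authors' code: the redundancy threshold `rho_max` and the Fisher-score formulation are assumptions, and the tie-handling in the rank computation is simplified:

```python
import numpy as np

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks
    # (simplified: no average ranks for ties)
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def dual_filter_select(X, y, m, rho_max=0.9):
    """X: (trials, channels) feature matrix, y: class labels.
    Stage 1 drops channels whose rank correlation with an
    already-kept channel exceeds rho_max (redundancy filter);
    stage 2 ranks the survivors by Fisher score against the
    labels (relevance filter) and keeps the top m."""
    kept = []
    for ch in range(X.shape[1]):
        if all(abs(spearman(X[:, ch], X[:, k])) < rho_max for k in kept):
            kept.append(ch)
    # Fisher score: between-class scatter / within-class scatter
    mu, classes, scores = X.mean(axis=0), np.unique(y), []
    for ch in kept:
        num = sum((X[y == c, ch].mean() - mu[ch]) ** 2 * (y == c).sum()
                  for c in classes)
        den = sum(((X[y == c, ch] - X[y == c, ch].mean()) ** 2).sum()
                  for c in classes)
        scores.append(num / (den + 1e-12))
    order = np.argsort(scores)[::-1][:m]
    return [kept[i] for i in order]
```

The key design point mirrored here is the ordering: redundancy is removed first so that the relevance ranking never wastes one of its m slots on two copies of the same signal.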

A Perceptual Audio Coder Based on Temporal-Spectral Structure (시간-주파수 구조에 근거한 지각적 오디오 부호화기)

  • 김기수;서호선;이준용;윤대희
    • Journal of Broadcast Engineering
    • /
    • v.1 no.1
    • /
    • pp.67-73
    • /
    • 1996
  • In general, high-quality audio coding (HQAC) combines conventional data compression techniques with models of human perception. The primary auditory characteristic applied to HQAC is the masking effect in the spectral domain, so spectral techniques such as subband coding or transform coding are widely used [1][2]. However, no effort has yet been made to apply the temporal masking effect and temporal redundancy removal in HQAC. The audio data compression method proposed in this paper eliminates statistical and perceptual redundancies in both the temporal and spectral domains. The transformed audio signal is divided into packets, each consisting of 6 frames. A packet contains 1536 samples ($256{\times}6$), and its redundancies reside in both the temporal and spectral domains; both are eliminated at the same time in each packet. The psychoacoustic model has been improved to give more delicate results by taking into account temporal masking as well as fine spectral masking. For quantization, each packet is divided into subblocks designed to have an analogy with the nonlinear critical bands and to reflect temporal auditory characteristics. Consequently, high quality of the reconstructed audio is preserved at low bit rates.


Parallel Rabin Fingerprinting on GPGPU for Efficient Data Deduplication (효율적인 데이터 중복제거를 위한 GPGPU 병렬 라빈 핑거프린팅)

  • Ma, Jeonghyeon;Park, Sejin;Park, Chanik
    • Journal of KIISE
    • /
    • v.41 no.9
    • /
    • pp.611-616
    • /
    • 2014
  • Rabin fingerprinting used for chunking requires the largest amount of computation time in data deduplication. In this paper, therefore, we propose parallel Rabin fingerprinting on a GPGPU for efficient data deduplication. Four issues are considered to parallelize Rabin fingerprinting efficiently. First, when dividing the input data stream into data sections, the data located near the boundaries between sections must be handled so that Rabin fingerprints are calculated continuously. Second, the characteristics of Rabin fingerprinting are exploited for efficient operation. Third, the chunk boundaries, which can change relative to sequential Rabin fingerprinting, must be handled when adopting parallel Rabin fingerprinting. Finally, GPGPU memory access is optimized. Parallel Rabin fingerprinting on a GPGPU shows 16 times and 5.3 times better performance than sequential and parallel Rabin fingerprinting on a CPU, respectively. This throughput improvement of Rabin fingerprinting can lead to an overall performance improvement of data deduplication.
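
For readers unfamiliar with why fingerprint-based chunking matters for deduplication, here is a sequential sketch of the technique the paper parallelizes. It is a generic rolling-hash chunker, not the paper's GPGPU kernel; the window size, mask, base, and modulus are illustrative choices:

```python
def rabin_chunks(data, window=16, mask=0x1FFF, base=257, mod=(1 << 61) - 1):
    """Content-defined chunking with a Rabin-style rolling hash:
    a fixed-size window slides over the data, and byte offsets
    where (hash & mask) == 0 become chunk boundaries. Identical
    content therefore yields identical chunks regardless of its
    position in the stream, which is what enables deduplication."""
    if len(data) < window:
        return [data] if data else []
    pow_w = pow(base, window - 1, mod)   # weight of the outgoing byte
    h = 0
    for b in data[:window]:              # hash of the first window
        h = (h * base + b) % mod
    boundaries = []
    for i in range(window, len(data)):
        if (h & mask) == 0:
            boundaries.append(i)
        # slide the window: drop data[i - window], append data[i]
        h = ((h - data[i - window] * pow_w) * base + data[i]) % mod
    chunks, prev = [], 0
    for b in boundaries:
        chunks.append(data[prev:b])
        prev = b
    chunks.append(data[prev:])
    return chunks
```

The boundary condition depends only on the bytes inside the current window, which is also what makes the parallelization issues in the abstract concrete: each GPGPU data section must overlap its neighbor by one window so the rolling hash stays continuous across section boundaries.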

Eliminating Redundant Alarms of Buffer Overflow Analysis Using Context Refinements (분석 문맥 조절 기법을 이용한 버퍼 오버플로우 분석의 중복 경보 제거)

  • Kim, You-Il;Han, Hwan-Soo
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.12
    • /
    • pp.942-945
    • /
    • 2010
  • In order to reduce the effort required to inspect the alarms reported by a static buffer overflow analyzer, we present an effective method to filter out redundant alarms. In static analysis, a sequence of multiple alarms is frequently found to arise from the same cause in the code. In such a case, it is sufficient and reasonable for programmers to examine the first alarm instead of every alarm in the sequence. Based on this observation, we devise a buffer overflow analysis that filters out redundant alarms with our context refinement technique. Our experiments with several open source programs show that our method reduces the reported alarms by 23% on average.
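
The paper's context-refinement analysis is not reproduced here, but the reporting idea it motivates (show only the first alarm of each same-cause sequence) can be illustrated with a toy filter, assuming each alarm has already been tagged with its root cause:

```python
from itertools import groupby

def filter_redundant(alarms):
    """Alarms are (line, cause) pairs in program order. Consecutive
    alarms sharing the same root cause form one sequence; only the
    first alarm of each sequence is reported."""
    return [next(group) for _, group in groupby(alarms, key=lambda a: a[1])]
```

Because `groupby` only merges adjacent entries, a cause that reappears later in the program still produces a fresh report, which matches the intuition that a separated recurrence deserves its own inspection.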

Analysis of image distortion in 3D integral imaging display (집적결상된 3차원 영상의 중복 및 누락 왜곡에 대한 연구)

  • 서장일;차성도;신승호
    • Korean Journal of Optics and Photonics
    • /
    • v.15 no.3
    • /
    • pp.234-240
    • /
    • 2004
  • In an integral imaging system for 3D display, we have investigated the image distortions, such as duplication and omission, that appear in the reconstructed image. We also discuss the quantitative condition that minimizes the distortion in terms of several fundamental variables, and present experimental results that support the quantitative analysis of the distortion.