• Title/Summary/Keyword: Ratio algorithm

Search Results: 3,123

Performance Evaluation of Output Queueing ATM Switch with Finite Buffer Using Stochastic Activity Networks (SAN을 이용한 제한된 버퍼 크기를 갖는 출력큐잉 ATM 스위치 성능평가)

  • Jang, Kyung-Soo;Shin, Ho-Jin;Shin, Dong-Ryeol
    • The Transactions of the Korea Information Processing Society / v.7 no.8 / pp.2484-2496 / 2000
  • High-speed switches have been developed to interconnect a large number of nodes, and it is important to analyze switch performance under various conditions to verify that requirements are met. Queueing analysis, in general, suffers from a large state space and complex computation. The Petri net, a graphical and mathematical model, is suitable for many applications, in particular manufacturing systems; it can handle parallelism, concurrency, deadlock avoidance, and asynchronism, and it has recently been applied to computer-network performance evaluation and protocol verification. This paper presents a framework for modeling and analyzing an ATM switch using stochastic activity networks (SANs). We provide an easily extensible SAN model of the ATM switch and an approximate analysis method applicable to ATM switch models, which significantly reduces the complexity of the model solution. The cell arrival process in an output-buffered queueing ATM switch with a finite buffer is modeled as a Markov Modulated Poisson Process (MMPP), which accurately represents real traffic and captures the characteristics of bursty traffic. We analyze the performance of the switch in terms of cell-loss ratio (CLR), mean queue length, and mean delay time, and show that the SAN model is very useful for ATM switch modeling because its gates can implement scheduling algorithms. (A minimal simulation sketch of an MMPP-fed finite-buffer queue follows this entry.)

  • PDF
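
As noted above, here is a minimal slot-based simulation sketch in Python of a finite-buffer queue fed by a two-state MMPP, estimating cell-loss ratio and mean queue length. It is not the paper's SAN model; the rates, switching probabilities, and buffer size are illustrative assumptions.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's inverse-transform Poisson sampler (fine for small rates)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_mmpp_queue(slots=200_000, buffer_size=32,
                        rates=(0.3, 1.5), switch_prob=(0.01, 0.05), seed=1):
    """Slot-based simulation of a finite-buffer output queue fed by a
    two-state MMPP; one cell is served per slot. All parameter values
    are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    state, queue = 0, 0
    arrived = dropped = queue_sum = 0
    for _ in range(slots):
        for _ in range(poisson(rng, rates[state])):   # arrivals at the current rate
            arrived += 1
            if queue < buffer_size:
                queue += 1
            else:
                dropped += 1                          # buffer full: cell lost
        if queue > 0:                                 # serve one cell per slot
            queue -= 1
        queue_sum += queue
        if rng.random() < switch_prob[state]:         # modulate the arrival state
            state = 1 - state
    return dropped / max(arrived, 1), queue_sum / slots

if __name__ == "__main__":
    clr, mean_q = simulate_mmpp_queue()
    print(f"cell-loss ratio ~ {clr:.4f}, mean queue length ~ {mean_q:.2f}")
```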

Hybrid Watermarking Technique using DWT Subband Structure and Spatial Edge Information (DWT 부대역구조와 공간 윤곽선정보를 이용한 하이브리드 워터마킹 기술)

  • Seo, Young-Ho;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.5C / pp.706-715 / 2004
  • In this paper, the subband tree structure of the wavelet domain and edge information in the spatial domain are used to decide the watermark embedding positions and to embed the watermark. The significant frequency region is estimated by searching subbands from the higher-frequency subband toward the lower-frequency subband. The LH1 subband, the highest-frequency subband in the wavelet tree structure, is divided into 4×4 submatrices, and the threshold used in watermark embedding is obtained from a block matrix composed of the averages of these 4×4 submatrices. The watermark embedding positions, the keymap, are generated from this block matrix according to the energy distribution in the frequency domain and the edge information in the spatial domain. The watermark is embedded into the wavelet coefficients using the keymap and a random sequence generated by an LFSR (linear feedback shift register); the watermarked image is obtained after the inverse wavelet transform. The proposed watermarking algorithm improved PSNR by more than 2 dB and gave results 2% to 8% higher than previous work against attacks such as JPEG compression and common image-processing operations including blurring, sharpening, and Gaussian noise.
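
A hedged sketch of the general embedding idea (one-level DWT, block-average threshold on a detail subband, LFSR-driven additive embedding) using NumPy and PyWavelets. The block size, threshold rule, and embedding strength are illustrative assumptions, not the paper's exact keymap construction.

```python
import numpy as np
import pywt  # pip install PyWavelets

def lfsr_sequence(length, seed=0xACE1, taps=(16, 14, 13, 11)):
    """Simple 16-bit Fibonacci LFSR producing a +/-1 sequence
    (a stand-in for the LFSR mentioned in the abstract)."""
    state, out = seed, []
    for _ in range(length):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & 0xFFFF
        out.append(1.0 if bit else -1.0)
    return np.array(out)

def embed_watermark(image, strength=2.0, wavelet="haar"):
    """Hedged sketch: one-level DWT, then embed a pseudo-random sequence into
    detail-subband coefficients whose magnitude exceeds a block-average
    threshold (the threshold rule and strength are illustrative)."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    h, w = cH.shape
    # block-average threshold over 4x4 sub-blocks of the detail subband
    blocks = cH[: h - h % 4, : w - w % 4].reshape(h // 4, 4, w // 4, 4)
    threshold = np.abs(blocks).mean()
    keymap = np.abs(cH) > threshold        # candidate embedding positions
    wm = lfsr_sequence(int(keymap.sum()))
    cH_marked = cH.copy()
    cH_marked[keymap] += strength * wm     # additive spread-spectrum embedding
    return pywt.idwt2((cA, (cH_marked, cV, cD)), wavelet), keymap

if __name__ == "__main__":
    img = np.random.randint(0, 256, (128, 128)).astype(float)
    marked, keymap = embed_watermark(img)
    print("embedded at", int(keymap.sum()), "coefficients")
```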

Evaluation of metabolic tumor volume using different image reconstruction on 18F-FDG PET/CT fusion image (18F-FDG PET/CT 융합영상에서 영상 재구성 차이에 의한 MTV (Metabolic tumor volume) 평가)

  • Yoon, Seok Hwan
    • Journal of the Korea Convergence Society / v.9 no.1 / pp.433-440 / 2018
  • Recently, MTV (metabolic tumor volume) has been used as an index of whole-tumor FDG uptake on FDG PET images, but it is influenced by image reconstruction. The purpose of this study was to evaluate, in a phantom study, the correlation between actual volume and metabolic tumor volume when different SUVmax thresholds are applied to different reconstruction algorithms. Measurements were performed on a Siemens Biograph mCT40 using a NEMA IEC body phantom containing six spheres of different sizes filled with 18F-FDG at four signal-to-background ratios (4:1, 8:1, 10:1, 20:1). Images were reconstructed with four algorithms (OSEM3D, OSEM3D+PSF, OSEM3D+TOF, OSEM3D+TOF+PSF), and MTV was measured with different SUVmax thresholds. Overall, increasing the threshold decreased the MTV, and increasing the signal-to-background ratio decreased the MTV at the same SUVmax threshold. The 40% SUVmax threshold gave the best concordance between measured and actual volume for the PSF and PSF+TOF reconstructions, while the 45% threshold gave the best concordance for the OSEM3D and TOF reconstructions. We believe this study will be useful when measuring MTV on images produced by different reconstruction algorithms.
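
A minimal sketch of the MTV measurement itself: threshold a 3D SUV array at a percentage of SUVmax and multiply the voxel count by the voxel volume. The voxel dimensions and the toy phantom are assumptions for illustration only.

```python
import numpy as np

def metabolic_tumor_volume(suv, threshold_pct, voxel_dims_mm=(4.0, 4.0, 4.0)):
    """Hedged sketch: MTV as the volume of voxels whose SUV is at least a
    percentage of SUVmax. `suv` is a 3D array covering the volume of
    interest; voxel dimensions are illustrative, not scanner-specific."""
    suv_max = suv.max()
    mask = suv >= (threshold_pct / 100.0) * suv_max
    voxel_volume_ml = np.prod(voxel_dims_mm) / 1000.0   # mm^3 -> mL
    return mask.sum() * voxel_volume_ml

if __name__ == "__main__":
    # toy sphere of "uptake" in a uniform background, just to exercise the function
    z, y, x = np.mgrid[-20:20, -20:20, -20:20]
    suv = 1.0 + 9.0 * (np.sqrt(x**2 + y**2 + z**2) < 8)
    for pct in (40, 45, 50):
        print(pct, "% SUVmax ->", round(metabolic_tumor_volume(suv, pct), 1), "mL")
```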

Estimation of Medical Ultrasound Attenuation using Adaptive Bandpass Filters (적응 대역필터를 이용한 의료 초음파 감쇠 예측)

  • Heo, Seo-Weon;Yi, Joon-Hwan;Kim, Hyung-Suk
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.5 / pp.43-51 / 2010
  • Attenuation coefficients of medical ultrasound not only reflect pathological information about the scanned tissues but also provide quantitative information for compensating the decay of backscattered signals when estimating other ultrasound parameters. Because human tissues attenuate ultrasound in a frequency-selective way, attenuation estimation methods in the spectral domain are difficult to implement in real time due to their complexity, while estimation methods in the time domain do not compensate effectively for the diffraction effect. In this paper, we propose a modified VSA method, which compensates for diffraction using a reference phantom in the time domain, based on adaptive bandpass filters whose center frequencies decrease with depth. The adaptive bandpass filtering minimizes the distortion of relative echogenicity of wideband transmit pulses and maximizes the signal-to-noise ratio against random scattering, especially at deeper depths. Since the filter center frequencies change according to the accumulated attenuation, the proposed algorithm improves estimation accuracy and precision compared to fixed filtering. Computer simulations and experiments using tissue-mimicking phantoms demonstrate that the distortion of relative echogenicity is decreased at deeper depths, the accuracy of attenuation estimation is improved by 5.1%, and the standard deviation is decreased by 46.9% over the entire scan depth.
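
A rough SciPy sketch, under simplified assumptions, of depth-adaptive bandpass filtering of one RF scan line: the center frequency is lowered linearly with depth rather than being driven by accumulated attenuation as in the proposed method, and all numeric parameters are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def depth_adaptive_bandpass(rf_line, fs, f0=5e6, slope_hz_per_sample=-30.0,
                            bandwidth=2e6, n_segments=16):
    """Hedged sketch: the scan line is split into depth segments and each
    segment is filtered around a center frequency that decreases with depth."""
    out = np.zeros_like(rf_line, dtype=float)
    seg_len = len(rf_line) // n_segments
    for i in range(n_segments):
        start = i * seg_len
        stop = len(rf_line) if i == n_segments - 1 else start + seg_len
        fc = f0 + slope_hz_per_sample * start          # center frequency at this depth
        low = max(fc - bandwidth / 2, 1e5)
        high = min(fc + bandwidth / 2, 0.45 * fs)
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        out[start:stop] = sosfiltfilt(sos, rf_line)[start:stop]
    return out

if __name__ == "__main__":
    fs = 40e6
    t = np.arange(8192) / fs
    # toy RF line: attenuated 5 MHz pulse train plus noise
    rf = np.sin(2 * np.pi * 5e6 * t) * np.exp(-t * 2e5) + 0.1 * np.random.randn(t.size)
    print(depth_adaptive_bandpass(rf, fs).shape)
```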

The Early Write Back Scheme For Write-Back Cache (라이트 백 캐쉬를 위한 빠른 라이트 백 기법)

  • Chung, Young-Jin;Lee, Kil-Whan;Lee, Yong-Surk
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.11 / pp.101-109 / 2009
  • Depth caches and pixel caches for 3D graphics are generally designed with a write-back scheme to use memory bandwidth efficiently. In 3D graphics caches, write-after-read operations to the same address, or write-only access patterns, occur frequently. When a cache miss is detected, an external memory access for the write-back operation and another access for handling the miss are issued at the same time, so under frequent cache misses the limited memory bandwidth becomes a bottleneck: external memory access time increases, the overall performance of the processor or IP decreases, and peak power consumption rises. In this paper, we propose a novel early write-back cache architecture to solve these problems. The proposed architecture controls when the external memory is accessed to copy back a valid data block, and it improves cache performance with the same hit ratio and the same cache capacity. As a result, it relieves the memory bottleneck by preventing bursts of intensive memory accesses. We evaluated the proposed architecture on the 3D graphics z-cache and pixel cache in an SoC environment containing an ARM11, a 3D graphics accelerator, and various IPs. The simulation results indicated a performance increase of up to 75% across the simulation vectors.
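
A toy Python model of the early write-back idea: dirty lines are cleaned opportunistically during idle cycles so that a later conflict miss does not trigger an extra write-back access. It abstracts away the real hardware (associativity, burst transfers) and is not the paper's architecture.

```python
class EarlyWriteBackCache:
    """Hedged sketch of a direct-mapped write-back cache that flushes dirty
    lines opportunistically (when `idle_cycle` is called) so a later miss on
    the same set does not need an extra write-back memory access."""

    def __init__(self, num_sets=256, memory_writer=print):
        self.lines = [None] * num_sets          # each line: [tag, data, dirty]
        self.num_sets = num_sets
        self.write_mem = memory_writer          # stand-in for the external memory

    def _set_and_tag(self, addr):
        return addr % self.num_sets, addr // self.num_sets

    def write(self, addr, data):
        s, tag = self._set_and_tag(addr)
        line = self.lines[s]
        if line is not None and line[0] != tag and line[2]:
            self.write_mem(("writeback-on-miss", s, line[1]))   # the costly case
        self.lines[s] = [tag, data, True]

    def idle_cycle(self):
        # early write back: use idle memory bandwidth to clean one dirty line
        for line in self.lines:
            if line is not None and line[2]:
                self.write_mem(("early-writeback", line[1]))
                line[2] = False
                break

if __name__ == "__main__":
    cache = EarlyWriteBackCache(num_sets=4)
    cache.write(0, "A")       # miss, allocate, line becomes dirty
    cache.idle_cycle()        # dirty line cleaned early
    cache.write(4, "B")       # conflict miss on the same set, no write-back needed
```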

System Implementation for Generating High Quality Digital Holographic Video using Vertical Rig based on Depth+RGB Camera (Depth+RGB 카메라 기반의 수직 리그를 이용한 고화질 디지털 홀로그래픽 비디오 생성 시스템의 구현)

  • Koo, Ja-Myung;Lee, Yoon-Hyuk;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.17 no.6 / pp.964-975 / 2012
  • Digital holography, widely regarded as the ultimate goal of 3-dimensional video technology, has recently attracted increasing attention. A digital hologram can be generated from a depth image and an RGB image. We propose a new system that captures RGB and depth images and converts them into digital holograms. First, a new cold mirror was designed and produced; it has different transmittance for different wavelengths and provides the cameras with the same view and focal point. After correcting various distortions in the camera system, the resolution difference between the depth and RGB images was adjusted. The object of interest was extracted using the depth information, and finally a digital hologram was generated with a computer-generated hologram (CGH) algorithm. All algorithms were implemented in C/C++/CUDA and integrated in the LabView environment, and the hologram was calculated on the GPU (general-purpose computing on graphics processing units, GPGPU) for high-speed operation. We verified that the visual quality of the holograms produced by the proposed system is better than that of the previous system.
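
A small NumPy sketch of point-source CGH, the kind of computation the paper offloads to the GPU: each object point (which in practice would come from the depth and RGB data) contributes a spherical wave to the hologram plane. Resolution, pixel pitch, and the sample points are illustrative assumptions.

```python
import numpy as np

def point_source_cgh(points, holo_shape=(128, 128), pitch=10e-6,
                     wavelength=532e-9):
    """Hedged sketch of point-source CGH: every object point (x, y, z,
    amplitude) adds a spherical wavefront on the hologram plane. Real
    systems accelerate exactly this double loop on the GPU (GPGPU/CUDA)."""
    ny, nx = holo_shape
    ys = (np.arange(ny) - ny / 2) * pitch
    xs = (np.arange(nx) - nx / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    k = 2 * np.pi / wavelength
    field = np.zeros(holo_shape, dtype=complex)
    for px, py, pz, amp in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += amp * np.exp(1j * k * r) / r
    return np.angle(field)            # phase-only hologram pattern

if __name__ == "__main__":
    # a few hypothetical object points (x, y, z, amplitude)
    pts = [(0.0, 0.0, 0.1, 1.0), (2e-4, -1e-4, 0.12, 0.8)]
    hologram = point_source_cgh(pts)
    print(hologram.shape, float(hologram.min()), float(hologram.max()))
```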

Practical Virtual Compensator Design with Dynamic Multi-Leaf Collimator (dMLC) from Iso-Dose Distribution

  • Song, Ju-Young;Suh, Tae-Suk;Lee, Hyung-Koo;Choe, Bo-Young;Ahn, Seung-Do;Park, Eun-Kyung;Kim, Jong-Hoon;Lee, Sang-Wook;Yi, Byong-Yong
    • Proceedings of the Korean Society of Medical Physics Conference / 2002.09a / pp.129-132 / 2002
  • A practical virtual compensator, which uses a dynamic multi-leaf collimator (dMLC) and a three-dimensional radiation therapy planning (3D RTP) system, was designed, and a feasibility study was performed to verify that the virtual compensator can replace a physical compensator. The design procedure consists of three steps. The first step is to generate isodose distributions with the 3D RTP system (Render Plan, Elekta); the isodose line pattern is used as the compensator pattern, and a predetermined compensating ratio is applied to generate the fluence map for the compensator design. The second step is to generate the leaf-sequence file with Ma's algorithm for optimum MU efficiency; all of this is done with in-house software. The last step is the QA procedure, which compares the dose distribution produced by irradiation with the virtual compensator to the dose distribution calculated by the 3D RTP system. For this study, a phantom was fabricated to verify the suitability of the designed compensator; it consists of a styrofoam part, which mimics an irregularly shaped contour or missing tissue, and a mini water phantom. The inhomogeneous dose distribution due to the styrofoam missing tissue could be calculated with the RTP system. Film dosimetry in the phantom with and without the compensator showed significant improvement of the dose distribution. From a practical point of view, the virtual compensator designed in this study proved to be a viable replacement for the physical compensator. (A rough sketch of the fluence-map step follows this entry.)

  • PDF
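
As mentioned above, a very rough sketch of the fluence-map step: scale down the beam fluence in regions whose relative dose exceeds a chosen compensating ratio, then quantize a profile into discrete levels as a crude stand-in for leaf sequencing. The inverse-dose rule and all numbers are assumptions, not the Render Plan / Ma's-algorithm workflow used in the paper.

```python
import numpy as np

def compensator_fluence(dose_map, compensating_ratio=0.9):
    """Hedged sketch of the compensator-design idea: regions that receive
    more than `compensating_ratio` of the maximum dose get their fluence
    reduced in inverse proportion, flattening the distribution."""
    rel = dose_map / dose_map.max()
    fluence = np.ones_like(rel)
    hot = rel > compensating_ratio
    fluence[hot] = compensating_ratio / rel[hot]   # attenuate hot regions
    return fluence

def leaf_openings(profile, num_levels=10):
    """Very rough stand-in for leaf sequencing: quantize a 1D fluence
    profile into discrete intensity levels (not Ma's algorithm itself)."""
    return np.round(profile * num_levels).astype(int)

if __name__ == "__main__":
    y, x = np.mgrid[-20:20, -20:20]
    dose = 1.0 + 0.3 * np.exp(-(x**2 + y**2) / 50.0)   # toy dose map with a central hot spot
    fl = compensator_fluence(dose)
    print(leaf_openings(fl[20]))                        # one leaf-pair row
```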

Diagnosis of Ictal Hyperperfusion Using Subtraction Image of Ictal and Interictal Brain Perfusion SPECT (발작기와 발작간기 뇌 관류 SPECT 감산영상을 이용한 간질원인 병소 진단)

  • Lee, Dong Soo;Seo, Jong-Mo;Lee, Jae Sung;Lee, Sang-Kun;Kim, Hyun Jip;Chung, June-Key;Lee, Myung Chul;Koh, Chang-Soon
    • The Korean Journal of Nuclear Medicine / v.32 no.1 / pp.20-31 / 1998
  • A robust algorithm to compute and display the difference between ictal and interictal perfusion may facilitate the detection of ictal hyperperfusion foci. The diagnostic performance of localizing epileptogenic zones with subtracted SPECT images was compared with visual diagnosis using ictal and interictal SPECT, MR, or PET. Ictal and interictal Tc-99m-HMPAO cerebral perfusion SPECT images of 48 patients were processed to obtain parametric subtraction images. The epileptogenic foci of all patients were confirmed by a seizure-free state after resection of the epileptogenic zones. For the subtraction SPECT, we used the normalized difference ratio of pixel counts, (ictal − interictal)/interictal × 100%, after correcting the coordinates of the ictal and interictal SPECT in a semi-automated 3-dimensional fashion. We identified epileptogenic zones on subtraction SPECT and compared the performance with visual diagnosis using ictal and interictal SPECT, MR, and PET, taking the post-surgical diagnosis as the gold standard. The concordance between subtraction SPECT and ictal-interictal SPECT was moderately good (kappa = 0.49). The sensitivity of ictal-interictal SPECT was 73% and that of subtraction SPECT 58%; the positive predictive value of ictal-interictal SPECT was 76% and that of subtraction SPECT 64%. There was no statistical difference between the sensitivity or positive predictive values of subtraction SPECT and those of ictal-interictal SPECT, MR, or PET, and the same held when patients were divided into temporal lobe epilepsy and neocortical epilepsy. We conclude that the subtraction SPECT we produced had diagnostic performance equivalent to ictal-interictal SPECT in localizing epileptogenic zones. The additional value of subtraction SPECT in the clinical interpretation of ictal and interictal SPECT should be evaluated further. (A minimal sketch of the normalized-difference computation follows this entry.)

  • PDF
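
As noted above, a minimal sketch of the subtraction step on already coregistered volumes: globally normalize counts, compute (ictal − interictal)/interictal × 100%, and keep voxels above a hyperperfusion threshold; the threshold value and toy data are assumptions.

```python
import numpy as np

def subtraction_spect(ictal, interictal, threshold_pct=20.0):
    """Hedged sketch: after coregistration (assumed already done) and global
    count normalization, compute the normalized difference
    (ictal - interictal) / interictal x 100% and keep voxels above a
    hyperperfusion threshold (the 20% value is illustrative)."""
    ictal = ictal * (interictal.sum() / ictal.sum())    # global count normalization
    eps = 1e-6                                          # avoid division by zero
    diff_pct = (ictal - interictal) / (interictal + eps) * 100.0
    hyper = np.where(diff_pct >= threshold_pct, diff_pct, 0.0)
    return diff_pct, hyper

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    inter = rng.poisson(100, size=(64, 64, 64)).astype(float)
    ictal = inter.copy()
    ictal[30:34, 30:34, 30:34] *= 1.5        # simulated focal hyperperfusion
    diff, focus = subtraction_spect(ictal, inter)
    print("voxels above threshold:", int((focus > 0).sum()))
```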

Coated Tongue Region Extraction using the Fluorescence Response of the Tongue Coating by Ultraviolet Light Source (설태의 자외선 형광 반응을 이용한 설태 영역 추출)

  • Choi, Chang-Yur;Lee, Woo-Beom;Hong, You-Sik;Nam, Dong-Hyun;Lee, Sang-Suk
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.4 / pp.181-188 / 2012
  • This paper proposes an effective method for extracting the coated tongue region, which is used as a diagnostic criterion in tongue diagnosis. The proposed method uses the fluorescence response of the tongue coating under ultraviolet light. In particular, it addresses the limitations of previous approaches regarding the constraints of the examination environment and the objectivity of the diagnostic results. In our method, the original tongue image is acquired under ultraviolet light, and binarization is performed by thresholding at the valley point of the histogram that corresponds to the color difference between the tongue body and the tongue coating. The final view image presented to the oriental-medicine doctor is obtained by applying the Canny edge algorithm to the binary image and adding the edge image to the original image. To evaluate the performance of the proposed method, we built a set of various tongue images and compared the true coated-tongue region delineated by an oriental-medicine doctor with the region extracted by our method. The proposed method achieved an average extraction ratio of 87.87%, and the shape of the extracted coated-tongue region also showed significantly high similarity.
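
A hedged OpenCV sketch of this pipeline: binarize the ultraviolet image (Otsu thresholding is used here as a convenient stand-in for explicit histogram-valley search), apply Canny to the mask, and overlay the edges on the original image. The synthetic test image and all parameters are illustrative.

```python
import cv2
import numpy as np

def extract_coated_region(gray_uv_image):
    """Hedged sketch: binarize at a histogram threshold (Otsu as a stand-in
    for valley search), run Canny on the binary mask, and overlay the
    detected boundary on the original image for the practitioner to review."""
    blur = cv2.GaussianBlur(gray_uv_image, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(mask, 50, 150)
    overlay = cv2.cvtColor(gray_uv_image, cv2.COLOR_GRAY2BGR)
    overlay[edges > 0] = (0, 0, 255)          # draw the region boundary in red
    return mask, overlay

if __name__ == "__main__":
    # synthetic stand-in for an ultraviolet tongue image (bright coated patch)
    img = np.full((200, 200), 60, np.uint8)
    cv2.circle(img, (100, 100), 40, 170, -1)
    mask, view = extract_coated_region(img)
    print("coated-area pixels:", int((mask > 0).sum()))
```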

Duty Cycle Scheduling considering Delay Time Constraints in Wireless Sensor Networks (무선네트워크에서의 지연시간제약을 고려한 듀티사이클 스케쥴링)

  • Vu, Duy Son;Yoon, Seokhoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.2 / pp.169-176 / 2018
  • In this paper, we consider duty-cycled wireless sensor networks (WSNs) in which sensor nodes sleep periodically in order to reduce energy consumption. In such networks, as the duty-cycle interval increases, energy consumption decreases, but a longer duty-cycle interval increases the end-to-end (E2E) delay. Many WSN applications are delay-sensitive and require packets to be delivered from the sensor nodes to the sink within a delay bound. Most existing studies focus only on reducing the E2E delay rather than on the delay-bound requirement, which makes it hard to balance E2E delay and energy consumption, while the few studies that do consider the delay bound require time synchronization between neighboring nodes or a specific distribution of deployed nodes. To address these limitations, we propose a duty-cycle scheduling algorithm that aims to achieve low energy consumption while satisfying the delay requirements. To that end, we first estimate the probability distribution of the E2E delay; then, using the obtained distribution, we determine the maximal duty-cycle interval that still satisfies the delay constraint. Simulation results show that the proposed design satisfies the given delay-bound requirements while achieving low energy consumption.
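
A minimal Monte-Carlo sketch of the same idea: estimate the E2E delay distribution for a candidate duty-cycle interval (each hop waits a uniform fraction of the interval, an assumed model), then search for the largest interval whose delay-bound violation probability stays within the requirement. All numbers are illustrative.

```python
import random

def e2e_delay_violation_prob(interval, hops, delay_bound, tx_time=0.01,
                             samples=20_000, seed=1):
    """Monte-Carlo estimate of P(end-to-end delay > bound) when every hop
    waits a random fraction of the duty-cycle interval for the next wake-up.
    The uniform waiting model and parameter values are assumptions."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(samples):
        delay = sum(rng.uniform(0, interval) + tx_time for _ in range(hops))
        if delay > delay_bound:
            violations += 1
    return violations / samples

def max_duty_cycle_interval(hops, delay_bound, target_prob=0.05, step=0.01):
    """Largest duty-cycle interval whose estimated violation probability
    still satisfies the delay requirement (simple linear search)."""
    interval, best = step, step
    while interval <= delay_bound:
        if e2e_delay_violation_prob(interval, hops, delay_bound) <= target_prob:
            best = interval
        else:
            break
        interval += step
    return best

if __name__ == "__main__":
    # e.g. 6 hops, 1.0 s end-to-end delay bound, 5% allowed violation probability
    print("max interval ~", round(max_duty_cycle_interval(6, 1.0), 2), "s")
```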