• Title/Summary/Keyword: Signal Optimization


Optimizing Imaging Conditions in Digital Tomosynthesis for Image-Guided Radiation Therapy (영상유도 방사선 치료를 위한 디지털 단층영상합성법의 촬영조건 최적화에 관한 연구)

  • Youn, Han-Bean;Kim, Jin-Sung;Cho, Min-Kook;Jang, Sun-Young;Song, William Y.;Kim, Ho-Kyung
    • Progress in Medical Physics / v.21 no.3 / pp.281-290 / 2010
  • Cone-beam digital tomosynthesis (CBDT) has attracted great attention in image-guided radiation therapy because of its attractive advantages such as low patient dose and reduced motion artifact. The image quality of the tomograms, however, depends on imaging conditions such as the scan angle ($\beta_{scan}$) and the number of projection views. In this paper, we describe the principle of CBDT based on the filtered-backprojection technique and investigate the optimization of imaging conditions. As a system performance measure, we define a figure-of-merit combining the signal difference-to-noise ratio, the artifact spread function, and the number of floating-point operations, which determines the computational load of the image reconstruction procedure. From measurements of a disc phantom, which mimics an impulse signal, and their analyses, it is concluded that the image quality of CBDT tomograms improves when the scan angle is wider than 60 degrees with a larger step scan angle ($\Delta\beta$). As a rule of thumb, the system performance depends on $\sqrt{\Delta\beta}\times\beta_{scan}^{2.5}$. If exact weighting factors could be assigned to each image-quality metric, more quantitative imaging conditions could be found.
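
The quoted rule of thumb can be turned into a quick comparison of acquisition settings. Below is a minimal sketch, assuming only the reported proportionality $\sqrt{\Delta\beta}\times\beta_{scan}^{2.5}$ and a simple full-arc view count; the paper's actual weighting of SDNR, ASF, and FLOPs in its figure-of-merit is not reproduced here.

```python
import math

def relative_fom(scan_angle_deg: float, step_angle_deg: float) -> float:
    """Relative figure-of-merit following the reported rule of thumb:
    FOM ~ sqrt(delta_beta) * beta_scan**2.5 (arbitrary units)."""
    return math.sqrt(step_angle_deg) * scan_angle_deg ** 2.5

# Compare a few candidate settings (scan angle, step angle), both in degrees.
for beta_scan, delta_beta in [(40, 1), (60, 2), (80, 2), (80, 4)]:
    n_views = int(beta_scan / delta_beta) + 1   # assumed full-arc view count
    print(f"scan={beta_scan:3d} deg  step={delta_beta} deg  views={n_views:3d}  "
          f"relative FOM={relative_fom(beta_scan, delta_beta):9.1f}")
```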

Fast Intra Prediction Mode Decision using Most Probable Mode for H.264/AVC (H.264/AVC에서의 최고 확률 모드를 이용한 고속 화면 내 예측 모드 결정)

  • Kim, Dae-Yeon;Kim, Jeong-Pil;Lee, Yung-Lyul
    • Journal of Broadcast Engineering / v.15 no.3 / pp.380-390 / 2010
  • The most recent standard video codec, H.264/AVC, achieves significant coding efficiency by using rate-distortion optimization (RDO). RDO is a measure for selecting the best mode, the one that minimizes the Lagrangian cost among several candidate modes; as a result, the computational complexity of the encoder increases drastically. In this paper, a fast intra prediction mode decision method is proposed to reduce the RDO complexity. To speed up Intra $4\times4$ and chroma intra encoding, the proposed method detects the cases in which the MPM (Most Probable Mode) is the best prediction mode. In such cases, the RDO process is skipped and only the MPM is used to encode the Intra $4\times4$ block. The proposed method is also applied to the chroma intra prediction mode in a similar way. The experimental results show that the proposed method achieves an average encoding time saving of about 63% with negligible loss of PSNR (peak signal-to-noise ratio).
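
The early-termination idea (trust the MPM and skip the RDO search when a cheap cost already favors it) can be sketched as follows. The cost functions `sad_cost` and `rd_cost` and the threshold used to trust the MPM are hypothetical placeholders, not the paper's actual decision rule.

```python
def choose_intra4x4_mode(block, mpm, candidate_modes, sad_cost, rd_cost,
                         early_skip_ratio=1.10):
    """Fast Intra 4x4 mode decision: if the most probable mode (MPM) already
    looks clearly best by a cheap cost, skip the full RDO search.
    Assumes mpm is contained in candidate_modes."""
    # Cheap distortion estimate (e.g. SAD of the prediction residual) per mode.
    cheap = {m: sad_cost(block, m) for m in candidate_modes}
    best_cheap = min(cheap.values())

    # If the MPM is (nearly) the cheapest candidate, trust it and skip RDO.
    if cheap[mpm] <= early_skip_ratio * best_cheap:
        return mpm                      # RDO skipped: encode the block with the MPM

    # Otherwise fall back to the full rate-distortion optimized decision.
    return min(candidate_modes, key=lambda m: rd_cost(block, m))
```

The same skeleton applies to the chroma intra decision by swapping in the chroma candidate modes.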

Run-time Memory Optimization Algorithm for the DDMB Architecture (DDMB 구조에서의 런타임 메모리 최적화 알고리즘)

  • Cho, Jeong-Hun;Paek, Yun-Heung;Kwon, Soo-Hyun
    • The KIPS Transactions:PartA / v.13A no.5 s.102 / pp.413-420 / 2006
  • Most vendors of digital signal processors (DSPs) support a Harvard architecture, which has two or more memory buses, one for program and one or more for data, allowing the processor to access multiple words of data from memory in a single instruction cycle. We already addressed how to efficiently assign data to multiple memory banks in our previous work. This paper reports on our recent attempt to optimize run-time memory. The run-time environment for dual data memory banks (DDMBs) requires two run-time stacks to control the activation records placed in the two memory banks for the calling procedures. However, the two activation records of a procedure may have different sizes, so the dual run-time stacks can become unbalanced whenever a procedure is called. Because of this imbalance, the usage of one memory bank can exceed the on-chip memory area even though free space remains in the other bank. In this paper, we attempt to balance the dual run-time stacks to improve the utilization of on-chip memory. The experimental results reveal that although our algorithm is relatively simple, it still utilizes run-time memory efficiently, enabling our compiler to run extremely fast while minimizing the run-time memory usage in the target code.
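
The stack-imbalance problem and the benefit of balancing can be illustrated with a toy model. The activation-record sizes and the balancing heuristic below (pushing a movable portion of each record onto the currently shallower stack) are purely illustrative assumptions, not the paper's algorithm.

```python
# Toy model of dual run-time stacks on a dual-data-memory-bank machine.
# Each call pushes an activation record split across bank X and bank Y.
calls = [           # (procedure, words in bank X, words in bank Y) -- made-up sizes
    ("main", 8, 2),
    ("fft",  2, 14),
    ("fir",  1, 9),
    ("dump", 6, 1),
]

def peak_usage(placements):
    """Peak words used in each bank for a sequence of (x_words, y_words) pushes."""
    x = y = peak_x = peak_y = 0
    for dx, dy in placements:
        x, y = x + dx, y + dy
        peak_x, peak_y = max(peak_x, x), max(peak_y, y)
    return peak_x, peak_y

# Fixed placement: every record keeps its "natural" split, so the stacks drift apart.
print("fixed placement   :", peak_usage([(dx, dy) for _, dx, dy in calls]))

# Illustrative balancing heuristic: the portion that could live in either bank
# is pushed onto whichever stack is currently shallower.
x = y = 0
balanced = []
for _, dx, dy in calls:
    movable = min(dx, dy)
    fx, fy = dx - movable, dy - movable
    if x + fx <= y + fy:
        fx += movable
    else:
        fy += movable
    balanced.append((fx, fy))
    x, y = x + fx, y + fy
print("balanced placement:", peak_usage(balanced))
```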

Bio-Sensing Convergence Big Data Computing Architecture (바이오센싱 융합 빅데이터 컴퓨팅 아키텍처)

  • Ko, Myung-Sook;Lee, Tae-Gyu
    • KIPS Transactions on Software and Data Engineering / v.7 no.2 / pp.43-50 / 2018
  • Biometric information computing strongly influences both computing systems and big-data systems built on bio-information systems that combine bio-signal sensors with bio-information processing. Unlike conventional data formats such as text, images, and videos, biometric information mixes formats: text-based values that give meaning to a bio-signal, important event moments stored as images, and complex formats such as video constructed for data prediction and analysis through time-series analysis. Such a complex data structure may be requested separately as text, image, or video depending on the characteristics of the data required by an individual biometric information application service, or as several formats simultaneously depending on the situation. Because previous bio-information processing computing systems depend on conventional computing components, computing structures, and data processing methods, they suffer many inefficiencies in terms of data processing performance, transmission capability, storage efficiency, and system safety. In this study, we propose an improved bio-sensing converged big-data computing architecture to build a platform that effectively supports biometric information processing. The proposed architecture supports data storage and transmission efficiency, computing performance, and system stability, and it can lay the foundation for system implementation and service optimization for future biometric information computing.

Gaussian Noise Reduction Method using Adaptive Total Variation : Application to Cone-Beam Computed Tomography Dental Image (적응형 총변이 기법을 이용한 가우시안 잡음 제거 방법: CBCT 치과 영상에 적용)

  • Kim, Joong-Hyuk;Kim, Jung-Chae;Kim, Kee-Deog;Yoo, Sun-K.
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.1 / pp.29-38 / 2012
  • Noise generated in the process of acquiring a medical image obstructs image interpretation and diagnosis. To restore the true image from an image corrupted by noise, the total variation optimization algorithm was proposed by Rudin, Osher, and Fatemi (the ROF model). This method removes noise by balancing regularity and fidelity. However, it cannot avoid the blurring of boundary regions that arises during the iterative computation. In this paper, we propose an adaptive total variation method that maps the control parameter through a proposed transfer function to minimize the boundary error. The proposed transfer function is determined by the noise variance and the local properties of the image. The proposed method was applied to 464 tooth images. To evaluate its performance, PSNR, an indicator of the ratio of signal power to noise power, was used. The experimental results show that the proposed method outperforms other methods.
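
A minimal sketch of the general idea: ROF-style TV denoising by gradient descent in which the fidelity weight varies per pixel through a transfer function of local variance. The transfer function shown is a hypothetical stand-in; the paper's own function also incorporates the noise variance.

```python
import numpy as np

def adaptive_tv_denoise(f, lam_map, n_iter=100, dt=0.1, eps=1e-6):
    """ROF-style TV denoising by gradient descent, with a per-pixel fidelity
    weight lam_map instead of one global lambda."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u            # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Larger lam_map keeps u close to the data; smaller lam_map smooths more.
        u = u + dt * (div - lam_map * (u - f))
    return u

def lam_from_local_variance(f, win=5, lam_min=0.05, lam_max=1.5):
    """Hypothetical transfer function: high local variance (likely an edge)
    gets a large fidelity weight, flat regions get a small one."""
    pad = win // 2
    fp = np.pad(f, pad, mode="reflect")
    local_var = np.empty_like(f)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            local_var[i, j] = fp[i:i + win, j:j + win].var()
    v = (local_var - local_var.min()) / (local_var.max() - local_var.min() + 1e-12)
    return lam_min + (lam_max - lam_min) * v

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0, 0.1, clean.shape)
denoised = adaptive_tv_denoise(noisy, lam_from_local_variance(noisy))
print("error std before/after:", (noisy - clean).std(), (denoised - clean).std())
```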

Real-Time Implementation of MPEG-1 Layer III Audio Decoder Using TMS320C6201 (TMS320C6201을 이용한 MPEG-1 Layer III 오디오 디코더의 실시간 구현)

  • 권홍석;김시호;배건성
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.8B / pp.1460-1468 / 2000
  • The goal of this research is the real-time implementation of an MPEG-1 Layer III audio decoder using the TMS320C6201 fixed-point digital signal processor. The work is twofold: one part is to convert the floating-point operations in the decoder into fixed-point operations while maintaining high resolution, and the other is to optimize the program so that it runs in real time with as small a memory footprint as possible. In particular, much effort was devoted to the descaling module of the decoder to convert its floating-point operations into fixed-point operations with high accuracy. The inverse modified discrete cosine transform (IMDCT) and synthesis polyphase filter bank modules were optimized to reduce the amount of computation and the memory size. After optimization, the implemented decoder uses about 26% of the maximum computational capacity of the TMS320C6201. The program memory, data ROM, and data RAM used in the decoder are about 6.77, 3.13, and 9.94 kwords, respectively. Comparing the PCM output of the fixed-point computation with that of the floating-point computation, we achieve a signal-to-noise ratio of more than 60 dB. Real-time operation is demonstrated on a PC using the sound I/O and host communication functions of the EVM board.
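
The heart of such a port is replacing floating-point arithmetic with scaled integer (Q-format) arithmetic and measuring the SNR against a floating-point reference. Below is a minimal sketch assuming a Q15 representation and a simple gain operation on a test tone; it is not the paper's descaling module.

```python
import numpy as np

Q = 15                           # Q15: 1 sign bit, 15 fractional bits
SCALE = 1 << Q

def to_fixed(x):
    """Quantize floating-point samples in [-1, 1) to Q15 integers."""
    return np.clip(np.round(x * SCALE), -SCALE, SCALE - 1).astype(np.int32)

def q15_mul(a, b):
    """Q15 x Q15 multiply with rounding back to Q15, as a fixed-point DSP would do."""
    return ((a.astype(np.int64) * b) + (1 << (Q - 1))) >> Q

# Floating-point reference: apply a gain to a 440 Hz test tone.
t = np.arange(1024)
x = 0.5 * np.sin(2 * np.pi * 440 * t / 44100)
gain = 0.7
y_float = gain * x

# Same operation in Q15 fixed point, then rescaled to float for comparison.
y_fixed = q15_mul(to_fixed(x), np.int32(round(gain * SCALE))) / SCALE

err = y_float - y_fixed
snr_db = 10 * np.log10(np.sum(y_float ** 2) / np.sum(err ** 2))
print(f"Q15 output SNR vs. floating-point reference: {snr_db:.1f} dB")
```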


Realization a Text Independent Speaker Identification System with Frame Level Likelihood Normalization (프레임레벨유사도정규화를 적용한 문맥독립화자식별시스템의 구현)

  • 김민정;석수영;김광수;정현열
    • Journal of the Institute of Convergence Signal Processing / v.3 no.1 / pp.8-14 / 2002
  • In this paper, we realize a real-time text-independent speaker recognition system using a Gaussian mixture model (GMM) and apply a frame-level likelihood normalization method, which has proven effective in verification systems. The system has three parts: front-end, training, and recognition. In the front-end, cepstral mean normalization and silence removal are applied to account for variations in the speaker's speech. In training, a Gaussian mixture model is used to model each speaker's acoustic features, and maximum likelihood estimation is used for GMM parameter optimization. In recognition, likelihood scores are calculated at the frame level from the speaker models and the test data. Text-independent sentences were used as test sentences. The ETRI 445 and KLE 452 databases were used for training and testing, and cepstral coefficients and their regression coefficients were used as feature parameters. The experimental results show that the frame-level likelihood normalization method achieves higher recognition rates than the conventional method, regardless of the number of registered speakers.
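
Frame-level likelihood normalization can be sketched compactly: each frame's log-likelihood under a speaker model is normalized by the combined likelihood of all competing models before the frame scores are accumulated. The sketch below uses scikit-learn GMMs and hypothetical toy features in place of the ETRI 445 / KLE 452 cepstral data.

```python
import numpy as np
from scipy.special import logsumexp
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical toy data: three "speakers" with 2-D features instead of cepstra.
train = {s: rng.normal(loc=3.0 * s, scale=1.0, size=(500, 2)) for s in range(3)}
test_frames = rng.normal(loc=3.0, scale=1.0, size=(200, 2))     # truly speaker 1

# One GMM per enrolled speaker, trained by maximum-likelihood EM.
models = {s: GaussianMixture(n_components=4, random_state=0).fit(x)
          for s, x in train.items()}

# Per-frame log-likelihoods, shape (n_speakers, n_frames).
frame_ll = np.vstack([models[s].score_samples(test_frames) for s in sorted(models)])

# Conventional scoring: accumulate raw frame log-likelihoods per speaker.
conventional = frame_ll.sum(axis=1)

# Frame-level likelihood normalization: subtract, per frame, the log-sum of all
# models' likelihoods, so each frame contributes a relative score.
normalized = (frame_ll - logsumexp(frame_ll, axis=0, keepdims=True)).sum(axis=1)

print("conventional decision    :", int(conventional.argmax()))
print("frame-normalized decision:", int(normalized.argmax()))
```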


Development of a Photoplethysmographic method using a CMOS image sensor for Smartphone (스마트폰의 CMOS 영상센서를 이용한 광용적맥파 측정방법 개발)

  • Kim, Ho Chul;Jung, Wonsik;Lee, Kwonhee;Nam, Ki Chang
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.6 / pp.4021-4030 / 2015
  • The pulse wave is a physiological response mediated by the autonomic nervous system, like the ECG. It is relatively convenient to acquire because the signal can be measured simply by applying a sensor to a finger, so it can be usefully employed in the field of u-Healthcare. The objectives of this study are to acquire the PPG (photoplethysmogram), a non-invasive measurement of the pulse wave, using the CMOS image sensor of a smartphone camera; to develop a portable system that judges whether the user is stressed; and to confirm its applicability in the field of u-Healthcare. The PPG was acquired and analyzed using image data from the smartphone camera without a separate sensor. From that image signal, HRV (heart rate variability) and a stress index are offered to users with the smartphone alone, without separate host equipment. In addition, the reliability and accuracy of the acquired data were improved by developing an additional hardware device. From these experiments, we confirm that it is possible to measure the heart rate through the PPG, and a stress index for analyzing the degree of stress, using the images from a smartphone camera. Because this study used a smartphone camera rather than a commercialized product or standardized sensor, the resolution is lower than that obtained with a commercialized external sensor. Despite this disadvantage, the system can be usefully employed as a u-Healthcare device, since promising data can be obtained by developing an additional external device to improve the reliability of the results and by optimizing the algorithm.
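
The camera-based PPG pipeline reduces to averaging a color channel over each frame and extracting the dominant pulsatile frequency. Below is a minimal sketch with hypothetical synthetic frames standing in for real smartphone video; the paper's HRV and stress-index computations are not reproduced.

```python
import numpy as np

FPS = 30                                       # assumed camera frame rate

def ppg_from_frames(frames):
    """Mean red-channel intensity per frame gives the raw PPG waveform."""
    return np.array([f[:, :, 0].mean() for f in frames])

def heart_rate_bpm(ppg, fps=FPS):
    """Heart rate from the dominant spectral peak in the 0.7-3.5 Hz band."""
    x = ppg - ppg.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)     # roughly 42-210 beats per minute
    return 60.0 * freqs[band][spectrum[band].argmax()]

# Hypothetical 10-second clip: tiny frames whose brightness pulses at 1.2 Hz (72 bpm).
t = np.arange(10 * FPS) / FPS
frames = [np.full((8, 8, 3), 128.0) + 5.0 * np.sin(2 * np.pi * 1.2 * ti) for ti in t]
print(f"estimated heart rate: {heart_rate_bpm(ppg_from_frames(frames)):.1f} bpm")
```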

Image Optimization of Fast Non Local Means Noise Reduction Algorithm using Various Filtering Factors with Human Anthropomorphic Phantom : A Simulation Study (인체모사 팬텀 기반 Fast non local means 노이즈 제거 알고리즘의 필터링 인자 변화에 따른 영상 최적화: 시뮬레이션 연구)

  • Choi, Donghyeok;Kim, Jinhong;Choi, Jongho;Kang, Seong-Hyeon;Lee, Youngjin
    • Journal of the Korean Society of Radiology / v.13 no.3 / pp.453-458 / 2019
  • In this study we analyzed how the image characteristics change with the filtering factor of the fast non-local means (FNLM) noise reduction algorithm, using a Male Adult mesh (MASH) phantom designed with the Geant4 Application for Tomographic Emission (GATE) simulation program. To accomplish this, the MASH phantom mimicking the human body was designed in the GATE simulation program, and a degraded image was obtained by adding Gaussian noise with a value of 0.005 to the MASH phantom image using MATLAB. The FNLM noise reduction algorithm was then applied to the degraded image with filtering factors of 0.005, 0.01, 0.05, 0.1, 0.5, and 1.0, respectively. For quantitative evaluation, the coefficient of variation (COV), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were calculated in the reconstructed images. The COV, SNR, and CNR were most improved in the image with a filtering factor of 0.05. In particular, the COV decreased with increasing filtering factor and remained nearly constant beyond 0.05, while the SNR and CNR improved with increasing filtering factor and deteriorated beyond 0.05. In conclusion, we demonstrated the importance of setting the filtering factor appropriately when applying the FNLM noise reduction algorithm to a degraded image.
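
The quantitative evaluation in this kind of study comes down to ROI statistics. Below is a minimal sketch of COV, SNR, and CNR computed from a signal ROI and a background ROI, using common textbook definitions; the paper's exact ROI placement and FNLM implementation are not reproduced here.

```python
import numpy as np

def roi_metrics(image, signal_roi, background_roi):
    """COV, SNR and CNR from a signal ROI and a background ROI (textbook forms:
    COV = sigma_bg / mu_bg, SNR = mu_sig / sigma_bg, CNR = |mu_sig - mu_bg| / sigma_bg).
    ROIs are (row_slice, col_slice) tuples."""
    sig, bg = image[signal_roi], image[background_roi]
    mu_s, mu_b, sd_b = sig.mean(), bg.mean(), bg.std()
    return {"COV": sd_b / mu_b, "SNR": mu_s / sd_b, "CNR": abs(mu_s - mu_b) / sd_b}

# Hypothetical phantom slice: bright insert on a uniform background plus Gaussian noise.
rng = np.random.default_rng(0)
img = np.full((128, 128), 50.0)
img[40:80, 40:80] = 120.0
img += rng.normal(0, 5.0, img.shape)

print(roi_metrics(img, (slice(50, 70), slice(50, 70)), (slice(5, 30), slice(5, 30))))
```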

Fault Detection Technique for PVDF Sensor Based on Support Vector Machine (서포트벡터머신 기반 PVDF 센서의 결함 예측 기법)

  • Seung-Wook Kim;Sang-Min Lee
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.5 / pp.785-796 / 2023
  • In this study, a methodology for real-time classification and prediction of defects that may appear in PVDF (polyvinylidene fluoride) sensors, which are widely used for structural integrity monitoring, is proposed. The types of sensor defects arising from the sensor attachment environment were classified, and an impact test using an impact hammer was performed to obtain output signals for each defect type. To clearly identify the differences between the output signals of the different defect types, time-domain statistical features were extracted and a data set was constructed. Among machine-learning-based classification algorithms, training on the acquired data set and the results were analyzed to select the algorithm most suitable for detecting the sensor defect types, and the SVM (Support Vector Machine) was confirmed to achieve the best performance. As a result, sensor defect types were classified with an accuracy of 92.5%, up to 13.95% higher than other classification algorithms. The sensor defect prediction technique proposed in this study is expected to serve as a base technology for securing the reliability of not only PVDF sensors but also various other sensors for real-time structural health monitoring.
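
The feature-extraction-plus-SVM pipeline can be sketched end to end. The synthetic impact responses and the particular statistical features below are hypothetical stand-ins for the paper's measured data set.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(sig):
    """Time-domain statistical features of one impact response."""
    return [sig.mean(), sig.std(), np.abs(sig).max(),
            np.sqrt(np.mean(sig ** 2)), skew(sig), kurtosis(sig)]

def synthetic_response(defect):
    """Hypothetical damped impact response; the defect type alters damping and amplitude."""
    t = np.linspace(0, 0.1, 500)
    damping, amp = {0: (40, 1.0), 1: (80, 0.6), 2: (25, 1.4)}[defect]
    return amp * np.exp(-damping * t) * np.sin(2 * np.pi * 500 * t) + rng.normal(0, 0.05, t.size)

X, y = [], []
for defect in (0, 1, 2):                        # 0 = healthy bond, 1/2 = two defect types
    for _ in range(100):
        X.append(features(synthetic_response(defect)))
        y.append(defect)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print(f"defect-type classification accuracy: {clf.score(X_te, y_te):.3f}")
```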