• Title/Summary/Keyword: 실험 영상 (experimental images)

Search results: 10,218

A Genetic Programming Approach to Blind Deconvolution of Noisy Blurred Images (잡음이 있고 흐릿한 영상의 블라인드 디컨벌루션을 위한 유전 프로그래밍 기법)

  • Mahmood, Muhammad Tariq;Chu, Yeon Ho;Choi, Young Kyu
    • KIPS Transactions on Software and Data Engineering / v.3 no.1 / pp.43-48 / 2014
  • Image deconvolution is commonly applied as a preprocessing step in surveillance systems to reduce the effects of motion or out-of-focus blur. In this paper, we propose a blind image deconvolution filtering approach based on genetic programming (GP). A numerical expression for image restoration is evolved through the GP process, optimally combining and exploiting dependencies among features of the blurred image. To develop this function, a set of feature vectors is first formed by considering a small neighborhood around each pixel. In the second stage, the estimator is trained through the GP process, which automatically selects and combines the useful feature information under a fitness criterion. The evolved function is then applied to estimate the pixel intensities of the degraded image. The performance of the developed function is evaluated on various degraded image sequences, and our comparative analysis highlights the effectiveness of the proposed filter.
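
  To make the pipeline concrete, here is a minimal sketch (not the authors' code) of the per-pixel stages the abstract describes: a feature vector from a small neighborhood, fed to an evolved expression. The `evolved_estimator` below is a hypothetical placeholder; GP would search over compositions of primitives on these features under a fitness criterion.

  ```python
  import numpy as np

  def neighborhood_features(img, y, x, r=1):
      """Collect the (2r+1)x(2r+1) neighborhood around (y, x) as a feature vector."""
      patch = img[y - r:y + r + 1, x - r:x + r + 1]
      return patch.ravel().astype(np.float64)

  def evolved_estimator(f):
      """Placeholder for the GP-evolved expression; the real function is
      found by GP search, not hand-written like this."""
      return 0.5 * f.mean() + 0.5 * np.median(f)

  def restore(img, r=1):
      """Apply the estimator at every interior pixel (borders left as-is)."""
      out = img.astype(np.float64).copy()
      for y in range(r, img.shape[0] - r):
          for x in range(r, img.shape[1] - r):
              out[y, x] = evolved_estimator(neighborhood_features(img, y, x, r))
      return np.clip(out, 0, 255).astype(np.uint8)
  ```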

Fast Video Detection Using Temporal Similarity Extraction of Successive Spatial Features (연속하는 공간적 특징의 시간적 유사성 검출을 이용한 고속 동영상 검색)

  • Cho, A-Young;Yang, Won-Keun;Cho, Ju-Hee;Lim, Ye-Eun;Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.11C / pp.929-939 / 2010
  • The growth of multimedia technology drives the development of video detection for large-database management and illegal-copy detection. To meet this demand, this paper proposes a fast video detection method suitable for large databases. The algorithm uses spatial features based on the gray-value distribution of frames and temporal features based on a temporal similarity map. We form a video signature from the extracted spatial and temporal features and carry out a stepwise matching method. Performance was evaluated by accuracy, extraction and matching time, and signature size, using original videos and modified versions (brightness change, lossy compression, text/logo overlay). We show empirical parameter selection, report results for a simple matching method using only the spatial feature, and compare against existing algorithms. According to the experimental results, the proposed method performs well in accuracy, processing time, and signature size, and is therefore suitable for video detection over large databases.
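
  As an illustration of the temporal-similarity idea (a sketch under assumed details, not the paper's exact signature), each frame can be reduced to a coarse gray-value distribution, and the similarity between every query/reference frame pair forms a map in which a true copy appears as a high-similarity diagonal run:

  ```python
  import numpy as np

  def spatial_signature(frame, bins=16):
      """Coarse gray-value distribution of one frame (normalized histogram)."""
      hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
      return hist / max(hist.sum(), 1)

  def temporal_similarity_map(query_sigs, ref_sigs):
      """Histogram-intersection similarity for every query/reference frame
      pair; a matching segment shows up as a bright diagonal in this map."""
      return np.array([[np.minimum(q, r).sum() for r in ref_sigs]
                       for q in query_sigs])
  ```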

Design Space Exploration of Many-Core Processors for Ultrasonic Image Processing at Different Resolutions (다양한 해상도의 초음파 영상처리를 위한 매니코어 프로세서의 디자인 공간 탐색)

  • Kang, Sung-Mo;Kim, Jong-Myon
    • The KIPS Transactions: Part A / v.19A no.3 / pp.121-128 / 2012
  • This paper explores the optimal processing element (PE) configuration for ultrasonic image processing at different resolutions (256×256, 768×1,024, and 1,024×1,280). To determine the optimal configuration, the paper evaluates how the data-per-processing-element (DPE) ratio, defined as the amount of image data directly mapped to each PE, affects system performance as well as energy and area efficiency, using architectural and workload simulations. The paper illustrates the correlation between the DPE ratio and PE architecture for a target implementation in 130 nm technology. To identify the most efficient PE structure, seven PE configurations were simulated for ultrasonic image processing. Experimental results indicate that the highest energy efficiency was achieved at PEs = 1,024, 4,096, and 16,384 for ultrasonic images at 256×256, 768×1,024, and 1,024×1,280 resolution, respectively. The maximum area efficiency was obtained at PEs = 256 (256×256 image) and PEs = 4,096 (768×1,024 and 1,024×1,280 images).
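
  The DPE ratio itself is simple arithmetic: the pixel count of the image divided by the number of PEs. A few of the configurations above, computed directly:

  ```python
  # DPE ratio = image pixels / number of PEs (pixels mapped to each PE).
  # E.g. a 256x256 image on 1,024 PEs gives 65,536 / 1,024 = 64 pixels/PE.
  for (h, w) in [(256, 256), (768, 1024), (1024, 1280)]:
      for pes in (256, 1024, 4096, 16384):
          print(f"{h}x{w} on {pes:>5} PEs -> DPE = {h * w / pes:g} pixels/PE")
  ```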

Dynamic Parameter Visualization and Noise Suppression Techniques for Contrast-Enhanced Ultrasonography (조영증강 초음파진단을 위한 동적 파라미터 가시화기법 및 노이즈 개선기법)

  • Kim, Ho-Joon
    • Journal of KIISE / v.42 no.7 / pp.910-918 / 2015
  • This paper presents a parameter visualization technique to overcome the limitations of the naked eye in contrast-enhanced ultrasonography, along with a method to compensate for distortion and noise in ultrasound image sequences. Meaningful parameters for diagnosing liver disease can be extracted from the dynamic patterns of contrast enhancement in ultrasound images, and the visualization technique provides more accurate information by generating a parametric image from the dynamic data. Respiratory motion and noise from microbubbles in ultrasound data may degrade the reliability of the diagnostic parameters, so a multi-stage algorithm for respiratory motion tracking and an image enhancement technique based on Markov Random Fields are proposed. The usefulness of the proposed methods is discussed empirically through experiments on a set of clinical data.
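
  As a sketch of what a parametric image is (illustrative only; the paper's specific diagnostic parameters for liver disease are not given in the abstract), the per-pixel intensity curve over the motion-corrected frame sequence can be reduced to maps such as peak enhancement and time-to-peak:

  ```python
  import numpy as np

  def parametric_images(seq, frame_dt=1.0):
      """seq: (T, H, W) contrast-enhanced intensity sequence, assumed already
      motion-corrected. Returns two parametric maps: per-pixel peak
      enhancement and per-pixel time-to-peak (in seconds, given frame_dt)."""
      peak = seq.max(axis=0)
      time_to_peak = seq.argmax(axis=0) * frame_dt
      return peak, time_to_peak
  ```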

Robust Method of Video Contrast Enhancement for Sudden Illumination Changes (급격한 조명 변화에 강건한 동영상 대조비 개선 방법)

  • Park, Jin Wook;Moon, Young Shik
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.55-65 / 2015
  • Contrast enhancement methods designed for a single image may cause flickering artifacts when applied to videos, because they do not consider temporal continuity. On the other hand, methods that do consider continuity can reduce flickering but may cause unnecessary fade-in/out artifacts when the intensity of the video changes abruptly. In this paper, we propose a video contrast enhancement method that is robust to sudden illumination changes. The proposed method enhances each frame by Fast Gray-Level Grouping (FGLG) and maintains temporal continuity with an exponential smoothing filter. The smoothing factor of the filter is computed with a sigmoid function and applied to each frame to reduce unnecessary fade-in/out effects. In the experiments, six metrics were used to compare the proposed method with traditional methods. The proposed method achieved the best quantitative performance in MSSIM and flickering score, and visual quality comparison shows adaptive enhancement under sudden illumination changes.
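
  A minimal sketch of the adaptive smoothing step, with an assumed sigmoid parameterization (`k` and `x0` below are illustrative values, not the paper's):

  ```python
  import numpy as np

  def sigmoid(x, k=10.0, x0=0.5):
      return 1.0 / (1.0 + np.exp(-k * (x - x0)))

  def temporally_smoothed_mapping(prev_map, cur_map, mean_prev, mean_cur):
      """Blend the current frame's enhancement mapping with the previous one.
      A large jump in mean intensity (sudden illumination change) pushes the
      sigmoid toward 1, so the new mapping is adopted quickly and fade-in/out
      is avoided; small changes keep alpha low, which suppresses flicker."""
      change = abs(mean_cur - mean_prev) / 255.0
      alpha = sigmoid(change)
      return alpha * cur_map + (1.0 - alpha) * prev_map
  ```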

Feature Extraction of Molecular Images by DWT (DWT에 의한 분자영상의 특징 추출)

  • Choi, Guirack;Ahng, Byungju;Lee, Sangbock
    • Journal of the Korea Society of Computer and Information / v.18 no.12 / pp.21-26 / 2013
  • In this paper, we suggest a method for feature extraction from molecular images. Applying the discrete wavelet transform (DWT) with the suggested method, we obtained the following results: the 1-level and 2-level decompositions showed the composition of the low-frequency region, whereas after 3-level decomposition almost no data component appeared. Although these components cannot be observed with the naked eye, output data values were still obtained from the 3-level decomposition. We output the horizontal and vertical data of the low-frequency region, the horizontal and vertical data of the high-frequency region, and the horizontal and vertical data of the diagonal high-frequency region. In future work, the output data from molecular imaging will be compared with CT, PET, and MR imaging data.
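
  The 3-level decomposition described above maps directly onto a standard 2-D DWT; a short sketch using PyWavelets (the specific wavelet here, Haar, is an assumption, and the random array stands in for a molecular image):

  ```python
  import numpy as np
  import pywt  # PyWavelets

  img = np.random.rand(256, 256)  # stand-in for a molecular image

  # 3-level 2-D DWT: coeffs[0] is the level-3 approximation (low frequency);
  # each following tuple holds (horizontal, vertical, diagonal) detail bands.
  coeffs = pywt.wavedec2(img, wavelet='haar', level=3)
  cA3 = coeffs[0]
  for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
      print(f"level {4 - lvl}: H{cH.shape} V{cV.shape} D{cD.shape}")
  ```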

A Dynamically Segmented DCT Technique for Grid Artifact Suppression in X-ray Images (X-ray 영상에서 그리드 아티팩트 개선을 위한 동적 분할 기반 DCT 기법)

  • Kim, Hyunggue;Jung, Joongeun;Lee, Jihyun;Park, Joonhyuk;Seo, Jisu;Kim, Hojoon
    • KIPS Transactions on Software and Data Engineering / v.8 no.4 / pp.171-178 / 2019
  • The use of anti-scatter grids in radiographic imaging prevents the image distortion caused by scattered radiation, but has the side effect of leaving grid-line artifacts in the X-ray image. In this paper, we propose a grid-line suppression technique based on the discrete cosine transform (DCT). In X-ray images, grid lines have different characteristics depending on the shape of the object and the area of the image. To handle this, we adopt a DCT based on dynamic segmentation and propose a filter transfer function for each individual segment. We present an algorithm for detecting the grid-line band in the frequency domain, and a band-stop filter (BSF) whose transfer function combines a Kaiser window and a Butterworth filter. To suppress blocking effects, we present a method that determines pixel values from multiple structured images. The validity of the proposed method has been evaluated experimentally on 140 X-ray images.
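
  A simplified sketch of the frequency-domain suppression step: per-segment DCT, attenuation of the detected grid-line band, inverse DCT. The notch below is only Butterworth-shaped; the paper's actual transfer function also incorporates a Kaiser window, and the band center and width would come from its detection algorithm rather than being passed in by hand:

  ```python
  import numpy as np
  from scipy.fft import dct, idct

  def notch_gain(n, center, half_width, order=4):
      """Butterworth-shaped band-stop gain over DCT coefficient indices:
      ~0 at the detected grid-line band, ~1 elsewhere."""
      k = np.arange(n, dtype=float)
      bump = 1.0 / (1.0 + (np.abs(k - center) / half_width) ** (2 * order))
      return 1.0 - bump

  def suppress_grid_lines(segment, center, half_width):
      """Filter one image segment along the axis perpendicular to the grid
      lines: forward DCT per column, attenuate the band, inverse DCT."""
      coeffs = dct(segment, axis=0, norm='ortho')
      coeffs *= notch_gain(segment.shape[0], center, half_width)[:, None]
      return idct(coeffs, axis=0, norm='ortho')
  ```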

Reversible Watermarking in JPEG Compression Domain (JPEG 압축 영역에서의 리버서블 워터마킹)

  • Cui, Xue-Nan;Choi, Jong-Uk;Kim, Hak-Il;Kim, Jong-Weon
    • Journal of the Korea Institute of Information Security & Cryptology / v.17 no.6 / pp.121-130 / 2007
  • In this paper, we propose a reversible watermarking scheme in the JPEG compression domain. Reversible watermarking is useful for authenticating content without quality loss, because the original content is preserved when the watermark information is embedded. On the Internet, digital images are usually compressed with JPEG or GIF to save storage space and improve communication efficiency, so a reversible watermarking scheme operating in the JPEG compression domain is needed. Lossless compression is used during embedding, and the original image is recovered during the watermark extraction process. Test results show PSNRs distributed from 38 dB to 42 dB and payloads from 2.5 Kbits to 3.4 Kbits at a quality factor (QF) of 75. When the QF of the Lena image is varied from 10 to 99, the PSNR is directly proportional to the QF and the payload ranges from about 1.6 to 2.8 Kbits.
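
  The abstract does not spell out the embedding mechanism, so as a generic illustration of reversibility, here is a classic histogram-shifting scheme on integer coefficients: one standard way to obtain exact recovery of the cover, not necessarily the authors' method:

  ```python
  import numpy as np

  def embed_hs(coeffs, bits, peak=0):
      """Shift values > peak up by 1 (opening a gap at peak+1), then encode
      each bit at positions equal to peak: 0 -> peak, 1 -> peak+1."""
      c = coeffs.copy()
      c[c > peak] += 1
      idx = np.flatnonzero(coeffs == peak)
      assert len(bits) <= len(idx), "payload exceeds capacity"
      c[idx[:len(bits)]] += np.asarray(bits)
      return c

  def extract_hs(marked, n_bits, peak=0):
      """Recover the payload bits and the original coefficients exactly."""
      idx = np.flatnonzero((marked == peak) | (marked == peak + 1))
      bits = (marked[idx[:n_bits]] == peak + 1).astype(int)
      c = marked.copy()
      c[c == peak + 1] = peak   # undo embedded 1-bits
      c[c > peak] -= 1          # undo the histogram shift
      return bits, c

  coeffs = np.array([0, 2, 0, -1, 0, 3, 1])      # toy coefficient array
  marked = embed_hs(coeffs, [1, 0, 1])
  bits, restored = extract_hs(marked, 3)
  assert list(bits) == [1, 0, 1] and np.array_equal(restored, coeffs)
  ```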

Mask Wearing Detection System using Deep Learning (딥러닝을 이용한 마스크 착용 여부 검사 시스템)

  • Nam, Chung-hyeon;Nam, Eun-jeong;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.1 / pp.44-49 / 2021
  • Recently, due to COVID-19, there have been many studies applying neural networks to automatic mask-wearing detection systems. Such systems use 1-stage or 2-stage detection methods, and when the collected data are insufficient, pretrained models are studied with fine-tuning techniques. In this paper, the system consists of a 2-stage detection pipeline: an MTCNN model for face detection and a ResNet model for mask detection. Five ResNet variants were tested as the mask detector to improve accuracy and fps in various environments. The training data consisted of 17,217 images collected with a web crawler; for inference, we used 1,913 images and two one-minute videos. The experiments showed a high accuracy of 96.39% on images and 92.98% on video, with an inference speed of 10.78 fps on video.
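
  A minimal sketch of the two-stage pipeline, assuming the facenet-pytorch MTCNN detector and a torchvision ResNet with a two-class head; the weights, file name, and label order below are placeholders, not the paper's artifacts:

  ```python
  import torch
  from torch import nn
  from torchvision import models, transforms
  from facenet_pytorch import MTCNN
  from PIL import Image

  detector = MTCNN(keep_all=True)                 # stage 1: face detection
  classifier = models.resnet18(weights=None)      # stage 2: mask classifier
  classifier.fc = nn.Linear(classifier.fc.in_features, 2)
  classifier.eval()                               # trained weights not shown

  preprocess = transforms.Compose([
      transforms.Resize((224, 224)),
      transforms.ToTensor(),
  ])

  img = Image.open("frame.jpg").convert("RGB")    # placeholder input frame
  boxes, _ = detector.detect(img)
  if boxes is not None:
      for (x1, y1, x2, y2) in boxes:
          face = img.crop((x1, y1, x2, y2))
          with torch.no_grad():
              logits = classifier(preprocess(face).unsqueeze(0))
          print(["no_mask", "mask"][logits.argmax(1).item()])
  ```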

Frontal Face Video Analysis for Detecting Fatigue States

  • Cha, Simyeong;Ha, Jongwoo;Yoon, Soungwoong;Ahn, Chang-Won
    • Journal of the Korea Society of Computer and Information / v.27 no.6 / pp.43-52 / 2022
  • We can sense when someone feels fatigued, which suggests that fatigue can be detected from human biometric signals. Most research on assessing fatigue has focused on diagnosing disease-level fatigue. In this study, we adapt quantitative analysis approaches to this qualitative judgment and propose video analysis models for measuring fatigue state. The three proposed deep-learning classification models selectively include the stages of video analysis: object detection, feature extraction, and time-series frame analysis, which allows us to evaluate each stage's effect on separating fatigue states. On frontal-face videos collected in various fatigue situations, our CNN model achieves 0.67 accuracy, empirically showing that video analysis models can meaningfully detect fatigue state. We also suggest how to adapt the model when training and validating video data for fatigue classification.
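
  As a rough sketch of one stage combination the study compares, per-frame CNN features followed by time-series analysis over the frame sequence, with illustrative layer sizes rather than the authors' architecture:

  ```python
  import torch
  from torch import nn

  class FatigueClassifier(nn.Module):
      """Per-frame CNN feature extractor + LSTM over the frame sequence."""
      def __init__(self, feat_dim=128, hidden=64, n_classes=2):
          super().__init__()
          self.cnn = nn.Sequential(
              nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
              nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
              nn.Linear(32, feat_dim),
          )
          self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
          self.head = nn.Linear(hidden, n_classes)

      def forward(self, clips):                  # clips: (B, T, 3, H, W)
          b, t = clips.shape[:2]
          feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
          _, (h, _) = self.rnn(feats)
          return self.head(h[-1])                # (B, n_classes)

  logits = FatigueClassifier()(torch.randn(2, 16, 3, 64, 64))  # -> (2, 2)
  ```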