• Title/Summary/Keyword: Image sequence

An Input/Output Technology for 3-Dimensional Moving Image Processing (3차원 동영상 정보처리용 영상 입출력 기술)

  • Son, Jung-Young;Chun, You-Seek
    • Journal of the Korean Institute of Telematics and Electronics S, v.35S no.8, pp.1-11, 1998
  • One of the features desired for realizing high-quality information and telecommunication services in the future is "the sensation of reality", which will be achieved only with visual communication based on 3-dimensional (3-D) moving images. The main difficulties in realizing 3-D moving image communication are that no transmission technology has been developed for the huge amount of data involved in 3-D images and no technologies have been established for recording and displaying 3-D images in real time. The currently known stereoscopic imaging technologies can present only depth, not motion parallax, so they are not effective in creating the sensation of reality without wearing eyeglasses. More effective 3-D imaging technologies for achieving the sensation of reality are those based on multiview 3-D images, in which the image of an object changes as the eyes move in different directions. In this paper, a multiview 3-D imaging system composed of 8 CCD cameras mounted in a single case, an RGB (red, green, blue) beam projector, and a holographic screen is introduced. In this system, the 8 view images are recorded by the 8 CCD cameras and transmitted to the beam projector in sequence by a signal converter. The signal converter converts each camera signal into 3 color signals (RGB), multiplexes each color signal from the 8 cameras into a serial signal train, and drives the corresponding color channel of the beam projector at a 480 Hz frame rate. The beam projector projects the images onto the holographic screen through an LCD shutter consisting of 8 LCD strips. The image of each LCD strip, created by the holographic screen, forms a sub-viewing zone. Since the ON period and sequence of the LCD strips are synchronized with those of the camera image sampling and the beam projector image projection, multiview 3-D moving images are viewed in the viewing zone.
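
The core mechanism described above, time-multiplexing eight camera views into a single 480 Hz projector stream whose frames are gated by an eight-strip LCD shutter, can be illustrated with a minimal sketch. The frame size, function names, and synthetic frames below are illustrative assumptions, not details from the paper.

```python
import numpy as np

NUM_VIEWS = 8          # eight CCD cameras, one per view (from the abstract)
PROJ_RATE_HZ = 480     # projector frame rate quoted in the abstract
H, W = 120, 160        # illustrative image size (assumption)

def capture_views(t):
    """Stand-in for the 8 CCD cameras: returns one RGB frame per view."""
    return [np.full((H, W, 3), fill_value=(view + t) % 256, dtype=np.uint8)
            for view in range(NUM_VIEWS)]

def multiplex(views):
    """Serialize the 8 view images into one projector frame sequence,
    view 0..7 in turn, and report which LCD strip must be open."""
    for view_index, frame in enumerate(views):
        open_strip = view_index          # strip k passes light only for view k
        yield open_strip, frame

# One multiplexing cycle: 8 projected frames cover all views, so in this
# round-robin sketch each view would be refreshed at 480 / 8 = 60 Hz.
for t in range(2):
    for strip, frame in multiplex(capture_views(t)):
        print(f"t={t} projector frame -> LCD strip {strip} open, "
              f"frame mean={frame.mean():.0f}")
```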

Parallel Injection Method for Improving Descriptive Performance of Bi-GRU Image Captions (Bi-GRU 이미지 캡션의 서술 성능 향상을 위한 Parallel Injection 기법 연구)

  • Lee, Jun Hee;Lee, Soo Hwan;Tae, Soo Ho;Seo, Dong Hoan
    • Journal of Korea Multimedia Society, v.22 no.11, pp.1223-1232, 2019
  • Injection is the method of feeding the image feature vector from the encoder into the decoder. Since the image feature vector contains object details such as color and texture, it is essential for generating image captions. However, a bidirectional decoder using the existing injection method receives the image feature vector only at the first step, so the image feature information vanishes along the backward sequence. This makes it difficult to describe the context in detail. Therefore, in this paper, we propose a parallel injection method to improve the descriptive performance of image captions. The proposed injection method fuses the image feature vector with every embedding to preserve the context. We also build the image caption model on a Bidirectional Gated Recurrent Unit (Bi-GRU) to reduce the amount of computation in the decoder. To validate the proposed model, experiments were conducted on a recognized image caption dataset, and the model compares favorably with recent models in BLEU and METEOR scores. The proposed model improved the BLEU score by up to 20.2 points and the METEOR score by up to 3.65 points compared with the existing caption model.
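
As a rough illustration of the parallel injection idea, the PyTorch sketch below concatenates the encoder's image feature vector with every word embedding before a bidirectional GRU, rather than feeding it only at the first step. The layer sizes, names, and fusion layer are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ParallelInjectionBiGRU(nn.Module):
    """Toy caption decoder: the image feature is fused with every word
    embedding (parallel injection) instead of only at the first step."""
    def __init__(self, vocab_size=1000, embed_dim=256, img_dim=512, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fuse = nn.Linear(embed_dim + img_dim, embed_dim)
        self.bigru = nn.GRU(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, vocab_size)

    def forward(self, tokens, img_feat):
        # tokens: (B, T) word ids, img_feat: (B, img_dim) encoder output
        emb = self.embed(tokens)                           # (B, T, E)
        img = img_feat.unsqueeze(1).expand(-1, emb.size(1), -1)
        fused = torch.tanh(self.fuse(torch.cat([emb, img], dim=-1)))
        h, _ = self.bigru(fused)                           # (B, T, 2H)
        return self.out(h)                                 # word logits per step

model = ParallelInjectionBiGRU()
logits = model(torch.randint(0, 1000, (2, 12)), torch.randn(2, 512))
print(logits.shape)   # torch.Size([2, 12, 1000])
```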

Design of Mobile Supervisory System that Apply Action Tracing by Image Segmentation (영상분할에 의한 동작 추적 기법을 적용한 모바일 감시 시스템의 설계)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering, v.6 no.2, pp.282-287, 2002
  • This paper describes a mobile surveillance system that applies a motion tracking technique based on image segmentation of an image sequence to monitor intruders over the mobile Internet. First, frames are extracted from video of a fixed area, and image subtraction between adjacent frames is used to segment the fixed background from moving targets. The segmented foreground object is detected, the center of the extracted region is estimated at the specified position, and the intruder is monitored by analyzing the motion of that region. The surveillance information, consisting of a detection serial number, date, time, and image file, is stored on the server, and an SMS message about the situation at the monitored location is sent to the mobile client.
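
A minimal sketch of the segmentation step, assuming simple absolute differencing of adjacent grayscale frames followed by thresholding and a median-based centroid; the threshold, frame size, and alerting comment are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25):
    """Segment moving pixels by absolute difference between adjacent frames,
    then return a binary mask and the centroid of the moving region."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold                      # foreground = large change
    if not mask.any():
        return mask, None
    ys, xs = np.nonzero(mask)
    centroid = (int(np.median(ys)), int(np.median(xs)))   # robust center
    return mask, centroid

# Illustrative frames: a bright 10x10 "intruder" appears in a dark scene.
prev_frame = np.zeros((120, 160), dtype=np.uint8)
curr_frame = prev_frame.copy()
curr_frame[40:50, 60:70] = 200

mask, centroid = detect_motion(prev_frame, curr_frame)
print("moving pixels:", mask.sum(), "centroid:", centroid)
# A real system would log (serial number, date, time, image) on the server
# and push an SMS alert to the mobile client.
```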

Detection Method of Leukocyte Motions in a Microvessel (미소혈관 내 백혈구 운동의 검출법)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Convergence Signal Processing, v.15 no.4, pp.128-134, 2014
  • In this paper, we propose a method for detecting leukocyte motions in a microvessel using spatiotemporal image analysis. Leukocytes that adhere to blood vessel walls can be seen moving along the vessel wall contours in a sequence of images. The proposed method exploits the constraint that the leukocytes move along the vessel wall contours and detects their motions by spatiotemporal image analysis. The generated spatiotemporal image is processed by a special-purpose orientation-selective filter, followed by grouping processes that select and group the leukocyte trace segments from among all the segments obtained by simple thresholding and skeletonizing operations. Experimental results show that the proposed method can stably detect leukocyte motions even when multiple leukocyte traces intersect each other.
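
The sketch below illustrates the spatiotemporal (position-along-contour versus time) image that the method is built on, assuming a known vessel-wall contour and a synthetic moving bright spot; the orientation-selective filter and grouping stages are replaced here by simple thresholding plus a line fit, so this is only a stand-in for the paper's processing chain.

```python
import numpy as np

def spatiotemporal_image(frames, contour_points):
    """Stack the intensities sampled along a vessel-wall contour over time:
    rows = time, columns = position along the contour."""
    return np.stack([frame[tuple(np.array(contour_points).T)]
                     for frame in frames])

# Illustrative data: a bright spot (a "leukocyte") advances one contour point
# per frame, so its trace appears as a diagonal line in the x-t image.
T, N = 20, 40                                    # frames, contour samples
frames = [np.zeros((64, 64), dtype=float) for _ in range(T)]
contour = [(32, x) for x in range(N)]            # a horizontal contour (assumed)
for t in range(T):
    y, x = contour[t]
    frames[t][y, x] = 1.0

st = spatiotemporal_image(frames, contour)       # shape (T, N)
trace = st > 0.5                                 # simple thresholding
slope = np.polyfit(*np.nonzero(trace), 1)[0]     # trace slope ~ speed
print(st.shape, "estimated speed:", round(float(slope), 2),
      "contour points per frame")
```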

Wavelet based Blind Watermarking using Self-reference Method (웨이블릿 기반의 자기참조 기법을 이용한 블라인드 워터마킹)

  • Piao, Yong-Ri;Kim, Seok-Tae
    • The Journal of Korean Institute of Communications and Information Sciences, v.33 no.1C, pp.62-67, 2008
  • In this paper, a wavelet-based blind watermarking scheme using a self-reference method is proposed. First, the original image is wavelet-transformed. All sub-bands except the low-frequency sub-band are then set to zero, and the inverse wavelet transform yields a self-reference image. By choosing specific regions according to the pixel-value differences between the original image and the self-reference image, a random sequence is generated, used as the watermark, and embedded. Experimental results of watermark embedding and extraction on various images show that the proposed scheme not only preserves good image quality but is also robust against JPEG lossy compression, filtering, sharpening, blurring, and noise.
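
A minimal sketch of the self-reference construction using PyWavelets: keep only the LL sub-band, invert, and pick embedding positions from the original-versus-reference difference. The embedding rule, strength, and number of positions are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128)).astype(float)   # stand-in image

# 1) Forward wavelet transform of the original image.
LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')

# 2) Zero every sub-band except LL and invert -> self-reference image.
zeros = np.zeros_like(LL)
self_ref = pywt.idwt2((LL, (zeros, zeros, zeros)), 'haar')

# 3) Pick embedding positions where the original differs most from the
#    self-reference image (textured/edge regions tolerate changes better).
diff = np.abs(image - self_ref)
positions = np.argsort(diff.ravel())[-1000:]          # 1000 strongest (assumed)

# 4) Embed a random +/-1 sequence as the watermark (illustrative rule).
watermark = rng.choice([-1.0, 1.0], size=positions.size)
marked = image.ravel().copy()
marked[positions] += 2.0 * watermark                  # strength 2.0 is arbitrary
marked = marked.reshape(image.shape)

print("mean squared embedding distortion:", float(np.mean((marked - image) ** 2)))
```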

A Hybrid Digital Watermarking Technique for Copyright Protection and Tamper Detection on Still images (정지영상에서 저작권 보호 및 위변조 검출을 위한 하이브리드 디지털 워터마킹 기법)

  • Yoo Kil-Sang;Song Geun-Sil;Choi Hyuk;Lee Won-Hyung
    • Journal of Internet Computing and Services, v.4 no.4, pp.27-34, 2003
  • Digital image manipulation software is now readily available on personal computers, so it is very simple to tamper with any image and distribute it to others. Copyright protection of digital content and assurance of digital image integrity have therefore become major issues. In this paper, we propose a hybrid watermarking method that identifies the locations of tampered regions as well as the copyright. The proposed algorithm embeds a PN sequence into the low-frequency sub-band of the wavelet transform domain and does not need the original image in the extraction procedure. The experimental results show good robustness against common signal processing operations together with tamper detection on still images.
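
The blind PN-sequence idea can be sketched as follows with PyWavelets: a pseudo-random ±1 pattern is added to the LL sub-band, and detection correlates the suspect image's LL sub-band with the same keyed pattern, so the original image is never needed. The embedding strength and detection threshold are assumptions for illustration, not the paper's values.

```python
import numpy as np
import pywt

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(256, 256)).astype(float)   # stand-in image

# Embed: add a pseudo-random +/-1 sequence to the low-frequency sub-band.
LL, details = pywt.dwt2(image, 'haar')
pn = rng.choice([-1.0, 1.0], size=LL.shape)          # keyed PN sequence
alpha = 4.0                                          # embedding strength (assumed)
marked = pywt.idwt2((LL + alpha * pn, details), 'haar')

# Blind detection: only the PN sequence (the key) is needed, not the original.
LL_test, _ = pywt.dwt2(marked, 'haar')
response = float(np.mean((LL_test - LL_test.mean()) * pn))
print("detector response:", round(response, 2),
      "(about alpha when the watermark is present, about 0 otherwise)")
```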

Specified Object Tracking Problem in an Environment of Multiple Moving Objects

  • Park, Seung-Min;Park, Jun-Heong;Kim, Hyung-Bok;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems, v.11 no.2, pp.118-123, 2011
  • Video-based object tracking normally deals with non-stationary image streams that change over time. Robust, real-time tracking of moving objects is considered a difficult problem in computer vision, and multiple-object tracking has many practical applications in scene analysis for automated surveillance. In this paper, we introduce particle-filter-based tracking of a specified object in an environment of multiple moving objects. A differential-image, region-based tracking method is used to detect the multiple moving objects, and a background image update method ensures accurate object detection in an unconstrained environment. In addition, tracking a particular object through a video sequence poses problems that cannot be solved by image processing techniques alone, so a probabilistic framework is used. The proposed particle filter proves robust in dealing with nonlinear and non-Gaussian problems; it provides a robust object tracking framework under ambiguous conditions and greatly improves estimation accuracy for complicated tracking problems.
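
A generic particle filter of the kind referred to above can be sketched in a few lines: predict with a motion model, weight particles by the likelihood of the current measurement (for example, the centroid found in the differential image), and resample. The random-walk motion model, Gaussian likelihood, and all parameters are assumptions for illustration, not the authors' exact design.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, measurement,
                         motion_std=3.0, meas_std=5.0):
    """One predict / weight / resample cycle for 2-D position tracking.
    `measurement` is e.g. the centroid found in the differential image."""
    # Predict: random-walk motion model (a nonlinear model would drop in here).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood of the (possibly ambiguous) measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # Resample: multinomial resampling, then reset to uniform weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a target moving diagonally; measurements are noisy centroids.
N = 500
particles = rng.uniform(0, 100, size=(N, 2))
weights = np.full(N, 1.0 / N)
true_pos = np.array([10.0, 10.0])
for step in range(20):
    true_pos += np.array([2.0, 1.5])
    measurement = true_pos + rng.normal(0, 2.0, size=2)
    particles, weights = particle_filter_step(particles, weights, measurement)
print("estimate:", particles.mean(axis=0).round(1), "truth:", true_pos.round(1))
```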

Flow Effects on Tailored RF Gradient Echo (TRFGE) Magnetic Resonance Imaging: In-flow and In-Plane Flow Effect (Tailored RF 경자사계방향 (TRFGE) 자기공명영상(MRI)에서 유체에 의한 영상신호 변화 : 유체유입효과와 영상면내를 흐르는 유체의 효과에 대하여)

  • Mun, Chi-Ung;Kim, Sang-Tae;No, Yong-Man;Im, Tae-Hwan;Jo, Jang-Hui
    • Journal of Biomedical Engineering Research, v.18 no.3, pp.243-251, 1997
  • In this paper, we report two interesting flow effects arising in the TRFGE sequence, studied with a water flow phantom. First, we show that the TRFGE sequence is indeed not affected by the "in-flow" effect from unsaturated spins flowing into the imaging slice. Second, enhancement of the "in-plane flow" signal in the readout gradient direction was observed when the TRFGE sequence was used without flow compensation. These two results have many interesting applications in MR imaging other than fMRI. The results were also compared with those obtained by conventional gradient echo (CGE) imaging. Experiments were performed on a 4.7 T MRI/S animal system (Biospec, BRUKER, Switzerland). A cylindrical phantom was made of acrylic and a vinyl tube was inserted at the center (Fig. 1). The cylinder was filled with water doped with $MnCl_2$, and the center tube was filled with saline flowing parallel to the main magnetic field along the tube. The tailored RF pulse was designed to have a quadratic ($z^2$) phase distribution in the slice direction (z). Imaging parameters were TR/TE = 55~85/10 msec, flip angle = $30^{\circ}$, slice thickness = 2 mm, matrix size = 256${\times}$256, and FOV = 10 cm. In-flow effect: axial images were obtained with and without flow using the CGE and TRFGE sequences, respectively; the flow direction was perpendicular to the image slice. In-plane flow: sagittal images were obtained with and without flow using the TRFGE sequence, with the readout gradient applied parallel to the flow direction. We observed that the "in-flow" effect did not affect the TRFGE image, while "in-plane flow" running along the readout gradient direction enhanced the signal in the TRFGE sequence when a flow-compensation gradient scheme was not used.
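
The defining property of the tailored RF pulse quoted above is its quadratic phase across the slice; written out, with an arbitrary coefficient $a$ and slice center $z_0$ that are notational assumptions rather than values from the paper, the excitation phase is

$\phi(z) = a\,(z - z_0)^2$

so spins at different through-slice positions are excited with systematically different phases, in contrast to the uniform-phase excitation of a conventional gradient echo sequence.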

An image sequence coding using motion-compensated transform technique based on the sub-band decomposition (움직임 보상 기법과 분할 대역 기법을 사용한 동영상 부호화 기법)

  • Paek, Hoon;Kim, Rin-Chul;Lee, Sang-Uk
    • The Journal of Korean Institute of Communications and Information Sciences, v.21 no.1, pp.1-16, 1996
  • In this paper, by combining motion-compensated transform coding with the sub-band decomposition technique, we present a motion-compensated sub-band coding (MCSBC) technique for image sequence coding. Several problems related to MCSBC are discussed, such as a scheme for motion compensation in each sub-band and efficient VWL coding of the DCT coefficients in each sub-band. For efficient coding, motion estimation and compensation are performed only on the LL sub-band, but the discrete cosine transform (DCT) is employed to encode all sub-bands in our approach. The transform coefficients in each sub-band are then scanned in a different manner depending on the energy distribution in the DCT domain and coded using separate 2-D Huffman code tables optimized to the probability distribution of each sub-band. The performance of the proposed MCSBC technique is examined extensively by computer simulations on HDTV image sequences. The simulation results reveal that the proposed MCSBC technique outperforms other coding techniques, in particular the well-known motion-compensated transform coding technique by about 1.5 dB in average peak signal-to-noise ratio.
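
A compact sketch of the two pieces named above, motion estimation restricted to the LL sub-band and a DCT applied to every sub-band, using PyWavelets for the decomposition and SciPy for the DCT; the block size, search range, and synthetic shifted frame are assumptions for illustration, and the per-sub-band scanning and Huffman coding stages are omitted.

```python
import numpy as np
import pywt
from scipy.fft import dctn

def block_match(ref_LL, cur_LL, block=8, search=4):
    """Full-search block matching on the LL sub-band only (as in the abstract);
    returns one motion vector per block of the current LL sub-band."""
    H, W = cur_LL.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            cur = cur_LL[by:by + block, bx:bx + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(cur - ref_LL[y:y + block, x:x + block]).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

rng = np.random.default_rng(3)
prev = rng.random((64, 64))
curr = np.roll(prev, shift=(2, -2), axis=(0, 1))    # synthetic global motion

# Sub-band decomposition; motion search runs on LL, DCT is applied to all bands.
prev_LL, _ = pywt.dwt2(prev, 'haar')
curr_LL, (LH, HL, HH) = pywt.dwt2(curr, 'haar')
mvs = block_match(prev_LL, curr_LL)
coeffs = {name: dctn(band, norm='ortho')
          for name, band in [('LL', curr_LL), ('LH', LH), ('HL', HL), ('HH', HH)]}
print("interior LL motion vector:", mvs[(16, 16)],
      "LL DCT shape:", coeffs['LL'].shape)
```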

An Efficient Syntax Rule for Selective Coding (선택적 부호화를 위한 효율적인 구문 표현)

  • 이종배
    • Journal of Korea Multimedia Society, v.2 no.3, pp.347-354, 1999
  • An image sequence is compressed and stored frame by frame in a computer and reconstructed at the quality required by each application. In some cases, specific parts within a frame are more important than the rest and must be reconstructed with higher quality than the other parts. Several schemes have been suggested for such applications; they need to separate the important parts from a given frame, and the syntax of the shape, texture, and motion information must be defined for both the important parts and the remaining parts. However, the syntax rules in H.261, MPEG-1, and MPEG-2 are not suitable for this application because they cannot express the shape information needed to separate the important regions from each frame. We therefore propose a new syntax rule that represents the shape, texture, and motion information when specific important parts exist in a frame.
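
To make the proposal concrete, the sketch below shows one way such a syntax could be organized in code, with shape, texture, and motion fields carried per region and an importance flag that drives the quantization; all field names and the layout are illustrative assumptions, not the paper's actual bitstream definition.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RegionSyntax:
    """One coded region inside a frame: the shape tells the decoder which
    pixels belong to the region, so important parts can be coded more finely."""
    important: bool                        # selective-coding flag
    shape_mask_rle: List[int]              # run-length-encoded binary shape mask
    motion_vectors: List[Tuple[int, int]]  # one (dy, dx) per block in the region
    texture_bits: bytes                    # quantized texture payload
    quant_step: int                        # coarser for non-important regions

@dataclass
class FrameSyntax:
    frame_number: int
    regions: List[RegionSyntax] = field(default_factory=list)

# A frame with one high-quality "important" region and one coarse background.
frame = FrameSyntax(frame_number=0, regions=[
    RegionSyntax(True,  [0, 50, 30, 20], [(1, 0), (1, -1)], b"\x12\x34", quant_step=4),
    RegionSyntax(False, [100, 900],      [(0, 0)],          b"\x00",     quant_step=16),
])
print(len(frame.regions), "regions; important first:", frame.regions[0].important)
```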
