• Title/Summary/Keyword: Affine-transform

Wavelet-Based Fractal Image Coding Using SAS Method and Multi-Scale Factor (SAS 기법과 다중 스케일 인자를 이용한 웨이브릿 기반 프랙탈 영상압축)

  • Jeong, Tae-Il;Gang, Gyeong-Won;Mun, Gwang-Seok;Gwon, Gi-Yong;Kim, Mun-Su
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.4
    • /
    • pp.335-343
    • /
    • 2001
  • Conventional wavelet-based fractal image coding has the disadvantage of long encoding times, since each range block must search the whole image for its best-matching domain block. In this paper, we propose wavelet-based fractal image coding using the SAS (Self Affine System) method and multiple scale factors. Range and domain blocks are formed in the DWT (discrete wavelet transform) domain. With the SAS method, no domain-block search is required: each range block simply selects the domain block located at the corresponding position in the next-higher decomposition level. The proposed method therefore encodes quickly by reducing the computational complexity of the encoding process. To compensate for the reduced image quality of the plain SAS method, different scale factors are used at each level. As a result, image quality is preserved while encoding time and compression ratio are improved, and progressive transmission is supported.
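
As a rough illustration of the search-free SAS prediction idea above, the sketch below (Python with PyWavelets; the wavelet, block handling, and per-level scale factors are illustrative assumptions, not the paper's values) predicts each finer detail band from the co-located coefficients one level coarser:

```python
import numpy as np
import pywt

def sas_predict(image, levels=3, scale_factors=(0.7, 0.5)):
    # Multi-level 2-D DWT: coeffs[0] is the coarsest approximation,
    # coeffs[1:] are (H, V, D) detail tuples ordered coarse -> fine.
    coeffs = pywt.wavedec2(np.asarray(image, float), "haar", level=levels)
    predicted = list(coeffs[:2])               # coarsest bands are coded directly
    for lvl in range(1, levels):
        parent = coeffs[lvl]                   # "domain" level (coarser)
        child = coeffs[lvl + 1]                # "range" level (finer)
        s = scale_factors[lvl - 1]             # a different scale factor per level
        bands = []
        for p_band, c_band in zip(parent, child):
            # SAS: the domain block is the co-located parent coefficient,
            # upsampled by 2 to the child band's size -- no search at all.
            up = np.kron(p_band, np.ones((2, 2)))[:c_band.shape[0], :c_band.shape[1]]
            bands.append(s * up)
        predicted.append(tuple(bands))
    return predicted
```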

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4C
    • /
    • pp.494-504
    • /
    • 2004
  • Research on gaze detection has advanced considerably and has many applications. Most previous work relies only on image-processing algorithms, so it requires long processing times and imposes many constraints. In our work, we implement gaze detection as a computer vision system built around a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features using 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, a trained neural network detects the gaze position due to eye movement. Experimental results show that the facial and eye gaze position on the monitor can be obtained with an accuracy of about 4.2 cm RMS error between the computed and true positions.
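
The final geometric step described above can be pictured with the minimal sketch below (NumPy only); as assumptions for illustration, the monitor is taken to lie in the plane z = 0 and the sample feature coordinates are invented:

```python
import numpy as np

def gaze_point_on_monitor(p1, p2, p3):
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)          # normal of the facial-feature plane
    normal = normal / np.linalg.norm(normal)
    origin = (p1 + p2 + p3) / 3.0                # cast the gaze ray from the centroid
    if abs(normal[2]) < 1e-9:
        raise ValueError("gaze direction is parallel to the monitor plane")
    t = -origin[2] / normal[2]                   # intersect the ray with plane z = 0
    return origin + t * normal                   # (x, y, 0) point on the monitor

# Toy example with made-up 3-D feature positions (units: cm).
print(gaze_point_on_monitor((0, 0, 60), (4, 1, 61), (-4, 1, 61)))
```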

Integration of Motion Compensation Algorithm for Predictive Video Coding (예측 비디오 코딩을 위한 통합 움직임 보상 알고리즘)

  • Eum, Ho-Min;Park, Geun-Soo;Song, Moon-Ho
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.12
    • /
    • pp.85-96
    • /
    • 1999
  • In a number of predictive video compression standards, motion is compensated by block-based motion compensation (BMC). The effective motion field used for BMC prediction is discontinuous, since a single motion vector is applied to an entire macroblock. Using a discontinuous motion field for prediction causes blocky artifacts, and one obvious way to eliminate them is to use a smoothed motion field. The optimal procedure depends on the type of motion in the video, and this paper considers several procedures for smoothing the motion vectors. For any such interpolation or smoothing, however, the motion vectors provided by the block matching algorithm (BMA) are no longer optimal. The optimum motion vectors (still one per macroblock) must minimize the energy of the displaced frame difference (DFD). We propose a unified algorithm that computes the optimum motion vectors minimizing the DFD energy using a conjugate gradient search. The proposed algorithm has been implemented and tested for affine-transformation-based motion compensation (ATMC), bilinear-transformation-based motion compensation (BTMC), and our own filtered motion compensation (FMC), and the performance of these approaches is compared against the BMC.
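
A minimal stand-in for the refinement described above is sketched below (Python with SciPy): a conjugate-gradient search adjusts one macroblock's motion vector to minimize the DFD energy. A plain translational warp with bilinear sampling is an assumption here, replacing the paper's ATMC/BTMC/FMC warping models.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def dfd_energy(mv, cur_block, ref_frame, top, left):
    dy, dx = mv
    h, w = cur_block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample the reference frame at displaced (possibly sub-pixel) positions.
    pred = map_coordinates(ref_frame, [ys + top + dy, xs + left + dx],
                           order=1, mode="nearest")
    return float(np.sum((cur_block - pred) ** 2))      # DFD energy for this block

def refine_motion_vector(mv0, cur_block, ref_frame, top, left):
    # Conjugate-gradient search over (dy, dx), starting from the BMA vector mv0.
    res = minimize(dfd_energy, np.asarray(mv0, float),
                   args=(cur_block, ref_frame, top, left), method="CG")
    return res.x                                       # still one vector per macroblock
```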

Feature-based Image Analysis for Object Recognition on Satellite Photograph (인공위성 영상의 객체인식을 위한 영상 특징 분석)

  • Lee, Seok-Jun;Jung, Soon-Ki
    • Journal of the HCI Society of Korea
    • /
    • v.2 no.2
    • /
    • pp.35-43
    • /
    • 2007
  • This paper presents a system for image matching and recognition on satellite photographs based on image feature detection and description techniques. We identify a set of parameters related to the environmental variations introduced by the image-handling process, and the core of the experiment is analyzing how changes in each parameter affect the match rate and recognition accuracy. The proposed system is basically inspired by Lowe's SIFT (Scale-Invariant Feature Transform) algorithm. Descriptors extracted from local affine-invariant regions are stored in a database whose clusters are defined by k-means applied to the 128-dimensional descriptor vectors of satellite photographs from Google Earth. A label is then attached to each cluster of the feature database and serves as guidance to the information about buildings that appear in the camera scene. The experiment varies these parameters and compares the resulting effects on image matching and recognition. Finally, the implementation and experimental results for several queries are shown.
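
A hedged sketch of this kind of pipeline, assuming OpenCV (4.4+ for SIFT_create) and scikit-learn; the image paths, cluster count, and label names are placeholders, not the paper's configuration:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_feature_database(image_paths, k=50):
    sift = cv2.SIFT_create()                   # OpenCV >= 4.4
    descriptors = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = sift.detectAndCompute(img, None)
        if des is not None:
            descriptors.append(des)
    all_des = np.vstack(descriptors).astype(np.float32)     # N x 128 descriptor matrix
    return KMeans(n_clusters=k, n_init=10).fit(all_des)     # clusters carry building labels

def label_query(kmeans, query_gray, cluster_labels):
    sift = cv2.SIFT_create()
    _, des = sift.detectAndCompute(query_gray, None)
    ids = kmeans.predict(des.astype(np.float32))
    # Majority vote over the clusters hit by the query descriptors.
    return cluster_labels[np.bincount(ids).argmax()]
```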

Effective Reduction of Horizontal Error in Laser Scanning Information by Strip-Wise Least Squares Adjustments

  • Lee, Byoung-Kil;Yu, Ki-Yun;Pyeon, Moo-Wook
    • ETRI Journal
    • /
    • v.25 no.2
    • /
    • pp.109-120
    • /
    • 2003
  • Though the airborne laser scanning (ALS) technique is becoming more popular in many applications, the horizontal accuracy of points scanned by ALS is not yet satisfactory compared with the accuracy achieved for vertical positions. One of the major reasons is the drift that occurs in the inertial measurement unit (IMU) during scanning. This paper presents an algorithm that adjusts for the error, introduced mainly by IMU drift, that produces systematic differences between strips over the same area. For this, we set up an observation equation for strip-wise adjustments and completed it with tie-point and control-point coordinates derived from the scanned strips and from aerial photos. To capture the tie points effectively, we developed a set of procedures that constructs a digital surface model (DSM) with breaklines and then performs feature-based matching between strips, yielding a set of reliable tie points. Solving the observation equations by the least squares method produced a set of 6-parameter affine transformation equations that we used to transform the strips and adjust the horizontal error. Accuracy evaluation showed a root mean squared error (RMSE) of 0.27 m for the adjusted strip points, which is significant considering that the RMSE before adjustment was 0.77 m.
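
The core least-squares step can be illustrated with the short NumPy sketch below: a 6-parameter affine transform is fitted for a single strip from tie/control point pairs. A real adjustment would stack all strips into one observation system; the point arrays here are placeholders.

```python
import numpy as np

def fit_affine(src_xy, dst_xy):
    src = np.asarray(src_xy, float)            # strip coordinates of tie/control points
    dst = np.asarray(dst_xy, float)            # reference coordinates of the same points
    A = np.hstack([src, np.ones((len(src), 1))])        # n x 3 design matrix [x y 1]
    # Least-squares solution of A @ P ~= dst gives the 6 affine parameters.
    P, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return P.T                                 # rows: [a b tx] and [c d ty]

def apply_affine(P, xy):
    xy = np.asarray(xy, float)
    return xy @ P[:, :2].T + P[:, 2]           # transform strip points into the reference frame
```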

Feature point extraction using scale-space filtering and Tracking algorithm based on comparing texturedness similarity (스케일-스페이스 필터링을 통한 특징점 추출 및 질감도 비교를 적용한 추적 알고리즘)

  • Park, Yong-Hee;Kwon, Oh-Seok
    • Journal of Internet Computing and Services
    • /
    • v.6 no.5
    • /
    • pp.85-95
    • /
    • 2005
  • This study proposes a method of feature point extraction using scale-space filtering and a feature point tracking algorithm based on a texturedness similarity comparison. With well-defined operators one can select a scale parameter for feature point extraction; this affects the selection and localization of the feature points and also the performance of the tracking algorithm. This study suggests a feature extraction method using scale-space filtering. With a change in the camera's point of view or movement of an object in sequential images, the window of a feature point undergoes an affine transform; traditionally it is difficult to measure the similarity between corresponding points, and tracking errors often occur. This study therefore also suggests a tracking algorithm that extends the Shi-Tomasi-Kanade tracking algorithm with texturedness similarity.
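
The building blocks mentioned above can be approximated with OpenCV as sketched below; as assumptions, Shi-Tomasi corners stand in for the scale-space feature selection, and the paper's texturedness similarity measure is replaced by a simple intensity-variance comparison of the tracked windows:

```python
import cv2
import numpy as np

def track(prev_gray, next_gray, win=15, var_ratio=0.5):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                              winSize=(win, win), maxLevel=3)
    half, kept = win // 2, []
    for p, q, ok in zip(pts.reshape(-1, 2), nxt.reshape(-1, 2), status.ravel()):
        x0, y0, x1, y1 = int(p[0]), int(p[1]), int(q[0]), int(q[1])
        if not ok or min(x0, y0, x1, y1) < half:
            continue                            # lost track or too close to the border
        w0 = prev_gray[y0 - half:y0 + half, x0 - half:x0 + half]
        w1 = next_gray[y1 - half:y1 + half, x1 - half:x1 + half]
        # Crude "texturedness" check: drop tracks whose window loses its texture.
        if w1.size and np.var(w1.astype(float)) > var_ratio * np.var(w0.astype(float)):
            kept.append((p, q))
    return kept
```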

A Study on Real-Time Localization and Map Building of Mobile Robot using Monocular Camera (단일 카메라를 이용한 이동 로봇의 실시간 위치 추정 및 지도 작성에 관한 연구)

  • Jung, Dae-Seop;Choi, Jong-Hoon;Jang, Chul-Woong;Jang, Mun-Suk;Kong, Jung-Shik;Lee, Eung-Hyuk;Shim, Jae-Hong
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.536-538
    • /
    • 2006
  • The most important tasks for a mobile robot are building a map of the surrounding environment and estimating its own location. This paper proposes a real-time localization and map-building method based on 3-D reconstruction of scale-invariant features from a monocular camera. A mobile robot carrying a monocular camera facing the wall extracts scale-invariant features from each image using SIFT (Scale Invariant Feature Transform) as it follows the wall. The extracted features are matched, a feature map is built by transforming the matched points into absolute coordinates using 3-D point reconstruction and geometrical analysis of the surrounding environment, and the map is stored in a database. After the feature map is built, the robot matches points against the stored feature map and estimates its pose from affine parameters in real time. The maximum position error of the proposed method was 8 cm and the angle error was within $10^{\circ}$.
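
A rough sketch of the localization step, assuming OpenCV with SIFT and a placeholder map structure (a descriptor matrix plus 2-D map coordinates); the planar pose is read off a robustly fitted similarity/affine model:

```python
import cv2
import numpy as np

def estimate_pose(frame_gray, map_des, map_xy):
    sift = cv2.SIFT_create()
    kp, des = sift.detectAndCompute(frame_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des, map_des, k=2)
    # Lowe ratio test to keep only distinctive matches against the stored map.
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
    src = np.float32([kp[m.queryIdx].pt for m in good])
    dst = np.float32([map_xy[m.trainIdx] for m in good])
    # Rotation + translation + scale fitted robustly to the matched points.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    heading = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return M[0, 2], M[1, 2], heading            # (tx, ty, heading in degrees)
```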

Feature Extraction for Endoscopic Image by using the Scale Invariant Feature Transform(SIFT) (SIFT를 이용한 내시경 영상에서의 특징점 추출)

  • Oh, J.S.;Kim, H.C.;Kim, H.R.;Koo, J.M.;Kim, M.G.
    • Proceedings of the KIEE Conference
    • /
    • 2005.10b
    • /
    • pp.6-8
    • /
    • 2005
  • Research that uses geometrical information in computer vision is active. Before such study can proceed, the matching problem must be solved, and feature points must be extracted for good matching. Many feature-point extraction methods have been studied, but no single algorithm applies to all images, which makes the problem difficult. In particular, it is not easy to find feature points in endoscopic images: even by visual inspection it is hard to decide which points should be regarded as feature points. Moreover, reliable matching requires that the feature points be sufficiently numerous and well distributed over the whole image. This paper studies an algorithm that can be applied to endoscopic images. The SIFT method shows excellent performance compared with alternatives (e.g., affine-invariant point detectors) on general images, but the SIFT parameters used for general images cannot be applied directly to endoscopic images. The goal of this paper is to extract feature points from endoscopic images by controlling the contrast threshold and curvature threshold among the SIFT parameters. The experimental results show that, by controlling these parameters, the feature points are better distributed and their number is better controlled than with the traditional alternatives.
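
The parameter study described above could be set up roughly as follows with OpenCV's SIFT, whose contrast and edge (curvature) thresholds are exposed as constructor arguments; the threshold grids below are illustrative values, not the paper's:

```python
import cv2

def count_keypoints(gray, contrast_thresholds=(0.01, 0.02, 0.04),
                    edge_thresholds=(5, 10, 20)):
    results = {}
    for ct in contrast_thresholds:
        for et in edge_thresholds:
            sift = cv2.SIFT_create(contrastThreshold=ct, edgeThreshold=et)
            kp = sift.detect(gray, None)
            results[(ct, et)] = len(kp)   # more permissive thresholds keep more points
    return results
```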

Motion estimation method using multiple linear regression model (다중선형회귀모델을 이용한 움직임 추정방법)

  • 김학수;임원택;이재철;이규원;박규택
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.10
    • /
    • pp.98-103
    • /
    • 1997
  • Given the small bit allocation for motion information in very low bit-rate coding, motion estimation using the block matching algorithm (BMA) fails to maintain an acceptable level of prediction error. The reason is that the motion model, or spatial transformation, assumed in block matching cannot approximate real-world motion precisely with a small number of parameters. To overcome this drawback of the conventional block matching algorithm, several triangle-based methods that use triangular patches instead of blocks have been proposed. To estimate the motion of image sequences, these methods are usually based on a combination of the optical flow equation, an affine transform, and iteration, but their computational cost is high. This paper presents a fast motion estimation algorithm using a multiple linear regression model to address the shortcomings of both the BMA and the triangle-based methods. After describing the basic 2-D triangle-based method, the details of the proposed multiple linear regression model are presented along with motion estimation results on one standard video sequence representative of MPEG-4 class A data. The simulation results show that the proposed method improves the average PSNR by about 1.24 dB compared with the BMA and reduces the computational cost by about 25% compared with the 2-D triangle-based method.
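
The regression idea can be pictured with the generic stand-in below (NumPy); this is not the paper's exact model, just ordinary least squares regressing observed displacements on patch coordinates, which yields an affine motion model for a patch without iterative optical-flow refinement:

```python
import numpy as np

def fit_affine_motion(points, displacements):
    pts = np.asarray(points, float)            # n x 2 coordinates (x, y) inside a patch
    d = np.asarray(displacements, float)       # n x 2 observed displacements (dx, dy)
    X = np.hstack([pts, np.ones((len(pts), 1))])        # regressors [x y 1]
    beta, *_ = np.linalg.lstsq(X, d, rcond=None)        # 3 x 2 regression coefficients
    return beta

def predict_motion(beta, points):
    pts = np.asarray(points, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ beta
```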

Image Registration for PET/CT and CT Images with Particle Swarm Optimization (Particle Swarm Optimization을 이용한 PET/CT와 CT영상의 정합)

  • Lee, Hak-Jae;Kim, Yong-Kwon;Lee, Ki-Sung;Moon, Guk-Hyun;Joo, Sung-Kwan;Kim, Kyeong-Min;Cheon, Gi-Jeong;Choi, Jong-Hak;Kim, Chang-Kyun
    • Journal of radiological science and technology
    • /
    • v.32 no.2
    • /
    • pp.195-203
    • /
    • 2009
  • Image registration is a fundamental task in image processing used to match two or more images, and it gives radiologists new information by matching images from different modalities. The objective of this study is to develop a 2D image registration algorithm for PET/CT and CT images acquired by different systems at different times. We first matched the two CT images (one from a standalone CT and the other from the PET/CT), which contain rich anatomical information, and then geometrically transformed the PET image according to the transformation parameters calculated in that step. An affine transform was used to match the target and reference images, and mutual information was explored as the similarity measure. A particle swarm optimization algorithm found the best-matching parameter set within a reasonable amount of time. The results show good agreement between the PET/CT and CT images. We expect the proposed algorithm can be used not only for PET/CT and CT image registration but also for other multi-modality imaging systems such as SPECT/CT, MRI/PET, and so on.
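
A hedged sketch of such a registration loop, assuming OpenCV and NumPy: a toy particle swarm searches over rotation, scale, and translation, scoring each candidate by the mutual information between the warped and reference images. The histogram size, swarm size, and PSO constants are illustrative choices, not the paper's settings.

```python
import cv2
import numpy as np

def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def warp(img, params):
    theta, scale, tx, ty = params
    c, s = scale * np.cos(theta), scale * np.sin(theta)
    M = np.float32([[c, -s, tx], [s, c, ty]])           # rigid + scale affine matrix
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))

def register_pso(moving, fixed, n_particles=20, iters=50):
    rng = np.random.default_rng(0)
    lo = np.array([-0.3, 0.8, -20.0, -20.0])            # search bounds (rad, scale, px, px)
    hi = np.array([0.3, 1.2, 20.0, 20.0])
    x = rng.uniform(lo, hi, size=(n_particles, 4))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([mutual_information(warp(moving, p), fixed) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([mutual_information(warp(moving, p), fixed) for p in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest        # (rotation, scale, tx, ty) maximizing mutual information
```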
