• Title/Summary/Keyword: Rotation Estimation


Spatial Spectrum Estimation of Broadband Incoherent Signals using Rotation of Signal Subspace Via Signal Enhancement (신호부각에 의한 신호 부공간 회전을 이용한 광대역 인코히어런트 신호의 공간 스펙트럼 추정)

  • 김영수;이계산;김정근
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.15 no.7
    • /
    • pp.669-676
    • /
    • 2004
  • In this paper, a new algorithm is proposed for resolving multiple broadband incoherent sources incident on a uniform linear array. The proposed method does not require any initial estimates for finding the transformation matrix, whereas the Coherent Signal-Subspace Method (CSM) proposed by Wang and Kaveh requires preliminary estimates of the multigroup source locations. An effective procedure is derived for finding the enhanced spectral density matrix at the center frequency using a signal enhancement approach, and then constructing a common signal subspace by selecting a unitary transformation matrix obtained via the rotation-of-signal-subspace method. The proposed approach is found to provide superior performance relative to CSM in terms of the sample bias of the direction-of-arrival estimates.
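
For orientation, a minimal sketch of the subspace idea underlying such estimators: it forms a sample spectral density (covariance) matrix for a uniform linear array at a single center frequency, splits signal and noise subspaces by eigendecomposition, and scans a MUSIC-style spatial spectrum. The array geometry, source angles, SNR, and snapshot count below are hypothetical, and the paper's signal enhancement and subspace-rotation steps are not reproduced.

```python
import numpy as np

# Narrowband subspace sketch at a single (center) frequency, not the paper's
# focusing-free enhancement method: simulate a uniform linear array, form the
# sample covariance matrix, and scan a MUSIC-style spatial spectrum.
M, d = 8, 0.5                              # sensors, spacing in wavelengths
doas_deg = [-20.0, 15.0]                   # hypothetical incoherent source directions
snapshots, snr_db = 500, 10.0

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

rng = np.random.default_rng(0)
A = np.stack([steering(a) for a in doas_deg], axis=1)            # M x K
S = (rng.standard_normal((len(doas_deg), snapshots)) +
     1j * rng.standard_normal((len(doas_deg), snapshots))) / np.sqrt(2)
noise = (rng.standard_normal((M, snapshots)) +
         1j * rng.standard_normal((M, snapshots))) / np.sqrt(2) * 10 ** (-snr_db / 20)
X = A @ S + noise
R = X @ X.conj().T / snapshots                                   # sample covariance

eigval, eigvec = np.linalg.eigh(R)                               # ascending eigenvalues
En = eigvec[:, : M - len(doas_deg)]                              # noise subspace
grid = np.arange(-90.0, 90.5, 0.5)
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2 for g in grid])

# Report the two strongest local maxima of the spatial spectrum.
is_peak = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
peak_angles, peak_vals = grid[1:-1][is_peak], spectrum[1:-1][is_peak]
print("strongest peaks (deg):", np.sort(peak_angles[np.argsort(peak_vals)[-2:]]))
```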

Bayesian estimation of kinematic parameters of disk galaxies in large HI galaxy surveys

  • Oh, Se-Heon;Staveley-Smith, Lister
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.41 no.2
    • /
    • pp.62.2-62.2
    • /
    • 2016
  • We present a newly developed algorithm based on a Bayesian method for 2D tilted-ring analysis of disk galaxies which operates on velocity fields. Compared to conventional algorithms based on a chi-squared minimisation procedure, this Bayesian algorithm suffers less from local minima of the model parameters, even when their posterior distributions are highly multi-modal. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo (MCMC) sampling, only requires broad ranges for the parameters' posterior distributions, which makes the fitting procedure fully automated. This feature is essential for performing kinematic analysis of the unprecedented number of resolved galaxies expected from the upcoming Square Kilometre Array (SKA) pathfinders' galaxy surveys. A standalone code, the '2D Bayesian Automated Tilted-ring fitter' (2DBAT), which implements the Bayesian fits of 2D tilted-ring models, is developed for deriving rotation curves of galaxies that are at least marginally resolved (> 3 beams across the semi-major axis) and moderately inclined (20° < i < 70°). The main layout of 2DBAT and its performance test are discussed using sample galaxies from Australia Telescope Compact Array (ATCA) observations as well as artificial data cubes built from representative rotation curves of intermediate-mass and massive spiral galaxies.
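
As a rough illustration of the "broad priors plus MCMC" fitting idea (not 2DBAT itself), the sketch below fits a simple arctan rotation-curve model to synthetic velocities with the emcee sampler, assuming that package (version 3 or later) is installed; all data values, priors, and parameter names are made up for the example.

```python
import numpy as np
import emcee  # assumes the emcee MCMC package (>= 3) is installed

# Fit v(r) = (2/pi) * v_max * arctan(r / r_t) to noisy velocities with flat,
# broad priors, sampling the posterior with an ensemble MCMC sampler.
rng = np.random.default_rng(0)
r = np.linspace(0.5, 15.0, 30)                       # radius in kpc (hypothetical)
true_vmax, true_rt, sigma = 180.0, 2.5, 8.0
v_obs = (2 / np.pi) * true_vmax * np.arctan(r / true_rt) + rng.normal(0, sigma, r.size)

def log_prob(theta):
    vmax, rt = theta
    if not (0 < vmax < 500 and 0 < rt < 20):          # broad flat priors
        return -np.inf
    model = (2 / np.pi) * vmax * np.arctan(r / rt)
    return -0.5 * np.sum(((v_obs - model) / sigma) ** 2)

nwalkers, ndim = 32, 2
p0 = np.array([150.0, 3.0]) + 1e-2 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)
print("posterior medians (v_max, r_t):", np.median(samples, axis=0))
```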


A Novel Rotor Position Error Calculation Method using a Rotation Matrix for a Switching Frequency Signal Injected Sensorless Control in IPMSM (스위칭 주파수 신호 주입 IPMSM 센서리스 제어를 위한 회전 행렬 기반의 새로운 위치 오차 추정 기법)

  • Kim, Sang-Il;Kim, Rae-Young
    • The Transactions of the Korean Institute of Power Electronics
    • /
    • v.20 no.5
    • /
    • pp.402-409
    • /
    • 2015
  • This paper proposes a novel rotor position error calculation method for high-frequency signal-injected sensorless control. With the conventional modulation method, the rotor position error can be measured only up to ±45°. In addition, when the rotor position estimation error is not sufficiently small, the small-angle approximation is no longer valid. To overcome these problems, this study introduces a new rotor position error calculation method using a rotation matrix. The position error measurement range of the proposed method is extended from ±45° to ±90°, and the linearity between the real rotor position error and the estimated error is maintained up to nearly ±90°. These features improve the performance of the sensorless control. The validity of the proposed method is verified by simulations and experiments.
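
A toy numerical sketch of why a ratio-based (rotation-matrix style) error calculation widens the usable range: with demodulated quantities assumed proportional to sin(2Δθ) and cos(2Δθ), the small-angle estimate departs from the true error well before ±90°, while an atan2 of the two components stays linear over the full ±90°. The signal model here is an assumption for illustration, not the paper's exact derivation.

```python
import numpy as np

# Compare a small-angle error signal, ambiguous beyond +/-45 deg, with an
# atan2-based calculation that stays linear to +/-90 deg.
true_err = np.deg2rad(np.linspace(-90, 90, 181))     # real rotor position error

# Demodulated quantities assumed proportional to sin(2*err) and cos(2*err).
e_sin = np.sin(2 * true_err)
e_cos = np.cos(2 * true_err)

small_angle_est = e_sin / 2                          # conventional: err ~ sin(2e)/2
atan_est = 0.5 * np.arctan2(e_sin, e_cos)            # full ratio of both components

print("max |small-angle error| [deg]:",
      np.rad2deg(np.max(np.abs(small_angle_est - true_err))))
print("max |atan2 error| [deg]:",
      np.rad2deg(np.max(np.abs(atan_est - true_err))))
```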

A 3D Vision Inspection Method using One Camera (1대의 카메라를 이용한 3차원 비전 검사 방법)

  • Jung Cheol-Jin;Huh Kyung Moo
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.1
    • /
    • pp.19-26
    • /
    • 2004
  • In this paper, we suggest a 3D vision inspection method which uses only one camera. If we have a pattern database, can recognize the object, and can estimate the rotated shape of the parts, we can inspect the parts using only one image. The algorithm uses the 3D database, 2D geometrical pattern matching, and rotation-transformation theory. As a result, the rotated object can be recognized and inspected through estimation of its rotation angle. We applied the suggested algorithm to the inspection of a typical IC and a capacitor, and compared it with the conventional 2D inspection method and the feature-space trajectory method.
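
As a small, hedged stand-in for the rotation-estimation step (not the paper's 3D-database matching), the snippet below recovers the in-plane rotation angle of a synthetic part silhouette from a single image using OpenCV's minimum-area bounding rectangle; note that minAreaRect's angle convention differs between OpenCV versions, and the contour unpacking assumes OpenCV 4.

```python
import numpy as np
import cv2  # assumes OpenCV (opencv-python) is available

# Recover the in-plane rotation of a part silhouette from one image via the
# minimum-area bounding rectangle of its largest contour.
img = np.zeros((200, 200), np.uint8)
cv2.rectangle(img, (60, 80), (140, 120), 255, -1)            # synthetic "IC" body
M = cv2.getRotationMatrix2D((100, 100), 25.0, 1.0)           # rotate by a known 25 deg
rotated = cv2.warpAffine(img, M, (200, 200))

contours, _ = cv2.findContours(rotated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rect = cv2.minAreaRect(max(contours, key=cv2.contourArea))
print("estimated rotation angle:", rect[2])                  # angle of the fitted box
```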

Comparison of Position-Rotation Models and Orbit-Attitude Models with SPOT images (SPOT 위성영상에서의 위치-회전각 모델과 궤도-자세각 모델의 비교)

  • Kim Tae-Jung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.24 no.1
    • /
    • pp.47-55
    • /
    • 2006
  • This paper investigates the performance of sensor models based on satellite position and rotation angles and sensor models based on satellite orbit and attitude angles. We analyze the performance with respect to the accuracy of bundle adjustment and the accuracy of exterior orientation estimation. In particular, as one way to analyze the latter, we establish sensor models with respect to one image and apply the models to other scenes acquired from the same orbit. Experimental results indicated that, for the sole purpose of bundle adjustment accuracy, both position-rotation models and orbit-attitude models can be used. The accuracy of estimating exterior orientation parameters appeared similar for both models when the analysis was based on a single scene. However, when multiple scenes within the same orbital segment were used, the orbit-attitude model with attitude biases as unknowns showed the most accurate results.
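
For reference, the "rotation" part of a position-rotation sensor model amounts to composing a 3x3 rotation matrix from roll/pitch/yaw angles; the sketch below shows one common axis order and sign convention, which may differ from the convention used in the paper.

```python
import numpy as np

# Compose a rotation matrix from roll/pitch/yaw (Z-Y-X order, one common choice;
# sensor-model conventions vary). Angles below are hypothetical small attitudes.
def rotation_from_rpy(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

print(rotation_from_rpy(np.deg2rad(0.1), np.deg2rad(-0.05), np.deg2rad(0.2)))
```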

Phase Representation with Linearity for CORDIC based Frequency Synchronization in OFDM Receivers (OFDM 수신기의 CORDIC 기반 주파수 동기를 위한 선형적인 위상 표현 방법)

  • Kim, See-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.3
    • /
    • pp.81-86
    • /
    • 2010
  • Since CORDIC (COordinate Rotation DIgital Computer) can carry out phase operations, such as vector-to-phase conversion or rotation of vectors, with only adders and shifters, it is well suited to the design of the frequency synchronization unit in OFDM receivers. It is not easy, however, to fully utilize CORDIC in the OFDM demodulator because of the non-linear characteristics of the direction sequence (DS), which is the representation of the phase in CORDIC. In this paper a new representation method is proposed that approximately linearizes the direction sequence. The maximum phase error of the linearized binary direction sequence (LBDS) is also discussed. For hardware design purposes, the architectures for the binary DS (BDS) to LBDS converter and the LBDS to BDS inverse converter are illustrated. Adopting LBDS, the overall frequency synchronization hardware for OFDM receivers can be implemented fully utilizing CORDIC and general arithmetic operators, such as adders and multipliers, for phase estimation, loop filtering of the frequency offset, and derotation for frequency offset correction. A design example of a 22-bit LBDS for a T-DMB demodulator is also presented.
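
A minimal software sketch of CORDIC in vectoring mode, showing where the direction sequence comes from: the phase of a vector is accumulated from ±1 decisions using only shifts and adds (the arctangent table would be a small ROM in hardware). This illustrates plain BDS generation only, not the LBDS linearization proposed in the paper.

```python
import math

# CORDIC vectoring mode: drive y toward zero, accumulating the vector's phase
# from the per-iteration rotation directions (the direction sequence, DS).
def cordic_phase(x, y, iterations=16):
    angle = 0.0
    ds = []                                    # direction sequence
    for i in range(iterations):
        d = -1 if y > 0 else 1                 # rotate toward the x-axis
        x, y = x - d * (y * 2.0 ** -i), y + d * (x * 2.0 ** -i)
        angle -= d * math.atan(2.0 ** -i)      # table lookup in hardware
        ds.append(d)
    return angle, ds

phase, ds = cordic_phase(0.6, 0.8)
print(math.degrees(phase), math.degrees(math.atan2(0.8, 0.6)))  # both ~53.13 deg
```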

Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki;Hwang, Yong-Ho
    • Journal of Korea Game Society
    • /
    • v.7 no.4
    • /
    • pp.63-70
    • /
    • 2007
  • Since the fisheye lens has a wide field of view, it can capture the scene and illumination from all directions with far fewer omnidirectional images. Due to these advantages, the omnidirectional camera is widely used in surveillance and in reconstruction of the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images by considering the inlier distribution. First, a parametric non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we can compute an essential matrix of the camera with unknown motions, and then determine the camera information: rotation and translation. The standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results showed that we can achieve a precise estimation of the omnidirectional camera model and the extrinsic parameters, including rotation and translation.
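
A compact sketch of the "essential matrix → rotation and translation" step using OpenCV, with a standard perspective camera as a stand-in for the paper's non-linear omnidirectional projection model; the intrinsics, camera motion, and synthetic points are assumptions for the example.

```python
import numpy as np
import cv2  # assumes OpenCV (opencv-python) is available

# Project synthetic 3D points into two views, estimate the essential matrix
# with RANSAC, and recover the relative rotation and translation.
rng = np.random.default_rng(1)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], (100, 3))

R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.2], [0.05]]))   # known camera rotation
t_true = np.array([[0.5], [0.0], [0.0]])

def project(P, R, t):
    cam = (R @ P.T + t).T
    uv = cam[:, :2] / cam[:, 2:3]
    return (K[:2, :2] @ uv.T).T + K[:2, 2]

pts1 = project(pts3d, np.eye(3), np.zeros((3, 1)))
pts2 = project(pts3d, R_true, t_true)

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K)
cos_angle = np.clip((np.trace(R_est @ R_true.T) - 1) / 2, -1, 1)
print("rotation error [deg]:", np.degrees(np.arccos(cos_angle)))
```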


Correction of Rotated Frames in Video Sequences Using Modified Mojette Transform (변형된 모젯 변환을 이용한 동영상에서의 회전 프레임 보정)

  • Kim, Ji-Hong
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.1
    • /
    • pp.42-49
    • /
    • 2013
  • Camera motion is accompanied by translation and/or rotation of objects in the frames of a video sequence. An unnecessary rotation of objects degrades the quality of the moving pictures and, in addition, is a primary cause of viewer fatigue. In this paper, a novel method for correcting rotated frames in video sequences is presented, in which a modified Mojette transform is applied to the motion-compensated area in each frame. The Mojette transform is one of the discrete Radon transforms, and it is modified for correcting the rotated frames as follows. First, the bin values in the Mojette transform are determined using the pixels on the projection line and the interpolation of pixels adjacent to the line. Second, the bin values are calculated only over the area determined by motion estimation between the current and reference frames. Finally, only one bin per projection is computed to reduce the amount of calculation in the Mojette transform. Through simulations carried out on various test video sequences, it is shown that the proposed scheme performs well in correcting the rotation of frames in moving pictures.
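
For concreteness, a tiny sketch of a single (unmodified) Mojette projection in one common indexing convention, where each bin sums the pixels on the digital line b = -q·k + p·l; the paper's modifications (interpolation near the line, restriction to the motion-compensated area, one bin per projection) are not reproduced.

```python
import numpy as np

# One Mojette (discrete Radon) projection of an image for direction (p, q):
# every pixel (k, l) contributes to the bin b = -q*k + p*l.
def mojette_projection(image, p, q):
    rows, cols = image.shape
    bins = {}
    for l in range(rows):          # row index
        for k in range(cols):      # column index
            b = -q * k + p * l
            bins[b] = bins.get(b, 0) + int(image[l, k])
    return dict(sorted(bins.items()))

img = np.arange(16).reshape(4, 4)
print(mojette_projection(img, 1, 1))   # projection along direction (p, q) = (1, 1)
```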

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.23-35
    • /
    • 2002
  • Gaze detection is locating the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system in which a single camera is set above a monitor and a user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils and lip corners) automatically in 2D camera images. From the movement of the feature points detected in the starting images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when a user moves (rotates and/or translates) his face in order to gaze at one position on the monitor, the moved 3D positions of those features can be computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by those moved 3D feature positions. In experiments with a 19-inch monitor, the accuracy between the computed gaze positions and the real ones is about 2.01 inches of RMS error.
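
A minimal sketch of the final geometric step described above: take the normal of the plane spanned by three tracked 3D feature points as the gaze direction and intersect it with the monitor plane. The coordinates and the monitor-plane equation below are hypothetical.

```python
import numpy as np

# Gaze direction as the normal of the plane through three 3D facial features
# (hypothetical camera-frame coordinates in millimetres).
p1 = np.array([-30.0, 10.0, 550.0])    # e.g. left eye
p2 = np.array([ 30.0, 10.0, 555.0])    # e.g. right eye
p3 = np.array([  0.0, 60.0, 560.0])    # e.g. lip midpoint

normal = np.cross(p2 - p1, p3 - p1)
normal /= np.linalg.norm(normal)       # unit plane-normal (gaze) vector
print("facial plane normal:", normal)

# Intersect the ray from the feature centroid along the normal with the
# monitor plane (taken as z = 0 here, purely for illustration).
centroid = (p1 + p2 + p3) / 3
s = -centroid[2] / normal[2]
print("gaze point on monitor plane:", centroid[:2] + s * normal[:2])
```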

Webcam-Based 2D Eye Gaze Estimation System By Means of Binary Deformable Eyeball Templates

  • Kim, Jin-Woo
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.5
    • /
    • pp.575-580
    • /
    • 2010
  • Eye gaze as a form of input was primarily developed for users who are unable to use usual interaction devices such as the keyboard and the mouse; however, with increasing accuracy and decreasing cost of eye gaze detection, it is likely to become a practical interaction method for able-bodied users in the near future as well. This paper explores a low-cost, robust, rotation- and illumination-independent eye gaze system for gaze-enhanced user interfaces. We introduce two new algorithms for fast, sub-pixel-precise pupil center detection and 2D eye gaze estimation by means of a deformable template matching methodology. We propose a deformable angular integral search algorithm based on the minimum intensity value to localize the eyeball (iris outer boundary) in gray-scale eye region images; it finds the center of the pupil, which is then used by our second algorithm for 2D eye gaze tracking. First, we detect the eye regions by means of Intel OpenCV AdaBoost Haar cascade classifiers and assign the approximate size of the eyeball depending on the eye region size. Second, the pupil center is detected using the DAISMI (Deformable Angular Integral Search by Minimum Intensity) algorithm. Then, using the percentage of black pixels over the eyeball circle area, the image is converted into binary (black and white) for use in the next stage, the DTBGE (Deformable Template Based 2D Gaze Estimation) algorithm. Finally, DTBGE is initialized with the pupil center coordinates, refines them, and estimates the final gaze direction and eyeball size. We have performed extensive experiments and achieved very encouraging results, and we discuss the effectiveness of the proposed method through several experimental results.
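
A rough sketch of the first two stages only, under stated assumptions: OpenCV's bundled Haar cascade for eye detection followed by a crude darkest-region pupil estimate (a placeholder for DAISMI, which only the paper defines); "face.jpg" is a hypothetical input image.

```python
import numpy as np
import cv2  # assumes the opencv-python package, which bundles the Haar cascades

# Detect eye regions with a Haar cascade, then take the centroid of the
# near-minimum-intensity pixels as a crude pupil-center estimate.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
if gray is None:
    raise SystemExit("face.jpg not found")

for (x, y, w, h) in eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    eye = gray[y:y + h, x:x + w]
    mask = (eye < int(eye.min()) + 15).astype(np.uint8)   # darkest-region mask
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print("pupil center (image coords):", x + cx, y + cy)
```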