• Title/Abstract/Keyword: Directional Image

Search results: 480 items (processing time: 0.023 seconds)

경관 특성 파악에 있어서의 시퀀스적 시점장 선정과 전방위 화상정보의 유효성 검증에 관한 연구 (A Study of Selecting Sequential Viewpoint and Examining the Effectiveness of Omni-directional Angle Image Information in Grasping the Characteristics of Landscape)

  • 김홍만;이인희
    • KIEAE Journal / v.9 no.2 / pp.81-90 / 2009
  • In relation to grasping sequential landscape characteristics in consideration of the behavioral characteristics of the subject experiencing visual perception, this study examined the main walking-line sections for visitors of the three treasure Buddhist temples. In particular, as a method of obtaining data for grasping the sequential visually perceived landscape, the researchers employed momentary sequential viewpoint setup at arbitrarily spaced pointer intervals and fisheye-lens camera photography using the obtained omni-directional angle visual perception information. As a result, in terms of viewpoint selection, factors such as the approach road form, changes in the circulation axis, changes in ground surface level, and the appearance of objects were verified to have an effect, and among these, the approach road form and changes in the circulation axis turned out to be the greatest influences. In addition, when the effectiveness was reviewed with test subjects for the qualitative evaluation of landscape components using the VR images obtained in the process of acquiring omni-directional angle visual perception information, positive results above certain values were obtained in terms of panoramic vision, scene reproduction, and three-dimensional perspective. This suggests the possibility of activating the qualitative evaluation of omni-directional angle image information and landscape studies based on it henceforth.

전방위 영상을 이용한 이동 로봇의 전역 위치 인식 (Global Localization of Mobile Robots Using Omni-directional Images)

  • 한우섭;민승기;노경식;윤석준
    • 대한기계학회논문집A / v.31 no.4 / pp.517-524 / 2007
  • This paper presents a global localization method using circular correlation of an omni-directional image. The localization of a mobile robot, especially in indoor conditions, is a key component in the development of useful service robots. Though stereo vision is widely used for localization, its performance is limited due to computational complexity and its narrow view angle. To compensate for these shortcomings, we utilize a single omni-directional camera which can capture instantaneous 360° panoramic images around a robot. Nodes around a robot are extracted by the correlation coefficients of CHL (Circular Horizontal Line) between the landmark and the current captured image. After finding possible near nodes, the robot moves to the nearest node based on the correlation values and the positions of these nodes. To accelerate computation, correlation values are calculated based on Fast Fourier Transforms. Experimental results and performance in a real home environment have shown the feasibility of the method.
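
The matching step described in this abstract, circular correlation of a 1-D Circular Horizontal Line (CHL) computed with FFTs, can be pictured with the minimal sketch below. This is not the authors' implementation: the z-normalization, the function names, and the node-selection helper are illustrative assumptions.

```python
import numpy as np

def circular_correlation(chl_a, chl_b):
    """Normalized circular cross-correlation of two 1-D CHL signals via FFT.

    Returns the best correlation coefficient and the circular shift
    (heading offset, in samples) at which it occurs.
    """
    a = np.asarray(chl_a, dtype=float)
    b = np.asarray(chl_b, dtype=float)
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    # Circular cross-correlation via the convolution theorem.
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / len(a)
    shift = int(np.argmax(corr))
    return corr[shift], shift

def best_matching_node(current_chl, landmark_chls):
    """Hypothetical usage: pick the landmark node whose stored CHL best
    matches the CHL extracted from the current omni-directional image."""
    scores = [circular_correlation(current_chl, chl)[0] for chl in landmark_chls]
    return int(np.argmax(scores)), max(scores)
```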

Projection Runlength를 이용한 필기체 숫자의 특징추출 (Feature Extraction of Handwritten Numerals using Projection Runlength)

  • 박중조;정순원;박영환;김경민
    • 제어로봇시스템학회논문지 / v.14 no.8 / pp.818-823 / 2008
  • In this paper, we propose a feature extraction method that extracts directional features of handwritten numerals by using the projection runlength. Our directional features are obtained from four directional images, each of which contains the horizontal, vertical, right-diagonal, or left-diagonal lines of the entire numeral shape, respectively. A conventional method that extracts directional features using Kirsch masks generates edge-shaped double-line directional images for the four directions, whereas our method uses the projections and their runlengths to produce single-line directional images for the four directions. To obtain the directional projections from a numeral image, some preprocessing steps such as thinning and dilation are required, but the shapes of the resulting directional lines are more similar to the numeral lines of the input numerals. Four 4×4 directional features of a numeral are obtained from the four directional line images through a zoning method. By using a hybrid feature made by combining our feature with conventional features, namely mesh features, a Kirsch directional feature, and a concavity feature, higher recognition rates for handwritten numerals can be obtained. For the recognition test with the given features, we use a multi-layer perceptron neural network classifier trained with the back-propagation algorithm. Through experiments with the handwritten numeral database of Concordia University, we achieved a recognition rate of 97.85%.
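
For intuition, a simplified variant of the idea (splitting a thinned binary numeral into four single-line directional images and zoning each into a 4×4 grid) is sketched below. It assigns each pixel to the direction with the longest run through it, which is an assumption for illustration rather than the paper's exact projection-runlength procedure; the thinning/dilation preprocessing is also omitted.

```python
import numpy as np

# Unit steps for the four directions: horizontal, vertical,
# right-diagonal, left-diagonal.
DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def run_length(img, y, x, dy, dx):
    """Length of the foreground run through (y, x) along direction (dy, dx)."""
    h, w = img.shape
    n = 1
    for s in (1, -1):
        cy, cx = y + s * dy, x + s * dx
        while 0 <= cy < h and 0 <= cx < w and img[cy, cx]:
            n += 1
            cy, cx = cy + s * dy, cx + s * dx
    return n

def directional_zoning_features(img, zones=4):
    """Return four zones x zones grids counting the pixels assigned to each
    of the four directions of a thinned binary numeral image."""
    img = np.asarray(img, dtype=bool)
    h, w = img.shape
    feats = np.zeros((4, zones, zones))
    for y, x in zip(*np.nonzero(img)):
        runs = [run_length(img, y, x, dy, dx) for dy, dx in DIRS]
        d = int(np.argmax(runs))          # dominant direction at this pixel
        feats[d, y * zones // h, x * zones // w] += 1
    return feats.reshape(4, -1)           # four 16-dimensional directional features
```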

카메라 영상기반 전방향 이동 로봇의 제어 (Control of an Omni-directional Mobile Robot Based on Camera Image)

  • 김봉규;류정래
    • 한국지능시스템학회논문지 / v.24 no.1 / pp.84-89 / 2014
  • This paper addresses image-based visual servo control for target tracking with an omni-directional mobile robot equipped with a camera. Previous studies have widely used a mathematical image Jacobian, derived from the mathematical model of the camera and the kinematic characteristics of the mobile robot, to compute the wheel angular velocities for target tracking from the image coordinates of the target extracted from the camera image. This paper proposes a new method that generates the wheel angular velocities by combining a simple rule-based control scheme using the image coordinates of the target with the size of the target captured in the image. The camera image is divided into several regions, and predefined rules are applied according to the region that contains the target; since the method uses a relatively small number of rules without complex mathematical expressions, it has the advantage of being easy to implement. The proposed method is implemented on a real system, and the experimental results are presented together with a description of the overall experimental system to demonstrate its validity.
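
A toy version of this kind of region-based rule table is sketched below, assuming a hypothetical three-column split of the image and a target-size threshold for forward motion. The region boundaries, gains, and thresholds are assumptions, and the mapping from (linear, angular) velocity to the individual omni-wheel angular velocities used in the paper is not reproduced.

```python
def rule_based_control(target_x, target_area, image_width=640,
                       desired_area=5000, v_gain=0.2, w_gain=0.5):
    """Toy rule table: steer by which third of the image holds the target,
    move forward or backward by whether the target looks too small or too
    large.  Returns (linear_velocity, angular_velocity)."""
    third = image_width / 3.0
    if target_x < third:            # target in left region  -> turn left
        w = +w_gain
    elif target_x > 2 * third:      # target in right region -> turn right
        w = -w_gain
    else:                           # target roughly centred -> no rotation
        w = 0.0

    if target_area < 0.8 * desired_area:    # target appears small -> approach
        v = +v_gain
    elif target_area > 1.2 * desired_area:  # target appears large -> back off
        v = -v_gain
    else:
        v = 0.0
    return v, w
```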

능동 전방향 거리 측정 시스템을 이용한 이동로봇의 위치 추정 (Localization of Mobile Robot Using Active Omni-directional Ranging System)

  • 류지형;김진원;이수영
    • 제어로봇시스템학회논문지 / v.14 no.5 / pp.483-488 / 2008
  • An active omni-directional ranging system using omni-directional vision with structured light has many advantages over conventional ranging systems: robustness against external illumination noise because of the laser structured light, and computational efficiency because a single shot image contains 360° environment information from the omni-directional vision. The omni-directional range data represent a local distance map at a certain position in the workspace. In this paper, we propose a matching algorithm for the local distance map against a given global map database, thereby localizing a mobile robot in the global workspace. Since the global map database generally consists of line segments representing the edges of environment objects, the matching algorithm is based on the relative position and orientation of line segments in the local map and the global map. The effectiveness of the proposed omni-directional ranging system and the matching algorithm is verified through experiments.
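
One way to picture the line-segment matching step (comparing the orientation and position of local-map segments against global-map segments for a candidate pose) is the toy score below. The angular and distance tolerances, the midpoint comparison, and the exhaustive pairing are assumptions for illustration, not the paper's algorithm.

```python
import math

def transform_segment(seg, pose):
    """Place a segment ((x1, y1), (x2, y2)), given in the robot frame, into
    the world frame for a candidate pose = (x, y, theta)."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return tuple((x + c * px - s * py, y + s * px + c * py) for px, py in seg)

def segment_angle(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi   # undirected line angle

def match_score(local_segs, global_segs, pose,
                ang_tol=math.radians(10), dist_tol=0.3):
    """Count local segments that, once placed at `pose`, agree with some
    global segment in orientation and midpoint distance."""
    score = 0
    for seg in local_segs:
        w = transform_segment(seg, pose)
        mx = ((w[0][0] + w[1][0]) / 2, (w[0][1] + w[1][1]) / 2)
        for g in global_segs:
            gx = ((g[0][0] + g[1][0]) / 2, (g[0][1] + g[1][1]) / 2)
            d_ang = abs(segment_angle(w) - segment_angle(g))
            d_ang = min(d_ang, math.pi - d_ang)
            d_pos = math.hypot(mx[0] - gx[0], mx[1] - gx[1])
            if d_ang < ang_tol and d_pos < dist_tol:
                score += 1
                break
    return score
```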

옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템 (Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images)

  • 김종록;임미섭;임준홍
    • 제어로봇시스템학회논문지 / v.17 no.3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information available, which requires extensive storage and processing time. To deal with these challenges, we propose the use of features extracted from omni-directional panoramic images and present a method for the localization of a mobile robot equipped with an omni-directional camera. The core of the proposed scheme may be summarized as follows. First, we utilize an omni-directional camera that can capture instantaneous 360° panoramic images around the robot. Second, nodes around the robot are extracted using the correlation coefficients of the Circular Horizontal Line between each landmark image and the currently captured image. Third, the robot position is determined by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are assigned using color information, and the correlation values are calculated using Fast Fourier Transforms. Experiments show that the proposed method is effective for the global localization of mobile robots and robust to lighting variations.
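
The color-based pruning of candidate nodes mentioned here might look roughly like the sketch below, where a hue-histogram intersection is used as the filter; the histogram measure, the bin count, and the threshold are assumptions, not the paper's method. The surviving candidates would then be ranked with the FFT-based CHL correlation illustrated after the earlier global-localization entry in this list.

```python
import numpy as np

def hue_histogram(image_hsv, bins=32):
    """Normalized hue histogram of an HSV image (hue range [0, 180),
    OpenCV-style), used here as a coarse appearance signature."""
    hist, _ = np.histogram(image_hsv[..., 0], bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def candidate_nodes(current_hist, node_hists, threshold=0.6):
    """Keep the node indices whose hue histogram intersection with the
    current view exceeds `threshold`; only these would proceed to the
    more expensive CHL correlation stage."""
    keep = []
    for idx, h in enumerate(node_hists):
        intersection = np.minimum(current_hist, h).sum()
        if intersection > threshold:
            keep.append(idx)
    return keep
```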

DC값 차이와 AC계수 유사성을 이용한 방향성 블록 보간 (Directional Interpolation of Lost Block Using Difference of DC values and Similarity of AC Coefficients)

  • 이홍엽;엄일규;김유신
    • 한국통신학회논문지 / v.30 no.6C / pp.465-474 / 2005
  • This paper proposes a directional recovery method for lost blocks of images transmitted over noisy channels. Lost DCT coefficients or pixel values are recovered by linear interpolation using neighboring blocks that are adaptively selected by a directional measure composed of the difference of DC values (DDC) between opposing blocks around the lost block and the similarity of AC coefficients (SAC). Because the proposed directional recovery method does not use fixed neighboring blocks but instead uses neighboring blocks that adapt to the directional information in the local image, it is effective for strong edge components and textured regions. In this paper, a new directional measure (CDS: Combination of DDC and SAC) composed of DDC and SAC is derived, and the blocks used to recover the lost block are selected through this measure according to the characteristics of the local region. In simulations, the proposed method showed an average PSNR improvement of about 0.6 dB over conventional methods.
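
A schematic of how a DDC/SAC-style directional measure could be computed from pairs of opposing neighbor blocks is given below. The equal weighting of the two terms, the number of AC coefficients compared, and the simple averaging of the chosen pair (in place of the paper's linear interpolation and exact CDS definition) are assumptions for illustration.

```python
import numpy as np

def directional_measure(dct_a, dct_b, n_ac=9, alpha=0.5):
    """Combine the DC difference (DDC) and an AC dissimilarity term (the
    complement of SAC) for one opposing pair of 8x8 DCT blocks.  A lower
    value suggests a structure continuing across the lost block."""
    ddc = abs(dct_a[0, 0] - dct_b[0, 0])
    # Low-order AC coefficients in raster order (zigzag scan omitted here).
    ac_a = dct_a.flatten()[1:1 + n_ac]
    ac_b = dct_b.flatten()[1:1 + n_ac]
    ac_diff = np.abs(ac_a - ac_b).mean()
    return alpha * ddc + (1 - alpha) * ac_diff

def recover_block(opposing_pairs):
    """opposing_pairs: list of ((dct_a, pix_a), (dct_b, pix_b)) for the
    block pairs facing each other across the lost block (horizontal,
    vertical, two diagonals).  The pair with the best (lowest) measure is
    averaged to fill the block."""
    best = min(opposing_pairs,
               key=lambda p: directional_measure(p[0][0], p[1][0]))
    (_, pix_a), (_, pix_b) = best
    return (pix_a.astype(float) + pix_b.astype(float)) / 2.0
```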

적응성 방향 미분의 에지 검출에 의한 효율적인 접촉각 연산 (An Efficient Contact Angle Computation using MADD Edge Detection)

  • 양명섭;이종구;김은미;박철수
    • 융합보안논문지 / v.8 no.4 / pp.127-134 / 2008
  • This study aims to improve the accuracy of automatic measurement by analysis equipment through efficient detection of the contour of a transparent water droplet. For contour detection of a transparent circle, the MADD (Modified Adaptive Directional Derivative) algorithm is used: by introducing a non-local operator called the Adaptive Directional Derivative (ADD) in place of a local derivative of the brightness distribution, edge detection can be applied regardless of variations in the ramp width of an edge. In this method, the exact edge pixel position is determined naturally as the position of a simple step function obtained by a complete sharpening mapping, without an additional step such as finding the Local Center of Directional Derivative (LCDD), which averages pixel positions weighted by the directional derivative values within the ramp interval. The proposed edge detection method is applied to contact angle computation, a surface analysis technique, and its validity and efficiency are verified through experiments and analysis of the results.
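
For intuition only, a 1-D version of accumulating a directional derivative over a monotone ramp can be written as below: the total rise over the ramp is an edge strength that does not depend on the ramp width. The 2-D directional treatment, the complete sharpening mapping, and the thresholds of the MADD algorithm are not reproduced and should be taken as assumptions.

```python
import numpy as np

def ramp_accumulated_derivative(scanline):
    """For each ramp (maximal run of same-sign differences) along a 1-D
    scanline, return (start, end, total_rise).  The total rise serves as a
    ramp-width-independent edge strength; collapsing the ramp to a single
    step position is left out of this sketch."""
    d = np.diff(np.asarray(scanline, dtype=float))
    ramps = []
    i = 0
    while i < len(d):
        if d[i] == 0:
            i += 1
            continue
        sign = np.sign(d[i])
        j = i
        while j < len(d) and np.sign(d[j]) == sign:
            j += 1
        ramps.append((i, j, float(d[i:j].sum())))   # accumulated rise over the ramp
        i = j
    return ramps
```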


Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal / v.32 no.5 / pp.784-794 / 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, these applications are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in eight directions at each pixel and encoding them into an 8-bit binary number using the relative strengths of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost savings and classification accuracy. Two well-known machine learning methods, template matching and support vector machines, are used for classification on the Cohn-Kanade and Japanese Female Facial Expression databases. The better classification accuracy demonstrates the superiority of the LDP descriptor over other appearance-based feature descriptors.
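
The LDP encoding itself can be reproduced in a few lines: convolve with the eight Kirsch compass masks, keep the k strongest absolute responses at each pixel, and pack them into an 8-bit code whose histogram forms the descriptor. The sketch below (with k = 3, a value commonly used with LDP, and SciPy for the convolution) is an illustration under those assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve

# The eight Kirsch compass masks (East, North-East, ..., South-East).
KIRSCH = [np.array(m) for m in [
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
]]

def ldp_codes(gray, k=3):
    """Per-pixel Local Directional Pattern codes of a 2-D grayscale image."""
    responses = np.stack([np.abs(convolve(gray.astype(float), m)) for m in KIRSCH])
    order = np.argsort(responses, axis=0)   # direction indices, ascending response
    top_k = order[-k:]                      # the k strongest directions per pixel
    codes = np.zeros(gray.shape, dtype=int)
    for d in range(8):
        codes |= (top_k == d).any(axis=0).astype(int) << d
    return codes

def ldp_histogram(gray, k=3):
    """Descriptor: distribution of LDP codes over the image (or a patch)."""
    return np.bincount(ldp_codes(gray, k).ravel(), minlength=256)
```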

Speckle Noise Reduction and Edge Enhancement in Ultrasound Images Based on Wavelet Transform

  • Kim, Yong-Sun;Ra, Jong-Beom
    • 대한의용생체공학회:의공학회지 / v.29 no.2 / pp.122-131 / 2008
  • For B-mode ultrasound images, we propose an image enhancement algorithm based on a multi-resolution approach, which consists of edge-enhancing and noise-reducing procedures. Edge enhancement is applied sequentially to coarse-to-fine resolution images obtained from wavelet-transformed data. In each resolution, the structural features of each pixel are examined through eigen analysis. Then, if a pixel belongs to an edge region, we perform two-step filtering: directional smoothing is conducted along the tangential direction of the edge to improve continuity, and directional sharpening is conducted along the normal direction to enhance contrast. In addition, speckle noise is alleviated by properly attenuating the wavelet coefficients of homogeneous regions in each band. This region-based speckle-reduction scheme is differentiated from other methods that are based on the magnitude statistics of the wavelet coefficients. The proposed algorithm enhances edges regardless of changes in the resolution of an image, and it efficiently reduces speckle noise without affecting the sharpness of edges. Hence, compared with existing algorithms, the proposed algorithm considerably improves subjective image quality without introducing noticeable artifacts.
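
The per-pixel eigen analysis that decides whether a pixel belongs to an edge region, and along which direction to smooth or sharpen, can be illustrated with a standard structure-tensor computation as below. The wavelet decomposition, the two-step directional filtering, and the coefficient attenuation are not shown, and the gradient operator and smoothing scale are assumptions rather than the paper's choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_analysis(band, sigma=1.5):
    """Return per-pixel (coherence, orientation) for a (sub-band) image.

    High coherence suggests an edge pixel: smoothing would then be applied
    along the edge tangent and sharpening across it, as in the two-step
    filtering described in the abstract."""
    gx = sobel(band.astype(float), axis=1)
    gy = sobel(band.astype(float), axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    # Eigenvalues of the 2x2 structure tensor [[jxx, jxy], [jxy, jyy]].
    tmp = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    lam1 = 0.5 * (jxx + jyy + tmp)
    lam2 = 0.5 * (jxx + jyy - tmp)
    coherence = (lam1 - lam2) / (lam1 + lam2 + 1e-12)
    orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)  # dominant gradient axis
    return coherence, orientation
```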