• Title/Summary/Keyword: Feature point extraction


Building Dataset of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Junhyuk Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.21-30
    • /
    • 2024
  • In this paper, we propose a method to build a sample dataset of the features of eight sensor-only facilities constructed as infrastructure for autonomous cooperative driving. The features are extracted from point cloud data acquired by LiDAR and assembled into a sample dataset for recognizing the facilities. To build the dataset, eight sensor-only facilities with high-brightness reflector sheets and a sensor acquisition system were developed. To extract the features of facilities located within a certain measurement distance, the DBSCAN method was first applied to the acquired points and a modified OTSU method to the reflected intensity, and a cylindrical projection was then applied to the extracted points. The 3D point coordinates, the projected 2D coordinates, and the reflection intensity were set as the features of each facility, and the dataset was built along with labels. To check the effectiveness of the facility dataset built from the LiDAR data, a common CNN model was trained and tested, showing an accuracy of about 90% or more and confirming the feasibility of facility recognition. Through continuous experiments, we will improve the feature extraction algorithm for building the proposed dataset, improve its performance, and develop a dedicated model for recognizing sensor-only facilities for autonomous cooperative driving.
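
A minimal Python sketch of the extraction pipeline described above, using off-the-shelf DBSCAN and standard Otsu thresholding in place of the paper's modified OTSU step; the function name, parameters, and the use of the cluster centroid as the cylinder axis are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from skimage.filters import threshold_otsu

def extract_facility_features(points, intensities, eps=0.3, min_samples=10):
    """Cluster LiDAR returns, keep high-reflectance points, and project each
    cluster onto a cylinder to form per-facility feature samples.

    points      : (N, 3) array of x, y, z coordinates
    intensities : (N,) array of reflected intensities
    """
    # Group spatially adjacent returns into candidate facility clusters.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)

    samples = []
    for cid in set(labels) - {-1}:                 # -1 marks DBSCAN noise
        mask = labels == cid
        pts, inten = points[mask], intensities[mask]

        # Standard Otsu threshold stands in for the paper's modified OTSU:
        # keep only the high-reflectance returns from the reflector sheets.
        keep = inten >= threshold_otsu(inten)
        pts, inten = pts[keep], inten[keep]
        if pts.size == 0:
            continue

        # Cylindrical projection: azimuth around the cluster centroid and
        # height become the projected 2D coordinates.
        cx, cy = pts[:, 0].mean(), pts[:, 1].mean()
        theta = np.arctan2(pts[:, 1] - cy, pts[:, 0] - cx)
        proj2d = np.stack([theta, pts[:, 2]], axis=1)

        # Per-point feature: 3D coordinates, projected 2D coordinates, intensity.
        samples.append(np.hstack([pts, proj2d, inten[:, None]]))
    return samples
```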

HMM-based Intent Recognition System using 3D Image Reconstruction Data (3차원 영상복원 데이터를 이용한 HMM 기반 의도인식 시스템)

  • Ko, Kwang-Enu;Park, Seung-Min;Kim, Jun-Yeup;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.2
    • /
    • pp.135-140
    • /
    • 2012
  • The mirror neuron system in the cerebrum is engaged in visual information-based imitative learning: by observing the progress of neural activation over its range, including partially hidden parts, the intention behind an action can be inferred. The goal of this paper is to apply such imitative learning to a 3D vision-based intelligent system. In our previous research, we reconstructed 3D images acquired from a stereo camera using optical flow and an unscented Kalman filter; the resulting 3D input is a sequential, continuous image that includes partially hidden regions. We use a Hidden Markov Model to recognize the intention behind an action from these restoration-based sequences, since its dynamic inference over sequential input data suits tasks such as hand gesture recognition with hidden regions. Building on the object outline and feature extraction simulated in our previous research, we generate temporally continuous feature vectors, apply them to the Hidden Markov Model, and obtain hand gesture classification results according to intention pattern. The classification result is given as a posterior probability value, and the results demonstrate the accuracy of the approach.
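
A minimal sketch of the HMM classification step, assuming per-intent training sequences of restored feature vectors are already available; it uses the third-party hmmlearn package, and per-class log-likelihood scoring stands in for the posterior-probability decision described above.

```python
import numpy as np
from hmmlearn import hmm   # third-party package, assumed available

def train_intent_models(sequences_by_intent, n_states=4):
    """Fit one Gaussian HMM per intent class from temporal feature vectors.

    sequences_by_intent : dict mapping intent label -> list of (T_i, D) arrays
    """
    models = {}
    for intent, seqs in sequences_by_intent.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        models[intent] = m
    return models

def classify_gesture(models, sequence):
    """Score one observation sequence under every model and return the
    intent with the highest log-likelihood."""
    scores = {intent: m.score(sequence) for intent, m in models.items()}
    return max(scores, key=scores.get)
```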

Analysis of Shadow Effect on High Resolution Satellite Image Matching in Urban Area (도심지역의 고해상도 위성영상 정합에 대한 그림자 영향 분석)

  • Yeom, Jun Ho;Han, You Kyung;Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.21 no.2
    • /
    • pp.93-98
    • /
    • 2013
  • Multi-temporal high resolution satellite images are essential data for efficient city analysis and monitoring. Yet even when acquired over the same location, whether with identical or different sensors, multi-temporal images show geometric inconsistency, so matching points between the images must be extracted to register them. In urban imagery, however, it is difficult to extract matching points accurately because buildings, trees, bridges, and other artificial objects cast shadows over a wide area, and these shadows have different intensities and directions in multi-temporal images. In this study, we analyze the effect of shadows on the matching of high resolution satellite images in urban areas using the Scale-Invariant Feature Transform (SIFT), a representative matching point extraction method, together with an automatic shadow extraction method. Shadow segments are extracted using spatial and spectral attributes derived from image segmentation, and shadow adjacency information is considered using a building edge buffer. SIFT matching points extracted from shadow segments are eliminated from the matching point pairs, and image matching is then performed. Finally, we evaluate the quality of the matching points and the image matching results, both visually and quantitatively, to analyze the effect of shadows on high resolution satellite image matching.
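
The shadow-aware matching step can be sketched with OpenCV as follows: SIFT keypoints that fall inside a shadow mask (assumed to come from the segmentation-based shadow extraction) are dropped before Lowe's ratio test. The function name and thresholds are illustrative.

```python
import cv2
import numpy as np

def match_outside_shadows(img1, img2, shadow_mask1, shadow_mask2, ratio=0.75):
    """SIFT matching in which keypoints lying inside shadow segments are
    discarded before matching. shadow_mask* are uint8 images where nonzero
    pixels mark shadow."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    def keep_sunlit(kps, des, mask):
        # Keep only keypoints whose pixel location is outside the shadow mask.
        idx = [i for i, kp in enumerate(kps)
               if mask[int(kp.pt[1]), int(kp.pt[0])] == 0]
        return [kps[i] for i in idx], des[idx]

    kp1, des1 = keep_sunlit(kp1, des1, shadow_mask1)
    kp2, des2 = keep_sunlit(kp2, des2, shadow_mask2)

    # Lowe's ratio test on brute-force kNN matches.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return kp1, kp2, good
```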

Salient Object Extraction from Video Sequences using Contrast Map and Motion Information (대비 지도와 움직임 정보를 이용한 동영상으로부터 중요 객체 추출)

  • Kwak, Soo-Yeong;Ko, Byoung-Chul;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1121-1135
    • /
    • 2005
  • This paper proposes a moving object extraction method using a contrast map and salient points. To make the contrast map, we generate three feature maps (luminance, color, and directional maps) and extract salient points from the image. Using these features, we can easily decide the location of the Attention Window (AW). The purpose of the AW is to remove useless regions of the image, such as the background, and to reduce the amount of image processing. To determine the exact location and a flexible size of the AW, we use motion features instead of pre-assumptions or heuristic parameters. After determining the AW, we compute the difference between its edge and inner areas, from which horizontal and vertical candidate regions are extracted. The intersection of the two candidates, obtained through a logical AND operation, is further processed by morphological operations. The proposed algorithm has been applied to many video sequences with static backgrounds, such as surveillance video, and the moving objects were segmented well with accurate boundaries.
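
A simplified sketch of building a contrast map from luminance, color, and directional feature maps; the specific map definitions used here (local luminance deviation, chromatic deviation, Sobel gradient magnitude) are illustrative approximations, not the paper's formulation.

```python
import cv2
import numpy as np

def contrast_map(bgr):
    """Combine normalized luminance, color, and directional feature maps
    into a single contrast map (illustrative approximation)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, a, b = cv2.split(lab)

    def norm(m):
        m = np.abs(m)
        return (m - m.min()) / (m.max() - m.min() + 1e-6)

    # Luminance map: deviation from the local mean brightness.
    lum = norm(L - cv2.blur(L, (31, 31)))
    # Color map: chromatic deviation in the a/b channels.
    col = norm(np.hypot(a - a.mean(), b - b.mean()))
    # Directional map: gradient magnitude from Sobel derivatives.
    grad = norm(np.hypot(cv2.Sobel(L, cv2.CV_32F, 1, 0),
                         cv2.Sobel(L, cv2.CV_32F, 0, 1)))

    return (lum + col + grad) / 3.0
```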

Frontal Face Region Extraction & Features Extraction for Ocular Inspection (망진을 위한 정면 얼굴 영역 및 특징 요소 추출)

  • Cho Dong-Uk;Kim Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.6C
    • /
    • pp.585-592
    • /
    • 2005
  • One of the most important principles in research on disease is to attach more importance to prevention of disease and preservation of health than to treatment, and to foods rather than to medicines. In this context, the most significant concern in examining a patient is to detect the presence of disease and, if present, to diagnose its type, after which pharmacotherapy follows. In this paper, various diagnostic methods of Oriental medicine are discussed, and ocular inspection, the most important of the four diagnostic methods of Oriental medicine, is studied. Observing a person's shape and color has been the major approach to ocular inspection, which to this day has usually depended on a doctor's intuition. We are developing an automatic system that provides objective basic data for ocular inspection. As the first stage, we applied signal processing techniques to automatic extraction of facial features for ocular inspection. Facial regions are first extracted from the frontal view, followed by extraction of their features. An experiment with 20 persons showed that the frontal face regions were extracted perfectly, along with features such as eyes, eyebrows, noses, and mouths. Future work will address morphological operations for a few unfinished extraction results, such as combined hair and eyebrows.
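
For illustration only, a rough stand-in for the frontal face and facial feature extraction stage using OpenCV's bundled Haar cascades; this is not the paper's signal-processing pipeline, and the cascade choices are assumptions.

```python
import cv2

# Standard Haar cascades shipped with OpenCV (illustrative stand-ins).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_face_and_eyes(gray):
    """Detect the frontal face region, then locate eyes inside it."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        results.append({"face": (x, y, w, h),
                        "eyes": [(x + ex, y + ey, ew, eh)
                                 for ex, ey, ew, eh in eyes]})
    return results
```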

Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan;Sun, Lina;Li, Xiaotao;Su, Jian;Hao, Zhaobing;Lu, Xue
    • Journal of the Optical Society of Korea
    • /
    • v.18 no.4
    • /
    • pp.317-329
    • /
    • 2014
  • In vision measurement systems based on structured light, the key to detection precision is determining accurately the central position of the projected laser line in the image. The purpose of this research is to extract laser line centers based on a decision function generated to distinguish the real centers from candidate points with a high recognition rate. First, preprocessing of the image using a difference image method is conducted to segment the laser line. Second, feature points at the integer pixel level are selected as initial laser line centers using the eigenvalues of the Hessian matrix. Third, because the light intensity of a laser line obeys a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian matrix eigenvalues for the candidate centers is presented to reasonably balance the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian function and the constant function, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function estimates the characteristic that one eigenvalue approaches zero, and increases the sensitivity of the decision function to that characteristic, which corresponds to the longitudinal direction of the laser line. The sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, making the decision function more sensitive to that feature, which relates to the transverse direction of the laser line. The decision function thus assigns higher values to the real centers by jointly considering the properties in the longitudinal and transverse directions of the laser line. Moreover, the method yields a decision value between 0 and 1 for any candidate center, providing a normalized measure across different laser lines and images. Pixels whose normalized values are close to 1 are determined to be the real centers by progressively scanning the image columns. Finally, the zero point of a second-order Taylor expansion along the eigenvector direction is employed to refine the extracted central points at the subpixel level. The experimental results show that the method based on this normalization model accurately extracts the coordinates of laser line centers and achieves a higher recognition rate in two groups of experiments.
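
A simplified sketch of a per-pixel, eigenvalue-based decision value computed from Gaussian second derivatives with SciPy; the specific Gaussian and sinusoidal weighting terms below are stand-ins for the paper's normalized recognition functions, and the sub-pixel Taylor refinement is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laser_center_decision(image, sigma=2.0):
    """Per-pixel decision value in [0, 1] for laser-line center candidates,
    built from the eigenvalues of the image Hessian."""
    img = image.astype(np.float64)
    # Second-order Gaussian derivatives give the Hessian at every pixel.
    Hrr = gaussian_filter(img, sigma, order=(2, 0))
    Hcc = gaussian_filter(img, sigma, order=(0, 2))
    Hrc = gaussian_filter(img, sigma, order=(1, 1))

    # Eigenvalues of the 2x2 symmetric Hessian (lam1 >= lam2).
    mean = (Hrr + Hcc) / 2.0
    diff = np.sqrt(((Hrr - Hcc) / 2.0) ** 2 + Hrc ** 2)
    lam1, lam2 = mean + diff, mean - diff

    # Normalize so eigenvalues are comparable across images.
    scale = np.abs(lam2).max() + 1e-9
    l1, l2 = lam1 / scale, lam2 / scale

    # Gaussian term: high when the along-line eigenvalue is near zero.
    g = np.exp(-(l1 ** 2) / (2 * 0.1 ** 2))
    # Sinusoidal term: high when the cross-line eigenvalue is strongly negative.
    s = np.sin(np.clip(-l2, 0.0, 1.0) * np.pi / 2)
    return g * s
```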

Accurate Camera Calibration Method for Multiview Stereoscopic Image Acquisition (다중 입체 영상 획득을 위한 정밀 카메라 캘리브레이션 기법)

  • Kim, Jung Hee;Yun, Yeohun;Kim, Junsu;Yun, Kugjin;Cheong, Won-Sik;Kang, Suk-Ju
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.919-927
    • /
    • 2019
  • In this paper, we propose an accurate camera calibration method for acquiring multiview stereoscopic images. Camera calibration is generally performed using checkerboard patterns: the checkerboard simplifies the feature point extraction process and its known lattice structure allows accurate estimation of the relations between points on the 2-dimensional image and points in 3-dimensional space. Since the estimation accuracy of the camera parameters depends on feature matching, accurate detection of the checkerboard corners is crucial. We therefore propose a method that achieves accurate camera calibration through accurate detection of checkerboard corners. The proposed method detects checkerboard corner candidates using 1-dimensional Gaussian filters, followed by a corner refinement process that removes outliers from the candidates and detects the checkerboard corners accurately at the sub-pixel level. To verify the proposed method, we examine the reprojection errors and camera location estimation results to confirm the estimation accuracy of the camera intrinsic and extrinsic parameters.
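
A standard OpenCV calibration sketch for comparison: findChessboardCorners plus cornerSubPix stands in for the paper's 1-D Gaussian corner detection and refinement, and the returned RMS value is the reprojection error used to judge accuracy. Board size and square size are illustrative.

```python
import cv2
import numpy as np

def calibrate_from_checkerboard(images, board_size=(9, 6), square=0.025):
    """Checkerboard calibration with sub-pixel corner refinement.
    `images` is a list of grayscale views of the board (same resolution)."""
    # 3D reference points of the board corners on the Z = 0 plane.
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square

    obj_pts, img_pts = [], []
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    for gray in images:
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if not found:
            continue
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_pts.append(objp)
        img_pts.append(corners)

    # Intrinsics, distortion, and per-view extrinsics; rms is the reprojection error.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, images[0].shape[::-1], None, None)
    return rms, K, dist, rvecs, tvecs
```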

Proposal and Evaluation of a Cost Estimation Model Considering Software Quality (소프트웨어의 품질을 고려한 비용 평가 모델의 제안과 평가)

  • Lee, Yong-Geun;Yang, Hae-Sul
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.2
    • /
    • pp.194-201
    • /
    • 1994
  • Recently, as the application fields of software have expanded, the relative importance of software has gradually increased, and so has the importance of development cost. However, since previous evaluation models of development cost mostly evaluate from a functional point of view, this paper proposes COSMOS-Q (COSt MOdel for Subcontract-Quality), an evaluation model of software development cost that evaluates quality as well as function. The goal of the proposed model is to let a software orderer evaluate software cost accurately using only order specification information. This paper proposes the cost evaluation model and evaluates its validity with reference to the review results in ISO/SC7 on software quality characteristics, extracting the quality feature factors that cause changes in cost and setting up evaluation measures that can be adopted as ordering conditions.


A Fast Recognition System of Gothic-Hangul using the Contour Tracing (윤곽선 추적에 의한 고딕체 한글의 신속인식에 관한 연구)

  • 정주성;김춘석;박충규
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.37 no.8
    • /
    • pp.579-587
    • /
    • 1988
  • Conventional methods for automatic recognition of Korean characters consist of a thinning process, segmentation of connected fundamental phonemes, and recognition of each fundamental character. These methods, however, require the thinning process, which is complex and time consuming, and several noise components degrade character recognition more than in the case without thinning. This paper describes a method for extracting the feature components of fundamental Korean characters in the Gothic typeface without thinning. We regard the line components of the contour describing a character's external boundary as the feature components. Each line component includes the directional code, the length, and the start point in the image, so each fundamental character is represented by a string of directional codes and the recognition process reduces to string pattern matching. We used Gothic Hangul in the experiment, and the recognition rate is 92%.
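
A minimal sketch of tracing a character contour into a string of directional (Freeman chain) codes, with a naive string-similarity measure standing in for the paper's string pattern matching; the direction numbering follows image coordinates and is illustrative.

```python
import cv2
import numpy as np

# 8-direction codes for unit steps between consecutive contour points.
DIRECTIONS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
              (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(binary):
    """Trace the outer contour of a binary character image and return its
    string of directional codes (the line-component string used for matching)."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2)
    codes = []
    for (x0, y0), (x1, y1) in zip(pts, np.roll(pts, -1, axis=0)):
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        if step in DIRECTIONS:
            codes.append(str(DIRECTIONS[step]))
    return "".join(codes)

def recognize(code_string, reference_codes):
    """Recognition reduces to string matching against reference code strings."""
    return max(reference_codes, key=lambda k: _similarity(code_string, reference_codes[k]))

def _similarity(a, b):
    # Naive stand-in metric: fraction of matching codes over the shorter string.
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a[:n], b[:n])) / max(n, 1)
```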

Face Recognition Based on Polar Coordinate Transform (극좌표계 변환에 기반한 얼굴 인식 방법)

  • Oh, Jae-Hyun;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.1
    • /
    • pp.44-52
    • /
    • 2010
  • In this paper, we propose a novel method for face recognition that uses polar coordinates instead of the conventional Cartesian coordinates. We select a point in the central area of the face as the pole and make a polar image of the face by evenly sampling pixels in each of the 360 degrees of direction around the pole. Applying conventional feature extraction methods to the polar image improves the recognition rates. The polar representation delineates the area near the pole more vividly than the area far from it, and in a face the important regions such as the eyes, nose, and mouth are concentrated in the central part. Therefore, the polar coordinate transform of a face image yields a more vivid representation of the important facial regions than the conventional Cartesian coordinates. The proposed polar coordinate transform was applied to the Yale and FRGC databases, and LDA and NLDA were used afterwards to extract features. The experimental results show that the proposed method performs better than the conventional Cartesian images.
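
A minimal sketch of the polar resampling step using OpenCV's warpPolar, with the pole assumed at the image center; in the paper the pole is chosen within the central facial area, and the resulting polar image is then passed to LDA/NLDA feature extraction. The output size is an illustrative parameter.

```python
import cv2
import numpy as np

def to_polar_face(gray, pole=None, out_size=(360, 64)):
    """Resample a face image around a pole so that each row covers one angular
    direction; the central regions (eyes, nose, mouth) then occupy
    proportionally more pixels than the periphery."""
    h, w = gray.shape
    if pole is None:
        pole = (w / 2.0, h / 2.0)          # assumed pole: image center
    max_radius = min(pole[0], pole[1], w - pole[0], h - pole[1])
    # dsize is (radial bins, angular bins); rows of the output are angles.
    polar = cv2.warpPolar(gray, (out_size[1], out_size[0]), pole, max_radius,
                          cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
    # Flatten into a feature vector for LDA/NLDA-style feature extraction.
    return polar.reshape(-1).astype(np.float32)
```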