• Title/Summary/Keyword: Landmark information

Face Recognition Based on Facial Landmark Feature Descriptor in Unconstrained Environments (비제약적 환경에서 얼굴 주요위치 특징 서술자 기반의 얼굴인식)

  • Kim, Daeok;Hong, Jongkwang;Byun, Hyeran
    • Journal of KIISE / v.41 no.9 / pp.666-673 / 2014
  • This paper proposes a scalable face recognition method for unconstrained face databases and presents a simple experimental result. Existing face recognition research has usually focused on improving the recognition rate in a constrained environment where illumination, face alignment, facial expression, and background are controlled, and therefore cannot be applied to unconstrained face databases. The proposed system is a facial feature extraction algorithm for unconstrained face recognition. First, we extract the areas that represent the important features (landmarks) of the face, such as the eyes, nose, and mouth. Each landmark is represented by a high-dimensional LBP (Local Binary Pattern) histogram feature vector. The multi-scale LBP histogram vector corresponding to a single landmark is reduced to a low-dimensional face feature vector through PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). We use rank acquisition and precision at k (p@k) to verify the face recognition performance of the low-dimensional features produced by the proposed algorithm. The experiments use the FERET, LFW, and PubFig83 databases. The face recognition system using the proposed algorithm showed better classification performance than existing methods.
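
The abstract above describes a concrete pipeline: per-landmark multi-scale LBP histograms followed by PCA and then LDA. Below is a minimal sketch of that idea, not the authors' code; the patch size, radii, landmark coordinate, and placeholder training data are illustrative assumptions.

```python
# Sketch: multi-scale uniform-LBP histograms around one facial landmark, then PCA + LDA.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def landmark_descriptor(gray, center, patch=32, radii=(1, 2, 3), points=8):
    """Concatenated uniform-LBP histograms of one landmark patch at several radii."""
    y, x = center
    h = patch // 2
    region = gray[y - h:y + h, x - h:x + h]
    hists = []
    for r in radii:
        lbp = local_binary_pattern(region, points, r, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
        hists.append(hist)
    return np.concatenate(hists)          # high-dimensional landmark feature vector

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)   # stand-in face image
desc = landmark_descriptor(face, center=(40, 64))               # e.g. a left-eye landmark

# Placeholder training set: 200 descriptors for 20 identities, reduced by PCA then LDA.
X = rng.random((200, desc.size))
y = np.repeat(np.arange(20), 10)
low_dim = LinearDiscriminantAnalysis(n_components=10).fit_transform(
    PCA(n_components=25).fit_transform(X), y)
print(desc.shape, low_dim.shape)          # (30,) and (200, 10)
```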

Cloning of NotI-linked DNA Detected by Restriction Landmark Genomic Scanning of Human Genome

  • Kim Jeong-Hwan;Lee Kyung-Tae;Kim Hyung-Chul;Yang Jin-Ok;Hahn Yoon-Soo;Kim Sang-Soo;Kim Seon-Young;Yoo Hyang-Sook;Kim Yong-Sung
    • Genomics & Informatics
    • /
    • v.4 no.1
    • /
    • pp.1-10
    • /
    • 2006
  • Epigenetic alterations are common features of human solid tumors, though global DNA methylation has been difficult to assess. Restriction Landmark Genomic Scanning (RLGS) is one technology for examining epigenetic alterations at several thousand NotI sites in promoter regions of a tumor genome. To obtain sequence information for the NotI sequences in an RLGS gel, we cloned 1,161 unique NotI-linked clones, comprising about 60% of the spots in the soluble region of the RLGS profile, and performed BLAT searches on the UCSC genome server, May 2004 Freeze. 1,023 (88%) unique sequences matched CpG islands of the human genome, showing a large bias of RLGS toward identifying potential genes or CpG islands. The cloned NotI loci occurred with high frequency (71%) within CpG islands near the 5' ends of known genes rather than within CpG islands near the 3' ends or intragenic regions, making RLGS a potent tool for the identification of gene-associated methylation events. By mixing RLGS gels with all NotI-linked clones, we addressed 151 NotI sequences onto a standard RLGS gel and compared them with previous reports from several types of tumors. We hope our sequence information will be useful for identifying novel epigenetic targets in any type of tumor genome.

Experimental result of Real-time Sonar-based SLAM for underwater robot (소나 기반 수중 로봇의 실시간 위치 추정 및 지도 작성에 대한 실험적 검증)

  • Lee, Yeongjun;Choi, Jinwoo;Ko, Nak Yong;Kim, Taejin;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.3 / pp.108-118 / 2017
  • This paper presents experimental results of real-time sonar-based SLAM (simultaneous localization and mapping) using probability-based landmark recognition. The sonar-based SLAM is used for navigation of an underwater robot. Inertial sensor data from an IMU (Inertial Measurement Unit) and a DVL (Doppler Velocity Log) are fused with external information from sonar image processing by an Extended Kalman Filter (EKF) to obtain the navigation information. The vehicle location is estimated from the inertial sensor data and corrected by sonar data, which provides the relative position between the vehicle and landmarks on the bottom of the basin. For verification of the proposed method, experiments were performed in a basin environment using the underwater robot yShark.
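
As a rough illustration of the fusion scheme described above, the sketch below runs one predict/update cycle in which a DVL/IMU velocity estimate drives the prediction and a sonar-derived relative landmark position drives the correction. It is a simplified linear special case with a 2D position state, not the paper's EKF; the noise covariances and the known landmark position are assumptions.

```python
import numpy as np

def ekf_step(x, P, v, dt, z_rel, landmark, Q, R):
    """One predict/update cycle: dead-reckon with velocity, correct with sonar."""
    # Predict: integrate the DVL/IMU velocity estimate.
    x_pred = x + v * dt
    P_pred = P + Q
    # Update: sonar measures the landmark position relative to the vehicle,
    # so the measurement model is h(x) = landmark - x and its Jacobian H = -I.
    H = -np.eye(2)
    innovation = z_rel - (landmark - x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innovation
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2) * 0.5
x, P = ekf_step(x, P, v=np.array([0.3, 0.0]), dt=1.0,
                z_rel=np.array([4.6, 1.9]), landmark=np.array([5.0, 2.0]),
                Q=np.eye(2) * 0.01, R=np.eye(2) * 0.05)
print(x)   # corrected position estimate, pulled toward (0.4, 0.1)
```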

Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay (오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법)

  • Kwon Bang-Hyun;Shon Eun-Ho;Kim Young-Chul;Chong Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.12 no.4 / pp.389-394 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions. It can cause the robot to move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, much research has aimed at reducing this noise, but accuracy remains limited because most approaches are based on statistical methods. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight to the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
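
The abstract's remark that a known focal length and a single image of landmarks determine the angular separation of their lines of sight follows from the pinhole model. Below is a minimal sketch of that computation under an assumed pinhole camera; the pixel coordinates, principal point, and focal length are illustrative values, not the authors' calibration.

```python
import numpy as np

def angular_separation(p1, p2, focal_px, principal_point=(0.0, 0.0)):
    """Angle (radians) between the viewing rays through two image points."""
    c = np.asarray(principal_point, dtype=float)
    r1 = np.append(np.asarray(p1, dtype=float) - c, focal_px)   # ray toward landmark 1
    r2 = np.append(np.asarray(p2, dtype=float) - c, focal_px)   # ray toward landmark 2
    cosang = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Example: two markers imaged 200 px apart horizontally, focal length 800 px.
print(np.degrees(angular_separation((-100, 0), (100, 0), 800.0)))   # ~14.25 degrees
```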

Absolute Positioning System for Mobile Robot Navigation in an Indoor Environment (ICCAS 2004)

  • Yun, Jae-Mu;Park, Jin-Woo;Choi, Ho-Seek;Lee, Jang-Myung
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2004.08a / pp.1448-1451 / 2004
  • Position estimation is one of the most important functions for a mobile robot navigating in an unstructured environment. Most previous localization schemes estimate the current position and pose of a mobile robot by applying various localization algorithms to information obtained from sensors mounted on the robot, or by recognizing an artificial landmark attached to a wall or objects of the indoor environment as natural landmarks. Several drawbacks of these approaches have been pointed out. To compensate for them, a new localization method is proposed that estimates the absolute position of the mobile robot using a fixed camera on the ceiling of the corridor. The proposed method also improves the success rate of position estimation by calculating the real size of an object. This scheme is not a relative localization, which decreases the position error through algorithms operating on noisy sensor data, but a kind of absolute localization. The effectiveness of the proposed localization scheme is demonstrated through experiments.
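
A ceiling-mounted downward-looking camera can convert pixel measurements into metric floor coordinates, which is one way to relate the "real size of an object" to absolute position as mentioned above. The sketch below shows only that pinhole relation under the assumption of a camera looking straight down from a known height; it is an illustration, not the paper's algorithm.

```python
import numpy as np

def pixel_to_floor(px_offset, camera_height_m, focal_px):
    """Convert an image-plane offset (pixels) into a floor offset (meters)."""
    return np.asarray(px_offset, dtype=float) * camera_height_m / focal_px

# Example: robot center imaged 120 px right and 40 px up of the optical axis,
# camera mounted 3 m above the floor, focal length 900 px.
print(pixel_to_floor((120, -40), 3.0, 900.0))   # ~[0.40, -0.13] m on the floor
```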

Development of Image-based Assistant Algorithm for Vehicle Positioning by Detecting Road Facilities

  • Jung, Jinwoo;Kwon, Jay Hyoun;Lee, Yong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.5 / pp.339-348 / 2017
  • Due to recent improvements in computer processing speed and image processing technology, research is actively being carried out to combine camera information with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, a mathematical model based on SPR (Single Photo Resection) is derived for an image-based assistant algorithm for vehicle positioning. A simulation test is performed to analyze the factors affecting SPR. In addition, a GNSS/on-board vehicle sensor/image-based positioning algorithm is developed by combining the image-based positioning algorithm with the existing positioning algorithm. The performance of the integrated algorithm is evaluated in an actual driving test, with the landmark position data required for SPR generated by simulation. The precision of the horizontal position error is 1.79 m for the existing positioning algorithm and 0.12 m for the integrated positioning algorithm at the points where SPR is performed. In future research, it is necessary to develop an optimized algorithm based on actual landmark position data.
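
SPR (Single Photo Resection) recovers the camera position and attitude from known landmark coordinates and their image measurements, which is essentially the perspective-n-point problem. The sketch below illustrates that idea with OpenCV's solvePnP as a stand-in for the paper's mathematical model; the landmark coordinates, pixel measurements, and camera intrinsics are made-up values for illustration.

```python
import numpy as np
import cv2

# Known road-facility landmarks on a local ground plane (meters) and their detected pixels.
object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 5, 0], [0, 5, 0]], dtype=np.float64)
image_pts = np.array([[300, 420], [660, 430], [600, 260], [340, 255]], dtype=np.float64)
K = np.array([[800, 0, 480], [0, 800, 270], [0, 0, 1]], dtype=np.float64)   # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()   # camera (vehicle) position in the landmark frame
    print(camera_position)
```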

VRML image overlay method for Robot's Self-Localization (VRML 영상오버레이기법을 이용한 로봇의 Self-Localization)

  • Sohn, Eun-Ho;Kwon, Bang-Hyun;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the KIEE Conference / 2006.04a / pp.318-320 / 2006
  • Inaccurate localization exposes a robot to many dangerous conditions. It can cause the robot to move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, much research has aimed at reducing this noise, but accuracy remains limited because most approaches are based on statistical methods. The goal of our research is to measure the robot location more exactly by matching a built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight to the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.

Vehicle License Plate Detection in Road Images (도로주행 영상에서의 차량 번호판 검출)

  • Lim, Kwangyong;Byun, Hyeran;Choi, Yeongwoo
    • Journal of KIISE / v.43 no.2 / pp.186-195 / 2016
  • This paper proposes a vehicle license plate detection method for real road environments using 8-bit MCT features and a landmark-based AdaBoost method. The proposed method identifies potential license plate regions and generates a saliency map that represents the license plate location probability based on the AdaBoost classification score. Candidate regions whose scores are higher than a given threshold are chosen from the saliency map. Each candidate region is adjusted by the local image variance and verified by an SVM using histograms of the 8-bit MCT features. The proposed method achieves a detection accuracy of 85% on various road images from Korea and Europe.
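
Assuming MCT here denotes the Modified Census Transform, the sketch below computes an 8-bit variant that compares each of a pixel's eight neighbors with the mean of its 3x3 neighborhood and packs the results into a byte; histograms of such codes could then feed the AdaBoost scoring and SVM verification steps described above. This is an illustrative reading of the feature, not the authors' exact definition.

```python
import numpy as np

def mct8(gray):
    """Return an 8-bit MCT code image (borders left as zero)."""
    g = gray.astype(np.float32)
    h, w = g.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # Offsets of the 8 neighbors, scanned in a fixed order to form the bit code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            mean = g[y - 1:y + 2, x - 1:x + 2].mean()
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if g[y + dy, x + dx] > mean:
                    code |= 1 << bit
            out[y, x] = code
    return out

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(8, 8)).astype(np.uint8)   # stand-in image patch
print(mct8(patch))
```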

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining various sensors with different characteristics and limited sensing capability has advantages in terms of complementarity and cooperation, yielding better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on the probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from the input camera images; these serve as natural landmark points in the self-localization process. With the laser structured light sensor, the robot utilizes geometrical features composed of corners and planes, extracted from range data at a constant height above the navigation floor, as natural landmark shapes. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the results are discussed in detail.
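
The sketch below shows one simple form such a Bayesian fusion step can take: each sensor contributes a likelihood over candidate robot positions whose spread reflects its reliability, and the posterior is the normalized product of the prior and both likelihoods. The one-dimensional grid, Gaussian likelihoods, and reliability values are assumptions for illustration, not the paper's reliability functions.

```python
import numpy as np

def fuse(prior, candidates, z_vision, sigma_vision, z_laser, sigma_laser):
    """Return the normalized posterior over candidate 1-D positions."""
    def gaussian(z, sigma):
        return np.exp(-0.5 * ((candidates - z) / sigma) ** 2)
    post = prior * gaussian(z_vision, sigma_vision) * gaussian(z_laser, sigma_laser)
    return post / post.sum()

candidates = np.linspace(0.0, 10.0, 101)             # candidate positions (m)
prior = np.ones_like(candidates) / candidates.size   # uninformative prior
posterior = fuse(prior, candidates, z_vision=4.2, sigma_vision=0.5,
                 z_laser=4.6, sigma_laser=0.2)        # laser treated as more reliable
print(candidates[posterior.argmax()])                 # fused estimate, near 4.5
```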

Landmark Recognition Method based on Geometric Invariant Vectors (기하학적 불변벡터기반 랜드마크 인식방법)

  • Cha Jeong-Hee
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.173-182 / 2005
  • In this paper, we propose a landmark recognition method that is invariant to the camera viewpoint during navigation for localization. Features used in previous research vary with camera viewpoint, and because of the wealth of information involved, extracting visual landmarks for positioning is not an easy task. The proposed method has three stages: feature extraction, learning and recognition, and matching. In the feature extraction stage, we set interest areas of the image, from which we extract corner points; we then obtain features that are more accurate and resistant to noise through statistical analysis of the smaller eigenvalue. In the learning and recognition stage, we form robust feature models by testing whether a feature model consisting of five corner points is invariant to viewpoint. In the matching stage, we reduce time complexity and find correspondences accurately with a matching method that uses a similarity evaluation function and the Graham search method. In the experiments, we compare and analyze the proposed method with existing methods on various indoor images to demonstrate its superiority.
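
The corner extraction via "statistical analysis of a small eigenvalue" is in the spirit of minimum-eigenvalue (Shi-Tomasi) corner selection, which keeps points whose smaller structure-tensor eigenvalue is large. The sketch below uses OpenCV's goodFeaturesToTrack on a synthetic image as an illustration of that criterion; it is not the author's procedure.

```python
import cv2
import numpy as np

# Synthetic test image: a bright rectangle whose four corners should be detected.
gray = np.zeros((120, 160), dtype=np.uint8)
cv2.rectangle(gray, (40, 30), (120, 90), 255, -1)

# Shi-Tomasi selection: keep points whose smaller structure-tensor eigenvalue is large.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=20, qualityLevel=0.05, minDistance=10)
print(corners.reshape(-1, 2))   # approximately the four rectangle corners
```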
