• Title/Summary/Keyword: Camera Identification


Virtual Environment Building and Navigation of Mobile Robot using Command Fusion and Fuzzy Inference

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.22 no.4 / pp.427-433 / 2019
  • This paper proposes a fuzzy inference model for map building and navigation of a mobile robot with an active camera, which navigates intelligently to a goal location in unknown environments using sensor fusion based on situational commands. Active cameras give a mobile robot the capability to estimate and track feature images over a hallway field of view. Instead of a "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a command fusion method is used to govern the robot's navigation. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a command fusion technique is introduced in which the sensory data of the active camera are fused into the identification process. Navigation performance improves on that achieved using fuzzy inference alone and shows significant advantages over command fusion alone. Experimental evidence is provided, demonstrating that the proposed method can be used reliably over a wide range of relative positions between the active camera and the feature images.
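The command-fusion idea in this abstract, blending a goal-approach behavior and an obstacle-avoidance behavior through fuzzy weights, can be sketched as follows. The membership shapes, gains, and distance thresholds here are illustrative assumptions, not the paper's actual rule base.

```python
def ramp_down(x, a, b):
    """Membership for 'obstacle near': 1 below a, 0 above b, linear between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def fuse_commands(goal_bearing_deg, obstacle_dist_m):
    """Blend goal-approach and obstacle-avoidance steering commands."""
    # Behavior 1: steer toward the goal (proportional to bearing error).
    goal_cmd = 0.5 * goal_bearing_deg
    # Behavior 2: evasive turn away from the goal side (assumed strategy).
    avoid_cmd = -40.0 if goal_bearing_deg >= 0 else 40.0
    # Fuzzy weights from obstacle distance: 'near' vs 'far'.
    w_near = ramp_down(obstacle_dist_m, 0.3, 1.5)
    w_far = 1.0 - w_near
    # Command fusion: weighted average of the two behavior outputs.
    return w_near * avoid_cmd + w_far * goal_cmd
```

With a distant obstacle the output is pure goal approach; very close, pure avoidance; in between, the two commands blend smoothly rather than switching abruptly.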

Multiple Object Tracking and Identification System Using CCTV and RFID (감시 카메라와 RFID를 활용한 다수 객체 추적 및 식별 시스템)

  • Kim, Jin-Ah;Moon, Nammee
    • KIPS Transactions on Computer and Communication Systems / v.6 no.2 / pp.51-58 / 2017
  • Because of safety and security concerns, the surveillance camera market is growing. Research on video-based recognition and tracking is also actively in progress, but there is a limit to identifying an object from the recognition and tracking information alone. It is especially difficult to identify multiple objects in open spaces such as shopping malls and airports where surveillance cameras are used. Therefore, this paper proposes adding an object identification function, using RFID, to an existing video-based object recognition and tracking system, so that the video-based and RFID-based approaches complement each other. Through the interaction of system modules, we propose a solution both to failures of video-based object recognition and tracking and to problems caused by RFID recognition errors. The system identifies objects in four classification steps so that the data reliability of the identified object can be maintained. The efficiency of the system is demonstrated with a simulation program.
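A hedged sketch of the core association step such a system needs: linking a camera track to an RFID read by spatial proximity. The 1-D positions, range threshold, and function name are illustrative assumptions; the paper's actual four-step classification is not reproduced here.

```python
def match_track_to_tag(track_pos, tag_reads, max_dist):
    """Associate a tracked object with the nearest RFID read.

    track_pos: estimated position of the CCTV track (1-D for brevity)
    tag_reads: dict mapping tag id -> estimated read position
    Returns the matching tag id, or None if no read is within max_dist.
    """
    best_id, best_d = None, max_dist
    for tag_id, pos in tag_reads.items():
        d = abs(track_pos - pos)
        if d <= best_d:
            best_id, best_d = tag_id, d
    return best_id
```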

Fusion algorithm for Integrated Face and Gait Identification (얼굴과 발걸음을 결합한 인식)

  • Nizami, Imran Fareed;An, Sung-Je;Hong, Sung-Jun;Lee, Hee-Sung;Kim, Eun-Tai;Park, Mig-Non
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.72-77 / 2008
  • Identification of humans from multiple viewpoints is an important task for surveillance and security. For optimal performance, the system should use the maximum information available from its sensors. Multimodal biometric systems can utilize more than one physiological or behavioral characteristic for enrollment, verification, or identification. Since gait alone is not yet established as a highly distinctive feature, this paper presents an approach that fuses face and gait for identification. We use the single-camera case, i.e., both face and gait recognition are performed on the same set of images captured by a single camera. The aim is to improve the performance of the system by utilizing the maximum amount of information available in the images. Fusion is considered at the decision level. The proposed algorithm is tested on the NLPR database.
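Decision-level fusion, as mentioned in the abstract, combines each modality's final decision rather than raw features or scores. A minimal sketch, assuming each classifier reports an identity plus a confidence; the tie-breaking rule is an assumption, not the paper's:

```python
def fuse_decisions(face_id, face_conf, gait_id, gait_conf):
    """Decision-level fusion of face and gait classifiers.

    If both modalities agree, accept the common identity; otherwise
    fall back to the more confident classifier's decision.
    """
    if face_id == gait_id:
        return face_id
    return face_id if face_conf >= gait_conf else gait_id
```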

Dividing Occluded Pedestrians in Wide Angle Images for the Vision-Based Surveillance and Monitoring (시각 기반 감시 및 관측을 위한 광각 영상에서의 중첩된 보행자 구분)

  • Park, Jaehyeong;Do, Yongtae
    • Journal of Sensor Science and Technology / v.24 no.1 / pp.54-61 / 2015
  • In recent years, automatic surveillance and monitoring systems based on vision sensors have come into increasing use. Humans are often the most important targets in these systems, but processing human images is difficult because of their small size and flexible motion. In particular, occlusion among pedestrians in camera images causes practical problems. In this paper, we propose a novel method to separate the image regions of occluded pedestrians. A camera equipped with a wide-angle lens is attached to the ceiling of a building corridor to sense pedestrians with a wide field of view. The output images of the camera are processed for human detection, tracking, identification, distortion correction, and occlusion handling. We resolve the occlusion problem adaptively depending on the angles and positions of the pedestrians' heads. Experimental results show that the proposed method is more efficient and accurate than existing methods.
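The head-guided separation described in the abstract can be sketched as assigning each column of a merged pedestrian blob to the nearest detected head position. This 1-D nearest-head rule is an illustrative simplification, not the paper's adaptive procedure.

```python
def split_blob_by_heads(blob_columns, head_xs):
    """Label each blob column with the index of the nearest head.

    blob_columns: x-coordinates of image columns covered by the merged blob
    head_xs: x-coordinates of the detected pedestrian heads
    Returns one pedestrian index per column.
    """
    return [min(range(len(head_xs)), key=lambda i: abs(c - head_xs[i]))
            for c in blob_columns]
```

Columns between two heads split at the midpoint, so two overlapping pedestrians get contiguous, disjoint regions.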

Camera Source Identification of Digital Images Based on Sample Selection

  • Wang, Zhihui;Wang, Hong;Li, Haojie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.3268-3283 / 2018
  • With the advent of the Information Age, the source identification of digital images, as a part of digital image forensics, has attracted increasing attention, and an effective technique to identify the source of digital images is urgently needed. In this paper, we first study and implement previous work on image source identification based on sensor pattern noise, such as the Lukas method, the principal component analysis method, and the random subspace method. Second, to extract a purer sensor pattern noise, we propose a sample selection method that improves the random subspace method: by analyzing image texture features, we select patches with lower complexity and extract more reliable sensor pattern noise, which improves identification accuracy. Finally, experimental results reveal that the proposed sample selection method extracts a purer sensor pattern noise and further improves the accuracy of image source identification. At the same time, the approach is less complicated than deep learning models while approaching state-of-the-art performance.
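The sensor-pattern-noise pipeline this abstract builds on, i.e. noise residual, fingerprint averaging, correlation matching, plus the proposed low-texture sample selection, can be sketched in miniature. Images are flattened to 1-D lists and the denoiser is a plain moving average; both are simplifications of the filtering used in the actual methods.

```python
import math

def denoise(img, k=1):
    """Moving-average smoother, standing in for a proper denoising filter."""
    out = []
    for i in range(len(img)):
        lo, hi = max(0, i - k), min(len(img), i + k + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def residual(img):
    """Noise residual = image minus its denoised version."""
    return [a - b for a, b in zip(img, denoise(img))]

def fingerprint(images):
    """Camera fingerprint: average noise residual over many images."""
    res = [residual(im) for im in images]
    return [sum(col) / len(res) for col in zip(*res)]

def ncc(a, b):
    """Normalized correlation between a residual and a fingerprint."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def smoothest_patch(img, size):
    """Sample selection: index of the lowest-variance (least textured)
    patch, where pattern noise can be extracted more reliably."""
    best_i, best_v = 0, float("inf")
    for i in range(len(img) - size + 1):
        p = img[i:i + size]
        m = sum(p) / size
        v = sum((x - m) ** 2 for x in p) / size
        if v < best_v:
            best_i, best_v = i, v
    return best_i
```

A query image is attributed to the camera whose fingerprint yields the highest correlation with the query's residual.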

Treefrog lateral line as a mean of individual identification through visual and software assisted methodologies

  • Kim, Mi Yeon;Borzee, Amael;Kim, Jun Young;Jang, Yikweon
    • Journal of Ecology and Environment / v.41 no.12 / pp.345-350 / 2017
  • Background: Ecological research often requires monitoring a specific individual over an extended period of time. To enable non-invasive re-identification, a consistent external marking is required. Treefrogs possess lateral lines for crypsis; while these patterns decrease predator detection, they are also individual-specific. In this study, we tested the use of lateral lines in captive and wild populations of Dryophytes japonicus as natural markers for individual identification, comparing the results of visual and software-assisted identification. Results: Under normalized laboratory conditions, visual individual identification resulted in a 0.00 rate of false-negative identification (RFNI) and a 0.0068 rate of false-positive identification (RFPI), whereas Wild-ID resulted in RFNI = 0.25 and RFPI = 0.00. In the wild, female and male data sets were tested. For both data sets, visual identification resulted in RFNI and RFPI of 0.00, whereas Wild-ID gave RFNI = 1.0 and RFPI = 0.00. Wild-ID did not perform as well as the visual identification method and produced low matching scores for photographs. The matching scores were significantly correlated with consistency in the type of camera used in the field. Conclusions: We provide clear methodological guidelines for photographic identification of D. japonicus using their lateral lines, and we recommend using Wild-ID as a supplemental tool rather than as the principal identification method when analyzing large datasets.

Identification with the Point of Views and the Characters in Game (게임에서의 시점과 캐릭터 동일시)

  • Lee, Sul-Hi;Sung, Yong-Hee
    • The Journal of the Korea Contents Association / v.8 no.3 / pp.117-126 / 2008
  • Lacan's identification theory has been applied to various cultural genres such as literature, film, and media. In particular, identification theory as developed in film theory gives researchers who try to apply it to games a concrete basis. This paper aims to explain the gamer's actions through identification theory. We therefore divide identification into the level of the gamer's point of view and the level of the character in the game text. While point-of-view identification, the primary identification, has been explained in film as the audience identifying with the camera's point of view, in games the player identifies with various points of view. Identification with characters in games has two aspects: first, gamers identify themselves with the moving images on screen; second, they identify with the roles given to them. The factors that enable this identification are the suppression of the false-statement system, interpellation, the process of selection and arrangement, the cursor, and the colors on screen.

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 2) Automation, Implementation, and Experimental Results

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.3 / pp.205-216 / 2014
  • Multi-camera systems have been widely used as cost-effective tools for the collection of geospatial data for various applications. To fully achieve the potential accuracy of these systems for object-space reconstruction, careful system calibration should be carried out prior to data collection. Since the structural integrity of the involved cameras' components and the system mounting parameters cannot be guaranteed over time, multi-camera systems should be calibrated frequently to confirm the stability of the estimated parameters, and automated techniques are therefore needed to facilitate and speed up the calibration procedure. The automation of the multi-camera system calibration approach proposed in the first part of this paper is contingent on the automated detection, localization, and identification of the signalized object-space targets in the images. In this paper, the automation of the proposed camera calibration procedure through automatic target extraction and labelling approaches is presented. The introduced automated calibration procedure is then implemented for a newly developed multi-camera system while considering the optimum configuration for data collection. Experimental results from the implemented calibration procedure are presented to verify the feasibility of the proposed automated procedure. Qualitative and quantitative evaluation of the estimated system calibration parameters from two calibration sessions is also presented to confirm the stability of the cameras' interior orientation and the system mounting parameters.
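The final stability check the abstract mentions, comparing parameters estimated in two calibration sessions, can be sketched as a k-sigma test on each parameter. The threshold and the flat parameter vectors are assumptions for illustration, not the paper's statistical procedure.

```python
def parameters_stable(session_a, session_b, sigmas, k=2.0):
    """Declare the system stable if every interior-orientation or mounting
    parameter changed by less than k standard deviations between the two
    calibration sessions."""
    return all(abs(a - b) <= k * s
               for a, b, s in zip(session_a, session_b, sigmas))
```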

3D Range Measurement using Infrared Light and a Camera (적외선 조명 및 단일카메라를 이용한 입체거리 센서의 개발)

  • Kim, In-Cheol;Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems / v.14 no.10 / pp.1005-1013 / 2008
  • This paper describes a new sensor system for 3D range measurement using structured infrared light. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser and infrared scanners cover $180^{\circ}$ and are accurate but expensive, and because they use rotating light beams, their range measurements are constrained to a plane. 3D measurements are much more useful for obstacle detection, map building, and localization. Stereo vision is a common way to obtain depth information about a 3D environment; however, it requires that correspondences be clearly identified, and it depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and projected infrared light are used to reduce the effects of ambient light while obtaining a 3D depth map. Modeling the projected light pattern enables precise range estimation. Identifying the cells of the pattern is the key issue in the proposed method; several methods of correctly identifying the cells are discussed and verified with experiments.
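Once a projected cell is identified in the image, range follows from standard triangulation between the projector and the camera. A minimal sketch, assuming a pinhole model with focal length in pixels and a known projector-camera baseline (the values in the test are illustrative, not the paper's calibration):

```python
def range_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a projected pattern cell by triangulation:
    z = f * b / d, where d is the pixel offset between the observed
    cell position and its reference position at infinity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

This is why cell identification is the key issue: the disparity is only meaningful once each observed cell is matched to the correct cell of the projected pattern.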