• Title/Summary/Keyword: Camera Model Identification

Video Camera Model Identification System Using Deep Learning (딥 러닝을 이용한 비디오 카메라 모델 판별 시스템)

  • Kim, Dong-Hyun;Lee, Soo-Hyeon;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology / v.17 no.8 / pp.1-9 / 2019
  • With the development of imaging and information communication technology, image acquisition and mass production technologies have advanced rapidly. However, crimes exploiting these technologies have also increased, and forensic studies are conducted to counter them. Identification techniques for image acquisition devices have been studied extensively, but the field has largely been limited to still images. In this paper, a camera model identification technique for video, rather than still images, is proposed. We analyzed video frames using a model trained on images. Through training and analysis that consider the frame characteristics of video, we show the superiority of a model that uses P frames. We then present a video camera model identification system that applies a majority-based decision algorithm. In an experiment with 5 video camera models, we obtained a maximum per-frame identification accuracy of 96.18%, and the proposed video camera model identification system achieved a 100% identification rate for each camera model.
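
As an illustration of the majority-based decision step described in the abstract above, the sketch below aggregates per-frame CNN predictions into a single video-level label. The function name, example labels, and tie handling are assumptions for illustration, not the authors' implementation.

```python
from collections import Counter

def identify_camera_model(frame_predictions):
    """Aggregate per-frame predictions into one video-level decision
    by majority vote (hypothetical helper; the paper's exact rule may differ)."""
    votes = Counter(frame_predictions)
    model, count = votes.most_common(1)[0]
    return model, count / len(frame_predictions)

# Example: labels predicted by a CNN applied to the P frames of one video
preds = ["NikonD70", "NikonD70", "CanonIXUS70", "NikonD70", "NikonD70"]
print(identify_camera_model(preds))   # ('NikonD70', 0.8)
```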

Camera Model Identification Based on Deep Learning (딥러닝 기반 카메라 모델 판별)

  • Lee, Soo Hyeon;Kim, Dong Hyun;Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering / v.8 no.10 / pp.411-420 / 2019
  • Camera model identification has been a subject of steady study in the field of digital forensics. Among increasingly sophisticated crimes, offenses such as illegal filming account for a large share because they are hard to detect as cameras become smaller. Technology that can specify which camera a particular image was taken with can therefore serve as evidence when a suspect denies their criminal behavior. This paper proposes a deep learning model to identify the camera model used to acquire an image. The proposed model consists of four convolution layers and two fully connected layers, and a high-pass filter (HPF) is used for data pre-processing. To verify the performance of the proposed model, the Dresden Image Database was used and the dataset was generated by applying a sequential partition method. The proposed model is compared with existing studies that use a 3-layer model or a model with GLCM features, and it achieves 98% accuracy, which is comparable to the state of the art.
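
The abstract describes four convolution layers, two fully connected layers, and a fixed high-pass filter for pre-processing. The PyTorch sketch below shows one plausible arrangement under those constraints; the HPF coefficients, channel widths, and class count are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

# A commonly used 3x3 high-pass residual kernel; the paper's exact HPF
# coefficients are not given here, so this choice is an assumption.
HPF_KERNEL = torch.tensor([[-1.,  2., -1.],
                           [ 2., -4.,  2.],
                           [-1.,  2., -1.]]) / 4.0

class CameraModelCNN(nn.Module):
    def __init__(self, num_models=14):
        super().__init__()
        # Fixed (non-trainable) high-pass pre-processing filter
        self.hpf = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
        self.hpf.weight.data = HPF_KERNEL.view(1, 1, 3, 3)
        self.hpf.weight.requires_grad = False
        # Four convolution layers followed by two fully connected layers
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, num_models),
        )

    def forward(self, x):          # x: (N, 1, H, W) grayscale patches
        return self.classifier(self.features(self.hpf(x)))
```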

Camera Model Identification Using Modified DenseNet and HPF (변형된 DenseNet과 HPF를 이용한 카메라 모델 판별 알고리즘)

  • Lee, Soo-Hyeon;Kim, Dong-Hyun;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology / v.17 no.8 / pp.11-19 / 2019
  • Countering advanced image-related crimes requires high-level digital forensic methods. However, feature-based methods rely on human-designed features and struggle to adapt to new device characteristics, and deep learning-based methods still need improved accuracy. This paper proposes a deep learning model for camera model identification based on DenseNet, a recent architecture in the deep learning field. To extract camera sensor features, an HPF feature extraction filter is applied. For camera model identification, we modified the number of layer repetitions and eliminated the bottleneck layer and the compression step that DenseNet uses to reduce computation. The proposed model was evaluated on the Dresden database and achieved an accuracy of 99.65% for 14 camera models, higher than previous studies, and it overcomes their weakness of low accuracy for models from the same manufacturer.
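
A minimal sketch of the two DenseNet modifications mentioned in the abstract: the 1x1 bottleneck convolution is removed from each dense layer, and the transition keeps the full channel count instead of compressing it. Growth rate and channel sizes are assumptions, not the paper's values.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """Dense layer without the 1x1 bottleneck convolution."""
    def __init__(self, in_ch, growth_rate=12):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(),
            nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return torch.cat([x, self.body(x)], dim=1)   # dense connectivity

class Transition(nn.Module):
    """Transition without compression: the channel count is preserved."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),  # no 0.5x compression
            nn.AvgPool2d(2),
        )

    def forward(self, x):
        return self.body(x)
```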

Object detection and tracking using a high-performance artificial intelligence-based 3D depth camera: towards early detection of African swine fever

  • Ryu, Harry Wooseuk;Tai, Joo Ho
    • Journal of Veterinary Science / v.23 no.1 / pp.17.1-17.10 / 2022
  • Background: Inspection of livestock farms using surveillance cameras is emerging as a means of early detection of transboundary animal diseases such as African swine fever (ASF). Object tracking, a developing technology derived from object detection, aims at the consistent identification of individual objects on farms. Objectives: This study was conducted as a preliminary investigation for practical application to livestock farms. Using a high-performance artificial intelligence (AI)-based 3D depth camera, the aim is to establish a pathway for utilizing AI models to perform advanced object tracking. Methods: Multiple crossovers by two humans were simulated to investigate the potential of object tracking, with consistent identification after crossing over taken as evidence of successful tracking. Two AI models, a fast model and an accurate model, were tested and compared with regard to their 3D object tracking performance. Finally, a recording of a pig pen was processed with the same AI models to test the possibility of 3D object detection. Results: Both AI models successfully provided a 3D bounding box, an identification number, and the distance from the camera for each individual human. The accurate detection model showed stronger evidence of 3D object tracking than the fast detection model and showed potential for application to pigs as livestock. Conclusions: Preparing a custom dataset to train the AI models on an appropriate farm is required for 3D object detection to support object tracking of pigs at an ideal level. This would allow farms to transition smoothly from traditional methods to ASF-preventing precision livestock farming.
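
As a rough illustration of what "consistent identification" after a crossover means, the sketch below assigns IDs by nearest-centroid matching in 3D between frames. It is a simplified stand-in, not the camera vendor's tracking algorithm, and the distance threshold is an assumption.

```python
import numpy as np

class CentroidTracker:
    """Minimal ID-assignment tracker: an object keeps its ID if its new centroid
    is the nearest one to a previously tracked centroid."""
    def __init__(self, max_dist=0.5):        # metres; assumed threshold
        self.next_id, self.tracks, self.max_dist = 0, {}, max_dist

    def update(self, centroids):             # centroids: list of (x, y, z)
        assigned = {}
        for c in centroids:
            c = np.asarray(c, dtype=float)
            best_id, best_d = None, self.max_dist
            for tid, prev in self.tracks.items():
                d = np.linalg.norm(c - prev)
                if d < best_d and tid not in assigned:
                    best_id, best_d = tid, d
            if best_id is None:               # no close track: start a new ID
                best_id, self.next_id = self.next_id, self.next_id + 1
            assigned[best_id] = c
        self.tracks = assigned
        return assigned

tracker = CentroidTracker()
print(tracker.update([(0.0, 0.0, 2.0), (1.0, 0.0, 3.0)]))   # IDs 0 and 1
print(tracker.update([(0.9, 0.1, 3.0), (0.1, 0.0, 2.1)]))   # IDs persist after crossing
```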

Parameter Calibration of Laser Scan Camera for Measuring the Impact Point of Arrow (화살 탄착점 측정을 위한 레이저 스캔 카메라 파라미터 보정)

  • Baek, Gyeong-Dong;Cheon, Seong-Pyo;Lee, In-Seong;Kim, Sung-Shin
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.21 no.1 / pp.76-84 / 2012
  • This paper presents a measurement system for an arrow's point of impact using a laser scan camera and describes the image calibration method. Calibration of a distorted image is broadly divided into explicit and implicit methods. Explicit methods work directly with the optical properties of the physical camera and its adjustable parameters, while implicit methods rely on a calibration plate that defines assumed relations between image pixels and target positions. To find the relation between image and target positions in the implicit approach, we propose a performance-criterion-based polynomial model that overcomes limitations of conventional image calibration models, such as over-fitting. The proposed method was verified using 2D arrow impact positions captured by a SICK Ranger-D50 laser scan camera.
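
A small sketch of the implicit (plate-based) calibration idea: fit polynomials of several degrees that map pixel coordinates to target coordinates and compare their errors, picking the degree that satisfies a performance criterion so the model does not over-fit. The data values, the 1-D simplification, and the RMSE criterion are assumptions, not the paper's setup.

```python
import numpy as np

def fit_pixel_to_target(px, tx, degree):
    """Fit a polynomial mapping pixel coordinates to target coordinates
    (illustrative implicit-calibration step; the paper fits a 2-D mapping)."""
    return np.poly1d(np.polyfit(px, tx, degree))

# Calibration-plate samples: pixel column -> target x position in mm (made-up data)
px = np.array([102., 256., 410., 563., 717., 871.])
tx = np.array([  0.,  50., 100., 150., 200., 250.])

for d in (1, 2, 3):
    model = fit_pixel_to_target(px, tx, d)
    rmse = np.sqrt(np.mean((model(px) - tx) ** 2))
    print(f"degree {d}: RMSE = {rmse:.3f} mm")   # choose the lowest degree meeting the criterion
```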

Development of Identification Method of Rice Varieties Using Image Processing Technique (화상처리법에 의한 쌀 품종별 판별기술 개발)

  • Kwon, Young-Kil;Cho, Rae-Kwang
    • Applied Biological Chemistry / v.41 no.2 / pp.160-165 / 1998
  • The current technique for discriminating rice varieties is not objective because it depends on the naked eye of a well-trained inspector. The DNA fingerprint method, based on the genetic character of rice, has been found inappropriate for on-site application because it requires much labor and a skilled expert. The purpose of this study was to develop an identification technique for polished rice varieties using CCD camera images. To minimize noise in the captured image, thresholding and median filtering were carried out, and edges were extracted from the image data. After normalization and FFT (fast Fourier transform) pre-treatment, the image data were used for a library model and a feedforward backpropagation neural network model. The CCD-camera image processing technique could discriminate rice varieties with high accuracy when grain shapes differed markedly, but accuracy dropped to 85% for varieties with similar shapes.
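
A brief sketch of the pre-processing chain named in the abstract (thresholding, median filtering, edge extraction, FFT) using OpenCV and NumPy; the threshold and filter parameters are assumptions, and the neural network classifier stage is omitted.

```python
import cv2
import numpy as np

def grain_features(gray):
    """Pre-processing sketch: threshold, median filter, edge extraction,
    then the FFT magnitude of the normalized edge image as shape features."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    denoised = cv2.medianBlur(binary, 5)
    edges = cv2.Canny(denoised, 50, 150)
    spectrum = np.abs(np.fft.fft2(edges.astype(np.float32) / 255.0))
    return spectrum

# Usage (assumed file name):
# gray = cv2.imread("grain.png", cv2.IMREAD_GRAYSCALE)
# features = grain_features(gray)   # feed into a library model or neural network
```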

Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.165-171 / 2004
  • Robots will be able to coexist with humans and support them effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification to achieve such a human-centered system and robot localization in Intelligent Space. Intelligent Space is a space in which many intelligent devices, such as computers and sensors, are distributed; it achieves human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module that includes processing and networking parts has been chosen. Intelligent Space requires the functions of identifying and tracking multiple objects to provide appropriate services to users in a multi-camera environment. To achieve seamless tracking and location estimation, many camera modules are distributed, which causes errors in object identification among different camera modules. This paper describes an appearance-based object representation for the distributed vision system in Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
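
As one way to realize a color appearance model for consistent labeling across camera modules, the sketch below stores a quantized RGB histogram per object and matches new observations by histogram intersection. This is a simplified stand-in for the learned appearance model discussed in the paper; the bin count and labels are assumptions.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Quantized RGB histogram used as an object's appearance model.
    `pixels` is an (N, 3) array of RGB values in [0, 255]."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist / hist.sum()

def match_object(candidate, models):
    """Assign the label whose stored histogram is closest (histogram intersection)."""
    scores = {label: np.minimum(candidate, h).sum() for label, h in models.items()}
    return max(scores, key=scores.get)

# Usage with made-up pixel samples from two camera modules:
rng = np.random.default_rng(0)
models = {"robot_A": color_histogram(rng.integers(0, 100, (500, 3))),
          "person_1": color_histogram(rng.integers(150, 256, (500, 3)))}
observation = color_histogram(rng.integers(0, 100, (300, 3)))
print(match_object(observation, models))   # -> robot_A
```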

Parameter Identification of Robot Hand Tracking Model Using Optimization (최적화 기법을 이용한 로봇핸드 트래킹 모델의 파라미터 추정)

  • Lee, Jong-Kwang;Lee, Hyo-Jik;Yoon, Kwang-Ho;Park, Byung-Suk;Yoon, Ji-Sup
    • Journal of Institute of Control, Robotics and Systems / v.13 no.5 / pp.467-473 / 2007
  • In this paper, we present a position-based robot hand tracking scheme in which a pan-tilt camera is controlled so that the robot hand always appears at the center of the image frame. We calculate the rotation angles of the pan-tilt camera by transforming the coordinate systems. To identify the model parameters, we applied two optimization techniques: a nonlinear least squares optimizer and a particle swarm optimizer. The simulation results show that the parameter identification problem has a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization is a promising tool for identifying the model parameters of a robot hand tracking system, whereas the nonlinear least squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum.
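
To illustrate why a global optimizer helps on a multimodal landscape, the sketch below fits a deliberately multimodal toy model both with SciPy's local nonlinear least squares and with a minimal particle swarm optimizer. The residual function, bounds, and PSO hyperparameters are assumptions, not the paper's pan-tilt tracking model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy multimodal model: y = p0 * sin(p1 * x) + p2 (stand-in for the tracking model)
def residuals(p, x, y):
    return y - (p[0] * np.sin(p[1] * x) + p[2])

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
y = 1.5 * np.sin(3.0 * x) + 0.2 + 0.01 * rng.standard_normal(50)

# 1) Local nonlinear least squares: may converge to a local optimum
local = least_squares(residuals, x0=[1.0, 1.0, 0.0], args=(x, y))

# 2) Minimal particle swarm optimizer over the summed squared residuals
def pso(cost, bounds, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([cost(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()]
    return gbest

cost = lambda p: np.sum(residuals(p, x, y) ** 2)
global_fit = pso(cost, bounds=[(-5, 5), (0, 10), (-2, 2)])
print("least squares:", local.x)
print("PSO:          ", global_fit)
```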

Automatic Person Identification using Multiple Cues

  • Swangpol, Danuwat;Chalidabhongse, Thanarat
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.1202-1205 / 2005
  • This paper describes a method for vision-based person identification that can detect, track, and recognize a person from video using multiple cues: height and clothing colors. The method does not require a constrained target pose or a fully frontal face image to identify the person. First, the system, which is connected to a pan-tilt-zoom camera, detects the target using motion detection and a human cardboard model. The system keeps tracking the moving target while trying to determine whether it is a human and, if so, which of the registered persons in the database it is. To segment the moving target from the background scene, we employ a background subtraction technique and some spatial filtering. Once the target is segmented, we align it with the generic human cardboard model to verify that it is a human. If so, the cardboard model is also used to segment the body parts and obtain salient features such as the head, torso, and legs. The whole-body silhouette is also analyzed to obtain shape information such as height and slimness. We then use these multiple cues (at present, shirt color, trousers color, and body height) to recognize the target using a supervised self-organization process. We preliminarily tested the system on a set of 5 subjects with multiple sets of clothes. The recognition rate is 100% when a person wears clothes that were learned beforehand; when a person wears new clothes, the system fails to identify them, which shows that height alone is not enough to classify persons. We plan to extend the work by adding more cues, such as skin color and face recognition that exploits the zoom capability of the camera to obtain a high-resolution view of the face, and then evaluate the system with more subjects.
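
A toy version of multiple-cue identification: each registered person is represented by a vector of height and clothing-color cues, and a new observation is matched by scaled nearest-neighbor distance. The gallery values, scaling, and threshold are invented for illustration; the paper uses a supervised self-organization process rather than this nearest-neighbor rule.

```python
import numpy as np

# Cue vector per registered person: [height_cm, shirt_hue, trousers_hue] (made-up values)
gallery = {
    "person_1": np.array([172.0, 0.60, 0.10]),
    "person_2": np.array([158.0, 0.05, 0.55]),
}
cue_scale = np.array([20.0, 0.2, 0.2])   # rough scaling so no single cue dominates

def identify(cues, threshold=2.0):
    """Return the nearest registered person, or 'unknown' if no one is close enough."""
    dists = {name: np.linalg.norm((cues - ref) / cue_scale)
             for name, ref in gallery.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] < threshold else "unknown"

print(identify(np.array([171.0, 0.58, 0.12])))   # -> person_1
print(identify(np.array([190.0, 0.95, 0.95])))   # -> unknown (new clothes / new person)
```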

Virtual Environment Building and Navigation of Mobile Robot using Command Fusion and Fuzzy Inference

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.22 no.4 / pp.427-433 / 2019
  • This paper proposes a fuzzy inference model for map building and navigation of a mobile robot with an active camera, which navigates intelligently to a goal location in unknown environments using sensor fusion, based on situational commands from the active camera sensor. Active cameras give a mobile robot the capability to estimate and track feature images over a hallway field of view. In this paper, instead of a "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a command fusion method is used to govern robot navigation. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a command fusion technique is introduced in which the sensory data of the active camera sensor from navigation experiments are fused into the identification process. Navigation performance improves on that achieved using fuzzy inference alone. Experimental evidence is provided, demonstrating that the proposed method can be used reliably over a wide range of relative positions between the active camera and the feature images.
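
A minimal sketch of command fusion for navigation: the goal-approach and obstacle-avoidance behaviours each propose a steering command with a fuzzy weight, and the weighted combination is executed. The membership shapes, commands, and distances are assumptions, greatly simplified from the paper's fuzzy-rule combination.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with vertices a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuse_steering(goal_angle, obstacle_dist):
    """Command fusion: each behaviour proposes a steering command (degrees) with a
    fuzzy weight; the weighted mean is the executed command."""
    # Goal-approach: steer toward the goal, confident when the path ahead is clear
    w_goal = tri(obstacle_dist, 0.5, 2.0, 10.0)
    cmd_goal = goal_angle
    # Obstacle-avoidance: steer away, confident when an obstacle is near
    w_avoid = tri(obstacle_dist, -1.0, 0.0, 1.0)
    cmd_avoid = np.sign(goal_angle) * -45.0 if goal_angle != 0 else 45.0
    return (w_goal * cmd_goal + w_avoid * cmd_avoid) / (w_goal + w_avoid + 1e-9)

print(fuse_steering(goal_angle=20.0, obstacle_dist=0.4))   # avoidance dominates
print(fuse_steering(goal_angle=20.0, obstacle_dist=5.0))   # goal-approach dominates
```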