• Title/Summary/Keyword: Camera Identification

Camera Model Identification Based on Deep Learning (딥러닝 기반 카메라 모델 판별)

  • Lee, Soo Hyeon;Kim, Dong Hyun;Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering / v.8 no.10 / pp.411-420 / 2019
  • Camera model identification has been a subject of steady study in digital forensics. Among increasingly sophisticated crimes, offenses such as illegal filming account for a growing share because they are hard to detect as cameras become smaller. Technology that can identify which camera captured a particular image can therefore serve as evidence when a suspect denies his or her criminal behavior. This paper proposes a deep learning model to identify the camera model used to acquire an image. The proposed model consists of four convolutional layers and two fully connected layers, and a high-pass filter is applied for data pre-processing. To verify the performance of the proposed model, the Dresden Image Database was used and the dataset was generated by applying a sequential partition method. The proposed model is compared with existing studies that use a 3-layer model or a GLCM-based model, and it achieves 98% accuracy, comparable to the state of the art.
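
The high-pass pre-processing step can be sketched as follows. The abstract does not specify the kernel, so a 3x3 Laplacian-style high-pass kernel, common in forensic residual extraction, is assumed here; this is an illustration, not the paper's exact filter.

```python
import numpy as np

def high_pass_filter(image):
    """Suppress scene content so the classifier sees mostly sensor and
    processing artefacts. The kernel below is an assumed 3x3
    Laplacian-style high-pass filter (not given in the abstract)."""
    kernel = np.array([[-1.0, -1.0, -1.0],
                       [-1.0,  8.0, -1.0],
                       [-1.0, -1.0, -1.0]])
    h, w = image.shape
    padded = np.pad(np.asarray(image, dtype=float), 1, mode="edge")
    out = np.empty((h, w))
    for i in range(h):          # plain 2-D sliding-window correlation
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out
```

On a flat patch the response is zero, while edges and noise survive, which is exactly why such residuals help separate camera models.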

Experimental Analysis on Barrel Zoom Module of Digital Camera for Noise Source Identification and Noise Reduction (실험적 방법을 이용한 디지털 카메라 경통 줌 모듈의 소음원 규명 및 저소음화)

  • Kwak, Hyung-Taek;Jeong, Jae-Eun;Jeong, Un-Chang;Lee, You-Yub;Oh, Jae-Eung
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.21 no.11 / pp.1074-1083 / 2011
  • Noise from a digital camera is noticeable to its users. In particular, the noise of the barrel assembly module during zoom-in/zoom-out operation is recorded while shooting video. Reducing barrel noise has therefore become crucial, but few studies address digital camera noise owing to the product's short history of use. In this study, experiment-based analyses are used to identify the sources of noise and vibration, given the complexity and compactness of the barrel system. Output noise is acquired under various operating conditions using synchronization for spectral analysis. Noise sources of the barrel assembly during zoom operation are first identified by comparison with gear-mesh frequency analysis, and correlation analysis between noise and vibration is then applied to confirm the noise generation path. The noise transfer characteristics of the zoom module are also analyzed to identify the most contributing components. One possible countermeasure for zoom-operation noise is investigated experimentally.
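
The correlation analysis between noise and vibration signals can be illustrated with a normalized cross-correlation; this is a generic signal-processing sketch, not the paper's measurement chain.

```python
import numpy as np

def normalized_xcorr(a, b):
    """Full normalized cross-correlation of two equal-length signals.
    The zero-lag value equals the Pearson correlation coefficient, so a
    peak near 1 at some lag indicates a strong noise-vibration link."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full")
```

The lag of the peak hints at the propagation delay along the generation path from vibration source to radiated noise.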

Object detection and tracking using a high-performance artificial intelligence-based 3D depth camera: towards early detection of African swine fever

  • Ryu, Harry Wooseuk;Tai, Joo Ho
    • Journal of Veterinary Science / v.23 no.1 / pp.17.1-17.10 / 2022
  • Background: Inspection of livestock farms using surveillance cameras is emerging as a means of early detection of transboundary animal diseases such as African swine fever (ASF). Object tracking, a developing technology derived from object detection, aims at the consistent identification of individual objects on farms. Objectives: This study was conducted as a preliminary investigation for practical application to livestock farms. Using a high-performance artificial intelligence (AI)-based 3D depth camera, the aim was to establish a pathway for utilizing AI models to perform advanced object tracking. Methods: Multiple crossovers by two humans were simulated to investigate the potential of object tracking, with consistent identification after crossing over taken as evidence of tracking. Two AI models, a fast model and an accurate model, were tested and compared with regard to their 3D object tracking performance. Finally, a recording of a pig pen was processed with the aforementioned AI models to test the possibility of 3D object detection. Results: Both AI models successfully provided a 3D bounding box, an identification number, and the distance from the camera for each individual human. The accurate detection model outperformed the fast detection model on 3D object tracking and showed potential for application to pigs as livestock. Conclusions: Preparing a custom dataset to train AI models on an appropriate farm is required for proper 3D object detection and object tracking of pigs at an ideal level. This will allow farms to smoothly transition from traditional methods to ASF-preventing precision livestock farming.
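
Consistent identification after a crossover is the core of object tracking. A minimal nearest-centroid ID assignment in 3D gives the flavor of what such a tracker does; the actual tracking logic inside the AI models is not described in the abstract, so everything below is an illustrative assumption.

```python
import math

def assign_ids(prev, detections, next_id, max_dist=1.0):
    """Greedy nearest-centroid ID assignment in 3D.

    prev       : {track_id: (x, y, z)} centroids from the previous frame
    detections : list of (x, y, z) centroids in the current frame
    next_id    : first unused track id
    Returns ({track_id: centroid}, updated next_id).
    """
    assigned, used = {}, set()
    for det in detections:
        best, best_d = None, max_dist
        for tid, c in prev.items():
            if tid in used:
                continue
            d = math.dist(det, c)
            if d < best_d:
                best, best_d = tid, d
        if best is None:            # nothing close enough: open a new track
            best, next_id = next_id, next_id + 1
        used.add(best)
        assigned[best] = det
    return assigned, next_id
```

The depth dimension is what makes this workable after crossovers: two people at different distances from the camera stay separable even when they overlap in the 2D image.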

Fusion algorithm for Integrated Face and Gait Identification (얼굴과 발걸음을 결합한 인식)

  • Nizami, Imran Fareed;Hong, Sug-Jun;Lee, Hee-Sung;Ann, Toh-Kar;Kim, Eun-Tai;Park, Mig-Non
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.15-18 / 2007
  • Identification of humans from multiple viewpoints is an important task for surveillance and security purposes. For optimal performance, the system should use the maximum information available from its sensors. Multimodal biometric systems can utilize more than one physiological or behavioral characteristic for enrollment, verification, or identification. Since gait alone is not yet established as a highly distinctive feature, this paper presents an approach that fuses face and gait for identification. We consider the single-camera case, i.e., both face and gait recognition are performed on the same set of images captured by one camera. The aim is to improve system performance by utilizing the maximum amount of information available in the images. Fusion is performed at the decision level. The proposed algorithm is tested on the NLPR database.
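
Decision-level fusion can be sketched as a weighted combination of per-modality match scores over the candidate identities. The score-sum rule and the face-favoring weights below are assumptions for illustration, not the paper's exact rule.

```python
def fuse_decisions(face_scores, gait_scores, w_face=0.7, w_gait=0.3):
    """Decision-level fusion by weighted score sum over candidate IDs.

    face_scores, gait_scores : {candidate_id: match score} per modality
    Returns the candidate with the highest combined score. The weights
    (face favoured, since gait is less distinctive) are an assumption.
    """
    candidates = set(face_scores) | set(gait_scores)
    combined = {c: w_face * face_scores.get(c, 0.0) +
                   w_gait * gait_scores.get(c, 0.0)
                for c in candidates}
    return max(combined, key=combined.get)
```

With these weights, a strong face match dominates a weak gait match, but gait can still break ties when face scores are close.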

Automatic identification of ARPA radar tracking vessels by CCTV camera system (CCTV 카메라 시스템에 의한 ARPA 레이더 추적선박의 자동식별)

  • Lee, Dae-Jae
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.45 no.3 / pp.177-187 / 2009
  • This paper describes an automatic video surveillance system (AVSS) with long range and 360° coverage that is automatically rotated in an elevation-over-azimuth mode in response to the TTM (tracked target message) signal of vessels tracked by ARPA (automatic radar plotting aids) radar. The AVSS is a video security and tracking system supported by ARPA radar, a CCTV (closed-circuit television) camera system, and other sensors. It automatically identifies, tracks, and detects potentially dangerous situations such as collision accidents at sea and berthing/de-berthing accidents in harbor, and it can be used to monitor illegal fishing vessels in inshore and offshore fishing grounds and to improve the security and safety of domestic fishing vessels in the EEZ (exclusive economic zone). The movement of a target vessel chosen by the ARPA radar operator can be tracked automatically by a CCTV camera system interfaced to the ECDIS (electronic chart display and information system), with special functions such as graphic presentation of the CCTV image, camera position, camera azimuth, and angle of view on the ENC; automatic and manual control of the pan and tilt angles of the CCTV system; and the ability to replay and continuously record all information on a selected target. Test results showed that the experimentally developed AVSS can serve as an extra navigation aid for the bridge operator in confusing traffic situations, improve the detection efficiency of small targets in sea clutter, greatly enhance the operator's ability to visually identify vessels tracked by ARPA radar, and provide a recorded history for reference or evidentiary purposes in the EEZ.
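
A hypothetical mapping from a TTM target report (range and bearing) to pan/tilt commands for an elevation-over-azimuth positioner might look like the sketch below. The actual interface and geometry are not given in the abstract; the function name, parameters, and flat-earth depression-angle model are all assumptions.

```python
import math

def camera_pan_tilt(target_range_m, target_bearing_deg, camera_height_m):
    """Illustrative TTM-to-positioner mapping: pan follows the target
    bearing, tilt points down by the depression angle implied by the
    camera mounting height and the target range."""
    pan = target_bearing_deg % 360.0
    tilt = -math.degrees(math.atan2(camera_height_m, target_range_m))
    return pan, tilt
```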

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering / v.26 no.1 / pp.21-28 / 2001
  • Major deficiencies of current automation schemes, including various bioproduction robots, are the lack of task adaptability and real-time processing, low performance on diverse tasks, lack of robustness of task results, high system cost, and failure to earn the operator's trust. This paper proposes a scheme to overcome these limitations of conventional computer-controlled automatic systems: man-machine hybrid automation via tele-operation, which can handle various bioproduction processes. The scheme has two parts: efficient task sharing between the operator and the CCM (computer-controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the tasks of object identification and extraction of an object's 3D coordinates were selected. 3D coordinate information was obtained through camera calibration, using the camera as a measurement device. Two stereo images were acquired by moving one camera a fixed distance in the horizontal direction, normal to the focal axis, and capturing an image at each location. The transformation matrix for camera calibration was obtained via a least-squares approach using six known pairs of data points in the 2D image and 3D world space. The 3D world coordinate was then computed from the two sets of image pixel coordinates using the calibrated transformation matrices. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. The operator indicated an object by touching its captured image on the touch-pad screen; a local image-processing area of a certain size was then specified around the touch, and image processing was performed within that area to extract the desired features of the object. An MS Windows-based interface was developed using Visual C++ 6.0, with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.
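
The least-squares calibration from known 2D-3D point pairs matches the classical direct linear transformation (DLT); a sketch under that assumption follows. The abstract confirms the least-squares fit with six point pairs but not the exact parameterization, so the p34 = 1 normalization below is an assumption.

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Estimate the 3x4 projection matrix (11 DLT parameters; the 12th
    entry is fixed to 1) from >= 6 world/image correspondences by
    linear least squares."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.extend([u, v])
    p, *_ = np.linalg.lstsq(np.asarray(A, dtype=float),
                            np.asarray(b, dtype=float), rcond=None)
    return np.append(p, 1.0).reshape(3, 4)

def project(P, Xw):
    """Project a 3D world point with the calibrated matrix."""
    x = P @ np.append(Xw, 1.0)
    return x[:2] / x[2]
```

With noise-free correspondences the fit reproduces the projections exactly; real calibration data leaves a small residual, which the least-squares solution minimizes.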

Development and Application of High-resolution 3-D Volume PIV System by Cross-Correlation (해상도 3차원 상호상관 Volume PIV 시스템 개발 및 적용)

  • Kim Mi-Young;Choi Jang-Woon;Lee Hyun;Lee Young-Ho
    • Proceedings of the KSME Conference / 2002.08a / pp.507-510 / 2002
  • An algorithm for 3-D particle image velocimetry (3D-PIV) was developed for measuring the 3-D velocity field of complex flows. The measurement system consists of two or three CCD cameras and one RGB image grabber. The flow field size is 1500×100×180 mm, the tracer particles are Nylon 12 (1 mm), and the illuminator is a halogen lamp (100 W). Stereo photogrammetry is adopted for the three-dimensional geometric measurement of the tracer particles. For stereo-pair matching, the camera parameters must first be determined by camera calibration; the calibration is based on the collinearity equations, and the eleven parameters of each camera are obtained from it. The epipolar line is used for stereo-pair matching. The 3-D position of a particle is calculated from the camera parameters, the centers of projection of the cameras, and the photographic coordinates of the particle, based on the collinearity condition. Velocity vectors are found from the 3-D particle positions in the first and second frames, and the continuity equation is applied to remove erroneous vectors. Various 3D-PIV animation techniques were also developed in this study.
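
Computing a particle's 3-D position from calibrated cameras and matched image coordinates (the collinearity condition) can be sketched as linear triangulation. This is a standard two-view formulation, shown here for two cameras; the paper's three-camera case adds two more rows per extra view.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: find the 3-D point whose projections through
    the two 3x4 camera matrices best match the observed image coordinates
    (least-squares intersection of the two collinearity rays)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector = homogeneous 3-D point
    X = Vt[-1]
    return X[:3] / X[3]
```

In a full PIV pipeline, the epipolar constraint narrows the candidate matches before this triangulation step is applied to each matched pair.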

Location Identification Using a Fisheye Lens and Landmarks Placed on Ceiling in a Cleaning Robot (어안렌즈와 천장의 위치인식 마크를 활용한 청소로봇의 자기 위치 인식 기술)

  • Kang, Tae-Gu;Lee, Jae-Hyun;Jung, Kwang-Oh;Cho, Deok-Yeon;Yim, Choog-Hyuk;Kim, Dong-Hwan
    • Journal of Institute of Control, Robotics and Systems / v.15 no.10 / pp.1021-1028 / 2009
  • In this paper, a location identification method for a cleaning robot is introduced, using a camera that shoots the room ceiling on which three point landmarks are projected. These three points come from a laser source placed on the auto-charger. A fisheye lens covering almost 150 degrees is used, and the image is passed to a camera image grabber. The widely shot image has an inevitable distortion even though a wide range is covered; this distortion is flattened using an image-warping scheme. Several vision-processing techniques, such as intersection extraction, erosion, and curve fitting, are employed. The three point marks are then identified and their correspondence investigated. Through this image processing and distortion adjustment, the robot's location is identified over a wide geometrical coverage.
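
Flattening the fisheye distortion by image warping can be illustrated with a per-pixel radial remap. An equidistant fisheye projection (r = f·θ) is assumed below, since the paper only states that the distortion is flattened; other lens models would change the formula but not the structure.

```python
import math

def undistort_point(u, v, cx, cy, f):
    """Map a fisheye pixel to an ideal pinhole image plane.

    Assumes an equidistant fisheye model (radius r = f * theta); valid
    while theta stays below 90 degrees, which covers a 150-degree lens.
    """
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (u, v)                  # optical centre is a fixed point
    theta = r / f                      # incidence angle under the model
    scale = (f * math.tan(theta)) / r  # pinhole radius / fisheye radius
    return (cx + dx * scale, cy + dy * scale)
```

Warping a whole frame is this mapping applied in reverse for every output pixel, followed by interpolation.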

A Study on Three-Dimensional Motion Tracking Technique for Floating Structures Using Digital Image Processing (디지털 화상처리를 이용한 부유식 구조물의 3차원운동 계측법에 관한 연구)

  • Jo, Hyo-Je;Do, Deok-Hui
    • Journal of Ocean Engineering and Technology / v.12 no.2 s.28 / pp.121-129 / 1998
  • A quantitative non-contact multi-point measurement system using digital image processing is proposed for measuring the three-dimensional motion of floating vessels. The instantaneous three-dimensional movement of a floating structure in a small water tank is measured with this system, and its motion is reconstructed from the measurement results. The validity of the system is verified by position identification of spatially distributed, known positions of the basic landmarks set for camera calibration. The system is expected to be applicable to non-contact measurement of unsteady physical phenomena, especially the three-dimensional motion of floating vessels in laboratory model tests.
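
Reconstructing the rigid-body motion of the structure from measured 3-D landmark positions can be sketched with a least-squares rigid fit (the Kabsch algorithm). This is an illustrative assumption about the reconstruction step, not necessarily the paper's method.

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rigid transform (Kabsch): find R, t such that
    q_i ~= R p_i + t for corresponding landmark rows of P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    t = cQ - R @ cP
    return R, t
```

Applied frame by frame to the triangulated landmark positions, this yields the six-degree-of-freedom motion history of the floating body.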
