• Title/Summary/Keyword: Camera Model Identification


Deep learning based Person Re-identification with RGB-D sensors

  • Kim, Min;Park, Dong-Hyun
    • 한국컴퓨터정보학회논문지 / Vol. 26, No. 3 / pp.35-42 / 2021
  • This study proposes a method for recognizing people with a deep learning model, based on pedestrians' skeletal coordinates extracted with a 3D RGB-D Xtion2 camera together with dynamic features (velocity, acceleration). The core goal of this paper is to extract coordinates easily with the RGB-D camera and to recognize gait patterns automatically with a self-designed one-dimensional convolutional neural network classifier (1D-ConvNet) built on the newly generated dynamic features. Experiments were conducted to examine the recognition accuracy of the 1D-ConvNet and the effect of the dynamic features on that accuracy. Accuracy was measured with the F1 score, and the influence of the dynamic features was measured by comparing a classifier that uses them (JCSpeed) against one that does not (JC). As a result, the classifier that considered the dynamic features achieved an F1 score about 8% higher than the one that did not.
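
A minimal sketch of the idea described above, assuming hypothetical layer sizes and sequence dimensions (the paper's exact 1D-ConvNet layout is not given in the abstract): per-frame joint coordinates are augmented with frame-to-frame velocity and acceleration, then classified by a 1D convolutional network.

```python
# Sketch: 1D-CNN gait classifier over skeleton sequences with dynamic features.
# Shapes and layer sizes are assumptions, not the authors' configuration.
import numpy as np
import tensorflow as tf

def add_dynamics(joints):
    """joints: (frames, coords). Append velocity and acceleration channels."""
    vel = np.gradient(joints, axis=0)   # frame-to-frame velocity
    acc = np.gradient(vel, axis=0)      # frame-to-frame acceleration
    return np.concatenate([joints, vel, acc], axis=-1)

n_frames, n_coords, n_subjects = 60, 45, 10     # assumed dimensions
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_frames, n_coords * 3)),  # joints + vel + acc
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_subjects, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```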

디지털 화상처리를 이용한 부유식 구조물의 3차원운동 계측법에 관한 연구 (A Study on Three-Dimensional Motion Tracking Technique for Floating Structures Using Digital Image Processing)

  • 조효제;도덕희
    • 한국해양공학회지 / Vol. 12, No. 2 (Serial No. 28) / pp.121-129 / 1998
  • A quantitative non-contact multi-point measurement system based on digital image processing is proposed for measuring the three-dimensional movement of floating vessels. The instantaneous three-dimensional motion of a floating structure in a small water tank is measured by the system and reconstructed from the measurement results. The validity of the system is verified by identifying the positions of spatially distributed landmarks whose coordinates are known and which are used for camera calibration. The system is expected to be applicable to non-contact measurement of unsteady physical phenomena, especially the three-dimensional motion of floating vessels in laboratory model tests.
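
The verification step described above (recovering known landmark positions through the calibrated camera) can be illustrated with a generic pose-from-correspondences routine; the values below are purely illustrative and this is not the paper's own calibration code.

```python
# Generic sketch: recover camera pose from known 3D landmark positions and
# their image projections, then check the mean reprojection error.
import numpy as np
import cv2

object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                       [0.5, 0.5, 0.3], [0.2, 0.8, 0.1]], dtype=np.float32)
image_pts = np.array([[320, 240], [420, 242], [418, 330], [322, 328],
                      [370, 270], [345, 310]], dtype=np.float32)  # made-up pixels
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
proj, _ = cv2.projectPoints(object_pts, rvec, tvec, K, None)
reproj_err = np.linalg.norm(proj.reshape(-1, 2) - image_pts, axis=1).mean()
print("mean reprojection error (px):", reproj_err)
```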


레이저 스캔 카메라 보정을 위한 성능지수기반 다항식 모델 (Performance Criterion-based Polynomial Calibration Model for Laser Scan Camera)

  • 백경동;천성표;김수대;김성신
    • 한국지능시스템학회논문지 / Vol. 21, No. 5 / pp.555-563 / 2011
  • Image distortion correction establishes the relationship between the image coordinate system (image) and the global coordinate system (object). Conventional correction of distorted images mainly finds the physical relationship between the image and global coordinate systems by modeling the optical characteristics of the camera. In this paper, distorted images are corrected using a performance criterion-based polynomial model. This model assumes a polynomial relationship between the image and global coordinate systems, and then determines the coefficients and the degree of the polynomial using coordinate data from the image and the object together with a performance criterion. The proposed model aims to overcome limitations of conventional correction methods, such as the overfitting problem. Its validity is verified by applying the proposed method to two-dimensional images acquired with a laser scan camera.
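
A minimal sketch of the general approach, using synthetic data and cross-validation as a stand-in criterion (the paper's specific performance index is not reproduced here): fit image-to-world polynomial mappings of increasing degree and select the degree that minimizes a held-out error, which is how overfitting is kept in check.

```python
# Sketch: choose the degree of an image-to-world polynomial mapping by a
# validation criterion to avoid overfitting (illustrative data, not the paper's).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
img_xy = rng.uniform(0, 640, size=(200, 2))              # image coordinates (px)
world_xy = 0.01 * img_xy + 1e-5 * img_xy**2 \
           + rng.normal(scale=0.05, size=img_xy.shape)   # synthetic world coords

best_degree, best_score = None, -np.inf
for degree in range(1, 6):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, img_xy, world_xy,
                            scoring="neg_mean_squared_error", cv=5).mean()
    if score > best_score:
        best_degree, best_score = degree, score
print("selected polynomial degree:", best_degree)
```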

CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템 (Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera)

  • 김승훈;정일균;박창우;황정훈
    • 제어로봇시스템학회논문지 / Vol. 17, No. 3 / pp.229-235 / 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-light system and an infrared system is proposed. The proposed system separates the object by combining the ROIs (regions of interest) estimated from two different images obtained from a heterogeneous sensor pair that combines an ordinary CCD camera and an IR (infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram, optical flow, skin-color model, and Haar model. The pose of the human body is also estimated from the body detection result in the IR image using the PCA algorithm together with the AdaBoost algorithm. The results from each detection algorithm are then fused to extract the best detection result. To verify the heterogeneous sensor fusion system, experiments were carried out in various environments. The experimental results indicate good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robots or home systems; it also extends to surveillance and military systems.
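
A toy illustration of one ingredient of such a fusion pipeline, with hypothetical file names, thresholds, and the assumption that the two frames are already registered: detect a face with a Haar cascade in the visible image and a warm region in the IR image, then keep only detections whose ROIs overlap.

```python
# Toy fusion sketch: Haar face detection on a CCD frame plus hot-region
# detection on an aligned IR frame; keep faces confirmed by a warm blob.
# File names, threshold, and alignment assumption are hypothetical.
import cv2

ccd = cv2.imread("ccd_frame.png", cv2.IMREAD_GRAYSCALE)
ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)   # assumed pre-registered

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(ccd, scaleFactor=1.1, minNeighbors=5)

_, hot = cv2.threshold(ir, 200, 255, cv2.THRESH_BINARY)  # warm regions

fused = []
for (x, y, w, h) in faces:
    # keep the face only if at least 20% of its ROI is warm in the IR image
    if cv2.countNonZero(hot[y:y + h, x:x + w]) > 0.2 * w * h:
        fused.append((x, y, w, h))
print("faces confirmed by IR:", fused)
```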

이동로봇의 자동충전을 위한 어안렌즈 카메라의 보정 및 인공표지의 검출 (Fish-eye camera calibration and artificial landmarks detection for the self-charging of a mobile robot)

  • 권오상
    • 센서학회지 / Vol. 14, No. 4 / pp.278-285 / 2005
  • This paper describes camera calibration and artificial landmark detection techniques for the automatic charging of a mobile robot equipped with a fish-eye camera facing its direction of travel for movement or surveillance purposes. For identification against the surrounding environment, three landmarks fitted with infrared LEDs were installed at the charging station. When the robot reaches a certain point, a signal is sent to activate the LEDs, which allows the robot to easily detect the landmarks with its vision camera. To eliminate the effect of outside light interference during this process, a difference image was generated by comparing the two images taken with the LEDs on and off, respectively. A fish-eye lens was used for the robot's vision camera, but the wide-angle lens introduced significant image distortion. The radial lens distortion was corrected after a linear perspective projection transformation based on the pin-hole model. In experiments, the designed system showed a sensing accuracy of ±10 mm in position and ±1° in orientation at a distance of 550 mm.
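
A compact sketch of the two steps described in the abstract, with hypothetical file names, intrinsics, and distortion coefficients: build a difference image from the LED-on and LED-off frames, then undistort it under a pinhole model with radial coefficients.

```python
# Sketch: landmark isolation by LED on/off differencing, followed by radial
# distortion correction under a pinhole model (all parameters are assumed).
import cv2
import numpy as np

on = cv2.imread("leds_on.png", cv2.IMREAD_GRAYSCALE)
off = cv2.imread("leds_off.png", cv2.IMREAD_GRAYSCALE)
diff = cv2.absdiff(on, off)                        # suppress ambient light
_, landmarks = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)

K = np.array([[420.0, 0, 320.0],                   # assumed intrinsics
              [0, 420.0, 240.0],
              [0, 0, 1.0]])
dist = np.array([-0.30, 0.08, 0.0, 0.0])           # k1, k2, p1, p2 (assumed)
undistorted = cv2.undistort(landmarks, K, dist)
```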

AGV의 작업자 식별 및 회피를 위한 2D 공간 지도 구성 (2D Spatial-Map Construction for Workers Identification and Avoidance of AGV)

  • 고정환
    • 전자공학회논문지 / Vol. 49, No. 9 / pp.347-352 / 2012
  • This paper proposes a stereo-camera-based 2D spatial map construction method for worker identification and avoidance by an AGV, aimed at intelligent path planning. First, the face region and center coordinates of a moving worker are detected in the left image of the stereo pair using the YCbCr color model and a center-of-gravity method, and the stereo camera is steered according to the detected coordinates so that the moving worker is detected in real time. Next, depth information is computed from the disparity between the left and right images of the tracking-controlled stereo camera and the camera's internal transformation parameters. From the resulting depth map, two-dimensional spatial coordinates are obtained by taking the minimum value in each column, which yields the distance and actual coordinates between the AGV and the worker as well as the relative distances to other objects; based on these position coordinates, the AGV drives autonomously through intelligent path estimation and decision-making. In experiments using 240 frames of stereo images input in real time, the error between the worker's width computed from the 2D spatial coordinates and the actually measured value remained below 1.8% on average, demonstrating the feasibility of a more intelligent AGV system.
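
The depth step above rests on the usual stereo relation Z = f·B/d. A minimal sketch with assumed focal length, baseline, and file names, which also takes the per-column minimum depth to form the 2D spatial profile described in the abstract:

```python
# Sketch: depth from stereo disparity, then a 2D spatial profile from the
# per-column minimum depth (focal length and baseline are assumed values).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # px

f_px, baseline_m = 700.0, 0.12                     # assumed camera parameters
valid = disparity > 0
depth = np.full_like(disparity, np.inf)
depth[valid] = f_px * baseline_m / disparity[valid]  # Z = f * B / d

# nearest obstacle distance for each image column -> one row of the 2D map
column_profile = depth.min(axis=0)
```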

Estimation trial for rice production by simulation model with unmanned air vehicle (UAV) in Sendai, Japan

  • Homma, Koki;Maki, Masayasu;Sasaki, Goshi;Kato, Mizuki
    • 한국작물학회:학술대회논문집 / 9th Asian Crop Science Association Conference, 2017 / pp.46-46 / 2017
  • We developed a rice simulation model for remote sensing (SIMRIW-RS, Homma et al., 2007) to evaluate rice production and management on a regional scale. Here, we report an application trial to estimate rice production in farmers' fields in Sendai, Japan. The remote-sensing data for the trial were obtained periodically with a multispectral camera (RGB + NIR and RedEdge) mounted on an unmanned air vehicle (UAV). The airborne images had a resolution of 8 cm, attained at a flight altitude of 115 m. The remote-sensing data corresponded reasonably well with the leaf area index (LAI) of rice and its spatial and temporal variation, although the correspondence contained some errors due to locational inaccuracy. Calibrating the simulation model against the first two remote-sensing acquisitions (obtained around one month after transplanting and at panicle initiation) predicted well the rice growth evaluated by the third acquisition. The parameters obtained through the calibration may reflect soil fertility and will be utilized for nutritional management. Although the estimation accuracy still needs to be improved, rice yield was also estimated well. These results suggest that further data accumulation and more accurate locational identification are needed to improve the estimation accuracy.


무인감시장치 구현을 위한 단일 이동물체 추적 알고리즘 (A Single Moving Object Tracking Algorithm for an Implementation of Unmanned Surveillance System)

  • 이규원;김영호;이재구;박규태
    • 전자공학회논문지B / Vol. 32B, No. 11 / pp.1405-1416 / 1995
  • An effective algorithm for implementing an unmanned surveillance system is proposed, which detects a moving object from image sequences, predicts its direction, and drives the camera in real time. The outputs of the proposed algorithm are the coordinates of the moving object's location, converted into values according to the camera model. As preprocessing, extraction of the moving object and shape discrimination are performed. The existence of a moving object or a scene change is detected by computing the temporal derivatives of two or more consecutive images in a sequence, and this derivative result is combined with the edge map of one original gray-level image to obtain the position of the moving object. Shape discrimination (target identification) is performed by analyzing the distribution of projection profiles in the x and y directions. To reduce the prediction error caused by the fact that a walking person may change moving direction abruptly, an order-adaptive lattice-structured linear predictor is proposed.
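
A small sketch of the detection stage as described above, with placeholder file names and thresholds: difference consecutive frames, gate the result with an edge map, and locate the object from the x/y projection profiles.

```python
# Sketch: motion detection by temporal differencing gated by an edge map,
# with the object located from x/y projection profiles (thresholds assumed).
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

motion = cv2.absdiff(curr, prev)
_, motion = cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)
edges = cv2.Canny(curr, 80, 160)
candidate = cv2.bitwise_and(motion, edges)         # moving edges only

proj_x = candidate.sum(axis=0)                     # column-wise profile
proj_y = candidate.sum(axis=1)                     # row-wise profile
cx, cy = int(np.argmax(proj_x)), int(np.argmax(proj_y))
print("estimated object position:", (cx, cy))
```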


인공지능(AI)을 활용한 드론방어체계 성능향상 방안에 관한 연구 (A study on Improving the Performance of Anti - Drone Systems using AI)

  • 마해철;문종찬;박재영;이수한;권혁진
    • 시스템엔지니어링학술지 / Vol. 19, No. 2 / pp.126-134 / 2023
  • Drones are emerging as a new security threat, and efforts are under way worldwide to counter them. Detection and identification are the most difficult and important parts of anti-drone systems. Existing detection and identification methods each have strengths and weaknesses, so they must be operated in a complementary manner. Detection and identification performance in anti-drone systems can be improved through the use of artificial intelligence, because artificial intelligence can quickly analyze differences too small for humans to perceive. There are three ways to utilize artificial intelligence. Through reinforcement-learning-based physical control, the noise and blur generated when an optical camera tracks a drone can be reduced and tracking stability improved. The recent NeRF algorithm can be used to address the lack of enemy drone data. Finally, a data network is needed to make use of artificial intelligence; through it, data can be collected and managed efficiently, and model performance can be improved by regularly generating training data.

Automatic Wood Species Identification of Korean Softwood Based on Convolutional Neural Networks

  • Kwon, Ohkyung;Lee, Hyung Gu;Lee, Mi-Rim;Jang, Sujin;Yang, Sang-Yun;Park, Se-Yeong;Choi, In-Gyu;Yeo, Hwanmyeong
    • Journal of the Korean Wood Science and Technology / Vol. 45, No. 6 / pp.797-808 / 2017
  • Automatic wood species identification systems have enabled fast and accurate identification of wood species outside specialized laboratories staffed with well-trained experts in wood species identification. Conventional automatic wood species identification systems consist of two major parts: a feature extractor and a classifier. Feature extractors require hand-engineering to obtain optimal features that quantify the content of an image. A Convolutional Neural Network (CNN), one of the deep learning methods, trained for wood species can extract intrinsic feature representations and classify them correctly, and it usually outperforms classifiers built on top of hand-tuned extracted features. We developed an automatic wood species identification system utilizing CNN models such as LeNet, MiniVGGNet, and their variants. A smartphone camera was used to obtain macroscopic images of rough-sawn surfaces from wood cross sections. Five Korean softwood species (cedar, cypress, Korean pine, Korean red pine, and larch) were classified by the CNN models. The most accurate and stable CNN model was LeNet3, which adds two layers to the original LeNet architecture. The species identification accuracy of the LeNet3 architecture for the five Korean softwood species was 99.3%. The results showed that the automatic wood species identification system is fast and accurate, and small enough to be deployed on a mobile device such as a smartphone.
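
A sketch of a LeNet-style classifier of the kind described above; the input resolution, filter counts, and the exact extra layers of LeNet3 are assumptions for illustration, not the authors' published configuration.

```python
# LeNet-style CNN sketch for 5-class wood species images (layer sizes and
# input resolution are assumptions, not the authors' exact LeNet3).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(20, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(50, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(100, 3, activation="relu"),  # extra block, in the
    tf.keras.layers.MaxPooling2D(2),                    # spirit of a "LeNet3"-like variant
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(500, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),     # 5 Korean softwood species
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```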