• Title/Summary/Keyword: Camera-based Recognition (카메라 기반 인식)


Verification of Spatial Resolution in DMC Imagery using Bar Target (Bar 타겟을 이용한 DMC 영상의 공간해상력 검증)

  • Lee, Tae Yun;Lee, Jae One;Yun, Bu Yeol
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.5 / pp.485-492 / 2012
  • Today, digital airborne imaging sensors play an important role in constructing the National Spatial Data Infrastructure. However, an appropriate quality assessment procedure for the acquired digital images must precede their use as data of high precision and reliability. Many studies have therefore been conducted, at home and abroad, to assess the quality of digital images. In this regard, many test fields have already been established and operated in Europe and America to calibrate digital photogrammetric airborne imaging systems. These test fields contain not only GCPs (Ground Control Points) to test the geometric performance of a digital camera, but also various types of targets to evaluate its spatial and radiometric resolution. The purpose of this paper is to present a method for verifying the spatial resolution of the Intergraph DMC digital camera, together with results from an experimental field test. In the field test, a simple bar target that is easily identified in the image is used to check the spatial resolution. Images theoretically designed for a 12 cm GSD (Ground Sample Distance) were used to calculate the actual resolution for all sub-images and virtual images, in the flight direction as well as in the cross-flight direction. The results showed that the actual image resolution was about 0.6 cm worse than the theoretically expected resolution. In addition, the greatest difference between them, 1.5 cm, was found in the image at the block edge.
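
The GSD relation behind the abstract's "theoretically designed" resolution can be sketched as follows; the pixel size, focal length, and flying height below are illustrative assumptions, not the actual DMC flight parameters.

```python
def ground_sample_distance(pixel_size_m, focal_length_m, flying_height_m):
    """GSD = detector pixel size scaled by the flying-height-to-focal-length ratio."""
    return pixel_size_m * flying_height_m / focal_length_m

# Illustrative values only: a 12 um pixel, 120 mm focal length, 1200 m flying height
gsd = ground_sample_distance(12e-6, 0.120, 1200.0)
print(round(gsd, 3))  # 0.12 m, i.e. a 12 cm GSD
```

Comparing this theoretical GSD against the resolution actually resolved on a bar target is what yields the 0.6 cm discrepancy reported above.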

Phenophase Extraction from Repeat Digital Photography in the Northern Temperate Type Deciduous Broadleaf Forest (온대북부형 낙엽활엽수림의 디지털 카메라 반복 이미지를 활용한 식물계절 분석)

  • Han, Sang Hak;Yun, Chung Weon;Lee, Sanghun
    • Journal of Korean Society of Forest Science / v.109 no.4 / pp.361-370 / 2020
  • Long-term observation of the life cycle of plants allows the identification of critical signals of the effects of climate change on plants; indeed, plant phenology is the simplest approach to detecting climate change. Observing seasonal changes in plants with repeat digital imaging helps overcome the limitations of both traditional methods and satellite remote sensing. In this study, we demonstrate the utility of camera-based repeat digital imaging in this context. We observed the biological events of plants and quantified their phenophases in the northern temperate type deciduous broadleaf forest of Jeombong Mountain. This study aimed to identify trends in the seasonal characteristics of Quercus mongolica (deciduous broadleaf forest) and Pinus densiflora (evergreen coniferous forest). The vegetation index, green chromatic coordinate (GCC), was calculated from the RGB channel image data. The GCC amplitude was smaller in the evergreen coniferous forest than in the deciduous forest, and the slope of the GCC (increasing in spring and decreasing in autumn) was more moderate in the evergreen coniferous forest than in the deciduous forest. In the pine forest, the beginning of growth occurred earlier than in the oak forest, whereas the end of growth occurred later. Verification of the phenophases showed high accuracy, with root-mean-square error (RMSE) values of 0.008 (region of interest [ROI] 1) and 0.006 (ROI 3). These results reflect the tendency of the GCC trajectory in a northern temperate type deciduous broadleaf forest. Based on the results, we propose that repeat imaging using digital cameras will be useful for the observation of phenophases.
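
The GCC index used above is the standard green fraction of the RGB digital numbers, GCC = G / (R + G + B); a minimal sketch, with illustrative ROI mean values rather than the study's data:

```python
def green_chromatic_coordinate(r, g, b):
    """GCC = G / (R + G + B), computed from the mean RGB digital numbers of an ROI."""
    total = r + g + b
    if total == 0:
        return 0.0  # guard against an all-black ROI
    return g / total

# Illustrative mean ROI values: a green canopy vs. a senescing one
print(green_chromatic_coordinate(80, 120, 60))   # higher GCC in the growing season
print(green_chromatic_coordinate(110, 100, 70))  # lower GCC after leaf coloring
```

Tracking this scalar per ROI across the image time series produces the seasonal GCC trajectory whose amplitude and slope the study compares between forest types.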

Contactless User Identification System using Multi-channel Palm Images Facilitated by Triple Attention U-Net and CNN Classifier Ensemble Models

  • Kim, Inki;Kim, Beomjun;Woo, Sunghee;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.33-43 / 2022
  • In this paper, we propose an ensemble model that combines multi-channel palm images, attention U-Net models, and pretrained convolutional neural networks (CNNs) to establish a contactless palm-based user identification system using conventional, inexpensive camera sensors. Attention U-Net models extract the areas of interest, including hands (i.e., with fingers), palms (i.e., without fingers), and palm lines, which are combined into three channels fed into the ensemble classifier. The proposed palm-information-based user identification system then predicts the class using a classifier ensemble of three outperforming pre-trained CNN models. The proposed model achieves classification accuracy, precision, recall, and F1-score of 98.60%, 98.61%, 98.61%, and 98.61%, respectively, which indicates that it is effective even with very inexpensive image sensors. We believe that under the COVID-19 pandemic, the proposed palm-based contactless user identification system can be a safe and reliable alternative to the currently dominant contact-based systems.
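
The final voting step of a CNN classifier ensemble like the one described can be sketched as soft voting over class-probability vectors; the three probability vectors below are hypothetical, not outputs of the paper's models.

```python
def soft_vote(prob_lists):
    """Average the class-probability vectors from several classifiers; return (argmax, averages)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Three hypothetical CNN outputs over 4 identity classes
preds = [[0.7, 0.1, 0.1, 0.1],
         [0.4, 0.3, 0.2, 0.1],
         [0.2, 0.5, 0.2, 0.1]]
label, avg = soft_vote(preds)
print(label)  # class 0 wins the averaged vote
```

Averaging probabilities (rather than hard-voting on labels) lets a confident model outweigh two lukewarm ones, which is one common reason ensembles of this kind beat any single member.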

Algorithm on Detection and Measurement for Proximity Object based on the LiDAR Sensor (LiDAR 센서기반 근접물체 탐지계측 알고리즘)

  • Jeong, Jong-teak;Choi, Jo-cheon
    • Journal of Advanced Navigation Technology / v.24 no.3 / pp.192-197 / 2020
  • Recently, technologies related to autonomous driving have been studied with the goals of safe operation and accident prevention. Radar and camera technologies have been used to detect obstacles in autonomous-vehicle research. LiDAR sensors are now being considered for detecting nearby objects and accurately measuring separation distances in autonomous navigation. A LiDAR sensor calculates distance from the time difference of the reflected beams, which allows precise distance measurement, but it has the disadvantage that the object recognition rate can be reduced by atmospheric conditions. In this paper, point cloud data, trigonometric functions, and a linear regression model are used to implement a measurement algorithm that improves real-time object detection and reduces the error in measured separation distances by improving the reliability of the raw data from the LiDAR sensor. It is verified, using the Python Imaging Library, that the range of object detection errors can be improved.
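
The two numerical steps mentioned above — time-of-flight ranging and fitting a line to noisy returns — can be sketched as follows; the timing value and sample points are illustrative assumptions, not the paper's data or implementation.

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    """Time-of-flight ranging: the beam travels out and back, so halve the path length."""
    return C * round_trip_time_s / 2.0

def fit_line(points):
    """Ordinary least-squares line y = a*x + b through noisy 2-D LiDAR returns."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

print(round(tof_distance(66.7e-9), 2))  # ~10 m for a ~66.7 ns round trip
```

Fitting a regression line through the returns belonging to one surface is one way to smooth per-beam noise before measuring the separation distance to that surface.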

Super-Pixel-Based Segmentation and Classification for UAV Image (슈퍼 픽셀기반 무인항공 영상 영역분할 및 분류)

  • Kim, In-Kyu;Hwang, Seung-Jun;Na, Jong-Pil;Park, Seung-Je;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.18 no.2 / pp.151-157 / 2014
  • Recently, UAVs (unmanned aerial vehicles) have been frequently used not only for military but also for civil purposes. A UAV navigates automatically, following coordinates input in advance, using GPS information. However, this is impossible when GPS signals cannot be received because of jamming or external interference. To solve this problem, we propose a real-time segmentation and classification algorithm for specific regions in UAV images. We use a super-pixel algorithm based on graph-based image segmentation as a pre-processing stage for feature extraction, and choose the most suitable model by analyzing various color models and mixture color models. For classification we use a support vector machine, a machine learning algorithm that can work with a small quantity of training data. Eighteen color and texture feature vectors are extracted from the UAV image, and three classes of regions (river, vinyl house, and rice field) are then classified in real time through training and prediction.
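
The region-feature-then-classify pipeline can be illustrated with a deliberately minimal stand-in: mean-color features and a nearest-centroid rule in place of the paper's 18-D color/texture features and SVM. All values and class centroids below are hypothetical.

```python
def mean_color_feature(pixels):
    """Mean R, G, B over a super-pixel region: a tiny stand-in for an 18-D feature vector."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def nearest_centroid(feature, centroids):
    """Assign the region to the class with the closest centroid (a stand-in for the SVM)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(feature, centroids[label]))

# Hypothetical class centroids in RGB space
centroids = {"river": (60, 90, 120), "vinyl house": (200, 200, 210), "rice field": (90, 140, 70)}
region = [(58, 92, 118), (63, 88, 125)]  # two sampled pixels from one super-pixel
print(nearest_centroid(mean_color_feature(region), centroids))  # "river"
```

The real system replaces both halves with richer machinery (graph-based super-pixels, texture statistics, a trained SVM), but the per-region feature-then-label flow is the same.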

A Study on Robot Arm Control System using Detection of Foot Movement (발 움직임 검출을 통한 로봇 팔 제어에 관한 연구)

  • Ji, H.;Lee, D.H.
    • Journal of rehabilitation welfare engineering & assistive technology / v.9 no.1 / pp.67-72 / 2015
  • A system for controlling a robotic arm through foot-motion detection was implemented for disabled users who cannot freely use their arms. To capture images of foot movement, two cameras were set up in front of both feet. After defining multiple regions of interest in the acquired images using the LabVIEW-based Vision Assistant, we detected foot movement based on left/right and up/down edge detection within the left and right image areas. Control data, obtained from the left/right and up/down edge-detection counts in the two foot images, were transferred through serial communication, and the control system drove a 6-joint robotic arm in the up/down and left/right directions by foot. Experiments showed a reaction time within 0.5 seconds and an operation recognition rate of more than 88%.
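
The mapping from edge-detection counts to arm commands might be sketched as below; the threshold and the counts are assumptions for illustration, not values from the paper.

```python
def foot_command(left_edges, right_edges, up_edges, down_edges, threshold=10):
    """Map edge counts from the two foot images to a (horizontal, vertical) arm command."""
    horiz = ("left" if left_edges - right_edges > threshold
             else "right" if right_edges - left_edges > threshold
             else None)
    vert = ("up" if up_edges - down_edges > threshold
            else "down" if down_edges - up_edges > threshold
            else None)
    return horiz, vert

print(foot_command(35, 12, 8, 9))  # ('left', None): move the arm left only
```

A dead-band threshold like this keeps small, involuntary foot movements from producing spurious arm motion.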


Design of a Safety System for Getting On and Off the Kindergarten Bus using Deep-learning-based User Authentication (딥러닝 기반의 사용자인증을 활용한 어린이 버스에서 안전한 승차 및 하차 시스템 설계)

  • Mun, Hyung-Jin
    • Journal of Convergence for Information Technology / v.10 no.5 / pp.111-116 / 2020
  • Recently, many safety accidents involving children's shuttle buses have taken place. Without a teacher to help, a safety accident occurs when the driver cannot see a child getting off in the blind spots at the front or back of the bus. A deep-learning-based smart mirror allows user authentication and provides various services. In particular, it can act as a helper for children and prevent accidents that can occur when drivers or assistant teachers do not see them. User authentication is carried out against children's faces registered in advance. Safety accidents can be prevented by a proximity sensor and cameras at the front and back of the bus. This study suggests a way of checking whether a child has been missed while getting on or off the bus, designs a system that reduces the blind spots in front of and behind the vehicle, and builds a safety system that provides various services using GPS.

Model-based Body Motion Tracking of a Walking Human (모델 기반의 보행자 신체 추적 기법)

  • Lee, Woo-Ram;Ko, Han-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.75-83 / 2007
  • A model-based approach to tracking the limbs of a walking human subject is proposed in this paper. The tracking process begins by building a database composed of conditional probabilities of motions between the limbs of a walking subject. With a suitable amount of video footage from the various human subjects included in the database, a probabilistic model characterizing the relationships between limb motions is developed. Motion tracking of a test subject begins with identifying and tracking limbs in the surveillance video image using edge and silhouette detection methods. When occlusion occurs in any of the tracked limbs, the approach uses the probabilistic motion model, in conjunction with a minimum-cost edge and silhouette tracking model, to determine the motion of the limb occluded in the image. The method has shown promising results in tracking occluded limbs in validation tests.
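
The conditional-probability lookup for an occluded limb can be sketched as below; the motion table is hypothetical, standing in for the database of limb-motion probabilities the paper builds from video footage.

```python
# Hypothetical table: P(occluded-limb motion | visible-limb motion)
MOTION_MODEL = {
    "left_arm_forward": {"right_leg_forward": 0.8, "right_leg_back": 0.2},
    "left_arm_back":    {"right_leg_forward": 0.3, "right_leg_back": 0.7},
}

def most_likely_occluded_motion(visible_motion):
    """Pick the occluded limb's motion with the highest conditional probability."""
    dist = MOTION_MODEL[visible_motion]
    return max(dist, key=dist.get)

print(most_likely_occluded_motion("left_arm_forward"))  # right_leg_forward
```

In the full method this prior is combined with the minimum-cost edge/silhouette evidence rather than trusted on its own, so the model only dominates when the image evidence is missing.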

Development of Korean-to-English and English-to-Korean Mobile Translator for Smartphone (스마트폰용 영한, 한영 모바일 번역기 개발)

  • Yuh, Sang-Hwa;Chae, Heung-Seok
    • Journal of the Korea Society of Computer and Information / v.16 no.3 / pp.229-236 / 2011
  • In this paper we present lightweight English-to-Korean and Korean-to-English mobile translators for smartphones. For natural translation and higher translation quality, the translation engines hybridize Translation Memory (TM) with a rule-based translation engine. To maximize the usability of the system, we combined an Optical Character Recognition (OCR) engine and a Text-to-Speech (TTS) engine as the front end and back end of the mobile translators. Under the BLEU and NIST evaluation metrics, the experimental results show that our E-K and K-E mobile translation quality reaches 72.4% and 77.7% of that of the Google translators, respectively. This shows that the quality of our mobile translators almost reaches that of server-based machine translation, demonstrating their commercial usefulness.
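
BLEU, used for the evaluation above, is built on clipped (modified) n-gram precision; a minimal sketch of that core quantity, with a toy sentence pair (the full metric combines several n-gram orders with a brevity penalty):

```python
from collections import Counter

def modified_ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision: candidate n-gram counts capped by reference counts."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate.split()), ngrams(reference.split())
    clipped = sum(min(count, ref[g]) for g, count in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

p1 = modified_ngram_precision("the cat sat on the mat", "the cat is on the mat")
print(round(p1, 3))  # 5 of 6 candidate unigrams are covered by the reference
```

Clipping is what prevents a degenerate candidate (e.g. "the the the ...") from scoring perfect precision.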

Computer Vision-based Method of Detecting an Approaching Vehicle for the Safety of a Bus Passenger Getting Off (버스 승객의 안전한 하차를 위한 컴퓨터비전 기반의 차량 탐지 시스템 개발)

  • Lee Kwang-Soon;Lee Kyung-Bok;Rho Kwang-Hyun;Han Min-Hong
    • Journal of the Institute of Convergence Signal Processing / v.6 no.1 / pp.1-7 / 2005
  • This paper describes a computer-vision-based system for detecting vehicles at the rear and rear-side that pass between the sidewalk and a bus stopped on a city road during the day. The system informs the bus driver and passengers of approaching vehicles for the safety of passengers getting off. A camera mounted at the top of the bus exit door captures the rear and rear-side image whenever the bus stops at a stop. The system sets a search area between the bus and the sidewalk in this image and detects vehicles using image differencing and Sobel filtering in this area. From the central point of a detected vehicle, its distance, speed, and direction can be found from its location, width, and length. The system alarms the driver and passengers when it judges that a situation dangerous to a passenger getting off has occurred. Experiments showed a detection rate of more than 87% while driving the bus on the road.
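
Sobel filtering, used above to find vehicle edges in the search area, can be sketched on a toy image; this is a generic horizontal-gradient kernel, not the paper's implementation.

```python
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_x(image):
    """Horizontal Sobel response on a 2-D grayscale grid (list of lists); border left at 0."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(SOBEL_X[j][i] * image[y - 1 + j][x - 1 + i]
                            for j in range(3) for i in range(3))
    return out

# A vertical intensity step, as a vehicle's side might appear against the road
img = [[0, 0, 255, 255] for _ in range(4)]
print(sobel_x(img)[1][1], sobel_x(img)[1][2])  # strong responses along the step edge
```

Thresholding such responses inside the bus-to-sidewalk search area is one way the system can localize a vehicle's edges before estimating its position and size.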
