• Title/Summary/Keyword: 무인센서카메라 (unmanned sensor camera)

Search Results: 84

Implementation of A Monitoring System using Image Data and Environment Data (영상정보와 환경정보를 이용한 실내 공간 모니터링 시스템 구현)

  • Cha, Kyung-Ae;Kwon, Cha-Uk
    • Journal of Korea Society of Industrial Information Systems / v.14 no.1 / pp.1-8 / 2009
  • The objective of this study is to design a system that automatically monitors the state of interior spaces, such as offices with heavy foot traffic, using image data together with environment data such as temperature and humidity, and to implement and test the related application programs. In practice, large amounts of image data are already collected by unmanned equipment, such as CCTV cameras, to monitor ordinary interior spaces. This image data can be used more effectively by building a system that recognizes the situation in a specific interior space from the relationship between the image and environment data. For instance, electronic devices such as air conditioners and lights can be switched on and off without human intervention by analyzing data acquired from environment sensors (temperature, humidity, and illumination) whenever no dynamic state has been maintained for a specified period of time. To implement these controls, this study analyzes environment data acquired from temperature and humidity sensors and image data from wireless cameras to recognize the current situation, which is then used to automatically control the environment variables configured by users. In experiments carried out in a laboratory, unmanned control was performed effectively: the air conditioner and lights installed in the laboratory were switched on and off automatically depending on whether motion was detected or remained undetected for a specified period of time.
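A minimal sketch of this kind of control loop, in Python with OpenCV: motion is inferred by frame differencing on the camera feed, and devices are switched when no dynamic state has been observed for a set period. The `set_device()` and `read_environment()` hooks, the thresholds, and the camera index are assumptions for illustration, not details from the paper.

```python
import time
import cv2

MOTION_THRESHOLD = 5000   # changed-pixel count treated as a "dynamic state" (assumed)
IDLE_SECONDS = 300        # period with no motion before devices are switched off (assumed)

def set_device(name, on):
    """Hypothetical control hook; replace with the actual relay/IoT interface."""
    print(f"{name} -> {'ON' if on else 'OFF'}")

def read_environment():
    """Hypothetical sensor read; replace with the temperature/humidity/illumination driver."""
    return {"temperature": 26.5, "humidity": 40.0, "illumination": 120.0}

cap = cv2.VideoCapture(0)          # wireless camera assumed to appear as device 0
ok, prev = cap.read()
assert ok, "camera not available"
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
last_motion = time.time()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Count pixels that changed noticeably since the previous frame.
    diff = cv2.threshold(cv2.absdiff(gray, prev), 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(diff) > MOTION_THRESHOLD:
        last_motion = time.time()
    prev = gray

    env = read_environment()
    occupied = (time.time() - last_motion) < IDLE_SECONDS
    # Unmanned on/off control: run devices only while the room appears occupied.
    set_device("lights", occupied and env["illumination"] < 150)
    set_device("air_conditioner", occupied and env["temperature"] > 26.0)
    time.sleep(1)
```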

Detection of Moving Objects using Depth Frame Data of 3D Sensor (3D센서의 Depth frame 데이터를 이용한 이동물체 감지)

  • Lee, Seong-Ho;Han, Kyong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.5 / pp.243-248 / 2014
  • This study presents an investigation into ways of detecting areas of object movement using the Kinect depth frame, which provides 3D information regardless of external light sources. To remove noise along object boundaries in the depth information received from the sensor, a blurring technique was applied to the x and y coordinates of the pixels and a frequency filter to the z coordinate. In addition, a clustering filter based on the amount of change in adjacent pixels was applied to extract the areas of moving objects. The system was also designed to detect movements faster than a threshold determined by the filter settings, making it applicable to mobile robots. Detected movements can be used in security systems when delivered to remote sites over a network, and the approach can be extended to large-scale data by combining the related information.
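A rough sketch of such a processing chain, using NumPy/SciPy stand-ins: a spatial blur over x/y, depth differencing along z, and connected-component labeling as the clustering step. The filters, thresholds, and frame size here are assumptions rather than the authors' implementation; in particular, the z-axis frequency filter is approximated by simple smoothing and frame differencing.

```python
import numpy as np
from scipy import ndimage

DEPTH_CHANGE_MM = 80       # per-pixel depth change treated as movement (assumed)
MIN_CLUSTER_PIXELS = 400   # smaller clusters are treated as boundary noise (assumed)

def moving_object_mask(prev_depth, curr_depth):
    """Return a labeled mask of regions whose depth changed between two frames."""
    # Spatial blur over x/y suppresses jagged noise along object boundaries.
    prev_s = ndimage.gaussian_filter(prev_depth.astype(np.float32), sigma=2)
    curr_s = ndimage.gaussian_filter(curr_depth.astype(np.float32), sigma=2)
    # Stand-in for the z-axis frequency filter: difference the smoothed frames.
    moving = np.abs(curr_s - prev_s) > DEPTH_CHANGE_MM
    # Cluster adjacent changed pixels and keep only sufficiently large regions.
    labels, n = ndimage.label(moving)
    for i in range(1, n + 1):
        if np.count_nonzero(labels == i) < MIN_CLUSTER_PIXELS:
            labels[labels == i] = 0
    return labels

# Example with synthetic 424x512 depth frames (Kinect v2 resolution, values in mm).
prev = np.full((424, 512), 2000, dtype=np.uint16)
curr = prev.copy()
curr[100:180, 200:260] = 1500   # an object moved about 0.5 m closer
print(np.unique(moving_object_mask(prev, curr)))   # background 0 plus one moving region
```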

Accuracy Analysis According to the Number of GCP Matching (지상기준점 정합수에 따른 정확도 분석)

  • LEE, Seung-Ung;MUN, Du-Yeoul;SEONG, Woo-Kyung;KIM, Jae-Woo
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.3 / pp.127-137 / 2018
  • Recently, UAVs (drones) have been used for a wide range of applications. In the field of surveying in particular, studies have examined technologies for monitoring terrain from the high-resolution image data obtained with a UAV-mounted digital camera or various other sensors, and for generating high-resolution orthoimages, DSMs, and DEMs. In this study, we analyzed the accuracy of GCP (ground control point) matching using a UAV and VRS-GPS. First, the ground control points were surveyed in advance with VRS-GPS, and imagery was then acquired at a flight altitude of 150 m with the UAV. A DSM and orthoimages were generated from the 646 images, and the RMSE was analyzed using Pix4D Mapper. As a result, even when more than five GCPs are matched, the error stays within the tolerance of the national base map (scale 1:5,000) production regulations, so the results are judged to be sufficiently usable for digital map revision and surveying work.
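For reference, the RMSE reported in this kind of accuracy check is simply the root mean square of the coordinate residuals at the check points. A small illustration with made-up coordinates (not values from the paper):

```python
import numpy as np

# Hypothetical check-point coordinates: VRS-GPS reference vs. coordinates read from the
# orthoimage/DSM produced with a given number of GCPs (values are illustrative only).
reference = np.array([[1000.00, 2000.00, 50.00],
                      [1050.00, 2020.00, 52.00],
                      [1100.00, 1980.00, 49.50]])
measured  = np.array([[1000.04, 1999.97, 50.08],
                      [1050.05, 2020.06, 51.93],
                      [1099.95, 1980.03, 49.58]])

residuals = measured - reference
rmse_xyz = np.sqrt(np.mean(residuals ** 2, axis=0))   # per-axis RMSE (X, Y, Z) in metres
print(dict(zip("XYZ", rmse_xyz.round(3))))
```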

Design of Deep Learning-Based Automatic Drone Landing Technique Using Google Maps API (구글 맵 API를 이용한 딥러닝 기반의 드론 자동 착륙 기법 설계)

  • Lee, Ji-Eun;Mun, Hyung-Jin
    • Journal of Industrial Convergence / v.18 no.1 / pp.79-85 / 2020
  • Recently, interest in and use of RPAS (Remotely Piloted Aircraft Systems), operated by remote control and autonomous navigation, have been growing in various industries and public organizations, together with delivery drones, firefighting drones, ambulance drones, agricultural drones, and others. The stability of self-controlled unmanned drones is also the biggest challenge to be solved as the drone industry develops. Drones should be able to fly along the path specified by the autonomous flight control system and automatically perform an accurate landing at the destination. This study proposes a technique that verifies arrival from images of the landing point and controls the landing at the correct position, compensating for errors in the location data from the drone's sensors and GPS. Reference imagery of the destination is received through the Google Maps API and used for training; a drone equipped with a NAVIO2, a Raspberry Pi, and a camera captures images of the landing point and sends them to a server, and the drone's position is adjusted according to a threshold so that it lands automatically at the landing point.
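A minimal sketch of the front end of such a pipeline: a reference tile of the landing point is fetched through the Static Maps endpoint of the Google Maps API and compared with a camera frame. The API key, the coordinates, and the simple correlation score are placeholders; in the paper, the drone images are sent to a server where a trained deep-learning model performs the actual matching.

```python
import io
import numpy as np
import requests
from PIL import Image

API_KEY = "YOUR_GOOGLE_MAPS_API_KEY"          # placeholder, not from the paper
LANDING_LAT, LANDING_LON = 37.5665, 126.9780  # illustrative landing-point coordinates

def fetch_reference_tile(lat, lon, zoom=20, size="224x224"):
    """Download a satellite tile of the landing point via the Google Static Maps API."""
    url = "https://maps.googleapis.com/maps/api/staticmap"
    params = {"center": f"{lat},{lon}", "zoom": zoom, "size": size,
              "maptype": "satellite", "key": API_KEY}
    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    return Image.open(io.BytesIO(resp.content))

def similarity(camera_image, reference_image):
    """Stand-in for the server-side deep-learning matcher: normalized grayscale
    correlation between the camera frame and the reference tile."""
    a = np.asarray(camera_image.convert("L").resize((224, 224)), dtype=np.float32)
    b = np.asarray(reference_image.convert("L").resize((224, 224)), dtype=np.float32)
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def is_over_landing_point(camera_image, reference_image, threshold=0.8):
    """The drone is commanded to descend only when the score exceeds the threshold."""
    return similarity(camera_image, reference_image) >= threshold
```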

Development of Android-Based Photogrammetric Unmanned Aerial Vehicle System (안드로이드 기반 무인항공 사진측량 시스템 개발)

  • Park, Jinwoo;Shin, Dongyoon;Choi, Chuluong;Jeong, Hohyun
    • Korean Journal of Remote Sensing / v.31 no.3 / pp.215-226 / 2015
  • Conventionally, aerial photography with a UAV uses a radio frequency (RF) modem in the 430 MHz band, and navigation and remote control are handled over the link between the UAV and the ground control system. With this existing method, the communication range is only 1-2 km with frequent crosstalk, and because the wireless link uses radio waves as a carrier with a transmission power limit of 10 mW, long-distance communication is restricted. The purpose of this research is to use the communication technologies of a smart camera, such as long-term evolution (LTE), Bluetooth, and Wi-Fi modules, together with its camera, to design and develop an automatic shooting system that acquires images from the UAV at the required locations. We conclude that the Android-based UAV imaging and communication module system can not only capture images with a single smart camera but also connect the UAV system and the ground control system, and can obtain real-time 3D position and attitude information using the UAV system's GPS, gyroscope, accelerometer, and magnetic sensor, which allows the real-time position of the UAV to be used and corrections to be made through aerial triangulation.
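A minimal sketch of the data path described above: one captured frame is uploaded over the smart camera's network link to a ground-control endpoint, together with the GPS fix and attitude angles recorded at exposure. The endpoint URL, field names, and values are illustrative assumptions, not the system's actual interface.

```python
import json
import requests

GROUND_STATION_URL = "http://example.com/upload"   # hypothetical ground-control endpoint

def send_capture(image_path, gps, attitude):
    """Upload one aerial photo with the position/attitude recorded at exposure,
    so the ground station can use it for real-time aerial triangulation."""
    metadata = {
        "lat": gps["lat"], "lon": gps["lon"], "alt": gps["alt"],   # GPS fix
        "roll": attitude["roll"], "pitch": attitude["pitch"],      # gyro/accelerometer
        "yaw": attitude["yaw"],                                    # magnetometer heading
    }
    with open(image_path, "rb") as f:
        files = {"image": f,
                 "meta": (None, json.dumps(metadata), "application/json")}
        return requests.post(GROUND_STATION_URL, files=files, timeout=30)

# Example call with illustrative values:
# send_capture("frame_0001.jpg",
#              gps={"lat": 35.13, "lon": 129.10, "alt": 148.0},
#              attitude={"roll": 1.2, "pitch": -0.8, "yaw": 87.5})
```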

Abundance and Occupancy of Forest Mammals at Mijiang Area in the Lower Tumen River (두만강 하류 밀강 지역의 산림성 포유류 풍부도와 점유율)

  • Hai-Long Li;Chang-Yong Choi
    • Korean Journal of Environment and Ecology / v.37 no.6 / pp.429-438 / 2023
  • The forest in the lower Tumen River serves as an important ecosystem spanning the territories of North Korea, Russia, and China, and it provides habitat and movement corridors for diverse mammals, including the endangered Amur tiger (Panthera tigris) and Amur leopard (Panthera pardus). This study focuses on the Mijiang area, situated as a potential ecological corridor connecting North Korea and China in the lower Tumen River, playing a crucial role in conserving and restoring the biodiversity of the Korean Peninsula. This study aimed to identify mammal species and estimate their relative abundance, occupancy, and distribution based on the 48 camera traps installed in the Mijiang area from May 2019 to May 2021. The results confirmed the presence of 18 mammal species in the Mijiang area, including large carnivores like tigers and leopards. Among the dominant mammals, four species of ungulates showed high occupancy and detection rates, particularly the Roe deer (Capreolus pygargus) and Wild boar (Sus scrofa). The roe deer was distributed across all areas with a predicted high occupancy rate of 0.97, influenced by altitude, urban residential areas, and patch density. Wild boars showed a predicted occupancy rate of 0.73 and were distributed throughout the entire area, with factors such as wetland ratio, grazing intensity, and spatial heterogeneity in aspects of the landscape influencing their occupancy and detection rates. Sika deer (Cervus nippon) exhibited a predicted occupancy rate of 0.48, confined to specific areas, influenced by slope, habitat fragmentation diversity affecting detection rates, and the ratio of open forests impacting occupancy. Water deer (Hydropotes inermis) displayed a very low occupancy rate of 0.06 along the Tumen River Basin, with higher occupancy in lower altitude areas and increased detection in locations with high spatial heterogeneity in aspects. This study confirmed that the Mijiang area serves as a habitat supporting diverse mammals in the lower Tumen River while also playing a crucial role in facilitating animal movement and habitat connectivity. Additionally, the occupancy prediction model developed in this study is expected to contribute to predicting mammal distribution within the disrupted Tumen River basin due to human interference and identifying and protecting potential ecological corridors in this transboundary region.

Navigation System Using Drone for Visitors (드론을 활용한 방문객 길 안내 시스템)

  • Seo, Yeji;Jin, Youngseo;Park, Taejung
    • Journal of Digital Contents Society / v.18 no.1 / pp.109-114 / 2017
  • In modern society, the use of advanced drones capable of performing a variety of tasks has been increasing steadily. In this paper, we present an application similar to the "Skycall" prototype introduced by the MIT Senseable City Lab. To assess this concept, we implemented a prototype of drone-based pedestrian navigation built around an Android smartphone. Our system can not only guide visitors through complicated areas where buildings are densely packed, but also keep unauthorized visitors from accessing restricted facilities. We also discuss the problems we found and suggest directions for addressing them.

Unmanned accident prevention Arduino Robot using color detection algorithm (색 검지 알고리즘을 이용한 무인 사고방지 아두이노 로봇 개발)

  • Lee, Ho-Jeong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.493-497 / 2015
  • This study was motivated by concern over the increasing property damage and personal injury caused by traffic accidents despite technological advances in transportation. Since currently produced vehicles only inform the driver when an object is detected in proximity by the front and rear sensors, this study implemented a color detection algorithm, a circular shape recognition algorithm, and a distance recognition algorithm, and built an accident prevention function that goes beyond accident perception by commanding the robot to avoid the object or stop when an object is detected by the algorithms. For the simulation, we built an Arduino vehicle robot equipped with a compact wireless camera and confirmed that the robot successfully avoids an object or stops itself during simulated driving.
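A brief sketch of the color and circular-shape detection step in Python with OpenCV (the robot itself runs on Arduino, so this only illustrates the logic): pixels in an assumed HSV color range are masked, and contours close to circular are kept so that an avoid/stop decision can be derived from the apparent size.

```python
import cv2
import numpy as np

# Assumed HSV range for a red target object; tune for the actual camera and lighting.
LOWER = np.array([0, 120, 80])
UPPER = np.array([10, 255, 255])

def detect_colored_circle(frame):
    """Return (cx, cy, radius) of the largest roughly circular region in the target
    color, or None if nothing is found."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        area = cv2.contourArea(c)
        if area < 300:                          # ignore small specks
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
        if circularity > 0.7:                   # close to a perfect circle (1.0)
            (x, y), r = cv2.minEnclosingCircle(c)
            if best is None or r > best[2]:
                best = (int(x), int(y), int(r))
    return best

# The robot would stop or steer away once the detected radius (a rough proxy for
# distance) exceeds a threshold, e.g.:
# hit = detect_colored_circle(frame)
# if hit and hit[2] > 40: stop_motors()   # stop_motors() is a hypothetical hook
```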


Implementation of Efficient Container Number Recognition System at Automatic Transfer Crane in Container Terminal Yard (항만 야드 자동화크레인(ATC)에서 효율적인 컨테이너번호 인식시스템 개발)

  • Hong, Dong-Hee
    • Journal of the Korea Society of Computer and Information / v.15 no.9 / pp.57-65 / 2010
  • This paper describes a method for efficiently recognizing container numbers in color images of container number plates captured at the ATC (Automatic Transfer Crane) in a container terminal yard. At the Sinseondae terminal gate in Busan, a container number recognition system was installed as part of the "intelligent port-logistics system technology development" project, a government R&D program. In that system, cameras are mounted inside a tunnel structure at the gate and the container numbers are recognized so that export container cargo can be identified automatically. However, as automation equipment is introduced into container terminals and operations become increasingly unmanned, a container number recognition system is also required at the ATC in the yard to confirm the object being handled. Unlike the gate, the yard environment involves many factors that interfere with character recognition from images, such as sunlight, rain, snow, and shadows, so a recognition system suited to the ATC is needed. In this paper, the hardware components of the camera, illumination, and sensor lamp were altered, and the software algorithm was changed so that differences in ambient brightness and similar conditions are compensated for when recognizing a container number. As a result, problems such as heavy shadows cast by sunlight or cargo-handling equipment were resolved, the recognition time was shortened, and the recognition rate was improved.
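A rough sketch of the kind of illumination-robust preprocessing the abstract describes, using off-the-shelf tools (CLAHE, adaptive thresholding, and Tesseract OCR) as stand-ins for the paper's own camera, lighting, and recognition algorithm:

```python
import cv2
import pytesseract   # off-the-shelf OCR used here as a stand-in for the paper's recognizer

def read_container_number(image_path):
    """Normalize illumination (sunlight/shadow differences) before character recognition."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Local contrast normalization reduces the effect of shadows cast by cargo equipment.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    norm = clahe.apply(img)
    # Adaptive thresholding copes with uneven brightness across the plate region.
    binary = cv2.adaptiveThreshold(norm, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 15)
    # ISO 6346 container numbers are 4 letters plus 7 digits; restrict the character set.
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(binary, config=config).strip()

# Example: print(read_container_number("plate_region.jpg"))
```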

A Study of Smart Robot Architecture and Movement for Observation of Dangerous Region (위험지역 감시스마트로봇의 설계와 동작에 관한 연구)

  • Koo, Kyung-Wan;Baek, Dong-Hyun
    • Fire Science and Engineering / v.27 no.6 / pp.83-88 / 2013
  • Catastrophic disasters such as radiation leaks and hydrofluoric acid gas leaks have occurred frequently in recent years. Restoration work after these kinds of disasters is too harmful and dangerous for humans to handle directly, so unmanned robots are sent into the disaster-stricken areas to carry out reconnaissance and the necessary work instead. For this purpose, we created and tested a smart robot that can inspect such areas using an Mbed (ARM processor) board with temperature sensors, gas sensors, and a camera. An HTTP server, a PC, and Android devices work together to allow remote-controlled operation from far away with timing control. These robots can remain on duty 24 hours a day, minimizing accidents and crimes, and can respond more quickly when such incidents actually occur. Economic effects can also be expected from the reduced need for human resources.
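A minimal sketch of the remote-monitoring interface described above, assuming a small Flask HTTP server running alongside the robot; the endpoints, payloads, and sensor hooks are illustrative, since the abstract only states that an HTTP server, a PC, and Android clients are combined for remote control.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def read_sensors():
    """Hypothetical driver call; replace with the Mbed board's temperature/gas interface."""
    return {"temperature_c": 24.1, "gas_ppm": 3.2}

@app.route("/status")
def status():
    # Remote operators (PC or Android clients) poll this endpoint around the clock.
    return jsonify(read_sensors())

@app.route("/move", methods=["POST"])
def move():
    command = request.get_json(force=True).get("direction", "stop")
    # Forward the command to the drive controller (hypothetical hook).
    print("drive:", command)
    return jsonify({"accepted": command})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```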