• Title/Summary/Keyword: 맵센서 (map sensor)


An Efficient Bi-directional Routing Protocol for Wireless Sensor Networks (무선 센서네트워크에서 효율적인 양방향 라우팅 프로토콜)

  • Ahn Tae-Won; Joe In-Whee
    • Proceedings of the Korean Information Science Society Conference / 2006.06d / pp.46-48 / 2006
  • The number systems used in today's computers are binary and hexadecimal, which fit computer memory structures best and have been used efficiently in many programs. However, every number system has inefficiencies that surface as exceptions depending on the nature of the program. In this paper, we propose a new routing-address notation that exploits these exceptional cases of binary and hexadecimal from a routing perspective, construct a routing table and a routing address map based on the proposed notation, and implement an efficient bidirectional ad-hoc routing scheme for sensor networks that uses minimal memory. (An illustrative sketch of hierarchical address-based routing follows this entry.)

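The abstract does not spell out the proposed notation, so the following is only a generic sketch of hierarchical hexadecimal addressing for tree-shaped bidirectional (up/down) routing in a sensor network; the field widths, helper names, and example addresses are assumptions, not the authors' scheme.

```python
# Hypothetical hierarchical addressing: each level of the routing tree
# occupies one hex digit, so a node's parent is obtained by clearing the
# lowest non-zero digit. Field widths are assumptions for illustration.

LEVEL_BITS = 4          # one hex digit per tree level (assumed)
MAX_LEVELS = 4          # 16-bit addresses (assumed)

def parent_of(addr: int) -> int:
    """Clear the lowest non-zero hex digit to get the parent address."""
    for level in range(MAX_LEVELS):
        shift = level * LEVEL_BITS
        if (addr >> shift) & 0xF:
            return addr & ~(0xF << shift)
    return addr  # the root is its own parent

def route_next_hop(current: int, destination: int) -> int:
    """Bidirectional routing: go down if the destination lies in our
    subtree, otherwise go up toward the root."""
    if destination == current:
        return current
    candidate = destination
    while candidate != current and candidate != 0:
        up = parent_of(candidate)
        if up == current:
            return candidate      # downward hop into the subtree
        candidate = up
    return parent_of(current)     # upward hop toward the root

# Example: node 0x1200 forwards to 0x1230 directly (downward), but must
# first go up to 0x1000 to reach 0x2100 in another subtree.
print(hex(route_next_hop(0x1200, 0x1230)))  # 0x1230
print(hex(route_next_hop(0x1200, 0x2100)))  # 0x1000
```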

Development of Interior Self-driving Service Robot Using Embedded Board Based on Reinforcement Learning (강화학습 기반 임베디드 보드를 활용한 실내자율 주행 서비스 로봇 개발)

  • Oh, Hyeon-Tack; Baek, Ji-Hoon; Lee, Seung-Jin; Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.537-540 / 2018
  • This paper presents an indoor self-driving service robot in which a map is built with ROS (Robot Operating System) on a Jetson TX2 embedded board; motion commands toward the destination (target linear velocity and target angular velocity), generated with SLAM and a DQN (Deep Q-Network), are passed down to a Cortex-M3-based MCU (Micro Controller Unit), which performs PID control using the current linear velocity measured by the encoder motor and the current angular velocity measured by the gyro sensor.
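As a rough illustration of the MCU-side control loop described above (written in Python for readability rather than as MCU firmware), a PID controller can close the gap between the commanded and measured linear/angular velocities; the gains, wheel base, and function names below are assumptions, not values from the paper.

```python
# Minimal PID loop tracking target linear/angular velocity against
# encoder / gyro feedback. Gains and the differential-drive mixing
# parameters are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

linear_pid = PID(kp=1.2, ki=0.1, kd=0.05)    # assumed gains
angular_pid = PID(kp=2.0, ki=0.0, kd=0.1)

def control_step(target_v, target_w, encoder_v, gyro_w, dt=0.02):
    """One control step: ROS supplies the targets, the MCU reads the
    encoder speed and gyro rate, and outputs wheel commands."""
    v_cmd = linear_pid.update(target_v, encoder_v, dt)
    w_cmd = angular_pid.update(target_w, gyro_w, dt)
    wheel_base = 0.3          # assumed wheel base in metres
    left = v_cmd - w_cmd * wheel_base / 2.0
    right = v_cmd + w_cmd * wheel_base / 2.0
    return left, right

print(control_step(target_v=0.5, target_w=0.2, encoder_v=0.45, gyro_w=0.15))
```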

Assistant tool consists of electronic glasses, smartphone and server (전자안경, 스마트폰, 웹서버로 구성된 시각장애인용 보조도구)

  • Yu, Jeong-Ho; Kim, Se-Hwan; Lee, Jin-Hyuk
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.612-615 / 2012
  • Glasses that a visually impaired person would otherwise wear only to cover the eyes are turned into electronic glasses by mounting an ultrasonic sensor for obstacle detection, a camera module for capturing the wearer's line of sight, and a 9-axis sensor for monitoring tilt and direction. The data collected by the glasses is processed in conjunction with a smartphone, where image-processing techniques substitute hearing for the sight the user lacks. In addition, the visually impaired person's location is reported to a guardian in real time on a Google Maps display through the web.
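A minimal sketch of two pieces of the loop described above: converting an ultrasonic range reading into an audio warning and posting the current GPS fix to a web server for the guardian's map view. The threshold, function names, and endpoint URL are assumptions for illustration only.

```python
# Assistive-glasses sketch: warn by audio when the ultrasonic range drops
# below a threshold and report the wearer's position to a web server.
# The threshold and server URL are assumptions, not from the paper.

import json
import urllib.request

OBSTACLE_THRESHOLD_CM = 120  # assumed warning distance

def handle_ultrasonic(distance_cm, speak):
    """Turn an obstacle range into a spoken warning via a TTS callback."""
    if distance_cm < OBSTACLE_THRESHOLD_CM:
        speak(f"Obstacle ahead, about {int(distance_cm)} centimeters")

def report_position(lat, lon, server_url="http://example.com/position"):
    """POST the current GPS fix so the guardian's web page can plot it."""
    payload = json.dumps({"lat": lat, "lon": lon}).encode("utf-8")
    request = urllib.request.Request(
        server_url, data=payload,
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(request, timeout=5)

# Example with a stand-in text-to-speech callback.
handle_ultrasonic(80, speak=print)
```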

Positioning by using Speed and GeoMagnetic Sensor Data base on Vehicle Network (차량 네트워크 기반 속도 및 지자기센서 데이터를 이용한 측위 시스템)

  • Moon, Hye-Young; Kim, Jin-Deog; Yu, Yun-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.12 / pp.2730-2736 / 2010
  • Recently, various networks have been introduced inside and outside the car. These have been integrated under one HMI (Human Machine Interface) to control the devices on each network and provide information services. The existing vehicle navigation system, which provides GPS-based vehicle positioning, is included in these integrated networks as a default option. GPS is the most widely used means of providing position information from satellite signals, but it cannot provide a position when the satellite signals cannot be received, such as in tunnels, urban canyons, or under forest canopy. Thus, this paper proposes and implements a method of measuring the vehicle position from the sensing data of the internal CAN network and the external Wi-Fi network of the integrated car-navigation environment when GPS does not work normally. The implementation results, checked by map matching, show that the proposed method works well.
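The core of such a GPS-outage fallback is dead reckoning: advancing the last known position from the wheel speed and the geomagnetic heading. A minimal sketch follows, assuming a locally flat Earth and metre-based steps; the coordinates and sample values are illustrative, not from the paper.

```python
# Dead-reckoning sketch for the GPS-outage case: advance (lat, lon) using
# the CAN wheel speed and the compass heading. The flat local-plane
# conversion is a simplifying assumption.

import math

EARTH_RADIUS_M = 6_371_000.0

def dead_reckon(lat_deg, lon_deg, speed_mps, heading_deg, dt_s):
    """Advance the position by speed * dt along the compass heading
    (0 degrees = north, increasing clockwise)."""
    distance = speed_mps * dt_s
    heading = math.radians(heading_deg)
    d_north = distance * math.cos(heading)
    d_east = distance * math.sin(heading)
    d_lat = math.degrees(d_north / EARTH_RADIUS_M)
    d_lon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + d_lat, lon_deg + d_lon

# Example: 60 km/h heading due east for one second inside a tunnel.
print(dead_reckon(35.1796, 129.0756, speed_mps=16.7, heading_deg=90.0, dt_s=1.0))
```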

Analysis of Crop Damage Caused by Natural Disasters in UAS Monitoring for Smart Farm (스마트 팜을 위한 UAS 모니터링의 자연재해 작물 피해 분석)

  • Kang, Joon Oh; Lee, Yong Chang
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.583-589 / 2020
  • Recently, UAS (Unmanned Aerial System) platforms that combine various sensors with ICT (Information & Communications Technology) are expected to be useful for smart farms. In particular, UAS has proven its effectiveness as an outdoor crop-monitoring method through various indices and is being studied in many fields. This study analyzes crop damage caused by natural disasters and measures the damaged area of rice plants. To this end, data are acquired with BG-NIR (Blue Green_Near Infrared Red) and RGB sensors, and image analysis together with the NDWI (Normalized Difference Water Index) is performed to review crop damage caused by the rainy season. Point-cloud data are also generated from the image analysis, and damage is measured by comparing data before and after the typhoon through an inspection map. As a result, the growth of rice and rainy-season damage were examined through NDWI analysis, and the area damaged by the typhoon was measured by analyzing the inspection map.
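For reference, the NDWI used above is a per-pixel band ratio; a minimal sketch with the common McFeeters form (Green − NIR) / (Green + NIR) is shown below. The band arrays, water threshold, and synthetic values are assumptions for illustration.

```python
# NDWI sketch: (Green - NIR) / (Green + NIR) on per-pixel band arrays,
# then a simple threshold to flag standing water. Threshold is assumed.

import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / (green + nir + 1e-9)   # epsilon avoids 0/0

def water_mask(green, nir, threshold=0.0):
    """Pixels above the threshold are treated as standing water."""
    return ndwi(green, nir) > threshold

# Tiny synthetic example: dry vegetation (high NIR) vs. flooded paddy (low NIR).
green = np.array([[80, 80], [90, 90]], dtype=np.uint16)
nir = np.array([[200, 200], [40, 40]], dtype=np.uint16)
print(ndwi(green, nir))
print(water_mask(green, nir))
```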

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.6 / pp.1307-1312 / 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and LiDAR (Light Detection and Ranging) sensors to address the core components of autonomous driving perception, namely object recognition and distance measurement. We extract objects within the scene and generate precise location and distance information for them using the proposed hybrid camera system. First, we employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, for object recognition within the scene. We then use the multi-focal cameras to create depth maps from which object positions and distances are derived. To improve distance accuracy, we integrate the 3D range information obtained from the LiDAR sensors with the generated depth maps. Based on the proposed hybrid camera system, this paper introduces an autonomous vehicle platform that not only perceives its surroundings more accurately during operation but also provides precise 3D spatial location and distance information. We anticipate that this will improve the safety and efficiency of autonomous vehicles.
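A small sketch of the final step described above: given detections as pixel bounding boxes (e.g. from YOLOv7) and a depth map already refined with LiDAR ranges, report a robust per-object distance. The box format, metric depth units, and synthetic data are assumptions, not the paper's exact pipeline.

```python
# Assign a distance to each detected object by taking the median depth
# inside its bounding box on a LiDAR-refined depth map.

import numpy as np

def object_distances(depth_map_m: np.ndarray, boxes):
    """Return the median depth inside each (x1, y1, x2, y2) box, in metres."""
    distances = []
    for (x1, y1, x2, y2) in boxes:
        patch = depth_map_m[y1:y2, x1:x2]
        valid = patch[patch > 0]          # 0 marks pixels with no depth
        distances.append(float(np.median(valid)) if valid.size else float("nan"))
    return distances

# Synthetic 100x100 depth map with an object 12 m away in the centre.
depth = np.full((100, 100), 30.0)
depth[40:60, 40:60] = 12.0
print(object_distances(depth, [(40, 40, 60, 60), (0, 0, 20, 20)]))  # [12.0, 30.0]
```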

A Study on The Automatic Map Building and Reliable Navigation of Combining Fuzzy Logic and Inference Theory (추론 이론과 퍼지 이론 결합에 의한 자율 이동 로봇의 지도 구축 및 안전한 네비게이션에 관한 연구)

  • Kim, Young-Chul; Cho, Sung-Bae; Oh, Sang-Rok; You, Bum-Jae
    • Proceedings of the KIEE Conference / 2001.07d / pp.2744-2746 / 2001
  • This paper proposes a sensor-based navigation method for a mobile robot in an uncertain environment using fuzzy theory and Dempster-Shafer theory. The proposed controller consists of two behavior modules, one for obstacle avoidance and one for goal seeking. Each module is trained with fuzzy theory, and an appropriate behavior-selection method chooses between them. To let the robot with this robust fuzzy controller move safely in the experimental environment, it builds a map automatically. The map is constructed over a planar grid, and the values read from the robot's sensors are fused with the existing map using Dempster-Shafer inference; that is, every time the robot moves it reads new information from the environment, and that information updates the existing map. Through this process the robot can not only wander without colliding with obstacles but also easily reach a designated destination. To verify the stability and reliability of the approach, the experiments are first carried out in a mobile-robot simulation rather than on a real robot. (A minimal sketch of the Dempster-Shafer grid update follows this entry.)

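The sketch below shows the kind of Dempster-Shafer update described above: each grid cell carries belief masses over {occupied, empty, unknown}, and every new sensor-derived mass assignment is fused with the stored one by Dempster's rule of combination. The sample sensor masses are assumptions, not the paper's sonar model.

```python
# Dempster-Shafer fusion of per-cell occupancy evidence.

def ds_combine(m1, m2):
    """Combine two mass functions given as dicts with keys 'occ', 'emp', 'unk'."""
    conflict = m1["occ"] * m2["emp"] + m1["emp"] * m2["occ"]
    norm = 1.0 - conflict                      # assumes the evidence is not totally conflicting
    occ = (m1["occ"] * m2["occ"] + m1["occ"] * m2["unk"] + m1["unk"] * m2["occ"]) / norm
    emp = (m1["emp"] * m2["emp"] + m1["emp"] * m2["unk"] + m1["unk"] * m2["emp"]) / norm
    unk = (m1["unk"] * m2["unk"]) / norm
    return {"occ": occ, "emp": emp, "unk": unk}

# A cell that starts fully unknown, then receives two "probably occupied"
# sonar readings: belief in 'occupied' rises with each fusion step.
cell = {"occ": 0.0, "emp": 0.0, "unk": 1.0}
reading = {"occ": 0.6, "emp": 0.1, "unk": 0.3}    # assumed sensor model
cell = ds_combine(cell, reading)
cell = ds_combine(cell, reading)
print(cell)   # occupied mass grows toward 0.82
```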

Intelligent Navigation of a Mobile Robot based on Intention Inference of Obstacles (장애물의 의도 추론에 기초한 이동 로봇의 지능적 주행)

  • Kim, Seong-Hun; Byeon, Jeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.2 / pp.21-34 / 2002
  • Unlike ordinary mobile robots used in a well-structured industrial workspace, a guide robot for the visually impaired must be designed with moving obstacles in mind, mostly pedestrians moving with their own intentions. The navigation of the guide robot is therefore easier if the intention of each detected obstacle is known in advance. In this paper, we propose an inference method for understanding the intention of a detected obstacle. To represent the environment with ultrasonic sensors, a fuzzy grid-type map is first constructed. We then detect the obstacle and infer its intention for collision avoidance using the CLA (Centroid of Largest Area) point of the fuzzy grid-type map. Some experiments are performed to verify the proposed method.
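The abstract does not define the CLA computation in detail; one plausible reading, sketched below, is to threshold the fuzzy grid into free cells, find the largest connected free region, and take its centroid as the steering target. The threshold value and the toy map are assumptions.

```python
# Centroid-of-Largest-Area sketch on a fuzzy grid map.

from collections import deque

def centroid_of_largest_area(fuzzy_grid, free_threshold=0.5):
    """Return the (row, col) centroid of the largest 4-connected free region."""
    rows, cols = len(fuzzy_grid), len(fuzzy_grid[0])
    seen = [[False] * cols for _ in range(rows)]
    best = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or fuzzy_grid[r][c] < free_threshold:
                continue
            region, queue = [], deque([(r, c)])   # flood-fill one free region
            seen[r][c] = True
            while queue:
                cr, cc = queue.popleft()
                region.append((cr, cc))
                for nr, nc in ((cr + 1, cc), (cr - 1, cc), (cr, cc + 1), (cr, cc - 1)):
                    if (0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]
                            and fuzzy_grid[nr][nc] >= free_threshold):
                        seen[nr][nc] = True
                        queue.append((nr, nc))
            if len(region) > len(best):
                best = region
    if not best:
        return None
    return (sum(r for r, _ in best) / len(best),
            sum(c for _, c in best) / len(best))

# Toy fuzzy map: values are free-space certainty; the 2x2 block on the
# left is the largest free region, so its centroid (0.5, 0.5) is returned.
grid = [
    [0.9, 0.9, 0.1, 0.8],
    [0.9, 0.9, 0.1, 0.8],
    [0.1, 0.1, 0.1, 0.8],
]
print(centroid_of_largest_area(grid))
```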

Path-planning using Modified Genetic Algorithm and SLAM based on Feature Map for Autonomous Vehicle (자율주행 장치를 위한 수정된 유전자 알고리즘을 이용한 경로계획과 특징 맵 기반 SLAM)

  • Kim, Jung-Min; Heo, Jung-Min; Jung, Sung-Young; Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.3 / pp.381-387 / 2009
  • This paper presents simultaneous localization and mapping (SLAM) based on a feature map and path planning using a modified genetic algorithm for the efficient driving of an autonomous vehicle. The biggest remaining problem for autonomous vehicles is environment adaptation: the vehicle must recognize its location in a new environment, and it must re-identify its position when placed at an unknown or new location in the map, the so-called kidnapped-robot problem. In this paper, SLAM based on a feature map built from ultrasonic sensors is proposed to solve this environment-adaptation problem in autonomous driving, and a modified genetic algorithm is employed to optimize path planning. We designed and built an autonomous vehicle and applied the proposed algorithm to it to show its performance. The experimental results verify that fast, optimized path planning and efficient SLAM are possible.
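The abstract does not describe the specific modification to the genetic algorithm, so the sketch below is only a plain baseline GA path planner: chromosomes are sequences of intermediate waypoints, and the fitness penalises path length plus waypoints placed on obstacle cells. All parameters and the toy obstacle layout are assumptions.

```python
# Generic GA path-planning baseline (not the paper's modified algorithm).

import math
import random

random.seed(0)
GRID = 10                                    # 10x10 workspace (assumed)
OBSTACLES = {(5, y) for y in range(3, 8)}    # a wall with a gap (assumed)
START, GOAL = (0, 0), (9, 9)
N_WAYPOINTS, POP, GENERATIONS = 3, 40, 60

def path_cost(waypoints):
    """Total path length plus a penalty for waypoints on obstacle cells."""
    points = [START, *waypoints, GOAL]
    cost = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
    cost += 50.0 * sum(p in OBSTACLES for p in waypoints)
    return cost

def random_individual():
    return [(random.randrange(GRID), random.randrange(GRID)) for _ in range(N_WAYPOINTS)]

def crossover(a, b):
    cut = random.randrange(1, N_WAYPOINTS)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.2):
    return [(random.randrange(GRID), random.randrange(GRID)) if random.random() < rate else w
            for w in ind]

population = [random_individual() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=path_cost)
    parents = population[:POP // 2]                       # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = min(population, key=path_cost)
print("best waypoints:", best, "cost:", round(path_cost(best), 2))
```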

Design of Deep Learning-Based Automatic Drone Landing Technique Using Google Maps API (구글 맵 API를 이용한 딥러닝 기반의 드론 자동 착륙 기법 설계)

  • Lee, Ji-Eun; Mun, Hyung-Jin
    • Journal of Industrial Convergence / v.18 no.1 / pp.79-85 / 2020
  • Recently, interest in and use of the RPAS (Remotely Piloted Aircraft System), operated by remote control and autonomous navigation, has been increasing across industries and public organizations, including delivery drones, fire-fighting drones, ambulance drones, and agricultural drones. The stability of self-controlled unmanned drones is also the biggest challenge to be solved as the drone industry develops. Drones should be able to fly along the path set by the autonomous flight-control system and automatically perform an accurate landing at the destination. This study proposes a technique that checks arrival against images of the landing point and controls the landing at the correct spot, compensating for errors in the drone's sensor and GPS location data. A reference image of the destination is received from the Google Maps API and learned; a drone equipped with a NAVIO2, a Raspberry Pi, and a camera takes images of the landing point and sends them to the server; and the drone's position is adjusted according to a threshold so that it can land automatically at the landing point.
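As a simplified stand-in for the landing decision described above: the paper uses a deep-learning comparison of the camera frame against the Google Maps reference image, whereas the sketch below only uses normalised cross-correlation as the similarity score. The threshold, image sizes, and synthetic images are assumptions.

```python
# Descend only when the camera view matches the reference landing-point
# image closely enough; otherwise keep adjusting the drone's position.

import numpy as np

def similarity(frame: np.ndarray, reference: np.ndarray) -> float:
    """Normalised cross-correlation between two equally sized grayscale images."""
    f = frame.astype(np.float64) - frame.mean()
    r = reference.astype(np.float64) - reference.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(r)
    return float((f * r).sum() / denom) if denom else 0.0

def landing_decision(frame, reference, threshold=0.8):
    return "descend" if similarity(frame, reference) >= threshold else "adjust"

# Toy example: the reference image with mild noise still clears the threshold.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64))
noisy_frame = reference + rng.normal(0, 10, size=(64, 64))
print(landing_decision(noisy_frame, reference))   # "descend"
```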