• Title/Abstract/Keywords: occupancy sensor

Search results: 47

Estimation of GHG Emission and Potential Reduction on the Campus by LEAP Model

  • 우정호;최경식
    • Journal of Environmental Impact Assessment / Vol. 21, No. 3 / pp. 409-415 / 2012
  • The post-Kyoto regime has been discussed along with GHG reduction commitments, and a GHG and energy target management system has been applied as a domestic measure in Korea. Universities are major GHG emission sources, so it is important for campuses to build a GHG inventory and estimate the potential for emission reduction. The campus GHG inventory was compiled following the IPCC guidance, classified into Scopes 1, 2, and 3. Electricity accounted for the largest share of campus GHG emissions, at 5,053.90 $tonsCO_2eq/yr$ in 2009; the manufacturing sector, representing laboratory emissions, was second. Potential reductions were planned under several assumptions, such as installing occupancy sensors, switching to LED lamps, and adding photovoltaic power generation, and these scenarios were simulated with the LEAP model. Without any reduction measures, GHG emissions in 2020 were projected at 17,435.98 tons of $CO_2$; with the reduction scenarios applied, 2020 emissions would be 16,507.60 tons of $CO_2$, a potential reduction of 5.3%.
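The scenario figures reported in the abstract can be checked with simple arithmetic; a minimal sketch using only the numbers given above:

```python
# Business-as-usual (BAU) vs. reduction-scenario GHG outlook for 2020,
# using the figures reported in the abstract (tons of CO2).
bau_2020 = 17_435.98       # no reduction measures
scenario_2020 = 16_507.60  # occupancy sensors + LED lamps + photovoltaics

reduction = bau_2020 - scenario_2020
reduction_pct = reduction / bau_2020 * 100

print(f"Absolute reduction: {reduction:.2f} tCO2")
print(f"Relative reduction: {reduction_pct:.1f}%")  # ~5.3%, matching the paper
```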

Getting On and Off an Elevator Safely for a Mobile Robot Using RGB-D Sensors

  • 김지환;정민국;송재복
    • The Journal of Korea Robotics Society / Vol. 15, No. 1 / pp. 55-61 / 2020
  • Getting on and off an elevator is one of the most important steps in multi-floor navigation for a mobile robot. In this study, we propose a method for recognizing the pose of elevator doors, planning a safe path, and estimating the robot's motion using RGB-D sensors so that the robot can board and exit an elevator safely. The pose of the elevator doors is recognized accurately with a particle filter algorithm. After the door opens, the robot builds an occupancy grid map of the elevator interior to generate a safe path that avoids collisions with obstacles inside the elevator. While boarding and exiting, the robot applies an optical flow algorithm to floor images to detect when it cannot move because of the elevator door sill. Results from various experiments show that the proposed method enables the robot to get on and off an elevator safely.
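The occupancy-grid idea above can be sketched in a few lines. The grid size, obstacle placement, and path check below are illustrative; the paper builds the grid from RGB-D data inside the elevator.

```python
# Minimal occupancy-grid sketch: 0 = free cell, 1 = occupied cell.

def is_path_free(grid, path):
    """Return True if every (row, col) cell on the path is free of obstacles."""
    return all(grid[r][c] == 0 for r, c in path)

# 5x5 grid of the elevator interior with one occupied cell (e.g. a passenger).
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1

straight_path = [(r, 2) for r in range(5)]               # passes through the obstacle
detour_path = [(0, 2), (1, 1), (2, 1), (3, 1), (4, 2)]   # swerves around it

print(is_path_free(grid, straight_path))  # False
print(is_path_free(grid, detour_path))    # True
```

A real planner would search the grid (e.g. A*) rather than check fixed paths, but the collision test is the same.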

AEB 시험평가 방법에 관한 연구 (A Study on Evaluation Method of AEB Test)

  • 김봉주;이선봉
    • Journal of Auto-vehicle Safety Association / Vol. 10, No. 2 / pp. 20-28 / 2018
  • The sharp increase in the number of cars has become a serious social problem because of lives lost in accidents and environmental pollution, and Intelligent Transportation Systems (ITS) are being studied as a countermeasure. For the commercialization of ITS, Korea aims to capture a share of the world market through Advanced Safety Vehicle (ASV) system development and international standardization, but the domestic environment remains insufficient. Core ITS technologies include Adaptive Cruise Control, Lane Keeping Assist System, Forward Collision Warning System, and Autonomous Emergency Braking (AEB); these are applied to cars to support the driver. An AEB system stops the car automatically, without depending on the driver, based on the relative speed of and distance to an obstacle detected by on-board sensors; its purpose is to prevent accidents, and it should become more useful as automotive technology develops. This study proposes a test scenario and evaluation method for AEB that suits the domestic environment and responds actively to international standards. Setting the goal as a function of distance, it derives a theoretical model and verifies the proposed evaluation standard for each scenario through field driving tests on a test road using a car equipped with an AEB system. The proposed scenario and theoretical model should be useful when conducting AEB test evaluations.
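The abstract does not give the paper's theoretical model, but a standard kinematic stopping-distance sketch shows the kind of distance-based trigger an AEB evaluation works with. The reaction time, deceleration, and safety margin below are illustrative assumptions, not values from the paper.

```python
def stopping_distance(v, t_react, decel):
    """Distance travelled during the reaction delay plus braking distance (m).

    v       : initial closing speed (m/s)
    t_react : system reaction delay (s)
    decel   : assumed constant braking deceleration (m/s^2)
    """
    return v * t_react + v ** 2 / (2 * decel)

def aeb_should_brake(gap, v_rel, t_react=0.5, decel=7.0, margin=2.0):
    """Trigger braking when the gap falls below stopping distance plus a margin."""
    return gap <= stopping_distance(v_rel, t_react, decel) + margin

# Closing on a stopped obstacle at 50 km/h (~13.9 m/s):
v = 50 / 3.6
print(f"stopping distance: {stopping_distance(v, 0.5, 7.0):.1f} m")  # ~20.7 m
print(aeb_should_brake(gap=30.0, v_rel=v))  # False: 30 m gap still exceeds ~22.7 m
print(aeb_should_brake(gap=20.0, v_rel=v))  # True: braking required
```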

LiDAR Static Obstacle Map based Position Correction Algorithm for Urban Autonomous Driving

  • 노한석;이현성;이경수
    • Journal of Auto-vehicle Safety Association / Vol. 14, No. 2 / pp. 39-44 / 2022
  • This paper presents a LiDAR static-obstacle-map-based vehicle position correction algorithm for urban autonomous driving. Real-Time Kinematic (RTK) GPS is commonly used in highway automated vehicle systems, but in urban systems it has trouble in shaded areas. This paper therefore presents a method to estimate the position of the host vehicle using an AVM camera, a front camera, LiDAR, and low-cost GPS, based on an Extended Kalman Filter (EKF). A static obstacle map (STOM) is constructed only from static objects using Bayesian updating. To run the algorithm, an HD map and a static obstacle reference map (STORM) must be prepared in advance; the STORM is built by accumulating and voxelizing STOMs. The algorithm consists of four main steps. First, sensor data are acquired from the low-cost GPS, AVM camera, front camera, and LiDAR. Second, the low-cost GPS data define the initial point. Third, the AVM camera, front camera, and LiDAR point clouds are matched to the HD map and STORM using the Normal Distributions Transform (NDT) method. Fourth, the host vehicle position is corrected with the EKF. The proposed algorithm was implemented in the Linux Robot Operating System (ROS) environment and showed better performance than a lane-detection-only algorithm. It is expected to be more robust and accurate than raw LiDAR point cloud matching in autonomous driving.
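The correction step can be illustrated with a single scalar Kalman update: a noisy low-cost GPS prior is fused with a more precise map-matched (NDT) position fix. The positions and noise variances below are illustrative, not from the paper, and the real filter is a multi-state EKF rather than this one-dimensional sketch.

```python
# One scalar Kalman update step as a sketch of the position-correction idea.

def kalman_update(x_prior, p_prior, z, r):
    """Fuse a prior estimate (x_prior, variance p_prior) with measurement z (variance r)."""
    k = p_prior / (p_prior + r)          # Kalman gain
    x_post = x_prior + k * (z - x_prior)
    p_post = (1 - k) * p_prior
    return x_post, p_post

x_gps, p_gps = 105.0, 9.0   # noisy low-cost GPS position (m) and its variance
z_ndt, r_ndt = 100.4, 1.0   # NDT map-matched fix, much lower variance

x, p = kalman_update(x_gps, p_gps, z_ndt, r_ndt)
print(f"corrected position: {x:.2f} m (variance {p:.2f})")  # pulled toward the NDT fix
```

Because the map-matched fix has the smaller variance, the posterior lands close to it while retaining some GPS information.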

Unsupervised Monocular Depth Estimation Using Self-Attention for Autonomous Driving

  • 황승준;박성준;백중환
    • Journal of Advanced Navigation Technology / Vol. 27, No. 2 / pp. 182-189 / 2023
  • Depth estimation is a core technology for generating the 3D maps needed for autonomous operation of vehicles, robots, and drones. Conventional sensor-based depth estimation is accurate but expensive and low in resolution, while camera-based depth estimation offers high resolution at low cost but lower accuracy. This study proposes self-attention-based unsupervised monocular depth estimation to improve depth estimation from UAV camera imagery. Applying self-attention operations to the network improves global feature extraction. A network that learns the camera parameters is also added, so the method can be used on image data from uncalibrated cameras. To generate spatial data, the estimated depth and camera pose are converted into a point cloud using the camera parameters, and the point cloud is mapped into a 3D map using an octree-structured occupancy grid. The proposed network is evaluated on synthetic images and depth sequences from the Mid-Air dataset and shows a 7.69% lower error value than previous work.
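The depth-to-point-cloud conversion mentioned above is the standard pinhole back-projection, X = (u - cx)·d/fx, Y = (v - cy)·d/fy, Z = d. A minimal sketch with illustrative intrinsics (the paper learns the camera parameters with an extra network):

```python
# Back-project a depth map to a camera-frame point cloud with the pinhole model.

def backproject(depth, fx, fy, cx, cy):
    """depth: 2D list of per-pixel depths -> list of (X, Y, Z) camera-frame points."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d > 0:  # skip invalid (zero-depth) pixels
                points.append(((u - cx) * d / fx, (v - cy) * d / fy, d))
    return points

depth = [[0.0, 2.0], [2.0, 4.0]]  # toy 2x2 depth map (metres)
pts = backproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts)  # [(1.0, -1.0, 2.0), (-1.0, 1.0, 2.0), (2.0, 2.0, 4.0)]
```

Transforming these points by the estimated camera pose, then inserting them into an octree occupancy grid, yields the 3D map described in the abstract.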

Estimating vegetation index for outdoor free-range pig production using YOLO

  • Sang-Hyon Oh;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology / Vol. 65, No. 3 / pp. 638-651 / 2023
  • The objective of this study was to quantitatively estimate grazing-area damage in outdoor free-range pig production using an unmanned aerial vehicle (UAV) with an RGB image sensor. Ten corn-field images were captured by the UAV over approximately two weeks, during which gestating sows grazed freely on a 100 × 50 m corn field. The images were corrected to a bird's-eye view, divided into 32 segments, and fed sequentially into a YOLOv4 detector to detect the corn according to its condition. Of the 320 segmented images, 43 were selected at random as raw training images and flipped to create 86 images; these were further augmented by rotating them in 5-degree increments, giving 6,192 images, and then by applying three random color transformations to each image, resulting in a dataset of 24,768 images. The occupancy rate of corn in the field was estimated efficiently with You Only Look Once (YOLO). Observation began on day 2, and almost all the corn had disappeared by day 9. When grazing 20 sows on a 50 × 100 m cornfield (250 m²/sow), it appears the animals should be rotated to other grazing areas after at most about five days to protect the cover crop. In agricultural technology, most machine and deep learning research concerns the detection of fruits and pests, and other application fields need study. Applying deep learning also requires large-scale image data collected by field experts as training data; when the available data are insufficient, extensive data augmentation is required.
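The augmentation arithmetic in the abstract can be reproduced directly; each stage multiplies the image count:

```python
# Dataset-size arithmetic from the abstract.
raw = 43                       # randomly selected raw training images
flipped = raw * 2              # original + horizontal flip -> 86
rotations = 360 // 5           # 5-degree increments -> 72 rotated copies each
rotated = flipped * rotations  # 86 * 72 = 6,192 images
color_augmented = rotated * 4  # each image kept + 3 random color transforms -> 24,768

print(flipped, rotated, color_augmented)  # 86 6192 24768
```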

Target Detection Algorithm Based on Seismic Sensor for Adaptation of Background Noise

  • 이재일;이종현;배진호;권지훈
    • Journal of the Institute of Electronics Engineers of Korea / Vol. 50, No. 7 / pp. 258-266 / 2013
  • This paper proposes an adaptive detection algorithm that reduces false alarms in a seismic-sensor-based detection system by accounting for irregularly changing noise characteristics. The proposed algorithm consists of a first-stage detector using a kernel function and a second-stage detector that applies detection-confirmation steps. The kernel function of the first-stage detector obtains its threshold by the Neyman-Pearson criterion, using statistical parameters of the noise estimated from the measured signal, so that the threshold tracks changes in the noise. The second-stage detector computes the occupancy time of the footstep signal from the number of samples detected in the first stage and then applies four detection-confirmation steps. To validate the proposed algorithm, detection experiments on footstep signals were performed using measured walking and running seismic signals, and the results were compared with detection using a fixed threshold. The proposed first-stage detector achieved a high detection rate of 95% for human walking and running out to a range of 10 m. The false-alarm probability fell from 40% with a fixed threshold to 20%, and to below 4% after applying the detection-confirmation steps.
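The adaptive-threshold idea can be sketched under a Gaussian noise assumption: for a fixed false-alarm probability P_fa, the Neyman-Pearson threshold on an energy-like statistic is the (1 − P_fa) quantile of the estimated noise distribution. The noise and signal samples below are made up for illustration; the paper estimates the noise statistics from the measured seismic signal and uses its own kernel-function detector.

```python
# Adaptive Neyman-Pearson threshold sketch under a Gaussian noise model.
from statistics import NormalDist, mean, stdev

def np_threshold(noise_samples, p_fa):
    """Threshold giving false-alarm probability p_fa under a Gaussian fit to the noise."""
    mu, sigma = mean(noise_samples), stdev(noise_samples)
    return NormalDist(mu, sigma).inv_cdf(1.0 - p_fa)

noise = [0.1, -0.2, 0.05, 0.0, 0.15, -0.1, 0.2, -0.05]  # background vibration samples
thr = np_threshold(noise, p_fa=0.01)

signal = [0.1, 0.05, 1.2, 1.5, 0.0]  # two footstep-like spikes among noise
detections = [x > thr for x in signal]
print(f"threshold = {thr:.3f}, detections = {detections}")
```

Re-estimating the noise statistics over a sliding window makes the threshold adapt as the background changes, which is what lowers the false-alarm rate relative to a fixed threshold.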