• Title/Summary/Keyword: Occupancy Sensor


A Study on Incident Detection Model using Fuzzy Logic and Traffic Pattern (퍼지논리와 교통패턴을 이용한 유고검지 모형에 관한 연구)

  • Hong, Nam-Kwan;Choi, Jin-Woo;Yang, Young-Kyu
    • Journal of Korea Spatial Information System Society / v.9 no.1 / pp.79-90 / 2007
  • In this paper, we propose and implement an incident detection model that combines a fuzzy algorithm with traffic patterns in order to improve the efficiency of incident detection on highways with ramps. Most existing algorithms were developed for highways without ramps and cannot be used to detect incidents on highways with ramps. The data used for model building are traffic volume, occupancy, and speed, collected by a loop sensor at 5-minute intervals at a point on the Internal Circular Highway of Seoul over a period of three months. In this model, the three parameters collected by the sensor were fuzzified and combined with the daily traffic pattern of the link. The efficiency of the proposed model was tested by comparing its results against the traditional APID algorithm and against a fuzzy algorithm without the pattern data. The results showed a significant improvement, reducing the false incident detection rate by 18%.

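The abstract does not give the paper's membership functions or rule base; a minimal sketch of the general idea, with entirely hypothetical thresholds, is to fuzzify occupancy, speed, and the deviation from the link's daily speed pattern, then combine them with a fuzzy AND:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def incident_score(occupancy, speed, pattern_speed):
    """Fuzzy incident score: high occupancy AND low speed AND a large
    drop below the link's daily speed pattern (all thresholds illustrative)."""
    high_occ = triangular(occupancy, 30, 70, 100)            # percent occupancy
    low_speed = triangular(speed, -1, 0, 40)                 # km/h
    deviation = triangular(pattern_speed - speed, 10, 40, 200)
    return min(high_occ, low_speed, deviation)               # min = fuzzy AND

# Free flow: low occupancy, speed matches the pattern -> score 0
print(incident_score(occupancy=10, speed=80, pattern_speed=80))
# Sudden congestion far below the daily pattern -> elevated score
print(incident_score(occupancy=65, speed=15, pattern_speed=75))
```

Combining the sensor readings with the pattern term is what lets unusual-but-recurring congestion (already present in the daily pattern) avoid triggering a false alarm.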

A Bio-inspired Hybrid Cross-Layer Routing Protocol for Energy Preservation in WSN-Assisted IoT

  • Tandon, Aditya;Kumar, Pramod;Rishiwal, Vinay;Yadav, Mano;Yadav, Preeti
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1317-1341 / 2021
  • Nowadays, the Internet of Things (IoT) is adopted to enable effective and smooth communication among different networks. In some specific applications, Wireless Sensor Networks (WSNs) are used in IoT to gather particular data without human interaction. WSNs are self-organizing in nature, so they mostly prefer multi-hop data forwarding; thus, to achieve better communication, a cross-layer routing strategy is preferred, in which routing is processed across three layers: transport, data link, and physical. Even though effective communication is achieved via a cross-layer routing strategy, energy is another constraint in WSN-assisted IoT, and cluster-based communication is one of the most widely used strategies for preserving it. This paper proposes a Bio-inspired Hybrid Cross-Layer Routing (BiHCLR) protocol to achieve effective and energy-preserving routing in WSN-assisted IoT. Initially, the deployed sensor nodes are arranged in the form of a grid as per the grid-based routing strategy. To enable energy preservation in BiHCLR, a fuzzy logic approach is executed to select the Cluster Head (CH) for every cell of the grid. Then a hybrid bio-inspired algorithm, combining the Moth Search and Salp Swarm optimization techniques, is used to select the routing path. The performance of the proposed BiHCLR is evaluated through a Quality of Service (QoS) analysis in terms of packet loss, bit error rate, transmission delay, network lifetime, buffer occupancy, and throughput. These results are validated by comparison with conventional routing strategies such as Fuzzy-rule-based Energy Efficient Clustering and Immune-Inspired Routing (FEEC-IIR), Neuro-Fuzzy Emperor Penguin Optimization (NF-EPO), Fuzzy Reinforcement Learning-based Data Gathering (FRLDG), and Hierarchical Energy Efficient Data gathering (HEED). Ultimately, the proposed BiHCLR outperforms all of these conventional techniques.
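The paper's fuzzy rules for CH election are not given in the abstract; a toy sketch of the idea, scoring each candidate node in a grid cell on two hypothetical criteria (residual energy and distance to the cell center) and electing the best, could look like:

```python
def ch_score(residual_energy, dist_to_center, max_energy=1.0, cell_radius=10.0):
    """Hypothetical fuzzy-style fitness: prefer nodes with high residual
    energy that sit near the cell center (both criteria illustrative)."""
    energy_mu = min(max(residual_energy / max_energy, 0.0), 1.0)
    center_mu = min(max(1.0 - dist_to_center / cell_radius, 0.0), 1.0)
    return min(energy_mu, center_mu)  # min acts as fuzzy AND

def elect_cluster_head(nodes):
    """nodes: list of (node_id, residual_energy, dist_to_cell_center)."""
    return max(nodes, key=lambda n: ch_score(n[1], n[2]))[0]

# One grid cell with three candidate nodes
cell = [("n1", 0.9, 8.0), ("n2", 0.8, 2.0), ("n3", 0.4, 1.0)]
print(elect_cluster_head(cell))
```

Electing a CH per grid cell keeps long-range transmissions confined to CHs, which is what preserves energy at the ordinary member nodes.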

Estimation of GHG emission and potential reduction on the campus by LEAP Model (LEAP 모델을 이용한 대학의 온실가스 배출량 및 감축잠재량 분석)

  • Woo, Jeong-Ho;Choi, Kyoung-Sik
    • Journal of Environmental Impact Assessment / v.21 no.3 / pp.409-415 / 2012
  • The post-Kyoto regime has been under discussion along with GHG reduction commitments, and a GHG and energy target management system has been applied as a domestic measure in the country. Universities are major GHG emission sources, so it is very important for a campus to build a GHG inventory system and estimate its potential GHG emission reduction. In general, the GHG inventory on the campus was built following the IPCC guidance, with the classification of scopes 1, 2, and 3. Electricity accounted for the largest portion of GHG emissions on the campus, at 5,053.90 $tonsCO_2eq/yr$ in 2009; the manufacturing sector was the second-highest emitter, representing GHG from laboratories. Potential GHG reductions were planned under several assumptions, such as installation of occupancy sensors, exchanging lamps for LEDs, and photovoltaic power generation, and these reduction scenarios were simulated with the LEAP model. For 2020, the outlook of GHG emissions without any reduction plan was estimated at 17,435.98 tons of $CO_2$. If the reduction scenarios were applied, GHG emissions in 2020 would be 16,507.60 tons of $CO_2$, a 5.3% potential reduction.
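The quoted 5.3% figure follows directly from the two 2020 estimates in the abstract:

```python
baseline = 17435.98       # tons CO2 in 2020, business-as-usual outlook
with_measures = 16507.60  # tons CO2 with occupancy sensors, LED lamps, PV
reduction_pct = (baseline - with_measures) / baseline * 100
print(f"{reduction_pct:.1f}%")  # 5.3%
```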

Getting On and Off an Elevator Safely for a Mobile Robot Using RGB-D Sensors (RGB-D 센서를 이용한 이동로봇의 안전한 엘리베이터 승하차)

  • Kim, Jihwan;Jung, Minkuk;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.15 no.1 / pp.55-61 / 2020
  • Getting on and off an elevator is one of the most important parts of multi-floor navigation for a mobile robot. In this study, we propose a method for pose recognition of elevator doors, safe path planning, and motion estimation of a robot using RGB-D sensors so that it can get on and off the elevator safely. The accurate pose of the elevator doors is recognized using a particle filter algorithm. After the elevator door opens, the robot builds an occupancy grid map including the internal environment of the elevator and uses it to generate a safe path that avoids collisions with obstacles in the elevator. While getting on and off, the robot applies an optical flow algorithm to the floor image to detect situations in which it cannot move because of the elevator door sill. Results from various experiments show that the proposed method enables the robot to get on and off the elevator safely.
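The paper's grid resolution and planner are not specified; the core occupancy-grid idea, marking obstacle cells inside the elevator car and rejecting any planned path that crosses an occupied cell, can be sketched minimally (grid size and obstacle placement are illustrative):

```python
def make_grid(width, height):
    """0 = free cell, 1 = occupied cell."""
    return [[0] * width for _ in range(height)]

def mark_obstacle(grid, x, y):
    grid[y][x] = 1

def path_is_safe(grid, path):
    """path: list of (x, y) cells the robot would traverse."""
    return all(grid[y][x] == 0 for x, y in path)

grid = make_grid(6, 4)      # small map of the elevator car interior
mark_obstacle(grid, 2, 1)   # e.g. a passenger standing in the car
print(path_is_safe(grid, [(0, 1), (1, 1), (2, 1)]))  # blocked
print(path_is_safe(grid, [(0, 2), (1, 2), (2, 2)]))  # clear
```

A real implementation would fill the grid from RGB-D depth returns rather than by hand, but the path check against occupied cells is the same.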

A Study on Evaluation Method of AEB Test (AEB 시험평가 방법에 관한 연구)

  • Kim, BongJu;Lee, SeonBong
    • Journal of Auto-vehicle Safety Association / v.10 no.2 / pp.20-28 / 2018
  • Currently, the sharp increase in the number of cars is emerging as a serious social problem because of the loss of life from car accidents and environmental pollution, and Intelligent Transportation Systems (ITS) are being studied as a coping measure. For the commercialization of ITS, we aim to occupy the world market through Advanced Safety Vehicle (ASV) related system development and international standardization; however, the domestic environment is still very insufficient. Core technologies of ITS include Adaptive Cruise Control, Lane Keeping Assist System, Forward Collision Warning System, and Autonomous Emergency Braking (AEB). These technologies are applied to cars to support the driver. An AEB system stops the car automatically based on the relative speed and distance to an obstacle detected by sensors attached to the car, rather than depending on the driver; its purpose is to measure distance and speed and prevent accidents. Thus, AEB will be useful for reducing car accidents as automobile technology develops. This study suggests scenarios for a test evaluation method that suits the domestic environment and actively responds to the international standard for AEB test evaluation. By setting the goal as a function of distance, it also suggests a theoretical model according to the results. The study further aims to verify the theoretical evaluation standard for each proposed scenario through field driving tests on a test road using a car equipped with an AEB device. The suggested scenarios and theoretical model will be useful when conducting AEB test evaluations.
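The paper's decision logic is defined by its test scenarios; the generic relative-speed/distance check underlying AEB can be illustrated with a time-to-collision (TTC) threshold (the threshold value here is purely illustrative, not from the paper):

```python
def should_brake(distance_m, relative_speed_mps, ttc_threshold_s=2.0):
    """Trigger autonomous emergency braking when time-to-collision
    falls below the threshold. relative_speed_mps > 0 means closing
    on the obstacle ahead."""
    if relative_speed_mps <= 0:
        return False  # not closing: no intervention needed
    ttc = distance_m / relative_speed_mps
    return ttc < ttc_threshold_s

print(should_brake(distance_m=50.0, relative_speed_mps=10.0))  # TTC 5 s: no brake
print(should_brake(distance_m=15.0, relative_speed_mps=10.0))  # TTC 1.5 s: brake
```

Test scenarios such as those in the paper typically sweep the closing speed and initial distance to verify that intervention happens at the intended TTC boundary.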

LiDAR Static Obstacle Map based Position Correction Algorithm for Urban Autonomous Driving (도심 자율주행을 위한 라이다 정지 장애물 지도 기반 위치 보정 알고리즘)

  • Noh, Hanseok;Lee, Hyunsung;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.14 no.2 / pp.39-44 / 2022
  • This paper presents a LiDAR static-obstacle-map-based vehicle position correction algorithm for urban autonomous driving. Real-Time Kinematic (RTK) GPS is commonly used in highway automated vehicle systems, but in urban automated vehicle systems RTK GPS has trouble in shaded areas. Therefore, this paper presents a method to estimate the position of the host vehicle using an AVM camera, a front camera, LiDAR, and low-cost GPS, based on an Extended Kalman Filter (EKF). The static obstacle map (STOM) is constructed only from static objects based on Bayesian rule. To run the algorithm, an HD map and a static obstacle reference map (STORM) must be prepared in advance; STORM is constructed by accumulating and voxelizing the static obstacle map (STOM). The algorithm consists of the following main processes. First, sensor data are acquired from the low-cost GPS, AVM camera, front camera, and LiDAR. Second, the low-cost GPS data are used to define the initial point. Third, the AVM camera, front camera, and LiDAR point clouds are matched to the HD map and STORM using the Normal Distributions Transform (NDT) method. Finally, the position of the host vehicle is corrected based on the Extended Kalman Filter (EKF). The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and showed better performance than a lane-detection-only algorithm. It is expected to be more robust and accurate than a raw LiDAR point cloud matching algorithm in autonomous driving.
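The paper's EKF state and covariances are not given in the abstract; the final correction step can be illustrated with a one-dimensional Kalman update (all numbers illustrative), where the low-cost GPS provides the prediction and the NDT match against STORM acts as the measurement:

```python
def kalman_update(x_prior, p_prior, z, r):
    """Scalar Kalman/EKF measurement update.
    x_prior, p_prior: predicted position and its variance (e.g. from low-cost GPS);
    z, r: measured position from map matching and its variance."""
    k = p_prior / (p_prior + r)           # Kalman gain
    x_post = x_prior + k * (z - x_prior)  # corrected position
    p_post = (1 - k) * p_prior            # reduced uncertainty
    return x_post, p_post

# Noisy GPS prediction at 100.0 m (var 4.0), NDT match at 101.0 m (var 1.0)
x, p = kalman_update(x_prior=100.0, p_prior=4.0, z=101.0, r=1.0)
print(x, p)  # estimate pulled toward the more precise NDT measurement
```

Because the NDT match has the smaller variance here, the gain weights it heavily, which is the mechanism that lets map matching correct GPS drift in shaded urban areas.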

Unsupervised Monocular Depth Estimation Using Self-Attention for Autonomous Driving (자율주행을 위한 Self-Attention 기반 비지도 단안 카메라 영상 깊이 추정)

  • Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology / v.27 no.2 / pp.182-189 / 2023
  • Depth estimation is a key technology for 3D map generation in the autonomous driving of vehicles, robots, and drones. Existing sensor-based methods have high accuracy but are expensive and offer low resolution, whereas camera-based methods are more affordable with higher resolution. In this study, we propose self-attention-based unsupervised monocular depth estimation for a UAV camera system. A self-attention operation is applied to the network to improve global feature extraction, and we reduce the weight size of the self-attention operation to keep the computational cost low. The estimated depth and camera pose are transformed into a point cloud, which is mapped into a 3D map using an octree-based occupancy grid. The proposed network is evaluated using synthesized images and depth sequences from the Mid-Air dataset and demonstrates a 7.69% reduction in error compared to prior studies.

Estimating vegetation index for outdoor free-range pig production using YOLO

  • Sang-Hyon Oh;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology / v.65 no.3 / pp.638-651 / 2023
  • The objective of this study was to quantitatively estimate the level of grazing-area damage in outdoor free-range pig production using an Unmanned Aerial Vehicle (UAV) with an RGB image sensor. Ten corn field images were captured by a UAV over approximately two weeks, during which gestating sows were allowed to graze freely on a corn field measuring 100 × 50 m. The images were corrected to a bird's-eye view and then divided into 32 segments, which were sequentially input into the YOLOv4 detector to detect the corn according to its condition. The 43 raw training images selected randomly out of the 320 segmented images were flipped to create 86 images, and these were further augmented by rotating them in 5-degree increments to create a total of 6,192 images. These 6,192 images were augmented again by applying three random color transformations to each image, resulting in a dataset of 24,768 images. The occupancy rate of corn in the field was estimated efficiently using You Only Look Once (YOLO). As of the first day of observation (day 2), it was evident that almost all the corn had disappeared by the ninth day. When grazing 20 sows in a 50 × 100 m cornfield (250 m²/sow), it appears that the animals should be rotated to other grazing areas to protect the cover crop after at least five days. In agricultural technology, most research using machine and deep learning is related to the detection of fruits and pests, and research on other application fields is needed. In addition, large-scale image data collected by experts in the field are required as training data to apply deep learning; if the data required for deep learning are insufficient, extensive data augmentation is required.
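The augmentation counts quoted in the abstract are self-consistent: flipping doubles the 43 raw images, rotating in 5-degree increments yields 72 variants per image, and keeping the original alongside three random color transforms quadruples the set:

```python
raw = 43
flipped = raw * 2              # original + horizontal flip
rotations = 360 // 5           # 5-degree increments over a full turn
rotated = flipped * rotations  # 86 * 72
colored = rotated * 4          # original + 3 random color transforms
print(flipped, rotated, colored)  # 86 6192 24768
```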

Target Detection Algorithm Based on Seismic Sensor for Adaptation of Background Noise (배경잡음에 적응하는 진동센서 기반 목표물 탐지 알고리즘)

  • Lee, Jaeil;Lee, Chong Hyun;Bae, Jinho;Kwon, Jihoon
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.7 / pp.258-266 / 2013
  • We propose an adaptive detection algorithm that reduces false alarms by considering the characteristics of random noise in a detection system based on a seismic sensor. The proposed algorithm consists of a first detection step using a kernel function and a second detection step using detection classes. The kernel function of the first step is obtained from the threshold of the Neyman-Pearson decision criterion, using probability density functions that vary with the noise in the measured signal. The second-step detector consists of a four-level detection class obtained by calculating the occupancy time of footsteps within the samples detected in the first step. To verify the performance of the proposed algorithm, footstep detection was performed experimentally on measured signals of targets (walking and running), and the results were compared with those of a fixed-threshold detector. The first detection step achieved a high detection rate of 95% at ranges up to 10 m, and the false alarm probability decreased from 40% to 20% compared with the fixed-threshold detector. By applying the detection classes (the second-step detector), it was further reduced to less than 4%.
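The paper's kernel function and noise model are not given in the abstract; the underlying Neyman-Pearson idea, fixing the threshold from the current noise distribution at a target false-alarm probability, can be sketched under the simplifying assumption of Gaussian background noise (the noise statistics and 1% target are illustrative):

```python
from statistics import NormalDist

def np_threshold(noise_mean, noise_std, p_fa):
    """Detection threshold such that P(noise > threshold) = p_fa,
    i.e. the Neyman-Pearson false-alarm constraint for a Gaussian
    background-noise model."""
    return NormalDist(noise_mean, noise_std).inv_cdf(1.0 - p_fa)

# Re-estimating noise_mean/noise_std online makes the threshold adaptive
thr = np_threshold(noise_mean=0.0, noise_std=0.5, p_fa=0.01)
print(round(thr, 3))
```

Because the threshold is recomputed from the estimated noise statistics rather than fixed, the false-alarm rate stays near the design target even as the background noise level changes.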

Implementation of a real-time public transportation monitoring system (실시간 대중교통 모니터링 시스템 구현)

  • Eun-seo Oh;So-ryeong Gwon;Joung-min Oh;Bo Peng;Tae-kook Kim
    • Journal of Internet of Things and Convergence / v.10 no.4 / pp.9-19 / 2024
  • In this paper, a real-time public transportation monitoring system is proposed. The proposed system was implemented by developing a public transportation app and utilizing optical sensors, pressure sensors, and an object detection algorithm. Additionally, a bus model was created to verify the system's functionality. The proposed real-time public transportation monitoring system has three key features. First, the app can monitor congestion levels within public transportation by detecting seat occupancy and the total number of passengers based on changes in optical and pressure sensor readings. Second, to prevent errors in the optical sensor that can occur when multiple passengers board or disembark simultaneously, we explored the possibility of using the YOLO object detection algorithm to verify the number of passengers through CCTV footage. Third, convenience is enhanced by displaying occupied seats in different colors on a separate screen. The system also allows users to check their current location, available public transportation options, and remaining time until arrival. Therefore, the proposed system is expected to offer greater convenience to public transportation users.