• Title/Abstract/Keyword: sensor data fusion

Search results: 382 items (processing time: 0.022 s)

Foreground Segmentation and High-Resolution Depth Map Generation Using a Time-of-Flight Depth Camera

  • 강윤석;호요성
    • 한국통신학회논문지
    • /
    • Vol. 37C, No. 9
    • /
    • pp.751-756
    • /
    • 2012
  • This paper proposes a method for segmenting the foreground region and obtaining high-resolution depth information from a scene captured with a color camera and a Time-of-Flight (TOF) depth camera. A depth camera can measure scene depth in real time, but its output suffers from noise and distortion and correlates poorly with the color image. To use it together with the color image, we therefore perform color-image segmentation, 3D warping of the depth-camera image, and depth-boundary search, after which the foreground object is segmented and depth values are computed for the object and the background. The depth map estimated on the color image using the initial depth from the depth camera outperformed conventional methods such as stereo matching, and accurate depth was obtained even in textureless regions and around object boundaries.
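The abstract above combines a low-resolution, noisy TOF depth map with a registered color image. As a hedged illustration of that general idea (not the paper's actual pipeline), the sketch below upsamples a depth map with a joint bilateral filter guided by the color image; the exact scale relation, array shapes, and filter parameters are assumptions.

```python
# A minimal sketch of color-guided depth refinement (joint bilateral upsampling).
# This is NOT the paper's algorithm; it only illustrates the common idea of using
# a registered high-resolution color image to refine a noisy low-resolution TOF
# depth map. Assumes color_hr is (H, W, 3) and depth_lr is (H/scale, W/scale).
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale, radius=5,
                             sigma_spatial=3.0, sigma_color=10.0):
    """Upsample depth_lr to the color resolution, weighting neighbors by
    spatial distance and color similarity (invalid depths <= 0 are skipped)."""
    H, W = color_hr.shape[:2]
    depth_up = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ys, xs = y + dy, x + dx
                    if not (0 <= ys < H and 0 <= xs < W):
                        continue
                    d = depth_lr[ys // scale, xs // scale]  # nearest low-res sample
                    if d <= 0:
                        continue
                    w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_spatial ** 2))
                    diff = color_hr[y, x].astype(float) - color_hr[ys, xs].astype(float)
                    w_c = np.exp(-np.dot(diff, diff) / (2 * sigma_color ** 2))
                    num += w_s * w_c * d
                    den += w_s * w_c
            depth_up[y, x] = num / den if den > 0 else 0.0
    return depth_up
```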

Intelligent Healthcare Service Provisioning Using Ontology with Low-Level Sensory Data

  • Khattak, Asad Masood;Pervez, Zeeshan;Lee, Sung-Young;Lee, Young-Koo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 5, No. 11
    • /
    • pp.2016-2034
    • /
    • 2011
  • Ubiquitous Healthcare (u-Healthcare) is the intelligent delivery of healthcare services to users anytime and anywhere. Providing robust healthcare services requires recognition of the patient's daily life activities. Context information combined with the user's real-time daily activities enables more personalized services, service suggestions, and profile-driven changes in system behavior. In this paper, we focus on the intelligent manipulation of activities using the Context-aware Activity Manipulation Engine (CAME), the core of the Human Activity Recognition Engine (HARE). Activities are recognized by video-based, wearable sensor-based, and location-based activity recognition engines, and an ontology-based activity fusion with subject profile information yields a personalized system response. CAME receives real-time low-level activities, infers higher-level activities, performs situation analysis, suggests personalized services, and makes appropriate decisions. A two-phase filtering technique is applied for intelligent processing of information (represented in an ontology) and for making decisions based on rules that incorporate expert knowledge. Experimental results for intelligent processing of activity information showed comparatively good accuracy, and extending CAME with activity filters and T-Box inference further improved accuracy and response time over the initial CAME results.
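As a toy illustration of the kind of two-phase, rule-based reasoning CAME performs on low-level activities plus context (not the paper's ontology/OWL implementation), the sketch below first filters unusable observations and then applies expert rules; the activity names, locations, and rules are hypothetical.

```python
# Toy two-phase inference over low-level activities and context.
# Phase 1 filters noisy/unrecognized observations; phase 2 applies expert rules.
# Activity names, locations, durations, and rules are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Observation:
    activity: str      # low-level activity from a recognition engine
    location: str      # context: where the subject is
    duration_min: int  # how long the activity has lasted

RULES = [
    # (condition, inferred higher-level situation / suggested service)
    (lambda o: o.activity == "lying" and o.location == "kitchen",
     "possible fall -> alert caregiver"),
    (lambda o: o.activity == "sitting" and o.duration_min > 120,
     "prolonged inactivity -> suggest light exercise"),
    (lambda o: o.activity == "walking" and o.location == "outdoors",
     "outdoor exercise -> log activity"),
]

def infer(observation: Observation) -> list[str]:
    if observation.activity == "unknown":   # phase 1: drop unusable input
        return []
    return [result for cond, result in RULES if cond(observation)]  # phase 2

print(infer(Observation("sitting", "living room", 150)))
```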

Fault Detection and Diagnosis Simulation for CAV AHU System

  • 한동원;장영수;김서영;김용찬
    • 설비공학논문집
    • /
    • Vol. 22, No. 10
    • /
    • pp.687-696
    • /
    • 2010
  • In this study, an FDD algorithm was developed using the normalized distance method and a general pattern classifier method applicable to constant air volume air handling unit (CAV AHU) systems. A simulation model using TRNSYS and EES was developed to obtain characteristic data of the CAV AHU system under normal and faulty operation. A sensitivity analysis of fault detection was carried out with respect to fault progress. When the differential pressure across the mixed-air filter increased by more than about 105 Pa, the FDD algorithm was able to detect the fault. The return air temperature is a key measurement for controlling cooling capacity, so detecting its measurement error is important. A measurement error of the return air temperature sensor below 1.2 °C could be detected by the FDD algorithm. The FDD algorithm developed in this study was found to indicate each failure mode accurately.
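The normalized distance method mentioned above compares current measurements against normal-operation statistics. A minimal sketch of that idea follows; the sensor list, normal-operation statistics, and threshold are illustrative, not the authors' values.

```python
# Residual-based fault detection with a normalized distance (illustrative only).
# Normal-operation statistics would come from a model such as the TRNSYS/EES
# simulation described above; the numbers below are made up for the example.
import numpy as np

def normalized_distance(measured, normal_mean, normal_std):
    """Per-sensor residual normalized by normal-operation variability."""
    return np.abs((measured - normal_mean) / normal_std)

normal_mean = np.array([24.0, 55.0, 250.0])   # return air temp [C], RH [%], filter dP [Pa]
normal_std  = np.array([0.4, 2.0, 10.0])
measured    = np.array([24.2, 54.0, 370.0])   # current measurements

d = normalized_distance(measured, normal_mean, normal_std)
threshold = 3.0                                # e.g., a 3-sigma rule
faulty = d > threshold
print(dict(zip(["return_air_temp", "return_air_RH", "filter_dP"], faulty)))
```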

A Development of the Autonomous Driving System based on a Precise Digital Map

  • 김병광;이철하;권수림;정창영;천창환;박민우;나용천
    • 자동차안전학회지
    • /
    • Vol. 9, No. 2
    • /
    • pp.6-12
    • /
    • 2017
  • An autonomous driving system based on a precise digital map is developed. The system is implemented on Hyundai's Tucson fuel cell vehicle, which carries a camera, smart cruise control (SCC) and blind spot detection (BSD) radars, 4-layer LiDARs, and a standard GPS module. The precise digital map contains information such as lanes, speed bumps, crosswalks, and landmarks, all distinguishable at lane level. The system fuses the sensed data around the vehicle for localization and estimates the vehicle's position on the precise map. Objects around the vehicle are detected by the sensor fusion system, and collision threat assessment is performed by detecting dangerous vehicles on the precise map. When an obstacle is on the driving path, the system estimates the time to collision and slows the vehicle down. The vehicle has driven autonomously at the Hyundai-Kia Namyang Research Center.
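In its simplest form, the collision threat assessment described above reduces to a time-to-collision check followed by a speed reduction. The sketch below shows only that step, assuming the fused perception stack already reports the gap to the obstacle and the closing speed; the threshold and slow-down policy are hypothetical, not the production logic.

```python
# Time-to-collision (TTC) check and a simple slow-down policy (illustrative).
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """TTC = gap / closing speed; infinite when the gap is not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return gap_m / closing_speed_mps

def target_speed(current_speed_mps: float, ttc_s: float,
                 ttc_threshold_s: float = 3.0) -> float:
    """Scale speed down proportionally once TTC falls below a threshold."""
    if ttc_s >= ttc_threshold_s:
        return current_speed_mps
    return current_speed_mps * max(ttc_s / ttc_threshold_s, 0.0)

# Example: obstacle 25 m ahead, closing at 10 m/s, ego speed 15 m/s.
ttc = time_to_collision(25.0, 10.0)      # 2.5 s
print(ttc, target_speed(15.0, ttc))      # 2.5, 12.5 m/s
```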

Attitude and Direction Control of the Unicycle Robot Using Fuzzy-Sliding Mode Control

  • 이재오;한성익;한인우;이석인;이장명
    • 제어로봇시스템학회논문지
    • /
    • Vol. 18, No. 3
    • /
    • pp.275-284
    • /
    • 2012
  • This paper proposes attitude and direction control of a single-wheel balancing robot. The unicycle robot is controlled by two independent control laws: a mobile inverted pendulum control method for the pitch axis and a reaction wheel pendulum control method for the roll axis. The roll and pitch dynamics are assumed to be decoupled, so they are derived independently, with the interaction between them treated as a disturbance to each. Each control law is implemented by a separate controller. The unicycle robot has two DC motors, one driving the disk for roll and one driving the wheel for pitch. Since there is no direct actuation of the yaw direction, the paper also proposes a method for changing the yaw direction. The angle data are obtained by fusing a gyro sensor and an accelerometer. Experimental results show the performance of the controllers and verify the effectiveness of the proposed control algorithm.
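The abstract states that the angle data come from fusing a gyro and an accelerometer but does not specify the fusion method. A complementary filter is one common choice; the sketch below shows that variant under assumed gains, axes, and sample rate.

```python
# Complementary-filter fusion of gyro and accelerometer for a tilt angle.
# One common fusion scheme, shown as an example; the paper does not state which
# method it uses, so alpha, the axes, and the 100 Hz loop rate are assumptions.
import math

def complementary_filter(angle_prev, gyro_rate, ax, az, dt, alpha=0.98):
    """Blend the integrated gyro rate (good short-term, drifts long-term) with
    the accelerometer tilt estimate (drift-free but noisy)."""
    angle_gyro = angle_prev + gyro_rate * dt   # integrate angular rate [rad]
    angle_acc = math.atan2(ax, az)             # tilt from the gravity direction [rad]
    return alpha * angle_gyro + (1.0 - alpha) * angle_acc

# Example: two samples of a 100 Hz loop (gyro rate in rad/s, accel in g units).
angle = 0.0
for gyro_rate, ax, az in [(0.02, 0.05, 0.99), (0.01, 0.06, 0.99)]:
    angle = complementary_filter(angle, gyro_rate, ax, az, dt=0.01)
print(angle)
```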

Common Optical System for the Fusion of Three-dimensional Images and Infrared Images

  • Kim, Duck-Lae;Jung, Bo Hee;Kong, Hyun-Bae;Ok, Chang-Min;Lee, Seung-Tae
    • Current Optics and Photonics
    • /
    • Vol. 3, No. 1
    • /
    • pp.8-15
    • /
    • 2019
  • We describe a common optical system that merges a LADAR system, which generates a point cloud, and a more traditional imaging system operating in the LWIR, which generates image data. The optimum diameter of the entrance pupil was determined by analysis of detection ranges of the LADAR sensor, and the result was applied to design a common optical system using LADAR sensors and LWIR sensors; the performance of these sensors was then evaluated. The minimum detectable signal of the 128 × 128 pixel LADAR detector was calculated as 20.5 nW. The detection range of the LADAR optical system was calculated to be 1,000 m, and according to the results, the optimum diameter of the entrance pupil was determined to be 15.7 cm. The modulation transfer function (MTF) in relation to the diffraction limit of the designed common optical system was analyzed and, according to the results, the MTF of the LADAR optical system was 98.8% at the spatial frequency of 5 cycles per millimeter, while that of the LWIR optical system was 92.4% at the spatial frequency of 29 cycles per millimeter. The detection, recognition, and identification distances of the LWIR optical system were determined to be 5.12, 2.82, and 1.96 km, respectively.
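The MTF figures above are quoted relative to the diffraction limit. For reference (standard optics, not taken from the paper), the diffraction-limited MTF of an incoherent imaging system with a circular aperture is:

```latex
% Diffraction-limited MTF of an incoherent, circular-aperture imaging system.
\[
  \mathrm{MTF}(\nu) = \frac{2}{\pi}\left[\arccos\!\left(\frac{\nu}{\nu_c}\right)
  - \frac{\nu}{\nu_c}\sqrt{1-\left(\frac{\nu}{\nu_c}\right)^{2}}\right],
  \qquad
  \nu_c = \frac{1}{\lambda\,(F/\#)} = \frac{D}{\lambda f},
\]
% nu: spatial frequency, lambda: wavelength, D: entrance-pupil diameter, f: focal length.
```

A larger entrance-pupil diameter D raises the cutoff frequency, which is why the pupil sized from the detection-range requirement can then be checked against the MTF margins quoted in the abstract.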

Application of Internet of Things Based Monitoring System for indoor Ganoderma Lucidum Cultivation

  • Quoc Cuong Nguyen;Hoang Tan Huynh;Tuong So Dao;HyukDong Kwon
    • International journal of advanced smart convergence
    • /
    • Vol. 12, No. 2
    • /
    • pp.153-158
    • /
    • 2023
  • Most agricultural planting is based on traditional farming and demands a great deal of manual work. Modern agricultural technology has proven better than traditional practices at improving the efficiency and productivity of farms. The Internet of Things (IoT) is commonly applied in modern agriculture, providing the farmer with real-time monitoring of farm conditions anywhere and anytime. Applying IoT with sensors that measure and monitor the humidity and temperature in the mushroom farm can therefore address this problem. This paper proposes an IoT-based monitoring system for indoor Ganoderma lucidum cultivation at minimal cost in terms of hardware resources while remaining practical. The results show that the temperature and humidity data vary with the weather, and the preliminary experiments demonstrated that all system parameters were optimized and the objective was achieved. In addition, the analysis shows that the quality of Ganoderma lucidum produced with the proposed method conforms to regulations in Vietnam.
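The monitoring described above boils down to periodically sampling temperature and humidity, reporting the readings, and flagging out-of-range conditions. A minimal sketch follows; the paper does not specify its hardware, transport protocol, or thresholds, so the sensor driver, target ranges, and reporting step are all placeholders.

```python
# Minimal IoT-style monitoring loop (illustrative; thresholds and driver are assumed).
import json
import time

TEMP_RANGE = (26.0, 30.0)     # assumed suitable air temperature [deg C]
HUMID_RANGE = (80.0, 90.0)    # assumed suitable relative humidity [%]

def read_sensor():
    """Placeholder for the real driver call (e.g., a DHT22/SHT31 library)."""
    return 28.5, 82.0         # temperature [deg C], relative humidity [%]

def report(payload: dict):
    """Stand-in for sending telemetry to the IoT backend (MQTT, HTTP, etc.)."""
    print(json.dumps(payload))

def monitor_once():
    temp, humid = read_sensor()
    alerts = []
    if not (TEMP_RANGE[0] <= temp <= TEMP_RANGE[1]):
        alerts.append("temperature_out_of_range")
    if not (HUMID_RANGE[0] <= humid <= HUMID_RANGE[1]):
        alerts.append("humidity_out_of_range")
    report({"t_c": temp, "rh_pct": humid, "alerts": alerts})

if __name__ == "__main__":
    for _ in range(3):        # a deployment would loop indefinitely
        monitor_once()
        time.sleep(1)         # e.g., sample once a minute in practice
```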

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국정보통신학회 2021 Fall Conference
    • /
    • pp.422-424
    • /
    • 2021
  • This paper presents an approach that fuses multiple RGB cameras for visual object recognition, based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in the form of a point cloud map. The goal of multi-camera perception is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, which helps the AV navigate toward its goal. Running object detection on numerous cameras can slow down real-time processing, so the chosen convolutional neural network must also suit the capacity of the hardware. Localization of the classified detected objects is based on the 3D point cloud environment: the LiDAR point cloud data are first parsed, and objects are then localized using a 3D Euclidean clustering method, which gives accurate object positions. We evaluated the method on our own dataset collected with a VLP-16 and multiple cameras, and the results demonstrate the method and the multi-sensor fusion strategy.
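The localization step above relies on 3D Euclidean clustering of the parsed LiDAR point cloud. The sketch below shows one plain region-growing implementation of that idea (not the authors' code); the distance tolerance and minimum cluster size are illustrative.

```python
# Euclidean clustering of a 3D point cloud via KD-tree region growing.
# Tolerance and minimum cluster size are illustrative; ground removal and
# sensor parsing are assumed to have been done already.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tol=0.5, min_size=10):
    """Group points whose neighbors lie within `tol` meters of each other."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], r=tol):
                if nb in unvisited:
                    unvisited.remove(nb)
                    queue.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters

# Example with random points standing in for a parsed VLP-16 scan.
pts = np.random.rand(2000, 3) * np.array([10.0, 10.0, 2.0])
for i, c in enumerate(euclidean_clusters(pts)):
    centroid = pts[c].mean(axis=0)   # object position estimate on the map
    print(f"cluster {i}: {len(c)} points, centroid {centroid}")
```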


Diagnosis of Sarcopenia in the Elderly and Development of Deep Learning Algorithm Exploiting Smart Devices

  • 윤영욱;손정우
    • 한국재난정보학회 논문집
    • /
    • Vol. 18, No. 3
    • /
    • pp.433-443
    • /
    • 2022
  • Purpose: This paper proposes and studies a deep learning algorithm that estimates and predicts sarcopenia by taking advantage of the high penetration rate of smart devices. Method: Experimental data for deep learning were collected with the inertial sensors built into smart devices. A test application was implemented to collect data for five states: 'normal' and 'abnormal' gait, plus 'running', 'falling', and 'squat' postures. Result: Prediction accuracy was analyzed for LSTM, CNN, and RNN models, and a fused CNN-LSTM model achieved 99.87% accuracy for binary classification and 92.30% for multi-class classification. Conclusion: The study used smart devices based on the observation that people with sarcopenia develop gait abnormalities. The results can be used to strengthen disaster-safety measures against risks arising from sarcopenia.
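The abstract reports a fused CNN-LSTM model over smartphone inertial data for five classes. A hedged sketch of that model family follows; the window length, channel count, layer sizes, and training settings are assumptions, not the authors' configuration.

```python
# Sketch of a CNN-LSTM classifier for windows of smartphone IMU data
# (3-axis accelerometer + 3-axis gyroscope). Layer sizes, window length, and
# hyperparameters are illustrative assumptions based on the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 128      # samples per window (e.g., ~2.5 s at 50 Hz)
CHANNELS = 6      # 3-axis accelerometer + 3-axis gyroscope
NUM_CLASSES = 5   # normal gait, abnormal gait, running, fall, squat

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),   # local motion features
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                        # temporal dependencies
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, epochs=30, validation_split=0.2)  # X: (N, 128, 6)
```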

A Study for Efficient Methods of System Calibration between Optical and Range Sensors by Using Simulation

  • 최원석;김창재;김용일
    • 한국측량학회지
    • /
    • Vol. 33, No. 2
    • /
    • pp.95-101
    • /
    • 2015
  • This study uses simulation to verify methods for efficiently performing system calibration between a range sensor and an optical sensor. To this end, a single calibration test field reflecting the characteristics of the range and optical sensors was designed, and a simulation environment was built that reflects the various characteristics and error levels of each sensor. The simulation data were generated under the assumption that images and range data were acquired from various positions, in order to examine how acquisition position affects system calibration accuracy. The simulated data were then processed with two system calibration methods, single-photo resection and block adjustment, and their accuracies were compared. The simulations showed that performing block adjustment with data captured at various angles from 2 to 4 m from the test field yielded more efficient and more accurate system calibration. Higher accuracy was also obtained when the range observations of the range sensor were included in the system calibration.
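Both calibration methods compared above, single-photo resection and block adjustment, estimate the camera's orientation from the collinearity condition. For reference (standard photogrammetry, not specific to this paper):

```latex
% Collinearity equations relating an image point (x, y) to a ground point (X, Y, Z).
\[
  x - x_0 = -f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
                    {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},
  \qquad
  y - y_0 = -f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
                    {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)},
\]
% (x0, y0, f): interior orientation; (X_S, Y_S, Z_S) and r_ij: exterior orientation
% (perspective-center position and rotation-matrix elements) estimated by the adjustment.
```

Block adjustment solves these equations jointly over all images acquired from the different positions, and, as the abstract reports, adding the range sensor's distance observations as extra constraints further improves the calibration accuracy.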