• Title/Summary/Keyword: Multiple sensors

Image Fusion Framework for Enhancing Spatial Resolution of Satellite Image using Structure-Texture Decomposition (구조-텍스처 분할을 이용한 위성영상 융합 프레임워크)

  • Yoo, Daehoon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.21-29
    • /
    • 2019
  • This paper proposes a novel framework for image fusion of satellite imagery that enhances the spatial resolution of the image via structure-texture decomposition. The resolution of satellite imagery depends on the sensor: for example, panchromatic images have high spatial resolution but only a single gray band, whereas multi-spectral images have low spatial resolution but multiple bands. To enhance the spatial resolution of low-resolution images, such as multi-spectral or infrared images, the proposed framework combines the structures from the low-resolution image with the textures from the high-resolution image. To improve the spatial quality of structural edges, the structure image from the low-resolution image is refined with a guided filter, using the structure image from the high-resolution image as the guidance image. The combination step is performed by pixel-wise addition of the filtered structure image and the texture image. Quantitative and qualitative evaluations demonstrate that the proposed method preserves the spectral and spatial fidelity of the input images.
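
The pipeline described here (decompose both images, guide-filter the low-resolution structure with the high-resolution structure, then add the high-resolution texture back) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: it assumes OpenCV's contrib guidedFilter, uses a bilateral filter as a stand-in for the paper's structure-texture decomposition, and the function names are ours.

```python
import cv2
import numpy as np

def structure_texture_split(img, sigma_space=5):
    """Roughly split an image into a smooth structure layer and a residual
    texture layer (bilateral smoothing as a stand-in for the decomposition)."""
    structure = cv2.bilateralFilter(img, d=-1, sigmaColor=25,
                                    sigmaSpace=sigma_space).astype(np.float32)
    texture = img.astype(np.float32) - structure
    return structure, texture

def fuse_band(ms_band_upsampled, pan):
    """Fuse one upsampled multi-spectral band with the panchromatic image."""
    ms_struct, _ = structure_texture_split(ms_band_upsampled)
    pan_struct, pan_texture = structure_texture_split(pan)
    # Guided filtering of the low-resolution structure, with the
    # high-resolution (pan) structure as the guidance image.
    filtered = cv2.ximgproc.guidedFilter(guide=pan_struct, src=ms_struct,
                                         radius=8, eps=1e-2)
    # Pixel-wise addition of the filtered structure and the pan texture.
    return np.clip(filtered + pan_texture, 0, 255).astype(np.uint8)
```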

A collaborative Serious Game for fire disaster evacuation drill in Metaverse (재난 탈출 협동 훈련 기능성 게임의 메타버스 플랫폼 구현)

  • Lee, Sangho;Ha, Gyutae;Kim, Hongseok;Kim, Shiho
    • Journal of Platform Technology
    • /
    • v.9 no.3
    • /
    • pp.70-77
    • /
    • 2021
  • The purpose of serious games on an immersive Metaverse platform is to provide users with both fun and intriguing learning experiences. We propose a serious game for self-trainable fire evacuation drills, with collaboration among avatars synchronized with multiple trainees, and optionally with a real-time supervisor, located at different remote physical locations. The proposed system architecture consists of wearable motion sensors and a head-mounted display that synchronize each user's intended motions with his or her avatar's activities in the Metaverse cyberspace. The proposed system provides an immersive yet inexpensive environment with an easy-to-use interface for a cyber-experience-based fire evacuation training system. The proposed configuration of the user-avatar interface, the collaborative learning environment, and the evaluation system of the VR serious game are expected to be applicable to other serious games. The game was implemented only for a predefined building-fire scenario, but the platform can be extended to various disaster situations that may affect the public.

Users' Preference and Acceptance of Smart Home Technologies (사용자의 스마트 주거 기술 선호와 수용에 관한 연구)

  • Cho, Myung Eun;Kim, Mi Jeong
    • Journal of the Architectural Institute of Korea Planning & Design
    • /
    • v.34 no.11
    • /
    • pp.75-84
    • /
    • 2018
  • This study analyzed users' acceptance of and intention to use smart home technologies, in addition to their needs and preferences, and identified differences in technology preference and acceptance by various factors. The subjects were residents in their 40s and 60s living in Seoul or its suburbs; questionnaires were administered to those in their 40s, while interviews combined with questionnaires were conducted with those in their 60s. A total of 105 questionnaires were used as data, and frequency, mean, cross-tabulation, independent-sample t-test, one-way ANOVA, and multiple regression analyses were performed using SPSS 23. The results of this study are as follows. First, hypertension, hyperlipidemia, and hypercholesterolemia were the most common diseases among respondents, and respondents wished to continue living in their current homes as long as there was no discomfort. Therefore, smart home development should support daily living and health care so that residents can live healthily for a long time in their own living space. Second, the technologies residents needed most were control of the residential environment and monitoring of residents' health and physiological changes. The most preferred sensor types were motion sensors and speech recognition, while video cameras had a very low preference. Third, technology anxiety was the most significant factor influencing the intention to accept smart home technology: the greater the technology anxiety, the weaker the acceptance. Fourth, when applying smart residential technology in homes, various resident characteristics should be considered. Age and technology intimacy were the most influential variables, and there were corresponding differences in technology preference and acceptance. Therefore, user-friendly smart home planning should take these results into consideration.

DCNN Optimization Using Multi-Resolution Image Fusion

  • Alshehri, Abdullah A.;Lutz, Adam;Ezekiel, Soundararajan;Pearlstein, Larry;Conlen, John
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.11
    • /
    • pp.4290-4309
    • /
    • 2020
  • In recent years, advances in machine learning have led to its widespread adoption for tasks such as object detection, image classification, and anomaly detection. Despite their promise, however, these networks are limited by the fact that their performance depends on the data they receive: a well-trained network will still perform poorly if the data supplied to it contains artifacts, out-of-focus regions, or other visual distortions. Under normal circumstances, images of the same scene captured from differing points of focus, angles, or modalities must be analysed separately by the network, even though they may contain overlapping information (as with images of the same scene captured from different angles) or irrelevant information (as with infrared sensors, which capture thermal information well but not topographical details). This can add significantly to the computational time and resources required to use the network without providing any additional benefit. In this study, we explore image fusion techniques that assemble multiple images of the same scene into a single image retaining the most salient features of the individual source images while discarding overlapping or irrelevant data that provides no benefit to the network. By applying this image fusion step before inputting a dataset into the network, the number of images is significantly reduced, with the potential to improve classification accuracy by enhancing the images while discarding irrelevant and overlapping regions.
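
One simple way to realize the fusion step described above, merging several registered images of a scene into one that keeps the most salient detail before a single CNN forward pass, is a per-pixel choice of the sharpest source. The sketch below is only an illustrative fusion rule (maximum local Laplacian energy), not the method evaluated in the paper.

```python
import cv2
import numpy as np

def fuse_sharpest(images):
    """Fuse registered images of one scene by keeping, at each pixel, the
    source with the strongest local detail (smoothed Laplacian energy).
    The single fused image then replaces N inputs to the classifier."""
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY).astype(np.float32)
             for im in images]
    energy = [cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_32F)), (9, 9), 0)
              for g in grays]
    choice = np.argmax(np.stack(energy), axis=0)      # H x W index map
    fused = np.zeros_like(images[0])
    for k, im in enumerate(images):
        fused[choice == k] = im[choice == k]
    return fused

# fused = fuse_sharpest([cv2.imread(p) for p in image_paths])
# prediction = network(preprocess(fused))   # one forward pass instead of N
```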

Development of small multi-copter system for indoor collision avoidance flight (실내 비행용 소형 충돌회피 멀티콥터 시스템 개발)

  • Moon, Jung-Ho
    • Journal of Aerospace System Engineering
    • /
    • v.15 no.1
    • /
    • pp.102-110
    • /
    • 2021
  • Recently, multi-copters equipped with various collision avoidance sensors have been introduced to improve flight stability. LiDAR is used to recognize three-dimensional position, multiple cameras with real-time SLAM technology are used to calculate the relative position to obstacles, and three-dimensional depth sensors combining a small processor and camera are also used. In this study, a small collision-avoidance multi-copter system capable of indoor flight was developed as a platform for developing collision avoidance software technology. The multi-copter was equipped with a LiDAR, a 3D depth sensor, and a small image processing board. Object recognition and collision avoidance functions based on the YOLO algorithm were verified through flight tests. This paper covers recent trends in drone collision avoidance technology, the system design and manufacturing process, and the flight test results.
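
As a rough picture of how YOLO detections can drive a collision-avoidance decision, the sketch below turns bounding boxes into a crude lateral steering command. It assumes the off-the-shelf ultralytics YOLO package and a made-up decision rule; the paper's actual hardware interface and avoidance logic are not reproduced here.

```python
import numpy as np
from ultralytics import YOLO   # assumed off-the-shelf YOLO implementation

model = YOLO("yolov8n.pt")     # any pretrained detection weights

def lateral_command(frame, margin=0.15):
    """Return a crude steering command (-1 = go left, 0 = hold, +1 = go right)
    based on where detected obstacles sit in the camera frame."""
    h, w = frame.shape[:2]
    boxes = model(frame, verbose=False)[0].boxes.xyxy.cpu().numpy()
    if len(boxes) == 0:
        return 0
    centers = (boxes[:, 0] + boxes[:, 2]) / 2.0          # box centre x-coords
    nearest = centers[np.argmin(np.abs(centers - w / 2))]
    if abs(nearest - w / 2) > margin * w:
        return 0                        # obstacle already off to one side
    return 1 if nearest < w / 2 else -1  # steer away from the obstacle
```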

Design and Implementation of Real Time Device Monitoring and History Management System based on Multiple devices in Smart Factory (스마트팩토리에서 다중장치기반 실시간 장비 모니터링 및 이력관리 시스템 설계 및 구현)

  • Kim, Dong-Hyun;Lee, Jae-min;Kim, Jong-Deok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.1
    • /
    • pp.124-133
    • /
    • 2021
  • A smart factory is a future factory that collects, analyzes, and monitors various data in real time by attaching sensors to the equipment in the factory. In a smart factory, it is very important to generate and query the status and history of equipment in real time, and the emergence of various smart devices enables this to be done more efficiently. This paper proposes a multi-device-based system that can create, search, and delete equipment status and history records in real time. The proposed system uses an Android system and a smart glass system simultaneously, in consideration of the special environment of the factory. The smart glass system uses QR codes for equipment recognition and provides a more efficient work environment through a voice recognition function. We designed a system architecture for real-time equipment monitoring based on multiple devices, and we demonstrate its practicality by implementing an Android system, a smart glass system, and a web application server.
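
For the smart glass side, equipment recognition by QR code followed by a status query to the web application server could look roughly like the sketch below; the REST endpoint is a hypothetical placeholder, since the paper's actual API is not given.

```python
import cv2
import requests

detector = cv2.QRCodeDetector()

def lookup_equipment(frame, base_url="http://factory.example/api/equipment"):
    """Decode the equipment ID from a QR code in the camera frame and fetch
    its latest status and history from the web application server."""
    equipment_id, _, _ = detector.detectAndDecode(frame)
    if not equipment_id:                  # no QR code found in this frame
        return None
    resp = requests.get(f"{base_url}/{equipment_id}/status", timeout=2)
    return resp.json() if resp.ok else None
```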

Development of a complex sensor software for measuring the exhaustion rate of dyeing factories (염색공장의 흡진율 계측을 위한 복합센서 흡진율 계측 모델 개발)

  • Lee, Jeong-in;Park, Wan-Ki;Kim, Sang-Ha
    • Journal of IKEEE
    • /
    • v.26 no.2
    • /
    • pp.219-225
    • /
    • 2022
  • In Korea's textile industry, the dyeing sector is energy-intensive and has low per-unit productivity due to its labor-intensive nature. If the defect rate of dyed fabrics is high, additional costs are incurred because re-dyeing increases production cost. The goal of the dyeing factory is therefore to minimize the defect rate rather than to save energy. It has been difficult to check the dyeing state of the fabric in real time because of the risk of burn or pressure accidents when dyeing in a high-temperature, high-pressure environment. In this paper, a complex sensor that can measure the exhaustion rate of the dye solution in the dyeing machine using turbidity, pH, and conductivity sensors is proposed, and the experimental method and results are analyzed.
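
The paper's mapping from the combined turbidity, pH, and conductivity readings to an exhaustion rate is not reproduced here; the sketch below only illustrates one plausible shape of such a model, a calibrated regression, using made-up placeholder calibration values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder calibration data (illustrative values only):
# [turbidity NTU, pH, conductivity mS/cm] logged at several points of a dye
# cycle, paired with lab-measured exhaustion rates (%).
X_cal = np.array([[120.0, 5.8, 42.0],
                  [ 80.0, 5.3, 38.0],
                  [ 35.0, 4.9, 31.0],
                  [ 12.0, 4.6, 27.0]])
y_cal = np.array([20.0, 45.0, 75.0, 92.0])

model = LinearRegression().fit(X_cal, y_cal)

def exhaustion_rate(turbidity, ph, conductivity):
    """Estimate dye exhaustion (%) from one combined sensor reading."""
    return float(model.predict([[turbidity, ph, conductivity]])[0])
```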

Control Signal Computation using Wireless Channel (무선 채널을 활용한 제어 신호 컴퓨팅)

  • Jung, Mingyu;Park, Pangun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.7
    • /
    • pp.986-992
    • /
    • 2021
  • To stabilize closed-loop wireless control systems, the state-of-the-art approach receives the individual sensor measurements at the controller and then sends the computed control signal to the actuators. We propose an over-the-air controller scheme where all sensors attached to the plant transmit scaled sensing signals simultaneously to the actuator, and the actuator then computes the feedback control signal by scaling the received signal. The over-the-air controller essentially adopts the over-the-air computation concept to compute the control signal for closed-loop wireless control systems. In contrast to the state-of-the-art sensor-to-controller and controller-to-actuator communication approach, the over-the-air controller exploits the superposition properties of multiple-access wireless channels to complete the communication and computation of a large number of sensing signals in a single communication resource unit. Therefore, the proposed scheme can obtain significant benefits in terms of low actuation delay and low resource utilization with a simple network architecture that does not require a dedicated controller.
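
A toy numerical sketch of the over-the-air idea: each sensor pre-scales its measurement by its feedback gain and the inverse of its channel gain, all sensors transmit in the same resource unit, and the actuator reads the superposed signal directly as the control value. The scaling and channel model below are simplified assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

K = np.array([1.2, 0.8, 0.5])        # state-feedback gains, u = -K @ x
x = np.array([0.4, -1.0, 0.3])       # plant state, one entry per sensor
h = rng.uniform(0.5, 1.5, size=3)    # flat-fading channel gain per sensor

# Each sensor i transmits its measurement scaled by -K[i] / h[i]; the
# multiple-access channel superposes all three transmissions.
tx = (-K / h) * x
received = np.sum(h * tx) + rng.normal(0.0, 0.01)   # superposition + noise

# The actuator uses the received value directly as the feedback control.
u_over_the_air = received
u_ideal = float(-K @ x)
print(u_over_the_air, u_ideal)       # agree up to the channel noise
```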

Risk Situation Detection Safety Helmet using Multiple Sensors (다중 센서를 이용한 위험 상황 감지 안전모)

  • Choi, Woo-Yong;Kim, Hyo-Sang;Ko, Dong-Hyeon;Lee, Jang-Hoon;Lee, Seung-Dae
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.6
    • /
    • pp.1226-1274
    • /
    • 2022
  • In this paper, we deal with a safety helmet for detecting dangerous situations, focusing on falling accidents and gas leaks, which are major causes of industrial accidents. The fall detection range was set using gravity acceleration measured by an acceleration sensor, and a fall detection rate of 80% was confirmed. In addition, the hazardous gas concentration was measured with a gas sensor; when a digital value of 188 or more was output through the serial monitor, it was judged to be a hazardous gas situation. Fall and gas warning messages could be checked through a smartphone application built with the App Inventor program.
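
Putting the two detection rules from this abstract into compact Python: the gas threshold of 188 is the value reported above, while the acceleration thresholds and the free-fall-then-impact pattern are our own assumptions for illustration.

```python
import math

GAS_ALARM_LEVEL = 188   # digital threshold reported in the abstract
FREE_FALL_G = 0.4       # assumed near-free-fall threshold (g), not from the paper
IMPACT_G = 2.5          # assumed impact threshold (g), not from the paper

def fall_detected(samples):
    """samples: sequence of (ax, ay, az) accelerations in g. Flag a fall when
    a near-free-fall interval is followed by a large impact spike."""
    saw_free_fall = False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude < FREE_FALL_G:
            saw_free_fall = True
        elif saw_free_fall and magnitude > IMPACT_G:
            return True
    return False

def gas_alarm(adc_value):
    """Hazardous-gas condition when the sensor's digital output is 188 or more."""
    return adc_value >= GAS_ALARM_LEVEL
```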

Computer vision and deep learning-based post-earthquake intelligent assessment of engineering structures: Technological status and challenges

  • T. Jin;X.W. Ye;W.M. Que;S.Y. Ma
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.311-323
    • /
    • 2023
  • Ever since ancient times, earthquakes have been a major threat to civil infrastructure and to the safety of human beings. The majority of casualties in earthquake disasters are caused by damaged civil infrastructure rather than by the earthquake itself. Therefore, efficient and accurate post-earthquake assessment of structural damage has been an urgent need for human society. Traditional approaches to post-earthquake structural assessment rely heavily on field investigation by experienced experts, yet this is inevitably subjective and inefficient. Structural response data are also used to assess damage; however, this requires sensor networks mounted in advance and is not intuitive. Because many types of structural damage are visible, computer vision-based post-earthquake structural assessment has attracted great attention among engineers and scholars. With the development of image acquisition sensors, computing resources, and deep learning algorithms, deep learning-based post-earthquake structural assessment has gradually shown potential for handling image acquisition and processing tasks. This paper comprehensively reviews state-of-the-art studies of deep learning-based post-earthquake structural assessment in recent years. Conventional image processing and machine learning-based structural assessment are presented briefly, and the workflow of computer vision and deep learning-based post-earthquake structural assessment is introduced. Applications to multiple types of civil infrastructure are then presented in detail. Finally, the challenges of current studies are summarized as a reference for future work to improve the efficiency, robustness, and accuracy in this field.