• Title/Summary/Keyword: RGB


Lightweight Video-based Approach for Monitoring Pigs' Aggressive Behavior (돼지 공격 행동 모니터링을 위한 영상 기반의 경량화 시스템)

  • Mluba, Hassan Seif;Lee, Jonguk;Atif, Othmane;Park, Daihee;Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.704-707 / 2021
  • Pigs' aggressive behavior is a common problem inside pigpens; it harms pigs' health and welfare and results in a financial burden for farmers. Continuously monitoring several pigs around the clock to identify those behaviors manually is a very difficult task for pig caretakers. In this study, we propose a lightweight video-based approach for monitoring pigs' aggressive behavior that can be deployed even on small-scale farms. The proposed system receives sequences of frames extracted from an RGB video stream containing pigs and uses MnasNet with a depth multiplier (DM) of 0.5 to extract image features from the pigs' ROIs, identified by predefined annotations. The extracted features are then forwarded to a lightweight LSTM, which learns temporal features and performs behavior recognition. The experimental results show that the proposed model achieved 0.92 in both recall and F1-score with an execution time of 118.16 ms/sequence.
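
The reported recall and F1-score are computed from sequence-level predictions; a minimal pure-Python sketch of that evaluation, using hypothetical labels rather than the paper's data:

```python
def recall_f1(y_true, y_pred, positive=1):
    """Recall and F1 for a binary aggressive/non-aggressive classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, f1
```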

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.422-424 / 2021
  • In this paper, we present an approach that fuses multiple RGB cameras for visual object recognition, based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and match it to a 3D world, estimating distance and position in the form of a point cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, which helps the AV navigate toward its goal. Running object detection on numerous cameras tends to slow real-time processing, so the convolutional neural network algorithm chosen must also suit the capacity of the hardware. The localization of the classified detected objects is derived from the 3D point cloud environment. The LiDAR point cloud data are first parsed, and the algorithm used is based on 3D Euclidean clustering, which localizes the objects accurately. We evaluated the method on our own dataset collected from a VLP-16 and multiple cameras, and the results demonstrate the method and the multi-sensor fusion strategy.
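
The 3D Euclidean clustering step used to localize objects in the point cloud can be sketched as a greedy region-growing pass; a minimal pure-Python version where the points and distance tolerance are illustrative (a real VLP-16 cloud would use a k-d tree for speed):

```python
import math

def euclidean_cluster(points, tol):
    """Group points so that any point within `tol` of a cluster member joins it."""
    n = len(points)
    visited = [False] * n
    clusters = []
    for i in range(n):
        if visited[i]:
            continue
        queue, cluster = [i], []
        visited[i] = True
        while queue:
            j = queue.pop()
            cluster.append(j)
            for k in range(n):  # brute-force neighbor search for clarity
                if not visited[k] and math.dist(points[j], points[k]) <= tol:
                    visited[k] = True
                    queue.append(k)
        clusters.append(sorted(cluster))
    return clusters
```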


Recognition of Occupants' Cold Discomfort-Related Actions for Energy-Efficient Buildings

  • Song, Kwonsik;Kang, Kyubyung;Min, Byung-Cheol
    • International conference on construction engineering and project management / 2022.06a / pp.426-432 / 2022
  • HVAC systems play a critical role in reducing energy consumption in buildings. Integrating occupants' thermal comfort evaluation into HVAC control strategies is believed to reduce building energy consumption while minimizing their thermal discomfort. Advanced technologies, such as visual sensors and deep learning, enable the recognition of occupants' discomfort-related actions, making it possible to estimate their thermal discomfort. Unfortunately, it remains unclear how accurately a deep learning-based classifier can recognize occupants' discomfort-related actions in a working environment. Therefore, this research evaluates the classification performance on occupants' discomfort-related actions while they sit at a computer desk. To achieve this objective, this study collected RGB video data on nine college students' cold discomfort-related actions and trained a deep learning-based classifier on the collected data. The classification results are threefold. First, the trained classifier has an average accuracy of 93.9% for classifying six cold discomfort-related actions. Second, each discomfort-related action is recognized with more than 85% accuracy. Third, classification errors occur mostly among similar discomfort-related actions. These results indicate that human action data will enable facility managers to estimate occupants' thermal discomfort and, in turn, adjust the operational settings of HVAC systems to improve the energy efficiency of buildings in conjunction with occupants' thermal comfort levels.
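
The second and third findings (per-action accuracy and confusion among similar actions) are read off a confusion matrix; a minimal sketch with a hypothetical 3-class matrix (the study itself used six actions):

```python
def per_class_accuracy(confusion):
    """Row i = true class i; diagonal / row sum gives per-class accuracy.
    Off-diagonal mass shows which actions get confused with each other."""
    return [row[i] / sum(row) if sum(row) else 0.0
            for i, row in enumerate(confusion)]
```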


Dynamic 3D Worker Pose Registration for Safety Monitoring in Manufacturing Environment based on Multi-domain Vision System (다중 도메인 비전 시스템 기반 제조 환경 안전 모니터링을 위한 동적 3D 작업자 자세 정합 기법)

  • Ji Dong Choi;Min Young Kim;Byeong Hak Kim
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.6 / pp.303-310 / 2023
  • A single vision system limits the ability to accurately understand the spatial constraints and interactions between dynamic workers and the gantry and collaborative robots operating during production. In this paper, we propose a 3D pose registration method for dynamic workers based on a multi-domain vision system for safety monitoring in manufacturing environments. The method uses OpenPose, a deep learning-based pose estimation model, to estimate a worker's dynamic two-dimensional pose in real time and reconstruct it into three-dimensional coordinates. The 3D coordinates reconstructed from the multi-domain vision system were aligned using the ICP algorithm and then registered to a single 3D coordinate system. The proposed method showed effective performance in a manufacturing process environment, with an average registration error of 0.0664 m and an average frame rate of 14.597 frames per second.
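
The core ICP update that aligns coordinates from two vision domains is a best-fit rigid transform over matched point sets; a sketch of that single step (Kabsch via SVD), assuming correspondences are already established, which full ICP would re-estimate each iteration:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    over corresponding rows of src and dst (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```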

A Scene Change Detection Technique using the Weighted $\chi^2$-test and the Automated Threshold-Decision Algorithm (변형된 $\chi^2$- 테스트와 자동 임계치-결정 알고리즘을 이용한 장면전환 검출 기법)

  • Ko, Kyong-Cheol;Rhee, Yang-Won
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.4 s.304 / pp.51-58 / 2005
  • This paper proposes a robust scene change detection technique that uses the weighted chi-square test and an automated threshold-decision algorithm. The weighted chi-square test can subdivide the difference values of individual color channels by calculating the color intensities according to the NTSC standard, and it can detect scene changes by joining the weighted color intensities to the predefined chi-square test, which emphasizes the comparative color difference values. The automated threshold-decision algorithm uses the frame-to-frame difference values obtained by the weighted chi-square test. First, the average of all difference values is calculated; then another average is calculated from the difference values using the previous average, and finally the most appropriate mid-average value is found and taken as the threshold. Experimental results show that the proposed algorithms are effective and outperform previous approaches.
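
The weighted chi-square distance over per-channel histograms can be sketched as below; a minimal version where the NTSC luminance coefficients (0.299, 0.587, 0.114) serve as the channel weights, which is one plausible reading of the paper's weighting rather than its exact formulation:

```python
def weighted_chi_square(hist_prev, hist_curr, weights=(0.299, 0.587, 0.114)):
    """Frame difference as a luminance-weighted chi-square distance over
    per-channel (R, G, B) histograms of two consecutive frames."""
    d = 0.0
    for w, hp, hc in zip(weights, hist_prev, hist_curr):
        for p, c in zip(hp, hc):
            if p + c:                       # skip empty bins
                d += w * (p - c) ** 2 / (p + c)
    return d
```

A scene change is declared when this distance exceeds the automatically decided threshold.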

Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Lee, Min Woo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Advanced Culture Technology / v.8 no.1 / pp.207-212 / 2020
  • Human centric lighting (HCL) control is a major focus of smart lighting system design, providing energy-efficient, mood-supporting rhythmic lighting in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve human motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, the indoor surveillance camera video streams are used to predict daylight, occupancy, and occupant-specific emotional features using advanced computer vision techniques, and these human centric features are transmitted to the smart building light management system. The light management system is connected to Internet of Things (IoT) enabled lighting devices and controls the light illumination of the lighting devices assigned to each occupant. An experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-enabled open-source controller on the network, along with a networked video surveillance solution. The experimental results were verified with a custom-made automatic lighting control demo application integrated with OpenCV-based computer vision methods that predict the human centric features; based on the estimated features, the lighting illumination level and colors are controlled automatically. The results obtained from the demo system are analyzed and used for the real-time development of a lighting control strategy.

A study on implementing a real time multi-viewer system (실시간 화면 분할 시스템 구현에 관한 연구)

  • Paik, Cheul;Park, In-Gyu
    • Proceedings of the IEEK Conference / 1998.10a / pp.879-882 / 1998
  • One of the most widely used security systems today receives video signals from cameras in several locations and displays them as a split view on a single monitor. The most important function of such a system is processing each location's video in real time, which requires storing all the video data in memory without dropping any of it. In this paper, we implemented a system that combines four video inputs into a single quad-split output using an FPGA. Screen-splitting systems generally output only monochrome video, but our system splits and outputs color video directly, using RGB 5:6:5-mode data. Furthermore, since dedicated screen-splitting chips such as PIP (Picture In Picture) devices make the system larger as the number of split screens grows, we designed the logic purely in an FPGA so that it controls the field memories (FIFOs) directly. As a method of synchronizing the video data stored in unsynchronized memories onto a single output screen, we propose and apply a Choice Algorithm that selects each video stream's data at fixed timing intervals. We built a system implementing this logic and carried out experiments and tests directly. The FPGA used to implement the logic (Xilinx 5200 series) is the XC5210-5, and the field memory (FIFO) for storing the video data is the μPD42280-30; to obtain clearer image quality through more headroom in data storage, faster FPGA and memory types are desirable. The paper is organized as follows: Section 1 briefly describes the necessity, motivation, and background of the system; Section 2 describes the overall system architecture; Section 3 briefly explains memory control, the most important part of the system; Section 4 presents the implementation, experiments, and analysis of the results; finally, we conclude and describe future work.
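
The per-pixel selection that the FPGA performs (picking one decimated pixel from each stream's FIFO at fixed timing to fill its quadrant) can be mirrored in software; a minimal sketch with frames as 2D lists, assuming all four inputs share the same resolution:

```python
def quad_split(frames):
    """Compose four equally sized frames into one 2x2 mosaic by 2:1
    decimation, mimicking the selection logic of a quad-split display."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for q, frame in enumerate(frames):
        oy, ox = (q // 2) * (h // 2), (q % 2) * (w // 2)  # quadrant origin
        for y in range(h // 2):
            for x in range(w // 2):
                out[oy + y][ox + x] = frame[2 * y][2 * x]  # drop every other pixel
    return out
```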


3D object generation based on the depth information of an active sensor (능동형 센서의 깊이 정보를 이용한 3D 객체 생성)

  • Kim, Sang-Jin;Yoo, Ji-Sang;Lee, Seung-Hyun
    • Journal of the Korea Computer Industry Society / v.7 no.5 / pp.455-466 / 2006
  • In this paper, 3D objects are created from a real scene captured by an active sensor, which provides depth and RGB information. To get the depth information, we use the $Zcam^{TM}$ camera, which has a built-in active sensor module. (abridged) Third, the detailed parameters are calibrated and a 3D mesh model is created from the depth information, connecting neighboring points to complete the 3D mesh model. Finally, the color image data are applied to the mesh model and mapping is carried out to create the 3D object. Experiments show that creating 3D objects using data from a camera with active sensors is possible, and that this method is easier and more useful than using a 3D range scanner.
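
Before neighboring points can be meshed, each depth pixel must be back-projected into camera-frame 3D coordinates; a minimal pinhole-model sketch, where the intrinsics fx, fy, cx, cy are illustrative placeholders rather than the $Zcam^{TM}$'s calibrated values:

```python
def backproject(depth, fx, fy, cx, cy):
    """Turn a depth map (meters) into camera-frame 3D points with the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    pts = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip invalid (zero-depth) pixels
                pts.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return pts
```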


Video Segmentation using the Automated Threshold Decision Algorithm (비디오 분할을 위한 자동 임계치 결정 알고리즘)

  • Ko Kyong-Cheol;Lee Yang-Won
    • Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.65-74 / 2005
  • This paper proposes a robust scene change detection technique that uses the weighted chi-square test and the automated threshold-decision algorithm. The weighted chi-square test can subdivide the difference values of individual color channels by calculating the color intensities according to the NTSC standard, and it can detect scene changes by joining the weighted color intensities to the predefined chi-square test, which emphasizes the comparative color difference values. The automated decision algorithm uses the frame-to-frame difference values obtained by the weighted chi-square test. In the first step, the average and standard deviation of all difference values are calculated, and the mean is subtracted from each difference value. In the next step, the same process is performed on the remaining difference values. The proposed method was tested on various sources, and the experimental results show that it efficiently estimates thresholds and reliably detects scene changes.
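
The iterative averaging described above can be sketched as follows; this is one possible reading of the procedure (repeatedly re-averaging the difference values that still exceed the current estimate until it stabilizes), not the paper's exact algorithm:

```python
def auto_threshold(diffs, tol=1e-6):
    """Iteratively move the threshold toward the mean of the frame-difference
    values that exceed it, until the update is smaller than `tol`."""
    t = sum(diffs) / len(diffs)             # start from the global mean
    while True:
        above = [d for d in diffs if d > t]
        if not above:
            return t
        new_t = (t + sum(above) / len(above)) / 2  # mid-average step
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
```

Scene changes are then the frames whose difference value exceeds the returned threshold.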


Enhancing Visual Perception Using Color Processing Of Mobile Display (색상처리를 통한 감성 모바일 디스플레이)

  • Kang, Yun-Cheol;Ryu, Mi-Ohk;Park, Kyoung-Ju
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.697-702 / 2008
  • Mobile display panels are small, so users often find it difficult to perceive images clearly. We perceive much of an image through its colors, and we therefore propose a color fitting approach for clear perception even on small, low-quality LCD panels. Various color modifications have been studied and used in commercial software packages. For mobile usage, our approach instantly enhances color images by modifying colors in a way that contrasts their differences. The method includes tone enhancements (which contrast dark and bright sides) and color enhancements (which reduce saturation for pure colorants). Based on color theory, our method also shifts color values toward specified complementary and preference colors. We term this color fitting. The approach enables displaying photos, multimedia messages, videos and digital media broadcasting (DMB) with better perception in real time on mobile devices. Index Terms: color fitting, visualization on small displays, mobile graphics, visual perception.
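
The tone and color enhancement steps can be sketched per pixel as below; a minimal illustration of the idea (contrast about mid-gray plus saturation adjustment relative to luminance), with illustrative `contrast` and `sat` parameters rather than the paper's fitted values:

```python
def fit_color(rgb, contrast=0.2, sat=0.9):
    """Tone enhancement (push dark and bright sides apart around mid-gray)
    plus mild desaturation toward luminance, for one 8-bit RGB pixel."""
    r, g, b = (c / 255.0 for c in rgb)
    y = 0.299 * r + 0.587 * g + 0.114 * b      # NTSC luminance
    out = []
    for c in (r, g, b):
        c = y + sat * (c - y)                  # scale chroma about luminance
        c = c + contrast * (c - 0.5)           # linear contrast about mid-tone
        out.append(max(0, min(255, round(c * 255))))
    return tuple(out)
```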
