• Title/Summary/Keyword: Multiple view

Search Results: 859

A Study on Remote Monitoring of Harmful Gases using LabVIEW (LabVIEW를 활용한 유해가스 원격 모니터링에 관한 연구)

  • Han, Sang-Bae; Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.462-464 / 2022
  • This paper studies remote monitoring of harmful gases measured with LabVIEW and MyRIO. Most Arduino-based gas measurement setups are limited in the number of sensors they can handle. Since MyRIO can read multiple sensors, this study proposes a method of transmitting the measurements wirelessly and monitoring them through LabVIEW.


A Best View Selection Method in Videos of Interested Player Captured by Multiple Cameras (다중 카메라로 관심선수를 촬영한 동영상에서 베스트 뷰 추출방법)

  • Hong, Hotak; Um, Gimun; Nang, Jongho
    • Journal of KIISE / v.44 no.12 / pp.1319-1332 / 2017
  • In recent years, the number of video cameras used to record and broadcast live sporting events has increased, and selecting the shot with the best view from multiple cameras has become an actively researched topic. Existing approaches have assumed that the background in the video is fixed; this paper instead proposes a best-view selection method for cases in which the background is not fixed. In our study, an athlete of interest was recorded in motion with multiple cameras. Each frame from every camera was then analyzed to establish rules for selecting the best view, and the frames selected by our system were compared with those human viewers indicated as most desirable. For the evaluation, we asked each of 20 non-specialists to pick the best and worst views. The set of views most often chosen as best coincided with 54.5% of the frames selected by our method, while the set most often chosen as worst coincided with only 9% of them, demonstrating the efficacy of the proposed method.

Learning-Based Multiple Pooling Fusion in Multi-View Convolutional Neural Network for 3D Model Classification and Retrieval

  • Zeng, Hui; Wang, Qi; Li, Chen; Song, Wei
    • Journal of Information Processing Systems / v.15 no.5 / pp.1179-1191 / 2019
  • We design a view-pooling method named learning-based multiple pooling fusion (LMPF) and apply it to the multi-view convolutional neural network (MVCNN) for 3D model classification and retrieval. In this way, the multi-view feature maps projected from a 3D model can be compiled into a simple and effective feature descriptor. The LMPF method fuses max pooling and mean pooling by learning a set of optimal weights. Compared with hand-crafted approaches such as max pooling or mean pooling alone, LMPF effectively reduces information loss because of its "learning" ability. Experiments on the ModelNet40 and McGill datasets verify that LMPF substantially outperforms these previous methods.
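The fusion idea can be sketched as a convex combination of max-pooled and mean-pooled view features, with the mixing weights derived from learnable logits. The softmax parameterization and array shapes below are assumptions of this sketch, not details taken from the paper:

```python
import numpy as np

def lmpf_pool(view_features, w):
    """Fuse per-view descriptors with a learned mix of max and mean pooling.

    view_features: (num_views, feature_dim) array of per-view features.
    w: length-2 vector of learnable logits; softmax(w) yields the mixing
       weights between max pooling and mean pooling.
    """
    e = np.exp(w - np.max(w))
    alpha = e / e.sum()                      # convex combination weights
    max_pool = view_features.max(axis=0)     # classic MVCNN view pooling
    mean_pool = view_features.mean(axis=0)
    return alpha[0] * max_pool + alpha[1] * mean_pool
```

With `w = [0, 0]` the result is the plain average of the two hand-crafted poolings; training would move `w` toward whichever mixture minimizes the loss.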

Partitioning of Field of View by Using Hopfield Network (홉필드 네트워크를 이용한 FOV 분할)

  • Cha, Young-Youp; Choi, Bum-Sick
    • Proceedings of the KSME Conference / 2001.11a / pp.667-672 / 2001
  • An optimization approach is used to partition the field of view. A cost function is defined to represent the constraints on the solution, which is then mapped onto a two-dimensional Hopfield neural network for minimization. Each neuron in the network represents a possible match between a field of view and one or multiple objects. Partition is achieved by initializing each neuron that represents a possible match and then allowing the network to settle down into a stable state. The network uses the initial inputs and the compatibility measures between a field of view and one or multiple objects to find a stable state.


Hopfield Network for Partitioning of Field of View (FOV 분할을 위한 Hopfield Network)

  • Cha, Young-Youp
    • Journal of Institute of Control, Robotics and Systems / v.8 no.2 / pp.120-125 / 2002
  • An optimization approach is used to partition the field of view. A cost function is defined to represent the constraints on the solution, which is then mapped onto a two-dimensional Hopfield neural network for minimization. Each neuron in the network represents a possible match between a field of view and one or multiple objects. Partition is achieved by initializing each neuron that represents a possible match and then allowing the network to settle down into a stable state. The network uses the initial inputs and the compatibility measures between a field of view and one or multiple objects to find a stable state.
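The relaxation described in this abstract can be sketched as asynchronous binary Hopfield updates over match neurons; the compatibility matrix, penalty weight, and update count below are illustrative assumptions, not the paper's actual cost function:

```python
import numpy as np

def partition_fov(C, penalty=2.0, iters=200, seed=0):
    """Match fields of view to objects by Hopfield-style relaxation.

    C[i, j] is the compatibility between FOV i and object j; neuron
    V[i, j] = 1 means FOV i is matched to object j. Neurons are updated
    asynchronously: each flip follows the local field, which rewards
    compatibility and penalizes multiple matches for the same FOV.
    """
    rng = np.random.default_rng(seed)
    n, m = C.shape
    V = rng.integers(0, 2, size=(n, m))      # random initial state
    for _ in range(iters):
        i, j = rng.integers(n), rng.integers(m)
        others = V[i].sum() - V[i, j]        # competing matches for FOV i
        V[i, j] = 1 if C[i, j] - penalty * others > 0 else 0
    return V
```

Because every asynchronous flip lowers the energy, the network settles into a stable state in which each field of view keeps its single most compatible match.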

Fast Mode Decision using Global Disparity Vector for Multi-view Video Coding (다시점 영상 부호화에서 전역 변이 벡터를 이용한 고속 모드 결정)

  • Han, Dong-Hoon; Cho, Suk-Hee; Hur, Nam-Ho; Lee, Yung-Lyul
    • Journal of Broadcast Engineering / v.13 no.3 / pp.328-338 / 2008
  • Multi-view video coding (MVC) based on H.264/AVC encodes multiple views efficiently by using a prediction scheme that exploits inter-view correlation. However, as the number of views and the use of inter-view prediction increase, so does the total encoding time. In this paper, we propose a fast mode decision that uses both macroblock (MB)-based region segmentation information for each view and the global disparity vector among views in order to reduce encoding time. Using joint multi-view video model (JMVM) 4.0, the reference software of the MVC standard, the proposed method achieves on average a 40% reduction in total encoding time with an objective quality degradation of only about 0.04 dB in peak signal-to-noise ratio (PSNR).
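As a rough illustration of the idea, a global disparity vector lets the encoder map each macroblock into the neighboring view and restrict the mode search to what was chosen there. The mapping, mode names, and fallback below are assumptions of this sketch, not the JMVM rule set:

```python
def candidate_modes(mb_xy, gdv, ref_view_modes, mb_size=16):
    """Return a reduced mode-candidate list for one macroblock.

    mb_xy: (x, y) pixel position of the macroblock's top-left corner.
    gdv: global disparity vector between the current and reference view.
    ref_view_modes: dict mapping (mb_col, mb_row) -> coding mode chosen
        in the reference view. Falls back to a full search when the
        shifted position leaves the reference frame.
    """
    x, y = mb_xy
    col = (x + gdv[0]) // mb_size
    row = (y + gdv[1]) // mb_size
    if (col, row) in ref_view_modes:
        return [ref_view_modes[(col, row)]]       # reuse the co-located mode
    return ["INTRA", "INTER_16x16", "INTER_8x8"]  # hypothetical full set
```

Skipping rate-distortion evaluation for the pruned modes is where the encoding-time saving would come from.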

A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Hwang, Taemin; Kim, Jieun; Kim, Minjoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.559-575 / 2023
  • 3D human pose estimation is widely applied in various fields, including action recognition, sports analysis, and human-computer interaction, and it has achieved significant progress with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multi-view approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a CNN-based 3D pose estimation system may run short of computational resources. In addition, when a single host system uses multiple cameras, the data transmission speed becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its view and sends it to the central server. The central server then synchronizes the received 2D pose data based on their timestamps and reconstructs a 3D human pose using geometric triangulation. We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time.
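The geometric reconstruction step on the central server can be sketched as standard linear (DLT) triangulation of a joint from synchronized 2D detections. The camera matrices used here are illustrative, and the paper's synchronization and filtering details are not reproduced:

```python
import numpy as np

def triangulate_joint(projections, points_2d):
    """Linear (DLT) triangulation of one 3D joint.

    projections: list of 3x4 camera projection matrices.
    points_2d: matching list of (u, v) image detections of the joint.
    Each view contributes two rows to a homogeneous system A X = 0,
    solved in the least-squares sense via SVD.
    """
    A = []
    for P, (u, v) in zip(projections, points_2d):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, vh = np.linalg.svd(np.asarray(A))
    X = vh[-1]               # null-space vector = homogeneous 3D point
    return X[:3] / X[3]      # dehomogenize to (x, y, z)
```

With two or more views of the same joint, the smallest-singular-vector solution recovers the 3D position up to detection noise.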

A Reduction Method of Search Space for Polyhedral Object Recognition (다면체 인식을 위한 탐색 공간 감소 기법)

  • Lee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.4 / pp.381-385 / 2003
  • We suggest a method that reduces the search space of a model-base in a multiple-view approach to polyhedral object recognition using the ART-1 neural network. In this approach, the model-base consists of features extracted from two-dimensional projections observed at predetermined viewpoints on a viewing sphere enclosing the object.

A Study of Incremental and Multiple Entry Support Parser for Multi View Editing Environment (다중 뷰 편집환경을 위한 점진적 다중진입 지원 파서에 대한 연구)

  • Yeom, Saehun; Bang, Hyeja
    • Journal of Korea Society of Digital Industry and Information Management / v.14 no.3 / pp.21-28 / 2018
  • As computer performance and the demand for user convenience increase, computer user interfaces are also changing, and these changes have had a great effect on software development environments. In the past, text editors such as vi or emacs on UNIX were the main development environment. These editors are powerful for editing source code, but they are difficult and unintuitive compared with GUI (Graphical User Interface) based environments and were used only by some experts. As the trend in development environments shifted from the command line to the GUI, GUI editors came to offer better usability and efficiency, and the use of text-based editors declined. However, GUI-based editors consume a lot of computer resources, which degrades performance and efficiency: the more content there is, the longer it takes to verify and display it. In this paper, we provide a new parser that supports multi-view editing, incremental parsing, and multiple entry points into the abstract syntax tree.

Omni-directional Visual-LiDAR SLAM for Multi-Camera System (다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM)

  • Javed, Zeeshan; Kim, Gon-Woo
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.353-358 / 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multi-camera setups and large field-of-view cameras are used to address these issues; however, a multi-camera system increases the computational complexity of the algorithm. Therefore, for multi-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget across tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup: depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness in various environments.
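The depth-fusion step described above can be sketched as a nearest-neighbor lookup between tracked image features and LiDAR points already projected into the same image plane. The projection itself depends on the panoramic camera model and is assumed done; the pixel threshold is illustrative:

```python
import numpy as np

def feature_depths(features_uv, lidar_uvd, max_px=3.0):
    """Assign each tracked feature the depth of the nearest projected
    LiDAR point, or None when no point lies within max_px pixels
    (such features would instead be triangulated from pose).

    features_uv: (N, 2) pixel coordinates of tracked features.
    lidar_uvd: (M, 3) projected LiDAR points as (u, v, depth).
    """
    depths = []
    for u, v in features_uv:
        d2 = (lidar_uvd[:, 0] - u) ** 2 + (lidar_uvd[:, 1] - v) ** 2
        k = int(np.argmin(d2))
        depths.append(float(lidar_uvd[k, 2]) if d2[k] <= max_px ** 2 else None)
    return depths
```

Features that receive a LiDAR depth are immediately scale-correct, which is what removes the monocular scale ambiguity the abstract mentions.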