• Title/Summary/Keyword: multi-cameras


The Walkers Tracking Algorithm using Color Informations on Multi-Video Camera (다중 비디오카메라에서 색 정보를 이용한 보행자 추적)

  • 신창훈;이주신
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.5 / pp.1080-1088 / 2004
  • In this paper, a tracking algorithm for moving objects of interest across multiple video cameras is proposed that uses color information and is robust to variations in intensity, shape, and background. After the RGB input from each camera is converted to the HSI color space, moving objects are detected by applying the difference-image method and the integral projection method to the hue components of the background and object images. The hue values of the detected moving regions are quantized into 24 levels covering $0^{\circ}$ to $360^{\circ}$. The three hue levels with the highest distribution, together with the differences among them, are used as the feature parameters of the moving object. To verify the proposed method, human images with variations in intensity and shape, and human images with variations in intensity, shape, and background, were used as the moving targets. The surveillance results show that the variation of the detected person's hue-distribution level is within 2 levels at each camera, confirming that the person of interest can be tracked and surveilled automatically across cameras using the feature parameters.
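As an illustration of the hue-based feature described above, the following minimal sketch (assuming OpenCV and NumPy; the function name and region input are hypothetical) quantizes the hue of a detected region into 24 levels and takes the three most populated levels, plus their pairwise differences, as the feature.

```python
import cv2
import numpy as np

def hue_feature(region_bgr, levels=24):
    """Quantize the hue of a detected region into `levels` bins (0-360 deg) and
    return the three most populated bins plus their pairwise differences."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)        # OpenCV hue is 0..179 (degrees / 2)
    hue_deg = hsv[:, :, 0].astype(np.float32) * 2.0          # back to 0..360 degrees
    bins = np.floor(hue_deg / (360.0 / levels)).astype(int) % levels
    hist = np.bincount(bins.ravel(), minlength=levels)
    top3 = np.argsort(hist)[-3:][::-1]                       # three dominant hue levels
    diffs = [abs(int(a) - int(b)) for a, b in
             [(top3[0], top3[1]), (top3[0], top3[2]), (top3[1], top3[2])]]
    return top3.tolist(), diffs
```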

Flexible GGOP prediction structure for multi-view video coding (다시점 동영상 부호화를 위한 가변형 다시점GOP 예측 구조)

  • Yoon, Jae-Won;Seo, Jung-Dong;Kim, Yong-Tae;Park, Chang-Seob;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.420-430 / 2006
  • In this paper, we propose a flexible GGOP (group of GOPs) prediction structure to improve the coding efficiency of multi-view video coding (MVC). The reference software for MVC generally uses a fixed GGOP prediction structure; however, MVC performance depends on the choice of base view and on the number of B-pictures between an I-picture (or P-picture) and the next P-picture. To implement the flexible GGOP prediction structure, the location of the base view is decided according to the global disparities among adjacent view sequences, and the number of B-pictures between an I-picture (or P-picture) and the next P-picture is decided by the camera arrangement, such as the baseline distance between cameras. The proposed method outperforms the MVC reference software, reducing the coded bits by 7.1%.
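A minimal sketch of the two decisions described above, choosing the base view from pairwise global disparities and the number of B-pictures from the camera baseline; the function names and thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def choose_base_view(global_disparity):
    """global_disparity[i][j]: estimated global disparity between adjacent views i and j.
    Pick as base view the view whose summed disparity to all other views is smallest."""
    totals = global_disparity.sum(axis=1)
    return int(np.argmin(totals))

def num_b_pictures(baseline_cm, short=10.0, long=20.0):
    """Map the camera baseline distance to the number of B-pictures inserted
    between anchor pictures (I/P and the next P). Thresholds are illustrative."""
    if baseline_cm <= short:
        return 3        # densely spaced cameras: more B-pictures
    elif baseline_cm <= long:
        return 2
    return 1            # widely spaced cameras: fewer B-pictures
```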

A Synchronized Multiplexing Scheme for Multi-view HD Video Transport System over IP Networks (실시간 다시점 고화질 비디오 전송 시스템을 위한 동기화된 다중화 기법)

  • Kim, Jong-Ryool;Kim, Jong-Won
    • Journal of Broadcast Engineering / v.13 no.6 / pp.930-940 / 2008
  • This paper presents a prototype realization of a multi-view HD video transport system with synchronized multiplexing over IP networks. The proposed scheme addresses both synchronization during video acquisition and multiplexing for interactive view selection during transport. For synchronized acquisition from multiple HDV camcorders over the IEEE 1394 interface, we estimate the timeline differences among the MPEG-2 compressed video streams using the global network time shared between the cameras and a server, and correct the stream timelines by adjusting the timestamps of the MPEG-2 system streams. We then multiplex a selected number of the acquired HD views at the MPEG-2 TS (transport stream) level to support interactive view selection during transport. With the proposed synchronized multiplexing scheme, synchronized HD views can be displayed.
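The timestamp correction described above can be pictured with a minimal sketch that shifts a stream's presentation timestamps by the offset estimated from the shared network clock; the function name is hypothetical, and only the 90 kHz clock and 33-bit PTS width are standard MPEG-2 facts.

```python
PTS_CLOCK_HZ = 90_000          # MPEG-2 PTS/DTS run on a 90 kHz clock
PTS_WRAP = 1 << 33             # PTS is a 33-bit field

def corrected_pts(pts, offset_seconds):
    """Shift a PTS value by the acquisition-time offset (in seconds) estimated
    from the global network time between a camera and the server."""
    offset_ticks = round(offset_seconds * PTS_CLOCK_HZ)
    return (pts - offset_ticks) % PTS_WRAP

# Example: a camera that started 40 ms later than the reference camera has its
# timestamps pulled back by 40 ms worth of 90 kHz ticks.
print(corrected_pts(pts=1_234_567, offset_seconds=0.040))
```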

Simple Pyramid RAM-Based Neural Network Architecture for Localization of Swarm Robots

  • Nurmaini, Siti;Zarkasi, Ahmad
    • Journal of Information Processing Systems / v.11 no.3 / pp.370-388 / 2015
  • The localization of multiple agents, such as people, animals, or robots, is a prerequisite for accomplishing many tasks. In multi-robot applications in particular, localization is the process of determining the positions of robots and targets in an unknown environment. Many sensors, such as GPS, lasers, and cameras, are used in the localization process; however, they demand large computational resources to run complex algorithms, because the process requires environmental mapping. Combinations of multi-robot (swarm robot) systems and sensor networks, acting as mobile sensor nodes, are now widely available in indoor and outdoor environments. They enable an efficient form of global localization that requires relatively few computational resources and does not depend on specific environmental features. However, the inherent instability of the wireless signal prevents it from being used directly for highly accurate position estimation, which complicates the localization process of a swarm robotic system. Furthermore, such swarm systems are usually highly decentralized, which makes it hard to synthesize and access global maps and can reduce their flexibility. In this paper, a simple pyramid RAM-based neural network architecture is proposed to improve the localization of mobile sensor nodes in indoor environments. Our approach uses the network's learning and generalization capabilities to reduce the effect of incorrect information and to increase the accuracy of the agent's position estimate. The results show that the simple pyramid RAM-based neural network requires few computational resources, responds quickly to every change in the environment, and allows mobile sensor nodes to complete several tasks, especially real-time localization.
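For readers unfamiliar with RAM-based (weightless) neural networks, the following minimal sketch shows a single RAM discriminator; the class name, tuple size, and training scheme are illustrative and are not the paper's pyramid architecture.

```python
import random

class RAMDiscriminator:
    """Minimal weightless (RAM-based) discriminator: the input bit vector is split
    into fixed random tuples, each tuple addresses one RAM node (a set of seen
    addresses), and the response is the number of nodes that recognise their address."""

    def __init__(self, input_bits, tuple_size=4, seed=0):
        rng = random.Random(seed)
        order = list(range(input_bits))
        rng.shuffle(order)
        self.tuples = [order[i:i + tuple_size] for i in range(0, input_bits, tuple_size)]
        self.rams = [set() for _ in self.tuples]

    def _addresses(self, bits):
        for tup, ram in zip(self.tuples, self.rams):
            yield ram, tuple(bits[i] for i in tup)

    def train(self, bits):
        for ram, addr in self._addresses(bits):
            ram.add(addr)

    def response(self, bits):
        return sum(addr in ram for ram, addr in self._addresses(bits))

# One discriminator is kept per location class; the class with the highest response wins.
```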

Study on Optical Characteristics of Nano Hollow Silica with TiO2 Shell Formation

  • Roh, Gi-Yeon;Sung, Hyeong-Seok;Lee, Yeong-Cheol;Lee, Seong-Eui
    • Journal of the Korean Ceramic Society / v.56 no.1 / pp.98-103 / 2019
  • Optical filters that control the light wavelengths of displays or cameras are fabricated by stacking multiple layers of low- and high-index thin films. This multi-layer stacking process has received much attention as an effective manufacturing process in the optical filter industry. However, multi-layer processing has the disadvantages of a complicated thin-film process and difficulty in precisely controlling film morphology and material selection, all of which are critical for the transmittance and coloring of the filters. In this study, a composite of $TiO_2$, which can be used to control UV absorption, coated on a nano hollow silica sol was synthesized as a coating material for optical filters. A systematic analysis of the process parameters during the chemical reaction and of the structural properties of the coating solutions was performed using SEM, TEM, XRD, and photospectrometry. The structural analysis showed that 85 nm nano hollow silica with a 2.5 nm $TiO_2$ shell was successfully synthesized under proper pH control and titanium butoxide content. Photoluminescence measurements under UV excitation show stable absorption of 350 nm light, corresponding to a 3.54 eV band gap, for the $TiO_2$ shell-nano hollow silica reacted with an 8.8 mole titanium butoxide solution. Transmittance measured on a substrate coated with the $TiO_2$ shell-nano hollow silica showed effective absorption of 200-300 nm UV light without deterioration of visible-light transparency.
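The correspondence between the 350 nm absorption and the 3.54 eV band gap quoted above follows from the standard photon-energy conversion:

$$E_g = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV \cdot nm}}{350\ \mathrm{nm}} \approx 3.54\ \mathrm{eV}$$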

Implementation of High-definition Digital Signage Reality Image Using Chroma Key Technique (크로마키 기법을 이용한 고해상도 디지털 사이니지 실감 영상 구현)

  • Moon, Dae-Hyuk
    • Journal of Industrial Convergence / v.19 no.6 / pp.49-57 / 2021
  • Digital Signage and multi-view image systems are used as a 'fourth medium' for delivering stories and information because of their strong immersion. Content displayed on large Digital Signage is usually produced with computer graphics rather than live-action footage, because live-shot images have an extremely limited production range and resolution and are therefore difficult to display on a large, wide Digital Signage screen. In the case of Screen X and Escape, which use the left and right walls of a movie theater, in addition to the center screen, as screens, images are shot with three Digital Cinema cameras and are screened in the theater as a multi-view image after stitching. Such realistic images help viewers experience life-like content. This research presents a method for making high-resolution Digital Signage content that can be displayed without quality degradation by combining the multi-view image production technique of Screen X with the chroma key technique.
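As a minimal illustration of the chroma key technique mentioned above (not the production pipeline used in the paper), the sketch below masks out a green screen with OpenCV and composites the subject over a background plate; the HSV bounds are illustrative.

```python
import cv2
import numpy as np

def chroma_key(foreground_bgr, background_bgr,
               lower_hsv=(35, 60, 60), upper_hsv=(85, 255, 255)):
    """Replace green-screen pixels in the foreground with the background plate.
    The HSV bounds roughly cover a typical green screen and are illustrative only."""
    hsv = cv2.cvtColor(foreground_bgr, cv2.COLOR_BGR2HSV)
    key_mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))  # 255 where green
    subject_mask = cv2.bitwise_not(key_mask)
    subject = cv2.bitwise_and(foreground_bgr, foreground_bgr, mask=subject_mask)
    plate = cv2.bitwise_and(background_bgr, background_bgr, mask=key_mask)
    return cv2.add(subject, plate)
```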

Object Pose Estimation and Motion Planning for Service Automation System (서비스 자동화 시스템을 위한 물체 자세 인식 및 동작 계획)

  • Youngwoo Kwon;Dongyoung Lee;Hosun Kang;Jiwook Choi;Inho Lee
    • The Journal of Korea Robotics Society / v.19 no.2 / pp.176-187 / 2024
  • Recently, automated solutions using collaborative robots have been emerging in various industries. Their primary functions include pick-and-place, peg-in-hole insertion, fastening and assembly, welding, and more, and they are being applied and researched in many fields. How these robots are applied depends on the characteristics of the gripper attached to the end of the collaborative robot; grasping a wide variety of objects requires a gripper with many degrees of freedom. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, collaborative robots, and vision sensors. Assuming various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their poses, and generate grasping points. The objects are grasped at these points by the multi-degree-of-freedom gripper, and experiments are conducted on barcode recognition, a key task in service automation. To recognize the objects we use a CNN (convolutional neural network) based algorithm, and we estimate each object's 6D pose from the point cloud. Using the recognized object's 6D pose, we generate grasping points for the multi-degree-of-freedom gripper and perform re-grasping in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, proceeding through identification, 6D pose estimation, and grasping, and the success and failure of barcode recognition were recorded to demonstrate the effectiveness of the proposed system.
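As a sketch of how a grasp target can be derived from an estimated 6D object pose, the snippet below chains homogeneous transforms; the frame names and the 10 cm offset are illustrative assumptions, not values from the paper.

```python
import numpy as np

def grasp_pose_from_object(T_cam_obj, T_obj_grasp, T_base_cam):
    """Chain homogeneous transforms to express a grasp pose in the robot base frame:
    base <- camera <- object <- grasp. All inputs are 4x4 matrices."""
    return T_base_cam @ T_cam_obj @ T_obj_grasp

# Illustrative grasp offset: approach 10 cm above the object origin along its z-axis.
T_obj_grasp = np.eye(4)
T_obj_grasp[2, 3] = 0.10
```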

The Forecasting a Maximum Barbell Weight of Snatch Technique in Weightlifting (역도 인상동작 성공 시 최대 바벨무게 예측)

  • Hah, Chong-Ku;Ryu, Ji-Seon
    • Korean Journal of Applied Biomechanics / v.15 no.3 / pp.143-152 / 2005
  • The purpose of this study was to predict the success or failure of a snatch-lifting trial from the stand-up phase, simulated with Kane's equations of motion, which are effective for the dynamic analysis of multi-segment systems. The experiment was a case study of one male athlete (age: 23 yrs, height: 154.4 cm, weight: 64.5 kg) from K University. The simulated system was a multi-segment model with one degree of freedom and one generalized coordinate, the shank segment angle. The reference frame was fixed by the Nonlinear Transformation (NLT) method in order to set up a fixed Cartesian coordinate system in space. The weightlifter lifted a 90 kg barbell, 75% of his maximum lifting capability (120 kg). Six cameras (Qualisys ProReflex MCU240) and two force plates (Kistler 9286AA) were used to collect data, with 11 landmarks attached to the major joints of the body and to the barbell. The sampling rates of the cameras and force plates were 100 Hz and 1000 Hz, respectively. Data were processed with the Qualisys Track Manager (QTM) software, which integrated the landmark positions and force-plate amplitudes simultaneously. The coordinate data were filtered using a fourth-order Butterworth low-pass filter with an optimum cut-off frequency of 9 Hz estimated with Andrew & Yu's formula. The model inputs were derived from the experimental data in Matlab 6.5, and the model built with Kane's method was solved in Mathematica 5.0. The conclusions were as follows. 1. The shank torque motor of 246 Nm obtained from the experiment could lift a maximum barbell weight of 158.98 kg, about 2.46 times the subject's body weight (64.5 kg). 2. A torque motor of 166.5 Nm, simulated by matching the angular displacement of the shank to the experimental result, could lift a maximum barbell weight of 90 kg, about 1.4 times the subject's body weight (64.5 kg). 3. Comparing the subject's maximum barbell weight (120 kg) with the model's maximum barbell weight (155.51 kg) and with the experimental maximum barbell weight (90 kg), the differences were about +35.7 kg and -30 kg. These results suggest that, once the maximum barbell weight can be determined, coaches will be able to provide weightlifters with further knowledge and information for performance improvement and thus prevent training injuries. We hope to apply Kane's method to the simulation of other sport skills as well as weightlifting in future studies.
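The filtering step above can be reproduced in outline with SciPy; the function below is an illustrative sketch (the names are mine, and the zero-lag forward-backward filtering is a common choice rather than a detail confirmed by the paper).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_marker_trajectory(x, fs=100.0, cutoff=9.0, order=4):
    """Butterworth low-pass filter for motion-capture coordinates
    (here: 100 Hz markers, 9 Hz cut-off), applied forward and backward
    with filtfilt so that no phase lag is introduced."""
    b, a = butter(order, cutoff / (fs / 2.0))   # normalized cut-off (Nyquist = fs / 2)
    return filtfilt(b, a, x)

# Example: filter a noisy 1-D marker coordinate sampled at 100 Hz.
t = np.arange(0, 2, 1 / 100.0)
noisy = np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)
clean = smooth_marker_trajectory(noisy)
```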

Multi-View Video System using Single Encoder and Decoder (단일 엔코더 및 디코더를 이용하는 다시점 비디오 시스템)

  • Kim Hak-Soo;Kim Yoon;Kim Man-Bae
    • Journal of Broadcast Engineering / v.11 no.1 s.30 / pp.116-129 / 2006
  • Advances in data transmission technology over the Internet have spread a variety of realistic content. One such type of content is multi-view video acquired from multiple camera sensors. In general, multi-view video processing requires as many encoders and decoders as there are cameras, and this processing complexity makes practical implementation difficult. To solve this problem, this paper considers a simple multi-view system utilizing a single encoder and a single decoder. On the encoder side, the input multi-view YUV sequences are combined in GOP units by a video mixer, and the mixed sequence is compressed by a single H.264/AVC encoder. The decoder side consists of a single decoder and a scheduler that controls the decoding process. The goal of the scheduler is to assign an approximately identical number of decoded frames to each view sequence by estimating the decoder utilization of a GOP and then applying frame-skip algorithms. Within the frame skip, efficient frame-selection algorithms are studied for the H.264/AVC baseline and main profiles based on a cost function related to perceived video quality. The proposed method was tested on various multi-view test sequences adopted by MPEG 3DAV. Experimental results show that approximately identical decoder utilization is achieved for each view sequence, so that each view is displayed fairly. The performance of the proposed method is also examined in terms of bit-rate and PSNR using rate-distortion curves.
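As an illustration of the scheduler's goal, here is a toy sketch (hypothetical function and parameter names, not the paper's algorithm) that splits a per-GOP decoding budget evenly across views and marks which frames of each view to decode, skipping the rest.

```python
def schedule_views(num_views, frames_per_gop, decode_budget):
    """Toy frame-skip scheduler: given the number of frames the decoder can afford
    per mixed GOP (decode_budget), assign an approximately equal share of decoded
    frames to every view; the remaining frames of each view are skipped."""
    share = decode_budget // num_views
    plan = {}
    for view in range(num_views):
        keep_every = max(1, round(frames_per_gop / max(1, share)))
        plan[view] = [f for f in range(frames_per_gop) if f % keep_every == 0][:share]
    return plan

# e.g. 4 views, 15-frame GOPs, budget of 32 decoded frames per mixed GOP
print(schedule_views(4, 15, 32))
```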

Multiple SL-AVS(Small size & Low power Around View System) Synchronization Maintenance Method (다중 SL-AVS 동기화 유지기법)

  • Park, Hyun-Moon;Park, Soo-Huyn;Seo, Hae-Moon;Park, Woo-Chool
    • Journal of the Korea Society for Simulation / v.18 no.3 / pp.73-82 / 2009
  • Thanks to its many advantages, including low price, low power consumption, and small size, the CMOS camera has been utilized in many applications, including mobile phones, the automotive industry, medical sensing, robotic control, and security research. In particular, 360-degree omni-directional cameras built from multiple cameras have exhibited software issues such as interface and communication management, delays, and complicated image-display control, as well as hardware issues such as energy management and miniaturization of the multi-camera assembly. Traditional CMOS camera systems are built as embedded systems in which each camera has its own high-performance MCU for sending and receiving images, forming a multi-layer structure similar to a set of individual control systems. We propose SL-AVS (Small Size/Low Power Around-View System), which controls the cameras while collecting image data using a high-speed synchronization technique on a single-layer, low-performance MCU. It is an initial model of an omni-directional camera that composes a 360-degree view from several CMOS cameras, each with a 110-degree field of view. We connected a single MCU to four low-power CMOS cameras, implemented synchronization, control, and transmit/receive functions for the individual cameras, and compared the result with the traditional system. The synchronization of the respective cameras is controlled and recorded by handling each camera's interrupt in the MCU, which improves data-transmission efficiency by minimizing re-synchronization among the target, the CMOS cameras, and the MCU. Furthermore, depending on the user's choice, individual images or groups of images divided into four domains are provided to the target. Finally, we analyzed and compared the performance of the developed camera system, including synchronization, data-transfer time, and image-data loss.
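As a toy model of the interrupt-driven synchronization described above (an illustrative Python simulation, not the SL-AVS firmware), the sketch below counts frame-ready interrupts per camera and triggers a re-synchronization only when the cameras drift apart.

```python
import itertools

class SyncController:
    """Toy single-MCU model: each camera's frame-ready interrupt reports a local
    frame counter; the controller tracks the spread and requests re-synchronization
    only when the cameras drift apart by more than `tolerance` frames."""

    def __init__(self, num_cameras=4, tolerance=1):
        self.counters = [0] * num_cameras
        self.tolerance = tolerance

    def on_frame_interrupt(self, camera_id):
        self.counters[camera_id] += 1
        drift = max(self.counters) - min(self.counters)
        if drift > self.tolerance:
            self.resync()

    def resync(self):
        # Align all cameras to the slowest one (placeholder for the real trigger signal).
        level = min(self.counters)
        self.counters = [level] * len(self.counters)

# Example: interleave interrupts from 4 cameras, with camera 3 delivering frames more slowly.
ctrl = SyncController()
for cam in itertools.islice(itertools.cycle([0, 1, 2, 0, 1, 2, 3]), 28):
    ctrl.on_frame_interrupt(cam)
```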