• Title/Summary/Keyword: Multi-camera system


Verification of the Accuracy of Photogrammetry in 3D Full-Body Scanning -A Case Study for Apparel Applications-

  • Eun Joo Ryu;Lu Zhang;Hwa Kyung Song
    • Journal of the Korean Society of Clothing and Textiles / v.47 no.1 / pp.137-151 / 2023
  • Stationary 3D whole-body scanners generally require 5 to 20 seconds of scanning time and cannot effectively capture the armpit and crotch areas. This study therefore analyzed the accuracy of a photogrammetric technique using a multi-camera system. First, dimensional accuracy was analyzed with a mannequin scan by comparing the differences between scan-derived measurements and direct measurements against the allowable tolerances of ISO 20685-1:2018. Only 2 of 59 measurement items (ankle height and upper arm circumference) exceeded the ISO 20685-1:2018 criteria. Compared with the results of eight stationary whole-body scanners assessed in the literature, the photogrammetric technique had the advantage of clearly capturing the top of the head, armpit, and crotch areas. Second, the photogrammetric technique is suitable for obtaining body scans because it minimizes scanning time, reducing measurement errors caused by breathing and uncontrolled movement. The error rate of the photogrammetric method was much lower than that of stationary 3D whole-body scanners.
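
As a rough illustration of the tolerance check described in this abstract, the sketch below compares scan-derived and direct measurements against per-item allowable limits; the measurement values and tolerances are placeholders, not data from the study (ISO 20685-1:2018 defines the actual limits per measurement category).

```python
# Sketch of the dimensional-accuracy check (illustrative values, not the study's data).
scan_mm = {"ankle_height": 71.0, "upper_arm_circumference": 302.0, "chest_circumference": 915.0}
direct_mm = {"ankle_height": 64.0, "upper_arm_circumference": 294.0, "chest_circumference": 912.0}
tolerance_mm = {"ankle_height": 4.0, "upper_arm_circumference": 6.0, "chest_circumference": 9.0}  # assumed limits

for item, tol in tolerance_mm.items():
    diff = scan_mm[item] - direct_mm[item]
    status = "within tolerance" if abs(diff) <= tol else "EXCEEDS tolerance"
    print(f"{item}: difference {diff:+.1f} mm ({status})")
```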

A system for automatically generating activity photos of infants based on facial recognition in a multi-camera environment (다중 카메라 환경에서의 안면인식 기반의 영유아 활동 사진 자동 생성 시스템)

  • Jung-seok Lee;Kyu-ho Lee;Kun-hee Kim;Chang-hun Choi;Kyoung-ro Park;Ho-joun Son;Hongseok Yoo
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.481-483 / 2023
  • In this paper, we developed a system that automatically generates activity photos of infants based on facial recognition in a multi-camera environment. The system can prevent safety accidents that occur when daycare teachers are distracted from childcare while taking photos for daily report notes. It operates in two parts, a mobile collector and a classification server. The mobile collector uses a Raspberry Pi, takes roughly one photo per second, and stores it in a shared folder via SAMBA. The classification server detects and classifies faces using YOLOv5. Facial expressions in the classified photos are then determined with OpenCV and TensorFlow-Keras, and only the smiling photos to be sent to parents are retained; the remaining photos are moved to /dev/null and deleted.
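
A minimal sketch of the classification-server side of the pipeline described in this entry (detect faces in the collected photos, keep only smiling ones, discard the rest); the model files, folder path, and smile classifier are illustrative assumptions, not the authors' code.

```python
import glob
import os

import cv2
import torch
from tensorflow.keras.models import load_model

face_detector = torch.hub.load("ultralytics/yolov5", "yolov5s")  # stand-in for the trained face model
smile_model = load_model("smile_classifier.h5")                  # hypothetical Keras expression classifier

for path in glob.glob("/srv/samba/photos/*.jpg"):                # shared folder written by the Raspberry Pi
    image = cv2.imread(path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    boxes = face_detector(rgb).xyxy[0].tolist()                  # each row: x1, y1, x2, y2, conf, class
    keep = False
    for x1, y1, x2, y2, conf, cls in boxes:
        face = cv2.resize(rgb[int(y1):int(y2), int(x1):int(x2)], (64, 64)) / 255.0
        if smile_model.predict(face[None, ...], verbose=0)[0][0] > 0.5:  # smiling face found
            keep = True
            break
    if not keep:
        os.remove(path)  # non-smiling photos are discarded (the paper routes them to /dev/null)
```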

Visual Interpretation about the Underground Information using Borehole Camera (휴대용 시추공 카메라를 이용한 지하정보의 가시화 기법)

  • Matsui Kikuo;Jeong Yun-Young
    • Tunnel and Underground Space / v.15 no.1 s.54 / pp.28-38 / 2005
  • With the recent development of measurement systems that use one or a set of boreholes, visualization of the explored underground has become a major issue. This has led to the introduction of multi-function monitoring apparatuses for borehole walls, but their use was often limited in places with unfavorable rock conditions or that few engineers can access. Consequently, a portable borehole camera with only the essential functions has been investigated, and a few commercial models are now being applied in the field. This paper is based on monitoring results obtained using a commercial model developed by Dr. Nakagawa. Discontinuities in the rock mass were the target of the visualization, and this study examined how their three-dimensional distribution can be visualized, what numerical formulation is required, and how the visualization results should be interpreted. The numerical formulation was based on the geometric relation between the dip direction/dip of a discontinuity plane and the trend/plunge of the borehole, from which the equations of the planes were derived. Field applications at two sites show that the proposed visualization methodology is an especially useful geotechnical tool for analyzing the local distribution of discontinuities.
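
The plane-equation derivation mentioned in the abstract follows standard orientation geometry; the relations below are an illustrative reconstruction in east-north-up coordinates, not necessarily the paper's exact formulation.

```latex
% Illustrative reconstruction (east-north-up axes), not the paper's exact notation.
% Upward unit normal of a discontinuity plane with dip direction \alpha and dip \delta:
\mathbf{n} = \bigl(-\sin\delta\sin\alpha,\ -\sin\delta\cos\alpha,\ \cos\delta\bigr)
% Unit vector along a borehole with trend \phi and plunge \theta (pointing downward):
\mathbf{b} = \bigl(\sin\phi\cos\theta,\ \cos\phi\cos\theta,\ -\sin\theta\bigr)
% Plane through a point \mathbf{p}_0 picked on the borehole-wall image, and the angle
% \gamma between plane normal and borehole axis that governs the trace seen on the wall:
\mathbf{n}\cdot(\mathbf{x}-\mathbf{p}_0) = 0, \qquad \cos\gamma = \mathbf{n}\cdot\mathbf{b}
```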

Development of a Real Time Video Image Processing System for Vehicle Tracking (실시간 영상처리를 이용한 개별차량 추적시스템 개발)

  • Oh, Ju-Taek;Min, Joon-Young
    • International Journal of Highway Engineering / v.10 no.3 / pp.19-31 / 2008
  • Video image processing systems (VIPS) offer numerous benefits to transportation models and applications because of their ability to monitor traffic in real time. VIPS based on wide-area detection, i.e., a multi-lane surveillance algorithm, provide traffic parameters such as flow and velocity as well as occupancy and density with a single camera. However, most current commercial VIPS use a tripwire detection algorithm that examines image-intensity changes in the detection regions to indicate vehicle presence and passage; that is, they do not identify individual vehicles as unique targets. If VIPS are developed to track individual vehicles and thus trace vehicle trajectories, many existing transportation models will benefit from more detailed information on individual vehicles. Furthermore, the additional information obtained from vehicle trajectories will improve incident detection by identifying lane-change maneuvers and acceleration/deceleration patterns. The objective of this research was to relate traffic safety to VIPS-based tracking. This paper develops a computer vision system that monitors individual vehicle trajectories using image processing and provides detailed information such as volume, speed, and occupancy rate in addition to the traffic information available from tripwire image detectors. The developed system was also verified by comparison with commercial VIP detectors.
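
As a small illustration of what trajectory-level tracking adds over tripwire detection, the sketch below derives a per-vehicle speed from tracked centroid positions; the frame rate, pixel-to-meter scale, and trajectory are assumed values, not the paper's data.

```python
# Deriving per-vehicle speed from a tracked trajectory (centroid per frame),
# as a tracking-based VIPS would. Calibration values are assumptions.
FRAME_RATE_HZ = 30.0          # camera frame rate
METERS_PER_PIXEL = 0.05       # assumed scale from camera calibration

trajectory_px = [(120, 400), (122, 385), (125, 371), (127, 356)]  # centroid per frame

speeds = []
for (x0, y0), (x1, y1) in zip(trajectory_px, trajectory_px[1:]):
    dist_m = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * METERS_PER_PIXEL
    speeds.append(dist_m * FRAME_RATE_HZ)          # m/s between consecutive frames

print(f"mean speed: {sum(speeds) / len(speeds) * 3.6:.1f} km/h")
```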

Multi-sensor Intelligent Robot (멀티센서 스마트 로보트)

  • Jang, Jong-Hwan;Kim, Yong-Ho
    • The Journal of Natural Sciences / v.5 no.1 / pp.87-93 / 1992
  • A robotically assisted field material-handling system designed for loading and unloading a planar pallet with a forklift in an unstructured field environment is presented. The system uses combined acoustic/visual sensing data to determine the position/orientation of the pallet and the specific locations of its two slots, so that the forklift can move close to a slot and engage it for transport. To reduce the complexity of the material-handling operation, we developed a method based on integrating 2-D range data from a Polaroid ultrasonic sensor with 2-D visual data from an optical camera. Data obtained from the two separate sources complement each other and are used in an efficient algorithm to control this robotically assisted field material-handling system. Range data obtained from two linear scannings are used to determine the pan and tilt angles of the pallet using the least-mean-square method. The 2-D visual data are then used to determine the swing angle and engagement location of the pallet using edge detection and Hough-transform techniques. The limitations on the pan and tilt orientations that can be determined are discussed. The developed system is evaluated through hardware and software implementation, and the experimental results are presented.
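
A brief sketch of the pan/tilt estimation step described above: a least-squares line fit to range readings from a linear scan gives the inclination of the pallet face. The scan geometry and readings are illustrative assumptions, not the paper's data.

```python
import math
import numpy as np

# (lateral sensor position in m, measured range to the pallet face in m) per scan point
horizontal_scan = np.array([[-0.30, 1.52], [-0.10, 1.48], [0.10, 1.44], [0.30, 1.40]])
vertical_scan   = np.array([[-0.20, 1.47], [0.00, 1.46], [0.20, 1.45]])

def face_inclination_deg(scan):
    # slope of range vs. scan position (least-squares fit) gives the face inclination
    slope, _ = np.polyfit(scan[:, 0], scan[:, 1], 1)
    return math.degrees(math.atan(slope))

print(f"pan  angle ≈ {face_inclination_deg(horizontal_scan):.1f}°")
print(f"tilt angle ≈ {face_inclination_deg(vertical_scan):.1f}°")
```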

Monovision Charging Terminal Docking Method for Unmanned Automatic Charging of Autonomous Mobile Robots (자율이동로봇의 무인 자동 충전을 위한 모노비전 방식의 충전단자 도킹 방법)

  • Keunho Park;Juhwan Choi;Seonhyeong Kim;Dongkil Kang;Haeseong Jo;Joonsoo Bae
    • Journal of Korean Society of Industrial and Systems Engineering / v.47 no.3 / pp.95-103 / 2024
  • The diversity of smart EV (electric vehicle)-related industries is increasing due to the growth of battery-based eco-friendly electric-vehicle component and material technology, and labor-intensive industries such as logistics, manufacturing, food, agriculture, and service have long invested in and studied automation. Accordingly, various types of robots, such as autonomous mobile robots and collaborative robots, are being used in each process to improve industrial-engineering outcomes such as optimization, productivity management, and work management. A technology that should accompany this unmanned automation is unmanned automatic charging; if autonomous mobile robots are charged manually, their utility cannot be maximized. In this paper, we studied unmanned charging of autonomous mobile robots through charging-terminal docking and undocking, using an unmanned charging system composed of hardware such as a monocular camera, a multi-joint robot, a gripper, and a server. In an experiment evaluating the performance of the system, the average charging-terminal recognition rate was 98% and the average recognition speed was 0.0099 seconds. An experiment evaluating the docking and undocking success rate of the charging terminal showed an average success rate of 99%.

A Study on the Restoration of a Low-Resolution Iris Image into a High-Resolution One Based on Multiple Multi-Layered Perceptrons (다중 다층 퍼셉트론을 이용한 저해상도 홍채 영상의 고해상도 복원 연구)

  • Shin, Kwang-Yong;Kang, Byung-Jun;Park, Kang-Ryoung;Shin, Jae-Ho
    • Journal of Korea Multimedia Society / v.13 no.3 / pp.438-456 / 2010
  • Iris recognition uses a user's unique iris pattern to identify a person. To achieve good recognition performance, it is reported that the diameter of the iris region should be greater than 200 pixels in the captured image, so previous iris systems used zoom-lens cameras, which increase the size and cost of the system. To overcome these problems, we propose a new method for enhancing the accuracy of iris recognition on low-resolution iris images captured without a zoom lens. This research is novel in two ways compared with previous work. First, it is the first to analyze the performance degradation of iris recognition as image resolution decreases while excluding other factors such as image blurring and occlusion by the eyelid and eyelashes. Second, to restore a high-resolution iris image from a single low-resolution one, we propose a new method based on multiple multi-layered perceptrons (MLPs) trained according to the edge direction of the iris patterns, which greatly improves recognition accuracy on the restored images. Experimental results showed that when iris images down-sampled to 6% of the original image were restored to high resolution with the proposed method, the EER of iris recognition was reduced by 0.133% (from 1.485% to 1.352%) compared with bi-linear interpolation.
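
To illustrate the idea of direction-specific restoration, the sketch below routes each low-resolution patch to an MLP chosen by its dominant edge direction; the network sizes, random weights, and the crude direction estimate are placeholders, not the trained models from the paper.

```python
import numpy as np

PATCH = 4            # low-resolution patch size
SCALE = 2            # restoration factor
rng = np.random.default_rng(0)

# one (input -> hidden -> output) MLP per edge-direction class: 0°, 45°, 90°, 135°
mlps = {d: (rng.standard_normal((PATCH * PATCH, 32)),
            rng.standard_normal((32, (PATCH * SCALE) ** 2))) for d in (0, 45, 90, 135)}

def dominant_direction(patch):
    # crude dominant-edge-direction estimate (ignores wrap-around near 180°)
    gy, gx = np.gradient(patch.astype(float))
    angle = np.degrees(np.arctan2(gy.sum(), gx.sum())) % 180
    return min(mlps, key=lambda d: abs(angle - d))

def restore(patch):
    w1, w2 = mlps[dominant_direction(patch)]        # pick the MLP for this edge direction
    hidden = np.tanh(patch.reshape(1, -1) @ w1)
    return (hidden @ w2).reshape(PATCH * SCALE, PATCH * SCALE)

low_res_patch = rng.integers(0, 256, (PATCH, PATCH))
print(restore(low_res_patch).shape)                  # (8, 8) restored high-resolution patch
```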

Video event control system by recognition of depth touch (깊이 터치를 통한 영상 이벤트 제어 시스템)

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Society of Industrial Information Systems / v.21 no.1 / pp.35-42 / 2016
  • Various events such as stop, playback, capture, and zoom-in/out during video playback are available on small displays such as smartphones. However, as the size of the display increases, the cost of touch recognition increases, so providing touch events becomes impractical. In this paper, we propose a video event control system that recognizes touches inexpensively from depth information and provides a variety of events, such as toggle and pinch-in/out, through single or multi-touch. The proposed method finds the touch location and touch path from the depth information of a depth camera and determines the type of touch gesture. This touch interface algorithm is implemented on a small single-board system and can control video events by sending the gesture information over UART communication. Simulation results show that the proposed depth-touch method can control various events on a large screen.
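
A minimal sketch of the depth-touch idea described above: a pixel is treated as a touch when the current depth lies just in front of the display surface recorded in a background depth map. The thresholds and synthetic frames are assumptions, not the paper's implementation.

```python
import numpy as np

TOUCH_MIN_MM, TOUCH_MAX_MM = 5, 30     # a fingertip sits this close to the screen plane

def detect_touches(background_mm: np.ndarray, frame_mm: np.ndarray):
    diff = background_mm.astype(int) - frame_mm.astype(int)   # positive = object in front of the screen
    mask = (diff >= TOUCH_MIN_MM) & (diff <= TOUCH_MAX_MM)
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))                # touch pixel coordinates

background = np.full((240, 320), 1000, dtype=np.uint16)       # display plane at 1 m depth
frame = background.copy()
frame[120, 160] = 985                                          # fingertip 15 mm in front of the screen
print(detect_touches(background, frame))                       # [(160, 120)]
```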

S-FDS : a Smart Fire Detection System based on the Integration of Fuzzy Logic and Deep Learning (S-FDS : 퍼지로직과 딥러닝 통합 기반의 스마트 화재감지 시스템)

  • Jang, Jun-Yeong;Lee, Kang-Woon;Kim, Young-Jin;Kim, Won-Tae
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.4 / pp.50-58 / 2017
  • Recently, several methods that fuse heterogeneous fire-sensor data have been proposed for effective fire detection, but rule-based methods have low adaptability and accuracy, and fuzzy-inference methods suffer in detection speed and accuracy because they do not consider images. In addition, a few image-based deep-learning methods have been studied, but they cannot rapidly recognize a fire when no camera is present or the fire is outside a camera's field of view. In this paper, we propose a novel fire-detection system that combines a CNN-based deep-learning algorithm with a fuzzy inference engine driven by heterogeneous fire-sensor data, including temperature, humidity, gas, and smoke density. We show that the proposed system can rapidly detect fire from images and decide on fire reliably from multi-sensor data. We also apply a distributed computing architecture to the fire-detection algorithm to avoid concentrating the computing load on a server and thereby enhance scalability. Finally, we demonstrate the performance of the system through two experiments using NIST's Fire Dynamics Simulator, covering both an explosively spreading fire and a gradually growing fire.
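
The sketch below illustrates the kind of fusion the abstract describes, combining a fuzzy score over multi-sensor readings with a CNN fire probability; the membership functions, single rule, fusion weight, and threshold are assumptions, not the paper's rule base.

```python
def mu_high(x, lo, hi):
    """Membership of x in the fuzzy set 'high', ramping linearly from lo to hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def fuzzy_fire_score(temp_c, smoke_density, gas_ppm):
    # single illustrative rule: IF temperature is high AND (smoke is high OR gas is high) THEN fire
    return min(mu_high(temp_c, 40, 80),
               max(mu_high(smoke_density, 0.05, 0.25), mu_high(gas_ppm, 100, 400)))

def decide_fire(cnn_prob, temp_c, smoke_density, gas_ppm, weight=0.5):
    # weighted fusion of the image-based CNN probability and the sensor-based fuzzy score
    fused = weight * cnn_prob + (1 - weight) * fuzzy_fire_score(temp_c, smoke_density, gas_ppm)
    return fused > 0.6          # assumed alarm threshold

print(decide_fire(cnn_prob=0.9, temp_c=70, smoke_density=0.2, gas_ppm=350))  # True
```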

Implementation of Web-based Remote Multi-View 3D Imaging Communication System Using Adaptive Disparity Estimation Scheme (적응적 시차 추정기법을 이용한 웹 기반의 원격 다시점 3D 화상 통신 시스템의 구현)

  • Ko Jung-Hwan;Kim Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.1C / pp.55-64 / 2006
  • In this paper, a new web-based remote 3D imaging communication system employing an adaptive matching algorithm is proposed. In the proposed method, a feature value is extracted from the stereo image pair by estimating the disparity and the similarity between pixels of the stereo pair, and the matching window size for disparity estimation is then adaptively selected according to the magnitude of this feature value. Finally, the detected disparity map and the left image are transmitted to the client over the network channel. On the client side, the right image is reconstructed and intermediate views are synthesized in real time by a linear combination of the left and right images using interpolation. Experiments on real-time web-based transmission and intermediate-view synthesis with two stereo image sets, 'Joo' and 'Hoon', captured by a real camera, show that the PSNRs of the intermediate views reconstructed with the proposed transmission scheme reach 30 dB for 'Joo' and 27 dB for 'Hoon', and that the delay time required to obtain the 4-view intermediate image is kept to a fast 67.2 ms on average.
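
A compact sketch of the adaptive-window idea described above: the matching window for block-based disparity estimation is chosen per pixel from a local feature measure. The variance criterion, window sizes, and synthetic images are illustrative assumptions, not the paper's algorithm parameters.

```python
import numpy as np

def disparity_at(left, right, y, x, max_disp=16):
    # choose a small matching window in textured regions, a larger one in flat regions
    patch = left[y - 4:y + 5, x - 4:x + 5].astype(float)
    window = 2 if patch.var() > 200 else 4
    ref = left[y - window:y + window + 1, x - window:x + window + 1].astype(float)
    costs = []
    for d in range(min(max_disp, x - window) + 1):
        cand = right[y - window:y + window + 1, x - d - window:x - d + window + 1].astype(float)
        costs.append(np.abs(ref - cand).sum())        # SAD matching cost for candidate disparity d
    return int(np.argmin(costs))

rng = np.random.default_rng(1)
left = rng.integers(0, 256, (64, 64)).astype(np.uint8)
right = np.roll(left, -3, axis=1)                      # synthetic pair with a 3-pixel disparity
print(disparity_at(left, right, 32, 40))               # ≈ 3
```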