• Title/Summary/Keyword: Single camera


Design and Implementation of S/W for a Davinci-based Smart Camera (다빈치 기반 스마트 카메라 S/W 설계 및 구현)

  • Yu, Hui-Jse;Chung, Sun-Tae;Jung, Souhwan
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2008.05a
    • /
    • pp.116-120
    • /
    • 2008
  • Smart cameras provide intelligent vision functionalities that interpret captured video, extract context-aware information, and execute necessary actions in real time, in addition to the functionality of network cameras, which transmit compressed video over networks. Intelligent vision algorithms demand tremendous computation, so performing them in real time while simultaneously compressing and transmitting video is too heavy a burden for a single CPU. The Davinci processor from Texas Instruments is a popular ASSP (Application Specific Standard Product) with a dual-core architecture of an ARM core and a DSP core, providing various I/O interfaces as well as the networking and video-acquisition interfaces necessary for developing embedded digital video applications. In this paper, we report the results of designing and implementing S/W for a Davinci-based smart camera. We implement face detection as an example vision application and verify that the implementation works well. In the future, developing a smart camera with broader real-time vision functionalities will require study of more efficient vision application S/W architectures and of the optimization of vision algorithms on the DSP core of the Davinci processor.

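As an illustration of the kind of vision application the paper describes (face detection on frames from a single camera), the sketch below runs a Haar-cascade detector with OpenCV on a desktop. It is only a minimal stand-in for the idea, not the authors' DSP-side implementation on the Davinci processor; the camera index and cascade file are assumptions.

```python
# Illustrative only: a Haar-cascade face detector on frames from a single camera,
# standing in for the vision stage of a smart-camera pipeline. This is not the
# paper's DSP-side implementation; the model file and camera index are assumptions.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:     # draw detections before encoding/streaming
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("smart-camera preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```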

Derivation and Comparison of Narrow and Broadband Algorithms for the Retrieval of Ocean Color Information from Multi-Spectral Camera on Kompsat-2 Satellite

  • Ahn, Yu-Hwan;Shanmugam, Palanisamy;Ryu, Joo-Hyung;Moon, Jeong-Eom
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.3
    • /
    • pp.173-188
    • /
    • 2005
  • The present study aims to derive and compare narrow- and broad-bandwidth ocean color algorithms for monitoring highly dynamic coastal oceanic environmental parameters using high-resolution imagery acquired from the Multi-spectral Camera (MSC) on KOMPSAT-2. The algorithms are derived from a large data set of remote sensing reflectances ($R_{rs}$) generated using a numerical model that relates $b_b/(a + b_b)$ to $R_{rs}$ as a function of inherent optical properties, such as the absorption and backscattering coefficients of six water components: water, phytoplankton (chl), dissolved organic matter (DOM), suspended sediment (SS), heterotrophic organisms (he), and an unknown component, possibly represented by bubbles or other particulates unrelated to the first five components. The modeled $R_{rs}$ spectra are consistent with in-situ spectra collected from Korean waters. As the KOMPSAT-2 MSC has spectral characteristics similar to those of the Landsat-5 Thematic Mapper (TM), the model-generated $R_{rs}$ values at 2 nm intervals are converted to the equivalent remote sensing reflectances at the MSC and TM bands. Empirical relationships between spectral ratios of modeled $R_{rs}$ and chlorophyll concentration are established to derive algorithms for both TM and MSC, and similar algorithms are obtained by relating a single-band reflectance (band 2) to suspended sediment concentration. The algorithms derived with the narrow and broad spectral bandwidths are then compared and assessed. The findings show little difference between the broad- and narrow-band relationships: the determination coefficient $(r^2)$ for the log-transformed chlorophyll data (N = 500) was 0.90 for both TM and MSC, and that for the suspended sediment data (N = 500) was 0.93 and 0.92 for TM and MSC, respectively. The algorithms presented here are expected to contribute significantly to an enhanced understanding of coastal oceanic environmental parameters using the Multi-spectral Camera.
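
The chlorophyll algorithm described above is an empirical fit between log-transformed $R_{rs}$ band ratios and chlorophyll concentration (with a single-band fit for suspended sediment). As a hedged sketch of how such a band-ratio regression is fitted and applied, the snippet below uses synthetic placeholder data; the band pairing, coefficients, and data are hypothetical, not the algorithm derived in the paper.

```python
# A minimal sketch of an empirical band-ratio ocean color algorithm of the kind
# described above: fit log10(chl) against log10 of an Rrs band ratio (hypothetical
# band pairing). All data and coefficients are placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)
chl = rng.uniform(0.1, 30.0, 500)                            # placeholder chlorophyll, mg/m^3
ratio = (chl / 10.0) ** -0.8 * rng.lognormal(0, 0.05, 500)   # synthetic Rrs band ratio

# Fit a linear model in log-log space: log10(chl) = a0 + a1*log10(ratio)
a1, a0 = np.polyfit(np.log10(ratio), np.log10(chl), 1)

def chl_from_ratio(rrs_ratio):
    """Apply the fitted band-ratio algorithm to a reflectance ratio."""
    return 10.0 ** (a0 + a1 * np.log10(rrs_ratio))

pred = chl_from_ratio(ratio)
r2 = 1 - np.sum((np.log10(pred) - np.log10(chl)) ** 2) / \
        np.sum((np.log10(chl) - np.log10(chl).mean()) ** 2)
print(f"fit: log10(chl) = {a0:.3f} + {a1:.3f} log10(ratio), r^2 = {r2:.3f}")
```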

Tackling range uncertainty in proton therapy: Development and evaluation of a new multi-slit prompt-gamma camera (MSPGC) system

  • Youngmo Ku;Sehoon Choi;Jaeho Cho;Sehyun Jang;Jong Hwi Jeong;Sung Hun Kim;Sungkoo Cho;Chan Hyeong Kim
    • Nuclear Engineering and Technology
    • /
    • v.55 no.9
    • /
    • pp.3140-3149
    • /
    • 2023
  • In theory, the sharp dose falloff at the distal end of a proton beam allows a highly conformal dose to the target. In practice, however, this conformity has not been fully achieved, primarily because of beam range uncertainty, which is approximately 4% and varies slightly across institutions. To address this issue, we developed a new range verification system prototype: a multi-slit prompt-gamma camera (MSPGC). The system features high prompt-gamma detection sensitivity, an advanced range estimation algorithm, and a precise camera positioning system. We evaluated the range measurement precision of the prototype for single spot beams with varying energies, proton quantities, and positions, as well as for spot-scanning proton beams in a simulated spot-scanning proton therapy (SSPT) treatment using a phantom. Our results demonstrated high accuracy (<0.4 mm) in range measurement for the tested beam energies and positions. Measurement precision increased significantly with the number of protons, reaching 1% with $5{\times}10^8$ protons. For spot-scanning proton beams, the prototype ensured more than $5{\times}10^8$ protons per spot with a spot aggregation of 7 mm or larger, achieving 1% range measurement precision. Based on these findings, we anticipate that clinical application of the new prototype will reduce range uncertainty from the current approximately 4% to 1% or less.
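
The abstract ties 1% range precision to roughly $5{\times}10^8$ detected protons per measurement and, for scanned beams, to aggregating spots within about 7 mm. The sketch below is only a speculative illustration of such a spot-aggregation step under assumed data structures (spot positions and proton counts); it is not the authors' algorithm.

```python
# A rough sketch of spot aggregation: group neighboring pencil-beam spots (within an
# assumed 7 mm radius) until the summed proton count reaches the ~5e8 threshold the
# abstract associates with 1% range precision. Spot list and grouping rule are assumptions.
import math

THRESHOLD = 5e8          # protons needed per aggregated measurement (from the abstract)
RADIUS_MM = 7.0          # spot aggregation radius (from the abstract)

def aggregate_spots(spots):
    """spots: list of (x_mm, y_mm, n_protons). Returns groups meeting the threshold."""
    remaining = sorted(spots, key=lambda s: -s[2])   # start from the strongest spot
    groups = []
    while remaining:
        seed = remaining.pop(0)
        group, total = [seed], seed[2]
        keep = []
        for s in remaining:
            if math.dist(seed[:2], s[:2]) <= RADIUS_MM and total < THRESHOLD:
                group.append(s)
                total += s[2]
            else:
                keep.append(s)
        remaining = keep
        if total >= THRESHOLD:
            groups.append((group, total))
    return groups

example = [(0, 0, 2e8), (3, 0, 2e8), (0, 4, 1.5e8), (30, 30, 1e8)]  # made-up spots
for g, n in aggregate_spots(example):
    print(f"group of {len(g)} spots, {n:.2e} protons")
```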

Utility of intraoral scanner imaging for dental plaque detection

  • Chihiro Yoshiga;Kazuya Doi;Hiroshi Oue;Reiko Kobatake;Maiko Kawagoe;Hanako Umehara;Kazuhiro Tsuga
    • Imaging Science in Dentistry
    • /
    • v.54 no.1
    • /
    • pp.43-48
    • /
    • 2024
  • Purpose: Oral hygiene, maintained through plaque control, helps prevent periodontal disease and dental caries. This study examined the accuracy of plaque detection with an intraoral scanner (IOS) compared to images captured with an optical camera. Materials and Methods: To examine the effect of color tone, artificial tooth resin samples were stained red, blue, and green, after which images were acquired with a digital single-lens reflex (DSLR) camera and an IOS device. The stained surface ratios were then determined and compared, and the deviation rate of the IOS relative to the DSLR camera was computed for each color. In the clinical study, plaque was stained with a red disclosing solution, the staining was captured by the DSLR and IOS devices, and the stained area on each image was measured. Results: The stained surface ratios did not differ significantly between DSLR and IOS images for any color group, and the deviation rate did not vary significantly across colors. In the clinical test, the stained plaque appeared slightly lighter in color, and the delineation of the stained areas was less distinct, on the IOS images compared to the DSLR images; however, the stained surface ratio was significantly higher in the IOS group than in the DSLR group. Conclusion: When employing an IOS with dental plaque staining, the impact of color was minimal, suggesting that the traditional red stain remains suitable for plaque detection. IOS images appeared relatively blurred and enlarged relative to the true state of the teeth, owing to their inferior sharpness compared to camera images.
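
The stained surface ratio compared above is the stained area divided by the tooth area in an image. As a hedged illustration of how such a ratio could be computed, the sketch below thresholds a red disclosing stain in HSV space with OpenCV; the file names, threshold values, and the pre-made tooth mask are assumptions rather than the study's measurement procedure.

```python
# Illustration only: compute a "stained surface ratio" (stained pixels / tooth pixels)
# from an RGB image of a tooth, using simple HSV thresholds for a red disclosing stain.
# File names, thresholds, and the pre-made tooth mask are assumptions, not the study's method.
import cv2
import numpy as np

img = cv2.imread("tooth.jpg")                                     # assumed input image
tooth_mask = cv2.imread("tooth_mask.png", cv2.IMREAD_GRAYSCALE)   # assumed tooth-region mask

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Red wraps around hue 0, so combine two hue bands (threshold values are guesses).
red_lo = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
red_hi = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
stain = cv2.bitwise_and(cv2.bitwise_or(red_lo, red_hi), tooth_mask)

stained_ratio = np.count_nonzero(stain) / max(np.count_nonzero(tooth_mask), 1)
print(f"stained surface ratio: {stained_ratio:.1%}")
```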

Gaze Tracking System Using Feature Points of Pupil and Glints Center (동공과 글린트의 특징점 관계를 이용한 시선 추적 시스템)

  • Park Jin-Woo;Kwon Yong-Moo;Sohn Kwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.11 no.1 s.30
    • /
    • pp.80-90
    • /
    • 2006
  • A simple 2D gaze tracking method using a single camera and the Purkinje image is proposed. The method employs a single camera with an infrared filter to capture one eye and two infrared light sources to create reflection points from which the corresponding gaze point on the screen is estimated. The camera, the infrared light sources, and the user's head may move slightly, so the system is simple and flexible without any inconvenient fixed equipment or the assumption of a fixed head. The system also includes a simple and accurate personal calibration procedure: before using the system, each user only has to stare at two target points for a few seconds so that the system can initialize the user's individual factors in the estimation algorithm. The proposed system runs in real time, processing over 10 frames per second at XGA $(1024{\times}768)$ resolution. Test results for nine objects and three subjects show that the system achieves an average estimation error of less than 1 degree.
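
The method estimates the on-screen gaze point from the geometric relation between the pupil center and the two infrared glints after a short personal calibration. The sketch below shows one generic way to realize such a mapping, a least-squares affine fit from a normalized pupil-glint feature to screen coordinates; the feature definition, the affine model, and the use of three calibration targets are assumptions, not necessarily the paper's two-point estimation algorithm.

```python
# A minimal sketch of mapping a pupil-glint feature to screen coordinates with a
# linear (affine) calibration. The feature (pupil center minus glint midpoint,
# normalized by glint spacing) and the affine model are assumptions for illustration.
import numpy as np

def gaze_feature(pupil, glint1, glint2):
    """Normalized pupil-glint vector; pupil/glints are (x, y) in image pixels."""
    pupil, g1, g2 = map(np.asarray, (pupil, glint1, glint2))
    spacing = np.linalg.norm(g2 - g1) + 1e-9
    return (pupil - (g1 + g2) / 2.0) / spacing

def fit_affine(features, screen_points):
    """Least-squares affine map from features to screen (x, y) coordinates."""
    F = np.hstack([np.asarray(features), np.ones((len(features), 1))])
    S = np.asarray(screen_points, dtype=float)
    A, *_ = np.linalg.lstsq(F, S, rcond=None)      # 3x2 mapping matrix
    return A

def estimate_gaze(A, pupil, glint1, glint2):
    f = np.append(gaze_feature(pupil, glint1, glint2), 1.0)
    return f @ A                                    # (x, y) on screen

# Calibration with a few known target points (all values are made up):
feats = [gaze_feature((410, 300), (400, 310), (420, 310)),
         gaze_feature((430, 302), (400, 310), (420, 310)),
         gaze_feature((412, 318), (400, 312), (420, 312))]
targets = [(100, 100), (900, 100), (100, 700)]
A = fit_affine(feats, targets)
print(estimate_gaze(A, (420, 310), (400, 311), (420, 311)))
```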

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.780-788
    • /
    • 2008
  • Modeling hand poses and tracking their movement are challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras used: one captures images from multiple cameras or a stereo camera, and the other captures images from a single camera. The former approach is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper, we propose a method for reconstructing 3D hand poses from a 2D image sequence captured by a single camera by means of belief propagation in a graphical model, and for recognizing a finger-clicking motion using a hidden Markov model. We define a graphical model whose hidden nodes represent the joints of a hand and whose observable nodes hold the features extracted from the 2D input image sequence. To track hand poses in 3D, we use the belief propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract information about each finger's motion, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, and the results showed a high recognition rate of 94.66% on 300 test samples.
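
The finger-motion information extracted from the estimated 3D hand pose is scored by a hidden Markov model. As a hedged illustration of that final step, the sketch below evaluates a quantized finger-motion sequence against a toy discrete HMM with the forward algorithm; the states, symbols, and probabilities are invented, not the paper's trained models.

```python
# A toy discrete HMM and its forward-algorithm likelihood, illustrating how a
# quantized finger-motion sequence could be scored against a "click" model.
# States, symbols, and probabilities are invented for the sketch.
import numpy as np

# Hidden states: 0 = finger idle, 1 = flexing (press), 2 = extending (release)
A = np.array([[0.8, 0.2, 0.0],      # state transition matrix
              [0.0, 0.7, 0.3],
              [0.3, 0.0, 0.7]])
# Observation symbols: 0 = no motion, 1 = downward motion, 2 = upward motion
B = np.array([[0.8, 0.1, 0.1],      # emission probabilities per state
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
pi = np.array([0.9, 0.1, 0.0])      # initial state distribution

def forward_loglik(obs):
    """Log-likelihood of an observation sequence under the HMM (scaled forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale                  # rescale to avoid numerical underflow
    return loglik + np.log(np.maximum(alpha.sum(), 1e-300))

click_like = [0, 1, 1, 2, 2, 0]        # quantized motion of one finger
print(forward_loglik(click_like))
```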

Camera App of Smartphone with Multi-Focus Shooting and Focus Post-processing Functions (다초점 촬영과 초점후처리 기능을 가진 스마트폰 카메라 앱)

  • Chae-Won Park;Kyung-Mi Kim;Song-Yeon Yoo;Yu-Jin Kim;Kitae Hwang;In-Hwang Jung;Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.189-196
    • /
    • 2024
  • Currently, it is almost impossible to move the focus of a previously taken photo to a different location. This paper tackles the problem of refocusing a captured photo after shooting. To achieve this goal, we propose and implement a method that takes photos at multiple focuses at the moment of shooting and stores them in a single JPEG file, so that a photo focused at the user's preferred location can be extracted later. Two capture methods are implemented: taking a series of photos while quickly sweeping the lens focal distance from near to far, and taking photos focused on each object recognized in the camera viewfinder. The captured photos are stored in a single JPEG to maintain compatibility with traditional photo viewers; this JPEG file uses the All-in-JPEG format proposed in previous research to store multiple images. The practicality of these techniques is verified by implementing them in an Android app named OnePIC.
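
The exact layout of the All-in-JPEG container comes from the authors' previous work and is not described here, so the sketch below is only a hypothetical illustration of the general idea: ordinary viewers stop at the primary image's end-of-image marker, so additional focus-bracketed JPEG payloads and a small index can be appended after it. The trailing index format here is invented and is not the All-in-JPEG specification.

```python
# Hypothetical illustration of packing several focus-bracketed JPEGs into one file
# that ordinary viewers still open: viewers stop at the primary image's EOI marker,
# so extra payloads appended after it are ignored.
# This is NOT the All-in-JPEG format; the trailing index layout is invented here.
import json, struct

def pack(primary_jpeg, extra_jpegs, out_path):
    with open(primary_jpeg, "rb") as f:
        blob = f.read()
    offsets = []
    for path in extra_jpegs:
        with open(path, "rb") as f:
            data = f.read()
        offsets.append({"offset": len(blob), "size": len(data), "name": path})
        blob += data
    index = json.dumps(offsets).encode()
    # Append the index and its length so an aware reader can find the extra images.
    blob += index + struct.pack(">I", len(index)) + b"MFOC"
    with open(out_path, "wb") as f:
        f.write(blob)

def unpack(packed_path):
    with open(packed_path, "rb") as f:
        blob = f.read()
    assert blob[-4:] == b"MFOC", "not a packed multi-focus file"
    (index_len,) = struct.unpack(">I", blob[-8:-4])
    index = json.loads(blob[-8 - index_len:-8])
    return [blob[e["offset"]:e["offset"] + e["size"]] for e in index]
```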

Moving Object Following by a Mobile Robot using a Single Curvature Trajectory and Kalman Filters (단일곡률궤적과 칼만필터를 이용한 이동로봇의 동적물체 추종)

  • Lim, Hyun-Seop;Lee, Dong-Hyuk;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.7
    • /
    • pp.599-604
    • /
    • 2013
  • Path planning for mobile robots aims to design an optimal path from an initial position to a target point. Minimum driving time, minimum driving distance, and minimum driving error may be considered in choosing the optimal path, and these criteria are correlated with each other. In this paper, an efficient driving trajectory is planned for the real situation in which a mobile robot follows a moving object. The position and distance of the moving object are obtained using a web camera, and its angular and linear velocities are estimated using Kalman filters in order to predict its trajectory. The mobile robot then follows the moving object along a single curvature trajectory derived from the estimated trajectory. Using the Kalman filter estimates and the single curvature trajectory planning, the total tracking distance and time are reduced by about 7%. The effectiveness of the proposed algorithm has been verified through real tracking experiments.
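
The moving object's position and velocity are estimated with Kalman filters before the single curvature trajectory is planned. As a hedged sketch of that estimation stage, the snippet below runs a constant-velocity Kalman filter over camera-measured (x, y) positions; the motion model, noise covariances, and measurements are illustrative assumptions, not the paper's filter design.

```python
# A minimal constant-velocity Kalman filter over camera-measured (x, y) positions,
# of the kind used to predict a moving object's trajectory. The motion model and
# noise covariances here are illustrative assumptions.
import numpy as np

dt = 0.1                                        # assumed frame interval, s
F = np.array([[1, 0, dt, 0],                    # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                     # camera measures position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                            # process noise (assumed)
R = 0.05 * np.eye(2)                            # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for measurement z = (x_meas, y_meas)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = np.asarray(z) - H @ x_pred              # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)                   # initial state and covariance
for z in [(1.0, 0.5), (1.1, 0.6), (1.25, 0.72)]:  # fabricated measurements
    x, P = kalman_step(x, P, z)
print("estimated position:", x[:2], "velocity:", x[2:])
```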

Experimental Study on the Soot Formation Characteristics of Alkane-based Single Fuel Droplet (알케인계 단일 연료 액적의 Soot 생성 특성에 관한 실험적 연구)

  • Lim, Young Chan;Suh, Hyun Kyu
    • Journal of ILASS-Korea
    • /
    • v.22 no.2
    • /
    • pp.80-86
    • /
    • 2017
  • The soot formation characteristics of various alkane-based single fuel droplets were studied in this work. The study also provides a database of the soot behavior and formation of alkane-based single fuel droplets. The experimental conditions were set to an ambient pressure ($P_{amb}$) of 1.0 atm, an oxygen concentration ($O_2$) of 21%, and a nitrogen concentration ($N_2$) of 79%. Combustion and soot formation of a single fuel droplet were visualized with a high-speed camera, while the ambient pressure, oxygen concentration, and nitrogen concentration were maintained by an ambient-condition control system. The soot formation characteristics were analyzed and compared on the basis of the intensity ratio ($I/I_0$) of the background image. The toluene fuel droplet showed the largest soot generation. The soot volume fraction ($f_v$) was almost the same for a given fuel type regardless of the initial droplet diameter ($d_0$), since the thermophoretic flux changed little under the same ambient conditions.
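
Soot formation is compared through the intensity ratio $I/I_0$ of backlit images. As a hedged illustration of that kind of analysis, the sketch below computes the per-pixel ratio and the corresponding line-of-sight extinction $\ln(I_0/I)$ from a flame frame and a soot-free reference frame; the file names are placeholders, and the conversion to an absolute soot volume fraction (which requires the extinction coefficient and path length) is left out.

```python
# Illustration only: compute the backlit intensity ratio I/I0 and a relative
# line-of-sight extinction, ln(I0/I), per pixel from a flame frame and a soot-free
# reference frame. File names are assumptions; scaling to an absolute soot volume
# fraction would additionally need the extinction coefficient and path length.
import cv2
import numpy as np

I0 = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE).astype(float)     # soot-free frame
I = cv2.imread("droplet_frame.png", cv2.IMREAD_GRAYSCALE).astype(float)   # frame with soot

ratio = np.clip(I / np.maximum(I0, 1.0), 1e-3, 1.0)    # I/I0 per pixel
extinction = -np.log(ratio)                            # ln(I0/I), grows with soot loading

print("mean I/I0:", ratio.mean())
print("peak line-of-sight extinction:", extinction.max())
```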

Anomaly detection of isolating switch based on single shot multibox detector and improved frame differencing

  • Duan, Yuanfeng;Zhu, Qi;Zhang, Hongmei;Wei, Wei;Yun, Chung Bang
    • Smart Structures and Systems
    • /
    • v.28 no.6
    • /
    • pp.811-825
    • /
    • 2021
  • High-voltage isolating switches play a paramount role in ensuring the safety of power supply systems. However, their exposure to outdoor environmental conditions may cause serious physical defects, which pose great risks to power supply systems and society. Image processing-based methods have been used for anomaly detection, but their accuracy is affected by numerous uncertainties arising from manually extracted features, which makes anomaly detection of isolating switches still challenging. In this paper, a vision-based anomaly detection method for isolating switches is proposed that uses the rotational angle of the switch system for more accurate and direct anomaly detection, with the help of deep learning (DL) and image processing methods: the Single Shot Multibox Detector (SSD), an improved frame differencing method, and the Hough transform. The SSD is a deep learning method for object classification and localization; the improved frame differencing method is introduced for better feature extraction; and the Hough transform is adopted for rotational angle calculation. A number of experiments are conducted for anomaly detection of single and multiple switches using video frames. The results demonstrate that the SSD outperforms the You-Only-Look-Once network. The effectiveness and robustness of the proposed method are proven under various conditions, such as different illumination and camera locations, using 96 videos from the experiments.
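
A hedged sketch of the frame-differencing and Hough-transform stage is given below: two frames of an already localized switch region are differenced, the dominant line in each frame is found with the probabilistic Hough transform, and the change in its orientation is reported. The thresholds, the longest-line heuristic, and the input file names are assumptions, and the SSD localization step is omitted.

```python
# A rough sketch of the frame-differencing + Hough-transform stage: difference two
# frames of the (already localized) switch region, find the dominant line in each
# frame, and report the change in its orientation. Thresholds and the longest-line
# heuristic are assumptions; the SSD localization step is omitted.
import cv2
import numpy as np

def dominant_angle(gray):
    """Orientation (degrees) of the longest line found by the probabilistic Hough transform."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return None
    x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

prev = cv2.imread("switch_t0.png", cv2.IMREAD_GRAYSCALE)   # assumed cropped switch images
curr = cv2.imread("switch_t1.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(curr, prev)                             # frame differencing
moving = cv2.countNonZero(cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)[1])
a0, a1 = dominant_angle(prev), dominant_angle(curr)
if a0 is not None and a1 is not None:
    print(f"moving pixels: {moving}, rotation change: {a1 - a0:.1f} deg")
```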