• Title/Summary/Keyword: Camera parameter


Augmented Reality Based Tangible Interface For Digital Lighting of CAID System (CAID 시스템의 디지털 라이팅을 위한 증강 현실 기반의 실체적 인터페이스에 관한 연구)

  • Hwang, Jung-Ah;Nam, Tek-Jin
    • Archives of design research
    • /
    • v.20 no.3 s.71
    • /
    • pp.119-128
    • /
    • 2007
  • With the development of digital technologies, CAID has become an essential part of the industrial design process. Creating photo-realistic images from a virtual scene with 3D models is one of the specialized tasks for CAID users. This task requires a complex interface for setting the positions and parameters of the camera and lights to obtain optimal rendering results. However, the user interfaces of existing CAID tools are not simple for designers because the task is mostly accomplished in a parameter-setting dialogue window. This research addresses these interface issues, in particular those related to lighting, by developing and evaluating TLS (Tangible Lighting Studio), which uses Augmented Reality and a Tangible User Interface. The interface for positioning objects and setting parameters becomes tangible and is distributed in the workspace to support a more intuitive rendering task. TLS consists of markers, physical controllers, and a see-through HMD (Head Mounted Display). The user can directly control the lighting parameters in the AR workspace. In the evaluation experiment, TLS provided higher effectiveness, efficiency, and user satisfaction than the existing GUI (Graphical User Interface) method. It is expected that the application of TLS can be expanded to photography education and architecture simulation.


Calibration of a UAV Based Low Altitude Multi-sensor Photogrammetric System (UAV기반 저고도 멀티센서 사진측량 시스템의 캘리브레이션)

  • Lee, Ji-Hun;Choi, Kyoung-Ah;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.1
    • /
    • pp.31-38
    • /
    • 2012
  • The geo-referencing accuracy of the images acquired by a UAV based multi-sensor system is affected by the accuracy of the mounting parameters, which describe the relationship between the camera and the GPS/INS system, as well as by the performance of the GPS/INS system itself. Therefore, accurate estimation of the mounting parameters of a multi-sensor system is important. Currently, we are developing a low altitude multi-sensor system based on a UAV, which can monitor target areas in real time for rapid responses to emergency situations such as natural disasters and accidents. In this study, we suggest a system calibration method for estimating the mounting parameters of a multi-sensor system such as ours. We also generate simulation data based on the sensor specifications of our system, and derive an effective flight configuration and the number of ground control points required for accurate and efficient system calibration by applying the proposed method to the simulated data. The experimental results indicate that the proposed method can estimate accurate mounting parameters using five or more ground control points and a flight configuration composed of six strips. In the near future, we plan to estimate the mounting parameters of our system using the proposed method and to evaluate the geo-referencing accuracy of the acquired sensory data.
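
Purely as an illustration of what the mounting parameters above describe, the sketch below composes a camera pose from a GPS/INS pose using an assumed lever-arm offset and boresight rotation. The function names and angle conventions are hypothetical, not taken from the paper.

```python
# Illustrative sketch (not the paper's code): composing a camera pose from a
# GPS/INS pose using assumed mounting parameters (lever-arm offset and
# boresight rotation), the quantities a system calibration would estimate.
import numpy as np

def rotation_from_euler(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw angles in radians (Z-Y-X order)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def camera_pose(gps_position, ins_rpy, lever_arm, boresight_rpy):
    """Camera position/orientation in the mapping frame from GPS/INS outputs."""
    R_body = rotation_from_euler(*ins_rpy)        # body-to-mapping rotation
    R_bore = rotation_from_euler(*boresight_rpy)  # camera-to-body (boresight) rotation
    cam_position = gps_position + R_body @ lever_arm
    cam_rotation = R_body @ R_bore
    return cam_position, cam_rotation
```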

A Study on the Estimation of Object's Dimension based on the Vision System Model of Extended Kalman filtering (확장칼만 필터링의 비젼시스템 모델을 이용한 물체 치수 측정에 관한 연구)

  • Jang, W.S.;Ahn, H.C.;Kim, K.S.
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.25 no.2
    • /
    • pp.110-116
    • /
    • 2005
  • It is very important to reduce the computational processing time for real-time applications of vision systems, such as inspection, the determination of an object's dimensions, and welding, because the vision system model involves a large amount of measurement data acquired by a CCD camera. In addition, a lot of computation time is required to estimate the parameters in the vision system model if an iterative batch estimation method such as Newton-Raphson is used. Thus, an efficient computation method such as Extended Kalman Filtering (EKF) is required to solve these problems. The EKF has the advantages that it explicitly takes measurement uncertainties into account and that it is a simple and efficient recursive procedure. This study therefore develops an EKF algorithm to compute the parameters in the vision system model in real time. The vision system model involves six parameters accounting for the camera's intrinsic and extrinsic parameters. The EKF is also applied to estimate the object's dimensions. Finally, the practicality of the EKF-based estimation scheme for the vision system is verified experimentally by estimating an object's dimensions.
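
The abstract above contrasts recursive EKF estimation with batch Newton-Raphson iteration. The following is a minimal, generic EKF measurement-update sketch, assuming a user-supplied measurement function and Jacobian; it is not the paper's six-parameter camera model.

```python
# Generic EKF measurement-update sketch (illustrative, not the paper's model):
# recursively refines a parameter vector x from one measurement at a time,
# avoiding the batch iteration of a Newton-Raphson style solution.
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF update.
    x: current parameter estimate, P: its covariance,
    z: new measurement vector, h: measurement function, H_jac: its Jacobian at x,
    R: measurement noise covariance."""
    H = H_jac(x)
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```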

Face and Facial Element Extraction in CCD-Camera Images by using Snake Algorithm (스네이크 알고리즘에 의한 CCD 카메라 영상에서의 얼굴 및 얼굴 요소 추출)

  • 판데홍;김영원;김정연;전병환
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2002.11a
    • /
    • pp.535-542
    • /
    • 2002
  • With the recent rapid growth of the IT industry, natural interface technologies are in demand for avatar control in video conferencing, games, and chatting. This paper proposes a method that uses active contour models (snakes) to extract the contours or locate the positions of the face and of facial elements such as the eyes, mouth, eyebrows, and nose in color CCD camera images with complex backgrounds. In general, the snake algorithm is sensitive to noise and its extraction performance depends heavily on how the initial model is set, so it has mainly been used to extract frontal faces from images with simple backgrounds. To overcome these shortcomings, we first find the minimum enclosing rectangle (MER) of the face using color information based on the I component of the YIQ color model together with difference-image information, and then set the MERs of the eyes, mouth, eyebrows, and nose within this face region using geometric position information and edge information. Next, within the MER of each element, the snake algorithm is applied with an internal energy based on the first and second derivatives and an image energy based on edges. To remove the complex noise around the face in the edge image, a morphological dilation operation is applied to the color information image and to the difference image, and a binary image obtained by applying dilation again to their AND-combined image is used as a filter. Experiments on a total of 140 images, 20 near-frontal images with both eyes visible acquired from each of 7 subjects, gave MER error rates of 6.2%, 11.2%, and 9.4% for the face, eyes, and mouth, respectively. In addition, when the snake algorithm was run on the images for which MER extraction succeeded, with 44 initial control points for the face, 16 for the eyes, and 24 for the mouth, the error rates of the extracted regions were 2.2%, 2.6%, and 2.5%, respectively.
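
The noise-filtering step described above (dilating the color-information and difference images, AND-combining them, and dilating again) can be sketched roughly as follows, assuming OpenCV and binary uint8 input masks; the names are illustrative.

```python
# Sketch of the noise-filtering mask described in the abstract (assumed
# OpenCV, binary uint8 masks): dilate the color-based mask and the
# difference-image mask, AND-combine them, then dilate the result again.
import cv2
import numpy as np

def face_noise_filter(color_mask, diff_mask, kernel_size=5):
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    color_dil = cv2.dilate(color_mask, kernel)       # dilated skin-color mask
    diff_dil = cv2.dilate(diff_mask, kernel)         # dilated difference mask
    combined = cv2.bitwise_and(color_dil, diff_dil)  # keep regions present in both
    return cv2.dilate(combined, kernel)              # final binary filter mask
```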


Comparison of the fit of the coping pattern constructed by manual and CAD/CAM, depending on the margin of the abutment tooth (지대치 변연 형태에 따른 수작업과 CAD/CAM으로 제작한 coping 패턴의 적합도 비교)

  • Han, Min-soo;Kwon, Eun-Ja;Chio, Esther;Kim, Si-chul
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.10
    • /
    • pp.6611-6617
    • /
    • 2015
  • The purpose of this study is to compare the marginal and internal fit of metal and zirconia copings fabricated manually and by CAD/CAM (Computer Aided Design/Computer Aided Manufacturing). The model is prepared with urethane material, and two abutment teeth are fabricated with knife and chamfer margins. A silicone replica technique is used to measure the marginal fit of the manually fabricated and CAD/CAM copings. The internal fit is measured with a microscope, and the image is captured with a CCD camera. The distance between the abutment tooth and the coping is measured with calibrated image-analysis software at five positions: marginal opening (MO), marginal gap (MG), internal gap (IG) at the area of maximum curvature, axial gap (AG), and occlusal gap (OG). A two-way ANOVA is applied to compare the fabrication techniques and to analyze the abutment patterns. In addition, a one-way ANOVA and Scheffe's test are used to analyze each parameter of the test. The results show that the fit is < $120{\mu}m$ except for the OG of the CAD/CAM copings and the MO of the knife margin. The CAD/CAM fabricated coping showed a better fit at the chamfer margin; however, the knife margin showed a better fit than the chamfer margin for MG. AG showed the smallest dimension with a consistent result (< $38{\mu}m$).
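
As a rough illustration of the statistical analysis described above, a two-way ANOVA over fabrication technique and margin type might be set up as in the sketch below, assuming a tidy table of gap measurements with hypothetical column names; this is not the study's actual analysis script.

```python
# Illustrative two-way ANOVA sketch for gap measurements (assumed column
# names, not the study's data): tests fabrication technique, margin type,
# and their interaction, as in the analysis described above.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def two_way_anova(df: pd.DataFrame) -> pd.DataFrame:
    """df columns: 'gap' (micrometres), 'technique' (manual/CADCAM),
    'margin' (knife/chamfer)."""
    model = ols("gap ~ C(technique) * C(margin)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)  # ANOVA table including interaction
```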

The Estimation of Physical/Biological Parameters of Greenhouse Soil by Image Processing (컬러 영상처리에 의한 시설재배지 토양의 생물 물리적 환경변수 추정)

  • Kim, H.T.;Kim, J.D.;Moon, J.H.;Lee, K.S.;Kang, K.H.;Kim, W.;Lee, D.W.
    • Journal of Biosystems Engineering
    • /
    • v.28 no.4
    • /
    • pp.343-350
    • /
    • 2003
  • This study was conducted to determine the correlations between intensity values obtained by image processing and the biological/physical parameters of greenhouse soil. Soil images were obtained with an image processing system consisting of a personal computer and a CCD camera. Software written in Visual C++ integrated the functions of image capture, image processing, and image analysis. Image processing data of the soil samples were analyzed by regression analysis. The results are as follows. For the soil density of unbroken soil samples, correlation coefficients of 0.82 and 0.84 were obtained for the R-value and S-value of the image processing data, respectively, while the coefficient for the G-value was 0.97. Considering the relationship between the biological characteristics of the greenhouse soil and the image processing data, the correlations were generally low. For the pH of unbroken soil samples, the correlation coefficients were 0.87, 0.85, and 0.94 with the G, I, and H values of the image processing data, respectively. In the case of bacteria, no correlation with the image processing data was found. For Actinomycetes, the coefficients were 0.86 and 0.85 with the G-value and B-value, respectively, showing higher correlation than the other variables. The correlation coefficient between fungi and the H-value was 0.88, the highest among the variables, while the other variables showed low correlations. For broken soil samples from the greenhouse, little relationship between the biological parameters and the image processing data was found in this study; most of the correlation coefficients between the variables were lower than 0.01. Accordingly, it was concluded that the soil should be used unbroken in order to reliably estimate its biological characteristics using a CCD camera.
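
A minimal sketch of the kind of analysis described above, assuming mean channel values are extracted from soil image regions and regressed against a measured soil property; the function names and data layout are illustrative, not the study's code.

```python
# Illustrative sketch: mean channel values of a soil image region and a simple
# linear regression against a measured soil property, mirroring the
# correlation analysis described in the abstract.
import numpy as np
from scipy import stats

def mean_rgb(image: np.ndarray):
    """Mean R, G, B values of an HxWx3 image array."""
    return image.reshape(-1, 3).mean(axis=0)

def channel_correlation(channel_means, soil_values):
    """Linear regression of a soil property on one image channel value."""
    result = stats.linregress(channel_means, soil_values)
    return result.slope, result.intercept, result.rvalue  # rvalue = correlation
```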

Extragalactic Sciences from SPICA/FPC-S

  • Jeong, Woong-Seob;Matsumoto, Toshio;Im, Myungshin;Lee, Hyung Mok;Lee, Jeong-Eun;Tsumura, Kohji;Tanaka, Masayuki;Shimonishi, Takashi;Lee, Dae-Hee;Pyo, Jeonghyun;Park, Sung-Joon;Moon, Bongkon;Park, Kwijong;Park, Youngsik;Han, Wonyong;Nam, Ukwon
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.38 no.1
    • /
    • pp.36.2-36.2
    • /
    • 2013
  • The SPICA (SPace Infrared Telescope for Cosmology & Astrophysics) project is a next-generation infrared space telescope optimized for mid- and far-infrared observation with a cryogenically cooled 3m-class telescope. The focal plane instruments onboard SPICA will enable us to resolve many key astronomical issues, from the formation and evolution of galaxies to planetary formation. The FPC-S (Focal Plane Camera - Science) is a near-infrared instrument proposed by Korea as an international collaboration. Owing to its capability for both low-resolution imaging spectroscopy and wide-band imaging with a field of view of $5^{\prime}{\times}5^{\prime}$, it has a large throughput as well as high sensitivity to diffuse light compared with JWST. To strengthen the advantages of the FPC-S, we propose studies probing Population III stars through the measurement of the cosmic near-infrared background radiation, and the star formation history at high redshift through the discovery of active star-forming galaxies. In addition to these major scientific targets, surveying a large area opens a new parameter space for investigating the deep Universe. The good survey capability in the parallel imaging mode allows us to study rare, bright objects such as quasars and bright star-forming galaxies in the early Universe, as a way to understand the formation of the first objects in the Universe, as well as ultra-cool brown dwarfs. Observations in the warm mission will give us a unique chance to detect high-z supernovae, ices in young stellar objects (YSOs) even at low mass, and the $3.3{\mu}m$ feature in shocked regions of supernova remnants. Here, we report the current status of the SPICA/FPC project and its extragalactic sciences.


A Study on the Transition of Explosion to Fire of LPG and Its Prevention (LP가스 폭발 후 화재 전이 현상 및 전이 방지에 관한 연구)

  • 오규형;이성은
    • Fire Science and Engineering
    • /
    • v.18 no.2
    • /
    • pp.20-26
    • /
    • 2004
  • The purpose of this study is to investigate the mechanism of the transition from gas explosion to fire and its prevention. The transition from explosion to fire of LPG in an explosion vessel of size $100cm {\times} 60cm {\times} 45cm$ was visualized using a high speed video camera, and the mechanism was analysed from the videographs. Newspaper of size $30cm {\times} 20cm$ was used as the combustible sample in these experiments, and the LPG-air mixture was ignited by a 10 kV electric spark. The experimental parameters were the gas concentration, the size of the vent area, and the position of the combustible solid. The vent area was varied as $10cm {\times} 9cm$, $13cm {\times} 10cm$, $27cm {\times} 20cm$, and $40cm {\times} 27cm$, and the position of the combustible was varied over four points. Carbon dioxide was used to study the prevention of the explosion-to-fire transition of LPG. Based on these experiments, we found that the possibility of transition from explosion to fire on a solid combustible depends on the concentration of the LPG-air mixture and on the exposure time of the solid combustible to the high-temperature atmosphere of the flame and burnt gas. Cooling or inerting the atmosphere after the explosion can prevent the transition from gas explosion to fire on solid combustibles.

Improved Performance of Image Semantic Segmentation using NASNet (NASNet을 이용한 이미지 시맨틱 분할 성능 개선)

  • Kim, Hyoung Seok;Yoo, Kee-Youn;Kim, Lae Hyun
    • Korean Chemical Engineering Research
    • /
    • v.57 no.2
    • /
    • pp.274-282
    • /
    • 2019
  • In recent years, big data analysis has expanded to include automatic control through reinforcement learning as well as prediction through modeling. Research on the utilization of image data is actively carried out in various industrial fields such as the chemical, manufacturing, agriculture, and bio industries. In this paper, we applied NASNet, an AutoML reinforcement learning algorithm, to the DeepU-Net neural network, a modification of U-Net, to improve image semantic segmentation performance. We used BRATS2015 MRI data for performance verification. Simulation results show that DeepU-Net performs better than the U-Net neural network. To further improve the image segmentation performance, the dropout layers typically applied to neural networks were removed, and the numbers of kernels and filters obtained through reinforcement learning in DeepU-Net were selected as hyperparameters of the neural network. The results show that the training accuracy is 0.5% and the validation accuracy is 0.3% better than those of DeepU-Net. The results of this study can be applied to various fields such as MRI brain imaging diagnosis, thermal imaging camera abnormality diagnosis, nondestructive inspection, chemical leakage monitoring, and forest fire monitoring through CCTV.
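
For orientation, the sketch below is a minimal U-Net-style encoder-decoder in Keras with one skip connection; it is only a toy illustration and does not reproduce the DeepU-Net architecture or the NASNet-searched hyperparameters.

```python
# Minimal U-Net-style encoder-decoder sketch in Keras (illustrative only;
# not the DeepU-Net architecture or the NASNet-searched hyperparameters).
from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(128, 128, 1), n_classes=2, filters=16):
    inputs = layers.Input(shape=input_shape)
    c1 = layers.Conv2D(filters, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)                        # encoder: downsample
    c2 = layers.Conv2D(filters * 2, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)                        # decoder: upsample
    m1 = layers.concatenate([u1, c1])                     # skip connection
    c3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(m1)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c3)
    return Model(inputs, outputs)
```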

Pseudo Image Composition and Sensor Models Analysis of SPOT Satellite Imagery of Non-Accessible Area (비접근 지역에 대한 SPOT 위성영상의 Pseudo영상 구성 및 센서모델 분석)

  • 방기인;조우석
    • Proceedings of the KSRS Conference
    • /
    • 2001.03a
    • /
    • pp.140-148
    • /
    • 2001
  • A satellite sensor model is typically established using ground control points acquired by ground survey or from existing topographic maps. In cases where the target area cannot be accessed and topographic maps are not available, it is difficult to obtain ground control points, so geospatial information cannot be derived from the satellite imagery. This paper presents several satellite sensor models and satellite image decomposition methods for non-accessible areas where ground control points can hardly be acquired in conventional ways. First, 10 different satellite sensor models, extended from the collinearity condition equations, were developed, and the behavior of each sensor model was investigated. Secondly, satellite images were decomposed and pseudo images were generated. The satellite sensor model extended from the collinearity equations represents the six exterior orientation parameters as 1$^{st}$, 2$^{nd}$ and 3$^{rd}$ order functions of the satellite image row. Among them, the rotational angle parameters $\omega$ (omega) and $\phi$ (phi), which correlate highly with the positional parameters, could be assigned constant values. For non-accessible areas, satellite images were decomposed, meaning that two consecutive images were combined into one image. The combined image consists of one satellite image with ground control points and another without ground control points. In addition, a pseudo image, which is an imaginary image, was prepared from one satellite image with ground control points and another without ground control points. In other words, the pseudo image is an arbitrary image bridging two consecutive images. For the experiments, SPOT satellite images covering a similar area in different passes were used. In conclusion, it was found that the 10 satellite sensor models and 5 decomposition methods delivered different levels of accuracy. Among them, the satellite camera model with 1$^{st}$ order functions of the image row for the positional orientation parameters and the rotational angle parameter $\kappa$ (kappa), and constant rotational angle parameters $\omega$ and $\phi$, provided the best result, a maximum error of 60 m at the check points, with the pseudo image arrangement.
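
The general form of the extended collinearity model described above can be sketched as follows; the exact parameterization used in the paper may differ.

```latex
% Collinearity equations with time-dependent exterior orientation (sketch).
% X_S, Y_S, Z_S, \omega, \phi, \kappa vary with the image row i; r_{jk} are
% elements of the rotation matrix R(\omega, \phi, \kappa).
x = -f\,\frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
             {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}, \qquad
y = -f\,\frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
             {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}
% First-order case reported as best in the experiments: positions and kappa
% linear in the image row i, with omega and phi held constant.
X_S(i) = X_0 + X_1 i, \quad Y_S(i) = Y_0 + Y_1 i, \quad Z_S(i) = Z_0 + Z_1 i,
\quad \kappa(i) = \kappa_0 + \kappa_1 i, \quad \omega, \phi = \text{const.}
```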
