• Title/Summary/Keyword: Camera Technology


A Technique on the 3-D Terrain Analysis Modeling for Optimum Site Selection and Development of Stereo Tourism in the Future (미래입체관광의 최적지선정 및 개발을 위한 3차원지형분석모델링 기법)

  • Yeon, Sang-Ho;Choi, Seung-Kuk
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.11
    • /
    • pp.415-422
    • /
    • 2013
  • Content development for Internet and cyber tourism has been attempted in a number of areas. 3D topographic spatial information, land planning, and land information content can serve as core resources for safe future tourism in a ubiquitous city. To obtain precise spatial information for the nation's land and urban areas, the required 3D spatial information can be acquired from various photographic images taken by cameras mounted on satellites and aircraft over the target area. By collecting, storing, editing, and manipulating directly or indirectly acquired geospatial data into an accurate database, and by building a 3D spatial content database, ubiquitous tourism can contribute greatly to a new tourism industry. As a result of this study on future tourism using geospatial information, 3D modeling and analysis based on intelligent land information were shown to support stereoscopic site experiences and the acquisition and utilization of diverse tourism spatial information.

Study on 3D AR of Education Robot for NURI Process (누리과정에 적용할 교육로봇의 가상환경 3D AR 연구)

  • Park, Young-Suk;Park, Dea-Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.209-212
    • /
    • 2013
  • The Nuri curriculum, promoted by the Ministry of Education, is a standardized national-level curriculum for early childhood education and care. It aims to improve the quality of pre-school education, to ensure a fair starting line early in life, and to emphasize character education in all areas. Developing an insect robot for creativity education under the Nuri curriculum increases interest and educational effect. Compared with assembling existing physical kits with the help of an online website, educational content that uses a VR educational robot improves both assembly and learning outcomes. Because the information-based society requires active interest and motivation in learning, creative learning robots for toddlers are also needed. A three-dimensional model of the robot is augmented through a marker: using the coordinate systems of the target marker and the camera, the captured marker is transformed and compared against pre-defined marker patterns to decide which pattern it is. By fusing the assembly of an educational robot in the real world with training represented in a virtual environment, this paper presents a new form of state-of-the-art smart education and seeks to lay a foundation for nurturing national talent from an early age.
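The marker-identification step mentioned in the abstract, transforming the captured marker into a canonical frame and deciding which pre-defined pattern it matches, can be sketched in the style of classic template-matching AR toolkits. This is a minimal illustrative sketch, not the authors' implementation; all names, shapes, and the use of normalized cross-correlation are assumptions:

```python
import numpy as np

def identify_marker(sample, templates):
    """Match a rectified marker interior (2-D array) against pre-defined
    pattern templates at the four 90-degree rotations; return the id of
    the best-matching pattern, the rotation, and the match score."""
    best = (None, None, -np.inf)
    # z-score normalisation makes the score robust to lighting changes
    s = (sample - sample.mean()) / (sample.std() + 1e-9)
    for pid, tpl in templates.items():
        for rot in range(4):
            t = np.rot90(tpl, rot)
            t = (t - t.mean()) / (t.std() + 1e-9)
            score = np.mean(s * t)  # normalised cross-correlation
            if score > best[2]:
                best = (pid, rot, score)
    return best
```

A perfectly matching pattern scores 1.0; in practice a threshold below that would reject unknown markers.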


Fast Multi-View Synthesis Using Duplex Forward Mapping and Parallel Processing (순차적 이중 전방 사상의 병렬 처리를 통한 다중 시점 고속 영상 합성)

  • Choi, Ji-Youn;Ryu, Sae-Woon;Shin, Hong-Chang;Park, Jong-Il
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.11B
    • /
    • pp.1303-1310
    • /
    • 2009
  • Glasses-free 3D displays require multiple images of a scene taken from different viewpoints. The simplest way to obtain multi-view images is to use as many cameras as the number of required views, but synchronizing the cameras and computing and transmitting the large amount of data become critical problems. Thus, generating such a large number of viewpoint images efficiently is emerging as a key technique in 3D video technology. Image-based view synthesis is an algorithm for generating various virtual viewpoint images from a limited number of views and depth maps. Because a virtual view image can be expressed as a transformation of a real view under certain depth conditions, we propose an algorithm that computes multi-view synthesis from two reference view images and their depth maps by stepwise duplex forward mapping. Furthermore, because the geometric relationship between real and virtual views is repetitive, we implement the algorithm in the OpenGL Shading Language on a programmable graphics processing unit (GPU), exploiting parallel processing to reduce computation time. We demonstrate the effectiveness of our algorithm for fast view synthesis through a variety of experiments with real data.
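The duplex forward mapping idea (warp both reference views toward the virtual viewpoint using per-pixel disparity derived from depth, then blend) can be sketched on the CPU. This is a minimal NumPy sketch under an assumed rectified-camera geometry, not the authors' GLSL implementation; `offset`, `focal`, and the blending weights are illustrative:

```python
import numpy as np

def forward_map(view, depth, offset, focal=1.0):
    """Forward-map a single-channel view toward a virtual camera displaced
    by `offset` along the baseline: each pixel shifts horizontally by a
    disparity proportional to the offset and inversely proportional to depth.
    A z-buffer keeps the nearest pixel when several map to the same place."""
    h, w = view.shape
    out = np.zeros((h, w))
    zbuf = np.full((h, w), np.inf)
    disparity = offset * focal / depth
    for y in range(h):
        for x in range(w):
            xv = int(round(x + disparity[y, x]))
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                out[y, xv] = view[y, x]
                zbuf[y, xv] = depth[y, x]
    return out, zbuf

def duplex_synthesis(left, right, depth_l, depth_r, alpha):
    """Duplex forward mapping: warp both reference views toward the virtual
    viewpoint at baseline fraction `alpha` (0 = left, 1 = right), blend
    where both warps land, and fill holes in one warp from the other."""
    wl, zl = forward_map(left, depth_l, alpha)
    wr, zr = forward_map(right, depth_r, alpha - 1.0)
    fl, fr = np.isfinite(zl), np.isfinite(zr)
    blended = (1.0 - alpha) * wl + alpha * wr
    return np.where(fl & fr, blended, np.where(fl, wl, wr))
```

On a GPU each output pixel can be processed independently, which is why the repetitive geometry maps well onto a shader.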

Study on the Visual Characteristics and Subjectivity in the Live Action Based Virtual Reality (실사기반 가상현실 영상의 특징과 주체 구성에 대한 연구)

  • Jeon, Gyongran
    • Cartoon and Animation Studies
    • /
    • s.48
    • /
    • pp.117-139
    • /
    • 2017
  • The interactivity of the digital media environment has been adopted into human expressive systems, integrating the dynamic aspects of digital technology with expressive structure and thereby transforming the paradigm of image reception as well as the range of image expression. Virtual reality images are significant in that, beyond questions of verisimilitude such as how vividly they simulate reality, they change the one-way mechanism of production and reception that runs from producer to image to audience. First of all, a virtual reality image is not one-sided but interactive, composed by the user. Viewing a virtual reality image is not simply seeing what the camera shows; the viewer obtains a view comparable to that of the real world. The image that was once controlled through framing is therefore configured actively by the user. This implies a change in the paradigm of image reception as well as in the existing form of the image itself. The narrative structure of the image, and the subjects formed in that process, also require discussion. In the virtual reality image, the user's gaze is a fusion of the gaze inside the image and the gaze outside it, because the user's position as the subject of the gaze is continuously constrained by discursive devices such as editing and shot narration. The significance of the virtual reality image lies not in aesthetic perfection but in its being reconstructed according to the user, actively reflecting the user's presence and engaging the user in the image.

Development and Comparative Analysis of Mapping Quality Prediction Technology Using Orientation Parameters Processed in UAV Software (무인기 소프트웨어에서 처리된 표정요소를 이용한 도화품질 예측기술 개발 및 비교분석)

  • Lim, Pyung-Chae;Son, Jonghwan;Kim, Taejung
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_1
    • /
    • pp.895-905
    • /
    • 2019
  • Commercial Unmanned Aerial Vehicle (UAV) image processing software products currently used in the industry provide camera calibration information and block bundle adjustment accuracy. However, they do not provide the mapping accuracy achievable from the input UAV images. In this paper, the quality of mapping is predicted using orientation parameters obtained from UAV image processing software. We apply the orientation parameters to a digital photogrammetric workstation (DPW) to verify the reliability of the predicted mapping quality. Mapping quality was defined as three types of accuracy: Y-parallax, relative model accuracy, and absolute model accuracy. Y-parallax determines whether stereo viewing of a stereo pair is possible. Relative model accuracy is the relative bundle adjustment accuracy between stereo pairs in the model coordinate system. Absolute model accuracy is the bundle adjustment accuracy in the absolute coordinate system. For the experimental data, we used 723 images with a GSD of 5 cm obtained from a rotary-wing UAV over an urban area and analyzed the mapping quality. The relative model accuracy predicted by the proposed technique agreed with the maximum error observed on the DPW to within 0.11 m. Similarly, the maximum error of the absolute model accuracy predicted by the proposed technique was less than 0.16 m.
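Y-parallax, as used here, is the vertical offset between conjugate image points of a stereo pair; near-zero values are what make comfortable stereo viewing possible. A minimal sketch of computing its statistics from matched points (function name and data layout are assumptions, not the paper's code):

```python
import numpy as np

def y_parallax_stats(left_pts, right_pts):
    """Y-parallax of conjugate points in a rectified stereo pair: the
    per-point vertical (y) offset between matches. Returns the RMS and
    the maximum absolute y-parallax, both in the image units supplied."""
    py = np.asarray(left_pts, float)[:, 1] - np.asarray(right_pts, float)[:, 1]
    return np.sqrt(np.mean(py ** 2)), np.max(np.abs(py))
```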

Accuracy Assessment of Feature Collection Method with Unmanned Aerial Vehicle Images Using Stereo Plotting Program StereoCAD (수치도화 프로그램 StereoCAD를 이용한 무인 항공영상의 묘사 정확도 평가)

  • Lee, Jae One;Kim, Doo Pyo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.2
    • /
    • pp.257-264
    • /
    • 2020
  • Vectorization is currently the main method of feature collection (extraction) in digital mapping with UAV photogrammetry. However, this method is time-consuming and prone to gross elevation errors when heights are extracted from a DSM (Digital Surface Model), because three-dimensional feature coordinates are vectorized separately: planimetric information from an orthophoto and height from a DSM. Consequently, demand is increasing for a stereo plotting method capable of acquiring three-dimensional spatial information simultaneously. Such plotting, however, typically requires expensive equipment, a Digital Photogrammetry Workstation (DPW), and the technology is still incomplete. In this paper, we evaluated the accuracy of a low-cost stereo plotting system, Menci's StereoCAD, by analyzing the three-dimensional spatial information it acquires. Images were taken with an FC 6310 camera mounted on a Phantom 4 Pro at an altitude of 90 m with a Ground Sample Distance (GSD) of 3 cm. The accuracy analysis compared coordinate differences between ground survey results and stereo plotting at check points, and at corner points by layer. The Root Mean Square Error (RMSE) at check points was 0.048 m horizontally and 0.078 m vertically; across layers, it ranged from 0.104 m to 0.127 m horizontally and from 0.086 m to 0.092 m vertically. In conclusion, the results show that a 1:1,000 digital topographic map can be generated from UAV images using the stereo plotting system.
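The check-point accuracy measure used above, horizontal and vertical RMSE between ground-surveyed and stereo-plotted coordinates, can be sketched as a small helper (a hypothetical illustration, not the paper's code; coordinates are assumed as X, Y, Z columns in metres):

```python
import numpy as np

def rmse_check_points(surveyed, plotted):
    """RMSE between ground-surveyed and stereo-plotted check points.
    Horizontal RMSE combines the X and Y residuals; vertical RMSE uses
    the Z residual alone. Rows are points; columns are X, Y, Z."""
    d = np.asarray(plotted, float) - np.asarray(surveyed, float)
    rmse_h = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))
    rmse_v = np.sqrt(np.mean(d[:, 2] ** 2))
    return rmse_h, rmse_v
```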

Real-Time Object Tracking Algorithm based on Pattern Classification in Surveillance Networks (서베일런스 네트워크에서 패턴인식 기반의 실시간 객체 추적 알고리즘)

  • Kang, Sung-Kwan;Chun, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.14 no.2
    • /
    • pp.183-190
    • /
    • 2016
  • This paper proposes an algorithm that reduces the computing time of a neural network and the amount of data transmitted for tracking mobile objects in surveillance networks, in terms of both detection and communication load. Object detection can be defined as follows: given an image sequence, determine whether any object is present in each image, and if so, return its location, direction, size, and so on. Detecting objects in a given image is considerably difficult because location, size, lighting conditions, obstacles, and similar factors change the overall appearance of objects, making rapid and exact detection hard. This paper therefore proposes fast and exact object detection that overcomes some of these restrictions by using a neural network. The proposed system can rapidly detect objects regardless of obstacles, background, and pose. The computation time of the neural network is decreased by reducing the size of its input vector: Principal Component Analysis (PCA) is used to reduce the dimensionality of the data. Experiments were performed on real-time video input from a CCTV camera; for color segmentation, the results show different success rates depending on camera settings. Experimental results show that the proposed method attains 30% higher recognition performance than the conventional method.
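Reducing the neural network's input vector with PCA can be sketched with a plain SVD-based projection (an illustrative sketch, not the authors' implementation; `k` is the reduced dimensionality):

```python
import numpy as np

def pca_reduce(X, k):
    """Project n samples (rows of X) onto the top-k principal components,
    shrinking the input vector fed to the neural network from X.shape[1]
    to k dimensions."""
    Xc = X - X.mean(axis=0)  # centre the data
    # principal directions are the right singular vectors of the centred data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]      # top-k directions, one per row
    return Xc @ components.T, components
```

The returned `components` matrix is reused at inference time so that each new frame's feature vector is projected the same way before entering the network.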

Preparation and Heating Characteristics of N-doped Graphite Fiber as a Heating Element (질소가 도핑 된 흑연섬유 발열체의 제조 및 발열특성)

  • Kim, Min-Ji;Lee, Kyeong Min;Lee, Sangmin;Yeo, Sang Young;Choi, Suk Soon;Lee, Young-Seak
    • Applied Chemistry for Engineering
    • /
    • v.28 no.1
    • /
    • pp.80-86
    • /
    • 2017
  • In this study, nitrogen functional groups were introduced onto graphite fiber (GF) to modify its electrical properties, and the heating properties were investigated according to the treatment conditions. The GF was prepared by a thermal solid-state reaction at 200 °C for 2 h. Surface properties of the nitrogen-doped GF were examined by XPS, and its resistance and heating temperature were measured using a programmable electrometer and a thermographic camera, respectively. The XPS results showed that nitrogen functional groups on the GF surface increased with increasing urea content, and the heating property of the GF improved as nitrogen functional groups were introduced. The maximum heating temperature of the urea-treated GF was 53.8 °C at 60 V, a 55% improvement in heating characteristics compared with non-treated GF. We ascribe this effect to the nitrogen functional groups introduced on the GF surface by the thermal solid-state reaction, which significantly affect the heating characteristics of the GF.

An Image Processing System to Estimate Pollutant Concentration of Animal Wastes (가축 분뇨의 오염물질 농도 추정을 위한 영상처리 시스템)

  • 이대원;김현태
    • Journal of Animal Environmental Science
    • /
    • v.7 no.3
    • /
    • pp.177-182
    • /
    • 2001
  • This study was conducted to find relationships between intensity values from image processing and the pollutant concentration of slurries. Slurry images were obtained from an image processing system using a personal computer and a CCD camera. Software written in Visual C++ combined the functions of image capture, image processing, and image analysis. The image processing data for the slurries were analyzed by regression analysis. The results are as follows. 1. Among the image processing data, the Red (R) value had the highest correlation coefficient, 0.9213, for detecting COD, and the Green (G) value had the highest correlation coefficient, 0.9019, for detecting BOD; the Blue (B) value showed no significant correlation with pollutant concentration. 2. The Hue (H) value had the highest correlation coefficient, 0.9466, for detecting BOD, and could therefore be used in detecting BOD. 3. The Green (G), GRAY, Hue (H), Saturation (S), and Intensity (I) values all had correlation coefficients above 0.8 for BOD, with the Hue (H) value the highest; it is possible to detect the pollutant concentration of slurries using the image processing system. 4. The Red (R), GRAY, and Saturation (S) values had correlation coefficients above 0.8 for detecting COD, with the R value the highest among these; COD can thus be estimated indirectly using the image processing system. 5. SS concentration showed correlation coefficients below 0.8 with the image processing system, and the concentrations of NH4-N and NO3-N showed correlation coefficients below 0.2.
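The per-channel regression analysis described above, correlating mean channel values with measured concentration and fitting a calibration line to the best channel, can be sketched as follows (function name and data layout are assumptions, not the authors' software):

```python
import numpy as np

def best_channel(channel_values, concentration):
    """Given per-sample mean channel values (dict: channel name -> 1-D array)
    and measured pollutant concentrations, return the channel with the
    highest absolute Pearson correlation, its correlation coefficient, and
    a least-squares line for estimating concentration from that channel."""
    best = max(
        channel_values,
        key=lambda c: abs(np.corrcoef(channel_values[c], concentration)[0, 1]),
    )
    slope, intercept = np.polyfit(channel_values[best], concentration, 1)
    r = np.corrcoef(channel_values[best], concentration)[0, 1]
    return best, r, (slope, intercept)
```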


The effects of clouds on enhancing surface solar irradiance (구름에 의한 지표 일사량의 증가)

  • Jung, Yeonjin;Cho, Hi Ku;Kim, Jhoon;Kim, Young Joon;Kim, Yun Mi
    • Atmosphere
    • /
    • v.21 no.2
    • /
    • pp.131-142
    • /
    • 2011
  • Spectral solar irradiances were observed using visible and UV Multi-Filter Rotating Shadowband Radiometers on the rooftop of the Science Building at Yonsei University, Seoul (37.57°N, 126.98°E, 86 m) over a one-year period in 2006. One-minute measurements of global (total) and diffuse solar irradiance over solar zenith angles (SZA) from 20° to 70° were used to examine the effects of clouds and total optical depth (TOD) on enhancing four solar irradiance components (broadband 395-955 nm and the 304.5 nm UV, 495.2 nm visible, and 869.2 nm infrared channels), together with sky camera images for assessing cloud conditions at the time of each measurement. Clear-sky irradiance measurements were used to build empirical clear-sky models with the cosine of the SZA as the independent variable; these models produce continuous estimates of clear-sky global and diffuse irradiance. The clear-sky estimates were then used to quantify the enhancement of surface solar irradiance by clouds and TOD as the difference between the measured and estimated clear-sky values. Enhancements were found to occur at TODs less than 1.0 (i.e., transmissivity greater than 37%) when the solar disk was unobscured or obscured only by optically thin clouds. Even at TODs below 1.0, the probability of enhancement was 50-65% depending on the solar radiation component, lowest for UV. Cloud types such as stratocumulus and altocumulus produced localized enhancements of broadband global solar irradiance of up to 36.0% at a TOD of 0.43 under overcast skies (cloud cover 90%) when the direct solar beam passed unobstructed through broken clouds; the same cloud types attenuated up to 80% of the incoming global solar irradiance at a TOD of about 7.0. The maximum global UV enhancement was only 3.8%, much lower than for the other three components, because of the light scattering efficiency of cloud droplets. Most enhancements occurred at cloud cover between 40 and 90%. Broadband global enhancements greater than 20% occurred for SZAs from 28° to 62°. The broadband diffuse irradiance was increased by clouds by up to 467.8% (TOD 0.34); for the 869.0 nm channel, the maximum diffuse enhancement was 609.5%. Measurements under various cloud conditions are therefore required to obtain climatological values, to trace differences among cloud types, and ultimately to estimate the influence of cloud characteristics on solar irradiance.
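The enhancement estimate described, an empirical clear-sky model driven by cos(SZA) with enhancement defined as the excess of measured over modeled clear-sky irradiance, can be sketched as follows (the polynomial form and names are assumptions, not the authors' exact model):

```python
import numpy as np

def fit_clear_sky(cos_sza, irradiance, deg=2):
    """Fit an empirical clear-sky irradiance model as a polynomial in
    mu = cos(solar zenith angle), using clear-sky measurements only."""
    return np.polyfit(cos_sza, irradiance, deg)

def enhancement_percent(coeffs, cos_sza, measured):
    """Cloud enhancement as the excess of measured irradiance over the
    clear-sky estimate, expressed in percent of the clear-sky value.
    Negative values indicate attenuation by clouds."""
    clear = np.polyval(coeffs, cos_sza)
    return 100.0 * (measured - clear) / clear
```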