• Title/Summary/Keyword: multi-camera


The effects of clouds on enhancing surface solar irradiance (구름에 의한 지표 일사량의 증가)

  • Jung, Yeonjin;Cho, Hi Ku;Kim, Jhoon;Kim, Young Joon;Kim, Yun Mi
    • Atmosphere / v.21 no.2 / pp.131-142 / 2011
  • Spectral solar irradiances were observed using a visible and UV Multi-Filter Rotating Shadowband Radiometer on the rooftop of the Science Building at Yonsei University, Seoul ($37.57^{\circ}N$, $126.98^{\circ}E$, 86 m) over a one-year period in 2006. One-minute measurements of global (total) and diffuse solar irradiance over the solar zenith angle (SZA) range from $20^{\circ}$ to $70^{\circ}$ were used to examine the effects of clouds and total optical depth (TOD) on enhancing four solar irradiance components (broadband 395-955 nm, UV channel 304.5 nm, visible channel 495.2 nm, and infrared channel 869.2 nm), together with sky camera images used to assess cloud conditions at the time of each measurement. The clear-sky irradiance measurements were used to build an empirical model of clear-sky irradiance with the cosine of the SZA as the independent variable. These models produce continuous estimates of global and diffuse solar irradiance for clear sky. The clear-sky irradiances are then used to estimate the effects of clouds and TOD on the enhancement of surface solar irradiance, defined as the difference between the measured and the estimated clear-sky values. Enhancements were found to occur at TODs less than 1.0 (i.e., transmissivity greater than 37%) when the solar disk was unobscured or obscured only by optically thin clouds. For TODs less than 1.0, the probability of enhancement was 50~65% depending on the solar radiation component, lowest for UV irradiance. Cumuliform clouds such as stratocumulus and altocumulus were found to produce localized enhancement of broadband global solar irradiance of up to 36.0% at a TOD of 0.43 under overcast skies (cloud cover 90%) when the direct solar beam passed unobstructed through the broken clouds. However, the same cloud types were found to attenuate up to 80% of the incoming global solar irradiance at a TOD of about 7.0.
The maximum global UV enhancement was only 3.8%, much lower than that of the other three solar components, because of the light-scattering efficiency of cloud drops. Most of the enhancements occurred under cloud cover from 40 to 90%. Broadband global enhancement greater than 20% occurred for SZAs ranging from 28 to $62^{\circ}$. The broadband diffuse irradiance was increased by clouds by up to 467.8% (TOD 0.34). For the 869.2 nm channel, the maximum diffuse enhancement was 609.5%. Thus, irradiance must be measured under various cloud conditions in order to obtain climatological values, to trace the differences among cloud types, and eventually to estimate the influence of cloud characteristics on solar irradiance.
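The enhancement metric described above can be sketched in a few lines. This is a minimal illustration, assuming a simple empirical clear-sky model of the form $I_{clear} = a\,\mu^{b}$ with $\mu = \cos(\mathrm{SZA})$; the paper fits its model in cos(SZA) but does not give the exact functional form or coefficients, so `a` and `b` here are placeholders.

```python
import math

# Hypothetical clear-sky model I_clear = a * mu**b, mu = cos(SZA).
# The coefficients a and b are illustrative assumptions, not the paper's fit.
def clear_sky_irradiance(sza_deg, a=1000.0, b=1.2):
    mu = math.cos(math.radians(sza_deg))
    return a * mu ** b

def enhancement_percent(measured, sza_deg):
    """Cloud enhancement as percent difference from the clear-sky estimate."""
    clear = clear_sky_irradiance(sza_deg)
    return 100.0 * (measured - clear) / clear

# The TOD < 1.0 threshold corresponds to a transmissivity of exp(-1),
# about 36.8%, matching the abstract's "greater than 37%".
transmissivity_at_tod_1 = math.exp(-1.0)
```

A measured value equal to the clear-sky estimate gives 0% enhancement; values above it give positive enhancement, as in the reported 36.0% broadband case.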

DEVELOPMENT OF THE MECHANICAL STRUCTURE OF THE MIRIS SOC (MIRIS 우주관측카메라의 기계부 개발)

  • Moon, B.K.;Jeong, W.S.;Cha, S.M.;Ree, C.H.;Park, S.J.;Lee, D.H.;Yuk, I.S.;Park, Y.S.;Park, J.H.;Nam, U.W.;Matsumoto, Toshio;Yoshida, Seiji;Yang, S.C.;Lee, S.H.;Rhee, S.W.;Han, W.
    • Publications of The Korean Astronomical Society / v.24 no.1 / pp.53-64 / 2009
  • MIRIS is the main payload of STSAT-3 (Science and Technology Satellite 3) and the first infrared space telescope for astronomical observation in Korea. The MIRIS space observation camera (SOC) covers observation wavelengths from $0.9{\mu}m$ to $2.0{\mu}m$ with a wide field of view of $3.67^{\circ}\times3.67^{\circ}$. The PICNIC HgCdTe detector in a cold box is cooled below 100 K by a micro Stirling cooler with a cooling capacity of 220 mW at 77 K. The MIRIS SOC adopts a passive cooling technique to cool the telescope below 200 K by pointing to deep space (3 K). The cooling mechanism employs a radiator, a Winston-cone baffle, a thermal shield, 30-layer MLI (Multi-Layer Insulation), and GFRP (Glass Fiber Reinforced Plastic) pipe supports. An optomechanical analysis was performed to estimate and compensate for possible stresses from the thermal contraction of mounting parts at cryogenic temperatures. A Finite Element Analysis (FEA) of the mechanical structure was also conducted to ensure safety and stability in the launch environment and in orbit. The MIRIS SOC will mainly perform a Galactic plane survey with narrow-band filters (Pa $\alpha$ and Pa $\alpha$ continuum) and CIB (Cosmic Infrared Background) observations with wide-band filters (I and H), driven by a cryogenic stepping motor.

Real-time Discharge Measurement of the River Using Fixed-type Surface Image Velocimetry (고정식 표면영상유속계 (FSIV)를 이용한 실시간 하천 유량 산정)

  • Kim, Seo-Jun;Yu, Kwon-Kyu;Yoon, Byung-Man
    • Journal of Korea Water Resources Association / v.44 no.5 / pp.377-388 / 2011
  • Surface Image Velocimetry (SIV) is a recently developed discharge-measurement technique. It uses image-processing techniques to measure the water-surface velocity and estimate discharge for a given cross section. The present study implemented a fixed-type SIV (FSIV) at Soojeon Bridge on the Dalcheon River. The hardware consists of two digital cameras, a computer, and a pressure-type water-stage gauge. The images taken with this hardware are sent to a server computer via wireless internet and analyzed with image-processing (SIV) software. The estimated discharges were compared with the discharges observed through the Goesan Dam spillway and with those from the index-velocity method using an ADVM. The computed results agreed well with the observed ones, except at night. The differences from the Goesan Dam spillway discharges were around 5~10% for discharges over 30 $m^3/s$, and the differences from the index-velocity (ADVM) discharges were around 5% for discharges over 200 $m^3/s$. Considering the low cost of the system and the ability to visually inspect the site situation through the images, SIV is a fairly good way to measure water discharge in real time.
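Discharge estimation from surface velocities typically follows the velocity-area method. The sketch below assumes a depth-averaging index coefficient of 0.85, a commonly used value that is an assumption here, not the paper's calibration; subsection velocities and areas are illustrative.

```python
# Velocity-area method sketch: convert surface velocities to depth-averaged
# velocities with an index coefficient k (0.85 is an assumed typical value),
# multiply by subsection areas, and sum to get total discharge.
def discharge(surface_velocities, areas, k=0.85):
    # surface_velocities in m/s, areas in m^2, returns discharge in m^3/s
    return sum(k * v * a for v, a in zip(surface_velocities, areas))

# Illustrative cross section split into three subsections.
q = discharge([1.2, 1.5, 1.1], [10.0, 12.0, 8.0])
```

With the given cross section from the stage gauge, this yields a discharge in $m^3/s$ comparable to the ranges discussed in the abstract.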

RPC Model Generation from the Physical Sensor Model (영상의 물리적 센서모델을 이용한 RPC 모델 추출)

  • Kim, Hye-Jin;Kim, Jae-Bin;Kim, Yong-Il
    • Journal of Korean Society for Geospatial Information Science / v.11 no.4 s.27 / pp.21-27 / 2003
  • The rational polynomial coefficients (RPC) model is a generalized sensor model used as an alternative to the physical sensor model for IKONOS-2 and QuickBird. As sensors grow in number and complexity, and as the need for a standard sensor model becomes more important, the applicability of the RPC model is also increasing. The RPC model can substitute for all sensor models, such as the projective camera, the linear pushbroom sensor, and SAR. This paper aims to generate an RPC model from the physical sensor models of KOMPSAT-1 (Korean Multi-Purpose Satellite) and aerial photography. KOMPSAT-1 collects $510{\sim}730nm$ panchromatic images with a ground sample distance (GSD) of 6.6 m and a swath width of 17 km by pushbroom scanning. We generated the RPC from a physical sensor model of KOMPSAT-1 and from aerial photography. An iterative least-squares solution based on the Levenberg-Marquardt algorithm is used to estimate the RPC. In addition, data normalization and regularization are applied to improve accuracy and minimize noise. The accuracy was evaluated in 2-D image coordinates. From this test, we found that the RPC model is suitable for both KOMPSAT-1 and aerial photography.
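The structure of an RPC model can be sketched as a ratio of polynomials in normalized ground coordinates. A full RPC uses 20 cubic terms per polynomial; the sketch below uses a truncated term set, and all coefficients are illustrative placeholders, not KOMPSAT-1 values.

```python
# RPC evaluation sketch: an image coordinate (line or sample) is the ratio
# of two polynomials in normalized ground coordinates (X, Y, Z).
def poly(c, X, Y, Z):
    # Truncated term set [1, X, Y, Z, X*Y]; the standard RPC has 20 cubic terms.
    terms = [1.0, X, Y, Z, X * Y]
    return sum(ci * t for ci, t in zip(c, terms))

def rpc_line(num, den, X, Y, Z):
    """Normalized image line coordinate as a rational polynomial."""
    return poly(num, X, Y, Z) / poly(den, X, Y, Z)

# Illustrative coefficients; in practice they are estimated from the
# physical sensor model by iterative (Levenberg-Marquardt) least squares.
num = [0.1, 1.0, 0.0, 0.02, 0.0]
den = [1.0, 0.0, 0.0, 0.001, 0.0]
line = rpc_line(num, den, 0.5, -0.2, 0.1)
```

Normalizing X, Y, Z (and the image coordinates) to roughly [-1, 1] before fitting, as the abstract notes, keeps the polynomial terms well conditioned.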


A Study on the Marker Tracking for Virtual Construction Simulation based Mixed-Reality (융합현실 기반의 가상건설 시뮬레이션을 위한 마커 추적 방식에 관한 연구)

  • Baek, Ji-Woong
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.12 / pp.660-668 / 2018
  • The main objective of this study was to find a way to operate markers for simulating virtual construction with an MR (mixed reality) device. The secondary objective was to find a way to extract form data from BIM data and to represent the virtual object with the MR device. Because architectural objects are very large, a tiny scale error causes large length errors. The scale was affected by the way the camera of the MR device recognizes the marker, and the method of installing and operating the marker causes length errors in the virtual object in the MR system. The experimental results showed that the error factor of the virtual object's length was 0.47%. In addition, the distance between markers can be decided from the results of an experiment on the multi-marker tracking system. When the distance between markers was more than 5 m, the length error was approximately 23 mm. If the represented virtual object must have less than 20 mm of error, a marker should be installed within a 5 m radius of it. Based on this research, it is expected that the use of MR devices for applying virtual construction simulations to construction sites will increase.
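The reported figures are mutually consistent, which a one-line calculation confirms: a 0.47% length-error factor over a 5 m marker distance gives roughly the ~23 mm error stated in the abstract.

```python
# Consistency check of the abstract's numbers: 0.47% of 5 m is ~23.5 mm,
# matching the reported ~23 mm error at 5 m marker spacing.
error_factor = 0.0047                         # 0.47% length-error factor
distance_m = 5.0                              # marker distance in metres
error_mm = error_factor * distance_m * 1000.0 # ~23.5 mm
```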

Land Cover Classification of High-Spatial Resolution Imagery using Fixed-Wing UAV (고정익 UAV를 이용한 고해상도 영상의 토지피복분류)

  • Yang, Sung-Ryong;Lee, Hak-Sool
    • Journal of the Society of Disaster Information / v.14 no.4 / pp.501-509 / 2018
  • Purpose: UAV-based photogrammetry is being researched in the spatial-information field because it is not only cost-effective compared with conventional aerial imaging but also makes it easy to obtain high-resolution data at the desired time and location. In this study, UAV-based high-resolution images were used to perform land-cover classification. Method: An RGB camera was used to obtain high-resolution images, and a multispectral camera was additionally used to photograph the same regions in order to classify vegetated areas accurately. Land-cover classification was then carried out for a total of seven classes using orthoimages created from the RGB and multispectral cameras, a DSM (Digital Surface Model), NDVI (Normalized Difference Vegetation Index), and GLCM (Gray-Level Co-occurrence Matrix) textures, with RF (Random Forest), a representative supervised classifier. Results: To assess the classification accuracy, an accuracy assessment based on the error matrix was conducted, and the results verified that the proposed method could classify the classes in the region more effectively than supervised classification using the RGB images only. Conclusion: Adding the orthoimage, multispectral image, NDVI, and GLCM proposed in this study gave higher accuracy than the conventional orthoimage alone. Future research will attempt to improve classification accuracy through the development of additional input data.
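The error-matrix accuracy assessment mentioned above can be sketched directly: tabulate reference versus predicted class labels, then read overall accuracy off the diagonal. The labels below are illustrative, not the study's data.

```python
# Error-matrix (confusion-matrix) accuracy assessment sketch.
def error_matrix(reference, predicted, n_classes):
    # m[r][p] counts pixels with reference class r predicted as class p.
    m = [[0] * n_classes for _ in range(n_classes)]
    for r, p in zip(reference, predicted):
        m[r][p] += 1
    return m

def overall_accuracy(matrix):
    # Overall accuracy = diagonal (correct) count / total count.
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Illustrative labels for a 3-class subset (the study used 7 classes).
ref  = [0, 1, 2, 2, 1, 0, 2]
pred = [0, 1, 2, 1, 1, 0, 2]
acc = overall_accuracy(error_matrix(ref, pred, 3))
```

Per-class producer's and user's accuracies follow from the same matrix by normalizing rows and columns, respectively.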

Analysis on Mapping Accuracy of a Drone Composite Sensor: Focusing on Pre-calibration According to the Circumstances of Data Acquisition Area (드론 탑재 복합센서의 매핑 정확도 분석: 데이터 취득 환경에 따른 사전 캘리브레이션 여부를 중심으로)

  • Jeon, Ilseo;Ham, Sangwoo;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.577-589 / 2021
  • Drone mapping systems can be applied to many fields such as disaster-damage investigation, environmental monitoring, and construction-process monitoring. Integrating individual sensors attached to a drone used to require complicated procedures, including time synchronization. Recently, a variety of composite sensors have been released that combine visual sensors with GPS/INS. Composite sensors integrate multi-sensor data internally and provide geotagged image files to users. Therefore, to use composite sensors in drone mapping systems, the mapping accuracies obtained from them should be examined. In this study, we analyzed the mapping accuracies of a composite sensor, focusing on the data-acquisition area and the effect of pre-calibration. In the first experiment, we analyzed how mapping accuracy varies with the number of ground control points. When 2 GCPs were used, the total RMSE was reduced by 40 cm, from more than 1 m to about 60 cm. In the second experiment, we assessed mapping accuracies depending on whether pre-calibration was conducted. With a few ground control points, pre-calibration did not affect mapping accuracy. When the image sequences form weak geometry, however, pre-calibration can be essential to reduce possible mapping errors; in the absence of ground control points, pre-calibration can also improve mapping accuracy. Based on this study, we expect that future drone mapping systems using composite sensors will help streamline the survey and calibration process depending on the data-acquisition circumstances.
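The total RMSE figure quoted above is typically computed over 3-D residuals at checkpoints. The sketch below shows that computation with illustrative residuals, not the study's measurements.

```python
import math

# Total RMSE over 3-D checkpoint residuals (mapped minus surveyed
# coordinates, in metres). Residual values here are illustrative only.
def total_rmse(residuals):
    sq = [dx * dx + dy * dy + dz * dz for dx, dy, dz in residuals]
    return math.sqrt(sum(sq) / len(sq))

rmse = total_rmse([(0.3, 0.2, 0.4), (0.1, 0.5, 0.2)])
```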

Analysis of UAV-based Multispectral Reflectance Variability for Agriculture Monitoring (농업관측을 위한 다중분광 무인기 반사율 변동성 분석)

  • Ahn, Ho-yong;Na, Sang-il;Park, Chan-won;Hong, Suk-young;So, Kyu-ho;Lee, Kyung-do
    • Korean Journal of Remote Sensing / v.36 no.6_1 / pp.1379-1391 / 2020
  • UAVs in agricultural applications can collect ultra-high-resolution images, making it possible to obtain timely images for each phenological phase of a crop. However, UAVs use a variety of sensors and multi-temporal images acquired under differing environments, so normalized image data are essential for time-series applications in crop monitoring. This study analyzed the variability of UAV reflectance and vegetation indices under different aerial imaging conditions in order to use UAV multispectral image time series for agricultural monitoring. The variability of reflectance with environmental factors such as altitude, direction, time, and cloud was very large, ranging from 8% to 11%, but vegetation-index variability was stable, ranging from 1% to 5%. This is believed to have various causes, such as the characteristics of the UAV multispectral sensor and the normalization applied by the post-processing program. To use UAV time series, it is recommended to use a ratio-based quantity such as a vegetation index, and to minimize the variability of time-series images by keeping the acquisition time, altitude, and direction as consistent as possible.
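Why a ratio-based index is more stable than raw reflectance can be shown in a few lines: a multiplicative gain applied to both bands (e.g. an illumination or calibration shift) cancels in NDVI. The reflectance values below are illustrative.

```python
# NDVI = (NIR - Red) / (NIR + Red). A common multiplicative gain g on both
# bands cancels in the ratio, which is why index variability (1-5%) stays
# far below raw reflectance variability (8-11%) in the study.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

nir, red, g = 0.45, 0.08, 1.1      # illustrative reflectances, 10% gain shift
base = ndvi(nir, red)
scaled = ndvi(g * nir, g * red)    # unchanged despite the gain
```

Additive offsets (e.g. haze) do not cancel this way, which is one reason consistent acquisition time and geometry are still recommended.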

A Study on The Metaverse Content Production Pipeline using ZEPETO World (제페토 월드를 활용한 메타버스 콘텐츠 제작 공정에 관한 연구)

  • Park, MyeongSeok;Cho, Yunsik;Cho, Dasom;Na, Giri;Lee, Jamin;Cho, Sae-Hong;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.91-100 / 2022
  • This study proposes a metaverse content production pipeline using ZEPETO World, one of the representative metaverse platforms in Korea. Based on the Unity 3D engine, a ZEPETO world is configured using the ZEPETO template, and the core functions of metaverse content that enable multi-user participation, such as logic, interaction, and property control, are implemented through ZEPETO scripts. This study uses basic functions such as properties, events, and components of the ZEPETO script, as well as the ZEPETO player, which includes avatar loading, character movement, and camera-control functions. In addition, based on ZEPETO properties such as World Multiplayer and Client Starter, it summarizes the core synchronization processes required for producing multiplayer metaverse content, such as object transformation, dynamic object creation, property addition, and real-time property control. Based on this, we verify the proposed production pipeline by directly producing multiplayer metaverse content with ZEPETO World.

Smartphone Fundus Photography in an Infant with Abusive Head Trauma (학대뇌손상 영아에서 스마트폰으로 촬영한 안저소견)

  • Kim, Yong Hyun;Choi, Shin Young;Lee, Ji Sook;Yoon, Soo Han;Chung, Seung Ah
    • Journal of The Korean Ophthalmological Society / v.58 no.11 / pp.1313-1316 / 2017
  • Purpose: To report fundus photography using a smartphone in an infant with abusive head trauma. Case summary: An 8-month-old male infant presented to the emergency room with decreased consciousness and epileptic seizures that the parents attributed to a fall from a chair. He had no external wounds or fractures to the skull or elsewhere. However, computed tomography of the brain revealed an acute subdural hematoma in the right cranial convexity and diffuse cerebral edema, leading to a midline shift to the left and effacement of the right lateral ventricle and basal cistern. The attending neurosurgeon promptly performed a decompressive craniectomy. Immediately after the emergency surgery, a fundus examination revealed numerous multi-layered retinal hemorrhages in the posterior pole extending to the periphery in each eye. He also had white retinal ridges with cherry hemorrhages in both eyes. We acquired retinal photographs using the native camera of a smartphone in video mode. The photographer held the smartphone in one hand, facing the patient's eye at 15-20 cm, and held a 20-diopter condensing lens 5 cm from the eye with the other hand. Our documentation using a smartphone led to a diagnosis of abusive head trauma and helped obtain the perpetrator's confession, because the findings were specific to repetitive acceleration-deceleration forces on an infant's eye with a strong vitreoretinal attachment. Conclusions: This ophthalmic finding played a key role in the diagnosis of abusive head trauma. This case demonstrates the diagnostic use of a smartphone for fundus photography in an important medicolegal case.