• Title/Summary/Keyword: Space Images

A Study on the Inner Space Characteristics of Local Government Civil Service Space of Seoul City (서울시 구청 대민서비스공간의 실내공간특성에 관한 연구)

  • Park, Sun-Ju; Kim, Moon-Duck
    • Proceedings of the Korean Institute of Interior Design Conference / 2008.05a / pp.258-263 / 2008
  • This study investigates the changing roles of local government and ways to improve space planning, focusing on the civil service spaces of Seoul's district offices. Recently, many local governments have endeavored to shed their authoritative image and to transform themselves into spaces with cultural functions and efficient performance. Local government buildings in Seoul are renewing their interior and exterior images through new construction or extension so as to become friendlier to residents, and such efforts are most visible in civil service spaces. The civil service space represents the overall image and role of the local government. In addition, civil service spaces and resting spaces, both frequently used, are increasingly correlated and accessible to each other as administrative and civil services expand. By integrating and developing these features, this study presents a local government civil service space that meets the requirements of the new era.

A Study on the Acquisition of Multi-Viewpoint Image for the Analysis of Form and Space and Its Effectiveness (형태 및 공간분석을 위한 다시점(多視點) 이미지 획득 및 유효성에 관한 연구)

  • Lee, Hyok-Jun; Lee, Jong-Suk
    • Korean Institute of Interior Design Journal / no.34 / pp.149-156 / 2002
  • This study aims to acquire objective models for basic quantitative analysis of pattern and space through an image-recognition technique, and to verify the effectiveness of the acquired models. Experiments showed that the recognition result varies with viewpoint and that analysis based on single-viewpoint images lacks objectivity. An experiment using multi-viewpoint image models, attempted as an alternative, produced recognition similar to that of the actual model. In particular, images generated in the laboratory with miniature models may be useful for comparing and understanding multiple patterns. Models acquired from such images may be hard to use for analyzing actual building patterns or indoor spaces, although they are useful for pattern analysis with miniature models; this disadvantage, however, can be supplemented with panorama VR and CG simulation techniques. Further research is required on applying visual information to the image-recognition principle, on models for the quantitative analysis of pattern and space, and on constructing models that can be used to compare and analyze not only form and space but also miniature models.

Design and Verification of Spacecraft Pose Estimation Algorithm using Deep Learning

  • Shinhye Moon; Sang-Young Park; Seunggwon Jeon; Dae-Eun Kang
    • Journal of Astronomy and Space Sciences / v.41 no.2 / pp.61-78 / 2024
  • This study developed a real-time spacecraft pose estimation algorithm that combines a deep learning model and the least-squares method. Pose estimation in space is crucial for automatic rendezvous and docking and for inter-spacecraft communication. Owing to the difficulty of training deep learning models in space, we showed that actual experimental results could be predicted through software simulations on the ground. We integrated deep learning with nonlinear least squares (NLS) to predict the pose from a single spacecraft image in real time. We constructed a virtual environment capable of mass-producing synthetic images to train the deep learning model, and proposed a method for training the model using purely synthetic images. Further, a vision-based real-time estimation system suitable for use on a flight testbed was constructed. Consequently, it was verified that the hardware experimental results could be predicted from software simulations with the same environment and relative distance. This study showed that a deep learning model trained using only synthetic images can be applied to real images. Thus, this study proposed real-time pose estimation software for automatic docking and demonstrated that a method built with only synthetic data is applicable in space.
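
The abstract above pairs a learned keypoint detector with nonlinear least squares. As a hedged illustration of only the NLS refinement step (not the paper's implementation), the sketch below fits a rotation vector and translation to 2D keypoints by minimizing pinhole reprojection error; the camera matrix, 3D model points, and simulated detections are invented placeholders.

```python
# Hedged sketch of the NLS refinement stage only; the CNN keypoint detector is
# replaced by simulated detections. K, the model points, and all values are hypothetical.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_3d, rvec, tvec, K):
    """Pinhole projection of body-frame 3D points into the image plane."""
    cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec   # body -> camera frame
    uvw = cam @ K.T                                            # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]                            # perspective divide


def refine_pose(keypoints_2d, model_points_3d, K, pose0):
    """Refine a coarse pose (e.g. a CNN prediction) by minimizing reprojection error."""
    def residual(p):                                           # p = [rotation vector, translation]
        return (project(model_points_3d, p[:3], p[3:], K) - keypoints_2d).ravel()
    sol = least_squares(residual, pose0, method="lm")
    return sol.x[:3], sol.x[3:]


# Toy usage: six fiducial points on a 1 m spacecraft model, roughly 4 m from the camera.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
model = np.array([[0.5, 0, 0], [-0.5, 0, 0], [0, 0.5, 0],
                  [0, -0.5, 0], [0, 0, 0.5], [0, 0, -0.5]], dtype=float)
true_pose = np.array([0.10, -0.20, 0.05, 0.0, 0.1, 4.0])
detections = project(model, true_pose[:3], true_pose[3:], K) + np.random.normal(0, 0.5, (6, 2))
rvec, tvec = refine_pose(detections, model, K, pose0=np.array([0, 0, 0, 0, 0, 3.0], dtype=float))
```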

The Generation of True Orthophotos from High Resolution Satellite Images

  • Chen, Liang-Chien; Wen, Jen-Yu; Teo, Tee-Ann
    • Proceedings of the KSRS Conference / 2003.11a / pp.885-887 / 2003
  • The purpose of this investigation is to generate true orthophotos from high-resolution satellite images. The major work of this research comprises four parts: (1) determination of orientation parameters, (2) generation of traditional orthophotos using a terrain model, (3) relief correction for buildings, and (4) processing of hidden areas. To determine the position of the satellite, we correct the onboard orientation parameters to fine-tune the orbit. In the generation of traditional orthophotos, we employ the orientation parameters and a digital terrain model (DTM) to rectify tilt displacements and terrain relief displacements. We then compute relief displacements for buildings with a digital building model (DBM). To avoid double mapping, we detect hidden areas. Owing to the satellite's small field of view, an efficient method for the detection of hidden areas and building rectification is proposed in this paper. Test areas cover the city of Kaohsiung in southern Taiwan, and the test images are from the QuickBird satellite.
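
To make the building relief-correction step above concrete, here is a rough geometric sketch, not the authors' algorithm: it computes the ground-plane displacement of a roof point and the length of the ground strip the building hides, under the simplifying assumption that the viewing geometry reduces to a single off-nadir incidence angle and look azimuth. All numbers are illustrative.

```python
# Rough geometry only, not the paper's method; the incidence angle and azimuth stand
# in for the corrected orbit/orientation parameters, and heights would come from the DBM.
import numpy as np


def building_relief(height_m, incidence_deg, azimuth_deg):
    """Ground-plane displacement (dE, dN) of a roof point, directed away from the sensor."""
    d = height_m * np.tan(np.radians(incidence_deg))
    az = np.radians(azimuth_deg)
    return d * np.sin(az), d * np.cos(az)


def occluded_strip(height_m, incidence_deg):
    """Length of the ground strip hidden behind the building along the look azimuth."""
    return height_m * np.tan(np.radians(incidence_deg))


# Example: a 40 m building imaged 25 degrees off nadir with a look azimuth of 120 degrees.
dE, dN = building_relief(40.0, 25.0, 120.0)   # roof displaced ~18.7 m on the ground
hidden_m = occluded_strip(40.0, 25.0)         # ~18.7 m of ground is occluded (double-mapped)
```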

Generation of modern satellite data from Galileo sunspot drawings by deep learning

  • Lee, Harim; Park, Eunsu; Moon, Young-Jae
    • The Bulletin of The Korean Astronomical Society / v.46 no.1 / pp.41.1-41.1 / 2021
  • We generate solar magnetograms and EUV images from Galileo sunspot drawings using a deep learning model based on conditional generative adversarial networks. We train the model using pairs of sunspot drawings from Mount Wilson Observatory (MWO) and the corresponding magnetograms (or UV/EUV images) from the SDO (Solar Dynamics Observatory) satellite from 2011 to 2015, excluding every June and December. We evaluate the model by comparing pairs of actual magnetograms (or UV/EUV images) and the corresponding AI-generated ones in June and December. Our results show that the bipolar structures of the AI-generated magnetograms are consistent with those of the original ones, and that their unsigned magnetic fluxes (or intensities) agree well with those of the originals. Applying this model to the Galileo sunspot drawings of 1612, we generate HMI-like magnetograms and AIA-like EUV images of the sunspots. We hope that the EUV intensities can be used to estimate solar EUV irradiance over long historical time scales.
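
For readers unfamiliar with the conditional-GAN setup mentioned above, the compressed pix2pix-style sketch below shows the structure of such a model: a generator maps a drawing to a magnetogram-like image, a discriminator judges (drawing, magnetogram) pairs, and training mixes an adversarial loss with an L1 term. The tiny layer sizes, image size, and loss weighting are placeholders, not the authors' architecture.

```python
# Toy conditional GAN (pix2pix-style); architecture and hyperparameters are illustrative.
import torch
import torch.nn as nn


class Generator(nn.Module):                       # drawing -> magnetogram-like image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())

    def forward(self, drawing):
        return self.net(drawing)


class Discriminator(nn.Module):                   # judges (drawing, magnetogram) pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, 2, 1))            # PatchGAN-style score map

    def forward(self, drawing, magnetogram):
        return self.net(torch.cat([drawing, magnetogram], dim=1))


G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()


def train_step(drawing, magnetogram):
    """One adversarial + L1 update on a (drawing, magnetogram) pair."""
    # Discriminator: push real pairs toward 1 and generated pairs toward 0.
    fake = G(drawing).detach()
    pred_real, pred_fake = D(drawing, magnetogram), D(drawing, fake)
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + bce(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the target (L1 term).
    fake = G(drawing)
    pred = D(drawing, fake)
    g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, magnetogram)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()


# Usage with random 64x64 stand-ins for (drawing, magnetogram) pairs.
train_step(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```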

Automatic Detection of Type II Solar Radio Burst by Using 1-D Convolution Neural Network

  • Kyung-Suk Cho; Junyoung Kim; Rok-Soon Kim; Eunsu Park; Yuki Kubo; Kazumasa Iwai
    • Journal of The Korean Astronomical Society / v.56 no.2 / pp.213-224 / 2023
  • Type II solar radio bursts show frequency drifts from high to low over time. They are known as a signature of coronal shocks associated with Coronal Mass Ejections (CMEs) and/or flares, which cause abrupt changes in the space environment near the Earth (space weather). Therefore, early detection of type II bursts is important for space weather forecasting. In this study, we develop a deep-learning (DL) model for the automatic detection of type II bursts. For this purpose, we adopted a 1-D Convolutional Neural Network (CNN), as it is well suited for processing the spatiotemporal information in the applied data set. We utilized a total of 286 radio burst spectrum images obtained by the Hiraiso Radio Spectrograph (HiRAS) from 1991 to 2012, along with 231 spectrum images without bursts from 2009 to 2015, to recognize type II bursts. The burst types were labeled manually according to their spectral features in an answer table. Subsequently, we applied the 1-D CNN technique to the spectrum images using two filter windows of different sizes along the time axis. To develop the DL model, we randomly selected 412 spectrum images (80%) for training and validation. The training history shows that both training and validation losses drop rapidly, while training and validation accuracies increase within approximately 100 epochs. To evaluate the model's performance, we used 105 test images (20%) and employed a contingency table. The false alarm ratio (FAR) and critical success index (CSI) were 0.14 and 0.83, respectively. Furthermore, we confirmed this result by adopting a five-fold cross-validation method, in which we re-sampled five groups randomly; the estimated mean FAR and CSI of the five groups were 0.05 and 0.87, respectively. For experimental purposes, we applied the proposed model to 85 HiRAS type II radio bursts listed in the NGDC catalogue from 2009 to 2016 and to 184 quiet (no-burst) spectrum images before and after the type II bursts. As a result, our model successfully detected 79 (93%) of the type II events. This result demonstrates, for the first time, that the 1-D CNN algorithm is useful for detecting type II bursts.
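
As a minimal illustration of the 1-D CNN idea described above (not the paper's network), the sketch below classifies a dynamic spectrum, treated as a multi-channel 1-D sequence along the time axis, into burst / no-burst classes using two convolution layers with different kernel sizes; the layer widths, sequence length, and channel count are assumptions.

```python
# Toy 1-D CNN burst/no-burst classifier; sizes are placeholders, not the paper's model.
import torch
import torch.nn as nn


class BurstCNN1D(nn.Module):
    def __init__(self, n_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            # Two 1-D convolutions with different filter-window sizes along the time
            # axis, echoing the two-window design mentioned in the abstract.
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=11, padding=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(64, 2)   # type II burst vs. no burst

    def forward(self, x):                    # x: (batch, frequency channels, time steps)
        return self.classifier(self.features(x).squeeze(-1))


# Toy usage: 8 random "spectra" with 64 frequency channels and 256 time steps.
model = BurstCNN1D()
logits = model(torch.randn(8, 64, 256))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
```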

A reversible data hiding scheme in JPEG bitstreams using DCT coefficients truncation

  • Zhang, Mingming; Zhou, Quan; Hu, Yanlang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.1 / pp.404-421 / 2020
  • A reversible data hiding scheme for JPEG compressed bitstreams is proposed that avoids decoding failure and file expansion by removing the bitstream segments corresponding to high-frequency coefficients and embedding the secret data in the file header as a comment segment. We decode original JPEG images into quantized 8×8 DCT blocks and search for a high frequency to serve as an optimal termination point, beyond which the coefficients are set to zero. The blocks are separated into two parts, with a slightly smaller termination point in the latter part, so that all blocks are available for substitution. Spare space is then reserved to insert the secret data after the comment marker, so that data extraction at the receiver is independent of image recovery. Marked images can be displayed normally, and the deviation is difficult to distinguish by eye. The termination point adapts to variations in secret size. A secret size below 500 bits produces negligible distortion and a PSNR of approximately 50 dB, while the PSNR mostly remains above 30 dB for secret sizes up to 25,000 bits. The experimental results show that the proposed technique has significant advantages in computational complexity and preservation of file size for small hiding capacities, compared to previous methods.
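
Only the "embed the secret in a comment segment" part of the scheme above is sketched here; the DCT truncation and recovery steps are omitted. The JPEG COM marker is 0xFFFE, and its two-byte length field counts itself but not the marker, so the payload is limited to 65,533 bytes.

```python
# Minimal sketch of embedding/reading a COM segment in a JPEG bitstream.
# This is only an illustration of the comment-segment mechanism, not the full scheme.
import struct


def embed_comment(jpeg_bytes: bytes, secret: bytes) -> bytes:
    """Insert secret bytes as a COM segment right after the SOI marker (0xFFD8)."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG bitstream"
    assert len(secret) <= 65533, "COM payload limited to 65533 bytes"
    segment = b"\xff\xfe" + struct.pack(">H", len(secret) + 2) + secret
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]


def extract_comment(jpeg_bytes: bytes) -> bytes:
    """Read the payload of the first COM segment after SOI.

    Assumes the COM segment sits among the header segments before the scan data,
    which holds here because embed_comment() places it immediately after SOI.
    """
    i = 2
    while i + 4 <= len(jpeg_bytes):
        marker = jpeg_bytes[i:i + 2]
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == b"\xff\xfe":
            return jpeg_bytes[i + 4:i + 2 + length]
        i += 2 + length                      # skip this segment's marker and payload
    return b""
```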

SPECKLE IMAGING TECHNIQUE FOR LUNAR SURFACES

  • Kim, Jinkyu; Sim, Chae Kyung; Jeong, Minsup; Moon, Hong-Kyu; Choi, Young-Jun; Kim, Sungsoo S.; Jin, Ho
    • Journal of The Korean Astronomical Society / v.55 no.4 / pp.87-97 / 2022
  • Polarimetric measurements of the lunar surface will soon be available from lunar orbit via the Wide-Field Polarimetric Camera (PolCam) onboard the Korea Pathfinder Lunar Orbiter (KPLO), planned for launch in mid-2022. To provide calibration data for PolCam, we are conducting speckle polarimetric measurements of the nearside of the Moon from the ground. Speckle imaging of the Moon for scientific purposes appears not to have been attempted before, and a procedure is needed to create a "lucky image" from a number of observed speckle images. As a first step toward obtaining ground-based calibration data for PolCam, we search for the best sharpness measure for lunar surfaces. We then calculate the minimum number of speckle images required and the number of images to be shift-and-added for higher resolution (sharpness) and signal-to-noise ratio.
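
The generic lucky-imaging recipe behind the abstract is sketched below under assumptions: frames are ranked by a sharpness measure (a gradient-based metric here, only one of the candidates such a study might compare), the best fraction is registered to the sharpest frame by cross-correlation, and the aligned frames are shift-and-added.

```python
# Generic lucky-imaging sketch; the sharpness metric and keep fraction are assumptions.
import numpy as np


def sharpness(frame):
    """Mean squared image gradient; larger means sharper."""
    gy, gx = np.gradient(frame.astype(float))
    return np.mean(gx**2 + gy**2)


def shift_and_add(frames, keep_fraction=0.1):
    """Select the sharpest frames, align them to the best one, and average."""
    order = np.argsort([sharpness(f) for f in frames])[::-1]
    n_keep = max(1, int(keep_fraction * len(frames)))
    keep = [frames[i].astype(float) for i in order[:n_keep]]
    ref_ft = np.conj(np.fft.fft2(keep[0]))
    stack = np.zeros_like(keep[0])
    for f in keep:
        # Integer-pixel registration via the peak of the cross-correlation.
        corr = np.fft.ifft2(np.fft.fft2(f) * ref_ft).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        stack += np.roll(f, (-dy, -dx), axis=(0, 1))
    return stack / len(keep)
```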

Climatology of Equatorial Plasma Bubbles in Ionospheric Connection Explorer/Far-UltraViolet (ICON/FUV) Limb Images

  • Park, Jaeheung; Mende, Stephen B.; Eastes, Richard W.; Frey, Harald U.
    • Journal of Astronomy and Space Sciences / v.39 no.3 / pp.87-98 / 2022
  • The Far-UltraViolet (FUV) imager onboard the Ionospheric Connection Explorer (ICON) spacecraft provides two-dimensional limb images of oxygen airglow in the nightside low-latitude ionosphere, which are used to determine the oxygen ion density. As yet, no FUV limb imager has been used for climatological analyses of Equatorial Plasma Bubbles (EPBs). To examine the potential of ICON/FUV for this purpose, we statistically investigate small-scale (~180 km) fluctuations of oxygen ion density in its limb images. The seasonal-longitudinal variations of the fluctuation level conform reasonably to the EPB statistics in the existing literature. To further validate the ICON/FUV data quality, we also inspect the climatology of the ambient (unfiltered) nightside oxygen ion density. The ambient density exhibits (1) the well-known zonal wavenumber-4 signatures in the Equatorial Ionization Anomaly (EIA) and (2) an off-equatorial enhancement above the Caribbean, both of which agree with previous studies. Merits of ICON/FUV observations over other conventional data sets are discussed in this paper, and we suggest possible directions for future work, e.g., synergy between ICON/FUV and the Global-scale Observations of the Limb and Disk (GOLD) mission.
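
As a hedged illustration of how a small-scale fluctuation level might be computed from a 1-D oxygen-ion density profile (the paper's actual filtering is not specified in the abstract), the sketch below subtracts a ~180 km running-mean background and reports the normalized RMS of the residual; the sampling step is an assumed placeholder.

```python
# Illustrative fluctuation-level estimate; the 180 km scale is from the abstract,
# the sampling step and running-mean filter are assumptions.
import numpy as np


def fluctuation_level(density, sample_km=20.0, window_km=180.0):
    """Normalized RMS of the density residual after subtracting a running-mean background."""
    w = max(3, int(round(window_km / sample_km)) | 1)       # odd window length in samples
    background = np.convolve(density, np.ones(w) / w, mode="same")
    residual = density - background
    return np.sqrt(np.mean(residual**2)) / np.mean(density)
```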

The Interactive Virtual Space with Scent Display for Song-Do Tomorrow-City Experience Complex (향 디스플레이가 가능한 송도 Tomorrow-city 체험관의 상호작용 가상공간)

  • Kim, Jeong-Do; Park, Sung-Dae; Lee, Jung-Hwan; Kim, Jung-Ju; Lee, Sang-Goog
    • Journal of the Ergonomics Society of Korea / v.29 no.4 / pp.585-593 / 2010
  • Recently, we designed an interactive virtual space for the multi-purpose hall in Songdo Future City, located in Incheon, Korea. The goal of the design is a flexible virtual space whose unfixed seating can be adjusted to accommodate audiences of varying and unspecified size. Virtual images are interactively adjusted according to the distance, position, and size of the audience, which are detected by nine photo sensors. To increase immersion, intensity, and realism, we utilized a scent display technology that can create scents matching the images on the screen; the intensity and persistence of the scents are determined by the size, distance, and position of the audience. The virtual imagery consists of background images and reactive images. The background images repeatedly project scenes of spring, summer, autumn, and winter, while the reactive images consist of small portraits, pictures, or icons that characterize each season and are added to the background image according to the distance, position, and size of the audience.
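
A toy sketch, with entirely assumed values, of the kind of sensor-to-scent mapping the abstract describes: distance readings from the nine photo sensors are reduced to a nearest distance and a spread, which set the scent intensity and persistence.

```python
# Toy mapping from photo-sensor distances to scent control values; all constants
# and the mapping itself are illustrative assumptions, not the installation's logic.
import numpy as np


def scent_control(sensor_distances_m, max_intensity=1.0):
    """Map nine photo-sensor distance readings to scent intensity and persistence."""
    d = np.asarray(sensor_distances_m, dtype=float)
    nearest = d.min()                                   # closest audience member
    spread = np.ptp(d)                                  # rough proxy for audience spread
    intensity = max_intensity * np.clip(1.0 - nearest / 10.0, 0.0, 1.0)
    persistence_s = 5.0 + 2.0 * spread                  # linger longer for a dispersed audience
    return intensity, persistence_s


# Example readings (in metres) from the nine sensors.
intensity, persistence = scent_control([2.5, 3.0, 4.2, 5.0, 3.8, 2.9, 6.1, 4.4, 3.3])
```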