• Title/Summary/Keyword: Feature Fusion


Research on the Spatial Expression Characteristics of Illustration in Picture Books (그림책 속 일러스트레이션의 공간 표현 특징 연구)

  • Han, YongGang;Kim, KieSu
    • The Journal of the Korea Contents Association / v.21 no.3 / pp.131-142 / 2021
  • This research concerns the design of pictures in picture books, with the spatial representation of illustrations as its central objective. Arranging the various texts, pictures, and spaces on a page smoothly requires a range of editing skills on the part of the creator. In this paper, the characteristics of spatial expression in picture-book design are derived by analyzing several examples of paintings and studies of classic picture books. First, the fusion of picture and text: both picture and text convey spatial information together as elements of the screen. Second, as a characteristic of coherence in picture-book space design, the story and content must connect smoothly as the book is read. Third, when expressing a space, the creator should draw on the complementary strengths and weaknesses of abstract and concrete spatial expression as needed. Fourth, as a symbolic feature of picture-book spatial expression, many symbolic expression techniques are applied to the spatial expression of picture books according to semiotic principles, which greatly improves the cognitive efficiency of reading picture books. Fifth, the spatial expression of an excellent picture book has strong elements of interest and rich design means, and conveys the screen's content and format to readers in an engaging way. This research suggests that designers and artists should guide creation within a spatial framework when designing picture books, greatly improving the efficiency of the creation process while also providing readers with a reader-centered, visually engaging experience.

A Study on Lightweight CNN-based Interpolation Method for Satellite Images (위성 영상을 위한 경량화된 CNN 기반의 보간 기술 연구)

  • Kim, Hyun-ho;Seo, Doochun;Jung, JaeHeon;Kim, Yongwoo
    • Korean Journal of Remote Sensing / v.38 no.2 / pp.167-177 / 2022
  • Producing satellite image products from the images transmitted to the ground station involves many pre- and post-processing steps. Among these, geometric correction is essential when converting level 1R images to level 1G images; it necessarily relies on an interpolation method, and the accuracy of that interpolation determines the quality of the level 1G images. It is also crucial that the interpolation algorithm run quickly within the level processor. In this paper, we propose a lightweight CNN-based interpolation method for the geometric correction performed when converting level 1R to level 1G images. The proposed method doubles the resolution of satellite images using a lightweight deep convolutional neural network designed for fast processing. In addition, we propose a feature map fusion method that improves the image quality of the multispectral (MS) bands using panchromatic (PAN) band information. Compared to existing deep learning-based interpolation methods, images obtained with the proposed method improved the quantitative peak signal-to-noise ratio (PSNR) by about 0.4 dB for the PAN image and about 4.9 dB for the MS image. We also confirmed that the time required to produce an image at twice the resolution of the 36,500×36,500 input (based on the PAN image size) improved by about 1.6 times compared to the existing deep learning-based interpolation method.
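The PAN-to-MS feature map fusion and 2x upscaling described in this abstract can be pictured with a short sketch. Below is a minimal PyTorch sketch under stated assumptions: concatenation-based fusion of PAN and MS feature maps followed by pixel-shuffle upsampling. The layer widths, depth, and fusion layout are illustrative, not the architecture reported in the paper.

    # Minimal sketch (assumed PyTorch): lightweight 2x interpolation network with
    # PAN-to-MS feature map fusion. Layer sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class PanMsFusionSR(nn.Module):
        def __init__(self, ms_bands=4, feats=32):
            super().__init__()
            # Shallow feature extractors for the MS and PAN inputs
            self.ms_head = nn.Sequential(nn.Conv2d(ms_bands, feats, 3, padding=1), nn.ReLU(inplace=True))
            self.pan_head = nn.Sequential(nn.Conv2d(1, feats, 3, padding=1), nn.ReLU(inplace=True))
            # Fusion: concatenate MS and PAN feature maps, then mix with a 1x1 conv
            self.fuse = nn.Conv2d(feats * 2, feats, 1)
            self.body = nn.Sequential(
                nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
            )
            # Pixel-shuffle upsampling doubles the spatial resolution
            self.upsample = nn.Sequential(
                nn.Conv2d(feats, feats * 4, 3, padding=1),
                nn.PixelShuffle(2),
                nn.Conv2d(feats, ms_bands, 3, padding=1),
            )

        def forward(self, ms, pan):
            # ms: (N, ms_bands, H, W); pan: (N, 1, H, W), assumed resampled to the MS grid
            f = self.fuse(torch.cat([self.ms_head(ms), self.pan_head(pan)], dim=1))
            f = self.body(f) + f              # small residual body keeps the network lightweight
            return self.upsample(f)           # (N, ms_bands, 2H, 2W)

    # Quick shape check with random tensors
    ms, pan = torch.randn(1, 4, 64, 64), torch.randn(1, 1, 64, 64)
    print(PanMsFusionSR()(ms, pan).shape)     # torch.Size([1, 4, 128, 128])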

Quantitative Evaluation of Super-resolution Drone Images Generated Using Deep Learning (딥러닝을 이용하여 생성한 초해상화 드론 영상의 정량적 평가)

  • Seo, Hong-Deok;So, Hyeong-Yoon;Kim, Eui-Myoung
    • Journal of Cadastre & Land InformatiX / v.53 no.2 / pp.5-18 / 2023
  • As the development of drones and sensors accelerates, new services and value are created by fusing data acquired from the various sensors mounted on drones. However, spatial information built through data fusion relies mainly on imagery, and data quality is determined by the specifications and performance of the hardware; constructing high-quality spatial information therefore requires expensive equipment, which makes it difficult to apply in the field. In this study, super-resolution was performed by applying deep learning to low-resolution images acquired through RGB and THM cameras mounted on a drone, and quantitative evaluation and feature point extraction were carried out on the generated high-resolution images. The experiments showed that the high-resolution images generated by super-resolution maintained the characteristics of the original images, and that as the resolution improved, more feature points could be extracted than from the original images. Therefore, generating high-resolution images by applying low-resolution images to a super-resolution deep learning model is judged to be a new way to construct high-quality spatial information without being restricted by hardware.
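The feature point comparison described above can be sketched simply with OpenCV: detect keypoints in the low-resolution image and in its upscaled counterpart, then compare the counts. In this minimal sketch the filename is hypothetical and a bicubic resize stands in for the super-resolution deep learning model, which is not specified here.

    # Sketch of the evaluation idea: count detectable feature points before and
    # after upscaling. cv2.resize stands in for the SR model (assumption).
    import cv2

    def count_orb_features(gray_image, max_features=5000):
        # ORB keypoints as a simple proxy for extractable image detail
        orb = cv2.ORB_create(nfeatures=max_features)
        return len(orb.detect(gray_image, None))

    low_res = cv2.imread("drone_rgb_low.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
    upscaled = cv2.resize(low_res, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)  # placeholder for SR output

    print("low-resolution keypoints:", count_orb_features(low_res))
    print("upscaled keypoints:", count_orb_features(upscaled))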

CAS 500-1/2 Image Utilization Technology and System Development: Achievement and Contribution (국토위성정보 활용기술 및 운영시스템 개발: 성과 및 의의)

  • Yoon, Sung-Joo;Son, Jonghwan;Park, Hyeongjun;Seo, Junghoon;Lee, Yoojin;Ban, Seunghwan;Choi, Jae-Seung;Kim, Byung-Guk;Lee, Hyun jik;Lee, Kyu-sung;Kweon, Ki-Eok;Lee, Kye-Dong;Jung, Hyung-sup;Choung, Yun-Jae;Choi, Hyun;Koo, Daesung;Choi, Myungjin;Shin, Yunsoo;Choi, Jaewan;Eo, Yang-Dam;Jeong, Jong-chul;Han, Youkyung;Oh, Jaehong;Rhee, Sooahm;Chang, Eunmi;Kim, Taejung
    • Korean Journal of Remote Sensing / v.36 no.5_2 / pp.867-879 / 2020
  • As the era of space technology utilization approaches, the launch of the CAS (Compact Advanced Satellite) 500-1/2 satellites is scheduled for 2021 to acquire high-resolution images. Accordingly, increasing image usability and processing efficiency has been emphasized as a key design concept of the CAS 500-1/2 ground station. To this end, the "CAS 500-1/2 Image Acquisition and Utilization Technology Development" project has been carried out to develop core technologies and processing systems for collecting, processing, managing, and distributing CAS 500-1/2 data. In this paper, we introduce the results of this project. We developed an operation system that generates precision images automatically using a GCP (Ground Control Point) chip DB (database) and a DEM (Digital Elevation Model) DB covering the entire Korean peninsula. We also developed a system that produces ortho-rectified images indexed to 1:5,000 map grids, laying a foundation for an ARD (Analysis Ready Data) system. In addition, we linked various application software packages to the operation system so that it systematically produces mosaic images, DSMs (Digital Surface Models)/DTMs (Digital Terrain Models), spatial feature thematic maps, and change detection thematic maps. The major contributions of the developed system and technologies are that, for the first time in Korea, precision images are generated automatically using a GCP chip DB, and that the various utilization product technologies are incorporated into the operation system of a satellite ground station. The developed operation system has been installed at the Korea Land Observation Satellite Information Center of the NGII (National Geographic Information Institute). We expect the system to contribute greatly to the center's work and to provide a standard for future ground station systems of earth observation satellites.
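As a structural sketch of the tiling step mentioned above (ortho-rectified images indexed to 1:5,000 map grids), the following illustrative Python snippet cuts a raster into fixed-size grid cells and assigns each cell a row/column index. The cell size and indexing here are assumptions for illustration only, not the actual 1:5,000 sheet numbering or the operation system's implementation.

    # Illustrative tiling of an ortho-rectified raster into fixed-size grid cells
    # so each cell can be indexed like a map sheet (cell size is an assumption).
    import numpy as np

    def tile_to_grid(ortho, cell):
        # Yield (row_index, col_index, tile) for every full grid cell in the image
        rows, cols = ortho.shape[:2]
        for r in range(0, rows - cell + 1, cell):
            for c in range(0, cols - cell + 1, cell):
                yield r // cell, c // cell, ortho[r:r + cell, c:c + cell]

    # Example: a dummy 4-band ortho image cut into 1024x1024-pixel cells
    ortho_image = np.zeros((4096, 4096, 4), dtype=np.uint16)
    print(sum(1 for _ in tile_to_grid(ortho_image, cell=1024)))  # 16 cells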