• Title/Summary/Keyword: Color Mapping

Analysis of Flood Inundated Area Using Multitemporal Satellite Synthetic Aperture Radar (SAR) Imagery (시계열 위성레이더 영상을 이용한 침수지 조사)

  • Lee, Gyu-Seong;Kim, Yang-Su;Lee, Seon-Il
    • Journal of Korea Water Resources Association
    • /
    • v.33 no.4
    • /
    • pp.427-435
    • /
    • 2000
  • It is often crucial to obtain a map of a flood inundated area in an accurate and rapid manner. This study attempts to evaluate the potential of satellite synthetic aperture radar (SAR) data for mapping flood inundated areas in the Imjin river basin. Multitemporal RADARSAT SAR data of three different dates were obtained: at the time of the flooding on August 4, and before and after the flooding. Once the data sets were geometrically corrected and preprocessed, the temporal characteristics of relative radar backscattering were analyzed. Comparison of the radar backscattering of several surface features made it clear that flooded rice paddy showed a distinctive temporal pattern of radar response. Flooded rice paddy showed a significantly lower radar signal while normally growing rice paddy showed high radar returns, which could also be easily interpreted from the color composite imagery. In addition to delineating the flooded rice fields, the multitemporal radar imagery also allowed us to distinguish the post-flood condition of once-flooded rice fields.
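The change-detection idea in this abstract — flooded paddies go dark in SAR because smooth open water reflects the radar pulse away from the sensor — can be sketched as a simple backscatter-drop threshold. This is an illustrative toy, not the paper's actual processing chain; the threshold and dB values are hypothetical.

```python
# Flag pixels whose backscatter dropped sharply between the pre-flood
# image and the flood-date image. Values and threshold are illustrative.

def flag_flooded(pre_db, flood_db, drop_threshold_db=6.0):
    """Return a boolean mask: True where backscatter fell by the threshold."""
    return [
        (pre - flood) >= drop_threshold_db
        for pre, flood in zip(pre_db, flood_db)
    ]

# Relative backscatter (dB) for four pixels on two of the RADARSAT dates.
pre   = [-8.0, -7.5, -9.0, -8.2]    # before the flood
flood = [-18.5, -7.8, -17.0, -8.0]  # August 4, at the time of flooding

mask = flag_flooded(pre, flood)     # True marks likely inundation
```

A real workflow would first co-register and speckle-filter the images; the threshold would be tuned against reference data.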


A Subchannel Allocation Algorithm for Femtocells in OFDMA Cellular Systems (OFDMA 셀룰러 시스템에서 펨토셀 Subchannel 할당 기법)

  • Kwon, Jeong-Ahn;Kim, Byung-Gook;Lee, Jang-Won;Lim, Jae-Won;Kim, Byoung-Hoon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.4A
    • /
    • pp.350-359
    • /
    • 2010
  • In this paper, we provide a subchannel allocation algorithm for a femtocell system with OFDMA. The algorithm aims to maximize the minimum number of allocated subchannels among all femtocells and, in addition, to maximize the total usage of subchannels across all femtocells. The subchannel allocation algorithm consists of three steps: constructing an interference graph, running a coloring algorithm, and mapping subchannels to colors. In the first step, the femtocell system is modeled by an interference graph, in which each femtocell is modeled as a node and two nodes that interfere with each other are connected by an edge. Based on this interference graph, by using a coloring scheme and mapping subchannels to each color, we can allocate subchannels to each femtocell. Finally, the performance of the algorithm is evaluated by simulation.
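The three steps described above can be sketched minimally as follows. The interference graph, the greedy coloring rule, and the round-robin subchannel split are all assumptions for illustration; the paper's actual coloring and allocation rules may differ.

```python
# Step 1: build an interference graph (nodes = femtocells, edges = pairs
# that interfere). Step 2: greedy coloring so neighbors differ. Step 3:
# partition subchannel indices among the colors. All data is hypothetical.

def greedy_coloring(nodes, edges):
    """Assign each node the smallest color unused by its colored neighbors."""
    neighbors = {n: set() for n in nodes}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    color = {}
    for n in nodes:
        used = {color[m] for m in neighbors[n] if m in color}
        c = 0
        while c in used:
            c += 1
        color[n] = c
    return color

def map_subchannels(color, n_subchannels):
    """Split subchannel indices round-robin among the colors in use."""
    n_colors = max(color.values()) + 1
    per_color = {c: [s for s in range(n_subchannels) if s % n_colors == c]
                 for c in range(n_colors)}
    return {cell: per_color[c] for cell, c in color.items()}

cells = ["F1", "F2", "F3", "F4"]
interference = [("F1", "F2"), ("F2", "F3")]   # F4 interferes with no one
coloring = greedy_coloring(cells, interference)
allocation = map_subchannels(coloring, n_subchannels=8)
```

Interfering neighbors end up with disjoint subchannel sets, while non-interfering cells may reuse the same subchannels, which is the point of the graph-coloring formulation.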

KOMPSAT Data Processing System: An Overview and Preliminary Acceptance Test Results

  • Kim, Yong-Seung;Kim, Youn-Soo;Lim, Hyo-Suk;Lee, Dong-Han;Kang, Chi-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.15 no.4
    • /
    • pp.357-365
    • /
    • 1999
  • The optical sensors of the Electro-Optical Camera (EOC) and Ocean Scanning Multi-spectral Imager (OSMI) aboard the KOrea Multi-Purpose SATellite (KOMPSAT) will be placed in a sun-synchronous orbit in late 1999. The EOC and OSMI sensors are expected to produce land mapping imagery of Korean territory and ocean color imagery of the world's oceans, respectively. Utilization of the EOC and OSMI data would encompass various fields of science and technology such as land mapping, land use and development, flood monitoring, biological oceanography, fishery, and environmental monitoring. Readiness of data support for the user community is thus essential to the success of the KOMPSAT program. As a part of testing such readiness prior to the KOMPSAT launch, we have performed the preliminary acceptance test for the KOMPSAT data processing system using simulated EOC and OSMI data sets. The purpose of this paper is to demonstrate the readiness of the KOMPSAT data processing system, and to help data users understand how the KOMPSAT EOC and OSMI data are processed, archived, and provided. Test results demonstrate that all requirements described in the data processing specification have been met, and that image integrity is maintained for all products. It is noted, however, that since the product accuracy is limited by the simulated sensor data, no quantitative assessment of image products can be made until actual KOMPSAT images are acquired.

Characterization of Purple-discolored, Uppermost Leaves of Soybean; QTL Mapping, Hyperspectral Imaging, and TEM Observation

  • JaeJin Lee;Jeongsun Lee;Seongha Kwon;Heejin You;Sungwoo Lee
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.187-187
    • /
    • 2022
  • Purple discoloration of the uppermost leaves has been observed in some soybean cultivars in recent years. The purpose of this study was to characterize the novel phenotypic changes between the uppermost and middle leaves via multiple approaches. First, quantitative trait locus (QTL) mapping was conducted to detect loci associated with the novel phenotype using 85 recombinant inbred lines (RILs) of the 'Daepung' × PI 96983 population. Using 180K SNP data, a major QTL was identified at around 60 cM of chromosome 6, which accounts for 56% of the total phenotypic variance. The genomic interval is approximately 700 kb, and the list of annotated genes includes the T gene, which is known to control pubescence and seed coat color and is presumed to encode flavonoid 3'-hydroxylase (F3'H). Based on hyperspectral imaging, the reflectance in the 528-554 nm wavelength band was strongly reduced in the uppermost leaves compared to the middle (green) leaves, which is presumed to be due to the accumulation of anthocyanins. In addition, purple-discolored leaf tissues were observed and compared to normal leaves using a transmission electron microscope (TEM). Based on observations of the cell organelles, the purple-discolored uppermost leaves had many pigments formed in the epidermal cells, unlike the normal middle leaves, and the cell wall was twice as thick in the discolored leaves. The thickness of the thylakoid layers in the chloroplasts, the number of starch grains, and the size of the starch grains all decreased in the discolored leaves, while the numbers of plastoglobules and mitochondria increased.


Immersive Visualization of Casting Solidification by Mapping Geometric Model to Reconstructed Model of Numerical Simulation Result (주물 응고 수치해석 복원모델의 설계모델 매핑을 통한 몰입형 가시화)

  • Park, Ji-Young;Suh, Ji-Hyun;Kim, Sung-Hee;Rhee, Seon-Min;Kim, Myoung-Hee
    • The KIPS Transactions:PartA
    • /
    • v.15A no.3
    • /
    • pp.141-149
    • /
    • 2008
  • In this research we present a novel method which combines and visualizes the design model and the FDM-based simulation result of solidification. Moreover, we employ VR displays and visualize stereoscopic images to provide an effective analysis environment. First we reconstruct the solidification simulation result into a rectangular mesh model using conventional simulation software. Each point color of the reconstructed model then represents the temperature value at its position. Next we map the two models by finding, for each point of the design model, the nearest point of the reconstructed model, and assigning that reconstructed point's color to the design point. Before this mapping we apply mesh subdivision, because the design model is composed of a minimum number of points, which makes its point distribution non-uniform compared with the reconstructed model. In this process the original shape is preserved by adding points only to mesh edges whose length exceeds a predefined threshold value. The implemented system visualizes the solidification simulation data on the design model, which allows the user to understand the object geometry precisely. The immersive and realistic working environment constructed with the VR display can help the user discover defect occurrences faster and more effectively.
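The mapping step — copy each design point's color from the nearest point of the reconstructed simulation mesh — can be sketched with a brute-force nearest-neighbor search over toy 3D points. The points and color labels are invented for illustration; a real system would presumably use a spatial index (k-d tree or grid) for meshes of realistic size.

```python
# For each design-model point, find the nearest reconstructed-model point
# and copy its temperature color. Brute force over hypothetical toy data.

def nearest_color(design_pts, sim_pts, sim_colors):
    """Assign each design point the color of its nearest simulation point."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    out = []
    for p in design_pts:
        best = min(range(len(sim_pts)), key=lambda i: d2(p, sim_pts[i]))
        out.append(sim_colors[best])
    return out

sim_pts    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
sim_colors = ["blue", "red"]            # e.g. cold vs. hot temperature
design_pts = [(0.1, 0.0, 0.0), (0.9, 0.1, 0.0)]
colors = nearest_color(design_pts, sim_pts, sim_colors)
```

The subdivision step described in the abstract matters here: without it, sparse design-model points would each absorb a single sample of a smoothly varying temperature field.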

Usefulness of applying Macro for Brain SPECT Processing (Brain SPECT Processing에 있어서 Macro Program 사용의 유용성)

  • Kim, Gye-Hwan;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyeon-Joo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.35-39
    • /
    • 2009
  • Purpose: Diagnostic and functional imaging software in nuclear medicine has developed significantly, but limitations remain, such as tasks that take a lot of time. In this article, we introduce the basic concept of a macro, to aid understanding of macros and their application to brain SPECT processing. We applied macro software to the SPM processing and PACS verification steps of brain SPECT processing. Materials and Methods: In brain SPECT, we chose SPM processing and two PACS tasks which make up a large portion of the work. SPM is a software package for analyzing neuroimaging data, and its purpose is quantitative analysis between groups. Results are produced by a complicated process of realignment, normalization, smoothing, and mapping. We made this process simpler by using a macro program. After sending an image to PACS, we directly input mouse coordinates using a simple macro program for the processes of color mapping, gray scale adjustment, copy, cut, and match. We then compared the time to produce a result by hand with the time using the macro program, and applied these times to the number of studies in 2007. Results: In 2007, there were 115 SPM studies and 834 PACS studies, according to the Diamox study count. SPM work by hand took 10 to 15 minutes depending on expertness, while a uniform five and a half minutes were needed using the macro. Applying these times to the number of studies gives the yearly totals: SPM work by hand needed 1150 to 1725 minutes (19 to 29 hours), versus 632 minutes (about 11 hours) using the macro. PACS work by hand took 2 to 3 minutes per study, versus 45 seconds using the macro; over the year, working by hand needed 1668 to 2502 minutes (28 to 42 hours), versus 625 minutes (about 10 hours) using the macro, so 1043 to 1877 minutes (17 to 31 hours) were saved. Therefore, we could save 45 to 63% for SPM work, 62 to 75% for PACS work, and 55 to 70% for total brain SPECT processing in 2007. Conclusions: On the basis of the number of studies, significant time was saved by applying macros to brain SPECT processing, and even a task that takes only a little time can yield large savings depending on the number of studies. This gives time back to the technologist, allowing radiological technologists to concentrate more on patients and reducing the probability of mistakes. Applying macros to brain SPECT processing helps both radiological technologists and patients, and contributes to improving the quality of hospital service.
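The yearly figures in this abstract follow from simple per-study arithmetic, which can be reproduced directly (study counts and per-study times are taken from the abstract itself):

```python
# Minutes per study multiplied by the 2007 study counts (115 SPM, 834 PACS).

spm_studies, pacs_studies = 115, 834

spm_hand_min  = (spm_studies * 10, spm_studies * 15)   # 10-15 min by hand
spm_macro_min = spm_studies * 5.5                      # 5.5 min with macro

pacs_hand_min  = (pacs_studies * 2, pacs_studies * 3)  # 2-3 min by hand
pacs_macro_min = pacs_studies * 0.75                   # 45 s with macro

pacs_saved = (pacs_hand_min[0] - pacs_macro_min,
              pacs_hand_min[1] - pacs_macro_min)       # ~1043-1877 min
```

The computed PACS savings of 1042.5 to 1876.5 minutes round to the 1043-1877 minutes (17 to 31 hours) reported above.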


Machine Learning Based MMS Point Cloud Semantic Segmentation (머신러닝 기반 MMS Point Cloud 의미론적 분할)

  • Bae, Jaegu;Seo, Dongju;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.939-951
    • /
    • 2022
  • The most important factor in designing autonomous driving systems is to recognize the exact location of the vehicle within the surrounding environment. To date, various sensors and navigation systems have been used for autonomous driving systems; however, all have limitations. Therefore, the need for high-definition (HD) maps that provide high-precision infrastructure information for safe and convenient autonomous driving is increasing. HD maps are drawn using three-dimensional point cloud data acquired through a mobile mapping system (MMS). However, this process requires manual work due to the large numbers of points and drawing layers, increasing the cost and effort associated with HD mapping. The objective of this study was to improve the efficiency of HD mapping by segmenting semantic information in an MMS point cloud into six classes: roads, curbs, sidewalks, medians, lanes, and other elements. Segmentation was performed using various machine learning techniques including random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and gradient-boosting machine (GBM), with 11 variables covering geometry, color, intensity, and other road design features. MMS point cloud data for a 130-m section of a five-lane road near Minam Station in Busan were used to evaluate the segmentation models; the average F1 scores of the models were 95.43% for RF, 92.1% for SVM, 91.05% for GBM, and 82.63% for KNN. The RF model showed the best segmentation performance, with F1 scores of 99.3%, 95.5%, 94.5%, 93.5%, and 90.1% for roads, sidewalks, curbs, medians, and lanes, respectively. The variable importance results of the RF model showed high mean decrease accuracy and mean decrease Gini for the XY dist. and Z dist. variables related to road design; thus, variables related to road design contributed significantly to the segmentation of semantic information. The results of this study demonstrate the applicability of machine learning based segmentation of MMS point cloud data, and will help to reduce the cost and effort associated with HD mapping.
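Per-point semantic segmentation of this kind reduces to classifying each point by a feature vector. As a rough stand-in for the RF/SVM/KNN/GBM models compared in the paper, a tiny 1-nearest-neighbor rule over two hypothetical features (height above the road surface, intensity) illustrates the setup; the real models use 11 variables and far larger training sets.

```python
# Toy per-point classifier: label a point with the class of its nearest
# labeled training point. Features and values are hypothetical.

def knn_predict(train_X, train_y, x):
    """Return the class of the nearest training sample (1-NN)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    best = min(range(len(train_X)), key=lambda i: d2(train_X[i], x))
    return train_y[best]

# (Z dist. above road surface in m, normalized intensity) per point.
train_X = [(0.00, 0.9), (0.02, 0.8), (0.15, 0.4), (0.18, 0.5)]
train_y = ["road", "road", "curb", "curb"]

label = knn_predict(train_X, train_y, (0.01, 0.85))
```

The variable-importance finding above (XY dist. and Z dist. dominating) is consistent with this picture: geometric position relative to the road design carries most of the class signal.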

Gamut Mapping and Extension Method in the xy Chromaticity Diagram for Various Display Devices (다양한 디스플레이 장치를 위한 xy 색도도상에서의 색역 사상 및 확장 기법)

  • Cho Yang-Ho;Kwon Oh-Seol;Son Chang-Hwan;Park Tae-Yong;Ha Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.1 s.307
    • /
    • pp.45-54
    • /
    • 2006
  • This paper proposes a color matching technique, including display characterization, a chromatic adaptation model, and gamut mapping and extension, to generate consistent colors for the same input signal on each display device. To reproduce consistent colors on DTV displays, it is necessary to characterize the relationship between input and output colors for each display device, to apply a chromatic adaptation model considering the difference in reference white, and to compensate for the gamut that each display device can represent. In this paper, a 9-channel-independent GOG model, improved from the conventional 3-channel GOG (gain, offset, gamma) model, is used to account for channel interaction and enhance the modeling accuracy. The input images then have to be adjusted to compensate for the limited gamut of each display device. We propose a gamut mapping and extension method that preserves the lightness and hue of an original image and enhances its saturation in the xy chromaticity diagram. Since the human visual system is more sensitive to lightness and hue, these values are maintained as in the input signal, while saturation is changed according to the ratio of the input and output gamuts. The xy chromaticity diagram is also effective in reducing the complexity of establishing the gamut boundary and of processing moving pictures on DTV displays. As a result, accurate color reproduction is achieved when the proposed method is applied to LCD and PDP display devices.
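The core operation — keep the hue direction around the white point in the xy diagram and scale saturation by the ratio of output to input gamut extent along that hue — can be sketched as follows. This is a simplified reading of the abstract: the gamut-boundary distances are passed in directly as assumed numbers, whereas a real implementation would derive them from the device gamut triangles, and the D65 white point is an assumption.

```python
# Scale a color's distance from the white point in xy chromaticity by the
# ratio of output-gamut to input-gamut boundary distance along its hue.
# This preserves the hue direction while extending (or compressing)
# saturation. Boundary distances r_in / r_out are hypothetical inputs.

WHITE = (0.3127, 0.3290)   # assumed D65 white point in xy

def extend_chromaticity(xy, r_in, r_out, white=WHITE):
    """Move xy along its hue direction by the gamut ratio r_out / r_in."""
    dx, dy = xy[0] - white[0], xy[1] - white[1]
    k = r_out / r_in
    return (white[0] + k * dx, white[1] + k * dy)

# A color partway to the source gamut boundary, pushed toward a wider gamut.
out = extend_chromaticity((0.40, 0.35), r_in=0.20, r_out=0.25)
```

With r_out > r_in the point moves away from white (saturation is enhanced); with r_out < r_in the same formula performs gamut compression toward the smaller display gamut.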

Cartoon Character Rendering based on Shading Capture of Concept Drawing (원화의 음영 캡쳐 기반 카툰 캐릭터 렌더링)

  • Byun, Hae-Won;Jung, Hye-Moon
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.8
    • /
    • pp.1082-1093
    • /
    • 2011
  • Traditional rendering of cartoon characters cannot properly revive the feeling of the concept drawings. In this paper, we propose a capture technique to obtain a toon shading model from the concept drawings, and with this technique we provide a novel system to render 3D cartoon characters. The benefits of this system are that it cartoonizes the 3D character according to saliency, emphasizing the character's form, and further supports a sketch-based user interface for artists to edit shading in post-production. For this, we generate the texture automatically with an RGB color sorting algorithm that analyzes the color distribution and rates of a selected region. In the cartoon rendering process, we use saliency as a measure of the visual importance of each area of the 3D mesh, and we provide a novel cartoon rendering algorithm based on the saliency of the 3D mesh. For fine adjustment of shading style, we propose a user interface that allows artists to freely add and delete shading on a 3D model. Finally, this paper shows the usefulness of the proposed system through user evaluation.

New N-dimensional Basis Functions for Modeling Surface Reflectance (표면반사율 모델링을 위한 새로운 N차원 기저함수)

  • Kwon, Oh-Seol
    • Journal of Broadcast Engineering
    • /
    • v.17 no.1
    • /
    • pp.195-198
    • /
    • 2012
  • The N basis functions are typically chosen so that surface reflectance functions (SRFs) and spectral power distributions (SPDs) can be accurately reconstructed from their N-dimensional vector codes. Typical rendering applications assume that the resulting mapping is an isomorphism, where the vector operations of addition, scalar multiplication, and component-wise multiplication on the N-vectors can be used to model physical operations such as superposition of lights, light-surface interactions, and inter-reflection. In general, however, the vector operations do not mirror the physical ones. If the choice of basis functions is restricted to characteristic functions, then the resulting map between SPDs/SRFs and N-vectors is an isomorphism that preserves the physical operations needed in rendering. This paper shows how to select optimal characteristic function bases of any dimension N (number of basis functions) and also evaluates how accurately a large set of Munsell color chips can be approximated with basis functions of dimension N.
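The characteristic-function property described in this abstract can be demonstrated concretely: partition the wavelength axis into N bands, code a spectrum by its mean in each band, and then the code of a product spectrum (light times surface reflectance) equals the component-wise product of the codes, provided the spectra are constant within each band. The band layout and spectra below are illustrative, not from the paper.

```python
# Code a sampled spectrum as N band means (characteristic-function basis).
# For spectra that are piecewise constant over the bands, coding commutes
# with pointwise multiplication - the isomorphism property. Data is toy.

def band_code(samples, n_bands):
    """Mean value per band; len(samples) must divide evenly into n_bands."""
    w = len(samples) // n_bands
    return [sum(samples[i * w:(i + 1) * w]) / w for i in range(n_bands)]

# Piecewise-constant SPD and SRF over 8 wavelength samples (4 bands of 2).
spd = [1.0, 1.0, 0.5, 0.5, 0.2, 0.2, 0.8, 0.8]   # illuminant power
srf = [0.9, 0.9, 0.4, 0.4, 0.6, 0.6, 0.1, 0.1]   # surface reflectance

color_signal = [e * r for e, r in zip(spd, srf)]  # light-surface interaction
code_product = [a * b for a, b in zip(band_code(spd, 4), band_code(srf, 4))]
```

For general smooth spectra the equality holds only approximately, which is why the choice of band edges (the "optimal characteristic function bases" of the paper) matters.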