• Title/Summary/Keyword: Angle Mapping (각도 매핑)


Mapping Precise Two-dimensional Surface Deformation on Kilauea Volcano, Hawaii, using ALOS-2 PALSAR-2 Spotlight SAR Interferometry (ALOS-2 PALSAR-2 Spotlight 영상의 위성레이더 간섭기법을 활용한 킬라우에아 화산의 정밀 2차원 지표변위 매핑)

  • Hong, Seong-Jae;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_3
    • /
    • pp.1235-1249
    • /
    • 2019
  • Kilauea Volcano is one of the most active volcanoes in the world. In this study, we used ALOS-2 PALSAR-2 satellite imagery to measure the surface deformation that occurred near the summit of Kilauea Volcano from 2015 to 2017. To measure two-dimensional surface deformation, interferometric synthetic aperture radar (InSAR) and multiple aperture SAR interferometry (MAI) methods were applied to two interferometric pairs. To improve the precision of the 2D measurement, we compared the root-mean-squared deviation (RMSD) of the difference between measurements while changing the effective antenna length and the normalized squint, two factors that affect the measurement performance of the MAI method, and selected the values that measure deformation most precisely. After selecting the optimal values, the RMSD of the difference between the MAI measurements decreased from 4.07 cm to 2.05 cm. In the two interferograms, the maximum line-of-sight deformations are -28.6 cm and -27.3 cm, respectively; the maximum along-track deformations are 20.2 cm and 20.8 cm, and -24.9 cm and -24.3 cm in the opposite direction. After stacking the two interferograms, two-dimensional surface deformation mapping was performed, and a maximum surface deformation of approximately 30.4 cm was measured in the northwest direction. In addition, large deformations of more than 20 cm were measured in all directions. These results indicate that the risk of eruptive activity at Kilauea Volcano is increasing, and the 2015-2017 surface deformation measurements are expected to aid future studies of the volcano's eruptive activity.
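
The parameter selection step described in this abstract amounts to a grid search over the two MAI factors, keeping the combination whose two measurements agree best. Below is a minimal sketch of that selection logic only; the input structure `mai_results` and its keys are hypothetical, and the actual InSAR/MAI processing chain is not shown.

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-squared deviation between two deformation maps on the same grid."""
    diff = a - b
    return np.sqrt(np.nanmean(diff ** 2))

def select_mai_parameters(mai_results):
    """Pick the (effective antenna length, normalized squint) combination whose
    two MAI measurements agree best, i.e. minimize the RMSD of their difference.

    mai_results: dict mapping (antenna_length_m, normalized_squint) to a pair
                 (mai_pair1, mai_pair2) of 2-D numpy arrays -- hypothetical input.
    """
    best_key, best_value = None, np.inf
    for key, (pair1, pair2) in mai_results.items():
        value = rmsd(pair1, pair2)
        if value < best_value:
            best_key, best_value = key, value
    return best_key, best_value
```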

Analyzing the Issue Life Cycle by Mapping Inter-Period Issues (기간별 이슈 매핑을 통한 이슈 생명주기 분석 방법론)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.25-41
    • /
    • 2014
  • Recently, the number of social media users has increased rapidly because of the prevalence of smart devices. As a result, the amount of real-time data has been growing exponentially, which in turn is generating more interest in using such data to create added value. For instance, several attempts are being made to analyze the search keywords frequently used on news portal sites and the words regularly mentioned on various social media in order to identify social issues. The technique of "topic analysis" is employed to identify topics and themes from a large number of text documents. As one of the most prevalent applications of topic analysis, issue tracking investigates changes in the social issues identified through topic analysis. Traditionally, issue tracking is conducted by identifying the main topics of the documents covering the entire period at once and then analyzing the occurrence of each topic by period. However, this traditional approach has two limitations. First, when a new period is added, topic analysis must be repeated for the documents of the entire period rather than only the new documents of the added period. This creates significant time and cost burdens, so the traditional approach is difficult to apply in most settings that require analysis of additional periods. Second, issues are not only created and terminated constantly; one issue can also split into several issues, or multiple issues can merge into a single issue. In other words, each issue has a life cycle consisting of creation, transition (merging and segmentation), and termination, and existing issue tracking methods do not address the connections and influence relationships between issues. The purpose of this study is to overcome these two limitations: the limitation of the analysis method and the lack of consideration of the changeability of issues. Suppose topic analysis is performed separately for each of multiple periods. It then becomes essential to map the issues of different periods in order to trace issue trends, yet discovering connections between issues of different periods is not easy because the issues derived for each period are mutually heterogeneous. In this study, to overcome these limitations without analyzing the documents of the entire period simultaneously, the analysis is performed independently for each period, and issue mapping is then performed to link the identified issues of consecutive periods. An integrated view of the individual periods is thus presented, and the issue flow over the entire integrated period is depicted. Because the entire issue life cycle, including creation, transition (merging and segmentation), and termination, is identified and examined systematically, the changeability of issues is analyzed as well. The proposed methodology is highly efficient in terms of time and cost, sufficiently considers the changeability of issues, and can be adapted to practical situations. By applying the proposed methodology to actual Internet news, its potential practical applications are analyzed. Consequently, the proposed methodology was able to extend the period of analysis and follow the course of each issue's life cycle, and it can facilitate a clearer understanding of complex social phenomena through topic analysis.
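
As an illustration of the inter-period issue mapping idea, the sketch below links issues of two consecutive periods by keyword overlap; issues with no outgoing or incoming link can then be read as terminated or newly created, and one-to-many or many-to-one links as segmentation or merging. The data format, similarity measure, and threshold are assumptions, not the paper's actual algorithm.

```python
def jaccard(a, b):
    """Jaccard similarity between two keyword sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def map_issues(prev_issues, next_issues, threshold=0.3):
    """Link issues of consecutive periods whose keyword overlap exceeds a threshold.

    prev_issues, next_issues: dicts {issue_id: [keywords]} -- an assumed format.
    An earlier-period issue with no outgoing link is read as terminated, a
    later-period issue with no incoming link as newly created, and one-to-many
    or many-to-one links as segmentation or merging.
    """
    links = []
    for pid, p_kw in prev_issues.items():
        for nid, n_kw in next_issues.items():
            sim = jaccard(p_kw, n_kw)
            if sim >= threshold:
                links.append((pid, nid, sim))
    return links
```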

Registration of Three-Dimensional Point Clouds Based on Quaternions Using Linear Features (선형을 이용한 쿼터니언 기반의 3차원 점군 데이터 등록)

  • Kim, Eui Myoung;Seo, Hong Deok
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.3
    • /
    • pp.175-185
    • /
    • 2020
  • Three-dimensional registration is the process of matching data with or without a coordinate system to a reference coordinate system; it is used in various fields such as the absolute orientation of photogrammetry and data fusion for producing precise road maps. Three-dimensional registration methods are divided into those using points and those using linear features. When points are used, it is difficult to find identical conjugate points between datasets with different spatial resolutions. The use of linear features, on the other hand, has the advantage that three-dimensional registration is possible not only when the spatial resolutions differ, but also with conjugate linear features in point-cloud-type data that do not share the same start and end points. In this study, we proposed a method that first determines the three-dimensional rotation between two datasets using quaternions and then determines the scale and three-dimensional translation, in order to perform three-dimensional registration using linear features. To verify the proposed method, three-dimensional registration was performed using linear features constructed in an indoor environment and linear features acquired through a terrestrial mobile mapping system in an outdoor environment. With the indoor data, the root mean square error was 0.001054 m with the scale fixed and 0.000936 m with the scale estimated. The three-dimensional transformation over a 500 m section of the outdoor data yielded a root mean square error of 0.09412 m when six linear features were used, which satisfies the accuracy required for producing precise maps. In addition, in an experiment varying the number of linear features, the root mean square error changed little once nine or more linear features were used, indicating that nine linear features are sufficient for a high-precision 3D transformation.
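
A minimal sketch of the rotation-then-scale/translation idea follows: a unit quaternion is converted to a rotation matrix and applied as part of a 3D similarity transform. This is standard quaternion algebra for illustration, not the paper's full estimation procedure from linear-feature correspondences.

```python
import numpy as np

def quaternion_to_rotation_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def similarity_transform(points, q, scale, translation):
    """Apply X' = s * R(q) * X + t to an (N, 3) array of points."""
    R = quaternion_to_rotation_matrix(q)
    return scale * points @ R.T + np.asarray(translation)
```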

A Semantic Classification Model for e-Catalogs (전자 카탈로그를 위한 의미적 분류 모형)

  • Kim Dongkyu;Lee Sang-goo;Chun Jonghoon;Choi Dong-Hoon
    • Journal of KIISE:Databases
    • /
    • v.33 no.1
    • /
    • pp.102-116
    • /
    • 2006
  • Electronic catalogs (or e-catalogs) hold information about the goods and services offered or requested by the participants and consequently form the basis of an e-commerce transaction. Catalog management is complicated by a number of factors, and product classification is at the core of these issues. A classification hierarchy is used for spend analysis, customs regulation, and product identification. Classification is the foundation on which product databases are designed and plays a central role in almost all aspects of the management and use of product information. However, product classification has received little formal treatment in terms of its underlying model, operations, and semantics. We believe that the lack of a logical model for classification introduces a number of problems, not only for the classification itself but also for the product database in general. A classification scheme needs to meet diverse user views to support efficient and convenient use of product information. It needs to change and evolve frequently, without breaking consistency, as new products are introduced, existing products are discontinued, classes are reorganized, and classes are specialized. It also needs to be merged and mapped with other classification schemes without information loss when B2B transactions occur. To satisfy these requirements, a classification scheme should be dynamic enough to accommodate such changes within an acceptable time and cost. The existing classification schemes widely used today, such as UNSPSC and eClass, however, have many limitations in meeting these requirements for dynamic classification. In this paper, we try to understand what it means to classify products and present how best to represent classification schemes so as to capture the semantics behind the classifications and facilitate mappings between them. Product information carries rich semantics, such as class attributes like material, time, and place, as well as integrity constraints. We analyze the dynamic features of product databases and the limitations of existing code-based classification schemes, and we describe a semantic classification model that satisfies the requirements for the dynamic features of product databases. It provides a means to explicitly and formally express richer semantics for product classes and organizes class relationships into a graph. We believe the model proposed in this paper satisfies the requirements and challenges raised by previous works.
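
As a rough illustration of organizing class semantics into a graph, the sketch below stores explicit attributes and constraints on each product class and records typed relationships (for example "subclass-of" or "mapped-to") as edges. The structure and names are illustrative assumptions, not the paper's formal model.

```python
from dataclasses import dataclass, field

@dataclass
class ProductClass:
    """A product class carrying explicit semantics (illustrative names only)."""
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. {"material": "steel"}
    constraints: list = field(default_factory=list)  # e.g. ["diameter > 0"]

class ClassificationGraph:
    """Classes as nodes; typed edges express relationships such as
    'subclass-of' or 'mapped-to' (used when merging with another scheme)."""

    def __init__(self):
        self.nodes = {}
        self.edges = []  # (source_class, relation, target_class)

    def add_class(self, cls):
        self.nodes[cls.name] = cls

    def relate(self, source, relation, target):
        self.edges.append((source, relation, target))
```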

Quantification of Soil Properties using Visible-Near Infrared Reflectance Spectroscopy (가시·근적외 분광 스펙트럼을 이용한 토양 이화학성 추정)

  • Choe, Eunyoung;Hong, S. Young;Kim, Yi-Hyun;Song, Kwan-Cheol;Zhang, Yong-Seon
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.42 no.6
    • /
    • pp.522-528
    • /
    • 2009
  • This study focused on establishing prediction models using visible-near infrared spectra to simultaneously estimate multiple soil components, and on enhancing prediction quality through suitably transformed input spectra and classification of soil spectral types used as model input. The continuum-removed spectra gave significant results for all soil properties in both classified and bulk predictions. The prediction model using spectra classified by the absorption peak areas around 500 nm and 950 nm, which efficiently indicate soil color, showed slightly better performance. In particular, Ca and CEC were well estimated by the classified prediction model with $R^{2}$ > 0.8. For organic carbon, both the classified and bulk prediction models performed well, with $R^{2}$ > 0.8 and RPD > 2. These prediction models may be applied in global soil mapping, soil classification, and remote sensing data analysis.
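
Continuum removal, the spectral transformation highlighted in this abstract, divides each spectrum by its upper convex hull so that absorption features become comparable across soils. Below is a minimal sketch under the assumption that wavelengths are sorted in increasing order; it uses a simple monotone-chain scan to build the upper hull.

```python
import numpy as np

def continuum_removed(wavelengths, reflectance):
    """Divide a reflectance spectrum by its upper convex hull (continuum removal).

    Assumes wavelengths are sorted in increasing order.
    """
    pts = list(zip(wavelengths, reflectance))
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wavelengths, hx, hy)  # hull evaluated at every wavelength
    return np.asarray(reflectance) / continuum
```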

Analysis on Topographic Normalization Methods for 2019 Gangneung-East Sea Wildfire Area Using PlanetScope Imagery (2019 강릉-동해 산불 피해 지역에 대한 PlanetScope 영상을 이용한 지형 정규화 기법 분석)

  • Chung, Minkyung;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.2_1
    • /
    • pp.179-197
    • /
    • 2020
  • Topographic normalization reduces terrain effects on reflectance by adjusting the brightness values of image pixels so that pixels covering the same land cover have equal values. Topographic effects are induced by the imaging conditions and tend to be large in highly mountainous regions. Therefore, image analysis over mountainous terrain, such as wildfire damage assessment, requires appropriate topographic normalization techniques to yield accurate results. However, most previous studies focused on evaluating topographic normalization on satellite images with moderate to low spatial resolution, so the alleviation of topographic effects on multi-temporal high-resolution images has not been examined sufficiently. In this study, topographic normalization was evaluated for each band to select the optimal combination of techniques for rapid and accurate wildfire damage assessment using PlanetScope images. PlanetScope has considerable potential in the disaster management field, as it supports rapid image acquisition by providing daily 3 m resolution imagery with global coverage. For comparison, seven widely used topographic normalization methods were applied to both pre-fire and post-fire images. The analysis of the bi-temporal images suggests an optimal combination of techniques that can be applied to images with different land-cover compositions. A vegetation index was then calculated from the images after topographic normalization with the proposed combination. Wildfire damage detection results obtained by thresholding the index showed improvements in detection accuracy for both object-based and pixel-based image analysis. In addition, a burn severity map was constructed to verify the effects of topographic correction on a continuous distribution of brightness values.
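
One per-band topographic normalization method commonly included in such comparisons is the C-correction, which fits an empirical line between brightness and the cosine of the local solar incidence angle. The sketch below shows that method only as an example; it is an assumption that it is among the seven methods compared, and it is not necessarily the combination the paper selects.

```python
import numpy as np

def c_correction(band, cos_i, solar_zenith_rad):
    """Per-band C-correction topographic normalization (illustrative sketch).

    band            : 2-D array of brightness / reflectance values
    cos_i           : 2-D array of the cosine of the local solar incidence angle
                      (derived from slope, aspect, and solar geometry)
    solar_zenith_rad: solar zenith angle in radians
    """
    valid = np.isfinite(band) & np.isfinite(cos_i)
    # empirical line band = a * cos_i + b  ->  c = b / a
    a, b = np.polyfit(cos_i[valid], band[valid], 1)
    c = b / a
    return band * (np.cos(solar_zenith_rad) + c) / (cos_i + c)
```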

BER Performance of an Offset Stacked Spreading CDMA System Based on Orthogonal Complementary Codes (직교 상보코드 기반의 옵셋누적 확산 CDMA 시스템의 비트오율 성능)

  • Kim, Myoung-Jin
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.46 no.3
    • /
    • pp.1-8
    • /
    • 2009
  • A DS-CDMA system has very low bandwidth efficiency, so it is difficult to maintain a high spreading gain for high-speed data transmission. Offset stacked spreading CDMA (OSS-CDMA) is a transmission scheme in which spreading codes are overlapped with chip offsets before transmission. Such a system requires a code set that guarantees orthogonality between the codes in the set at any chip offset. An orthogonal complementary code set has the property that the cross-correlation function between codes in the group is zero for all shifts, so it can be used for an OSS-CDMA system. In an OCC-OSS CDMA system, each user is assigned an orthogonal complementary code group; the user's data bits are spread by the given codes and overlapped, and the code sequences are transmitted over multiple carriers. However, the offset stacked spread sequences are multilevel, and the number of symbol levels increases as the spreading efficiency is increased. When the OSS sequence is transmitted with MPSK mapping, the signal constellation becomes dense and the system is easily affected by channel impairments. In this paper, we propose a level clipping scheme applied to the OSS sequence before MPSK modulation. Simulations were carried out to investigate the BER performance of the OCC-OSS CDMA system in an AWGN environment. The results show that the proposed scheme outperforms the scheme without level clipping.
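
The sketch below illustrates the general offset stacked spreading idea and a level clipping step: the spread chip sequences of successive bits are overlapped with a chip offset, producing a multilevel sequence that is clipped before MPSK mapping. The parameters and the clipping rule are simplifications, not the paper's exact scheme.

```python
import numpy as np

def offset_stacked_spread(bits, code, offset=1):
    """Offset stacked spreading: each data bit is spread by the code and the
    spread chips of successive bits are overlapped with a chip offset, so the
    transmitted sequence becomes multilevel.

    bits  : array of +/-1 data symbols
    code  : array of +/-1 spreading chips
    offset: chip offset between successive overlapped bits
    """
    n = len(code)
    length = (len(bits) - 1) * offset + n
    seq = np.zeros(length)
    for k, b in enumerate(bits):
        seq[k * offset : k * offset + n] += b * np.asarray(code)
    return seq

def clip_levels(seq, max_level):
    """Level clipping applied before MPSK mapping to keep the constellation compact."""
    return np.clip(seq, -max_level, max_level)
```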

An Architecture of UPnP Bridge for Non-IP Devices with Heterogeneous Interfaces (다양한 Non-IP 장치를 위한 UPnP 브리지 구조)

  • Kang, Jeong-Seok;Choi, Yong-Soon;Park, Hong-Seong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.12B
    • /
    • pp.779-789
    • /
    • 2007
  • This paper presents an architecture of a UPnP bridge for interconnecting non-IP devices with heterogeneous network interfaces to UPnP devices on UPnP networks. The proposed UPnP bridge provides a Virtual UPnP Device that performs a generic UPnP device's functionalities on behalf of a non-IP device. This paper defines three types of descriptions, the Device Description, the Message Field Description, and the Extended UPnP Service Description, in order to reduce the effort required to connect a non-IP device with a new interface or message format to a UPnP network. With these three types of descriptions and the message conversion module, developers of non-IP devices can easily connect their devices to a UPnP network without additional programming, so a UPnP control point can control non-IP devices as generic UPnP devices. Experiments performed on a test bed consisting of a UPnP network, the proposed bridge, and non-IP devices with CAN and RS232 interfaces validate the proposed architecture.
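
As a rough illustration of the Message Field Description idea, the sketch below unpacks a raw frame from a non-IP device into named fields that a Virtual UPnP Device could then expose as state variables. The device type, field layout, and names are hypothetical, not taken from the paper.

```python
import struct

# Hypothetical field descriptions: device type -> (binary layout, field names).
MESSAGE_FIELD_DESCRIPTIONS = {
    "can_temperature_sensor": (">Bh", ("node_id", "temperature_x10")),
}

def convert_frame(device_type, raw_frame):
    """Unpack a raw non-IP frame into named fields using its field description."""
    fmt, names = MESSAGE_FIELD_DESCRIPTIONS[device_type]
    values = struct.unpack(fmt, raw_frame)
    return dict(zip(names, values))

# Example: a 3-byte CAN frame (node 0x01, temperature 23.5 C encoded as 235)
fields = convert_frame("can_temperature_sensor", b"\x01\x00\xeb")
```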

A Classification Model Supporting Dynamic Features of Product Databases (상품 데이터베이스의 동적 특성을 지원하는 분류 모형)

  • Kim Dongkyu;Lee Sang-goo;Choi Dong-Hoon
    • The KIPS Transactions:PartD
    • /
    • v.12D no.1 s.97
    • /
    • pp.165-178
    • /
    • 2005
  • A product classification scheme is the foundation on which product databases are designed and plays a central role in almost all aspects of the management and use of product information. It needs to meet diverse user views to support efficient and convenient use of product information. It needs to change and evolve frequently, without breaking consistency, as new products are introduced, existing products are discontinued, classes are reorganized, and classes are specialized. It also needs to be merged and mapped with other classification schemes without information loss when B2B transactions occur. To satisfy these requirements, a classification scheme should be dynamic enough to accommodate such changes within an acceptable time and cost. The existing classification schemes widely used today, such as UNSPSC and eCl@ss, however, have many limitations in meeting these requirements for dynamic classification. Product information carries rich semantics, such as class attributes like material, time, and place, as well as integrity constraints. In this paper, we analyze the dynamic features of product databases and the limitations of existing code-based classification schemes, and we describe the semantic classification model proposed in [1], which satisfies the requirements for the dynamic features of product databases. It provides a means to explicitly and formally express richer semantics for product classes and organizes class relationships into a graph.

Skeleton Code Generation for Transforming an XML Document with DTD using Metadata Interface (메타데이터 인터페이스를 이용한 DTD 기반 XML 문서 변환기의 골격 원시 코드 생성)

  • Choe Gui-Ja;Nam Young-Kwang
    • The KIPS Transactions:PartD
    • /
    • v.13D no.4 s.107
    • /
    • pp.549-556
    • /
    • 2006
  • In this paper, we propose a system for generating skeleton programs that directly transform an XML document into another document whose structure is defined by a target DTD, in a GUI environment. With the generated code, users can easily update the program or insert their own code so that they can convert the document the way they want and connect it with other classes or library files. Since most currently available code generation systems or methods for transforming XML documents use XSLT or XQuery, it is very difficult or impossible for users to manipulate the source code for further updates or refinements. Because the code generated by our system follows the XPaths of the target DTD, the resulting code is quite readable. The code generation procedure is simple: once the user maps the related elements, represented as trees in the GUI interface, the source document is transformed into the target document and its corresponding Java source program is generated, where the DTD is given or extracted automatically from the XML documents by parsing them. The mappings are classified into 1:1, 1:N, and N:1 according to the structure and semantics of the DTD elements. The functions for changing the structure of elements designated by the user are incorporated into the metadata interface. A real-world example of transforming articles written in XML files into a bibliographic XML document is shown along with the transformed result and its generated code.
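
The paper's generator emits Java skeleton code; purely to illustrate what a simple 1:1 element mapping step does, the sketch below (in Python, with hypothetical paths and tags) copies values found at source paths into newly created target elements, leaving room for user code in the 1:N and N:1 cases.

```python
import xml.etree.ElementTree as ET

# Hypothetical 1:1 mapping: source path (relative XPath) -> target element tag.
MAPPING_1_TO_1 = {
    "./title": "ArticleTitle",
    "./author/name": "AuthorName",
}

def transform(source_root, target_root_tag="Bibliography"):
    """Build a target document by copying values along the mapped paths."""
    target_root = ET.Element(target_root_tag)
    for src_path, dst_tag in MAPPING_1_TO_1.items():
        for node in source_root.findall(src_path):
            dst = ET.SubElement(target_root, dst_tag)
            dst.text = node.text  # user code for 1:N or N:1 cases would go here
    return target_root
```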