• Title/Summary/Keyword: Mapping Method


Mobile Cloud Context-Awareness System based on Jess Inference and Semantic Web RL for Inference Cost Decline (추론 비용 감소를 위한 Jess 추론과 시멘틱 웹 RL기반의 모바일 클라우드 상황인식 시스템)

  • Jung, Se-Hoon;Sim, Chun-Bo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.1
    • /
    • pp.19-30
    • /
    • 2012
  • A context-aware service provides useful information to users by recognizing, through computing and communication, the surroundings of the people receiving the service and by making decisions autonomously. However, a CAS (Context Awareness System) in the current mobile environment suffers from limited context-awareness processing capacity because of restricted device functions, small memory space, and rising inference cost. In this paper, we propose a mobile cloud context system built on Google App Engine, a PaaS (Platform as a Service), so that context services can run on various mobile devices without being tied to a specific platform. The inference design of the proposed system combines a knowledge-based framework for semantic inference, expressed with SWRL rules and an OWL ontology, with the rule-based Jess inference engine. To overcome the shortcomings of the SPARQL query-based reasoning used in earlier semantic search, the system shortens context-service reasoning time by mapping SWRL reasoning onto the Jess engine: the Class, Property, and Individual values expressed in SWRL are passed to the Jess reasoning engine through the JessTab plug-in.
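As a rough illustration of the mapping step described in this abstract, the sketch below flattens OWL Class/Property/Individual values into Jess-style ordered facts and pairs them with a rule, approximating what the JessTab bridge automates between an OWL/SWRL knowledge base and the Jess engine. The triples, names, and rule are hypothetical, not the paper's actual knowledge base.

```python
# Hypothetical sketch: turning ontology triples into Jess facts plus one rule.
# The names (user01, hasTemperature, needs-cooling, ...) are illustrative only.

ontology_triples = [
    # (individual, property, value) assumed to come from the OWL ontology
    ("user01", "hasLocation", "livingRoom"),
    ("user01", "hasActivity", "watchingTV"),
    ("livingRoom", "hasTemperature", "28"),
]

def to_jess_fact(subject: str, predicate: str, obj: str) -> str:
    """Render one ontology triple as a Jess assert statement (ordered fact)."""
    return f"(assert (triple {subject} {predicate} {obj}))"

# A hypothetical Jess rule standing in for a SWRL rule such as
#   hasTemperature(?r, ?t) ^ swrlb:greaterThan(?t, 26) -> needsCooling(?r)
JESS_RULE = """
(defrule needs-cooling
  (triple ?room hasTemperature ?t&:(> (integer ?t) 26))
  =>
  (assert (triple ?room needsCooling TRUE)))
"""

if __name__ == "__main__":
    for triple in ontology_triples:
        print(to_jess_fact(*triple))
    print(JESS_RULE)
```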

Application of Multispectral Remotely Sensed Imagery for the Characterization of Complex Coastal Wetland Ecosystems of southern India: A Special Emphasis on Comparing Soft and Hard Classification Methods

  • Shanmugam, Palanisamy;Ahn, Yu-Hwan;Sanjeevi, Shanmugam
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.3
    • /
    • pp.189-211
    • /
    • 2005
  • This paper compares the recently evolved soft classification method based on Linear Spectral Mixture Modeling (LSMM) with the traditional hard classification methods based on the Iterative Self-Organizing Data Analysis (ISODATA) and Maximum Likelihood Classification (MLC) algorithms, in order to achieve appropriate results for mapping, monitoring and preserving valuable coastal wetland ecosystems of southern India using Indian Remote Sensing Satellite (IRS) 1C/1D LISS-III and Landsat-5 Thematic Mapper image data. The ISODATA and MLC methods were applied to these satellite image data to produce maps of 5, 10, 15 and 20 wetland classes for each of three contrasting coastal wetland sites: Pitchavaram, Vedaranniyam and Rameswaram. The accuracy of the derived classes was assessed with the simplest descriptive statistic, overall accuracy, and a discrete multivariate technique, KAPPA accuracy. ISODATA classification produced maps with poorer accuracy than MLC classification. However, overall accuracy and KAPPA accuracy decreased systematically when a larger number of classes was derived from the IRS-1C/1D and Landsat-5 TM imagery by ISODATA and MLC. Two principal factors lowered the classification accuracy: spectral overlap/confusion and inadequate spatial resolution of the sensors. Of the two, the limited instantaneous field of view (IFOV) of these sensors, which produces numerous mixture pixels (mixels) in the image, was the greater obstacle to deriving accurate wetland cover types, in spite of the increasing spatial resolution of new-generation Earth Observation Sensors (EOS). To improve the classification accuracy, a soft classification method based on Linear Spectral Mixture Modeling (LSMM) was applied to model the spectral mixture and classify the IRS-1C/1D LISS-III and Landsat-5 TM imagery. This method first considers the number of reflectance end-members that form the scene spectra, then determines their nature, and finally decomposes the spectra into those end-members. To evaluate the LSMM areal estimates, the resulting end-member fractions were compared with the normalized difference vegetation index (NDVI), ground truth data, and the estimates derived from the traditional hard classifier (MLC). The findings revealed that NDVI values and vegetation fractions were positively correlated ($r^2$ = 0.96, 0.95 and 0.92 for Rameswaram, Vedaranniyam and Pitchavaram, respectively) and NDVI and soil fraction values were negatively correlated ($r^2$ = 0.53, 0.39 and 0.13), indicating the reliability of the sub-pixel classification. Compared with ground truth data, the precision of LSMM was 92% for the moisture fraction and 96% for the soil fraction. The LSMM in general seems well suited to locating small wetland habitats that occur as sub-pixel inclusions, and to representing continuous gradations between different habitat types.
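To make the LSMM idea concrete, here is a minimal sketch that unmixes a single pixel spectrum into end-member fractions with ordinary least squares; the end-member spectra, band count, and the simple clip-and-renormalize constraint handling are illustrative placeholders rather than the study's calibrated end-members.

```python
import numpy as np

# Illustrative linear spectral mixture modelling (LSMM) for one pixel.
# Columns of `endmembers` are hypothetical vegetation/soil/water spectra.
endmembers = np.array([
    [0.05, 0.20, 0.02],   # band 1 reflectance of (veg, soil, water)
    [0.08, 0.25, 0.03],   # band 2
    [0.45, 0.30, 0.02],   # band 3 (NIR-like: vegetation bright, water dark)
    [0.20, 0.35, 0.01],   # band 4
])

pixel = np.array([0.20, 0.18, 0.33, 0.24])   # observed mixed spectrum

# Unconstrained least squares: pixel ≈ endmembers @ fractions
fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)

# Crude post-hoc constraints: clip negatives, renormalise to sum to one.
fractions = np.clip(fractions, 0, None)
fractions = fractions / fractions.sum()

print(dict(zip(["vegetation", "soil", "water"], fractions.round(3))))
```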

Fire Severity Mapping Using a Single Post-Fire Landsat 7 ETM+ Imagery (단일 시기의 Landsat 7 ETM+ 영상을 이용한 산불피해지도 작성)

  • 원강영;임정호
    • Korean Journal of Remote Sensing
    • /
    • v.17 no.1
    • /
    • pp.85-97
    • /
    • 2001
  • The KT (Kauth-Thomas) and IHS (Intensity-Hue-Saturation) transformation techniques were introduced and compared for investigating fire-scarred areas with a single post-fire Landsat 7 ETM+ image. This study consists of two parts. First, using only geometrically corrected imagery, we examined whether different levels of fire damage could be detected by a simple slicing method applied to the image enhanced by the IHS transform. Because the spectral distributions of the classes overlapped on each IHS component, the simple slicing method did not appear appropriate for delineating areas of different fire severity. Second, the image, rectified both radiometrically and topographically, was enhanced by the KT transformation and the IHS transformation, respectively, and the resulting images were classified by the maximum likelihood method. Cross-validation was performed to compensate for the relatively small set of ground truth data. The results showed that the KT transformation produced better accuracy than the IHS transformation. In addition, the KT feature spaces and the spectral distribution of the IHS components were analyzed graphically. This study has shown that, for detecting different levels of fire severity, the KT transformation reflects ground physical conditions better than the IHS transformation.
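The KT (tasseled cap) transform mentioned above is a fixed linear combination of the spectral bands; the sketch below shows that structure with placeholder coefficients (the real brightness/greenness/wetness coefficients are sensor-specific and should be taken from the published ETM+ tables).

```python
import numpy as np

# Kauth-Thomas (tasseled cap) transform as a linear band combination.
# The 3x6 coefficient matrix is a PLACEHOLDER, not the published ETM+ values.
KT_COEFFS = np.array([
    [0.36, 0.39, 0.51, 0.54, 0.43, 0.24],      # "brightness" (illustrative)
    [-0.21, -0.21, -0.41, 0.90, 0.01, -0.16],  # "greenness"  (illustrative)
    [0.14, 0.18, 0.33, 0.34, -0.62, -0.42],    # "wetness"    (illustrative)
])

def kauth_thomas(bands: np.ndarray) -> np.ndarray:
    """bands: (6, rows, cols) reflectance stack -> (3, rows, cols) KT features."""
    flat = bands.reshape(6, -1)
    return (KT_COEFFS @ flat).reshape(3, *bands.shape[1:])

# Tiny synthetic 6-band scene just to show the shapes involved.
scene = np.random.rand(6, 4, 4)
brightness, greenness, wetness = kauth_thomas(scene)
print(brightness.shape, greenness.mean(), wetness.mean())
```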

Photochemical Reflectance Index (PRI) Mapping using Drone-based Hyperspectral Image for Evaluation of Crop Stress and its Application to Multispectral Imagery (작물 스트레스 평가를 위한 드론 초분광 영상 기반 광화학반사지수 산출 및 다중분광 영상에의 적용)

  • Na, Sang-il;Park, Chan-won;So, Kyu-ho;Ahn, Ho-yong;Lee, Kyung-do
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.5_1
    • /
    • pp.637-647
    • /
    • 2019
  • The detection of crop stress is an important issue for the accurate assessment of yield decline. The photochemical reflectance index (PRI) was developed as a remotely sensed indicator of light use efficiency (LUE). The PRI has been tested for crop stress detection, and a number of studies have demonstrated its feasibility. However, only a few studies have focused on deriving PRI from remote sensing imagery, because monitoring PRI with drones and satellites is hampered by the low spectral resolution of the captured imagery. To estimate PRI from a multispectral sensor, we propose a band fusion method using adjacent bands. The method was applied to drone-based hyperspectral and multispectral imagery, and the estimated PRI explained 79% of the original PRI. Time series analyses also showed that the two PRI datasets (drone-based and SRS sensor) had very similar temporal variations. These results indicate that PRI derived from multispectral imagery using band fusion can serve as a new method for evaluating crop stress.
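For reference, PRI is conventionally computed from narrow bands near 531 nm and 570 nm as (R531 - R570) / (R531 + R570). The sketch below shows one simple way a missing 531 nm band might be approximated from two adjacent multispectral bands by linear weighting; the band centres, weights, and reflectance values are illustrative assumptions, not the paper's calibrated fusion.

```python
import numpy as np

def pri(r531: np.ndarray, r570: np.ndarray) -> np.ndarray:
    """Photochemical reflectance index from reflectance near 531 and 570 nm."""
    return (r531 - r570) / (r531 + r570)

def fuse_adjacent_bands(r_low, low_nm, r_high, high_nm, target_nm=531.0):
    """Approximate a missing band by linear interpolation of two adjacent bands."""
    w = (target_nm - low_nm) / (high_nm - low_nm)
    return (1.0 - w) * r_low + w * r_high

# Hypothetical multispectral reflectances for two pixels.
green_a = np.array([0.061, 0.058])   # band centred near 510 nm (assumed)
green_b = np.array([0.072, 0.070])   # band centred near 560 nm (assumed)
r570    = np.array([0.075, 0.074])   # reflectance near 570 nm (assumed)

r531_est = fuse_adjacent_bands(green_a, 510.0, green_b, 560.0)
print(pri(r531_est, r570))
```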

Development of KBIMS Architectural and Structural Element Library and IFC Property Name Conversion Methodology (KBIMS 건축 및 구조 부재 라이브러리 및 IFC 속성명 변환 방법 개발)

  • Kim, Seonwoo;Kim, Sunjung;Kim, Honghyun;Bae, Kiwoo
    • Journal of the Korea Institute of Building Construction
    • /
    • v.20 no.6
    • /
    • pp.505-514
    • /
    • 2020
  • This research introduces a method for developing the Korea BIM Standard (KBIMS) architectural and structural element library and a methodology for converting KBIMS IFC property names that contain special characters. Diverse BIM tools are used in projects, yet BIM library research has offered little diversity in BIM tool selection. This research describes how a library of 793 elements in twelve categories, containing geometric and numerical data, was generated in CATIA V6. KBIMS uses its own property naming system, which made it difficult to load the names into the ENOVIA IFC database. Three mapping methods for the special naming characters were developed, and the ASCII code method was adopted. In addition, a converter prototype was developed to search for the ASCII codes and replace them with the original KBIMS IFC property names. The methodology was verified by exporting 2,443 entities without data loss in a sample model conversion test. This research provides a wider choice of BIM tools for applying KBIMS and should help reduce data interoperability issues in projects. The developed library will be open to the public, although continuous updating and maintenance will be necessary.
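The ASCII code idea can be pictured with a small round-trip sketch: characters that the target database rejects are replaced by their character codes and later converted back. The `_0xNN_` delimiter format and the sample property name are assumptions for illustration, not the actual KBIMS/ENOVIA encoding scheme.

```python
import re

ALLOWED = re.compile(r"[A-Za-z0-9]")   # characters assumed safe for the database

def encode_name(name: str) -> str:
    """Replace each disallowed character with its (hex) character code."""
    return "".join(c if ALLOWED.match(c) else f"_0x{ord(c):02X}_" for c in name)

def decode_name(name: str) -> str:
    """Restore the original property name from the encoded form."""
    return re.sub(r"_0x([0-9A-Fa-f]+)_", lambda m: chr(int(m.group(1), 16)), name)

original = "부재두께(mm)"              # hypothetical KBIMS property name
encoded = encode_name(original)        # safe to store where special characters fail
assert decode_name(encoded) == original
print(encoded)
```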

Application study of random forest method based on Sentinel-2 imagery for surface cover classification in rivers - A case of Naeseong Stream - (하천 내 지표 피복 분류를 위한 Sentinel-2 영상 기반 랜덤 포레스트 기법의 적용성 연구 - 내성천을 사례로 -)

  • An, Seonggi;Lee, Chanjoo;Kim, Yongmin;Choi, Hun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.5
    • /
    • pp.321-332
    • /
    • 2024
  • Understanding the status of surface cover in riparian zones is essential for river management and flood disaster prevention. Traditional survey methods rely on expert interpretation of vegetation through vegetation mapping or indices. However, these methods are limited in their ability to accurately reflect dynamically changing river environments. Against this backdrop, this study applied the Random Forest method to satellite imagery to assess the distribution of vegetation in rivers over multiple years, focusing on the Naeseong Stream as a case study. Remote sensing data from Sentinel-2 imagery were combined with ground truth data on the Naeseong Stream surface cover in 2016. The Random Forest machine learning algorithm was used to extract and train on 1,000 samples per surface cover class from ten predetermined sampling areas, followed by validation. A sensitivity analysis, an annual surface cover analysis, and an accuracy assessment were conducted to evaluate the method's applicability. The results showed an accuracy of 85.1% based on the validation data. The sensitivity analysis indicated the highest efficiency with 30 trees, 800 samples, and the downstream river section. The surface cover analysis accurately reflected the actual river environment. The accuracy analysis identified 14.9% boundary and internal errors, with high accuracy observed in six categories, excluding scattered and herbaceous vegetation. Although this study focused on a single river, applying the surface cover classification method to multiple rivers is necessary to obtain more accurate and comprehensive data.
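A minimal sketch of the classification workflow described above, with synthetic arrays standing in for the Sentinel-2 pixel spectra and the 2016 ground truth; the class names are placeholders, and 30 trees mirror the sensitivity-analysis optimum reported in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_class, n_bands = 1000, 10
classes = ["water", "bare_land", "herbaceous", "woody"]   # placeholder labels

# Synthetic band values per class stand in for sampled Sentinel-2 pixels.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(n_per_class, n_bands))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

train = rng.random(len(y)) < 0.7                 # simple train/validation split
clf = RandomForestClassifier(n_estimators=30, random_state=0)
clf.fit(X[train], y[train])

print("validation accuracy:", accuracy_score(y[~train], clf.predict(X[~train])))
```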

Comparative Study of Indocyanine Green Intravenous Injection and the Inflation-Deflation Method for Assessing Resection Margins in Segmentectomy for Lung Cancer: A Single-Center Retrospective Study

  • Seon Yong Bae;Taeyoung Yun;Ji Hyeon Park;Bubse Na;Kwon Joong Na;Samina Park;Hyun Joo Lee;In Kyu Park;Chang Hyun Kang;Young Tae Kim
    • Journal of Chest Surgery
    • /
    • v.57 no.5
    • /
    • pp.450-457
    • /
    • 2024
  • Background: The inflation-deflation (ID) method has long been the standard for intraoperative margin assessment in segmentectomy. However, with advancements in vision technology, the use of near-infrared mapping with indocyanine green (ICG) has become increasingly common. This study was conducted to compare the perioperative outcomes and resection margins achieved using these methods. Methods: This retrospective study included patients who underwent direct segmentectomy for clinical stage I lung cancer between January 2018 and September 2022. We compared perioperative factors, including bronchial and parenchymal resection margins, according to the margin assessment method and the type of segmentectomy performed. Since the ICG approach was adopted in April 2021, we also examined a recent subgroup of patients treated from then onward. Results: A total of 319 segmentectomies were performed. ID and ICG were utilized for 261 (81.8%) and 58 (18.2%) patients, respectively. Following April 2021, 61 patients (51.3%) were treated with ID, while 58 (48.7%) received ICG. We observed no significant difference in resection margins between ID and ICG for bronchial (2.7 cm vs. 2.3 cm, p=0.07) or parenchymal (2.5 cm vs. 2.3 cm, p=0.46) margins. Additionally, the length of hospitalization and the complication rate were comparable between groups. Analysis of the recent subgroup confirmed these findings, showing no significant differences in resection margins (bronchial: 2.6 cm vs. 2.3 cm, p=0.25; parenchymal: 2.4 cm vs. 2.3 cm, p=0.75), length of hospitalization, or complication rate. Conclusion: The perioperative outcomes and resection margins achieved using ID and ICG were comparable, suggesting that both methods can safely guide segmentectomy procedures.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, the increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data, and data analysis technology is rapidly becoming popular. Attempts to acquire insights through data analysis have also been continuously increasing, so big data analysis will become even more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and the technology is spreading, so analysis is increasingly expected to be performed by the demanders themselves. Interest in various kinds of unstructured data, especially text data, is also continually increasing. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is regarded as a very useful technique in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the whole collection must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to a large number of documents, and it causes a scalability problem: processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling on a large number of documents with limited system resources and improves processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed at each location without first being combined. However, despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified for each unit, but global topics cannot. Second, a method for measuring the accuracy of such an approach needs to be established: assuming the global topics are the ideal answer, the deviation of each local topic from its global topic must be measured. Because of these difficulties, this approach has been studied far less than other topic modeling methods. In this paper, we propose a topic modeling approach that addresses these two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirmed that the proposed methodology produces results similar to topic modeling on the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
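A toy sketch of the global-to-local topic mapping idea: fit a topic model on each local set and on a reduced global set of delegate documents, then link each local topic to its most similar global topic by cosine similarity of the topic-word distributions. The corpus, the way delegates are chosen, and the use of scikit-learn's LDA are illustrative assumptions, not the paper's exact procedure.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Two tiny local document sets; one delegate per set forms the reduced global set.
local_sets = [
    ["stock market price rises", "market investors buy more stock"],
    ["new smartphone chip released", "chip makers boost smartphone supply"],
]
reduced_global_set = [docs[0] for docs in local_sets]   # naive delegate choice

vec = CountVectorizer().fit([d for docs in local_sets for d in docs])

global_lda = LatentDirichletAllocation(n_components=2, random_state=0)
global_lda.fit(vec.transform(reduced_global_set))

for i, docs in enumerate(local_sets):
    local_lda = LatentDirichletAllocation(n_components=2, random_state=0)
    local_lda.fit(vec.transform(docs))
    # Map each local topic to the closest global (RGS) topic.
    sims = cosine_similarity(local_lda.components_, global_lda.components_)
    print(f"local set {i}: local topic -> global topic {sims.argmax(axis=1)}")
```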

PCA-based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics
    • /
    • v.14 no.4
    • /
    • pp.211-217
    • /
    • 2003
  • Principal component analysis (PCA) is a well-known data analysis method that is useful in linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance, and it is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g. neurons). PCA provides, in the mean-squared error sense, an optimal linear mapping of the signals that are spread across a group of variables. These signals are concentrated into the first few components, while the noise, i.e. the variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings. Because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached, ganglion cell side down, to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated and insulated golden connection lanes terminating in an 8${\times}$8 array (spacing 200 $\mu$m, electrode diameter 30 $\mu$m) in the center of the plate. The MEA 60 system was used for recording retinal ganglion cell activity. The action potentials on each channel were sorted with an off-line analysis tool. Spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all waveforms of each channel and all n time points in each waveform, and several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform classification is effective as an initial approach to spike sorting.
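A compact sketch of the PC1/PC2 spike-sorting step described above: project each detected spike waveform onto its first two principal components and cluster in that plane. The synthetic waveforms and the use of k-means stand in for the real MEA recordings and the off-line sorting tool.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 40)                       # 40 samples per spike waveform

# Two hypothetical spike templates plus noise, standing in for recorded spikes.
unit_a = -1.0 * np.exp(-((t - 0.3) ** 2) / 0.005)
unit_b = -0.6 * np.exp(-((t - 0.5) ** 2) / 0.010)
spikes = np.vstack([unit_a + 0.05 * rng.standard_normal(t.size) for _ in range(100)] +
                   [unit_b + 0.05 * rng.standard_normal(t.size) for _ in range(100)])

pcs = PCA(n_components=2).fit_transform(spikes)     # PC1, PC2 for every waveform
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)
print(np.bincount(labels))                          # waveforms assigned per cluster
```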


Design and Performance Evaluation of Selective DFT Spreading Method for PAPR Reduction in Uplink OFDMA System (OFDMA 상향 링크 시스템에서 PAPR 저감을 위한 선택적 DFT Spreading 기법의 설계와 성능 평가)

  • Kim, Sang-Woo;Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.18 no.3 s.118
    • /
    • pp.248-256
    • /
    • 2007
  • In this paper, we propose a selective DFT spreading method to solve the high PAPR problem in an uplink OFDMA system. A selection mechanism is added to DFT spreading, effectively combining it with the SLM method. However, to minimize the increase in computational complexity, the proposed method, unlike common SLM, uses only one DFT spreading block. After the DFT, several copy branches are generated by multiplying the output with different matrices; each matrix is the linear transform of a phase rotation applied in front of the DFT block, and applying it has much lower computational complexity than an additional DFT. For the simulation, we assume a 512-point IFFT, 300 effective sub-carriers, 1/4 or 1/3 of the sub-carriers allowed per user, and QPSK modulation. The simulation results show that with 4 copy branches the proposed method achieves about 5.2 dB of PAPR reduction, which is about 1.8 dB better than the common DFT spreading method and 0.95 dB better than common SLM using 32 copy branches; even with 2 copy branches it outperforms SLM with 32 copy branches. In this comparison, the proposed method has 91.79% lower complexity than SLM with 32 copy branches at a similar PAPR reduction performance, demonstrating very good performance. Similar performance can also be expected when all sub-carriers are allocated to one user, as in OFDM.
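A simplified sketch of the selection idea in this abstract: spread a QPSK block with a single DFT, form several candidate branches, and transmit the candidate with the lowest PAPR. For brevity the candidates here are generated by element-wise phase rotations applied after the DFT, which is a simplification of the paper's post-DFT matrices derived from pre-DFT phase rotations; all sizes are illustrative rather than the 512-point/300-subcarrier setup used in the paper.

```python
import numpy as np

def papr_db(x: np.ndarray) -> float:
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
n_data, n_ifft, n_branches = 64, 256, 4

# QPSK block, spread once by a DFT (the single DFT-spreading block).
qpsk = (rng.choice([-1, 1], n_data) + 1j * rng.choice([-1, 1], n_data)) / np.sqrt(2)
spread = np.fft.fft(qpsk) / np.sqrt(n_data)

best_papr, best_signal = np.inf, None
for _ in range(n_branches):
    # Simplified candidate generation: element-wise phase rotation after the DFT.
    rotation = np.exp(1j * rng.choice([0.0, np.pi / 2, np.pi, 3 * np.pi / 2], n_data))
    freq = np.zeros(n_ifft, dtype=complex)
    freq[:n_data] = spread * rotation               # localized subcarrier mapping
    time_signal = np.fft.ifft(freq) * np.sqrt(n_ifft)
    if papr_db(time_signal) < best_papr:
        best_papr, best_signal = papr_db(time_signal), time_signal

print(f"selected branch PAPR: {best_papr:.2f} dB")
```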