• Title/Summary/Keyword: 공간시스템 (spatial systems)

Availability Assessment of Single Frequency Multi-GNSS Real Time Positioning with the RTCM-State Space Representation Parameters (RTCM-SSR 보정요소 기반 1주파 Multi-GNSS 실시간 측위의 효용성 평가)

  • Lee, Yong-Chang;Oh, Seong-Jong
    • Journal of Cadastre & Land InformatiX
    • /
    • v.50 no.1
    • /
    • pp.107-123
    • /
    • 2020
  • With the recent stabilization of the multi-GNSS infrastructure, multi-GNSS has proven effective in improving positioning accuracy in various industrial sectors. In this study, given that single-frequency (SF) GNSS receivers are widely used because of their low cost, we evaluate the effectiveness of SF real-time point positioning (SF-RT-PP) based on four multi-GNSS surveying methods with RTCM-SSR correction streams in static and kinematic modes, and derive the remaining challenges. Among the SSR correction streams applied, the CNES stream gave better 2D coordinate results than the others. The SF-RT-PP results using SF signals from multi-GNSS revealed a common cause of large deviations in the height component, and confirmed the importance of signal-bias correction for combinations of different satellite signal types and of an ionospheric delay compensation algorithm based on undifferenced and uncombined observations. In addition, the improved multi-GNSS infrastructure was confirmed to allow SF-RT-SPP surveying with only one of the four GNSS constellations. In particular, for code-based SF-RT-SPP measurements using SF signals from GPS satellites only, the difference between applying broadcast ephemeris and SSR corrections for satellite orbits/clocks was small, whereas for ionospheric delay compensation, SBAS correction information provided more than twice the accuracy of the Klobuchar model. With both the BDS and Galileo constellations fully deployed by the end of 2020 alongside GPS and GLONASS, greater benefits from multi-GNSS integration can be expected. In particular, if real-time ionospheric correction services reflecting regional characteristics and SSR corrections reflecting atmospheric characteristics are provided in real time, SF-RT-PPP survey technology based on multi-GNSS is expected to be widely used, creating demand in various industrial sectors.
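
For readers unfamiliar with the terms compared above, the short sketch below (not from the paper; all names and numbers are illustrative) shows where the ionospheric, tropospheric, clock, and signal-bias terms enter a code-based single-frequency SPP solution; orbit corrections, by contrast, adjust the computed satellite position rather than the pseudorange itself.

```python
# Minimal illustrative sketch of the correction terms in code-based
# single-frequency standard point positioning (SPP). The abstract compares
# broadcast vs. SSR clock corrections and Klobuchar vs. SBAS ionospheric
# corrections; this only shows where each term enters the corrected range.

C = 299_792_458.0  # speed of light [m/s]

def corrected_pseudorange(p_obs, sat_clock_bias_s, iono_delay_m, tropo_delay_m,
                          signal_bias_m=0.0):
    """Apply standard correction terms to a raw code pseudorange [m].

    p_obs            : raw measured pseudorange [m]
    sat_clock_bias_s : satellite clock bias [s] (broadcast ephemeris or SSR)
    iono_delay_m     : slant ionospheric delay [m] (e.g. Klobuchar or SBAS)
    tropo_delay_m    : slant tropospheric delay [m] (e.g. Saastamoinen model)
    signal_bias_m    : inter-signal/code bias [m], relevant when mixing
                       different signal types across constellations
    """
    return p_obs + C * sat_clock_bias_s - iono_delay_m - tropo_delay_m - signal_bias_m

# Made-up example: a 5 m ionospheric delay mis-modelled by half shifts the
# corrected range by 2.5 m, which maps largely into the height component
# of a single-frequency solution.
rho = corrected_pseudorange(22_345_678.9, 1.2e-4, iono_delay_m=5.0, tropo_delay_m=2.3)
print(f"corrected range: {rho:.2f} m")
```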

Developing Korean Forest Fire Occurrence Probability Model Reflecting Climate Change in the Spring of 2000s (2000년대 기후변화를 반영한 봄철 산불발생확률모형 개발)

  • Won, Myoungsoo;Yoon, Sukhee;Jang, Keunchang
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.18 no.4
    • /
    • pp.199-207
    • /
    • 2016
  • This study was conducted to develop a forest fire occurrence model based on meteorological characteristics for practical forecasting of forest fire danger, reflecting the climate change observed during the 2000s. Forest fire occurrence in South Korea is strongly influenced by humidity, wind speed, temperature, and precipitation. To forecast forest fire occurrence effectively, we developed a forest fire danger rating model using the weather factors associated with forest fires in the 2000s. Forest fire occurrence patterns were analyzed statistically to develop a forest fire danger rating index, using time-series weather data collected from 76 meteorological observation stations over the 11 years from 2000 to 2010. The national forest fire occurrence probability model was developed by logistic regression analysis of forest fire occurrence data and meteorological variables. Nine probability models were developed, one for each of nine provinces including Jeju Island. The statistical analysis shows that the logistic models (p<0.05) depend strongly on effective humidity, relative humidity, temperature, wind speed, and rainfall. Verification showed that the predicted probability for randomly selected fires ranged from 0.687 to 0.981, indicating relatively high accuracy of the developed models. These findings may benefit policy makers in South Korea in the prevention of forest fires.
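
As a rough illustration of the modelling step described above, the sketch below fits a logistic regression of daily fire occurrence on weather variables. The synthetic data, variable ranges, and coefficients are placeholders, not the study's 2000-2010 observation records.

```python
# Hedged sketch: logistic regression relating daily fire occurrence (0/1)
# to weather variables, in the spirit of the model described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(10, 90, n),   # effective humidity [%]
    rng.uniform(10, 90, n),   # relative humidity [%]
    rng.uniform(-5, 30, n),   # temperature [deg C]
    rng.uniform(0, 15, n),    # wind speed [m/s]
    rng.uniform(0, 30, n),    # rainfall [mm]
])
# Synthetic labels: drier, warmer, windier, rain-free days are more fire-prone.
logit = -1.0 - 0.04 * X[:, 0] + 0.05 * X[:, 2] + 0.15 * X[:, 3] - 0.1 * X[:, 4]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Fire occurrence probability for one station-day of weather inputs.
today = np.array([[35.0, 30.0, 18.0, 6.0, 0.0]])
print("P(fire) =", model.predict_proba(today)[0, 1])
```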

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.6 no.4
    • /
    • pp.1-7
    • /
    • 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a possible alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance at which the user starts to perceive the object's internal depth. Also, during interaction, a 3D representation is used regardless of the viewing distance, if one exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
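
The representation-switching rule summarised above can be expressed in a few lines of Python; this is an illustrative sketch, not the authors' WorldToolKit extension, and the thresholds and asset names are made up.

```python
# Illustrative sketch of distance- and interaction-based representation
# selection for a mixed 3D / image-based scene graph node.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MixedLODNode:
    model3d: Optional[str] = None       # handle to a 3D model, if any
    billboard: Optional[str] = None     # handle to a billboard / sprite
    env_map: Optional[str] = None       # handle to an environment map
    depth_threshold: float = 10.0       # distance at which internal depth
                                        # becomes perceptible (switch point)
    far_threshold: float = 100.0        # beyond this, use the environment map

    def select(self, view_distance: float, interacting: bool) -> Optional[str]:
        if interacting and self.model3d is not None:
            return self.model3d         # interaction always falls back to true 3D
        if view_distance < self.depth_threshold and self.model3d is not None:
            return self.model3d
        if view_distance < self.far_threshold and self.billboard is not None:
            return self.billboard
        return self.env_map or self.billboard or self.model3d

node = MixedLODNode(model3d="teapot.obj", billboard="teapot_sprite.png",
                    env_map="room_cubemap.png")
print(node.select(view_distance=4.0, interacting=False))    # -> teapot.obj
print(node.select(view_distance=250.0, interacting=True))   # -> teapot.obj
```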

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has evolved along with information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and monetary damage occurs more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of conceptual characteristics of cybercriminality on SNS and emergency management, and suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps. Among these, this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected two seed words, 'daechul (loan)' and 'sachae (private loan)', and collected data containing them from SNS such as Twitter. The collected data were given to two researchers, who judged whether each item was related to cybercriminality, particularly financial fraud. We then selected as keywords the vocabulary items related to nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs; more than 820,000 articles were collected. The collected articles were refined through preprocessing and turned into learning data. Preprocessing consists of three steps: morphological analysis, stop-word removal, and selection of valid parts of speech. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only two kinds of tokens, nouns and symbols, are retained: nouns refer to things and therefore express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. Each selected item is labeled 'legal' or 'illegal'; to turn the selected data into learning data, each item must be classified as legitimate or not. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set and a test data set; in this study, the learning data were set to 70% and the test data to 30%. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost as its main parameters, we set gamma to 0.5 and cost to 10, based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and Collective Intelligence methods, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door (visit) sales, clearly superior to the Term Frequency and MLE methods. Hence, the results suggest that the proposed method is valid and practically usable.
In this paper, we propose a framework for crisis management caused by abnormalities in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
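
A minimal sketch of the classification stage described above, assuming scikit-learn: a document-term matrix is built from (toy) preprocessed texts, split 70/30, and fed to an RBF-kernel SVM with the stated parameters (gamma = 0.5, cost = 10, mapped to scikit-learn's C). The toy texts and labels are illustrative, not the collected SNS corpus, and a real pipeline would start from morphologically analysed Korean nouns and symbols.

```python
# Hedged sketch of the DTM + SVM discrimination step with the stated parameters.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

texts = ["무담보 대출 즉시 송금", "사채 당일 현금 지급",
         "주말에 자전거 타러 갈 사람", "오늘 점심 뭐 먹지"] * 50
labels = [1, 1, 0, 0] * 50            # 1 = illegal-advertisement-like, 0 = legitimate

dtm = CountVectorizer().fit_transform(texts)                 # document-term matrix
X_tr, X_te, y_tr, y_te = train_test_split(dtm, labels, test_size=0.3,
                                          random_state=42)   # 70/30 split
clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_tr, y_tr)     # "cost" -> C
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```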

Performance Characteristics of PM10 and PM2.5 Samplers with an Advanced Chamber System (챔버 기술 개발을 통한 PM10과 PM2.5 시료채취기의 수행 특성)

  • Kim, Do-Hyeon;Kim, Seon-Hong;Kim, Ji-Hoon;Cho, Seung-Yeon;Park, Ju-Myon
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.8
    • /
    • pp.739-746
    • /
    • 2010
  • The purposes of this study are 1) to develop an advanced chamber system that keeps air velocity within ±10% across the particulate matter (PM) collection area, 2) to examine the theoretical characteristics of PM10 and PM2.5 samplers, and 3) to assess the performance characteristics of PM10 and PM2.5 samplers through chamber experiments. Six one-hour experiments in total were conducted using cornstarch with a mass median aerodynamic diameter (MMAD) of 20 μm and a geometric standard deviation of 2.0, at two air velocities in the chamber, 0.67 m/s and 2.15 m/s. The aerosol samplers used in the present study are one APM PM10 sampler and one PM2.5 sampler conforming to the US federal reference methods, and three specially designed mini-volume aerosol samplers (two for PM10 and one for PM2.5). The overall results indicate that the PM10 and PM2.5 mini-volume samplers need correction factors of 0.25 and 0.39, respectively, when the APM PM samplers are taken as the reference, and that a two-way analysis of variance of the measured PM10 mass concentrations shows a significant difference between the two mini-volume samplers. With the stated cutpoints and slopes (PM10: 10 ± 0.5 μm and 1.5 ± 0.1; PM2.5: 2.5 ± 0.2 μm and 1.3 ± 0.03), the PM10 and PM2.5 samplers theoretically collect within ranges of 86~114% and 64~152%, respectively, given the characteristics of the cornstarch used in this research. Furthermore, the calculated mass concentrations of the PM samplers are higher than the ideal mass concentrations when the airborne MMAD of the cornstarch is smaller than the sampler cutpoint, and lower in the opposite case. The chamber experiments also showed that the PM10 and PM2.5 samplers had wider collection ranges, 37~158% and 55~149%, than the theoretically calculated ranges, and that the measured mass concentration ranges were relatively more consistent at the air velocity of 2.15 m/s than at 0.67 m/s.
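
The quoted theoretical collection ranges can be reproduced, at least approximately, under two common modelling assumptions: the test dust follows a lognormal mass-size distribution (MMAD 20 μm, GSD 2.0) and the sampler effectiveness curve is a cumulative lognormal defined by its 50% cutpoint and slope. The sketch below varies the cutpoint and slope over their stated tolerances and compares the collected mass fraction with that of the nominal sampler; it is a reconstruction under those assumptions, not necessarily the paper's exact calculation.

```python
# Sketch: theoretical collection range of a PM sampler relative to its
# nominal cutpoint/slope, for a lognormal test dust.
from math import log, sqrt
from statistics import NormalDist

MMAD, GSD = 20.0, 2.0  # cornstarch test dust, as stated above

def mass_fraction_collected(d50, slope):
    """Mass fraction of a lognormal aerosol passing a cumulative-lognormal inlet."""
    z = (log(d50) - log(MMAD)) / sqrt(log(GSD) ** 2 + log(slope) ** 2)
    return NormalDist().cdf(z)

def collection_range(d50, d50_tol, slope, slope_tol):
    nominal = mass_fraction_collected(d50, slope)
    lo = mass_fraction_collected(d50 - d50_tol, slope - slope_tol) / nominal
    hi = mass_fraction_collected(d50 + d50_tol, slope + slope_tol) / nominal
    return lo, hi

print("PM10 :", collection_range(10.0, 0.5, 1.5, 0.1))   # paper quotes 86~114%
print("PM2.5:", collection_range(2.5, 0.2, 1.3, 0.03))   # paper quotes 64~152%
```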

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining has been highlighted in big data applications, a great deal of research on unstructured data has been conducted. Social media services on the Internet generate unstructured or semi-structured data every second, most of it written in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can yield incorrect results far from users' intentions. Although much progress has been made in recent years to provide users with more appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in the area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, thereby avoiding expensive sense-tagging processes. The effectiveness of the method is evaluated with a Naïve Bayes model, one of the supervised learning algorithms, using the Korean Standard Unabridged Dictionary and the Sejong Corpus. The Korean Standard Unabridged Dictionary contains approximately 57,000 sentences; the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and sense information. For the experiments, the Korean Standard Unabridged Dictionary and the Sejong Corpus were tested both combined and as separate entities, using cross-validation. Only nouns, the target of word sense disambiguation here, were selected: 93,522 word senses among 265,655 nouns, plus 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean Standard Unabridged Dictionary because it is tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named-entity dictionary of the Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences and term vectors for the sentences were created. Given an extracted term vector and the sense vector model built during the pre-processing stage, the sense-tagged terms were determined by vector-space-model-based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the Korean Standard Unabridged Dictionary examples and the Sejong Corpus: the experiments yield better precision and recall with the merged corpus. This suggests the approach can practically enhance the performance of Internet search engines and help capture the meaning of a sentence more accurately in natural language processing tasks related to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem and assumes that all senses are independent.
Even though this assumption is not realistic and ignores correlations between attributes, Naïve Bayes classifiers are widely used because of their simplicity, and in practice they are known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations, and/or partial combinations, of the senses in a sentence. The effectiveness of word sense disambiguation may also be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
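
A toy sketch of dictionary-example-based sense disambiguation with a Naïve Bayes classifier, in the spirit of the approach above. The English stand-in word "bank" and the example sentences are illustrative only; the study works on Korean nouns using a morphological analyzer, the Standard Unabridged Dictionary, and the Sejong Corpus.

```python
# Hedged sketch: train a Naive Bayes word-sense classifier from
# sense-tagged dictionary example sentences, then tag new sentences.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Dictionary example sentences for the ambiguous word "bank", tagged by sense.
examples = [
    ("he deposited cash at the bank counter", "bank/finance"),
    ("the bank approved the loan application", "bank/finance"),
    ("they picnicked on the river bank", "bank/river"),
    ("the boat drifted toward the grassy bank", "bank/river"),
]
sentences, senses = zip(*examples)

wsd = make_pipeline(CountVectorizer(), MultinomialNB()).fit(sentences, senses)
print(wsd.predict(["the bank refused the loan"]))       # -> bank/finance
print(wsd.predict(["we walked along the river bank"]))  # -> bank/river
```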

Patient's Selection for Extracorporeal Shock Wave Lithotripsy for Treatment of Common Bile Duct Stones Resistant to Endoscopic Extraction (체외충격파쇄석술 적용을 위한 총담관결석의 선택)

  • Lee, Won-Hong;Son, Soon-Yong;Kim, Chang-Bok;Park, Cheon-Kyoo;Kang, Seong-Ho;Ryu, Meung-Sun;Lee, Yong-Moon
    • Journal of radiological science and technology
    • /
    • v.28 no.2
    • /
    • pp.105-110
    • /
    • 2005
  • Background/Aim: Common bile duct (CBD) stones may cause jaundice, cholangitis, or pancreatitis. Extracorporeal shock wave lithotripsy (ESWL) may be needed whenever endoscopic procedures fail to extract CBD stones. The aim of this study is to provide a standard for selecting patients for ESWL in the treatment of CBD stones resistant to endoscopic extraction. Materials and Methods: Forty-six patients in whom endoscopic stone extraction, including mechanical lithotripsy, had failed were treated by ESWL. In all patients, endoscopic sphincterotomy and nasobiliary drainage were performed before ESWL, with ultrasonography used for stone localization and a spark-gap type lithotriptor. Patients were sedated with an intravenous injection of 50 mg of Demerol; none were treated under general anesthesia. Results: The overall complete clearance rate of CBD stones was 89.1% (41/46). In 82.6% of the patients, the stones were extracted endoscopically after ESWL, and spontaneous passage was observed in 6.5%. The clearance rate after ESWL showed no notable differences with regard to stone number (single: 82.8%; two or three: 100%; more than three: 100%) or stone size (less than 33 mm: 92.9%; 33 mm or larger: 83.3%), whereas there were significant differences between patients with and without complete clearance in the ratio of the summed long-axis length of all stones to that of the stone-free CBD (1:2.4 vs. 1:2.1) and in the ratio of the diameter of the largest stone to the diameter of the stone-free CBD (1:0.9 vs. 1:0.4). Conclusion: We propose that, regardless of stone size and number, stones for which the CBD does not offer sufficient space for fragments to travel or sufficient diameter for extraction should be treated by other techniques, such as percutaneous transhepatic cholangioscopic lithotomy, to avoid unnecessary time and cost.

Monitoring the Coastal Waters of the Yellow Sea Using Ferry Box and SeaWiFS Data (정기여객선 현장관측 시스템과 SeaWiFS 자료를 이용한 서해 연안 해수환경 모니터링)

  • Ryu, Joo-Hyung;Moon, Jeong-Eon;Min, Jee-Eun;Ahn, Yu-Hwan
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.4
    • /
    • pp.323-334
    • /
    • 2007
  • We analyzed ocean environmental data from water samples and automatic measurement instruments aboard the Incheon-Jeju passenger ship on 18 cruises over the 4 years from 2001 to 2004. The objectives of this study are to monitor the spatial and temporal variations of ocean environmental parameters in the coastal waters of the Yellow Sea using water sample analysis, and to compare and assess the reliability of automatic chlorophyll and turbidity sensors against in situ measurements. Chlorophyll concentrations ranged from 0.1 to 6.0 mg/m³, with high concentrations in Gyeonggi Bay on all cruises; the maximum, 16.5 mg/m³, was observed in this area in September 2004. The absorption coefficient of dissolved organic matter at 400 nm stayed below 0.5 m⁻¹ except in August 2001, and during 2002-2003 it showed no distinct seasonal variation, remaining in the range 0.1 to 0.4 m⁻¹. Suspended sediment (SS) concentrations were below 20 g/m³ over most of the area in all seasons, except in Gyeonggi Bay and around the Mokpo area; in general, SS concentrations in autumn and winter were higher than in summer, and the central Yellow Sea showed lower values, below 10 g/m³. The YSI fluorometer for chlorophyll concentration had very low reliability, whereas the turbidity sensor had an R² value of 0.77 over the four measurements compared with the water sampling method. For the automatic measurement of chlorophyll and suspended sediment concentration, the McVan and Choses sensors performed better than the YSI multisensor. The SeaWiFS SS distribution map matched the in situ measurements well spatially, although there were small differences in quantitative concentration.
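
The sensor-validation step mentioned above, reporting R² between automatic-sensor readings and water-sample values, amounts to a simple regression comparison; the sketch below uses made-up numbers, not cruise data.

```python
# Illustrative sketch: R^2 between co-located sensor readings and
# laboratory water-sample values.
import numpy as np

lab = np.array([2.1, 5.4, 9.8, 14.6, 20.3])      # water-sample values (e.g. SS, g/m^3)
sensor = np.array([2.6, 5.0, 11.2, 13.1, 21.7])  # co-located automatic-sensor values

r = np.corrcoef(lab, sensor)[0, 1]
print(f"R^2 = {r**2:.2f}")   # the study reports R^2 = 0.77 for the turbidity sensor
```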

A Study on Perception Change in Bicycle users' Outdoor Activity by Particulate Matter: Based on the Social Network Analysis (미세먼지로 인한 자전거 이용객의 야외활동 인식변화에 관한 연구: 사회네트워크분석을 중심으로)

  • Kim, Bomi;Lee, Dong Kun
    • Journal of Environmental Impact Assessment
    • /
    • v.28 no.5
    • /
    • pp.440-456
    • /
    • 2019
  • Controversy over risk perception related to particulate matter has become significant. To understand the nature of this perception, we gathered articles and comments from an online community devoted to bicycling, an activity directly affected by exposure to particulate matter. First, while the government-led particulate matter policy was strengthened and refined in each period, risk perception related to particulate matter in the bicycle community became more active and serious. Second, analysis of the change in perception of outdoor activities showed that bicycle users in the community tended to decide on outdoor activity according to the particulate matter level rather than the weather. In addition, their risk perception shifted toward fear of a serious threat to daily life and health, combined with distrust of reported domestic particulate matter levels and of mask performance. Ultimately, this risk perception led some cycling, previously enjoyed outdoors, to move indoors. However, compared with outdoor cycling, which is enjoyed for factors such as scenery, people, and weather, monotonous indoor cycling was often replaced by other types of indoor exercise such as fitness training and yoga. In summary, these changes stemmed from distrust of excessive information and of the policies provided by central and local governments. Environmental policy should therefore be implemented after discussion of risk communication that can reduce the gap between public anxiety and concern, so as to cope with risk perception related to particulate matter. This study can serve as an academic basis for an effective communication direction when decision makers establish particulate matter policy.
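
The social network analysis referred to in the title typically starts from a keyword co-occurrence network built from posts and comments; the sketch below, with invented tokenised posts, shows one minimal way such a network could be assembled and its central terms ranked. It is illustrative only, not the study's actual procedure or data.

```python
# Hedged sketch: keyword co-occurrence network and degree centrality
# over tokenised community posts (toy data).
import itertools
import networkx as nx

posts = [
    ["particulate_matter", "mask", "outdoor", "riding"],
    ["particulate_matter", "forecast", "indoor", "trainer"],
    ["indoor", "trainer", "fitness", "yoga"],
    ["particulate_matter", "policy", "distrust"],
]

G = nx.Graph()
for tokens in posts:
    for a, b in itertools.combinations(set(tokens), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)        # co-occurrence within one post

# Most central terms in the perception network
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```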

Evaluation of Oil Spill Detection Models by Oil Spill Distribution Characteristics and CNN Architectures Using Sentinel-1 SAR data (Sentinel-1 SAR 영상을 활용한 유류 분포특성과 CNN 구조에 따른 유류오염 탐지모델 성능 평가)

  • Park, Soyeon;Ahn, Myoung-Hwan;Li, Chenglei;Kim, Junwoo;Jeon, Hyungyun;Kim, Duk-jin
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_3
    • /
    • pp.1475-1490
    • /
    • 2021
  • Detecting oil spill areas using the statistical characteristics of SAR images has limitations: the classification algorithms are complicated and strongly affected by outliers. To overcome these limitations, studies using neural networks to classify oil spills have recently been conducted. However, few studies have evaluated whether such models show consistent detection performance across various oil spill cases. Therefore, in this study, two CNNs (Convolutional Neural Networks) with basic structures, a simple CNN and a U-Net, were used to examine whether detection performance differs with the CNN structure and the distribution characteristics of the oil spill. With the method proposed in this study, the simple CNN, which has a contracting path only, detected oil spills with an F1 score of 86.24%, while the U-Net, which has both contracting and expansive paths, achieved an F1 score of 91.44%. Both models successfully detected oil spills, but the detection performance of the U-Net was higher than that of the simple CNN. Additionally, to compare model accuracy across various oil spill cases, the cases were divided into four categories according to the spatial distribution characteristics of the oil spill (presence of land near the spill area) and the clarity of the border between oil and seawater. The simple CNN had F1 scores of 85.71%, 87.43%, 86.50%, and 85.86% for the four categories, a maximum difference of 1.71%; the U-Net scored 89.77%, 92.27%, 92.59%, and 92.66%, a maximum difference of 2.90%. These results indicate that neither model showed significant differences in detection performance across the oil spill distribution characteristics. However, differences in detection tendency did arise from the model structure and the distribution characteristics: in all four categories the simple CNN tended to overestimate the oil spill area and the U-Net tended to underestimate it, and these tendencies were accentuated when the border between oil and seawater was unclear.
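
To make the architectural contrast concrete, the sketch below (PyTorch, greatly reduced in depth and channel width, with invented sizes) sets a contracting-path-only "simple CNN" beside a tiny U-Net whose expansive path concatenates encoder features through a skip connection. It illustrates the structural difference only and is not the models trained in the study.

```python
# Illustrative sketch contrasting an encoder-only CNN with a U-Net-style
# network for per-pixel oil / seawater classification of SAR patches.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class SimpleCNN(nn.Module):                      # contracting path only
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.head = nn.Conv2d(32, 1, 1)          # oil / seawater logit per pixel
    def forward(self, x):
        x = self.enc2(self.pool(self.enc1(x)))
        return self.head(self.up(x))

class TinyUNet(nn.Module):                       # contracting + expansive paths
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = conv_block(32 + 16, 16)      # skip connection from enc1
        self.head = nn.Conv2d(16, 1, 1)
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

sar_patch = torch.randn(1, 1, 64, 64)            # single-band SAR intensity patch
print(SimpleCNN()(sar_patch).shape, TinyUNet()(sar_patch).shape)
```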