• Title/Summary/Keyword: External User

Search Results: 525

Evaluating the Effectiveness of RTK Surveying Performance Based on Low-cost Multi-Channel GNSS Positioning Modules (다채널 저가 GNSS 측위 모듈기반 RTK 측량의 효용성 평가)

  • Kim, Chi-Hun; Oh, Seong-Jong; Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX / v.52 no.2 / pp.53-65 / 2022
  • With the advancement of GNSS satellite positioning, hardware and operating-software modules that balance accuracy and economy have reached the user sector, including multi-channel GNSS receivers, multi-frequency external antennas, and mobile-app-based public positioning analysis software, so that users can actively assemble (DIY) a multi-channel GNSS RTK positioning system to suit their purpose. In particular, as the multi-GNSS satellite infrastructure expands and the potential of these various modules becomes apparent, interest in low-cost multi-channel GNSS receiver modules is steadily increasing. The purpose of this study is to review the multi-channel low-cost GNSS receivers appearing in the mass market in various forms and, by constructing an RTK survey system based on a low-cost multi-channel GNSS positioning module (hereinafter the "multi-channel GNSS RTK module positioning system"), to analyze how it could be used in the "address information facility investigation project" of the Ministry of Public Administration and Security. For this purpose, we built a low-cost "multi-channel GNSS RTK module positioning system" by combining related modules: u-blox's F9P chipset, an antenna, Ntrip transmission of GNSS observation data, and an RTK positioning analysis app running on a smartphone. Kinematic positioning was performed along circular trajectories, and static positioning was performed on address information facilities. Compared with static positioning by geodetic-grade receivers at five fixed points in the experimental site, the system showed good static surveying performance, with an average standard deviation of ±1.2 cm. In addition, when the kinematic positioning trajectory of the low-cost RTK GNSS receiver was checked against the outline of a circular structure in an orthoimage produced by drone image analysis, the trajectory followed the outline closely, with an average standard deviation of ±2.5 cm. In particular, when applied to address information facilities, the system demonstrated that spatial information can be constructed at low cost compared with expensive commercial geodetic receivers, so diverse applications of the "multi-channel GNSS RTK module positioning system" are expected.
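
The abstract describes streaming GNSS observation data over Ntrip to obtain RTK corrections through a smartphone app. As a rough illustration of that data path, here is a minimal Python sketch of an NTRIP client that requests an RTCM correction stream from a caster and forwards it to an F9P-style receiver over a serial port. The host, mountpoint, credentials, and serial device below are hypothetical placeholders; the paper's actual app-based configuration is not specified at this level of detail.

```python
import base64
import socket

import serial  # pyserial: pip install pyserial

# Hypothetical caster and receiver settings -- not from the paper.
CASTER_HOST = "ntrip.example.org"
CASTER_PORT = 2101
MOUNTPOINT = "RTCM3_EXAMPLE"
USERNAME, PASSWORD = "user", "pass"
SERIAL_PORT = "/dev/ttyACM0"  # where the F9P-style module is attached

def run_ntrip_to_serial() -> None:
    """Request an RTCM stream from an NTRIP caster and relay it to the receiver."""
    auth = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    request = (
        f"GET /{MOUNTPOINT} HTTP/1.1\r\n"
        f"Host: {CASTER_HOST}\r\n"
        "Ntrip-Version: Ntrip/2.0\r\n"
        "User-Agent: NTRIP minimal-client\r\n"
        f"Authorization: Basic {auth}\r\n"
        "\r\n"
    )
    with socket.create_connection((CASTER_HOST, CASTER_PORT)) as sock, \
            serial.Serial(SERIAL_PORT, baudrate=115200) as gnss:
        sock.sendall(request.encode())
        # Skip the HTTP-style response header, then relay raw RTCM bytes.
        header = b""
        while b"\r\n\r\n" not in header:
            header += sock.recv(1)
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            gnss.write(chunk)  # receiver applies corrections and outputs an RTK fix

if __name__ == "__main__":
    run_ntrip_to_serial()
```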

An MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun; Jeong, Kyo Sung; Kim, Soo Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.39-58 / 2014
  • As big data and its related technologies grow in importance in industry, visualizing the results of big data processing and analysis has come to the fore. Visualization delivers the results of analysis to people effectively and clearly, and it also serves as the GUI (Graphical User Interface) that mediates communication between people and analysis systems. To make development and maintenance easier, these GUI parts should be loosely coupled from the parts that process and analyze data, which calls for design patterns such as MVC (Model-View-Controller) that minimize coupling between the UI and the data processing logic. Big data can be classified into structured and unstructured data, and structured data is relatively easy to visualize compared with unstructured data. Nevertheless, as analysis of unstructured data has spread, people tend to build a one-off visualization system for each project to overcome the limitations of traditional visualization systems designed for structured data. For text data, which accounts for a huge share of unstructured data, visualization is harder still, because the technologies for analyzing text, such as linguistic analysis, text mining, and social network analysis, are complex and not standardized. This makes it difficult to reuse the visualization system of one project in another; we assume the underlying reason is a lack of commonality in visualization system design with expansion to other systems in mind. In this research, we suggest a common information model for visualizing text data and propose a comprehensive, reusable framework, TexVizu, for visualizing text data. We first survey representative research in text visualization, identify common elements and common patterns across its various cases, and review and analyze them from three viewpoints: structural, interactive, and semantic. We then design an integrated model of text data that represents the elements for visualization. The structural viewpoint identifies structural elements of text documents such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint identifies semantic elements extracted by linguistic analysis of the text and represented as tags classifying entity types such as person, place or location, time, and event. We then extract common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, the data, and the goal (what to know). These are the key requirements for designing a framework that keeps a visualization system loosely coupled from the data processing and analysis system.
Finally, we designed TexVizu, a common text visualization framework that is reusable and extensible across visualization projects: it collaborates with various Text Data Loaders and Analytical Text Data Visualizers through common interfaces such as ITextDataLoader and IATDProvider, and it comprises an Analytical Text Data Model, Analytical Text Data Storage, and an Analytical Text Data Controller. In this framework, the external components are specified by the interfaces required to collaborate with the framework. As an experiment, we adopted this framework in two text visualization systems: a social opinion mining system and an online news analysis system.
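
The abstract names two collaboration interfaces, ITextDataLoader and IATDProvider, without giving their signatures. Below is a minimal Python sketch of how such a loosely coupled MVC arrangement might look; the method names, the dict-based document representation, and the ConsoleView are illustrative assumptions, not the paper's actual API.

```python
from abc import ABC, abstractmethod

class ITextDataLoader(ABC):
    """External component: loads raw text documents into the framework (assumed signature)."""
    @abstractmethod
    def load(self) -> list[dict]: ...

class IATDProvider(ABC):
    """External component: provides Analytical Text Data, e.g. entity tags (assumed signature)."""
    @abstractmethod
    def analyze(self, documents: list[dict]) -> list[dict]: ...

class AnalyticalTextDataModel:
    """Model: holds analyzed documents, decoupled from any concrete view."""
    def __init__(self) -> None:
        self.documents: list[dict] = []

class AnalyticalTextDataController:
    """Controller: wires loader and analyzer to the model, then notifies a view."""
    def __init__(self, loader: ITextDataLoader, provider: IATDProvider) -> None:
        self.loader, self.provider = loader, provider
        self.model = AnalyticalTextDataModel()

    def refresh(self, view) -> None:
        self.model.documents = self.provider.analyze(self.loader.load())
        view.render(self.model)

class ConsoleView:
    """View: renders the model; a GUI view could be swapped in without touching the controller."""
    def render(self, model: AnalyticalTextDataModel) -> None:
        for doc in model.documents:
            print(doc.get("title"), "->", doc.get("tags"))
```

The point of the arrangement is that the controller depends only on the two abstract interfaces, so the visualization side stays loosely coupled from whichever loader and analyzer a given project supplies.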

A case study of blockchain-based public performance video platform establishment: Focusing on Gyeonggi Art On, a new media art broadcasting station in Gyeonggi-do (블록체인 기반 공연영상 공공 플랫폼 구축 사례 연구: 경기도 뉴미디어 예술방송국 경기아트온을 중심으로)

  • Lee, Seung Hyun
    • Journal of Service Research and Studies / v.13 no.1 / pp.108-126 / 2023
  • This study explored the sustainability of a blockchain-based platform for cultural and artistic performance videos through the construction of Gyeonggi Art On, a new media art broadcasting station in Gyeonggi-do. It also reviewed the technical limitations of trading video content on a blockchain, the legal and institutional issues, and the protection of personal information and intellectual property rights. The research method was participatory observation, including in-depth interviews with developers and operators and participation in meetings; the researcher took part in and observed the entire development process, including designing and developing the blockchain nodes, smart contracts, APIs, and UI/UX, and testing the interworking between the blockchain and the content distribution service. Research question 1 asked which technology model is suitable for a blockchain-based public platform for distributing performance video content. The findings are as follows. 1) The suitable blockchain type is a private blockchain into which only the blockchain administrator can directly invite participants. 2) Among the candidate models, an NFT-issuance-based copyright management model and a blockchain-token and cloud-based content distribution model, the one suitable for a public platform such as Gyeonggi Art On is the model that provides content to external demand organizations through an API and settles fees with the K-token. 3) For the initial service of a public platform such as Gyeonggi Art On, a closed blockchain that serves only users who have been granted the right to use the content is suitable. Research question 2 asked which legal and institutional problems should be reviewed when operating a blockchain-based public platform for distributing performance videos. The findings are as follows. 1) Blockchain-based smart contracts raise a problem of party eligibility, since the nature of blockchain technology means the identities of the transacting parties may not be revealed. 2) When a security incident occurs on the blockchain, it is difficult to recover losses because how to compensate or remedy a user's loss is unclear. 3) The concept of default cannot be applied to smart contracts, and even when the obligations under a smart contract have already been fulfilled, the possibility of incomplete performance must be reviewed.
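
The architecture in finding 1, a closed platform that serves content only to invited, entitled users through an API and settles fees in a token, can be sketched abstractly. The toy Python model below illustrates only that access and fee-settlement rule under stated assumptions; it is not Gyeonggi Art On's actual implementation, node design, or smart contract code.

```python
from dataclasses import dataclass, field

@dataclass
class ClosedContentPlatform:
    """Toy model of a closed platform: only invited users with content rights are
    served, and each delivery records a token fee entry (illustrative only)."""
    invited: set[str] = field(default_factory=set)               # users admitted by the administrator
    entitled: dict[str, set[str]] = field(default_factory=dict)  # user -> granted content ids
    fee_ledger: list[tuple[str, str, int]] = field(default_factory=list)

    def invite(self, user: str) -> None:
        # Private chain analogue: only the administrator adds participants.
        self.invited.add(user)

    def grant(self, user: str, content_id: str) -> None:
        self.entitled.setdefault(user, set()).add(content_id)

    def fetch(self, user: str, content_id: str, fee_tokens: int = 1) -> str:
        if user not in self.invited:
            raise PermissionError("user was not invited to the closed platform")
        if content_id not in self.entitled.get(user, set()):
            raise PermissionError("user has no right to this content")
        # K-token-style settlement recorded per delivery (hypothetical rule).
        self.fee_ledger.append((user, content_id, fee_tokens))
        return f"stream-url-for:{content_id}"  # placeholder for the API response
```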

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung; Kim, Mintae; Kim, Wooju; Shin, Dongwook; Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple web sources, in order to expand a knowledge base. The methodology has the following steps. 1) For queries separated into "subject-predicate" form, collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news, and classify the suitable documents. 2) Determine whether each sentence is suitable for information extraction and derive a confidence score. 3) Using the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; compared with the baseline model, the proposed system shows higher performance. The contribution of this study is a sequence tagging model based on a bidirectional LSTM-CRF that uses the predicate feature of the query, yielding a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must account for the heterogeneous characteristics of source-specific document types; the proposed methodology proved to extract information effectively from such varied documents, whereas previous research performed poorly when extracting information from document types different from the training data. In addition, by predicting the suitability of documents and sentences for information extraction before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer information, providing a way to maintain precision even in a real web environment. Because the task targets unstructured documents on the real web, there is no guarantee that a given document contains the correct answer; when question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents with no correct answer. The policy of predicting document and sentence suitability is meaningful in that it helps maintain extraction performance in this setting. The limitations of this study and future research directions are as follows. First, data preprocessing: the units of knowledge extraction are derived through morphological analysis based on the open-source KoNLPy Python package, so extraction can go wrong when the morphological analysis itself is wrong; an improved morphological analyzer is needed to enhance the quality of extraction results. Second, entity ambiguity: the information extraction system of this study cannot distinguish between different referents that share the same name.
If several people with the same name appear in the news, the system may not extract information about the intended one; future research needs measures for disambiguating identical names. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the information extraction system, and built an evaluation data set of 2,800 documents (400 questions × 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging for each document whether it contains the correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, but this is a costly activity that must be done manually; future research should evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction over queries on multi-source web documents, to build an environment in which results can be evaluated more objectively.
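
The core tagger is described as a bidirectional LSTM-CRF conditioned on the query's predicate. Below is a minimal PyTorch sketch of that architecture, using the pytorch-crf package for the CRF layer; the embedding sizes, the way the predicate feature is concatenated to every token embedding, and all hyperparameters are illustrative assumptions rather than the paper's reported configuration.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRFTagger(nn.Module):
    """BiLSTM-CRF sequence tagger with a per-query predicate feature
    concatenated to every token embedding (illustrative configuration)."""
    def __init__(self, vocab_size: int, num_predicates: int, num_tags: int,
                 emb_dim: int = 100, pred_dim: int = 20, hidden: int = 128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.pred_emb = nn.Embedding(num_predicates, pred_dim)
        self.lstm = nn.LSTM(emb_dim + pred_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emit(self, tokens: torch.Tensor, predicate: torch.Tensor) -> torch.Tensor:
        # Broadcast the query-level predicate embedding over the token sequence.
        pred = self.pred_emb(predicate).unsqueeze(1).expand(-1, tokens.size(1), -1)
        x = torch.cat([self.tok_emb(tokens), pred], dim=-1)
        out, _ = self.lstm(x)
        return self.emissions(out)

    def loss(self, tokens, predicate, tags, mask):
        # Negative log-likelihood under the CRF for training.
        return -self.crf(self._emit(tokens, predicate), tags, mask=mask)

    def decode(self, tokens, predicate, mask):
        # Viterbi decoding of the best tag sequence per sentence.
        return self.crf.decode(self._emit(tokens, predicate), mask=mask)
```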

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan; An, Sangjun; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure: if a failure occurs in some element of the facility, it may affect not only that equipment but also other connected equipment, causing enormous damage. IT facilities in particular fail irregularly because of their interdependence, which makes the cause difficult to identify. Previous studies on failure prediction in data centers treated each server as a single, independent state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring inside servers. Failures outside the server include power, cooling, and user errors; since these can be prevented in the early stages of data center construction, various solutions are already being developed. The cause of failures occurring inside a server, by contrast, is difficult to determine, and adequate prevention has not yet been achieved, precisely because server failures do not occur in isolation: a failure on one server can cause failures on other servers, or be triggered by them. In other words, while existing studies analyzed failures on the assumption that servers do not affect one another, this study assumes that failures propagate between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures of each device were sorted in chronological order, and when a failure occurred on one piece of equipment, a failure on another piece of equipment within 5 minutes of that occurrence was defined as simultaneous. After constructing sequences of the devices that failed together, the 5 devices that most frequently failed simultaneously within those sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network model structure was used to reflect the fact that each server contributes differently to a complex failure: the attention mechanism increases prediction accuracy by weighting a server more heavily as its impact on the failure grows. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data was modeled both as independent single-server states and as a multiple-server state, and the two were compared. The second experiment improved prediction accuracy in the complex-server case by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another and confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that predicts failures occurring on servers in data centers. Using these results, failures are expected to be preventable in advance.
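
The model described combines per-server LSTM encoders with an attention layer that weights servers by their estimated impact on a complex failure. A compact PyTorch sketch of that hierarchical-attention idea follows; the tensor shapes, hidden dimension, and single-logit output are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ServerHAN(nn.Module):
    """Hierarchical attention over servers: an LSTM encodes each server's
    metric time series, then attention weights the servers by learned impact
    before predicting complex-failure probability (illustrative shapes)."""
    def __init__(self, n_metrics: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_metrics, hidden, batch_first=True)
        self.attn_score = nn.Linear(hidden, 1)   # one attention score per server
        self.classifier = nn.Linear(hidden, 1)   # complex-failure logit

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # x: (batch, n_servers, timesteps, n_metrics)
        b, s, t, m = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, m))
        server_vecs = h[-1].reshape(b, s, -1)             # (batch, servers, hidden)
        weights = torch.softmax(self.attn_score(server_vecs), dim=1)
        context = (weights * server_vecs).sum(dim=1)      # impact-weighted summary
        return self.classifier(context).squeeze(-1), weights.squeeze(-1)

# Hypothetical usage: 5 servers, 30 timesteps, 8 metrics each.
model = ServerHAN(n_metrics=8)
logit, server_weights = model(torch.randn(2, 5, 30, 8))
prob = torch.sigmoid(logit)  # predicted probability of a complex failure
```

The returned attention weights correspond to the idea in the abstract of weighting each server by its contribution to the failure; per-server decision thresholds, as in the second experiment, would be applied downstream of such a model.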