• Title/Summary/Keyword: Reuse Network

A Study on Public Policy through Semantic Network Analysis of Public Data related News in Korea (국내 공공데이터 관련 뉴스 의미망 분석을 통한 공공정책 연구)

  • Moon, HyeJung;Lee, Kyungseo
    • Journal of Broadcast Engineering / v.23 no.4 / pp.536-548 / 2018
  • Since Government 3.0, public data has been transformed from provider-oriented information disclosure into a form of personalized information sharing centered on individual citizens. Accordingly, the government is implementing policies and projects to maximize the value of public data and increase its reuse. This study analyzes the issues related to public data in the news and identifies the government agencies and projects associated with each issue. We conducted semantic network analysis on domestic online news and public-agency bidding information related to public data, and linked the major keywords derived from it with the social and economic values inherent in public data. As a result, the major issues related to public data were divided into broader access to public data, growth of new technology, cooperation and conflict among stakeholders, and utilization by the private sector, which were closely related to the mechanisms of transparency, efficiency, participation, and innovation. The major agencies for the four issues include the Ministry of Strategy and Finance and Seoul, the Ministry of Culture, Sports and Tourism and Gyeonggi-do, the Ministry of Trade, Industry and Energy and Incheon, and the Ministry of Land, Infrastructure and Transport and Gyeongsangbuk-do. Most of the issues are being led by the government.
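For readers unfamiliar with the method, the following is a minimal sketch of the kind of keyword co-occurrence network that underlies semantic network analysis of news text, built with networkx; the sample sentences and keywords are hypothetical and are not the study's data or pipeline.

```python
# Sketch only: sentence-level keyword co-occurrence network (hypothetical data).
from itertools import combinations
import networkx as nx

sentences = [
    ["public", "data", "openness", "transparency"],
    ["public", "data", "private", "sector", "innovation"],
    ["new", "technology", "growth", "efficiency"],
]

G = nx.Graph()
for tokens in sentences:
    for a, b in combinations(sorted(set(tokens)), 2):
        # Add or strengthen an edge for every sentence-level co-occurrence.
        w = G.get_edge_data(a, b, default={}).get("weight", 0)
        G.add_edge(a, b, weight=w + 1)

# Degree centrality highlights the keywords that anchor each issue cluster.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5])
```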

Comparison of Performance Between Incremental and Batch Learning Method for Information Analysis of Cyber Surveillance and Reconnaissance (사이버 감시정찰의 정보 분석에 적용되는 점진적 학습 방법과 일괄 학습 방법의 성능 비교)

  • Shin, Gyeong-Il;Yooun, Hosang;Shin, DongIl;Shin, DongKyoo
    • KIPS Transactions on Software and Data Engineering / v.7 no.3 / pp.99-106 / 2018
  • In acquiring information through cyber ISR (Intelligence, Surveillance, Reconnaissance) and in research on agents that support decision-making, periodic communication between the C&C (Command and Control) server and the agent may not be possible. This study examines how to carry out surveillance and reconnaissance effectively in that situation. Because of the network configuration, agents planted on infiltrated computers cannot communicate seamlessly with C&C servers. In this case, the agent keeps collecting data and must analyze the collected data within a short time once communication with the C&C server becomes possible, so that it can use its limited resources and time to continue its mission without being discovered. This research shows the superiority of the incremental learning method over the batch method through experiments. In an experiment with memory restricted to 500 megabytes, the incremental learning method shows a tenfold decrease in learning time. However, in an experiment that reuses incorrectly classified data, relearning takes about twice as long.
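The contrast between the two learning modes can be sketched roughly with scikit-learn, where partial_fit stands in for incremental learning and a full refit stands in for batch learning; the synthetic chunks are placeholders for the agent's collected data, and this is not the paper's model or dataset.

```python
# Sketch only: incremental (partial_fit) vs. batch (refit-on-all-data) learning.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
# Five hypothetical data chunks standing in for data the agent collects over time.
chunks = [(rng.normal(size=(200, 10)), rng.integers(0, 2, size=200)) for _ in range(5)]

# Incremental learning: keep one model and update it with each new chunk only.
inc = SGDClassifier(random_state=0)
for X, y in chunks:
    inc.partial_fit(X, y, classes=np.array([0, 1]))

# Batch learning: retrain from scratch on everything accumulated so far.
X_all = np.vstack([X for X, _ in chunks])
y_all = np.concatenate([y for _, y in chunks])
batch = SGDClassifier(random_state=0).fit(X_all, y_all)

print("incremental acc:", inc.score(X_all, y_all), "batch acc:", batch.score(X_all, y_all))
```

The incremental model only ever touches the newest chunk, which is why memory and relearning time stay bounded even when the full history no longer fits in the restricted memory.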

Implementation of UDP-Tunneling Based Multicast Connectivity Solution for Multi-Party Collaborative Environments (다자간 협업 환경을 위한 UDP 터널링 기반의 멀티캐스트 연결성 솔루션의 구현)

  • Kim, Nam-Gon;Kim, Jong-Won
    • Journal of KIISE:Computing Practices and Letters / v.13 no.3 / pp.153-164 / 2007
  • The Access Grid (AG) provides collaboration environments over IP multicast networks by enabling efficient exchange of multimedia contents among remote users; however, since many current networks are still multicast-disabled, it is not easy to deploy this multicast-based multi-party AG. For this problem, the AG provides multicast bridges as a solution by putting a relay server into the multicast network. Multicast-disabled clients make UDP connections with this relay server and receive forwarded multicast traffic in unicast UDP packets. This solution faces several limitations since it requires duplicate forwarding of the same packet for each unicast peer. Thus, in this paper, we propose an alternative solution to the multicast connectivity problem of the AG based on UMTP (UDP multicast tunneling protocol). By taking advantage of the flexibility of UMTP, the proposed solution is designed to improve the efficiency of network and system utilization, to allow reuse of multicast-based AG applications without modification, and to partially address NAT/firewall traversal issues. To verify the feasibility of the proposed solution, we have implemented a prototype AG connectivity tool based on UMTP, named the AG Connector.
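The core relay idea behind such UDP tunneling can be sketched as below: a host on a multicast-enabled network joins the group and re-sends each datagram to a unicast peer. The addresses are placeholders, and UMTP's encapsulation, session management, and the AG Connector itself are not reproduced.

```python
# Sketch only: forward multicast datagrams to a unicast tunnel endpoint.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5000          # hypothetical multicast group/port
PEER = ("203.0.113.10", 6000)            # hypothetical unicast tunnel endpoint

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv.bind(("", PORT))
# Join the multicast group on the default interface.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:                              # relay loop; runs until interrupted
    data, _ = recv.recvfrom(65535)
    send.sendto(data, PEER)              # forward each multicast datagram over unicast
```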

Design and Implementation of Lesson Plan System for teacher-student based on XML (XML 기반 교수-학생 학습지도 시스템의 설계 및 구현)

  • Choi, Mun-Kyoung;Kim, Haeng-Kon
    • The KIPS Transactions:PartD / v.9D no.6 / pp.1055-1062 / 2002
  • Lesson plan documents used in education are not organized systematically as educational information, and it is not easy for teachers to compose them, so producing lesson plan documents takes additional time and effort. As distributed networks spread, a web-based lesson plan system is required across the education field. Therefore, we need lesson plans that can accommodate various teachers' requirements by supporting the creation, retrieval, and reuse of documents using standard XML on the web. In this paper, we developed a system that creates a common DTD (Document Type Definition) based on an analysis of lesson plans and provides standard XML documents through that common DTD. The system provides an editor for composing lesson plans and supports search functions to improve the reusability of existing lesson plans; we designed structure-based, facet, and keyword search. The composed lesson plans are stored in and retrieved from a database. Consequently, we can share the information on the web by composing lesson plans in XML, save time and cost by writing lesson plans directly on the web, and provide an improved learning environment.
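A rough illustration of the common-DTD approach, assuming a hypothetical lesson plan structure rather than the paper's actual DTD, is validating an XML lesson plan document against a shared DTD with lxml.

```python
# Sketch only: validate a lesson plan document against a common DTD (hypothetical schema).
import io
from lxml import etree

LESSON_DTD = io.StringIO("""
<!ELEMENT lessonPlan (title, subject, grade, objective+, activity*)>
<!ELEMENT title     (#PCDATA)>
<!ELEMENT subject   (#PCDATA)>
<!ELEMENT grade     (#PCDATA)>
<!ELEMENT objective (#PCDATA)>
<!ELEMENT activity  (#PCDATA)>
""")

doc = etree.fromstring(
    "<lessonPlan><title>Fractions</title><subject>Math</subject>"
    "<grade>4</grade><objective>Add fractions</objective></lessonPlan>"
)

dtd = etree.DTD(LESSON_DTD)
print(dtd.validate(doc))   # True only if the document conforms to the common DTD
```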

An Implementation of IEEE 1516.1-2000 Standard with the Hybrid Data Communication Method (하이브리드 데이터 통신 방식을 적용한 IEEE 1516.1-2000 표준의 구현)

  • Shim, Jun-Yong;Wi, Soung-Hyouk
    • The Journal of Korean Institute of Communications and Information Sciences / v.37C no.11 / pp.1094-1103 / 2012
  • Recently, the defense software industry has been increasing the development of M&S-based distributed simulation systems to overcome limitations of resources and cost. Interoperation among objects and reuse of objects are among the key technologies discussed for developing such systems. RTI, a software implementation of the HLA interface specification that provides these technologies, uses a Federation Object Model (FOM) to exchange information among the federates joined in a federation, and each federate is assumed to have an identical FOM within the federation. RTI is the core technology proposed in the United States military M&S standard framework. It interconnects simulators, virtual simulations, and military weapon system software running over a network, which is the core base technology of M&S, and it can also be used for various other software interconnections such as games and online telephony. Although RTI is used in military war games and tactical training units, no domestic implementation has existed in Korea; it is also used in mobile games, distributed games, network management, robotics, and other civilian fields, but such cases are few and informal. Through this development project, we developed the core techniques and the RTI software, and achieved COTS-level performance by improving the communication algorithms.

Enhanced Spatial Covariance Matrix Estimation for Asynchronous Inter-Cell Interference Mitigation in MIMO-OFDMA System (3GPP LTE MIMO-OFDMA 시스템의 인접 셀 간섭 완화를 위한 개선된 Spatial Covariance Matrix 추정 기법)

  • Moon, Jong-Gun;Jang, Jun-Hee;Han, Jung-Su;Kim, Sung-Soo;Kim, Yong-Serk;Choi, Hyung-Jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.5C / pp.527-539 / 2009
  • In this paper, we propose an asynchronous ICI (Inter-Cell Interference) mitigation technique for the 3GPP LTE MIMO-OFDMA downlink receiver. Compared with a synchronous network, symbol timing misalignments may increase as a result of BS (Base Station) timing differences. Symbol synchronization errors that exceed the guard interval, or the cyclic prefix duration, may result in MAI (Multiple Access Interference) from other carriers. In particular, at the cell boundary this MAI becomes a critical factor, leading to degraded channel throughput and severe asynchronous ICI. Hence, many researchers have investigated interference mitigation methods in the presence of asynchronous ICI, and knowledge of the SCM (Spatial Covariance Matrix) of the asynchronous ICI plus background noise is an important issue. Generally, the SCM is assumed to be estimated using training symbols. However, it is difficult to measure the interference statistics over a long period, and training symbols are also not appropriate for a MIMO-OFDMA system such as LTE. Therefore, a noise reduction method is required to improve the estimation accuracy. Although the conventional time-domain low-pass-type weighting method can be effective for noise reduction, it causes significant estimation error due to spectral leakage in a practical OFDM system. Therefore, we propose a time-domain sinc-type weighting method which not only reduces noise effectively while minimizing the estimation error caused by spectral leakage, but can also be implemented easily as a frequency-domain moving-average filter. Computer simulation shows that the proposed method can provide up to 3 dB SIR gain compared with the conventional method.
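A minimal sketch of the estimation step being discussed, assuming synthetic interference-plus-noise samples rather than an LTE pilot layout, is a per-subcarrier SCM estimate smoothed by a frequency-domain moving average (the frequency-domain counterpart of a time-domain weighting); the paper's sinc-type window is not reproduced here.

```python
# Sketch only: per-subcarrier SCM estimation with moving-average smoothing.
import numpy as np

rx_ant, n_sc, L = 2, 64, 8
rng = np.random.default_rng(1)
# z[k]: interference-plus-noise vector on the receive antennas at subcarrier k.
z = rng.normal(size=(n_sc, rx_ant)) + 1j * rng.normal(size=(n_sc, rx_ant))

# Raw per-subcarrier SCM estimate: R[k] = z[k] z[k]^H (rank-1 and very noisy).
R_raw = np.einsum("ki,kj->kij", z, z.conj())

# Average each matrix entry across neighbouring subcarriers to reduce noise.
kernel = np.ones(L) / L
R_smooth = np.empty_like(R_raw)
for i in range(rx_ant):
    for j in range(rx_ant):
        R_smooth[:, i, j] = np.convolve(R_raw[:, i, j], kernel, mode="same")

print(R_smooth[0])   # smoothed 2x2 interference-plus-noise covariance at subcarrier 0
```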

Economic Analysis and CO2 Emissions Analysis by Circulating the Industrial Waste Resource between Companies (국내 기업들의 폐기물자원 순환에 따른 탄소배출량 및 경제성 분석)

  • Kim, Young-Woon;Kim, Jun-Beum;Hwang, Yong-Woo;Park, Ji-Hyoung
    • Clean Technology / v.18 no.1 / pp.111-119 / 2012
  • These days many companies are trying to reduce, recycle, and reuse their wastes. Even though many wastes can be recycled, they are incinerated or landfilled. To address this, many projects aim to recycle wastes, especially in industrial complexes, but because information about waste recycling is lacking, recyclable wastes are still incinerated or landfilled. Against this background, this study suggests a methodology for evaluating the CO2 emissions and costs reduced by circulating industrial waste as a resource. We evaluated the environmental and economic effects between companies that emit plastic waste and waste organic solvent and companies that use them as raw materials, within an off-line recycling information exchange network. The environmental and economic aspects were analyzed by comparing waste recycling with waste incineration. By recycling the plastic waste as raw material, CO2 emissions were reduced by 1,070 tons in 2009 and 1,234 tons in 2010, and costs were reduced by 657.4 million won in 2009 and 755.0 million won in 2010. By recycling the waste organic solvent, 7.3 tons of CO2 in 2010 and 5.6 tons of CO2 in 2011 were avoided, and 15.9 million won in 2010 and 12.2 million won in 2011 were saved.
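The comparison behind such figures can be sketched as the difference between a disposal scenario and a recycling scenario; the tonnage, emission factors, and prices below are placeholders rather than the paper's inventory data, and only illustrate the form of the calculation.

```python
# Sketch only: avoided CO2 and cost = disposal scenario minus recycling scenario.
waste_t = 500.0                 # hypothetical tonnes of plastic waste per year

factors = {                     # tCO2 per tonne of waste (placeholder values)
    "incineration_plus_virgin_material": 3.1,
    "recycling_as_raw_material": 0.9,
}
prices = {                      # million KRW per tonne (placeholder values)
    "disposal_plus_virgin_material": 2.0,
    "recycling": 0.7,
}

avoided_co2 = waste_t * (factors["incineration_plus_virgin_material"]
                         - factors["recycling_as_raw_material"])
saved_cost = waste_t * (prices["disposal_plus_virgin_material"] - prices["recycling"])
print(f"avoided CO2: {avoided_co2:.0f} t, saved cost: {saved_cost:.1f} million KRW")
```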

Implementation of XML Query Processing System Using the Materialized View Cache-Answerability (실체뷰 캐쉬 기법을 이용한 XML 질의 처리 시스템의 구현)

  • Moon, Chan-Ho;Park, Jung-Kee;Kang, Hyun-Chul
    • The KIPS Transactions:PartD / v.11D no.2 / pp.293-304 / 2004
  • Recently, caching for database-backed web applications has received much attention. The results of frequent queries can be cached for repeated reuse or for efficient processing of related queries. Since the emergence of XML as a standard for data exchange on the web, today's web applications retrieve information from remote XML sources across the network, and thus it is desirable to maintain XML query results in a cache for web applications. In this paper, we describe the implementation of an XML query processing system that supports cache-answerability of XML queries, and evaluate its performance. XML path expressions, one of the core features of XML query languages including XQuery, XPath, and XQL, were considered as the XML queries, and their results are maintained as XML materialized views in the XML cache. The algorithms proposed in [13] for rewriting a given XML path expression using its relevant materialized view were implemented with an RDBMS as the XML store. The major implementation issues are described in detail. The results of performance experiments conducted with the implemented system show the effectiveness of cache-answerability of XML queries. A performance comparison with previous research is also provided.
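The cache-answerability idea can be sketched, in a heavily simplified form that ignores predicates and descendant axes, as rewriting an incoming path expression into a residual path evaluated against the materialized view; this illustrates the concept only and is not the rewriting algorithms of [13].

```python
# Sketch only: answer a simple path query from a cached materialized view.
from lxml import etree

view_path = "/site/people/person"        # path expression that defines the cached view
view_nodes = [etree.fromstring("<person><name>Kim</name></person>")]  # cached result

def rewrite_and_answer(query_path):
    # Naive prefix check; real rewriting must reason about containment precisely.
    if not query_path.startswith(view_path):
        return None                       # not answerable from the cache
    rest = query_path[len(view_path):].lstrip("/")   # residual steps, e.g. "name"
    if not rest:
        return view_nodes
    return [n for v in view_nodes for n in v.findall(rest)]

for r in rewrite_and_answer("/site/people/person/name"):
    print(r.text)                         # answered from the view, not the remote source
```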

A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.39-58 / 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of big data processing and analysis has become highlighted. Visualization delivers effectiveness and clarity to people in understanding analysis results, and it also acts as the GUI (Graphical User Interface) that supports communication between people and analysis systems. To make development and maintenance easier, these GUI parts should be loosely coupled from the parts that process and analyze data, and implementing a loosely coupled architecture requires design patterns such as MVC (Model-View-Controller), which is designed to minimize coupling between the UI and the data-processing parts. Big data can be classified into structured and unstructured data, and visualizing structured data is relatively easy compared with unstructured data. Nevertheless, as the use and analysis of unstructured data has spread, visualization systems have usually been developed per project to overcome the limitations of traditional visualization systems designed for structured data. Furthermore, for text data, which accounts for a large portion of unstructured data, visualization is even more difficult; this results from the complexity of text-analysis technologies such as linguistic analysis, text mining, and social network analysis, and from the fact that those technologies are not standardized. This situation makes it hard to reuse the visualization system of one project in other projects, and we assume the reason is a lack of commonality in the design of visualization systems with expansion to other systems in mind. In this research, we suggest a common information model for visualizing text data and propose a comprehensive, reusable framework, TexVizu, for visualizing text data. First, we survey representative research on text visualization and identify common elements and common patterns across various cases. We then review and analyze these elements and patterns from three viewpoints, structural, interactive, and semantic, and design an integrated model of text data that represents the elements for visualization. The structural viewpoint identifies structural elements of text documents such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint identifies semantic elements that are extracted by linguistic analysis of the text and represented as tags classifying entity types such as person, place or location, time, and event. We then extract common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each type comprises the required visualization techniques, the data, and the goal (what to know). These requirements are the key to designing a framework that keeps the visualization system loosely coupled from the data-processing and analysis systems.
Finally, we designed a common text visualization framework, TexVizu, which is reusable and extensible across visualization projects by collaborating with various Text Data Loaders and Analytical Text Data Visualizers via common interfaces such as ITextDataLoader and IATDProvider. TexVizu comprises an Analytical Text Data Model, Analytical Text Data Storage, and an Analytical Text Data Controller; in this framework, external components are specified by the interfaces required to collaborate with it. As an experiment, we applied this framework to two text visualization systems: a social opinion mining system and an online news analysis system.
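The interface boundary described above can be sketched as follows; the interface names ITextDataLoader and IATDProvider come from the abstract, but the method names and data shapes are hypothetical since no signatures are given there.

```python
# Sketch only: the loose-coupling boundary between loaders/visualizers and TexVizu.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class ITextDataLoader(ABC):
    """Loads raw text documents from an external source into the framework."""

    @abstractmethod
    def load(self, source: str) -> List[Dict[str, Any]]:
        ...


class IATDProvider(ABC):
    """Provides Analytical Text Data (ATD) to an external visualizer."""

    @abstractmethod
    def get_analytical_data(self, view: str) -> Dict[str, Any]:
        ...


class NewsLoader(ITextDataLoader):
    """Illustrative loader for an online news analysis system (hypothetical)."""

    def load(self, source: str) -> List[Dict[str, Any]]:
        # A real loader would crawl or query the news source named by `source`.
        return [{"title": "sample", "body": "sample news text", "source": source}]
```

Because the framework depends only on these interfaces, a project swaps in its own loader or visualizer without touching the analysis and storage components, which is the loose coupling the MVC-style design aims at.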