• Title/Summary/Keyword: e-document

Search Results: 557

Simplified Clearance Formalities of Northeast Asia port (동북아 항만의 입출항 수속 간소화 방안)

  • Choi Hyung-Rim;Park Nam-Kyu;Park Young-Jae;Cho Jae-Hyung
    • Journal of Navigation and Port Research
    • /
    • v.29 no.5 s.101
    • /
    • pp.439-445
    • /
    • 2005
  • Recently, owing to increasing demand for the simplification of arrival and departure procedures, the IMO's (International Maritime Organization) Facilitation Committee (FAL) has been carrying out a project to standardize arrival and departure formalities and clearance forms. Many port authorities in developed countries are also actively researching ways to improve the flow and efficiency of information on inbound and outbound ships by simplifying their formalities or handling them electronically. However, such standardization cannot be achieved by a single country; it requires mutual cooperation among the nations concerned. The first steps in this task are to standardize the formalities and document forms and to integrate the related information. To this end, this study reviews model cases from advanced ports in developed countries with regard to their simplification and standardization efforts, and analyzes the formalities and clearance forms of three countries: Korea, China, and Japan. To address the common problems of the three countries, the paper proposes an ebXML-based Global Port B2B framework. Through this framework, the information required for the arrival and departure of ships can be reused and processed automatically, realizing simplification and laying a foundation for introducing e-commerce into the port industry.
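
As a purely illustrative aid to the idea of a standardized, reusable clearance e-document, the sketch below assembles a minimal arrival notice in Python; the element names and values are hypothetical and do not follow the actual IMO FAL forms or an ebXML message schema.

```python
# Illustrative sketch only: a minimal "standardized" ship arrival e-document
# assembled in code. Element names and values are hypothetical and do NOT
# follow the actual IMO FAL forms or an ebXML schema.
import xml.etree.ElementTree as ET

def arrival_notice(ship_name: str, imo_number: str, port: str, eta: str) -> str:
    doc = ET.Element("ArrivalNotice")
    ET.SubElement(doc, "ShipName").text = ship_name
    ET.SubElement(doc, "IMONumber").text = imo_number
    ET.SubElement(doc, "PortOfCall").text = port   # UN/LOCODE, e.g. Busan
    ET.SubElement(doc, "ETA").text = eta
    return ET.tostring(doc, encoding="unicode")

# Once captured in a shared electronic form, the same document could in
# principle be reused by the Korean, Chinese, and Japanese authorities
# instead of re-keying three separate national paper forms.
print(arrival_notice("EXAMPLE SHIP", "1234567", "KRPUS", "2025-01-15T08:00"))
```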

A Study On Managing Electronic Mail Messages as Records of Public Institutions (공공기관의 이메일기록 관리 방안 연구)

  • Song, Ji Hyoun
    • The Korean Journal of Archival Studies
    • /
    • no.15
    • /
    • pp.141-183
    • /
    • 2007
  • It is no overstatement that electronic mail is now used more frequently, and more conveniently, than phones and faxes, not only in everyday life but also in business transactions. It is also evident that email will be used more and more as a means of communication within and between organizations. If the information transferred and received via email serves as business records, those emails should be managed uniformly as public records. Currently, however, no specific policies or guidelines for managing email records are available, and most public employees do not realize that emails are actual records of their organization. Three research methods were used in this study to establish an email records management scheme. First, a literature review was conducted to describe the definitions and types of email records indicated in each nation's guidelines, and how they differ from transitory email messages. Second, the email management guidelines and policies of public institutions in England, the United States, Australia, and Canada, countries regarded as advanced in records management, were analyzed to examine leading examples of email management. To manage email records effectively, the functional requirements were categorized in this thesis, based on ISO 15489, into capture, classification, storage, access, tracking, disposition, and roles and responsibilities. As the designs of these foreign guidelines vary, their common factors were extracted and mapped onto these seven areas. Lastly, this thesis analyzed the characteristics of the email facilities within the Electronic Document Management Systems of existing administrative institutions, examined the overall environment of email records management in public institutions, and sought out improvements. In essence, focusing on the crucial factors drawn from the foreign guidelines and the policy analysis, this thesis proposes an email records management scheme for Korean public institutions, as well as an email management model suitable for the forthcoming e-government era.

A Study on the Analysis of Related Information through the Establishment of the National Core Technology Network: Focused on Display Technology (국가핵심기술 관계망 구축을 통한 연관정보 분석연구: 디스플레이 기술을 중심으로)

  • Pak, Se Hee;Yoon, Won Seok;Chang, Hang Bae
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.2
    • /
    • pp.123-141
    • /
    • 2021
  • As the economic structure becomes more dependent on technology, the importance of National Core Technology is increasing. However, it is difficult to determine the scope of the technology to be protected, because the scope of related technologies is abstract and information disclosure is limited by the nature of National Core Technology. To solve this problem, we propose the most appropriate literature type and analysis method for identifying important technologies related to National Core Technology. We conducted a pilot test applying TF-IDF and LDA topic modeling, two text-mining techniques for big-data analysis, to four types of literature (news, papers, reports, and patents) collected with National Core Technology keywords in the display industry. As a result, applying LDA topic modeling to patent data proved most relevant to National Core Technology. Important technologies related to the upstream and downstream display industries, including OLED and micro-LED, were identified, and the results were visualized as networks to clarify the scope of important technologies associated with National Core Technology. Through this study, we have clarified the ambiguous scope of associated technologies and overcome the limited information disclosure that characterizes National Core Technology.
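
For readers unfamiliar with the two text-mining techniques named above, the following is a minimal scikit-learn sketch on an invented toy corpus; it is not the authors' pipeline or data.

```python
# Minimal sketch of the two text-mining steps named in the abstract
# (TF-IDF weighting and LDA topic modeling) on an invented toy corpus.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "oled panel thin film encapsulation process",
    "micro led chip transfer and bonding equipment",
    "oled deposition mask and pixel patterning",
]

# 1) TF-IDF: weight terms by how specific they are to each document.
tfidf = TfidfVectorizer().fit_transform(docs)
print("TF-IDF matrix shape:", tfidf.shape)

# 2) LDA topic modeling: LDA is fit on raw term counts, not TF-IDF weights.
counts = CountVectorizer()
X = counts.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-3:]]
    print(f"topic {k}:", top_terms)
```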

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.79-92
    • /
    • 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and helping users navigate the integrated data through a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center; that is, other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology represents simple relationships in the integrated data, such as project-output, document-author, and document-topic relationships, and the knowledge map then enables us to infer further relationships, such as co-author and co-topic relationships. To extract the relationships from the integrated data, a Relational Data-to-Triples transformer is implemented, and a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: those used in knowledge management to store, manage, and process an organization's data as knowledge, and those used for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. A knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map; representing and processing knowledge as a simple network fits the navigation and visualization characteristics of a knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected through the national R&D data by authorship and project-performer relationships. A knowledge map displaying researchers' networks is created, where the networks are derived from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. To sum up, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system aims 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based search over the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. S&T information such as research papers, research reports, patents, and GTB data is updated daily from NDSL, and R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and merged into an integrated database. The knowledge base is then constructed by transforming the relational data into triples with reference to the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them; this approach extracts the relationships and keywords based on semantics rather than simple keyword matching. Lastly, we present an experiment on constructing the integrated knowledge base using the lightweight ontology and topic modeling, and introduce the knowledge map services created on top of that knowledge base.
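
The abstract mentions a Relational Data-to-Triples transformer feeding a triple store. A minimal sketch of that idea using rdflib is shown below; the namespace, table rows, and property names are illustrative assumptions, not the paper's R&D ontology.

```python
# Hedged sketch of a Relational-Data-to-Triples step using rdflib. The
# namespace, rows, and property names are illustrative assumptions only.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/rnd/")
g = Graph()

# Relational rows: (project_id, output_type, output_id, author)
rows = [
    ("P001", "paper",  "D100", "Kim"),
    ("P001", "patent", "D200", "Lee"),
]

for project_id, output_type, output_id, author in rows:
    project, doc = EX[project_id], EX[output_id]
    g.add((doc, EX.outputOf, project))           # project-output relationship
    g.add((doc, EX.hasAuthor, Literal(author)))  # document-author relationship
    g.add((doc, EX.hasType, Literal(output_type)))

# The resulting triples can be loaded into a triple store and queried to
# infer, e.g., co-author or co-project relationships between documents.
print(g.serialize(format="turtle"))
```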

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.47-60
    • /
    • 2010
  • Most classification research has used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), which are known as learning-based models, or the Bayesian classifier and NNA (Neural Network Algorithm), which are known as statistics-based methods. However, these approaches face space and time limitations when classifying the vast number of web pages on today's Internet. Moreover, most classification studies use a unigram feature representation, which does not capture the real meaning of words well. Korean web page classification faces an additional problem because many Korean words have multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) is proposed to classify well in this environment (large data sets and polysemous words). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimensionality. This creates a new low-dimensional semantic space for representing vectors, which makes classification efficient and reveals the latent meaning of words and documents (or web pages). Although LSA works well for classification, it has a drawback: as SVD reduces the dimensionality of the matrix and creates the new semantic space, it chooses dimensions that represent the vectors well rather than dimensions that discriminate between them well. This is one reason why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions so that vectors are both well represented and well discriminated, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain further improvement in classification by creating and selecting features, removing stopwords, and statistically weighting specific values.
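
The baseline LSA step described above (a TF-IDF term-document matrix reduced by SVD) can be sketched in a few lines with scikit-learn; this illustrates plain, unsupervised LSA only, not the authors' supervised dimension-selection method, and the sample pages are invented.

```python
# Small sketch of baseline, unsupervised LSA: a TF-IDF term-document matrix
# reduced by truncated SVD. It does not implement the authors' supervised
# dimension selection; the pages are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

pages = [
    "bank finance loan interest rate",
    "river bank water flood level",
    "loan credit finance mortgage rate",
]

X = TfidfVectorizer().fit_transform(pages)   # documents x terms matrix

# Truncated SVD keeps the k strongest latent dimensions, i.e., the ones that
# best *represent* the vectors; a supervised variant would instead select
# dimensions that best *discriminate* between classes.
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)                     # documents in the latent space
print(Z.round(3))
```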


Method of Differential Corrections Using GPS/Galileo Pseudorange Measurement for DGNSS RSIM (DGNSS RSIM을 위한 GPS/Galileo 의사거리 보정기법)

  • Seo, Ki-Yeol;Kim, Young-Ki;Jang, Won-Seok;Park, Sang-Hyun
    • Journal of Navigation and Port Research
    • /
    • v.38 no.4
    • /
    • pp.373-378
    • /
    • 2014
  • In order to prepare for the recapitalization of differential GNSS (DGNSS) reference stations and integrity monitors (RSIM) due to GNSS diversification, this paper focuses on a differential correction algorithm using GPS/Galileo pseudoranges. The technical standards on the operation and broadcast of DGNSS RSIM are described, along with the operation of differential GPS (DGPS) RSIM, as a basis for conversion to DGNSS RSIM. To compute differential corrections of GNSS pseudoranges, the system must know the true positions of the satellites and the user. Therefore, to calculate the positions of the Galileo satellites correctly, the method uses the SV position equations in the Galileo ICD (Interface Control Document), estimates the SV positions from the ephemeris data obtained from the user receiver, calculates the clock offsets of the satellites and the user receiver and the system time offset between GPS and Galileo, and then determines the GPS/Galileo pseudorange corrections. On a performance verification platform connected to a GPS/Galileo integrated signal simulator, the PRC (pseudorange correction) errors of GPS and Galileo were compared, and the position errors of DGPS, DGalileo, and DGPS/DGalileo were analyzed. The proposed method was evaluated in terms of PRC errors and position accuracy on the simulation platform. When using the DGPS/DGalileo corrections, the results were confirmed to meet the RTCM performance requirements.
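
For orientation, a pseudorange correction at a reference station with a known position can be sketched as below; this is a simplified, assumed formulation (names and sign conventions are illustrative), not the algorithm specified in the RTCM standards or the Galileo ICD.

```python
# Simplified, assumed formulation of a DGNSS pseudorange correction (PRC) at
# a reference station whose position is known. Names and sign conventions are
# illustrative, not the RTCM or Galileo ICD specification.
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def pseudorange_correction(sv_pos, ref_pos, measured_pr,
                           sv_clock_bias, rx_clock_bias, ggto=0.0):
    """sv_pos/ref_pos are ECEF coordinates [m]; clock biases are in seconds.
    ggto is the GPS-Galileo system time offset, applied only to Galileo
    measurements processed in GPS time."""
    geometric_range = np.linalg.norm(np.asarray(sv_pos) - np.asarray(ref_pos))
    # Remove satellite/receiver clock terms (and the inter-system offset)
    # from the measurement, then difference against the true geometric range;
    # the residual is broadcast to users as the correction.
    corrected_pr = measured_pr + C * sv_clock_bias - C * rx_clock_bias - C * ggto
    return geometric_range - corrected_pr
```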

A study on the Elements of Communication in the Tasks of Function of Mathematics in Context Textbook (MiC 교과서의 함수 과제에 대한 의사소통의 유형별 요소에 관한 탐색)

  • Hwang, Hye Jeang;Choe, Seon A
    • Communications of Mathematical Education
    • /
    • v.30 no.3
    • /
    • pp.353-374
    • /
    • 2016
  • Communication is one of the six core competencies newly suggested in the 2015 revised mathematics curriculum in Korea, and its importance has also been emphasized by NCTM and CCSSI. Taking the Mathematics in Context (MiC) textbook as its subject, this study set out to explore the communication elements according to the types of communication, such as discourse, representation, and operation. Specifically, the study examined 316 questions in a total of 34 tasks relevant to function content in the MiC textbook and explored the communication elements in the questions of each task. To accomplish this, the study first reconstructed and established an analytic framework on the basis of the 'D.R.O.C type' of communication developed by Kim & Pang in 2010. In addition, based on the achievement standards of the function domain in the 2015 revised Korean mathematics curriculum, the study compared the function content included in the MiC textbook with the Korean mathematics curriculum document, and explored the distribution of communication elements according to the types of communication.

Hardware-Based High Performance XML Parsing Technique Using an FPGA (FPGA를 이용한 하드웨어 기반 고성능 XML 파싱 기법)

  • Lee, Kyu-hee;Seo, Byeong-seok
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.12
    • /
    • pp.2469-2475
    • /
    • 2015
  • Structured XML is widely used to provide various Web services. XML is also used for digital documents and digital signatures and for representing multimedia files in email systems. An XML document must first be parsed to access its elements, and parsing is the most compute-intensive task in the use of XML documents. Most previous work has focused on hardware-based XML parsers to improve parsing performance, while comparatively little work has studied the parsing technique itself. We present a high-performance parsing technique that can be applied to all XML parsers, and we design a hardware-based XML parser using an FPGA. The proposed parsing technique uses element analyzers instead of a state machine and performs multibyte-based element matching. As a result, it reduces the number of clock cycles per byte (CPB) and does not require any preprocessing, such as loading XML data into memory. Compared to other parsers, our parser achieves a 1.33~1.82 times improvement in system performance. The proposed parsing technique can therefore process XML documents in real time and is suitable for all XML parsers.
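
The core idea, matching whole element names at once instead of stepping a state machine one byte at a time, can be illustrated with a small software sketch (the element set and input are invented); the paper's actual design implements this in FPGA hardware.

```python
# Small software sketch of the core idea: compare whole element names at once
# ("multibyte matching") instead of stepping a state machine byte by byte.
# The element set and input are invented; the paper implements this in FPGA
# hardware rather than software.
KNOWN_ELEMENTS = [b"order", b"item", b"price"]

def match_elements(xml_bytes: bytes):
    """Return (offset, element) pairs for known start tags in xml_bytes."""
    hits = []
    pos = xml_bytes.find(b"<")
    while pos != -1:
        for name in KNOWN_ELEMENTS:
            # Whole tag name compared in one step rather than one byte per cycle.
            if xml_bytes.startswith(name + b">", pos + 1):
                hits.append((pos, name.decode()))
                break
        pos = xml_bytes.find(b"<", pos + 1)
    return hits

print(match_elements(b"<order><item>book</item><price>10</price></order>"))
```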

Using Web as CAI in the Classroom of Information Age (정보화시대를 대비한 CAI로서의 Web 활용)

  • Lee, Kwang-Hi
    • Journal of The Korean Association of Information Education
    • /
    • v.1 no.1
    • /
    • pp.38-48
    • /
    • 1997
  • This study is an attempt to present the use of the Web as CAI in the classroom and to give a direction for future education in the face of the information age. The characteristics of the information society, the current curriculum, and educational and teacher-education conditions are first analyzed. The features of the Internet and the Web are then summarized to present the benefits of classroom use as a CAI tool. The literature shows several characteristics of the information society: a technological computer base, the provision and sharing of information, a multifunctional society, participative democracy, autonomy, and the value of time. Problem solving and the 4 Cs (e.g., cooperation, copying, communication, creativity) are newly needed in this learning environment. The Internet is a large collection of interconnected networks through which users can share vast resources and a wealth of information, and it provides a key to successful, efficient individual study across time and space. The Web increases academic achievement, creativity, problem solving, cognitive thinking, and learner motivation through easy access to documents available on the Internet, files containing programs, pictures, movies, and sounds from FTP sites, Usenet newsgroups, WAIS searches, computers accessible through telnet, hypertext documents, Java applets and other multimedia browser enhancements, and much more. The Web browser will be our primary tool for searching for information on the Internet in this information age.


Error factors and uncertainty measurement for determinations of amino acid in beef bone extract (사골농축액 시료 중에 함유된 아미노산 정량분석에 대한 오차 요인 및 측정불확도 추정)

  • Kim, Young-Jun;Kim, Ji-Young;Jung, Min-Yu;Shin, Young-Jae
    • Analytical Science and Technology
    • /
    • v.26 no.2
    • /
    • pp.125-134
    • /
    • 2013
  • This study estimated the measurement uncertainty of 23 amino acids determined in beef bone extract by high-performance liquid chromatography (HPLC). The sources of measurement uncertainty associated with the analysis of amino acids (i.e., sample weight, final volume, standard weight, purity, standard solution, calibration curve, recovery, and repeatability) were evaluated. The uncertainty was estimated according to the GUM (Guide to the Expression of Uncertainty in Measurement) and the EURACHEM guide, using mathematical calculation and statistical analysis. The total amino acid content of the beef bone extract was 36.18 g/100 g, and the expanded uncertainty obtained by multiplying by the coverage factor (k, 2.05~2.36) was 3.81 g/100 g at the 95% confidence level. The major contributors to the measurement uncertainty were, in order, recovery and repeatability (25.2%), sample pretreatment (24.5%), the calibration curve (24.0%), and the weight of the reference material (10.4%). Therefore, more careful work is required in these steps, along with improved analyst proficiency, to reduce the uncertainty of amino acid analysis.
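
The GUM-style calculation described above, combining independent relative uncertainty components in quadrature and then expanding with a coverage factor, can be sketched as follows; the component values are placeholders, not the paper's measured budget.

```python
# Sketch of a GUM-style combined/expanded uncertainty calculation:
# independent relative uncertainty components combined in quadrature,
# then expanded with a coverage factor. Component values are placeholders.
import math

relative_components = {
    "recovery_and_repeatability": 0.030,
    "sample_pretreatment":        0.029,
    "calibration_curve":          0.028,
    "reference_material_weight":  0.012,
}

def expanded_uncertainty(result: float, components: dict, k: float = 2.0) -> float:
    """Combine relative standard uncertainties in quadrature (GUM) and expand
    with coverage factor k (k ~ 2 corresponds to ~95% confidence)."""
    u_rel = math.sqrt(sum(u ** 2 for u in components.values()))
    return k * u_rel * result

# e.g. applied to a total content of 36.18 g/100 g:
print(f"U = {expanded_uncertainty(36.18, relative_components):.2f} g/100 g")
```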