• Title/Summary/Keyword: Document Expansion


A Scheme that Transcodes from Dynamic Object of PC Web Page to Mobile Web Contents with DOM (DOM을 이용하여 PC 웹 페이지의 다이나믹 오브젝트를 모바일 웹 컨텐츠로 변환하는 기법)

  • Kim, Jong-Keun;Ko, Hee-Ae;Sim, Kun-Ho;Kang, Eui-Sun;Lim, Young-Hwan
    • 한국HCI학회:학술대회논문집
    • /
    • 2008.02a
    • /
    • pp.648-653
    • /
    • 2008
  • With the development of mobile communication terminals and the expansion of the mobile Internet, a large number of users can access mobile web contents anytime and anywhere, and will demand richer content services. Due to such demand, many studies are being carried out on transcoding contents so that wired web contents can be used on the mobile web. However, transcoding and creating mobile web contents involve difficulties because the specifications of telecommunications companies and mobile terminals have not been standardized. In particular, to serve a dynamic object of a wired web page whose contents change according to time or user, it is necessary not only to program scripts to suit each terminal, but also to transcode the resources used in advance. As a solution to this problem, this study uses the hierarchical structure of the DOM (Document Object Model) to represent the structural characteristics of a wired web page. In other words, this study proposes the following technique: wired web pages are analyzed and the results are organized into a data structure; the dynamic object is then extracted and its position is indexed so that, when serving a mobile web page, information can be extracted at the indexed position to create mobile web contents in real time.

  • PDF
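The indexing step described in the abstract can be sketched as follows: walk the page's DOM once to record a child-index path to the dynamic node, then follow that stored path at serve time. The sample page, the attribute used to mark the dynamic object, and the helper functions are all illustrative assumptions, not the authors' implementation.

```python
# Sketch: index a dynamic node in a page's DOM tree by its child-index
# path, then re-extract it at serve time (all names are illustrative).
import xml.etree.ElementTree as ET

PAGE = """<html><body>
  <div id="static">Menu</div>
  <div id="news"><span>Breaking: market update</span></div>
</body></html>"""

def index_dynamic(root, attr="id", value="news"):
    """Walk the DOM; return the child-index path to the dynamic node."""
    def walk(node, path):
        if node.get(attr) == value:
            return path
        for i, child in enumerate(node):
            found = walk(child, path + [i])
            if found is not None:
                return found
        return None
    return walk(root, [])

def extract_at(root, path):
    """Follow a stored index path to fetch current content for mobile."""
    node = root
    for i in path:
        node = node[i]
    return "".join(node.itertext()).strip()

root = ET.fromstring(PAGE)
path = index_dynamic(root)       # computed once, when the page is analyzed
print(path)                      # the stored index path, e.g. [0, 1]
print(extract_at(root, path))    # re-read the dynamic content on request
```

Storing only the path (rather than the content) is what allows the mobile page to pick up the current value of the dynamic object at request time.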

A Scheme that Transcodes and Services from PC Web Page to Mobile Web Page of Dynamic Object with DOM (DOM을 이용한 PC 웹 페이지에서 모바일 웹 페이지로의 다이나믹 오브젝트 변환 및 서비스 기법)

  • Kim, Jong-Keun;Kang, Eui-Sun;Sim, Kun-Jung;Ko, Hee-Ae;Lim, Young-Hwan
    • The KIPS Transactions:PartD
    • /
    • v.14D no.3 s.113
    • /
    • pp.355-362
    • /
    • 2007
  • With the development of mobile communication terminals and the expansion of the mobile Internet, a large number of users can access mobile web contents anytime and anywhere, and will demand richer content services. Due to such demand, many studies are being carried out on transcoding contents so that wired web contents can be used on the mobile web. However, transcoding and creating mobile web contents involve difficulties because the specifications of telecommunications companies and mobile terminals have not been standardized. In particular, to serve a dynamic object of a wired web page whose contents change according to time or user, it is necessary not only to program scripts to suit each terminal, but also to transcode the resources used in advance. As a solution to this problem, this study uses the hierarchical structure of the DOM (Document Object Model) to represent the structural characteristics of a wired web page. In other words, this study proposes the following technique: wired web pages are analyzed and the results are organized into a data structure; the dynamic object is then extracted and its position is indexed so that, when serving a mobile web page, information can be extracted at the indexed position to create mobile web contents in real time. This study also aims at developing an editor for mobile web contents and a mobile web service server that serves the edited contents by applying the above technique.

The Archival Method Study For Female Worker in the 1970s : Focused on (1970년대 여성 노동자 아카이빙 방법론 연구 전시 를 중심으로)

  • Lee, Hye Rin;Park, Ju Seok
    • The Korean Journal of Archival Studies
    • /
    • no.63
    • /
    • pp.145-165
    • /
    • 2020
  • The exhibition, a collaboration between Mary Kelly, Kay Hunt and Margaret Harrison, tells the story of workers in the 1970s. Since the late 1960s, the world has undergone many political and social changes, and social movements have been active in protecting the socially underprivileged, including women, children and workers. This phenomenon led to a diversification of collecting toward the general public, communities, and minorities, and to an expansion of artists' political statements and themes in the art world. The exhibition, completed in connection with these social issues, surveyed and recorded the reality of workers in a London factory and presented the results as an artwork. It is a collaborative work of three artists, a record of workers in the 1970s, and a record of the labor situation, the factory, and even the history of the region. Therefore, this study examined the methods and features of the exhibition, which dealt with the lives of women workers in the 1970s, in light of the social conditions of the time.

A Functional Analysis of NEIS School Affairs Business System : From the Records Management Perspective (교무업무시스템의 기록관리 기능 분석 - 학교생활기록부를 중심으로 -)

  • Lim, Mi-Suk
    • The Korean Journal of Archival Studies
    • /
    • no.18
    • /
    • pp.91-138
    • /
    • 2008
  • The fast pace of information and communication technology is accompanied by growing demand for prompt administrative services and for public participation in administration. This in turn drives government innovation and the demand for customer-oriented government built on information technology. The Ministry of Education and Human Resources Development launched the National Education Information System (NEIS) with the aim of educational informatization at the highest global level. NEIS, in operation since 2003, established systems in the 16 city/provincial Offices of Education and in the Ministry, and connected all educational administration organizations and primary and secondary schools via the Internet. Thus, NEIS electronically processes the general administrative affairs of educational administration organizations and of each unit school. The newly enforced NEIS school affairs business system produces important records (of semi-permanent retention) by electronic means, such as students' personal information and grades, including the school life records. However, the authenticity, integrity, reliability and usability of these records need to be guaranteed, because the school affairs business system produces them under poor record-keeping conditions. Accordingly, this study analyzed how far the school affairs business system provides the records management functions required by ISO 15489, the international records management standard. On the basis of this analysis, a plan for the management of school records is presented. This research is meaningful in analyzing the records management functions of NEIS and in approaching a management plan from a records perspective. I expect this research to serve as useful data in preparing records management plans for the records produced by all kinds of business systems used in public institutions, not only NEIS.

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output, and current search tools cannot retrieve, from a gigantic collection, the documents related to a retrieved document. The most important problem for many current search systems is therefore to increase the quality of search: to provide related documents, and to keep the number of unrelated documents in the results as low as possible. To address this problem, CiteSeer proposed Autonomous Citation Indexing (ACI) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this setting, the references contained in academic articles are used to give credit to previous work and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the articles with the cited works.
Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independently of language and of the words in the title, keywords or document. A citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links between articles that researchers do not make, because it indexes only the links researchers create when they cite other articles; for the same reason, CiteSeer does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and a predicate from each sentence in a document. The document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. From this table we build a hierarchical graph of the document and then integrate the graphs of multiple documents. Using the graph of the entire collection, the area of each document is computed relative to the integrated documents, and the relations among documents are marked by comparing these areas. The paper also proposes a method for the structural integration of documents that retrieves documents from the graph, making it easier for the user to find information. We compared the performance of the proposed approaches with the Lucene search engine using standard ranking formulas. As a result, the F-measure is about 60%, which is better by about 15 percentage points.
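The hierarchical structure named in the title rests on formal concept analysis; a toy computation of formal concepts over a document-term context can illustrate the idea. The context, the term names, and the document names below are invented for illustration; in the paper, the attributes come from the extracted subjects and predicates.

```python
# Minimal formal-concept sketch: documents as objects, extracted
# subject/predicate terms as attributes (the context below is made up).
from itertools import combinations

context = {
    "doc1": {"web", "search", "index"},
    "doc2": {"web", "search", "rank"},
    "doc3": {"web", "graph"},
}

def extent(attrs):
    """All documents sharing every attribute in attrs."""
    return {d for d, a in context.items() if attrs <= a}

def intent(docs):
    """All attributes common to every document in docs."""
    sets = [context[d] for d in docs]
    return set.intersection(*sets) if sets else set()

# Enumerate formal concepts: (extent, intent) pairs closed under both maps.
concepts = set()
docs = list(context)
for r in range(len(docs) + 1):
    for group in combinations(docs, r):
        e = extent(intent(set(group)))
        concepts.add((frozenset(e), frozenset(intent(e))))

# Larger extents sit higher in the concept hierarchy.
for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), sorted(i))
```

Ordering the concepts by extent inclusion yields exactly the kind of hierarchy into which per-document graphs can be merged.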

CCR7 Ligands Induced Expansion of Memory CD4+ T Cells and Protection from Viral Infection (CCR7 Ligand의 Memory CD4+ T 세포 증가유도 및 바이러스 감염에 대한 방어효과)

  • Eo, Seong-Kug;Cho, Jeong-Gon
    • IMMUNE NETWORK
    • /
    • v.3 no.1
    • /
    • pp.29-37
    • /
    • 2003
  • Background: CC chemokine receptor (CCR) 7 and its cognate ligands, CCL21 (formerly secondary lymphoid tissue chemokine [SLC]) and CCL19 (formerly Epstein-Barr virus-induced molecule 1 ligand chemokine [ELC]), are known to establish the microenvironment for the initiation of immune responses in secondary lymphoid tissue. As described previously, coadministration of a DNA vaccine with CCR7 ligand-encoding plasmid DNA elicited enhanced humoral and cellular immunity by increasing the number of dendritic cells (DC) in secondary lymphoid tissue. The author hypothesized here that CCR7 ligand DNA could effectively expand memory CD4+ T cells to protect from viral infection, likely via an increased DC number. Methods: To evaluate the effect of CCR7 ligand DNA on the expansion of memory CD4+ T cells, DO11.10.BALB/c transgenic (Tg) mice, which have a high frequency of ovalbumin $OVA_{323-339}$ peptide-specific CD4+ T cells, were used. Tg-mice were previously injected with CCR7 ligand DNA, then immunized with $OVA_{323-339}$ peptide plus complete Freund's adjuvant. Subsequently, memory CD4+ T cells in peripheral blood lymphocytes (PBL) were analyzed by FACS for the memory phenotype ($CD44^{high}$ and CD62$L^{low}$) at the memory stage. Memory CD4+ T cells recruited into an inflammatory site induced with OVA-expressing virus were also analyzed. Finally, the protective efficacy against viral infection was evaluated. Results: CCR7 ligand DNA-treated Tg-mice showed more expanded $CD44^{high}$ memory CD4+ T cells in PBL than control vector-treated animals. An increased number of memory CD4+ T cells recruited into the inflammatory site was also observed in CCR7 ligand DNA-treated Tg-mice. This effectively expanded memory CD4+ T cell population increased the protective immunity against virulent viral infection. Conclusion: These results document that CCR7 and its cognate ligands play an important role in immunity to intracellular infection through establishing optimal memory T cells. Moreover, CCR7 ligand could be useful as a modulator in DNA vaccination against viral infection as well as cancer.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic recovery functions that let it continue operating after a malfunction. Finally, by establishing a distributed database using the NoSQL database MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand across nodes when the stored data must be distributed to various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data store MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and per type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
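The log collector's routing step described above might be sketched like this, with both stores stubbed as in-memory lists. The type names, the JSON log format, and the routing rule are assumptions for illustration, not the paper's actual schema.

```python
# Sketch of the collector's routing step: classify each incoming log and
# dispatch structured, real-time entries to an RDB store and free-form
# entries to a document store (both stores are stubbed as lists here).
import json

mysql_store = []    # stands in for the MySQL module (real-time queries)
mongo_store = []    # stands in for the MongoDB module (unstructured bulk)

REALTIME_TYPES = {"auth", "transaction"}   # assumed type names

def collect(raw_line):
    """Parse one raw log line and route it by type."""
    entry = json.loads(raw_line)
    if entry.get("type") in REALTIME_TYPES:
        mysql_store.append(entry)          # fixed schema, queried live
    else:
        mongo_store.append(entry)          # schema-free document
    return entry["type"]

logs = [
    '{"type": "transaction", "account": "A-1", "amount": 250}',
    '{"type": "web", "path": "/login", "agent": "Mozilla/5.0"}',
    '{"type": "batch", "job": "settlement", "rc": 0}',
]
for line in logs:
    collect(line)

print(len(mysql_store), len(mongo_store))   # 1 structured, 2 unstructured
```

The document store accepts entries with entirely different fields (`path`/`agent` vs. `job`/`rc`), which is the schema-free property the paper relies on for unstructured logs.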

Semantic Information Retrieval Based on User-Word Intelligent Network (U-WIN 기반의 의미적 정보검색 기술)

  • Im, Ji-Hui;Choi, Ho-Seop;Ock, Cheol-Young
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.547-550
    • /
    • 2006
  • The criterion for judging the performance of an information retrieval system is how accurately it retrieves the information that the user wants. A search that uses only a homograph either returns a mixture of documents related to each meaning of the word, or concentrates on documents related to one specific meaning. In this paper, we propose a semantic information retrieval technique that uses the relations within the User-Word Intelligent Network (U-WIN) to resolve the ambiguity of a query. In our experiments, queries were divided into two classes, homographs used as terminology and general homographs, and the expanded query took the form "query + hypernym". We found that the average precision of web document search is 73.5% and that of integrated search is 70% on two portal sites, which shows that the U-WIN-based semantic information retrieval technique can be used effectively in an IR system.

  • PDF
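The "query + hypernym" expansion can be sketched as follows. The tiny sense inventory is invented for illustration; the real system draws senses and hypernyms from U-WIN.

```python
# Sketch of hypernym-based query expansion for a homographic query.
# The sense inventory below is made up; U-WIN supplies the real one.
SENSES = {
    "배": [                         # Korean homograph: pear / ship / belly
        {"gloss": "fruit",  "hypernym": "과일"},
        {"gloss": "vessel", "hypernym": "선박"},
        {"gloss": "body",   "hypernym": "신체"},
    ],
}

def expand(query):
    """Return one 'query + hypernym' string per sense of the query."""
    return [f"{query} {s['hypernym']}" for s in SENSES.get(query, [])]

print(expand("배"))   # one disambiguated query per sense
```

Each expanded string pins the homograph to one sense, so the retrieval system can present results grouped by meaning instead of mixing them.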

A Study on the Present Situation of Landscape Management System through Analysis of the Landscape Review Results - Focused on Jeju Special Self-Governing Province Landscape Review- (경관 심의결과 분석을 통한 경관관리제도의 현황에 대한 연구 - 제주특별자치도 경관 심의를 중심으로 -)

  • Park, Hye-Jung;Park, Chul-Min
    • Journal of the Korean Institute of Rural Architecture
    • /
    • v.20 no.4
    • /
    • pp.9-17
    • /
    • 2018
  • The purpose of this study is to suggest ways to improve the landscape review system and landscape management system of Jeju Special Self-Governing Province through an analysis of landscape review results and the province's ordinances. To this end, prior studies and the laws and ordinances related to landscape were reviewed, and 318 cases of landscape review conducted since 2010 were analyzed item by item together with the result of each review. The main findings are as follows. First, Jeju Special Self-Governing Province, which currently operates a strengthened ordinance for development project review, is experiencing problems such as improperly executed construction projects because of the weak legal basis for follow-up management after landscape review. Second, the province expects more efficient management through an expansion of the scope of landscape review. Third, among the decisions of the landscape review, 57.7% of the submissions passed, with development projects lowest at 41.9%. Fourth, categorizing the review comments by item showed that 'Landscape Control Guideline' and 'Document not completed' appear relatively often. Thus, although eight years have passed since the introduction of the landscape management system and landscape review, systematic institutional stability is still insufficient, and the Landscape Control Guideline needs to be made easier to understand.

The Refinement Effect of Foreign Word Transliteration Query on Meta Search (메타 검색에서 외래어 질의 정제 효과)

  • Lee, Jae-Sung
    • The KIPS Transactions:PartB
    • /
    • v.15B no.2
    • /
    • pp.171-178
    • /
    • 2008
  • Foreign word transliterations are not used consistently in documents, which prevents exact-term-matching information retrieval systems from retrieving some important relevant documents. In this paper, a meta search method is proposed that expands an original foreign word transliteration query into relevant variant queries and refines them in order to retrieve more relevant documents. The method first expands the transliteration query into variants using a statistical method. It then selects the valid variants: each variant is queried against the retrieval systems beforehand, and its validity is checked by counting the number of appearances of the variant in the retrieved documents and by calculating the similarity of the variant's context. Experimental results showed that querying with the variants produced in the first step, the baseline of the test, achieved an average F-measure of 38%, while querying with the refined variants from the second step, the proposed method, significantly improved performance to an average F-measure of 81%.
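The refinement step might look like the following sketch, where a variant is kept only if it occurs often enough in retrieved documents and its surrounding context resembles the original query's. The corpus, the candidate variants, and both thresholds are invented; the paper's method queries real search engines with statistically generated transliteration variants.

```python
# Sketch of the second (refinement) step for transliteration variants:
# "query" a stubbed index, then keep a variant only if it has enough
# hits and its contexts overlap the original query's contexts.
CORPUS = [
    "install the digital camera driver",
    "the digital camera needs a memory card",
    "buy a digital kamera driver online",
    "kamera memory card on sale",
    "new camera models announced today",
]

def search(term):
    """Stub retrieval: documents containing the term."""
    return [doc for doc in CORPUS if term in doc]

def context_words(term, docs):
    """Words co-occurring with the term in the retrieved documents."""
    words = set()
    for doc in docs:
        words |= {w for w in doc.split() if w != term}
    return words

def refine(original, variants, min_hits=2, min_overlap=0.3):
    """Keep variants with enough hits and similar context (Jaccard)."""
    base_ctx = context_words(original, search(original))
    kept = []
    for v in variants:
        hits = search(v)
        if len(hits) < min_hits:
            continue
        ctx = context_words(v, hits)
        overlap = len(ctx & base_ctx) / max(len(ctx | base_ctx), 1)
        if overlap >= min_overlap:
            kept.append(v)
    return kept

print(refine("camera", ["kamera", "camera", "cammera"]))
```

Here "kamera" survives (frequent, and co-occurring with "digital", "driver", "memory card" like the original), while the rare misspelling "cammera" is filtered out before the final meta search.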