• Title/Summary/Keyword: Context Management (문맥 관리)

Search Results: 55

Deterministic Real-Time Task Scheduling (시간 결정성을 보장하는 실시간 태스크 스케줄링)

  • Cho, Moon-Haeng;Lee, Soong-Yeol;Lee, Won-Yong;Jeong, Geun-Jae;Kim, Yong-Hee;Lee, Cheol-Hoon
    • The Journal of the Korea Contents Association / v.7 no.1 / pp.73-82 / 2007
  • In recent years, embedded systems have been expanding their application domains from traditional applications (such as defense, robots, and artificial satellites) to portable devices that execute more complicated applications, such as cellular phones, digital camcorders, PMPs, and MP3 players. To manage restricted hardware resources efficiently and to guarantee both temporal and logical correctness, every embedded system uses a real-time operating system (RTOS). Only when the RTOS makes its kernel services deterministic in time, by specifying how long each service call takes to execute, can application programmers write predictable applications. Moreover, for an RTOS to be deterministic, its scheduling and context-switch overhead should also be predictable. In this paper, we present a complete generalized algorithm that determines the highest priority in a ready list with 2^(2r) priority levels in constant time and without additional memory overhead.
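
The constant-time highest-priority lookup described in this abstract is the kind of operation usually realized with a hierarchical bitmap over ready priorities. The sketch below is only a minimal illustration of that general idea, not the authors' algorithm: it assumes a two-level bitmap (one summary word plus one word per priority group), and names such as `ReadyBitmap` and `WORD_BITS` are hypothetical.

```python
# Minimal sketch of a two-level bitmap ready queue (illustrative only).
# Priority 0 is highest. WORD_BITS groups of WORD_BITS priorities each.

WORD_BITS = 32  # assumed word size

class ReadyBitmap:
    def __init__(self):
        self.summary = 0                  # bit g set => group g has a ready task
        self.groups = [0] * WORD_BITS     # bit p set => priority g*WORD_BITS + p is ready

    def set_ready(self, prio):
        g, p = divmod(prio, WORD_BITS)
        self.groups[g] |= 1 << p
        self.summary |= 1 << g

    def clear_ready(self, prio):
        g, p = divmod(prio, WORD_BITS)
        self.groups[g] &= ~(1 << p)
        if self.groups[g] == 0:
            self.summary &= ~(1 << g)

    def highest_ready(self):
        """Return the highest (numerically lowest) ready priority, or None."""
        if self.summary == 0:
            return None
        g = (self.summary & -self.summary).bit_length() - 1   # index of lowest set bit
        word = self.groups[g]
        p = (word & -word).bit_length() - 1
        return g * WORD_BITS + p

# Usage: rb = ReadyBitmap(); rb.set_ready(37); rb.set_ready(5); rb.highest_ready()  # -> 5
```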

Weighting of XML Tag using User's Query (사용자 질의를 이용한 XML 태그의 가중치 결정)

  • Woo Seon-Mi;Yoo Chun-Sik;Kim Yong-Sung
    • The KIPS Transactions:PartD / v.12D no.3 s.99 / pp.439-446 / 2005
  • XML is a standard that makes it possible to manage WWW documents systematically and to increase retrieval efficiency. Because an XML document contains both content information and structure information in a single document, users can obtain more suitable retrieval results by searching over the logical structure as well as the content. In this paper, we propose a method for calculating the weights of XML tags so that tag information can be used in index term weighting. The proposed method creates a term vector and a weight vector for XML tags and calculates each tag's weight by reflecting the user's retrieval behavior (the user's queries). It then determines the weights of the index terms of an XML document by reflecting the tag weights. We evaluate the proposed method by comparing it with existing approaches that use paragraph weights.
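
As a rough illustration of query-driven tag weighting (a sketch of the general idea only; the normalization and names such as `tag_weights_from_queries` are assumptions, not the paper's formulas), one can count how often query terms occur under each tag, normalize those counts into tag weights, and then let the tag weights determine the weights of the index terms extracted from each tag.

```python
from collections import defaultdict

def tag_weights_from_queries(tagged_terms, queries):
    """tagged_terms: {tag: [terms appearing under that tag]}; queries: list of query term lists.
    Returns a normalized weight per tag based on how many query terms occur under it."""
    query_terms = {t for q in queries for t in q}
    hits = {tag: sum(1 for t in terms if t in query_terms) for tag, terms in tagged_terms.items()}
    total = sum(hits.values()) or 1
    return {tag: hits[tag] / total for tag in tagged_terms}

def index_term_weights(tagged_terms, tag_weights):
    """Weight each index term by summing the weights of the tags it appears under."""
    weights = defaultdict(float)
    for tag, terms in tagged_terms.items():
        for t in terms:
            weights[t] += tag_weights[tag]
    return dict(weights)

# Example (hypothetical data):
# tagged = {"title": ["xml", "retrieval"], "body": ["xml", "index", "weight"]}
# tw = tag_weights_from_queries(tagged, [["xml", "retrieval"]])
# index_term_weights(tagged, tw)
```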

Research Trends Investigation Using Text Mining Techniques: Focusing on Social Network Services (텍스트마이닝을 활용한 연구동향 분석: 소셜네트워크서비스를 중심으로)

  • Yoon, Hyejin;Kim, Chang-Sik;Kwahk, Kee-Young
    • Journal of Digital Contents Society / v.19 no.3 / pp.513-519 / 2018
  • The objective of this study was to examine research trends on social network services. The abstracts of 308 articles published between 1994 and 2016 were extracted from the Web of Science database. Time series analysis and topic modeling, a text mining technique, were applied. The topic modeling results identified 20 main research topics: trust, support, satisfaction model, organization governance, mobile system, internet marketing, college student effect, opinion diffusion, customer, information privacy, health care, web collaboration, method, learning effectiveness, knowledge, individual theory, child support, algorithm, media participation, and context system. The time series regression results indicated that trust, support, satisfaction model, and the remaining topics were hot topics. This study also provides suggestions for future research.
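
A minimal sketch of the topic-modeling-plus-time-series step described above (illustrative only: the model choice, parameters, and the hot-topic criterion are assumptions, not the study's settings): fit LDA on the abstracts, average each topic's proportion per publication year, and flag a topic as "hot" when its yearly share trends upward.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy import stats

def hot_topics(abstracts, years, n_topics=20):
    """abstracts: list of strings; years: matching list of ints.
    Returns {topic_index: slope} for topics whose yearly share trends upward."""
    X = CountVectorizer(stop_words="english", max_features=5000).fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topic = lda.fit_transform(X)                      # document-topic proportions

    years = np.array(years)
    uniq_years = sorted(set(years.tolist()))
    yearly = np.array([doc_topic[years == y].mean(axis=0) for y in uniq_years])

    result = {}
    for k in range(n_topics):
        slope, _, _, _, _ = stats.linregress(uniq_years, yearly[:, k])
        if slope > 0:                                     # upward trend => "hot" topic
            result[k] = slope
    return result
```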

An XML Tag Indexing Method Using Lexical Similarity (XML 태그를 분류에 따른 가중치 결정)

  • Jeong, Hye-Jin;Kim, Yong-Sung
    • The KIPS Transactions:PartB / v.16B no.1 / pp.71-78 / 2009
  • For more effective index extraction and index weight determination, studies extract indices by using document structure as well as document content. However, most studies concentrate on calculating the importance of content rather than that of XML tags, and they determine tag importance based on common-sense judgments rather than verifying it through objective experiments. For automatic indexing using the tag information of XML documents, which have become the standard for web document management, this paper classifies the major tags of a paper according to their importance and calculates the weights of terms extracted from low-weight tags. Using these weights, it proposes a method that computes the final weight by updating the weights of terms extracted from high-weight tags. To determine more objective weights, we survey which tags users consider important, classify tag importance according to the results, and reflect this in the weight calculation. Finally, we verify the effectiveness of the proposed index weights by comparing retrieval performance against index weights calculated with existing methods of determining tag importance.
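
A loose sketch of the tag-class weighting idea (the classes, multipliers, and names such as `TAG_CLASS_WEIGHT` are illustrative assumptions, not the paper's values): tags are grouped by importance, a base weight comes from term frequency under low-weight tags, and terms that also appear under high-weight tags have their weights updated (boosted).

```python
from collections import Counter, defaultdict

# Hypothetical tag-importance classes; a higher multiplier marks a more important tag.
TAG_CLASS_WEIGHT = {"title": 3.0, "keywords": 2.5, "abstract": 2.0, "body": 1.0}

def term_weights(doc):
    """doc: {tag: list of terms}. Base weights are term frequencies under the
    low-weight 'body' tag; terms also found under more important tags are boosted."""
    base = Counter(doc.get("body", []))
    weights = defaultdict(float, {t: float(c) for t, c in base.items()})
    for tag, terms in doc.items():
        if tag == "body":
            continue
        boost = TAG_CLASS_WEIGHT.get(tag, 1.0)
        for term in set(terms):
            weights[term] = (weights[term] or 1.0) * boost   # update with the tag's class weight
    return dict(weights)

# Example: term_weights({"title": ["xml"], "body": ["xml", "xml", "index"]})
# -> {"xml": 6.0, "index": 1.0}
```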

Database Interface System with Dialog (대화를 통한 데이타베이스 인터페이스 시스템)

  • Woo, Yo-Seop;Kang, Seok-Hoon
    • The Transactions of the Korea Information Processing Society / v.3 no.3 / pp.417-428 / 1996
  • In this paper, a database interface system based on natural language dialogue is designed and implemented. The system consists of language analysis, context processing, dialogue processing, and DB processing units. A method for classifying and processing undefined words in language analysis is proposed; it reduces the dictionary size, which is a source of difficulty in DB interfaces. Existing DB interfaces deal with each input utterance independently, whereas the system described here provides an interface environment in which the user can carry on a continuous conversation with the system while retrieving DB information. To this end, speech acts that include the user's intentions as well as propositional contents are defined, and a hierarchical model of user actions for library DB retrieval is constructed. The system uses this knowledge to recognize the user's plan and to understand and manage the ongoing dialogue effectively. The system is implemented in the domain of a library database to validate the proposed methods.
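
As a very rough illustration of carrying dialogue context across turns for DB retrieval (purely a sketch: the speech-act labels, slot names, and functions such as `update_dialogue_state` are hypothetical and not taken from the paper), one can accumulate query constraints over consecutive utterances and interpret each utterance's speech act against the current context.

```python
# Sketch: carry constraints across turns so follow-up utterances refine the query.

def update_dialogue_state(state, speech_act, slots):
    """state: dict of accumulated constraints for a library search.
    speech_act: e.g. 'request', 'refine', 'reset' (hypothetical label set).
    slots: constraints extracted from the current utterance."""
    if speech_act == "reset":
        return dict(slots)               # start a new request
    if speech_act in ("request", "refine"):
        merged = dict(state)
        merged.update(slots)             # later turns override earlier values
        return merged
    return state                         # e.g. acknowledgements leave the state unchanged

def to_query(state):
    """Render the accumulated constraints as a simple WHERE clause."""
    return " AND ".join(f"{k} = '{v}'" for k, v in sorted(state.items()))

# Example dialogue:
# state = update_dialogue_state({}, "request", {"topic": "databases"})
# state = update_dialogue_state(state, "refine", {"year": "1996"})
# to_query(state)  # -> "topic = 'databases' AND year = '1996'"
```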

Longitudinal Analysis of Information Science Research in JASIST 1985-2009 (정보학연구의 25년간 동향 분석 : JASIST 논문을 중심으로)

  • Seo, Eun-Gyoung
    • Journal of the Korean Society for Information Management / v.27 no.2 / pp.129-155 / 2010
  • In recent years, changes in information technology have been so dramatic, and the rate of change has increased so much, that information science research evolves rigorously over time and proliferates dynamically in diverging research directions. The aims of this study are to provide a global overview of research trends in information science and to trace how its main topics have changed over time. The study examined the topics of research articles published in JASIST between 1985 and 2009 and identified changes across five 5-year periods. The study found that the most productive area has consistently been 'Information Retrieval', followed by 'Informetrics', 'Information Use and Users', 'Network and Technology', and 'Publishing and Services'. Information retrieval is the predominant core area of information science, covering computer-based handling of multimedia information, the adoption of new semantic methods from other disciplines, and mass information handling in virtual environments. Informetric studies are currently shifting from identifying existing phenomena to seeking valuable descriptive results, and researchers of information use have concentrated especially on information-seeking aspects, adding greater sophistication to the relatively simple approach taken in information retrieval.

A Cost-Efficient Job Scheduling Algorithm in Cloud Resource Broker with Scalable VM Allocation Scheme (클라우드 자원 브로커에서 확장성 있는 가상 머신 할당 기법을 이용한 비용 적응형 작업 스케쥴링 알고리즘)

  • Ren, Ye;Kim, Seong-Hwan;Kang, Dong-Ki;Kim, Byung-Sang;Youn, Chan-Hyun
    • KIPS Transactions on Software and Data Engineering / v.1 no.3 / pp.137-148 / 2012
  • Cloud service users request dedicated virtual computing resources from a cloud service provider in order to process jobs in an environment isolated from other users. To automate and optimize this process, this paper proposes a framework for workflow scheduling in the cloud environment whose core component is a middleware broker that mediates the interaction between users and cloud service providers. To process jobs on the on-demand, virtualized resources offered by cloud providers, many existing papers propose scheduling algorithms that allocate jobs to virtual machines on a one-job-per-machine basis. This guarantees isolation of the jobs being processed, but it prevents each resource from being used to its full computing capacity and results in low resource utilization. This paper therefore proposes a cost-efficient job scheduling algorithm that maximizes the utilization of managed resources by increasing the degree of multiprogramming, thereby reducing the number of virtual machines needed and, consequently, the cost of processing requests. We also consider the performance degradation caused by thrashing and context switching in the proposed scheme. The experimental results show that the proposed scheme has better cost-performance characteristics than an existing scheme.
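
A minimal sketch of the packing intuition behind such a scheduler (an illustration under assumed parameters, not the paper's algorithm: the capacity model, the multiprogramming cap, and names like `assign_jobs` are hypothetical): jobs are placed on already-running VMs while capacity and a degree-of-multiprogramming limit allow it, and a new VM is provisioned only when no running VM can host the job.

```python
# Greedy first-fit assignment of jobs to VMs, capping the degree of
# multiprogramming per VM to limit thrashing/context-switch overhead.

def assign_jobs(job_demands, vm_capacity=1.0, max_multiprog=3):
    """job_demands: list of fractional resource demands in (0, vm_capacity].
    Returns a list of VMs, each given as the list of job indices it hosts."""
    vms = []            # each entry: {"load": float, "jobs": [indices]}
    for i, demand in enumerate(job_demands):
        for vm in vms:
            if vm["load"] + demand <= vm_capacity and len(vm["jobs"]) < max_multiprog:
                vm["load"] += demand
                vm["jobs"].append(i)
                break
        else:
            vms.append({"load": demand, "jobs": [i]})   # provision a new VM
    return [vm["jobs"] for vm in vms]

# Example: assign_jobs([0.5, 0.4, 0.3, 0.6]) -> [[0, 1], [2, 3]],
# i.e. two VMs instead of four under a one-job-per-machine policy.
```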

Web Learning Systems Development based on Product Line (프로덕트 라인 기반의 웹 학습 시스템 개발)

  • Kim Haeng-Hon;Kim Su-Youn
    • The KIPS Transactions:PartD / v.12D no.4 s.100 / pp.589-600 / 2005
  • Application developers need effective, reusable methodologies to cope with rapidly changing and varied user requirements. Product lines and CBD (Component-Based Development) offer great benefits in quality and productivity for developing software around reusable architectures and components in a specific domain and in rapidly changing environments. A product line can dynamically focus on the commonality and variability of the feature model among its products, and it uses feature modeling to discover, analyze, and mediate interactions between products. Reusable architectures include many variability plans and mechanisms, and because such architectures are used across product versions over long periods, handling variability properly is very important in the product line design phase. Application developers need to identify the proper locations of architectural change in order to express variability, but until now specific variability management techniques for designing product line architectures have been lacking. In this paper, we define various variability types to identify the proper locations of architectural change for expressing variability and to design reusable architectures. We also propose architectural variability on the feature model and describe how variability is expressed in component relations. We implemented a web learning system based on this methodology and describe how the methodology may help increase efficiency, reusability, productivity, and quality when developing an application. In the future, we plan to apply the methodology to various domains and to pursue international and domestic standardization.
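
To illustrate the kind of commonality/variability structure a feature model captures (a simplified sketch with hypothetical feature names and variability kinds, not the paper's notation), the snippet below represents mandatory, optional, and alternative features and checks whether a chosen product configuration is consistent.

```python
# Simplified feature model: each feature is mandatory, optional, or one of a
# group of alternatives (exactly one member of each group must be chosen).

FEATURE_MODEL = {
    "mandatory": {"course_catalog", "user_login"},
    "optional": {"discussion_board", "quiz_engine"},
    "alternatives": {"delivery": {"web_browser", "mobile_app"}},  # choose exactly one
}

def valid_configuration(selected):
    """selected: set of feature names chosen for one product of the line."""
    if not FEATURE_MODEL["mandatory"] <= selected:
        return False                                   # a mandatory feature is missing
    for group in FEATURE_MODEL["alternatives"].values():
        if len(selected & group) != 1:
            return False                               # alternative group not resolved
    known = (FEATURE_MODEL["mandatory"] | FEATURE_MODEL["optional"]
             | set().union(*FEATURE_MODEL["alternatives"].values()))
    return selected <= known                           # no unknown features selected

# Example: valid_configuration({"course_catalog", "user_login", "web_browser"})  # -> True
```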

Maritime Safety Tribunal Ruling Analysis using SentenceBERT (SentenceBERT 모델을 활용한 해양안전심판 재결서 분석 방법에 대한 연구)

  • Bori Yoon;SeKil Park;Hyerim Bae;Sunghyun Sim
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.7 / pp.843-856 / 2023
  • The global surge in maritime traffic has resulted in an increased number of ship collisions, leading to significant economic, environmental, physical, and human damage. The causes of these maritime accidents are multifaceted, often arising from a combination of crew judgment errors, negligence, complexity of navigation routes, weather conditions, and technical deficiencies in the vessels. Given the intricate nuances and contextual information inherent in each incident, a methodology capable of deeply understanding the semantics and context of sentences is imperative. Accordingly, this study utilized the SentenceBERT model to analyze maritime safety tribunal decisions over the last 20 years in the Busan Sea area, which encapsulate data on ship collision incidents. The analysis revealed important keywords associated with the causes of these incidents. Cluster analysis based on the frequency of specific keyword appearances was conducted and visualized. This information can serve as foundational data for the preemptive identification of accident causes and the development of strategies for collision prevention and response.
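
As a rough sketch of the embedding-and-clustering pipeline described above (illustrative only; the model name, cluster count, and preprocessing are assumptions, not the study's actual settings), one could embed ruling sentences with a SentenceBERT model and group them with k-means.

```python
# Sketch: embed sentences from tribunal rulings and group them into clusters.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def cluster_ruling_sentences(sentences, n_clusters=8):
    """sentences: list of sentences extracted from the rulings.
    Returns (cluster labels, sentence embeddings)."""
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed model choice
    embeddings = model.encode(sentences)                                  # one vector per sentence
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
    return labels, embeddings
```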

Analysis of Metadata Standards of Record Management for Metadata Interoperability From the viewpoint of the Task model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • The Korean Journal of Archival Studies / no.32 / pp.127-176 / 2012
  • Metadata is well recognized as one of the foundational factors in archiving and long-term preservation of digital resources. There are several metadata standards for records management, archives, and preservation, e.g. ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful consideration is important when selecting appropriate metadata standards in order to design a metadata schema that meets the requirements of a particular archival system, and interoperability of metadata with other systems should be considered in schema design. In our previous research, we presented a feature analysis of metadata standards by identifying the primary resource lifecycle stages where each standard is applied, and clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through this feature analysis, we analyzed the features of metadata across the whole records lifecycle and clarified the relationships between the metadata standards and the stages of the lifecycle; more detailed analysis was left for future study.

This paper proposes to analyze the metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe properties of a resource in accordance with the purposes of description, e.g. finding aids, records management, preservation, and so forth. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in the standards; there are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources in each stage of the lifecycle. This model is created as a task-centric model to identify features of metadata standards and to create mappings among elements of those standards.

It is important to categorize the elements in order to limit the semantic scope of mapping among elements and to decrease the number of combinations of elements for mapping. This paper proposes to use the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. 5W1H categories are generally used for describing events, e.g. in news articles. As performing a task on a resource causes an event, and metadata elements are used in that event, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, as well as an attribute set extracted from the DPC decision flow. Then we perform element mapping between the standards and find the relationships between them. In this study, we defined a set of terms for each of the 5W1H categories, which typically appear in the definitions of elements, and used those terms to categorize the elements. For example, if the definition of an element includes terms such as 'person' or 'organization', which denote an agent that contributes to creating or modifying a resource, the element is categorized into the Who category. A single element can be categorized into one or more 5W1H categories. Thus, we categorized every element of the metadata standards using the 5W1H model and then carried out mapping among the elements in each category.

We conclude that the Task Model provides a new viewpoint for metadata schemas and is useful in helping us understand the features of metadata standards for records management and archives. The 5W1H model, which is defined based on the Task Model, provides a core set of categories for semantically classifying metadata elements from the viewpoint of an event caused by a task.
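
As a toy illustration of the 5W1H categorization step (a sketch with an invented term list and element set, not the authors' actual term sets or mappings), one can scan each element's definition for category-indicative terms and assign the element to every matching category.

```python
# Sketch: categorize metadata elements into 5W1H by keyword matching on their definitions.

CATEGORY_TERMS = {            # hypothetical indicative terms per 5W1H category
    "Who":   {"person", "organization", "agent", "creator"},
    "What":  {"title", "content", "format", "identifier"},
    "Why":   {"purpose", "function", "mandate"},
    "When":  {"date", "time", "period"},
    "Where": {"place", "location", "repository"},
    "How":   {"method", "process", "procedure", "event"},
}

def categorize_elements(definitions):
    """definitions: {element_name: definition text}.
    Returns {element_name: set of matching 5W1H categories}."""
    result = {}
    for name, text in definitions.items():
        words = set(text.lower().split())
        cats = {cat for cat, terms in CATEGORY_TERMS.items() if words & terms}
        result[name] = cats        # an element may fall into one or more categories
    return result

# Mapping between two standards can then be limited to pairs of elements
# that share at least one 5W1H category, reducing the number of combinations.
```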