• Title/Summary/Keyword: data consistency

An Implementation of Expert System with Knowledge Acquisition System (지식 획득 시스템을 갖춘 전문가 시스템의 구현)

  • Seo, Ui-Hyeon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.5
    • /
    • pp.1434-1445
    • /
    • 2000
  • An expert system performs inference based on knowledge of a specific domain, and the reliability of its inference results depends on both the consistency and the accuracy of that knowledge. This is why an expert system needs facilities that provide access to various kinds of knowledge while maintaining their consistency and accuracy. This paper implements an expert system that permits access to declarative and procedural knowledge held in the knowledge base and in the database. It also implements a knowledge acquisition system that adds knowledge only if its accuracy and consistency are maintained, after checking for potential errors such as contradiction, redundancy, circularity, non-reachable rules, and non-linked rules. As a result, the expert system achieves good access to various sorts of knowledge and increases the reliability of its inference results, and the knowledge acquisition system strengthens the man-machine interface, enabling users to add knowledge to the knowledge base easily.
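
As a rough illustration of the verification step this abstract describes, the sketch below rejects a candidate rule that exactly duplicates an existing one or that would introduce a circular inference chain. The rule representation (a set of premise facts plus one conclusion) and all names are assumptions for illustration, not the paper's implementation.

```python
from collections import defaultdict

def creates_cycle(rules, new_rule):
    """True if adding new_rule lets some fact participate in deriving itself."""
    edges = defaultdict(set)                      # premise -> conclusions it supports
    for premises, conclusion in rules + [new_rule]:
        for p in premises:
            edges[p].add(conclusion)

    def reachable(src, dst, seen=frozenset()):
        if src == dst:
            return True
        return any(reachable(n, dst, seen | {n}) for n in edges[src] - seen)

    premises, conclusion = new_rule
    # Circular: the rule's conclusion can feed back into one of its premises.
    return any(reachable(conclusion, p) for p in premises)

def is_redundant(rules, new_rule):
    return new_rule in rules                      # exact-duplicate check only

kb = [({"has_fever", "has_cough"}, "flu")]
candidate = ({"flu"}, "has_fever")                # concludes one of its own causes
print(creates_cycle(kb, candidate))               # True -> reject the rule
```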

A Study on Determining Weight of Lifetime Value (LTV) Using Analytic Hierarchy Process (AHP) (계층분석과정을 활용한 고객생애가치 가중치 결정에 관한 연구)

  • Yang, Kwang-Mo;Kang, Kyung-Sik
    • Journal of the Korea Safety Management & Science
    • /
    • v.4 no.3
    • /
    • pp.131-140
    • /
    • 2002
  • Today's enterprise environment is changing: companies must meet customers' demands with the right product and the right service, supplied at the right time, while cutting logistics and inventory costs and raising profit as far as they can. This amounts to a shift from putting the enterprise first to putting the customer first, so shortening lead time has become an essential condition for responding to customer demand, and supply chain management is the answer to this changing environment. This paper establishes the necessity of Lifetime Value (LTV) and analyzes data concerning customer value, then defines an LTV rule that can improve customer value. We solve the weighting problem using the Analytic Hierarchy Process (AHP) to ensure consistency of the relationship matrix. AHP relies on Saaty's consistency ratio: if the consistency ratio is under 0.1, the preference weights are acceptable. This study develops a program that computes the AHP weights and reports Saaty's consistency ratio.
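
Saaty's consistency check is standard enough to sketch. The minimal example below (not the authors' program) derives priority weights from a pairwise comparison matrix via the principal eigenvector, computes the consistency index CI = (λmax − n)/(n − 1), and accepts the weights when the consistency ratio CR = CI/RI falls under 0.1.

```python
import numpy as np

# Saaty's random indices for matrices of order 1..7
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)               # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalized priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)              # consistency index
    cr = ci / RI[n] if RI[n] else 0.0         # consistency ratio
    return w, cr

A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]                           # illustrative pairwise judgments
w, cr = ahp_weights(A)
print(w, cr, "acceptable" if cr < 0.1 else "revise judgments")
```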

ANALYSIS OF ASTRONOMICAL ALMANAC DATA FOR NATIONAL STANDARD REFERENCE DATA (참조표준 등록을 위한 천문역법 자료 분석)

  • Yang, Hong-Jin;Ahn, Young-Sook;Lee, Ki-Won
    • Publications of The Korean Astronomical Society
    • /
    • v.23 no.2
    • /
    • pp.53-63
    • /
    • 2008
  • The Korea Astronomy and Space Science Institute (KASI), a direct descendant of the Korea National Astronomy Observatory, has been publishing the Korean Astronomical Almanac since 1976. The almanac contains data essential to daily life, such as the times of sunrise, sunset, moonrise, and moonset, conversion tables between the luni-solar and solar calendars, and so forth. We are therefore planning to register Korean astronomical almanac data as national Standard Reference Data (SRD), that is, scientific and technical data whose reliability and accuracy have been certified by scientific analysis and evaluation. To be certified as national SRD, reference data has to satisfy several criteria such as traceability, consistency, and uncertainty. Based on the similarity of their calculation processes, we classified the astronomical almanac data into three groups, Class I, II, and III, and plan to register them as national SRD in consecutive order. In this study, we analyzed the Class I data, which we aim to register in 2009, and present the results. First, we found that traceability and consistency can be ensured by using the NASA/JPL DE405 ephemeris and by comparison with international data, respectively. To evaluate the uncertainty in the Class I data, we solved the mathematical model and determined the factors influencing the calculations. We found that atmospheric refraction is the main factor, leading to a variation of ±16 seconds in the times of sunrise and sunset. We also briefly review the histories of astronomical almanac data and of standard reference data in Korea.
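
To make the refraction sensitivity concrete, here is an illustrative sketch, not KASI's DE405-based computation: it uses the standard hour-angle formula for sunrise/sunset with an assumed standard altitude of −(refraction + solar semidiameter), and shows that varying the refraction by a few arcminutes shifts the event time by tens of seconds, the same order as the ±16 s quoted above. The latitude and declination values are illustrative.

```python
import math

def sunrise_hour_angle(lat_deg, decl_deg, refraction_arcmin=34.0):
    """Hour angle (degrees) of sunrise/sunset for a given assumed refraction."""
    h0 = -(refraction_arcmin + 16.0) / 60.0          # standard altitude: refraction + semidiameter
    lat, dec, h = map(math.radians, (lat_deg, decl_deg, h0))
    cos_H = (math.sin(h) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    return math.degrees(math.acos(cos_H))

# Vary refraction around the nominal 34' for a site near Seoul (37.57 N), equinox.
# One degree of hour angle corresponds to 240 seconds of time (24 h / 360 deg).
for r in (30.0, 34.0, 38.0):
    H = sunrise_hour_angle(37.57, 0.0, r)
    print(f"refraction {r}' -> sunset hour angle {H:.4f} deg = {H * 240.0:.1f} s")
```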

A Heterogeneous Mobile Data Synchronization Technique Using the Tuple Based Message Digest (튜플 단위 메시지 다이제스트를 이용한 이기종 모바일 데이터 동기화 기법)

  • Park, Seong-Jin
    • Journal of Internet Computing and Services
    • /
    • v.7 no.5
    • /
    • pp.1-12
    • /
    • 2006
  • In mobile database environments, an efficient synchronization technique is required to maintain the consistency of replicated data, because the same data can be replicated across many different databases. In this paper, we propose a message-digest-based synchronization technique that maintains the consistency of data replicated between client databases and a server database in mobile environments. By using tuple-based message digests to detect data conflicts, the proposed technique gains both generality and extensibility.
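
The core idea, comparing per-tuple digests instead of whole tables, can be sketched as follows. This example assumes each tuple has a primary key and uses SHA-256; the paper does not specify the hash or the row encoding, so the details are illustrative only.

```python
import hashlib

def tuple_digest(row):
    """Canonical, column-order-independent digest of one tuple (dict of column -> value)."""
    canon = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

def diff_tables(client, server):
    """client, server: {primary_key: row}. Matching digests mean the replicas agree."""
    conflicts, inserts, deletes = [], [], []
    for pk in client.keys() | server.keys():
        if pk not in server:
            inserts.append(pk)                # present only on the client
        elif pk not in client:
            deletes.append(pk)                # present only on the server
        elif tuple_digest(client[pk]) != tuple_digest(server[pk]):
            conflicts.append(pk)              # both have it, contents diverge
    return conflicts, inserts, deletes

c = {1: {"name": "kim", "qty": 3}, 2: {"name": "lee", "qty": 1}}
s = {1: {"name": "kim", "qty": 5}}
print(diff_tables(c, s))                      # ([1], [2], [])
```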

A STUDY ON BIM-BASED 5D SIMULATION IN WEB ENVIRONMENT

  • Jae-Bok Lim;Jae-Hong Ahn;Ju-Hyung Kim;Jae-Jun Kim
    • International conference on construction engineering and project management
    • /
    • 2013.01a
    • /
    • pp.169-172
    • /
    • 2013
  • Building Information Modeling (BIM) is an effective decision-making platform that helps save project cost and enhance construction quality. BIM is used effectively by generating and linking a wide variety of object data, and object properties should remain consistent throughout the project phases of design, estimation, construction, maintenance, and repair. This study examined how to utilize BIM data in a construction project by linking cost and schedule data in a web environment, so as to better utilize the information and maintain the consistency of the BIM information. To do so, the model integrated WBS and CBS data and linked them with the BIM model to realize 5D simulation in a web environment. As a result, cost and schedule data could be acquired simultaneously, together with object properties such as cost, schedule, and location. These results are expected to contribute to developing a BIM-based automatic data-processing system in a web environment.
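
A hypothetical data model for the WBS/CBS-to-BIM linkage might look like the sketch below: each BIM object carries references into the work and cost breakdown structures so that schedule and cost can be queried together per object. The field names, codes, and lookup tables are invented for illustration and are not the study's schema.

```python
from dataclasses import dataclass

@dataclass
class BimObject:
    guid: str          # BIM object identifier
    wbs_code: str      # work breakdown structure (schedule) reference
    cbs_code: str      # cost breakdown structure reference

schedule = {"WBS-3.2": ("2013-02-01", "2013-02-14")}   # activity start/finish dates
costs    = {"CBS-3.2.1": 12_500.0}                     # cost items

def five_d_record(obj: BimObject):
    """Join one object's schedule and cost data, as a 5D view would."""
    start, finish = schedule[obj.wbs_code]
    return {"guid": obj.guid, "start": start, "finish": finish,
            "cost": costs[obj.cbs_code]}

print(five_d_record(BimObject("wall-001", "WBS-3.2", "CBS-3.2.1")))
```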

Rollback Dependency Detection and Management with Data Consistency in Collaborative Transactional Workflows (협력 트랜잭셔널 워크플로우에서 데이터 일관성을 고려한 철회 종속성 감지 및 관리)

  • Byun, Chang-Woo;Park, Seog
    • Journal of KIISE:Databases
    • /
    • v.30 no.2
    • /
    • pp.197-208
    • /
    • 2003
  • Workflow technology is not readily applied to the coordinated execution of the applications (steps) that make up a business process, such as a collaborative series of tasks, because of the lack of network infrastructure, standards for information exchange, and management of data consistency when shared data is accessed in conflicting modes. In particular, the problems that arise from shared data accessed in conflicting modes have received little attention. In this paper, to preserve data consistency during rollback for failure handling or recovery, we classify rollback dependency into three types: implicit rollback dependency within a transactional workflow, implicit rollback dependency across collaborative transactional workflows, and explicit rollback dependency across collaborative transactional workflows. We also propose a rollback-dependency compiler that determines these three types. A workflow designer specifies the workflow schema and the resources accessed by the steps from a global database of resources, and the compiler generates an enhanced workflow schema carrying the rollback-dependency specification. The run-time system interprets this specification and executes the rollback policy, preserving data consistency, when a step fails. In consequence, this approach offers better correctness and performance than state-of-the-art WFMSs.
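
A much-simplified sketch of the analysis such a compiler might perform: treating each step as (id, read-set, write-set), step B becomes rollback-dependent on step A when B read data that A wrote, so rolling back A must also roll back B to keep the shared data consistent. This representation is an assumption; the paper's compiler distinguishes richer conflict modes and dependency types.

```python
def rollback_dependencies(steps):
    """steps: ordered list of (step_id, reads, writes) with reads/writes as sets."""
    deps = []
    for i, (a, _, a_writes) in enumerate(steps):
        for b, b_reads, _ in steps[i + 1:]:
            if a_writes & b_reads:            # write-read conflict on shared data
                deps.append((b, a))           # rolling back a forces rolling back b
    return deps

wf = [("s1", set(), {"order"}),
      ("s2", {"order"}, {"invoice"}),
      ("s3", {"invoice"}, set())]
print(rollback_dependencies(wf))              # [('s2', 's1'), ('s3', 's2')]
```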

A Study on Data Availability Improvement using Mobility Prediction Technique with Location Information (위치 정보와 이동 예측 기법을 이용한 데이터 가용성 향상에 관한 연구)

  • Yang, Hwan Seok
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.4
    • /
    • pp.143-149
    • /
    • 2012
  • A MANET is very useful for building a network in situations where it is difficult to install network infrastructure. However, the nodes that make up a MANET have difficulty retrieving data owing to their limited resources and their mobility, so a caching scheme is required to improve the accessibility and availability of frequently accessed data. In this paper, we propose a technique that uses node mobility prediction to retrieve desired information quickly and improve data availability. Node mobility is predicted through distance calculations based on location information. To maintain data consistency and reduce query latency, a global cluster table and a local member table are managed by the cluster head. We compared the proposed scheme with COCA and CacheData, and the experiments confirmed its performance and efficiency.
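
The distance-based mobility prediction can be sketched roughly as follows: extrapolate a node's next position from two location samples and test whether it will remain within the cluster head's radio range. The sampling interval, prediction horizon, and range are all assumed values for illustration, not the paper's parameters.

```python
import math

def predict_position(p_prev, p_now, dt, horizon):
    """Linear extrapolation of position from two samples taken dt apart."""
    vx = (p_now[0] - p_prev[0]) / dt
    vy = (p_now[1] - p_prev[1]) / dt
    return (p_now[0] + vx * horizon, p_now[1] + vy * horizon)

def will_stay_in_range(p_prev, p_now, head, radio_range, dt=1.0, horizon=5.0):
    """Will the node still be within the cluster head's range after `horizon` seconds?"""
    px, py = predict_position(p_prev, p_now, dt, horizon)
    return math.hypot(px - head[0], py - head[1]) <= radio_range

print(will_stay_in_range((0, 0), (10, 0), head=(0, 0), radio_range=100))  # True
```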

Asynchronous Cache Consistency Technique (비동기적 캐쉬 일관성 유지 기법)

  • Lee, Chan-Seob
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.2
    • /
    • pp.33-40
    • /
    • 2004
  • As client/server architectures have become widespread with advances in computing performance and information and communication technology, clients use local caches for scalability, fast response times, and reduced use of limited bandwidth. Consistency is then required between the server's data and the data cached at clients, and many techniques have been proposed for maintaining it. This paper improves on the existing update-frequency cache-consistency technique. Existing techniques suffer from slow responses because write-intention declarations are made synchronously, and from a growing number of abort steps because the declarations are delayed. To solve these problems, the proposed technique checks the update time of an object when a page is requested or when an update operation completes. When an update occurs, the write-intention declaration or the update itself can therefore be performed asynchronously and selectively (sel_mode), so responses are faster, abort steps decrease, and selection becomes clearer.
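
A loose sketch of the asynchronous-declaration idea, with all protocol details assumed rather than taken from the paper: the client queues its write-intention declaration and continues immediately, and the server detects stale updates later, when it drains the queue, instead of blocking the client on a synchronous round trip.

```python
import queue

declarations = queue.Queue()          # write intentions travel off the critical path

def on_update(obj_id, new_ts):
    """Client applies the update locally, then declares it asynchronously."""
    declarations.put((obj_id, new_ts))         # no blocking round trip here

def server_drain(server_versions):
    """Server validates queued declarations against its committed versions."""
    while not declarations.empty():
        obj_id, ts = declarations.get()
        if ts <= server_versions.get(obj_id, -1):
            print(f"abort: stale update on {obj_id}")   # conflict detected late
        else:
            server_versions[obj_id] = ts                # declaration accepted

on_update("page-42", 7)
server_drain({"page-42": 3})          # accepted: 7 is newer than 3
```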

A Study On IoT Data Consistency in IoT Environment (사물인터넷 환경에서 IoT 데이터 정합성 연구)

  • Choi, Changwon
    • Journal of Internet of Things and Convergence
    • /
    • v.8 no.5
    • /
    • pp.127-132
    • /
    • 2022
  • As IoT technology develops, the accuracy of IoT data becomes more important. Since IoT data arrives in many different formats and protocols, it often happens that an IoT system fails, or incorrect data is generated, when the IoT devices (sensors, actuators) are unreliable. Because abnormal device behavior or the user's situation is then not detected correctly, users become dissatisfied with the IoT system. This study proposes a methodology for deciding IoT data consistency, that is, whether generated IoT data lies within its normal range, using two mathematical functions, 'gradient descent' and 'linear regression'. We conclude that the gradient-descent method suits IoT data whose rate of increase is related to the next generated value (e.g., sensor devices), while the linear-regression method suits data with a linear pattern, where the deviation from the regression line is related to the next generated value (e.g., water meters, electricity meters).
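
The linear-regression variant is easy to sketch: fit the recent readings, extrapolate one step, and accept a new reading only if its residual stays inside a tolerance band. The three-sigma threshold below is an assumption for illustration, not the paper's rule.

```python
import numpy as np

def in_normal_range(history, new_value, tol=3.0):
    """Is new_value consistent with the linear trend of the recent readings?"""
    h = np.asarray(history, dtype=float)
    t = np.arange(len(h))
    slope, intercept = np.polyfit(t, h, 1)            # least-squares trend line
    predicted = slope * len(h) + intercept            # extrapolate one step ahead
    sigma = (h - (slope * t + intercept)).std() or 1e-9
    return abs(new_value - predicted) <= tol * sigma  # within the tolerance band?

meter = [100.0, 102.1, 104.0, 105.9, 108.1]           # e.g. water-meter readings
print(in_normal_range(meter, 110.2))                  # True: follows the trend
print(in_normal_range(meter, 150.0))                  # False: flag as inconsistent
```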

DJFS: Providing Highly Reliable and High-Performance File System with Small-Sized NVRAM

  • Kim, Junghoon;Lee, Minho;Song, Yongju;Eom, Young Ik
    • ETRI Journal
    • /
    • v.39 no.6
    • /
    • pp.820-831
    • /
    • 2017
  • File systems and applications implement their own update protocols to guarantee data consistency, one of the most crucial properties of computing systems. However, we found that storage devices are substantially under-utilized while preserving data consistency, because these protocols generate massive storage write traffic with many disk-cache flush operations and force-unit-access (FUA) commands. In this paper, we present DJFS (Delta-Journaling File System), which provides both high performance and data consistency for different applications. We make three technical contributions toward this goal. First, to remove all storage accesses involving disk-cache flush operations and FUA commands, DJFS uses a small NVRAM for the file system journal. Second, to reduce the access latency and space requirements of the NVRAM, DJFS journals compressed deltas of the modified blocks. Finally, to relieve explicit checkpointing overhead, DJFS aggressively reflects checkpoint transactions to the file system area in units of a specified region. Our evaluation with the TPC-C benchmark on SQLite shows that, with these optimization schemes, DJFS outperforms Ext4 by up to 64.2 times with only 128 MB of NVRAM.
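
The delta-journaling idea can be illustrated with a toy sketch: journal only the compressed XOR difference between the old and new versions of a modified block, since unchanged bytes XOR to zero and compress extremely well. DJFS's actual on-NVRAM journal format is not described in the abstract, so this is purely schematic.

```python
import zlib

BLOCK = 4096

def delta_journal_entry(old_block: bytes, new_block: bytes) -> bytes:
    """Journal record: compressed XOR delta between the two block versions."""
    xor = bytes(a ^ b for a, b in zip(old_block, new_block))  # unchanged bytes -> 0
    return zlib.compress(xor)                                 # zero runs compress well

def replay(old_block: bytes, entry: bytes) -> bytes:
    """Recover the new block from the old block and the journaled delta."""
    xor = zlib.decompress(entry)
    return bytes(a ^ b for a, b in zip(old_block, xor))

old = bytes(BLOCK)                       # a zero-filled 4 KiB block
new = bytearray(old)
new[100:108] = b"new data"               # small in-place modification
entry = delta_journal_entry(old, bytes(new))
assert replay(old, entry) == bytes(new)
print(len(entry), "journaled bytes instead of", BLOCK)
```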