• Title/Summary/Keyword: real-time databases


Data Exchange between Cadastre and Physical Planning by Database Coupling

  • Kim, Kam-Rae;Choi, Won-Jun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.1 / pp.69-75 / 2007
  • The information in the physical planning field shows the socio-economic potential of land resources, while cadastral data shows the physical and legal realities of the land. The two domains commonly deal with land information but have different views. Cadastre has to evolve into a multi-purpose system that provides value-added information and supports a wide spectrum of decision makers by combining its own information with other spatial/non-spatial databases. In this context, the demand for data exchange between the two domains is growing, but this cannot be met without resolving the heterogeneity between the two information applications. Each discipline sees reality within its own scope, which means each has a unique way of abstracting real-world phenomena into a database. The heterogeneity problem emerges when a GIS is established autonomously and independently; it causes considerable communication difficulties, since the heterogeneity of representations gives each database its own data semantics. Semantic heterogeneity is obviously an obstacle to data exchange but, at the same time, it can also be a key to solving the problem. Therefore, this study focuses on facilitating data sharing between the fields of cadastre and physical planning by resolving the semantic heterogeneity. The core task is developing a mechanism that converts cadastral data into information for physical planning by DB coupling techniques; a hypothetical sketch of such a conversion is given below.
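A minimal illustration of the kind of conversion mechanism the abstract describes, assuming hypothetical cadastral field names (pnu, land_use, area) and an invented land-use-to-zone mapping; none of these come from the paper:

```python
# Illustrative only: a hypothetical attribute mapping between a cadastral
# parcel record and a physical-planning record. Field names and conversion
# rules are assumptions, not the schema used in the paper.

# Hypothetical mapping from cadastral land-use codes to planning-zone labels.
LAND_USE_TO_ZONE = {
    "DAE": "residential",    # building site
    "JEON": "agricultural",  # dry field
    "DAP": "agricultural",   # rice paddy
    "IM": "conservation",    # forest
}

def cadastral_to_planning(parcel: dict) -> dict:
    """Convert one cadastral record into a planning-oriented record."""
    return {
        "parcel_id": parcel["pnu"],  # shared key used for DB coupling
        "zone": LAND_USE_TO_ZONE.get(parcel["land_use"], "unclassified"),
        "area_m2": parcel["area"],
        "developable": parcel["land_use"] == "DAE",
    }

if __name__ == "__main__":
    sample = {"pnu": "1111010100-1-0001", "land_use": "JEON", "area": 660.0}
    print(cadastral_to_planning(sample))
```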

A Popularity-driven Cache Management and its Performance Evaluation in Meta-search Engines (메타 검색 엔진을 위한 인기도 기반 캐쉬 관리 및 성능 평가)

  • Hong, Jin-Seon;Lee, Sang-Ho
    • Journal of KIISE:Databases / v.29 no.2 / pp.148-157 / 2002
  • Caching in meta-search engines can improve the response time of users' requests. We describe the cache scheme in our meta-search engine in terms of its architecture and operational flow. In particular, we propose a popularity-driven cache algorithm that uses the popularity of queries to determine which cached data to purge. The popularity is a value that represents the normalized occurrence frequency of a user query. This paper presents how to collect popular queries and how to calculate query popularities. An empirical performance evaluation of popularity-driven caching against the traditional schemes (i.e., least recently used (LRU) and least frequently used (LFU)) has been carried out on a collection of real data. In almost all cases, the proposed replacement policy outperforms LRU and LFU. A minimal sketch of such a policy follows.
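A minimal sketch of a popularity-driven replacement policy as the abstract describes it, with popularity taken as the normalized occurrence frequency of a query; the class name, capacity handling, and example queries are illustrative:

```python
# Sketch: evict the cached entry whose query has the lowest popularity,
# where popularity = occurrence count / total queries observed.
from collections import Counter

class PopularityCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}        # query -> cached result
        self.freq = Counter()  # raw occurrence counts of all queries seen
        self.total = 0         # total number of queries observed

    def popularity(self, query: str) -> float:
        return self.freq[query] / self.total if self.total else 0.0

    def get(self, query: str):
        self.freq[query] += 1  # every lookup updates the popularity statistics
        self.total += 1
        return self.cache.get(query)

    def put(self, query: str, result) -> None:
        if query not in self.cache and len(self.cache) >= self.capacity:
            # Purge the cached entry with the lowest popularity.
            victim = min(self.cache, key=self.popularity)
            del self.cache[victim]
        self.cache[query] = result

cache = PopularityCache(capacity=2)
for q in ["seoul", "seoul", "busan", "daegu"]:
    if cache.get(q) is None:
        cache.put(q, f"results for {q}")
print(sorted(cache.cache))  # "seoul" survives; the least popular entry is purged
```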

Wearable Approach of ECG Monitoring System for Wireless Tele-Home Care Application

  • Kew, Hsein-Ping;Noh, Yun-Hong;Jeong, Do-Un
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.05a / pp.337-340 / 2009
  • Wireless tele-home-care applications open new possibilities for ECG (electrocardiogram) monitoring with wearable biomedical sensors, so the continued development of highly convenient ECG monitoring systems for high-risk cardiac patients is essential. This paper describes a wearable approach to monitoring a person's ECG. A wearable belt-type ECG electrode with integrated electronics has been developed and has proven the long-term robustness of all electrical components. The measured ECG signal is transmitted via an ultra-low-power wireless sensor node. ECG signals carry a lot of clinical information for a cardiologist, especially the R-peaks. R-peak detection generally uses a fixed threshold value, which introduces errors due to motion artifacts and changes in signal amplitude; a variable threshold method is used instead, which detects R-peaks more accurately and efficiently. To evaluate the performance, R-peak detection is carried out on the MIT-BIH databases and on long-term real-time ECG recordings. This concept makes it possible to follow up critical patients from their homes and to detect early the rare occurrences of cardiac arrhythmia. A simplified sketch of variable-threshold R-peak detection follows.
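A simplified sketch of variable-threshold R-peak detection, assuming the threshold tracks a fraction of the largest recent amplitude; the window length, fraction, and refractory period are illustrative choices, not the paper's parameters:

```python
# Sketch: the threshold adapts to the recent signal amplitude, so it is
# robust to the size changes that break a fixed threshold.
import numpy as np

def detect_r_peaks(ecg: np.ndarray, fs: int,
                   frac: float = 0.6, refractory_s: float = 0.2) -> list:
    """Return sample indices of detected R-peaks."""
    peaks = []
    refractory = int(refractory_s * fs)  # minimum distance between beats
    window = int(2 * fs)                 # look back ~2 s for amplitude
    for i in range(1, len(ecg) - 1):
        recent = ecg[max(0, i - window):i + 1]
        threshold = frac * recent.max()  # variable threshold
        is_local_max = ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]
        far_enough = not peaks or i - peaks[-1] > refractory
        if is_local_max and ecg[i] > threshold and far_enough:
            peaks.append(i)
    return peaks

# Toy signal: 1 Hz spikes on noise, sampled at 100 Hz.
fs = 100
t = np.arange(0, 5, 1 / fs)
ecg = 0.05 * np.random.randn(len(t))
ecg[::fs] += 1.0  # synthetic "R-peaks" once per second
print(detect_r_peaks(ecg, fs))
```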


Resolution Conversion of SAR Target Images Using Conditional GAN (Conditional GAN을 이용한 SAR 표적영상의 해상도 변환)

  • Park, Ji-Hoon;Seo, Seung-Mo;Choi, Yeo-Reum;Yoo, Ji Hee
    • Journal of the Korea Institute of Military Science and Technology / v.24 no.1 / pp.12-21 / 2021
  • For successful automatic target recognition (ATR) with synthetic aperture radar (SAR) imagery, the SAR target images in the database should have a resolution identical or highly similar to that of the images collected from SAR sensors. However, it is time-consuming or infeasible to construct multiple databases with different resolutions for each operating SAR system. In this paper, an approach for resolution conversion of SAR target images is proposed based on a conditional generative adversarial network (cGAN). First, a number of pairs of SAR target images at two different resolutions are obtained via SAR simulation and used to train the cGAN model. The model then generates SAR target images whose resolution is converted from the original one. A similarity analysis is performed to validate the reliability of the generated images. The cGAN model is further applied to measured MSTAR SAR target images to estimate its potential for real applications. A heavily simplified sketch of such a model appears below.
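A heavily simplified sketch of the training setup such a model might use, assuming a pix2pix-style conditional GAN objective (adversarial plus L1) over paired source-/target-resolution chips; the tiny networks, loss weight, and tensor shapes are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a source-resolution chip toward the target resolution's appearance."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores (condition, candidate) pairs, PatchGAN-style."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))

    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

src = torch.rand(4, 1, 64, 64)  # stand-in for source-resolution SAR chips
tgt = torch.rand(4, 1, 64, 64)  # paired target-resolution chips (simulated)

# Discriminator step: tell real pairs from generated pairs.
fake = G(src).detach()
pred_real, pred_fake = D(src, tgt), D(src, fake)
d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
         bce(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool D while staying close to the paired target (L1).
fake = G(src)
pred_fake = D(src, fake)
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake, tgt)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```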

Enhancing the Text Mining Process by Implementation of Average-Stochastic Gradient Descent Weight Dropped Long-Short Memory

  • Annaluri, Sreenivasa Rao;Attili, Venkata Ramana
    • International Journal of Computer Science & Network Security / v.22 no.7 / pp.352-358 / 2022
  • Text mining is an important process for analyzing data collected from different sources such as videos, audio, social media, and so on. Tools like Natural Language Processing (NLP) are mostly used in real-time applications. In earlier research, text mining approaches were implemented using long short-term memory (LSTM) networks. In this paper, text mining is performed using average stochastic gradient descent weight-dropped (AWD) LSTM techniques to obtain better accuracy and performance. The proposed model is demonstrated on the internet movie database (IMDB) reviews. The Python language was used to implement the proposed model because of its adaptability and flexibility when dealing with massive data sets/databases. The results show that the proposed LSTM plus weight-drop plus embedding model reached an accuracy of 88.36%, compared with 85.64% for the previous AWD-LSTM models, and far better than the 85.16% accuracy obtained by the plain LSTM model. Finally, the loss decreased from 0.341 to 0.299 with the proposed model. A minimal sketch of the weight-drop mechanism follows.
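A minimal sketch of the weight-drop (DropConnect) mechanism that distinguishes AWD-LSTM from a plain LSTM, with a hand-rolled cell for clarity; the embedding, sizes, and two-class head are illustrative assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTM(nn.Module):
    """One-layer LSTM whose recurrent weights get DropConnect per sequence."""
    def __init__(self, input_size, hidden_size, weight_dropout=0.5):
        super().__init__()
        self.hidden_size = hidden_size
        self.weight_dropout = weight_dropout
        self.w_ih = nn.Linear(input_size, 4 * hidden_size)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)

    def forward(self, x):  # x: (batch, seq_len, input_size)
        batch = x.size(0)
        h = x.new_zeros(batch, self.hidden_size)
        c = x.new_zeros(batch, self.hidden_size)
        # DropConnect: zero random entries of the hidden-to-hidden weight
        # matrix, one mask per forward pass (only while training).
        w_hh = F.dropout(self.w_hh, self.weight_dropout, self.training)
        for t in range(x.size(1)):
            gates = self.w_ih(x[:, t]) + h @ w_hh.t()
            i, f, g, o = gates.chunk(4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
        return h  # final hidden state

# Hypothetical sentiment classifier over IMDB-like token ids.
emb = nn.Embedding(5000, 64)
rnn = WeightDropLSTM(64, 128, weight_dropout=0.5)
head = nn.Linear(128, 2)                  # positive / negative review
tokens = torch.randint(0, 5000, (8, 20))  # stand-in for a review batch
print(head(rnn(emb(tokens))).shape)       # torch.Size([8, 2])
```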

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.109-130 / 2011
  • One of the major problems in data mining is the size of the data, as most data sets have huge volumes these days. Streams of data are normally accumulated into data storages or databases: transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are simply buried unused inside huge data storages because of their size; others are lost as soon as they are created because, for many reasons, they are never saved. How to use such large data sets, and how to use data on a stream efficiently, are challenging questions in data mining. Stream data is a data set that is accumulated continuously into storage from a data source, and in many cases its size becomes increasingly large over time. Mining information from this massive data takes many resources, such as storage, money, and time; these characteristics make it difficult and expensive to store all the stream data accumulated over time. On the other hand, if one mines only recent or partial data, valuable information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of rule sets over time. A rule set is mined from a data set in the stream and accumulated into a master rule set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it takes much less storage space than the traditional method of saving the whole data set. Another advantage is that the accumulated rule set is used directly as a prediction model, so a prompt response to user requests is possible at any time; this makes real-time decision making possible, which is the greatest advantage of this method. Based on the theory of ensemble approaches, a combination of many different models can produce a better prediction model; moreover, the consolidated rule set covers all the data, while a traditional sampling approach covers only part of it. This study uses stock market data, which is heterogeneous in that its characteristics vary over time: the indexes can fluctuate whenever an event influences the market, so the variance of the values in each variable is large compared with a homogeneous data set, and prediction is naturally much more difficult. This study tests two general mining approaches and compares their prediction performance with that of the suggested method. The first approach induces a rule set from the recent data to predict new data; the second induces a rule set, every time a prediction is needed, from all the data accumulated from the beginning. Neither performs as well as the accumulated rule set. The study also experiments with different prediction models: the first builds a prediction model only with the more important rule sets, while the second uses all the rule sets, assigning weights to the rules based on their performance. The second approach shows better performance. The experiments show that the suggested method can be an efficient approach for mining information and patterns from stream data. One limitation is that its application here is bound to stock market data; more dynamic real-time stream data sets are desirable for applying this method. Another open issue is that, as the number of rules grows over time, special rules such as redundant or conflicting rules must be managed efficiently. A toy sketch of the accumulation scheme appears below.
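A toy sketch of the accumulation scheme, assuming an invented threshold-rule miner and accuracy-based weights; only the overall mine-accumulate-vote flow reflects the abstract:

```python
def mine_rules(batch):
    """Toy miner: one threshold rule per feature index."""
    n_features = len(batch[0][0])
    rules = []
    for idx in range(n_features):
        threshold = sum(x[idx] for x, _ in batch) / len(batch)
        rules.append({"feature": idx, "threshold": threshold})
    return rules

def rule_predicts(rule, x):
    return 1 if x[rule["feature"]] > rule["threshold"] else 0

def accuracy(rule, batch):
    return sum(rule_predicts(rule, x) == y for x, y in batch) / len(batch)

master = []  # accumulated (rule, weight) pairs: far smaller than the raw stream

def consume_batch(batch):
    """Mine rules from one stream batch and accumulate them with weights."""
    for rule in mine_rules(batch):
        master.append((rule, accuracy(rule, batch)))  # weight = performance

def predict(x):
    """Weighted vote over every rule accumulated so far."""
    vote = sum(w * (1 if rule_predicts(r, x) else -1) for r, w in master)
    return 1 if vote > 0 else 0

consume_batch([((0.2, 0.9), 1), ((0.8, 0.1), 0)])  # one batch from the stream
print(predict((0.1, 0.95)))                        # -> 1
```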

Development of a 3-D Immersion Type Training Simulator

  • Jung, Young-Beom;Park, Chang-Hyun;Jang, Gil-Soo
    • KIEE International Transactions on Power Engineering / v.4A no.4 / pp.171-177 / 2004
  • In the current information-oriented society, many people use PCs and depend on the databases provided by network servers. However, online data can be lost during a blackout, and power failures can greatly affect power quality. This has led to the trend of using interruption-free live-line work when trouble occurs in a power system. However, 83% of electric shock accidents involve laborers performing interruption-free live-line work. Education and training problems have been pinpointed in the interruption-free method, yet there are few instructors to implement the necessary training, and trainees undergo a short training period of just 4 weeks. In this paper, to develop a method with no restrictions on time and place and to reduce the waste of materials, immersion-type virtual reality (VR) technology is used. The users of a 3D immersion-type VR training system can interact with the system by performing the equivalent actions in a safe environment, so the system is valuable for such dangerous work as 'interruption-free live-line work exchanging a COS (Cut-Out Switch)'. In this program, the user carries out work according to instructions displayed through the window and speaker, and cannot proceed until each part of the task is completed in the proper sequence. Workers using this system can use their hands and move their viewpoint as in a real environment, although current VR technology cannot reproduce all parts and senses of a real body. Despite these weak points, considering the trends of improvement in electrical devices and communication technology, 3D graphic VR applications have high potential.

A Recovery Scheme of Single Node Failure using Version Caching in Database Sharing Systems (데이타베이스 공유 시스템에서 버전 캐싱을 이용한 단일 노드 고장 회복 기법)

  • 조행래;정용석;이상호
    • Journal of KIISE:Databases / v.31 no.4 / pp.409-421 / 2004
  • A database sharing system (DSS) couples a number of computing nodes for high-performance transaction processing, and each node in the DSS shares the database at the disk level. In case of node failures in a DSS, database recovery algorithms are required to restore the database to a consistent state. A database recovery process in a DSS takes rather longer than in single database systems, since it must merge the discrete log records of several nodes and perform REDO tasks using the merged log records. In this paper, we propose a two-version caching (2VC) algorithm that improves upon the cache fusion algorithm introduced in Oracle 9i Real Application Cluster (ORAC). The 2VC algorithm achieves faster database recovery by eliminating the use of merged log records in the case of a single node failure. Furthermore, it improves the performance of normal transaction processing by reducing the unnecessary disk force overhead that occurs in ORAC. A conceptual sketch of the two-version idea follows.
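A conceptual sketch of the two-version idea, assuming each node's buffer retains one older committed version of every page it caches so that a surviving node can supply pages after a single node fails; all structures and names here are illustrative, not the paper's algorithm:

```python
# Sketch: with an older committed version cached elsewhere, a single node
# failure can be handled without merging per-node log records.

class NodeCache:
    def __init__(self, name):
        self.name = name
        self.pages = {}  # page_id -> (current_version, previous_version)

    def write(self, page_id, value):
        current = self.pages.get(page_id, (None, None))[0]
        self.pages[page_id] = (value, current)  # retain one older version

    def committed_version(self, page_id):
        return self.pages.get(page_id, (None, None))[1]

def recover(failed: "NodeCache", survivors: list) -> dict:
    """Rebuild the failed node's pages from versions cached at survivors."""
    restored = {}
    for page_id in failed.pages:  # stand-in for "pages the failed node held"
        for node in survivors:
            version = node.committed_version(page_id)
            if version is not None:
                restored[page_id] = version
                break
    return restored

a, b = NodeCache("A"), NodeCache("B")
b.write("p1", "v1"); b.write("p1", "v2")  # B keeps v1 as the older version
a.write("p1", "v2")
print(recover(a, [b]))                    # {'p1': 'v1'}: no log merging needed
```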

An Indexing Technique of Moving Point Objects using Projection (추출 연산을 활용한 이동 점 객체 색인 기법)

  • 정영진;장승연;안윤애;류근호
    • Journal of KIISE:Databases / v.30 no.1 / pp.52-63 / 2003
  • Spatiotemporal moving objects change their positions and/or shapes over time in the real world. As most moving-object indices are based on the R-tree, they inherit the R-tree's defects of dead space and overlap, and some of them even amplify these defects. In this paper, to solve these problems, we propose the MPR-tree (Moving Point R-tree), which uses a projection operation and searches more effectively than existing moving-point indices on time-slice queries and spatiotemporal range queries. The MPR-tree connects the successive positions of the same moving object over time using a linked list, so it processes combined queries about trajectories effectively. Experiments comparing the MPR-tree with existing moving-object indices confirm the usefulness of the projection operation for processing moving-object queries and for practical use of space. The proposed MPR-tree should be useful in LBS, car management using GPS, and navigation systems. A small sketch of the trajectory-linking idea follows.
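A small sketch of the trajectory-linking idea, where each stored position keeps a pointer to the same object's previous position; the dictionary index below is a stand-in for the R-tree structure the paper builds on:

```python
class PositionEntry:
    def __init__(self, obj_id, x, y, t, prev=None):
        self.obj_id, self.x, self.y, self.t = obj_id, x, y, t
        self.prev = prev  # linked list over time

class MovingPointIndex:
    def __init__(self):
        self.latest = {}  # obj_id -> most recent entry

    def insert(self, obj_id, x, y, t):
        entry = PositionEntry(obj_id, x, y, t, prev=self.latest.get(obj_id))
        self.latest[obj_id] = entry  # an R-tree would also index (x, y, t)

    def trajectory(self, obj_id):
        """Walk the linked list backwards to reconstruct the trajectory."""
        entry, path = self.latest.get(obj_id), []
        while entry is not None:
            path.append((entry.x, entry.y, entry.t))
            entry = entry.prev
        return list(reversed(path))

idx = MovingPointIndex()
for t, (x, y) in enumerate([(0, 0), (1, 0), (2, 1)]):
    idx.insert("car-7", x, y, t)
print(idx.trajectory("car-7"))  # [(0, 0, 0), (1, 0, 1), (2, 1, 2)]
```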

Generalization of Window Construction for Subsequence Matching in Time-Series Databases (시계열 데이터베이스에서의 서브시퀀스 매칭을 위한 윈도우 구성의 일반화)

  • Moon, Yang-Sae;Han, Wook-Shin;Whang, Kyu-Young
    • Journal of KIISE:Databases / v.28 no.3 / pp.357-372 / 2001
  • In this paper, we present the concept of generalization in constructing windows for subsequence matching and propose a new subsequence matching method, GeneralMatch, based on this generalization. The earlier work of Faloutsos et al. (FRM in short) causes many false alarms due to the lack of the point-filtering effect. DualMatch, which was proposed by the authors, improves performance significantly over FRM by exploiting the point-filtering effect, but it has the problem of a smaller maximum window size (half that of FRM) for a given minimum query length. GeneralMatch, an improvement of DualMatch, offers the advantages of both methods: it can use large windows like FRM and, at the same time, can exploit the point-filtering effect like DualMatch. GeneralMatch divides data sequences into J-sliding windows (generalized sliding windows) and the query sequence into J-disjoint windows (generalized disjoint windows). We formally prove that GeneralMatch is correct, i.e., it incurs no false dismissal. We also prove that, given the minimum query length, there is a maximum bound on the window size that guarantees the correctness of GeneralMatch. We then propose a method of determining the value of J that minimizes the number of page accesses. Experimental results on real stock data show that, for low selectivities ($10^{-6}~10^{-4}$), GeneralMatch improves performance by 114% over DualMatch and by 998% over FRM on average; for high selectivities, by 46% over DualMatch and by 65% over FRM on average. An illustrative sketch of generalized window construction follows.
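An illustrative sketch of generalized window construction, assuming J-sliding windows start at every J-th position of the data sequence (the paper's exact J-disjoint construction for the query is more subtle and is not reproduced here):

```python
def j_sliding_windows(seq, w, J):
    """Windows of length w whose start positions are 0, J, 2J, ..."""
    return [seq[s:s + w] for s in range(0, len(seq) - w + 1, J)]

data = list(range(10))
print(j_sliding_windows(data, w=4, J=1))  # FRM-style sliding windows
print(j_sliding_windows(data, w=4, J=4))  # disjoint windows (DualMatch-style)
print(j_sliding_windows(data, w=4, J=2))  # an intermediate choice of J
```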
