• Title/Summary/Keyword: Network Computer

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob; Rim, BeanBonyka; Sung, Nak-Jun; Hong, Min
    • Journal of Internet Computing and Services / v.21 no.4 / pp.17-23 / 2020
  • Biometric information, which measures characteristics of the human body, has attracted great attention as a highly reliable security technology because there is no fear of theft or loss. Among such biometric information, fingerprints are mainly used in fields such as identity verification and identification. If a fingerprint image presents a problem that makes authentication difficult, such as a wound, a wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to it. By implementing artificial intelligence software that discriminates fingerprint images containing cuts and wrinkles, it becomes easy to check whether such defects are present, and by selecting an appropriate algorithm the fingerprint image can be easily improved. In this study, we built a database of 17,080 fingerprints by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open dataset, and all fingerprints of 98 Korean students. Criteria were established to determine whether images in the database contain injuries or wrinkles, and the data were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2, and the data of the 98 Korean students were used as a validation set. Using the constructed dataset, five CNN-based architectures were implemented: a classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3. We then investigated which model performed best at this discrimination task. Among the five architectures, ResNet50 showed the best performance, at 81.51%.
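
As a concrete illustration of the classification step described above, the sketch below fine-tunes a pretrained ResNet50 into a binary damaged/clean fingerprint screen. It assumes PyTorch and torchvision; the folder layout, class names, and hyperparameters are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: fine-tuning ResNet50 for binary scar/wrinkle screening.
# The dataset layout and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # fingerprints are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical folder layout: fingerprints/train/{damaged,clean}/*.png
train_set = datasets.ImageFolder("fingerprints/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # damaged vs. clean

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```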

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum; Yun, Unil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.101-107 / 2014
  • With the development of online services, databases have recently shifted from static structures to dynamic stream structures. Earlier data mining techniques were used as decision-making tools for tasks such as establishing marketing strategies and DNA analysis. However, emerging areas such as sensor networks, robotics, and artificial intelligence require the capability to analyze real-time data more quickly. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of a database or on each of its transactions, instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy Counting and hMiner. When Lossy Counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, we can obtain the latest mining results reflecting real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the earlier algorithm, Lossy Counting, and the more recent hMiner. As criteria for our performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is evaluated. Lastly, we show how stably the two algorithms perform on databases featuring gradually increasing numbers of items. In the evaluations of mining time and transaction processing, hMiner is faster than Lossy Counting: since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy Counting stores them in a lattice and must traverse multiple nodes to reach a candidate pattern. On the other hand, hMiner shows worse maximum memory usage than Lossy Counting. hMiner must keep complete information for each candidate frequent pattern in its hash buckets, while Lossy Counting reduces this information through the lattice method, whose storage can share items concurrently included in multiple patterns, making its memory usage more efficient. However, hMiner is more efficient than Lossy Counting in the scalability evaluation, for the following reasons: as the number of items increases, the number of shared items decreases, weakening Lossy Counting's memory efficiency, and as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Hence, their data structures need to be made more efficient so that they can also be utilized in resource-constrained environments such as wireless sensor networks (WSNs).
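
For reference, the sketch below shows the classic Lossy Counting scheme the comparison builds on, simplified from full transaction patterns down to single stream items: counts live in buckets of width ceil(1/epsilon), and entries are pruned at every bucket boundary. The stream and thresholds are made-up examples.

```python
# Sketch of classic Lossy Counting over a stream of single items.
import math

def lossy_counting(stream, epsilon):
    w = math.ceil(1 / epsilon)         # bucket width
    counts = {}                        # item -> [count, delta (max missed)]
    n = 0
    for item in stream:
        n += 1
        b_current = math.ceil(n / w)   # current bucket id
        if item in counts:
            counts[item][0] += 1
        else:
            counts[item] = [1, b_current - 1]
        if n % w == 0:                 # bucket boundary: prune weak entries
            for key in [k for k, (c, d) in counts.items() if c + d <= b_current]:
                del counts[key]
    return counts, n

# Items with count >= (support - epsilon) * n are reported as frequent.
counts, n = lossy_counting(list("abracadabra") * 100, epsilon=0.01)
support = 0.2
frequent = {k for k, (c, d) in counts.items() if c >= (support - 0.01) * n}
print(sorted(frequent))
```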

On the Improvement of Precision in Gravity Surveying and Correction, and a Dense Bouguer Anomaly in and Around the Korean Peninsula (한반도 일원의 중력측정 및 보정의 정밀화와 고밀도 부우게이상)

  • Shin, Young-Hong; Yang, Chul-Soo; Ok, Soo-Suk; Choi, Kwang-Sun
    • Journal of the Korean Earth Science Society / v.24 no.3 / pp.205-215 / 2003
  • A precise and dense Bouguer anomaly is one of the most important data sets for improving our knowledge of the environment in geophysics and physical geodesy. Besides a precise absolute gravity station network, two aspects must be considered: improving the precision of gravity measurement and its correction, and increasing the density of measurements in both number and distribution. For precise positioning, we tested how GPS could properly be used in gravity measurement and concluded that a 5-minute GPS measurement is effective when DGPS is used with two geodetic GPS receivers on a baseline shorter than 40 km; in this case, a precise geoid model such as PNU95 should be used. By applying this method, we can reduce cost, time, and the number of surveyors, while also improving quality. Two kinds of computer programs were developed, one to correct crossover errors and one to calculate terrain effects more precisely. Repeated measurements at the same stations during gravity surveying help not only to correct spring drift but also to treat the results statistically through network adjustment, so blunders from various causes can be found easily and the quality of the measurements can be estimated. Recent developments in computer technology, digital elevation data, and precise positioning also allow us to improve the Bouguer anomaly through more precise terrain correction. Gravity data from various sources, such as land gravity data (by Choi, NGI, etc.), marine gravity data (by NORI), a Bouguer anomaly map of North Korea, Japanese gravity data, altimetry satellite data, and the EGM96 geopotential model, were collected and processed to obtain a precise and dense Bouguer anomaly in and around the Korean Peninsula.
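
For readers outside gravimetry, the sketch below shows the simple Bouguer anomaly computation that such surveys feed into: observed gravity minus GRS80 normal gravity, plus the free-air correction, minus the Bouguer slab correction. Terrain correction is omitted and the station values are hypothetical; the GRS80 and correction constants are the standard ones.

```python
# Simple Bouguer anomaly sketch (terrain correction omitted).
import math

def normal_gravity_grs80(lat_deg):
    """GRS80 normal gravity on the ellipsoid (Somigliana formula), in mGal."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return 978032.67715 * (1 + 0.001931851353 * s2) / math.sqrt(1 - 0.0066943800229 * s2)

def simple_bouguer_anomaly(g_obs_mgal, lat_deg, height_m, density=2.67):
    free_air = 0.3086 * height_m                  # free-air correction, mGal
    bouguer_slab = 0.04193 * density * height_m   # infinite-slab correction, mGal
    return g_obs_mgal - normal_gravity_grs80(lat_deg) + free_air - bouguer_slab

# Hypothetical station: observed gravity 979850 mGal at 36.5N, 120 m elevation
print(round(simple_bouguer_anomaly(979850.0, 36.5, 120.0), 2), "mGal")
```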

The current state and prospects of travel business development under the COVID-19 pandemic

  • Tkachenko, Tetiana; Pryhara, Olha; Zatsepina, Nataly; Bryk, Stepan; Holubets, Iryna; Havryliuk, Alla
    • International Journal of Computer Science & Network Security / v.21 no.12spc / pp.664-674 / 2021
  • The relevance of this research is determined by the negative impact of the COVID-19 pandemic on current trends and dynamics of world tourism development. This article aims to identify patterns in the development of the modern tourist market and to analyze problems and prospects for development in the context of the COVID-19 pandemic. Materials and methods. General scientific research methods are used in this work: analysis, synthesis, comparison, and analysis of statistical data. Analyzing the viewpoints of foreign and domestic authors on the international tourist market allowed us to substantiate the actual directions of tourism development under the negative factors connected with the spread of the new coronavirus infection COVID-19. Economic-statistical, abstract-logical, and economic-mathematical research methods were used during the study and data processing. Results. The current state of the tourist market was analyzed by world region. Tourism was found to be one of the sectors most affected by COVID-19: by the end of 2020, the total number of tourist arrivals in the world had decreased by 74% compared to the same period in 2019. The consequence of this decline was a loss of total global tourism revenue of $1.3 trillion by the end of 2020. 27% of all destinations were completely closed to international tourism, and by the end of 2020 the economy of international tourism had shrunk by about 80%. In 2020, 98 million fewer people traveled worldwide (-83%) relative to the same period of the previous year. Tourism was hit hardest by the pandemic in the Asia-Pacific region, where travel restrictions were strictest; international arrivals there fell by 84% (300 million). The Middle East and Africa recorded declines of 75 and 70 percent, respectively. Despite a small and short-lived recovery in the summer of 2020, Europe lost 71% of its tourist flow, the largest drop in absolute terms compared with 2019 (500 million). Foreign arrivals in North and South America also declined. A significant decrease in tourist flows leads to massive job losses and a sharp decline in foreign exchange earnings and taxes, which limits the ability of states to support the tourism industry. Three possible scenarios for the tourist industry's exit from the crisis, reflecting the most probable changes in monthly tourist flows, are considered. The characteristics of respondents from Ukraine, Germany, and the USA and their attitudes toward travel are presented by gender, age, education level, professional status, and monthly income. About 57% of respondents from Ukraine, Poland, and the United States were planning a tourist trip in 2021; people with higher or secondary education were more willing to plan such a trip. The results of the empirical study confirm that interest in domestic tourism increased significantly in 2021. A regression model of the number of domestic tourist trips, using Ukraine as an example, with a time trend (t) and seasonal variation, Tûr_t = 7288.498 - 20.58t - 410.88S_t (where S_t denotes the seasonal component), was used to produce a forecast for 2020; once tourist trips stabilize after the pandemic, the model can be used to produce forecasts for any country. Discussion. We emphasize the seriousness of the COVID-19 pandemic and the fact that many experts and scientists expect only a long-term recovery of the tourism industry.
In our opinion, governments need to refocus on domestic tourism and address infrastructure development, the search for new niches and formats, and the formation of new package deals in the new, domestic segment: development of new products (tourist routes, exhibitions, sightseeing programs, special post-COVID-19 rehabilitation programs in sanatoriums, etc.) and the creation of individual offers for different target audiences. Conclusions. The identified trends are thus associated with a decrease in tourist flows and the pandemic's negative impact on employment and income from tourism activities. International tourism will need two to four years to return to its 2019 level.
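
A model of the general form cited above, a linear time trend plus seasonal dummies fitted by ordinary least squares, can be sketched as follows. The data are synthetic and the coefficients are not those of the paper.

```python
# Trend + seasonal-dummy regression sketch with synthetic quarterly data.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1, 41)                         # 40 quarters of hypothetical data
season = (t - 1) % 4                         # quarter index 0..3
trips = 7000 - 20 * t + np.array([0, 300, 800, 100])[season] \
        + rng.normal(0, 50, t.size)

# Design matrix: intercept, trend, and 3 seasonal dummies (Q1 as baseline)
X = np.column_stack([np.ones_like(t), t]
                    + [(season == q).astype(float) for q in (1, 2, 3)])
beta, *_ = np.linalg.lstsq(X, trips, rcond=None)
print("intercept, trend, Q2..Q4 effects:", np.round(beta, 2))

# Forecast the next quarter (t = 41, which falls in Q1, the baseline season)
x_next = np.array([1, 41, 0, 0, 0])
print("forecast:", round(x_next @ beta, 1))
```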

A Passport Recognition and Face Verification Using Enhanced Fuzzy ART Based RBF Network and PCA Algorithm (개선된 퍼지 ART 기반 RBF 네트워크와 PCA 알고리즘을 이용한 여권 인식 및 얼굴 인증)

  • Kim, Kwang-Baek
    • Journal of Intelligence and Information Systems / v.12 no.1 / pp.17-31 / 2006
  • In this paper, passport recognition and face verification methods are proposed that can automatically recognize passport codes and detect forged passports, improving the efficiency and systematic control of immigration management. Adjusting the slant is very important for character recognition and face verification, since slanted passport images cause various unwanted effects in the recognition of individual codes and faces. Therefore, after smearing the passport image, the longest extracted string of characters is selected, and the angle is adjusted using the slope of the horizontal line that connects the centers of thickness of the left and right parts of the string. Passport codes are extracted using the Sobel operator, horizontal smearing, and an 8-neighborhood contour tracking algorithm. The code strings are binarized by applying an iterative binarization method to the extracted code-string area, the strings are restored by applying a CDM mask to the binary string area, and the individual codes are then extracted by the 8-neighborhood contour tracking algorithm. The proposed enhanced fuzzy ART algorithm, which dynamically controls the vigilance parameter, is combined with a fuzzy logic connection operator and applied to the middle layer of the RBF network. The face is verified by measuring the similarity between the feature vector of the facial image from the passport and that of the facial image from the database, both constructed with the PCA algorithm. In tests using a forged passport and passports with slanted images, the proposed method proved effective in recognizing passport codes and verifying facial images.
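
The final verification step, comparing PCA feature vectors of the passport face and the database face, might look like the sketch below. It uses scikit-learn's PCA; the gallery, images, and similarity threshold are synthetic stand-ins rather than the paper's data.

```python
# PCA (eigenface-style) feature comparison sketch with synthetic images.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
gallery = rng.random((200, 64 * 64))   # 200 hypothetical faces, flattened

pca = PCA(n_components=50)
pca.fit(gallery)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

passport_face = rng.random(64 * 64)
database_face = passport_face + rng.normal(0, 0.05, 64 * 64)  # same person, noisy

f1 = pca.transform(passport_face.reshape(1, -1))[0]
f2 = pca.transform(database_face.reshape(1, -1))[0]
sim = cosine_similarity(f1, f2)
print("similarity:", round(sim, 3), "verified:", sim > 0.8)  # threshold illustrative
```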

Automatic Interpretation of F-18-FDG Brain PET Using Artificial Neural Network: Discrimination of Medial and Lateral Temporal Lobe Epilepsy (인공신경회로망을 이용한 뇌 F-18-FDG PET 자동 해석: 내.외측 측두엽간질의 감별)

  • Lee, Jae-Sung; Lee, Dong-Soo; Kim, Seok-Ki; Park, Kwang-Suk; Lee, Sang-Kun; Chung, June-Key; Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.3 / pp.233-240 / 2004
  • Purpose: We developed a computer-aided classifier using an artificial neural network (ANN) to discriminate the cerebral metabolic patterns of medial and lateral temporal lobe epilepsy (TLE). Materials and Methods: We studied brain F-18-FDG PET images of 113 epilepsy patients surgically and pathologically proven as medial TLE (left 41, right 42) or lateral TLE (left 14, right 16). PET images were spatially transformed onto a standard template and normalized to the mean counts of cortical regions. Asymmetry indices for 17 predefined regions mirrored about the hemispheric midline, together with those for the medial and lateral temporal lobes, were used as input features for the ANN. The ANN classifier was composed of 3 independent multi-layered perceptrons (1 for left/right lateralization and 2 for medial/lateral discrimination) and trained to interpret metabolic patterns and produce one of 4 diagnoses (L/R medial TLE or L/R lateral TLE). Eight randomly selected images from each group were used to train the ANN classifier, and the remaining 51 images were used as test sets. The accuracy of the ANN diagnosis was estimated by averaging the agreement rates of 50 independent trials and compared to that of nuclear medicine experts. Results: The accuracy in lateralization was 89% by the human experts and 90% by the ANN classifier. Overall accuracy in localization of epileptogenic zones by the ANN classifier was 69%, which was comparable to that of the human experts (72%). Conclusion: We conclude that the ANN classifier performed as well as the human experts and could be a potentially useful supporting tool for the differential diagnosis of TLE.
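
A minimal sketch of this setup, asymmetry-index features feeding a small multilayer perceptron, is given below. The data are synthetic, and the asymmetry-index formula AI = 2(L - R)/(L + R) is a common convention assumed here rather than taken from the paper.

```python
# Asymmetry-index features + MLP lateralization sketch on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def asymmetry_index(left, right):
    return 2 * (left - right) / (left + right)

# Hypothetical normalized regional counts: 100 patients, 19 mirrored regions
left = rng.uniform(0.8, 1.2, (100, 19))
right = rng.uniform(0.8, 1.2, (100, 19))
X = asymmetry_index(left, right)
y_side = (X.mean(axis=1) < 0).astype(int)   # toy label: 0 = left, 1 = right

lateralizer = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
lateralizer.fit(X, y_side)
print("training accuracy:", lateralizer.score(X, y_side))
```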

Efficient Deep Learning Approaches for Active Fire Detection Using Himawari-8 Geostationary Satellite Images (Himawari-8 정지궤도 위성 영상을 활용한 딥러닝 기반 산불 탐지의 효율적 방안 제시)

  • Sihyun Lee; Yoojin Kang; Taejun Sung; Jungho Im
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.979-995 / 2023
  • As wildfires are difficult to predict, real-time monitoring is crucial for a timely response. Geostationary satellite images are very useful for active fire detection because they can monitor a vast area with high temporal resolution (e.g., 2 min). Existing satellite-based active fire detection algorithms detect thermal outliers using threshold values based on statistical analysis of brightness temperature. However, the difficulty of establishing suitable thresholds hinders such threshold-based methods from detecting low-intensity fires and from achieving generalized performance. In light of these challenges, machine learning has emerged as a potential solution. Until now, relatively simple techniques such as random forest, vanilla convolutional neural networks (CNNs), and U-Net have been applied to active fire detection. Therefore, this study proposed an active fire detection algorithm using state-of-the-art (SOTA) deep learning techniques with data from the Advanced Himawari Imager and evaluated it over East Asia and Australia. The SOTA model was developed by applying EfficientNet and the Lion optimizer, and the results were compared with a model using the vanilla CNN structure. EfficientNet outperformed the CNN with F1-scores of 0.88 and 0.83 in East Asia and Australia, respectively. Performance improved further when weighted loss, equal sampling, and image augmentation techniques were used to address data imbalance, yielding F1-scores of 0.92 in East Asia and 0.84 in Australia. It is anticipated that timely responses facilitated by the SOTA deep learning-based approach to active fire detection will effectively mitigate the damage caused by wildfires.
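
One of the imbalance-handling steps mentioned above, a weighted loss that up-weights the rare fire pixels, together with the F1 metric reported, can be sketched as follows in PyTorch. The weight value, tensor shapes, and fire rate are illustrative assumptions; the EfficientNet model itself is not reproduced here.

```python
# Weighted BCE loss + F1 sketch for heavily imbalanced fire masks.
import torch
import torch.nn as nn

# Hypothetical batch: logits and binary fire masks for 8 image patches
logits = torch.randn(8, 1, 64, 64)
target = (torch.rand(8, 1, 64, 64) < 0.01).float()   # ~1% fire pixels

# pos_weight scales the positive (fire) term of BCE to counter imbalance
pos_weight = torch.tensor([100.0])                   # assumed, roughly 1/fire-rate
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
print("loss:", criterion(logits, target).item())

# F1 on the same batch, the metric reported above
pred = (torch.sigmoid(logits) > 0.5).float()
tp = (pred * target).sum()
precision = tp / pred.sum().clamp(min=1)
recall = tp / target.sum().clamp(min=1)
f1 = 2 * precision * recall / (precision + recall).clamp(min=1e-8)
print("F1:", f1.item())
```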

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.55-79 / 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to construct the GA's initial population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid component, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network, and some assumptions are made to implement the RLNCC effectively. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model minimizes the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market; for example, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively) and collection center 1 is opened while the others are closed, the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In numerical experiments, the proposed HGA approach and a conventional competing approach are compared using various performance measures. The competing approach is the GA approach of Yun (2013), which has no local search technique such as the IHCM used in the HGA approach. CPU time, optimal solution, and optimal setting are used as performance measures. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two RLNCC types were programmed in Visual Basic 6.0, and the computing environment was an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are: 10,000 generations in total, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs were made to eliminate the randomness of the HGA and GA searches.
Based on performance comparisons, network representations by opening/closing decision, and convergence processes for the two RLNCC types, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker in CPU time. Finally, the proposed HGA approach proved more efficient than the conventional GA approach on both RLNCC types, since the former has a local search process in addition to the GA search process, while the latter has the GA search alone. In future work, much larger RLNCCs will be tested to assess the robustness of our approach.
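
The hybrid scheme, a bit-string GA whose elite individual is refined by hill climbing each generation, can be sketched compactly as below. The fitness function is a toy open/close cost, not the paper's MIP objective, and the operators are simplified stand-ins for those cited.

```python
# Hybrid GA sketch: bit-string GA with hill climbing on the elite individual.
import random

N = 20                                   # chromosome length (open/close bits)
random.seed(0)

def fitness(bits):                       # toy cost: minimize weighted open bits
    return -sum(b * w for b, w in zip(bits, range(1, N + 1)))

def hill_climb(bits):
    best = bits[:]
    for i in range(N):                   # flip each bit, keep improvements
        cand = best[:]
        cand[i] ^= 1
        if fitness(cand) > fitness(best):
            best = cand
    return best

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    pop[0] = hill_climb(pop[0])          # local search refines the elite
    parents = pop[:10]
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        cut1, cut2 = sorted(random.sample(range(N), 2))  # two-point crossover
        child = a[:cut1] + b[cut1:cut2] + a[cut2:]
        if random.random() < 0.1:                        # random mutation
            child[random.randrange(N)] ^= 1
        children.append(child)
    pop = parents + children
print("best cost:", -fitness(max(pop, key=fitness)))
```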

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki; Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed increases to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To provide those services, reduced latency and high reliability are critical for real-time operation, on top of high data rates. Thus, 5G targets service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/㎢. In particular, intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication depend not only on high data rates but, above all, on low delay and high reliability for real-time services. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting indoor use. It is difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services because communication with many nodes overloads its processing. Basically, SDN, an architecture that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In such scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since centralized SDN structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. SDNs therefore need to be split at a certain scale to construct a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In such SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough, contributing less than 1 ms of delay, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, its correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50-250 m and vehicle speeds of 30-200 km/h in order to examine the network architecture that minimizes delay.
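
The delay budget discussed above can be sketched as a simple sum of the information-update cycle, round-trip delay, and SDN processing time, checked against the time a vehicle spends inside one small cell. All numeric values are illustrative assumptions within the simulation ranges stated.

```python
# Toy delay-budget sketch for the V2X scenario described above.
def total_delay_ms(update_cycle_ms, rtd_ms, sdn_processing_ms):
    return update_cycle_ms + rtd_ms + sdn_processing_ms

def cell_dwell_time_s(cell_radius_m, speed_kmh):
    """Time a vehicle spends crossing one small cell (diameter / speed)."""
    return (2 * cell_radius_m) / (speed_kmh / 3.6)

# Assumed values: 10 ms update cycle, 1 ms RTD (the 5G target), 5 ms SDN processing
print("total delay:", total_delay_ms(10, 1, 5), "ms")

# Worst case from the simulation ranges above: 50 m cell radius at 200 km/h
print("cell dwell:", round(cell_dwell_time_s(50, 200), 2), "s")
```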

Automated-Database Tuning System With Knowledge-based Reasoning Engine (지식 기반 추론 엔진을 이용한 자동화된 데이터베이스 튜닝 시스템)

  • Gang, Seung-Seok; Lee, Dong-Joo; Jeong, Ok-Ran; Lee, Sang-Goo
    • Proceedings of the Korean Information Science Society Conference / 2007.06a / pp.17-18 / 2007
  • Database tuning generally refers to a series of activities that make a database application run "faster" [1]. It is costly and time-consuming for a database administrator to master all the rules of thumb required for tuning and apply them to each situation, so complex services in which different applications are intertwined necessarily require automated database performance management and tuning. To address this, this paper proposes a system that presents automated database tuning principles based on a knowledge domain. Each database tuning theory serves as knowledge in the knowledge domain; the factors affecting performance are organized into objects and concepts, and tuning principles are derived through a reasoning system, so that a tuning methodology suited to the current situation can be applied quickly and easily. Academic research on automated database tuning spans several areas, for example Microsoft's AutoAdmin project [2], Oracle's SQL tuning architecture [3], COLT [4], DBA Companion [5], and SQUASH [6]. Classified by functional methodology, these optimization techniques divide broadly into design tuning, logical structure tuning, sentence tuning, SQL tuning, server tuning, and system/network tuning. Among these, SQL tuning relies on numerically determined, already existing information, so it is easy to express as a structured model and can readily accommodate conditions that change with diverse user requirements; we therefore focused on it when addressing performance problems. Following the processing stages of a database system, the objects composing the DBMS, their attributes, and their relationships are modeled. The database system is structured into three levels, Application / Query / DBMS, and in this paper the objects, attributes, relationships, and rules of thumb used in database tuning were analyzed and converted into knowledge that includes tuning principles. A tuning principle is a kind of golden rule for solving problems that occur in a database system, expressed as the facts and rules underlying the knowledge domain. A Fact expresses the modeled system as a single knowledge object of the knowledge domain, while a Rule expresses a tuning principle as knowledge based on Facts. Rules fall into two types, those predefined through system modeling and those used to infer tuning principles, and most Rules act as branches that select different solutions according to the input values. Users can infer tuning principles from the automatically generated Facts and Rules and apply them to the database system, and can also add situation-specific Facts and Rules manually through a GUI as required. To infer tuning principles in the knowledge domain, JESS, a Java-based reasoning engine, is used. JESS is an expert system using a scripting language [7], a type of reasoning engine that represents knowledge with declarative rules and performs inference over them. JESS's knowledge representation easily expresses and accommodates tuning principles, and its small footprint and fast inference make it suitable for tuning applications processed in real time. The main role of the knowledge-based module is to generate and store the new knowledge needed from the given database system model. To this end, Facts and Rules are expressed as triples, the basic unit of knowledge representation. A triple consists of the three elements Subject, Property, and Object, and most Facts and Rules take either the basic triple form or the combined form of a Condition part and an Action part, each built from triples. By expressing the objects, attributes, and relationships of the database system model in this way, the knowledge can function as the Facts and Rules of the reasoning engine. To implement and test the system, a web-based server-client architecture was assumed: the server consists of a Process Controller, Parser, Rule Database, and JESS Reasoning Engine, and the client consists of a Rule Manager Interface and a Result Viewer. The system's utility was judged by comparing database performance measures, such as execution times measured before and after applying tuning principles; the experiments showed at most one second of added preprocessing overhead and an improvement in processing time ranging from about 1.5x to about 3x when tuning principles were applied. The proposed system has the advantage of automatically generating tuning principles and transforming them into knowledge, so that new tuning principles can be derived and provided, and of enabling customized tuning by letting users directly add Facts and Rules along with performance-affecting factors. Future research on automating processes such as tuning the queries themselves and index optimization, on efficiently defining and adding Rules, and on effectively constructing system models could further improve this work.
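
The fact/rule mechanism described above can be illustrated with a toy forward-chaining loop over (Subject, Property, Object) triples. This Python sketch merely stands in for the JESS engine; the facts, rules, and recommendations are invented examples.

```python
# Toy forward-chaining sketch over (Subject, Property, Object) triples.
facts = {
    ("buffer_cache", "hit_ratio", "low"),
    ("query_Q1", "uses_index", "no"),
}

# Each rule: (condition triple, derived triple)
rules = [
    (("buffer_cache", "hit_ratio", "low"),
     ("tuning", "recommend", "increase_buffer_pool")),
    (("query_Q1", "uses_index", "no"),
     ("tuning", "recommend", "create_index_on_Q1")),
]

changed = True
while changed:                      # forward chaining to a fixed point
    changed = False
    for condition, conclusion in rules:
        if condition in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

for s, p, o in sorted(facts):
    if p == "recommend":
        print(f"{s}: {o}")
```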
