• Title/Summary/Keyword: Data dictionary


Linear Programming Model Discovery from Databases (데이터베이스로부터의 선형계획모형 추출방법에 대한 연구)

  • 권오병;김윤호
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2000.04a
    • /
    • pp.290-293
    • /
    • 2000
  • Knowledge discovery refers to the overall process of discovering useful knowledge from data. The linear programming model is a special form of useful knowledge embedded in a database. Since formulating models from scratch requires knowledge-intensive effort, knowledge-based formulation support systems have been proposed in the DSS area. However, they rely on the strict assumption that sufficient domain knowledge has already been captured in a specific knowledge representation form. Hence, the purpose of this paper is to propose a methodology that finds useful knowledge for building linear programming models from a database. The methodology consists of two parts. The first part finds a first-cut model based on a data dictionary; to do so, we applied the GPS algorithm. The second part discovers a second-cut model by applying a neural network technique. An illustrative example is described to show the feasibility of the proposed methodology.
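
The abstract does not spell out how the data dictionary is mapped to an LP, so the following is only a minimal sketch under assumed conventions: the dictionary entries, field names ("role", "coef", "usage", "limit"), and numbers are hypothetical, and the paper's GPS-based procedure is not reproduced.

```python
# Minimal sketch: deriving a first-cut LP from data-dictionary metadata.
# The dictionary layout below is an assumption for illustration only.
from scipy.optimize import linprog

data_dictionary = {
    "PRODUCT": {"role": "decision", "fields": ["prod_a", "prod_b"]},
    "PROFIT":  {"role": "objective", "coef": {"prod_a": 40.0, "prod_b": 30.0}},
    "MACHINE": {"role": "constraint", "usage": {"prod_a": 2.0, "prod_b": 1.0}, "limit": 100.0},
    "LABOR":   {"role": "constraint", "usage": {"prod_a": 1.0, "prod_b": 3.0}, "limit": 90.0},
}

def first_cut_lp(dd):
    """Map dictionary entries onto c, A_ub, b_ub for a maximization LP."""
    variables = next(e["fields"] for e in dd.values() if e["role"] == "decision")
    objective = next(e["coef"] for e in dd.values() if e["role"] == "objective")
    c = [-objective[v] for v in variables]          # negate: linprog minimizes
    A_ub, b_ub = [], []
    for entry in dd.values():
        if entry["role"] == "constraint":
            A_ub.append([entry["usage"].get(v, 0.0) for v in variables])
            b_ub.append(entry["limit"])
    return variables, c, A_ub, b_ub

variables, c, A_ub, b_ub = first_cut_lp(data_dictionary)
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(variables))
print(dict(zip(variables, result.x)), -result.fun)
```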


The Design and Implementation of Meta database and manager (메타 데이타베이스와 관리기의 설계 및 구현-통계 데이타베이스를 중심으로)

  • Ahn, Sung-Ohk
    • The Journal of Natural Sciences
    • /
    • v.8 no.1
    • /
    • pp.109-114
    • /
    • 1995
  • For effective management of a statistical database, statistical summary information must be provided by directly accessing precomputed summary data in a summary database, and a meta database must be stored and managed to support statistical analysis and to provide users with that summary information. To support the use of the summary database effectively, we design and implement a meta database and its manager, which have a hierarchical structure serving as a data dictionary/directory, and present their operation method.
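
The hierarchical data dictionary/directory is described only at a high level; the sketch below shows one plausible node structure. The field names and the example paths are assumptions, not the paper's actual meta-database schema.

```python
# Minimal sketch of a hierarchical data dictionary/directory for summary data.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DictionaryNode:
    name: str                                   # directory entry name, e.g. "census"
    summary_table: Optional[str] = None         # physical table holding precomputed summaries
    children: dict = field(default_factory=dict)

    def add(self, child: "DictionaryNode") -> "DictionaryNode":
        self.children[child.name] = child
        return child

    def lookup(self, path: list) -> "Optional[DictionaryNode]":
        """Resolve a path such as ["census", "population"] to its directory entry."""
        node = self
        for part in path:
            node = node.children.get(part)
            if node is None:
                return None
        return node

root = DictionaryNode("meta")
census = root.add(DictionaryNode("census"))
census.add(DictionaryNode("population", summary_table="SUM_POP_BY_REGION"))
entry = root.lookup(["census", "population"])
print(entry.summary_table if entry else "not found")
```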


Computer Program for Quality Control of Ready Mixed Concrete (레디믹스트 콘크리트의 품질관리 프로그램 개발)

  • 최재진;박원태
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.3 no.1
    • /
    • pp.20-26
    • /
    • 2002
  • To apply mixing test results to concrete mix design in practice, experimental tests of concrete were carried out and the relationship between cement-water ratio and compressive strength was obtained. A computer program was developed to serve as a database for air content, slump, and compressive strength test results. The program draws $\bar{X}$-R or X-Rs control charts and provides data sheets for organizing material test results. It also supports the calculation of concrete mix proportions for mixing tests and contains a dictionary of concrete technical terms.
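
A small sketch of the $\bar{X}$-R control-chart limits such a program would draw. The strength values are made up; the chart constants for subgroups of size 5 (A2 = 0.577, D3 = 0, D4 = 2.114) are standard published values, and the paper's own implementation may differ.

```python
# X-bar / R control-chart limits from subgrouped test results (hypothetical data).
A2, D3, D4 = 0.577, 0.0, 2.114   # constants for subgroups of size 5

subgroups = [                     # each subgroup: five compressive-strength results (MPa)
    [24.1, 25.3, 23.8, 24.9, 25.0],
    [23.5, 24.2, 24.8, 25.1, 23.9],
    [25.2, 24.6, 24.0, 23.7, 24.4],
]

xbars = [sum(g) / len(g) for g in subgroups]          # subgroup means
ranges = [max(g) - min(g) for g in subgroups]         # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                     # grand mean
rbar = sum(ranges) / len(ranges)                      # mean range

x_ucl, x_lcl = xbarbar + A2 * rbar, xbarbar - A2 * rbar   # X-bar chart limits
r_ucl, r_lcl = D4 * rbar, D3 * rbar                       # R chart limits

print(f"X-bar chart: CL={xbarbar:.2f}, UCL={x_ucl:.2f}, LCL={x_lcl:.2f}")
print(f"R chart:     CL={rbar:.2f}, UCL={r_ucl:.2f}, LCL={r_lcl:.2f}")
```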


Automatic Construction and Evaluation of Movie Domain Korean Sentiment Dictionary (영화도메인 한국어 감성사전의 자동구축과 평가)

  • Cho, Heeryon;Choi, Sang-Hyun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.04a
    • /
    • pp.585-587
    • /
    • 2015
  • This study proposes a method for automatically constructing the sentiment dictionary needed for movie-review sentiment classification, using Naver movie reviews as training data. Four training datasets were prepared by varying the amount of training data and the ratio of positive to negative reviews; for each case, a sentiment dictionary and a Naive Bayes (NB) classifier were built and their performance compared. Comparing automatic sentiment classification performance using the sentiment dictionaries and NB classifiers built from the four training datasets, the average balanced accuracy over the four cases was 78.2% for the sentiment dictionary and 66.1% for the NB classifier.
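
The abstract does not describe the construction procedure itself, so the following is a minimal sketch of one common approach (frequency-based polarity scores) together with balanced accuracy. The reviews, the scoring rule, and the classification threshold are assumptions for illustration.

```python
# Sketch: build a sentiment dictionary from labeled reviews, then score new text.
from collections import Counter

train = [
    ("정말 재미있고 감동적인 영화", 1),   # positive review (hypothetical)
    ("최고의 연기 최고의 연출", 1),
    ("지루하고 재미없는 영화", 0),        # negative review (hypothetical)
    ("시간이 아까운 최악의 영화", 0),
]

pos_counts, neg_counts = Counter(), Counter()
for text, label in train:
    (pos_counts if label == 1 else neg_counts).update(text.split())

# Polarity score per term: positive share minus negative share.
dictionary = {}
for term in set(pos_counts) | set(neg_counts):
    p, n = pos_counts[term], neg_counts[term]
    dictionary[term] = (p - n) / (p + n)

def classify(text):
    score = sum(dictionary.get(t, 0.0) for t in text.split())
    return 1 if score > 0 else 0

def balanced_accuracy(data):
    recalls = []
    for cls in (0, 1):
        members = [(t, y) for t, y in data if y == cls]
        recalls.append(sum(classify(t) == y for t, y in members) / len(members))
    return sum(recalls) / len(recalls)

print(classify("감동적인 연출"), balanced_accuracy(train))
```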

Word Order and Cliticization in Sakizaya: A Corpus-based Approach

  • Lin, Chihkai
    • Asia Pacific Journal of Corpus Research
    • /
    • v.1 no.2
    • /
    • pp.41-56
    • /
    • 2020
  • This paper aims to investigate how word order interacts with cliticization in Sakizaya, a Formosan language. It examines nominative and genitive case markers from a corpus-based approach. The data are collected from an online dictionary of Sakizaya and classified into two word orders: nominative case marker preceding genitive case marker, and vice versa. The data are also divided into three categories according to the demarcation of the case markers: right, left, or no demarcation. The corpus includes 700 sentences in the construction predicate + noun phrase + noun phrase. The results suggest that the two case markers tend to be parsed into the preceding word and show right demarcation. The results also reveal type differences and a distance effect of the case markers on cliticization. Nominative case markers show more right demarcation than genitive case markers do in the corpus, and the closer the case markers are to the predicate, the more likely they are to undergo cliticization.

Mutational Data Loading Routines for Human Genome Databases: the BRCA1 Case

  • Van Der Kroon, Matthijs;Ramirez, Ignacio Lereu;Levin, Ana M.;Pastor, Oscar;Brinkkemper, Sjaak
    • Journal of Computing Science and Engineering
    • /
    • v.4 no.4
    • /
    • pp.291-312
    • /
    • 2010
  • Over the last decades, a large amount of research has been done in the genomics domain, which has generated, and continues to generate, terabytes if not exabytes of information stored globally in a very fragmented way. Different databases use different ways of storing the same data, resulting in undesired redundancy and hampered information transfer. In addition, keeping the existing databases consistent and maintaining data integrity is largely left to human intervention, which is costly in both time and money as well as error prone. Identifying a fixed conceptual dictionary in the form of a conceptual model thus seems crucial. This paper presents an effort to integrate the mutational data from the established genomic data source HGMD into the conceptual-model-driven database HGDB, thereby providing useful lessons for improving the existing conceptual model of the human genome.

The Prediction of Cryptocurrency on Using Text Mining and Deep Learning Techniques : Comparison of Korean and USA Market (텍스트 마이닝과 딥러닝을 활용한 암호화폐 가격 예측 : 한국과 미국시장 비교)

  • Won, Jonggwan;Hong, Taeho
    • Knowledge Management Research
    • /
    • v.22 no.2
    • /
    • pp.1-17
    • /
    • 2021
  • In this study, we predicted Bitcoin prices on Bithumb and Coinbase, leading exchanges in Korea and the USA, using ARIMA and recurrent neural networks (RNNs), and used news articles from each country to propose a separated RNN model. The suggested model partitions the training data according to changes in the price trend and applies a time-series prediction technique (RNNs) to each partition, creating multiple models. Daily news data are then used to build a term-based dictionary for each trend change point. We detect trend change points in the test data by matching the daily news keywords of the test set against the term-based dictionaries, and apply the corresponding model to produce prediction results. With this approach we obtained higher accuracy than a model that predicted prices using the time-series prediction technique alone. This study shows that the limitations of time-series prediction techniques can be overcome by detecting trend change points with news data, and that various time-series prediction techniques combined with text mining could be applied to improve model performance in further research.
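
A minimal sketch of only the dictionary-matching step described above: keywords observed during each trend segment form a term dictionary, and test-day keywords are matched against those dictionaries to choose which segment model to apply. The segment labels, keywords, and the stand-in models are hypothetical; the paper's RNN models are replaced by stubs for brevity.

```python
# Sketch: choose a per-trend model by matching today's news keywords
# against term dictionaries built from each trend segment's news.
from collections import Counter

segment_news = {   # daily news keywords observed during each price-trend segment
    "uptrend":   ["규제 완화", "기관 매수", "ETF 승인", "기관 매수"],
    "downtrend": ["규제 강화", "해킹", "거래소 폐쇄", "규제 강화"],
}

term_dictionaries = {seg: Counter(words) for seg, words in segment_news.items()}

def match_segment(today_keywords):
    """Pick the trend segment whose term dictionary overlaps most with today's news."""
    scores = {
        seg: sum(counts[w] for w in today_keywords)
        for seg, counts in term_dictionaries.items()
    }
    return max(scores, key=scores.get)

segment_models = {             # stand-ins for the per-segment price models
    "uptrend":   lambda last_price: last_price * 1.01,
    "downtrend": lambda last_price: last_price * 0.99,
}

today_keywords = ["규제 강화", "해킹"]
segment = match_segment(today_keywords)
print(segment, segment_models[segment](50_000_000))
```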

A GIS Vector Data Compression Method Considering Dynamic Updates

  • Chun Woo-Je;Joo Yong-Jin;Moon Kyung-Ky;Lee Yong-Ik;Park Soo-Hong
    • Spatial Information Research
    • /
    • v.13 no.4 s.35
    • /
    • pp.355-364
    • /
    • 2005
  • Vector data sets (e.g. maps) are currently major sources for displaying, querying, and identifying the locations of spatial features in a variety of applications. Especially in mobile environments, the need for spatial data is increasing, and the relatively large size of vector maps needs to be reduced. Recently, there have been several studies on vector map compression, including a clustering-based compression method with a novel encoding/decoding scheme. However, these earlier studies did not consider that spatial data have to be updated periodically. This paper explores this problem of the existing clustering-based compression method. We propose an adaptive approximation method that can handle data updates as well as reduce error levels. Experimental evaluation showed that when an update event occurs, the proposed adaptive approximation method achieves better positional accuracy than the simple cluster-based compression method.
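
To make the cluster-based idea concrete, here is a minimal sketch of coordinate compression with a simple update rule: a new vertex reuses an existing cluster centre when its residual stays under a tolerance, otherwise a new centre is created. The tolerance, the data, and the update rule are assumptions; the paper's encoding/decoding scheme differs in detail.

```python
# Sketch: cluster-based vertex encoding that tolerates incremental updates.
import math

TOLERANCE = 5.0          # maximum allowed positional error (map units), assumed
centres = []             # codebook of cluster centres
encoded = []             # per-vertex (centre index, residual dx, residual dy)

def encode_vertex(x, y):
    """Assign a vertex to the nearest centre within TOLERANCE, else add a new centre."""
    best, best_d = None, float("inf")
    for i, (cx, cy) in enumerate(centres):
        d = math.hypot(x - cx, y - cy)
        if d < best_d:
            best, best_d = i, d
    if best is None or best_d > TOLERANCE:
        centres.append((x, y))
        best = len(centres) - 1
    cx, cy = centres[best]
    encoded.append((best, x - cx, y - cy))

for vx, vy in [(100.0, 200.0), (102.5, 201.0), (400.0, 80.0), (401.0, 83.0)]:
    encode_vertex(vx, vy)

print(len(centres), "centres for", len(encoded), "vertices")
```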


A Homonym Disambiguation System based on Semantic Information Extracted from Dictionary Definitions (사전의 뜻풀이말에서 추출한 의미정보에 기반한 동형이의어 중의성 해결 시스템)

  • Hur, Jeong;Ock, Cheol-Young
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.9
    • /
    • pp.688-698
    • /
    • 2001
  • A homonym can be disambiguated by other words in the context, such as the nouns and predicates used with it. This paper proposes a homonym disambiguation system based on statistical semantic information extracted from definitions in a dictionary. The semantic information consists of the nouns and predicates that are used with the homonym in definitions. In order to extract accurate semantic information, definitions are classified into two types. One has a hyponym-hypernym relation between the title word and the head word (homonym) in the definition; this relation is a one-level semantic hierarchy and can be extended to deeper levels in order to overcome the problem of data sparseness. The other is the case in which the homonym is used in the middle of the definition. The system considers nouns and predicates simultaneously to disambiguate the homonym. Nine homonyms are examined in order to determine the weights of nouns and predicates, which affect the accuracy of homonym disambiguation. In experiments using the training corpus (definitions in the dictionary), the average accuracy of homonym disambiguation is 96.11% when the weights are 0.9 and 0.1 for nouns and verbs, respectively. A further experiment measuring the generality of the system yields an average accuracy of 80.73% on 1,796 untrained sentences from Korean Information Base I and the ETRI corpus.
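
A minimal sketch of the weighted combination of noun and predicate evidence with the reported 0.9/0.1 weights. The homonym, senses, and co-occurrence counts below are hypothetical stand-ins for the statistics the paper extracts from dictionary definitions.

```python
# Sketch: score each sense of a homonym from noun/predicate co-occurrence statistics.
W_NOUN, W_PRED = 0.9, 0.1   # weights reported in the paper

# For the homonym "배": counts of nouns/predicates co-occurring with each sense (made up).
sense_stats = {
    "배(과일)": {"nouns": {"과수원": 3, "사과": 2}, "preds": {"먹다": 4, "깎다": 1}},
    "배(선박)": {"nouns": {"항구": 4, "바다": 3}, "preds": {"타다": 5, "띄우다": 2}},
}

def score(sense, context_nouns, context_preds):
    stats = sense_stats[sense]
    noun_total = sum(stats["nouns"].values()) or 1
    pred_total = sum(stats["preds"].values()) or 1
    noun_score = sum(stats["nouns"].get(n, 0) for n in context_nouns) / noun_total
    pred_score = sum(stats["preds"].get(p, 0) for p in context_preds) / pred_total
    return W_NOUN * noun_score + W_PRED * pred_score

def disambiguate(context_nouns, context_preds):
    return max(sense_stats, key=lambda s: score(s, context_nouns, context_preds))

print(disambiguate(["항구"], ["타다"]))   # -> "배(선박)"
```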


A Method to Classify and Recognize Spelling Changes between Morphemes of a Korean Word (한국어 어절의 철자변화 현상 분류와 인식 방법)

  • 김덕봉
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.5_6
    • /
    • pp.476-486
    • /
    • 2003
  • Part-of-speech tagged corpora of Korean contain no explicit spelling change information. This causes difficulties in acquiring data for the study of Korean morphology, namely in automatically constructing a dictionary for morphological analysis and in systematically collecting spelling change phenomena from the corpora. To solve this problem, this paper presents a method to recognize spelling changes between the morphemes of a Korean word in tagged corpora using only string matching, without a dictionary or phonological rules. The method not only recognizes spelling changes robustly, because it uses no phonological rules, but can also be implemented at little cost. It has been tested on a large tagged corpus of Korean and accurately recognized 100% of the spelling changes in the corpus.
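
The abstract's string-matching idea can be illustrated with a small sketch: compare the surface word with the concatenation of its tagged morphemes and report the differing span. The example word, the prefix/suffix matching rule, and the syllable-level comparison (a real implementation would likely decompose syllables into jamo) are assumptions, not the paper's exact algorithm.

```python
# Sketch: detect a spelling change by string matching between a surface word
# and the concatenation of its tagged morphemes, with no dictionary or rules.
def common_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def find_spelling_change(surface: str, morphemes: list):
    """Return the differing spans, or None if the morphemes concatenate cleanly."""
    restored = "".join(morphemes)
    if restored == surface:
        return None                                   # no spelling change
    p = common_prefix_len(surface, restored)          # matching prefix length
    s = common_prefix_len(surface[::-1], restored[::-1])   # matching suffix length
    s = min(s, len(surface) - p, len(restored) - p)   # keep the two spans disjoint
    return surface[p:len(surface) - s], restored[p:len(restored) - s]

# "아름다운" tagged as 아름답 + ㄴ: a ㅂ-irregular spelling change is detected.
print(find_spelling_change("아름다운", ["아름답", "ㄴ"]))
```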