• Title/Summary/Keyword: Application common database


A study on an implementation of IEC 61970 based EMS database (IEC 61970 기반의 EMS 데이터베이스 구축에 관한 연구)

  • Lee, J.H.; Sohn, J.M.; Nam, Y.W.; Lee, Y.I.; Park, J.H.; Kim, P.S.; Kim, B.S.; Shin, Y.H.
    • Proceedings of the KIEE Conference / 2006.11a / pp.323-325 / 2006
  • The Common Information Model (CIM) for power system energy management systems (EMS) originated in the late 1990s with EPRI's CCAPI (Control Center Application Program Interface) research project and is now registered as the IEC 61970 series of international standards. Specifically, CIM refers to the common data model defined in Parts 301 and 302 of IEC 61970 and, together with the CIS (Component Interface Specification) defined in Parts 401~407, is a core element defining the EMS API (Application Program Interface). CIM is an abstract model that represents the major objects used throughout an electric utility, including power system operations. By providing a standard way to express power system resources as object classes and attributes, CIM facilitates linking an EMS with systems from other operational domains of the power system, such as generation and distribution; interconnecting independently developed complete EMS systems; and integrating EMS application programs independently developed by different vendors. This paper introduces IEC 61970, the common data model for power systems, and describes the construction of a Korean EMS database based on it.
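
As an illustration of the "object classes and attributes" idea described above, here is a minimal Python sketch of CIM-style modeling. The class names loosely follow common CIM vocabulary (IdentifiedObject, PowerSystemResource, Breaker), but this subset and its attributes are our assumption, not the normative IEC 61970-301 class set.

```python
# Illustrative sketch of CIM-style object classes (hypothetical subset,
# loosely following IEC 61970-301 naming; not the normative model).
from dataclasses import dataclass, field
from typing import List

@dataclass
class IdentifiedObject:            # every CIM object carries an ID and a name
    mrid: str
    name: str

@dataclass
class PowerSystemResource(IdentifiedObject):
    pass

@dataclass
class ConductingEquipment(PowerSystemResource):
    base_voltage_kv: float = 0.0

@dataclass
class Breaker(ConductingEquipment):
    normal_open: bool = False

@dataclass
class Substation(PowerSystemResource):
    equipment: List[ConductingEquipment] = field(default_factory=list)

# Independently developed applications can exchange such objects because
# the class/attribute vocabulary is shared.
sub = Substation(mrid="SS-001", name="Seoul #1")
sub.equipment.append(Breaker(mrid="BRK-001", name="Bay 1 breaker",
                             base_voltage_kv=154.0, normal_open=False))
```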


An Intelligent Framework for Test Case Prioritization Using Evolutionary Algorithm

  • Dobuneh, Mojtaba Raeisi Nejad; Jawawi, Dayang N.A.
    • Journal of Internet Computing and Services / v.17 no.5 / pp.89-95 / 2016
  • In the software testing domain, test case prioritization techniques improve the performance of regression testing by arranging test cases so that the maximum number of faults is detected in the shortest time. User sessions and cookies are unique features of web applications that are useful in regression testing because they carry valuable information about the application state before and after changes to the software code. This approach is, in fact, a user-session-based technique: user sessions are collected from the database on the server side, and test cases are produced by small configuration changes to the user session data. The main challenges in existing techniques are the effectiveness of the Average Percentage of Faults Detected (APFD) rate and the time constraint, so this paper develops an intelligent framework with three new techniques that manage and group test cases by applying useful criteria for test case prioritization in web application regression testing. In the dynamic weighting approach, hybrid criteria set an initial weight for each criterion, and evolutionary algorithms determine the optimal weight of the combined criteria. The weight of each criterion is based on its effectiveness at finding faults in the application. In this research, priority is given to test cases based on the most common HTTP requests in pages, the length of HTTP request chains, and the dependency of HTTP requests. To verify the new technique, faults were seeded in a subject application, and the prioritization criteria were then applied to the test cases to compare the effectiveness of the APFD rate with existing techniques.
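
The APFD measure the abstract evaluates has a standard closed form: APFD = 1 - (TF_1 + ... + TF_m)/(n·m) + 1/(2n), where n is the number of test cases, m the number of faults, and TF_i the position of the first test that reveals fault i. A minimal sketch follows; the function and variable names are ours, not the paper's.

```python
# Compute Average Percentage of Faults Detected (APFD) for a prioritized
# test suite. faults_of[t] is the set of fault ids that test t reveals.
def apfd(test_order, faults_of, num_faults):
    n = len(test_order)
    first_pos = {}  # fault id -> 1-based position of the first revealing test
    for pos, t in enumerate(test_order, start=1):
        for f in faults_of.get(t, ()):
            first_pos.setdefault(f, pos)
    # Standard formula: APFD = 1 - (sum of TF_i)/(n*m) + 1/(2n)
    return 1.0 - sum(first_pos.values()) / (n * num_faults) + 1.0 / (2 * n)

# Example: 5 tests, 4 seeded faults; the second ordering finds faults earlier.
faults_of = {"t1": {1}, "t2": {2, 3}, "t3": set(), "t4": {4}, "t5": set()}
print(apfd(["t1", "t2", "t3", "t4", "t5"], faults_of, 4))  # 0.65
print(apfd(["t2", "t4", "t1", "t3", "t5"], faults_of, 4))  # 0.75
```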

Robust Face Recognition based on 2D PCA Face Distinctive Identity Feature Subspace Model (2차원 PCA 얼굴 고유 식별 특성 부분공간 모델 기반 강인한 얼굴 인식)

  • Seol, Tae-In; Chung, Sun-Tae; Kim, Sang-Hoon; Chung, Un-Dong; Cho, Seong-Won
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.35-43 / 2010
  • 1D PCA, as used in appearance-based face recognition methods such as the eigenface method, can yield weak face representative power and high computational cost because the resulting 1D face appearance vectors have very high dimensionality. To resolve these problems of 1D PCA, 2D PCA-based face recognition methods were developed. However, the face representation model obtained by applying 2D PCA directly to a face image set includes both face common features and face distinctive identity features. Face common features not only hinder face recognizability but also add computational cost. In this paper, we first develop a model of a face distinctive identity feature subspace, separated from the effects of face common features, within the face feature space obtained by 2D PCA analysis. We then propose a novel robust face recognition method based on this face distinctive identity feature subspace model. Because it depends only on the face distinctive identity features, the proposed method outperforms the conventional PCA-based methods (1D PCA-based and 2D PCA-based) in both recognition rate and processing time. This is verified through various experiments using the Yale A and IMM face databases, which consist of face images with various poses under various illumination conditions.
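
A minimal numpy sketch of the 2D PCA step the abstract builds on: the image covariance matrix is formed directly from 2D images (no vectorization into long 1D vectors), and features are projections of whole image rows. The paper's actual contribution, separating the distinctive identity subspace from common features, is not reproduced here; this shows only the underlying 2D PCA machinery.

```python
import numpy as np

def two_d_pca(images, k):
    """2D PCA: images has shape (M, h, w).
    Returns the top-k eigenvectors X (w x k); features are Y = A @ X."""
    A_bar = images.mean(axis=0)                      # mean face image (h, w)
    # Image covariance matrix G = (1/M) * sum((A - A_bar)^T (A - A_bar))
    G = sum((A - A_bar).T @ (A - A_bar) for A in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)             # ascending eigenvalues
    X = eigvecs[:, ::-1][:, :k]                      # top-k projection axes
    return X, A_bar

# Feature extraction: each face image maps to an (h x k) feature matrix,
# much smaller than the h*w-dimensional vectors 1D PCA would require.
rng = np.random.default_rng(0)
faces = rng.random((20, 32, 32))                     # toy stand-in for Yale A
X, A_bar = two_d_pca(faces, k=5)
features = [(A - A_bar) @ X for A in faces]
```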

TEST DB: The intelligent data management system for Toxicogenomics (독성유전체학 연구를 위한 지능적 데이터 관리 시스템)

  • Lee, Wan-Seon; Jeon, Ki-Seon; Um, Chan-Hwi; Hwang, Seung-Young; Jung, Jin-Wook; Kim, Seung-Jun; Kang, Kyung-Sun; Park, Joon-Suk; Hwang, Jae-Woong; Kang, Jong-Soo; Lee, Gyoung-Jae; Chon, Kum-Jin; Kim, Yang-Suk
    • Proceedings of the Korean Society for Bioinformatics Conference / 2003.10a / pp.66-72 / 2003
  • Toxicogenomics is now emerging as one of the most important genomics applications because toxicity testing based on gene expression profiles is expected to be more precise and efficient than the current histopathological approach in the pre-clinical phase. One of the challenges in toxicogenomics is constructing an intelligent database management system that can handle very heterogeneous and complex data from many different experimental and information sources. Here we present a new toxicogenomics database developed as part of the 'Toxicogenomics for Efficient Safety Test (TEST)' project. The TEST database focuses especially on the connectivity of heterogeneous data and on an intelligent query system that lets users draw insight from complex data sets. The database handles four kinds of information: compound information, histopathological information, gene expression information, and annotation information. Currently, the TEST database holds toxicogenomics information for 12 molecules spanning 4 efficacy classes: anti-cancer, antibiotic, hypotension, and gastric ulcer. Users can easily access detailed information about these compounds and can simultaneously check the confidence of retrieved information by browsing the quality of the experimental data and the toxicity grade of each gene generated by our toxicology annotation system. The intelligent query system is designed for multiple comparisons of experimental data, because comparing experimental data by histopathological toxicity, compound, efficacy, and individual variation is crucial for finding common genetic characteristics. The presented system can be a good information source for studying toxicology mechanisms at the genome-wide level and can also be utilized for the design of toxicity test chips.
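
A hedged sqlite3 sketch of the four linked information types the abstract names (compound, histopathology, gene expression, annotation) and a query in the spirit of the described multi-comparison system. All table and column names are hypothetical; the project's actual schema is not public in this abstract.

```python
import sqlite3

# Hypothetical schema linking the four information types the TEST database
# integrates; names are illustrative, not the project's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE compound (
    compound_id INTEGER PRIMARY KEY,
    name TEXT, efficacy_class TEXT        -- e.g. anti-cancer, antibiotic
);
CREATE TABLE histopathology (
    compound_id INTEGER REFERENCES compound,
    finding TEXT, toxicity_grade INTEGER
);
CREATE TABLE expression (
    compound_id INTEGER REFERENCES compound,
    gene TEXT, log_ratio REAL, quality REAL
);
CREATE TABLE annotation (
    gene TEXT PRIMARY KEY, toxicity_grade INTEGER
);
""")
# A multi-comparison query in the spirit of the 'intelligent query system':
# genes consistently regulated across compounds sharing one finding.
rows = conn.execute("""
SELECT e.gene, COUNT(DISTINCT e.compound_id) AS n_compounds
FROM expression e
JOIN histopathology h ON h.compound_id = e.compound_id
WHERE h.finding = ? AND ABS(e.log_ratio) > 1.0
GROUP BY e.gene HAVING n_compounds >= 2
""", ("necrosis",)).fetchall()
```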


Establishment of Valve Replacement Registry and Risk Factor Analysis Based on Database Application Program (데이터베이스 프로그램에 기반한 심장판막 치환수술 환자의 레지스트리 확립 및 위험인자 분석)

  • Kim, Kyung-Hwan; Lee, Jae-Ik; Lim, Cheong; Ahn, Hyuk
    • Journal of Chest Surgery / v.35 no.3 / pp.209-216 / 2002
  • Background: Valvular heart disease is still the most common heart problem in Korea. By the end of 1999, 94,586 open heart surgeries had been performed since the first case in 1958. Among them, 36,247 cases were for acquired heart disease, and 20,704 of those involved valvular heart disease. However, there was no database system, and surgeons and physicians had great difficulty analyzing and utilizing these tremendous medical resources. We therefore developed a valve registry database program and applied it to risk factor analysis and related tasks. Material and Method: A personal computer-based multiuser database program was created using Microsoft Access™. It consisted of a relational database structure with fine-tuned compact field variables and a server-client architecture. A simple graphical user interface provided easy accessibility and comprehensibility, and its user-oriented modular structure enabled easy modification through native Access™ functions. Extensive use of the query function let users extract, summarize, analyze, and report study results promptly. Result: About three thousand valve replacement procedures were performed in our hospital from 1968 to 1999, with 3,700 prostheses replaced in total. Mitral, aortic, and tricuspid valve replacements numbered 1,600, 584, and 76 cases, respectively. Among them, 700 patients received prostheses in more than two positions. Bioprostheses and mechanical prostheses were used in 1,280 and 1,500 patients, respectively. Redo valve replacements were performed in 460 patients in total, about 40 patients annually. Conclusion: A database program for the registry of valvular heart disease was successfully developed and used in a personal computer-based multiuser environment. It showed promising results and perspectives for database management and utilization.
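
The registry's value comes from query-based extraction of summaries like those in the Result section. The paper's system used Microsoft Access; the sketch below uses sqlite3 only to keep the example self-contained, and the table and field names are our assumptions.

```python
import sqlite3

# Hypothetical mini-registry: one row per implanted prosthesis.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE replacement (
    patient_id INTEGER, op_year INTEGER,
    position TEXT,          -- 'mitral' | 'aortic' | 'tricuspid'
    prosthesis TEXT,        -- 'mechanical' | 'bioprosthesis'
    redo INTEGER            -- 1 if reoperation
)""")
conn.executemany(
    "INSERT INTO replacement VALUES (?,?,?,?,?)",
    [(1, 1990, "mitral", "mechanical", 0),
     (1, 1998, "mitral", "mechanical", 1),
     (2, 1995, "aortic", "bioprosthesis", 0)])
# Summary in the style of the paper's Result section: counts per position.
for position, n in conn.execute(
        "SELECT position, COUNT(*) FROM replacement GROUP BY position"):
    print(position, n)
```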

Design and Implementation of Multilingual support method for 3-tiered softwares (3-TIER 구조 소프트웨어의 다국어 지원 방식의 설계와 구현)

  • Koh, Jeong-Gook
    • Journal of Korea Multimedia Society / v.15 no.2 / pp.266-272 / 2012
  • Multilingual support in software is necessary for entering the global market. The 3-tier architecture is a solution to the problems of the 2-tier architecture: it divides an application into a client tier and an application tier, with presentation logic and the database connected by middleware. Its advantages are enhanced performance through load balancing, scalability, easier maintenance, and reusability. This paper proposes a multilingual support method that uses common resource files for 3-tier enterprise software, applies the proposed method to the development of a multilingual version of a billing solution, and verifies its usefulness. The method eases development and maintenance of the software as well as the addition of newly supported languages. It holds one resource file per language and provides a multilingual support class library, which reduces wasted memory and disk space. Deploying the class library in the application tier makes development, maintenance, and the addition of new languages easy, and to prevent inappropriate modification of the resource files, the multilingual support class library is provided as a DLL file.
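
A minimal sketch of the common-resource-file idea: one resource table per language and a shared lookup class deployed at the application tier. The in-memory dict stands in for the per-language resource files, and the class name and format are our assumptions (the paper packages its lookup class as a DLL).

```python
# One resource table per language, shared by all application-tier modules.
# (Hypothetical format; a real deployment would load these from files.)
RESOURCES = {
    "en": {"login.title": "Sign in", "bill.total": "Total amount"},
    "ko": {"login.title": "로그인", "bill.total": "청구 금액"},
}

class MessageCatalog:
    """Shared multilingual lookup; adding a language = adding one resource."""
    def __init__(self, lang, fallback="en"):
        self.lang, self.fallback = lang, fallback

    def text(self, key):
        table = RESOURCES.get(self.lang, {})
        # Fall back to the default language, then to the key itself.
        return table.get(key) or RESOURCES[self.fallback].get(key, key)

catalog = MessageCatalog("ko")
print(catalog.text("bill.total"))    # 청구 금액
print(catalog.text("login.title"))   # 로그인
```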

Building a Database of DQT Information to Identify a Source of the SmartPhone JPEG Image File (스마트폰 JPEG 파일의 출처 식별을 위한 DQT 정보 데이터베이스 구축)

  • Kim, MinSik; Jung, Doowon; Lee, Sang-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.2 / pp.359-367 / 2016
  • As taking pictures with smartphones has become more common, there have been many incidents involving unexpected manipulation of images and leaks of confidential information. Because of those incidents, demand for identifying forgery/alteration of image files and proving the original copy is constantly increasing. In general, a smartphone saves images in JPEG form, whose header contains a DQT (Define Quantization Table) that determines the compression rate of the image; the thumbnail image inside the JPEG also has its own DQT. Previous research identified the smartphone that took an image using only the DQT, but with low accuracy. This research has two main purposes. First, it infers the smartphone, and the application that took, edited, and saved an image file, by examining not only the DQT information but also the information of the thumbnail image. Second, it builds a database of DQT and thumbnail information in JPEG files to determine an image file's origin more accurately.
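
A minimal sketch of pulling DQT segments out of a JPEG byte stream (marker 0xFFDB in the header). It is a simplification: it handles only 8-bit tables, stops at the start-of-scan marker, and ignores fill bytes; the input filename is hypothetical.

```python
def extract_dqts(jpeg_bytes):
    """Yield (table_id, 64 quantization values) for each DQT in a JPEG.
    Minimal sketch: scans marker segments; handles 8-bit (Pq=0) tables."""
    tables = []
    i = 2                                   # skip the SOI marker (FF D8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                  # start of scan: header is over
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xDB:                  # DQT segment
            seg = jpeg_bytes[i + 4:i + 2 + length]
            j = 0
            while j < len(seg):
                precision, table_id = seg[j] >> 4, seg[j] & 0x0F
                if precision == 0:          # 8-bit table: 64 values
                    tables.append((table_id, list(seg[j + 1:j + 65])))
                    j += 65
                else:                       # 16-bit table: skipped in sketch
                    j += 129
        i += 2 + length
    return tables

# Usage: the (device, app, quality setting) combination leaves a DQT
# fingerprint that can be matched against a reference database.
with open("photo.jpg", "rb") as f:          # hypothetical input file
    for table_id, q in extract_dqts(f.read()):
        print(table_id, q[:8], "...")
```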

Efficient Management of Statistical Information of Keywords on E-Catalogs (전자 카탈로그에 대한 효율적인 색인어 통계 정보 관리 방법)

  • Lee, Dong-Joo; Hwang, In-Beom; Lee, Sang-Goo
    • The Journal of Society for e-Business Studies / v.14 no.4 / pp.1-17 / 2009
  • E-catalogs, which describe products or services, are among the most important data for electronic commerce. E-catalogs are created, updated, and removed to keep the information in an e-catalog database up to date. However, as the number of catalogs increases, information integrity is violated for several reasons, such as catalog duplication and incorrect classification. Catalog search, duplication checking, and automatic classification are therefore important functions for utilizing e-catalogs and keeping the integrity of the e-catalog database. To implement these functions, probabilistic models that use statistics of index words extracted from e-catalogs have been suggested, and the feasibility of these methods has been shown in several papers. However, even though these functions are used together in an e-catalog management system, there has not been enough consideration of how to share the common data used by each function and how to manage the statistics of index words effectively. In this paper, we suggest a method to implement these three functions using simple SQL supported by a relational database management system. In addition, we use materialized views to reduce the load of implementing an application that manages index-word statistics; this makes managing the statistics efficient by letting the database management system optimize statistics updates. We show through empirical evaluation that our method is feasible for implementing the three functions and effective for managing index-word statistics.
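
A hedged sketch of the materialized-view idea in PostgreSQL-flavored SQL issued from Python: per-term document and collection frequencies are kept in one materialized view that search, duplicate checking, and classification can all read. The schema, view definition, and connection string are our assumptions, not the paper's.

```python
import psycopg2  # assumes a reachable PostgreSQL instance

DDL = """
CREATE TABLE catalog_term (
    catalog_id INTEGER,
    term       TEXT,
    tf         INTEGER                    -- term frequency in one e-catalog
);
-- Shared statistic: document frequency (df) and collection frequency (cf).
CREATE MATERIALIZED VIEW term_stats AS
SELECT term,
       COUNT(DISTINCT catalog_id) AS df,
       SUM(tf)                    AS cf
FROM catalog_term
GROUP BY term;
"""

conn = psycopg2.connect("dbname=ecatalog")   # hypothetical connection string
with conn, conn.cursor() as cur:
    cur.execute(DDL)
    # After bulk catalog changes, one statement re-derives every shared
    # statistic and lets the DBMS optimize the recomputation:
    cur.execute("REFRESH MATERIALIZED VIEW term_stats;")
```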


Bitmap Indexes and Query Processing Strategies for Relational XML Twig Queries (관계형 XML 가지 패턴 질의를 위한 비트맵 인덱스와 질의 처리 기법)

  • Lee, Kyong-Ha; Moon, Bong-Ki; Lee, Kyu-Chul
    • Journal of KIISE:Databases / v.37 no.3 / pp.146-164 / 2010
  • Due to the increasing volume of XML data, it is considered prudent to store XML data in an industrial-strength database system instead of relying on a domain-specific application or a file system. For shredded XML data stored in relational tables, however, it may not be straightforward to apply existing algorithms for twig query processing, since most of these algorithms require XML data to be accessed as streams of elements grouped by their tags and sorted in a particular order. To support XML query processing within the common framework of relational database systems, we first propose several bitmap indexes and accompanying strategies for holistic twig joins over XML data stored in relational tables. Since bitmap indexes are well supported in most commercial and open-source database systems, the proposed bitmap indexes and twig query processing strategies can be incorporated into a relational query processing framework with ease. The proposed query processing strategies are efficient in both time and space, because the compressed bitmap indexes stay compressed during data access. In addition, we propose a hybrid index that computes twig query solutions with bit-vectors alone, without accessing the labeled XML elements stored in the relational tables.
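
A toy sketch of the bit-vector flavor of this approach: one bitmap per tag over element positions, combined with bitwise operations to test a structural pattern. This is a drastic simplification for illustration only; the paper's holistic twig join and compressed indexes are far more general.

```python
# Toy bit-vector sketch: elements 0..n-1 in document order, one bitmap per
# tag. A set bit in tag_bits[t] means "element i has tag t"; parent[i] gives
# the parent's position in the same numbering.
def bitmap(positions, n):
    bits = 0
    for p in positions:
        bits |= 1 << p
    return bits

n = 8
parent   = [None, 0, 1, 1, 0, 4, 4, 6]     # a small element tree
tag_bits = {"a": bitmap([0, 4], n),        # elements tagged <a>
            "b": bitmap([1, 5], n),
            "c": bitmap([3, 6], n)}

def children_with_tag(tag):
    """Bitmap of elements that have a child carrying the given tag."""
    bits = 0
    for i in range(n):
        if parent[i] is not None and (tag_bits[tag] >> i) & 1:
            bits |= 1 << parent[i]
    return bits

# Twig pattern a[b][c]: <a> elements with both a <b> child and a <c> child.
matches = tag_bits["a"] & children_with_tag("b") & children_with_tag("c")
# Element 0's <c> (element 3) is a grandchild, so only element 4 matches.
print([i for i in range(n) if (matches >> i) & 1])   # -> [4]
```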

Assessment of Coal Combustion Safety of DTF using Response Surface Method (반응표면법을 이용한 DTF의 석탄 연소 안전성 평가)

  • Lee, Eui Ju
    • Journal of the Korean Society of Safety / v.30 no.1 / pp.8-13 / 2015
  • An experimental design methodology was applied to a drop tube furnace (DTF) to predict various combustion properties as functions of the operating conditions and to assess coal plant safety. The response surface method (RSM) was introduced as the design of experiments, and the database for the RSM was built from numerical simulations of the DTF. Dependent variables such as the burnout ratio (BOR) of coal and the $CO/CO_2$ ratio were described mathematically as functions of three independent variables (coal particle size, carrier gas flow rate, and wall temperature) modeled with a central composite design (CCD) and evaluated using a second-order polynomial multiple regression model. The prediction of BOR showed a high coefficient of determination ($R^2$), ensuring a satisfactory fit of the second-order polynomial regression model to the simulation data. However, the $CO/CO_2$ ratio showed a large difference between calculated and predicted values under conventional RSM, mainly because this dependent variable increases or decreases very steeply, so a second-order polynomial cannot follow its rate of change. To relax the rate of change of the dependent variable, the $CO/CO_2$ ratio was transformed with common logarithms and the RSM analysis was repeated. This logarithmic transformation of the dependent variable greatly enhanced accuracy and predicted the simulation data well.
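
A numpy sketch of the two modeling steps the abstract describes: fitting a second-order polynomial response surface over three coded factors, and log-transforming a steep response before fitting. The factor names follow the abstract; the data here is synthetic, not the paper's CCD runs.

```python
import numpy as np

def quadratic_design(X):
    """Second-order RSM design matrix for 3 factors:
    intercept, linear, squared, and pairwise interaction terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x1, x2*x2, x3*x3,
                            x1*x2, x1*x3, x2*x3])

rng = np.random.default_rng(1)
# Synthetic stand-in for the CCD runs: (particle size, gas flow, wall temp),
# coded to [-1, 1] as is usual for a central composite design.
X = rng.uniform(-1, 1, size=(20, 3))
co_ratio = np.exp(2.0 * X[:, 2] + 0.1 * rng.standard_normal(20))  # steep

# A direct quadratic fit struggles with the steep response; fit log10(y)
# instead, as the abstract does for the CO/CO2 ratio.
y = np.log10(co_ratio)
beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
predict = lambda Xnew: 10 ** (quadratic_design(Xnew) @ beta)
print(predict(np.array([[0.0, 0.0, 0.5]])))
```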