• Title/Summary/Keyword: Classical Database


Development of Mobile Application for Preventive Management based on Korean Medicine: Mibyeongbogam (한의학 기반 예방관리를 위한 모바일 어플리케이션 개발: 미병보감)

  • Lee, Young Seop;Jin, Hee Jeong;Park, Dae Il;Lee, Si Woo
    • Journal of Sasang Constitutional Medicine / v.30 no.1 / pp.66-73 / 2018
  • Objectives: The purpose of this study was to develop a mobile application that evaluates Mibyeong (deterioration of health) in daily life and provides optimal Yangseng (養生) interventions according to Korean medicine types. Methods: The evaluation of Mibyeong used questionnaires and objective information, including facial photographs and hemodynamic data. The Korean medicine type classification was reconstructed based on the concepts of Sasang constitution and cold-heat pattern identification. Yangseng interventions were recommended based on Mibyeong symptoms, Korean medicine types, and demographic information, and tracking and ranking functions were developed to motivate users. A Korean medicine database focused on healthy people was used as reference data, together with a Yangseng intervention database that reinterprets classical Yangseng in a modern way. Results and Conclusions: We developed a mobile application that evaluates the user's Mibyeong state and provides optimal Yangseng interventions based on Korean medicine types. This study is expected to improve quality of health and contribute to the prevention of disease.

Detection of ST-T Episode Based on the Global Curvature of Isoelectric Level in ECG (ECG 신호의 global curvature를 이용한 ST-T 에피소드 검출)

  • Kang, Dong-Won;Jun, Dae-Gun;Lee, Kyoung-Joung;Yoon, Hyung-Ro
    • The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.4 / pp.201-207 / 2001
  • This paper describes an automated algorithm for detecting ST-T episodes using global curvature, which connects the isoelectric levels in the ECG and eliminates not only the slope of the ST segment but also baseline differences and global curvature. This baseline correction is much faster than classical baseline correction methods. The optimal parameter values for baseline correction were chosen as those yielding the highest ST episode detection rate. Features used as input to a backpropagation neural network were extracted from the whole ST segment. The European ST-T database was used as training and test data. Finally, ST elevation, ST depression, and normal ST were classified. The average ST episode sensitivity and predictivity were 85.42% and 80.29%, respectively, showing high speed and reliability in ST episode detection. In conclusion, the proposed method shows promise for various applications in Holter systems.
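The paper's global-curvature method is specific to that work, but the classical baseline correction it is compared against can be sketched generically: interpolate a baseline through isoelectric (flat) samples and subtract it. The signal, sampling rate, and fiducial choice below are all illustrative assumptions, not the paper's data.

```python
import numpy as np

# Generic sketch of classical baseline correction (the comparison
# method, not the paper's global-curvature algorithm): pick one
# assumed isoelectric sample per beat, interpolate a baseline
# through them, and subtract it from the signal.
fs = 250                                     # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
ecg = 0.1 * np.sin(2 * np.pi * t) + 0.5 * t  # toy ECG-like signal + drift

iso_idx = np.arange(0, len(t), fs)           # assumed isoelectric fiducials
baseline = np.interp(np.arange(len(t)), iso_idx, ecg[iso_idx])
corrected = ecg - baseline                    # zero at each fiducial point
```

By construction the corrected signal passes through zero at every fiducial sample, which is the property such corrections aim for.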


Concurrency Control Using the Update Graph in Replicated Database Systems (중복 데이터베이스 시스템에서 갱신그래프를 이용한 동시성제어)

  • Choe, Hui-Yeong;Lee, Gwi-Sang;Hwang, Bu-Hyeon
    • The KIPS Transactions: Part D / v.9D no.4 / pp.587-602 / 2002
  • Replicated database systems emerged to resolve the reduced availability and reliability caused by communication failures and site errors in centralized database systems. However, when many update transactions occur, each update must be executed identically on all replicated data, which causes problems such as the message overhead of synchronization and reduced concurrency from delayed transactions. In this paper, we propose a new concurrency control algorithm that enhances the degree of transaction parallelism in a fully replicated database designed to improve availability and reliability. To improve performance in the replicated database, the final operations should be performed at the site where a transaction is submitted, and update-only transactions composed of write-only operations should be executed independently at all sites. The proposed concurrency control method maintains the consistency of the replicated database while reflecting the results of update-only transactions at all sites. The superiority of the proposed method was tested in terms of response and abort rates, and the results confirm its advantage over the classical correlation-based method.
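The abstract does not spell out the update-graph construction, but the core test behind any such graph-based concurrency control is acyclicity: a schedule is serializable when the directed conflict graph between transactions has no cycle. A minimal DFS-based sketch, with hypothetical transaction names:

```python
# Minimal cycle check on a directed conflict graph between
# transactions. The paper's update graph is more elaborate; this
# only sketches the underlying serializability test (acyclicity)
# using three-color depth-first search.
def has_cycle(edges, nodes):
    graph = {n: [] for n in nodes}
    for u, v in edges:
        graph[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def dfs(u):
        color[u] = GRAY               # u is on the current DFS path
        for v in graph[u]:
            if color[v] == GRAY:      # back edge -> cycle
                return True
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK              # fully explored
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)

# T1 -> T2 -> T3 -> T1 is cyclic, so that schedule is not serializable.
print(has_cycle([("T1", "T2"), ("T2", "T3"), ("T3", "T1")],
                ["T1", "T2", "T3"]))
```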

A MULTI-DIMENSIONAL REDUCTION METHOD OF LARGE-SCALE SURVEY DATABASE

  • Lee, Y.;Kim, Y.S.;Kang, H.W.;Jung, J.H.;Lee, C.H.;Yim, I.S.;Kim, B.G.;Kim, H.G.;Kim, K.T.
    • Publications of The Korean Astronomical Society / v.28 no.1 / pp.7-13 / 2013
  • We present a multi-dimensional reduction method for the survey cube database obtained with a single-dish radio telescope at the Taeduk Radio Astronomy Observatory (TRAO). The multibeam receiver system installed on the 14 m telescope at TRAO was not optimized at the initial stage, though it became more stable in the following season. We conducted a Galactic plane survey using the multibeam receiver system. The noise level of the first part of the survey was higher than expected, so a special reduction process was required. Along with a brief review of classical methods, a multi-dimensional reduction method is introduced: the 'background' task within IRAF (Image Reduction and Analysis Facility) can be applied along all three axes of the cube database. Various statistics of the reduction results were tested using several IRAF tasks. The rms of the raw survey data is 0.241 K; after primitive baseline subtraction and elimination of bad channel sections, the rms is 0.210 K. After one-dimensional reduction with the 'background' task, the rms is 0.176 K, and the average rms of the final reduced image is 0.137 K. Thus the image quality is improved by about 43% with the new reduction method.
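The quoted 43% figure follows directly from the rms values stated in the abstract, as a quick check shows:

```python
# Check of the quoted noise improvement, using the rms values from
# the abstract: 0.241 K for raw data, 0.137 K for the final image.
raw_rms = 0.241
final_rms = 0.137
improvement = (raw_rms - final_rms) / raw_rms
print(f"improvement: {improvement:.0%}")  # about 43%
```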

A Data-driven Multiscale Analysis for Hyperelastic Composite Materials Based on the Mean-field Homogenization Method (초탄성 복합재의 평균장 균질화 데이터 기반 멀티스케일 해석)

  • Suhan Kim;Wonjoo Lee;Hyunseong Shin
    • Composites Research / v.36 no.5 / pp.329-334 / 2023
  • The classical multiscale finite element (FE²) method involves iteratively solving micro-boundary value problems for representative volume elements at every macro-scale integration point, which is costly in both computation time and data storage. To overcome this, we developed a data-driven multiscale analysis method based on mean-field homogenization (MFH). Data-driven computational mechanics (DDCM) is a model-free approach that directly utilizes strain-stress datasets. For multiscale analysis, we efficiently construct a strain-stress database for the microstructure of composite materials using mean-field homogenization and run data-driven computational mechanics simulations on this database. In this paper, we apply the developed multiscale analysis framework to an example, confirming the results of data-driven simulations that account for the microstructure of a hyperelastic composite material. The data-driven approach to multiscale analysis can thus be applied to various materials and structures, opening up new possibilities for multiscale analysis research and applications.
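The model-free core of DDCM can be illustrated in one dimension: instead of evaluating a constitutive law, the solver looks up the nearest state in a precomputed strain-stress database. The database below is synthetic (a placeholder polynomial, not the paper's MFH-homogenized data), so this is only a sketch of the lookup step:

```python
import numpy as np

# Sketch of the data-driven idea: keep a strain-stress database
# (here 1D and synthetic, standing in for the MFH-homogenized
# composite data) and, for a trial strain, return the nearest
# database state instead of evaluating a constitutive model.
strains = np.linspace(0.0, 0.5, 51)               # 0.01 strain spacing
stresses = 2.0 * strains + 4.0 * strains**3       # placeholder material data

def nearest_state(eps, strains, stresses):
    """Return the database (strain, stress) pair closest to eps."""
    i = np.argmin(np.abs(strains - eps))
    return strains[i], stresses[i]

eps_db, sig_db = nearest_state(0.123, strains, stresses)
print(eps_db, sig_db)
```

A full DDCM solver additionally enforces equilibrium and compatibility while minimizing the distance to the dataset; the nearest-state query above is the database-facing half of that iteration.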

A decision support system for diagnosis of distress cause and repair in marine concrete structures

  • Champiri, Masoud Dehghani;Mousavizadegan, S.Hossein;Moodi, Faramarz
    • Computers and Concrete / v.9 no.2 / pp.99-118 / 2012
  • Marine structures are very costly and need continuous inspection and maintenance. The most effective way to monitor structural health is an expert system that can evaluate the importance of any distress on the structure and provide a maintenance program. An extensive literature review, interviews with expert supervisors, and a national survey were used to build a decision support system for concrete structures in the sea environment. Decision trees form the main rules in this system. The system input is inspection information, and the output is the main cause(s) of distress(es) and the best repair method(s). Economic conditions, distress severity, distress situation, new technologies, and the most frequently used classical methods are considered in choosing the best repair method. A case study demonstrates the application of the developed decision support system to a type of marine structure.

Variable Arrangement for Data Visualization

  • Huh, Moon Yul;Song, Kwang Ryeol
    • Communications for Statistical Applications and Methods / v.8 no.3 / pp.643-650 / 2001
  • Classical plots such as scatterplot matrices and parallel coordinates are valuable tools for data visualization. These tools are used extensively in modern data mining software to explore the inherent data structure and hence to visually classify or cluster the database into appropriate groups. However, the interpretation of these plots is very sensitive to the arrangement of variables. In this work, we introduce two methods for arranging the variables for data visualization. The first, based on the work of Wegman (1999), arranges the variables using the minimum distance among all pairwise permutations of the variables. The second uses the idea of principal components. We investigate the effectiveness of these methods with parallel coordinates on real data sets, and show that each of the two proposed methods has its own strengths from different aspects.
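The first idea can be sketched as ordering the axes so adjacent variables are similar. The paper searches over pairwise permutations; the version below is a simplified greedy stand-in using 1 − |correlation| as the pairwise distance, on made-up data:

```python
import numpy as np

def arrange_variables(X):
    """Greedy sketch of axis ordering for parallel coordinates:
    chain variables so each next axis is the closest remaining one,
    with distance 1 - |correlation|. (Simplified stand-in for the
    paper's pairwise-permutation search.)"""
    d = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
    n = d.shape[0]
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: d[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Toy data: column 2 is exactly twice column 0, so it should be
# placed next to column 0; the weakly related column 1 goes last.
X = np.array([[1, 4, 2], [2, 1, 4], [3, 3, 6], [4, 2, 8]], dtype=float)
print(arrange_variables(X))  # [0, 2, 1]
```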


Data Mining Research on Maehwado Painting Poetry in the Early Joseon Dynasty

  • Haeyoung Park;Younghoon An
    • Journal of Information Processing Systems / v.19 no.4 / pp.474-482 / 2023
  • Data mining is a technique for extracting valuable information from vast amounts of data by analyzing statistical and mathematical operations, rules, and relationships. In this study, we employed data mining technology to analyze the painting poetry of Maehwado (plum blossom paintings) from the early Joseon Dynasty. The data were extracted from the Hanguk Munjip Chonggan (Korean Literary Collections in Classical Chinese) in the Hanguk Gojeon Jonghap database (Korea Classics DB). Using computer information processing techniques, we carried out web scraping and classification of the painting poetry from the Hanguk Munjip Chonggan, then narrowed our focus to the painting poetry specifically related to Maehwado in the early Joseon Dynasty. Based on this refined dataset, we conducted an in-depth analysis and interpretation of the text data at the syllable corpus level. As a result, we found a direct correlation between the corpus statistics for each syllable in Maehwado painting poetry and the symbolic meaning of plum blossoms.
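Syllable-level corpus statistics on classical Chinese verse reduce to character counting, since each character is one syllable. A minimal sketch on a tiny made-up stand-in corpus (the real data comes from the Korea Classics DB):

```python
from collections import Counter

# Hypothetical two-line miniature corpus standing in for the
# Maehwado painting poetry scraped from the Korea Classics DB.
poems = ["梅花雪中開", "雪梅香自苦寒來"]

# In classical Chinese verse each character is one syllable, so
# counting characters yields the syllable-level corpus statistics.
corpus = Counter(ch for poem in poems for ch in poem)
print(corpus.most_common(2))  # 梅 and 雪 each appear twice
```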

A Research on Automatic Data Extract Method of Pulse Descriptions Using the List of Pulse Terminology - Based on 『Euijongsonik』 - (맥상용어목록을 이용한 맥상표현 자동추출방법 연구 -『의종손익』을 중심으로-)

  • Keum, Yujeong;Lee, Byungwook;Eom, Dongmyung;Song, Jichung
    • Journal of Korean Medical Classics / v.33 no.4 / pp.21-32 / 2020
  • Objectives: Pulse descriptions in Korean medical texts are combinations of pulse terminology, where various combinations of pulse terms are used to describe disease symptoms. For Korean medical doctors and professionals, however, it is impossible to identify the entirety of pulse description combinations, and their understanding is mostly limited to those learned from classical texts studied individually. Methods: This research was carried out using Access from Microsoft Office 365 on Microsoft Windows 10. Pulse descriptions were extracted from the text 『Euijongsonik』. In the final stages, the automatically extracted list of pulse descriptions was refined using a list of excluded pulse description terminology. Results: The PC environment was an Intel Core i7-1065G7 CPU at 1.30 GHz with 8 GB of RAM and 64-bit Windows 10. Out of 6,115 verses, 6,497 descriptions were initially extracted, and after refinement the final list contained 5,507 pulse descriptions. Conclusions: Assuming classical texts are available in a data form that programs can process, this methodology demonstrated that creating a pulse description database automatically is more efficient in time and manpower than creating one manually.
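The extraction step amounts to finding runs of consecutive pulse-terminology characters in the text. The terminology list and verse below are invented fragments for illustration (the study used Microsoft Access over the full 『Euijongsonik』 text), but the matching logic is the same:

```python
import re

# Hypothetical pulse terminology list and verse fragment; the study
# processed the full text of Euijongsonik with a curated term list.
pulse_terms = ["浮", "沈", "遲", "數", "滑"]
verse = "脈浮數者宜汗之 脈沈遲者宜溫之"

# A pulse description is a run of consecutive pulse-terminology
# characters (e.g. 浮數 "floating-rapid", 沈遲 "sunken-slow").
pattern = re.compile("[" + "".join(pulse_terms) + "]+")
descriptions = pattern.findall(verse)
print(descriptions)  # ['浮數', '沈遲']
```

In the study, an exclusion list then filtered out false positives from the automatically extracted candidates.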

Pattern Recognition System Combining KNN rules and New Feature Weighting algorithm (KNN 규칙과 새로운 특징 가중치 알고리즘을 결합한 패턴 인식 시스템)

  • Lee Hee-Sung;Kim Euntai;Kim Dongyeon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.4 s.304 / pp.43-50 / 2005
  • This paper proposes a new pattern recognition system combining adaptive feature weighting based on a genetic algorithm with modified KNN (K-nearest-neighbor) rules. The proposed feature weighting avoids overfitting and finds proper feature weights by determining the middle value of the weights using the GA. New GA operators are introduced to achieve high system performance. Moreover, a class-dependent feature weighting strategy is employed: whereas classical methods use the same feature space for all classes, the proposed method uses a different feature space for each class. The KNN rule is modified to estimate the class of a test pattern using the adaptive feature space. Experiments on the unconstrained handwritten numeral database of Concordia University in Canada show the performance of the proposed method.
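The class-dependent weighting idea can be sketched as a KNN where the distance to each training sample uses the feature weights of that sample's class. The toy data and fixed weights below are illustrative only; in the paper the weights are found by the genetic algorithm:

```python
import numpy as np

def weighted_knn_predict(x, X_train, y_train, class_weights, k=3):
    """KNN with class-dependent feature weights: the distance to each
    training sample is measured in the feature space of that sample's
    class, then the k nearest neighbours vote by majority."""
    dists = np.array([
        np.sqrt(np.sum(class_weights[c] * (x - xi) ** 2))
        for xi, c in zip(X_train, y_train)
    ])
    nearest = y_train[np.argsort(dists)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Toy example with hand-picked weights (in the paper these come from
# the GA): both classes separate on feature 0, so feature 1 is
# down-weighted and its large values do not mislead the vote.
X = np.array([[0.0, 5.0], [0.1, -3.0], [5.0, 0.0], [5.1, 9.0]])
y = np.array([0, 0, 1, 1])
w = {0: np.array([1.0, 0.01]), 1: np.array([1.0, 0.01])}
print(weighted_knn_predict(np.array([0.2, 8.0]), X, y, w, k=3))  # 0
```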