• Title/Summary/Keyword: Neutral database

Manipulating Geometry Instances in an STEP-based OODB from Commercial CAD Systems (상업용 CAD에서 STEP 기반 객체지향 데이터베이스 내부의 형상 인스턴스 검색 및 수정)

  • Kim, Junhwan;Han, Soonhung
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.28 no.4
    • /
    • pp.435-442
    • /
    • 2002
  • It is difficult to access and share design data among heterogeneous CAD systems. Usually, different CAD systems exchange design data using a neutral format such as IGES or STEP. A prototype CAD system that uses a geometric kernel and a commercial database management system has been implemented. The prototype uses the Open Cascade geometric kernel and the commercial object-oriented database ObjectStore, with STEP providing the database schema. The database can be accessed from commercial CAD systems such as SolidWorks or Unigraphics. The data access module for a commercial CAD system is developed with the CAD system's native API, ObjectStore API functions, and ActiveX.

Developing a B2B Integration System based on XML Database System (XML 데이터베이스 시스템을 기반으로 한 B2B 통합 시스템 개발)

  • 이정수;정상혁;주경수
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.1
    • /
    • pp.1-14
    • /
    • 2003
  • E-commerce requires many different types of communication, and an unprecedented amount of data changes hands. The many different platforms and systems involved require a platform-neutral standard for data exchange. One technology that fills this niche is XML, the extensible markup language established as a standard by the W3C. Because it is standardized and platform neutral, XML is well suited to e-commerce applications spanning many systems. In this paper, we design the XML documents used in transactions between corporations and implement a B2B integration system based on an XML database system. We also use XSLT to transform XML documents efficiently when exchanging heterogeneous XML data between corporations, so that users can exchange XML documents between corporations more easily and efficiently.
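
The schema-to-schema mapping role that XSLT plays in the paper can be illustrated with the standard library alone (XSLT itself needs an external processor such as lxml). A minimal sketch, with hypothetical element names on both sides:

```python
import xml.etree.ElementTree as ET

def transform_order(src_xml: str) -> str:
    """Map a hypothetical seller-side <order> document onto a
    buyer-side <purchaseOrder> schema, the kind of transformation
    the paper performs with XSLT."""
    src = ET.fromstring(src_xml)
    dst = ET.Element("purchaseOrder", id=src.get("id", ""))
    for item in src.findall("item"):
        line = ET.SubElement(dst, "line")
        ET.SubElement(line, "sku").text = item.findtext("code", "")
        ET.SubElement(line, "qty").text = item.findtext("quantity", "0")
    return ET.tostring(dst, encoding="unicode")

doc = '<order id="A1"><item><code>X-9</code><quantity>3</quantity></item></order>'
print(transform_order(doc))
```

An XSLT stylesheet would express the same element renaming declaratively; the advantage is that each trading partner only maintains a stylesheet, not code.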


A methodology for XML documentation of the structural calculation document to build database supporting safety management of infrastructures (사회기반시설물 안전관리 지원 데이터베이스 구축을 위한 구조계산서의 XML 문서화 방법론)

  • Kim, Bong-Geun;Park, Sang-Il;Lee, Jin-Hoon;Lee, Sang-Ho
    • 한국방재학회:학술대회논문집
    • /
    • 2007.02a
    • /
    • pp.414-417
    • /
    • 2007
  • A methodology for XML documentation of structural calculation documents is presented to support manipulation of design information on the internet. The text file format is chosen as the neutral format because it can easily be produced from the office documents generated in engineering practice. The first word of each line is compared against the reserved numbering groups, and relative levels among the lines are defined to generate a hierarchically structured XML document from the text file. A demonstration on sample general documents and structural calculation documents shows that the prototype application module based on the developed methodology can be used to build a database of design information that supports the safety management of infrastructures.
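
The line-numbering heuristic described above (match each line's first word against reserved numbering groups, then nest lines by relative level) can be sketched as follows; the numbering patterns and tag names are assumptions, since the paper's reserved set is not given here:

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical reserved numbering groups, ordered by heading depth.
LEVELS = [re.compile(r"^\d+\.$"),     # "1."  -> level 0
          re.compile(r"^\d+\.\d+$"),  # "1.1" -> level 1
          re.compile(r"^\(\d+\)$")]   # "(1)" -> level 2

def level_of(line: str) -> int:
    words = line.split()
    if not words:
        return len(LEVELS)
    for lvl, pat in enumerate(LEVELS):
        if pat.match(words[0]):
            return lvl
    return len(LEVELS)  # plain text sits below any heading

def to_xml(lines):
    """Build a hierarchical XML tree from flat text lines."""
    root = ET.Element("doc")
    stack = [(-1, root)]  # (level, element) pairs forming the open path
    for line in lines:
        lvl = level_of(line)
        while stack[-1][0] >= lvl:   # close sections at the same or deeper level
            stack.pop()
        node = ET.SubElement(stack[-1][1], "sec")
        node.text = line
        stack.append((lvl, node))
    return root
```

For example, `["1. Design loads", "dead load 12 kN", "1.1 Live load", "(1) crane load"]` yields a `<doc>` whose first section contains the plain-text line and the nested "1.1" subsection.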


Development and validation of a Korean Affective Voice Database (한국형 감정 음성 데이터베이스 구축을 위한 타당도 연구)

  • Kim, Yeji;Song, Hyesun;Jeon, Yesol;Oh, Yoorim;Lee, Youngmee
    • Phonetics and Speech Sciences
    • /
    • v.14 no.3
    • /
    • pp.77-86
    • /
    • 2022
  • In this study, we report the validation results of the Korean Affective Voice Database (KAV DB), an affective voice database available for scientific and clinical use, comprising a total of 113 validated affective voice stimuli. The KAV DB includes audio recordings of two actors (one male and one female), each uttering 10 semantically neutral sentences with the intention to convey six different affective states (happiness, anger, fear, sadness, surprise, and neutral). The database was organized into three separate voice stimulus sets in order to validate the KAV DB. Participants rated the stimuli on six rating scales corresponding to the six targeted affective states, using a 100-point horizontal visual analog scale. The KAV DB showed high internal consistency for the voice stimuli (Cronbach's α=.847). The database had high sensitivity (mean=82.8%) and specificity (mean=83.8%). The KAV DB is expected to be useful for both academic research and clinical purposes in the field of communication disorders. The KAV DB is available for download at https://kav-db.notion.site/KAV-DB-7539a36abe2e414ebf4a50d80436b41a.
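
The internal-consistency figure reported above is Cronbach's α, which has a standard definition and can be computed directly from a rating matrix. A minimal sketch, assuming rows are raters and columns are stimuli (the example numbers are illustrative, not KAV DB data):

```python
from statistics import pvariance

def cronbach_alpha(ratings):
    """Cronbach's alpha for a rating matrix.
    ratings: one list per rater, each holding that rater's score per item."""
    items = list(zip(*ratings))                    # transpose to per-item columns
    k = len(items)                                 # number of items
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in ratings])
    return k / (k - 1) * (1 - item_var / total_var)

ratings = [[80, 75, 90], [82, 70, 88], [60, 55, 70]]
print(round(cronbach_alpha(ratings), 3))
```

Values near 1 indicate that raters order the stimuli consistently, which is what the reported α=.847 conveys.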

A research on the emotion classification and precision improvement of EEG(Electroencephalogram) data using machine learning algorithm (기계학습 알고리즘에 기반한 뇌파 데이터의 감정분류 및 정확도 향상에 관한 연구)

  • Lee, Hyunju;Shin, Dongil;Shin, Dongkyoo
    • Journal of Internet Computing and Services
    • /
    • v.20 no.5
    • /
    • pp.27-36
    • /
    • 2019
  • In this study, experiments on emotion classification from EEG data and on improving its accuracy were conducted using the DEAP (Database for Emotion Analysis using Physiological signals) dataset. The experiments used 32 channels of EEG data measured from 32 subjects. In the pre-processing step, the EEG data were sampled at 256 Hz, and the theta, slow-alpha, alpha, beta, and gamma frequency bands were extracted using a finite impulse response (FIR) filter. After the extracted data were classified through a time-frequency transform, artifacts were removed using independent component analysis. The cleaned data were converted to CSV format for the machine learning experiments, and the arousal-valence plane was used as the criterion for emotion classification. Emotions were categorized into three classes, 'positive', 'negative', and 'neutral', where 'neutral' denotes a tranquil emotional state. Data for the 'neutral' condition were classified using the Cz (central zero) channel, configured as the reference channel. To improve accuracy, the experiments were also performed with the attributes selected by an Attribute Selected Classifier (ASC). For arousal, the accuracy of these experiments was 32.48% higher than Koelstra's results, and with ASC the accuracy for valence was 8.13% higher than Liu's results. In the Random Forest classifier experiment with ASC, an accuracy 2.68% higher than the overall mean of the existing studies was confirmed.
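
The band-extraction step above splits each channel into named frequency bands. The paper uses FIR filters; a cruder but self-contained way to show the same idea is to sum DFT bin power inside each band (band edges below are common conventions, not necessarily the paper's):

```python
import cmath
import math

# Approximate band edges in Hz; the paper's exact cutoffs are not given here.
BANDS = {"theta": (4, 8), "slow_alpha": (8, 10), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power(signal, fs, lo, hi):
    """Total power of DFT bins whose frequency falls in [lo, hi) Hz.
    O(n^2) naive DFT: fine for a short illustrative window only."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            X = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += abs(X) ** 2 / n
    return power

# A pure 10 Hz tone at fs=256 Hz should land in the alpha band, not beta.
sig = [math.sin(2 * math.pi * 10 * t / 256) for t in range(256)]
print(band_power(sig, 256, *BANDS["alpha"]), band_power(sig, 256, *BANDS["beta"]))
```

A real pipeline would use a windowed FIR band-pass filter (e.g. designed with a DSP library) rather than a raw DFT, which is what the abstract describes.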

Image Analysis Fuzzy System

  • Abdelwahed Motwakel;Adnan Shaout;Anwer Mustafa Hilal;Manar Ahmed Hamza
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.1
    • /
    • pp.163-177
    • /
    • 2024
  • Fingerprint image quality relies on the clearness of ridges separated by valleys and on the uniformity of that separation. The condition of the skin still dominates the overall quality of the fingerprint, and the identification performance of such a system is very sensitive to the quality of the captured fingerprint image. Fingerprint image quality analysis and enhancement are therefore useful in improving the performance of fingerprint identification systems. This paper introduces a fuzzy technique for both fingerprint image quality analysis and enhancement. First, quality analysis is performed by extracting four features from a fingerprint image: the local clarity score (LCS), global clarity score (GCS), ridge-valley thickness ratio (RVTR), and global contrast factor (GCF). A fuzzy logic technique using the Mamdani fuzzy rule model is designed. The fuzzy inference system can analyse and determine the fingerprint image type (oily, dry, or neutral) from the extracted feature values and the fuzzy inference rules. The test accuracy of the fuzzy inference system for each type is as follows: 81.33% for dry fingerprints, 54.75% for oily, and 68.48% for neutral. Secondly, fuzzy morphology is applied to enhance the dry and oily fingerprint images. The fuzzy morphology method improves the quality of a fingerprint image, thus significantly improving the performance of the fingerprint identification system. All experimental work for both quality analysis and image enhancement used the DB_ITS_2009 database, a private database collected by the Department of Electrical Engineering, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia. Performance evaluation used the feature similarity index (FSIM), an image quality assessment (IQA) metric that uses computational models to measure image quality consistently with subjective evaluations. The proposed system outperformed the classical system by 900% for the dry fingerprint images and 14% for the oily fingerprint images.
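
The classification step (feature values plus fuzzy rules yielding dry/neutral/oily) can be sketched with a toy single-input rule base. The real system fuses four features (LCS, GCS, RVTR, GCF) through Mamdani inference; here only RVTR is used, and the membership breakpoints are assumptions:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_rvtr(rvtr):
    """Toy rule base: thin ridges relative to valleys -> dry,
    balanced -> neutral, thick ridges -> oily."""
    degrees = {"dry":     tri(rvtr, -0.1, 0.2, 0.5),
               "neutral": tri(rvtr,  0.3, 0.5, 0.7),
               "oily":    tri(rvtr,  0.5, 0.8, 1.1)}
    return max(degrees, key=degrees.get)   # winner-take-all defuzzification
```

A full Mamdani system would combine the four features with AND/OR rule antecedents and defuzzify an aggregated output set (e.g. by centroid) rather than taking the maximum membership.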

Ship Outfitting Design Data Exchange between CAD Systems Using Different Primitive Set (서로 다른 프리미티브 집합을 사용하는 CAD 시스템 사이에 선박 의장 설계 데이터의 교환)

  • Lee, Seunghoon;Han, Soonhung
    • Korean Journal of Computational Design and Engineering
    • /
    • v.18 no.3
    • /
    • pp.234-242
    • /
    • 2013
  • Different CAD systems are used in ship outfitting design, depending on usage and purpose, so design data must be exchanged between CAD systems with different formats. For data exchange, boundary representation (B-rep) standard formats such as IGES and ISO 10303 (STEP) are widely used, but they offer only a B-rep representation. Because each CAD system has its own geometry format, exchanging data together with design intent is difficult. In particular, Tribon and PDMS express their geometry in ship outfitting design with primitives. However, Tribon primitives represent their parameters as literal values, i.e. non-parametrically, so the catalogue library is larger than in CAD systems that use a parametric primitive representation, and reprocessing the data is difficult. To solve this problem, we discuss a shape database that contains the design parameters of primitives for exchanging Tribon primitives. Geometry data exchange between Tribon and the shape database, which is defined based on the PDMS scheme, is specified using a primitive mapping that can preserve design intent.
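
The value-based versus parametric distinction above can be made concrete with one primitive. A minimal sketch for a cylinder, converting a value-based form (two axis endpoints plus a radius) into a parametric form (origin, unit axis, height, radius); the field names are illustrative, not actual Tribon or PDMS attributes:

```python
import math

def cylinder_values_to_params(p1, p2, radius):
    """Convert a cylinder given by two axis endpoints and a radius
    (a value-based, Tribon-like form) into a parametric record
    (origin, unit axis, height, radius), as a PDMS-like scheme stores it."""
    d = [b - a for a, b in zip(p1, p2)]
    height = math.sqrt(sum(c * c for c in d))
    axis = tuple(c / height for c in d)
    return {"origin": tuple(p1), "axis": axis,
            "height": height, "radius": radius}

print(cylinder_values_to_params((0, 0, 0), (0, 0, 4.0), 0.5))
```

The parametric record is what preserves design intent: editing `height` or `radius` later keeps the primitive a cylinder, whereas the value-based form would have to be re-derived point by point.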

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri;Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.8
    • /
    • pp.9-16
    • /
    • 2023
  • Speech can actively convey feelings and attitudes through words, so it is important for researchers to identify the emotional content of speech signals as well as the type of emotion the speech expresses. In this study, we investigated an emotion recognition system using an Arabic database, specifically in the Saudi dialect, drawn from a YouTube channel called Telfaz11. Four emotions were examined: anger, happiness, sadness, and neutral. In our experiments, we extracted features from the audio signals, such as the Mel frequency cepstral coefficients (MFCC) and the zero-crossing rate (ZCR), and then classified emotions using several algorithms: machine learning algorithms (support vector machine (SVM) and k-nearest neighbor (KNN)) and deep learning algorithms (convolutional neural network (CNN) and long short-term memory (LSTM)). Our experiments showed that the MFCC features with the CNN model obtained the best accuracy, 95%, demonstrating the effectiveness of this classification system in recognizing spoken Arabic emotions.
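
Of the two features named above, MFCC extraction needs a DSP library, but the zero-crossing rate is simple enough to show in full. A minimal sketch of per-frame ZCR, one of the features the authors feed to their classifiers:

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ.
    High values suggest noisy/unvoiced speech; low values suggest voiced speech."""
    crossings = sum(1 for a, b in zip(frame, frame[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

print(zero_crossing_rate([1, -1, 1, -1]))   # alternates every sample -> 1.0
print(zero_crossing_rate([1, 2, 3, 4]))     # never crosses zero -> 0.0
```

In a full pipeline the waveform is split into short overlapping frames and ZCR is computed per frame, giving a feature sequence alongside the MFCC matrix.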

COMPARISON OF LINEAR AND NON-LINEAR NIR CALIBRATION METHODS USING LARGE FORAGE DATABASES

  • Berzaghi, Paolo;Flinn, Peter C.;Dardenne, Pierre;Lagerholm, Martin;Shenk, John S.;Westerhaus, Mark O.;Cowe, Ian A.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1141-1141
    • /
    • 2001
  • The aim of the study was to evaluate the performance of 3 calibration methods, modified partial least squares (MPLS), local PLS (LOCAL) and artificial neural network (ANN), on the prediction of the chemical composition of forages, using a large NIR database. The study used forage samples (n=25,977) from Australia, Europe (Belgium, Germany, Italy and Sweden) and North America (Canada and the U.S.A.) with information on moisture, crude protein and neutral detergent fibre content. The spectra of the samples were collected with 10 different Foss NIR Systems instruments, which were either standardized or not standardized to one master instrument. The spectra were trimmed to a wavelength range between 1100 and 2498 nm. Two data sets, one standardized (IVAL) and the other not standardized (SVAL), were used as independent validation sets, but 10% of both sets were omitted and kept for later expansion of the calibration database. The remaining samples were combined into one database (n=21,696), which was split into 75% calibration (CALBASE) and 25% validation (VALBASE). The chemical components in the 3 validation data sets were predicted with each model derived from CALBASE, using the calibration database before and after it was expanded with 10% of the samples from the IVAL and SVAL data sets. Calibration performance was evaluated using the standard error of prediction corrected for bias (SEP(C)), bias, slope and R2. None of the models appeared to be consistently better across all validation sets. VALBASE was predicted well by all models, with smaller SEP(C) and bias values than for IVAL and SVAL. This was not surprising, as VALBASE was selected from the calibration database and had a sample population similar to CALBASE, whereas IVAL and SVAL were completely independent validation sets. In most cases, the LOCAL and ANN models, but not MPLS, showed considerable improvement in the prediction of IVAL and SVAL after the calibration database had been expanded with the 10% of samples from IVAL and SVAL reserved for calibration expansion. The effects of sample processing, instrument standardization and differences in reference procedure were partially confounded in the validation sets, so it was not possible to determine which factors were most important. Further work on the development of large databases must address the problems of standardization of instruments, harmonization and standardization of laboratory procedures and, even more importantly, the definition of the database population.
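
The evaluation statistics named above, bias and SEP(C), have standard definitions in NIR chemometrics: bias is the mean residual, and SEP(C) is the standard deviation of the residuals about that bias. A minimal sketch:

```python
import math

def calibration_stats(reference, predicted):
    """Bias and standard error of prediction corrected for bias, SEP(C)."""
    n = len(reference)
    residuals = [p - r for p, r in zip(predicted, reference)]
    bias = sum(residuals) / n                                  # mean residual
    sep_c = math.sqrt(sum((e - bias) ** 2 for e in residuals) / (n - 1))
    return bias, sep_c

# A prediction offset by a constant has nonzero bias but SEP(C) = 0:
print(calibration_stats([10, 12, 14, 16], [11, 13, 15, 17]))
```

Separating the two matters for the comparison in the abstract: a model can track the reference chemistry tightly (low SEP(C)) yet be systematically offset (high bias), and instrument standardization mainly attacks the latter.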


Standard Representation of Simulation Data Based on SEDRIS (SEDRIS기반의 모의자료 표현 표준화)

  • Kim, Hyung-Ki;Kang, Yun-A;Han, Soon-Hung
    • Journal of the Korea Society for Simulation
    • /
    • v.19 no.4
    • /
    • pp.249-259
    • /
    • 2010
  • Synthetic environment data used in defense M&S, which come from various organizations and sources, are consumed and managed by each organization's own native database system in a distributed environment. To manage these diverse data while interoperating in an HLA/RTI environment, a neutral synthetic environment data model is necessary to transmit the data between native databases. With the support of DMSO, SEDRIS was developed to meet this requirement, and the specification guarantees lossless data representation, interchange and interoperability. In this research, aimed at using SEDRIS as a standard simulation database, basic research, visualization for validation, and a data interchange experiment on a test-bed were carried out. This paper presents each research case, the results and future research directions, and proposes a standardized SEDRIS usage process.