• Title/Summary/Keyword: Maximum entropy processing

Search results: 39

Finite Element Analysis and Experimental Verification for the Cold-drawing of a FCC-based High Entropy Alloy (FCC계 고엔트로피 합금의 냉간 인발 유한요소해석 및 실험적 검증)

  • Cho, H.S.;Bae, S.J.;Na, Y.S.;Kim, J.H.;Lee, D.G.;Lee, K.S.
    • Transactions of Materials Processing / v.29 no.3 / pp.163-171 / 2020
  • We present a multi-step cold drawing process for a non-equiatomic Co10Cr15Fe25Mn10Ni30V10 high entropy alloy (HEA) with a simple face-centered cubic (FCC) crystal structure. The strain distribution in the cold-drawn Co10Cr15Fe25Mn10Ni30V10 HEA wires was analyzed by the finite element method (FEM). The effective strain was predicted to be higher closer to the surface of the wire; however, reverse shear strain acted to cause a transition in the shear strain behavior. The critical effective strain at which this shear strain transition is complete was predicted to be 4.75. Severely cold-drawn Co10Cr15Fe25Mn10Ni30V10 HEA wires were successfully manufactured without breakage, up to a maximum cross-sectional reduction ratio of 96%. Electron back-scatter diffraction and transmission electron microscopy analyses revealed abundant deformation twins in the regions of high effective strain; deformation twinning is the major strengthening mechanism of the cold-drawn Co10Cr15Fe25Mn10Ni30V10 HEA wire.

A Two-Phase Shallow Semantic Parsing System Using Clause Boundary Information and Tree Distance (절 경계와 트리 거리를 사용한 2단계 부분 의미 분석 시스템)

  • Park, Kyung-Mi;Hwang, Kyu-Baek
    • Journal of KIISE:Computing Practices and Letters / v.16 no.5 / pp.531-540 / 2010
  • In this paper, we present a two-phase shallow semantic parsing method based on a maximum entropy model. The first phase recognizes semantic arguments, i.e., argument identification. The second phase assigns appropriate semantic roles to the recognized arguments, i.e., argument classification. The performance of the first phase is crucial for the success of the entire system, because the second phase operates only on the regions recognized at the identification stage. In order to improve the performance of argument identification, we incorporate syntactic knowledge into its pre-processing step. More precisely, the boundaries of the immediate clause and the upper clauses of a predicate, obtained from clause identification, are utilized to reduce the search space. Further, the distance on the parse tree from the parent node of a predicate to the parent node of a parse constituent is exploited. Experimental results show that incorporating syntactic knowledge and separating argument identification from the rest of the procedure enhance the performance of the shallow semantic parsing system.
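The identify-then-classify pipeline described above can be sketched in a few lines. This is an illustration only: the feature names, weights, and candidates below are invented, and a hand-set log-linear scorer stands in for the paper's trained maximum entropy models.

```python
import math

def maxent_prob(features, weights):
    """P(y=1 | x) under a binary log-linear (maximum entropy) model."""
    z = sum(weights.get(f, 0.0) for f in features)
    return 1.0 / (1.0 + math.exp(-z))

# Phase 1: argument identification (binary decision per parse constituent).
# Clause-boundary and tree-distance features prune the search space.
ident_weights = {"in_immediate_clause": 2.0, "tree_distance<=2": 1.5,
                 "outside_clause": -3.0}

# Phase 2: argument classification (pick the highest-scoring semantic role).
role_weights = {"ARG0": {"before_predicate": 1.0, "is_NP": 0.5},
                "ARG1": {"after_predicate": 1.2, "is_NP": 0.5}}

def parse(constituents):
    args = []
    for feats in constituents:
        if maxent_prob(feats, ident_weights) > 0.5:      # phase 1
            role = max(role_weights,
                       key=lambda r: sum(role_weights[r].get(f, 0) for f in feats))
            args.append(role)                             # phase 2
    return args

cands = [{"in_immediate_clause", "tree_distance<=2", "before_predicate", "is_NP"},
         {"outside_clause", "after_predicate"}]
print(parse(cands))  # the out-of-clause candidate is pruned in phase 1
```

Because phase 2 only ever sees constituents that survive phase 1, errors in identification cap the whole system's performance, which is why the paper invests its syntactic knowledge there.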

Design of H.264/AVC CABAC Encoder with an Efficient Storage Reduction of Syntax Elements (구문 요소의 저장 공간을 효과적으로 줄인 H.264/AVC CABAC 부호화기 설계)

  • Kim, Yoon-Sup;Moon, Jeon-Hak;Lee, Seong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SD / v.47 no.4 / pp.34-40 / 2010
  • This paper proposes an efficient CABAC encoder that reduces syntax element storage in H.264/AVC entropy coding. In the proposed architecture, all blocks are implemented in dedicated hardware, so the encoder achieves fast processing without programmable processors. The context modeler of a CABAC encoder requires neighbor block data; however, storing the neighbor block data directly, without further processing, would require an impractically large memory. Therefore, this paper proposes an effective method of storing the neighbor block data that decreases the memory size. The proposed CABAC encoder occupies 35,463 gates in a 0.18 um standard cell library. It operates at a maximum speed of 180 MHz, with a throughput of about 1 cycle per input symbol.

Automatic Detection of Korean Prosodic Boundaries Using Acoustic and Grammatical Information (음성정보와 문법정보를 이용한 한국어 운율 경계의 자동 추정)

  • Kim, Sun-Hee;Jeon, Je-Hun;Hong, Hye-Jin;Chung, Min-Hwa
    • MALSORI / no.66 / pp.117-130 / 2008
  • This paper presents a method for automatically detecting Korean prosodic boundaries using both acoustic and grammatical information, in order to improve the performance of speech information processing systems. While most previous work relies solely on grammatical information, our method utilizes not only grammatical information, captured by a maximum-entropy-based grammar model using 10 grammatical features, but also acoustic information, captured by a GMM-based acoustic model using 14 acoustic features. Given that the Korean prosodic structure has two intonationally defined prosodic units, the intonation phrase (IP) and the accentual phrase (AP), experimental results show that the detection rate of AP boundaries is 82.6%, which is higher than the labeler agreement rate in hand transcription, and that the detection rate of IP boundaries is 88.7%, which is slightly lower than the labeler agreement rate.
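A common way to combine a grammar-model score with an acoustic-model score, in the spirit of the method above, is a weighted log-linear fusion of the two boundary probabilities. This sketch is illustrative only: the weights, probabilities, and decision threshold are invented, not taken from the paper.

```python
import math

def fuse(p_grammar, p_acoustic, w_g=0.4, w_a=0.6):
    """Weighted log-linear combination of two boundary probabilities."""
    log_p = w_g * math.log(p_grammar) + w_a * math.log(p_acoustic)
    return math.exp(log_p)

def detect_boundaries(positions, threshold=0.5):
    """positions: (p_grammar, p_acoustic) for each inter-word position.
    Returns True where a prosodic boundary is hypothesized."""
    return [fuse(pg, pa) > threshold for pg, pa in positions]

scores = [(0.9, 0.8), (0.2, 0.3), (0.7, 0.9)]
print(detect_boundaries(scores))  # → [True, False, True]
```

In a real system the two probabilities would come from the trained maximum entropy grammar model and the GMM acoustic model, and the fusion weights would be tuned on held-out data.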


Extraction of ObjectProperty-UsageMethod Relation from Web Documents

  • Pechsiri, Chaveevan;Phainoun, Sumran;Piriyakul, Rapeepun
    • Journal of Information Processing Systems / v.13 no.5 / pp.1103-1125 / 2017
  • This paper aims to extract an ObjectProperty-UsageMethod relation, in particular the HerbalMedicinalProperty-UsageMethod relation of the herb-plant object, as a semantic relation between two related sets, a herbal-medicinal-property concept set and a usage-method concept set, from several web documents. This HerbalMedicinalProperty-UsageMethod relation benefits people by providing alternative-treatment knowledge for health problems. The research addresses three main problems: how to determine an EDU (an elementary discourse unit, i.e., a simple sentence/clause) with a medicinal-property/usage-method concept; how to determine the usage-method boundary; and how to determine the HerbalMedicinalProperty-UsageMethod relation between the two related sets. We propose using N-Word-Co on the verb phrase with the medicinal-property/usage-method concept to solve the first and second problems, where the N-Word-Co size is determined by learning with maximum entropy, support vector machine, and naïve Bayes models. We also apply naïve Bayes to solve the third problem of determining the HerbalMedicinalProperty-UsageMethod relation, with N-Word-Co elements as features. The results show that the approach achieves high precision in HerbalMedicinalProperty-UsageMethod relation extraction.

Utilizing Various Natural Language Processing Techniques for Biomedical Interaction Extraction

  • Park, Kyung-Mi;Cho, Han-Cheol;Rim, Hae-Chang
    • Journal of Information Processing Systems / v.7 no.3 / pp.459-472 / 2011
  • The vast body of biomedical literature is an important source for discovering biomedical interaction information. However, obtaining interaction information from it is complicated, because most of it is not easily machine-readable. In this paper, we present a method for extracting biomedical interaction information, assuming that the biomedical Named Entities (NEs) are already identified. The proposed method labels all possible pairs of given biomedical NEs as INTERACTION or NO-INTERACTION using a Maximum Entropy (ME) classifier. The features used for the classifier are obtained by applying various NLP techniques such as POS tagging, base phrase recognition, parsing, and predicate-argument recognition. In particular, specific verb predicates (e.g., activate, inhibit, diminish) and their biomedical NE arguments are very useful features for identifying interacting NE pairs. Based on this, we devised a two-step method: 1) an interaction verb extraction step to find biomedically salient verbs, and 2) an argument relation identification step to generate partial predicate-argument structures between extracted interaction verbs and their NE arguments. In the experiments, we analyzed how much each applied NLP technique improves the performance. The proposed method improves on the baseline method by more than 2%. The use of external contextual features, obtained from outside the NEs, is crucial for this performance improvement. We also compare the performance of the proposed method against co-occurrence-based and rule-based methods; the results demonstrate that the proposed method improves performance considerably.

Uncertainty analysis of quantitative rainfall estimation process based on hydrological and meteorological radars (수문·기상레이더기반 정량적 강우량 추정과정에서의 불확실성 분석)

  • Lee, Jae-Kyoung
    • Journal of Korea Water Resources Association / v.51 no.5 / pp.439-449 / 2018
  • Many potential sources of bias enter the several steps of the radar-rainfall estimation process, because hydrological and meteorological radars measure the rainfall amount indirectly. Previous studies on radar-rainfall uncertainty attempted to reduce the uncertainty of each step by applying bias correction methods within the quantitative radar-rainfall estimation process. However, these studies provide neither a comprehensive uncertainty for the entire process nor the relative ratios of uncertainty between the steps. Consequently, this study proposes an approach that quantifies the uncertainty at each step of the quantitative radar-rainfall estimation process and shows how uncertainty propagates through the entire process. First, the proposed approach introduces a new concept that presents the initial and final uncertainties, the variation of the uncertainty, and the relative ratio of uncertainty at each step. Second, the Maximum Entropy Method (MEM) and the Uncertainty Delta Method (UDM) were applied to quantify the uncertainty and analyze its propagation through the entire process. Third, to quantify the uncertainty of radar-rainfall estimation at each step, two quality control algorithms, two radar-rainfall estimation relations, and two bias correction methods (as post-processing) were applied through the radar-rainfall estimation process for 18 rainfall cases in 2012. In the MEM results, the final uncertainty (from the post-processing bias correction step: ME = 3.81) was smaller than the initial uncertainty (from the quality control step: ME = 4.28); in the UDM results, the initial uncertainty (UDM = 5.33) was greater than the final uncertainty (UDM = 4.75). However, the uncertainty of the radar-rainfall estimation step was greater, because an unsuitable relation was used. Furthermore, the study also determined that selecting the appropriate method for each stage gradually reduces the uncertainty at each step. Therefore, the results indicate that this new approach can quantify the uncertainty in the radar-rainfall estimation process and contribute to more accurate estimates of radar rainfall.
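As a rough illustration of entropy-based uncertainty quantification (not the paper's MEM implementation: the error values, bin width, and histogram-entropy formulation here are assumptions for the sketch), the uncertainty of a processing step can be expressed as the Shannon entropy of its error distribution, so that a narrower error spread after bias correction yields a smaller entropy:

```python
import math
from collections import Counter

def shannon_entropy(errors, bin_width=1.0):
    """Entropy (bits) of the binned error distribution of one step."""
    bins = Counter(math.floor(e / bin_width) for e in errors)
    n = len(errors)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# Hypothetical residuals after two steps: bias correction narrows the spread.
errors_qc   = [-4.2, -1.5, 0.3, 2.8, 5.1, -3.3, 1.9, 4.4]   # after quality control
errors_corr = [-1.1, -0.4, 0.2, 0.9, 1.3, -0.7, 0.5, 1.0]   # after bias correction

h0, h1 = shannon_entropy(errors_qc), shannon_entropy(errors_corr)
print(h1 < h0)  # the corrected step carries less entropy-based uncertainty
```

Tracking such a scalar per step is what allows the paper to compare initial vs. final uncertainty and the relative contribution of each stage.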

A Study on the Five Senses Information Processing for HCI (HCI를 위한 오감정보처리에 관한 연구)

  • Lee, Hyeon Gu;Kim, Dong Kyu
    • Journal of Korea Society of Digital Industry and Information Management / v.5 no.2 / pp.77-85 / 2009
  • In this paper, we propose data formats for smell, taste, and touch, alongside speech and vision, that can be transmitted, and we implement a floral scent detection and recognition system. We provide a representation method for smell, taste, and touch data. The proposed floral scent recognition system consists of three modules: a floral scent acquisition module using a Metal Oxide Semiconductor (MOS) sensor array, an entropy-based floral scent detection module, and a floral scent recognition module using correlation coefficients. The proposed system calculates, for each individual sensor, the correlation coefficient between the feature vector (16 sensors), taken from the floral scent input point until the stable region, and each of 12 reference models. The system then selects the floral scent with the maximum similarity, i.e., the maximum average of the individual correlation coefficients. To evaluate the correlation-coefficient-based system, we also implemented floral scent recognition systems using K-NN with PCA and LDA, which are generally used in conventional electronic noses. In the experiments, the proposed system achieves an average recognition rate of approximately 95.7%.
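The recognition step described above can be sketched as follows. The sensor curves and reference models are invented toy data (two sensors instead of 16, two scents instead of 12), and per-sensor Pearson correlation averaged over sensors stands in for the paper's similarity computation:

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation coefficient between two response curves."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

def recognize(sample, references):
    """sample: per-sensor response curves; references: {scent: same shape}.
    Picks the scent whose reference correlates best, averaged over sensors."""
    scores = {name: mean(pearson(s, r) for s, r in zip(sample, ref))
              for name, ref in references.items()}
    return max(scores, key=scores.get)

refs = {"rose": [[1, 3, 5], [2, 2, 4]],      # 2 sensors, 3 time points each
        "lily": [[5, 3, 1], [4, 2, 2]]}
print(recognize([[1.1, 2.9, 5.2], [2.0, 2.1, 3.9]], refs))  # → rose
```

Correlation against reference curves is cheap and scale-invariant, which suits drifting MOS sensor baselines better than raw distance measures.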

Natural language processing techniques for bioinformatics

  • Tsujii, Jun-ichi
    • Proceedings of the Korean Society for Bioinformatics Conference / 2003.10a / pp.3-3 / 2003
  • With the biomedical literature expanding so rapidly, there is an urgent need to discover and organize knowledge extracted from texts. Although factual databases contain crucial information, the overwhelming amount of new knowledge remains in textual form (e.g., MEDLINE). In addition, new terms are constantly coined, as are the relationships linking new genes, drugs, proteins, etc. As the biomedical literature grows, more systems are applying a variety of methods to automate the process of knowledge acquisition and management. In my talk, I focus on the GENIA project of our group at the University of Tokyo, the objective of which is to construct an information extraction system for protein-protein interactions from MEDLINE abstracts. The talk covers (1) techniques we use for named entity recognition, namely (1-a) SOHMM (self-organized HMM), (1-b) a maximum entropy model, and (1-c) a lexicon-based recognizer; (2) treatment of term variants and acronym finding; (3) event extraction using a full parser; and (4) linguistic resources for text mining (the GENIA corpus), including (4-a) semantic tags, (4-b) structural annotations, (4-c) co-reference tags, and (4-d) the GENIA ontology. I will also talk about a possible extension of our work that links the findings of molecular biology with clinical findings, and claim that text-based or concept-based biology would be a viable alternative to systems biology, which tends to emphasize the role of simulation models in bioinformatics.


A Study on the Performance of a Radar Clutter Suppression Algorithm Based on the Adaptive Clutter Prewhitening Filter and Doppler Filter Bank (Adaptive Clutter Prewhitening Filter와 Doppler Filter Bank를 이용한 레이다 Clutter 제거 알고리듬의 성능에 관한 연구)

  • Kim, Yong-Ho;Lee, Hwang-Soo;Un, Chong-Kwan;Lee, Won-Kil
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.6 / pp.140-146 / 1989
  • In many situations, radar targets are embedded in a clutter environment, and clutter rejection is required. Clutter consists of unwanted radar echoes that may arise from reflections off the ground and from weather disturbances; its statistical properties vary with range and azimuth as well as time, so adaptive signal processing is required. In this paper, a clutter suppression algorithm based on a clutter whitening filter (WF) and a Doppler filter bank (DFB) is described, which provides improved performance compared with the conventional non-adaptive clutter suppression algorithm, a cascade of a moving target indicator (MTI) and a DFB. The clutter whitening filter algorithm is based on Burg's maximum entropy method.
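Burg's maximum entropy method fits an autoregressive whitening filter by choosing, at each order, the reflection coefficient that minimizes the combined forward and backward prediction error. The following is a minimal textbook sketch of the Burg recursion, not the paper's radar implementation; the sinusoid test signal is an assumption chosen because it is exactly predictable by an order-2 filter:

```python
import math

def burg(x, order):
    """Burg's method: whitening-filter taps a[0..p] (a[0] = 1) such that
    e[n] = sum(a[i] * x[n-i]) minimizes forward + backward prediction error."""
    n_samp = len(x)
    f = list(x)                       # forward prediction errors
    b = list(x)                       # backward prediction errors
    a = [1.0]
    for m in range(1, order + 1):
        num = -2.0 * sum(f[n] * b[n - 1] for n in range(m, n_samp))
        den = sum(f[n] ** 2 + b[n - 1] ** 2 for n in range(m, n_samp))
        k = num / den                 # reflection coefficient, |k| <= 1
        a = [1.0] + [a[i] + k * a[m - i] for i in range(1, m)] + [k]
        # update both error sequences with the new lattice stage
        f, b = ([f[n] + k * b[n - 1] if n >= m else f[n] for n in range(n_samp)],
                [b[n - 1] + k * f[n] if n >= m else b[n] for n in range(n_samp)])
    return a

# A pure sinusoid (narrowband "clutter") is exactly predictable at order 2,
# so the Burg whitening filter should cancel it almost completely.
x = [math.cos(2 * math.pi * 0.1 * n) for n in range(64)]
a = burg(x, 2)
resid = [sum(a[i] * x[n - i] for i in range(3)) for n in range(2, 64)]
print(sum(r * r for r in resid) < 1e-3 * sum(v * v for v in x))  # → True
```

Prewhitening clutter this way flattens its spectrum before the Doppler filter bank, which is what lets the cascade outperform a fixed MTI whose notch cannot track changing clutter statistics.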
