• Title/Summary/Keyword: numeric data

Analysis of Load Duration Curve Using Long Time Flow Measurement Data of Kyeongancheon (장기간 유량측정 자료를 이용한 경안천의 부하지속곡선 특성)

  • Noh, Changwan; Kwon, Phil-Sang; Jung, Woo-Seok; Lee, Myung-Gu; Cho, Yong-Chul; Yu, Soonju
    • Journal of Environmental Impact Assessment, v.28 no.1, pp.35-48, 2019
  • Long-term flow measurement and water quality data are needed to determine the target and allowable loads for each basin under the Total Water Pollution Load Management System (TWPLMS). The Load Duration Curve (LDC) relates flow data to water quality and is used to evaluate pollutant load characteristics under different flow conditions. The LDC of Kyeongancheon was created from a Flow Duration Curve (FDC), derived from flow data measured at 8-day intervals from 2006 to 2015, combined with the numeric water quality target for Kyeongancheon. The results show that point source pollutants need to be managed, because the numeric water quality target is not satisfied under low-flow conditions. The target was also exceeded for four months of the year, from March to June, so continuous and systematic watershed management is required to satisfy it.
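
The LDC construction described above reduces to a simple calculation: rank the measured flows into a flow duration curve and multiply each flow by the numeric water quality target. A minimal Python sketch of that step is given below; the 86.4 factor converts m3/s x mg/L to kg/day, while the function name and inputs are illustrative assumptions, not the authors' code.

```python
import numpy as np

def load_duration_curve(flows_cms, target_mg_per_l):
    """Sketch of an LDC: rank measured flows (m3/s) into a flow duration
    curve and convert each flow into the allowable load (kg/day) at that
    exceedance probability by multiplying with the water quality target."""
    flows = np.sort(np.asarray(flows_cms, dtype=float))[::-1]        # descending
    exceedance = np.arange(1, len(flows) + 1) / (len(flows) + 1) * 100.0
    allowable = flows * target_mg_per_l * 86.4                       # m3/s * mg/L -> kg/day
    return exceedance, allowable

# Observed loads (flow x measured concentration x 86.4) plotted against this
# curve show which flow conditions violate the target.
```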

Pollutant Load Characterization with Flow Conditions in Heukcheon Stream (흑천의 유량조건별 오염부하량 특성)

  • Choi, Kyungwan; Lee, Sangwon; Noh, Changwan; Lee, Jaekwan; Lee, Youngjoon
    • Journal of Korean Society of Water and Wastewater, v.29 no.5, pp.551-557, 2015
  • The TMDL (Total Maximum Daily Load) has been used to determine water quality targets. The LDC (Load Duration Curve), which is based on hydrology, has been used to support water quality assessments and the development of TMDLs, and FDC (Flow Duration Curve) analysis can be used as a general indicator of hydrologic condition. The LDC is developed by multiplying the FDC by the numeric water quality target for the pollutant of concern. This study therefore created LDCs from stream flow data and the numeric water quality targets for BOD and T-P in order to evaluate pollutant load characteristics by flow condition in Heukcheon stream. BOD and T-P need to be managed under high-flow conditions, and neither satisfied the numeric water quality target in spring or summer. To meet the numeric water quality target in Heukcheon stream, management of non-point source pollutants is much more important than control of point source pollutants.
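
Assessing which flow condition drives an exceedance, as in the Heukcheon study above, is usually done by splitting the FDC into conventional exceedance-probability zones (high, moist, mid-range, dry, low flows). The sketch below uses the customary 10/40/60/90% zone boundaries; the boundaries, names, and helper functions are conventional assumptions rather than details taken from the paper.

```python
# Conventional LDC flow zones by exceedance probability (%).
ZONES = [(0, 10, "high"), (10, 40, "moist"), (40, 60, "mid-range"),
         (60, 90, "dry"), (90, 100, "low")]

def flow_zone(exceedance_pct):
    """Map an exceedance probability (%) to its flow-condition zone."""
    for lo, hi, name in ZONES:
        if lo <= exceedance_pct <= hi:
            return name
    raise ValueError("exceedance must be between 0 and 100")

def daily_loads(flow_cms, conc_mg_per_l, target_mg_per_l):
    """Observed and allowable daily loads (kg/day) at the same flow,
    using the m3/s * mg/L -> kg/day factor of 86.4."""
    observed = flow_cms * conc_mg_per_l * 86.4
    allowable = flow_cms * target_mg_per_l * 86.4
    return observed, allowable
```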

Fuzzy Clustering of Fuzzy Data using a Dissimilarity Measure (비유사도 척도를 이용한 퍼지 데이터에 대한 퍼지 클러스터링)

  • Lee, Geon-Myeong
    • Journal of KIISE: Software and Applications, v.26 no.9, pp.1114-1124, 1999
  • The objective of clustering is to group a set of data into clusters so that the similarity between data in different clusters is minimized and the similarity between data in the same cluster is maximized. Data describing real-world objects often contain not only numeric attributes but also qualitative non-numeric attributes, and their values are frequently given as vague values rather than exact ones because of observation error, uncertainty, and subjective judgement. This paper proposes a dissimilarity measure for data that contain numeric and non-numeric attributes whose vague values are expressed as fuzzy values, introduces a fuzzy clustering method for such data using the proposed measure, and presents experimental results showing the applicability of the proposed clustering method and dissimilarity measure.
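
The paper's exact dissimilarity measure is not reproduced here, but the general idea of combining fuzzy numeric attributes with fuzzy non-numeric attributes can be sketched as below: triangular fuzzy numbers are compared point-wise, fuzzy label sets with a fuzzy Jaccard-style overlap, and the two parts are mixed with a weight. The attribute names, the specific measures, and the weighting are illustrative assumptions.

```python
import numpy as np

def tri_dissim(a, b):
    """Dissimilarity between two triangular fuzzy numbers (l, m, r):
    mean absolute difference of the three defining points."""
    return float(np.mean(np.abs(np.asarray(a, float) - np.asarray(b, float))))

def label_dissim(a, b):
    """Dissimilarity between two fuzzy sets over a label universe, given as
    membership dicts, using 1 - (sum of min / sum of max)."""
    labels = set(a) | set(b)
    inter = sum(min(a.get(l, 0.0), b.get(l, 0.0)) for l in labels)
    union = sum(max(a.get(l, 0.0), b.get(l, 0.0)) for l in labels)
    return 1.0 - inter / union if union else 0.0

def dissimilarity(x, y, w_num=0.5):
    """Weighted combination of the numeric and non-numeric parts
    (attribute names 'length' and 'colour' are hypothetical)."""
    return w_num * tri_dissim(x["length"], y["length"]) + \
           (1.0 - w_num) * label_dissim(x["colour"], y["colour"])

x = {"length": (4.5, 5.0, 5.5), "colour": {"red": 0.8, "orange": 0.3}}
y = {"length": (6.0, 6.5, 7.0), "colour": {"red": 0.4, "yellow": 0.6}}
print(dissimilarity(x, y))
```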

Symbolic-numeric Estimation of Parameters in Biochemical Models by Quantifier Elimination

  • Orii, Shigeo; Anai, Hirokazu; Horimoto, Katsuhisa
    • Proceedings of the Korean Society for Bioinformatics Conference, 2005.09a, pp.272-277, 2005
  • We introduce a new approach to optimizing the parameters in biological kinetic models by quantifier elimination (QE), in combination with numerical simulation methods. The optimization method was applied to a model of the inhibition kinetics of HIV proteinase with ten parameters and nine variables, and it attained a goodness of fit to 300 points of observed data of the same magnitude as that obtained by previous optimization methods, remarkably while using only one or two data points. Furthermore, the use of QE demonstrated the feasibility of the present method for elucidating the behavior of the parameters in the analyzed model. The present symbolic-numeric method is therefore a powerful approach for revealing the fundamental mechanisms of kinetic models, in addition to being a computational engine.
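
The symbolic QE step itself would be carried out in a computer algebra tool and is not sketched here; only the numerical half of such a symbolic-numeric workflow, fitting an inhibition-kinetics rate law to a handful of observations, is illustrated below. The rate law, parameter names, and the synthetic data points are hypothetical and far simpler than the ten-parameter HIV proteinase model in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibited_rate(X, vmax, km, ki):
    """Competitive-inhibition rate law v = vmax*S / (km*(1 + I/ki) + S)."""
    S, I = X
    return vmax * S / (km * (1.0 + I / ki) + S)

# Hypothetical observations: substrate conc., inhibitor conc., measured rate.
S = np.array([1.0, 2.0, 5.0, 10.0, 1.0, 2.0, 5.0, 10.0])
I = np.array([0.0, 0.0, 0.0,  0.0, 1.0, 1.0, 1.0,  1.0])
v = np.array([0.45, 0.63, 0.81, 0.90, 0.30, 0.47, 0.69, 0.82])

params, _ = curve_fit(inhibited_rate, (S, I), v, p0=[1.0, 1.0, 1.0])
print(dict(zip(["vmax", "km", "ki"], params)))
```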

Cluster Analysis with Balancing Weight on Mixed-type Data

  • Chae, Seong-San; Kim, Jong-Min; Yang, Wan-Youn
    • Communications for Statistical Applications and Methods, v.13 no.3, pp.719-732, 2006
  • A set of clustering algorithms with a proper weight in the distance formulation, extended to mixed numeric and multiple binary values, is presented. Simple matching and Jaccard coefficients are used to measure the similarity between objects on the multiple binary attributes, and the similarities are converted to dissimilarities between the i-th and j-th objects. The performance of the clustering algorithms with a balancing weight on different similarity measures is demonstrated. Our experiments show that clustering algorithms with a properly chosen weight give a competitive recovery level when a data set with mixed numeric and multiple binary attributes is clustered.
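
A minimal sketch of this kind of weighted mixed-type dissimilarity follows: a Euclidean distance on the numeric part, a simple matching or Jaccard similarity on the multiple binary part converted to a dissimilarity, and a balancing weight that trades the two off. The function name and the specific weighting form are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def mixed_dissimilarity(x_num, x_bin, y_num, y_bin, weight=0.5, use_jaccard=True):
    """Dissimilarity between two objects with numeric and multiple binary
    attributes; `weight` balances the numeric and binary contributions."""
    x_num, y_num = np.asarray(x_num, float), np.asarray(y_num, float)
    x_bin, y_bin = np.asarray(x_bin, bool), np.asarray(y_bin, bool)
    d_num = np.linalg.norm(x_num - y_num)            # Euclidean on numeric part
    if use_jaccard:
        either = np.sum(x_bin | y_bin)
        sim = np.sum(x_bin & y_bin) / either if either else 1.0   # Jaccard coefficient
    else:
        sim = np.sum(x_bin == y_bin) / len(x_bin)                 # simple matching coefficient
    return weight * d_num + (1.0 - weight) * (1.0 - sim)

# Example: two objects with two numeric and four binary attributes.
print(mixed_dissimilarity([1.2, 0.4], [1, 0, 1, 1], [0.9, 0.7], [1, 1, 1, 0]))
```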

Effects of stimuli and reaction methods on error rate and time of choice reaction (자극의 종류와 반응방법이 선택반응에 미치는 영향에 관한 연구)

  • 장성록
    • Journal of the Ergonomics Society of Korea, v.13 no.1, pp.27-35, 1994
  • Automation and mechanization of work require people to operate machines and monitor the state of operation, and in the course of such work they are prone to accidents caused by carelessness. To reduce such accidents, workers can practice "TOUCH & CALL", in which the dangerous parts are pointed at and confirmed aloud at every process before a task is performed. The objectives of this study are to examine the effects of S-R compatibility and to show quantitatively the efficiency of TOUCH & CALL. The results show that: 1. Reaction time when pointing with a finger and shouting is slightly longer (by 0.138-0.279 sec) than when responding only visually; however, the error rate drops to 1/3.3-1/4.2 of the visual-only case. This provides a quantitative estimate of the benefit of the multiple feedback in TOUCH & CALL. 2. In terms of the stimulus-response relation, the numeric stimulus-numeric response condition shows a lower error rate (0.033%-0.133%) and a shorter reaction time (0.556-0.835 sec) than any other stimulus-response relation. These data suggest that arranging the order of stimulus and response in accordance with experiential knowledge and conceptual compatibility can reduce the error rate considerably.

Discretization of Continuous-Valued Attributes for Classification Learning (분류학습을 위한 연속 애트리뷰트의 이산화 방법에 관한 연구)

  • Lee, Chang-Hwan
    • The Transactions of the Korea Information Processing Society, v.4 no.6, pp.1541-1549, 1997
  • Many classification algorithms require that training examples contain only discrete values. In order to use these algorithms when some attributes have continuous numeric values, the numeric attributes must be converted into discrete ones. This paper describes a new way of discretizing numeric values using information theory. Our method is context-sensitive in the sense that it takes into account the value of the target attribute. The amount of information each interval gives about the target attribute is measured using the Hellinger divergence, and the interval boundaries are chosen so that each interval contains as equal an amount of information as possible. To compare our discretization method with current discretization methods, several popular classification data sets were selected for the experiments. We use the back-propagation algorithm and ID3 as classification tools to compare the accuracy of our discretization method with that of other methods.
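
A rough sketch of the equal-information idea follows: assign each sorted example an information contribution, here measured as the Hellinger divergence between its class indicator and the global class distribution, then cut the attribute where the cumulative contribution reaches equal shares. This is only a coarse approximation of the paper's method under an assumed per-example formulation; the function names are illustrative.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def equal_information_boundaries(x, y, n_intervals=4):
    """Cut numeric attribute x so each interval carries a roughly equal share
    of the total divergence between local and global class distributions of
    the target attribute y (a simplified stand-in for the paper's criterion)."""
    order = np.argsort(x)
    x, y = np.asarray(x)[order], np.asarray(y)[order]
    classes = np.unique(y)
    global_dist = np.array([(y == c).mean() for c in classes])
    per_example = np.array([hellinger((yi == classes).astype(float), global_dist)
                            for yi in y])
    cum = np.cumsum(per_example)
    targets = np.linspace(0.0, cum[-1], n_intervals + 1)[1:-1]
    return [x[i] for i in np.searchsorted(cum, targets)]   # boundary values
```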

Incremental Generation of A Decision Tree Using Global Discretization For Large Data (대용량 데이터를 위한 전역적 범주화를 이용한 결정 트리의 순차적 생성)

  • Han, Kyong-Sik; Lee, Soo-Won
    • The KIPS Transactions: Part B, v.12B no.4 s.100, pp.487-498, 2005
  • Recently, attention has focused on decision tree algorithms that can handle large datasets. However, because most of these algorithms process data in batch mode, the tree has to be rebuilt from scratch whenever new data are added. A more efficient way to reduce the cost of rebuilding is to build the tree incrementally. Representative incremental tree construction algorithms are BOAT and ITI, and most of them use a local discretization method to handle numeric data. However, because discretization requires sorted numeric data, when processing large datasets a global discretization method that sorts all data only once is more suitable than a local discretization method that sorts at every node. This paper proposes an incremental tree construction method that efficiently rebuilds the tree using a global discretization method to handle numeric data. When new data are added, the categories influenced by the data must be recreated, and the tree structure must then be changed in accordance with the category changes. The proposed method extracts sample points and performs discretization on these sample points to recreate categories efficiently, and it uses confidence intervals and a tree restructuring method to adjust the tree structure to category changes. An experiment using the People database was conducted to compare the proposed method with an existing method that uses local discretization.
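
The contrast between local and global discretization can be sketched as follows: a global method sorts (a sample of) the whole attribute once and reuses the resulting category boundaries at every node, and the tree is restructured only when newly added data shift the boundaries beyond some tolerance. The sample-quantile boundaries and the tolerance test below are simplified stand-ins for the paper's sample-point extraction and confidence-interval test.

```python
import numpy as np

def global_discretize(values, n_bins=10, sample_size=1000, seed=0):
    """Global discretization for a numeric attribute: derive category
    boundaries once from a sample, instead of re-sorting at every node."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, float)
    if len(values) > sample_size:
        values = rng.choice(values, size=sample_size, replace=False)
    qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]       # interior quantiles
    return np.quantile(values, qs)                      # category boundaries

def categories_changed(old_bounds, new_bounds, tol):
    """Crude stand-in for the confidence-interval test: restructure the tree
    only if some boundary moved by more than `tol`."""
    return bool(np.any(np.abs(np.asarray(old_bounds) - np.asarray(new_bounds)) > tol))
```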

An efficient search of binary tree for huffman decoding based on numeric interpretation of codewords

  • Kim, Byeong-Il; Chang, Tae-Gyu; Jeong, Jong-Hoon
    • Proceedings of the IEEK Conference, 2002.07a, pp.133-136, 2002
  • This paper presents a new Huffman decoding method that gives a significant improvement in processing efficiency, based on the reconstruction of an efficient one-dimensional array data structure that incorporates a numeric interpretation of the accrued codewords in the binary tree. In the proposed search method, the branching address is obtained directly by an arithmetic operation on the incoming digit value, eliminating the compare instruction needed in a binary tree search. The proposed method gives a 30% improvement in processing efficiency, and the memory space of the reconstructed Huffman table is reduced to one third of that of an ordinary 'compare and jump' binary tree. Experimental results with six MPEG-2 AAC test files also show about a 198% performance improvement over the widely used conventional sequential search method.
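
The array-based idea can be sketched as below: the binary tree is flattened so that the two children of an internal node sit in adjacent slots, the node stores the index of its 0-child, and decoding advances with next = table[node] + bit, so the branch address comes from arithmetic rather than a compare-and-jump. The table layout and leaf encoding here are one possible choice, not necessarily the paper's exact structure.

```python
def build_flat_tree(codes):
    """Flatten a Huffman codebook {symbol: bitstring} into a 1-D array.
    An internal node stores the index of its 0-child (the 1-child sits in
    the adjacent slot); a leaf stores -(character code)-1 as a marker."""
    root = {}
    for sym, bits in codes.items():                # build a nested-dict tree
        node = root
        for b in bits[:-1]:
            node = node.setdefault(b, {})
        node[bits[-1]] = sym                       # leaf
    table = [0]                                    # slot 0: root's child pointer
    def flatten(node, idx):
        base = len(table)
        table.extend([0, 0])                       # adjacent slots for 0- and 1-child
        table[idx] = base                          # arithmetic branch: next = table[i] + bit
        for bit in ('0', '1'):                     # a proper Huffman tree has both children
            child, slot = node[bit], base + int(bit)
            if isinstance(child, dict):
                flatten(child, slot)
            else:
                table[slot] = -(ord(child) + 1)    # negative value marks a leaf
    flatten(root, 0)
    return table

def decode(table, bits):
    out, i = [], 0
    for b in bits:
        i = table[i] + int(b)                      # branch address by arithmetic, no compares
        if table[i] < 0:                           # leaf reached: emit symbol, restart at root
            out.append(chr(-table[i] - 1))
            i = 0
    return ''.join(out)

codes = {'a': '0', 'b': '10', 'c': '11'}           # toy Huffman codebook
print(decode(build_flat_tree(codes), '01011'))     # -> 'abc'
```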

Intelligent adaptive controller for a process control

  • Kim, Jin-Hwan; Lee, Bong-Guk; Huh, Uk-Youl
    • 제어로봇시스템학회: 학술대회논문집, 1993.10b, pp.378-384, 1993
  • In this paper, an intelligent adaptive controller is proposed for processes with unmodelled dynamics. The intelligent adaptive controller consists of a numeric adaptive controller and an intelligent tuning part. A continuous-time scheme is used for the numeric adaptive controller to avoid the problems that occur in discrete-time schemes, and the adaptive controller is applied to a process with time delay. It is an implicit adaptive algorithm based on GMV using an emulator. The tuning part changes the design parameters of the control algorithm; it is a multilayer neural network trained with robustness analysis data. The proposed method can improve the robustness of the adaptive control system because the design parameters are tuned according to the operating point of the process. Simulations show the robustness of the intelligent adaptive controller. Finally, the proposed algorithms are implemented on an electric furnace temperature control system, and experiments demonstrate the effectiveness of the proposed algorithm.
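
The numeric adaptive part of such a controller can be illustrated with a toy implicit self-tuning regulator: plant parameters are estimated on-line with recursive least squares, and a minimum-variance style control law uses the estimates directly. This is a discrete-time, much-reduced stand-in for the paper's continuous GMV scheme with an emulator, the neural tuning part is omitted, and all names and values are illustrative.

```python
import numpy as np

def self_tuning_regulator(plant_a=0.9, plant_b=0.5, setpoint=1.0, steps=200):
    """Toy implicit self-tuning regulator: the first-order plant
    y[k+1] = a*y[k] + b*u[k] + noise is identified on-line with recursive
    least squares, and the control law uses the running estimates."""
    rng = np.random.default_rng(0)
    theta = np.array([0.0, 0.1])             # [a_hat, b_hat], crude initial guess
    P = np.eye(2) * 100.0                    # RLS covariance
    y, u = 0.0, 0.0
    for _ in range(steps):
        phi = np.array([y, u])               # regressor from the previous step
        y = plant_a * y + plant_b * u + rng.normal(0.0, 0.01)   # simulated plant
        K = P @ phi / (1.0 + phi @ P @ phi)  # RLS gain
        theta = theta + K * (y - phi @ theta)
        P = P - np.outer(K, phi @ P)
        a_hat, b_hat = theta
        u = (setpoint - a_hat * y) / max(abs(b_hat), 1e-3)      # control update
    return theta, y

print(self_tuning_regulator())               # y should settle near the setpoint
```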
