• Title/Summary/Keyword: adaptive model

A Development of Real Time Artificial Intelligence Warning System Linked Discharge and Water Quality (I) Application of Discharge-Water Quality Forecasting Model (유량과 수질을 연계한 실시간 인공지능 경보시스템 개발 (I) 유량-수질 예측모형의 적용)

  • Yeon, In-Sung;Ahn, Sang-Jin
    • Journal of Korea Water Resources Association / v.38 no.7 s.156 / pp.565-574 / 2005
  • This study uses water quality data measured at the Pyeongchanggang real-time monitoring station in the Namhan River. Water quality characteristics were analyzed separately for rainy and non-rainy periods. TOC (Total Organic Carbon) during rainy periods correlates with discharge and shows high mean, maximum, and standard deviation values, while DO (Dissolved Oxygen) values during rainy periods are lower than those of non-rainy periods. Reasonable input data were selected for water quality forecasting models constructed with neural network and neuro-fuzzy techniques, and the models were applied. The LMNN (Levenberg-Marquardt Neural Network), MDNN (MoDular Neural Network), and ANFIS (Adaptive Neuro-Fuzzy Inference System) models achieved the highest overall accuracy on TOC data, and for DO forecasting the LMNN and MDNN models gave better results than ANFIS. The MDNN model shows the lowest estimation error when daily time, a qualitative variable, is trained together with the quantitative data. For real-time management, discharge and water quality are best observed at the same point and at the same time, but some real-time water quality monitoring stations, including Pyeongchanggang, are far from the T/M water stage. Discharge at Pyeongchanggang station was therefore calculated by a runoff neural network model developed for this purpose, and the water quality forecasting model was linked to this runoff forecasting model. The linked model improves water quality forecasting.
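As a rough illustration of the discharge-linked forecasting setup described above, the sketch below trains a small neural network on lagged discharge and TOC values. All data are synthetic, and scikit-learn's L-BFGS solver stands in for the Levenberg-Marquardt training of the paper's LMNN model; none of the paper's calibrated models are reproduced.

```python
# Minimal sketch: forecast TOC from lagged discharge and TOC (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
discharge = rng.gamma(2.0, 50.0, n)             # synthetic daily discharge (m^3/s)
toc = 0.02 * discharge + rng.normal(0, 0.5, n)  # synthetic TOC correlated with discharge

# Lagged inputs: discharge and TOC at t-1 and t-2 predict TOC at t.
X = np.column_stack([discharge[1:-1], discharge[:-2], toc[1:-1], toc[:-2]])
y = toc[2:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

# L-BFGS here is a stand-in for the Levenberg-Marquardt training of LMNN.
model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```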

An Artificial Visual Attention Model based on Opponent Process Theory for Salient Region Segmentation (돌출영역 분할을 위한 대립과정이론 기반의 인공시각집중모델)

  • Jeong, Kiseon;Hong, Changpyo;Park, Dong Sun
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.7 / pp.157-168 / 2014
  • In this paper we propose a novel artificial visual attention model capable of automatic detection and segmentation of salient regions in natural images. The proposed model is based on human visual perception in biological vision and makes three main contributions. First, we propose a novel artificial visual attention framework based on the opponent process theory, using intensity and color features. Second, an entropy filter is designed to perceive salient regions from the amount of information in the intensity and color feature channels; the entropy filter detects and segments salient regions with high segmentation accuracy and precision. Lastly, we propose an adaptive combination method to generate the final saliency map: it estimates scores for the intensity and color conspicuity maps from each perception model and combines the maps with weights derived from those scores. In an ROC analysis of the saliency maps, the AUC of the proposed model was approximately 0.9256, about a 15% improvement over the 0.7824 of previous state-of-the-art models. In the evaluation of salient region segmentation, the F-beta of the proposed model was approximately 0.7325, about a 22% improvement over previous state-of-the-art models.
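A minimal sketch of the entropy-filter idea underlying the second contribution: local entropy of a feature channel highlights information-rich, potentially salient regions. The window size, bin count, and random input are illustrative assumptions, not the paper's parameters.

```python
# Per-pixel local entropy of an intensity channel as a crude saliency cue.
import numpy as np
from scipy.ndimage import uniform_filter

def local_entropy(channel: np.ndarray, bins: int = 16, win: int = 9) -> np.ndarray:
    """Approximate per-pixel entropy over a win x win neighborhood."""
    # Quantize the channel, box-filter each bin's indicator image to get
    # local histograms, then compute entropy from those local frequencies.
    q = np.clip((channel * bins).astype(int), 0, bins - 1)
    probs = np.stack([uniform_filter((q == b).astype(float), size=win)
                      for b in range(bins)], axis=-1)
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log2(probs)).sum(axis=-1)

img = np.random.rand(64, 64)   # stand-in for an intensity channel in [0, 1]
saliency = local_entropy(img)
print(saliency.shape, float(saliency.max()))
```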

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.95-108 / 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to the purchasing index of each customer. In the e-business era, many companies gather customers' demographic and transactional information such as age, gender, purchasing date, and product category, and use this information to predict customers' preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. It thus keeps the number of predictive models manageable and provides more data for customers who do not have enough of their own to build a good predictive model, by borrowing the data of similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters customers who already have a considerable amount of data together with customers who have only a little, which increases computational cost unnecessarily without significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than the Customer-Segmentation method because each predictive model is built using only the data of the individual customer. It not only provides highly personalized services but also builds relatively simple and less costly models. However, the 1-to-1 method does not produce a good predictive model when a customer has only a few data points; if a customer has insufficient transactional data, its performance deteriorates. To overcome the limitations of these two conventional methods, we suggest a new method, called the Intelligent Customer Segmentation method, that provides adaptive personalized services according to the customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for customers who purchase less are based on data from more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not at all. The main idea is to apply clustering only when the number of transactional data points for the target customer is below a predefined criterion size. To find this criterion, we suggest an algorithm called sliding window correlation analysis, which finds the transactional data size at which the performance of the 1-to-1 method drops sharply due to data sparsity. After finding this criterion size, we apply the conventional 1-to-1 method to customers who have more data than the criterion and apply clustering to those who have less, until each has at least the criterion amount of data for model building. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict customers' purchasing amounts and purchasing categories.
We use two data mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE) to predict the two dependent variables mentioned above. The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and matches the performance of the Customer-Segmentation method at much lower computational cost.
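The adaptive core of the method (individual models for data-rich customers, cluster-pooled models for sparse ones) can be sketched as below. The data, features, and criterion value are illustrative assumptions; the paper derives the criterion from its sliding window correlation analysis.

```python
# Sketch: 1-to-1 models above the criterion, cluster-pooled models below it.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

criterion = 30  # illustrative; found via sliding window correlation analysis

def build_models(histories, features):
    """histories: {cust_id: (X, y)}; features: {cust_id: profile vector}."""
    sparse = [c for c, (X, _) in histories.items() if len(X) < criterion]
    models = {c: LinearRegression().fit(*histories[c])
              for c in histories if c not in sparse}
    if sparse:
        # Cluster only the sparse customers and pool their data per cluster.
        profiles = np.array([features[c] for c in sparse])
        labels = KMeans(n_clusters=min(3, len(sparse)), n_init=10,
                        random_state=0).fit_predict(profiles)
        for k in set(labels):
            members = [c for c, l in zip(sparse, labels) if l == k]
            X = np.vstack([histories[c][0] for c in members])
            y = np.concatenate([histories[c][1] for c in members])
            pooled = LinearRegression().fit(X, y)
            for c in members:
                models[c] = pooled
    return models

rng = np.random.default_rng(0)
hist = {c: (rng.normal(size=(n, 2)), rng.normal(size=n))
        for c, n in [("vip", 60), ("a", 8), ("b", 12), ("c", 5)]}
feats = {c: rng.normal(size=4) for c in hist}
print({c: type(m).__name__ for c, m in build_models(hist, feats).items()})
```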

A Policy-Based Meta-Planning for General Task Management for Multi-Domain Services (다중 도메인 서비스를 위한 정책 모델 주도 메타-플래닝 기반 범용적 작업관리)

  • Choi, Byunggi;Yu, Insik;Lee, Jaeho
    • KIPS Transactions on Software and Data Engineering / v.8 no.12 / pp.499-506 / 2019
  • An intelligent robot should decide its behavior according to dynamic changes in the environment and the user's requirements, evaluating its options to choose the best one for the current situation. Many intelligent robot systems that use the Procedural Reasoning System (PRS) accomplish such task management by defining priority functions in the task model and evaluating the priority functions of the tasks applicable in the current situation. The priority functions, however, are defined locally inside each plan, which limits them for multi-domain services because global contexts for overall prioritization are hard to express in local priority functions. Furthermore, since the prioritization functions are not defined as explicit modules, reusing or extending them for general contexts is limited. To remove these limitations, we propose a policy-based meta-planning approach to general task management for multi-domain services, which makes it possible to define the utility of a task explicitly in the meta-planning process and thus to evaluate task priorities for general contexts by combining modular priority functions. An ontological specification of the model also enhances the scalability of the policy model. In the experiments, the adaptive behavior of a robot under the policy model is confirmed by observing that appropriate tasks are selected in dynamic service environments.
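A hedged sketch of the meta-planning idea: task utilities are computed by explicit, reusable policy modules rather than by priority functions buried inside each plan. The policy names, weights, and task attributes are illustrative assumptions.

```python
# Modular policies map (task, context) -> utility; meta-planning combines them.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Task:
    name: str
    attrs: Dict[str, float] = field(default_factory=dict)

Policy = Callable[[Task, Dict], float]

def urgency_policy(task: Task, ctx: Dict) -> float:
    return task.attrs.get("deadline_pressure", 0.0)

def battery_policy(task: Task, ctx: Dict) -> float:
    # Penalize energy-hungry tasks more as the battery drains.
    return -task.attrs.get("energy_cost", 0.0) * (1.0 - ctx["battery"])

def select_task(tasks: List[Task], policies: Dict[Policy, float], ctx: Dict) -> Task:
    """Meta-planning step: score every applicable task and pick the best."""
    def utility(t: Task) -> float:
        return sum(w * p(t, ctx) for p, w in policies.items())
    return max(tasks, key=utility)

tasks = [Task("deliver", {"deadline_pressure": 0.9, "energy_cost": 0.4}),
         Task("recharge", {"deadline_pressure": 0.1, "energy_cost": -0.5})]
best = select_task(tasks, {urgency_policy: 1.0, battery_policy: 2.0},
                   ctx={"battery": 0.15})
print("selected:", best.name)  # low battery -> "recharge" wins
```

New domains can contribute policies without touching existing plans, which is the reuse benefit the abstract describes.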

Applicability Evaluation of Flood Inundation Analysis using Quadtree Grid-based Model (쿼드트리 격자기반 모형의 홍수범람해석 적용성 평가)

  • Lee, Dae Eop;An, Hyun Uk;Lee, Gi Ha;Jung, Kwan Sue
    • Journal of Korea Water Resources Association / v.46 no.6 / pp.655-666 / 2013
  • Lately, the intensity and frequency of natural disasters such as floods have been increasing because of abnormal climate. Casualties and property damage from large-scale floods such as Typhoon Rusa in 2002 and Typhoon Maemi in 2003 rose rapidly, exposing the limits of existing disaster prevention measures and flood forecasting systems in the face of irregular climate change. To respond efficiently to extraordinary floods, it is important to provide effective countermeasures through an inundation model that can accurately simulate flood inundation patterns. Existing flood inundation analysis models, however, suffer from excessive analysis time and limited accuracy. This study therefore conducted a flood inundation analysis using the Gerris flow solver, which uses a quadtree grid, targeting the Baeksan Levee in the Nakdong River Basin, which collapsed under concentrated torrential rainfall in August 2002. Through comparisons with the FLUMEN model, an existing flood inundation model that uses an unstructured grid, and with the actual flooded areas, the study evaluated the applicability and efficiency of the quadtree grid-based flood inundation modeling of the Gerris flow solver.
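As a toy illustration of why quadtree grids suit inundation modeling, the sketch below refines cells only near a hypothetical wet/dry front, yielding far fewer cells than a uniformly fine grid. The refinement criterion is a stand-in, not Gerris's actual adaptivity logic.

```python
# Quadtree refinement around a front: refine only cells crossing y = x.
from dataclasses import dataclass
from typing import List

@dataclass
class Cell:
    x: float; y: float; size: float  # lower-left corner and edge length

def needs_refining(c: Cell) -> bool:
    # Hypothetical criterion: the cell straddles the line y = x ("front")
    # and is still coarser than the minimum resolution.
    return (c.y < c.x + c.size) and (c.y + c.size > c.x) and c.size > 0.125

def refine(c: Cell) -> List[Cell]:
    h = c.size / 2
    return [Cell(c.x, c.y, h), Cell(c.x + h, c.y, h),
            Cell(c.x, c.y + h, h), Cell(c.x + h, c.y + h, h)]

cells = [Cell(0.0, 0.0, 1.0)]
while any(needs_refining(c) for c in cells):
    cells = [sub for c in cells
             for sub in (refine(c) if needs_refining(c) else [c])]
print(len(cells), "quadtree cells vs", int(1 / 0.125) ** 2, "uniform cells")
```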

A Study on the Analysis of Bicycle Road Service Level by Using Adaptive Neuro-Fuzzy Inference System (적응 뉴로-퍼지를 이용한 자전거도로 서비스수준 분석에 관한 연구)

  • Kim, Kyung Whan;Jo, Gyu Boong
    • KSCE Journal of Civil and Environmental Engineering Research / v.31 no.2D / pp.217-225 / 2011
  • Our country currently has very serious traffic congestion and urban environment problems due to increasing automobile ownership. Concern about environmentally sustainable and green transportation is growing, and the government is pushing ahead with a policy of activating bicycle use, so a model is needed to analyze the service level of bicycle roads more realistically. In this study, a neuro-fuzzy inference model for analyzing the service level of bicycle roads was built, selecting as input variables the width of the bicycle road, the number of conflicts during cycling, and pedestrian volume, all of which have fuzzy characteristics. The predictability of the model was evaluated by comparing surveyed and estimated values: the $R^2$, MAE, and MSE statistics were 0.987, 0.142, and 0.032, respectively, so the explanatory power of the model may be judged very high. The service levels of bicycle roads estimated by the model are 1~3 steps lower than KHCM assessments, which may be explained by the model considering the width of the bicycle road and the number of conflicts simultaneously, in addition to pedestrian volume.
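A minimal sketch of the fuzzy-inference idea behind the model: fuzzy memberships for road width, conflicts, and pedestrian volume fire weighted rules whose outputs are combined into a service score. The membership shapes, rules, and outputs are illustrative assumptions, not the paper's calibrated ANFIS parameters.

```python
# Sugeno-style fuzzy inference over three inputs (all values illustrative).
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def service_score(width_m: float, conflicts: float, peds: float) -> float:
    # Membership degrees in "narrow"/"wide", "few"/"many", "light"/"heavy".
    narrow, wide = tri(width_m, 0, 1, 2.5), tri(width_m, 1.5, 3, 5)
    few, many = tri(conflicts, -1, 0, 4), tri(conflicts, 2, 8, 14)
    light, heavy = tri(peds, -1, 0, 60), tri(peds, 30, 120, 240)
    # Rules: firing strength (product) times a crisp service-level output.
    rules = [(wide * few * light, 1.0),    # best service level
             (narrow * many * heavy, 6.0), # worst
             (narrow * few * light, 3.0),
             (wide * many * heavy, 4.0)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else float("nan")

print(round(service_score(width_m=2.0, conflicts=1, peds=20), 2))
```

An ANFIS model tunes such memberships and rule outputs from data; here they are fixed by hand purely for illustration.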

Corneal Ulcer Region Detection With Semantic Segmentation Using Deep Learning

  • Im, Jinhyuk;Kim, Daewon
    • Journal of the Korea Society of Computer and Information / v.27 no.9 / pp.1-12 / 2022
  • Traditional methods of measuring corneal ulcers relied on the subjective judgment of medical staff examining photographs taken with special equipment, making it difficult to present an objective basis for diagnosis. In this paper, we propose a method to detect the ulcer area on a pixel basis in corneal ulcer images using a semantic segmentation model. We performed experiments to detect the ulcer area based on the DeepLab model, which has the highest performance among semantic segmentation models. For the experiments, training and test data were selected, and DeepLab backbone networks set to Xception and ResNet, respectively, were evaluated and their performances compared, using the Dice similarity coefficient and IoU as indicators. The results show that when 'crop & resized' images are added to the dataset, the DeepLab model with ResNet101 as the backbone network segments the ulcer area with an average Dice similarity coefficient of about 93%. This study shows that a semantic segmentation model built for object detection can also produce significant results when classifying objects with irregular shapes such as corneal ulcers. In future studies we will extend the datasets and experiment with adaptive learning methods so that the approach can be deployed in real medical diagnosis environments.
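The two evaluation indicators used above are standard and easy to state precisely; the sketch below computes both on synthetic binary masks.

```python
# Dice similarity coefficient and IoU for binary segmentation masks.
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple:
    """Dice = 2*|A intersect B| / (|A| + |B|); IoU = |A intersect B| / |A union B|."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    return float(dice), float(inter / union)

rng = np.random.default_rng(0)
truth = rng.random((128, 128)) > 0.7   # synthetic ground-truth ulcer mask
pred = np.roll(truth, 2, axis=0)       # synthetic prediction, slightly shifted
print("Dice %.3f, IoU %.3f" % dice_and_iou(pred, truth))
```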

Building a Model to Estimate Pedestrians' Critical Lags on Crosswalks (횡단보도에서의 보행자의 임계간격추정 모형 구축)

  • Kim, Kyung Whan;Kim, Daehyon;Lee, Ik Su;Lee, Deok Whan
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.1D / pp.33-40 / 2009
  • The critical lag of crosswalk pedestrians is an important parameter in analyzing traffic operation at unsignalized crosswalks, yet there has been little research on it in Korea. The purpose of this study is to develop a model to estimate the critical lag. Among the factors that influence the critical lag, the age of pedestrians and the length of the crosswalk, which have fuzzy characteristics, were selected, and each rejected or accepted lag was collected on crosswalks with lengths ranging from 3.5 m to 10.5 m. The observed critical lags range from 2.56 sec to 5.56 sec. Age and crosswalk length were each divided into three fuzzy variables, and the critical lag for each case was estimated by Raff's technique, yielding a total of 9 fuzzy rules. Based on these rules, an ANFIS (Adaptive Neuro-Fuzzy Inference System) model to estimate the critical lag was built. The predictability of the model was evaluated by comparing the observed critical lags with those estimated by the model: the $R^2$, MAE, and MSE statistics are 0.96, 0.097, and 0.015, respectively, so the model is judged to explain the results well. The study also found that the critical lag increases rapidly beyond a pedestrian age of 40 years.
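A hedged sketch of Raff's technique as used above: the critical lag is taken as the time t at which the cumulative count of accepted lags shorter than t equals the cumulative count of rejected lags longer than t. The lag samples are synthetic.

```python
# Raff's method: intersection of cumulative accepted / rejected lag curves.
import numpy as np

def raff_critical_lag(accepted, rejected, step=0.01):
    accepted, rejected = np.sort(accepted), np.sort(rejected)
    ts = np.arange(0, max(accepted.max(), rejected.max()), step)
    acc_below = np.searchsorted(accepted, ts)                  # accepted lags < t
    rej_above = len(rejected) - np.searchsorted(rejected, ts)  # rejected lags > t
    return ts[np.argmin(np.abs(acc_below - rej_above))]

rng = np.random.default_rng(1)
accepted = rng.normal(5.0, 1.0, 200)  # synthetic accepted lags (sec)
rejected = rng.normal(3.0, 1.0, 200)  # synthetic rejected lags (sec)
print("critical lag ~ %.2f s" % raff_critical_lag(accepted, rejected))
```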

An Empirical Study on the Dual Burden of Married Working Women : Testifying the Adaptive Partnership, Dual Burden and Lagged Adaptation Hypotheses (근로기혼여성의 이중노동부담에 관한 실증연구: 가사노동분담에 관한 협조적 적응, 이중노동부담, 적응지체 가설의 검증)

  • Kim, Jin-Wook
    • Korean Journal of Social Welfare / v.57 no.3 / pp.51-72 / 2005
  • The purpose of this article is to empirically test three hypotheses on the relation between married women's employment and the allocation of unpaid domestic work within households: the adaptive partnership (AP), dual burden (DB), and lagged adaptation (LA) models. The AP hypothesis assumes that when wives are employed, husbands spend more time on housework to compensate for their wives' increased responsibilities. The DB model, by contrast, holds that even when married women are employed, their burden of domestic work does not decrease, so a dual burden on married women can be expected. Between these two opposing views, a third, alternative hypothesis has recently been suggested: the LA model argues that household behavior adapts to changing environments, but over a period of many years and even across generations. The article analyses total work time as well as unpaid domestic work time to test these three hypotheses, utilising the 1999 Time Use Survey data of the National Statistical Office. The results can be summarised as follows. First, married working women worked 100 minutes more than their male spouses. Second, the average domestic work time of married men, 23-25 minutes per day, was no more than 5-10% of that of women. Third, the effects of age and women's employment were not statistically significant in multiple regression models, which means that the DB hypothesis explains the situation of married working women in Korea. Based on these findings, the article suggests expanding the public social service system to mitigate the dual burden of married working women, introducing compensatory credit for care work, and offers directions for further empirical research using time use survey data.
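For readers unfamiliar with the test behind the third finding, the sketch below shows the general shape of such a multiple regression: domestic work time regressed on employment status and age, with significance read from the coefficient table. The data are synthetic; the paper used the 1999 Time Use Survey.

```python
# OLS regression of domestic work time on employment and age (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
employed = rng.integers(0, 2, n)  # 1 = employed wife
age = rng.integers(25, 60, n)
# Under the dual-burden hypothesis, employment barely moves domestic time,
# so the "employed" coefficient should come out insignificant.
domestic_min = 240 - 2 * employed + rng.normal(0, 40, n)

X = sm.add_constant(np.column_stack([employed, age]))
fit = sm.OLS(domestic_min, X).fit()
print(fit.summary(xname=["const", "employed", "age"]))
```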


Sampling Strategies for Computer Experiments: Design and Analysis

  • Lin, Dennis K.J.;Simpson, Timothy W.;Chen, Wei
    • International Journal of Reliability and Applications / v.2 no.3 / pp.209-240 / 2001
  • Computer-based simulation and analysis is used extensively in engineering for a variety of tasks. Despite the steady and continuing growth of computing power and speed, the computational cost of complex high-fidelity engineering analyses and simulations limits their use in important areas such as design optimization and reliability analysis. Statistical approximation techniques such as design of experiments and response surface methodology are becoming widely used in engineering to minimize the computational expense of running such computer analyses and to circumvent many of these limitations. In this paper, we compare and contrast five experimental design types and four approximation model types in terms of their capability to generate accurate approximations for two engineering applications with typical engineering behaviors and a wide range of nonlinearity. The first example involves the analysis of a two-member frame that has three input variables and three responses of interest. The second example simulates the roll-over potential of a semi-tractor-trailer for different combinations of input variables and braking and steering levels. Detailed error analysis reveals that uniform designs provide good sampling for generating accurate approximations across different sample sizes, while kriging models provide accurate approximations that are robust for use with a variety of experimental designs and sample sizes.
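A hedged sketch of the workflow the paper compares: draw a space-filling experimental design, run the (here, toy) simulation at the sampled points, and fit a kriging (Gaussian process) approximation. The test function, design size, and kernel are illustrative assumptions.

```python
# Space-filling design + kriging surrogate for an "expensive" simulation.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulation(X):  # stand-in for an expensive engineering analysis
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

sampler = qmc.LatinHypercube(d=2, seed=0)  # space-filling design in [0, 1]^2
X_train = sampler.random(n=40)
y_train = simulation(X_train)

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                   normalize_y=True).fit(X_train, y_train)
X_test = sampler.random(n=200)
rmse = np.sqrt(np.mean((kriging.predict(X_test) - simulation(X_test)) ** 2))
print("approximation RMSE: %.4f" % rmse)
```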
