• Title/Summary/Keyword: Fuzzy Number

Search Result 1,022

Design of Network Attack Detection and Response Scheme based on Artificial Immune System in WDM Networks (WDM 망에서 인공면역체계 기반의 네트워크 공격 탐지 제어 모델 및 대응 기법 설계)

  • Yoo, Kyung-Min;Yang, Won-Hyuk;Kim, Young-Chon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.4B
    • /
    • pp.566-575
    • /
    • 2010
  • Recently, artificial immune systems have become an important research direction in network anomaly detection. Conventional artificial immune systems are usually based on negative selection, one of the computational models of self/non-self discrimination. A main problem with self/non-self discrimination is determining the boundary between self and non-self, which causes false positives and false negatives. Therefore, additional functions are needed to detect potential anomalies while distinguishing abnormal behavior from analogous symptoms. In this paper, we design novel network attack detection and response schemes based on an artificial immune system and evaluate the performance of the proposed schemes. We first generate a detector set and design detection and response modules that adopt the interaction between dendritic cells and T-cells. From the sequence of buffer occupancy, a set of detectors is generated by negative selection. The detection module detects network anomalies with the detector set and sends alarm signals to the response module. To reduce wrong detections, we also utilize fuzzy number theory to infer the degree of threat. The degree of threat is calculated by monitoring the number of alarm signals and the intensity of alarm occurrence. The response module sends a control signal toward the attackers to limit the attack traffic.
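
The detection flow described above can be sketched in a few lines. The Python fragment below is a hedged illustration, not the authors' implementation: detector generation by negative selection over short buffer-occupancy windows, anomaly detection by detector matching, and a fuzzy-style degree of threat derived from the number and intensity of alarms. The window size, radii, detector count, and membership shapes are all invented for the example.

```python
# Minimal sketch (not the paper's code): negative selection over 2-sample
# buffer-occupancy windows plus a fuzzy "degree of threat" from alarm activity.
import numpy as np

rng = np.random.default_rng(0)

def generate_detectors(self_windows, n_detectors=100, radius=0.15, dim=2):
    """Negative selection: keep random candidates that match no self window."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.random(dim)
        if np.min(np.linalg.norm(self_windows - cand, axis=1)) > radius:
            detectors.append(cand)
    return np.array(detectors)

def detect(window, detectors, radius=0.15):
    """Raise an alarm if any detector matches the observed window."""
    return bool(np.any(np.linalg.norm(detectors - window, axis=1) <= radius))

def threat_degree(alarm_count, alarm_intensity):
    """Fuzzy-style aggregation: simple memberships mapped to [0, 1]."""
    mu_count = np.clip(alarm_count / 10.0, 0.0, 1.0)    # many alarms -> high
    mu_intensity = np.clip(alarm_intensity, 0.0, 1.0)   # bursty alarms -> high
    return 0.5 * (mu_count + mu_intensity)              # illustrative average rule

# Normal (self) buffer-occupancy windows, then a suspicious high-occupancy window.
self_windows = rng.normal(0.3, 0.05, size=(200, 2)).clip(0, 1)
detectors = generate_detectors(self_windows)
attack_window = np.full(2, 0.9)
print(detect(attack_window, detectors), threat_degree(alarm_count=7, alarm_intensity=0.8))
```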

A Study on the GIS-based Deterministic MCDA Techniques for Evaluating the Flood Damage Reduction Alternatives (확정론적 다중의사결정기법을 이용한 최적 홍수저감대책 선정 기법 연구)

  • Lim, Kwang-Suop;Kim, Joo-Cheol;Hwang, Eui-Ho;Lee, Sang-Uk
    • Journal of Korea Water Resources Association
    • /
    • v.44 no.12
    • /
    • pp.1015-1029
    • /
    • 2011
  • Conventional MCDA techniques have been used in the field of water resources in the past. A GIS can offer an effective spatial data-handling tool that can enhance water resources modeling through interfaces with sophisticated models. However, GIS has limited capability as far as the analysis of the value structure is concerned. MCDA techniques provide the tools for aggregating geographical data and the decision maker's preferences into a one-dimensional value for analyzing alternative decisions. In other words, MCDA allows multiple criteria to be used in deciding upon the best alternatives. The combination of GIS and MCDA capabilities is of critical importance in spatial multi-criteria analysis. The advantage of having spatial data is that it allows the consideration of the unique characteristics at every point. The purpose of this study is to identify, review, and evaluate the performance of a number of conventional MCDA techniques for integration with GIS. Even though many techniques have been applied in other fields, this study considers only those that have been applied to floodplain decision-making problems. Two methods for multi-criteria evaluation were selected to be integrated with GIS: Compromise Programming (CP) and Spatial Compromise Programming (SCP). The target region for a demonstration application of the methodology was the Suyoung River Basin in Korea.
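
As a rough illustration of how Compromise Programming ranks alternatives by their distance to an ideal point, the following Python sketch applies a weighted Lp metric to a small, invented score table; the weights, criteria, and exponent are assumptions, and the spatial variant (SCP) would repeat this calculation cell by cell over GIS layers.

```python
# Illustrative Compromise Programming (CP): weighted Lp distance to the ideal point.
import numpy as np

def compromise_programming(scores, weights, p=2):
    """Rank alternatives by weighted Lp distance from the ideal point.

    scores  : (n_alternatives, n_criteria), larger = better for every criterion
    weights : (n_criteria,), summing to 1
    """
    ideal = scores.max(axis=0)
    worst = scores.min(axis=0)
    shortfall = (ideal - scores) / (ideal - worst)   # normalized gap to the ideal
    distance = (np.sum((weights * shortfall) ** p, axis=1)) ** (1.0 / p)
    return distance  # smaller distance = better compromise

# Three hypothetical flood-damage-reduction alternatives scored on three criteria.
scores = np.array([[0.8, 0.4, 0.7],
                   [0.6, 0.9, 0.5],
                   [0.7, 0.7, 0.6]])
weights = np.array([0.5, 0.3, 0.2])
print(np.argsort(compromise_programming(scores, weights)))  # best alternative first
```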

A Study on the Analysis of Non-competitive factors of Mokpo port and Improvement (목포항 비경쟁 요인 분석 및 개선방안 연구)

  • Park, Gyei-Kark;Choi, Kyoung-Hoon;Lee, Cheong-Hwan
    • Journal of Korea Port Economic Association
    • /
    • v.34 no.3
    • /
    • pp.113-132
    • /
    • 2018
  • Mokpo port marked the 131st anniversary of its opening in 2018. While Mokpo has taken new port development initiatives, it is limited by inefficient port functioning due to the lack of maritime port policy and government investment; hence, port logistics has not been activated. Additionally, few studies on Mokpo port have been conducted, and the knowledge available on the port is declarative in nature. On the other hand, research on port competitiveness focuses on how to analyze the factors that determine port competitiveness. Therefore, this study expands the existing research on Mokpo port, analyzes its non-competitiveness factors, and suggests improvements by considering the operational aspect of the port. In this regard, the importance of the non-competitiveness factors was assessed through an analytic hierarchy process (AHP) analysis, and their influence was analyzed through fuzzy structural modeling (FSM). The results of the AHP analysis showed that the important non-competitiveness factors included the deactivation of industrial complexes around Mokpo port, the number of liner routes, and the cost of pilotage and tugs. According to the FSM analysis, the top level included the non-competitive factors at Mokpo port; the intermediate level included the number of liner routes, cost of pilotage and tugs, entrance and clearance fees, costs of inland transportation, fees for port facilities, and loading and unloading costs; and the bottom level comprised the most non-competitive factors, including the deactivation of industrial complexes around Mokpo port, hinterland connectivity, access to international ports, incentives, and costs of transportation and storage. Based on the results of the analysis, improvements were suggested for the non-competitive factors of Mokpo port.
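
For readers unfamiliar with the AHP step, the sketch below shows how a priority vector and consistency ratio are typically derived from a pairwise comparison matrix; the 3x3 matrix over three of the factors named above is illustrative only and is not taken from the paper's survey data.

```python
# Standard AHP weighting step on a hypothetical 3x3 pairwise comparison matrix.
import numpy as np

def ahp_weights(pairwise):
    """Priority vector from the principal eigenvector of a pairwise matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

# Example: industrial-complex deactivation vs. liner routes vs. pilotage/tug cost.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, lam_max = ahp_weights(A)
n = A.shape[0]
ci = (lam_max - n) / (n - 1)          # consistency index
cr = ci / 0.58                        # Saaty's random index for n = 3 is 0.58
print(weights.round(3), round(cr, 3)) # CR < 0.1 is conventionally acceptable
```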

A Brief Empirical Verification Using Multiple Regression Analysis on the Measurement Results of Seaport Efficiency of AHP/DEA-AR (다중회귀분석을 이용한 AHP/DEA-AR 항만효율성 측정결과의 실증적 검증소고)

  • Park, Ro-kyung
    • Journal of Korea Port Economic Association
    • /
    • v.32 no.4
    • /
    • pp.73-87
    • /
    • 2016
  • The purpose of this study is to investigate the empirical results of Analytic Hierarchy Process/Data Envelopment Analysis-Assurance Region (AHP/DEA-AR) by using multiple regression analysis during the period 2009-2012, with 5 inputs (number of gantry cranes, number of berths, berth length, terminal yard, and mean depth) and 2 outputs (container TEU and number of direct-calling shipping companies). The Assurance Region (AR) is the most important tool for measuring the efficiency of seaports, because individual seaports are characterized in terms of inputs and outputs. Traditional AHP and multiple regression analysis techniques have been used for measuring the AR; however, few previous studies exist in the field of seaport efficiency measurement. The main empirical results of this study are as follows. First, the efficiency rankings of the two models (AHP/DEA-AR and multiple regression), compared using the Wilcoxon signed-rank test and the Mann-Whitney rank-sum test, matched at average levels of 84.5% and 96.3%, respectively. When data for four years are used, the ratios of the significance probability decrease to 61.4% and 92.5%. The policy implication of this study is that the policy planners of Korean ports should introduce AHP/DEA-AR and multiple regression analysis when they measure seaport efficiency and consider port investment for enhancing the efficiency of inputs and outputs. A follow-up study will introduce the fuzzy method, non-radial DEA, and a mixed analysis combining AHP/DEA-AR and multiple regression analysis.
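
The ranking comparison itself is a standard nonparametric test; a minimal Python sketch using SciPy is given below, with invented efficiency rankings standing in for the AHP/DEA-AR and regression-based results.

```python
# Hedged sketch of the ranking comparison step with fabricated port rankings.
import numpy as np
from scipy import stats

rank_dea_ar = np.array([1, 2, 3, 5, 4, 6, 8, 7, 9, 10])   # hypothetical ranking A
rank_regress = np.array([2, 1, 3, 4, 5, 7, 8, 6, 9, 10])  # hypothetical ranking B

w_stat, w_p = stats.wilcoxon(rank_dea_ar, rank_regress)       # paired-rank test
u_stat, u_p = stats.mannwhitneyu(rank_dea_ar, rank_regress)   # rank-sum test
print(f"Wilcoxon p={w_p:.3f}, Mann-Whitney p={u_p:.3f}")
# A large p-value means the two models produce statistically similar rankings.
```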

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulties in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address issues associated with information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering the Bayesian model, clustering model or dependency network model. This filtering technique not only improves the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. Such a tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model-building may lead to performance instability since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved. Cumulative changes in the domain environment that have failed to be reflected eventually undermine system performance. This study incorporates the Markov model of transition probabilities and the concept of fuzzy clustering with CBCF to propose predictive clustering-based CF (PCCF), which solves the issues of reduced coverage and of unstable performance. The method reduces performance instability by tracking the changes in user preferences and bridging the gap between the static model and dynamic users. Furthermore, the issue of reduced coverage is also mitigated by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing the robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems each enabled by IBCF, CBCF, ICFEC and PCCF under an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the suggested method produced insignificant improvement in performance in comparison with the existing techniques. In addition, it failed to achieve significant improvement in the standard deviation that indicates the degree of data fluctuation. Notwithstanding, it resulted in marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test. In the following test, the level of performance fluctuation driven by changes in the number of clusters improved by 36.05%. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability compared to the existing techniques. Further research will be directed toward enhancing the recommendation performance, which failed to demonstrate significant improvement over the existing techniques. Future research will consider the introduction of a high-dimensional parameter-free clustering algorithm or a deep learning-based model in order to improve recommendation performance.
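
To make the preference-transition idea concrete, the following Python sketch (an assumption-laden toy, not the authors' PCCF code) estimates a Markov transition matrix over preference clusters from a single user's rating history and predicts the next preference as a probability-weighted mean of cluster centers.

```python
# Toy Markov-transition prediction over preference clusters; all data invented.
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalized counts of cluster-to-cluster transitions."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    row_sums = T.sum(axis=1, keepdims=True)
    return np.divide(T, row_sums, out=np.full_like(T, 1.0 / n_states),
                     where=row_sums > 0)

# A user's sequence of preference clusters (e.g. from fuzzy clustering of ratings).
history = [0, 0, 1, 1, 2, 1, 0, 1, 2, 2]
centers = np.array([2.0, 3.5, 4.5])        # mean rating of each cluster (assumed)
T = transition_matrix(history, n_states=3)
current = history[-1]
predicted_rating = T[current] @ centers    # expected next-cluster preference
print(T.round(2), predicted_rating.round(2))
```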

Methodology of Shape Design for Component Using Optimal Design System (최적설계 시스템을 이용한 부품에 대한 형상설계 방법론)

  • Lee, Joon-Seong;Cho, Seong-Gyu
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.1
    • /
    • pp.672-679
    • /
    • 2018
  • This paper describes a methodology for shape design using an optimal design system; such designs generally require a three-dimensional analysis. An automatic finite element mesh generation technique, based on fuzzy knowledge processing and computational geometry techniques, is incorporated into the system, together with a commercial FE analysis code and a commercial solid modeler. Also, with the aid of multilayer neural networks, the present system allows us to automatically obtain a design window in which a number of satisfactory design solutions exist in a multi-dimensional design parameter space. The developed optimal design system is successfully applied to the evaluation of the structures in use. This study used a stress gauge to measure the maximum stress affecting the parts of the side housing bracket that are most vulnerable to cracking. Thereafter, an analysis tool was used to interpret the maximum stress value while maintaining the same stress as that measured on the spot. Furthermore, a stress analysis was performed with the typical shape kept intact, SM490 used as the material, and the safety coefficient for weight minimization set to 3, while keeping the maximum stress equal to or smaller than the allowable stress. In this paper, a 36-ton side housing bracket with a comparatively simple structure was optimized; if the method developed in this study were applied to side housing brackets of other classes (tonnages), their quality would be greatly improved.
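
The allowable-stress constraint amounts to a simple check; the sketch below assumes the safety coefficient of 3 is applied to the nominal 490 MPa tensile strength of SM490 and uses an invented measured stress, since the paper's exact values are not quoted here.

```python
# Back-of-the-envelope allowable-stress check (assumed basis: tensile strength).
TENSILE_STRENGTH_MPA = 490.0     # SM490 nominal tensile strength
SAFETY_COEFFICIENT = 3.0
allowable_stress = TENSILE_STRENGTH_MPA / SAFETY_COEFFICIENT   # ~163 MPa

max_measured_stress = 140.0      # hypothetical stress-gauge reading, MPa
print(f"allowable = {allowable_stress:.1f} MPa, "
      f"OK = {max_measured_stress <= allowable_stress}")
```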

Web Cogmulator : The Web Design Simulator Using Fuzzy Cognitive Map (Web Cogmulator : 퍼지 인식도를 이용한 웹 디자인 시뮬레이터에 관한 연구)

  • 이건창;정남호;조형래
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2000.04a
    • /
    • pp.357-364
    • /
    • 2000
  • Although design elements are very important in web design given the characteristics of the web as a medium, concrete methodologies for such design remain weak. This is especially true for Internet shopping malls, which must attract many consumers and trigger purchases, yet strategic methodologies for this purpose are lacking. Previous studies have pointed out that product variety, service, promotion, navigation load, convenience, and user interface are important, but these findings are quite ambiguous to apply when actually designing an Internet shopping mall. This is because the factors influence one another: a complex user interface increases the navigation load and decreases convenience, while even with more products, using a search engine relatively decreases the navigation load and increases convenience. Therefore, to build an Internet shopping mall using these factors, the causal relationships among them must be closely identified, and their influence on consumers' purchasing behavior must be sufficiently examined. In this study, we use a fuzzy cognitive map to extract the factors that influence consumers' purchasing behavior in Internet shopping malls and to derive the causal relationships among them, and we present Web-Cogmulator as a method for designing Internet shopping malls in a more concrete and strategic way. Because Web-Cogmulator converts consumers' tacit purchasing behavior toward shopping malls into explicit knowledge and stores it as a knowledge base, it can infer and simulate consumers' purchasing behavior in response to changes in various factors of an Internet shopping mall. We verified the usefulness of Web-Cogmulator by running inference simulations based on basic Internet shopping mall scenarios.
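
As a hedged illustration of how a fuzzy cognitive map can simulate such what-if scenarios, the Python sketch below iterates a small, invented concept and weight matrix (not the knowledge base actually built for Web-Cogmulator) until activations settle.

```python
# Toy fuzzy cognitive map (FCM) simulation; concepts and weights are invented.
import numpy as np

concepts = ["UI complexity", "navigation load", "convenience", "purchase intent"]
# W[i, j]: causal influence of concept i on concept j, in [-1, 1].
W = np.array([
    [0.0,  0.7, -0.4,  0.0],   # complex UI -> more navigation, less convenience
    [0.0,  0.0, -0.6,  0.0],   # more navigation -> less convenience
    [0.0,  0.0,  0.0,  0.8],   # more convenience -> higher purchase intent
    [0.0,  0.0,  0.0,  0.0],
])

def simulate(state, W, steps=20):
    """Iterate the FCM update rule (with self-memory) under sigmoid squashing."""
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-(state @ W + state)))
    return state

# Scenario: a shopping-mall redesign with a very complex user interface.
initial = np.array([0.9, 0.5, 0.5, 0.5])
final = simulate(initial, W)
print(dict(zip(concepts, final.round(2))))
```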


Detection of Text Candidate Regions using Region Information-based Genetic Algorithm (영역정보기반의 유전자알고리즘을 이용한 텍스트 후보영역 검출)

  • Oh, Jun-Taek;Kim, Wook-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.70-77
    • /
    • 2008
  • This paper proposes a new text candidate region detection method that uses a genetic algorithm based on information from the segmented regions. In image segmentation, pixels are classified in each color channel and a region-unit reclassification is performed to reduce inhomogeneous clusters. The EWFCM (Entropy-based Weighted Fuzzy C-Means) algorithm used to classify the pixels in each color channel is an improved FCM algorithm augmented with spatial information, and it therefore removes meaningless regions such as noise. A region-based reclassification, based on the similarity between each segmented region of the most inhomogeneous cluster and the other clusters, reduces inhomogeneous clusters more efficiently than pixel- and cluster-based reclassifications. Text candidate regions are then detected by a genetic algorithm based on the energy and variance of the directional edge components and the number and size of the segmented regions. The region-information-based detection method can single out semantic text candidate regions more accurately than pixel-based detection methods, and the detection results will be more useful for subsequent text region recognition. Experiments showed the segmentation and detection results and confirmed that the proposed method is superior to existing methods.
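
A toy version of the region-based genetic search is sketched below: each chromosome marks segmented regions as text candidates, and the fitness combines directional edge energy and variance. The feature values, weights, and GA parameters are invented and are not the paper's settings.

```python
# Toy genetic algorithm over per-region features; all values are fabricated.
import numpy as np

rng = np.random.default_rng(1)
# Per-region features: [directional edge energy, edge variance, region size].
features = rng.random((12, 3))

def fitness(chrom):
    """Higher when the selected regions look text-like (toy weighting)."""
    if chrom.sum() == 0:
        return 0.0
    sel = features[chrom.astype(bool)]
    return float(0.6 * sel[:, 0].mean() + 0.4 * sel[:, 1].mean())

def evolve(pop_size=20, generations=40, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, features.shape[0]))
    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # truncation selection
        cut = features.shape[0] // 2
        children = np.concatenate(
            [np.concatenate([a[:cut], b[cut:]]) for a, b in
             zip(parents, np.roll(parents, 1, axis=0))]).reshape(parents.shape)
        mutate = rng.random(children.shape) < p_mut
        children = np.where(mutate, 1 - children, children)       # bit-flip mutation
        pop = np.concatenate([parents, children])
    return pop[np.argmax([fitness(c) for c in pop])]

print(evolve())   # 1 = text candidate region, 0 = background
```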

A Desirability Function-Based Multi-Characteristic Robust Design Optimization Technique (호감도 함수 기반 다특성 강건설계 최적화 기법)

  • Jong Pil Park;Jae Hun Jo;Yoon Eui Nahm
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.4
    • /
    • pp.199-208
    • /
    • 2023
  • The Taguchi method is one of the most popular approaches for design optimization such that performance characteristics become robust to uncontrollable noise variables. However, most previous Taguchi method applications have addressed a single-characteristic problem, while problems with multiple characteristics are more common in practice. The multi-criteria decision-making (MCDM) problem is to select the optimal alternative among multiple alternatives by integrating a number of criteria that may conflict with each other. Representative MCDM methods include TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), GRA (Grey Relational Analysis), PCA (Principal Component Analysis), fuzzy logic systems, and so on. Therefore, numerous approaches have been proposed to deal with the multi-characteristic design problem by combining the original Taguchi method with MCDM methods. In the MCDM problem, multiple criteria generally have different measurement units, which means that there may be a large difference in the physical values of the criteria, ultimately making it difficult to integrate the measurements for the criteria. Therefore, a normalization technique is usually utilized to convert different units of criteria into one identical unit. Four normalization techniques are commonly used in MCDM problems: vector normalization and linear scale transformation (max-min, max, or sum). However, these normalization techniques have several shortcomings and do not adequately incorporate practical considerations. For example, if a certain alternative has the maximum data value for a certain criterion, that alternative is considered the solution in the original process. However, if the maximum data value does not satisfy the degree of fulfillment required by the designer or customer, the alternative may not be an acceptable solution. To solve this problem, this paper employs the desirability function proposed in our previous research. The desirability function uses an upper limit and a lower limit in the normalization process. The threshold points used to establish the upper and lower limits indicate the degree of fulfillment required by the designer or customer. This paper proposes a new design optimization technique for the multi-characteristic design problem by integrating the Taguchi method and our desirability functions. Finally, the proposed technique is able to obtain an optimal solution that is robust with respect to the multiple performance characteristics.
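
The following Python sketch shows a larger-the-better desirability function with designer-specified lower and upper limits and a weighted geometric-mean aggregation, in the spirit of the normalization described above; the limits, shape parameter, and sample values are assumptions rather than the authors' definitions.

```python
# Sketch of a larger-the-better desirability function with assumed limits.
import numpy as np

def desirability_larger_better(y, lower, upper, shape=1.0):
    """0 below the lower limit, 1 above the upper limit, smooth in between."""
    y = np.asarray(y, dtype=float)
    d = ((y - lower) / (upper - lower)) ** shape
    return np.clip(d, 0.0, 1.0)

def overall_desirability(d_values, weights=None):
    """Weighted geometric mean over the individual characteristics."""
    d = np.asarray(d_values, dtype=float)
    w = np.ones_like(d) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.prod(d ** w))

# Two performance characteristics for one hypothetical design alternative.
d1 = desirability_larger_better(72.0, lower=60.0, upper=90.0)   # e.g. strength
d2 = desirability_larger_better(0.85, lower=0.7, upper=1.0)     # e.g. reliability
print(round(overall_desirability([d1, d2]), 3))
```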

Analysis of Trading Performance on Intelligent Trading System for Directional Trading (방향성매매를 위한 지능형 매매시스템의 투자성과분석)

  • Choi, Heung-Sik;Kim, Sun-Woong;Park, Sung-Cheol
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.187-201
    • /
    • 2011
  • The KOSPI200 index is a Korean stock price index consisting of 200 actively traded stocks in the Korean stock market. Its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index. The KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index futures markets in the world. Traders can make a profit by entering a long position in a KOSPI200 index futures contract if the KOSPI200 index rises, and likewise by entering a short position if the index declines. Basically, KOSPI200 index futures trading is a short-term zero-sum game, and therefore most futures traders use technical indicators. Advanced traders make stable profits by using system trading techniques, also known as algorithmic trading. Algorithmic trading uses computer programs to receive real-time stock market data, analyze stock price movements with various technical indicators, and automatically enter trading orders, determining the timing, price, and quantity of the order without any human intervention. Recent studies have shown the usefulness of artificial intelligence systems in forecasting stock prices or investment risk. KOSPI200 index data are numerical time-series data, a sequence of data points measured at successive uniform time intervals such as a minute, day, week, or month. KOSPI200 index futures traders use technical analysis to find patterns in the time-series chart. Although there are many technical indicators, their results indicate the market state as bull, bear, or flat. Most strategies based on technical analysis are divided into trend-following and non-trend-following strategies. Both decide the market state based on patterns in the KOSPI200 index time-series data. This fits well with the Markov model (MM). Everybody knows that the next price will be higher than, lower than, or similar to the last price, and that the next price is influenced by the last price. However, nobody knows in advance whether the next price will go up, go down, or stay flat, so the hidden Markov model (HMM) is better suited than the MM. The HMM is divided into the discrete HMM (DHMM) and the continuous HMM (CHMM). The only difference between them is the representation of state probabilities: the DHMM uses a discrete probability density function, while the CHMM uses a continuous probability density function such as a Gaussian mixture model. KOSPI200 index values are real numbers that follow a continuous probability density function, so the CHMM is more appropriate than the DHMM for the KOSPI200 index. In this paper, we present an artificial intelligence trading system based on the CHMM for KOSPI200 index futures system traders. Traders have accumulated experience in technical trading ever since the introduction of the KOSPI200 index futures market. They have applied many strategies to make a profit in trading KOSPI200 index futures. Some strategies are based on technical indicators such as moving averages or stochastics, and others are based on candlestick patterns such as three outside up, three outside down, harami or doji star. We present a trading system that applies a moving-average cross strategy based on the CHMM and compare it to a traditional algorithmic trading system. We set the parameter values of the moving averages at common values used by market practitioners. Empirical results comparing the simulation performance with the traditional algorithmic trading system are presented using more than 20 years of daily KOSPI200 index data. Our suggested trading system shows higher trading performance than the naive system trading approach.
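
As a point of reference for the baseline strategy, the Python sketch below backtests a plain moving-average cross rule on a synthetic price series; the 5/20 windows and the data are invented, and the paper's system additionally conditions entries on the CHMM-inferred regime, which is omitted here.

```python
# Minimal moving-average cross backtest on a synthetic KOSPI200-like series.
import numpy as np

rng = np.random.default_rng(42)
prices = 250 + np.cumsum(rng.normal(0, 1.5, 500))    # synthetic daily price series

def moving_average(x, window):
    return np.convolve(x, np.ones(window) / window, mode="valid")

short_ma = moving_average(prices, 5)[15:]             # align with the 20-day MA
long_ma = moving_average(prices, 20)
position = np.where(short_ma > long_ma, 1, -1)        # long above, short below

# Daily P&L: yesterday's position times today's price change.
returns = np.diff(prices[19:])
pnl = position[:-1] * returns
print(f"cumulative points: {pnl.sum():.1f}, hit rate: {(pnl > 0).mean():.2%}")
```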