• Title/Summary/Keyword: Intelligent Algorithm (지능형 알고리즘)


An Incident-Responsive Dynamic Control Model for Urban Freeway Corridor (도시고속도로축의 유고감응 동적제어모형의 구축)

  • 유병석;박창호;전경수;김동선
    • Journal of Korean Society of Transportation / v.17 no.4 / pp.59-69 / 1999
  • A freeway corridor is a network consisting of a few primary longitudinal roadways (freeways or major arterials) carrying a major traffic movement, with interconnecting roads that offer the motorist alternative paths to his/her destination. Control measures introduced to improve traffic performance in freeway corridors typically include ramp metering at the freeway entrances and signal control at each intersection. During a severe freeway incident, on-ramp metering alone is usually not adequate to relieve congestion effectively; diverting some traffic to the parallel surface street to make full use of the available corridor capacity becomes necessary. This is the purpose of the traffic management system. An integrated traffic control scheme should therefore include three elements: (a) on-ramp metering, (b) off-ramp diversion, and (c) signal timing at surface-street intersections. The purpose of this study is to develop an integrated optimal control model for a freeway corridor. By approximating the flow-density relation with a two-segment linear function, the nonlinear optimal control problem can be simplified into a set of piecewise linear programming models. The formulated optimal-control problem can be solved in real time using a common linear program. In this study, the program MPL (ver 4.0) is used to solve the formulated optimal-control problem. Simulation results with TSIS (ver 4.01) for a sample network have demonstrated the merits of the proposed model and algorithm.
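The two-segment linear approximation of the flow-density relation described in this abstract can be sketched as follows. The triangular fundamental diagram and all parameter values here are illustrative assumptions, not the paper's calibrated model:

```python
# A minimal sketch (not the paper's model) of a two-segment linear
# flow-density relation, the approximation that lets the nonlinear
# corridor control problem be recast as piecewise linear programs.
# All parameter values are illustrative only.

FREE_FLOW_SPEED = 100.0     # km/h, slope of the uncongested segment
BACKWARD_WAVE_SPEED = 20.0  # km/h, slope magnitude of the congested segment
JAM_DENSITY = 120.0         # veh/km

def flow(density):
    """Two-segment (triangular) flow-density relation:
    q(k) = min(v_f * k, w * (k_jam - k))."""
    return min(FREE_FLOW_SPEED * density,
               BACKWARD_WAVE_SPEED * (JAM_DENSITY - density))

# Critical density where the two segments meet (the capacity point)
critical = JAM_DENSITY * BACKWARD_WAVE_SPEED / (FREE_FLOW_SPEED + BACKWARD_WAVE_SPEED)
capacity = FREE_FLOW_SPEED * critical
```

Because each segment is linear, the constraint q <= flow(k) splits into two linear inequalities, which is what makes the optimal control problem solvable by an ordinary LP solver.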


A Study on an Adaptive Guidance Plan by Quickest Path Algorithm for Building Evacuations due to Fire (건물 화재시 Quickest Path를 이용한 Adaptive 피난경로 유도방안)

  • Sin, Seong-Il;Seo, Yong-Hui;Lee, Chang-Ju
    • Journal of Korean Society of Transportation / v.25 no.6 / pp.197-208 / 2007
  • Enormously sized buildings are appearing worldwide with the advancement of construction techniques. Large-scale, complicated structures face increased difficulties in dealing with safety and demand well-matched safety measures. This research introduced up-to-date techniques and systems applied to buildings in foreign nations. Furthermore, it proposed a direct guidance plan for buildings in case of fire. Since it is possible to install wireless sensor networks that detect fires or the effects of fire, the plan makes use of this information. Accordingly, the authors completed a direct guidance plan based on omnidirectional guidance lights. With the concept of a non-dominated path, it is possible to select a route that considers both time and capacity. Finally, case studies showed that quickest path algorithms were effective for guiding efficient dispersion routes; when certain links in preferred paths were restricted due to temperature and smoke, it was possible to avoid the relevant links and to restrict demand in the network application. Consequently, the algorithms were able to maximize safety and minimize evacuation time, which were the purposes of this study.
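A minimal sketch of the quickest-path idea behind such evacuation guidance: for a group of `demand` evacuees, a path's cost is its free travel time plus the queueing delay at its bottleneck, so high demand can shift the best route to a wider but longer corridor. The floor network, travel times, and capacities below are hypothetical:

```python
def all_simple_paths(graph, src, dst, path=None):
    """Enumerate all simple (cycle-free) paths from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, {}):
        if nxt not in path:
            yield from all_simple_paths(graph, nxt, dst, path)

def quickest_path(graph, src, dst, demand):
    """graph[u][v] = (travel_time, capacity).
    Path cost = sum of travel times + demand / bottleneck capacity."""
    best = None
    for p in all_simple_paths(graph, src, dst):
        edges = [graph[u][v] for u, v in zip(p, p[1:])]
        cost = sum(t for t, _ in edges) + demand / min(c for _, c in edges)
        if best is None or cost < best[1]:
            best = (p, cost)
    return best

# Hypothetical floor network: a short narrow route and a longer wide one
graph = {
    "room":   {"hall_a": (5.0, 2.0), "hall_b": (15.0, 5.0)},
    "hall_a": {"exit": (5.0, 2.0)},
    "hall_b": {"exit": (5.0, 5.0)},
}
```

With a single evacuee the short narrow corridor wins, but with 100 evacuees the bottleneck delay makes the wider route quicker, which is the capacity-versus-time trade-off the non-dominated path concept captures.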

Development of an Intelligent Legged Walking Rehabilitation Robot (지능적 족형 보행 재활 보조 로봇의 개발)

  • Kim, Hyun;Kim, Jung-Yup
    • Transactions of the Korean Society of Mechanical Engineers A / v.41 no.9 / pp.825-837 / 2017
  • This paper describes a novel type of walking rehabilitation robot that applies robot technologies to crutches used by patients with walking difficulties in the lower body. The primary features of the developed robot are divided into three parts. First, the robot is worn on the patient's chest, as opposed to the conventional elbow crutch attached to the forearm; hence, it can effectively disperse the patient's weight across the width of the chest, eliminate the concentrated load at the elbow, and allow free arm motion during walking. Second, the robot can recognize the walking intention of the patient from the magnitude and direction of the ground reaction forces, measured by three-axis force sensors attached to the feet of the robot. Third, the robot can perform a stair-walking function, changing its vertical movement trajectories to step up and down a single stair according to the floor height. Consequently, we experimentally showed that the developed robot can effectively assist walking rehabilitation by perceiving the walking intention of the patient. Moreover, we quantitatively verified muscle power assistance by measuring electromyography (EMG) signals of the muscles of the lower limb.
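Force-based intention recognition of the kind the abstract describes might be sketched like this. The axis convention, thresholds, and labels are invented for illustration and are not the paper's actual recognition logic:

```python
import math

def walking_intention(force, load_threshold=50.0):
    """Classify a 3-axis ground-reaction-force reading (fx, fy, fz) from one
    robot foot into a coarse intention label (hypothetical logic: fz is the
    vertical load, fx/fy the horizontal push)."""
    fx, fy, fz = force
    if fz < load_threshold:      # foot barely loaded: no step intended
        return "idle"
    heading = math.degrees(math.atan2(fy, fx))  # direction of horizontal push
    if -30.0 <= heading <= 30.0:
        return "forward"
    return "turn_left" if heading > 30.0 else "turn_right"
```

The key point from the abstract survives even in this toy version: both the magnitude (is the foot loaded at all?) and the direction of the reaction force are needed to infer intent.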

Development of Automatic Sorting System for Black Plastics Using Laser Induced Breakdown Spectroscopy (LIBS) (LIBS를 이용한 흑색 플라스틱의 자동선별 시스템 개발)

  • Park, Eun Kyu;Jung, Bam Bit;Choi, Woo Zin;Oh, Sung Kwun
    • Resources Recycling / v.26 no.6 / pp.73-83 / 2017
  • Used small household appliances come in a wide variety of product types and component materials and contain a high percentage of black plastics. However, they are not recycled efficiently because conventional sensors such as near-infrared (NIR) sensors cannot identify black plastics by type. In the present study, an automatic sorting system was developed based on laser-induced breakdown spectroscopy (LIBS) to promote the recycling of waste plastics. The system we developed mainly consists of a sample feeder, an automatic position recognition system, a LIBS device, a separator, and a control unit. By applying a laser pulse to the target sample, characteristic spectral data can be obtained and analyzed using CCD detectors. The obtained data were then processed by a classifier developed based on an artificial intelligence algorithm. Separation tests on waste plastics were also carried out using a lab-scale automatic sorting system, and the test results are discussed. The classification rate of the radial basis function neural network (RBFNN) classifier developed in this study was over 97%. The recognition rate of black plastics by type with the automatic sorting system was more than 94.0%, and the sorting efficiency was more than 80.0%. Automatic sorting based on LIBS technology is in its infancy, and it has high potential for utilization in and outside Korea due to its excellent economic efficiency.
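The structure of an RBF neural network classifier like the RBFNN the abstract mentions can be sketched as follows. The Gaussian centers, output weights, and toy two-dimensional "spectral" vectors are invented for illustration (real LIBS spectra have thousands of wavelength channels):

```python
import math

def rbf_activations(x, centers, sigma=1.0):
    """Gaussian hidden layer: phi_j = exp(-||x - c_j||^2 / (2 * sigma^2))."""
    return [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * sigma ** 2))
            for c in centers]

def classify(x, centers, weights, labels, sigma=1.0):
    """Linear output layer over the RBF activations; returns the arg-max label."""
    phi = rbf_activations(x, centers, sigma)
    scores = [sum(w * p for w, p in zip(row, phi)) for row in weights]
    return labels[max(range(len(scores)), key=scores.__getitem__)]

# Toy model: one Gaussian center per plastic type, identity output weights
centers = [[1.0, 0.0], [0.0, 1.0]]
weights = [[1.0, 0.0], [0.0, 1.0]]
labels = ["ABS", "PS"]
```

In a trained RBFNN the centers and output weights are fitted to labeled spectra; here they are fixed by hand purely to show the two-layer structure.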

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.149-161 / 2014
  • The export of domestic public services to overseas markets faces many potential obstacles, stemming from different export procedures, target services, and socio-economic environments. In order to alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service.
The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation. Sub-attributes of requirements are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase. The activity category also has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business law, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. 
The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service are generated by a matching algorithm. These lists are used as input seeds to simulate the consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be accounted for through a gap analysis and a work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered an alternative choice, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, detection of market timing means determining when to buy and sell to get excess return from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, using its control function, it does not generate a trade signal when the pattern of the market is uncertain. Numeric data for rough set analysis must be discretized because rough sets only accept categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals and examines the histogram of each variable, then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or interviews with experts. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization searches for categorical values by naïvely scaling the data, then finds the optimized discretization thresholds through Boolean reasoning.
Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance using rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis and decision trees, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
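Equal frequency scaling, the first of the four discretization methods described above, can be sketched as follows (a minimal illustration, not the paper's implementation):

```python
def equal_frequency_cuts(values, n_intervals):
    """Equal-frequency scaling: choose cuts so that roughly the same number
    of samples fall into each of the n_intervals intervals."""
    s = sorted(values)
    return [s[round(i * len(s) / n_intervals)] for i in range(1, n_intervals)]

def discretize(value, cuts):
    """Map a numeric value to the 0-based index of its interval."""
    return sum(value >= c for c in cuts)
```

For 12 evenly spread values and 3 intervals, the cuts land so that each interval holds exactly 4 samples; the discretized index then stands in for the raw numeric value in the rough set analysis.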

How to improve the accuracy of recommendation systems: Combining ratings and review texts sentiment scores (평점과 리뷰 텍스트 감성분석을 결합한 추천시스템 향상 방안 연구)

  • Hyun, Jiyeon;Ryu, Sangyi;Lee, Sang-Yong Tom
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.219-239 / 2019
  • As providing customized services to individuals becomes increasingly important, research on personalized recommendation systems is constantly being carried out. Collaborative filtering is one of the most popular approaches in academia and industry. However, it has the limitation that recommendations are mostly based on quantitative information such as users' ratings, which lowers accuracy. To solve this problem, many studies have attempted to improve the performance of recommendation systems by using information beyond the quantitative ratings; good examples are sentiment analyses of customer review texts. Nevertheless, existing research has not directly combined the results of sentiment analysis with quantitative rating scores in the recommendation system. Therefore, this study aims to reflect the sentiments shown in the reviews in the rating scores. In other words, we propose a new algorithm that converts a user's review into quantitative information and reflects it directly in the recommendation system. To do this, we needed to quantify users' reviews, which were originally qualitative information. In this study, sentiment scores were calculated through the sentiment analysis techniques of text mining. Movie reviews were used as the data, and a domain-specific sentiment dictionary was constructed for them. Regression analysis was used to construct the sentiment dictionary: positive/negative dictionaries were built using Lasso regression, Ridge regression, and ElasticNet. Based on the constructed dictionaries, accuracy was verified through a confusion matrix. The accuracy of the Lasso-based dictionary was 70%, that of the Ridge-based dictionary was 79%, and that of the ElasticNet (α = 0.3) dictionary was 83%.
Therefore, in this study, the sentiment score of each review is calculated based on the ElasticNet dictionary and combined with the rating to create a new rating. In this paper, we show that collaborative filtering that reflects the sentiment scores of user reviews is superior to the traditional method that only considers the existing ratings. The proposed algorithm was compared against memory-based user collaborative filtering (UBCF), item-based collaborative filtering (IBCF), and model-based matrix factorization (SVD and SVD++). The mean absolute error (MAE) and the root mean square error (RMSE) were calculated to compare the recommendation system using the combined scores with the system that only considers ratings. With MAE as the evaluation index, the combined system improved by 0.059 for UBCF, 0.0862 for IBCF, 0.1012 for SVD, and 0.188 for SVD++. With RMSE, the improvements were 0.0431 for UBCF, 0.0882 for IBCF, 0.1103 for SVD, and 0.1756 for SVD++. As a result, the prediction performance of the ratings reflecting the sentiment scores proposed in this paper is superior to that of the conventional evaluation method. In other words, collaborative filtering that reflects the sentiment scores of user reviews shows superior accuracy compared with conventional collaborative filtering that only considers the quantitative scores. We then performed a paired t-test to validate that the proposed model is a better approach, and concluded that it is. In this study, to overcome the limitation of previous research that judged user sentiment only by the quantitative rating score, reviews were numerically quantified and users' opinions were considered in a more refined way in the recommendation system to improve its accuracy.
The findings of this study have managerial implications for recommendation system developers, who need to consider both quantitative and qualitative information. The way of constructing the combined system in this paper might be directly used by such developers.
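The combination step and the error metrics used in this line of work can be sketched as follows. The linear blend and its weight are illustrative assumptions, not the paper's exact formula:

```python
def combined_rating(rating, sentiment, weight=0.5):
    """Blend an explicit star rating with the review's sentiment score
    (both assumed to be on the same scale). The mixing weight is a
    hypothetical parameter, not the paper's fitted value."""
    return weight * rating + (1.0 - weight) * sentiment

def mae(pred, actual):
    """Mean absolute error of the predicted ratings."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def rmse(pred, actual):
    """Root mean square error of the predicted ratings."""
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)) ** 0.5
```

The blended value feeds into collaborative filtering in place of the raw rating, and MAE/RMSE on a held-out set then quantify whether the sentiment-augmented ratings predict better than the raw ones.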