• Title/Summary/Keyword: combination algorithm (결합 알고리즘)

Rapid Hybrid Recommender System with Web Log for Outbound Leisure Products (웹로그를 활용한 고속 하이브리드 해외여행 상품 추천시스템)

  • Lee, Kyu Shik;Yoon, Ji Won
    • KIISE Transactions on Computing Practices / v.22 no.12 / pp.646-653 / 2016
  • The outbound travel market is a rapidly growing global industry and has grown into an 11-trillion-won business. Many recommender systems based on collaborative and content filtering target existing purchase logs or rely on product-similarity studies. These approaches are not very efficient in this domain, because the necessary data is not available in advance and accumulating a sufficient amount of it is relatively slow. An outbound product is typically purchased no more than about twice a year and is priced in the higher range. Since repeat purchases are rare in the outbound market, conventional recommender systems that profile existing customers fall short and have clear limitations. To cope with this data scarcity, we therefore propose an improved customer-profiling method that combines web usage mining, association-rule mining, and a rule-based algorithm to build a faster recommender system for outbound products.
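
As a rough illustration of the association-rule step mentioned in the abstract (not the authors' actual pipeline), the sketch below mines simple pairwise rules from web-log browsing sessions and ranks candidate products for a new session. All product IDs, thresholds, and function names are hypothetical.

from collections import Counter
from itertools import combinations

def mine_pair_rules(sessions, min_support=0.01, min_confidence=0.2):
    """Mine simple A -> B rules from browsing sessions (lists of product IDs)."""
    n = len(sessions)
    item_counts = Counter()
    pair_counts = Counter()
    for session in sessions:
        items = set(session)
        item_counts.update(items)
        pair_counts.update(combinations(sorted(items), 2))

    rules = {}  # antecedent -> [(consequent, confidence), ...]
    for (a, b), count in pair_counts.items():
        if count / n < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            conf = count / item_counts[ante]
            if conf >= min_confidence:
                rules.setdefault(ante, []).append((cons, conf))
    return rules

def recommend(rules, current_session, top_k=5):
    """Score candidate products by the best rule confidence fired by the session."""
    scores = {}
    for item in set(current_session):
        for cons, conf in rules.get(item, []):
            if cons not in current_session:
                scores[cons] = max(scores.get(cons, 0.0), conf)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Example with hypothetical web-log sessions of viewed outbound products
sessions = [["JP-osaka", "JP-tokyo"], ["JP-osaka", "TH-bangkok"],
            ["JP-osaka", "JP-tokyo", "VN-danang"], ["TH-bangkok", "VN-danang"]]
rules = mine_pair_rules(sessions, min_support=0.25, min_confidence=0.3)
print(recommend(rules, ["JP-osaka"]))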

Proposal of a Step-by-Step Optimized Campus Power Forecast Model using CNN-LSTM Deep Learning (CNN-LSTM 딥러닝 기반 캠퍼스 전력 예측 모델 최적화 단계 제시)

  • Kim, Yein;Lee, Seeun;Kwon, Youngsung
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.8-15 / 2020
  • Deep-learning forecasting methods do not produce consistent results across datasets with different characteristics, even when the same forecasting model and parameters are used. For example, a forecasting model X optimized on dataset A would not produce optimal results on another dataset B. The forecasting model therefore needs to be optimized for the characteristics of the dataset to increase its accuracy. This paper proposes optimization steps covering outlier removal, dataset classification, and a CNN-LSTM-based hyperparameter tuning process to forecast the daily power usage of a university campus from hourly data. The proposed model achieves high forecasting accuracy, with a MAPE of about 2% using a single power input variable, and can be used in an energy management system (EMS) to suggest improved strategies to users and consequently improve power efficiency.
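
A minimal sketch of a CNN-LSTM forecaster of the general kind described above, assuming tf.keras and a single power input variable; the window length, layer sizes, and preprocessing are placeholder assumptions, not the paper's tuned configuration.

import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    """Turn an hourly power series into (window, 1) inputs and next-step targets."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X[..., np.newaxis], y

def build_cnn_lstm(window=24):
    """Convolutional front-end for local patterns, LSTM for temporal dependence."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        tf.keras.layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1),
    ])

def mape(y_true, y_pred):
    """Mean absolute percentage error, the accuracy metric quoted in the abstract."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical usage with an hourly campus load series `load`
# X, y = make_windows(load)
# model = build_cnn_lstm()
# model.compile(optimizer="adam", loss="mae")
# model.fit(X, y, epochs=50, validation_split=0.2)
# print("MAPE:", mape(y, model.predict(X).ravel()))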

Automatic Recommendation of Nearby Tourist Attractions related to Events (이벤트와 관련된 주변 관광지 자동 추천 알고리즘 개발)

  • Ahn, Jinhyun;Im, Dong-Hyuk
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.3 / pp.407-413 / 2020
  • Participating in exhibitions is one of the major activities for tourists. When selecting their next destination after attending an event, they use map services and social network services, such as blogs, to obtain information about tourist attractions. Map services provide location-based recommendations because they can easily retrieve information about nearby places, while blogs contain informative content about attractions and thereby support content-based recommendations. However, few services consider both location and content. Location-based recommendation may suggest attractions unrelated to the content of the event attended, whereas content-based recommendation may suggest attractions located far from the event. We propose an algorithm that considers both location and content, based on information from the Korea Tourism Organization's Linked Open Data (LOD), Wikipedia, and a Korean dictionary. A content-based relationship is determined by extracting nouns from the description of each tourist attraction and comparing them with the nouns of other attractions, and the distance to the event is calculated from the latitude and longitude of each attraction. A weight selected by the user linearly combines the distance term with the content-based relationship to determine the preference order of the recommendations.
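
A compact sketch of the scoring idea described in the abstract: noun-set overlap for content relatedness, great-circle distance for proximity, and a user-chosen weight for the linear combination. The noun extraction, normalization, and exact weighting details are illustrative assumptions, not the paper's procedure.

import math

def jaccard(nouns_a, nouns_b):
    """Content relatedness as noun-set overlap."""
    a, b = set(nouns_a), set(nouns_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance from latitude/longitude in degrees."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def rank_attractions(event, attractions, weight=0.5, max_km=50.0):
    """Linearly combine content similarity and (inverted, normalized) distance."""
    scored = []
    for att in attractions:
        content = jaccard(event["nouns"], att["nouns"])
        dist = haversine_km(event["lat"], event["lon"], att["lat"], att["lon"])
        proximity = max(0.0, 1.0 - dist / max_km)
        scored.append((weight * content + (1 - weight) * proximity, att["name"]))
    return sorted(scored, reverse=True)

# Hypothetical event and attraction records (nouns would come from LOD/Wikipedia text)
event = {"nouns": {"craft", "pottery", "festival"}, "lat": 37.57, "lon": 126.98}
attractions = [
    {"name": "Ceramic Village", "nouns": {"pottery", "kiln", "craft"}, "lat": 37.60, "lon": 127.02},
    {"name": "River Park", "nouns": {"walk", "river", "picnic"}, "lat": 37.55, "lon": 126.97},
]
print(rank_attractions(event, attractions, weight=0.7))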

Software Development for Dynamic Positron Emission Tomography : Dynamic Image Analysis (DIA) Tool (동적 양전자방출단층 영상 분석을 위한 소프트웨어 개발: DIA Tool)

  • Pyeon, Do-Yeong;Kim, Jung-Su;Jung, Young-Jin
    • Journal of radiological science and technology / v.39 no.3 / pp.369-376 / 2016
  • Positron emission tomography (PET) is a nuclear medicine examination in which compounds labeled with a radioactive isotope are injected into the body to quantitatively measure metabolic rates. In particular, the increased glucose metabolism of cancer tissue, imaged with $^{18}F$-FDG (fluorodeoxyglucose), is widely exploited in cancer diagnosis, and numerous studies have reported its usefulness in diagnosing brain diseases such as dementia and Parkinson's disease. Using dynamic PET images, which add time information to the static information normally provided for diagnosis, can increase diagnostic accuracy. For this reason dynamic PET (dPET) has attracted great attention from clinical researchers, but tools for conducting such research are lacking, and the complex mathematical algorithms and programming skills required have hindered wider research activity. In this study, to make dPET research easier, we developed software with a graphical user interface (GUI), the Dynamic Image Analysis (DIA) Tool. We expect the DIA Tool to be of great help to many clinical researchers working on dPET.
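
The abstract does not describe the DIA Tool's internals, but a basic operation in dynamic PET analysis is extracting a time-activity curve (TAC) from a region of interest. The following is a generic numpy sketch of that step, assuming a 4D (time, z, y, x) image array and a boolean ROI mask; it is not code from the DIA Tool.

import numpy as np

def time_activity_curve(dynamic_image, roi_mask, frame_times=None):
    """Mean ROI activity per time frame from a 4D (t, z, y, x) dynamic PET array."""
    tac = np.array([frame[roi_mask].mean() for frame in dynamic_image])
    if frame_times is None:
        frame_times = np.arange(len(tac))
    return np.asarray(frame_times), tac

# Hypothetical example: 30 frames of a 32x32x32 volume with a small cubic ROI
dynamic_image = np.random.rand(30, 32, 32, 32)
roi_mask = np.zeros((32, 32, 32), dtype=bool)
roi_mask[14:18, 14:18, 14:18] = True
times, tac = time_activity_curve(dynamic_image, roi_mask)
print(tac.shape)  # (30,)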

Determination of Optimal Unit Hydrographs and Infiltration Rate Functions from Single Rainfall-Runoff Event (단순 강우-유출 사상으로부터 최적단위도와 침투율의 결정)

  • An, Tae-Jin;Ryu, Hui-Jeong;Jeong, Gwang-Geun;Sim, Myeong-Pil
    • Journal of Korea Water Resources Association / v.33 no.3 / pp.365-374 / 2000
  • This paper presents the determination of optimal loss rate parameters and unit hydrographs from a single observed rainfall-runoff event, using optimization models coupled with a stochastic technique for the global solution. Two linear programming models are formulated to derive the optimal unit hydrographs and loss rate parameters for gaged basins: one minimizes the sum of the absolute residuals between predicted and observed runoff ordinates, and the other minimizes the maximum absolute residual. The Multistart algorithm, a stochastic technique for finding the global optimum, is adopted to perturb the parameters of the loss rate equations. Multistart efficiently searches the feasible region to identify the globally optimal loss rate parameters, yielding the optimal loss rate parameters and unit hydrograph for Kostiakov's, Philip's, and Horton's equations. The unit hydrograph ordinates for a given rainfall-runoff event are obtained uniquely with the $\Phi$ index, whereas otherwise they depend on the parameters of each loss rate equation. The parameters of the Green-Ampt equation are determined through a trial-and-error method. A single rainfall-runoff event observed from a watershed is used to test the proposed method, and the optimal unit hydrograph found here shows smaller deviations than those reported previously by other researchers.
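
For reference, a standard linear-programming formulation consistent with the two objectives described in the abstract (sum of absolute residuals and maximum absolute residual); the notation here is ours, not the paper's. With effective rainfall pulses $P_i$ (rainfall minus the parametric loss rate) and observed direct runoff ordinates $Q_j$, the predicted runoff is the discrete convolution $\hat{Q}_j = \sum_{i} P_i\,U_{j-i+1}$ with unit hydrograph ordinates $U_k \ge 0$. The two models are then

$$\min \sum_{j=1}^{n} \big(d_j^{+} + d_j^{-}\big) \quad \text{s.t.} \quad \sum_{i} P_i U_{j-i+1} + d_j^{-} - d_j^{+} = Q_j \ \text{for all } j,\; U_k \ge 0,\; d_j^{+}, d_j^{-} \ge 0,$$

$$\min \; z \quad \text{s.t.} \quad -z \le \sum_{i} P_i U_{j-i+1} - Q_j \le z \ \text{for all } j,\; U_k \ge 0,\; z \ge 0.$$

The deviation variables $d_j^{\pm}$ linearize the absolute values in the first model, and minimizing $z$ reproduces the minimax objective in the second.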

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn;Zang, Byoung-Tak
    • Journal of KIISE:Software and Applications / v.29 no.4 / pp.270-277 / 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems by using machine learning techniques. We apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses, each of which decides the relevance of a document to a topic on the basis of whether or not the document contains a certain word. We begin with an existing AdaBoost algorithm whose weak hypotheses output 1 or -1. We then extend the algorithm to use weak hypotheses with real-valued outputs, which were proposed recently to improve error reduction rates and final filtering performance. Next, we attempt to improve AdaBoost's performance further by setting the initial weights randomly according to the continuous Poisson distribution, running AdaBoost, repeating these steps several times, and then combining all the hypotheses learned. This mitigates the overfitting problem that may occur when learning from a small amount of data. Experiments were performed on the real document collections used in TREC-8, a well-established text retrieval contest; this dataset includes Financial Times articles from 1992 to 1994. The experimental results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. Comparison results for all participants in the TREC-8 filtering task are also provided.
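
A minimal sketch of AdaBoost with word-presence decision stumps, in the spirit of the binary-valued variant the paper starts from; the real-valued-hypothesis and random-Poisson-restart extensions are not shown, and all identifiers and data are illustrative.

import math

def adaboost_text(docs, labels, vocab, rounds=50):
    """docs: list of word sets; labels: +1 (relevant) / -1; vocab: candidate stump words."""
    n = len(docs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, word, polarity)
    for _ in range(rounds):
        best = None
        for word in vocab:
            for polarity in (1, -1):  # predict +1 if word present (or absent)
                err = sum(w for w, d, y in zip(weights, docs, labels)
                          if (polarity if word in d else -polarity) != y)
                if best is None or err < best[0]:
                    best = (err, word, polarity)
        err, word, polarity = best
        err = min(max(err, 1e-10), 1 - 1e-10)        # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, word, polarity))
        # Re-weight: increase the weight of misclassified documents
        new_w = []
        for w, d, y in zip(weights, docs, labels):
            h = polarity if word in d else -polarity
            new_w.append(w * math.exp(-alpha * y * h))
        total = sum(new_w)
        weights = [w / total for w in new_w]
    return ensemble

def classify(ensemble, doc):
    score = sum(alpha * (polarity if word in doc else -polarity)
                for alpha, word, polarity in ensemble)
    return 1 if score >= 0 else -1

# Tiny hypothetical example
docs = [{"stock", "market"}, {"stock", "price"}, {"soccer", "match"}, {"music", "chart"}]
labels = [1, 1, -1, -1]
vocab = {"stock", "price", "soccer", "music", "market"}
model = adaboost_text(docs, labels, vocab, rounds=10)
print(classify(model, {"stock", "exchange"}))  # expected: 1 (relevant)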

LiDAR Ground Classification Enhancement Based on Weighted Gradient Kernel (가중 경사 커널 기반 LiDAR 미추출 지형 분류 개선)

  • Lee, Ho-Young;An, Seung-Man;Kim, Sung-Su;Sung, Hyo-Hyun;Kim, Chang-Hun
    • Journal of Korean Society for Geospatial Information Science / v.18 no.2 / pp.29-33 / 2010
  • The purpose of LiDAR ground classification is to achieve two goals: acquiring confident ground points with high precision and describing the ground shape in detail. Despite many studies on optimized algorithms for this task, it remains very difficult to classify ground points and describe ground shape from airborne LiDAR data, and it is especially difficult in densely forested areas such as Korea. The principal misclassifications are mainly caused by Korea's complex forest canopy hierarchy and the relatively coarse LiDAR point density available for ground classification. Unfortunately, much LiDAR surveying in South Korea is performed in summer, so the resulting point distribution differs greatly from that of Europe. This study therefore proposes an enhanced ground classification method that considers Korean land cover characteristics. First, highly confident candidate LiDAR points are designated as initial ground points using a big-roller classification algorithm. Second, a weighted gradient kernel (WGK) algorithm is applied to find and include likely ground points from the remaining candidates. The method is useful for reconstructing terrain deformed by misclassification, because it detects and includes important terrain model key points that describe the ground shape at the site. In particular, for deformed river bank areas, the WGK algorithm produced substantially improved classification and reconstruction results.
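
The abstract does not give the WGK formula, so the following is only a schematic guess at the second step: a candidate point is promoted to ground if its distance-weighted slope relative to nearby, already-classified ground points stays below a threshold. Every function name, weight, and threshold here is hypothetical, not the paper's definition.

import math

def weighted_slope(candidate, ground_points, radius=5.0):
    """Distance-weighted average slope from a candidate point to nearby ground points.
    Points are (x, y, z) tuples; closer neighbours get larger weights."""
    num, den = 0.0, 0.0
    cx, cy, cz = candidate
    for gx, gy, gz in ground_points:
        d = math.hypot(gx - cx, gy - cy)
        if 0.0 < d <= radius:
            w = 1.0 / d                      # simple inverse-distance weight
            num += w * abs(cz - gz) / d      # slope magnitude to this neighbour
            den += w
    return num / den if den else float("inf")

def densify_ground(candidates, ground_points, max_slope=0.3, radius=5.0):
    """Promote candidates whose weighted slope to the seed ground surface is gentle."""
    ground = list(ground_points)
    for p in candidates:
        if weighted_slope(p, ground, radius) <= max_slope:
            ground.append(p)
    return ground

# Hypothetical seed ground (from the initial big-roller step) and leftover candidates
seed = [(0, 0, 10.0), (2, 0, 10.1), (4, 0, 10.2)]
candidates = [(3, 1, 10.15), (3, 1, 14.0)]   # the second point is likely vegetation
print(len(densify_ground(candidates, seed)))  # 4: only the low point is added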

Interface Capturing for Immiscible Two-phase Fluid Flows by THINC Method (THINC법을 이용한 비혼합 혼상류의 경계면 추적)

  • Lee, Kwang-Ho;Kim, Kyu-Han;Kim, Do-Sam
    • Journal of Korean Society of Coastal and Ocean Engineers / v.24 no.4 / pp.277-286 / 2012
  • In the numerical simulation of wave fields using a multi-phase flow model that considers the simultaneous flow of materials in different states, such as gas, liquid, and solid, an accurate representation of the interface separating the fluids is needed. We adopted an algebraic interface-capturing method, the tangent of hyperbola for interface capturing (THINC) method, to capture the free surface in multi-phase flow simulations, instead of geometric methods such as the volume of fluid (VOF) method. The THINC method uses a hyperbolic tangent function to represent the interface and to compute the numerical flux of the fluid fraction function. One of its remarkable advantages is that it can easily be incorporated into various numerical codes based on Navier-Stokes solvers, because it does not require the extra geometric reconstruction needed by most VOF-type methods. Several tests were carried out to investigate the advection of interfaces and to verify the applicability of the THINC method to wave fields based on the one-field model for immiscible two-phase flows (TWOPM). The numerical results revealed that the THINC method is able to track the interface between air and water even though its algorithm is fairly simple.
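
For context, the one-dimensional form of the THINC reconstruction commonly used in the literature (our summary of the general method, not an excerpt from this paper) represents the fraction field inside cell $i$ with a hyperbolic tangent,

$$\chi_i(x) = \frac{1}{2}\left[1 + \tanh\!\left(\beta\left(\frac{x - x_{i-1/2}}{\Delta x_i} - \tilde{x}_i\right)\right)\right],$$

where $\beta$ controls the interface sharpness and the jump location $\tilde{x}_i$ is fixed by requiring the cell average of $\chi_i$ to equal the cell-averaged volume fraction, $\frac{1}{\Delta x_i}\int_{x_{i-1/2}}^{x_{i+1/2}} \chi_i(x)\,dx = \bar{C}_i$. The numerical flux of the fraction function through a cell face is then obtained by integrating $\chi_i$ over the distance swept by the local velocity during one time step, which is what allows THINC to avoid the geometric reconstruction required by VOF-type methods.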

Creation and labeling of multiple phonotopic maps using a hierarchical self-organizing classifier (계층적 자기조직화 분류기를 이용한 다수 음성자판의 생성과 레이블링)

  • Chung, Dam;Lee, Kee-Cheol;Byun, Young-Tai
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.3 / pp.600-611 / 1996
  • Recently, neural network-based speech recognition has been studied to exploit the adaptivity and learnability of neural network models. However, conventional neural network models have difficulty with the coarticulation processing and boundary detection of similar phonemes in Korean speech. Also, when a single phonotopic map is used, learning may slow down dramatically and inaccuracies may arise, because a homogeneous learning and recognition method must be applied to heterogeneous data. Hence, in this paper a neural-net typewriter is designed using a hierarchical self-organizing classifier (HSOC), and the related algorithms are presented. During its learning stage, the HSOC distributes phoneme data over multiple hierarchically structured phonotopic maps using Kohonen's self-organizing feature maps (SOFM). The paper presents and evaluates algorithms for deciding the number of maps, the map sizes, the selection and placement of phonemes per map, and an appropriate learning and preprocessing method per map. If the maps were divided according to a priori linguistic knowledge, acquiring that knowledge and applying it (e.g., processing extended phonemes) would be difficult. In contrast, our HSOC has the advantage that multiple phonotopic maps suited to the given input data organize themselves. The resulting three Korean phonotopic maps are optimally labeled, have their own optimal preprocessing schemes, and also conform to conventional linguistic knowledge.
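
As background for the Kohonen SOFM building block mentioned above (a generic sketch, not the paper's hierarchical classifier), one training pass looks like this; the map size, learning-rate schedule, neighborhood width, and feature dimensionality are arbitrary choices here.

import numpy as np

def train_sofm(data, map_rows=8, map_cols=8, epochs=20, lr0=0.5, sigma0=3.0):
    """Train a Kohonen self-organizing feature map on feature vectors `data`."""
    rng = np.random.default_rng(0)
    dim = data.shape[1]
    weights = rng.random((map_rows, map_cols, dim))
    grid = np.stack(np.meshgrid(np.arange(map_rows), np.arange(map_cols),
                                indexing="ij"), axis=-1)           # (rows, cols, 2)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                          # decaying rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5              # shrinking radius
        for x in data:
            # Best-matching unit: node whose weight vector is closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU on the map grid
            grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., np.newaxis]
            weights += lr * h * (x - weights)
    return weights

# Hypothetical usage: 12-dimensional MFCC-like frames for a subset of phonemes
frames = np.random.rand(500, 12)
phonotopic_map = train_sofm(frames)
print(phonotopic_map.shape)  # (8, 8, 12)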

3D Pose Estimation of a Human Arm for Human-Computer Interaction - Application of Mechanical Modeling Techniques to Computer Vision (인간-컴퓨터 상호 작용을 위한 인간 팔의 3차원 자세 추정 - 기계요소 모델링 기법을 컴퓨터 비전에 적용)

  • Han Young-Mo
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.4 s.304 / pp.11-18 / 2005
  • To express intention, humans often use body language as well as vocal language, and gestures using the arms and hands are among the most representative forms of body language. It is therefore very important to understand human arm motion in human-computer interaction. In this respect we present how to estimate the 3D pose of human arms using computer vision systems. We first focus on the observation that human arm motion consists mostly of revolute joint motions, and present an algorithm for understanding the 3D motion of a revolute joint from vision. We then apply it to estimating the 3D pose of human arms; the key idea of this extension is that the revolute-joint algorithm can be applied to each of the revolute joints of the human arm one after another. In designing the algorithms we focus on closed-form solutions with high accuracy, because we aim to apply them to human-computer interaction for ubiquitous computing and virtual reality.
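
The abstract does not reproduce the paper's closed-form formulas, so as a generic illustration of closed-form 3D motion estimation from point observations, the sketch below recovers the rotation between two corresponding 3D point sets with the SVD-based (Kabsch) solution. It stands in for, but is not, the authors' revolute-joint algorithm; all data are hypothetical.

import numpy as np

def estimate_rotation(points_before, points_after):
    """Closed-form least-squares rotation R (and translation t) mapping
    points_before onto points_after, via the SVD-based Kabsch solution."""
    P = np.asarray(points_before, dtype=float)
    Q = np.asarray(points_after, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # keep det(R) = +1
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical markers on a forearm observed before/after an elbow rotation
rng = np.random.default_rng(1)
P = rng.random((5, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, 0.0, 0.2])
R_est, t_est = estimate_rotation(P, Q)
print(np.allclose(R_est, R_true, atol=1e-8))  # True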