• Title/Summary/Keyword: Process Step


Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.53-65
    • /
    • 2019
  • Object tracking is one of the important steps in building video-based surveillance systems, and it is considered an essential task alongside object detection and recognition. To perform object tracking, various machine learning methods (e.g., least squares, perceptron and support vector machine) can be applied to different tracking system designs. Generative methods (e.g., principal component analysis) have typically been used because of their simplicity and effectiveness, but they focus only on modeling the target object. Due to this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total error rate minimization has been one of the successful approaches. Total error rate minimization can reach a global minimum thanks to a quadratic approximation to a step function, while other methods (e.g., support vector machine) seek local minima using nonlinear functions (e.g., the hinge loss function). This quadratic approximation gives total error rate minimization desirable properties for solving binary classification optimization problems. However, the method was originally formulated in a batch-mode setting, which limits it to offline learning applications; with limited computing resources, offline learning cannot handle large-scale data sets. Online learning, in contrast, can update its solution without storing all training samples, and as data sets grow it has become an essential property for many applications. Since object tracking must handle data samples in real time, online total error rate minimization methods are needed to address tracking problems efficiently. An online learning based total error rate minimization method was therefore developed, but it relied on an approximately reweighted technique. Despite the approximation, this online version achieved good performance in biometric applications. However, the method assumes that total error rate minimization is reached only asymptotically, as the number of training samples goes to infinity, and the approximation continuously accumulates learning errors as training samples increase. The approximated online solution can therefore drift toward a wrong solution, which causes significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total error rate minimization in an online manner, achieving exact reweighting where the previous method was only approximate. The proposed exact online learning method is then applied to object tracking. Our tracking system adopts particle filtering, and within the particle filter the observation model combines generative and discriminative methods to leverage the advantages of both. In our experiments, the proposed tracking system achieves promising performance on 8 public video sequences compared with competing object tracking systems, and paired t-tests are reported to assess the quality of the results. The proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks, and other online learning methods that require exact reweighting can adopt the proposed reweighting technique. In addition to object tracking, it can be readily applied to object detection and recognition, and thus can contribute to the online learning, object tracking, detection and recognition communities.
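As a rough illustration of the kind of recursive update such an online method relies on, the sketch below maintains a weighted least-squares classifier with rank-1 (Sherman-Morrison) updates so that no past samples need to be stored. It is a generic sketch under assumed placeholder sample weights, not the paper's exact reweighting rule.

```python
# Illustrative sketch only: a generic online (recursive) weighted least-squares
# update of the kind that underlies many online reweighted binary classifiers.
# The exact reweighting rule of the paper is NOT reproduced here; the per-sample
# weight passed to update() is a placeholder assumption.
import numpy as np

class OnlineWeightedLeastSquares:
    def __init__(self, dim, reg=1e-2):
        # P approximates the inverse of the regularized weighted Gram matrix.
        self.P = np.eye(dim) / reg
        self.w = np.zeros(dim)          # current decision vector

    def update(self, x, y, sample_weight=1.0):
        """Incorporate one labeled sample (x, y in {-1, +1}) via a rank-1
        Sherman-Morrison update, so no past samples need to be stored."""
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (1.0 / sample_weight + x @ Px)   # Kalman-style gain vector
        self.w += gain * (y - x @ self.w)
        self.P -= np.outer(gain, Px)

    def decision(self, x):
        return float(np.asarray(x, dtype=float) @ self.w)

# Usage: stream binary-labeled samples (e.g., target-patch vs. background-patch
# features in a tracker) and update the discriminative model frame by frame.
model = OnlineWeightedLeastSquares(dim=4)
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.normal(size=4)
    y = 1.0 if x[0] + 0.5 * x[1] > 0 else -1.0
    model.update(x, y, sample_weight=1.0)   # placeholder weight
print(model.decision(rng.normal(size=4)))
```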

Optimal Operation of Gas Engine for Biogas Plant in Sewage Treatment Plant (하수처리장 바이오가스 플랜트의 가스엔진 최적 운영 방안)

  • Kim, Gill Jung;Kim, Lae Hyun
    • Journal of Energy Engineering
    • /
    • v.28 no.2
    • /
    • pp.18-35
    • /
    • 2019
  • The Korea District Heating Corporation operates a 1,500 kW gas engine generator fueled by about 4,500 m³/day of biogas generated at the sewage treatment plant of the Nanji Water Recycling Center. However, operating experience with the biogas power plant is limited, and the lack of accumulated technology and know-how leads to frequent breakdowns and stoppages of the gas engine, causing considerable economic loss. Fundamental technical measures for stable operation of the power plant are therefore needed. In this study, a series of process problems of the gas engine plant using the biogas generated at the sewage treatment plant of the Nanji Water Recycling Center were identified, and actual operation was optimized by minimizing the problems at each step. To purify the gas, whose impurities are the main cause of failure stoppages, quality standards for the adsorption capacity of the activated carbon were established through component analysis and adsorption tests on the activated carbon currently in use. In addition, standards for the activated carbon replacement cycle to minimize impurities, a tighter hydrogen sulfide measurement schedule, localization of activated carbon, and strengthened and improved plant operation standards were applied to actual operation. As a result, the operating performance of gas engine #1 increased by 530% and that of the second engine by 250%. Improvements to the vent line equipment also reduced the work process and increased normal operation time and the operation rate. In terms of economic efficiency, a sales increase of KRW 77,000/year was also observed. By applying these strengthened and improved operating standards, it is judged that stoppages of the biogas plant can be reduced and the utilization rate increased, providing a practical operational plan.

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.239-251
    • /
    • 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction. However, these statistical methods have not produced superior performance. In recent years, machine learning techniques have been widely used in stock market prediction, including artificial neural networks, support vector machines, and genetic algorithms. In particular, a case-based reasoning method known as k-nearest neighbor (k-NN) is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case. Thus, case-based reasoning may have to take more cases into account even when fewer applicable cases exist for the subject. Second, case-based reasoning may select neighbors that are far away from the target case. It therefore does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability through k-nearest neighbor and compares the predictability of k-nearest neighbor with the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted by dividing the learning dataset into two types. For the prediction of the next day's closing price, we used four variables: opening value, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process. In the second experiment, data from January 1, 2015 to December 31, 2017 were used for the learning process. The test data span January 1, 2018 to August 31, 2018 for both experiments. We compared the performance of k-NN with the random walk model using the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, when the learning data was small. However, the MAPE for the random walk model was 1.3497 and that of k-NN was 1.2928 in the second experiment, when the learning data was large. These results suggest that predictive power is higher when more learning data are used than when less are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets, but not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices. To produce better results, it is also recommended that k-nearest neighbor find nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
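The experimental setup described above can be sketched as follows. The snippet uses synthetic prices in place of the actual Samsung Electronics series and scikit-learn's KNeighborsRegressor; both are illustrative assumptions, not the study's data or implementation.

```python
# Illustrative sketch of k-NN next-day close prediction vs. a random walk,
# evaluated with MAPE, on synthetic OHLC data (an assumption).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

rng = np.random.default_rng(42)
close = np.cumsum(rng.normal(0, 1, 1000)) + 100            # synthetic closing prices
opens, highs, lows = close + rng.normal(0, .5, 1000), close + 1, close - 1
X = np.column_stack([opens, highs, lows, close])[:-1]       # today's open/high/low/close
y = close[1:]                                               # next day's close

split = 800                                                 # train/test split point
knn = KNeighborsRegressor(n_neighbors=5).fit(X[:split], y[:split])
pred_knn = knn.predict(X[split:])
pred_rw = close[split:-1]                                   # random walk: predict today's close
print("k-NN MAPE:        %.4f" % mape(y[split:], pred_knn))
print("Random-walk MAPE: %.4f" % mape(y[split:], pred_rw))
```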

A Study on Training Dataset Configuration for Deep Learning Based Image Matching of Multi-sensor VHR Satellite Images (다중센서 고해상도 위성영상의 딥러닝 기반 영상매칭을 위한 학습자료 구성에 관한 연구)

  • Kang, Wonbin;Jung, Minyoung;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1505-1514
    • /
    • 2022
  • Image matching is a crucial preprocessing step for effective utilization of multi-temporal and multi-sensor very high resolution (VHR) satellite images. Deep learning (DL) methods, which are attracting widespread interest, have proven to be an efficient approach for measuring the similarity between image pairs quickly and accurately by extracting complex and detailed features from satellite images. However, image matching of VHR satellite images remains challenging because the results of DL models depend on the quantity and quality of the training dataset, and because creating a training dataset from VHR satellite images is difficult. Therefore, this study examines the feasibility of a DL-based method for matching pair extraction, the most time-consuming process during image registration. This paper also aims to analyze how the configuration of the training dataset affects accuracy when the dataset is developed from an existing, biased multi-sensor VHR image database for DL-based image matching. For this purpose, the generated training dataset was composed of correct and incorrect matching pairs by assigning true and false labels to image pairs extracted with a grid-based Scale Invariant Feature Transform (SIFT) algorithm from a total of 12 multi-temporal and multi-sensor VHR images. The Siamese convolutional neural network (SCNN), proposed for matching pair extraction, is trained on the constructed dataset and measures similarity by passing the two images in parallel through two identical convolutional neural network branches. The results of this study confirm that data acquired from a VHR satellite image database can be used as a DL training dataset and indicate the potential to improve the efficiency of the matching process through appropriate configuration of multi-sensor images. DL-based image matching techniques using multi-sensor VHR satellite images are expected to replace existing manual feature extraction methods thanks to their stable performance, and to develop further into an integrated DL-based image registration framework.
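A minimal sketch of a Siamese matching network of the kind described above is given below. The single-band 64x64 patch size, layer widths and training step are illustrative assumptions, not the SCNN configuration used in the paper.

```python
# Minimal Siamese CNN sketch for patch-pair matching: two identical (shared-weight)
# branches embed the patches, and a small head outputs a match probability.
import torch
import torch.nn as nn

class SiameseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared branch applied to both patches ("identical CNN structures").
        self.branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
        )
        # Head mapping the concatenated embeddings to a match probability.
        self.head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, a, b):
        ea, eb = self.branch(a), self.branch(b)          # shared weights
        return torch.sigmoid(self.head(torch.cat([ea, eb], dim=1)))

# One training step on true/false pairs, as labeled in the dataset described above.
model = SiameseCNN()
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
a, b = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)   # dummy patch pairs
labels = torch.randint(0, 2, (8, 1)).float()                  # 1 = correct match
loss = loss_fn(model(a, b), labels)
opt.zero_grad(); loss.backward(); opt.step()
```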

Optimization of Characteristic Change due to Differences in the Electrode Mixing Method (전극 혼합 방식의 차이로 인한 특성 변화 최적화)

  • Jeong-Tae Kim;Carlos Tafara Mpupuni;Beom-Hui Lee;Sun-Yul Ryou
    • Journal of the Korean Electrochemical Society
    • /
    • v.26 no.1
    • /
    • pp.1-10
    • /
    • 2023
  • The cathode, one of the four major components of a lithium secondary battery, is an important component responsible for the energy density of the battery. The mixing of active material, conductive material, and polymer binder is essential in the commonly used wet manufacturing process of the cathode. However, because there is no systematic method for setting the mixing conditions of the cathode, performance differences usually occur depending on the manufacturer. Therefore, LiMn2O4 (LMO) cathodes were prepared using a commonly used THINKY mixer and a homogenizer to optimize the mixing method in the cathode slurry preparation step, and their characteristics were compared. Each mixing condition was performed at 2,000 RPM for 7 min, and to isolate the effect of the mixing method during cathode manufacture, the other experimental conditions (mixing time, material input order, etc.) were kept constant. Between the THINKY-mixer LMO (TLMO) and homogenizer LMO (HLMO) cathodes, HLMO shows more uniform particle dispersion than TLMO and thus higher adhesive strength. Electrochemical evaluation also reveals that the HLMO cathode showed improved performance with a more stable cycle life than TLMO. The retention of initial discharge capacity for HLMO at 69 cycles was 88%, about 4.4 times higher than that of TLMO; in terms of rate capability, HLMO exhibited better capacity retention even at high C-rates of 10, 15, and 20 C, and its capacity recovery at 1 C was higher than that of TLMO. It is postulated that the homogenizer improves the characteristics of the slurry containing the active material, conductive material, and polymer binder by uniformly dispersing the conductive material, suppressing its strong electrostatic attraction and thus avoiding aggregation, which creates an electrically conductive network. As a result, surface contact between the active material and the conductive material increases, electrons move more smoothly, changes in lattice volume during charging and discharging become more reversible, and contact resistance between the active material and the conductive material is suppressed.

A Study on the Effect of Healing Experience Program on Satisfaction: Focused on Experience Cost and Experience Time (치유체험프로그램이 만족도에 미치는 영향에 관한 연구: 체험비용과 체험시간을 중심으로)

  • An, Hye-Jung;Kan, Soon-Ah
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.17 no.3
    • /
    • pp.183-200
    • /
    • 2022
  • This study examines the effect of a healing experience program on satisfaction in the field of healing agriculture. In developing a rural experience program, we analyze which factors constituting the healing experience program affect satisfaction, and how experience time and participation cost affect satisfaction from a marketing perspective. By examining the effect of experience cost and experience time on satisfaction with the healing experience program, we suggest a direction for developing such programs. To this end, we empirically analyze the effect of the healing experience program on satisfaction with experience cost and experience time as mediating variables, and present a theoretical basis for priorities to consider when developing a rural experience program. The sub-factors of the experience program are entertainment experience, educational experience, deviant experience, and aesthetic experience; experience time and experience cost are the mediating variables; and satisfaction is the dependent variable. In addition, the reliability of the research results was secured by setting the demographic variables of the survey respondents as control variables. The empirical analysis was conducted on 314 valid questionnaires collected from an unspecified general population who were interested in or aware of the healing experience program. SPSS v22.0 was used, and to test the mediating effect, the three-step verification method of Baron & Kenny (1986) and Model No. 4 of the SPSS PROCESS Macro of Andrew F. Hayes (2018) were applied; the reliability of the mediating effect was secured by comparing the results of the two verification methods. The study found that educational experience (β=.134, t=1.759*) had a positive (+) effect on experience cost, and aesthetic experience (β=.144, t=1.684*) had a positive (+) effect on experience time. Educational experience (β=.239, t=4.112***) and aesthetic experience (β=.330, t=4.921***) were also found to have positive (+) effects on satisfaction. Experience time was found to have a negative (-) inconsistent mediating effect between aesthetic experience and satisfaction: the total effect was β=.330 (t=4.921***), the direct effect when experience time was entered rose to β=.349 (t=5.241***), an increase of β=.019 over the total effect, while the indirect effect was β=-.019, indicating a negative (-) mediating effect.
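For readers unfamiliar with the mediation test, the following sketch reproduces the Baron & Kenny three-step logic on simulated data. The variable names (aesthetic, exp_time, satisfaction) and coefficients are placeholders for the survey constructs, not the study's actual data.

```python
# Sketch of Baron & Kenny style mediation on simulated data (placeholders only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 314
aesthetic = rng.normal(size=n)                                # independent variable X
exp_time = 0.15 * aesthetic + rng.normal(size=n)              # mediator M
satisfaction = 0.33 * aesthetic - 0.12 * exp_time + rng.normal(size=n)  # outcome Y
df = pd.DataFrame(dict(aesthetic=aesthetic, exp_time=exp_time,
                       satisfaction=satisfaction))

total = smf.ols("satisfaction ~ aesthetic", df).fit()               # c  (total effect)
m_on_x = smf.ols("exp_time ~ aesthetic", df).fit()                  # a
direct = smf.ols("satisfaction ~ aesthetic + exp_time", df).fit()   # c' and b
indirect = m_on_x.params["aesthetic"] * direct.params["exp_time"]   # a * b
print("total c   =", round(total.params["aesthetic"], 3))
print("direct c' =", round(direct.params["aesthetic"], 3))
print("indirect ab =", round(indirect, 3))   # c = c' + ab; opposite signs -> inconsistent mediation
```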

An Interpretation of the Korean Fairy-Tale "Borrowed Fortune From Heaven" From the Perspective of Analytical Psychology (한국민담 <하늘에서 빌려온 복>에 대한 분석심리학적 이해)

  • Kihong Baek
    • Sim-seong Yeon-gu
    • /
    • v.38 no.1
    • /
    • pp.112-160
    • /
    • 2023
  • This study examined the Korean folktale "Borrowed Fortune from Heaven" from the perspective of analytical psychology, considering it a manifestation of the human psyche, and tried to gain a deeper understanding of what happens in our mind. Through this exploration, the researcher was able to re-identify the ongoing psychological process operating in the depths of our mind, pertaining to the emergence of a new dimension of consciousness. In particular, the researcher gained some insight into how the potential psychic elements for the new consciousness are prepared in the unconscious, how they become integrated into conscious life, and what is essential for the accomplishment of this process. The tale begins with a poor woodcutter who, in order to escape poverty, starts gathering twice as much firewood. However, the newly acquired amount disappears overnight, so the woodcutter becomes perplexed and curious about where it goes and who is taking it. He seeks to find out the truth, which leads him on an unexpected journey to Heaven. There he learns the truth about his very small portion of fortune and discovers another great fortune set aside for an unborn person. By pleading with the ruler of Heaven, the woodcutter borrows that grand fortune, on the condition that he must return it to its owner when the time comes. After that, the woodcutter's life undergoes a series of changes in which he finally becomes a wealthy farmer, but he is reminded more and more that the destined time is approaching. In the end, the fortune is completely transferred to the original owner, resulting in a dramatic twist and the creation of new life circumstances. The overall plot can be understood as a reflection of the psychological process aiming at the evolution of consciousness through renewal. In this context, the woodcutter can be considered a psychic element that undergoes a continuous transformation in preparation for participating in the upcoming new consciousness. In other words, the changes brought about by this figure can be interpreted as a gradual and increasingly detailed foreshadowing of what the forthcoming new consciousness will be like. Interestingly, as the destined time approaches, the protagonist's anguish and conflict reach their climax, despite his good performance in his role until then. This effectively portrays the difficulty of achieving a new dimension of consciousness, which requires moving past the last step. All the events in the story ultimately converge at this point. After all, the resolution occurs when the protagonist lets go of everything he has and follows the will of Heaven. This implies what is essential for the renewal of consciousness. Only by fully complying with the totality of the mind can the potential constituents of the new consciousness, which should play important roles in its renewal and evolution through experience, participate in the ultimate outcome. As long as they remain trapped in any intermediate stage, the totality of the psyche will develop another detour toward the final destination, which means the beginning of another period of suffering carrying a purposeful meaning. The tale suggests that this truth applies wherever renewal of consciousness is sought, whether for an individual or a society.

A Study on Improvements on Legal Structure on Security of National Research and Development Projects (과학기술 및 학술 연구보고서 서비스 제공을 위한 국가연구개발사업 관련 법령 입법론 -저작권법상 공공저작물의 자유이용 제도와 연계를 중심으로-)

  • Kang, Sun Joon;Won, Yoo Hyung;Choi, San;Kim, Jun Huck;Kim, Seul Ki
    • Proceedings of the Korea Technology Innovation Society Conference
    • /
    • 2015.05a
    • /
    • pp.545-570
    • /
    • 2015
  • Korea is among the ten countries with the largest R&D budget and the highest R&D investment-to-GDP ratio, yet the subject of security and protection of R&D results remains relatively unexplored in the country. Other countries have implemented in their legal systems measures to properly protect cutting-edge industrial technologies that would adversely affect national security and the economy if leaked abroad. While Korea has a generally stable legal framework as provided in the Regulation on the National R&D Program Management (the "Regulation") and the Act on Industrial Technology Protection, many difficulties arise in practice when determining details of security management and obligations and setting standards in carrying out national R&D projects. This paper proposes to modify and improve the security level classification standards in the Regulation. The Regulation provides a dual security level decision-making system for R&D projects: the security level can be determined either by the researcher or by the central agency in charge of the project. Unifying such a dual system can avoid unnecessary confusion. To prevent leakage, it is crucial that research projects be carried out in compliance with their assigned security levels and standards and that results be effectively managed. The paper examines, from a practitioner's perspective, relevant legal provisions on leakage of confidential R&D projects, infringement, injunction, punishment, attempt and conspiracy, dual liability, the duty to report the security management process to the National Intelligence Service (the "NIS"), other security issues arising from national R&D projects, and manual drafting in case of a breach. The paper recommends training security and technological experts, such as industrial security experts, to properly amend laws on security level classification standards and relevant technological content. A quarterly policy development committee must also be set up by the NIS in cooperation with relevant organizations. The committee shall provide a project management manual that gives step-by-step guidance for organizations carrying out national R&D projects as a preventive measure against possible leakage. In the short term, the duties of the NIS National Industrial Security Center should be expanded to incorporate the security of national R&D projects. In the long term, a security task force must be set up to protect, support and manage the projects, whose responsibilities should include research, policy development, public relations and training on security-related issues. Through these means, a social consensus must be reached on the need for protecting national R&D projects. The most efficient way to implement these measures is to facilitate security training programs and meetings that provide opportunities for communication among industrial security experts and researchers. Furthermore, the Regulation's security provisions must be examined and improved.


Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.117-127
    • /
    • 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread use of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information and expand their human relationships. In social network services, relations between users are represented by a graph consisting of nodes and links. As the number of users increases rapidly, SNS data are actively utilized in enterprise marketing, analysis of social phenomena and so on. Social network analysis (SNA) is a systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as the degree of intimacy and intensity of connection, and classify groups. Since social networking services (SNS) have drawn increasing attention from millions of users, numerous studies have been conducted to analyze their user relationships and messages. Typical SNA methods include degree centrality, betweenness centrality and closeness centrality. In degree centrality analysis, the shortest path between nodes is not considered, but it is a crucial factor in betweenness centrality, closeness centrality and other SNA methods. In previous SNA research, computation time was not a serious problem because social networks were small. Unfortunately, most SNA methods require significant time to process relevant data, which makes it difficult to apply them to ever-increasing SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; with 10,000 nodes there can be up to 49,995,000 links, which makes analysis very expensive. Therefore, we propose a heuristic method for finding shortest paths among users in the SNS user graph. Using this shortest path finding method, we show how efficient our proposed approach can be by conducting betweenness centrality and closeness centrality analyses, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first search and a preprocessing step to reduce computation time and rapidly search for shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Because a large number of links are concentrated on only a few nodes in online social networks, most nodes have relatively few connections; a node with many connections therefore functions as a hub. When searching for a particular node, looking for users with numerous links instead of searching all users indiscriminately gives a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between distinct nodes.
When the heuristic evaluation function is used, the worst case occurs when the target node is located at the bottom of a skewed tree; the preprocessing step is conducted to handle such target nodes. We then find shortest paths between nodes efficiently and analyze the social network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network. We then compared it with previous methods, best-first search and breadth-first search, in terms of search and analysis time. The suggested method takes 240 seconds to search nodes, whereas the breadth-first-search-based method takes 1,781 seconds (7.4 times faster). Moreover, for social network analysis, the suggested method is 6.8 times and 1.8 times faster than conventional betweenness centrality and closeness centrality analyses, respectively. The proposed method shows that it is possible to analyze a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful for studying social trends or phenomena.
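A compact sketch of the degree-heuristic best-first search described above is shown below. It is a greedy illustration on a toy adjacency-list graph and omits the paper's preprocessing step; it returns a path found heuristically, not a guaranteed shortest path.

```python
# Best-first search using node degree as the heuristic evaluation function:
# neighbors with more links (hub nodes) are expanded first.
import heapq

def best_first_path(graph, start, goal):
    """graph: dict node -> set of neighbor nodes. Returns a path or None."""
    # Priority = -degree, so hub nodes (many links) are popped first.
    frontier = [(-len(graph[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (-len(graph[nbr]), nbr, path + [nbr]))
    return None

# Toy usage: 'c' is a hub, so the search reaches the target through it quickly.
g = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd', 'e'},
     'd': {'c'}, 'e': {'c'}}
print(best_first_path(g, 'a', 'e'))   # e.g. ['a', 'c', 'e']
```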

The Ontology Based, the Movie Contents Recommendation Scheme, Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.25-44
    • /
    • 2013
  • Access to movie content has become easier and more frequent with the advent of smart TV, IPTV and web services that can be used to search for and watch movies. In this situation, users increasingly search for movie content that matches their preferences. However, because the amount of available movie content is so large, users need considerable effort and time to find it. Hence, there has been much research on personalized item recommendation through analysis and clustering of user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between movie metadata but also relations between metadata and user profiles, and the relations among metadata can indicate similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we select the main metadata, genre, actor/actress, keywords and synopsis, which affect which movies users choose. The user model contains the user's demographic information and relations between the user and the movie metadata. In our model, the movie ontology model consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name) and ten relations between concepts. For the knowledge base, we input individual data of 14,374 movies for each concept of the content ontology model. This movie metadata knowledge base is used to search for movies related to the metadata the user is interested in, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation, consisting of four components. The first component searches candidate movies based on the demographic information of the user. In this component, we divide users into groups according to demographic information, define the rules for assigning users to groups, and generate the query used to search candidate movies for recommendation. The second component searches candidate movies based on user preferences. When choosing a movie, users consider metadata such as genre, actor/actress, synopsis and keywords; users input their preferences, and the system searches for movies accordingly. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of the recommended candidate movies has a weight that is used to decide the recommendation order. The third component merges the results of the first and second components. In this step, we calculate the weight of each movie from the weight values of its metadata and then sort the movies by weight, as sketched below. The fourth component analyzes the result of the third component, decides the contribution level of each metadata item, and applies the contribution weight to the metadata. Finally, the result of this step is used as the recommendation for users. We test the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript and the Protégé API.
In our experiment, we collected responses from 20 men and women ranging in age from 20 to 29, and used 7,418 movies with ratings of at least 7.0. We provided Top-5, Top-10 and Top-20 recommended movies to the users, who then selected the movies they found interesting. The average number of interesting movies chosen was 2.1 in the Top-5, 3.35 in the Top-10 and 6.35 in the Top-20 lists, which is better than the results obtained using each metadata item alone.
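As a rough sketch of the merge-and-rank step referenced above (third and fourth components), the snippet below combines two candidate lists and orders movies by the weighted sum of their metadata scores. The metadata names and contribution weights are made-up placeholders, not values from the paper.

```python
# Illustrative merge-and-rank step: combine candidate lists from the demographic
# and preference components, then order movies by weighted metadata scores.
def rank_candidates(demographic_cands, preference_cands, contribution):
    """Each candidate dict maps movie title -> {metadata: score}.
    contribution maps metadata -> weight reflecting its contribution level."""
    merged = {}
    for cands in (demographic_cands, preference_cands):
        for title, meta_scores in cands.items():
            acc = merged.setdefault(title, {})
            for meta, score in meta_scores.items():
                acc[meta] = acc.get(meta, 0.0) + score
    weighted = {title: sum(contribution.get(m, 1.0) * s for m, s in scores.items())
                for title, scores in merged.items()}
    return sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage producing a Top-N style ordering (placeholder titles and weights).
demo = {"Movie A": {"genre": 0.6}, "Movie B": {"genre": 0.3, "actor": 0.5}}
pref = {"Movie A": {"keyword": 0.4}, "Movie C": {"synopsis": 0.7}}
weights = {"genre": 1.0, "actor": 0.8, "keyword": 0.6, "synopsis": 0.5}
print(rank_candidates(demo, pref, weights)[:2])     # Top-2 recommendations
```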