• Title/Summary/Keyword: Accuracy

Search results: 33,648

Genotype Frequencies of the Sex-Linked Feathering and Their Phenotypes in Domestic Chicken Breeds for the Establishment of Auto-Sexing Strains (자가성감별 계통 조성을 위한 국내 토종 닭의 깃털 조만성 양상과 유전자형 빈도)

  • Sohn, Sea-Hwan;Park, Dhan-Bee;Song, Hae-Ran;Cho, Eun-Jung;Kang, Bo-Seok;Suh, Ok-Suk
    • Journal of Animal Science and Technology / v.54 no.4 / pp.267-274 / 2012
  • Sexing chicks by differences in the rate of feather growth is a convenient and inexpensive approach. The feathering gene (K) is located on the Z chromosome, and this locus can be used to produce phenotypes that distinguish the sexes of chicks at hatching. To establish auto-sexing native chicken strains, this study analyzed the genotype frequencies of the feathering locus in domestic chicken breeds and examined how slow- and rapid-feathering chicks are classified. In slow-feathering chicks, the coverts were the same length as or longer than the primary wing feathers at hatching, whereas in rapid-feathering chicks the primary wing feathers were longer than the coverts. The growth pattern of the tail feathers also differed distinctly between rapid- and slow-feathering chicks after five days. The accuracy of wing-feather sexing was about 98% compared with tail sexing. Among the domestic breeds, Korean Black Cornish, Korean Rhode Island Red, and Korean Native Chicken-Red carried both the dominant (K) and recessive ($k^+$) feathering alleles, whereas Korean Brown Cornish, Ogol, White Leghorn, and Korean Native Chicken-Yellow, -Gray, -White and -Black carried only the recessive allele ($k^+$). Consequently, feather sexing is feasible with domestic chicken breeds. When a maternal stock carrying the dominant allele (K-) is mated with a paternal stock homozygous for the recessive allele ($k^+k^+$), the slow-feathering trait is passed from mothers to their sons, and the rapid-feathering trait is inherited by daughters from their fathers.
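The Z-linked cross described in this abstract can be enumerated in a few lines. The sketch below is illustrative only; the allele labels follow the abstract's K/$k^+$ notation, and the code is not taken from the study.

```python
# Minimal sketch: enumerating the Z-linked feathering cross used for auto-sexing.
# Chickens are ZZ (male) or ZW (female); K (slow feathering) is dominant over
# k+ (rapid feathering).
from itertools import product

def phenotype(genotype):
    """Slow feathering if at least one dominant K allele is carried on a Z."""
    z_alleles = [a for a in genotype if a != "W"]
    return "slow" if "K" in z_alleles else "rapid"

dam_gametes = ["K", "W"]      # slow-feathering dam (Z^K W) passes Z^K or W
sire_gametes = ["k+", "k+"]   # rapid-feathering sire (Z^k+ Z^k+) always passes Z^k+

for maternal, paternal in product(dam_gametes, sire_gametes):
    genotype = (maternal, paternal)
    sex = "female" if "W" in genotype else "male"
    print(sex, genotype, "->", phenotype(genotype))

# All sons (Z^K Z^k+) come out slow-feathering and all daughters (Z^k+ W)
# rapid-feathering, so chicks can be sexed by wing feathers at hatch.
```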

Partial transmission block production for real efficient method of block and MLC (Partial transmission block 제작 시 real block과 MLC를 이용한 방법 중 효율적인 방법에 대한 고찰)

  • Choi JiMin;Park JuYoung;Ju SangGyu;Ahn JongHo
    • The Journal of Korean Society for Radiation Therapy / v.16 no.2 / pp.19-24 / 2004
  • Introduction: In treating vaginal, urethral, vulvar, and anal cancers, excessive dose to the femoral head must be avoided while the inguinal lymph nodes require additional treatment. This additional inguinal dose can be delivered either with a fabricated partial transmission block or with the MLC, and this study compares the two methods. Materials & Methods: For patients treated to the inguinal lymph nodes, both a partial transmission block and the MLC were used, and the treatment geometry was reproduced at the same depth in a solid water phantom. To analyze the error at the field junction, EDR2 film (Extended Dose Range, Kodak, USA) was used, and the films were scanned to obtain beam profiles. The partial transmission block and the MLC were compared and analyzed in terms of fabrication time, accuracy, and stability. Results: Compared with the MLC, the partial transmission block is difficult to fabricate, requiring more than one hour of production time, and correcting the junction error of a custom block is difficult. With the MLC no fabrication is needed; only periodic calibration of the MLC is required, so it can be used easily. Conclusion: Both the partial transmission block and the MLC have their own merits for inguinal lymph node treatment, but the block takes a long time to fabricate and adjusting the block at the junction is difficult. Replacing the block with the MLC for these cases should enable more effective treatment.


Evaluation of Usefulness of Iterative Metal Artifact Reduction(IMAR) Algorithm In Proton Therapy Planning (양성자 치료계획에서 Iterative Metal Artifact Reduction(IMAR) Algorithm 적용의 유용성 평가)

  • Han, Young Gil;Jang, Yo Jong;Kang, Dong Heok;Kim, Sun Young;Lee, Du Hyeon
    • The Journal of Korean Society for Radiation Therapy / v.29 no.1 / pp.49-56 / 2017
  • Purpose: To evaluate the accuracy of the Iterative Metal Artifact Reduction (IMAR) algorithm in correcting CT (computed tomography) images distorted by metal artifacts, and to evaluate its usefulness when a proton therapy plan is created using the IMAR-corrected images. Materials and Methods: A CT simulator was used to acquire images of the CIRS Model 062 phantom without metal inserts and with metal inserts that produced artifacts. We compared the CT numbers of the metal-free, artifact-affected, and IMAR-corrected images by setting ROI 1 and ROI 2 at the same positions in the phantom. In addition, the CT numbers of the tissue equivalents located near the metal were compared. For the Rando phantom evaluation, CT images were acquired with a titanium rod inserted into the spinal region of the Rando phantom to model a patient who had undergone spinal implant surgery. The same proton therapy plan was then created on each image set, and the differences in range at three sites were compared. Results: In the CIRS phantom evaluation, the CT numbers without metal were -6.5 HU at ROI 1 and -10.5 HU at ROI 2. With metal present, the values at ROI 1 were -148.1, -45.1, and -151.7 HU for Fe, Ti, and W, respectively, and rose to -0.9, -2.0, and -1.9 HU when the IMAR algorithm was applied. At ROI 2 the values with metal were 171.8, 63.9, and 177.0 HU and decreased to 10.0, 6.7, and 8.1 HU after applying the IMAR algorithm. The CT numbers of the tissue equivalents were corrected close to their original values, except for the lung equivalent located farthest from the metal. In the Rando phantom evaluation, the mean CT numbers without metal, with metal, and with the IMAR algorithm applied were 9.9, -202.8, and 35.1 HU at ROI 1, and 9.0, 107.1, and 29 HU at ROI 2. When the IMAR algorithm was applied, the difference in proton beam range relative to the metal-free images was reduced on average by 0.26 cm at point 1, 0.20 cm at point 2, and 0.12 cm at point 3. Conclusion: Applying the IMAR algorithm corrected the CT numbers close to the original values obtained without metal. In the proton beam profiles, the range difference after applying the IMAR algorithm was reduced by 0.01 to 3.6 mm. Slight differences remained compared with the metal-free images, but applying the IMAR algorithm is expected to produce less error than the conventional approach.
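The ROI-based CT-number comparison described here can be sketched as follows; the array contents, ROI coordinates, and variable names are hypothetical placeholders, not data or code from the study.

```python
# Illustrative only: mean CT number (HU) inside a circular ROI, compared across
# a metal-free, an artifact-affected, and an IMAR-corrected slice.
import numpy as np

def roi_mean_hu(ct_slice, center, radius):
    """Mean HU inside a circular ROI of a 2-D CT slice (row, col coordinates)."""
    rows, cols = np.ogrid[:ct_slice.shape[0], :ct_slice.shape[1]]
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    return float(ct_slice[mask].mean())

# In practice these slices would be loaded from DICOM; here they are synthetic.
rng = np.random.default_rng(0)
ct_no_metal = np.full((256, 256), -8.0)
ct_with_metal = ct_no_metal + rng.normal(0.0, 120.0, (256, 256))
ct_imar = ct_no_metal + rng.normal(0.0, 5.0, (256, 256))

roi1 = ((128, 100), 10)   # (center, radius) in pixels, hypothetical
for label, image in [("no metal", ct_no_metal),
                     ("metal artifact", ct_with_metal),
                     ("IMAR corrected", ct_imar)]:
    print(f"ROI 1 mean HU ({label}): {roi_mean_hu(image, *roi1):.1f}")
```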


Evaluation of MODIS-derived Evapotranspiration at the Flux Tower Sites in East Asia (동아시아 지역의 플럭스 타워 관측지에 대한 MODIS 위성영상 기반의 증발산 평가)

  • Jeong, Seung-Taek;Jang, Keun-Chang;Kang, Sin-Kyu;Kim, Joon;Kondo, Hiroaki;Gamo, Minoru;Asanuma, Jun;Saigusa, Nobuko;Wang, Shaoqiang;Han, Shijie
    • Korean Journal of Agricultural and Forest Meteorology / v.11 no.4 / pp.174-184 / 2009
  • Evapotranspiration (ET) is one of the major hydrologic processes in terrestrial ecosystems. A reliable estimation of spatially representative ET is necessary for deriving regional water budgets, the primary productivity of vegetation, and the feedbacks of the land surface to regional climate. The Moderate Resolution Imaging Spectroradiometer (MODIS) provides an opportunity to monitor ET over wide areas at a daily time scale. In this study, we applied a MODIS-based ET algorithm and tested its reliability at nine flux tower sites in East Asia. This is a stand-alone MODIS algorithm based on the Penman-Monteith equation that uses input data derived from MODIS. Instantaneous ET was estimated and scaled up to daily ET. For six flux sites, the MODIS-derived instantaneous ET showed good agreement with the measured data ($r^2$ = 0.38 to 0.73, ME = -44 to $+31\;W\;m^{-2}$, RMSE = 48 to $111\;W\;m^{-2}$). However, poor agreement was observed for the other three sites. The predictability of MODIS ET improved when the up-scaled daily ET was used ($r^2$ = 0.48 to 0.89, ME = -0.7 to $-0.6\;mm\;day^{-1}$, RMSE = 0.5 to $1.1\;mm\;day^{-1}$). Errors in the canopy conductance were identified as a primary source of uncertainty in MODIS-derived ET; hence, a more reliable estimation of canopy conductance is necessary to increase the accuracy of MODIS ET.
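For reference, the core of a Penman-Monteith-based ET algorithm is the standard latent heat flux form (the MODIS-specific parameterization of the inputs and resistances is not detailed in this abstract): $\lambda E = \frac{\Delta(R_n - G) + \rho_a c_p (e_s - e_a)/r_a}{\Delta + \gamma(1 + r_s/r_a)}$, where $\Delta$ is the slope of the saturation vapor pressure curve, $R_n$ the net radiation, $G$ the soil heat flux, $\rho_a$ the air density, $c_p$ the specific heat of air, $e_s - e_a$ the vapor pressure deficit, $r_a$ the aerodynamic resistance, $r_s$ the surface (canopy) resistance, and $\gamma$ the psychrometric constant; instantaneous $\lambda E$ is then converted to ET and scaled to a daily value.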

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.99-118 / 2015
  • Object-oriented programming languages have been widely selected for developing modern information systems. The use of concepts relating to object-oriented (OO, in short) programming has reduced the effort of reusing pre-existing code, and the OO concepts have proved to be useful in interpreting system requirements. In line with this, we have witnessed that modern conceptual modeling approaches support features of object-oriented programming. The Unified Modeling Language, or UML, has become one of the de-facto standards for information system designers since the language provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider relationships between classes. Based on an explicit and clear representation of classes, the conceptual model from UML gathers the necessary attributes and methods for guiding software engineers. In particular, identifying an association between a class of part and a class of whole is included in the standard grammar of UML. The representation of a part-whole relationship is natural in a real-world domain, since many physical objects are perceived in terms of part-whole relationships. In addition, even abstract concepts such as roles are easily identified by part-whole perception. It seems that a representation of part-whole in UML is reasonable and useful. However, it should be admitted that the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it into an aggregate or a composite association. Research efforts on developing such procedural knowledge are meaningful and timely, because a misleading perception of a part-whole relationship is hard to filter out during initial conceptual modeling and thus degrades system usability. Current methods for identifying and classifying part-whole relationships rely mainly on linguistic expressions. This simple approach is rooted in the idea that a phrase expressing has-a constructs a part-whole perception between objects. If the relationship is strong, the association is classified as a composite association of the part-whole relationship; in other cases, the relationship is an aggregate association. Admittedly, linguistic expressions contain clues for part-whole relationships; therefore, the approach is reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy. Research efforts on developing guidelines for part-whole identification and classification have not accumulated sufficient results to solve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on the theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes were developed. To evaluate the performance of the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with a natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared with traditional approaches that aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the process of modeling.
Traditional theories on evaluating part-whole relationships in the context of conceptual modeling aim to rule out incomplete or wrong representations. Qualification remains important, but the lack of a practical alternative may reduce the appropriateness of posterior inspection for modelers who want to reduce errors or misperceptions about part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. In addition, it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of guidelines, including plugins for integrated development environments.
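To ground the aggregation/composition distinction that the guidelines target, the following sketch contrasts the two in code; it is illustrative only and uses hypothetical classes, not examples from the paper.

```python
# Composition vs. aggregation expressed as lifetime coupling between whole and part.
class Engine:
    def __init__(self, serial: str):
        self.serial = serial

class Car:
    """Composition: the whole creates and owns its part, so their lifetimes coincide."""
    def __init__(self, engine_serial: str):
        self.engine = Engine(engine_serial)   # the part cannot exist apart from the whole

class Player:
    def __init__(self, name: str):
        self.name = name

class Team:
    """Aggregation: the whole only references parts that exist independently."""
    def __init__(self, players: list):
        self.players = players                # the parts outlive the whole

alice = Player("Alice")
team = Team([alice])
del team                                      # alice is still usable -> aggregation
car = Car("E-100")                            # car.engine disappears with car -> composition
print(alice.name, car.engine.serial)
```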

Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah;Chang, Namsik
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.161-177 / 2015
  • With the rapid evolution of technology, the size, number, and types of databases have increased concomitantly, so data mining approaches face many challenging applications. One such application is the discovery of fraud patterns from agricultural product wholesale transaction records. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. The demand for agricultural products continues to grow, and the use of electronic auction systems raises the operational efficiency of the wholesale market. Certainly, the number of unusual transactions is also assumed to increase in proportion to the trading volume, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect these transactions and the corresponding fraud in the agricultural product wholesale market because the types of fraud are more sophisticated than ever before. Fraud can be detected by verifying the overall transaction records manually, but this requires a significant amount of human resources and ultimately is not a practical approach. Fraud can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale fraud because it is committed through collusion between an auction company and an intermediary wholesaler. Nevertheless, transaction records must be monitored continuously and efforts made to prevent fraud, because fraud not only disturbs the fair trade order of the market but also rapidly reduces the credibility of the market. Applying data mining to such an environment is very useful, since it can properly discover unknown fraud patterns or features from a large volume of transaction data. The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data mining based fraud detection model. One major type of fraud is the phantom transaction, a colluding transaction between the seller (auction company or forwarder) and the buyer (intermediary wholesaler). They pretend to fulfill the transaction by recording false data in the online transaction processing system without actually selling products, and the seller receives money from the buyer. This leads to overstated sales performance and illegal money transfers, which reduce the credibility of the market. This paper reviews the wholesale market environment, including the types of transactions, the roles of market participants, and the various types and characteristics of fraud, and introduces the whole process of developing the phantom transaction detection model. The process consists of the following four modules: (1) data cleaning and standardization, (2) statistical data analysis such as distribution and correlation analysis, (3) construction of a classification model using a decision-tree induction approach, and (4) verification of the model in terms of hit ratio. We collected real data from six associations of agricultural producers in metropolitan markets. The final model, built with a decision-tree induction approach, revealed that the monthly average trading price of items offered by forwarders is a key variable for detecting phantom transactions. The verification procedure also confirmed the suitability of the results.
However, even though the performance of this research is satisfactory, sensitive issues remain for improving classification accuracy and the conciseness of rules. One such issue is the robustness of the data mining model. Data mining is very much data-oriented, so data mining models tend to be very sensitive to changes in data or situations. Thus, this non-robustness requires continuous remodeling as the data or situation changes. We hope that this paper suggests valuable guidelines to organizations and companies that consider introducing or constructing a fraud detection model in the future.
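A decision-tree classification step of the kind outlined in module (3) could look like the sketch below. It is a minimal illustration only: the feature names and the synthetic data are hypothetical stand-ins for the study's cleaned transaction records, not its actual variables.

```python
# Minimal sketch (not the study's code): training a decision-tree classifier to
# flag phantom transactions and reporting the hit ratio (accuracy).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "monthly_avg_price": rng.normal(10_000, 2_500, n),   # hypothetical feature
    "quantity": rng.integers(1, 500, n),
    "unit_price": rng.normal(9_500, 3_000, n),
})
# Synthetic label: for illustration, phantom transactions here tend to have
# unusually high monthly average prices.
df["is_phantom"] = (df["monthly_avg_price"] + rng.normal(0, 1_500, n) > 13_000).astype(int)

X, y = df.drop(columns="is_phantom"), df["is_phantom"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

tree = DecisionTreeClassifier(max_depth=5, random_state=42)
tree.fit(X_train, y_train)
print("hit ratio:", accuracy_score(y_test, tree.predict(X_test)))
```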

Improving Performance of Recommendation Systems Using Topic Modeling (사용자 관심 이슈 분석을 통한 추천시스템 성능 향상 방안)

  • Choi, Seongi;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.101-116 / 2015
  • Recently, due to the development of smart devices and social media, vast amounts of information in various forms have accumulated. In particular, considerable research effort is being directed toward analyzing unstructured big data to resolve various social problems, and the focus of data-driven decision-making is accordingly moving from structured to unstructured data analysis. In the field of recommendation systems, a typical area of data-driven decision-making, the need to use unstructured data has steadily increased as a way to improve system performance. Approaches to improving the performance of recommendation systems fall into two categories: improving algorithms and acquiring useful, high-quality data. Traditionally, most efforts to improve performance have taken the former approach, while the latter has attracted relatively little attention. In this sense, efforts to utilize unstructured data from various sources are timely and necessary. In particular, as the interests of users are directly connected to their needs, identifying user interests through unstructured big data analysis can provide a clue for improving the performance of recommendation systems. This study therefore proposes a methodology for improving recommendation systems by measuring user interests. Specifically, it proposes a method to quantify user interests by analyzing users' internet usage patterns and to predict repurchase based on the discovered preferences. There are two important modules in this study. The first module predicts the repurchase probability of each category by analyzing users' purchase history; we include this module in our research scope to compare the accuracy of the traditional purchase-based prediction model with that of the new model presented in the second module. This procedure extracts the purchase history of users. The core part of our methodology is in the second module, which extracts users' interests by analyzing the news articles they have read. The second module constructs a correspondence matrix between topics and news articles by performing topic modeling on real-world news articles. The module then analyzes users' news access patterns and constructs a correspondence matrix between articles and users. By merging the results of these processes, we obtain a correspondence matrix between users and topics, which describes users' interests in a structured manner. Finally, using this matrix, the second module builds a model for predicting the repurchase probability of each category. In this paper, we also provide experimental results of our performance evaluation. The outline of the data used in our experiments is as follows. We acquired web transaction data for 5,000 panels from a company that specializes in analyzing the rankings of internet sites. We first extracted 15,000 URLs of news articles published from July 2012 to June 2013 from the original data and crawled the main contents of the news articles. We then selected 2,615 users who had read at least one of the extracted news articles. Among the 2,615 users, the number of target users who purchased at least one item from our target shopping mall 'G' was 359. In the experiments, we analyzed the purchase history and news access records of these 359 internet users.
From the performance evaluation, we found that our prediction model using both users' interests and purchase history outperforms a prediction model using only purchase history in terms of misclassification ratio. In detail, our model outperformed the traditional one in the appliance, beauty, computer, culture, digital, fashion, and sports categories when artificial neural network based models were used. Similarly, our model outperformed the traditional one in the beauty, computer, digital, fashion, food, and furniture categories when decision tree based models were used, although the improvement was very small.
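The matrix-merging step of the second module (article-topic matrix from topic modeling combined with a user-article access matrix to yield a user-topic interest matrix) can be sketched as follows. The texts, dimensions, and library choice (scikit-learn LDA rather than the study's topic model) are illustrative assumptions, not the paper's data or code.

```python
# Illustrative only: building a user-topic interest matrix from article topics
# and users' news access records.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = ["stock market rallies on strong earnings",
            "new smartphone camera review and benchmarks",
            "home team wins the championship final"]       # stand-in news texts

counts = CountVectorizer().fit_transform(articles)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
article_topic = lda.fit_transform(counts)                   # (n_articles, n_topics)

# Rows = users, columns = articles; 1 means the user read that article.
user_article = np.array([[1, 0, 1],
                         [0, 1, 0]])

# Merging the two matrices yields each user's interest in each topic.
user_topic = user_article @ article_topic
user_topic /= user_topic.sum(axis=1, keepdims=True)          # normalize per user
print(user_topic)
```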

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.19-28 / 2012
  • After the internet era, we are moving toward a ubiquitous society. Nowadays people are interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve and deploy interaction between the subjects and the audience by analyzing people's usage patterns. In order to provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used; it can obtain the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking services. However, because GPS relies on satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking are being conducted using short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. However, these technologies have shortcomings: the audience needs to carry an additional sensor device, and deployment becomes difficult and expensive as the density of the target area increases. In addition, the usual exhibition environment contains many obstacles for the network, which degrades system performance. Above all, the biggest problem is that interaction methods using devices based on these older technologies cannot provide a natural service to users. Moreover, because the system relies on sensor recognition, every user must carry a device, so the number of users who can use the system simultaneously is limited. To make up for these shortcomings, this study suggests a technology that obtains exact user location information through location mapping using the Wi-Fi of smartphones and 3D cameras. We used the signal strength of wireless LAN access points to develop a low-cost indoor location tracking system. An AP is cheaper than the devices used in other tracking techniques, and by installing the software on a user's mobile device, the phone itself can be used as the tracking device. We used the Microsoft Kinect sensor as the 3D camera. The Kinect is equipped with functions for discriminating depth and human information within the shooting area, so it is appropriate for extracting users' body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we eliminate the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. 3D cameras located in each cell area obtain the exact location and status information of the users. The 3D cameras are connected to the Camera Client, which calculates the mapping information aligned to each cell and obtains precise location, status, and pattern information of the audience.
The location mapping technique of the Camera Client decreases the error rate of the indoor location service, increases the accuracy of individual discrimination in the area through body-information-based discrimination, and establishes a foundation for multimodal interaction technology at the exhibition. The calculated data and information enable users to receive the appropriate interaction service through the main server.
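A coarse Wi-Fi cell-identification step of the kind described above can be sketched as follows; the AP names, the cell map, and the strongest-RSSI heuristic are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: coarse cell identification from a smartphone Wi-Fi scan by
# picking the known access point with the strongest RSSI.
AP_TO_CELL = {"ap-hall-01": "cell_A", "ap-hall-02": "cell_B", "ap-hall-03": "cell_C"}

def estimate_cell(scan_results):
    """scan_results: list of (ap_name, rssi_dbm) pairs from a smartphone scan."""
    known = [(ap, rssi) for ap, rssi in scan_results if ap in AP_TO_CELL]
    if not known:
        return None
    strongest_ap, _ = max(known, key=lambda pair: pair[1])
    return AP_TO_CELL[strongest_ap]

print(estimate_cell([("ap-hall-01", -71), ("ap-hall-02", -58), ("other", -40)]))
# -> "cell_B"; within that cell, the 3D camera would refine the user's position.
```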

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.125-141 / 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it is becoming more important to handle these malicious attacks and hacks appropriately, and there is considerable interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. However, they cannot handle new or unknown patterns of network attacks, although they perform very well in normal situations. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can proactively respond to unknown threats. For a long time, researchers have adopted and tested various kinds of artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect intrusions on the network. However, most studies have applied these techniques individually, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model is designed to combine the prediction results of four different binary classification models: logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM), which may be complementary to each other. As a tool for finding the optimal combining weights, genetic algorithms (GA) are used. Our proposed model is built in two steps. In the first step, the optimal integration model, whose prediction error (i.e., erroneous classification rate) is the lowest, is generated. In the second step, the model explores the optimal classification threshold for determining intrusions, which minimizes the total misclassification cost. To calculate the total misclassification cost of an intrusion detection system, its asymmetric error cost scheme must be understood. Generally, there are two common forms of error in intrusion detection. The first is the False-Positive Error (FPE), in which the wrong judgment may result in unnecessary corrective action. The second is the False-Negative Error (FNE), which misjudges malicious activity as normal. Compared to FPE, FNE is more serious, so the total misclassification cost is affected more by FNE than by FPE. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them using a random sampling method. We also compared the results from our model with the results from single techniques to confirm the superiority of the proposed model. LOGIT and DT were tested using PASW Statistics v18.0, and ANN was tested using Neuroshell R4.0. For SVM, LIBSVM v2.90, a freeware tool for training SVM classifiers, was used. Empirical results showed that our proposed model based on GA outperformed all the other comparative models in detecting network intrusions from the accuracy perspective.
They also showed that the proposed model outperformed all the other comparative models from the total misclassification cost perspective. Consequently, our study is expected to contribute to building cost-effective intelligent intrusion detection systems.
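The first step, a GA search for weights that combine the four classifiers' outputs, can be sketched as below. This is a generic illustration under stated assumptions: the classifier probabilities are synthetic, the GA operators (truncation selection, arithmetic crossover, Gaussian mutation) are common defaults rather than the paper's specific settings, and the cost-sensitive threshold search of the second step is not shown.

```python
# Minimal sketch: a genetic algorithm searching for combining weights over the
# predicted intrusion probabilities of LOGIT, DT, ANN, and SVM so that the
# ensemble's classification error is minimized.
import numpy as np

rng = np.random.default_rng(0)

# p: predicted intrusion probabilities from the four classifiers (synthetic here);
# y: true labels (1 = intrusion, 0 = normal).
n = 500
y = rng.integers(0, 2, n)
p = np.clip(y[:, None] * 0.6 + rng.normal(0.2, 0.25, (n, 4)), 0.0, 1.0)

def error_rate(weights):
    w = weights / weights.sum()          # normalize the combining weights
    combined = p @ w                     # weighted average of the four outputs
    return float(np.mean((combined >= 0.5).astype(int) != y))

pop = rng.random((40, 4)) + 1e-6         # initial population of weight vectors
for _ in range(100):
    fitness = np.array([error_rate(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]              # selection: keep best half
    idx_a = rng.integers(0, 20, 20)
    idx_b = rng.integers(0, 20, 20)
    children = (parents[idx_a] + parents[idx_b]) / 2.0    # arithmetic crossover
    children += rng.normal(0.0, 0.05, children.shape)     # Gaussian mutation
    pop = np.vstack([parents, np.clip(children, 1e-6, None)])

best = pop[np.argmin([error_rate(ind) for ind in pop])]
print("combining weights:", np.round(best / best.sum(), 3),
      "error rate:", error_rate(best))
```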

A Topic Modeling-based Recommender System Considering Changes in User Preferences (고객 선호 변화를 고려한 토픽 모델링 기반 추천 시스템)

  • Kang, So Young;Kim, Jae Kyeong;Choi, Il Young;Kang, Chang Dong
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.43-56 / 2020
  • Recommender systems help users make the best choice among various options. In particular, recommender systems play an important role on internet sites, where an innumerable amount of digital information is generated every second. Many studies on recommender systems have focused on accurate recommendation. However, some problems must be overcome for a recommendation system to be commercially successful. First, there is a lack of transparency in the recommender system; that is, users cannot know why products are recommended. Second, the recommender system cannot immediately reflect changes in user preferences; that is, although a user's preference for products changes over time, the recommender system must rebuild its model to reflect the change. Therefore, in this study, we propose a recommendation methodology that uses topic modeling and sequential association rule mining on review data to solve these problems. Product reviews provide useful information for recommendation because they include not only the rating of the product but also various contents such as user experiences and emotional states; reviews thus imply user preference for the product, and topic modeling is useful for explaining why items are recommended to users. In addition, sequential association rule mining is useful for identifying changes in user preferences. The proposed methodology is largely divided into two phases. The first phase creates user profiles based on topic modeling: after extracting topics from user reviews on products, a user profile over topics is created. The second phase recommends products using sequential rules that appear in users' buying behaviors as time passes; the buying behaviors are derived from changes in the topics of each user. A collaborative filtering-based recommendation system was developed as a benchmark, and we compared the performance of the proposed methodology with that of the collaborative filtering-based system using Amazon's review dataset. As evaluation metrics, accuracy, recall, precision, and F1 were used. For topic modeling, collapsed Gibbs sampling was conducted, and we extracted 15 topics. Looking at the main topics, topic 1, topic 3, topic 4, topic 7, topic 9, topic 13, and topic 14 are related to "comedy shows", "high-teen drama series", "crime investigation drama", "horror theme", "British drama", "medical drama", and "science fiction drama", respectively. As a result of the comparative analysis, the proposed methodology outperformed the collaborative filtering-based recommendation system. From the results, we found that the period just prior to the recommendation is very important for inferring changes in user preference. Therefore, the proposed methodology not only secures the transparency of the recommender system but also reflects user preferences that change over time. However, the proposed methodology has some limitations: it cannot recommend products precisely if the number of products included in a topic is large, and the number of sequential patterns is small because the number of topics is small. Future research should address these limitations.
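The second phase's use of sequential rules over users' topic histories can be illustrated with a simple first-order sketch; the topic sequences, rule form, and support/confidence definitions below are generic assumptions, not the paper's algorithm or data.

```python
# Illustrative sketch: deriving first-order sequential rules ("topic A is
# followed by topic B") from each user's time-ordered topic sequence.
from collections import Counter

# Each list is one user's dominant review topic per period, in time order.
user_topic_sequences = [
    ["comedy", "crime", "crime", "horror"],
    ["comedy", "crime", "horror"],
    ["medical", "comedy", "crime"],
]

pair_counts, antecedent_counts = Counter(), Counter()
for seq in user_topic_sequences:
    for a, b in zip(seq, seq[1:]):        # consecutive topic transitions
        pair_counts[(a, b)] += 1
        antecedent_counts[a] += 1

total = sum(pair_counts.values())
for (a, b), count in pair_counts.most_common():
    support = count / total
    confidence = count / antecedent_counts[a]
    print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")

# Rules with high confidence (e.g. comedy -> crime) would drive the next-period
# recommendation for users whose most recent topic matches the antecedent.
```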