• Title/Summary/Keyword: Interest rate


A Pilot Study for the Feasibility of F-18 FLT-PET in Locally Advanced Breast Cancer: Comparison with F-18 FDG-PET (국소진행성 유방암에서 F-18 FLT-PET 적용 가능성에 대한 예비 연구: F-18 FDG-PET와 비교)

  • Hyuen, Lee-Jai;Kim, Euy-Nyong;Hong, Il-Ki;Ahn, Jin-Hee;Kim, Sung-Bae;Ahn, Sei-Hyun;Gong, Gyung-Yup;Kim, Jae-Seung;Oh, Seung-Jun;Moon, Dae-Hyuk;Ryu, Jin-Sook
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.1
    • /
    • pp.29-38
    • /
    • 2008
  • Purpose: The aim of this study was to investigate the feasibility of 3'-[F-18]fluoro-3'-deoxythymidine positron emission tomography (FLT-PET) for the detection of locally advanced breast cancer and to compare the degree of FLT and 2'-deoxy-2'-[F-18]fluoro-D-glucose (FDG) uptake in the primary tumor, lymph nodes, and other normal organs. Materials & Methods: The study subjects were 22 female patients (mean age 42±6 years) with biopsy-confirmed infiltrating ductal carcinoma, enrolled between August 2005 and November 2006. We performed a conventional imaging workup, FDG-PET, and FLT PET/CT. Average tumor size measured by MRI was 7.2±3.4 cm. In addition to visual analysis, tumor and lymph node uptakes of FLT and FDG were quantified by the standardized uptake value (SUV) and the tumor-to-background (T/B) ratio. We compared FLT tumor uptake with FDG tumor uptake, investigated the correlation between the two, and assessed the concordance rate between FLT and FDG lymph node uptakes. FLT and FDG uptakes of bone marrow and liver were also measured to compare the biodistribution of the two tracers. Results: All tumor lesions were visually detected on both FLT-PET and FDG-PET. There was no significant correlation between maximal tumor size by MRI and the SUVmax of FLT-PET or FDG-PET (p>0.05). SUVmax and SUV75 (average SUV within a volume of interest defined by a 75% isocontour) of FLT-PET were significantly lower than those of FDG-PET in the primary tumor (SUVmax: 6.3±5.2 vs 8.3±4.9, p=0.02; SUV75: 5.3±4.3 vs 6.9±4.2, p=0.02). There was a significant moderate correlation between FLT and FDG uptake in the primary tumor (SUVmax: rho=0.450, p=0.04; SUV75: rho=0.472, p=0.03). However, the T/B ratio of FLT-PET was higher than that of FDG-PET (11.7±7.7 vs 6.3±3.8, p=0.001). The concordance rate between FLT and FDG lymph node uptake was good (33/34). The FLT SUVs of liver and bone marrow were 4.2±1.2 and 8.3±4.9, respectively; the FDG SUVs of liver and bone marrow were 1.8±0.4 and 1.6±0.4. Conclusion: FLT uptake was lower than FDG uptake, but all patients in this study showed good FLT uptake in tumor and lymph nodes. Because FLT-PET showed a high T/B ratio and a high concordance rate with FDG-PET lymph node uptake, FLT-PET could be a useful diagnostic tool in locally advanced breast cancer. However, physiological uptake and individual variation of FLT in bone marrow and liver will limit the diagnosis of bone and liver metastases.
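For reference, the SUV-based metrics reported above are straightforward to compute from decay-corrected voxel data. The Python sketch below shows the standard body-weight SUV normalization, an SUV75 over the 75% isocontour, and one common tumor-to-background definition; the function name, the exact T/B definition, and the toy values are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def suv_metrics(voi_bq_ml, bkg_bq_ml, dose_mbq, weight_g):
    """SUVmax, SUV75 (mean within the 75% isocontour), and tumor-to-background ratio."""
    norm = dose_mbq * 1e6 / weight_g           # injected dose (Bq) per gram of body weight
    suv = np.asarray(voi_bq_ml, dtype=float) / norm
    suv_max = suv.max()
    suv75 = suv[suv >= 0.75 * suv_max].mean()  # voxels at/above 75% of the maximum
    tb_ratio = suv_max / (np.asarray(bkg_bq_ml, dtype=float) / norm).mean()
    return suv_max, suv75, tb_ratio

# Toy voxel data (Bq/mL): a tumor VOI and a background region
print(suv_metrics([9000, 12000, 15000], [1200, 1400], dose_mbq=370, weight_g=60000))
```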

Comparative evaluation of the methods of producing planar image results by using Q-Metrix method of SPECT/CT in Lung Perfusion Scan (Lung Perfusion scan에서 SPECT-CT의 Q-Metrix방법과 평면영상 결과 산출방법에 대한 비교평가)

  • Ha, Tae Hwan;Lim, Jung Jin;Do, Yong Ho;Cho, Sung Wook;Noh, Gyeong Woon
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.22 no.1
    • /
    • pp.90-97
    • /
    • 2018
  • Purpose: The lung segment ratio, obtained through quantitative analysis of lung perfusion scan images, is calculated to evaluate lung function before and after surgery. In this study, planar-image methods of producing results are evaluated against the Q-Metrix (GE Healthcare, USA) program, which allows both quantitative analysis and computation of the segment ratio after SPECT/CT. Materials and Methods: Lung perfusion scans and SPECT/CT were performed on 50 lung cancer patients prior to surgery who visited our hospital from May 1, 2015 to September 13, 2016, using Discovery 670 (GE Healthcare, USA) equipment. The AP (Anterior Posterior) method divided the anterior and posterior planar images into three rectangular portions each with an ROI tool, while the PO (Posterior Oblique) method computed the segment ratio by dividing the right lobe into three parts and the left lobe into two parts on the oblique image. For the reference standard, the segment ratio was computed by setting ROIs and VOIs on the CT image with the Q-Metrix program, and statistical analysis was performed with SPSS ver. 23. Results: Regarding the correlation between the Q-Metrix and AP methods, the concordance values for the RUL (right upper lobe), RML (right middle lobe), and RLL (right lower lobe) were 0.224, 0.035, and 0.447, and those for the LUL (left upper lobe) and LLL (left lower lobe) were 0.643 and 0.456, respectively. For the PO method, the right lobes were 0.663, 0.623, and 0.702, and the left lobes 0.754 and 0.823. In the paired-sample t-test, the right lobes were 11.6±4.5, 26.9±6.2, and 17.8±4.2 in the AP method, and the left lobes 28.4±4.8 and 15.4±5.6; in the PO method, the right lobes were 17.4±5.0, 10.5±3.6, and 27.3±6.0, and the left lobes 21.6±4.8 and 23.1±6.6. Each lobe differed from the Q-Metrix method with statistical significance (p<0.05), except for the right middle lobe of the PO method, which showed no significant difference (p>0.05). Conclusion: The AP method showed a low concordance rate in correlation with the Q-Metrix method, whereas the PO method displayed a high concordance rate overall. Although the AP method differed significantly in all lobes, the right middle lobe of the PO method showed no significant difference. Therefore, when producing lung perfusion scan results, the Q-Metrix method of SPECT/CT is useful for computing accurate values, and using the PO method at planar image acquisition can be expected to yield more practical segmental results.
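To make the planar computation concrete, the sketch below derives per-region perfusion percentages from conjugate-view ROI counts using the geometric mean, which is the usual compensation for depth-dependent attenuation in planar quantification. The ROI layout and count values are illustrative assumptions, not the paper's data.

```python
import numpy as np

def segment_ratios(ant_counts, post_counts):
    """Percent perfusion per lung region from planar anterior/posterior ROI counts.

    ant_counts, post_counts: dicts mapping region name -> background-corrected counts.
    The geometric mean of conjugate views compensates for depth-dependent attenuation.
    """
    gm = {r: np.sqrt(ant_counts[r] * post_counts[r]) for r in ant_counts}
    total = sum(gm.values())
    return {r: 100.0 * c / total for r, c in gm.items()}

# Example: three ROIs for the right lung, two for the left (AP-style division)
ant = {"RUL": 520, "RML": 1210, "RLL": 800, "LUL": 1280, "LLL": 690}
post = {"RUL": 610, "RML": 1150, "RLL": 940, "LUL": 1180, "LLL": 760}
print(segment_ratios(ant, post))
```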

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.195-211
    • /
    • 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead to a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier, similar computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To seek promising research issues and topics in cloud computing, it is necessary to understand the research trends in cloud computing more comprehensively. In this study, we collect bibliographic and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in citation relationships among papers and in the co-occurrence relationships of key words by utilizing social network analysis measures. Through the analysis, we can identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize dynamic changes of research topics relating to cloud computing using a proposed cloud computing "research trend map." A research trend map visualizes positions of research topics in two-dimensional space. Frequencies of key words (X-axis) and the rates of increase in the degree centrality of key words (Y-axis) are used as the two dimensions of the research trend map. Based on the values of the two dimensions, the two-dimensional space of a research map is divided into four areas: maturation, growth, promising, and decline. An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the rate of increase in degree centrality are high is defined as a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is defined as a promising technology area; and the area where both keyword frequency and the rate of increase in degree centrality are low is defined as a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the results of an analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top based on the PageRank measure. From the analysis of key words in research papers, cloud computing and grid computing showed high centrality in 2009, and key words dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, it was found that interest in the technical issues of cloud computing increased gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area.
The study results indicate that distributed systems and grid computing received a lot of attention as similar computing paradigms in the early stage of cloud computing research. The early stage of cloud computing was a period focused on understanding and investigating cloud computing as an emergent technology, linking it to relevant established computing concepts. After the early stage, security and virtualization technologies became the main issues in cloud computing, which is reflected in their movement from the promising area to the growth area in the research trend maps. Moreover, this study revealed that current research in cloud computing has rapidly shifted from a focus on technical issues to a focus on application issues, such as SLAs (Service Level Agreements).
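The quadrant assignment performed by the trend map follows directly from the definitions above and is simple to express in code. In this minimal Python sketch, the function name and the use of explicit split values (e.g., medians of the two axes) are illustrative assumptions, since the paper does not state how the axis thresholds were chosen.

```python
def trend_quadrant(freq, dc_growth, freq_split, growth_split):
    """Classify a keyword on the research trend map.

    X-axis: keyword frequency; Y-axis: rate of increase in degree centrality.
    The two split values divide the plane into the four defined areas.
    """
    if freq >= freq_split:
        return "growth" if dc_growth >= growth_split else "maturation"
    return "promising" if dc_growth >= growth_split else "decline"

# A low-frequency keyword whose centrality is rising fast is 'promising'
print(trend_quadrant(freq=12, dc_growth=0.8, freq_split=50, growth_split=0.3))
```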

A Study on the Improvement of Recommendation Accuracy by Using Category Association Rule Mining (카테고리 연관 규칙 마이닝을 활용한 추천 정확도 향상 기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.27-42
    • /
    • 2020
  • Traditional companies with offline stores were unable to secure large display space due to cost. This limitation inevitably restricted the kinds of products displayed on the shelves, depriving consumers of the opportunity to experience various items. Taking advantage of the virtual space of the Internet, online shopping goes beyond the physical space limits of offline shopping and can display numerous products on web pages, satisfying consumers with a variety of needs. Paradoxically, however, this can also make it difficult for consumers to compare and evaluate too many alternatives in their purchase decision-making process. As an effort to address this side effect, various kinds of purchase decision support systems have been studied, such as keyword-based item search services and recommender systems. These systems can reduce search time for items, prevent consumers from leaving while browsing, and contribute to the seller's increased sales. Among those systems, recommender systems based on association rule mining techniques can effectively detect interrelated products from transaction data such as orders. The association between products obtained by statistical analysis provides clues for predicting how interested consumers will be in another product. However, since the algorithm is based on the number of transactions, products that have not yet sold enough in the early days after launch may not be included in the list of recommendations even though they are highly likely to sell. Such missing items may not get sufficient exposure to consumers to record sufficient sales, and then fall into a vicious cycle of declining sales and omission from the recommendation list. This is an inevitable outcome when recommendations are based on past transaction histories rather than on potential future sales. This study started from the idea that indirectly identifying this potential would help select products worth recommending. In light of the fact that the attributes of a product affect the consumer's purchasing decisions, this study was conducted to reflect them in the recommender system. In other words, consumers who visit a product page have shown interest in the attributes of the product and would also be interested in other products with the same attributes. On this assumption, the recommender system can use these attributes to select recommendations with a higher acceptance rate. Given that a category is one of the main attributes of a product, it can be a good indicator not only of direct associations between two items but also of potential associations that have yet to be revealed. Based on this idea, the study devised a recommender system that reflects not only associations between products but also associations between categories. Through regression analysis, the two kinds of associations were combined to form a model that could predict the hit rate of a recommendation. To evaluate the performance of the proposed model, another regression model was also developed based only on associations between products. Comparative experiments were designed to resemble the environment in which products are actually recommended in online shopping malls.
First, the association rules for all possible combinations of antecedent and consequent items were generated from the order data. Then, the hit rate of each association rule was predicted from the support and confidence calculated by each model. The comparative experiments using order data collected from an online shopping mall show that recommendation accuracy can be improved by reflecting not only the association between products but also the association between categories when recommending related products. The proposed model showed a 2 to 3 percent improvement in hit rates compared to the existing model. From a practical point of view, it is expected to have a positive effect on improving consumers' purchasing satisfaction and increasing sellers' sales.
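For concreteness, the support and confidence figures that both models consume can be derived from order data in a few lines, at the product level and, via a product-to-category mapping, at the category level. The sketch below uses assumed toy orders and a hypothetical category mapping; the paper's regression over these features is only indicated in a comment.

```python
from collections import Counter
from itertools import combinations

def rule_metrics(orders, key=lambda x: x):
    """Support and confidence for antecedent -> consequent pairs.

    orders: list of sets of items; key maps an item to itself or to its category.
    """
    n = len(orders)
    unit_cnt, pair_cnt = Counter(), Counter()
    for order in orders:
        units = {key(i) for i in order}
        unit_cnt.update(units)
        pair_cnt.update(combinations(sorted(units), 2))
    metrics = {}
    for (a, b), c in pair_cnt.items():
        metrics[(a, b)] = {"support": c / n, "confidence": c / unit_cnt[a]}
        metrics[(b, a)] = {"support": c / n, "confidence": c / unit_cnt[b]}
    return metrics

orders = [{"p1", "p2"}, {"p1", "p3"}, {"p2", "p3"}, {"p1", "p2", "p3"}]
cat_of = {"p1": "c1", "p2": "c1", "p3": "c2"}        # hypothetical mapping
item_rules = rule_metrics(orders)                     # product-level associations
cat_rules = rule_metrics(orders, key=lambda p: cat_of[p])
# The proposed model regresses observed hit rates on both feature sets;
# a feature vector for the rule p1 -> p3 (categories c1 -> c2) would be:
x = [item_rules[("p1", "p3")]["support"], item_rules[("p1", "p3")]["confidence"],
     cat_rules[("c1", "c2")]["support"], cat_rules[("c1", "c2")]["confidence"]]
print(x)
```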

Comparison of Activity Capacity Change and GFR Value Change According to Matrix Size during 99mTc-DTPA Renal Dynamic Scan (99mTc-DTPA 신장 동적 검사(Renal Dynamic Scan) 시 동위원소 용량 변화와 Matrix Size 변경에 따른 사구체 여과율(Glomerular Filtration Rate, GFR) 수치 변화 비교)

  • Kim, Hyeon;Do, Yong-Ho;Kim, Jae-Il;Choi, Hyeon-Jun;Woo, Jae-Ryong;Bak, Chan-Rok;Ha, Tae-Hwan
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.24 no.1
    • /
    • pp.27-32
    • /
    • 2020
  • Purpose: The glomerular filtration rate (GFR) is an important indicator for evaluating renal function and monitoring the progress of renal disease. In current clinical practice, GFR is measured using the serum creatinine value or a 99mTc-DTPA (diethylenetriamine pentaacetic acid) renal dynamic scan. Since the Gates formula was published, it has been applied to measure GFR with a gamma camera whenever a 99mTc-DTPA renal dynamic scan is acquired. The purpose of this paper is to measure GFR by applying the Gates formula and to examine how the injected activity and the matrix size, both of which enter the GFR calculation, affect the result. Materials and Methods: Data from 5 adult patients (age 62±5 years; 3 males, 2 females) who had undergone a 99mTc-DTPA renal dynamic scan were analyzed. A dynamic image was obtained for 21 minutes after bolus injection of 15 mCi of 99mTc-DTPA into the patient's vein. To evaluate the glomerular filtration rate as a function of activity and matrix size, total counts were measured in regions of interest over both kidneys and background tissue during the 2-3 minute interval. The distance from the detector to the table was kept at 30 cm; the pre-syringe (PR) activity was set to 15, 20, 25, and 30 mCi and the corresponding post-syringe (PO) activity to 1, 5, 10, and 15 mCi to evaluate the effect of the activity change. The matrix size was then changed to 32×32, 64×64, 128×128, 256×256, 512×512, and 1024×1024 to compare the resulting values. Results: As the activity increased at each matrix size, the difference in GFR gradually decreased from a maximum of 52.95% to a minimum of 16.67%. The GFR values for matrix-size changes from 128 to 256, 256 to 512, and 512 to 1024 were similar, with differences of 2.4%, 0.2%, and 0.2%, but the difference was 54.3% when changing from 32 to 64 and 39.43% when changing from 64 to 128. Finally, relative to the currently used protocol (256×256, PR 15 mCi and PO 1 mCi), the GFR value showed the largest difference of 82% under the PR 15 mCi and PO 1 mCi condition, and the smallest difference of 0.2% under the PR 30 mCi and PO 15 mCi condition. Conclusion: This paper confirms that, when GFR is measured using the Gates method in a 99mTc-DTPA renal dynamic scan, the result is affected by changes in activity and matrix size. Therefore, each hospital should take care to apply appropriate parameters when calculating GFR from a 99mTc-DTPA renal dynamic scan.
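For reference, the Gates calculation applied here takes roughly the following form. This is a minimal sketch using the commonly published Tonnesen depth estimates and Gates regression constants; treat the exact constants as assumptions to be verified against the camera vendor's implementation rather than as the paper's own parameters.

```python
import numpy as np

MU = 0.153  # linear attenuation coefficient of 99mTc in soft tissue (1/cm)

def kidney_depth_cm(weight_kg, height_cm):
    # Tonnesen estimates of right and left kidney depth from body habitus
    return 13.3 * weight_kg / height_cm + 0.7, 13.2 * weight_kg / height_cm + 0.7

def gates_gfr(rk_counts, lk_counts, rk_bkg, lk_bkg, pre_counts, post_counts,
              weight_kg, height_cm):
    """GFR (mL/min) from background-corrected kidney counts in the 2-3 min frame."""
    d_r, d_l = kidney_depth_cm(weight_kg, height_cm)
    net = ((rk_counts - rk_bkg) / np.exp(-MU * d_r)
           + (lk_counts - lk_bkg) / np.exp(-MU * d_l))
    uptake_pct = 100.0 * net / (pre_counts - post_counts)  # % of injected dose
    return uptake_pct * 9.8127 - 6.82519                    # Gates' regression

# Toy example: PR/PO syringe counts stand in for pre/post injection counts
print(gates_gfr(52000, 48000, 6000, 5500, 2.0e6, 1.3e5, weight_kg=70, height_cm=170))
```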

A Study on the Problems and Resolutions of Provisions in Korean Commercial Law related to the Aircraft Operator's Liability of Compensation for Damages to the Third Party (항공기운항자의 지상 제3자 손해배상책임에 관한 상법 항공운송편 규정의 문제점 및 개선방안)

  • Kim, Ji-Hoon
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.29 no.2
    • /
    • pp.3-54
    • /
    • 2014
  • The Republic of Korea enacted the Air Transport Act in its Commercial Law, which entered into force in November 2011. The Air Transport Act was established to regulate domestic carriage by air and damages to third parties that occur within Korean territory as a result of aircraft operations. The provisions of Korean Commercial Law on the aircraft operator's liability for damages to third parties caused by aircraft operation have some problems to be reformed, as follows. First, the aircraft operator's limit of liability needs to be improved because it is too low to adequately compensate third parties damaged by aircraft operations. The standard of classification by aircraft weight should therefore be refined from the current 4 tiers into 10 tiers, and the total limited amount of liability should be increased to a maximum of 700 million SDR. In addition, the limited amount of liability for personal damage should be raised from the present 125,000 SDR to 625,000 SDR in line with recent price increases. Given the ordinary insurance coverage per aircraft accident and the various specifications of recent aircraft, this is the most desirable way to improve the current provisions so that the damaged are compensated appropriately. Second, under the present Air Transport Act, the aircraft operator is liable without fault for damages caused by terrorism such as hijacking, attacking an aircraft, or using an aircraft as a means of attack, as in the 9/11 disaster. Some argue that this is too harsh on aircraft operators and irrational; however, given that operators also have legal duties to prevent terrorism, and in view of helping the damaged third party, it does not look too harsh or irrational. Nevertheless, the Act should be amended to exempt the aircraft operator from liability when terrorism using an aircraft is committed by a well-organized terrorist group, as in the 9/11 disaster, in order to balance the interests of the aircraft operator and the damaged third party. Third, considering the large scale of damage usually caused by an aircraft accident, many of the damaged may face a financial crisis, so the provision on advance payment under the air carrier's liability should also be applied to the aircraft operator's liability. Fourth, under the current Act the aircraft operator is liable only for damages that occur on land or water, not in the air. However, damages in the air caused by another aircraft's operation are no different from those on land or water. The phrase "on the surface" should therefore be deleted from the term "third parties on the surface" so that damages in the air caused by another aircraft's operation become compensable under the Air Transport Act of Korean Commercial Law. It is hoped that the Air Transport Act, including the clauses on the aircraft operator's liability for damages to third parties, will be developed continually by resolving the problems mentioned above, so as to compensate damaged third parties appropriately and balance the interests of the damaged and the aircraft operator.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data, so data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to each party requesting it. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering, the technology is spreading, and analysis is increasingly expected to be performed by those who need it themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with a great deal of attention focused on text data in particular. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is considered very useful in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to a large number of documents, and it creates a scalability problem: processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide and conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without combining the analysis objects. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each sub-unit, but global topics cannot. Second, a method for measuring the accuracy of the methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of each local topic from its global topic needs to be measured.
Because of these difficulties, this approach has been studied less than other approaches to topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets), and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling over the entire collection, and we also proposed a reasonable method for comparing the results of the two approaches.
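To make the divide-and-conquer idea concrete, the sketch below runs LDA separately on each local set and maps each local topic to its nearest global topic by cosine similarity of topic-word distributions over a shared vocabulary. It illustrates the general approach, not the paper's exact RGS procedure, and the toy documents are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def fit_lda(doc_term, k):
    """Fit LDA and return row-normalized topic-word distributions."""
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(doc_term)
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

docs = ["cloud security virtualization", "grid computing cluster",
        "topic model inference", "word topic distribution"]
vec = CountVectorizer().fit(docs)                  # shared vocabulary for all sets
global_topics = fit_lda(vec.transform(docs), k=2)  # topics over the whole collection
for local_set in (docs[:2], docs[2:]):             # sub-clusters (local sets)
    local_topics = fit_lda(vec.transform(local_set), k=2)
    mapping = cosine_similarity(local_topics, global_topics).argmax(axis=1)
    print(mapping)  # local topic i maps to global topic mapping[i]
```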

Studies on the Appraisal of Stumpage Value in the Forest Land - With Respect to Kyung-Ju Area - (산원지(山元地) 임목평가(林木平価)에 관(関)한 연구(研究) - 경주지방(慶州地方)을 중심(中心)으로 -)

  • Rha, Sang Soo;Park, Tai Sik
    • Journal of Korean Society of Forest Science
    • /
    • v.52 no.1
    • /
    • pp.37-49
    • /
    • 1981
  • The purpose of this study is to find an objective method of valuing forest stands through the analysis of the logging costs that are directly related to timber production. Two forests (Amgog and Whangryoung), located near each other but slightly different in forest type and in logging and skidding conditions, were selected for the study. Objective timber stumpage values were determined by investigating the appropriate timber production costs and the profits of logging operations. The main results obtained in this study are as follows: 1. The share of logging cost in the timber market price is 13.15% in the Amgog logging area and 19.48% in Whangryoung. 2. The share of the other production costs, excluding logging cost, is 15.36% in Amgog and 28.85% in Whangryoung. 3. The total share of timber production cost in the market price is more than 28.51% in Amgog and 48.33% in Whangryoung. 4. Though the productivity of forest land is affected by the selection of tree species, tending, treatments, and effective management, the more important problem is the improvement of logging conditions. 5. The share of production cost in the timber price is so high that we should endeavor to improve the productivity and quality of labor and minimize the variation in daily piece work across different site conditions. 6. Although the profit of the forest industry is related to the period over which the investment is recaptured, it is more closely related to working conditions, investment risk, and continuous change in the social interest rate on investment. 7. If the relevant variables related to the timber market are objectively obtained, the stumpage value of mature forests can be objectively calculated by applying the straight-line or compound discounting method to the stump-to-market price.
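For reference, the stump-to-market appraisal applied here can be written compactly (the notation below is mine, not the paper's). With timber market price $M$, total production cost $C$ (logging plus other costs), and the logger's profit allowance $P$, the stumpage value discounted back $n$ years at interest rate $p$ is

$$S_0 = \frac{M - C - P}{(1+p)^n} \quad \text{(compound discounting)} \qquad \text{or} \qquad S_0 = \frac{M - C - P}{1 + np} \quad \text{(straight-line discounting)}.$$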


A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • There have long been many academic studies on accurate stock market forecasting, and various forecasting models using various techniques now exist. Recently, many attempts have been made to predict the stock index using machine learning methods, including deep learning. While both fundamental and technical analysis are used in traditional stock investment, technical analysis is more suitable for short-term trading prediction and for statistical and mathematical techniques. Most studies using these technical indicators have modeled stock price prediction as binary classification - rising or falling - of future market movement (usually the next trading day). However, this binary classification has many unfavorable aspects for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we expand the existing binary scheme into a multi-class system for the stock index trend (upward trend, boxed, downward trend). While techniques such as Multinomial Logistic Regression Analysis (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN) could address this multi-classification problem, we propose an optimization model that uses a Genetic Algorithm as a wrapper around Multi-class Support Vector Machines (MSVM), which have proven superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that it outperforms the conventional multi-class SVM, previously known to give the best prediction performance, as well as existing artificial intelligence / data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was confirmed to play a very important role in predicting the stock index trend, contributing more to model improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to real KOSPI200 stock index trend forecasting in Korea. Our research is primarily aimed at predicting trend segments to capture trading signals or short-term trend transition points. The experimental data set includes technical indicators of the KOSPI200 stock index, such as price and volatility indices (2004-2017), and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, trend classification, was classified into three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). 70% of the total data for each class was used for training and the remaining 30% for verification. To verify the performance of the proposed model, several comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted.
The MSVM adopts the One-Against-One (OAO) approach, which is known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
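A GA wrapper of the kind the paper proposes can be sketched compactly. The Python code below encodes the feature and instance masks as one bit string and scores each chromosome by the validation accuracy of a one-against-one SVM (scikit-learn's SVC uses OAO internally). The kernel-parameter genes that GA-MSVM also optimizes are omitted for brevity, and all operator settings here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X_tr, y_tr, X_va, y_va, n_feat):
    """Validation accuracy of an OAO SVM trained on the masked features/instances."""
    feat = mask[:n_feat].astype(bool)
    inst = mask[n_feat:].astype(bool)
    if feat.sum() == 0 or len(np.unique(y_tr[inst])) < 2:
        return 0.0  # degenerate chromosome: nothing to train on
    clf = SVC(kernel="rbf", decision_function_shape="ovo")
    clf.fit(X_tr[np.ix_(inst, feat)], y_tr[inst])
    return clf.score(X_va[:, feat], y_va)

def ga_msvm(X_tr, y_tr, X_va, y_va, pop=20, gens=10):
    n_feat = X_tr.shape[1]
    size = n_feat + len(X_tr)                 # chromosome = feature bits + instance bits
    popu = rng.integers(0, 2, (pop, size))
    for _ in range(gens):
        scores = np.array([fitness(m, X_tr, y_tr, X_va, y_va, n_feat) for m in popu])
        parents = popu[np.argsort(scores)[-pop // 2:]]      # truncation selection
        cut = rng.integers(1, size)                          # one-point crossover
        kids = np.vstack([np.r_[a[:cut], b[cut:]] for a, b in
                          zip(parents, np.roll(parents, 1, axis=0))])
        flip = rng.random(kids.shape) < 0.01                 # bit-flip mutation
        popu = np.vstack([parents, np.where(flip, 1 - kids, kids)])
    return popu[np.argmax([fitness(m, X_tr, y_tr, X_va, y_va, n_feat) for m in popu])]
```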

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, the importance of knowledge services that use information to create value is growing day by day. With the development of IT, it has also become easy to collect and use information, and many companies in a variety of industries actively use customer information for marketing. Since the start of the 21st century, companies have actively used culture and the arts to manage their corporate image and for marketing closely linked to their commercial interests. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so performing cultural activities has become a common tool of differentiation among firms. Many firms have used the customer's experience in new marketing strategies in order to respond effectively to competitive markets. Accordingly, the need for personalized services that provide a new experience based on a personal profile containing the characteristics of the individual is rapidly emerging. Personalized service using individual profile information such as language, symbols, behavior, and emotions is therefore very important today; through it, we can judge the interaction between people and content and maximize customer experience and satisfaction. There are various related works on customer-centered service; in particular, emotion recognition research has recently been emerging. Existing research has mostly performed emotion recognition using bio-signals, and most studies target voice and facial expressions, which show large emotional changes. However, several difficulties in predicting people's emotions arise from the limitations of equipment and service environments. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on people's gestures and postures has been investigated by several researchers. This paper develops a model that recognizes people's emotional states through body gesture and posture using the difference image method, and identifies the best-performing model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in KOCCA's lobby, and appropriately stimulating movie clips were shown to collect participants' body gestures and postures as their emotions changed. Body movements were then extracted using the difference image method, and the processed data were used to build the proposed model with a neural network. The proposed emotion prediction model used three time-frame sets (20 frames, 30 frames, and 40 frames), and we adopted the model with the best performance among them. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The proposed model was constructed using an artificial neural network: we used the back-propagation algorithm as the learning method, set the learning rate to 10% and the momentum to 10%, and used the sigmoid function as the transfer function. We designed a three-layer perceptron neural network with one hidden layer and four output nodes. Based on the test data set, learning was stopped at 50,000 iterations after the minimum error was reached.
We finally computed each model's accuracy and found the best model for predicting each emotion. The results showed a prediction accuracy of 100% for sadness and 96% for joy in the 20-frame model, and 88% for surprise and 98% for disgust in the 30-frame model. The findings of this research are expected to provide an effective algorithm for personalized service in various industries such as advertisement, exhibition, and performance.
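The difference-image step the model relies on is easy to sketch: successive frame differencing yields a per-frame motion magnitude that can feed the neural network. In the Python sketch below, the threshold value and the random toy clip are illustrative assumptions; only the 20-frame window comes from the abstract.

```python
import numpy as np

def motion_features(frames, window=20, thresh=25):
    """Motion-energy features from a grayscale clip via the difference image method.

    frames: uint8 array of shape (n, h, w). Returns one value per consecutive
    frame pair in the window: the fraction of pixels whose absolute intensity
    change exceeds thresh.
    """
    frames = frames[:window].astype(np.int16)     # avoid uint8 wrap-around
    diffs = np.abs(np.diff(frames, axis=0))       # difference images
    return (diffs > thresh).mean(axis=(1, 2))     # motion energy per frame pair

rng = np.random.default_rng(0)
clip = rng.integers(0, 256, (20, 120, 160), dtype=np.uint8)  # toy 20-frame clip
x = motion_features(clip)   # length-19 feature vector for the classifier
print(x.shape)
```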