• Title/Summary/Keyword: Threshold setting

150 search results (processing time: 0.027 seconds)

Comparative Study of Automatic Trading and Buy-and-Hold in the S&P 500 Index Using a Volatility Breakout Strategy (변동성 돌파 전략을 사용한 S&P 500 지수의 자동 거래와 매수 및 보유 비교 연구)

  • Sunghyuck Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.6
    • /
    • pp.57-62
    • /
    • 2023
  • This research is a comparative analysis of trading the U.S. S&P 500 index with a volatility breakout strategy against the Buy and Hold approach. The volatility breakout strategy is a trading method that exploits price movements after periods of relative market stability or consolidation. Specifically, large price movements are observed to occur more frequently after periods of low volatility. When a stock moves within a narrow price range for a while and then suddenly rises or falls, it is expected to continue moving in that direction, and traders adopt the volatility breakout strategy to capitalize on these movements. The 'k' value is a multiplier applied to a measure of recent market volatility. One measure of volatility is the Average True Range (ATR), which is based on the range between the highest and lowest prices of recent trading days. The 'k' value plays a crucial role in setting a trader's trade threshold. This study calculated the 'k' value at a typical level and compared the resulting returns with the Buy and Hold strategy, finding that algorithmic trading using the volatility breakout strategy achieved slightly higher returns. In the future, we plan to present simulation results for maximizing returns by determining the optimal 'k' value for automated trading of the S&P 500 index using artificial intelligence deep learning techniques.
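As an illustration of the entry threshold the abstract describes, the sketch below computes a k-multiplied ATR breakout level. It is a minimal sketch, not the paper's actual trading system; the bar format, the 14-day window, and the function names are assumptions.

```python
def true_range(high, low, prev_close):
    # True range: the largest of the three candidate ranges.
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(bars, n):
    # Simple average of the last n true ranges; bars are (high, low, close) tuples.
    trs = [true_range(h, l, bars[i - 1][2])
           for i, (h, l, c) in enumerate(bars) if i > 0]
    return sum(trs[-n:]) / n

def breakout_threshold(today_open, bars, k, n=14):
    # Entry trigger: today's open plus k times the recent volatility measure.
    return today_open + k * atr(bars, n)
```

A long entry would then be taken when the traded price crosses above `breakout_threshold(...)`; tuning `k` trades off entry frequency against signal reliability, which is exactly the choice the paper studies.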

A Study on the Image Change Using Twinkle Artifact Images and Phantom according to Calcification-Inducing Environment in Breast Ultrasonography (유방 초음파 검사에서 석회화 유발 환경에 따른 반짝 허상과 팸텀을 활용한 영상 변화에 관한 연구)

  • Cheol-Min Jeon
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.5
    • /
    • pp.751-759
    • /
    • 2023
  • Breast ultrasonography has difficulty imaging fatty breasts and detecting micro-calcifications, yet the detection of micro-calcifications is very important for breast cancer screening. Among the color Doppler artifacts of ultrasound, the twinkle artifact occurs mainly on strong reflectors such as stones or calcifications, and evaluation methods using it are in clinical use. In this study, we produced a breast simulation phantom using calcium phosphate, a main component of breast calcifications, and experimented with the color Doppler settings of the ultrasound equipment, such as pulse repetition frequency, ensemble, persist, wall filter, smoothing, line density, and threshold value. The purpose of this study was to improve the contrast of the twinkle artifact in breast ultrasound examinations and to maximize its use in clinical practice. As a result, the twinkle artifact occurred at pulse repetition frequencies in the range of 3.6 kHz to 7.2 kHz and did not occur above 10.5 kHz. For the ensemble setting, the twinkle artifact occurred for all sizes of calcification under low-value conditions; for the threshold setting, the artifact increased slightly only in the 80 to 100 range and did not occur for 1 mm calcifications. The persist, wall filter, smoothing, and line density settings were not meaningful variables because the artifact did not change from condition to condition, while pulse repetition frequency, ensemble, and threshold had the greatest impact on the twinkle artifact image. This study is expected to help examiners select the optimal color Doppler settings to effectively increase the twinkle artifact.

Reduction of Artifacts in Magnetic Resonance Imaging with Diamagnetic Substance (반자성 물질을 이용한 자기공명영상검사에서의 인공물 감소)

  • Choi, Woo Jeon;Kim, Dong Hyun
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.4
    • /
    • pp.581-588
    • /
    • 2019
  • MRI provides superior soft-tissue contrast, but artifacts can affect the diagnosis and produce images that cannot be read. Metal inserted into the teeth often interferes with imaging by causing geometric distortion due to the large magnetic susceptibility difference of ferromagnetic or paramagnetic materials. The purpose of this study was to analyze metal artifact reduction using diamagnetic materials. A stainless-steel orthodontic bracket wire was used as the magnetic material, and copper, zinc, and bismuth were used as the diamagnetic materials. Images were acquired on 1.5T and 3T scanners using SE, TSE, GE, and EPI sequences. A phantom was produced in-house from agarose gel (10%) to provide a uniform signal; the artifact-causing stainless steel was placed in the center of the phantom and covered with 10 mm cubes of each diamagnetic material. The artifact area was measured in Image J by subtracting the pure phantom image from the image containing the magnetic material, setting a Low Threshold value of 10, and outlining the artifact region with the Wand tool. Among the metal artifacts produced by the stainless steel, the reduction was greatest in the images with bismuth; copper and zinc reduced the artifact slightly, with little difference between them. This is thought to be because bismuth, which has the largest diamagnetic susceptibility, offsets the ferromagnetic susceptibility the most. Bismuth produced the fewest artifacts at both 1.5T and 3T. By sequence, artifacts were reduced most with TSE at 1.5T and with SE at 3T. The signal-to-noise ratio was lowest for the implant alone; at 1.5T, the implant with bismuth, copper, and zinc showed similar SNR results. Therefore, in terms of artifact variation by diamagnetic material, the lower the magnetic susceptibility (${\chi}$) of the material, the greater the reduction in metal artifacts relative to the implant-only images. Based on these results, diamagnetic materials such as bismuth are considered promising for reducing the metal artifacts of dental prostheses as well as future orthodontic materials, addressing a long-standing disadvantage of conventional metal.
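The artifact-area measurement described in this abstract (subtracting the pure phantom image from the metal-bearing image, then counting pixels above a low threshold of 10) can be sketched as follows. This is only a hedged illustration of the subtraction-and-threshold step, not the authors' Image J procedure; images are assumed to be plain nested lists of pixel intensities.

```python
def artifact_area(metal_img, phantom_img, low_threshold=10):
    """Count pixels whose absolute difference between the metal-bearing
    image and the pure phantom image exceeds the low threshold,
    mimicking the subtraction-plus-threshold measurement done in ImageJ."""
    area = 0
    for row_m, row_p in zip(metal_img, phantom_img):
        for m, p in zip(row_m, row_p):
            if abs(m - p) > low_threshold:
                area += 1  # pixel belongs to the artifact region
    return area
```

Comparing `artifact_area` between the implant-only image and the implant-plus-bismuth image would quantify the reduction the study reports.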

A Management Plan According to the Estimation of Nutria (Myocastorcoypus) Distribution Density and Potential Suitable Habitat (뉴트리아(Myocastor coypus) 분포밀도 및 잠재적 서식가능지역 예측에 따른 관리방향)

  • Kim, Areum;Kim, Young-Chae;Lee, Do-Hun
    • Journal of Environmental Impact Assessment
    • /
    • v.27 no.2
    • /
    • pp.203-214
    • /
    • 2018
  • The purpose of this study is to estimate the concentrated distribution areas and potential suitable habitat of nutria (Myocastor coypus) and to provide useful data for setting an effective management direction. Based on nationwide distribution data for nutria, a cross-validation value was applied to analyze the distribution density. As a result, concentrated distribution areas requiring preferential elimination were found in 14 administrative areas, including Busan Metropolitan City, Daegu Metropolitan City, 11 cities and counties in Gyeongsangnam-do, and one county in Gyeongsangbuk-do. In the potential suitable habitat estimation using a MaxEnt (Maximum Entropy) model, the possibility of occurrence was found in the middle and lower Nakdong River area, the lower Seomjin River area, and the Gahwacheon River area. By variable, DEM, precipitation of the driest month, minimum temperature of the coldest month, and distance from the river contributed to the model, in descending order. In terms of the relation to the probability of appearance, the probability of occurrence exceeded the threshold value in areas below 34 m in altitude, with a coldest-month minimum temperature of $-5.7^{\circ}C{\sim}-0.6^{\circ}C$, driest-month precipitation of 15-30 mm, and a distance from the river of less than 1,373 m. Considering these results and the physiological and ecological characteristics of nutria, altitude, the presence of water, and winter temperature affected the settlement and expansion of nutria, so it is necessary to reflect them as important variables in future habitat detection and expansion-estimation modeling. It is essential to distinguish the concentrated distribution areas and management areas of invasive alien species such as nutria and to establish and apply a suitable management strategy at each management site for permanent control. The results of this study can be used as useful data for strategic management, such as rapid control in preferential management areas and preemptive, preventive management in areas of possible spread.
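The variable ranges reported above can be turned into a simple rule-of-thumb habitat screen. This is only an illustrative sketch: the actual MaxEnt model outputs a continuous occurrence probability, whereas the hypothetical function below merely flags whether every variable falls inside its reported high-probability range.

```python
def likely_suitable(altitude_m, tmin_coldest_c, precip_driest_mm, dist_river_m):
    """Rule-of-thumb screen built from the variable ranges in which the
    MaxEnt occurrence probability exceeded the threshold in this study.
    Returns True only when all four variables fall in their reported
    high-probability ranges."""
    return (altitude_m < 34                        # below 34 m altitude
            and -5.7 <= tmin_coldest_c <= -0.6     # coldest-month min temp (deg C)
            and 15 <= precip_driest_mm <= 30       # driest-month precipitation (mm)
            and dist_river_m < 1373)               # within 1,373 m of a river
```

Such a screen could serve as a quick first pass over candidate grid cells before running the full model.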

A development of DS/CDMA MODEM architecture and its implementation (DS/CDMA 모뎀 구조와 ASIC Chip Set 개발)

  • 김제우;박종현;김석중;심복태;이홍직
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.6
    • /
    • pp.1210-1230
    • /
    • 1997
  • In this paper, we suggest an architecture for a DS/CDMA transceiver composed of one pilot channel used as a reference and multiple traffic channels. The pilot channel, an unmodulated PN code, is used as the reference signal for PN code synchronization and data demodulation. Coherent demodulation is exploited for the reverse link as well as for the forward link. The characteristics of the suggested DS/CDMA system are as follows. First, we suggest an interlaced quadrature spreading (IQS) method. In this method, the PN code for the I-phase of the 1st channel is used for the Q-phase of the 2nd channel, and the PN code for the Q-phase of the 1st channel is used for the I-phase of the 2nd channel, and so on, which is quite different from the existing spreading schemes of DS/CDMA systems such as IS-95 digital CDMA cellular or W-CDMA for PCS. By IQS spreading, we can drastically reduce the zero-crossing rate of the RF signals. Second, we introduce an adaptive threshold setting for PN code synchronization and an initial acquisition method that uses a single PN code generator and cuts the acquisition time in half compared to existing ones, and we exploit state machines to reduce the reacquisition time. Third, various functions, such as automatic frequency control (AFC), automatic level control (ALC), a bit-error-rate (BER) estimator, and spectral shaping for reducing adjacent-channel interference, are introduced to improve system performance. Fourth, we designed and implemented the DS/CDMA MODEM for variable-transmission-rate applications, from 16 Kbps to 1.024 Mbps. We developed and verified the DS/CDMA MODEM architecture through mathematical analysis and various simulations. The ASIC design was done using VHDL coding and synthesis. To cope with several different kinds of applications, we developed the transmitter and receiver ASICs separately. While a single transmitter or receiver ASIC contains three channels (one for the pilot and the others for the traffic channels), by combining several transmitter ASICs we can expand the number of channels up to 64. The ASICs are now being used to implement line-of-sight (LOS) radio equipment.
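The adaptive-threshold PN acquisition mentioned in the abstract can be sketched as a serial search whose decision threshold adapts to the correlator statistics. The paper does not give its exact threshold rule, so the mean-plus-scaled-deviation rule, the `alpha` value, and the function name below are assumptions, not the authors' design.

```python
import statistics

def acquire(received, pn, alpha=2.0):
    """Serial-search PN acquisition sketch: correlate the received chips
    with every cyclic shift of the local PN code and declare acquisition
    at the first offset whose correlation exceeds an adaptive threshold
    (mean + alpha * stdev of all correlation values)."""
    n = len(pn)
    corrs = [sum(received[(i + k) % n] * pn[i] for i in range(n))
             for k in range(n)]
    thresh = statistics.mean(corrs) + alpha * statistics.pstdev(corrs)
    for k, c in enumerate(corrs):
        if c >= thresh:
            return k  # estimated code-phase offset
    return None  # no offset cleared the threshold
```

Because the threshold tracks the observed correlation statistics rather than a fixed constant, the detector stays usable as received signal levels vary, which is the motivation for adaptive threshold setting.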


Syllabus Design and Pronunciation Teaching

  • Amakawa, Yukiko
    • Proceedings of the KSPS conference
    • /
    • 2000.07a
    • /
    • pp.235-240
    • /
    • 2000
  • In the age of global communication, more human exchange takes place at the grass-roots level. In the old days, language policy and language planning were based on one nation-state with one language, but high waves of globalization have extended human exchange beyond national borders on a daily basis. Under such circumstances, homogeneity in Japan may no longer allow Japanese people to speak and communicate only in Japanese and only with Japanese people. In Japan, an advisory report was submitted to the Ministry of Education in June 1996 about what education should be like in the 21st century. In this report, the introduction of English at public elementary schools was proposed for the first time, and a basic policy of English instruction at the elementary school level was revealed: English instruction is not required at the elementary school level, but each school may choose to introduce English into its curriculum starting April 2002. Baker, Colin (1996) indicates the age of three as the threshold dividing a child becoming bilingual naturally or by formal instruction. There is a movement toward making second language acquisition more naturalistic in educational settings, developing communicative competence in a more or less formal way. From the lesson of the Canadian immersion success, Genesee (1987) stresses the importance of early language instruction. It is clear from a psycholinguistic perspective that most children acquire basic communication skills in their first language apparently effortlessly, without systematic or formal instruction, during the first six or seven years of life. This innate capacity diminishes with age, making language learning increasingly difficult. The author, being a returnee, experienced considerable difficulty acquiring an L2, especially in achieving native-like competence. There will be many hurdles to conquer before Japanese students can reach at least a communicative level in English. It has been said that English is taught not to clear the college entrance examination but to communicate; however, the Japanese college entrance examination still makes students focus on the grammar-translation method, and this is expected to shift to a more communication-oriented approach. Japan does not have to aim at becoming an officially bilingual country, but at least communicative English should be taught at every level in school. Mito College is a small two-year co-ed college in Japan. Students at Mito College are basically not good at English. It has only one department, for business and economics, and English is required for all freshmen. It is necessary for the author to make classes enjoyable and attractive so that students are at least motivated to learn English. The major target is communicative English, so that students may be prepared to use English in various business settings. As an experiment to introduce more communicative English, the author designed the following syllabus, which aims at training students to speak and enjoy English. The 90-minute class (only one 90-minute session per week is most common in Japanese colleges) is divided into two parts: the first half trains students orally using the Graded Direct Method; the latter half uses different materials each time so that students can learn and enjoy English culture and language simultaneously. There are no quizzes or examinations in this one-academic-year program. However, all students are required to compose an original English poem by the end of the spring semester, with 2-6 students working together in a group on one poem. Students coming to Mito College have among the lowest English levels in all of Japan; however, an attached example of one poem made by a group shows that students can improve their creativity as long as they are kept encouraged. At the end of the fall semester, all students are then individually required to make a 3-minute original English speech. An example from that speech contest will be presented at the Convention in Seoul.


Mobile Camera-Based Positioning Method by Applying Landmark Corner Extraction (랜드마크 코너 추출을 적용한 모바일 카메라 기반 위치결정 기법)

  • Yoo Jin Lee;Wansang Yoon;Sooahm Rhee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1309-1320
    • /
    • 2023
  • The technological development and popularization of mobile devices have advanced to the point where users can check their location anywhere and use the Internet. Indoors, however, while the Internet can be used smoothly, the global positioning system (GPS) is difficult to use. There is an increasing need to provide real-time location information in shaded areas where GPS signals are not received, such as department stores, museums, conference halls, schools, and tunnels, which are indoor public places. Accordingly, recent research on indoor positioning has increasingly relied on light detection and ranging (LiDAR) equipment to build landmark databases. Focusing on the accessibility of building a landmark database, this study attempted to develop a technique for estimating the user's location using a single image of a landmark taken with a mobile device and landmark database information constructed in advance. First, a landmark database was constructed. To estimate the user's location from only a mobile image of the landmark, it is essential to detect the landmark in the mobile image and to acquire the ground coordinates of points with fixed characteristics on the detected landmark. In the second step, bag of words (BoW) image search was applied to retrieve the four landmarks in the database most similar to the one photographed in the mobile image. In the third step, one of the four candidate landmarks was selected through scale invariant feature transform (SIFT) feature point extraction and homography random sample consensus (RANSAC); at this stage, filtering was performed once more based on a threshold on the number of matching points. In the fourth step, the landmark image was projected onto the mobile image through the homography matrix between the corresponding landmark and the mobile image to detect the area and corners of the landmark. Finally, the user's location was estimated through the location estimation technique. In a performance analysis, the landmark search performance was measured at about 86%. Comparing the location estimation result with the user's actual ground coordinates confirmed a horizontal location accuracy of about 0.56 m, showing that the user's location can be estimated from a mobile image by constructing a landmark database without separate expensive equipment.
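The retrieval and filtering steps described in this abstract (BoW search for the four most similar landmarks, then a match-count threshold after SIFT/RANSAC) can be sketched with toy data. This is a hedged illustration of the ranking and thresholding logic only; real BoW histograms, SIFT matching, and RANSAC are replaced here by precomputed toy vectors and match counts, and all names are assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def top_candidates(query_vec, db, k=4):
    """BoW-style retrieval: rank landmark descriptor histograms by cosine
    similarity to the query image's histogram and keep the top k."""
    ranked = sorted(db.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

def select_landmark(match_counts, min_matches=20):
    """Second-stage filter: among retrieved candidates, keep the one with
    the most feature matches surviving geometric verification, but only
    if it clears the match-count threshold."""
    name, count = max(match_counts.items(), key=lambda kv: kv[1])
    return name if count >= min_matches else None
```

In the real pipeline, `match_counts` would come from counting SIFT correspondences that survive the RANSAC homography fit, and `min_matches` is the threshold setting the paper tunes.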

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific and proper information difficult to obtain. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products is summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and applied a vector dimension of 300 and a window size of 15 in further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products as one product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports by private firms. A limitation of our study is that the presented model needs to be improved in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity. Also, the product group clustering could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model proposed in this study.
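The grouping-then-summing step described above can be sketched with toy embeddings. This is a minimal illustration of the cosine-similarity threshold idea, not the paper's pipeline: the vectors below stand in for trained Word2Vec embeddings, and the product names, sales figures, and threshold are assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def estimate_market_size(seed, vectors, sales, threshold=0.7):
    """Bottom-up market-size estimate: collect every product whose
    embedding lies within `threshold` cosine similarity of the seed
    term's embedding (e.g. a KSIC index word), then sum those
    products' sales."""
    group = [p for p, vec in vectors.items()
             if p != seed and cosine(vectors[seed], vec) >= threshold]
    return group, sum(sales.get(p, 0) for p in group)
```

Raising or lowering `threshold` widens or narrows the product category, which is how the paper adjusts the granularity of the market definition.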

The Concentration of Economic Power in Korea (경제력집중(經濟力集中) : 기본시각(基本視角)과 정책방향(政策方向))

  • Lee, Kyu-uck
    • KDI Journal of Economic Policy
    • /
    • v.12 no.1
    • /
    • pp.31-68
    • /
    • 1990
  • The concentration of economic power takes the form of one or a few firms controlling a substantial portion of the economic resources and means in a certain economic area. At the same time, to the extent that these firms are owned by a few individuals, resource allocation can be manipulated by them rather than by the impersonal market mechanism. This will impair allocative efficiency, run counter to a decentralized market system and hamper the equitable distribution of wealth. Viewed from the historical evolution of Western capitalism in general, the concentration of economic power is a paradox in that it is a product of the free market system itself. The economic principle of natural discrimination works so that a few big firms preempt scarce resources and market opportunities. Prominent historical examples include trusts in America, Konzern in Germany and Zaibatsu in Japan in the early twentieth century. In other words, the concentration of economic power is the outcome as well as the antithesis of free competition. As long as judgment of the economic system at large depends upon the value systems of individuals, therefore, the issue of how to evaluate the concentration of economic power will inevitably be tinged with ideology. We have witnessed several different approaches to this problem such as communism, fascism and revised capitalism, and the last one seems to be the only surviving alternative. The concentration of economic power in Korea can be summarily represented by the "jaebol," namely, the conglomerate business group, the majority of whose member firms are monopolistic or oligopolistic in their respective markets and are owned by particular individuals. The jaebol has many dimensions in its size, but to sketch its magnitude, the share of the jaebol in the manufacturing sector reached 37.3% in shipment and 17.6% in employment as of 1989. The concentration of economic power can be ascribed to a number of causes. 
In the early stages of economic development, when the market system is immature, entrepreneurship must fill the gap inherent in the market in addition to performing its customary managerial function. Entrepreneurship of this sort is a scarce resource and becomes even more valuable as the target rate of economic growth gets higher. Entrepreneurship can neither be readily obtained in the market nor exhausted despite repeated use. Because of these peculiarities, economic power is bound to be concentrated in the hands of a few entrepreneurs and their business groups. It goes without saying, however, that the issue of whether the full exercise of money-making entrepreneurship is compatible with social mores is a different matter entirely. The rapidity of the concentration of economic power can also be traced to the diversification of business groups. The transplantation of advanced technology oriented toward mass production tends to saturate the small domestic market quite early and allows a firm to expand into new markets by making use of excess capacity and of monopoly profits. One of the reasons why the jaebol issue has become so acute in Korea lies in the nature of the government-business relationship. The Korean government has set economic development as its foremost national goal and, since then, has intervened profoundly in the private sector. Since most strategic industries promoted by the government required a huge capacity in technology, capital and manpower, big firms were favored over smaller firms, and the benefits of industrial policy naturally accrued to large business groups. The concentration of economic power which occurred along the way was, therefore, not necessarily a product of the market system. At the same time, the concentration of ownership in business groups has been left largely intact as they have customarily met capital requirements by means of debt.
The real advantage enjoyed by large business groups lies in synergy due to multiplant and multiproduct production. Even these effects, however, cannot always be considered socially optimal, as they bring disadvantages to other independent firms, for example by foreclosing their markets. Moreover, their fictitious or artificial advantages only aggravate the popular perception that most business groups have accumulated their wealth at the expense of the general public and under the behest of the government. Since Korea now stands at the threshold of establishing a full-fledged market economy along with political democracy, the phenomenon called the concentration of economic power must be correctly understood and the roles of business groups must be accordingly redefined. In doing so, we would do better to take a closer look at Japan, which has experienced the demise of family-controlled Zaibatsu and a success with business groups (Kigyoshudan) whose ownership is dispersed among many firms and ultimately among the general public. The Japanese case cannot be an ideal model, but at least it gives us a good point of departure in that the issue of ownership is at the heart of the matter. In setting the basic direction of public policy aimed at controlling the concentration of economic power, one must harmonize efficiency and equity. Firm size in itself is not a problem, if it is dictated by efficiency considerations and if the firm behaves competitively in the market. As long as entrepreneurship is required for continuous economic growth and there is a discrepancy in entrepreneurial capacity among individuals, a concentration of economic power is bound to take place to some degree. Hence, the most effective way of reducing the inefficiency of business groups may be to impose competitive pressure on their activities. Concurrently, unless the concentration of ownership in business groups is scaled down, the seed of social discontent will still remain.
Nevertheless, the dispersion of ownership requires a number of preconditions and, consequently, we must make consistent, long-term efforts on many fronts. We can suggest a long list of policy measures specifically designed to control the concentration of economic power. Whatever the policy may be, however, its intended effects will not be fully realized unless business groups abide by the moral code expected of socially responsible entrepreneurs. This is especially true, since the root of the problem of the excessive concentration of economic power lies outside the issue of efficiency, in problems concerning distribution, equity, and social justice.


Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in a thousand words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable, since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we built a model that measures the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the '${\varepsilon}$-insensitive loss function' and a 'grid search' to find the optimal values of parameters such as C, d, ${\sigma}^2$, and ${\varepsilon}$. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and the sigmoid function was used as the transfer function of the hidden and output nodes. We repeated the experiments varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables; the stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out set. ANN also outperformed MRA but showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
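The ${\varepsilon}$-insensitive loss that makes the SVR model depend only on a subset of the training data can be illustrated directly. This is a minimal sketch; the function names are assumptions, and the hypothetical `support_vector_mask` simply marks which samples fall outside the ${\varepsilon}$-tube and therefore contribute to the fit.

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Epsilon-insensitive loss: residuals inside the eps-tube cost
    nothing, so only points outside the tube influence the fitted
    SVR model."""
    return max(0.0, abs(y_true - y_pred) - eps)

def support_vector_mask(y, f, eps=0.1):
    # True where the prediction error exceeds eps, i.e. the sample
    # incurs a nonzero loss and acts as a support vector.
    return [abs(a - b) > eps for a, b in zip(y, f)]
```

Widening `eps` shrinks the set of support vectors and flattens the model, which is why ${\varepsilon}$ appears among the parameters tuned by grid search alongside C.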