• Title/Summary/Keyword: Network Technique


Retrieval of Hourly Aerosol Optical Depth Using Top-of-Atmosphere Reflectance from GOCI-II and Machine Learning over South Korea (GOCI-II 대기상한 반사도와 기계학습을 이용한 남한 지역 시간별 에어로졸 광학 두께 산출)

  • Seyoung Yang;Hyunyoung Choi;Jungho Im
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.933-948
    • /
    • 2023
  • Atmospheric aerosols not only have adverse effects on human health but also exert direct and indirect impacts on the climate system. It is therefore imperative to understand the characteristics and spatiotemporal distribution of aerosols. Numerous studies have monitored aerosols, predominantly by retrieving aerosol optical depth (AOD) from satellite observations. This approach, however, primarily relies on look-up-table-based inversion algorithms, which are computationally intensive and carry associated uncertainties. In this study, a novel high-resolution machine learning algorithm for direct AOD retrieval was developed using top-of-atmosphere (TOA) reflectance from the Geostationary Ocean Color Imager-II (GOCI-II), its difference from the past 30-day minimum reflectance, and meteorological variables from numerical models. The Light Gradient Boosting Machine (LGBM) technique was used, and the resulting estimates were rigorously validated through random, temporal, and spatial N-fold cross-validation (CV) against ground-based Aerosol Robotic Network (AERONET) AOD observations. All three CV results consistently demonstrated robust performance, yielding R² = 0.70-0.80, RMSE = 0.08-0.09, and 75.2-85.1% of retrievals within the expected error (EE). The Shapley Additive exPlanations (SHAP) analysis confirmed the substantial influence of reflectance-related variables on AOD estimation. Examination of the spatiotemporal distribution of AOD in Seoul and Ulsan revealed that the developed LGBM model closely agreed with AERONET AOD over time, confirming its suitability for AOD retrieval at high spatiotemporal resolution (i.e., hourly, 250 m). Furthermore, a comparison of data coverage showed that the LGBM model increased data retrieval frequency by approximately 8.8% relative to the GOCI-II L2 AOD products, mitigating the excessive masking over bright surfaces often encountered in physics-based AOD retrieval.

Radiation Dose Reduction in Digital Mammography by Deep-Learning Algorithm Image Reconstruction: A Preliminary Study (딥러닝 알고리즘을 이용한 저선량 디지털 유방 촬영 영상의 복원: 예비 연구)

  • Su Min Ha;Hak Hee Kim;Eunhee Kang;Bo Kyoung Seo;Nami Choi;Tae Hee Kim;You Jin Ku;Jong Chul Ye
    • Journal of the Korean Society of Radiology
    • /
    • v.83 no.2
    • /
    • pp.344-359
    • /
    • 2022
  • Purpose To develop a denoising convolutional neural network-based image processing technique and investigate its efficacy in diagnosing breast cancer using low-dose mammography imaging. Materials and Methods Six breast radiologists were included in this prospective study. All radiologists independently evaluated low-dose images for lesion detection and rated them for diagnostic quality using a qualitative scale. After application of the denoising network, the same radiologists evaluated lesion detectability and image quality. For clinical application, a consensus on lesion type and localization on preoperative mammographic examinations of breast cancer patients was reached after discussion. Thereafter, coded low-dose, reconstructed full-dose, and full-dose images were presented and assessed in random order. Results With mastectomy specimens as the reference, lesions were better perceived on 40%-dose images reconstructed to full dose than on the corresponding low-dose images. In clinical application, compared with the 40% reconstructed images, higher values were given to full-dose images for resolution (p < 0.001), diagnostic quality for calcifications (p < 0.001), and diagnostic quality for masses, asymmetry, or architectural distortion (p = 0.037). The 40% reconstructed images showed values comparable to 100% full-dose images for overall quality (p = 0.547), lesion visibility (p = 0.120), and contrast (p = 0.083), without significant differences. Conclusion Effective denoising and image reconstruction processing techniques can enable breast cancer diagnosis with substantial radiation dose reduction.

A Model of the Hospice Care Delivery System (호스피스 전달체계 모형)

  • Choe, Hwa-Suk
    • Korean Journal of Hospice Care
    • /
    • v.1 no.1
    • /
    • pp.46-69
    • /
    • 2001
  • Hospice care is the best way to care for terminally ill patients and their family members. However, most of them cannot receive appropriate hospice services, because the Korean health delivery system is focused mainly on acutely ill patients. This study was carried out to clarify the situation of hospice care in Korea and to develop a hospice care delivery system model appropriate to the Korean context. The theoretical framework of this study, in which the hospice care delivery system is composed of hospice resources (personnel, facilities, etc.), government and non-government hospice organizations, hospice finances, hospice management, and hospice delivery, was taken from the Health Delivery System of the WHO (1984). Data were obtained through literature analysis, interviews, questionnaires, site visits, and the Delphi technique, from October 1998 to April 1999, involving 56 hospices, 1 hospice research center, 3 non-government hospice organizations, and 20 experts with more than 3 years of hospice experience (mean: 9 years and 5 months), as well as officials or members of the 3 non-government hospice organizations. There are 61 hospices in Korea. Although hospice personnel have tried to study and provide qualified hospice services, there is no formal hospice linkage or network in Korea; this is the finding of the survey made to clarify the situation of Korean hospice care. The results of the Delphi study were as follows. 1. Hospice resources: Key hospice personnel were found to be the hospice coordinator, doctor, nurse, clergy, social worker, and volunteers. The necessary qualifications for all personnel were good health, hospice education, and communication skills. Education for hospice personnel is divided into (i) basic training and (ii) special education, e.g., a palliative medicine course for hospice specialists or a master's-level palliative care course for hospice nurse specialists. Hospice facilities could be developed by adding a living room, a space for family members, a prayer room, a church, an interview room, a kitchen, a dining room, a bath facility, a hall for music, art, or work therapy, a volunteers' room, a garden, etc. to hospital facilities. 2. Hospice organization: While three non-government hospice organizations are active at present, in the near future a hospice officer in the Ministry of Health and Welfare plus a government hospice body will be necessary; a non-government council to further integrate hospice development is also strongly recommended. 3. Hospice finances: New insurance standards (i.e., charges for hospice care services), public information, and tax reductions for donations were suggested as methods to raise the hospice budget. 4. Hospice management: Two divisions of hospice management/care were considered necessary in the future. The role of the hospice officer in the Ministry of Health and Welfare would be quality control of the hospice teams and facilities associated with the hospice insurance standards, while the new non-government integrating council's role would be to support the development of hospice care not covered by insurance. 5. Hospice delivery: Linkage and networking between hospice facilities and first-, second-, and third-level medical institutions are needed in order to provide varied and continuous hospice care. Hospice acts need to be established within the limits of medical law with regard to standards for professional staff members, educational programs, etc. The results of this study were used to develop two hospice care delivery system models, A and B. Model A is based on the hospital, especially the hospice unit, because in this setting the new medical insurance for hospice care is more easily available; a hospice team is therefore organized in the hospital and may operate in the hospice unit and in the home hospice care service. After Model A is set up and operating, Model B will be the next stage, in which medical insurance coverage will be extended to the home hospice care service. Model B is also based on the hospital, but the focus of the hospital hospice unit will move to home hospice care, connected with local physicians, national public health centers, and community bodies such as churches or volunteer groups. Model B will contribute to the care of terminally ill patients and their family members, and will also assist hospital administrators with cost-effectiveness.

Compact Orthomode Transducer for Field Experiments of Radar Backscatter at L-band (L-밴드 대역 레이더 후방 산란 측정용 소형 직교 모드 변환기)

  • Hwang, Ji-Hwan;Kwon, Soon-Gu;Joo, Jeong-Myeong;Oh, Yi-Sok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.22 no.7
    • /
    • pp.711-719
    • /
    • 2011
  • A study on the miniaturization of an L-band orthomode transducer (OMT) for field experiments of radar backscatter is presented in this paper. Thanks to a newly designed junction structure based on a waveguide taper, the proposed OMT requires no additional waveguide taper structures to connect with a standard adaptor. The total length of the L-band OMT is about 1.2 λ₀ (310 mm), roughly 60% of the size of existing OMTs. To improve the matching and isolation performance of each polarization, two conducting posts are inserted. A bandwidth of 420 MHz and an isolation level of about 40 dB were measured in the operating frequency band. STCT and 2DTST were applied to the L-band scatterometer, consisting of the manufactured OMT, a horn antenna, and a network analyzer (Agilent 8753E), to analyze the measurement accuracy of radar backscatter. The full-polarimetric RCSs of the test target, a 55 cm trihedral corner reflector, measured by the calibrated scatterometer have errors of -0.2 dB and 0.25 dB for vv- and hh-polarization, respectively. The effective isolation level is about 35.8 dB in the operating frequency band. The horn antenna used for the measurements has a length of 300 mm, an aperture size of 450 × 450 mm², and HPBWs of 29.5° and 36.5° in the principal E- and H-planes.
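
The measurement accuracy above is referenced to a 55 cm trihedral corner reflector. Its theoretical peak RCS follows the standard triangular-trihedral formula σ = 4πa⁴/(3λ²); the sketch below evaluates it at an assumed L-band frequency of 1.27 GHz (the paper's exact operating frequency is not restated here).

```python
import math

def trihedral_rcs(edge_m: float, freq_hz: float) -> float:
    """Peak RCS (m^2) of a triangular trihedral corner reflector,
    sigma = 4*pi*a^4 / (3*lambda^2), with a the inner edge length."""
    c = 299_792_458.0                 # speed of light, m/s
    lam = c / freq_hz                 # free-space wavelength
    return 4.0 * math.pi * edge_m ** 4 / (3.0 * lam ** 2)

# 55 cm trihedral at an assumed L-band frequency of 1.27 GHz
sigma = trihedral_rcs(0.55, 1.27e9)          # ~6.9 m^2
sigma_dbsm = 10.0 * math.log10(sigma)        # ~8.4 dBsm
```

Comparing the measured RCS against this theoretical value is what yields the quoted -0.2 dB and 0.25 dB calibration errors.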

Measurement of Backscattering Coefficients of Rice Canopy Using a Ground Polarimetric Scatterometer System (지상관측 레이다 산란계를 이용한 벼 군락의 후방산란계수 측정)

  • Hong, Jin-Young;Kim, Yi-Hyun;Oh, Yi-Sok;Hong, Suk-Young
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.2
    • /
    • pp.145-152
    • /
    • 2007
  • The polarimetric backscattering coefficients of a wet-land rice field, an experimental plot belonging to the National Institute of Agricultural Science and Technology in Suwon, were measured using ground-based polarimetric scatterometers at 1.8 and 5.3 GHz throughout a growth year, from transplanting to harvest (May to October 2006). The polarimetric scatterometers consist of a vector network analyzer with a time-gating function and a polarimetric antenna set, and are calibrated to obtain VV-, HV-, VH-, and HH-polarized backscattering coefficients from the measurements, based on a single-target calibration technique using a trihedral corner reflector. The polarimetric backscattering coefficients were measured at 30°, 40°, 50°, and 60° with 30 independent samples for each incidence angle at each frequency. During the measurement periods, ground truth data including fresh and dry biomass, plant height, stem density, leaf area, specific leaf area, and moisture contents were also collected for each measurement. The temporal variations of the measured backscattering coefficients, as well as the measured plant height, LAI (leaf area index), and biomass, were analyzed. The measured polarimetric backscattering coefficients were then compared with the rice growth parameters. The measured plant height increases monotonically, while the measured LAI increases only until the ripening period and decreases afterward. The measured backscattering coefficients are fitted with polynomial expressions as functions of growth age, plant LAI, and plant height for each polarization, frequency, and incidence angle. At larger incidence angles, the L-band signatures correlated more strongly with rice growth than the C-band signatures. The HH-polarized backscattering coefficients were found to be more sensitive than the VV-polarized coefficients to growth age and the other input parameters. To derive functions for estimating rice growth, it is necessary to divide the data according to growth periods marked by qualitative changes such as panicle initiation, flowering, or heading.
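
The polynomial fitting described above can be sketched as follows; the HH backscatter and LAI values are illustrative stand-ins, not the paper's measurements.

```python
import numpy as np

# Hypothetical HH-polarized backscatter (dB) vs. LAI samples; values are
# illustrative only, shaped like a saturating growth curve.
lai      = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 5.5])
sigma_hh = np.array([-18.0, -15.5, -12.0, -10.0, -9.0, -8.5, -8.3])

# Second-order polynomial fit, mirroring the paper's polynomial expressions
# of backscatter as functions of growth variables.
coeffs = np.polyfit(lai, sigma_hh, deg=2)
fit = np.poly1d(coeffs)

# Coefficient of determination of the fit
residuals = sigma_hh - fit(lai)
r2 = 1.0 - residuals.var() / sigma_hh.var()
```

Splitting the samples by growth period (e.g., before and after heading) and fitting each subset separately is the refinement the abstract's last sentence calls for.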

Minimizing Estimation Errors of a Wind Velocity Forecasting Technique That Functions as an Early Warning System in the Agricultural Sector (농업기상재해 조기경보시스템의 풍속 예측 기법 개선 연구)

  • Kim, Soo-ock;Park, Joo-Hyeon;Hwang, Kyu-Hong
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.24 no.2
    • /
    • pp.63-77
    • /
    • 2022
  • Our aim was to reduce the estimation errors of a wind velocity model used in an early warning system for weather risk management in the agricultural sector. The Rural Development Administration (RDA) agricultural weather observation network's wind velocity data and the corresponding estimated data from January to December 2020 were used to calculate linear regression equations (Y = aX + b). In each linear regression, the wind estimation error at 87 points and eight time slots per day (00:00, 03:00, 06:00, 09:00, 12:00, 15:00, 18:00, and 21:00) is the dependent variable (Y), while the estimated wind velocity is the independent variable (X). When the correlation coefficient exceeded 0.5, the regression equation was used as the wind velocity correction equation; when it was less than 0.5, the mean error (ME) at the corresponding points and time slots was substituted as the correction value instead. To enable use of the wind velocity model at a national scale, a distribution map with a grid resolution of 250 m was created by spatially interpolating the regression coefficients (a and b), the correlation coefficient (R), and the ME values for the 87 points and eight time slots with an inverse distance weighted (IDW) technique. Interpolated grid values for 13 weather observation points in rural areas were then extracted. The wind velocity estimation errors for the 13 points from January to December 2019 were corrected and compared with the system's values. After correction, the mean ME of the wind velocities decreased from 0.68 m/s to 0.45 m/s, and the mean RMSE decreased from 1.30 m/s to 1.05 m/s. In conclusion, the system's wind velocities were overestimated across all time slots; after the correction model was applied, the overestimation was reduced in all time slots except 15:00. The ME and RMSE improved by 33% and 19.2%, respectively. In our system, the warning for wind damage risk to crops is driven by the daily maximum wind speed derived from the daily mean wind speed obtained eight times per day. This approach is expected to reduce false alarms for strong wind risk by reducing the overestimation of wind velocities.
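
The per-point, per-time-slot correction logic described above (use the error regression when |R| > 0.5, fall back to the mean error otherwise) can be sketched with synthetic data; all names and numbers below are hypothetical.

```python
import numpy as np

def correction_for_slot(estimated, observed, r_threshold=0.5):
    """Build a correction function for one point/time slot: regress the
    estimation error (Y) on the estimated wind speed (X) as Y = aX + b;
    fall back to the mean error (ME) when the correlation is weak."""
    error = observed - estimated                     # Y: estimation error
    r = np.corrcoef(estimated, error)[0, 1]
    if abs(r) > r_threshold:
        a, b = np.polyfit(estimated, error, deg=1)   # Y = aX + b
        return lambda x: x + (a * x + b)
    me = error.mean()
    return lambda x: x + me

rng = np.random.default_rng(1)
est = rng.uniform(1.0, 8.0, 200)                     # estimated wind speeds (m/s)
obs = 0.8 * est + 0.3 + 0.2 * rng.normal(size=200)   # truth: model overestimates

correct = correction_for_slot(est, obs)
corrected = correct(est)
me_before = float(np.mean(obs - est))
me_after = float(np.mean(obs - corrected))
```

At national scale, the fitted a, b, R, and ME values per point and slot would then be interpolated onto the 250 m grid with IDW, as the abstract describes.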

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.53-77
    • /
    • 2012
  • This study analyzes the differences in contents and tones of argument among three major Korean newspapers: the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly deliver their own tone of argument when covering sensitive issues and topics. This can be problematic if readers consume the news without being aware of that tone, because both the contents and the tone of argument can easily influence readers. It is therefore desirable to have a tool that can inform readers of a newspaper's tone of argument. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main subjects in the newspapers, namely Culture, Politics, International, Editorial-opinion, Eco-business, and National issues, and attempt to identify differences and similarities among the papers. The basic unit of the text mining analysis is a paragraph of a news article. The study uses a keyword-network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean integrated news database system, which preserves the news articles of the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo and is open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered for specific issues: the International section with the keyword 'Nuclear weapon of North Korea,' the National issues section with the keyword '4-major-river,' and the Politics section with the keyword 'Tonghap-Jinbo Dang.' All articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected.
All of the collected data were edited into paragraphs. We removed stop words using the Lucene Korean Module, calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in each paragraph, and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input for PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each paper, we examined the list of the 10 highest-frequency keywords and the keyword networks of the 20 highest-ranking keywords, closely examining their relationships in a detailed network map. We used the NodeXL software to visualize the PFNet. After drawing all the networks, we compared the results with the classification results. Classification was first performed to identify how each newspaper's tone of argument differs from the others. To analyze tones of argument, all paragraphs were divided into two types, positive and negative. To classify the tones of all collected paragraphs and articles, a supervised learning technique was used: the Naïve Bayesian classifier provided in the MALLET package classified all paragraphs in the articles. After classification, precision, recall, and F-value were used to evaluate the results. Based on the results of this study, three subjects, Culture, Eco-business, and Politics, showed differences in contents and tones of argument among the three newspapers. In addition, for the National issues, the tones of argument on the 4-major-rivers project differed from each other; the three newspapers appear to have their own specific tone of argument in those sections. The keyword networks also took different shapes for the same period and the same section, meaning that the frequently appearing keywords differ and the contents comprise different keywords. The positive-negative classification showed the possibility of classifying newspapers' tones of argument relative to one another. These results indicate that the approach in this study is promising as a new tool to identify the differing tones of argument of newspapers.
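
The co-occurrence and cosine-matrix steps described above can be sketched on a toy corpus; the keywords and counts are illustrative only, and the PFNet/NodeXL visualization steps are omitted.

```python
import numpy as np
from itertools import combinations

# Toy paragraphs (keyword lists) standing in for the tokenized news paragraphs.
paragraphs = [
    ["river", "project", "government", "budget"],
    ["river", "environment", "government"],
    ["election", "party", "government"],
    ["party", "budget", "election"],
]

vocab = sorted({w for p in paragraphs for w in p})
idx = {w: i for i, w in enumerate(vocab)}

# Keyword co-occurrence counts: each pair of keywords in a paragraph co-occurs once.
cooc = np.zeros((len(vocab), len(vocab)))
for p in paragraphs:
    for a, b in combinations(sorted(set(p)), 2):
        cooc[idx[a], idx[b]] += 1
        cooc[idx[b], idx[a]] += 1

# Cosine coefficient matrix over the co-occurrence rows
# (the input the paper feeds to PFNet).
norms = np.linalg.norm(cooc, axis=1, keepdims=True)
norms[norms == 0] = 1.0
cosine = (cooc / norms) @ (cooc / norms).T
```

Pathfinder network scaling would then prune this similarity matrix down to its salient links before visualization.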

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the conditions of Big Data in terms of the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, this information can be used as an important new source for the creation of new value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over a month; (3) convey the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time-series graph of a keyword via keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to handle various unrefined forms of unstructured data. It also requires up-to-date big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. TITS therefore uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; interaction with the data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, which consists of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the effectiveness of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The study conducted experiments with nearly 150 million tweets in Korea during March 2013.

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, which poses the difficulty of considering both the characteristics of multidimensional data and those of time series data. When dealing with multidimensional data, correlations between variables should be considered; existing methods, such as probability- and linear-model-based or distance-based approaches, degrade due to the limitation known as the curse of dimensionality. In addition, time series data is preprocessed by applying the sliding window technique and time series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. The regression analysis method learns a regression formula based on parametric statistics and detects abnormality by comparing predicted and actual values; its performance drops when the model is not solid or when the data contains noise or outliers, and it is restricted to training data free of noise and outliers. The autoencoder, an artificial neural network trained to reproduce its input as closely as possible at its output, has many advantages over existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy probability-distribution or linearity assumptions, and it can learn without labeled training data. However, it is limited in identifying local outliers in multidimensional data, and the dimensionality of the data greatly increases due to the characteristics of time series data. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve the identification of local outliers in multidimensional data. Multimodal models are commonly used to learn from different types of inputs, such as voice and image; the different modals share the autoencoder's bottleneck and learn their correlations. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition, to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance over 41 variables was examined for the proposed model and the comparison models. Restoration performance differs by variable; restoration works well, with small loss values, for the Memory, Disk, and Network modals in all three autoencoder models. The Process modal did not show a significant difference across the three models, and the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared; on all indicators the ranking was CMAE, MAE, then UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all abnormalities. The model's accuracy also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has a further advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and the dimensional increase they cause can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
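
The idea of conditioning each modal on time, rather than expanding dimensionality with sliding windows, can be sketched as follows. This is a hypothetical illustration of the input construction only, not the paper's architecture: the modal feature values and the 2-D cyclic hour encoding are assumptions.

```python
import numpy as np

def time_condition(hour: float) -> np.ndarray:
    """Encode the hour of day as a 2-D cyclic condition vector, so a
    conditional autoencoder can learn daily periodicity without
    sliding-window expansion of the input."""
    angle = 2.0 * np.pi * hour / 24.0
    return np.array([np.sin(angle), np.cos(angle)])

# Hypothetical per-modal feature vectors for one monitoring observation.
modal_inputs = {
    "cpu":     np.array([0.62, 0.55]),
    "memory":  np.array([0.71]),
    "disk":    np.array([0.10, 0.08]),
    "network": np.array([0.33]),
}

# Conditional multimodal input: each modal concatenated with the shared
# time condition before entering its encoder branch.
hour = 14
conditioned = {
    name: np.concatenate([x, time_condition(hour)])
    for name, x in modal_inputs.items()
}
```

Each conditioned modal would then feed its own encoder branch into the shared bottleneck, and the per-modal reconstruction error would serve as the anomaly score.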

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been realized in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, the MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable, since MRA can only explain linear relationships between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction. Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid-search technique to find the optimal values of parameters such as C, d, σ², and ε. For the ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and the sigmoid function was used as the transfer function of the hidden and output nodes. We repeated the experiments, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events, and MAE (Mean Absolute Error) was used as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared with MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA, but showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who are willing to build models for recognizing human emotions.
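
The SVR setup described above (ε-insensitive loss tuned by grid search) can be sketched with scikit-learn on synthetic data; the features, target, and grid values below are hypothetical stand-ins for the facial-feature inputs and arousal/valence targets.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

# Synthetic stand-ins for facial-feature inputs and an arousal-level target.
X = rng.normal(size=(150, 5))
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2]) + 0.1 * rng.normal(size=150)

# Grid search over C and epsilon of an epsilon-insensitive RBF SVR,
# mirroring the paper's grid-search tuning of C, sigma^2 and epsilon;
# MAE is used as the selection criterion, as in the paper.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [0.1, 1.0, 10.0], "epsilon": [0.01, 0.1, 0.5]},
    scoring="neg_mean_absolute_error",
    cv=3,
)
grid.fit(X, y)
mae = -grid.best_score_    # cross-validated MAE of the best configuration
```

Only the training points outside the ε-tube end up as support vectors, which is why the fitted model depends on a subset of the data.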