• Title/Summary/Keyword: Modeling performance


Predicting Forest Gross Primary Production Using Machine Learning Algorithms (머신러닝 기법의 산림 총일차생산성 예측 모델 비교)

  • Lee, Bora;Jang, Keunchang;Kim, Eunsook;Kang, Minseok;Chun, Jung-Hwa;Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.1 / pp.29-41 / 2019
  • Terrestrial Gross Primary Production (GPP) is the largest global carbon flux, and forest ecosystems are important because they store far more carbon than other terrestrial ecosystems. Several attempts have been made to estimate GPP using mechanism-based models. However, mechanism-based models that incorporate biological, chemical, and physical processes lack the flexibility to predict non-stationary ecological processes driven by local and global change. Mechanism-free methods are instead strongly recommended for estimating nonlinear dynamics, such as GPP, that occur in nature. Therefore, we used mechanism-free machine learning techniques to estimate daily GPP. In this study, a support vector machine (SVM), random forest (RF), and artificial neural network (ANN) were compared with a traditional multiple linear regression model (LM). MODIS products and meteorological parameters from eddy covariance data from 2006 to 2013 were used to train the machine learning and LM models. The GPP prediction models were compared with daily GPP from eddy covariance measurements in a deciduous forest in South Korea in 2014 and 2015. Statistical measures, including the correlation coefficient (R), root mean square error (RMSE), and mean squared error (MSE), were used to evaluate model performance. In general, the machine-learning models (R = 0.85 - 0.93, MSE = 1.00 - 2.05, p < 0.001) outperformed the linear regression model (R = 0.82 - 0.92, MSE = 1.24 - 2.45, p < 0.001). These results highlight the high predictability of mechanism-free machine-learning models combined with remote sensing, and the potential to extend them to non-stationary ecological processes such as seasonal GPP.
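
A rough sense of how such a comparison can be set up is sketched below, assuming scikit-learn and synthetic predictors in place of the MODIS products and eddy-covariance inputs used by the authors; model settings and variable names are illustrative only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))   # stand-ins for remote-sensing and meteorological predictors
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.3 * rng.normal(size=2000)  # synthetic daily GPP

X_train, y_train = X[:1500], y[:1500]   # analogous to the 2006-2013 training period
X_test, y_test = X[1500:], y[1500:]     # analogous to the 2014-2015 evaluation period

models = {
    "LM": LinearRegression(),
    "SVM": SVR(kernel="rbf"),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    r = np.corrcoef(y_test, pred)[0, 1]
    mse = mean_squared_error(y_test, pred)
    print(f"{name}: R = {r:.2f}, MSE = {mse:.2f}, RMSE = {np.sqrt(mse):.2f}")
```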

Satellite-Based Cabbage and Radish Yield Prediction Using Deep Learning in Kangwon-do (딥러닝을 활용한 위성영상 기반의 강원도 지역의 배추와 무 수확량 예측)

  • Hyebin Park;Yejin Lee;Seonyoung Park
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.1031-1042 / 2023
  • In this study, a deep learning model was developed to predict the yield of cabbage and radish, which are among the five major supply-and-demand management vegetables, using Landsat 8 satellite images. To predict cabbage and radish yield in Gangwon-do from 2015 to 2020, satellite images from June to September, the crops' growing period, were used. The normalized difference vegetation index, enhanced vegetation index, leaf area index, and land surface temperature were employed as input data for the yield model. Crop yields can be effectively predicted from satellite images because satellites collect continuous spatiotemporal data on the global environment. Based on a model developed in a previous study, a model adapted to these input data was proposed here. A convolutional neural network, a deep learning model, was used to predict crop yield from the time-series satellite images. Landsat 8 provides images every 16 days, but images are difficult to acquire, especially in summer, because of weather conditions such as clouds. Yield prediction was therefore conducted over two periods, June-July and August-September. Yield prediction was also performed with a machine learning approach and reference models, and modeling performance was compared. The model's performance and early predictability were assessed through year-by-year cross-validation and early prediction. The findings of this study could serve as a basis for predicting the yield of field crops in Korea.
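
The abstract does not give the network architecture, but the general shape of a CNN that maps stacked index images from several acquisition dates to a yield value can be sketched as below (PyTorch, with illustrative channel counts, patch size, and layer sizes; not the authors' model):

```python
import torch
import torch.nn as nn

class YieldCNN(nn.Module):
    """Toy CNN: stacked index/date channels in, one yield value out."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # regression output: predicted yield

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)

# Example: 4 indices (NDVI, EVI, LAI, LST) x 2 acquisition dates stacked as 8 channels
model = YieldCNN(in_channels=8)
patch = torch.randn(1, 8, 64, 64)      # one 64x64 image patch
print(model(patch).shape)              # torch.Size([1, 1])
```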

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.127-146 / 2017
  • Services using artificial intelligence have begun to appear in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs used Kensho's artificial intelligence technology to improve its stock trading process: two stock traders could handle the work of 600, and analytical work that had taken 15 people four weeks could be processed in five minutes. In particular, big data analysis through machine learning is being actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning are also actively studied, and machine learning overcomes the linearity limits of traditional financial time series models. Quantitative studies based on past stock market data widely use artificial intelligence to forecast future movements of stock prices or indices, and other studies learn from large amounts of text data, such as news and comments related to the stock market, to predict the future direction of the market or of individual stock prices. Investing in commodity assets, one type of alternative asset, is usually used to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial area. In this study, we built an investment model using the Support Vector Machine (SVM), one of the machine learning models. Some studies on commodity assets focus on price prediction for a specific commodity, but research on commodity investment models for asset allocation using machine learning is hard to find. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), Dow Jones UBS Commodity Index (DJUI), Thomson Reuters/Core Commodity CRB Index (TRCI), and Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME market and have sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We built an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. Because commodity assets are closely tied to macroeconomic activity, we used 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as model inputs: 14 US economic indicators, two Chinese economic indicators, and two Korean economic indicators. The data period runs from January 1990 to May 2017; the first 195 monthly observations were used as training data and the remaining 125 as test data.
In this study, we verified that the equally weighted commodity futures portfolio rebalanced by the SVM model performs better than the commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors. The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar under a different data split, so we also used odd-numbered years as training data and even-numbered years as test data and confirmed that the results were similar. As a result, when allocating commodity assets to a traditional portfolio of stocks, bonds, and cash, more effective investment performance can be obtained by investing in commodity futures rather than in commodity indices, and especially by using a commodity futures portfolio rebalanced by the SVM model.
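
As a rough illustration of the forecasting setup, the sketch below trains an SVM classifier to predict a monthly up/down direction from macroeconomic indicators, using synthetic data and an assumed 195/125 train-test split; it is not the authors' exact specification.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n_months, n_indicators = 320, 19
X = rng.normal(size=(n_months, n_indicators))        # stand-ins for the macroeconomic indicators
y = X[:, 0] + 0.5 * rng.normal(size=n_months) > 0    # synthetic next-month up/down label

X_train, y_train = X[:195], y[:195]                  # first 195 months for training
X_test, y_test = X[195:], y[195:]                    # remaining months for testing

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"directional accuracy on the test period: {clf.score(X_test, y_test):.2%}")
```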

A Comparative Analysis of Social Commerce and Open Market Using User Reviews in Korean Mobile Commerce (사용자 리뷰를 통한 소셜커머스와 오픈마켓의 이용경험 비교분석)

  • Chae, Seung Hoon;Lim, Jay Ick;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.53-77 / 2015
  • Mobile commerce provides a convenient shopping experience in which users can buy products without the constraints of time and space. Mobile commerce has already become a mega trend in Korea, with a market size estimated at approximately 15 trillion won (KRW) for 2015. In the Korean market, social commerce and the open market are the key players, and social commerce overwhelms the open market in terms of the number of users. From the industry's point of view, quick market entry and content curation are considered the major success factors behind the rapid growth of social commerce. However, empirical academic research analyzing the success of social commerce is still insufficient. Going forward, social commerce and the open market are expected to compete intensively in Korean mobile commerce, so it is important to conduct an empirical analysis of the differences in user experience between them. This paper is an exploratory study that presents a comparative analysis of the user experience of social commerce and the open market based on mobile users' reviews. First, approximately 10,000 user reviews of social commerce and open market applications listed on Google Play were collected. The reviews were classified through LDA topic modeling into topics corresponding to perceived usefulness and perceived ease of use. Then, sentiment analysis and co-occurrence analysis were conducted on these topics. The results demonstrate that social commerce users have a more positive experience than open market users in terms of service usefulness and convenience. Social commerce has provided positive user experiences in service areas such as 'delivery,' 'coupon,' and 'discount,' while the open market faces user complaints about technical problems and inconveniences such as 'login error,' 'view details,' and 'stoppage.' This result shows that social commerce performs well in terms of service experience, owing to its aggressive marketing campaigns and investments in logistics infrastructure, whereas the open market still suffers from mobile optimization problems and has not resolved user complaints stemming from technical issues. This study presents an exploratory research method that analyzes user experience through an empirical approach to user reviews. In contrast to previous studies, which relied on surveys, this study uses an empirical analysis of user reviews that reflects users' vivid, actual experiences. Specifically, by combining an LDA topic model with the TAM, the study presents a methodology that analyzes user reviews effectively by dividing them into service areas and technical areas from a new perspective. This methodology not only demonstrates the differences in user experience between social commerce and the open market, but also provides a deep understanding of user experience in Korean mobile commerce.
In addition, the results have important implications for social commerce and the open market by showing that user insights can be utilized in establishing competitive and groundbreaking market strategies. The limitations and directions for follow-up research are as follows. Follow-up studies will need to design more elaborate text analysis techniques; this study could not fully clean the user reviews, which inherently contain typos and mistakes. Nevertheless, this study has shown that user reviews are an invaluable source for analyzing user experience, and its methodology can be expected to extend comparative research on services using user reviews. Even now, users around the world are posting reviews of their service experiences with mobile game, commerce, and messenger applications.
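
A minimal sketch of the LDA step, assuming scikit-learn and a tiny list of made-up English reviews in place of the roughly 10,000 Korean reviews collected from Google Play, is shown below.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "fast delivery and great coupon discounts",
    "login error keeps happening, app stops on view details",
    "discount coupons make shopping very convenient",
    "the app crashes, cannot see product details after update",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(reviews)              # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")   # e.g. a 'service' topic vs. a 'technical problem' topic
```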

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.101-124 / 2018
  • Recently, most technologies have developed either through the advancement of a single technology or through interaction with other technologies. In particular, such technologies have the characteristic of convergence arising from the interaction between two or more techniques. Efforts to respond to technological change in advance, by forecasting the promising convergence technologies that will emerge in the near future, are also continuously increasing. Accordingly, many researchers are attempting various analyses for forecasting promising convergence technologies. A convergence technology carries the characteristics of several technologies, according to the way it was generated, so forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been confirmed in attempts to forecast promising technologies using big data analysis and social network analysis. Data-driven studies of convergence technology are actively conducted around discovering new convergence technologies and analyzing their trends, and information about new convergence technologies is therefore more abundant than in the past. However, existing methods for analyzing convergence technology have several limitations. First, most studies analyze data through predefined technology classifications. Recent technologies tend to be convergent and thus consist of technologies from various fields, so a new convergence technology may not belong to any predefined class; the existing approach therefore does not properly reflect the dynamic change of the convergence phenomenon. Second, to forecast promising convergence technologies, most existing analyses use general-purpose indicators, which do not fully exploit the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates; depending on how those technologies change, it can grow into an independent field or disappear rapidly. In existing analyses, the growth potential of a convergence technology is judged with traditional general-purpose indicators that do not reflect the principle of convergence, namely that new technologies emerge from two or more mature technologies and that grown technologies in turn affect the creation of other technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. Because of the complexity of the field, forecasting promising technologies has received relatively little attention in convergence technology research, and it is therefore difficult to find a method for evaluating the accuracy of such forecasting models. To activate this field, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study.
To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling, we derive a new technology classification in terms of text content, which reflects the dynamic change of the actual technology market rather than a fixed classification standard. We then identify the influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. We also devise a centrality indicator, potential growth centrality (PGC), to forecast the future growth of a technology from its centrality information; it reflects the convergence characteristics of each technology in terms of technology maturity and interdependence between technologies. In addition, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. We conduct experiments with 13,477 patent documents to evaluate the performance and practical applicability of the proposed method. The results confirm that the forecasting model based on the proposed centrality indicator achieves forecast accuracy up to about 2.88 times higher than models based on currently used network indicators.
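
To give a concrete feel for the network step, the sketch below builds a small weighted technology-influence network and ranks nodes by a standard centrality; the edge weights, topic names, and the PageRank stand-in are illustrative assumptions, not the authors' PGC indicator, which the paper defines from technology maturity and interdependence.

```python
import networkx as nx

# Directed edges "source technology -> influenced technology"; in the paper the
# weights come from topic correspondence weights of the patent documents.
edges = [
    ("sensor", "IoT", 0.8),
    ("wireless", "IoT", 0.6),
    ("IoT", "smart_factory", 0.7),
    ("machine_learning", "smart_factory", 0.5),
]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)

scores = nx.pagerank(G, weight="weight")   # stand-in centrality for ranking
for tech, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{tech}: {score:.3f}")
```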

Effects of Regular Exercise and L-Arginine Intake on Abdominal Fat, GH/IGF-1 Axis, and Circulating Inflammatory Markers in the High Fat Diet-Induced Obese Aged Rat (규칙적인 운동과 L-arginine의 섭취가 고지방식이 유도 비만 노화생쥐의 복부지방량, GH/IGF-1 axis 및 혈관염증지표에 미치는 영향)

  • Park, Sok;Sung, Ki-Woon;Lee, Jin;Lee, Cheon-Ho;Lee, Young-Jun;Yoo, Young-June;Park, Kyoung-Shil;Min, Byung-Jin;Shin, Yong-Sub;Kim, Jung-Suk;Jung, Hun
    • Journal of Life Science / v.22 no.4 / pp.516-523 / 2012
  • The purpose of this study was to investigate the effects of exercise and/or L-arginine intake on abdominal fat, IGF-1 in the GH/IGF-1 axis, fibrinogen, and PAI-1 in aged, obese rats. Male Sprague-Dawley rats were treated with the aging-inducing agent D-galactose (50 mg/kg, intraperitoneally) for 12 weeks. Thirty-two rats were divided into four groups: an aging high-fat diet group (AG+HF), AG+HF with L-arginine intake (AG+LA), AG+HF with exercise (AG+EX), and AG+EX with L-arginine intake (AG+LA+EX). The experimental rats underwent treadmill training (60 min/day, 6 days/week at 0% gradient) for 12 weeks, and L-arginine was given orally (150 mg/kg/day) for 12 weeks. After the experiment, blood was collected from the left ventricle and abdominal fat was extracted. The results showed that GH was significantly increased in the AG+EX and AG+LA+EX groups. IGF-1 was significantly increased in both the AG+LA+EX and AG+EX groups (p < 0.05), while fibrinogen and PAI-1 did not differ significantly among the groups. Abdominal fat was significantly decreased in the AG+LA, AG+EX, and AG+LA+EX groups (p < 0.05) compared with the AG+HF group. In conclusion, this study suggests that exercise alone, L-arginine alone, or their combination not only increases GH and IGF-1 concentrations but also decreases abdominal fat mass.

A Study of Guide System for Cerebrovascular Intervention (뇌혈관 중재시술 지원 가이드 시스템에 관한 연구)

  • Lee, Sung-Gwon;Jeong, Chang-Won;Yoon, Kwon-Ha;Joo, Su-Chong
    • Journal of Internet Computing and Services / v.17 no.1 / pp.101-107 / 2016
  • Due to recent advances in digital imaging technology, the development of intervention equipment has become widespread. An image-guided intervention procedure inserts a tiny catheter and a guide wire into the body, so high-quality X-ray images are needed to enhance the effectiveness and safety of the treatment. However, the resulting increase in radiation exposure has become a problem, and studies to improve the performance of X-ray detectors are therefore being actively conducted. Moreover, such interventions rely on reference angiographic images and 3D medical image processing. In this paper, we propose a guidance system to support cerebrovascular intervention. It addresses the limitations of existing 2D medical images of vessels affected by cerebrovascular disease, and it guides the real-time tracking of the intervention catheter and guide wire along an optimal route to the target lesion. The system consists of a medical image acquisition unit, an image processing unit, and a display device. In the experimental environment, the guide services provided by the proposed system were tested with X-ray images of a brain phantom (complete intracranial model with aneurysms, ref. H+N-S-A-010). For image processing, a reference image was generated using a Laplacian-based algorithm, and the cerebral blood vessel model derived from the DICOM data was rendered with a volume ray-casting technique. An A* algorithm was used to provide a tracking path for the catheter and guide wire. Finally, the results show that the proposed system displays the locations of the catheter and guide wire, and it is expected to provide a useful guide for future intervention services.
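
For readers unfamiliar with the path-search step, a self-contained A* sketch on a 2D grid is given below; the grid, obstacles, and Manhattan heuristic are illustrative stand-ins for a segmented vessel map, not the authors' implementation.

```python
import heapq

def astar(grid, start, goal):
    """Return a path of grid cells from start to goal, avoiding 1-valued cells."""
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan heuristic
    open_set = [(h(start, goal), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                heapq.heappush(open_set, (g + 1 + h((r, c), goal), g + 1, (r, c), path + [(r, c)]))
    return None

vessel_map = [[0, 0, 0, 1],    # 0 = navigable (inside vessel), 1 = blocked
              [1, 1, 0, 1],
              [0, 0, 0, 0],
              [0, 1, 1, 0]]
print(astar(vessel_map, (0, 0), (3, 3)))
```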

SVC Based Multi-channel Transmission of High Definition Multimedia and Its Improved Service Efficiency (SVC 적용에 의한 다매체 멀티미디어 지원 서비스 효율 향상 기법)

  • Kim, Dong-Hwan;Cho, Min-Kyu;Moon, Seong-Pil;Lee, Jae-Yeal;Jun, Jun-Gil;Chang, Tae-Gyu
    • Journal of IKEEE / v.15 no.2 / pp.179-189 / 2011
  • This paper presents an SVC-based multi-channel transmission technique that significantly improves the transmission of high definition (HD) multimedia and its service efficiency. In this method, the HD stream is divided into two layer streams, a base layer stream and an enhancement layer stream, which are transmitted through a primary channel and an auxiliary channel, respectively. The proposed technique provides a novel mode switching scheme that enables seamless HD multimedia service even under abrupt and intermittent deterioration of the auxiliary channel. When channel monitoring in the mode switching algorithm detects that the enhancement layer stream is disrupted, the algorithm maintains the spatial and temporal resolution of the HD multimedia by upsampling and interpolating the base layer stream, so that playback continues without interruption. Moreover, an adaptive switching algorithm significantly reduces the switching frequency by avoiding unnecessary switching during short-term channel variations. The feasibility of the proposed technique is verified through a simulation study in which both the Ku and Ka bands are used simultaneously for an HD multimedia broadcasting service. Rainfall modeling and analysis of the satellite channel attenuation characteristics are performed to simulate the quality-of-service performance of the proposed HD broadcasting method. Simulation results obtained under relatively poor channel (weather) conditions show that the average duration of enhancement layer service is extended from 9.48 min to 23.12 min and the average switching frequency is reduced from 3.84 to 1.68 times per hour. The satellite example verifies that the proposed SVC-based transmission technique makes the best use of the Ka-band channel for HD broadcasting, even though the band's inherent weather-related unreliability severely limits its independent use.
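
The mode-switching idea can be illustrated with a small hysteresis-style sketch; the thresholds, hold count, and channel-quality sequence below are assumptions for illustration, and the actual algorithm also upsamples and interpolates the base layer video.

```python
ENHANCED, BASE_ONLY = "enhanced", "base-only"

def switch_mode(channel_quality, mode, counter, low=0.3, high=0.5, hold=3):
    """Leave enhanced mode as soon as the auxiliary channel degrades, but
    return only after `hold` consecutive good samples, to avoid rapid toggling."""
    if mode == ENHANCED and channel_quality < low:
        return BASE_ONLY, 0
    if mode == BASE_ONLY:
        counter = counter + 1 if channel_quality > high else 0
        if counter >= hold:
            return ENHANCED, 0
    return mode, counter

mode, counter = ENHANCED, 0
for q in [0.9, 0.8, 0.2, 0.4, 0.6, 0.7, 0.7, 0.8]:   # synthetic auxiliary-channel quality
    mode, counter = switch_mode(q, mode, counter)
    print(f"quality={q:.1f} -> {mode}")
```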

Structural Relationships Among Factors to Adoption of Telehealth Service (원격의료서비스 수용요인의 구조적 관계 실증연구)

  • Kim, Sung-Soo;Ryu, See-Won
    • Asia pacific journal of information systems / v.21 no.3 / pp.71-96 / 2011
  • Within the traditional medical delivery system, patients residing in medically vulnerable areas, those with limited mobility, and nursing facility residents have had limited access to good healthcare services. However, Information and Communication Technology (ICT) provides a convenient and useful means of overcoming distance and time constraints. ICT is being integrated with biomedical science and technology to offer new, high-quality medical services. Rapid technological advancement is therefore expected to play a pivotal role in bringing about innovation in a wide range of medical service areas, such as medical management, testing, diagnosis, and treatment; in offering new and improved healthcare services; and in effecting dramatic changes in current medical services. The growth of the aging population and of chronic diseases has increased medical expenses, and in response to the increasing demand for efficient healthcare services, ICT-based telehealth services are being emphasized globally. Telehealth services have so far been implemented mainly through pilot projects, system development, and technological research. With the service about to be implemented in earnest, it is necessary to study its overall acceptance by consumers, which is expected to contribute to the development and activation of a variety of services. In this sense, this study empirically examines the structural relationships among the acceptance factors for telehealth services based on the Technology Acceptance Model (TAM). Data were collected by showing audiovisual material on telehealth services to online panels and asking them to respond to a structured questionnaire, a procedure known as the information acceleration method. Of the 1,165 adult respondents, 608 valid samples were retained; the rest were excluded because of incomplete answers or because the allotted time was exceeded. To test the reliability and validity of the scale items, we carried out reliability and factor analyses, and to explore the causal relationships among the latent variables, we conducted a structural equation modeling analysis using AMOS 7.0 and SPSS 17.0. The research outcomes are as follows. First, service quality, innovativeness of medical technology, and social influence were shown to have statistically significant effects on the perceived ease of use and perceived usefulness of the telehealth service, and these two factors in turn had a positive impact on the willingness to accept the telehealth service. In addition, social influence had a direct, significant effect on intention to use, which parallels the TAM results of previous research on technology acceptance. This shows that the research model proposed in the study effectively explains the acceptance of the telehealth service. Second, the research model reveals that information privacy concerns had an insignificant impact on the perceived ease of use of the telehealth service. This suggests that concerns over information protection and security have decreased with advances in information technology compared with the industry's early period, so that, given the improved quality of medical services, information privacy concerns did not act as an inhibiting factor in the acceptance of the telehealth service.
Thus, if other factors have a large impact on ease of use and usefulness, such concerns may become less relevant than in the initial period of technology acceptance. However, as other studies have shown, users' information privacy concerns remain a major factor affecting technology acceptance, so this result should be interpreted with caution and studied further. Many information technologies with outstanding performance and innovativeness nevertheless attract few consumers. A revised bill for those urgently in need of telehealth services is about to be approved in the National Assembly. As telemedicine between doctors and patients is implemented, a wide range of systems that improve the quality of healthcare services will be designed. In this sense, this study of consumer acceptance of telehealth services is meaningful and offers sound academic evidence, and its implications can be expected to contribute to the activation of telehealth services. Further study is needed on additional acceptance factors, such as motivation to stay healthy, healthcare involvement, health knowledge, and control of health-related behavior, in order to develop tailored services based on the categorization of customers by health factors. Further studies may also draw on cognitive-behavioral models other than the TAM, such as the health belief model.
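
As a rough illustration only, the TAM-style path structure described above can be approximated with two ordinary regressions on synthetic survey scores (statsmodels); this is not the authors' AMOS structural equation model, and all variable values and coefficients below are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 608   # number of valid respondents reported in the study
df = pd.DataFrame({
    "service_quality": rng.normal(size=n),
    "innovativeness": rng.normal(size=n),
    "social_influence": rng.normal(size=n),
})
df["ease_of_use"] = 0.4 * df["service_quality"] + 0.3 * df["innovativeness"] + rng.normal(scale=0.5, size=n)
df["usefulness"] = 0.5 * df["ease_of_use"] + 0.3 * df["social_influence"] + rng.normal(scale=0.5, size=n)
df["intention"] = 0.6 * df["usefulness"] + 0.2 * df["social_influence"] + rng.normal(scale=0.5, size=n)

# Path 1: antecedents -> perceived usefulness
m1 = smf.ols("usefulness ~ ease_of_use + social_influence + service_quality", data=df).fit()
# Path 2: usefulness, ease of use, social influence -> intention to use
m2 = smf.ols("intention ~ usefulness + ease_of_use + social_influence", data=df).fit()
print(m1.params.round(2))
print(m2.params.round(2))
```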

Investigation into influence of sound absorption block on interior noise of high speed train in tunnel (터널 내부 도상 블록형 흡음재의 고속철도차량 내부 소음에 미치는 영향에 대한 고찰)

  • Lee, Sang-heon;Cheong, Cheolung;Lee, Song-June;Kim, Jae-Hwan;Son, Dong-Gi;Sim, Gyu-Cheol
    • The Journal of the Acoustical Society of Korea / v.37 no.4 / pp.223-231 / 2018
  • Recently, owing to various environmental problems, ballast tracks in tunnels have been replaced with concrete tracks, but concrete tracks are acoustically worse than ballast tracks, so additional noise countermeasures are needed. Among these measures, sound-absorbing blocks have begun to be used because of their easy and quick installation, but their performance needs to be verified under real environmental and operational conditions. In this paper, interior noise levels in a KTX train cruising through the Dalseong tunnel are measured before and after the installation of sound-absorbing blocks, and the measured data are analyzed and compared. Additionally, the noise reduction is estimated by modeling the high speed train, the tunnel, and the absorption blocks. Measurement devices and methods follow ISO 3381, and the equivalent sound pressure levels during the cruising time inside the tunnel are computed. In addition to overall SPLs (sound pressure levels), 1/3-octave-band levels are analyzed to account for the frequency characteristics of sound absorption and of equipment noise in the cabin. To account for the effects of train cruising speed and environmental conditions, the measured data are corrected using measurements taken while the train passed through the tunnels immediately before and after the Dalseong tunnel. Analysis of the measurements showed that a maximum noise reduction of 6.8 dB(A) can be achieved in the local region where the sound-absorbing blocks are installed. Finally, comparing the predicted 1/3-octave-band SPLs of the KTX interior noise with the measurements improves the understanding of the noise reduction mechanism of the sound-absorbing blocks.
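
The equivalent sound pressure level mentioned above is the energy-average level over the measurement interval; a minimal sketch of its computation from a sampled pressure signal is given below, using a synthetic signal and an assumed sampling rate in place of the measured cabin data (no A-weighting applied).

```python
import numpy as np

P_REF = 20e-6                     # reference pressure, 20 micropascals
fs = 8000                         # assumed sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)      # 10 s of synthetic cabin pressure [Pa]
p = 0.2 * np.sin(2 * np.pi * 500 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Leq = 10 * log10( mean(p^2) / p_ref^2 ): the steady level with the same energy
leq = 10 * np.log10(np.mean(p ** 2) / P_REF ** 2)
print(f"Leq over the interval: {leq:.1f} dB")
```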