• Title/Summary/Keyword: Novel data


Research on a Biennale Visitors' Pursuing Benefit -Centering on the Gwangju Biennale- (비엔날레 참관자의 추구편익이 행동의도에 미치는 영향 연구 -광주비엔날레를 중심으로-)

  • An, Tai-Gi;Kim, Hee-Jin
    • The Journal of the Korea Contents Association / v.9 no.11 / pp.432-442 / 2009
  • This research examined the effect of visitors' pursued benefits on satisfaction, attitude, and behavioral intention at the Gwangju Biennale. The survey was conducted with 320 visitors who had finished viewing the exhibition, from September 5 to September 19 (15 days). Of the collected questionnaires, 300 were used for analysis after excluding 20 carelessly completed copies. For statistical processing, the collected data went through data coding; this research then conducted a frequency analysis using SPSS 12.0 for Windows and the statistical package AMOS 5.0, an exploratory factor analysis to test the reliability and validity of the data, and a reliability test of each factor, and finally tested the hypotheses using a structural equation model. The research results are as follows. First, a factor analysis of the 15 pursued benefits elicited four factors: pursuit of an intellectual experience, pursuit of a novel and exotic experience, pursuit of interpersonal and cultural exchange, and pursuit of inner fulfillment. A factor analysis of the 10 attitude items elicited three factors (affective, cognitive, and behavioral), and a factor analysis of the 12 satisfaction items elicited two factors, satisfaction with facilities and satisfaction with convenience matters. Second, the fit of the research model was $\chi^2=107.508$, d.f.=48, p=.000, Q=2.240, GFI=.942, AGFI=.906, RMR=.024, NFI=.952, TLI=.963, CFI=.973, RMSEA=.064. Third, pursued benefit was found to have a positive effect on satisfaction, attitude, and behavioral intention. Fourth, attitude was found to have a significant positive effect on satisfaction. Fifth, attitude was found to have a positive effect on behavioral intention. Sixth, satisfaction was found to have a positive effect on behavioral intention.
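
As a quick sanity check, two of the reported fit indices can be re-derived from $\chi^2$, the degrees of freedom, and the sample size. The sketch below is a minimal illustration assuming the standard normed chi-square and RMSEA formulas and N = 300 usable responses.

```python
import math

# Reported model fit from the abstract
chi2, df, n = 107.508, 48, 300  # N = 300 usable questionnaires (assumption)

# Normed chi-square: Q = chi^2 / df (values below ~3 suggest acceptable fit)
q = chi2 / df
print(f"Q = {q:.3f}")  # 2.240, matching the reported value

# RMSEA = sqrt(max(chi^2 - df, 0) / (df * (N - 1)))
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
print(f"RMSEA = {rmsea:.3f}")  # 0.064, matching the reported value
```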

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems / v.19 no.3 / pp.51-67 / 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, estimating the user's future context correctly is crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge, and to increase accuracy and decrease elapsed time for service response. To do so, we have developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using context history. Then a pattern, consisting of the results of reasoning with the individual rules, is developed for pattern learning. If at least one context property matches, say R, then regard the pattern as right. If the pattern is new, add a right pattern, set the value of mismatched properties to 0, set freq = 1, and set w(R, 1). Otherwise, increase the frequency of the matched right pattern by 1 and then set w(R, freq). After training, if the frequency is greater than a threshold value, save the right pattern in the knowledge base. On the other hand, if at least one context property mismatches, say W, then regard the pattern as wrong. If the pattern is new, modify the result into the wrong answer, add a wrong pattern, and set the frequency to 1 and w(W, 1). Otherwise, increase the matched wrong pattern's frequency by 1 and then set w(W, freq). After training, if the frequency value is greater than a threshold level, save the wrong pattern in the knowledge base. Context prediction is then performed with combinatorial rules as follows: first, identify the current context. Second, find matched patterns among the right patterns. If no pattern matches, find a matching pattern among the wrong patterns. If a matching pattern is still not found, choose the one context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context history from travelers who had visited the largest amusement park in Korea. As a result, 400 context records were collected in 2009. We then randomly selected 70% of the records as training data; the rest were used as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures, and we compared the performance with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time, which shows that the methodology is relatively valid and scalable. As a second round of the experiment, we compared a full model to a partial model.
A full model indicates that both right and wrong patterns are used for reasoning about the future context, whereas a partial model performs the reasoning only with right patterns, as generally adopted in the legacy alignment-prediction method. It turned out that the full model is better than the partial model in terms of accuracy, while the partial model is better when considering elapsed time. As a last experiment, we took into consideration potential privacy problems that might arise among users. To mitigate such concerns, we excluded context properties such as the date of the tour and user profile attributes such as gender and age. The outcome shows that the cost of preserving privacy is endurable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods in prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system successfully applying the proposed method can also decrease elapsed time. Hence, the method is very practical for establishing privacy-preserving context-aware services. Our future research issues, taking into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users in order to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
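
The right/wrong pattern mechanics described above can be outlined in a short sketch. The version below is a simplification under stated assumptions: exact-match pattern keys instead of the paper's partial matching and per-property fallback, a hypothetical frequency threshold, and invented names throughout. It is not the authors' implementation.

```python
from collections import defaultdict

# `context` is a tuple of nominal context-property values; `predicted` is the
# output of the per-property individual rules; `actual` is the observed outcome.
THRESHOLD = 3                      # minimum frequency before a pattern is kept (assumption)

right_freq = defaultdict(int)      # (context, predicted) -> frequency, i.e. w(R, freq)
wrong_freq = defaultdict(int)      # (context, predicted) -> frequency, i.e. w(W, freq)
corrections = {}                   # wrong pattern -> corrected answer

def train(records):
    """Learn right and wrong patterns from (context, predicted, actual) triples."""
    for context, predicted, actual in records:
        key = (context, predicted)
        if predicted == actual:
            right_freq[key] += 1             # reinforce a right pattern
        else:
            wrong_freq[key] += 1             # remember that the rules fail here
            corrections[key] = actual        # store the corrected answer
    kb_right = {k for k, f in right_freq.items() if f >= THRESHOLD}
    kb_wrong = {k for k, f in wrong_freq.items() if f >= THRESHOLD}
    return kb_right, kb_wrong

def predict(context, rule_output, kb_right, kb_wrong):
    """Prefer a matching right pattern, then a wrong-pattern correction."""
    key = (context, rule_output)
    if key in kb_right:
        return rule_output                   # rules are known to be right here
    if key in kb_wrong:
        return corrections[key]              # rules are known to be wrong here
    return rule_output                       # fallback: trust the individual rules
```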

Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin;Lee, Sik;Cho, Kum-won;Suh, Young-kyoon
    • Journal of Internet Computing and Services / v.19 no.1 / pp.77-86 / 2018
  • For the past few years, KISTI has been operating an online simulation execution platform, called EDISON, which allows users to conduct simulations of various scientific applications supplied by diverse computational science and engineering disciplines. Typically, these simulations involve large-scale computation and accordingly produce a huge volume of output data. One critical issue in conducting those simulations on an online platform stems from the fact that many users simultaneously submit simulation requests (or jobs) with the same (or almost unchanged) input parameters or files, placing a significant burden on the platform. In other words, identical computing jobs lead to duplicate consumption of computing and storage resources at an undesirably fast pace. To overcome excessive resource usage by such identical simulation requests, in this paper we introduce a novel framework, called IceSheet, to efficiently manage simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance associated with each conducted simulation. The collected provenance records are utilized not only for detecting duplicate simulation requests but also for searching existing simulation results via the open-source search engine ElasticSearch. In particular, this paper elaborates on the core components of the IceSheet framework that support search over and reuse of the stored simulation results. We implemented the proposed framework as a prototype using the search engine in conjunction with the online simulation execution platform. Our evaluation of the framework was performed on real simulation execution-provenance records collected on the platform. Once the prototyped IceSheet framework fully functions with the platform, users can quickly search for past parameter values entered into the desired simulation software and receive existing results for the same input parameter values, if any. Therefore, we expect that the proposed framework will contribute to eliminating duplicate resource consumption and significantly reducing execution time for requests identical to previously executed simulations.
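
The provenance-based duplicate detection idea can be sketched briefly. The snippet below is not the IceSheet implementation: an in-memory dict stands in for the ElasticSearch index, and the request fields (solver, params, input_files) are hypothetical.

```python
import hashlib
import json

provenance_store = {}  # fingerprint -> stored simulation result (stand-in for ElasticSearch)

def fingerprint(solver: str, params: dict, input_files: list) -> str:
    """Canonicalize a request so identical inputs always hash identically."""
    canonical = json.dumps(
        {"solver": solver, "params": params, "inputs": sorted(input_files)},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

def submit(solver, params, input_files, run):
    key = fingerprint(solver, params, input_files)
    if key in provenance_store:
        return provenance_store[key]           # duplicate request: reuse the result
    result = run(solver, params, input_files)  # otherwise execute the simulation
    provenance_store[key] = result             # record provenance for later reuse
    return result
```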

A Study on Enhancing Personalization Recommendation Service Performance with CNN-based Review Helpfulness Score Prediction (CNN 기반 리뷰 유용성 점수 예측을 통한 개인화 추천 서비스 성능 향상에 관한 연구)

  • Li, Qinglong;Lee, Byunghyun;Li, Xinzhe;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.29-56 / 2021
  • Recently, various types of products have been launched amid the rapid growth of the e-commerce market. As a result, many users face information overload, which makes the purchasing decision-making process time-consuming. Therefore, the importance of personalized recommendation services that can provide customized products and services to users is growing. For example, global companies such as Netflix, Amazon, and Google have introduced personalized recommendation services to support users' purchasing decisions. Such services can reduce users' information search costs, which can positively affect a company's sales. Existing research on personalized recommendation services has applied the Collaborative Filtering (CF) technique, which predicts user preferences mainly from quantified information. However, recommendation performance may degrade when only quantitative information is used. To improve on such existing studies, many works have used reviews to enhance recommendation performance. However, reviews contain factors that hinder purchasing decisions, such as advertising content, false comments, and meaningless or irrelevant content. Providing a recommendation service based on reviews that include these factors can decrease recommendation performance. Therefore, we propose a novel recommendation methodology based on CNN review helpfulness score prediction to address these problems. The results show that the proposed methodology achieves better prediction performance than the recommendation method that considers all existing preference ratings. In addition, the results suggest that reflecting review helpfulness information in the personalized recommendation service can enhance the performance of traditional CF.
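
As an illustration of the helpfulness-prediction step, the sketch below shows a small 1D-CNN regressor in PyTorch. It is not the authors' architecture; the vocabulary size, embedding width, and filter settings are assumptions. The predicted scores would then be used to filter or weight ratings before applying CF.

```python
import torch
import torch.nn as nn

class HelpfulnessCNN(nn.Module):
    """Maps a tokenized review to a helpfulness score in [0, 1]."""
    def __init__(self, vocab_size=20000, embed_dim=128, n_filters=64, kernel_size=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size)
        self.pool = nn.AdaptiveMaxPool1d(1)          # max-over-time pooling
        self.head = nn.Linear(n_filters, 1)

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                    # (batch, seq_len, embed_dim)
        x = x.permute(0, 2, 1)                       # Conv1d wants (batch, channels, seq)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)                 # (batch, n_filters)
        return torch.sigmoid(self.head(x)).squeeze(-1)

model = HelpfulnessCNN()
scores = model(torch.randint(1, 20000, (4, 200)))   # 4 dummy reviews, 200 tokens each
print(scores)                                        # e.g. keep ratings whose score > 0.5
```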

Retrieval of Hourly Aerosol Optical Depth Using Top-of-Atmosphere Reflectance from GOCI-II and Machine Learning over South Korea (GOCI-II 대기상한 반사도와 기계학습을 이용한 남한 지역 시간별 에어로졸 광학 두께 산출)

  • Seyoung Yang;Hyunyoung Choi;Jungho Im
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.933-948 / 2023
  • Atmospheric aerosols not only have adverse effects on human health but also exert direct and indirect impacts on the climate system. Consequently, it is imperative to understand the characteristics and spatiotemporal distribution of aerosols. Numerous research efforts have been undertaken to monitor aerosols, predominantly through the retrieval of aerosol optical depth (AOD) via satellite-based observations. Nonetheless, this approach primarily relies on a look-up-table-based inversion algorithm, characterized by computationally intensive operations and associated uncertainties. In this study, a novel high-resolution AOD direct-retrieval algorithm leveraging machine learning was developed using top-of-atmosphere reflectance data derived from the Geostationary Ocean Color Imager-II (GOCI-II), in conjunction with their differences from the past 30-day minimum reflectance, and meteorological variables from numerical models. The Light Gradient Boosting Machine (LGBM) technique was used, and the resulting estimates underwent rigorous validation encompassing random, temporal, and spatial N-fold cross-validation (CV) against ground-based observation data from the Aerosol Robotic Network (AERONET). The three CV results consistently demonstrated robust performance, yielding R2 = 0.70-0.80 and RMSE = 0.08-0.09, with 75.2-85.1% of estimates falling within the expected error (EE). A Shapley Additive exPlanations (SHAP) analysis confirmed the substantial influence of reflectance-related variables on AOD estimation. A comprehensive examination of the spatiotemporal distribution of AOD in Seoul and Ulsan revealed that the developed LGBM model yields results in close concordance with AERONET AOD over time, confirming its suitability for AOD retrieval at high spatiotemporal resolution (i.e., hourly, 250 m). Furthermore, a comparison of data coverage showed that the LGBM model enhanced data retrieval frequency by approximately 8.8% compared to the GOCI-II L2 AOD products, ameliorating the excessive masking over bright surfaces that is often encountered in physics-based AOD retrieval processes.
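
In outline, the retrieval is a gradient-boosting regression validated by N-fold CV against AERONET AOD. The sketch below is a minimal stand-in: the synthetic feature matrix (in place of TOA reflectances, 30-day minimum-reflectance differences, and meteorological variables) and the hyperparameters are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((1000, 12))   # placeholder predictors (reflectances, meteorology, ...)
y = rng.random(1000) * 0.5   # placeholder AERONET AOD targets

for fold, (tr, te) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    model = LGBMRegressor(n_estimators=500, learning_rate=0.05, num_leaves=63)
    model.fit(X[tr], y[tr])
    pred = model.predict(X[te])
    rmse = mean_squared_error(y[te], pred) ** 0.5
    print(f"fold {fold}: R2 = {r2_score(y[te], pred):.2f}, RMSE = {rmse:.3f}")
```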

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in a thousand words. Following this idea, there have recently been a number of attempts to provide customers and companies with an intelligent service that enables the perception of human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and has applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy. This is inevitable, since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies like Jung and Kim (2012) have used ANN as an alternative, and they reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions while providing appropriate visually stimulating contents, and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ${\varepsilon}$-insensitive loss function and the grid search technique to find the optimal values of parameters such as C, d, ${\sigma}^2$, and ${\varepsilon}$. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set.
ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners who wish to build models for recognizing human emotions.
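
The SVR setup described above (RBF kernel, ${\varepsilon}$-insensitive loss, grid search over the parameters, MAE for comparison) can be reproduced in outline with scikit-learn. The grid values and the synthetic data (297 cases as in the study, with 10 hypothetical facial features) are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((297, 10))    # facial-feature placeholders
y = rng.random(297)          # placeholder arousal levels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [1e-3, 1e-2, 1e-1, 1],  # for the RBF kernel, gamma ~ 1 / (2 * sigma^2)
    "epsilon": [0.01, 0.1, 0.5],
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_mean_absolute_error")
search.fit(X_tr, y_tr)

pred = search.best_estimator_.predict(X_te)
print("best params:", search.best_params_)
print("hold-out MAE:", mean_absolute_error(y_te, pred))
```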

On-line Monitoring of the Flocs in Mixing Zone using iPDA in the Drinking Water Treatment Plant (정수장 응집혼화공정에서의 응집플럭 연속 모니터링)

  • Ga, Gil-Hyun;Jang, Hyun-Sung;Kim, Young-Beom;Kwak, Jong-Woon
    • Journal of Korean Society of Environmental Engineers / v.31 no.4 / pp.263-271 / 2009
  • This study evaluated floc formation characteristics in the mixing zone to increase the coagulation effect in a drinking water plant. As a tool for measuring the formed flocs, an on-line particle dispersion analyzer (iPDA) was used in the Y drinking water plant. To evaluate floc formation, parameters such as polyamine, coagulant dose, raw water turbidity, and pH were applied in this study. During the field test, poly aluminium chloride (PACl) was used as the coagulant. As raw water turbidity increased, polyamine was also added as a coagulation aid to increase coagulation efficiency. The pH and turbidity of the raw water ranged from 7 to 9 and from 25 to 140 NTU, respectively. Higher raw water turbidity accordingly produced larger flocs. From a regression analysis, the $R^2$ value was 0.8040 as a function of the raw water turbidity T, and the floc size index (FSI) was obtained from the correlation equation $FSI = 0.9388\log T - 0.3214$. Polyamine also produced larger flocs the moment it was added to the coagulated water in the rapid mixing zone. Another parameter influencing floc size was the addition of powdered activated carbon (PAC) in the mixing zone. For higher raw water turbidity, the $R^2$ value was 0.9050 with [PACl] and [PAC] as parameters: $FSI = 0.0407[T]^{0.324}[PACl]^{0.769}[PAC]^{0.178}$. The on-line floc monitor was useful for evaluating floc sizes as a function of the many parameters constituting raw water properties, providing useful baseline data for controlling the mixing zone more effectively.
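
Both reported FSI correlations are straightforward to evaluate numerically. The sketch below assumes a base-10 logarithm in the first equation and uses hypothetical turbidity and dose values.

```python
import math

def fsi_turbidity(t_ntu: float) -> float:
    """FSI = 0.9388 * log10(T) - 0.3214 (turbidity-only correlation)."""
    return 0.9388 * math.log10(t_ntu) - 0.3214

def fsi_with_doses(t_ntu: float, pacl: float, pac: float) -> float:
    """FSI = 0.0407 * T^0.324 * PACl^0.769 * PAC^0.178 (high-turbidity correlation)."""
    return 0.0407 * t_ntu**0.324 * pacl**0.769 * pac**0.178

print(fsi_turbidity(100.0))               # FSI at 100 NTU
print(fsi_with_doses(100.0, 20.0, 10.0))  # hypothetical PACl and PAC doses
```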

The Relationship Between DEA Model-based Eco-Efficiency and Economic Performance (DEA 모형 기반의 에코효율성과 경제적 성과의 연관성)

  • Kim, Myoung-Jong
    • Journal of Environmental Policy / v.13 no.4 / pp.3-49 / 2014
  • Growing stakeholder interest in corporate environmental responsibility and tightening environmental regulations are highlighting the importance of environmental management more than ever. However, companies' awareness of the importance of the environment still falls behind, and related academic works have not reached consistent conclusions on the relationship between environmental performance and economic performance. One of the reasons is the different ways of measuring these two performances. The evaluation scope of economic performance is relatively narrow, and the performance can be measured in a unified unit such as price, while the scope of environmental performance is diverse and a wide range of units is used to measure it instead of a single unified unit. Therefore, the results of such works can differ depending on the performance indicators selected. In order to resolve this problem, generalized and standardized performance indicators should be developed. In particular, the performance indicators should be able to cover the concepts of both environmental and economic performance, because the recent idea of environmental management has expanded to encompass the concept of sustainability. Another reason is that most current research tends to focus on the motives for environmental investments and environmental performance, and does not offer a guideline for an effective implementation strategy for environmental management. For example, a process improvement strategy or a market differentiation strategy can be deployed by comparing environmental competitiveness among companies in the same or similar industries, so that a virtuous cyclical relationship between environmental and economic performance can be secured. This report proposes a novel method for measuring eco-efficiency by utilizing Data Envelopment Analysis (DEA), which is able to combine multiple environmental and economic performances. Based on the eco-efficiencies, environmental competitiveness is analyzed, and optimal combinations of inputs and outputs are recommended for improving the eco-efficiencies of inefficient firms. Furthermore, panel analysis is applied to the causal relationship between eco-efficiency and economic performance, and a pooled regression model is used to investigate the relationship between the two. The four-year eco-efficiencies of 23 companies between 2010 and 2013 were obtained from the DEA analysis; a comparison of efficiencies among the 23 companies was carried out in terms of technical efficiency (TE), pure technical efficiency (PTE), and scale efficiency (SE), and a set of recommendations for optimal combinations of inputs and outputs was suggested for the inefficient companies. Furthermore, the experimental results of the panel analysis demonstrated causality from eco-efficiency to economic performance. The results of the pooled regression showed that eco-efficiency positively affects the financial performance (ROA and ROS) of the companies, as well as firm value (Tobin's Q, stock price, and stock returns). This report proposes a novel approach for generating standardized performance indicators from multiple environmental and economic performances, so as to enhance the generality of relevant research and provide deep insight into the sustainability of environmental management.
Furthermore, using the efficiency indicators obtained from the DEA model, the causes of changes in eco-efficiency can be investigated and an effective strategy for environmental management can be suggested. Finally, this report can motivate environmental management by providing empirical evidence that environmental investments can improve economic performance.
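
The DEA building block behind such eco-efficiency scores is one linear program per firm (DMU). Below is a minimal input-oriented CCR sketch using scipy; the tiny input/output matrices are illustrative stand-ins, not the report's 23-company data.

```python
import numpy as np
from scipy.optimize import linprog

# Rows = DMUs (firms); inputs could be resource use and emissions, outputs
# economic performance measures. Values here are purely illustrative.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                           # outputs

def ccr_efficiency(o: int) -> float:
    """Solve: min theta s.t. X'lam <= theta * x_o, Y'lam >= y_o, lam >= 0."""
    n, m = X.shape            # number of DMUs, number of inputs
    s = Y.shape[1]            # number of outputs
    c = np.r_[1.0, np.zeros(n)]                   # variables: [theta, lam_1..lam_n]
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])  # X'lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # -Y'lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (1 + n))
    return res.fun                                # efficiency score in (0, 1]

for o in range(X.shape[0]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```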


GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference / 2012.08a / pp.80-81 / 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices has been achieving ultra-high aspect ratio contact (UHARC) profiles without anomalous behaviors such as sidewall bowing and twisted profiles. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still face formidable challenges such as tight limits on sidewall bowing and the control of randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies, including novel plasma chemistries and plasma sources, to overcome these process limitations. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for the silicon dioxide etching process in inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray photoelectron spectroscopy. To measure plasma parameters in this polymer-depositing environment, a self-cleaning RF Langmuir probe was used for the probe tip and double-checked by the cutoff probe, which is known to be a precise diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance-potential methods using the QMS signal. Based on these experimental data, we propose a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results showed good agreement with the experimental data. Building on these studies of plasma-surface reactions, we developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. For ultra-high aspect ratio contact hole etching, it is well known that realistic treatment of this ballistic transport imposes a huge computational burden. To address this issue, the relevant computational codes were efficiently parallelized for GPU (Graphics Processing Unit) computing, so that the total computation time improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model, and realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.
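
To illustrate why ballistic transport dominates the cost of UHARC simulation, and why it parallelizes so well on a GPU (every sampled particle is independent), the sketch below estimates by Monte Carlo the direct neutral flux reaching the bottom of a contact hole as a function of aspect ratio. It uses strong simplifying assumptions (a 2D slit, cosine-law entry, no re-emission) and is far from the paper's 3D level-set model; vectorized NumPy stands in for GPU parallelism.

```python
import numpy as np

rng = np.random.default_rng(0)

def bottom_flux_fraction(aspect_ratio: float, n_samples: int = 200_000) -> float:
    """Fraction of entering neutrals that reach the bottom without a wall hit."""
    x0 = rng.uniform(0.0, 1.0, n_samples)                 # entry position across the opening
    theta = np.arcsin(rng.uniform(-1.0, 1.0, n_samples))  # cosine-law polar angle (2D)
    dx = aspect_ratio * np.tan(theta)                     # lateral drift over the full depth
    reaches_bottom = (x0 + dx >= 0.0) & (x0 + dx <= 1.0)
    return reaches_bottom.mean()

for ar in (1, 5, 10, 20, 40):
    print(f"AR {ar:>2}: direct flux to bottom = {bottom_flux_fraction(ar):.4f}")
```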


A preliminary assessment of high-spatial-resolution satellite rainfall estimation from SAR Sentinel-1 over the central region of South Korea (한반도 중부지역에서의 SAR Sentinel-1 위성강우량 추정에 관한 예비평가)

  • Nguyen, Hoang Hai;Jung, Woosung;Lee, Dalgeun;Shin, Daeyun
    • Journal of Korea Water Resources Association / v.55 no.6 / pp.393-404 / 2022
  • Reliable terrestrial rainfall observations from satellites at finer spatial resolution are essential for urban hydrological and microscale agricultural demands. Although various traditional "top-down" approach-based satellite rainfall products have been widely used, they are limited in spatial resolution. This study aims to assess the potential of a novel "bottom-up" approach to rainfall estimation, the parameterized SM2RAIN model, applied to C-band SAR Sentinel-1 satellite data (SM2RAIN-S1), to generate high-spatial-resolution terrestrial rainfall estimates (0.01° grid/6-day) over Central South Korea. Its performance was evaluated for both spatial and temporal variability using rainfall data from a conventional reanalysis product and a rain gauge network over a 1-year period for two different sub-regions of Central South Korea: the mixed-forest-dominated middle sub-region and the cropland-dominated west coast sub-region. The evaluation results indicated that the SM2RAIN-S1 product can capture general rainfall patterns in Central South Korea and holds potential for high-spatial-resolution rainfall measurement at the local scale over different land covers, while providing less biased rainfall estimates relative to rain gauge observations. Moreover, the SM2RAIN-S1 rainfall product performed better over mixed forests in terms of the Pearson correlation coefficient (R = 0.69), implying the suitability of 6-day SM2RAIN-S1 data for capturing the temporal dynamics of soil moisture and rainfall in mixed forests. However, in terms of RMSE and bias, better performance was obtained with the SM2RAIN-S1 rainfall product over croplands rather than mixed forests, indicating that larger errors induced by high evapotranspiration losses (especially in mixed forests) need to be addressed in further improvements of SM2RAIN.
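
The "bottom-up" idea behind SM2RAIN is an inversion of the soil-water balance: rainfall is back-calculated as $P(t) = Z \cdot dS(t)/dt + a \cdot S(t)^b$, where S is relative soil saturation, Z is the soil water capacity, and a and b parameterize drainage losses. The sketch below applies that inversion with illustrative, uncalibrated parameters and a synthetic 6-day soil moisture series, not the study's Sentinel-1 configuration.

```python
import numpy as np

def sm2rain(s: np.ndarray, dt_days: float, z: float = 80.0,
            a: float = 15.0, b: float = 2.5) -> np.ndarray:
    """Back-calculate rainfall (mm/day) from a relative-saturation series."""
    ds_dt = np.gradient(s, dt_days)       # dS/dt per day
    p = z * ds_dt + a * s**b              # water-balance inversion
    return np.clip(p, 0.0, None)          # negative estimates are not rainfall

s = np.array([0.30, 0.32, 0.45, 0.40, 0.38, 0.55, 0.50])  # 6-day composites (synthetic)
print(sm2rain(s, dt_days=6.0))
```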