Title/Summary/Keyword: real value

Comparisons of Popularity- and Expert-Based News Recommendations: Similarities and Importance (인기도 기반의 온라인 추천 뉴스 기사와 전문 편집인 기반의 지면 뉴스 기사의 유사성과 중요도 비교)

  • Suh, Kil-Soo;Lee, Seongwon;Suh, Eung-Kyo;Kang, Hyebin;Lee, Seungwon;Lee, Un-Kon
    • Asia Pacific Journal of Information Systems / v.24 no.2 / pp.191-210 / 2014
  • As Internet-connected mobile devices have spread and networking has become possible anytime and anywhere, the Internet has become central to the dissemination and consumption of news. Accordingly, the ways news is gathered, disseminated, and consumed have changed greatly. In traditional news media such as magazines and newspapers, expert editors determined what events were worthy of deploying their staffs or freelancers to cover and what stories from newswires or other sources would be printed. Furthermore, they determined how these stories would be displayed in their publications in terms of page placement, space allocation, type sizes, photographs, and other graphic elements. In turn, readers (news consumers) judged the importance of news not only by its subject and content, but also through subsidiary information such as its location and how it was displayed. Their judgments reflected their acceptance of an assumption that these expert editors had the knowledge and ability not only to serve as gatekeepers in determining what news was valuable and important but also to rank its value and importance. News assembled, dispensed, and consumed in this manner can be said to be expert-based recommended news. In the era of Internet news, however, the gatekeeping role of expert editors has been greatly diminished. Many Internet news sites offer a huge volume of news on diverse topics from many media companies, in many cases eliminating the gatekeeper role of expert editors. One result has been to turn news users from passive recipients into active searchers for news that reflects their interests or tastes. To solve the problem of information overload and enhance the efficiency of news users' searches, Internet news sites have introduced numerous recommendation techniques. Recommendation based on popularity is one of the most frequently used of these techniques. This popularity-based approach shows a list of those news items that have been read and shared by many people, based on user behavior such as clicks, evaluations, and sharing. The "most-viewed," "most-replied," and "real-time issue" lists found on news sites belong to this category. Given that collective intelligence serves as the premise of popularity-based recommendations, such recommendations would be considered highly important: stories that have been read and shared by many people are presumably more likely to be better than those preferred by only a few. However, these recommendations may reflect a popularity bias, because stories judged likely to be more popular have been placed where they will be most noticeable; as a result, such stories are more likely to be continuously exposed and included in popularity-based recommended news lists. Popular news stories are not necessarily those that are most important to readers. Given that many people use popularity-based recommended news and that this recommendation approach greatly affects patterns of news use, examining whether popularity-based news recommendations actually reflect important news is an indispensable step. Therefore, in this study, the popularity-based news recommendations of an Internet news portal were compared with the top placements of news in printed newspapers, and news users' judgments of which stories were personally and socially important were analyzed. The study was conducted in two stages.
In the first stage, content analyses were used to compare the content of the popularity-based news recommendations of an Internet news site with that of the expert-based news recommendations of printed newspapers. Five days of news stories were collected. The "most-viewed" list of the Naver portal site was used as the popularity-based recommendations; the expert-based recommendations were represented by the top news pieces from five major daily newspapers: the Chosun Ilbo, the JoongAng Ilbo, the Dong-A Daily News, the Hankyoreh Shinmun, and the Kyunghyang Shinmun. In the second stage, along with the news stories collected in the first stage, some Internet news stories and some printed-newspaper stories that the two channels did not have in common were randomly extracted and used in online questionnaire surveys that asked about the importance of the selected stories. According to our analysis, only 10.81% of the popularity-based news recommendations were similar in content to the expert-based news judgments; the content of popularity-based recommendations therefore appears to be quite different from that of expert-based recommendations. The differences in importance between these two groups of news stories were also analyzed: whereas the two groups did not differ significantly in personal importance, the expert-based recommendations ranked higher in social importance. This study has theoretical importance in its examination of popularity-based news recommendations from the two theoretical viewpoints of collective intelligence and popularity bias, and in its use of both qualitative (content analysis) and quantitative (questionnaire) methods. It also sheds light on the differences between the role of media channels that fulfill an agenda-setting function and that of Internet news sites that treat news from the viewpoint of markets.

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.121-139 / 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is critically important for financial institutions. Many researchers have addressed bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) to improve the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. It uses the idea of survival of the fittest, progressively accepting better solutions to the problem, and it searches by maintaining a population of solutions from which better solutions are created, rather than making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation; solutions coded as strings are evaluated by the fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset, which is used as the input data of the bagging model. The chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150; the crossover rate and mutation rate were set to 0.7 and 0.1, respectively. The prediction accuracy of the model was used as the fitness function of the GA: an SVM model is trained on the training set using the selected instance subset, and its prediction accuracy over the test set is used as the fitness value in order to avoid overfitting. In the second phase, the optimal instance subset selected in the first phase is used as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set from Korean companies. The research data contain 1,832 externally non-audited firms, comprising 916 bankrupt and 916 non-bankrupt cases. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables.
We separated the whole data set into three subsets: training, test, and validation. We compared the proposed model with several comparative models, including a simple individual SVM model, a simple bagging model, and an instance-selection-based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the other models.
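A minimal sketch may clarify the two-phase flow described above: a GA searches over binary instance masks with held-out accuracy as fitness, and the selected subset then feeds a majority-voting bagging ensemble of SVMs. The synthetic data, GA operators, and scaled-down parameters below are illustrative assumptions, not the paper's implementation (the paper's GA settings appear only in a comment).

```python
# Sketch of GA-based instance selection (phase 1) + SVM bagging (phase 2).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=400, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

def fitness(mask):
    """Held-out accuracy of an SVM trained only on the masked instances."""
    if mask.sum() < 20:                        # guard against degenerate subsets
        return 0.0
    clf = SVC().fit(X_tr[mask], y_tr[mask])
    return clf.score(X_te, y_te)               # test accuracy avoids overfitting

# Phase 1: GA instance selection (paper: pop=100, gens=150, cx=0.7, mut=0.1;
# scaled down here so the sketch runs quickly).
pop_size, n_gen, cx_rate, mut_rate = 20, 10, 0.7, 0.1
pop = rng.random((pop_size, len(X_tr))) < 0.5  # binary chromosomes
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-(pop_size // 2):]]   # keep the fitter half
    children = []
    while len(children) < pop_size:
        a, b = parents[rng.integers(len(parents), size=2)]
        # Uniform crossover with probability cx_rate, else clone a parent.
        child = np.where(rng.random(len(a)) < 0.5, a, b) if rng.random() < cx_rate else a.copy()
        flips = rng.random(len(child)) < mut_rate          # bit-flip mutation
        child[flips] = ~child[flips]
        children.append(child)
    pop = np.array(children)
best = pop[int(np.argmax([fitness(ind) for ind in pop]))]

# Phase 2: bagging of SVMs on the GA-selected subset, majority voting.
Xs, ys = X_tr[best], y_tr[best]
preds = []
for _ in range(11):
    boot = rng.integers(len(Xs), size=len(Xs))             # bootstrap resample
    preds.append(SVC().fit(Xs[boot], ys[boot]).predict(X_te))
vote = (np.mean(preds, axis=0) > 0.5).astype(int)          # majority vote
print("bagged accuracy:", (vote == y_te).mean())
```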

Analyzing the Issue Life Cycle by Mapping Inter-Period Issues (기간별 이슈 매핑을 통한 이슈 생명주기 분석 방법론)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.25-41 / 2014
  • Recently, the number of social media users has increased rapidly because of the prevalence of smart devices. As a result, the amount of real-time data has been increasing exponentially, which, in turn, is generating more interest in using such data to create added value. For instance, several attempts are being made to analyze the search keywords frequently used on news portal sites and the words regularly mentioned on various social media in order to identify social issues. The technique of topic analysis is employed to identify topics and themes in a large collection of text documents. As one of the most prevalent applications of topic analysis, issue tracking investigates changes in the social issues identified through topic analysis. Currently, traditional issue tracking is conducted by identifying the main topics of the documents covering the entire period at once and then analyzing the occurrence of each topic by period. This traditional approach has two limitations. First, when a new period is added, topic analysis must be repeated for all documents of the entire period rather than only for the new documents of the added period. This imposes significant time and cost burdens, making the traditional approach difficult to apply in most settings that must analyze additional periods. Second, issues are not only constantly created and terminated; a single issue can also split into several issues, and multiple issues can merge into one. In other words, each issue has a life cycle consisting of creation, transition (merging and segmentation), and termination. Existing issue tracking methods do not address the connections and influence relationships between issues. The purpose of this study is to overcome these two limitations of existing issue tracking: the limitation of the analysis method and the lack of consideration of the changeability of issues. Suppose that topic analysis is performed separately for each of multiple periods. It then becomes essential to map the issues of different periods in order to trace issue trends; however, discovering connections between issues of different periods is not easy, because the issues derived for each period are mutually heterogeneous. In this study, to overcome these limitations without analyzing the entire period's documents simultaneously, the analysis is performed independently for each period, and issue mapping is then performed to link the identified issues across periods. An integrated view over the per-period analyses is presented, and the issue flow over the entire integrated period is depicted. Thus, the entire issue life cycle, including the stages of creation, transition (merging and segmentation), and termination, is identified and examined systematically, and the changeability of issues is analyzed. The proposed methodology is highly efficient in terms of time and cost, and it sufficiently considers the changeability of issues; the results of this study can be used to adapt the methodology to practical situations.
By applying the proposed methodology to actual Internet news, we analyzed its potential practical applications. The methodology was able to extend the analysis period and follow the course of each issue's life cycle. Furthermore, it can facilitate a clearer understanding of complex social phenomena through topic analysis.
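As a rough illustration of the mapping step, the sketch below runs topic analysis independently per period and links topics of adjacent periods by the similarity of their keyword distributions; a topic with no sufficiently similar successor is treated as terminated. The toy corpora, shared vocabulary, LDA settings, and the 0.3 threshold are all illustrative assumptions, not the paper's method details.

```python
# Per-period topic analysis + inter-period issue mapping by keyword similarity.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

period1 = ["stock market rally", "market rally continues", "election debate heats up"]
period2 = ["stock market correction", "election results announced", "new phone released"]

def topics(docs, vec, n_topics=2):
    # Fit LDA on one period only; earlier periods need not be re-analyzed.
    X = vec.transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# A shared vocabulary makes topic-word vectors of different periods comparable.
vec = CountVectorizer().fit(period1 + period2)
t1, t2 = topics(period1, vec), topics(period2, vec)

sim = cosine_similarity(t1, t2)   # rows: period-1 topics, cols: period-2 topics
for i, row in enumerate(sim):
    j = row.argmax()
    if row[j] > 0.3:              # assumed threshold for a "transition" link
        print(f"period-1 topic {i} -> period-2 topic {j} (sim={row[j]:.2f})")
    else:
        print(f"period-1 topic {i} terminates")  # no successor: issue ends
```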

A Study on Storytelling of Yeongweal-palkyung Applied by Halo Effect of King Danjong's Sorrowful Story (단종애사(端宗哀史)의 후광효과를 적용한 영월팔경의 스토리탤링 전략)

  • Rho, Jae-Hyun
    • Journal of the Korean Institute of Landscape Architecture / v.36 no.3 / pp.63-74 / 2008
  • With the awareness that the Sinyeongwol Sipgyeong (ten scenic spots in Yeongwol) were designed too hastily and only for PR purposes after the change in the tourism environment, this paper notes that most tourism and culture resources in Yeongwol are related to King Danjong, the sixth king of the Joseon Dynasty. This study proposes a storytelling plan for the landscape content 'Cultural Landscapes - Yeongwol Palgyeong (eight scenic spots in Yeongwol)' after reviewing the types and content of Yeongwol Palgyeong through the halo effect of the well-known sad history of King Danjong and the cultural value of Yeongwol. The significance of the unity of the historic site and the neighboring landscape is highlighted by investigating the anaphoric relations between the cultural landscape text ('Yeongwol Palgyeong') and the historic content (the sad history of King Danjong). For this, the cultural landscape of Yeongwol has been framed and layered to make spatial texts. To emphasize the 'telling' as well as the 'story,' interesting episodes have been reviewed to discover a motive. To diversify the telling methods, absorptive landscape factors have been classified as 'place,' 'object,' and 'visual point.' In addition, the storytelling of Yeongwol Palgyeong was examined in consideration of the story and background of 'Yeongwol Palgyeong - Sad Story of King Danjong' and the interaction of a variety of cultural content, by suggesting micro-content such as infotainment and edutainment as absorptive landscape factors. In order to make the storytelling plan available in practice as an alternative plan for Yeongwol tourism, a visual point should be properly set to make the landscape look sufficiently dynamic, and real landscape routes and narration scenarios should be prepared as well. Professional landscape interpreters who are well informed about the natural features of Yeongwol and the history of King Danjong should be brought into the project, and Internet and digital technology-based strategies should be developed.

Evaluation of useful treatment which uses dual-energy when curing lung-cancer patient with stereotactic body radiation therapy (폐암 환자의 정위적방사선 치료 시 이중 에너지를 이용한 치료 방법의 유용성 평가)

  • Jang, Hyeong Jun;Lee, Yeong Gyu;Kim, Yeong Jae;Park, Yeong Gyu
    • The Journal of Korean Society for Radiation Therapy / v.28 no.2 / pp.87-99 / 2016
  • Purpose: This study evaluates the clinical utility of applying treatment plans that use mono-energy or dual-energy beams according to tumor location in stereotactic body radiation therapy, comparing the changes in the dose actually delivered to the tumor and the dose to tissues adjacent to the tumor. Materials and Methods: CT images from a total of 10 patients were obtained, and treatment plans based on volumetric modulated arc therapy were created with mono-energy and with dual energy. To analyze the tumor-related factors, the Conformity Index (CI), Homogeneity Index (HI), and maximum dose were each calculated; to compare the dose distribution in normal tissues, $V_{10}$ and $V_5$ of the lung, the first to fourth ribs closest to the tumor ($1^{st}{\sim}4^{th}$ Rib), the spinal cord, the esophagus, and the trachea were selected. Also, in order to confirm the accuracy with which the planned dose distribution is actually delivered, a 2-dimensional ion chamber array was used to measure the dose distribution. Results: For the tumor factors, CI and HI were close to 1 when both energies were used. For the maximum dose, anterior chest wall tumors showed a 2% difference and dorsal tumors an equivalent value. For normal tissue, with anterior chest wall tumors, using both energies reduced the dose to the adjacent ribs by 4% and 5% and to the trachea by 11% and 17%; for the lung dose, $V_{10}$ was reduced by 1.5% and $V_5$ by 1%. For posterior chest wall tumors, using both energies reduced the dose to the ribs adjacent to the tumor by 6%, 1%, 4%, and 12%, and in the lung dose distribution, $V_{10}$ was reduced by 3% and $V_5$ by 3.1%. Dose measurements for all energies satisfied the Gamma Index 3 mm/3% criterion. Conclusion: Rather than using mono-energy alone, utilizing dual energy in the clinical setting can be applied more effectively to superficial tumors.
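The abstract does not define CI and HI; one common convention (assumed here, in the style of the frequently used RTOG indices, though the paper may adopt another) is:

```latex
% Assumed RTOG-style definitions; the paper may use a different convention.
\mathrm{CI} = \frac{V_{RI}}{TV}, \qquad \mathrm{HI} = \frac{D_{\max}}{D_{p}}
```

where $V_{RI}$ is the volume enclosed by the reference isodose, $TV$ the target volume, $D_{\max}$ the maximum dose, and $D_p$ the prescription dose; under these definitions, values near 1 indicate a conformal, homogeneous plan, consistent with the reported results.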

A Study on Construction and Application of Nuclear Grade ESF ACS Simulator (원자력등급 ESF 공기정화계통 시뮬레이터 제작 및 활용에 관한 연구)

  • Lee, Sook-Kyung;Kim, Kwang-Sin;Sohn, Soon-Hwan;Song, Kyu-Min;Lee, Kei-Woo;Park, Jeong-Seo;Hong, Soon-Joon;Kang, Sun-Haeng
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT) / v.8 no.4 / pp.319-327 / 2010
  • A nuclear-grade ESF ACS simulator was designed, built, and verified to perform experiments related to the ESF ACS of nuclear power plants. The dimensions of the 3D CAD model were based on drawings of the main control room (MCR) of Yonggwang units 5 and 6. The CFD analysis was performed based on measurements of the actual flow rate of the ESF ACS. The air flowing in the ACS was assumed to be at $30^{\circ}C$ with uniform flow. The flow velocity across the HEPA filter was estimated to be 1.83 m/s, based on the MCR ACS flow rate of 12,986 CFM and a HEPA filter bank of 9 filters, each with an effective area of $610{\times}610mm^2$. When the MCR ACS was modeled, the air-flow-blocking filter frames were included for a better simulation of the real ACS. In the CFD analysis, the air flow in the lower part of the activated carbon adsorber was simulated separately at higher than 7 m/s to reflect the measured value of 8 m/s. Through the CFD analyses of the ACSs of the fuel building emergency ventilation system and the emergency core cooling system equipment room ventilation cleanup system, it was confirmed that all three ESF ACSs can be simulated by controlling the flow rate of the simulator. After the CFD analysis, the simulator was built to nuclear grade, and its reliability was verified through air flow distribution tests before it was used in the main tests. The verification results showed that the distribution of the internal flow was uniform except near the filter frames when the medium filter was installed. The simulator was used in tests to confirm the revised contents of Reg. Guide 1.52 (Rev. 3).
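As a consistency check, the quoted HEPA face velocity follows directly from the stated flow rate and filter geometry (converting with 1 CFM $\approx 4.719\times10^{-4}\ \mathrm{m^3/s}$):

```latex
v = \frac{Q}{A}
  = \frac{12{,}986 \times 4.719\times10^{-4}\ \mathrm{m^3/s}}{9 \times (0.610\ \mathrm{m})^2}
  = \frac{6.13\ \mathrm{m^3/s}}{3.35\ \mathrm{m^2}}
  \approx 1.83\ \mathrm{m/s}
```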

Feasibility Study of Dose Evaluation of Stereotactic Radiosurgery using GafChromic $EBT^{(R)}$ Film (GafChromic $EBT^{(R)}$ 필름을 이용한 뇌정위방사선치료의 선량분석 가능성 평가)

  • Jang, Eun-Sung;Lee, Chul-Soo
    • The Journal of Korean Society for Radiation Therapy / v.19 no.1 / pp.27-33 / 2007
  • Purpose: We have performed SRS (stereotactic radiosurgery) for AVM (arteriovenous malformation) and brain cancer. In order to verify the dose and localization of SRS, dose distributions from the TPS ($X-Knife^{(R)}$ 3.0, Radionics, USA) and from GafChromic $EBT^{(R)}$ film in a head phantom were compared. Materials and Methods: In this study, the head and neck region of a conventional humanoid phantom was modified by substituting one of its 2.5 cm slabs with five 0.5 cm acrylic plates so that GafChromic $EBT^{(R)}$ film could be stacked slice by slice at 5 mm intervals. Four films and five acrylic plates were cut along the contour of the head phantom in the axial plane. The head phantom was fixed with the SRS head ring and fitted with the SRS localizer, exactly as in a real SRS procedure. CT images of the head phantom were acquired at 5 mm slice intervals, matching the film intervals. Five-arc 6 MV photon beams using an SRS cone of 2 cm diameter delivered 300 cGy to the target in the phantom. Small pieces of the film were exposed to 0, 50, 100, 200, 300, 400, 500, 600, 700, 800, and 900 cGy, respectively, to calibrate the GafChromic $EBT^{(R)}$ film. The films in the phantom were digitized after 24 hours, and their linearity was calibrated. The pixel values of the film were converted to dose and compared with the dose distribution from the TPS calculation. Results: The calibration curve for the GafChromic $EBT^{(R)}$ film was linear up to 900 cGy; the $R^2$ value was better than 0.992. The discrepancy between the dose distributions calculated by $X-Knife^{(R)}$ 3.0 and those measured with the film was less than 5% across all slices. Conclusion: It was possible to evaluate every slice of the humanoid phantom by stacking GafChromic $EBT^{(R)}$ film, which is suitable for 2-dimensional dosimetry. Film dosimetry using GafChromic $EBT^{(R)}$ film was found to be feasible for routine dosimetric QA of stereotactic radiosurgery.
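The calibration workflow described above can be sketched as follows. Only the dose levels and the linearity result come from the abstract; the pixel readings, image, and TPS values below are hypothetical stand-ins.

```python
# Linear pixel-to-dose film calibration, then per-pixel comparison with a TPS.
import numpy as np

doses = np.array([0, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900])      # cGy
pixels = np.array([12, 60, 110, 205, 310, 402, 498, 607, 702, 795, 905])    # hypothetical scanner readings

# Least-squares linear calibration: dose = a * pixel + b
a, b = np.polyfit(pixels, doses, deg=1)
r2 = np.corrcoef(pixels, doses)[0, 1] ** 2
print(f"calibration: dose = {a:.3f}*pixel + {b:.1f} cGy, R^2 = {r2:.4f}")

# Convert a (hypothetical) scanned film slice to a 2-D dose map and compare it
# point by point with the TPS calculation, as in the study.
film_pixels = np.random.default_rng(0).uniform(280, 320, size=(64, 64))
film_dose = a * film_pixels + b
tps_dose = np.full((64, 64), 300.0)                       # prescribed 300 cGy
discrepancy = np.abs(film_dose - tps_dose) / tps_dose * 100
print(f"max discrepancy: {discrepancy.max():.1f}%")
```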

Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.227-240 / 2011
  • New concepts and ideas often result from extensive recombination of existing concepts or ideas. Both researchers and developers build on existing concepts and ideas in published papers or registered patents to develop new theories and technologies, which in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based analyses. The former lacks the ability to analyze technology information in detail, while the latter is unable to identify the relationships between technologies. In order to overcome these limitations, this study blends the two methods and suggests a keyword-network-based analysis methodology. We collected significant technology information related to Light Emitting Diodes (LED) from individual patents through text mining, built a keyword network, and then executed a community network analysis on the collected data. The results of the analysis are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Technically, density is obtained by dividing the number of ties in a network by the number of all possible ties; the value ranges between 0 and 1, with higher values indicating denser networks and lower values sparser ones. In real-world networks, the density varies with the size of the network; increasing the size generally decreases the density. The clustering coefficient is a network-level measure that captures the tendency of nodes to cluster in densely interconnected modules; it reflects the small-world property, in which a network can be highly clustered while still having a small average distance between nodes despite its large size. Therefore, the low density of the patent keyword network means that its nodes are connected only sparsely overall, while the high clustering coefficient shows that neighboring nodes are closely connected to one another. Second, the cumulative degree distribution of the patent keyword network, like that of other knowledge networks such as citation or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is more likely to attain new links as the network evolves. Unlike normal distributions, a power-law distribution has no representative scale: one cannot pick a representative or average value, because there is always a considerable probability of finding much larger values. Networks with power-law degree distributions are therefore often referred to as scale-free networks. The presence of a heavy-tailed, scale-free distribution is the fundamental signature of the emergent collective behavior of the actors who form the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and associated with other keywords or concepts to constitute and convey new patents or technologies. The evidence of a power-law distribution thus suggests that preferential attachment is the origin of the heavy-tailed distribution in the growing patent keyword network.
Third, we found that among keywords that flowed into a particular field, the vast majority of keywords with new links joined existing keywords in the associated community in forming the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses. Furthermore, the keyword combination information derived from the suggested methodology enables one to forecast which concepts will combine to form a new patent dimension and to refer to those concepts when developing a new patent.
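The network measures discussed above are straightforward to compute. The sketch below uses a toy keyword co-occurrence graph as an assumption standing in for the study's LED patent data.

```python
# Density, clustering coefficient, and degree distribution of a keyword network.
import networkx as nx
from collections import Counter

# Toy co-occurrence edges; in the study, nodes are patent keywords and an edge
# links keywords that appear together in the same LED patent.
edges = [("led", "phosphor"), ("led", "substrate"), ("led", "epitaxy"),
         ("phosphor", "substrate"), ("substrate", "epitaxy"),
         ("led", "driver"), ("driver", "circuit")]
G = nx.Graph(edges)

n, m = G.number_of_nodes(), G.number_of_edges()
print("density:", 2 * m / (n * (n - 1)))      # ties divided by possible ties
print("avg clustering:", nx.average_clustering(G))

# Degree distribution; a power law appears as a straight line on log-log axes.
deg_counts = Counter(dict(G.degree()).values())
for k in sorted(deg_counts):
    print(f"degree {k}: {deg_counts[k]} node(s)")
```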

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering / v.7 no.9 / pp.351-360 / 2018
  • LWR (Locally Weighted Regression) is traditionally a lazy learning model designed to obtain a prediction for a given input variable, the query point: it fits a regression over a short interval, giving higher weights to training samples closer to the query point. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed incremental ensemble learning method sequentially generates and integrates LWR models over time, using a genetic algorithm to obtain the solution for a specific query point. A weakness of existing LWR is that multiple LWR models can be generated depending on the indicator function and the data sample selection, and prediction quality can vary with the chosen model; however, no research has addressed the selection or combination of multiple LWR models. In this study, after generating an initial LWR model according to the indicator function and the sample data set, we iterate an evolutionary learning process to obtain a proper indicator function and assess the LWR models on other sample data sets to overcome data set bias. We adopt an eager learning strategy to gradually generate and store LWR models as data are generated for each section. To obtain a prediction at a specific point in time, an LWR model is generated from the newly generated data within a predetermined interval and then combined, using a genetic algorithm, with the existing LWR models of that section. The proposed method shows better results than selecting and combining multiple LWR models by simple averaging. The results of this study are compared with predictions from multiple regression analysis on real data, such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
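For readers unfamiliar with the base model, a minimal sketch of locally weighted regression at a single query point follows. The Gaussian kernel, bandwidth, and synthetic data are illustrative assumptions, not the paper's settings.

```python
# Locally weighted (linear) regression at one query point.
import numpy as np

def lwr_predict(x_query, X, y, tau=0.5):
    """Predict at x_query with a locally weighted linear fit.

    Weights w_i = exp(-(x_i - x_query)^2 / (2 tau^2)) give samples near the
    query point more influence, as described above.
    """
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))
    A = np.stack([np.ones_like(X), X], axis=1)       # design matrix [1, x]
    W = np.diag(w)
    # Weighted least squares: theta = (A^T W A)^{-1} A^T W y
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return theta[0] + theta[1] * x_query

rng = np.random.default_rng(1)
X = np.linspace(0, 10, 100)
y = np.sin(X) + rng.normal(0, 0.1, size=X.shape)     # noisy nonlinear signal
print(lwr_predict(5.0, X, y), np.sin(5.0))           # local fit near x = 5
```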

Validation of Extreme Rainfall Estimation in an Urban Area derived from Satellite Data : A Case Study on the Heavy Rainfall Event in July, 2011 (위성 자료를 이용한 도시지역 극치강우 모니터링: 2011년 7월 집중호우를 중심으로)

  • Yoon, Sun-Kwon;Park, Kyung-Won;Kim, Jong Pil;Jung, Il-Won
    • Journal of Korea Water Resources Association / v.47 no.4 / pp.371-384 / 2014
  • This study developed a new algorithm for extreme rainfall extraction based on Communication, Ocean and Meteorological Satellite (COMS) and Tropical Rainfall Measurement Mission (TRMM) satellite image data and evaluated its applicability for the July 2011 heavy rainfall event in Seoul, South Korea. A power-series-regression-based Z-R relationship was employed to account for the empirical relationships between TRMM/PR, TRMM/VIRS, COMS, and Automatic Weather System (AWS) data at each elevation. The estimated Z-R relationship ($Z=303R^{0.72}$) agreed well with AWS observations (correlation coefficient = 0.57). The 10-minute rainfall intensities estimated from the COMS satellite using the Z-R relationship tended to underestimate rainfall intensities, while for small rainfall events the Z-R relationship tended to overestimate them. However, the overall patterns of estimated rainfall were very comparable with the observed data. The correlation coefficient and the Root Mean Square Error (RMSE) of the 10-minute rainfall series from COMS and AWS were 0.517 and 3.146, respectively. In addition, the averaged error value of the spatial correlation matrix ranged from -0.530 to -0.228, indicating negative correlation. Reducing the error in satellite-based extreme rainfall estimation will require incorporating more extreme-event factors and improving the algorithm in further studies. This study showed the potential utility of multi-geostationary satellite data for building sub-daily rainfall records and establishing real-time flood alert systems in ungauged watersheds.
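For reference, rainfall intensity follows from the reported relationship by inverting the power law (the units are an assumption here: $Z$ in $mm^6/m^3$ and $R$ in mm/h, the usual Z-R convention):

```latex
Z = 303\,R^{0.72}
\quad\Longrightarrow\quad
R = \left(\frac{Z}{303}\right)^{1/0.72} \approx \left(\frac{Z}{303}\right)^{1.39}
```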