Title/Summary/Keyword: Essential applications


A Comparison between the Reference Evapotranspiration Products for Croplands in Korea: Case Study of 2016-2019 (우리나라 농지의 기준증발산 격자자료 비교평가: 2016-2019년의 사례연구)

  • Kim, Seoyeon; Jeong, Yemin; Cho, Subin; Youn, Youjeong; Kim, Nari; Lee, Yangwon
    • Korean Journal of Remote Sensing / v.36 no.6_1 / pp.1465-1483 / 2020
  • Evapotranspiration comprises evaporation from the soil and transpiration from plant leaves. It is an essential factor for monitoring water balance, drought, crop growth, and climate change. Actual evapotranspiration (AET) corresponds to the water actually consumed by the land surface and thus to the amount of water the land surface requires. Because AET is derived by multiplying the crop coefficient by the reference evapotranspiration (ET0), an accurate calculation of ET0 is required for AET. To date, many efforts have been made to produce gridded ET0, and multiple products are now available. This study presents a comparison of ET0 products (FAO56-PM, LDAPS, PKNU-NMSC, and MODIS) to determine which is more suitable for local-scale hydrological and agricultural applications in Korea, where the heterogeneity of the land surface is critical. In the experiment for the period between 2016 and 2019, the daily and 8-day products were compared with in-situ observations from the Korea Meteorological Administration (KMA). Analyses by station, year, month, and time series showed that the PKNU-NMSC product, successfully optimized for Korea, was superior to the others, yielding stable accuracy irrespective of space and time. This paper also describes the intrinsic characteristics of the FAO56-PM, LDAPS, and MODIS ET0 products, which could be informative for other researchers.
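Since AET is the crop coefficient times ET0, the daily FAO-56 Penman-Monteith calculation underlying products such as FAO56-PM can be sketched as below (a minimal illustration; the forcing values are hypothetical):

```python
import math

def fao56_pm_et0(t_mean, rn, g, u2, es, ea, elevation=0.0):
    """Daily reference evapotranspiration ET0 (mm/day), FAO-56 Penman-Monteith.

    t_mean: mean air temperature (deg C)
    rn: net radiation (MJ m-2 day-1), g: soil heat flux (MJ m-2 day-1)
    u2: wind speed at 2 m (m/s), es/ea: saturation/actual vapour pressure (kPa)
    """
    # Atmospheric pressure (kPa) and psychrometric constant (kPa/degC)
    p = 101.3 * ((293.0 - 0.0065 * elevation) / 293.0) ** 5.26
    gamma = 0.000665 * p
    # Slope of the saturation vapour pressure curve (kPa/degC)
    delta = 4098.0 * (0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))) \
            / (t_mean + 237.3) ** 2
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

def aet(kc, et0):
    """Actual evapotranspiration as crop coefficient times ET0."""
    return kc * et0

et0 = fao56_pm_et0(t_mean=20.0, rn=15.0, g=0.1, u2=2.0, es=2.34, ea=1.60)
print(round(et0, 2), round(aet(0.9, et0), 2))
```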

Geomagnetic Paleosecular Variation in the Korean Peninsula during the First Six Centuries (기원후 600년간 한반도 지구 자기장 고영년변화)

  • Park, Jong kyu; Park, Yong-Hee
    • The Journal of Engineering Geology / v.32 no.4 / pp.611-625 / 2022
  • One application of geomagnetic paleosecular variation (PSV) is the age dating of archeological remains (the archeomagnetic dating technique). This application requires a local PSV model that reflects the non-dipole field, which differs from region to region. Until now, the tentative Korean PSV (t-KPSV), calculated from the SW Japanese PSV (JPSV), has been applied as a reference curve for individual archeomagnetic directions in Korea. However, it is less reliable because of regional differences in the non-dipole field. Here, we present PSV curves for AD 1 to 600, corresponding to the Korean Three Kingdoms (including the Proto Three Kingdoms) period, using the results of archeomagnetic studies in the Korean Peninsula and published research data. We then compare our PSV with global geomagnetic prediction models and with t-KPSV. A total of 49 reliable archeomagnetic directional data from 16 regions were compiled for our PSV. Each dataset was statistically consistent (N > 6, 𝛼95 < 7.8°, and k > 57.8) and had radiocarbon or archeological ages in the range AD 1 to 600 with an error of less than ±200 years. The compiled PSV for the first six centuries (KPSV0.6k) showed declinations of 341.7° to 20.1° and inclinations of 43.5° to 60.3°. Compared to t-KPSV, our curve revealed different variation patterns in both declination and inclination. On the other hand, KPSV0.6k and the global geomagnetic prediction models (ARCH3K.1, CALS3K.4, and SED3K.1) showed consistent variation trends over the first six centuries, with ARCH3K.1 fitting our KPSV0.6k best. These results indicate that the contribution of the non-dipole field differs considerably between Korea and Japan despite their geographical proximity, and that compiling archeomagnetic data from Korean territory is essential for building a reliable PSV curve as an age-dating tool. Lastly, we double-checked the reliability of KPSV0.6k by showing that newly acquired, age-controlled archeomagnetic data fit well on our curve.
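The consistency criteria cited above (N > 6, 𝛼95 < 7.8°, k > 57.8) are Fisher statistics computed from directional data. A minimal sketch of that computation follows; the sample declination/inclination pairs are hypothetical, not data from the study:

```python
import math

def fisher_stats(directions):
    """Fisher (1953) statistics for (declination, inclination) pairs in degrees.

    Returns (N, R, k, alpha95): number of directions, resultant vector length,
    precision parameter, and 95% confidence cone half-angle in degrees.
    """
    n = len(directions)
    x = y = z = 0.0
    for dec, inc in directions:
        d, i = math.radians(dec), math.radians(inc)
        x += math.cos(i) * math.cos(d)
        y += math.cos(i) * math.sin(d)
        z += math.sin(i)
    r = math.sqrt(x * x + y * y + z * z)   # resultant vector length
    k = (n - 1) / (n - r)                  # precision parameter
    # 95% confidence cone about the mean direction
    cos_a95 = 1.0 - ((n - r) / r) * ((1.0 / 0.05) ** (1.0 / (n - 1)) - 1.0)
    alpha95 = math.degrees(math.acos(cos_a95))
    return n, r, k, alpha95

# Hypothetical, tightly clustered site directions (dec, inc)
dirs = [(350.0, 52.0), (352.5, 50.5), (348.0, 53.5), (351.0, 51.0),
        (349.5, 52.5), (353.0, 54.0), (350.5, 50.0)]
n, r, k, a95 = fisher_stats(dirs)
print(n, round(k, 1), round(a95, 2))
```

A tightly clustered set like this passes the study's acceptance thresholds (high k, small 𝛼95).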

Applications of Radiocarbon Isotope Ratios in Environmental Sciences in South Korea (방사성탄소동위원소비 분석을 적용한 우리나라 환경과학 연구)

  • Neung-Hwan Oh; Ji-Yeon Cha
    • Korean Journal of Ecology and Environment / v.56 no.4 / pp.281-302 / 2023
  • Carbon is not only an essential element for life but also a key player in climate change. The radiocarbon (14C) analysis using accelerator mass spectrometry (AMS) is a powerful tool not only to understand the carbon cycle but also to track pollutants derived from fossil carbon, which have a distinct radiocarbon isotope ratio (Δ14C). Many studies have reported Δ14C of carbon compounds in streams, rivers, rain, snow, throughfall, fine particulate matter (PM2.5), and wastewater treatment plant effluents in South Korea, which are reviewed in this manuscript. In summary, (1) stream and river carbon in South Korea are largely derived from the chemical weathering of soils and rocks, and organic compounds in plants and soils, strongly influenced by precipitation, wastewater treatment effluents, agricultural land use, soil water, and groundwater. (2) Unprecedentedly high Δ14C of precipitation during winter has been reported, which can directly and indirectly influence stream and river carbon. Although we cannot exclude the possibility of local contamination sources of high Δ14C, the results suggest that stream dissolved organic carbon could be older than previously thought, warranting future studies. (3) The 14C analysis has also been applied to quantify the sources of forest throughfall and PM2.5, providing new insights. The 14C data on a variety of ecosystems will be valuable not only to track the pollutants derived from fossil carbon but also to improve our understanding of climate change and provide solutions.

Situation of Utilization and Geological Occurrences of Critical Minerals (Graphite, REE, Ni, Li, and V) Used for a High-tech Industry (첨단산업용 핵심광물(흑연, REE, Ni, Li, V)의 지질학적 부존특성 및 활용현황)

  • Sang-Mo Koh; Bum Han Lee; Chul-Ho Heo; Otgon-Erdene Davaasuren
    • Economic and Environmental Geology / v.56 no.6 / pp.781-797 / 2023
  • Recently, mineral-demanding countries have responded rapidly to secure the critical minerals used in high-tech industries. Graphite production is overwhelmingly dominated by China, but the global supply is changing owing to exponential growth in the EV battery sector, with active exploration underway in East Africa. Rare earth elements (REEs) are essential raw materials widely used in advanced industries. Globally, REEs are produced from three main deposit types: carbonatite, laterite, and ion-adsorption clay. Although China's production has decreased somewhat, it still maintains overwhelming dominance in this sector; notable changes over the past few years include the rapid emergence of Myanmar and increased production in Vietnam. Nickel has long been used in various chemical and metal industries, but its significance in the market has recently been growing, particularly in the battery sector. Worldwide, nickel deposits can be broadly classified into two types: laterite-type, derived from ultramafic rocks, and ultramafic-hosted sulfide-type. Development of sulfide-type deposits, primarily in Australia, is predicted to continue growing, while development of laterite-type deposits is expected to be promoted in Indonesia, largely driven by the growing demand for nickel for lithium-ion batteries. Global lithium is produced from three main ore types: brine lake (78%), rock/mineral (19%), and clay (3%). Rock/mineral-type ores have a slightly higher grade than brine lake sources but are less abundant. Chile, Argentina, and the United States primarily produce lithium from brine lake deposits; Australia and China extract it from both brine lakes and rock/mineral sources; Canada produces lithium exclusively from rock/mineral sources. Vanadium has traditionally been used in steel alloys, which account for approximately 90% of its usage. However, it is increasingly used in vanadium redox flow batteries, particularly for large-scale energy storage. Global vanadium sources fall into two main categories: vanadium contained in iron ore produced from mines (81%) and vanadium recovered from by-products (secondary sources, 18%). Of the primary 81%, about 70% is derived from vanadium slag in the steelmaking process and 30% from directly mined ore; intermediate vanadium oxides are manufactured from these sources. Vanadium deposits are classified into four types: vanadiferous titanomagnetite (VTM), sandstone-hosted, shale-hosted, and vanadate. Currently, only VTM-type ore is being produced.

Introduction and Evaluation of the Production Method for Chlorophyll-a Using Merging of GOCI-II and Polar Orbit Satellite Data (GOCI-II 및 극궤도 위성 자료를 병합한 Chlorophyll-a 산출물 생산방법 소개 및 활용 가능성 평가)

  • Hye-Kyeong Shin; Jae Yeop Kwon; Pyeong Joong Kim; Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1255-1272 / 2023
  • Satellite-based chlorophyll-a concentration, produced as a long-term time series, is crucial for global climate change research, and producing gap-free data by merging temporally composited or multi-satellite data is essential. However, studies of satellite-based chlorophyll-a concentration in the waters around the Korean Peninsula have mainly focused on evaluating seasonal characteristics or proposing algorithms for specific research areas using a single ocean color sensor. In this study, a merged dataset of remote-sensing reflectance from the geostationary sensor GOCI-II and the polar-orbiting sensors MODIS, VIIRS, and OLCI was used to achieve high spatial coverage of chlorophyll-a concentration in the waters around the Korean Peninsula. The spatial coverage of our results increased by approximately 30% compared to polar-orbiting sensor data, effectively compensating for gaps caused by clouds. We also aimed to quantitatively assess accuracy through comparison with the global chlorophyll-a composites provided by the Ocean Colour Climate Change Initiative (OC-CCI) and GlobColour, along with in-situ observations. Because of the limited number of in-situ observations, we could not obtain statistically significant results; nevertheless, we observed a tendency toward underestimation compared to the global products. Furthermore, to evaluate practical applications for marine disasters such as red tides, we qualitatively compared our results with a red tide event in the East Sea in 2013; the results were more similar to OC-CCI than to standalone geostationary sensor results. We plan to use the generated data in future research on artificial-intelligence models for prediction and anomaly analysis, and we anticipate that the results will be beneficial for monitoring chlorophyll-a events in the coastal waters around Korea.
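The multi-sensor gap filling described above can be sketched as a simple priority composite; the 2 × 2 grids and the fill order below are illustrative stand-ins, not the study's actual merging scheme:

```python
import numpy as np

def merge_chla(primary, *fallbacks):
    """Fill NaN gaps (e.g. cloud cover) in the primary chlorophyll-a grid
    with values from fallback sensor grids, in priority order."""
    merged = primary.copy()
    for grid in fallbacks:
        gap = np.isnan(merged)
        merged[gap] = grid[gap]
    return merged

def coverage(grid):
    """Fraction of valid (non-NaN) pixels."""
    return 1.0 - np.isnan(grid).mean()

nan = np.nan
goci2 = np.array([[0.5, nan], [nan, 1.2]])   # geostationary (GOCI-II)
modis = np.array([[0.6, 0.8], [nan, nan]])   # polar orbiter 1
olci  = np.array([[nan, nan], [0.9, 1.1]])   # polar orbiter 2

merged = merge_chla(goci2, modis, olci)
print(merged, coverage(goci2), coverage(merged))
```

Pixels observed by the primary sensor are kept; fallback sensors only fill what remains missing.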

The Development of an Aggregate Power Resource Configuration Model Based on the Renewable Energy Generation Forecasting System (재생에너지 발전량 예측제도 기반 집합전력자원 구성모델 개발)

  • Eunkyung Kang; Ha-Ryeom Jang; Seonuk Yang; Sung-Byung Yang
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.229-256 / 2023
  • The increase in telecommuting and household electricity demand due to the pandemic has significantly changed electricity demand patterns, making it difficult to identify generation under KEPCO's power purchase agreements (PPAs) and from residential solar, and adding to the challenges of demand forecasting and grid operation for the power exchange. Unlike other energy resources, electricity is difficult to store, so a balance between production and consumption must be maintained; a shortage or overproduction can cause significant instability in the energy system, and the supply and demand of electricity must therefore be managed effectively. Moreover, with the Fourth Industrial Revolution the importance of data has increased, and problems such as large-scale fires and power outages can have severe impacts. In the electricity sector it is thus crucial to accurately forecast renewable generation as well as demand: proper generation management reduces unnecessary power production and uses energy resources efficiently. In this study, we reviewed the renewable energy generation forecasting system, its objectives, and its practical applications in order to construct optimal aggregated power resources using data from 169 power plants provided by the Ministry of Trade, Industry, and Energy. We developed an aggregation algorithm that accounts for the settlement rules of the forecasting system and applied it in our analysis to synthesize and interpret the results. The optimal aggregation algorithm derived an aggregation configuration (Result_Number 546) that reached 80.66% of the maximum settlement amount, and identified plants that increase the settlement amount when aggregated (B1783, B1729, N6002, S5044, B1782, N6006) and plants that decrease it (S5034, S5023, S5031). This is the first study to develop an optimal aggregation algorithm using aggregated power resources as the unit of analysis, and we expect its results to help improve the stability of the power system and the efficient use of energy resources.
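The kind of aggregation logic described above can be sketched as a greedy search that builds an aggregate maximizing a settlement amount. The incentive rule here (a flat payment per kWh when the aggregate forecast error stays within a tolerance) and all plant data are simplifying assumptions for illustration, not the paper's actual settlement formula or plant data:

```python
import numpy as np

def nmae(forecast, actual, capacity):
    """Normalized mean absolute error, as a percent of installed capacity."""
    return 100.0 * np.mean(np.abs(forecast - actual)) / capacity

def settlement(plants, incentive=4.0, threshold=8.0):
    """Toy settlement: pay `incentive` (won/kWh) on aggregate generation
    when the aggregate forecast NMAE is within `threshold` percent."""
    if not plants:
        return 0.0
    f = sum(p["forecast"] for p in plants)
    a = sum(p["actual"] for p in plants)
    cap = sum(p["capacity"] for p in plants)
    if nmae(f, a, cap) <= threshold:
        return float(a.sum()) * incentive
    return 0.0

def greedy_aggregate(plants):
    """Greedily add the plant that most increases the settlement amount."""
    chosen, rest = [], list(plants)
    while rest:
        best = max(rest, key=lambda p: settlement(chosen + [p]))
        if settlement(chosen + [best]) <= settlement(chosen):
            break
        chosen.append(best)
        rest.remove(best)
    return chosen

rng = np.random.default_rng(0)
plants = []
for i in range(5):
    actual = rng.uniform(40, 60, size=24)            # hourly output (kWh)
    error = rng.normal(0, 2 + 3 * (i % 2), size=24)  # some plants forecast worse
    plants.append({"id": f"P{i}", "capacity": 100.0,
                   "actual": actual, "forecast": actual + error})

chosen = greedy_aggregate(plants)
print([p["id"] for p in chosen], round(settlement(chosen)))
```

Aggregation helps here because independent forecast errors partially cancel, keeping the pooled error within the tolerance while the settled energy grows.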

Incorporating Social Relationship discovered from User's Behavior into Collaborative Filtering (사용자 행동 기반의 사회적 관계를 결합한 사용자 협업적 여과 방법)

  • Thay, Setha; Ha, Inay; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.1-20 / 2013
  • Nowadays, social networks are huge communication platforms that let people connect with one another and share common interests, experiences, and daily activities. Users spend hours per day maintaining personal information and interacting with other people via posts, comments, messages, games, social events, and applications. Given the growth of users' distributed information in social networks, there is great potential to utilize social data to enhance the quality of recommender systems. Several lines of research in social network analysis investigate how social networks can be used in the recommendation domain. Among them, we are interested in exploiting the interactions between a user and others in a social network, which can be identified as social relationships. Moreover, users' purchase decisions often depend on suggestions from people who have either similar preferences or close relationships. For this reason, we believe that users' relationships in social networks can effectively improve the prediction of user interests in recommender systems, and that social relationships extracted from a social network are a natural factor for improving preference prediction in the conventional approach. Recommender systems are dramatically increasing in popularity and are currently used by many e-commerce sites such as Amazon.com, Last.fm, and eBay.com. Collaborative filtering (CF) is one of the essential and powerful techniques in recommender systems for suggesting appropriate items to a user by learning user preferences. CF focuses on user data and generates automatic predictions about a user's interests by gathering information from users who share a similar background and preferences. Specifically, the intention of CF is to find users with similar preferences and to suggest to the target user the items most preferred by those nearest-neighbor users. Two basic units must be considered by CF: the user and the item. Each user provides rating values on items (movies, products, books, etc.) to indicate interest in them; CF uses the user-rating matrix to find a group of users whose ratings are similar to the target user's and then predicts unknown rating values for items the target user has not rated. CF has been successfully implemented in both information filtering and e-commerce applications, but it still faces important challenges such as cold start, data sparsity, and scalability, which affect the quality and accuracy of prediction. To overcome these challenges, researchers have proposed various kinds of CF such as hybrid CF, trust-based CF, and social network-based CF. To improve the recommendation performance and prediction accuracy of standard CF, in this paper we propose a method that integrates the traditional CF technique with social relationships between users discovered from their behavior in a social network, namely Facebook. We identify users' relationships from behavior such as posts and comments exchanged with friends on Facebook, believing that social relationships implicitly inferred from behavior can compensate for the limitations of the conventional approach. We therefore extract each user's posts and comments using the Facebook Graph API, calculate a feature score for each term to obtain a feature vector for computing user similarity, and combine the result with the similarity value computed using the traditional CF technique. Finally, our system provides a list of recommended items according to the neighbor users who have the largest total similarity to the target user. To verify and evaluate the proposed method, we performed an experiment on data collected from our Movies Rating System. Prediction accuracy is evaluated in terms of MAE to show how correct our recommendations are; performance is evaluated in terms of precision, recall, and F1-measure; and coverage is evaluated to show the ability to generate recommendations. The experimental results show that the proposed method outperforms the baselines and suggests items more accurately. In particular, incorporating user behavior in the social network improves recommendation accuracy by up to 6%, and the recommendation-performance experiment shows a 7% improvement over benchmark methods. We thus confirm that interactions between users in a social network can enhance accuracy and give better recommendations than the conventional approach.
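The combination step described above, a behavioral similarity from term-feature vectors merged with the traditional CF similarity, can be sketched as follows. The equal weighting (alpha = 0.5) and the sample ratings and term weights are illustrative assumptions, not the paper's tuned values:

```python
import math

def pearson_sim(ratings_u, ratings_v):
    """Pearson correlation over co-rated items (traditional CF similarity)."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0
    mu = sum(ratings_u[i] for i in common) / len(common)
    mv = sum(ratings_v[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mu) * (ratings_v[i] - mv) for i in common)
    du = math.sqrt(sum((ratings_u[i] - mu) ** 2 for i in common))
    dv = math.sqrt(sum((ratings_v[i] - mv) ** 2 for i in common))
    return num / (du * dv) if du and dv else 0.0

def cosine_sim(fu, fv):
    """Cosine similarity of term-feature vectors built from posts/comments."""
    num = sum(fu[t] * fv[t] for t in set(fu) & set(fv))
    du = math.sqrt(sum(w * w for w in fu.values()))
    dv = math.sqrt(sum(w * w for w in fv.values()))
    return num / (du * dv) if du and dv else 0.0

def total_sim(ratings_u, ratings_v, fu, fv, alpha=0.5):
    """Weighted combination of CF similarity and social (behavioral) similarity."""
    return alpha * pearson_sim(ratings_u, ratings_v) + (1 - alpha) * cosine_sim(fu, fv)

ru = {"m1": 5.0, "m2": 3.0, "m3": 4.0}           # target user's ratings
rv = {"m1": 4.0, "m2": 2.0, "m3": 5.0}           # neighbor's ratings
fu = {"movie": 2.0, "action": 1.0}               # term weights from posts/comments
fv = {"movie": 1.0, "drama": 2.0}
s = total_sim(ru, rv, fu, fv)
print(round(s, 3))
```

Neighbors are then ranked by this total similarity, and items they preferred are recommended to the target user.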

Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity (SaaS 기업의 차별화 및 가격전략이 고객획득성과에 미치는 영향: SaaS 기술성숙도 수준의 매개효과 및 조절효과를 중심으로)

  • Chae, SeongWook; Park, Sungbum
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.151-171 / 2014
  • Firms today seek management effectiveness and efficiency by utilizing information technologies (IT). Numerous firms outsource specific information systems functions to cope with a shortage of information resources or IT experts, or to reduce capital costs. Recently, Software-as-a-Service (SaaS) has become one of the powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the internet. It embodies the ideas of on-demand, pay-per-use, utility computing and is now being applied to support the core competencies of clients in areas ranging from individual productivity to vertical industries and e-commerce. In this study, therefore, we seek to quantify the value that SaaS has for business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing on prior literature on SaaS, technology maturity, and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategy is operationalized as low-cost strategy and differentiation strategy, and customer acquisition is considered as business performance. The specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost and differentiation strategies of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships. For this purpose, the study collects data on SaaS providers and their lines of applications registered in the CNK (Commerce net Korea) database, using a questionnaire administered by a professional research institution. The unit of analysis is the strategic business unit (SBU) within a software provider, and a total of 199 SBUs are used for analyzing and testing our hypotheses. Differentiation strategy is measured with three items: application uniqueness (whether an application aims to differentiate within just one or a small number of target industries), supply channel diversification (whether the SaaS vendor has a diversified supply chain), and the number of specialized experts. Low-cost strategy is measured with two items: subscription fee and initial set-up fee. We employ hierarchical regression analysis to test the moderation effects of SaaS technology maturity and follow Baron and Kenny's procedure to determine whether firm strategies affect customer acquisition through technology maturity. Empirical results revealed, first, that when a differentiation strategy is applied to attain business performance such as customer acquisition, its effects are moderated by the technology maturity level of the SaaS provider; in other words, securing a higher level of SaaS technology maturity is essential for higher business performance. For instance, firms implementing application uniqueness or distribution channel diversification as a differentiation strategy acquire more customers when their SaaS technology maturity is higher rather than lower. Second, the results indicate that pursuing either a differentiation or a low-cost strategy effectively helps SaaS providers acquire customers: continuously differentiating their service from others, or lowering their service fees (subscription fee or initial set-up fee), contributes to business success in terms of customer acquisition. Lastly, the results show that the level of SaaS technology maturity mediates the relationship between low-cost strategy and customer acquisition; that is, customers perceive the real value of a low subscription or initial set-up fee only through the SaaS service provided by the vendor, and this in turn affects their decision on whether to subscribe.
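The hierarchical regression test for moderation described above can be sketched on synthetic data: a moderation effect appears as a meaningful coefficient on the strategy × maturity interaction term added in the second step. All variable values here are simulated, not the study's survey data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 199  # same size as the study's SBU sample; the data below are synthetic

differentiation = rng.normal(0, 1, n)  # e.g. application uniqueness (standardized)
maturity = rng.normal(0, 1, n)         # SaaS technology maturity (standardized)
# Hypothetical data-generating process: the strategy effect grows with maturity
customers = (0.3 * differentiation + 0.2 * maturity
             + 0.5 * differentiation * maturity + rng.normal(0, 0.5, n))

def ols(y, *cols):
    """Ordinary least squares with an intercept column."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Step 1: main effects only; Step 2: add the interaction (moderation test)
b_main = ols(customers, differentiation, maturity)
b_mod = ols(customers, differentiation, maturity, differentiation * maturity)
print(b_mod)  # last coefficient is the interaction (moderation) estimate
```

Baron and Kenny's mediation check would instead regress maturity on strategy, then performance on both, and compare the strategy coefficient across steps.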

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae; Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond stakeholders such as the managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large 'chaebol' corporations went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured firms immediately after the global financial crisis, it concentrated on only a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse like the Lehman Brothers case of the global financial crisis. The key variables behind corporate defaults vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided chronologically into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent through time, we first train a deep learning time series model on the pre-crisis data (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008); this yields a model that follows the pattern of the training results and shows excellent predictive power. Each bankruptcy prediction model is then rebuilt on the combined training and validation data (2000~2008), applying the optimal parameters found in validation. Finally, the corporate default prediction models trained over the nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of the model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three bundles of variables. The definition of bankruptcy follows Lee (2015), and the independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups, and the multivariate discriminant analysis model of Altman (1968), the logit model of Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data pose the problems of nonlinear variables, multicollinearity, and lack of data: the logit model handles nonlinearity, the Lasso regression model mitigates the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although research on corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and is more effective in predictive power. Through the Fourth Industrial Revolution, the Korean and other governments are working to integrate such systems into everyday life, yet deep learning time series research for the financial industry is still insufficient. As an initial study applying deep learning time series algorithms to corporate defaults, we hope this work serves as comparative material for non-specialists beginning research that combines financial data with deep learning time series algorithms.
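The chronological split described above (7 training years, 2 validation years spanning the crisis, 1 test year) can be sketched as follows; the firm-year records are placeholders for the study's financial-ratio data:

```python
def time_series_split(panel, train_end, valid_end, test_end):
    """Split firm-year records chronologically, as in the study design:
    train on pre-crisis years, validate across the crisis, test after."""
    train = [r for r in panel if r["year"] <= train_end]
    valid = [r for r in panel if train_end < r["year"] <= valid_end]
    test = [r for r in panel if valid_end < r["year"] <= test_end]
    return train, valid, test

# Hypothetical firm-year records (the real study uses financial ratios as features)
panel = [{"year": y, "firm": f"F{i}", "default": 0}
         for y in range(2000, 2010) for i in range(3)]
train, valid, test = time_series_split(panel, 2006, 2008, 2009)
print(len(train), len(valid), len(test))  # 21 6 3
```

Splitting by year rather than at random prevents information from the crisis period leaking into the pre-crisis training set.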

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae; Eunkyung Lee; Jianwei Wei; Kyeong-sang Lee; Minsang Kim; Jong-kuk Choi; Jae Hyun Ahn
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1565-1576 / 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from the top-of-atmosphere observations of the Geostationary Ocean Color Imager-II (GOCI-II). The Rrs derived from atmospheric correction is used to estimate various marine environmental parameters such as chlorophyll-a concentration, total suspended material concentration, and absorption by dissolved organic matter. Atmospheric correction is therefore a fundamental algorithm, as it significantly impacts the reliability of all other ocean color products. In clear waters, however, the atmospheric path radiance in the blue wavelengths can be more than ten times higher than the water-leaving radiance, making atmospheric correction a highly error-sensitive process: a 1% error in estimating the atmospheric radiance can cause more than a 10% error in Rrs. Quality assessment of Rrs after atmospheric correction is thus essential for reliable analysis of the ocean environment using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data archived in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS) was applied, modified to account for the spectral characteristics of GOCI-II. This method is officially employed in the National Oceanic and Atmospheric Administration (NOAA)'s ocean color satellite data processing system; it assigns Rrs a quality score from 0 to 1 and classifies the water into 23 types. Applied to initial-phase GOCI-II data with limited calibration, the QA algorithm shows the highest frequency at a relatively low score of 0.625; applied to the improved GOCI-II atmospheric correction results with updated calibrations, it shows the highest frequency at the higher score of 0.875. The water-type analysis indicated that parts of the East Sea, South Sea, and Northwest Pacific Ocean are primarily characterized as relatively clear case-I waters, while the coastal areas of the Yellow Sea and East China Sea are mainly classified as highly turbid case-II waters. We expect the QA algorithm to support GOCI-II users not only in statistically identifying Rrs retrievals with significant errors but also in enabling more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
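The score-and-classify logic described above can be sketched in a toy form: normalize the Rrs spectrum, assign the nearest water type by spectral similarity, then score the fraction of bands inside that type's reference envelope. The band set, reference spectra, and use of only two water types below are hypothetical stand-ins for the algorithm's 23 in-situ-derived types:

```python
import numpy as np

def qa_score(rrs, class_means, class_lower, class_upper):
    """Toy spectral QA check: returns (water_type_index, score in [0, 1])."""
    nrrs = rrs / np.linalg.norm(rrs)  # shape-normalized spectrum
    # Assign the water type whose mean spectrum is most similar (cosine)
    sims = [np.dot(nrrs, m) / np.linalg.norm(m) for m in class_means]
    w = int(np.argmax(sims))
    # Score = fraction of bands inside that type's reference envelope
    in_bounds = (nrrs >= class_lower[w]) & (nrrs <= class_upper[w])
    return w, float(in_bounds.mean())

bands = [443, 490, 555, 660]  # nm (illustrative band set)
# Hypothetical normalized reference spectra: clear (blue-high), turbid (red-high)
means = [np.array([0.7, 0.5, 0.3, 0.1]), np.array([0.2, 0.4, 0.6, 0.8])]
lower = [m * 0.5 for m in means]
upper = [m * 1.5 for m in means]

# A blue-dominated Rrs spectrum, typical of clear case-I water
w, score = qa_score(np.array([0.010, 0.007, 0.004, 0.001]), means, lower, upper)
print(w, score)
```

A spectrum distorted by atmospheric correction errors would fall outside the envelope at some bands, lowering its score toward 0.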