• Title/Summary/Keyword: minimization method


A Ship-Wake Joint Detection Using Sentinel-2 Imagery

  • Jeon, Woojin;Jin, Donghyun;Seong, Noh-hun;Jung, Daeseong;Sim, Suyoung;Woo, Jongho;Byeon, Yugyeong;Kim, Nayeon;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.77-86
    • /
    • 2023
  • Ship detection is widely used in areas such as maritime security, maritime traffic, fisheries management, illegal fishing, and border control. It is also important for rapid response and damage minimization, as ship accident rates rise with the recent growth of international maritime traffic. Under a number of global and national regulations, ships must currently be equipped with an automatic identification system (AIS), which periodically reports information such as the ship's location and speed. However, most small vessels (less than 300 tons) are not obligated to install the transponder, and AIS signals may be withheld intentionally or lost accidentally; there are even cases in which a ship's location information is misused. Therefore, in this study, ship detection was performed using high-resolution optical satellite images, which can periodically and remotely survey a wide area and detect small ships. Optical images, however, can produce false alarms due to noise on the sea surface, such as waves, or due to features with ship-like brightness, such as clouds and wakes, so removing these factors is important for improving the accuracy of ship detection. In this study, false alarms were reduced and the accuracy of ship detection was improved by removing wakes. Ship detection was performed using machine-learning-based random forest (RF) and convolutional neural network (CNN) techniques, which have recently been widely used in object detection, and the detection results of the two models were compared and analyzed. In addition, the results of RF and CNN were combined to mitigate the fragmentation of detected ships and the under-detection of small ships. The ship detection results of this study are significant in that they address the limitations of each model while maintaining accuracy. If satellite images with improved spatial resolution become available in the future, simultaneous ship and wake detection with higher accuracy is expected.
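
The abstract does not state how the RF and CNN results were combined; one plausible reading, sketched below with illustrative data, is a per-pixel union of the two binary detection masks, which would repair fragmented hulls from one model and recover small ships missed by the other.

```python
import numpy as np

def fuse_detections(rf_mask: np.ndarray, cnn_mask: np.ndarray) -> np.ndarray:
    """Fuse per-pixel ship masks from two detectors.

    A union keeps ships that either model finds, which mitigates the
    fragmented-hull and missed-small-ship problems described in the
    abstract. The union rule itself is an assumption; the paper only
    states that the RF and CNN results were combined.
    """
    return np.logical_or(rf_mask.astype(bool), cnn_mask.astype(bool))

# Toy 1-D strip of pixels: RF fragments the hull, CNN misses the small ship.
rf  = np.array([1, 0, 1, 1, 0, 0, 0])   # hull broken at index 1
cnn = np.array([1, 1, 1, 0, 0, 1, 0])   # small ship detected at index 5
print(fuse_detections(rf, cnn).astype(int))  # [1 1 1 1 0 1 0]
```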

Analyses of the Efficiency in Hospital Management (병원 단위비용 결정요인에 관한 연구)

  • Ro, Kong-Kyun;Lee, Seon
    • Korea Journal of Hospital Management
    • /
    • v.9 no.1
    • /
    • pp.66-94
    • /
    • 2004
  • The objective of this study is to examine how to maximize the efficiency of hospital management by minimizing the unit cost of hospital operation. For this purpose, this paper develops a model of profit maximization based on the cost-minimization dictum, using maximum likelihood estimation as the statistical tool. The preliminary survey data are collected from the annual statistics and analyses published by the Korea Health Industry Development Institute and the Korean Hospital Association. The maximum likelihood analyses are conducted on the cost (function) information of each of 36 hospitals selected by stratified random sampling according to the size and location (urban or rural) of hospitals. We believe that, although the sample is relatively small, the sampling method used and the high response rate make the estimation power of the statistical analyses of the sample hospitals acceptable. The conceptual framework of the analyses is adopted from the various models of hospital cost determinants used in previous studies. Within this framework, the study postulates that the unit cost of hospital operation is determined by size, scope of service, technology (production function) as measured by capacity utilization, labor-capital ratio and labor input-mix variables, and by exogenous variables. The variables representing these cost determinants are selected by step-wise regression, so that only statistically significant variables are used in analyzing how they affect the hospital unit cost. The results show that the adopted models of hospital cost determinants are well chosen: the models analyzed all have overall determination coefficients (R2) that turned out to be significant, regardless of the variables chosen to represent the cost determinants. Specifically, size and scope of service, no matter how measured (number of admissions per bed, number of ambulatory visits per bed, adjusted inpatient days, or adjusted outpatients), have the overall effect of reducing hospital unit costs as measured by cost per admission, per inpatient day, or per office visit, implying economies of scale in hospital operation. The technology used in operating a hospital turned out to affect the hospital unit cost much as postulated in the static theory of the firm. For example, capacity utilization, as represented by inpatient days per employee, turned out to have a statistically significant negative impact on the unit cost of hospital operation, while payroll expenses per inpatient had a positive effect. The input mix of hospital operation, as represented by the ratio of doctors, nurses, or medical staff to general employees, supports the known thesis that specialized manpower costs more than general employees. The labor/capital ratio, as represented by employees per 100 beds, is shown to have a positive effect on cost, as expected. As for the impact of the exogenous variable, when it is represented by the percentage of urban population at the location of the hospital, the regression analysis shows that hospitals located in urban areas have higher costs than those in rural areas.
Finally, the case study of the sample hospitals offers specific information to hospital administrators about where they stand in terms of the costs they are incurring compared with other hospitals. For example, if a hospital is small and located in a city, its administrator can compare its various operating costs with those of other similar hospitals, and may thereby identify which cost determinants make the hospital's operation more or less costly than that of comparable hospitals.
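
The step-wise screening used to pick statistically significant cost determinants can be illustrated with a generic forward-selection loop. The variable names, data, and the 5% entry threshold below are illustrative assumptions, not the study's actual sample.

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(y, X, names, alpha=0.05):
    """Forward step-wise selection: at each round, add the regressor with
    the smallest p-value; stop when no candidate is significant at alpha."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for j in remaining:
            cols = selected + [j]
            model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals[j] = model.pvalues[-1]     # p-value of the candidate (last column)
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return [names[j] for j in selected]

# Hypothetical determinants of cost per inpatient day (n = 36 hospitals).
rng = np.random.default_rng(0)
X = rng.normal(size=(36, 4))
y = 100 - 5 * X[:, 0] + 3 * X[:, 2] + rng.normal(scale=2, size=36)
print(forward_stepwise(y, X, ["admissions_per_bed", "labor_capital_ratio",
                              "payroll_per_inpatient", "urban_pct"]))
```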


Understanding the Mismatch between ERP and Organizational Information Needs and Its Responses: A Study based on Organizational Memory Theory (조직의 정보 니즈와 ERP 기능과의 불일치 및 그 대응책에 대한 이해: 조직 메모리 이론을 바탕으로)

  • Jeong, Seung-Ryul;Bae, Uk-Ho
    • Asia pacific journal of information systems
    • /
    • v.22 no.2
    • /
    • pp.21-38
    • /
    • 2012
  • Until recently, successful implementation of ERP systems has been a popular topic among ERP researchers, who have attempted to identify its various contributing factors. None of these efforts, however, explicitly recognize the need to identify disparities that can exist between organizational information requirements and ERP systems. Since ERP systems are in fact "packages" (software programs developed by independent software vendors for sale to the organizations that use them), they are designed to meet the general needs of numerous organizations, rather than the unique needs of a particular organization, as is the case with custom-developed software. By adopting standard packages, organizations can substantially reduce many of the potential implementation risks commonly associated with custom-developed software. However, it is also true that the nature of the package itself can be a risk factor, as the features and functions of an ERP system may not completely match a particular organization's informational requirements. In this study, based on the organizational memory mismatch perspective derived from organizational memory theory and cognitive dissonance theory, we define the nature of these disparities, which we call "mismatches," and propose that the mismatch between organizational information requirements and ERP systems is one of the primary determinants of successful ERP implementation. Furthermore, we suggest that customization efforts, as a coping strategy for mismatches, can play a significant role in increasing the possibility of success. To examine this contention, we employed a survey-based field study of ERP project team members, resulting in a total of 77 responses. The results show that, as anticipated from the organizational memory mismatch perspective, the mismatch between organizational information requirements and ERP systems has a significantly negative impact on the implementation success of ERP systems. This finding confirms our hypothesis that the more mismatch there is, the more difficult successful ERP implementation becomes, and thus the more attention must be drawn to mismatch as a major source of failure in ERP implementation. This study also found that the effects of customization as a coping strategy for mismatch are significant. In other words, utilizing an appropriate customization method can lead to successful implementation of ERP systems. This is somewhat interesting because it runs counter to the argument of some of the literature and of ERP vendors that minimized customization (or even the lack thereof) is required for successful ERP implementation. In many ERP projects, there is a tendency among ERP developers to adopt default ERP functions without any customization, adhering to the slogan of "the introduction of best practices." However, this study asserts that we cannot expect successful implementation if we do not attempt to customize ERP systems when mismatches exist. For a more detailed analysis, we identified three types of mismatches: Non-ERP, Non-Procedure, and Hybrid. Among these, only Non-ERP mismatches (situations in which ERP systems cannot support existing information needs that are currently fulfilled) were found to have a direct influence on the implementation of ERP systems. Neither Non-Procedure nor Hybrid mismatches were found to have a significant impact in the ERP context.
These findings provide meaningful insights, since they can serve as the basis for discussing how the ERP implementation process should be defined and what activities should be included in it. They show that ERP developers may not want to include organizational (or business process) changes in the implementation process, suggesting that doing so could lead to failed implementation. In fact, this suggestion eventually turned out to be true when we found that the application of process customization led to higher possibilities of failure. From these discussions, we are convinced that Non-ERP is the only type of mismatch we need to focus on during the implementation process, implying that organizational changes must be made before, rather than during, the implementation process. Finally, this study found that among the various customization approaches, bolt-on development methods in particular seemed to have significantly positive effects. Interestingly, this finding again does not align with the views of vendors in the ERP industry, whose recommendation is to apply as many best practices as possible, thereby minimizing customization, and to rely on bolt-on development methods. They particularly advise against changing the source code and instead recommend, when necessary, programming additional software code using the vendor's computer language. As previously stated, however, our study found active customization, especially bolt-on development methods, to have positive effects on ERP, and found source code changes in particular to have the most significant effects. Moreover, our study found programming additional software to be ineffective, suggesting that ERP developers and vendors differ considerably in their viewpoints and strategies toward ERP customization. In summary, mismatches are inherent in the ERP implementation context and play an important role in determining its success. Considering the significance of mismatches, this study proposes a new model for successful ERP implementation, developed from the organizational memory mismatch perspective, and provides many insights by empirically confirming the model's usefulness.


Optimum Design of Soil Nailing Excavation Wall System Using Genetic Algorithm and Neural Network Theory (유전자 알고리즘 및 인공신경망 이론을 이용한 쏘일네일링 굴착벽체 시스템의 최적설계)

  • 김홍택;황정순;박성원;유한규
    • Journal of the Korean Geotechnical Society
    • /
    • v.15 no.4
    • /
    • pp.113-132
    • /
    • 1999
  • Recently in Korea, application of soil nailing has gradually been extended to excavation and slope sites with various ground conditions and field characteristics. Design of soil nailing is generally carried out in two steps. The first step is to examine the minimum safety factor against sliding of the reinforced nailed-soil mass based on the limit equilibrium approach, and the second step is to check the maximum displacement expected to occur at the facing using a numerical analysis technique. However, the design parameters of a soil nailing system are so varied that a reliable design method considering their interrelationships remains necessary. Additionally, taking into account the anisotropic characteristics of in-situ grounds, disturbances in collecting soil samples, and errors in measurements, a systematic analysis of field measurement data, as well as a rational optimum-design technique, is required to improve economic efficiency. To these ends, the present study proposes a procedure for the optimum design of a soil nailing excavation wall system. Focusing on minimization of construction cost, the optimum design procedure is formulated on the basis of a genetic algorithm. Neural network theory is further adopted to predict the maximum horizontal displacement at the shotcrete facing. Using the proposed procedure, the effects of the relevant design parameters are also analyzed. Finally, an optimized design section is compared with the existing design section at an excavation site under construction, in order to verify the validity of the proposed procedure.
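
A minimal sketch of the paper's overall scheme follows: a genetic algorithm searches the design space for minimum construction cost, while a stand-in for the trained neural network predicts the facing displacement and penalizes infeasible designs. The cost model, displacement model, bounds, and GA settings are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def construction_cost(x):
    # Hypothetical cost model: longer nails and tighter spacing cost more.
    length, spacing = x
    return 120.0 * length / spacing

def predicted_displacement(x):
    # Stand-in for the paper's trained neural network mapping a design to
    # the maximum horizontal displacement (mm) at the shotcrete facing.
    length, spacing = x
    return 40.0 - 2.5 * length + 6.0 * spacing

def fitness(x, limit_mm=25.0):
    # Penalize designs whose predicted facing displacement exceeds the limit.
    violation = max(0.0, predicted_displacement(x) - limit_mm)
    return construction_cost(x) + 1e3 * violation

def genetic_minimize(bounds, pop=40, gens=60):
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        f = np.array([fitness(x) for x in P])
        parents = P[np.argsort(f)[: pop // 2]]                     # truncation selection
        n_kids = pop - len(parents)
        moms = parents[rng.integers(0, len(parents), n_kids)]
        dads = parents[rng.integers(0, len(parents), n_kids)]
        kids = np.where(rng.random(moms.shape) < 0.5, moms, dads)  # uniform crossover
        kids += rng.normal(0.0, 0.05 * (hi - lo), kids.shape)      # Gaussian mutation
        P = np.clip(np.vstack([parents, kids]), lo, hi)
    return min(P, key=fitness)

best = genetic_minimize([(4.0, 12.0), (1.0, 2.5)])  # nail length (m), spacing (m)
print(f"length={best[0]:.2f} m, spacing={best[1]:.2f} m, "
      f"cost={construction_cost(best):.1f}, disp={predicted_displacement(best):.1f} mm")
```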


Changes and Improvements of the Standardized Eddy Covariance Data Processing in KoFlux (표준화된 KoFlux 에디 공분산 자료 처리 방법의 변화와 개선)

  • Kang, Minseok;Kim, Joon;Lee, Seung-Hoon;Kim, Jongho;Chun, Jung-Hwa;Cho, Sungsik
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.20 no.1
    • /
    • pp.5-17
    • /
    • 2018
  • The standardized eddy covariance flux data processing in KoFlux has been updated, and its database has been amended accordingly. KoFlux data users have not been informed properly regarding these changes and the likely impacts on their analyses. In this paper, we have documented how the current structure of data processing in KoFlux has been established through the changes and improvements to ensure transparency, reliability and usability of the KoFlux database. Due to increasing diversity and complexity of flux site instrumentation and organization, we have re-implemented the previously ignored or simplified procedures in data processing (e.g., frequency response correction, stationarity test), and added new methods for CH4 flux gap-filling and CO2 flux correction and partitioning. To evaluate the effects of the changes, we processed the data measured at a flat and homogeneous paddy field (i.e., HPK) and a deciduous forest in complex and heterogeneous topography (i.e., GDK), and quantified the differences. Based on the results from our overall assessment, it is confirmed that (1) the frequency response correction (HPK: 11~18% of biases for annually integrated values, GDK: 6~10%) and the stationarity test (HPK: 4~19% of biases for annually integrated values, GDK: 9~23%) are important for quality control and (2) the minimization of the missing data and the choice of the appropriate driver (rather than the choice of the gap-filling method) are important to reduce the uncertainty in gap-filled fluxes. These results suggest the future directions for the data processing technology development to ensure the continuity of the long-term KoFlux database.
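
As an illustration of one of the re-implemented procedures, the sketch below applies a steady-state (stationarity) test of the Foken and Wichura type. The subinterval count and the 30% threshold follow common flux-community practice; the exact KoFlux settings are not stated in the abstract, and the data are synthetic.

```python
import numpy as np

def stationarity_test(w, c, n_sub=6, threshold=0.3):
    """Compare the mean of subinterval covariances with the whole-interval
    covariance; a relative difference above `threshold` flags the record
    as non-stationary."""
    cov_whole = np.cov(w, c)[0, 1]
    pieces = np.array_split(np.column_stack([w, c]), n_sub)
    cov_sub = np.mean([np.cov(p[:, 0], p[:, 1])[0, 1] for p in pieces])
    rn = abs((cov_sub - cov_whole) / cov_whole)
    return rn, rn <= threshold

# Synthetic 30-min record at 10 Hz: vertical wind w and a scalar c.
rng = np.random.default_rng(1)
w = rng.normal(size=18000)
c = 0.4 * w + rng.normal(size=18000)
rn, ok = stationarity_test(w, c)
print(f"relative non-stationarity = {rn:.2f}, passes = {ok}")
```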

Anticoagulation Management after Mitral Valve Replacement with the St. Jude Medical Prosthesis (승모판치환 환자의 항응혈제 치료)

  • 김종환;김영태
    • Journal of Chest Surgery
    • /
    • v.31 no.12
    • /
    • pp.1172-1182
    • /
    • 1998
  • Background: The primary goal of anticoagulation treatment in patients with a mechanical heart valve is the effective prevention of thromboembolism together with safe avoidance of bleeding. Material and Method: Two hundred and nine patients with the St. Jude Medical prosthesis, operated on between 1984 and 1995 for mitral (MVR, 122), aortic (AVR, 39), and combined mitral and aortic valve replacement (DVR, 48), were studied with respect to the levels of anticoagulation actually achieved and the clinical outcomes. Patients were on Coumadin and followed up by monthly visits to the outpatient clinic for examination and prothrombin time measurement, to adjust the International Normalized Ratio (INR) to within the low-intensity target range of 1.5 to 2.5. Result: The total anticoagulation follow-up period was 1,082.0 patient-years (mean 62.1 months), and INRs from 10,205 measurements were available for evaluation. The achieved INRs did not differ significantly among the replacement groups, and only 65% of INRs were within the target range. In individual patients, only 37% of patients had INRs within the target range in more than 70% of tests during the follow-up period. The INR levels in patients with atrial fibrillation, found in 57% of patients, were definitely higher than those measured in patients with regular rhythm (p<0.001). Thromboembolism was experienced by 15 patients, with an incidence of 1.265%/patient-year (MVR 1.412%, AVR 0.462%, and DVR 1.531%/patient-year), and major bleeding by 4 patients, with an incidence of 0.337%/patient-year (MVR 0.424%, AVR none, and DVR 0.383%/patient-year). Frequent and prolonged missed prothrombin time tests were the main risk factor strongly associated with thromboembolic complications (odds ratio 1.99). A proportion of in-range INRs below 60% in an individual patient was a highly significant risk factor for both thromboembolic and overall embolic and bleeding complications (p<0.004 and p<0.002, respectively). Conclusion: The low-intensity therapeutic target range of INRs was adequate for patients with AVR and in sinus rhythm. However, patients with replacement of the mitral valve were more likely to require a higher INR target range, especially in the presence of atrial fibrillation, to achieve levels of anticoagulation sufficient to prevent thromboembolic complications effectively. For the higher therapeutic target range of INR between 2.0~3.0, further accumulation of clinical evidence is required. It is highly desirable to improve patient compliance, through continuous instruction on outpatient clinic visits and on taking daily Coumadin without omission, and to keep INRs consistently within the optimal range under tight control, so as to minimize the chances and periods of exposure to the risk of complications. In particular, patients at high risk of complications and with widely fluctuating INRs should be managed with more frequent monitoring of anticoagulation levels.
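
The per-patient "proportion of INRs within target range" that drives the risk analysis is straightforward to compute. A minimal sketch follows, using hypothetical INR values and the study's 1.5-2.5 low-intensity target and 60% cutoff.

```python
import numpy as np

def inr_in_range_fraction(inrs, low=1.5, high=2.5):
    """Fraction of a patient's INR measurements inside the target range."""
    inrs = np.asarray(inrs, dtype=float)
    return np.mean((inrs >= low) & (inrs <= high))

# Hypothetical follow-up INRs for one patient (not the study's data).
inrs = [1.4, 1.8, 2.1, 2.6, 1.9, 2.3, 3.1, 2.0, 1.7, 1.6]
frac = inr_in_range_fraction(inrs)
print(f"{frac:.0%} in range -> {'high risk' if frac < 0.60 else 'acceptable'}")
```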


A Study on Relationship between Physical Elements and Tennis/Golf Elbow

  • Choi, Jungmin;Park, Jungwoo;Kim, Hyunseung
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.3
    • /
    • pp.183-196
    • /
    • 2017
  • Objective: The purpose of this research was to assess the agreement between job physical risk factor analysis by ergonomists using ergonomic methods and physical examinations made by occupational physicians on the presence of musculoskeletal disorders of the upper extremities. Background: Ergonomics is the systematic application of principles concerned with the design of devices and working conditions for enhancing human capabilities and optimizing working and living conditions. Proper ergonomic design is necessary to prevent injuries and physical and emotional stress. The major types of ergonomic injuries and incidents are cumulative trauma disorders (CTDs), acute strains, sprains, and system failures. Minimizing the use of excessive force and awkward postures can help to prevent such injuries. Method: Initial data were collected as part of a larger study by the University of Utah Ergonomics and Safety program field data collection teams and medical data collection teams from the Rocky Mountain Center for Occupational and Environmental Health (RMCOEH). Subjects included 173 male and female workers: 83 at Beehive Clothing (a clothing plant), 74 at Autoliv (a plant making air bags for vehicles), and 16 at Deseret Meat (a meat-processing plant). Posture and effort levels were analyzed using a software program developed at the University of Utah (Utah Ergonomic Analysis Tool). The Ergonomic Epicondylitis Model (EEM) was developed to assess the risk of epicondylitis from observable job physical factors. The model considers five job risk factors: (1) intensity of exertion, (2) forearm rotation, (3) wrist posture, (4) elbow compression, and (5) speed of work. Qualitative ratings of these physical factors were determined during video analysis. Personal variables were also investigated to study their relationship with epicondylitis. Logistic regression models were used to determine the association between risk factors and symptoms of epicondyle pain. Results: The results of this study indicate that gender, smoking status, and BMI do have an effect on the risk of epicondylitis, but there is not a statistically significant relationship between the EEM and epicondylitis. Conclusion: This research studied the relationship between the Ergonomic Epicondylitis Model (EEM) and the occurrence of epicondylitis. The model was not predictive for epicondylitis. However, it is clear that epicondylitis was associated with some individual risk factors, such as smoking status, gender, and BMI. Based on these results, future research may yet identify factors that increase the risk of epicondylitis. Application: Although this research used a combination of questionnaires, ergonomic job analysis, and medical job analysis to verify risk factors related to epicondylitis, there are limitations. The sample size was not very large, as only 173 subjects were available, and the study was conducted in only three facilities in Utah: a plant making air bags for vehicles, a meat-processing plant, and a clothing plant. If working conditions in other kinds of facilities are considered, results may improve. Therefore, future research should analyze additional subjects in different kinds of facilities. Repetition and duration of a task were not considered as risk factors in this research; these two factors could be associated with epicondylitis, so it could be important to include them in future research.
Psychosocial data and workplace conditions (e.g., low temperature) were also noted during data collection and could be used to further study the prevalence of epicondylitis. Univariate analysis methods could be used for each variable of the EEM. Because this research was performed using multivariate analysis, it was difficult to isolate the effect of each variable. Basically, the difference between univariate and multivariate analysis is that univariate analysis deals with one predictor variable at a time, whereas multivariate analysis deals with multiple predictor variables combined in a predetermined manner. A univariate analysis could show how each variable is associated with epicondyle pain. This may allow more appropriate weighting factors to be determined and therefore improve the performance of the EEM.
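
The study's association tests can be illustrated with a standard logistic regression. The data below are synthetic, and the predictor set (EEM score, gender, smoking status, BMI) merely mirrors the variables named in the abstract.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic subject-level data mirroring the abstract's predictors.
rng = np.random.default_rng(7)
n = 173  # same sample size as the study; the data themselves are made up
X = np.column_stack([
    rng.uniform(1, 5, n),        # EEM qualitative risk rating
    rng.integers(0, 2, n),       # gender (1 = male)
    rng.integers(0, 2, n),       # smoking status (1 = smoker)
    rng.normal(26, 4, n),        # BMI
])
# Generate pain driven by smoking and BMI only, echoing the reported finding
# that individual factors, not the EEM, were associated with epicondylitis.
logit_p = -6.0 + 0.9 * X[:, 2] + 0.18 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary(xname=["const", "EEM", "gender", "smoker", "BMI"]))
print("odds ratios:", np.exp(model.params))  # per-unit odds ratios
```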

Minimization of Small Bowel Volume within Treatment Fields Using Customized Small Bowel Displacement System(SBDS) (골반부 방사선 조사야 내의 소장 용적을 줄이기 위한 Small Bowel Displacement System(SBDS)의 사용)

  • Lim Do Hoon;Huh Seung Jae;Ahn Yong Chan;Kim Dae Yong;Wu Hong Gyun;Kim Moon Kyung;Choi Dong Rak;Shin Kyung Hwan
    • Radiation Oncology Journal
    • /
    • v.15 no.3
    • /
    • pp.263-268
    • /
    • 1997
  • Purpose: The authors designed a customized Small Bowel Displacement System (SBDS) to displace the small bowel from pelvic radiation fields and minimize treatment-related bowel morbidity. Materials and Methods: From August 1995 to March 1996, 55 consecutive patients who received pelvic radiation therapy with the SBDS were included in this study. The SBDS consists of a customized styrofoam compression device, which displaces the small bowel from the radiation fields, and an individualized abdominal immobilization board for easy daily setup in the prone position. After opacifying the small bowel with barium, the patients were laid prone, and posterior-anterior (PA) and lateral (LAT) simulation films were taken with and without the SBDS. The areas of the small bowel included in the radiation fields with and without the SBDS were compared. Results: Using the SBDS, the mean small bowel area was reduced by 59% on PA and 51% on LAT films (p=0.0001). In six patients (6/55, 11%), no small bowel at all was included within the treatment fields. The mean upward displacement of the most caudal small bowel was 4.8 cm with the SBDS. Only 15% (8/55) of patients treated with the SBDS manifested diarrhea requiring medication. Conclusion: The SBDS is a novel method that can effectively displace the small bowel away from the treatment portal and reduce radiation therapy morbidity. Compliance with setup is excellent when the SBDS is used.
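
The reported reductions are simple relative-area percentages. The tiny sketch below uses made-up film areas (the abstract gives only the resulting mean reductions) to reproduce the arithmetic.

```python
def area_reduction(area_without: float, area_with: float) -> float:
    """Percent reduction of the small-bowel area inside the field with the SBDS."""
    return 100.0 * (area_without - area_with) / area_without

# Illustrative film areas in cm^2, chosen to reproduce the abstract's
# reported means of 59% (PA) and 51% (LAT); not the study's raw data.
print(f"PA:  {area_reduction(120.0, 49.2):.0f}% reduction")
print(f"LAT: {area_reduction(95.0, 46.6):.0f}% reduction")
```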


Activation Analysis of Dual-purpose Metal Cask After the End of Design Lifetime for Decommission (설계수명 이후 해체를 위한 금속 겸용용기의 방사화 특성 평가)

  • Kim, Tae-Man;Ku, Ji-Young;Dho, Ho-Seog;Cho, Chun-Hyung;Ko, Jae-Hun
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.14 no.4
    • /
    • pp.343-356
    • /
    • 2016
  • The Korea Radioactive Waste Agency (KORAD) has developed a dual-purpose metal cask for the dry storage of spent nuclear fuel that has been generated by domestic light-water reactors. The metal cask was designed in compliance with international and domestic technology standards, and safety was the most important consideration in developing the design. It was designed to maintain its integrity for 50 years in terms of major safety factors. The metal cask ensures the minimization of waste generated by maintenance activities during the storage period as well as the safe management of the waste. An activation evaluation of the main body, which includes internal and external components of metal casks whose design lifetime has expired, provides quantitative data on their radioactive inventory. The radioactive inventory of the main body and the components of the metal cask were calculated by applying the MCNP5 ORIGEN-2 evaluation system and by considering each component's chemical composition, neutron flux distribution, and reaction rate, as well as the duration of neutron irradiation during the storage period. The evaluation results revealed that 10 years after the end of the cask's design life, Co-60 had greater radioactivity than other nuclides among the metal materials. In the case of the neutron shield, nuclides that emit high-energy gamma rays, such as Al-28 and Na-24, had greater radioactivity immediately after the design lifetime. However, their radioactivity level became negligible after six months due to their short half-lives. The surface exposure dose rates of the canister and the main body of the metal cask from which the spent nuclear fuel had been removed at expiration of the design lifetime were determined to be at very low levels, and the radiation exposure doses to which radiation workers were subjected during the decommissioning process appeared to be at insignificant levels. The evaluations of this study strongly suggest that the nuclide inventory of a spent nuclear fuel metal cask can be utilized as basic data when decommissioning of a metal cask is planned, for example, for the development of a decommissioning plan, the determination of a decommissioning method, the estimation of radiation exposure to workers engaged in decommissioning operations, and the management/reuse of radioactive wastes.
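
The time behavior the abstract describes follows directly from exponential decay. The small sketch below (the half-lives are standard physical constants; the activities are not the study's data) shows why Co-60 still dominates 10 years on while the neutron-shield nuclides vanish within months.

```python
HALF_LIVES_Y = {                        # standard physical half-lives, in years
    "Co-60": 5.27,
    "Na-24": 15.0 / (24 * 365.25),      # about 15 hours
    "Al-28": 2.24 / (60 * 24 * 365.25), # about 2.24 minutes
}

def remaining_fraction(nuclide: str, years: float) -> float:
    """Fraction of the initial activity left after `years` of decay."""
    return 2.0 ** (-years / HALF_LIVES_Y[nuclide])

# Co-60 still retains a large share of its activity 10 years on...
print(f"Co-60 after 10 y:  {remaining_fraction('Co-60', 10):.3f}")   # ~0.27
# ...while the short-lived activation products are gone within months.
print(f"Na-24 after 0.5 y: {remaining_fraction('Na-24', 0.5):.1e}")
print(f"Al-28 after 0.5 y: {remaining_fraction('Al-28', 0.5):.1e}")  # underflows to 0
```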

Suggestion for Technology Development and Commercialization Strategy of CO2 Capture and Storage in Korea (한국 이산화탄소 포집 및 저장 기술개발 및 상용화 추진 전략 제안)

  • Kwon, Yi Kyun;Shinn, Young Jae
    • Economic and Environmental Geology
    • /
    • v.51 no.4
    • /
    • pp.381-392
    • /
    • 2018
  • This study examines strategies and implementation plans for commercializing CO2 capture and storage, an effective method for achieving the national goal of reducing greenhouse gas emissions. In order to secure a cost-efficient business model for CO2 capture and storage, we propose four key strategies: 1) urgently selecting a large-scale storage site and estimating realistic storage capacity, 2) minimizing the source-to-sink distance, 3) achieving cost-effectiveness through technology innovation, and 4) implementing policies that secure the public interest and encourage private-sector participation. Based on these strategies, the implementation plans must be designed to enable CO2 capture and storage to be commercialized by 2030. It is desirable to make plans in which the large-scale demonstration and subsequent commercial projects share a single storage site. In addition, the plans must deliver step-wise targets and assessment processes for deciding whether the project moves to the next stage. The main target of stage 1 (2019~2021) is that the large-scale storage site will be selected and post-combustion capture technology will be upgraded and commercialized. The site selection, which is a prerequisite for moving to the next stage, will be made through exploratory drilling and investigation of candidate sites. The commercial-scale applicability of the capture technology must be ensured at this stage. Stage 2 (2022~2025) aims at the design and construction of facilities and infrastructure for a successful large-scale demonstration (one million tons of CO2 per year), i.e., large-scale CO2 capture, transportation, and storage. Based on the achievement of the demonstration project and the maturity of the carbon market at the end of stage 2, it is necessary to decide whether to proceed to commercialization of CO2 capture and storage. If the commercialization project goes ahead, it will be possible to capture and store 4 million tons of CO2 per year by the private sector in stage 3 (2026~2030). The existing facilities, infrastructure, and capture plant will be upgraded and supplemented, which will allow the commercialization project to be cost-effective.