Title/Summary/Keyword: System verification

Search results: 4,661

The study of heavy rain warning in Gangwon State using threshold rainfall (침수유발 강우량을 이용한 강원특별자치도 호우특보 기준에 관한 연구)

  • Lee, Hyeonji;Kang, Dongho;Lee, Iksang;Kim, Byungsik
    • Journal of Korea Water Resources Association / v.56 no.11 / pp.751-764 / 2023
  • Gangwon State, centered on the Taebaek Mountains, has climate characteristics that vary greatly by region, and localized heavy rainfall occurs frequently. Heavy rain disasters have short durations and high spatial and temporal variability, causing many casualties and extensive property damage. Over the last 10 years (2012~2021), Gangwon State experienced 28 heavy rain disasters, with an average damage cost of 45.6 billion won. To reduce heavy rain disasters, a disaster management plan must be established at the local level. In particular, the current criteria for heavy rain warnings are uniform and do not consider local characteristics. Therefore, this study proposes heavy rain warning criteria that consider the threshold (flood-inducing) rainfall for the advisory areas in Gangwon State. In analyzing representative values of threshold rainfall by advisory area, the mean was similar to the current criteria for issuing a heavy rain warning and was therefore selected as the warning criterion in this study. To review the proposed criteria, the rainfall events of Typhoon Mitag in 2019, Typhoons Maysak and Haishen in 2020, and Typhoon Khanun in 2023 were applied; Hit Rate accuracy verification showed that the proposed criteria reflect the actual warnings well, with 72% in the Gangneung Plain and 98% in Wonju. The warning criteria in this study correspond to the crisis warning stages (Attention, Caution, Alert, and Danger), which should enable preemptive heavy rain disaster response. The results are expected to complement the uniform decision-making system for responding to heavy rain disasters and can serve as a basis for heavy rain warnings that consider regional disaster risk.
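
The Hit Rate verification mentioned above can be sketched as follows. The rainfall series, the threshold value, and the observed-warning flags below are illustrative stand-ins, not the study's data.

```python
# Hypothetical sketch: Hit Rate verification of heavy-rain warning criteria.
# A "hit" is an event where the threshold-based criterion flagged a warning
# and an actual warning was observed; a "miss" is an observed warning that
# the criterion did not flag.

def hit_rate(predicted, observed):
    """Fraction of observed warning events that the criterion also flagged."""
    hits = sum(1 for p, o in zip(predicted, observed) if p and o)
    misses = sum(1 for p, o in zip(predicted, observed) if not p and o)
    return hits / (hits + misses) if (hits + misses) else 0.0

# Hourly rainfall (mm) for a hypothetical event, with an assumed
# threshold-rainfall criterion of 60 mm per hour for this advisory area.
rain = [12, 35, 61, 80, 74, 40, 10]
threshold = 60
predicted = [r >= threshold for r in rain]
observed = [False, False, True, True, True, True, False]  # actual warnings

print(round(hit_rate(predicted, observed), 2))  # 0.75
```

A higher threshold trades misses for false alarms, which is why the study tunes the criterion per advisory area rather than using one uniform value.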

Case study on flood water level prediction accuracy of LSTM model according to condition of reference hydrological station combination (참조 수문관측소 구성 조건에 따른 LSTM 모형 홍수위예측 정확도 검토 사례 연구)

  • Lee, Seungho;Kim, Sooyoung;Jung, Jaewon;Yoon, Kwang Seok
    • Journal of Korea Water Resources Association / v.56 no.12 / pp.981-992 / 2023
  • Due to recent global climate change, rainfall has become more concentrated and intense, and the scale of flood damage is increasing. Rainfall on a scale never observed before may fall, and record-breaking long rainy seasons may occur. Such damage is concentrated in ASEAN countries, where floods caused by typhoons and torrential rains occur frequently and affect many people. In particular, the Bandung region, located in the Upper Citarum River basin in Indonesia, is topographically a basin and therefore highly vulnerable to flooding. Accordingly, through Official Development Assistance (ODA), a flood forecasting and warning system was established for the Upper Citarum River basin in 2017 and is currently in operation. Nevertheless, the basin remains exposed to the risk of human and property damage in the event of a flood, so continuous efforts to reduce damage through fast and accurate flood forecasting are needed. In this study, an artificial intelligence-based river flood water level forecasting model for the Dayeu Kolot target station was therefore developed using 10-minute hydrological data from 4 rainfall stations and 1 water level station. Using 10-minute hydrological observation data from 6 stations from January 2017 to January 2021, training, validation, and testing were performed for lead times of 0.5, 1, 2, 3, 4, 5, and 6 hours, with LSTM applied as the artificial intelligence algorithm. The model showed good fit and low error for all lead times, and a review of prediction accuracy under different training dataset conditions showed that accuracy similar to that obtained using all observation stations can be secured even with few reference stations, suggesting the approach can be used to build efficient artificial intelligence-based models.
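
As a rough illustration of the data shaping behind such a model, the sketch below builds LSTM-style training samples (input window plus lead time, counted in 10-minute steps) from a synthetic multi-station series. The window length, feature layout, and random data are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

# Hypothetical sketch: shaping multi-station 10-minute hydrological data into
# supervised samples for an LSTM, one dataset per lead time. The target is
# the water level (column 0); the other columns stand in for rainfall stations.

def make_sequences(data, window, lead_steps):
    """data: (T, n_features) array. Returns X of shape
    (samples, window, n_features) and y of shape (samples,)."""
    X, y = [], []
    for t in range(len(data) - window - lead_steps + 1):
        X.append(data[t:t + window])
        y.append(data[t + window + lead_steps - 1, 0])
    return np.array(X), np.array(y)

# 5 features: 1 water-level + 4 rainfall stations, at 10-minute intervals.
T, n_feat = 200, 5
rng = np.random.default_rng(0)
series = rng.random((T, n_feat))

# A 1-hour lead time is 6 ten-minute steps; an assumed 2-hour input window is 12.
X, y = make_sequences(series, window=12, lead_steps=6)
print(X.shape, y.shape)  # (183, 12, 5) (183,)
```

An LSTM layer (e.g. in Keras or PyTorch) would then be trained on `X` and `y`, repeating the shaping step for each of the 0.5- to 6-hour lead times.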

A Study on the Discourse Regarding the Lineage Transmission to Haewol in the Eastern Learning: Focused on Document Verification (해월의 동학 도통전수 담론 연구 - 문헌 고증을 중심으로 -)

  • Park Sang-kyu
    • Journal of the Daesoon Academy of Sciences / v.48 / pp.41-155 / 2024
  • Among the records that attest to the period from July to August of 1863, when Suwun is believed to have transmitted the orthodox lineage to Haewol, the oldest documents are The Collection of Suwun's Literary Works (水雲文集), The Collection of Great Master Lord's Literary Works (大先生主文集), and The Records of Dao Origin of Master Choe's Literary Collection (崔先生文集道源記書, hereafter referred to as The Records of Dao Origin). The records regarding Suwun in these three documents are considered to have originated from the same context. The variances embedded in the three documents have led to arguments about which documents accurately reflect the fact of orthodox lineage transmission. Additionally, these variances highlight the necessity of a review of the characteristics of early Eastern Learning, such as its faith and organizational systems. Accordingly, by thoroughly examining these three documents, it is possible to elucidate the chronological order, dates of establishment, accuracy, descriptive direction, and characteristics of the faith system of early Eastern Learning as reflected in each document. If successful, this examination provides a clearer description of the developmental process of Eastern Learning from 1860 to 1880, facilitating a more in-depth analysis of the significance embedded in the various forms of discourse on the movement's orthodox lineage transmission. Comparing the three documents and contrasting them with related sources, the textual examination suggests that the documents in the lineage of The Collection of Suwun's Literary Works, since they lack a clear record of Haewol's orthodox lineage succession, may be the first draft of The Collection of Great Master Lord's Literary Works and The Records of Dao Origin, which distinctly include that record. This reflects that Haewol's succession was not precisely recognized within or outside the Eastern Learning order until The Collection of Great Master Lord's Literary Works and The Records of Dao Origin were published. This is further attested by the fact that during the late 1870s, when the various Yeonwon (fountainhead) factions of Eastern Learning began to converge around Haewol and his Yeonwon became the largest organization within Eastern Learning, the order's doctrine was reinterpreted and its organization reestablished. In this regard, it is necessary to view Eastern Learning after Suwun, especially the orthodox lineage transmission to Haewol, from a perspective that treats it more as competing forms of discourse than as historical fact. This view enables a new perspective on Haewol's Eastern Learning, which forms a layer distinct from Suwun's, shedding light on the relationship between Haewol and the new religious movements of modern-day Korea.

A Study on the Availability of the On-Board Imager(OBI) and Cone-Beam CT(CBCT) in the Verification of Patient Set-up (온보드 영상장치(On-Board Imager) 및 콘빔CT(CBCT)를 이용한 환자 자세 검증의 유용성에 대한 연구)

  • Bak, Jino;Park, Sung-Ho;Park, Suk-Won
    • Radiation Oncology Journal / v.26 no.2 / pp.118-125 / 2008
  • Purpose: For on-line image guided radiation therapy (on-line IGRT), kV X-ray images and cone-beam CT images were obtained with an on-board imager (OBI) and cone-beam CT (CBCT), respectively. The images were then compared with simulated images to evaluate the patient's setup and correct deviations. The setup deviations between the acquired images (kV or CBCT) and the simulated images were computed with 2D/2D or 3D/3D match programs, respectively, and we investigated the correctness of the calculated deviations. Materials and Methods: After simulation and treatment planning for the RANDO phantom, the phantom was positioned on the treatment table. The phantom setup was performed with side-wall lasers, which standardized the treatment setup of the phantom against the simulated images, after establishing tolerance limits for the laser line thickness. After a known translation or rotation was applied to the phantom, kV X-ray images and CBCT images were obtained, and 2D/2D and 3D/3D matches with the simulation CT images were performed. Lastly, the results were analyzed for the accuracy of positional correction. Results: For the 2D/2D match using kV X-ray and simulation images, setup correction was possible within 0.06° for rotation only, 1.8 mm for translation only, and 2.1 mm and 0.3° for combined rotation and translation. For the 3D/3D match using CBCT images, correction was possible within 0.03° for rotation only, 0.16 mm for translation only, and 1.5 mm (translation) and 0.0° (rotation) for the combined case. Conclusion: The use of OBI or CBCT for on-line IGRT makes it possible to exactly reproduce the simulated setup of a patient in the treatment room. Fast detection and correction of a patient's positional error is possible in two dimensions via kV X-ray images from the OBI, and in three dimensions with higher accuracy via CBCT. Consequently, on-line IGRT represents a promising and reliable treatment procedure.
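
The accuracy check described above (apply a known shift to the phantom, then compare it with the deviation the match program reports) can be sketched minimally. The shift values below are illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical sketch: the residual between the applied phantom shift and the
# deviation detected by the 2D/2D or 3D/3D match is the correction accuracy.

def residual_error(applied, detected):
    """applied/detected: per-axis values, e.g. (dx, dy, dz) in mm
    (the same idea applies to rotation angles in degrees)."""
    return np.abs(np.asarray(applied) - np.asarray(detected))

# Known phantom translation (mm) vs. deviation computed by the match program.
applied_shift = [5.0, -3.0, 2.0]
detected_shift = [4.8, -3.1, 3.5]   # illustrative values, not from the study
print(residual_error(applied_shift, detected_shift))  # [0.2 0.1 1.5]
```

The per-axis maxima of such residuals over repeated trials correspond to the "within X mm / X°" accuracy figures quoted in the Results.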

The Jurisdictional Precedent Analysis of Medical Dispute in Dental Field (치과임상영역에서 발생된 의료분쟁의 판례분석)

  • Kwon, Byung-Ki;Ahn, Hyoung-Joon;Kang, Jin-Kyu;Kim, Chong-Youl;Choi, Jong-Hoon
    • Journal of Oral Medicine and Pain / v.31 no.4 / pp.283-296 / 2006
  • Along with the development of scientific technology, health care has grown remarkably, and as quality of life improves and interest in health increases, the demand for medical services is rapidly increasing. However, medical accidents and medical disputes are also increasing rapidly due to various factors: a growing awareness of patients' rights, a lack of understanding of the nature of medical practice, excessive expectations of medical techniques, a commercialized medical supply system, moral laxity and unawareness of medical jurisprudence among doctors, a widespread trend of mutual distrust, and the lack of a systematized mechanism for resolving medical disputes. This study analyzed 30 civil suits between 1994 and 2004, selected from medical dispute cases in the dental field whose judgments were collected from organizations related to dentistry and the Department of Oral Medicine, Yonsei University Dental Hospital. The following results were drawn from the analyses: 1. The distribution by year showed a rapid increase in medical disputes after 2000. 2. Among the types of medical dispute, suits associated with tooth extraction accounted for 36.7% of the total. 3. As for the causes, discomfort and dissatisfaction with the treatment accounted for 36.7%, while death and permanent damage each accounted for 16.7%. 4. Suits won by the plaintiff, compulsory mediation, and recommendations for settlement together accounted for 60.0% of the judgments. 5. By type of medical organization, 60.0% of disputes involved private dental clinics and 30.0% university dental hospitals. 6. Disputes that proceeded to a second or third trial accounted for 30.0%. 7. For the amount claimed, claims between 50 million and 100 million won accounted for 36.7% and claims of more than 100 million won for 13.3%; for the amount awarded, judgments between 10 million and 30 million won accounted for 40.0% and those of more than 100 million won for 6.7%. 8. In 26.7% of suits, 2 or more dentists were involved. 9. As for the time to judgment, 46.7% of cases took 11 to 20 months and 36.7% took 21 to 30 months. 10. Medical malpractice was found in 46.7% of cases, and 70% of the cases underwent expert medical judgment or verification during the suit. 11. Among the cases lost by doctors (18 cases), 72.2% were due to breach of the duty of care in practice and 16.7% to failure to provide an explanation to the patient. Medical disputes in dentistry usually involve relatively low-risk procedures. Hence, the importance of explanation to the patient is emphasized, and since patient satisfaction is subjective, improving the dentist-patient relationship and restoring autonomy within the dental profession are essential, in addition to reducing technical malpractice. Moreover, countermeasures against medical disputes should be established by supplementing the current, irrationally operated medical malpractice insurance for doctors and hospitals, and by building a system in which education and consultation on medical disputes, led by dental clinicians and academic scholars, are accessible.

Analysis of the Range Verification of Proton using PET-CT (Off-line PET-CT를 이용한 양성자치료에서의 Range 검증)

  • Jang, Joon Young;Hong, Gun Chul;Park, Sey Joon;Park, Yong Chul;Choi, Byung Ki
    • The Journal of Korean Society for Radiation Therapy / v.29 no.2 / pp.101-108 / 2017
  • Purpose: The proton beam used in proton therapy delivers a small dose to the normal tissue in front of the tumor while forming a Bragg peak at the cancer site, where it deposits its maximum dose and then stops, so verifying the proton's stopping position is very important. In this study, we used the off-line PET-CT method to measure the distribution of positron emitters produced from nuclei such as 11C (half-life = 20 min), 15O (half-life = 2 min), and 13N, and the range and distal falloff point of the proton beam were verified from these measurements. Materials and Methods: Spheres of 37 mm, 28 mm, and 22 mm were inserted into the IEC 2001 body phantom, the phantom was filled with water, and a CT image was obtained for each sphere size. To verify the proton range and distal falloff points, the SOBP in the treatment planning system was set to 46 mm for the 37 mm sphere, 37 mm for the 28 mm sphere, and 33 mm for the 22 mm sphere. The proton beam was delivered at the same isocenter with a single beam at gantry 0°, using the scanning method, and the phantom was then scanned with PET-CT equipment. For PET-CT image acquisition, 50 images were acquired at one-minute intervals, four ROIs including the spheres in the phantom were set, and 10 images were reconstructed. The activity profile versus depth was compared with the dose profile established in the treatment plan for each sphere size. Results: The PET-CT activity profile, like the dose profile, decreased rapidly at the distal falloff position for the 37 mm, 28 mm, and 22 mm spheres. However, in the SOBP section, the region used to evaluate the range, the proximal part of the activity profile differed from the dose profile. Comparing the distal falloff position between the proton therapy plan and PET-CT, the maximum difference was 1.4 mm at the 50% point of the maximum dose for the 37 mm sphere, 1.1 mm at the 45% point for the 28 mm sphere, and 1.2 mm at the maximum point for the 22 mm sphere; all differences were less than 1.5 mm. Conclusion: To maximize the advantages of proton therapy, it is very important to verify the range of the proton beam. In this study, the proton range was confirmed through the SOBP and the distal falloff position of the proton beam using PET-CT. The difference in distal falloff position between the activity distribution measured by PET-CT and the proton therapy plan was at most 1.4 mm. This may serve as a reference for the dose margin applied in proton therapy planning.
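
The distal falloff localization used above can be sketched as finding the depth at which the depth-activity profile drops to 50% of its maximum beyond the peak. The profile below is synthetic, and the linear-interpolation scheme is an assumption about the analysis, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: locate the distal falloff position of a depth-activity
# profile as the depth where it crosses 50% of its maximum past the peak.

def distal_falloff(depth, activity, level=0.5):
    """Depth where `activity` first crosses level*max beyond its peak,
    using linear interpolation between samples. Returns None if no crossing."""
    a = np.asarray(activity, dtype=float)
    target = level * a.max()
    i_peak = int(a.argmax())
    for i in range(i_peak, len(a) - 1):
        if a[i] >= target > a[i + 1]:
            frac = (a[i] - target) / (a[i] - a[i + 1])
            return depth[i] + frac * (depth[i + 1] - depth[i])
    return None

depth_mm = np.arange(0, 60, 2)  # 2 mm sampling, illustrative
# Synthetic profile: slow proximal rise, then a sharp distal falloff.
act = np.concatenate([np.linspace(20, 100, 20), np.linspace(100, 0, 10)])
print(round(distal_falloff(depth_mm, act), 1))
```

Applying the same function to the planned dose profile and subtracting the two crossing depths gives the range difference quoted in the Results.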

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.117-127 / 2012
  • Due to the recent expansion of Web 2.0-based services and the spread of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand human relationships. In a social network service, each relation between users is represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNS data are actively utilized in enterprise marketing, the analysis of social phenomena, and so on. Social Network Analysis (SNA) is the systematic analysis of social relationships among the members of a social network using network theory. A social network is generally modeled with nodes and arcs and is often depicted in a social network diagram, where nodes represent individual actors within the network and arcs represent relationships between them. With SNA, we can measure relationships among people, such as degree of intimacy and intensity of connection, and classify groups. Ever since social networking services (SNS) drew the attention of millions of users, numerous studies have analyzed their user relationships and messages. Representative SNA methods include degree centrality, betweenness centrality, and closeness centrality. Degree centrality does not consider the shortest paths between nodes, but shortest paths are a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not a concern because the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to ever-growing SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; with 10,000 nodes that is 49,995,000 links, which makes the network very expensive to analyze. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest-path finding method, we show the efficiency of our approach by conducting betweenness centrality and closeness centrality analyses, both widely used in social network studies. We devised an enhanced method that adds best-first search and a preprocessing step to reduce computation time and rapidly find shortest paths in a very large online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Because a large number of links is concentrated on only a few nodes in online social networks, most nodes have relatively few connections, and a node with many connections functions as a hub. When searching for a particular node, following users with many links, rather than searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is a set of vertices and E is a set of links between distinct nodes. With this heuristic, the worst case occurs when the target node sits at the bottom of a skewed tree; the preprocessing step removes such target nodes. We then find the shortest path between two nodes efficiently and analyze the social network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network. We then compared our method with previous best-first-search and breadth-first-search approaches in terms of search and analysis time. The suggested method takes 240 seconds to search nodes, where the breadth-first-search based method takes 1,781 seconds, i.e. the suggested method is 7.4 times faster. For social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The method proposed in this paper shows the possibility of analyzing a large social network with better time performance; as a result, it should improve the efficiency of social network analysis, making it particularly useful for studying social trends and phenomena.
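
The degree-heuristic best-first search described above can be sketched as follows; the toy graph and node names are illustrative. Note that, being heuristic, the search returns a path quickly but does not guarantee it is the shortest one.

```python
import heapq
from collections import defaultdict

# Hypothetical sketch: best-first search over an SNS friendship graph, always
# expanding the highest-degree (hub) neighbor first, on the intuition that
# hubs reach the target sooner than an indiscriminate search.

def best_first_path(graph, start, goal):
    """graph: adjacency dict {node: set(neighbors)}. Returns a path or None."""
    # Priority = negative degree, so high-degree nodes pop first.
    frontier = [(-len(graph[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (-len(graph[nxt]), nxt, path + [nxt]))
    return None

g = defaultdict(set)
for a, b in [("A", "H"), ("H", "B"), ("H", "C"), ("H", "D"),
             ("B", "E"), ("D", "E")]:
    g[a].add(b); g[b].add(a)  # undirected links

print(best_first_path(g, "A", "E"))  # ['A', 'H', 'B', 'E']
```

Here "H" acts as the hub: the search routes through it immediately instead of exploring low-degree neighbors, which is the effect the paper exploits at SNS scale.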

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic, bottom-up market size estimation from individual companies' product information. The overall process is as follows: first, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and applied a vector dimension of 300 and a window size of 15 in the subsequent experiments. We employed the index words of the Korean Standard Industrial Classification (KSIC) as the product-name reference set to cluster product groups more efficiently, extracting the product names similar to KSIC index words based on cosine similarity. The market size of the extracted products, treated as one product category, was then calculated from individual companies' sales data. In this way, the market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; Pearson's correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors: it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs improvement in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec, and the product group clustering could be replaced with another type of unsupervised machine learning algorithm. Our group is currently working on subsequent studies, which we expect will further improve the performance of the basic model proposed conceptually in this study.
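
The grouping-and-summation step can be sketched as below. The embedding vectors are random stand-ins for trained Word2Vec vectors, and the product names, sales figures, and similarity threshold are invented for illustration.

```python
import numpy as np

# Hypothetical sketch: extract the products whose cosine similarity to a
# KSIC-style index word exceeds a threshold, then sum their sales to estimate
# the product group's market size. Real embeddings would come from Word2Vec.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(42)
vocab = ["instant noodle", "cup noodle", "ramen", "steel pipe", "copper wire"]
vectors = {w: rng.normal(size=300) for w in vocab}   # stand-in embeddings
sales = dict(zip(vocab, [120, 80, 200, 900, 400]))   # illustrative sales

def market_size(index_word, threshold=0.0):
    group = [w for w in vocab if w != index_word
             and cosine(vectors[index_word], vectors[w]) > threshold]
    return group, sum(sales[w] for w in group)

group, size = market_size("ramen")
print(group, size)
```

With trained embeddings, raising `threshold` narrows the group to closer product names, which is how the paper adjusts the granularity of the market category.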

Evaluation of Combine IGRT using ExacTrac and CBCT In SBRT (정위적체부방사선치료시 ExacTrac과 CBCT를 이용한 Combine IGRT의 유용성 평가)

  • Ahn, Min Woo;Kang, Hyo Seok;Choi, Byoung Joon;Park, Sang Jun;Jung, Da Ee;Lee, Geon Ho;Lee, Doo Sang;Jeon, Myeong Soo
    • The Journal of Korean Society for Radiation Therapy / v.30 no.1_2 / pp.201-208 / 2018
  • Purpose: The purpose of this study is to compare and analyze set-up errors using combined IGRT with ExacTrac and CBCT applied in stages during stereotactic body radiotherapy (SBRT). Methods and materials: Patients treated with SBRT at Ulsan University Hospital from May 2014 to November 2017 were classified by treatment region into brain (3), spine (9), and pelvis (3). Set-up errors were first corrected with ExacTrac in the lateral (Lat), longitudinal (Lng), vertical (Vrt), roll, pitch, and yaw directions; after the ExacTrac shift was applied, CBCT was additionally used to correct set-up errors in the Lat, Lng, Vrt, and rotation (Rtn) directions. Results: With ExacTrac, the errors in the brain region were Lat 0.18±0.25 cm, Lng 0.23±0.04 cm, Vrt 0.30±0.36 cm, roll 0.36±0.21°, pitch 1.72±0.62°, yaw 1.80±1.21°; in the spine, Lat 0.21±0.24 cm, Lng 0.27±0.36 cm, Vrt 0.26±0.42 cm, roll 1.01±1.17°, pitch 0.66±0.45°, yaw 0.71±0.58°; and in the pelvis, Lat 0.20±0.16 cm, Lng 0.24±0.29 cm, Vrt 0.28±0.29 cm, roll 0.83±0.21°, pitch 0.57±0.45°, yaw 0.52±0.27°. When CBCT was performed after the couch movement, the errors in the brain region were Lat 0.06±0.05 cm, Lng 0.07±0.06 cm, Vrt 0.00±0.00 cm, Rtn 0.0±0.0°; in the spine, Lat 0.06±0.04 cm, Lng 0.16±0.30 cm, Vrt 0.08±0.08 cm, Rtn 0.00±0.00°; and in the pelvis, Lat 0.06±0.07 cm, Lng 0.04±0.05 cm, Vrt 0.06±0.04 cm, Rtn 0.0±0.0°. Conclusion: Combined IGRT with ExacTrac plus CBCT during SBRT reduced patients' set-up errors compared with ExacTrac alone. However, combined IGRT increases the set-up verification time and the absorbed dose delivered to the body for image acquisition. Therefore, depending on the patient's situation, using combined IGRT to reduce set-up error can increase the effectiveness of radiation treatment.
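
Summarizing residual set-up errors as a per-axis mean ± standard deviation, as in the Results above, can be sketched minimally. The residual values below are illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical sketch: per-axis mean and sample standard deviation of
# residual set-up errors, in the "mean ± SD" form used in the abstract.

def mean_std(errors_cm):
    e = np.asarray(errors_cm, dtype=float)
    return e.mean(axis=0), e.std(axis=0, ddof=1)  # ddof=1: sample SD

# Residual (Lat, Lng, Vrt) errors in cm for five hypothetical fractions
# after the ExacTrac correction, as re-measured by CBCT.
residuals = [[0.05, 0.08, 0.00],
             [0.07, 0.05, 0.01],
             [0.06, 0.09, 0.00],
             [0.04, 0.06, 0.00],
             [0.08, 0.07, 0.01]]
mean, std = mean_std(residuals)
print(np.round(mean, 3), np.round(std, 3))
```

Comparing such summaries before and after the second (CBCT) correction stage is what demonstrates the benefit of the combined protocol.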

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest advantage of using a deep learning model for image classification is that it can consider the relationships between regions by extracting each region's features from the image's overall information. However, a CNN model may not be suitable for emotional image data that lacks distinct regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures suited to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different emotions are induced by different colors. Among studies using deep learning, some have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion. Both methods improve accuracy by modifying the result value based on statistics over the picture's colors. The two-color combinations most prevalent across all training data were found; at test time, the two-color combination most prevalent in each test image was found and the result values were corrected according to the color-combination distribution. This method weights the result value obtained after the model classifies an image's emotion, using expressions based on the logarithmic and exponential functions. Emotion6, classified into six emotions, and Artphoto, classified into eight categories, were used as image data. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to the CNN model. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using Scikit-learn's clustering, the seven colors primarily distributed in each image are identified. The RGB coordinates of these colors are then compared with the RGB coordinates of the 16 reference colors above; that is, each is converted to the closest reference color. If three or more colors are combined, too many color combinations occur and the distribution becomes scattered, so each combination has less influence on the result value. To avoid this, two-color combinations were used and weighted into the model. Before training, the most prevalent color combination was found for every training image, and the distribution of color combinations per class was stored in a Python dictionary for use during testing. During testing, the most prevalent two-color combination in each test image is found, its distribution in the training data is checked, and the result is corrected accordingly. Several equations were devised to weight the model's result value based on the extracted colors as described above. The dataset was randomly split 80:20, and the model was verified using 20% of the data as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, training the model five times with different validation sets. Finally, performance was checked on the held-out test set. Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, the experiment was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the information extracted from color properties was used together with the CNN than when only the CNN architecture was used.
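
The dominant-color extraction and snapping step described above can be sketched as follows. A tiny k-means stands in for Scikit-learn's clustering so the sketch is dependency-free, and only a handful of the 16 reference colors are included; the synthetic "image" is invented for illustration.

```python
import numpy as np

# Hypothetical sketch: cluster an image's pixels to find its dominant colors,
# then snap each cluster centre to the nearest named reference color.

PALETTE = {"red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255),
           "yellow": (255, 255, 0), "black": (0, 0, 0), "white": (255, 255, 255)}

def kmeans(pixels, k, iters=20):
    # Deterministic init: evenly spaced pixels as starting centres.
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centres = pixels[idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((pixels[:, None, :] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = pixels[labels == j].mean(axis=0)
    return centres

def nearest_named(rgb):
    return min(PALETTE, key=lambda n: sum((a - b) ** 2
                                          for a, b in zip(PALETTE[n], rgb)))

# Synthetic "image": mostly red-ish and blue-ish pixels.
rng = np.random.default_rng(1)
reds = rng.normal((250, 10, 10), 5, size=(100, 3))
blues = rng.normal((10, 10, 250), 5, size=(100, 3))
pixels = np.clip(np.vstack([reds, blues]), 0, 255)

centres = kmeans(pixels, k=2)
print(sorted(nearest_named(c) for c in centres))  # ['blue', 'red']
```

The snapped color names (here a two-color combination) are what the paper tallies per class and uses to reweight the CNN's output.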