• Title/Summary/Keyword: Learning rates


A Ship-Wake Joint Detection Using Sentinel-2 Imagery

  • Woojin, Jeon;Donghyun, Jin;Noh-hun, Seong;Daeseong, Jung;Suyoung, Sim;Jongho, Woo;Yugyeong, Byeon;Nayeon, Kim;Kyung-Soo, Han
    • Korean Journal of Remote Sensing / v.39 no.1 / pp.77-86 / 2023
  • Ship detection is widely used in areas such as maritime security, maritime traffic, fisheries management, illegal fishing, and border control, and it is important for rapid response and damage minimization as ship accident rates rise with the recent growth in international maritime traffic. Under a number of global and national regulations, ships must be equipped with an automatic identification system (AIS), which periodically transmits information such as the ship's location and speed. However, most small vessels (less than 300 tons) are not obligated to install the transponder, the signal may not be transmitted, whether intentionally or accidentally, and there are even cases of misuse of a ship's location information. Therefore, in this study, ship detection was performed using high-resolution optical satellite images, which can periodically observe a wide area and can detect small ships. However, optical images can cause false alarms due to noise on the sea surface, such as waves, or to factors with ship-like brightness, such as clouds and wakes, so removing these factors is important for improving the accuracy of ship detection. In this study, false alarms were reduced and the accuracy of ship detection was improved by removing wakes. Ship detection was performed using machine learning-based random forest (RF) and convolutional neural network (CNN) techniques, which have recently been widely used in object detection, and the detection results of the two models were compared and analyzed. In addition, the results of RF and CNN were combined to mitigate ship fragmentation and missed small detections (see the sketch below). The ship detection results of this study are significant in that they improved on the limitations of each model while maintaining accuracy. If satellite images with improved spatial resolution are utilized in the future, simultaneous ship and wake detection with higher accuracy is expected.
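The abstract does not specify how the RF and CNN outputs were fused, so the following is only a hedged sketch of one plausible reading: take the union of the two binary ship masks, then discard connected components smaller than a minimum ship size. The function name, the SciPy-based implementation, and the `min_pixels` threshold are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy import ndimage

def fuse_detections(rf_mask, cnn_mask, min_pixels=4):
    """Union of two binary ship masks, dropping components below min_pixels."""
    fused = np.logical_or(rf_mask, cnn_mask)
    labels, n = ndimage.label(fused)                     # connected components
    sizes = ndimage.sum(fused, labels, range(1, n + 1))  # pixel count per component
    keep_ids = np.flatnonzero(sizes >= min_pixels) + 1   # component IDs to keep
    return np.isin(labels, keep_ids)

# Two fragments of one ship, each detected partially by one model, survive fusion.
rf = np.zeros((8, 8), bool); rf[2:4, 2:4] = True
cnn = np.zeros((8, 8), bool); cnn[3:6, 3:5] = True
print(fuse_detections(rf, cnn).sum())
```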

Adverse Effects on EEGs and Bio-Signals Coupling on Improving Machine Learning-Based Classification Performances

  • SuJin Bak
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.133-153 / 2023
  • In this paper, we propose a novel approach to investigating brain-signal measurement technology using electroencephalography (EEG). Traditionally, researchers have combined EEG signals with bio-signals (BSs) to enhance the classification performance of emotional states. Our objective was to explore the synergistic effects of coupling EEG and BSs and to determine whether the EEG+BS combination improves the classification accuracy of emotional states compared to using EEG alone or combining EEG with pseudo-random signals (PS) generated arbitrarily by random generators. Employing four feature extraction methods, we examined four combinations (EEG alone, EEG+BS, EEG+BS+PS, and EEG+PS), utilizing data from two widely used open datasets. Emotional states (task versus rest) were classified using Support Vector Machine (SVM) and Long Short-Term Memory (LSTM) classifiers. Our results revealed that with SVM-FFT, which achieved the highest accuracy, the average error rates of EEG+BS were 4.7% and 6.5% higher than those of EEG+PS and EEG alone, respectively. We also conducted a thorough analysis of EEG+BS combined with numerous PSs. The error rate of EEG+BS+PS displayed a V-shaped curve, initially decreasing due to the deep double descent phenomenon and then increasing due to the curse of dimensionality. Consequently, our findings suggest that the combination of EEG+BS may not always yield promising classification performance. (A minimal sketch of such a pipeline follows.)
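As a loose illustration of the SVM-FFT pipeline named in the abstract, the sketch below extracts FFT magnitude features from each epoch and scores an SVM with and without appended pseudo-random channels. The epoch shape, channel counts, and placeholder data are assumptions; the authors' actual feature extraction and datasets differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 32, 512))  # placeholder: trials x channels x samples
labels = rng.integers(0, 2, 200)              # task vs. rest

# FFT magnitude features, flattened across channels.
eeg_feats = np.abs(np.fft.rfft(epochs, axis=-1)).reshape(len(epochs), -1)

# The EEG+PS control appends features from arbitrary pseudo-random channels.
ps = rng.standard_normal((200, 4, 512))
ps_feats = np.abs(np.fft.rfft(ps, axis=-1)).reshape(len(ps), -1)

print("EEG   :", cross_val_score(SVC(), eeg_feats, labels, cv=5).mean())
print("EEG+PS:", cross_val_score(SVC(), np.hstack([eeg_feats, ps_feats]), labels, cv=5).mean())
```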

An Exploratory Study on the Strategic Responses to ESG Evaluation of SMEs

  • Park, Yoon Su
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.1 / pp.47-65 / 2023
  • As stakeholder demands and sustainable finance grow, ESG management and ESG evaluation are becoming increasingly important. SMEs should also prepare for ESG rating practices that affect supply chain management and financial transactions. However, SMEs have no choice but to focus on survival first, which limits what they can invest in ESG management. In addition, there is a lack of research on the legitimacy of ESG management by SMEs, and volatility in ESG evaluation systems and rating grades is also increasing. Accordingly, it is necessary to review ESG evaluation trends and practical guidelines along with previous studies. As a result of this exploratory study, in terms of implementation strategy, SMEs need to implement ESG management and make efforts to specialize in ESG-related new businesses under conditions in which their survival base is guaranteed. In addition, it is necessary to focus on the strategic use of various evaluation results, along with accumulating information favorable for ESG evaluation through organizational learning and software management. The implications of this study are that further research, such as on classification criteria for SMEs and on the relationship between ESG evaluation grades and long-term survival rates, is needed in ESG evaluation of SMEs. At the government policy level, it is time to consider an ESG evaluation system exclusively for SMEs so that ESG management can be implemented, with ESG evaluation at levels differentiated by industry and size.


Conventional Versus Artificial Intelligence-Assisted Interpretation of Chest Radiographs in Patients With Acute Respiratory Symptoms in Emergency Department: A Pragmatic Randomized Clinical Trial

  • Eui Jin Hwang;Jin Mo Goo;Ju Gang Nam;Chang Min Park;Ki Jeong Hong;Ki Hong Kim
    • Korean Journal of Radiology / v.24 no.3 / pp.259-270 / 2023
  • Objective: It is unknown whether artificial intelligence-based computer-aided detection (AI-CAD) can enhance the accuracy of chest radiograph (CR) interpretation in real-world clinical practice. We aimed to compare the accuracy of CR interpretation assisted by AI-CAD with that of conventional interpretation in patients who presented to the emergency department (ED) with acute respiratory symptoms, using a pragmatic randomized controlled trial. Materials and Methods: Patients who underwent CRs for acute respiratory symptoms at the ED of a tertiary referral institution were randomly assigned to an intervention group (with assistance from AI-CAD for CR interpretation) or a control group (without AI assistance). A commercial AI-CAD system (Lunit INSIGHT CXR, version 2.0.2.0; Lunit Inc.) was used in the intervention group; other clinical practices were consistent with standard procedures. The sensitivity and false-positive rate of CR interpretation by duty trainee radiologists for identifying acute thoracic diseases were the primary and secondary outcomes, respectively. The reference standard for acute thoracic disease was established based on a review of each patient's medical record at least 30 days after the ED visit. Results: We randomly assigned 3576 participants to either the intervention group (1761 participants; mean age ± standard deviation, 65 ± 17 years; 978 males; acute thoracic disease in 472 participants) or the control group (1815 participants; 64 ± 17 years; 988 males; acute thoracic disease in 491 participants). The sensitivity (67.2% [317/472] in the intervention group vs. 66.0% [324/491] in the control group; odds ratio, 1.02 [95% confidence interval, 0.70-1.49]; P = 0.917) and false-positive rate (19.3% [249/1289] vs. 18.5% [245/1324]; odds ratio, 1.00 [95% confidence interval, 0.79-1.26]; P = 0.985) of CR interpretation by duty radiologists were not associated with the use of AI-CAD. Conclusion: AI-CAD did not improve the sensitivity or false-positive rate of CR interpretation for diagnosing acute thoracic disease in patients with acute respiratory symptoms who presented to the ED. (The reported rates can be recomputed directly from these counts, as sketched below.)
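For readers who want to verify the reported rates, the snippet below recomputes sensitivity and false-positive rate directly from the counts given in the abstract; the odds ratios and P values come from the trial's statistical models and are not reproduced here.

```python
# Counts taken directly from the abstract's Results section.
groups = {
    "intervention (AI-CAD)": {"tp": 317, "diseased": 472, "fp": 249, "healthy": 1289},
    "control":               {"tp": 324, "diseased": 491, "fp": 245, "healthy": 1324},
}

for name, g in groups.items():
    sensitivity = g["tp"] / g["diseased"]          # detected / with disease
    false_positive_rate = g["fp"] / g["healthy"]   # positives / without disease
    print(f"{name}: sensitivity {sensitivity:.1%}, FPR {false_positive_rate:.1%}")
```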

Classification of latent classes and analysis of influencing factors on longitudinal changes in middle school students' mathematics interest and achievement: Using multivariate growth mixture model

  • Rae Yeong Kim;Sooyun Han
    • The Mathematical Education / v.63 no.1 / pp.19-33 / 2024
  • This study investigates longitudinal patterns in middle school students' mathematics interest and achievement using panel data from the 4th to 6th years of the Gyeonggi Education Panel Study. Results from the multivariate growth mixture model confirmed heterogeneous characteristics in the longitudinal trajectories of students' mathematics interest and achievement. Students were classified into four latent classes: a low-level class with weak interest and achievement, a high-level class with strong interest and achievement, a middle-level-increasing class where interest and achievement rise with grade, and a middle-level-decreasing class where interest and achievement decline with grade. Each class exhibited distinct patterns of change in interest and achievement. Moreover, an examination of the correlation between intercepts and slopes in the multivariate growth mixture model revealed a positive association between interest and achievement with respect to both their initial values and their growth rates. We further explored the predictive variables influencing latent class assignment. The results indicated that students' educational ambition and time spent on private education positively affect mathematics interest and achievement, and that the influence of prior learning varies with its intensity. The perceived instruction method significantly impacts latent class assignment: teacher-centered instruction increases the likelihood of belonging to higher-level classes, while learner-centered instruction increases the likelihood of belonging to lower-level classes. This study has significant implications in that it presents a new method for analyzing longitudinal patterns of students' characteristics in mathematics education through the application of the multivariate growth mixture model. (A rough two-stage approximation of the model is sketched below.)
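As a rough illustration only: a full multivariate growth mixture model estimates class membership and growth curves jointly, but the same intuition can be approximated in two stages by fitting a linear growth curve (intercept, slope) per student and clustering the growth parameters into four classes. Everything below (placeholder data, the two-stage shortcut, the Gaussian mixture) is an assumption for illustration, not the authors' estimation procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_students, waves = 300, 3
t = np.arange(waves)  # panel waves (4th-6th year)

# Placeholder panel data: interest (Likert-like) and achievement scores.
interest = rng.normal(3.0, 1.0, (n_students, waves))
achievement = rng.normal(60.0, 10.0, (n_students, waves))

def growth(y):
    """OLS (intercept, slope) of one student's trajectory over the waves."""
    slope, intercept = np.polyfit(t, y, 1)
    return intercept, slope

# Four growth parameters per student: interest and achievement, each (intercept, slope).
params = np.array([growth(interest[i]) + growth(achievement[i]) for i in range(n_students)])

# Cluster the joint growth parameters into four latent classes.
classes = GaussianMixture(n_components=4, random_state=0).fit_predict(params)
print(np.bincount(classes))  # class sizes
```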

Studying the Comparative Analysis of Highway Traffic Accident Severity Using the Random Forest Method

  • Sun-min Lee;Byoung-Jo Yoon;Wut Yee Lwin
    • Journal of the Society of Disaster Information / v.20 no.1 / pp.156-168 / 2024
  • Purpose: The trend of highway traffic accidents shows a repeating pattern of increase and decrease, and the fatality rate is highest on highways among all road types. Therefore, there is a need to establish improvement measures that reflect the situation within the country. Method: We conducted accident severity analysis using random forest on data from accidents occurring from 2019 to 2021 on 10 routes with high accident rates among national highways, and identified the factors influencing accident severity. Result: The analysis, which used the SHAP package to determine the top 10 variables by importance, revealed that the variables with a significant impact on accident severity are: the at-fault driver's age being 20 to under 39 years; the time period being daytime (06:00-18:00); occurrence on weekends (Sat-Sun); the season being summer or winter; violation of traffic regulations (failure to comply with safe driving); the road type being a tunnel; and the geometric structure having a high number of lanes and a high speed limit. We identified a total of 10 independent variables that showed a positive correlation with highway traffic accident severity. Conclusion: Because highway accidents arise from the complex interaction of various factors, predicting them poses significant challenges. However, building on the results obtained here, in-depth analysis of the factors influencing the severity of highway traffic accidents is needed, and efforts should be made to establish efficient and rational response measures based on these findings. (A sketch of the random-forest-plus-SHAP pipeline follows.)
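A minimal sketch of the pipeline the abstract describes: a random-forest severity classifier explained with the SHAP package, ranking the top 10 variables by mean absolute SHAP value. The column names and placeholder data are illustrative assumptions, not the study's actual schema.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
import shap

rng = np.random.default_rng(0)
# Placeholder stand-in for the accident table: encoded risk factors per accident.
X = pd.DataFrame({
    "driver_age_20_39": rng.integers(0, 2, 1000),
    "daytime_06_18": rng.integers(0, 2, 1000),
    "weekend": rng.integers(0, 2, 1000),
    "tunnel": rng.integers(0, 2, 1000),
    "speed_limit": rng.choice([80, 100, 110], 1000),
    "lanes": rng.integers(2, 6, 1000),
})
y = rng.integers(0, 2, 1000)  # 1 = severe accident (placeholder label)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)  # exact SHAP values for tree ensembles
sv = explainer.shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv[:, :, 1]  # class 1, either SHAP API

top10 = pd.Series(np.abs(sv).mean(axis=0), index=X.columns).nlargest(10)
print(top10)  # mean |SHAP| ranking, analogous to the study's top-10 importance
```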

Performance of ChatGPT 3.5 and 4 on U.S. dental examinations: the INBDE, ADAT, and DAT

  • Mahmood Dashti;Shohreh Ghasemi;Niloofar Ghadimi;Delband Hefzi;Azizeh Karimian;Niusha Zare;Amir Fahimipour;Zohaib Khurshid;Maryam Mohammadalizadeh Chafjiri;Sahar Ghaedsharaf
    • Imaging Science in Dentistry / v.54 no.3 / pp.271-275 / 2024
  • Purpose: Recent advancements in artificial intelligence (AI), particularly tools such as ChatGPT developed by OpenAI, a U.S.-based AI research organization, have transformed the healthcare and education sectors. This study investigated the effectiveness of ChatGPT in answering dentistry exam questions, demonstrating its potential to enhance professional practice and patient care. Materials and Methods: This study assessed the performance of ChatGPT 3.5 and 4 on U.S. dental exams, specifically the Integrated National Board Dental Examination (INBDE), Dental Admission Test (DAT), and Advanced Dental Admission Test (ADAT), excluding image-based questions. Using customized prompts, ChatGPT's answers were evaluated against official answer sheets (a sketch of such an evaluation loop follows below). Results: ChatGPT 3.5 and 4 were tested with 253 questions from the INBDE, ADAT, and DAT exams. For the INBDE, both versions achieved 80% accuracy on knowledge-based questions and 66-69% on case history questions. On the ADAT, they scored 66-83% on knowledge-based and 76% on case history questions. ChatGPT 4 excelled on the DAT, with 94% accuracy on knowledge-based questions, 57% on mathematical analysis items, and 100% on comprehension questions, surpassing ChatGPT 3.5's rates of 83%, 31%, and 82%, respectively. The difference was significant for knowledge-based questions (P = 0.009). Both versions showed similar patterns in their incorrect responses. Conclusion: Both ChatGPT 3.5 and 4 effectively handled knowledge-based, case history, and comprehension questions, with ChatGPT 4 being more reliable and surpassing the performance of 3.5. ChatGPT 4's perfect score on comprehension questions underscores its trainability in specific subjects. However, both versions exhibited weaker performance in mathematical analysis, suggesting this as an area for improvement.
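A hedged sketch of the kind of evaluation loop the abstract describes: each multiple-choice question is sent to the model and the reply is compared against the answer key. The prompt wording, file format, and model ID are assumptions, not the authors' protocol.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical answer-key file: [{"question": ..., "choices": [...], "answer": "A"}, ...]
with open("inbde_questions.json") as f:
    items = json.load(f)

correct = 0
for item in items:
    prompt = (
        item["question"] + "\n"
        + "\n".join(item["choices"])
        + "\nAnswer with the letter of the single best choice."
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model ID; repeat with gpt-3.5-turbo to compare
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    reply = resp.choices[0].message.content.strip().upper()
    correct += reply.startswith(item["answer"])

print(f"Accuracy: {correct / len(items):.1%}")
```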

Data collection strategy for building rainfall-runoff LSTM model predicting daily runoff

  • Kim, Dongkyun;Kang, Seokkoo
    • Journal of Korea Water Resources Association / v.54 no.10 / pp.795-805 / 2021
  • In this study, after developing an LSTM-based deep learning model for estimating daily runoff in the Soyang River Dam basin, we investigated the accuracy of the model for various combinations of model structure and input data. The model was built on a database consisting of daily mean precipitation, daily mean temperature, and daily mean wind speed (inputs) and daily mean flow rate (output) during the first 12 years (1997.1.1-2008.12.31). The Nash-Sutcliffe model efficiency coefficient (NSE) and RMSE were examined for validation using the flow discharge data of the later 12 years (2009.1.1-2020.12.31). The combination that showed the highest accuracy used all available input data (12 years of daily precipitation, temperature, and wind speed) on an LSTM structure with 64 hidden units; the NSE and RMSE for the verification period were 0.862 and 76.8 m³/s, respectively. When the number of LSTM hidden units exceeds 500, performance degradation due to overfitting begins to appear, and when it exceeds 1000, the overfitting problem becomes prominent. A model with very high performance (NSE = 0.80-0.84) could be obtained when only 12 years of daily precipitation were used for model training, and a model with reasonably high performance (NSE = 0.63-0.85) could be obtained when only one year of input data was used. In particular, an accurate model (NSE = 0.85) could be obtained if the one year of training data contained a wide range of flow events, such as extreme flows and droughts, as well as normal events. When the training data included both normal and extreme flow rates, input records longer than 5 years did not significantly improve model performance. (A minimal sketch of the best configuration follows.)
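A minimal sketch of the best configuration reported above, assuming a Keras implementation: an LSTM with 64 hidden units mapping windows of daily (precipitation, temperature, wind speed) to daily discharge, scored with the NSE. The window length, optimizer, and placeholder data are assumptions.

```python
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES = 365, 3  # assumed window: one year of daily inputs

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),   # 64 hidden units, the best structure reported
    tf.keras.layers.Dense(1),   # daily mean discharge (m^3/s)
])
model.compile(optimizer="adam", loss="mse")

# Placeholder windows of (precipitation, temperature, wind speed) and discharge.
X = np.random.rand(100, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.rand(100, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(nse(y, model.predict(X, verbose=0)))
```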

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic-value analysis and technical indicator analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so these methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but such approaches can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate question. These studies find a point that matches a meaningful pattern and then measure performance after n days, assuming a purchase at that point in time; since this approach calculates virtual revenues, there can be many disparities with reality. Existing research tries to discover patterns with stock price predictive power, whereas this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Despite reports that some patterns have price predictability, there have been no performance reports from use in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading; patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a real situation because it assumes that both the buy and the sell were executed. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low-line zig-zag, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley (see the sketch below). The swing wave method was superior to the other methods in the test results, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of candidate patterns was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which separates the test section from the application section, so we were able to respond appropriately to market changes. In this study, we optimized at the portfolio level, because optimizing the variables for each individual stock risks over-optimization; we set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-fitting. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap stock portfolio was the most successful, and the high-volatility stock portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but the highest volatility is not necessarily the best.
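A short sketch of the swing wave rule as described in the abstract: a bar is a peak if its high exceeds the highs of the n bars on each side, and a valley if its low is below the n lows on each side. The window size n and the data layout are illustrative assumptions.

```python
import numpy as np

def swing_turning_points(high, low, n=5):
    """Peaks/valleys under the swing wave rule: the central bar's high (low)
    must exceed (fall below) all n highs (lows) on each side."""
    high, low = np.asarray(high), np.asarray(low)
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        side_highs = np.r_[high[i - n:i], high[i + 1:i + n + 1]]
        side_lows = np.r_[low[i - n:i], low[i + 1:i + n + 1]]
        if high[i] > side_highs.max():
            peaks.append(i)    # central high above its 2n neighbors
        if low[i] < side_lows.min():
            valleys.append(i)  # central low below its 2n neighbors
    return peaks, valleys
```

Five consecutive turning points from a routine like this would then be matched against the ten selected M & W pattern groups.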

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers, because retaining at-risk customers is far more economical: the acquisition cost of a new customer is known to be five to six times the retention cost. Companies that effectively prevent customer churn and improve retention rates are known to benefit not only through increased profitability but also through an improved brand image driven by higher customer satisfaction. Customer churn prediction, long conducted as a sub-area of CRM research, has recently become more important as a big-data-based performance marketing theme due to the development of business machine learning technology. Until now, research on churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, which are highly competitive and where churn management is urgent. However, these studies focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and they were limited in practical utility because most treated the entire customer base as a single group when developing a predictive model. In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and the resulting churn rates differ, so it is unreasonable to treat all customers as one group. It is therefore desirable to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model per group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement, because clustering is a mechanical, exploratory grouping technique that calculates distances from inputs and does not reflect strategic intent such as loyalty. Assuming that successful churn management is achieved more through improvements in the overall process than through model performance alone, this study proposes a segment-based churn prediction process based on two-dimensional customer loyalty (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation).
CCP/2DL is a churn prediction process that segments customers along two loyalty dimensions, quantitative and qualitative, performs a secondary grouping of the customer segments according to their churn patterns, and then independently applies heterogeneous churn prediction models to each churn pattern group (a simplified sketch follows). To assess the relative merit of the proposed process, performance comparisons were made with the two most commonly applied alternatives: the general churn prediction process and the clustering-based churn prediction process. The general process, as used in this study, predicts churn for all customers as a single group with the most commonly used machine learning models; the clustering-based process first segments customers with clustering techniques and then implements a churn prediction model for each group. In cooperation with a global NGO, the proposed CCP/2DL showed better performance than the other methodologies for predicting churn. This churn prediction process is not only effective in predicting churn, but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other performance marketing activities.
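A simplified sketch of the first stage of CCP/2DL under assumed data: customers are split into a 2x2 grid by quantitative and qualitative loyalty scores, and an independent churn model is trained per segment. The median-split rule, column names, and classifier choice are assumptions, and the secondary grouping by churn pattern is omitted for brevity.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
# Placeholder donor table; loyalty scores and feature names are assumed.
df = pd.DataFrame({
    "quant_loyalty": rng.random(n),           # e.g., giving amount/frequency index
    "qual_loyalty": rng.random(n),            # e.g., engagement/attitude index
    "tenure_months": rng.integers(1, 120, n),
    "recent_gifts": rng.integers(0, 12, n),
    "churned": rng.integers(0, 2, n),
})

# Stage 1: two-dimensional loyalty segmentation (median split on each axis).
df["segment"] = (
    2 * (df["quant_loyalty"] > df["quant_loyalty"].median()).astype(int)
    + (df["qual_loyalty"] > df["qual_loyalty"].median()).astype(int)
)

features = ["quant_loyalty", "qual_loyalty", "tenure_months", "recent_gifts"]
models = {}  # one independently trained churn model per loyalty segment
for seg, grp in df.groupby("segment"):
    models[seg] = GradientBoostingClassifier(random_state=0).fit(grp[features], grp["churned"])

# Scoring: route each customer to the model of their own segment.
def churn_probability(row):
    return models[row["segment"]].predict_proba(pd.DataFrame([row[features]]))[0, 1]

print(df.head(3).apply(churn_probability, axis=1))
```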