• Title/Summary/Keyword: Qualitative Models

Search Results: 393

Numerical Modeling of Thermoshearing in Critically Stressed Rough Rock Fracture: DECOVALEX-2023 Task G (임계응력 하 거친 암석 균열의 Thermoshearing 수치모델링: 국제공동연구 DECOVALEX-2023 Task G)

  • Jung-Wook Park;Chan-Hee Park;Li Zhuang;Jeoung Seok Yoon;Changlun Sun;Changsoo Lee
    • Tunnel and Underground Space
    • /
    • v.33 no.3
    • /
    • pp.189-207
    • /
    • 2023
  • In the present study, the thermoshearing experiment on a rough rock fracture was modeled using a three-dimensional grain-based distinct element model (GBDEM). The experiment was conducted by the Korea Institute of Construction Technology to investigate the progressive shear failure of a fracture under the influence of thermal stress in a critical stress state. The numerical model employs an assembly of multiple polyhedral grains and their interfaces to represent the rock sample, and calculates the coupled thermo-mechanical behavior of the grains (blocks) and the interfaces (contacts) using 3DEC, a DEM code. The primary focus was on simulating the temperature evolution, the generation of thermal stress, and the shear and normal displacements of the fracture. Two fracture models, namely a mated fracture model and an unmated fracture model, were constructed based on the degree of surface matedness, and their respective behaviors were compared and analyzed. By leveraging an advantage of the DEM, the contact area between the fracture surfaces was continuously monitored during the simulation, enabling an examination of its influence on shear behavior. The numerical results demonstrated distinct differences depending on the initial degree of surface matedness. In the mated fracture model, where the surfaces were in almost full contact, the characteristic stages of peak stress and residual stress commonly observed in the shear behavior of natural rock joints were reasonably replicated, despite some discrepancies with the experimental results. The analysis of contact-area variation over time confirmed that our numerical model effectively simulated the abrupt normal dilation and shear slip, the stress-softening phenomenon, and the transition to the residual state that occur during the peak stress stage.
The unmated fracture model, which closely resembled the experimental specimen, showed qualitative agreement with the experimental observations, including heat transfer characteristics, the progressive shear failure process induced by heating, and the increase in thermal stress. However, there were some mismatches between the numerical and experimental results regarding the onset of fracture slip and the magnitudes of fracture stress and displacement. This research was conducted as part of DECOVALEX-2023 Task G, and we expect the numerical model to be enhanced through continued collaboration with other research teams and validated in further studies.
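The abstract above centers on thermal stress generated by heating rock under constraint. As a minimal illustration of that mechanism (not the paper's 3DEC/GBDEM model), the 1D constrained-bar approximation gives sigma = E * alpha * dT; the material values below are generic granite-like assumptions, not the study's calibrated parameters.

```python
# Hedged sketch: thermally induced stress in a fully constrained elastic rock
# sample, 1D approximation. E, alpha, dT are illustrative values only.

def thermal_stress(E_gpa: float, alpha: float, dT: float) -> float:
    """Thermal stress in MPa for a 1D constrained bar: sigma = E * alpha * dT."""
    return E_gpa * 1e3 * alpha * dT  # convert GPa -> MPa

# Granite-like assumptions: E = 50 GPa, alpha = 8e-6 /K, heating by 40 K
sigma = thermal_stress(50.0, 8e-6, 40.0)
print(f"thermal stress ~ {sigma:.1f} MPa")  # ~ 16.0 MPa
```

Even this crude estimate shows why modest heating can drive a critically stressed fracture to slip: a few tens of kelvins add stresses on the order of the strength margin of a natural joint.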

Predicting Future ESG Performance using Past Corporate Financial Information: Application of Deep Neural Networks (심층신경망을 활용한 데이터 기반 ESG 성과 예측에 관한 연구: 기업 재무 정보를 중심으로)

  • Min-Seung Kim;Seung-Hwan Moon;Sungwon Choi
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.85-100
    • /
    • 2023
  • Corporate ESG performance (environmental, social, and corporate governance), which reflects a company's strategic sustainability, has emerged as one of the main factors in today's investment decisions. The traditional ESG rating process is largely qualitative and subjective, based on institution-specific criteria, entailing limitations in reliability, predictability, and timeliness for investment decisions. This study attempted to predict corporate ESG ratings through automated machine learning based on quantitative, publicly disclosed corporate financial information. Using 12 types (21,360 cases) of market-disclosed financial information and 1,780 ESG measures available through the Korea Institute of Corporate Governance and Sustainability from 2019 to 2021, we suggested a deep neural network prediction model. Our model achieved about 86% classification accuracy in predicting ESG ratings, outperforming the comparative models. This study contributes to the literature in that the model achieved relatively accurate ESG rating predictions through an automated process using quantitative, publicly available corporate financial information. In terms of practical implications, general investors can benefit from the prediction accuracy and time efficiency of the proposed model at nominal cost. In addition, this study can be extended by accumulating more Korean and international data and by developing a more robust and complex model in the future.
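The pipeline described above maps disclosed financial features to an ESG grade. A minimal numpy sketch of such a feedforward classifier follows; the layer sizes, the 12-feature input, and the four-grade output are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

# Hedged sketch of a feedforward ESG-grade classifier (not the paper's model).
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# 12 financial features -> 32 hidden units -> 4 assumed ESG grades
W1, b1 = rng.normal(size=(12, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(32, 4)) * 0.1, np.zeros(4)

def predict(x):
    h = relu(x @ W1 + b1)
    return softmax(h @ W2 + b2)

x = rng.normal(size=(5, 12))  # 5 firms, 12 disclosed ratios each
probs = predict(x)
print(probs.shape)  # (5, 4); each row is a probability distribution over grades
```

In practice the weights would be trained with cross-entropy loss against labeled ratings; this sketch only shows the forward pass that produces grade probabilities.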

Detection and Grading of Compost Heap Using UAV and Deep Learning (UAV와 딥러닝을 활용한 야적퇴비 탐지 및 관리등급 산정)

  • Miso Park;Heung-Min Kim;Youngmin Kim;Suho Bak;Tak-Young Kim;Seon Woong Jang
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.1
    • /
    • pp.33-43
    • /
    • 2024
  • This research assessed the applicability of the You Only Look Once (YOLO)v8 and DeepLabv3+ models for the effective detection of compost heaps, a significant source of non-point source pollution. Utilizing high-resolution imagery acquired by unmanned aerial vehicles (UAVs), the study conducted a comprehensive comparison and analysis of the models' quantitative and qualitative performance. In the quantitative evaluation, the YOLOv8 model demonstrated superior performance across various metrics, particularly in its ability to accurately distinguish the presence or absence of covers on compost heaps. These outcomes imply that the YOLOv8 model is highly effective in the precise detection and classification of compost heaps, thereby providing a novel approach for assessing their management grades and contributing to non-point source pollution management. This study suggests that using UAVs and deep learning to detect and manage compost heaps can address the constraints of traditional field-survey methods, thereby facilitating the establishment of accurate and effective non-point source pollution management strategies and contributing to the safeguarding of aquatic environments.
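Detection models such as YOLOv8 are conventionally scored by the intersection-over-union (IoU) between predicted and ground-truth boxes, the quantity underlying metrics like mAP. A small generic helper (not the study's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)          # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.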

Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae;Incheol Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.505-518
    • /
    • 2023
  • 3D point cloud semantic segmentation is a computer vision task that involves dividing a point cloud into different objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models have limitations in performing sufficient fusion of multi-modal features while preserving the characteristics of both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. Therefore, in this paper, we propose MMCA-Net, a novel 3D semantic segmentation model using 2D-3D multi-modal features. The proposed model effectively fuses the two heterogeneous feature types, 2D visual and 3D geometric, by using an intermediate fusion strategy and a multi-modal cross-attention-based fusion operation. The proposed model also extracts context-rich 3D geometric features from an input point cloud of irregularly distributed points by adopting PTv2 as its 3D geometric encoder. We conducted both quantitative and qualitative experiments on the benchmark dataset ScanNetv2 in order to analyze the performance of the proposed model. In terms of mIoU, the proposed model showed a 9.2% improvement over the PTv2 model, which uses only 3D geometric features, and a 12.12% improvement over the MVPNet model, which uses 2D-3D multi-modal features. These results demonstrate the effectiveness and usefulness of the proposed model.
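The core fusion operation the abstract describes is cross attention from 3D point features (queries) to 2D image features (keys/values). A bare numpy sketch of single-head cross attention with a residual add; all shapes, the single head, and the residual fusion are illustrative assumptions, not MMCA-Net's exact design.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(q_3d, kv_2d, d):
    """Each 3D point attends over all 2D pixel features (keys = values here)."""
    scores = q_3d @ kv_2d.T / np.sqrt(d)   # (N_points, N_pixels)
    return softmax(scores) @ kv_2d          # weighted sum of 2D features per point

rng = np.random.default_rng(1)
d = 64
point_feats = rng.normal(size=(1024, d))   # 3D geometric features (e.g. from a PTv2-like encoder)
image_feats = rng.normal(size=(256, d))    # 2D visual features from an RGB view
fused = point_feats + cross_attention(point_feats, image_feats, d)  # residual fusion
print(fused.shape)  # (1024, 64)
```

A real model would use learned query/key/value projections and multiple heads; the sketch keeps only the attention-weighted mixing that lets each point pull in relevant 2D context.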

A Study on Low-Light Image Enhancement Technique for Improvement of Object Detection Accuracy in Construction Site (건설현장 내 객체검출 정확도 향상을 위한 저조도 영상 강화 기법에 관한 연구)

  • Jong-Ho Na;Jun-Ho Gong;Hyu-Soung Shin;Il-Dong Yun
    • Tunnel and Underground Space
    • /
    • v.34 no.3
    • /
    • pp.208-217
    • /
    • 2024
  • Considerable research effort has gone into developing and deploying deep learning-based surveillance systems to manage health and safety issues on construction sites. In particular, deep learning-based object detection under varying environmental conditions has been actively studied because such conditions degrade a model's detection performance. Among the various environmental variables, the accuracy of an object detection model drops significantly under low illuminance, and consistent detection accuracy cannot be secured even when the model is trained on low-light images. Accordingly, low-light image enhancement is needed to maintain performance under low illuminance. This paper therefore presents a comparative study of several deep learning-based low-light image enhancement models (GLADNet, KinD, LLFlow, Zero-DCE) using image data acquired from construction sites. The enhanced low-light images were verified visually and analyzed quantitatively using image quality metrics such as PSNR, SSIM, and Delta-E. In the experiments, GLADNet showed the best low-light enhancement performance in both the quantitative and qualitative evaluations and was judged suitable as a low-light image enhancement model. If low-light image enhancement is applied as preprocessing for deep learning-based object detection in the future, consistent object detection performance is expected to be secured in low-light environments.
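PSNR, one of the metrics used above, is computed directly from the mean squared error between a reference image and the enhanced image. A minimal generic implementation (not the study's evaluation code):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 10.0  # uniform error of 10 grey levels -> MSE = 100
print(round(psnr(ref, noisy), 2))  # 10 * log10(255^2 / 100), about 28.13 dB
```

SSIM, by contrast, compares local luminance, contrast, and structure rather than raw pixel error, which is why both metrics are usually reported together.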

An Efficient CT Image Denoising using WT-GAN Model

  • Hae Chan Jeong;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.5
    • /
    • pp.21-29
    • /
    • 2024
  • Reducing the radiation dose during CT scanning lowers the risk of radiation exposure, but image resolution deteriorates significantly and diagnostic effectiveness is reduced by the resulting noise. Therefore, noise removal from CT images is an essential step in image restoration. Methods that operate in the image (spatial) domain have inherent limitations in separating the noise from the original signal. In this paper, we aim to remove noise from CT images effectively in the frequency domain using a wavelet-transform-based GAN, the WT-GAN model. The GAN used here generates denoised images through a U-Net-structured generator and a PatchGAN-structured discriminator. To evaluate the performance of the proposed WT-GAN model, experiments were conducted on CT images corrupted by various noise types, namely Gaussian, Poisson, and speckle noise. The results show that the WT-GAN model outperformed a traditional filter (BM3D) as well as existing deep learning models such as DnCNN, CDAE, and U-Net GAN in both qualitative evaluation and the quantitative measures PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure).
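The frequency-domain split that a wavelet-based pipeline operates on can be illustrated with a single-level 2D Haar decomposition into LL/LH/HL/HH sub-bands. This is a generic transform written for clarity, a sketch of the idea rather than the paper's exact preprocessing.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition of an even-sized image into four sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # coarse approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar2d(img)
print(LL.shape)  # (2, 2): each sub-band has half the resolution per axis
```

Noise concentrates in the detail sub-bands (LH/HL/HH), which is what makes the wavelet domain a convenient place for a denoising network to work.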

Evaluation of Hazardous Zones by Evacuation Scenario under Disasters on Training Ships (실습선 재난 시 피난 시나리오 별 위험구역 평가)

  • SangJin Lim;YoonHo Lee
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.30 no.2
    • /
    • pp.200-208
    • /
    • 2024
  • The occurrence of a fire on a training ship with a large number of people on board can lead to severe casualties. Hence, the Seafarers' Act and the Safety of Life at Sea (SOLAS) convention emphasize the importance of abandon-ship drills. In this study, the training ship of Mokpo National Maritime University, Segero, which carries a large number of people, was selected as the target ship, and the likelihood and severity of fire accidents on each deck were predicted through a preliminary hazard analysis (PHA), a qualitative risk assessment. Additionally, assuming a fire in a high-risk area, a simulation of evacuation time and population density was performed to quantitatively predict the risk. The total evacuation time was predicted to be longest, at 501 s, in the meal-time scenario, in which the population distribution was concentrated in one area. Depending on the scenario, some decks had relatively high population densities of over 1.4 persons/m2, indicating congestion (stagnation) of evacuees. The results of this study are expected to serve as basic data for developing training scenarios for training ships by quantifying evacuation time and population density under various evacuation scenarios, and the research can be expanded in the future through comparison of mathematical models and experimental values.
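Evacuation-time figures like the 501 s above combine walking time with queueing at exits. A toy hydraulic-flow sketch of that decomposition; the specific flow rate, headcount, exit width, and walking time below are illustrative assumptions, not the Segero simulation's inputs.

```python
# Hedged sketch of a simple hydraulic evacuation-time estimate:
# total time = walking time to the exit + queueing time at the exit.

def evacuation_time(n_people, door_width_m, specific_flow=1.3, walk_time_s=60.0):
    """specific_flow: assumed persons per metre of exit width per second."""
    queue_time = n_people / (specific_flow * door_width_m)
    return walk_time_s + queue_time

# e.g. 120 evacuees through a 1.0 m exit
print(round(evacuation_time(120, 1.0), 1))  # 60 + 120/1.3, about 152.3 s
```

The sketch shows why scenarios that concentrate people in one area (such as meal time) dominate total evacuation time: the queueing term grows linearly with the local headcount.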

An Empirical Study on Business-Viability-Assessment Method Based on Subscription Software Model (구독형SW 모델의 사업성 평가 방안에 관한 실증연구)

  • Kigon Park
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.4
    • /
    • pp.155-165
    • /
    • 2024
  • Software as a Service (SaaS) has become one of the fastest-growing software business models in recent years. Even during the economic downturn following the pandemic, the SaaS business has emerged as a crucial model for IT companies. The revenue structure of SaaS, based on the subscription-economy model, ensures that users pay only for the services they use. In other words, SaaS operates on a subscription-based billing model, providing subscribers access over the Internet to software hosted on cloud computers. This study explored how software-solution firms can counteract the decline in sales caused by shifting their business model from on-premise deployment software to subscription-based software. Additionally, it analyzes a method for selecting a subscription-based pricing model and rapidly recovering investment costs through quantitative business-viability assessment. By calculating subscription fees through a more quantitative business-viability evaluation, instead of relying on conventional business-planning methods based on qualitative judgments, companies are expected to be equipped to provide services to customers at reasonable cost, helping them lead emerging growth sectors.
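The quantitative viability check the abstract argues for can be as simple as a payback-period calculation: the number of months until cumulative subscription margin covers the up-front development cost. All figures and the margin ratio below are illustrative assumptions, not the paper's evaluation model.

```python
# Hedged sketch of a subscription-pricing viability check (payback period).

def payback_months(dev_cost, subscribers, monthly_fee, monthly_opex_ratio=0.4):
    """Months until cumulative margin covers the up-front development cost.
    monthly_opex_ratio: assumed share of revenue consumed by operating costs."""
    monthly_margin = subscribers * monthly_fee * (1.0 - monthly_opex_ratio)
    return dev_cost / monthly_margin

# e.g. $120,000 development cost, 500 subscribers at $20/month, 40% opex
print(round(payback_months(120_000, 500, 20.0), 1))  # about 20.0 months
```

Sweeping `monthly_fee` over candidate price points and comparing the resulting payback periods is one concrete way to turn the qualitative pricing decision into a quantitative one.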

Analysis of the Landscape Characteristics of Island Tourist Site Using Big Data - Based on Bakji and Banwol-do, Shinan-gun - (빅데이터를 활용한 섬 관광지의 경관 특성 분석 - 신안군 박지·반월도를 대상으로 -)

  • Do, Jee-Yoon;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.49 no.2
    • /
    • pp.61-73
    • /
    • 2021
  • This study aimed to identify users' landscape perception and the landscape characteristics of an island tourist site by utilizing SNS data generated by visitors' experiences. How the main places and scenery of the island are perceived, and what characterizes the main scenery, were analyzed using online text and photo data: the text data through text mining and network structure analysis, and the photographic data through landscape identification models and color analysis. The results were as follows. First, frequency analysis of Bakji·Banwol-do topics derived keywords for local landscapes, such as 'Purple Bridge' and 'Doori Village', along with location, behavior, and landscape images analyzed simultaneously. Second, network structure analysis allowed the connections between core and peripheral keywords to be examined in more detail, indicating that creating landscapes with color is contributing to regional revitalization. Third, the landscape identification model suggested that artificial elements should be excluded when creating the preferred landscapes around the main targets, 'Purple Bridge' and 'Doori Village', and that setting viewpoints toward the sea and sky would be effective. Fourth, Bakji·Banwol-do were the first islands created under a color theme; the colors used on artificial facilities were similar to the surrounding environment and harmonized through contrasting lightness and saturation values. This study used online data uploaded directly by visitors to identify users' perceptions of the landscape and its objects. Moreover, using both text and photographic data to identify landscape perception and characteristics is significant in that it can specifically reveal which landscapes and resources visitors prefer and perceive. In addition, combining quantitative big data analysis with qualitative landscape identification models will help local landscapes be understood more concretely through discussion of the results.
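The keyword-frequency step of the SNS text mining described above can be sketched with a simple token count; the posts below are invented English examples, not the study's Bakji·Banwol-do corpus.

```python
from collections import Counter

# Hedged sketch of the keyword-frequency step in SNS text mining.
posts = [
    "purple bridge sunset view",
    "purple village walk purple bridge",
    "sea view from the bridge",
]
tokens = [w for p in posts for w in p.split()]  # naive whitespace tokenization
freq = Counter(tokens)
print(freq.most_common(2))  # the dominant landscape keywords
```

A real pipeline would add Korean morphological analysis and stop-word removal before counting; the resulting frequencies then feed the co-occurrence network used in the structural analysis.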

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused on only new cars or only used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply the nested logit model, which assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage.
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during the period January 2009 to June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both the calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczo's iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and maintains the status quo. The new car then settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotion so as to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and therefore suggests a less aggressive used car price discount in response to a new car's rebate than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response for the Elantra's new and used cars could be. Interestingly, the Elantra's used car could maintain the status quo by offering a smaller price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership, where both new and used cars of the same model are displayed. Because of this fact, the BNU model, which assumes brand choice at the first stage and choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships that carried both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
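The nested logit structure discussed throughout (car model choice at the first stage, new-vs-used at the second) can be sketched as follows; the utilities and the dissimilarity parameter below are illustrative, not estimates from the PIN data.

```python
import math

# Hedged sketch of nested logit choice probabilities with car models as nests
# and {new, used} alternatives inside each nest (the BNU structure the study favored).

def nested_logit(utils, lam):
    """utils: {nest: {alternative: utility}}; lam: dissimilarity parameter in (0, 1]."""
    # Inclusive value of each nest: log-sum of scaled within-nest utilities.
    iv = {n: math.log(sum(math.exp(u / lam) for u in alts.values()))
          for n, alts in utils.items()}
    denom = sum(math.exp(lam * v) for v in iv.values())
    probs = {}
    for n, alts in utils.items():
        p_nest = math.exp(lam * iv[n]) / denom                     # first-stage choice
        within = sum(math.exp(u / lam) for u in alts.values())
        for a, u in alts.items():
            probs[(n, a)] = p_nest * math.exp(u / lam) / within    # second-stage choice
    return probs

p = nested_logit({"Jetta": {"new": 1.0, "used": 0.5},
                  "Elantra": {"new": 0.8, "used": 0.7}}, lam=0.6)
print(round(sum(p.values()), 6))  # 1.0: probabilities over all four alternatives
```

With `lam = 1.0` the model collapses to plain multinomial logit (the IIA case); `lam < 1` raises the within-nest substitution that the study found between new and used cars of the same model, and an estimated `lam > 1` is the mis-specification signal mentioned above.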