• Title/Summary/Keyword: modeling system


A Study on Health Impact Assessment and Emissions Reduction System Using AERMOD (AERMOD를 활용한 건강위해성평가 및 배출저감제도에 관한 연구)

  • Seong-Su Park;Duk-Han Kim;Hong-Kwan Kim;Young-Woo Chon
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.93-105
    • /
    • 2024
  • Purpose: This study aims to quantitatively determine the impact on nearby residents by selecting chemicals emitted from workplaces among the substances subject to the chemical emission reduction plan and predicting their concentrations with an atmospheric dispersion program. Method: Study substances were selected considering half-life, toxicity, and the availability of monitoring station data. The areas where the selected substances are discharged were chosen as study areas, and four locations with floating populations were selected for the health risk evaluation. Result: AERMOD was run after terrain and meteorological preprocessing to obtain predicted concentrations. The health risk assessment indicated that only dichloromethane exceeded the threshold for children, while tetrachloroethylene and chloroform appeared at levels that cannot be ignored for both children and adults. Conclusion: In the domestic context, health risk assessments are currently conducted under the regulations of the "Environmental Health Act," in which a hazard index above a certain threshold is considered to pose a health risk. The anticipated expansion of the list of substances subject to the chemical emission reduction plan to 415 types by 2030 suggests the need for efficient management within workplaces. Where the hazard index surpasses the threshold, effective chemical management can be achieved by setting priorities based on the background concentration and the concentration predicted through atmospheric dispersion modeling.
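The prioritization step described in the conclusion rests on a simple ratio: a hazard quotient compares the combined background and modeled concentration against a reference concentration, with values above 1 flagged as a potential risk. The sketch below illustrates this convention in Python; the substance values and reference concentrations are hypothetical placeholders, not figures from the study.

```python
# Illustrative hazard-quotient calculation, assuming the common definition
# HQ = exposure concentration / reference concentration. All numbers below
# are hypothetical placeholders, not values from the study.

def hazard_quotient(background_ugm3: float,
                    predicted_ugm3: float,
                    reference_ugm3: float) -> float:
    """Hazard quotient from background plus modeled (e.g., AERMOD) concentration."""
    exposure = background_ugm3 + predicted_ugm3
    return exposure / reference_ugm3

substances = {
    # substance: (background, AERMOD-predicted, reference concentration), ug/m^3
    "dichloromethane":     (1.2, 3.4, 4.0),   # hypothetical numbers
    "tetrachloroethylene": (0.5, 0.9, 4.1),
    "chloroform":          (0.3, 0.2, 0.4),
}

for name, (bg, pred, ref) in substances.items():
    hq = hazard_quotient(bg, pred, ref)
    flag = "exceeds threshold" if hq > 1.0 else "below threshold"
    print(f"{name}: HQ = {hq:.2f} ({flag})")
```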

Three Qualities of OTT Services: A Mixed Methods Approach (OTT 서비스의 세 가지 질적 요소: 혼합적 연구방법을 통한 접근)

  • Jae Sun Yoo;Jaecheol Park;Hyun Jun Jeon;Jai-Yeol Son
    • Information Systems Review
    • /
    • v.24 no.1
    • /
    • pp.59-87
    • /
    • 2022
  • Since over-the-top (OTT) services emerged as a new way of consuming video content, OTT markets have grown exponentially and competition among OTT services has intensified. Only limited systematic research effort has been devoted to understanding why users subscribe to a particular OTT service over others. Therefore, we used a developmental sequential mixed-methods approach to identify the quality factors and their effects on post-subscription experience and continuance intention. In the qualitative study, we derived six factors that users consider important in continuing their subscriptions. Based on these factors, we hypothesized a research model with three qualities adapted from the information systems success model (ISSM). The proposed research model was validated through quantitative research, a survey of 226 OTT service users in South Korea, using structural equation modeling. The results indicated that content quality is the key factor affecting both perceived enjoyment and satisfaction, whereas system quality affects satisfaction and service quality affects only enjoyment. Enjoyment affects satisfaction, which in turn affects continuance usage intention. This study contributes to research by modifying the ISSM through mixed methods. It also provides OTT service providers with insight into enhancing users' post-subscription experience and continuance intention through the qualities derived from the interviews.
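The paper's path model (qualities → enjoyment/satisfaction → continuance intention) can be written down as a structural equation model specification. The sketch below shows how such a model might be encoded with the semopy package in Python; the construct names mirror the abstract, while the survey data file is a hypothetical stand-in for the study's actual instrument.

```python
# A minimal SEM sketch with semopy, assuming survey responses are loaded in a
# pandas DataFrame with one column per construct score. The CSV file name is
# a hypothetical placeholder.
import pandas as pd
import semopy

model_desc = """
enjoyment    ~ content_quality + service_quality
satisfaction ~ content_quality + system_quality + enjoyment
continuance  ~ satisfaction
"""

data = pd.read_csv("ott_survey.csv")  # hypothetical: 226 respondents, one row each

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```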

Analysis of a Groundwater Flow System in Fractured Rock Mass Using the Concept of Hydraulic Compartment (수리영역 개념을 적용한 단열암반의 지하수유동체계 해석)

  • Cho Sung-Il;Kim Chun-Soo;Bae Dae-Seok;Kim Kyung-Su;Song Moo-Young
    • The Journal of Engineering Geology
    • /
    • v.16 no.1 s.47
    • /
    • pp.69-83
    • /
    • 2006
  • This study aims to evaluate the complex groundwater flow system around underground oil storage caverns using the concept of hydraulic compartments. For the hydrogeological analysis, hydraulic testing data, the evolution of groundwater levels in 28 surface monitoring boreholes, and the pressure variations of 95 horizontal and 63 vertical water curtain holes in the caverns were utilized. At the cavern level, the Hydraulic Conductor Domains (fracture zones) are characterized by one major local fracture zone (NE-1) and two local fracture zones between the FZ-1 and FZ-2 fracture zones. The Hydraulic Rock Domain (rock mass) is divided into four compartments by these local fracture zones. The two Hydraulic Rock Domains (A, B) around the FZ-2 zone have relatively high initial groundwater pressures of up to $15\,kg/cm^2$, and the differences between the upper and lower groundwater levels, measured in monitoring holes with double completion, range from 10 to 40 m throughout the construction stage, indicating a relatively good hydraulic connection between the near-surface and bedrock groundwater systems. In the two Hydraulic Rock Domains (C, D) adjacent to FZ-1, on the other hand, the groundwater levels in the upper and lower zones differ greatly, by up to 120 m, and the high water levels in the upper groundwater system did not vary during the construction stage. This might result from the very low hydraulic conductivity of the zone ($7.2\times10^{-10}\,m/sec$), six times lower than that of Domains C and D. Groundwater recharge rates obtained from the numerical modeling are 2% of the annual mean precipitation (1,356 mm/year) over 20 years.
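The role of hydraulic conductivity here follows directly from Darcy's law, $q = K\,(dh/dl)$: across a low-conductivity zone, even a large head difference drives only a tiny flux. The sketch below works through that arithmetic in Python; the conductivity is the value quoted in the abstract, while the head difference and flow-path length are hypothetical values chosen only for illustration.

```python
# Darcy flux across a low-conductivity zone: q = K * (dh / dl).
# K is the conductivity quoted in the abstract; the head difference and
# flow-path length are hypothetical, chosen only to illustrate the scale.

K = 7.2e-10          # hydraulic conductivity, m/s (from the abstract)
dh = 120.0           # head difference, m (illustrative; max level difference reported)
dl = 100.0           # assumed flow-path length through the zone, m (hypothetical)

q = K * dh / dl      # specific discharge, m/s
per_year = q * 3600 * 24 * 365

print(f"specific discharge: {q:.2e} m/s  (~{per_year * 1000:.1f} mm/year)")
# A flux this small is consistent with the observation that upper groundwater
# levels barely responded during construction: the zone isolates compartments.
```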

Development of a Planting Density-Growth-Harvest Chart for Common Ice Plant Hydroponically Grown in Closed-type Plant Production System (식물 생산 시스템에서 수경재배한 Common Ice Plant의 재식밀도-생육-수확 도표 개발)

  • Cha, Mi-Kyung;Park, Kyoung Sub;Cho, Young-Yeol
    • Journal of Bio-Environment Control
    • /
    • v.25 no.2
    • /
    • pp.106-110
    • /
    • 2016
  • In this study, a planting density-growth-harvest (PGH) chart was developed to allow easy reading of growth and harvest factors such as crop growth rate, relative growth rate, shoot fresh weight, shoot dry weight, harvesting time, marketable rate, and marketable yield of common ice plant (Mesembryanthemum crystallinum L.). The plants were grown in a nutrient film technique (NFT) system in a closed-type plant factory using three-band fluorescent lamps under a light intensity of $140\,\mu mol\cdot m^{-2}\cdot s^{-1}$ and a photoperiod of 12 h. Growth and yield were analyzed under four planting densities ($15\times10$, $15\times15$, $15\times20$, and $15\times25\,cm$). Shoot fresh and dry weights per plant increased with higher planting density until they reached an upper limit, and yield per area showed the same tendency. Crop growth rate, relative growth rate, and lost time were described using quadratic equations. A linear relationship between shoot dry weight and fresh weight was observed. The PGH chart was constructed based on the growth data and the fitted equations. For instance, given a within-row spacing of 20 cm and a fresh weight per plant at harvest of 100 g, all the growth and harvest factors of common ice plant can be estimated: the planting density, crop growth rate, relative growth rate, lost time, shoot dry weight per plant, harvesting time, and yield were $33\,plants/m^2$, $20\,g\cdot m^{-2}\cdot d^{-1}$, $0.27\,g\cdot g^{-1}\cdot d^{-1}$, 22 days, 2.5 g/plant, 26 days after transplanting, and $3.2\,kg\cdot m^{-2}$, respectively. With this chart, the growth factors (planting density, crop growth rate, relative growth rate, lost time) and the harvest factors (shoot fresh and dry weights, harvesting time, marketable rate, and marketable yield) can easily be obtained from at least two parameters, for instance, planting distance and one of the harvest factors. PGH charts will be useful tools for estimating the growth and yield of crops and for the practical design of closed-type plant production systems.
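The chart's entry point is simple geometry: planting density is the reciprocal of the ground area allotted per plant, and area yield is density times per-plant fresh weight. The Python sketch below reproduces the worked example above; only the spacing and fresh weight come from the abstract, and the interpretation of the remaining gap is an assumption.

```python
# Planting density and yield from plant spacing, as read off a PGH chart.
# Inputs mirror the abstract's worked example (15 cm between rows, 20 cm
# within rows, 100 g fresh weight per plant at harvest).

def planting_density(row_cm: float, within_row_cm: float) -> float:
    """Plants per square metre for a rectangular planting grid."""
    area_m2 = (row_cm / 100) * (within_row_cm / 100)
    return 1.0 / area_m2

def area_yield_kg_m2(density: float, fresh_weight_g: float) -> float:
    """Yield per square metre from density and per-plant fresh weight."""
    return density * fresh_weight_g / 1000

density = planting_density(15, 20)        # ~33.3 plants/m^2
yield_ = area_yield_kg_m2(density, 100)   # ~3.3 kg/m^2

print(f"density: {density:.1f} plants/m^2, yield: {yield_:.1f} kg/m^2")
# Close to the 33 plants/m^2 and 3.2 kg/m^2 reported; the small gap
# presumably reflects the chart's marketable-rate correction.
```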

Comparative Analysis of SWAT Generated Streamflow and Stream Water Quality Using Different Spatial Resolution Data (SWAT모형에서 공간 입력자료의 다양한 해상도에 따른 수문-수질 모의결과의 비교분석)

  • Park, Jong-Yoon;Lee, Mi-Seon;Park, Geun-Ae;Kim, Seong-Joon
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.11
    • /
    • pp.1079-1094
    • /
    • 2008
  • This study evaluates the impact of varying spatial resolutions on the uncertainty of Soil and Water Assessment Tool (SWAT) predictions of streamflow and non-point source (NPS) pollution loads in a small agricultural watershed (1.21 $km^2$) using three cases of model input: Case A combines a 2 m DEM with QuickBird land use, Case B combines a 10 m DEM with 1/25,000 land use, and Case C combines a 30 m DEM with Landsat land use; 1/25,000 soil data was used in all three cases. The model was calibrated for 2 years (1999-2000) using daily streamflow and monthly water quality records, and verified for another 2 years (2001-2002). The average Nash-Sutcliffe model efficiency was 0.59 for streamflow, and the RMSE was 2.08, 4.30, and 0.70 tons/yr for sediment, T-N, and T-P, respectively. The model was then run on the watershed with the three cases of spatial input data. The hydrological results showed that output uncertainty was affected most by the spatial resolution of the land use data: the watershed-average CN value for QuickBird land use was 0.4 and 1.8 higher than those for the 1/25,000 and Landsat land uses, respectively, which caused an increase in streamflow. The predicted NPS loadings of sediment, T-N, and T-P for QuickBird land use (Case A) were 23.7%, 43.3%, and 48.4% higher than those for 1/25,000 land use (Case B), and 50.6%, 50.8%, and 56.9% higher than those for Landsat land use (Case C), respectively.
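The calibration statistics quoted here are standard: the Nash-Sutcliffe efficiency compares model error against the variance of the observations (1 is a perfect fit, 0 means no better than the observed mean), and RMSE is the root of the mean squared error. A minimal sketch of both follows, with hypothetical observed and simulated series standing in for the study's streamflow records.

```python
# Nash-Sutcliffe efficiency and RMSE for model evaluation. The two series
# below are hypothetical placeholders for observed and SWAT-simulated flows.
import numpy as np

def nse(observed: np.ndarray, simulated: np.ndarray) -> float:
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

def rmse(observed: np.ndarray, simulated: np.ndarray) -> float:
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

obs = np.array([1.2, 3.5, 2.1, 0.8, 4.9, 2.7])  # hypothetical daily flows
sim = np.array([1.0, 3.1, 2.4, 1.1, 4.2, 2.9])

print(f"NSE:  {nse(obs, sim):.2f}")
print(f"RMSE: {rmse(obs, sim):.2f}")
```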

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against networked systems occur frequently, and such intrusions can cause fatal damage in government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS), security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal conditions but poorly when they encounter new or unknown patterns of network attack. For this reason, several recent studies have adopted artificial intelligence techniques that can respond proactively to unknown threats. In particular, artificial neural networks (ANNs) have been widely applied in prior studies because of their superior prediction accuracy. However, ANNs have intrinsic limitations such as the risk of overfitting, the requirement for large sample sizes, and the opacity of the prediction process (the black-box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to account for asymmetric error costs by optimizing the classification threshold. There are two common forms of error in intrusion detection. The first is the false-positive error (FPE), in which normal activity is misjudged as an intrusion, possibly resulting in unnecessary remediation. The second is the false-negative error (FNE), in which malicious activity is misjudged as normal. Compared to FPE, FNE is more fatal; thus, when considering the total misclassification cost in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. We therefore designed the proposed model to optimize the classification threshold so as to minimize the total misclassification cost. A conventional SVM cannot be applied directly here because it generates only a discrete output (a class label). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which generates probability estimates. To validate the practical applicability of our model, we applied it to a real-world network intrusion detection dataset collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log entries in total and selected 1,000 samples from them by random sampling. The SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, the ANN using NeuroShell 4.0, and the SVM using LIBSVM v2.90, a freeware tool for training SVM classifiers.
Empirical results showed that our proposed SVM-based model outperformed all the comparative models in detecting network intrusions in terms of accuracy. They also showed that our model reduced the total misclassification cost compared to the ANN-based intrusion detection model. The intrusion detection model proposed in this paper is therefore expected not only to enhance the performance of IDS but also to lead to better management of FNE.
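The threshold-optimization idea is straightforward once the classifier emits probabilities: score every candidate cutoff by the weighted cost of the two error types and keep the cheapest one. The sketch below applies it with scikit-learn's SVC, whose probability=True option fits Platt's sigmoid internally; the 10:1 cost ratio and the synthetic data are hypothetical choices, not the study's settings.

```python
# Cost-sensitive threshold selection over Platt-scaled SVM probabilities.
# The FNE:FPE cost ratio and the synthetic dataset are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

C_FNE, C_FPE = 10.0, 1.0   # missing an intrusion costs 10x a false alarm

X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
p_intrusion = clf.predict_proba(X_te)[:, 1]

def total_cost(threshold: float) -> float:
    pred = (p_intrusion >= threshold).astype(int)
    fne = np.sum((pred == 0) & (y_te == 1))   # intrusions missed
    fpe = np.sum((pred == 1) & (y_te == 0))   # false alarms
    return C_FNE * fne + C_FPE * fpe

thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=total_cost)
print(f"best threshold: {best:.2f}, cost: {total_cost(best):.0f}")
# When FNE dominates, the cost-minimizing threshold typically falls well
# below the default 0.5.
```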

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.23-45
    • /
    • 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales, and SNS, and dataset characteristics are correspondingly diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms, yet most practitioners lack sufficient knowledge about which classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm suits a dataset's characteristics has been a task requiring expertise and effort, because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features that reflect the characteristics of multi-class data. The purpose of this study is therefore to empirically analyze whether meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, meta-features of multi-class datasets were grouped into two factors (data structure and data complexity), and seven representative meta-features were selected. Among these, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, to replace the imbalance ratio (IR), and we developed a new index, the Reverse ReLU Silhouette Score, for the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), and Contraceptive Method Choice) were selected. Each dataset was classified using the algorithms selected in the study (KNN, logistic regression, Naïve Bayes, random forest, and SVM). For each dataset, 10-fold cross-validation was applied; 10% to 100% oversampling was applied within each fold, and the meta-features of the dataset were measured. The selected meta-features were HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of a linear classifier, and hub score. F1-score was the dependent variable. The results showed that the six meta-features, including the Reverse ReLU Silhouette Score and HHI proposed in this study, have a significant effect on classification performance: (1) the HHI meta-feature proposed in this study was significant for classification performance; (2) unlike the number of classes, the number of variables has a significant positive effect on classification performance; (3) the number of classes has a negative effect on classification performance; (4) entropy has a significant effect on classification performance; (5) the Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by individual classification algorithm were also consistent, except that in the regression analysis for the Naïve Bayes algorithm, the number of variables was not significant, unlike for the other algorithms.
This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. As for practical contributions, (1) the results can be utilized in developing a recommendation system that suggests classification algorithms according to dataset characteristics, and (2) because data characteristics differ, data scientists often search for the optimal algorithm by repeatedly adjusting algorithm parameters, a process that wastes hardware, cost, time, and manpower. This study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and conclusion and discussion.
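HHI's use as a class-imbalance meta-feature follows its market-concentration definition: the sum of squared class shares, which equals 1/k for k perfectly balanced classes and approaches 1 as one class dominates. A minimal sketch under that definition; the label arrays are hypothetical examples, not the study's datasets.

```python
# Herfindahl-Hirschman Index over class labels: HHI = sum(share_i^2).
# Balanced k-class data gives 1/k; a single dominant class pushes HHI to 1.
# The example label arrays are hypothetical.
from collections import Counter

def hhi(labels) -> float:
    counts = Counter(labels)
    n = len(labels)
    return sum((c / n) ** 2 for c in counts.values())

balanced   = ["a"] * 50 + ["b"] * 50 + ["c"] * 50
imbalanced = ["a"] * 140 + ["b"] * 8 + ["c"] * 2

print(f"balanced 3-class HHI:   {hhi(balanced):.3f}")    # 0.333 (= 1/3)
print(f"imbalanced 3-class HHI: {hhi(imbalanced):.3f}")  # ~0.874
```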

A Study on the Moderating Effect of Perceived Voluntariness in the Organizational Information System Usage and Performance (정보시스템 사용과 성과에 있어서 자발성의 조절효과에 관한 연구)

  • Lee, Seung-Chang;Lee, Ho-Geun;Jung, Chang-Wook;Chung, Nam-Ho;Suh, Eung-Kyo
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.195-221
    • /
    • 2009
  • According to an industry report, a large number of organizations have invested in Organizational Information Systems (OIS) in the past few years. Several research results indicate that successful investments in OIS lead to productivity enhancement, while failed ones result in undesirable consequences such as financial losses and dissatisfaction among employees. In spite of huge investments, however, many organizations have failed to achieve the hoped-for returns from OIS. Understanding user acceptance, adoption, and usage of new information systems (IS) is thus an important issue for IS practitioners; indeed, user acceptance of new information systems has been one of the most important research topics in the contemporary IS literature. Several theoretical models have been tested to examine user acceptance and usage behavior in the IS context. While many research models incorporate 'ease of use' or 'usefulness' as important factors in explaining user acceptance, the Technology Acceptance Model (TAM) has been one of the most widely applied models of user acceptance and usage behavior. Even recent IS studies that employ theories of innovation diffusion in IS implementation have focused mainly on users' perceptions of information technologies. In this research, we study 'voluntariness' as an important factor in IS acceptance. Voluntariness is defined as "the degree to which the use of the innovation is perceived as being voluntary, or of free will." When examining the diffusion and acceptance of OIS, thoughtful consideration should be given to perceived voluntariness. The current article addresses the following research questions: 1) What models are appropriate to explain the success of OIS? and 2) How does voluntariness affect the success of OIS? To answer these questions, a research model is proposed to describe the associations among three independent variables (IT usage level, task interdependency, and organizational support), a mediating variable (IS usage), a dependent variable (perceived performance), and a moderating variable (perceived voluntariness). The central claim of this article is that organizations hardly realize the expected returns from OIS investments unless perceived voluntariness is effectively managed after the OIS begins operating. As an example of an OIS, we selected the Intranet of the Republic of Korea Air Force (ROKAF), which implemented the Intranet in an attempt to improve communication and coordination within the organization. To test our research model and hypotheses, survey questionnaires were sent out to 400 Intranet users. With the assistance of ROKAF, Intranet users were identified among its members, and subjects were randomly drawn from the pool; 377 survey responses were returned. The unit of measurement and analysis in this research is the individual level. Path analysis based on structural equation modeling was used to test the research hypotheses. Construct validity represents accordance between the theoretical base concept of the constructs and their measurement items; the tests of reliability and discriminant validity were passed, verifying our survey instrument. In this research, we have proposed a conceptual framework that highlights the importance of perceived voluntariness after an organization deploys an OIS. The analysis yields several key findings.
First, all three independent variables (IT usage level, task interdependency, and organizational support) have significant effects on IS usage, which eventually improves performance; IS usage thus plays a mediating role between the antecedent variables (IT usage level, task interdependency, and organizational support) and performance improvement. Second, task interdependency had the strongest effect on IS usage among the three antecedent variables. This is highly plausible, since one of the Intranet's major capabilities is to facilitate communication among members of an organization; accordingly, we conclude that the higher the task interdependency, the higher the Intranet usage. The user's IT usage level had the second-strongest effect, and organizational support the third. Finally, perceived voluntariness plays a pivotal role in enhancing perceived performance at the individual level after launching the Intranet. The relationships among the investigated variables differed significantly between groups with high and low levels of voluntariness. The impact of Intranet usage on performance was greater in the high-voluntariness group than in the low one, while in the low-voluntariness group, the user's IT usage level had the strongest effect on Intranet usage among the three antecedent variables. In short, our study suggests that the higher the perceived voluntariness, the higher the IS usage. Perceived voluntariness was found to moderate the relationships among user IT usage level, task interdependency, IS usage, and perceived performance, supporting all the hypotheses on the moderating effect. Above all, user IT usage level has the strongest influence on IS usage, indicating that users with superior IT skills are more likely to enjoy a high level of perceived performance.

Location Service Modeling of Distributed GIS for Replication Geospatial Information Object Management (중복 지리정보 객체 관리를 위한 분산 지리정보 시스템의 위치 서비스 모델링)

  • Jeong, Chang-Won;Lee, Won-Jung;Lee, Jae-Wan;Joo, Su-Chong
    • The KIPS Transactions:PartD
    • /
    • v.13D no.7 s.110
    • /
    • pp.985-996
    • /
    • 2006
  • As Internet technologies develop, the geographic information system (GIS) environment is shifting to web-based services. Since the geospatial information in existing Web-GIS services was developed independently, there is no interoperability to support diverse map formats, and the same geospatial information object may be duplicated across separate GIS deployments for various purposes. Intelligent strategies are therefore needed for optimal replica selection, that is, for identifying replicated geospatial information objects. For the management of replicated objects, OMG, GLOBE, and GRID computing have suggested related frameworks, but these efforts do not go far enough for the case of geospatial information objects. This paper presents a location service model that supports optimal selection among replicas and the management of replicated objects. It consists of three main services. The first is a binding service, which stores the names and properties of objects defined by users on the service-offer side and enables clients to search for them. The second is a location service, which manages location information through contact records and independently obtains system performance information, together with contact addresses, via the Load Sharing Facility. The third is an intelligent selection service, which obtains basic and performance information from the binding and location services and provides both faster access and better performance using rules from an intelligent model based on rough sets. To demonstrate the validity of the location service model, this paper presents the location service execution process through a graphical user interface.
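The selection step performed by the third service can be pictured as ranking contact records by the performance data the location service has gathered. The sketch below is a deliberately simplified stand-in: it scores hypothetical replica records by load and latency and returns the best contact address, whereas the paper's actual model derives its selection rules from rough set theory.

```python
# Simplified replica selection over contact records. The record fields,
# addresses, and scoring weights are hypothetical; the paper's model uses
# rough-set-based rules rather than this weighted score.
from dataclasses import dataclass

@dataclass
class ContactRecord:
    address: str       # contact address of the replica's host
    load: float        # host load reported by the monitoring facility (0-1)
    latency_ms: float  # round-trip time from the client, milliseconds

def select_replica(records: list[ContactRecord]) -> ContactRecord:
    """Pick the replica with the lowest combined load/latency score."""
    return min(records, key=lambda r: 0.5 * r.load + 0.5 * (r.latency_ms / 100))

replicas = [
    ContactRecord("gis-node-1.example:9000", load=0.82, latency_ms=12),
    ContactRecord("gis-node-2.example:9000", load=0.35, latency_ms=48),
    ContactRecord("gis-node-3.example:9000", load=0.40, latency_ms=20),
]

best = select_replica(replicas)
print(f"selected replica: {best.address}")
```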

Multi-task Learning Based Tropical Cyclone Intensity Monitoring and Forecasting through Fusion of Geostationary Satellite Data and Numerical Forecasting Model Output (정지궤도 기상위성 및 수치예보모델 융합을 통한 Multi-task Learning 기반 태풍 강도 실시간 추정 및 예측)

  • Lee, Juhyun;Yoo, Cheolhee;Im, Jungho;Shin, Yeji;Cho, Dongjin
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1037-1051
    • /
    • 2020
  • Accurate monitoring and forecasting of tropical cyclone (TC) intensity can effectively reduce the overall costs of disaster management. In this study, we propose a multi-task learning (MTL) based deep learning model for real-time TC intensity estimation and forecasting with lead times of 6-12 hours, based on the fusion of geostationary satellite images and numerical forecast model output. A total of 142 TCs that developed in the Northwest Pacific from 2011 to 2016 were used. Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) data were used to extract typhoon images, and the Climate Forecast System version 2 (CFSv2) provided by the National Centers for Environmental Prediction (NCEP) was employed to extract atmospheric and oceanic forecast data. Two schemes with different input variables to the MTL models were tested: scheme 1 used only satellite-based input data, while scheme 2 used both satellite images and numerical forecast model output. For real-time TC intensity estimation, both schemes exhibited similar performance. For TC intensity forecasting with lead times of 6 and 12 hours, scheme 2 improved performance by 13% and 16%, respectively, in terms of root mean squared error (RMSE) compared to scheme 1. Relative root mean squared errors (rRMSE) for most intensity levels were less than 30%, and lower mean absolute error (MAE) and RMSE were found for the lower TC intensity levels. In tests on typhoon HALONG in 2014, scheme 1 tended to overestimate the intensity by about 20 kts at the early development stage, whereas scheme 2 reduced the error to an overestimation of about 5 kts. The MTL models also reduced the computational cost by about a factor of three compared to the single-task model, suggesting the feasibility of rapid production of TC intensity forecasts.
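Multi-task learning here means one shared feature extractor feeding separate heads, one per task (current intensity, 6 h forecast, 12 h forecast), trained jointly so the satellite features are computed once, which is where the computational saving over single-task models comes from. The sketch below shows that shape in PyTorch; the layer sizes, input channels, and loss weighting are hypothetical choices, not the paper's architecture.

```python
# Minimal multi-task CNN: a shared encoder over satellite imagery with one
# regression head per task. All dimensions are hypothetical placeholders.
import torch
import torch.nn as nn

class MultiTaskTC(nn.Module):
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(  # shared across all tasks
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # one head per task: current intensity, +6 h and +12 h forecasts
        self.heads = nn.ModuleDict({
            name: nn.Linear(64, 1) for name in ("now", "lead6", "lead12")
        })

    def forward(self, x):
        z = self.encoder(x)
        return {name: head(z) for name, head in self.heads.items()}

model = MultiTaskTC()
imgs = torch.randn(8, 4, 128, 128)  # hypothetical batch of satellite patches
targets = {k: torch.randn(8, 1) for k in ("now", "lead6", "lead12")}

preds = model(imgs)
loss = sum(nn.functional.mse_loss(preds[k], targets[k]) for k in preds)
loss.backward()  # one backward pass updates the shared encoder and all heads
```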